My thanks to Dr. Joe Fortuna for raising a question about Evidence-Based Management in email correspondence. My response, which comes first from the field of behavioral health, offers a perspective on the domains of both Evidence-Based Practice and Practice-Based Evidence (and, by extension, on the work of assessing change in complex human processes and systems).
I encountered the issues of Evidence-Based Practice when I came to SAMHSA and the field of behavioral health. I began reading the literature on EBP, which is at the foundation of the grant-making process. Research purports to establish a certain practice as “evidence-based.” The grant applicant pledges to use this practice if SAMHSA awards the money. SAMHSA says: we’ll give you the money if you show us proof that you have acted with “fidelity” to the EBP. It all sounds reasonable and logical.
But it isn’t, in my view. The concept of “evidence-based” practice comes in large part from pharmaceuticals. Randomized clinical trials can isolate a single variable in a study, and researchers can control both the treatment and control populations. Determining adherence to a particular evidence-based practice or study is comparatively easy.
Not so in behavioral health, and I believe, by extension, not so for the work of managing or improving processes that involve human beings.
Here’s why. The three standard definitions of Evidence-Based Practice in behavioral health come from SAMHSA, the Institute of Medicine, and the American Psychological Association. All three recognize, by definition, the inherent complexity and variability of the clinician-patient relationship. So what we call “evidence” in the human world of behavioral health is itself filled with a very high degree of variation in practice. When the EBP is put into practice (by a grantee, for example), even more variability enters the picture. The manual for a certain practice may require a measure of cultural competence or language ability, and it is not always clear whether the organization engaged in the EBP actually has the means to adhere strictly to the practice. “Fidelity” becomes a relative term.
This is not just my opinion. There is a growing movement in behavioral health suggesting that instead of EBP, we need “Practice-Based Evidence” (see Michelle Eliason’s book on this, for example). What this means is something similar to the way we assess organizations applying for the Baldrige award: teams of observers are trained to align their sense- and meaning-making, and by observing an organization in practice, they can begin to derive evidence of efficacy.
My own explanation is that we are dealing with two different types of challenges and processes. In situations like randomized clinical trials, where there is little variation and a high degree of control, we can use EBP to drive results. But in general, the challenges of managing people and changing what they do are complex problems. These are best assessed by the inverse approach, PBE. I gave a talk on this topic at the Plexus Summit in Philadelphia in 2008.
I’d refer broadly to the work of Harvard’s Ron Heifetz on this, and to Dave Snowden’s Cynefin model, which was the subject of an article in HBR. In behavioral health, there was an article co-written, I think, by Jeffrey Pfeffer that addresses some of the same issues. The Heifetz model distinguishes between Technical and Adaptive problems. Snowden describes the realms of the simple and the complicated (Technical, to Heifetz) and the complex and chaotic (Adaptive). Each requires a somewhat different sequence of inquiry, sense-making, analysis, and response (clearly described by Snowden).
So, not really an “either…or…” but more of a “both…and…” Good luck out there!
Bruce Waltuck, M.A., Complexity, Chaos, and Creativity