Three years ago I went to work as a Senior Advisor at SAMHSA, the federal Substance Abuse and Mental Health Services Administration. My job was to oversee process improvement efforts at the agency responsible for giving out $3 billion a year in grants for mental health and substance abuse prevention and treatment services. At the time I was not a behavioral health professional. But I WAS a professional in organizational change leadership and business process improvement, having previously created and managed the US Department of Labor’s award-winning improvement initiative.
Although the employees of SAMHSA care deeply about helping the people whose services are in part funded by the agency, the day-to-day work of SAMHSA is fundamentally about making and managing grants. If I was going to help SAMHSA do its work better, I needed to learn as much as I could about the agency and its work, quickly. Literally from my first day on the job, I heard a phrase I would hear almost every day during my tenure at SAMHSA: “evidence-based practice.”

I knew about “best practices” from the world of business process improvement. For companies that manufactured cars, there would be a “best practice” somewhere: the best available way to perform a task so that it took the least time, cost the least, and produced the highest possible quality. From my years at DOL, I knew that “better, faster, cheaper” was a phrase that policy-makers and stakeholders liked to use too. But I also knew, from years of managing change efforts at DOL, that changing people and their behaviors was among the hardest things to do. People could not be managed and controlled the way machines on a factory floor could be.
So very soon after my arrival at SAMHSA, I went to the agency’s library and asked the librarian for books and articles that would help me better understand the work of substance abuse prevention and treatment. One of the books I was given was “Improving Substance Abuse Treatment” by Michele Eliason. Just pages into the book, I began to read about “Evidence-Based Practice.” A bit farther on, around page 27 I think, I read three standard definitions of “treatment,” from SAMHSA/NREPP, the IOM, and the APA. All three made clear that the relationship between the clinician and the patient/recipient was to be controlled by the patient/recipient, and all three clearly noted the highly complex and variable nature of the treatment dynamic.
I immediately wondered how such a highly variable process could yield “evidence” or an “evidence-based” practice. In the world of business process improvement – frequently a world of manufacturing – variation is literally “the enemy.” When we buy a car, we want assurance that it was made completely within the manufacturer’s specifications and that it will perform exactly the way the manufacturer claims it will. When we go to Starbucks, we have every right to expect that our java frappawhatzis will be made consistently every time.
Given the high degree of variation from one clinician and patient/recipient to the next, could we really call a commonly used practice “evidence”?
I went on to read more, in the book and in other literature. I learned that the notion of Evidence-Based Practice came from the world of randomized clinical drug trials. Again, my mental radar went on alert. In a randomized drug trial, testers could tightly control the sample population, and they could control the experiment to isolate the single variable they were exploring. This did not seem to be the case in behavioral health.
When SAMHSA makes a grant, the phrase “fidelity to” inevitably precedes the phrase “evidence-based practice.” I understood that if I gave you a million dollars to do something I wanted done, I’d sure as heck want to know that you did what you said you’d do. Fidelity. To the Evidence-Based Practice. But then I read more about how these EBPs were implemented in the real world. More variation, more problems. Did the implementing facility have any language barriers? Cultural competency barriers? Did it have sufficient staff, sufficiently trained in the EBP? Were there technology barriers?
So, within a matter of weeks after arriving at SAMHSA, I had serious doubts about the validity of the whole notion of applying EBPs to the field of behavioral health. The bad news was that I was not a behavioral health professional, and might be seen as lacking the credibility to question EBPs. The good news was that I found, in Michele Eliason’s book and elsewhere, a small but growing counter-movement to the reliance on EBPs in behavioral health. In fact, a number of professionals were advocating instead for what they called “Practice-Based Evidence.” But what did this mean?
In the world of business organizations, the highest American award for excellence is the Baldrige Award, and many states have their own equivalent. The assessment explores seven categories of organizational performance, including leadership, process management, and results. To learn how well a company was really doing, the award organizers trained a group of examiners to go onsite at an award applicant’s facilities. I did this for two years in my state’s award system. When my team and I were trained, we were told that the training was aimed at “aligning” or “calibrating” us. In other words, the award administrators knew that each person saw the world differently. But through common training and shared learning, we could narrow the range of variation in our perceptions and make effective collective observations and conclusions.
This is the approach also recommended in the alternative Practice-Based Evidence movement.
Why is this important? Is there a “difference that makes a difference” between PBE and EBP? Read on to the next post to learn more.