Tomorrow I am having the latest in a series of conversations with people who, like me, have had lengthy and successful careers in managing government agencies. We will be talking about the current challenges of crafting new narratives of mission and strategy, new lenses of core values and operating principles, and new ways to define the results of coordinated action.
In 2009, while working with a consulting firm in Virginia, I developed my own framework for what I called Integrated Improvement: a way of understanding and using a "both...and" mindset for engaging different types of challenges.
Below is the set of questions I first posed for any manager and employee (we were looking at the Federal government, but frankly this applies to any organization). As you shape an agenda, these may be useful:
How can managers and employees best coordinate thinking and action to achieve objectives?
How can managers and employees best distinguish between complicated challenges, where experts can offer reliable solutions, and complex challenges, whose ambiguity and uncertainty make solutions presently unknowable?
How can managers and employees best define appropriate measures of action and outcome for complicated and complex challenges, respectively?
In an era of relative chaos, when meaning and response are unclear to many and the status quo is in doubt, how can managers and employees best create shared meaning and purpose for effective coordinated action?
AND... Below is the response I sent to my colleagues after one of them, a person strongly rooted in the use of "best practice" global measures, questioned the notion of complexity in my framework:
It is not merely my belief but established science that systems and challenges are not all the same, and that each type has both optimal patterns of response and optimal forms of metrics and assessment. The biggest cause of failed change and improvement initiatives is treating challenges for which no solution is currently known or knowable as if some expert already had a reliable, predictable way to solve them.
The single best source I would put in the hands of every person is the book "Developmental Evaluation" by Michael Quinn Patton. A short, valuable reference is the award-winning 2007 HBR article "A Leader's Framework for Decision Making."
This is not about persuading people that complexity is real and must be treated differently than complicated technical stuff. It is as real as the sun and the moon. Best practice and expert practice are critical in the domains of the known and knowable solution. Emergent and novel practice are what we experience when we act into the uncertainty and ambiguity of everything else. The reluctance to act, knowing that failure is possible or even likely, hampers true adaptive capacity and success.
Constructing a coherent narrative and shared purpose is the first task of an organization. Who are we, and what are we here to do? Why it is important, and how we will do it, follows next. Then... yes... how do we measure the impact of what we have done? Developing a global set of measures is possible. But it is my clear belief that such measures must take into account the realities of different types of situations. To ignore that reality is to program for failure.