I was recently asked if there is a decision-making approach that’s designed specifically for situations characterized by volatility, uncertainty, complexity, and ambiguity (VUCA). I don’t know of a one-size-fits-all solution, but I can speak to what’s needed to optimize decisions made in VUCA conditions. Here are the main ingredients:
The ability to adjust one’s decision-making approach to meet the demands of a particular problem: For example, some problems must be addressed immediately and autocratically; others are best addressed more collaboratively, with a greater focus on data collection and perspective seeking.
The ability to make high-quality autocratic decisions: By setting up systems that keep stakeholders continuously apprised of one another’s perspectives and data, we can improve the quality of autocratic decisions by ensuring that there are fewer surprises and that rapid decisions are informed decisions.
Dynamic steering: Every leader in an organization should be constantly cultivating this skill. It increases the agility of teams and organizations by building skill for efficient decision-making and timely adjustment.
The most complete information possible (under conditions in which complete information is impossible), which requires:
Collaborative capacity: highly complex problems, by definition, are beyond the comprehension of even the most developed individuals. Collaborative skills ensure that leaders can effectively leverage key perspectives.
Systems and structures that foster ongoing two-way communication up and down the organizational hierarchy, across departments, divisions, and teams, and between internal and external stakeholders.
Systems and structures that cultivate excellent perspective-taking and -seeking skills. These include…
Building in opportunities for collaborative decision-making,
“Double linking”—the formal inclusion, in high-stakes or policy decision-making, of representatives from lower and higher levels in the organizational hierarchy or from cross-disciplinary teams, and
Embedding virtuous cycles to ensure that all processes are continuously moving toward higher functioning states, and that employees are constantly building knowledge and skills.
Where appropriate, technologies for constructing models of highly complex problems:
For a comprehensive overview of options, see Decision Making Under Uncertainty: Theory and Application, by Mykel J. Kochenderfer.
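To make the idea of modeling a decision under uncertainty concrete, here is a minimal sketch of one of the simplest such models: choosing among options by expected utility. The option names, probabilities, and payoffs below are invented for illustration; they are not drawn from Kochenderfer's book, which covers far richer models (Bayesian networks, Markov decision processes, and more).

```python
# Toy expected-utility model for a decision under uncertainty.
# All option names, probabilities, and payoffs are hypothetical.

options = {
    "act_now":     [(0.6, 100), (0.4, -50)],  # (probability, payoff) pairs
    "gather_data": [(0.9, 40),  (0.1, -10)],
}

def expected_utility(outcomes):
    """Sum each payoff weighted by its probability."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(name, expected_utility(outcomes))

best = max(options, key=lambda name: expected_utility(options[name]))
print("best option:", best)  # -> best option: act_now (EU 40.0 vs 35.0)
```

Even this toy version shows why such models matter in VUCA conditions: making probabilities and payoffs explicit exposes the assumptions behind a decision, so they can be debated and updated as new information arrives.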
Our flagship adult assessment, the Leadership Decision-Making Assessment (LDMA), was designed for the US government to document and assess the level of sophistication individuals and teams demonstrate on key skills for making optimal decisions in VUCA conditions.
Conventional top-down project planning and decision-making approaches, in combination with systems and structures that enforce conventional hierarchical relationships, work pretty well in the absence of volatility, uncertainty, and change. But the same structures that enforce order and help mitigate risk under relatively stable conditions also reduce adaptivity, which means that in our current highly complex and volatile marketplace, many conventionally structured organizations are struggling to adapt.
Several specific needs have been identified, including:
Employees who embrace change and lifelong learning (especially with respect to their capacity to work with increasing complexity),
Organizational cultures characterized by continuous learning & development, innovation, engagement, and collaboration within and across teams,
Decision-making processes, planning processes, people development processes, and governance structures that actively support 1 & 2.
Most change processes address 1 and 2, but there has been less attention to 3. Until recently.
Many of the change processes that address number 3 involve the creative use of intentional virtuous cycles—like the one that’s at the core of our learning model (VCoL+7). Virtuous cycles like VCoL, scrum, dynamic steering, and design thinking are now being implemented in large organizations to increase agility, innovation, collaboration, learning, and engagement. And when it comes to managing complexity, they may well be the most effective tools available.
As an example, Google, which works with agile & scrum as well as other virtuous cycles, is well known for its culture of collaboration, continuous learning, and innovation. And its organizational structure, which eliminates silos and is sustained by cross-team collaboration, is part of what keeps that culture alive.
VCoL, like other virtuous cycles, can be embedded in organizational systems to help foster a learning culture. The classic, The Fifth Discipline: The Art & Practice of the Learning Organization (Peter Senge), and the more accessible, An Everyone Culture: Becoming a Deliberately Developmental Organization (Robert Kegan, Lisa Laskow Lahey) describe two approaches that involve VCoLs. Lectical Assessments are designed to support approaches like these—improving performance by fostering optimal learning and development, and supporting dynamic steering by measuring program effectiveness.
Before I write about the relation between Kegan's Subject-Object Interview and the LSUA (the Lectical Self-Understanding Assessment), I'd like to explain some differences between these assessments. First, the SOI is both an interview and an assessment system. It was developed by studying the interviews of a small sample of respondents (Does anyone know how many?) who were interviewed on several occasions over the course of several years (Again, does anyone know how many or how often?). The level definitions and the scoring criteria in the SOI are tied to the subject matter of the interviews in the original sample (the construction sample). For this reason, the SOI is called a domain-specific assessment. Researchers would say that the levels were defined by "bootstrapping" from the longitudinal data. Critiques of this kind of assessment point to bias in level definitions (due to small and culturally narrow construction samples), the related conflation (confusion) of particular conceptual content with developmental levels, and a weak articulation of the lowest levels, which are not based on direct empirical evidence from respondents of the appropriate ages.
With respect to the LSUA, I want to clarify that it is scored with the Lectical Assessment System (LAS), a content-independent developmental scoring system that was created, in part, by identifying the dimension that underlies all longitudinally bootstrapped developmental assessment systems*. The SOI was one of the assessment systems I studied on the way to developing the LAS. Consequently, if the LAS does what it is supposed to do, it should capture the developmental dimension that underlies Kegan's system even better than his scoring system does, because the LAS is a second-generation developmental scoring system that is not constrained by a content-driven scoring process (Dawson, 2002; Dawson, Xie, & Wilson, 2003; there is much written about this in our published work, available on our website).
What is the relation between the LSUA and the Subject-Object Interview?
This is a difficult question to answer, partly because there is no research that directly compares the SOI and the LSUA. However, because the LAS is a domain independent scoring system that can be used to score any text that includes judgments and justifications, I have used it to score the SOI scoring manual. The developmental sequence for SOI levels 3 to 5 corresponds well to the dimension captured by the LAS, and levels 3-5 correspond roughly with Lectical Levels 10-12. However, Kegan's lower levels do not match up as well, possibly because his construction sample (the sample used to define his levels), as far as we can determine, did not include young children. (Kegan's original research was never published in a form that would allow us to evaluate the approach he took to defining his levels or the reliability and validity of the SOI. All we can locate are a few very small studies of inter-rater reliability, most of which are unpublished [Kegan, 2002].)
Comparisons of the Subject-Object Interview with other developmental assessment systems
There is some research comparing the SOI with other developmental assessment systems. In general, this research finds that the SOI and these other systems are likely to tap the same developmental dimension (see Pratt et al., 1991).
Ideally, we would like to conduct a direct comparison of the LAS and the scoring system Kegan developed to score the SOI, as we have done with other developmental assessment systems. (We are working with a graduate student who is planning to do this kind of comparison.) In the meantime, we can point to comparisons between the LAS and several other developmental assessment systems (Kohlberg, Armon, Kitchener & King, Perry) that were developed using methods similar to those used by Kegan, and have routinely found strong correlations (above .85) between these scoring systems and the LAS, especially when they are used to score the same material (Dawson, 2000, 2001, 2002a, 2004; Dawson, Xie, & Wilson, 2003).
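For readers unfamiliar with what a correlation above .85 between two scoring systems looks like in practice, here is a small illustration using the standard Pearson formula. The scores below are invented for the example; they stand in for the developmental scores two systems might assign to the same set of interview texts.

```python
# Pearson correlation between two hypothetical sets of developmental
# scores assigned to the same ten texts by two scoring systems.
from math import sqrt

scores_system_a = [10.1, 10.4, 10.8, 11.0, 11.2, 11.5, 11.6, 11.9, 12.0, 12.3]
scores_system_b = [10.0, 10.6, 10.7, 11.1, 11.1, 11.4, 11.8, 11.8, 12.1, 12.2]

def pearson(x, y):
    """Pearson r: covariance of x and y divided by the product of
    their standard deviations (computed from sums of squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(scores_system_a, scores_system_b)
print(round(r, 3))  # a value above .85 indicates strong agreement
```

A correlation at this level means the two systems rank the same texts in nearly the same developmental order, which is the kind of evidence cited above for a shared underlying dimension.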
Finally, some of Kegan's level definitions are almost identical to those of Kohlberg and Selman. In fact, I would argue that they are primarily an extension of Selman's original work on socio-moral perspective, which has informed most domain-based developmental assessment systems (including all of the systems mentioned here) since it was introduced in the 1960s (and was a great help to me when I was developing the LAS).
*The claim that there is a single developmental dimension that underlies these systems is NOT the same thing as a claim that an individual will be at the same level in different knowledge/skill areas.
Commons, M. L., Armon, C., Richards, F. A., Schrader, D. E., Farrell, E. W., Tappan, M. B., et al. (1989). A multidomain study of adult development. In M. L. Commons, J. D. Sinnott, F. A. Richards, & C. Armon (Eds.), Adult development, Vol. 1: Comparisons and applications of developmental models (pp. 33-56). New York: Praeger.
Dawson, T. L. (2000). Moral reasoning and evaluative reasoning about the good life. Journal of Applied Measurement, 1(4), 372-397.
Dawson, T. L. (2001). Layers of structure: A comparison of two approaches to developmental assessment. Genetic Epistemologist, 29, 1-10.
Dawson, T. L. (2002a). A comparison of three developmental stage scoring systems. Journal of Applied Measurement, 3, 146-189.
Dawson, T. L. (2002b). New tools, new insights: Kohlberg’s moral reasoning stages revisited. International Journal of Behavioral Development, 26, 154-166.
Dawson, T. L., Xie, Y., & Wilson, M. (2003). Domain-general and domain-specific developmental assessments: Do they measure the same thing? Cognitive Development, 18, 61-78.
Dawson, T. L. (2004). Assessing intellectual development: Three approaches, one sequence. Journal of Adult Development, 11, 71-85.
Kegan, R. (2002). A guide to the subject-object interview. Unpublished Scoring manual. Harvard Graduate School of Education.
King, P. M., Kitchener, K. S., Wood, P. K., & Davison, M. L. (1989). Relationships across developmental domains: A longitudinal study of intellectual, moral, and ego development. In M. L. Commons, J. D. Sinnott, F. A. Richards, & C. Armon (Eds.), Adult development, Vol. 1: Comparisons and applications of developmental models (pp. 57-71). New York: Praeger.
Lambert, H. V. (1972). A comparison of Jane Loevinger's theory of ego development and Lawrence Kohlberg's theory of moral development. University of Chicago, Chicago, IL.
Pratt, M. W., Diessner, R., Hunsberger, B., Pancer, S. M., & Savoy, K. (1991). Four pathways in the analysis of adult development and aging: Comparing analyses of reasoning about personal-life dilemmas. Psychology & Aging, 6, 666-675.
Sullivan, E. V., McCullough, G., & Stager, M. A. (1970). A developmental study of the relationship between conceptual, ego, and moral development. Child Development, 41, 399-411.