The complexity of national leaders’ thinking: How does it measure up?

Special thanks to my Australian colleague, Aiden Thornton, for his editorial and research assistance.

This is the first in a series of articles on the complexity of national leaders’ thinking. These articles will report results from research conducted with CLAS, our newly validated electronic developmental scoring system. CLAS will be used to score these leaders’ responses to questions posed by prominent journalists.

In this first article, I’ll provide some context for this project, including information about how my colleagues and I think about complexity and its role in leadership. I’ve embedded lots of links to additional material for readers who have questions about our 100+ year-old research tradition, the assessments made by Lectica (the nonprofit that owns me), and other research we’ve conducted with these assessments.

Context and research questions

Lectica creates diagnostic assessments for learning that support the development of the mental skills required for working with complexity. We make these learning tools for both adults and children. Our K-12 initiative—the DiscoTest Initiative—is dedicated to bringing these tools to individual K-12 teachers everywhere, free of charge. Our adult assessments are used by organizations in recruitment and training, and by colleges and universities in admissions and program evaluation.

All Lectical Assessments measure the complexity level (aka, level of vertical development) of people’s thinking in particular knowledge areas. A complexity level score on a Lectical Assessment tells us the highest level of complexity—in a problem, issue, or task—an individual is likely to be able to work with effectively.

On several occasions over the last 20 years, my colleagues and I have been asked to evaluate the complexity of national leaders’ reasoning skills. Our response has been, “We will, but only when we can score electronically—without the risk of human bias.” That time has come. Now that our electronic developmental scoring system, CLAS, has demonstrated a level of reliability and precision that is acceptable for this purpose, we’re ready to take a look.

Evaluating the complexity of national leaders’ thinking is a challenging task for several reasons. First, it’s virtually impossible to find examples of many of these leaders’ “best work.” Their speeches are generally written for them, and speech writers usually try to keep the complexity level of these speeches low, aiming for a reading level in the 7th to 9th grade range. (Reading level is not the same thing as complexity level, but like most tests of capability, it correlates moderately with complexity level.) Second, even when national leaders respond to unscripted questions from journalists, they work hard to use language that is accessible to a wide audience. And finally, it’s difficult to identify a level playing field—one in which all leaders have the same opportunity to demonstrate the complexity of their thinking.

Given these obstacles, there’s no point in attempting to evaluate the actual thinking capabilities of national leaders. In other words, we won’t be claiming that the scores awarded by CLAS represent the true complexity level of leaders’ thinking. Instead, we will address the following questions:

  1. When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  2. How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

Thinking complexity and leader success

At this point, you may be wondering, “What is thinking complexity and why is it important?” A comprehensive response to this question isn’t possible in a short article like this one, but I can provide a basic description of complexity as we see it at Lectica, and provide some examples that highlight its importance.

All issues faced by leaders are associated with a certain amount of built-in complexity. For example:

  1. The sheer number of factors that must be taken into account.
  2. Short and long-term implications/repercussions. (Will a quick fix cause problems downstream, such as global unrest or catastrophic weather?)
  3. The number and diversity of stakeholders/interest groups. (What is the best way to balance the needs of individuals, families, businesses, communities, states, nations, and the world?)
  4. The length of time it will take to implement a decision. (Will it take months, years, decades? Longer projects are inherently more complex because of changes over time.)
  5. Formal and informal rules/laws that place limits on the deliberative process. (For example, legislative and judicial processes are often designed to limit the decision making powers of presidents or prime ministers. This means that leaders must work across systems to develop decisions, which further increases the complexity of decision making.)

Over the course of childhood and adulthood, the complexity of our thinking develops through up to 13 skill levels (0–12). Each new level builds upon the previous one. The figure above shows four adult complexity “zones”—advanced linear thinking (second zone of level 10), early systems thinking (first zone of level 11), advanced systems thinking (second zone of level 11), and early principles thinking (first zone of level 12). In advanced linear thinking, reasoning is often characterized as “black and white.” Individuals performing in this zone cope best with problems that have clear right or wrong answers. It is only once individuals enter early systems thinking that they begin to work effectively with highly complex problems that have no clear right or wrong answers.
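The four zones described above can be restated as a small lookup table. This is just a sketch of the level/zone notation used in the paragraph above, not part of any Lectica tool:

```python
# Adult complexity "zones" mapped to (level, zone-within-level),
# following the notation in the text: e.g., advanced linear thinking
# is the second zone of level 10.
ZONES = {
    "advanced linear thinking":  (10, 2),
    "early systems thinking":    (11, 1),
    "advanced systems thinking": (11, 2),
    "early principles thinking": (12, 1),
}

# Zones are ordered: each successive zone sits higher on the scale.
assert ZONES["early systems thinking"] == (11, 1)
assert ZONES["advanced linear thinking"] < ZONES["early principles thinking"]
```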

Leadership at the national level requires exceptional skills for managing complexity, including the ability to deal with the most complex problems faced by humanity (Helbing, 2013). Needless to say, a national leader regularly faces issues at or above early principles thinking.

Complexity level and leadership—the evidence

In the workplace, the hiring managers who decide which individuals will be put in leadership roles are likely to choose leaders whose thinking complexity is a good match for their roles. Even if they have never heard the term complexity level, hiring managers generally understand, at least implicitly, that leaders who can work with the complexity inherent in the issues associated with their roles are likely to make better decisions than leaders whose thinking is less complex.

There is a strong relation between the complexity of leadership roles and the complexity level of leaders’ reasoning. In general, more complex thinkers fill more complex roles. The figure below shows how lower-level and senior-level leaders’ complexity scores are distributed in Lectica’s database. Most senior leaders’ scores fall in or above advanced systems thinking, while those of lower-level leaders fall primarily in early systems thinking.

The strong relation between the complexity of leaders’ thinking and the complexity of their roles can also be seen in the recruitment literature. To be clear, complexity is not the only aspect of leadership decision making that affects leaders’ ability to deal effectively with complex issues. However, a large body of research spanning more than 50 years suggests that the best predictors of success in leader recruitment are those most strongly related to thinking skills, including complexity level.

The figure below shows the predictive power of several forms of assessment employed in making hiring and promotion decisions. The cognitive assessments have been shown to have the highest predictive power. In other words, assessments of thinking skills do a better job predicting which candidates will be successful in a given role than other forms of assessment.

Predictive power graph

The match between the complexity of national leaders’ thinking and the complexity level of the problems faced in their roles is important. While we will not be able to assess the actual complexity level of the thinking of national leaders, we will be able to examine the complexity of their responses to questions posed by prominent journalists. In upcoming articles, we’ll be sharing our findings and discussing their implications.

Coming next…

In the second article in this series, we begin our examination of the complexity of national leaders’ thinking by scoring interview responses from four US Presidents—Bill Clinton, George W. Bush, Barack Obama, and Donald Trump.

 


Appendix

Predictive validity of various types of assessments used in recruitment

The following table shows average predictive validities for various forms of assessment used in recruitment contexts. The column “variance explained” indicates how large a role a particular form of assessment plays in predicting performance—its predictive power.

Form of assessment | Source | Predictive validity | Variance explained | Variance explained (with GMA)
--- | --- | --- | --- | ---
Complexity of workplace reasoning | Dawson & Stein, 2004; Stein, Dawson, Van Rossum, Hill, & Rothaizer, 2003 | .53 | 28% | n/a
Aptitude (General Mental Ability, GMA) | Hunter, 1980; Schmidt & Hunter, 1998 | .51 | 26% | n/a
Work sample tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .54 | 29% | 40%
Integrity | Ones, Viswesvaran, & Schmidt, 1993; Schmidt & Hunter, 1998 | .41 | 17% | 42%
Conscientiousness | Barrick & Mount, 1995; Schmidt & Hunter, 1998 | .31 | 10% | 36%
Employment interviews (structured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .51 | 26% | 39%
Employment interviews (unstructured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .38 | 14% | 30%
Job knowledge tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .48 | 23% | 33%
Job tryout procedure | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .44 | 19% | 33%
Peer ratings | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .49 | 24% | 33%
Training & experience: behavioral consistency method | McDaniel, Schmidt, & Hunter, 1988a, 1988b; Schmidt & Hunter, 1998; Schmidt, Ones, & Hunter, 1992 | .45 | 20% | 33%
Reference checks | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .26 | 7% | 32%
Job experience (years) | Hunter, 1980; McDaniel, Schmidt, & Hunter, 1988b; Schmidt & Hunter, 1998 | .18 | 3% | 29%
Biographical data measures (Supervisory Profile Record Biodata Scale) | Rothstein, Schmidt, Erwin, Owens, & Sparks, 1990; Schmidt & Hunter, 1998 | .35 | 12% | 27%
Assessment centers | Gaugler, Rosenthal, Thornton, & Bentson, 1987; Schmidt & Hunter, 1998; Becker, Höft, Holzenkamp, & Spinath, 2011 | .37 | 14% | 28%
EQ | Zeidner, Matthews, & Roberts, 2004 | .24 | 6% | n/a
360 assessments | Beehr, Ivanitskaya, Hansen, Erofeev, & Gudanowski, 2001 | .24 | 6% | n/a
Training & experience: point method | McDaniel, Schmidt, & Hunter, 1988a; Schmidt & Hunter, 1998 | .11 | 1% | 27%
Years of education | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .10 | 1% | 27%
Interests | Schmidt & Hunter, 1998 | .10 | 1% | 27%

Note: Arthur, Day, McNelly, & Edens (2003) found a predictive validity of .45 for assessment centers that included mental skills assessments.
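The “variance explained” column above is simply the squared predictive validity (r²), expressed as a percentage. A minimal check of that arithmetic:

```python
# Variance explained is the square of the predictive validity (r^2),
# rounded to the nearest whole percent.
def variance_explained(r: float) -> int:
    """Percent of performance variance explained by a predictor with validity r."""
    return round(r * r * 100)

# A few rows from the table above:
assert variance_explained(0.53) == 28  # complexity of workplace reasoning
assert variance_explained(0.51) == 26  # GMA, structured interviews
assert variance_explained(0.26) == 7   # reference checks
```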

References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Dawson, T. L. (2017, October 20). Using technology to advance understanding: The calibration of CLAS, an electronic developmental scoring system. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Dawson, T. L., & Thornton, A. M. A. (2017, October 18). An examination of the relationship between argumentation quality and students’ growth trajectories. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497, 51-59.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. R. Barrick, Murray R. (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J., E. (1988a). A Meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J., E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.


From Piaget to Dawson: The evolution of adult developmental metrics

I've just added a new video about the evolution of adult developmental metrics to YouTube and LecticaLive. It traces the evolutionary history of Lectica's developmental model and metric.

If you are curious about the origins of our work, this video is a great place to start. If you'd like to see the reference list for this video, view it on LecticaLive.

 

 


Decision making & the collaboration continuum

The collaboration continuum emerged from our research on leadership decision making, which we assess on the Lectical Scale (our developmental scale).

Many people seem to think of decision making as either top-down or collaborative, and tend to prefer one over the other. But several thousand decision-making leaders have taught us that this is a false dichotomy. We’ve learned two things. First, there is no clear-cut division between autocratic and collaborative decision making—it’s a continuum. And second, both more autocratic and more collaborative decision making processes have legitimate applications.

As it applies to decision making, the collaboration continuum is a scale that runs from fully autocratic to consensus-based. We find it helpful to divide the continuum into 7 relatively distinct levels, as shown below:


LESS COLLABORATION

Level | Basis for decision | Applications | Trade-offs
--- | --- | --- | ---
Fully autocratic | personal knowledge or rules, no consideration of other perspectives | everyday operational decisions where there are clear rules and no apparent conflicts | quick and efficient
Autocratic | personal knowledge, with some consideration of others' perspectives (no perspective seeking) | operational decisions in which conflicts are already well understood and trust is high | quick and efficient, but spends trust, so should be used with care
Consulting | personal knowledge, with perspective seeking to help people feel heard | operational decisions in which the perspectives of well-known stakeholders are in conflict and trust needs reinforcement | time consuming, but can build trust if not abused
Inclusive | personal knowledge, with perspective seeking to inform a decision | operational or policy decisions in which stakeholders' perspectives are required to formulate a decision | time consuming, but improves decisions and builds engagement
Compromise-focused | leverages stakeholder perspectives to develop a decision that gives everyone something they want | making "deals" to which all stakeholders must agree | time consuming, but necessary in deal-making situations
Consent-focused | leverages stakeholder perspectives to develop a decision that everyone can consent to (even with reservations) | policy decisions in which stakeholders' perspectives are required to formulate a decision | can be efficient, but requires excellent facilitation skills and training for all parties
Consensus-focused | leverages stakeholder perspectives to develop a decision that everyone can agree with | decisions in which complete agreement is required to formulate a decision | requires strong relationships; useful primarily when decision makers are equal partners

MORE COLLABORATION

As the table above shows, all 7 forms of decision making on the collaboration continuum have legitimate applications, and all can be learned at any adult developmental level. However, the most effective application of each successive form requires more developed skills. Inclusive, consent-focused, and consensus-focused decision making are particularly demanding, and generally require formal training for all participating parties.
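Because the continuum is ordered rather than a dichotomy, it can be modeled as an ordered scale. A minimal sketch (the level names come from the table above; the code is purely illustrative and not part of any Lectica tool):

```python
from enum import IntEnum

# The 7 levels of the collaboration continuum, ordered from
# least to most collaborative.
class Collaboration(IntEnum):
    FULLY_AUTOCRATIC = 1
    AUTOCRATIC = 2
    CONSULTING = 3
    INCLUSIVE = 4
    COMPROMISE_FOCUSED = 5
    CONSENT_FOCUSED = 6
    CONSENSUS_FOCUSED = 7

# An ordered scale supports direct comparison, which a simple
# top-down/collaborative dichotomy cannot:
assert Collaboration.CONSULTING < Collaboration.CONSENT_FOCUSED
assert len(Collaboration) == 7
```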

The most developmentally advanced and accomplished leaders who have taken our assessments deftly employ all 7 forms of decision making, basing the form chosen for a particular situation on factors like timeline, decision purpose, and stakeholder characteristics.


(The feedback in our LDMA [leadership decision making] assessment report provides learning suggestions for building collaboration continuum skills. And our Certified Consultants can offer specific practices, tailored for your learning needs, that support the development of these skills.) 

 


Leadership, vertical development & transformative change: a polemic

This morning, while doing some research on leader development, I googled “vertical leadership” and “coaching.” The search returned 466,000 results. Wow. Looks like vertical development is hot in the coaching world!

Two hours later, after scanning dozens of web sites, I was left with the following impression: 

Vertical development occurs through profound, disruptive, transformative insights that alter how people see themselves, improve their relationships, increase happiness, and help them cope better with complex challenges. The task of the coach is to set people up for these experiences. Evidence of success is offered through personal stories of transformation.

But decades of developmental research contradict this picture. The evidence shows that the kinds of transformative experiences promised on these websites are uncommon, and when they do occur, they rarely produce a fairytale ending. In fact, profound disruptive insights can easily have negative consequences, and most experiences that people call transformational are really just momentary insights. They may feel profound in the moment, but they don’t actually usher in any measurable change at all, much less transformative change.

 

"The good news is, you don’t have to work on transforming yourself to become a better leader."

 

The fact is, insight is fairly easy, but growth is slow, and change is hard. Big change is really, really hard. And some things, like many dispositions and personality traits, are virtually impossible to change. This isn’t an opinion based on personal experience; it’s a conclusion based on evidence from hundreds of longitudinal developmental studies conducted over the last 70 years. (Check out our articles page for some of this evidence.)

The good news is, you don’t have to work on transforming yourself to become a better leader. All you need to do is engage in daily practices that incrementally, through a learning cycle called VCoL, help you build the skills and habits of a good leader. Over the long term, this will change you, because it will alter the quality of your interactions with others, and that will change your mind—profoundly.

 


Decision-making under VUCA conditions


I was recently asked if there is a decision making approach that’s designed specifically for situations characterized by volatility, uncertainty, complexity, and ambiguity (VUCA). I don’t know of a one-size-fits-all solution, but I can speak to what’s needed to optimize decisions made in VUCA conditions. Here are the main ingredients:

Agility

  1. The ability to adjust one’s decision-making approach to meet the demands of a particular problem: For example, some problems must be addressed immediately and autocratically; others are best addressed more collaboratively, with a greater focus on data collection and perspective seeking.
  2. The ability to make high-quality autocratic decisions: By setting up systems that keep stakeholders continuously apprised of one another’s perspectives and data, we can improve the quality of autocratic decisions by ensuring that there are fewer surprises and that rapid decisions are informed decisions.
  3. Dynamic steering: Every leader in an organization should be constantly cultivating this skill. It increases the agility of teams and organizations by building skill for efficient decision-making and timely adjustment.

The most complete information possible (under conditions in which complete information is impossible), which requires:

  1. Collaborative capacity: Highly complex problems, by definition, are beyond the comprehension of even the most developed individuals. Collaborative skills ensure that leaders can effectively leverage key perspectives.
  2. Systems and structures that foster ongoing two-way communication up and down the organizational hierarchy, across departments, divisions, and teams, and between internal and external stakeholders.
  3. Systems and structures that cultivate excellent perspective-taking and -seeking skills. These include…
    • Building in opportunities for collaborative decision-making,  
    • “Double linking”—the formal inclusion, in high-stakes or policy decision-making, of representatives from lower and higher levels in the organizational hierarchy or from cross-disciplinary teams, and
    • Embedding virtuous cycles to ensure that all processes are continuously moving toward higher functioning states, and that employees are constantly building knowledge and skills.

Where appropriate, technologies for constructing models of highly complex problems:

  • For a comprehensive overview of options, see Decision Making Under Uncertainty: Theory and Application, by Mykel J. Kochenderfer.

Our flagship adult assessment, the Leadership Decision-Making Assessment (LDMA), was designed for the US government to document and assess the level of sophistication individuals and teams demonstrate on key skills for making optimal decisions in VUCA conditions.

 


Jaques’ Strata and Lectical Levels

We often receive queries about the relation between Lectical Levels and the Strata defined by Jaques. The following table shows the relation between Lectical Levels and Strata as Jaques defined them in Requisite Organization. These relations were determined by using the Lectical Assessment System to score Jaques’ definitions. We have not yet had an opportunity to compare the results of scoring the same material with both the Lectical Assessment System and a scoring system based on Jaques’ definitions, as we have done in other comparisons of scoring systems. Our interpretation of Jaques’ Strata definitions may differ from the interpretations of other researchers, leading to differences between theoretical and actual comparisons.

Strata by Lectical Level

References

Jaques, E. (1996). Requisite organization (2 ed.). Arlington, VA: Cason Hall.


Vertical development & leadership skills

What is vertical development?

In our view, learning involves two interrelated processes—the accumulation of knowledge and the organization of that knowledge into mental maps and the neural nets that support them. Over time, if we engage in activities that promote development, our mental maps become increasingly complex. More complex mental maps allow for more complex thinking. This increasing capacity to handle complexity is sometimes called vertical development.

Vertical development and leadership

As leaders move into more senior positions, the task demands of their role increase in complexity. They must juggle more (and more complex) perspectives, cope with more ambiguity, and make an increasing number of adaptive decisions. It's no surprise that more complex thinkers are more likely to rise into senior management roles.

For 15 years, we've been building learning tools that support vertical development by diagnosing leaders' current capabilities and making targeted learning recommendations. The first step in this process is measuring the developmental level of leaders' skills on the Lectical® Scale. The figure below shows how the performances of lower-level managers (n = 1108) and senior managers (n = 222) on the LDMA (our decision making assessment) are distributed on this scale. As you can see, the distribution of senior managers is higher on the Lectical Scale than that of lower-level managers. In fact, senior leaders are, on average, several years ahead of lower-level managers in their vertical development. This means they are considerably better at working with complexity.

management level by Lectical Level

Lectical Assessments are designed to advance vertical development—to help build the capacity of individuals and teams to meet the demands of an increasingly complex world. In the hands of competent coaches, mentors, and educators, Lectical Assessments double the rate of vertical development that typically occurs in effective leadership programs. This is possible because they support the natural learning cycle by providing learning suggestions that are "just right."

To learn more about the relation between vertical development and job complexity see the post: Task demands and capabilities.

To learn more about the way we think about learning and assessment, listen to this interview with Dr. Dawson: The ideal relationship between learning and assessment.

To learn more about the research with Lectical Assessments, visit our Validity and reliability page.

Source: 2014_0339_all_LDMA_scores.xlsx


Mental development in organizations


Mental development involves the dynamic integration of thoughts and feelings through interactions with the social and physical environment. It takes place in “virtuous” cycles of learning, application, and reflection that are accompanied by natural learning emotions like eager anticipation, mild frustration, and satisfaction. When components of the cycle are missing, or the natural learning emotions are replaced with negative emotions like dread, severe frustration, anger, or fear, virtuous cycles of learning are disrupted and mental growth stalls. An organization that wants to create a learning culture needs policies and programs that support virtuous cycles of learning.


Task demands and capabilities (the complexity gap)

For decades, my colleagues and I have been working with and refining a developmental assessment system called the Lectical Assessment System (now also an electronic scoring system called CLAS). It can be used to score (a) the complexity level of people’s arguments and (b) the complexity level—“task demands”—of specific situations or roles. For example, we have analyzed the task demands of levels of work in large organizations and assessed the complexity level of employees’ thinking in several skill areas — including reflective judgment/critical thinking and leadership decision-making.

The figure on the left shows the relation between the task demands of 7 management levels and the complexity level scores received on an assessment of decision making skills taken by leaders occupying these positions. The task demands of most positions increase in a linear fashion, spanning levels 10–13 (a.k.a. 1000–1399).
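The post expresses complexity both as whole levels (10–13) and on a finer-grained scale (1000–1399). A minimal sketch of that correspondence, assuming each whole level spans 100 points on the finer scale, which matches the ranges cited above:

```python
# Map a fine-grained Lectical score (e.g., 1150) to its whole
# complexity level (e.g., 11). Assumes levels span 100 points each,
# per the 10-13 / 1000-1399 correspondence in the text.
def to_level(score: int) -> int:
    """Whole complexity level for a fine-grained Lectical score."""
    return score // 100

assert to_level(1050) == 10  # within level 10
assert to_level(1399) == 13  # top of the range cited above
```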

After work level 2 (entry level management), the capabilities of leaders do not, for the most part, rise to these task demands.

This pattern is pervasive—we see it everywhere we look—and it reflects a hard truth. None of us is capable of meeting the task demands of the most complex situations in today's world. I've come to believe that in many situations our best hope for meeting these demands is to (1) recognize our human limitations, (2) work strategically on the development of our own skills and knowledge, (3) learn to work closely with others who represent a wide range of perspectives and areas of expertise, and (4) use the best tools available to scaffold our thinking.


We aren't alone. Others have observed and remarked upon this pattern:

Jaques, E. (1976). A general theory of bureaucracy. London: Heinemann Educational.

Habermas, J. (1975). Legitimation crisis (T. McCarthy, Trans.). Boston: Beacon Press.

Kegan, R. (1994). In over our heads: The mental demands of modern life. Cambridge, MA: Harvard University Press.

Bell, D. (1973). The coming of post-industrial society. New York: Basic Books.
