Growth curves vs. individual growth

Individual growth trajectories often don’t stick to statistically determined expectations.

The illustration above depicts the growth trajectory of a woman named Eleanore. Between the ages of 12 and 68, she completed two different developmental assessments several times. The first assessment was the LRJA, a test of reflective judgment (critical thinking), which she completed on eight different occasions. The second assessment was the LDMA, a test of decision-making skills, which she completed four times between the ages of 42 and 68. As you can see, Eleanore has continued to develop throughout adulthood, with periods of more and less rapid growth.

The graph on which Eleanore’s scores are plotted shows several potential developmental curves (A–H), representing typical developmental trajectories for individuals performing at different levels at age 10. You can tell right away that Eleanore is not behaving as expected. Over time, her scores have landed on two different curves (D & E), and she shows considerable growth in age ranges for which no growth is expected — on either curve.

Eleanore, who was born in 1942, was a bright child who did well in school. By the time she graduated from high school in 1960, she was in the top 15% of her class. After attending two years of community college, she joined the workforce as a legal secretary. At 23 she married a lawyer, and at 25 she gave birth to the first of two children. During the next 15 years, while raising her children, her scores hovered closer to curve E than curve D. When her youngest entered high school, Eleanore decided it was time to complete her bachelor of science degree, which she did, part time, over several years. During this period she grew more quickly than in the previous 10 years, and her LRJA scores began to cluster around curve D.

Sadly, shortly after completing her degree (at age 43), Eleanore learned that her mother had been diagnosed with dementia (what we would now call Alzheimer’s disease). For the next 6 years, she cared for her ailing mother, who died only a few days before Eleanore’s 50th birthday. While she cared for her mother, Eleanore learned a great deal about Alzheimer’s — from both personal experience and the extensive research she did to help ensure the best possible care for her mother. This may have contributed to the growth that occurred during this period. Following her mother’s death, Eleanore decided to build upon her knowledge of Alzheimer’s, spending the next 6 years earning a Ph.D. focused on its origins. At the time of her last assessment, she was a respected Alzheimer’s researcher.

And now I must confess. Eleanore is not a real person. She’s a composite based on 70 years of research in which the growth of thousands of individuals has been measured over periods spanning 8 months to 25 years. Eleanore’s story has been designed to illustrate several phenomena my colleagues and I have observed in these data:

First, although statistics allow us to describe typical developmental trajectories, individual development is usually more or less atypical. Eleanore does not stay on the curve she started out on. In fact, she drops below this curve for a time, then develops beyond it in later adulthood. She also grew during age ranges in which no growth at all was expected. Both life events and formal education clearly influenced her developmental trajectory.

Second, many people develop throughout adulthood — especially if they are involved in rich learning experiences (like formal schooling), or when they are coping productively with life crises (like reflectively supporting an ailing parent).

Third, developmental spurts happen. The figure above shows a (real) growth spurt that occurred between the ages of 46 and 51. This highly motivated individual engaged in a sustained and varied learning adventure during this period — just because he wanted to build his interpersonal and leadership skills.

Fourth, developmental growth can happen late in life, given the right opportunities and circumstances. The (real) woman whose scores are shown here responded to a personal life crisis by embracing it as an opportunity to learn more about herself as a person and as a leader.

My colleagues and I find the statistically determined growth curves shown on the figures in this article enormously useful in our research, but it’s important to keep in mind that they’re just averages. Many people can jump from one curve to another given the right learning skills and opportunities. On the other hand, these curves are associated with some constraints. For example, we’ve never seen anyone jump more than one of these curves, no matter how excellent their learning skills or opportunities have been. Unsurprisingly, nurture cannot entirely overcome nature.

Growth is predicted by a number of factors. Nature is a big one. How we personally approach learning is also pretty big — with approaches that feature virtuous cycles of learning taking the lead. And, of course, our growth is influenced by how optimally the environments we live, learn, and work in support learning.


Find out how we put this knowledge to work in leader development and recruitment contexts, with LAP-1 and LAP-2.


Fit-to-role, clarity, & VUCA skills: Strong predictors of senior & executive recruitment success

 

Mental ability is by far the best predictor of recruitment success — across the board.* During the 20th century, aptitude tests were the mental ability metrics of choice — but this is the 21st century. The workplace has changed. Today, leaders don’t need skills for choosing the correct answer from a list. They need skills for coping with complex issues without simple right and wrong answers. Aptitude tests don’t measure these skills.

Today, success in senior and executive roles is best predicted by (1) the fit between the complexity of leaders’ thinking and the complexity of their roles, (2) the clarity of their thinking in real workplace contexts, and (3) their skills for functioning in VUCA (volatile, uncertain, complex, and ambiguous) conditions.

Fit-to-role is the relation between the complexity level of an individual’s reasoning and the complexity level of a given role. Good fit-to-role increases well-being, engagement, effectiveness, and productivity.

Clarity involves the degree to which an individual’s arguments are coherent and persuasive, how well their arguments are framed, and how well their ideas are connected. Individuals who think more clearly make better decisions and grow more rapidly than individuals who think less clearly.

VUCA skills are required for making good decisions in volatile, uncertain, complex, or ambiguous contexts. They are…

  • perspective coordination—determining which perspectives matter, seeking out a diversity of relevant perspectives, and bringing them together in a way that allows for the emergence of effective solutions.
  • decision-making under complexity — employing a range of decision-making tools and skills to design effective decision-making processes for complex situations.
  • contextual thinking — being predisposed to think contextually, being able to identify the contexts that are most likely to matter in a given situation, and being able to determine how those contexts relate to the situation at hand.
  • collaboration — understanding the value of collaboration, being equipped with the tools and skills required for collaboration, and being able to determine the level of collaboration that’s appropriate for a particular decision-making context.

Fit-to-role

Getting fit-to-role right increases well-being, engagement, effectiveness, and productivity. Our approach to role fit pairs an assessment of the complexity of an individual’s thinking — when applied to a wicked real-world workplace scenario — with an analysis of the complexity of a particular workplace role.

Fit-to-role for eight candidates

The Lectical Scores in the figure on the left represent the complexity level scores awarded to eight job candidates, based on their performances on a developmental assessment of leader decision making (LDMA). The fit-to-role score tells us how well the Lectical Score fits the complexity range of a role. Here, the complexity range of the role is 1120–1140, represented by the vertical teal band. The circles represent the Lectical Scores of candidates. The size of these circles represents the range in which the candidate’s true level of ability is likely to fall.

The “sweet spot” for a new hire is generally at the bottom end of the complexity range of a role, in this case, 1120. There are two reasons for this.

  • The sweet spot is where the challenge posed by a new role is “just right” — just difficult enough to keep an employee in flow — what we call the Goldilocks zone. Placing employees in the sweet spot increases employee satisfaction, improves performance, and optimally supports learning and development.
  • An existing team is more likely to embrace candidates who are performing in the sweet spot. Sweet spot candidates are likely to welcome support and mentoring, which makes it easier to integrate them into an existing team than it is to integrate candidates performing at higher levels, who may be viewed as competitors.

In the figure above, teal circles represent candidates whose scores are in or very near the sweet spot — fit-to-role is excellent. Yellow circles represent individuals demonstrating marginal fit, and red circles represent individuals demonstrating poor fit.
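
To make the color-coding concrete, here’s a minimal sketch of how scores might be banded by their distance from the sweet spot. The window widths and the candidate scores are invented for illustration — they are not Lectica’s published cut-offs.

```python
# Hypothetical sketch: band candidates by distance from the sweet spot (1120).
# Window widths and candidate scores are invented for illustration only.

def fit_band(score: int, sweet_spot: int = 1120,
             near: int = 8, marginal: int = 20) -> str:
    """Return a fit-to-role band based on distance from the sweet spot."""
    gap = abs(score - sweet_spot)
    if gap <= near:
        return "excellent (teal)"
    if gap <= marginal:
        return "marginal (yellow)"
    return "poor (red)"

# Invented scores, chosen to be consistent with the outcomes described below.
candidates = {"Jewel": 1122, "YiYu": 1126, "Alistair": 1131, "Martin": 1135,
              "Celia": 1085, "Amar": 1092, "Chilemba": 1158, "Jae-Eun": 1178}

for name, score in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score} -> {fit_band(score)}")
```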

We can use circle color to help us figure out who should advance to the next level in a recruitment process.

The first cut

Based on the results shown above, it’s easy to decide who will advance to the next step in this process. Red circles mean, “This person is a poor fit to the complexity demands of this role.” Therefore, candidates with red circles should be eliminated from consideration for this role. Celia, Amar, Chilemba, and Jae-Eun just don’t fit.

However, this does not mean that these candidates should be ignored. Every single one of the eliminated candidates has high or acceptable Clarity and VUCA scores. So, despite the fact that they did not fit this role, each one may be a good fit for a different role in the organization.

It’s also worth noting that Jae-Eun demonstrates a level of skill — across measures — that’s relatively rare. When you identify a candidate with mental skills this good, it’s worth seeing if there is some way your organization can leverage these skills.

The second cut

The first cut left us with four candidates who met basic fit-to-role qualifications: Jewel, YiYu, Alistair, and Martin. The next step is to find out whether their Clarity and VUCA scores are good enough for this role.

Below, you can see how we have interpreted the Clarity and VUCA scores for each of the remaining candidates, and made recommendations based on these interpretations. Notice that YiYu and Alistair are recommended with reservations. It will be important to take these reservations into account during next steps in the recruitment process.

What’s next?

Let’s assume that Jewel, YiYu, and Alistair move to the next step in the recruitment process. Once the number of candidates has been winnowed down to this point, it’s a good time to administer personality or culture fit assessments, conduct team evaluations, view candidate presentations, or conduct interviews. You already know the candidates are equipped with adequate to excellent mental skills and fit-to-role. From here, it’s all about which candidate you think is likely to fit into your team.


As soon as we have it, my colleagues and I publish our reliability and validity evidence in refereed journals or conference presentations, or present it on our website. We believe in total transparency regarding the validity and reliability of all assessments employed in the workplace.

 


*Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.


By popular demand—two new self-guided courses from Lectica

Introducing LAP-1 & LAP-2 Light

For some time now, people have been asking us how they can learn at least some of what we teach in our certification courses—but without the homework! Well, we’ve taken the plunge, with two new self-guided courses.


All profits from sales support Lectica’s mission to deliver the world’s best assessments free of charge to K-12 teachers everywhere!


LAP-1 Light

In LAP-1 Light, we’ve brought together the lectures and much of the course material offered in the certification version of the course—Lectical Assessments in Practice for Coaches. You’ll take a deep dive into our learning model and learn how two of our most popular adult assessments—the LDMA (focused on leadership decision making) and the LSUA (focused on leaders’ understanding of themselves in workplace relationships)—are used to support leader development.

This course is perfect for coaches or consultants who are thinking about certifying down the road.

LEARN MORE

LAP-2 Light

In LAP-2 Light, we’re offering all of the lectures and much of the course material from LAP-2—Lectical Assessments in Practice for Recruitment Professionals. You’ll learn about Lectica’s Human Capital Value Chain, conventional recruitment practices, how to evaluate recruitment assessments, and all about Lectica’s recruitment products—including Lectica First (for front-line to mid-level recruitment) and Lectica Suite (for senior recruitment).

This course is perfect for recruitment professionals of all kinds, or for anyone who is toying with the idea of becoming accredited in the use of our recruitment tools.

LEARN MORE

Upgrades

Upgrades to our certification courses are available for both LAP-1 Light and LAP-2 Light!

 


National leaders’ thinking: Australian prime ministers

How complex are the interview responses of the last four Australian prime ministers? How does the complexity of their responses compare to the complexity of the U.S. presidents’ responses?

Special thanks to my Australian colleague, Aiden M. A. Thornton, PhD. Cand., for his editorial and research assistance.

This is the fourth in a series of articles on the complexity of national leaders’ thinking, as measured with CLAS, a newly validated electronic developmental scoring system. This article will make more sense if you begin with the first article in the series.

Just in case you choose not to read or revisit the first article, here are a few things to keep in mind:

  • I am an educational researcher and the CEO of a nonprofit that specializes in measuring the complexity level of people’s thinking skills and supporting the development of their capacity to work with complexity.
  • The complexity level of leaders’ thinking is one of the strongest predictors of leader advancement and success. See the National Leaders Intro for evidence.
  • Many of the issues faced by national leaders require principles thinking (level 12 on the skill scale/Lectical Scale, illustrated in the figure below). See the National Leaders Intro for the rationale.
  • To accurately measure the complexity level of someone’s thinking (on a given topic), we need examples of their best thinking. In this case, that kind of evidence wasn’t available. As an alternative, my colleagues and I have chosen to examine the complexity level of prime ministers’ responses to interviews with prominent journalists.

Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders’ Series introductory article.)

The data

In this article, we examine the thinking of the four most recent prime ministers of Australia—Julia Gillard, Kevin Rudd, Tony Abbott, and Malcolm Turnbull. For each prime minister, we selected three interviews, based on the following criteria: They

  1. were conducted by prominent journalists representing respected news media;
  2. included questions that requested explanations of the Prime Minister’s perspective; and
  3. were either conducted within the Prime Minister’s first year in office or were the earliest interviews we could locate that met the first two criteria.

As noted in the introductory article of this series, we do not imagine that the responses provided in these interviews necessarily represent competence. It is common knowledge* that prime ministers and other leaders typically attempt to tailor messages to their audiences, so even when responding to interview questions, they may not show off their own best thinking. Media also tailor writing for their audiences, so to get a sense of what a typical complexity level target for top media might be, we used CLAS to score 11 articles from Australian news media on topics similar to those discussed by the four prime ministers in their interviews. We selected these articles at random—literally selecting the first ones that came to hand—from recent issues of the Canberra Times, The Age, the Sydney Morning Herald, and Adelaide Now. Articles from all of these newspapers landed in the lower range of the early systems thinking zone, with a mean score of 1109 (15 points lower than the mean for the U.S. media sample) and a range of 45 points.

Hypothesis

Based on the mean media score, and understanding that politicians generally attempt, like media, to tailor messages for their audience, we hypothesized that prime ministers would aim for a similar range. Since the mean score for the Australian media sample was lower by 15 points than the mean score for the U. S. media sample, we anticipated that the average score received by Australian prime ministers would be a bit lower than the average score received by U. S. presidents.

The results

The table below shows the complexity scores received by the four prime ministers. (Contact us if you would like a copy of the interviews.) Complexity level scores are shown in the same order as interview listings.

All of the scores received by Australian prime ministers fell well below the complexity level of many of the problems faced by national leaders. Although we cannot assume that the interview responses we scored are representative of these leaders’ best thinking, we can assert that we can see no evidence in these interviews that these prime ministers have the capacity to grasp the full complexity of many of the issues they faced (or are currently facing) in office. Instead, their scores suggest levels of skill that are more appropriate for mid- to upper-level managers in large organizations.

| Prime minister | Interviews (journalist, outlet, date) | Complexity level scores | Mean complexity level | Mean zone |
|---|---|---|---|---|
| Julia Gillard (2010–2013) | Laurie Oakes, Weekend Today, 6/27/2010; Jon Faine, ABC 774, 6/29/2010; Deborah Cameron, ABC Sydney, 7/07/2010 | 1108, 1113, 1113 | 1111 | Early systems thinking |
| Kevin Rudd (2007–2010, 2013) | Kerry O’Brien, ABC AM, 4/24/2008; Lyndal Curtis, ABC AM, 5/30/2008; Jon Faine, ABC 774 Brisbane, 6/06/2008 | 1133, 1138, 1129 | 1133 | Early systems thinking |
| Tony Abbott (2013–2015) | Alison Carabine, ABC Radio National, 12/16/2013; Ray Hadley, 1/29/2014; Chris Uhlman, ABC AM, 9/26/2014 | 1133, 1129, 1117 | 1126 | Early systems thinking |
| Malcolm Turnbull (2015–) | Michael Brissendon, ABC AM, 9/21/2015; several journalists, 12/1/2015; Steve Austin, ABC Radio Brisbane, 1/17/2017 | 1133, 1138, 1113 | 1128 | Early systems thinking |

Comparison of U.S. and Australian results

There was less variation in the complexity scores of Australian prime ministers than in the complexity scores of U. S. presidents. Mean scores for the U. S. presidents ranged from 1054 to 1163 (109 points), whereas the range for Australian prime ministers was 1111–1133 (22 points). If we exclude President Trump as an extreme outlier, the mean score for U. S. presidents was 12 points higher than the mean for Australian prime ministers.

You may notice that the scores of two of the prime ministers who received a score of 1133 on their first interviews had dropped by the time of their third interviews. This is reminiscent of the pattern we observed for President Obama.

The mean score for all four prime ministers was 14 points higher than the mean for sampled media. Interestingly, if we exclude President Trump as an extreme outlier, the difference between the average score received by U. S. presidents and the U. S. media average is almost identical, at 13 points. Almost all of the difference between the mean scores of prime ministers and presidents (excluding President Trump) can be explained by the difference in media scores.

| Country | Complexity score range | Range width | Leader average | Media average | Leader average − media average |
|---|---|---|---|---|---|
| USA | 1054–1163 | 109 | 1116 (1137 without P. Trump) | 1124 | −8 (13 without P. Trump) |
| Australia | 1111–1133 | 22 | 1125 | 1111 | 14 |
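
For readers who want to check the arithmetic, here’s a small sketch that reproduces the summary figures above from the interview means reported in this series (the U.S. means come from the earlier article on the U.S. presidents).

```python
# Reproduce the summary statistics above from the reported interview means.
us_means = {"Clinton": 1141, "Bush": 1107, "Obama": 1163, "Trump": 1054}
au_means = {"Gillard": 1111, "Rudd": 1133, "Abbott": 1126, "Turnbull": 1128}
us_media, au_media = 1124, 1111  # mean scores of the sampled news articles

us_range = max(us_means.values()) - min(us_means.values())
au_range = max(au_means.values()) - min(au_means.values())
au_avg = sum(au_means.values()) / len(au_means)  # 1124.5, i.e. ~1125
us_avg_no_trump = sum(v for k, v in us_means.items() if k != "Trump") / 3  # 1137.0

print(us_range, au_range)          # 109 and 22 -> the ranges in the table
print(au_avg - au_media)           # 13.5 -> the ~14-point media gap in the text
print(us_avg_no_trump - us_media)  # 13.0 -> the 13-point U.S. media gap
print(us_avg_no_trump - au_avg)    # 12.5 -> the ~12-point US-Australia gap
```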

The sample sizes here are too small to support a statistical analysis, but once we have conducted our analyses of the British and Canadian prime ministers, we will be able to examine these trends statistically—and find out if they look like more than a coincidence.

Discussion

In the first article of this series, I discussed the importance of attempting to “hire” leaders whose complexity level scores are a good match for the complexity level of the issues they face in their roles. I then posed two questions:

  • When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  • How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

We now have a third question to add:

  • What is the relation between the complexity level of national leaders’ interview responses and the complexity level of respected media?

So far, we have learned that when national leaders explain their positions on complex issues, they do not — with the possible exception of President Obama — demonstrate that they are capable of grasping the full complexity of these issues. On average, their explanations do not rise to the mean level demonstrated by executive leaders in Lectica’s database.

We have also learned that when national leaders explained their positions on complex issues to the press, their explanations were 13–14 points higher on the Lectical Scale than the average complexity level of sampled media articles. We will be following this possible trend in upcoming articles about the British and Canadian leaders.

Interestingly, the Lectical Scores of two prime ministers whose average scores were above the media average dropped closer to the media average in their third interviews. We observed the same pattern for President Obama. It’s too soon to declare this to be a trend, but we’ll be watching.

As noted in the article about the thinking of U. S. presidents, the world needs leaders who understand and can work with highly complex issues, and particularly in democracies, we also need leaders whose messages are accessible to the general public. Unfortunately, the drive toward accessibility seems to have led to a situation in which candidates are persuaded to simplify their messages, leaving voters with one less way to evaluate the competence of our future leaders. How are we to differentiate between candidates whose capacity to comprehend complex issues is only as complex as that of a mid-level manager and candidates who have a high capacity to comprehend and work with these issues but feel compelled to simplify their messages? And in a world in which people increasingly seem to believe that one opinion is as good as any other, how do we convince voters of the critical importance of complex thinking and the expertise it represents?


*The speeches of presidents are generally written to be accessible to a middle school audience. The metrics used to determine reading level are not measures of complexity level. They are measures of sentence length, word length, and sometimes the commonness of words. For more on reading level see: How to interpret reading level scores.


Other articles in this series


Fit-to-role, well-being, & productivity

How to recruit the brain’s natural motivational cycle—the power of fit-to-role.

People learn and work better when the challenges they face in their roles are just right—when there is good fit-to-role. Improving fit-to-role requires achieving an optimal balance between an individual’s level of skill and role requirements. When employers get this balance right, they increase engagement, happiness (satisfaction), quality of communication, productivity, and even cultural health.

video version

Here’s how it works.

In the workplace, the challenges we’re expected to face should be just big enough to allow for success most of the time, but not so big that frequent failure is inevitable. My colleagues and I call this balance-point the Goldilocks zone, because it’s where the level of challenge is just right. Identifying the Goldilocks zone is important for three reasons:

First, and most obviously, it’s not good for business if people make too many mistakes.

Second, if the distance between employees’ levels of understanding and the difficulty of the challenges they face is too great, employees are less likely to understand and learn from their mistakes. This kind of gap can lead to a vicious cycle, in which, instead of improving or staying the same, performance gradually deteriorates.

Third, when a work challenge is just right we’re more likely to enjoy ourselves—and feel motivated to work even harder. This is because challenges in the Goldilocks zone allow us to succeed just often enough to stimulate our brains to release pleasure hormones called opioids. Opioids give us a sense of satisfaction and pleasure. And they have a second effect. They also trigger the release of dopamine—the striving hormone—which motivates us to reach for the next challenge (so we can experience the satisfaction of success once again).

The dopamine-opioid cycle is a virtuous cycle that will repeat indefinitely—but only when enough of our learning challenges are in the zone: not too easy and not too hard. As long as the dopamine-opioid cycle keeps cycling, we feel engaged. Engaged people are happy people—they tend to feel satisfied, competent, and motivated. [1]

People are also happier when they feel they can communicate effectively and build understanding with those around them. When organizations get fit-to-role right for every member of a team, they’re also building a team with members who are more likely to understand one another. This is because the complexity level of role requirements for different team members is likely to be very similar. So, getting fit-to-role right for one team member means building a team in which members are performing within a complexity range that makes it relatively—but not too—easy for members to understand one another. Team members are happiest when they can be confident that—most of the time and with reasonable effort—they will be able to achieve a shared understanding with other members.

A team representing a diversity of perspectives and skills, composed of individuals performing within a complexity range of 10–20 points on the Lectical Scale is likely to function optimally.

Getting fit-to-role right also ensures that line managers are slightly more complex thinkers than their direct reports. People tend to prefer leaders they can look up to, and most of us intuitively look up to people who think a little more complexly than we do. [2] When it comes to line managers, if we’re as skilled as they are, we tend to wonder why they’re leading us. If we’re more skilled than they are, we are likely to feel frustrated. And if they’re way more skilled than we are, we may not understand them fully. In other words, we’re happiest when our line managers challenge us—but not too much. (Sound familiar?)

Most people work better with line managers who perform 15–25 points higher on the Lectical Scale than they do.
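
Here’s a minimal sketch of these two spacing heuristics. The 10–20 point team spread and the 15–25 point manager gap come from the text above; the scores themselves are invented.

```python
# Check team spread and manager-report gaps against the rules of thumb above.
from typing import List

def team_spread_ok(scores: List[int]) -> bool:
    """True if the team's Lectical scores span roughly 10-20 points."""
    spread = max(scores) - min(scores)
    return 10 <= spread <= 20

def manager_gap_ok(manager: int, report: int) -> bool:
    """True if the manager scores 15-25 points above the direct report."""
    return 15 <= manager - report <= 25

team = [1118, 1124, 1131, 1135]    # invented Lectical scores
print(team_spread_ok(team))        # True: a spread of 17 points
print(manager_gap_ok(1152, 1131))  # True: a gap of 21 points
print(manager_gap_ok(1140, 1135))  # False: too close—"why are they leading us?"
```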

Unsurprisingly, all this engagement and happiness has an impact on productivity. Individuals work more productively when they’re happily engaged. And teams work more productively when their members communicate well with one another. [3]

The moral of the story

The moral of this story is that employee happiness and organizational effectiveness are driven by the same thing—fit-to-role. We don’t have to compromise one to achieve the other. Quite the contrary. We can’t achieve either without achieving fit-to-role.

Summing up

To sum up, when we get fit-to-role right—in other words, ensure that every employee is in the zone—we support individual engagement & happiness, quality communication in teams, and leadership effectiveness. Together, these outcomes contribute to productivity and cultural health.

Getting fit-to-role right requires top-notch recruitment and people development practices, starting with the ability to measure the complexity of (1) role requirements and (2) people skills.

When my colleagues and I think about the future of recruitment and people development, we envision healthy, effective organizations characterized by engaged, happy, productive, and constantly developing employees & teams. We help organizations achieve this vision by…

  • reducing the cost of recruitment so that best practices can be employed at every level in an organization;
  • improving predictions of fit-to-role;
  • broadening the definition of fit-to-role to encompass the role, the team, and the position of a role in the organizational hierarchy; and
  • promoting the seamless integration of recruitment with employee development strategy and practice.

[1] Csikszentmihalyi, M. (2008). Flow: The psychology of happiness. Harper Collins.

[2] Oishi, S., Koo, M., & Akimoto, S. (2015). Culture, interpersonal perceptions, and happiness in social interactions. Personality and Social Psychology Bulletin, 34, 307–320.

[3] Oswald, A. J., Proto, E., & Sgroi, D. (2015). Happiness and productivity. Journal of Labor Economics, 33, 789–822.


President Trump passed the Montreal Cognitive Assessment

Shortly after the President passed the Montreal Cognitive Assessment, a reader emailed with two questions:

  1. Does this mean that the President has the cognitive capacity required of a national leader?
  2. How does a score on this test relate to the complexity level scores you have been describing in recent posts?

Question 1

A high score on the Montreal Cognitive Assessment does not mean that the President has the cognitive capacity required of a national leader. This test result simply means there is a high probability that the President is not suffering from mild cognitive impairment. (The test has been shown to detect existing cognitive impairment 88% of the time [1].) In order to determine whether the President has the mental capacity to understand the complex issues he faces as a national leader, we need to know how complexly he thinks about those issues.

Question 2

The answer to the second question is that there is little relation between scores on the Montreal Cognitive Assessment and the complexity level of a person’s thinking. A test like the Montreal Cognitive Assessment does not require the kind of thinking a President needs to understand highly complex issues like climate change or the economy. Teenagers can easily pass this test.


Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)

[1] Tsoi, K. K., Chan, J. Y., Hirai, H. W., Wong, S. Y., & Kwok, T. C. (2015). Cognitive tests to detect dementia: A systematic review and meta-analysis. JAMA Internal Medicine, 175(9), 1450–1458. doi:10.1001/jamainternmed.2015.2152

 


President Trump on climate change

How complex are the ideas about climate change expressed in President Trump’s tweets? The answer is, they are even less complex than ideas he has expressed about intelligence, international trade, and immigration—landing squarely in level 10. (See the benchmarks, below, to learn more about what it means to perform in level 10.)

The President’s climate change tweets

It snowed over 4 inches this past weekend in New York City. It is still October. So much for Global Warming.
2:43 PM – Nov 1, 2011

 

It’s freezing in New York—where the hell is global warming?
2:37 PM – Apr 23, 2013

 

Record low temperatures and massive amounts of snow. Where the hell is GLOBAL WARMING?
11:23 PM – Feb 14, 2015

 

In the East, it could be the COLDEST New Year’s Eve on record. Perhaps we could use a little bit of that good old Global Warming…!
7:01 PM – Dec 28, 2017

Analysis

In all of these tweets, President Trump appears to assume that unusually cold weather is proof that climate change (a.k.a. global warming) is not real. The argument is an example of simple level 10 linear causal logic that can be represented as an “if, then” statement: “If the temperature right now is unusually low, then global warming isn’t happening.” Moreover, in these comments the President relies exclusively on immediate (proximal) evidence: “It’s unusually cold outside.” We see the same use of immediate evidence when climate change believers claim that a warm weather event is proof that climate change is real.

Let’s use some examples of students’ reasoning to get a fix on the complexity level of President Trump’s tweets. Here is a statement from an 11th grade student who took our assessment of environmental stewardship (complexity score = 1025):

“I do think that humans are adding [gases] to the air, causing climate change, because of everything around us. The polar ice caps are melting.”

The argument is an example of simple level 10 linear causal logic that can be represented as an “if, then” statement: “If the polar ice caps are melting, then global warming is real.” There is a difference between this argument and President Trump’s argument, however: the student is describing a trend rather than a single event.

Here is an argument made by an advanced 5th grader (complexity score = 1013):

“I think that fumes, coals, and gasses we use for things such as cars…cause global warming. I think this because all the heat and smoke is making the years warmer and warmer.”

This argument is also an example of simple level 10 linear causal logic that can be represented as an “if, then” statement: “If the years are getting warmer and warmer, then global warming is real.” Again, the difference between this argument and President Trump’s argument is that the student is describing a trend rather than a single event.

I offer one more example, this time of a 12th grade student making a somewhat more complex argument (complexity score = 1035).

“Humans have caused a lot of green house gasses…and these have caused global warming. The temperature has increased over the years and studies show that the ice is melting in the north and south pole, so, yes humans are causing climate change.”

This argument is also an example of level 10 linear causal logic that can be represented as an “if, then” statement: “If the temperature has increased and studies show that the ice at the north and south poles is melting, then humans are causing climate change.” In this case, the student’s argument is a bit more complex than in the previous examples: she mentions two trends (warming and melting) and explicitly uses scientific evidence to support her conclusion.

Based on these comparisons, it seems clear that President Trump’s tweets about climate change represent reasoning at the lower end of level 10.


Reasoning in level 11

Individuals performing in level 11 recognize that climate is an enormously complex phenomenon that involves many interacting variables. They understand that any single event or trend may be part of the bigger story, but is not, on its own, evidence for or against climate change.

Summing up

It concerns me greatly that someone who does not demonstrate any understanding of the complexity of climate is in a position to make major decisions related to climate change.


Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)

 

Please follow and like us:

National leaders’ thinking: The US presidents

How well does the thinking of recent US presidents stand up to the complexity of the issues faced in their role?

Special thanks to my Australian colleague, Aiden M. A. Thornton, PhD Cand., for his editorial and research assistance.

This is the second in a series of articles on the complexity of national leaders’ thinking, as measured with CLAS, a newly validated electronic developmental scoring system. This article will make more sense if you begin with the first article in the series.

Just in case you choose not to read or revisit the first article, here are a few things to keep in mind.

  • I am an educational researcher and the CEO of a nonprofit that specializes in measuring the complexity level of people’s thinking and supporting the development of their capacity to work with complexity.
  • The complexity level of leaders’ thinking is one of the strongest predictors of leader advancement and success.
  • Many of the issues faced by national leaders require principles thinking (level 12 on the skill scale, illustrated in the figure below).
  • To accurately measure the complexity level of someone’s thinking (on a given topic), we need examples of their best thinking. In this case, that kind of evidence wasn’t available. As an alternative, my colleagues and I have chosen to examine the complexity level of Presidents’ responses to interviews with prominent journalists.

The data

In this article, we examine the thinking of the four most recent Presidents of the United States — Bill Clinton, George W. Bush, Barack Obama, and Donald Trump. For each president, we selected 3 interviews, based on the following criteria: They

  1. were conducted by prominent journalists representing respected news media;
  2. included questions that requested explanations of the president’s perspective; and
  3. were either conducted within the president’s first year in office or were the earliest interviews we could locate that met the first two criteria.

As noted in the introductory article of this series, we do not imagine that the responses provided in these interviews necessarily represent competence. It is common knowledge* that presidents and other leaders typically attempt to tailor messages for their audiences, so even when responding to interview questions, they may not show off their own best thinking.

Media also tailor writing for their audiences, so to get a sense of what a typical complexity level target for top media might be, we used CLAS to score 11 articles on topics similar to those discussed by the four presidents in their interviews. We selected these articles at random — literally selecting the first ones that came to hand — from recent issues of the New York Times, Guardian, Washington Post, and Wall Street Journal. Articles from all of these newspapers landed in the middle range of the early systems thinking zone, with an average score of 1124.

Based on this information, and understanding that presidents generally attempt to tailor messages for their audience, we hypothesized that presidents would aim for a similar range.

The results

The results were mixed. Only Presidents Clinton and Bush consistently performed in the anticipated range. President Trump stood out by performing well below this range. His scores were all identical — and roughly equivalent to the average for 12th graders in a reasonably good high school. President Obama also missed the mark, but in the opposite direction. In his first interviews, he scored at the top of the advanced systems thinking zone. But he didn’t stay there. By the time of the September interview, he was responding in the early systems thinking zone. He even mentioned simplifying communication in this interview. Commenting on his messaging around health care, he said, “I’ve tried to keep it digestible… it’s very hard for people to get… their whole arms around it.”

The Table below shows the complexity scores received by our four presidents. (All of the interviews can readily be found in the presidential archives.)

Discussion

In the first article of this series, I discussed the importance of attempting to “hire” leaders whose complexity level scores are a good match for the complexity level of the issues they face in their roles. I then posed two questions:

  1. When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  2. How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

The answer to question 1 is that the average complexity level of presidents’ responses to interview questions varied dramatically. President Trump’s average complexity level score was 1054 — near the average score received by 12th graders in a good high school. President Bush’s average score was 1107 — near the average score received by entry- to mid-level managers in a large corporation. President Clinton’s average score was 1141 — near the average score received by upper-level managers in large corporations. President Obama’s average score was 1163 — near the average score of senior leaders in large corporations. (Obama’s highest scores were closer to the average for CEOs in our database.)

With respect to question 2, the complexity level of presidents’ responses did not rise to the complexity level of many of the issues raised in their interviews. These issues ranged from international relations and the economy to health care and global warming. All of these are thorny problems involving multiple interacting and nested systems—early principles and above. Indeed, many of these problems are so complex that they are beyond the capability of even the most complex thinkers to fully grasp. (See my article on the Complexity Gap for more on this issue.) President Obama came closest to demonstrating a level of thinking complexity that would be adequate for coping with problems of this kind. (For more on this, see the third article in this series, If a U. S. President thought like a teenager…)

Obama also demonstrated some of the other qualities required for working well with complexity, such as skills for perspective seeking and perspective coordination, and familiarity with tools for working with complexity—but that’s another story.

In addition to addressing the two questions posed in the first article of this series, we were able to ask if these U. S. presidents seemed to tailor the complexity level of their interview responses for the audiences of the media outlets represented by journalists conducting the interviews.

First, the responses of presidents Bush and Clinton were in the same zone as a set of articles collected from these media outlets. Of course, we can’t be sure the alignment was intentional. There are other plausible explanations, including the possibility that what we witnessed was their best thinking.

In contrast, however, President Trump’s responses were well below the zone of the selected articles, making it difficult to argue that he was tailoring his responses for their audiences. Individuals whose thinking is complex are likely to find thinking at lower levels of complexity simplistic and unsatisfying. Delivering a message that is likely to lead to judgments of this kind does not seem like a rational tactic — especially for a politician.

It seems more plausible that President Trump was demonstrating his best thinking about the issues raised in his interviews. If so, his best would be far below the complexity level of most issues faced in his role. Indeed, individuals performing in the advanced linear thinking zone would not even be aware of the complexity inherent in many of the issues faced daily by national leaders.

President Obama confronted a different challenge. The complexity of thinking evident in his early interviews was very high. Even though, as with Bush and Clinton, it isn’t possible to say we witnessed Obama’s best thinking, we would argue that what we saw of President Obama’s thinking in his first two interviews was a reasonable fit to the complexity of the challenges in his role. However, it appears that Obama soon learned that in order to communicate effectively with citizens, he needed to make his communications more accessible.

In the results reported here, Democrats scored higher than Republicans. We have no reason to believe that conservative thinking is inherently less complex than liberal thinking. In fact, in the past, we have identified highly complex thinking in both conservative and liberal leaders.

We need leaders who can cope with highly complex issues, and particularly in a democracy, we also need leaders we can understand. President Obama showed himself to be a complex thinker, but he struggled with making his communications accessible. President Trump’s message is accessible, but our results suggest that he may not even be aware of the complexity of many issues faced in his role. Is it inevitable that the tension between complexity and accessibility will sometimes lead us to “hire” national leaders who are easy to understand, but lack the ability to work with complexity? And how can we even know if a leader is equipped with the thinking complexity that’s required if candidates routinely simplify communications for their audience? Given our increasingly volatile and complex world, these are questions that cry out for answers.

We don’t have these answers, and we’ve intentionally resisted going deeper into the implications of these findings. Instead, we’re hoping to stimulate discussion around our questions and the implications that arise from the findings presented here. Please feel free to chime in or contact us to further the conversation. And stay tuned. The Australian Prime Ministers are next!


*The speeches of presidents are generally written to be accessible to a middle school audience. The metrics used to determine reading level are not measures of complexity level, but reading level scores are moderately correlated with complexity level.


 



National leaders’ thinking: How does it measure up?

Special thanks to my Australian colleague, Aiden Thornton, for his editorial and research assistance.

This is the first in a series of articles on the complexity of national leaders’ thinking. These articles will report results from research conducted with CLAS, our newly validated electronic developmental scoring system. CLAS will be used to score these leaders’ responses to questions posed by prominent journalists.

In this first article, I’ll be providing some of the context for this project, including information about how my colleagues and I think about complexity and its role in leadership. I’ve embedded lots of links to additional material for readers who have questions about our 100+ year-old research tradition, Lectica’s (the nonprofit that owns me) assessments, and other research we’ve conducted with these assessments.

Context and research questions

Lectica creates diagnostic assessments for learning that support the development of mental skills required for working with complexity. We make these learning tools for both adults and children. Our K-12 initiative—the DiscoTest Initiative—is dedicated to bringing these tools to individual K-12 teachers everywhere, free of charge. Our adult assessments are used by organizations in recruitment and training, and by colleges and universities in admissions and program evaluation.

All Lectical Assessments measure the complexity level (aka, level of vertical development) of people’s thinking in particular knowledge areas. A complexity level score on a Lectical Assessment tells us the highest level of complexity—in a problem, issue, or task—an individual is likely to be able to work with effectively.

On several occasions over the last 20 years, my colleagues and I have been asked to evaluate the complexity of national leaders’ reasoning skills. Our response has been, “We will, but only when we can score electronically—without the risk of human bias.” That time has come. Now that our electronic developmental scoring system, CLAS, has demonstrated a level of reliability and precision that is acceptable for this purpose, we’re ready to take a look.

Evaluating the complexity of national leaders’ thinking is a challenging task for several reasons. First, it’s virtually impossible to find examples of many of these leaders’ “best work.” Their speeches are generally written for them, and speech writers usually try to keep the complexity level of these speeches low, aiming for a reading level in the 7th to 9th grade range. (Reading level is not the same thing as complexity level, but like most tests of capability, it correlates moderately with complexity level.) Second, even when national leaders respond to unscripted questions from journalists, they work hard to use language that is accessible to a wide audience. And finally, it’s difficult to identify a level playing field—one in which all leaders have the same opportunity to demonstrate the complexity of their thinking.

Given these obstacles, there’s no point in attempting to evaluate the actual thinking capabilities of national leaders. In other words, we won’t be claiming that the scores awarded by CLAS represent the true complexity level of leaders’ thinking. Instead, we will address the following questions:

  1. When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  2. How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

Thinking complexity and leader success

At this point, you may be wondering, “What is thinking complexity and why is it important?” A comprehensive response to this question isn’t possible in a short article like this one, but I can provide a basic description of complexity as we see it at Lectica, and provide some examples that highlight its importance.

All issues faced by leaders are associated with a certain amount of built-in complexity. For example:

  1. The sheer number of factors/stakeholders that must be taken into account.
  2. Short and long-term implications/repercussions. (Will a quick fix cause problems downstream, such as global unrest or catastrophic weather?)
  3. The number and diversity of stakeholders/interest groups. (What is the best way to balance the needs of individuals, families, businesses, communities, states, nations, and the world?)
  4. The length of time it will take to implement a decision. (Will it take months, years, decades? Longer projects are inherently more complex because of changes over time.)
  5. Formal and informal rules/laws that place limits on the deliberative process. (For example, legislative and judicial processes are often designed to limit the decision making powers of presidents or prime ministers. This means that leaders must work across systems to develop decisions, which further increases the complexity of decision making.)

Over the course of childhood and adulthood, the complexity of our thinking develops through up to 13 skill levels (0–12). Each new level builds upon the previous level. The figure above shows four adult complexity “zones” — advanced linear thinking (second zone of level 10), early systems thinking (first zone of level 11), advanced systems thinking (second zone of level 11), and early principles thinking (first zone of level 12). In advanced linear thinking, reasoning is often characterized as “black and white.” Individuals performing in this zone cope best with problems that have clear right or wrong answers. It is only once individuals enter early systems thinking that they begin to work effectively with highly complex problems that do not have clear right or wrong answers.
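
Since the figure isn’t reproduced here, the following sketch makes the zone boundaries explicit. The cut points are an assumption on my part—levels spanning 100 points on the Lectical Scale, with each level split into two 50-point zones—chosen to be consistent with the benchmark scores quoted throughout this series.

```python
# Map a Lectical score to one of the four adult complexity zones named above.
# Boundaries are assumed: each level spans 100 points; zones are 50-point halves.
ZONES = [
    (1050, "advanced linear thinking"),   # second zone of level 10
    (1100, "early systems thinking"),     # first zone of level 11
    (1150, "advanced systems thinking"),  # second zone of level 11
    (1200, "early principles thinking"),  # first zone of level 12
]

def zone(score: float) -> str:
    label = "below the adult zones shown"
    for floor, name in ZONES:
        if score >= floor:
            label = name
    return label

print(zone(1054))  # advanced linear thinking (President Trump's mean in this series)
print(zone(1124))  # early systems thinking (the U.S. media sample mean)
print(zone(1163))  # advanced systems thinking (President Obama's mean)
```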

Leadership at the national level requires exceptional skills for managing complexity, including the ability to deal with the most complex problems faced by humanity (Helbing, 2013). Needless to say, a national leader regularly faces issues at or above early principles thinking.

Complexity level and leadership—the evidence

In the workplace, the hiring managers who decide which individuals will be put in leadership roles are likely to choose leaders whose thinking complexity is a good match for their roles. Even if they have never heard the term complexity level, hiring managers generally understand, at least implicitly, that leaders who can work with the complexity inherent in the issues associated with their roles are likely to make better decisions than leaders whose thinking is less complex.

There is a strong relation between the complexity of leadership roles and the complexity level of leaders’ reasoning. In general, more complex thinkers fill more complex roles. The figure below shows how lower and senior level leaders’ complexity scores are distributed in Lectica’s database. Most senior leaders’ complexity scores are in or above advanced systems thinking, while those of lower level leaders are primarily in early systems thinking.

The strong relation between the complexity of leaders’ thinking and the complexity of their roles can also be seen in the recruitment literature. To be clear, complexity is not the only aspect of leadership decision making that affects leaders’ ability to deal effectively with complex issues. However, a large body of research, spanning over 50 years, suggests that the top predictors of workplace leader recruitment success are those that most strongly relate to thinking skills, including complexity level.

The figure below shows the predictive power of several forms of assessment employed in making hiring and promotion decisions. Assessments of mental ability have been shown to have the highest predictive power. In other words, assessments of thinking skills do a better job predicting which candidates will be successful in a given role than other forms of assessment.

The match between the complexity of national leaders’ thinking and the complexity level of the problems faced in their roles is important. While we will not be able to assess the actual complexity level of the thinking of national leaders, we will be able to examine the complexity of their responses to questions posed by prominent journalists. In upcoming articles, we’ll be sharing our findings and discussing their implications.

Coming next…

In the second article in this series, we begin our examination of the complexity of national leaders’ thinking by scoring interview responses from four US Presidents—Bill Clinton, George W. Bush, Barack Obama, and Donald Trump.


References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Dawson, T. L. (2017, October 20). Using technology to advance understanding: The calibration of CLAS, an electronic developmental scoring system. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Dawson, T. L., & Thornton, A. M. A. (2017, October 18). An examination of the relationship between argumentation quality and students’ growth trajectories. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497, 51-59.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. R. Barrick, Murray R. (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283–309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327–330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.


From Piaget to Dawson: The evolution of adult developmental metrics

I've just added a new video about the evolution of adult developmental metrics to YouTube and LecticaLive. It traces the evolutionary history of Lectica's developmental model and metric.

If you are curious about the origins of our work, this video is a great place to start. If you'd like to see the reference list for this video, view it on LecticaLive.

 

 
