While my colleagues and I have been gathering the data we’ll use to analyze the complexity level of British prime ministers’ responses in high-profile interviews, I’ve been exploring alternative approaches to sharing our findings. As I wrapped up my post on developmental trajectories, it occurred to me that the Lectical® Scores of national leaders could be livened up by plotting them on a graph of growth trajectories. The result is shown above.
The growth trajectory in red (top trajectory) shows what growth would look like for individuals whose thinking complexity would be most likely to grow to the level of many of the issues faced by national leaders. In reality, very few people are on a trajectory like this one, and even when they are, it is limited to one area of expertise. This means that in order to cope with the most complex issues, even our most complex thinkers need the best decision-making tools and teams of highly qualified advisors.
The circles with initials in them represent the highest score (with confidence intervals) received by each leader. The data we scored were responses to high profile interviews with leading journalists. To learn more about the research and our analysis, see the articles listed below.
Caveat: It is important to keep in mind that we do not claim that the complexity level measurements we have taken represent the full capability of national leaders (with the possible exception of President Trump). Our research so far corroborates existing evidence that national leaders systematically attempt to simplify their messages, often approximating the complexity level of political stories in prominent media. Because of this, the figure shown here should be interpreted cautiously.
How well does the thinking of recent US Presidents stand up to the complexity of issues faced in their role?

Special thanks to my Australian colleague, Aiden M. A. Thornton, PhD Cand., for his editorial and research assistance.
This is the second in a series of articles on the complexity of national leaders’ thinking, as measured with CLAS, a newly validated electronic developmental scoring system. This article will make more sense if you begin with the first article in the series.
Just in case you choose not to read or revisit the first article, here are a few things to keep in mind.
The complexity level of leaders’ thinking is one of the strongest predictors of leader advancement and success.
Many of the issues faced by national leaders require principles thinking (level 12 on the skill scale, illustrated in the figure below).
To accurately measure the complexity level of someone’s thinking (on a given topic), we need examples of their best thinking. In this case, that kind of evidence wasn’t available. As an alternative, my colleagues and I have chosen to examine the complexity level of Presidents’ responses to interviews with prominent journalists.
In this article, we examine the thinking of the four most recent Presidents of the United States — Bill Clinton, George W. Bush, Barack Obama, and Donald Trump. For each president, we selected 3 interviews, based on the following criteria: They
were conducted by prominent journalists representing respected news media;
included questions that requested explanations of the president’s perspective; and
were either conducted within the president’s first year in office or were the earliest interviews we could locate that met the first two criteria.
As noted in the introductory article of this series, we do not imagine that the responses provided in these interviews necessarily represent competence. It is common knowledge* that presidents and other leaders typically attempt to tailor messages for their audiences, so even when responding to interview questions, they may not show off their own best thinking.
Media also tailor writing for their audiences, so to get a sense of what a typical complexity level target for top media might be, we used CLAS to score 11 articles on topics similar to those discussed by the four presidents in their interviews. We selected these articles at random — literally selecting the first ones that came to hand — from recent issues of the New York Times, Guardian, Washington Post, and Wall Street Journal. Articles from all of these newspapers landed in the middle range of the early systems thinking zone, with an average score of 1124.
Based on this information, and understanding that presidents generally attempt to tailor messages for their audience, we hypothesized that presidents would aim for a similar range.
The results were mixed. Only Presidents Clinton and Bush consistently performed in the anticipated range. President Trump stood out by performing well below this range. His scores were all identical — and roughly equivalent to the average for 12th graders in a reasonably good high school. President Obama also missed the mark, but in the opposite direction. In his first interviews, he scored at the top of the advanced systems thinking zone. But he didn’t stay there. By the time of September’s interview, he was responding in the early systems thinking zone. He even mentioned simplifying communication in this interview. Commenting on his messaging around health care, he said, “I’ve tried to keep it digestible… it’s very hard for people to get… their whole arms around it.”
The Table below shows the complexity scores received by our four presidents. (All of the interviews can readily be found in the presidential archives.)
In the first article of this series, I discussed the importance of attempting to “hire” leaders whose complexity level scores are a good match for the complexity level of the issues they face in their roles. I then posed two questions:
When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?
The answer to question 1 is that the average complexity level of presidents’ responses to interview questions varied dramatically. President Trump’s average complexity level score was 1054 — near the average score received by 12th graders in a good high school. President Bush’s average score was 1107 — near the average score received by entry- to mid-level managers in a large corporation. President Clinton’s average score was 1141, near the average score received by upper-level managers in large corporations. President Obama’s average score was 1163 — near the average score of senior leaders in large corporations. (Obama’s highest scores were closer to the average for CEOs in our database.)
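To make the relationship between these averages and the zones discussed in this series concrete, here is a minimal sketch of a score-to-zone lookup. The zone boundaries below are hypothetical — they were chosen only so that the averages reported above land in the zones named in the text, and Lectica’s actual cut points may differ:

```python
# Hypothetical lower bounds for the four adult zones on the Lectical Scale.
# These boundaries are illustrative assumptions, not Lectica's published cut points.
ZONES = [
    (1050, "advanced linear thinking"),
    (1100, "early systems thinking"),
    (1150, "advanced systems thinking"),
    (1200, "early integrative thinking"),
]

def zone_for(score):
    """Return the name of the (hypothetical) zone containing a Lectical score."""
    label = "below the adult zones"
    for lower_bound, name in ZONES:
        if score >= lower_bound:
            label = name
    return label

# Average scores reported above.
averages = {"Trump": 1054, "Bush": 1107, "Clinton": 1141, "Obama": 1163}
for president, avg in averages.items():
    print(president, avg, zone_for(avg))
```

Under these assumed boundaries, Trump’s average falls in advanced linear thinking, Bush’s and Clinton’s in early systems thinking, and Obama’s in advanced systems thinking, matching the zones described in the text.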
With respect to question 2, the complexity level of presidents’ responses did not rise to the complexity level of many of the issues raised in their interviews. These issues ranged from international relations and the economy to health care and global warming. All of these are thorny problems involving multiple interacting and nested systems—early principles and above. Indeed, many of these problems are so complex that they are beyond the capability of even the most complex thinkers to fully grasp. (See my article on the Complexity Gap for more on this issue.) President Obama came closest to demonstrating a level of thinking complexity that would be adequate for coping with problems of this kind. (For more on this, see the third article in this series, If a U. S. President thought like a teenager…)
Obama also demonstrated some of the other qualities required for working well with complexity, such as skills for perspective seeking and perspective coordination, and familiarity with tools for working with complexity—but that’s another story.
In addition to addressing the two questions posed in the first article of this series, we were able to ask if these U. S. presidents seemed to tailor the complexity level of their interview responses for the audiences of the media outlets represented by journalists conducting the interviews.
First, the responses of presidents Bush and Clinton were in the same zone as a set of articles collected from these media outlets. Of course, we can’t be sure the alignment was intentional. There are other plausible explanations, including the possibility that what we witnessed was their best thinking.
In contrast, however, President Trump’s responses were well below the zone of the selected articles, making it difficult to argue that he was tailoring his responses for their audiences. Individuals whose thinking is complex are likely to find thinking at lower levels of complexity simplistic and unsatisfying. Delivering a message that is likely to lead to judgments of this kind does not seem like a rational tactic — especially for a politician.
It seems more plausible that President Trump was demonstrating his best thinking about the issues raised in his interviews. If so, his best would be far below the complexity level of most issues faced in his role. Indeed, individuals performing in the advanced linear thinking zone would not even be aware of the complexity inherent in many of the issues faced daily by national leaders.
President Obama confronted a different challenge. The complexity of thinking evident in his early interviews was very high. Even though, as with Bush and Clinton, it isn’t possible to say we witnessed Obama’s best thinking, we would argue that what we saw of President Obama’s thinking in his first two interviews was a reasonable fit to the complexity of the challenges in his role. However, it appears that Obama soon learned that in order to communicate effectively with citizens, he needed to make his communications more accessible.
In the results reported here, Democrats scored higher than Republicans. We have no reason to believe that conservative thinking is inherently less complex than liberal thinking. In fact, in the past, we have identified highly complex thinking in both conservative and liberal leaders.
We need leaders who can cope with highly complex issues, and particularly in a democracy, we also need leaders we can understand. President Obama showed himself to be a complex thinker, but he struggled with making his communications accessible. President Trump’s message is accessible, but our results suggest that he may not even be aware of the complexity of many issues faced in his role. Is it inevitable that the tension between complexity and accessibility will sometimes lead us to “hire” national leaders who are easy to understand, but lack the ability to work with complexity? And how can we even know if a leader is equipped with the thinking complexity that’s required if candidates routinely simplify communications for their audience? Given our increasingly volatile and complex world, these are questions that cry out for answers.
We don’t have these answers, and we’ve intentionally resisted going deeper into the implications of these findings. Instead, we’re hoping to stimulate discussion around our questions and the implications that arise from the findings presented here. Please feel free to chime in or contact us to further the conversation. And stay tuned. The Australian Prime Ministers are next!
*The speeches of presidents are generally written to be accessible to a middle school audience. The metrics used to determine reading level are not measures of complexity level, but reading level scores are moderately correlated with complexity level.
The best way we know of to accelerate learning is to slow down! It may be counterintuitive, but learning slowly—in ways that foster deep understanding—is the best way to speed up growth! You’ve achieved deep understanding when you’re able to connect new knowledge with your existing knowledge, then put it to work in a variety of real-world contexts.
In an earlier post, I presented evidence that building deep understanding accelerates learning (relative to learning correct answers, rules, definitions, procedures, or vocabulary). In this post, I’m going to explain why.
If you’re a regular reader, you’ll know that my colleagues and I work with a learning model called the “Virtuous Cycle of Learning and +7 skills” (VCoL+7). This model emphasizes the importance of giving learners ample opportunity to build deep understanding through cycles of goal setting, information gathering, application, and reflection. We argue that evidence of deep understanding can be seen in the coherence of people’s arguments—you can’t explain or defend an idea coherently if you don’t understand it—and other evidence of their ability to apply knowledge. When we learn deeply—the VCoL way—we build robust knowledge networks that provide a solid foundation for future learning.
Because poorly understood ideas provide a weak foundation for future learning, my colleagues and I hypothesized that, over time, learners with lower levels of understanding would grow more slowly than learners with higher levels of understanding. We measured understanding by scoring learners’ written arguments for their coherence—how clear and logical they were—on a scale from 1 to 10. We measured their developmental growth with the Lectical Assessment System, a well-validated developmental scoring system. For details about the study, see the full report.
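The study itself used hierarchical regression; as a minimal sketch of the underlying idea — regressing later developmental growth on initial coherence — here is an ordinary least-squares fit. The coherence and growth values below are invented for illustration and are not the study’s data:

```python
import numpy as np

# Invented, illustrative data: time-1 coherence scores (1-10 scale) and
# developmental growth over the following years (Lectical Scale points).
coherence = np.array([5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
growth = np.array([18.0, 21.0, 24.0, 26.0, 30.0, 33.0])

# Ordinary least-squares fit: growth = slope * coherence + intercept.
slope, intercept = np.polyfit(coherence, growth, 1)

def predicted_growth(c):
    """Predicted developmental growth for a learner with time-1 coherence c."""
    return slope * c + intercept

# A positive slope is what the hypothesis predicts: learners whose early
# arguments are more coherent (better understood) grow faster over time.
print("growth points per coherence point:", round(slope, 2))
```

The hypothesis in the paragraph above corresponds to a positive slope: higher initial coherence predicts faster subsequent growth.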
For the figure below, I’ve borrowed the third graph from the “stop trying to speed up learning” post, which showed growth curves for students in three different kinds of schools. The first (faded teal) group represents students in private schools that emphasized VCoL, the second (faded lime) group represents students in private schools with conventional curricula emphasizing correctness, and the third group (faded red) represents students in public inner city schools with conventional curricula emphasizing correctness. I’ve faded the learning curves for students in these schools into the background.
To this figure, I have added three brightly colored growth curves. These are predicted growth curves based on the results of our study on the impact of coherence (which represents level of understanding) on developmental growth. At this point, it’s important to reveal that all of the students included in our study of coherence were students in inner city public schools with a high percentage of students from low income families (the faded red group). Each curve stands for the predicted growth of a hypothetical student from these schools. In the 4th grade, our hypothetical students received time 1 coherence scores of 5.5, 6.5, and 7.5. These values were selected because they were close to the actual time 1 coherence scores for the three groups of students in the background graphic. (Actual average 4th grade scores are shown on the right.) The vertical scale represents developmental level and the horizontal scale represents school grade.
As you can see, the distance between grade 8 scores predicted by the hierarchical regression is a bit less than half of the difference between the actual average scores in the background image. What this means is that in grade 8, almost half of the difference between students in the three types of schools can be explained by depth of understanding (as captured by our measure of coherence).
Both type of instruction and wealth predict learners’ growth trajectories. The results from our study of the impact of coherence on development suggest that if we use forms of instruction that support deep understanding, we can accelerate learning—even for disadvantaged students. These results are consistent with patterns observed in adult learning, in which programs that employ VCoL have been found to accelerate learning relative to programs that emphasize correctness or motivation.
Lectica’s nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.
Recently, I was asked by a colleague for a clear, simple example that would show how DiscoTest items differ from the items on conventional standardized tests. My first thought was that this would be impossible without oversimplifying. My second thought was that it might be okay to oversimplify a bit. So, here goes!
The table below lists four differences between what Lectica measures and what is measured by other standardized assessments.1 The descriptions are simplified and lack nuance, but the distinctions are accurate.
Lectica: level of understanding on a learning scale backed by 100 years of research
Other standardized assessments: number of correct answers

Lectica: the depth of an individual’s understanding (demonstrated in the complexity of arguments and the way the test taker works with knowledge)
Other standardized assessments: the ability to recall facts, or to apply rules, definitions, or procedures (demonstrated by correct answers)

Lectica: paragraph-length written responses
Other standardized assessments: primarily multiple choice or short written answers2

Lectica: explanations, applications, and transfer
Other standardized assessments: right/wrong judgments or right/wrong applications of rules and procedures
I chose a scenario-based example that we’re already using in an assessment of students’ conceptions of the conservation of matter. We borrowed the scenario from a pre-existing multiple choice item.
Sophia balances a pile of stainless steel wire and ordinary steel wire on a scale. After a few days the ordinary wire in the pan on the right starts rusting.
Conventional multiple choice question
What will happen to the pan with the rusting wire?
The pan will move up.
The pan will not move.
The pan will move down.
The pan will first move up and then down.
The pan will first move down and then up.
(Go ahead, give it a try! Which answer would you choose?)
Lectical Assessment question
What will happen to the height of the pan with the rusting wire? Please explain your answer thoroughly.
Here are three examples of responses from 12th graders.
Lillian: The pan will move down because the rusted steel is heavier than the plain steel.
Josh: The pan will move down, because when iron rusts, oxygen atoms get attached to the iron atoms. Oxygen atoms don’t weigh very much, but they weigh a bit, so the rusted iron will “gain weight,” and the scale will go down a bit on that side.
Ariana: The pan will go down at first, but it might go back up later. When iron oxidizes, oxygen from the air combines with the iron to make iron oxide. So, the mass of the wire increases, due to the mass of the oxygen that has bonded with the iron. But iron oxide is non-adherent, so over time the rust will fall off of the wire. If the metal rusts for a long time, some of the rust will become dust and some of that dust will very likely be blown away.
The correct answer to the multiple choice question is, “The pan will move down.”
There is no single correct answer to the Lectical Assessment item. Instead, there are answers that reveal different levels of understanding. Most readers will immediately see that Josh’s answer reveals more understanding than Lillian’s, and that Ariana’s reveals more understanding than Josh’s.
You may also notice that Ariana’s written response would result in her selecting one of the incorrect multiple-choice answers, and that Lillian and Josh are given equal credit for correctness even though their levels of understanding are not equally sophisticated.
Why is all of this important?
It’s not fair! The multiple choice item cheats Ariana of the chance to show off what she knows, and it treats Lillian and Josh as if their level of understanding is identical.
The multiple choice item provides no useful information to students or teachers! The most we can legitimately infer from a correct answer is that the student has learned that when steel rusts, it gets heavier. This correct answer is a fact. The ability to identify a fact does not tell us how it is understood.
Without understanding, knowledge isn’t useful. Facts that are not supported with understanding are useful on Jeopardy, but less so in real life. Learning that does not increase understanding or competence is a tragic waste of students’ time.
Despite clear evidence that correct answers on standardized tests do not measure understanding and are therefore not a good indicator of usable knowledge or competence, we continue to use scores on these tests to make decisions about who will get into which college, which teachers deserve a raise, and which schools should be closed.
We value what we measure. As long as we continue to measure correctness, school curricula will emphasize correctness, and deeper, more useful, forms of learning will remain relatively neglected.
None of these points is particularly controversial. Most educators agree on the importance of understanding and competence. What’s been missing is the ability to measure understanding at scale and in real time. Lectical Assessments are designed to fill this gap.
1Many alternative assessments are designed to measure understanding—at least to some degree—but few of these are standardized or scalable.
2See my examination of a PISA item for an example of a typical written response item from a highly respected standardized test.
I'm frequently asked about benchmarks. My most frequent response is something like: "Setting benchmarks requires more data than we have collected so far," or "Benchmarks are just averages, they don't necessarily apply to particular cases, but people tend to use them like they do." Well, that last excuse will probably always hold true, but now that our database contains more than 43,000 assessments, the first response is a little less true. So, I'm pleased to announce that we've published a benchmark table that shows how educational and workplace role demands relate to the Lectical Scale. We hope you find it useful!
All of our assessments are calibrated to the same learning scale, called the "Lectical Scale". To people who are familiar with how most educational assessments work, this seems pretty weird. In fact, it can sound to some people like we're claiming that we make a bunch of assessments that all measure exactly the same thing. So why bother making more than one?
In fact, we ARE measuring exactly the same thing with all of our assessments, but we're measuring it in different contexts. Or put another way, we're using the same ruler to measure the development of different skills and ideas. The claim we're making is that people's ability to think about all things grows in the same fundamental way.
To understand what we mean by this, it helps to think about how thermometers work. We can use the temperature scale to describe the heat of anything. This is because temperature is a fundamental property. It doesn't change if the context changes. When we say someone's temperature is 102° Fahrenheit, we can say that they are likely to be sick. However, we cannot say what is causing them to be sick unless we make other kinds of measurements or observations.
Similarly, the Lectical Assessment System (our human scoring system) and CLAS (our computer scoring system) measure the complexity of thinking as it shows up in what people write or say. Evidence shows that complexity of thinking is a fundamental property. A Lectical Score tells us how complex a person's written or spoken performance is, so we can say that people who share that score demonstrate the same thinking complexity. But the Lectical Score doesn't tell us exactly what they are thinking. In fact, there are many, many ways in which two people can get the same score on one of our assessments, so in order to say what the score means on a particular test, we need to make other kinds of measurements or observations.
Almost all of today's standardized educational assessments are technologically sophisticated, but Lectical Assessments are both technologically and scientifically sophisticated. We think of our approach as the "rocket science" of educational assessment. And Lectica's mission as a whole can be thought of, in part, as an ambitious and research-intensive engineering project.
Our aim is nothing less than a comprehensive account of human learning that covers the verbal lifespan. You can think of this account as a "taxonomy of learning". At its core is the Lectical Dictionary, a continuously vetted and growing developmental inventory of the English language. We use this dictionary to support our understanding of the development of specific concepts and skills. It's also at the heart of CLAS, our electronic scoring system, and our as-yet-unnamed developmental spell checker. Every Lectical Assessment that's taken helps us increase the accuracy of the Lectical Dictionary, and every Lectical Assessment we create expands its scope.
In this video, I explain how you can use Lectical Assessments to find out (1) if your leaders are up to the complexity demands of their jobs and (2) how Lectical Assessments can help them build the skills they need to close the complexity gap.
Since 2002, my colleagues and I have been documenting the development of people’s conceptions of leadership. We’ve learned a lot about how thinking about leadership develops over time. This article provides a small sampling of what we’ve learned.
I’ll be describing what conceptions of leadership and leadership skills look like in four developmental “zones.” A zone is 1/2 of a Lectical Level (a level on Lectica’s well-validated lifespan developmental scale). Four zones are regularly observed in adulthood. These are illustrated in the figure below:
You can think of what my colleagues and I call Lectical Development as growth in the complexity and integration of people’s neural networks. As illustrated in the above figure, one way this increasing complexity shows up is in people’s ability to work effectively with increasingly broad and layered perspectives. It also appears in people’s reasoning about specific concepts, including conceptions of leadership. The table below provides brief general descriptions of what reasoning about leadership looks like in the four adult zones.
Reasoning about leadership in the four adult zones
good leadership is…

advanced linear thinking: a collection of traits, dispositions, habits, or skills

early systems thinking: a complex set of interrelated traits, dispositions, learned qualities, and skills that are applied in particular contexts

advanced systems thinking: a complex and flexible set of interrelated and constantly developing skills, dispositions, learned qualities, and behaviors

early integrative thinking: the actualization of context-independent, consciously cultivated qualities, dispositions, and skills that have evolved through purposeful and committed engagement and reflective interaction with others
The next table provides examples of some of the ways people think about sharing power, courage, working with emotion, and social skills — in each of the four adult zones. Note how the conceptions at successive levels build upon one another and increase in scope. It’s easy to see why individuals performing at higher levels tend to rise to the top of organizations and institutions—they can see more of the picture.
The development of reasoning about leadership skills
sharing power, courage, working with emotion, and social skills

advanced linear thinking
sharing power: sharing the work load with others or letting other people make some of the decisions
courage: the ability to face, conquer, or conceal fear, admit when you are wrong, stand up for others, believe in yourself, or stand up for what you believe is right
working with emotion: being able to keep staff satisfied and productive, calm down overly emotional staff, or support staff during difficult times
social skills: being able to listen or communicate well, control your emotions, or put yourself in the other person’s shoes

early systems thinking
sharing power: empowering others by giving them opportunities to share responsibility, knowledge, and/or benefits
courage: the ability to function well in the face of fear or other obstacles, or being willing to take reasonable risks or make mistakes in the interest of a “higher” goal
working with emotion: being able to manage your own emotions and to maintain employee morale, motivation, happiness, or sense of well-being
social skills: having the skills required to foster compassionate, open, accepting, or tolerant relationships or interactions

advanced systems thinking
sharing power: sharing responsibility and accountability as a way to leverage the wisdom, expertise, or skills of stakeholders
courage: the ability to maintain and model integrity, purpose, and openness or to continue striving to fulfill one’s vision or purpose—even in the face of obstacles or adversity
working with emotion: having enough insight into human emotion to foster an emotionally healthy culture in which emotional awareness and maturity are valued and rewarded
social skills: being able to foster a culture that supports optimal social relations and the ongoing development of social skills

early integrative thinking
sharing power: strategically distributing power by developing systems and structures that foster continuous learning, collaboration, and collective engagement
courage: the ability to serve a larger principle or vision by strategically embracing risk, uncertainty, and ambiguity—even in the face of internal and external obstacles or resistance
working with emotion: having the ability to work with others to establish systems and structures that support the emergence of, and help sustain, an emotionally healthy culture
social skills: being able to develop adaptive systems that respond to the emergent social dynamics of internal and external relationships
The Lectical Level at which leaders understand leadership affects how they choose to lead, and is a strong predictor of the level of complexity they can work with effectively. Lectical Assessments are designed to measure and foster growth on the Lectical Scale. If you’d like to learn more or have questions, we’d love to hear from you.