Individual growth trajectories often don’t stick to statistically determined expectations.
The illustration above depicts the growth trajectory of a woman named Eleanore. Between the ages of 12 and 68, she completed two different developmental assessments several times. The first assessment was the LRJA, a test of reflective judgment (critical thinking), which she completed on eight different occasions. The second assessment was the LDMA, a test of decision-making skills, which she completed four times between the ages of 42 and 68. As you can see, Eleanore has continued to develop throughout adulthood, with periods of more and less rapid growth.
The graph on which Eleanore’s scores are plotted shows several potential developmental curves (A–H), representing typical developmental trajectories for individuals performing at different levels at age 10. You can tell right away that Eleanore is not behaving as expected. Over time, her scores have landed on two different curves (D & E), and she shows considerable growth in age ranges for which no growth is expected — on either curve.
Eleanore, who was born in 1942, was a bright child who did well in school. By the time she graduated from high school in 1960, she was in the top 15% of her class. After attending two years of community college, she joined the workforce as a legal secretary. At 23 she married a lawyer, and at 25 she gave birth to the first of two children. During the next 15 years, while raising her children, her scores hovered closer to curve E than curve D. When her youngest entered high school, Eleanore decided it was time to complete her bachelor of science degree, which she did, part time, over several years. During this period she grew more quickly than in the previous 10 years, and her LRJA scores began to cluster around curve D.
Sadly, shortly after completing her degree (at age 43), Eleanore learned that her mother had been diagnosed with senile dementia (now known as Alzheimer’s disease). For the next 6 years, she cared for her ailing mother, who died only a few days before Eleanore’s 50th birthday. While she cared for her mother, Eleanore learned a great deal about Alzheimer’s — from both personal experience and the extensive research she did to help ensure the best possible care for her mother. This may have contributed to the growth that occurred during this period. Following her mother’s death, Eleanore decided to build upon her knowledge of Alzheimer’s, spending the next 6 years earning a Ph.D. focused on its origins. At the time of her last assessment, she was a respected Alzheimer’s researcher.
And now I must confess. Eleanore is not a real person. She’s a compilation based on 70 years of research in which the growth of thousands of individuals has been measured over periods spanning 8 months to 25 years. Eleanore’s story has been designed to illustrate several phenomena my colleagues and I have observed in these data:
First, although statistics allow us to describe typical developmental trajectories, individual development is usually more or less atypical. Eleanore does not stay on the curve she started out on. In fact, she drops below this curve for a time, then develops beyond it in later adulthood. She also grew during age ranges in which no growth at all was expected. Both life events and formal education clearly influenced her developmental trajectory.
Second, many people develop throughout adulthood — especially if they are involved in rich learning experiences (like formal schooling), or when they are coping productively with life crises (like reflectively supporting an ailing parent).
Third, developmental spurts happen. The figure above shows a (real) growth spurt that occurred between the ages of 46 and 51. This highly motivated individual engaged in a sustained and varied learning adventure during this period — just because he wanted to build his interpersonal and leadership skills.
Fourth, developmental growth can happen late in life, given the right opportunities and circumstances. The (real) woman whose scores are shown here responded to a personal life crisis by embracing it as an opportunity to learn more about herself as a person and as a leader.
My colleagues and I find the statistically determined growth curves shown on the figures in this article enormously useful in our research, but it’s important to keep in mind that they’re just averages. Many people can jump from one curve to another given the right learning skills and opportunities. On the other hand, these curves are associated with some constraints. For example, we’ve never seen anyone jump more than one of these curves, no matter how excellent their learning skills or opportunities have been. Unsurprisingly, nurture cannot entirely overcome nature.
Growth is predicted by a number of factors. Nature is a big one. How we personally approach learning is also pretty big — with approaches that feature virtuous cycles of learning taking the lead. And, of course, our growth is influenced by how optimally the environments we live, learn, and work in support learning.
Find out how we put this knowledge to work in leader development and recruitment contexts, with LAP-1 and LAP-2.
Mental ability is by far the best predictor of recruitment success — across the board.* During the 20th century, aptitude tests were the mental ability metrics of choice — but this is the 21st century. The workplace has changed. Today, leaders don’t need skills for choosing the correct answer from a list. They need skills for coping with complex issues without simple right and wrong answers. Aptitude tests don’t measure these skills.
Today, success in senior and executive roles is best predicted by (1) the fit between the complexity of leaders’ thinking and the complexity of their roles, (2) the clarity of their thinking in real workplace contexts, and (3) their skills for functioning in VUCA (volatile, uncertain, complex, and ambiguous) conditions.
Clarity — Clarity involves the degree to which an individual’s arguments are coherent and persuasive, how well their arguments are framed, and how well their ideas are connected. Individuals who think more clearly make better decisions and grow more rapidly than individuals who think less clearly.
VUCA skills — VUCA skills are required for making good decisions in volatile, uncertain, complex, or ambiguous contexts. They are…
perspective coordination — determining which perspectives matter, seeking out a diversity of relevant perspectives, and bringing them together in a way that allows for the emergence of effective solutions.
decision-making under complexity — employing a range of decision-making tools and skills to design effective decision-making processes for complex situations.
contextual thinking — being predisposed to think contextually, identifying the contexts that are most likely to matter in a given situation, and determining how those contexts relate to the situation at hand.
collaboration — understanding the value of collaboration, being equipped with the tools and skills required for collaboration, and being able to determine the level of collaboration that’s appropriate for a particular decision-making context.
Getting fit-to-role right increases well-being, engagement, effectiveness, and productivity. Our approach to role fit pairs an assessment of the complexity of an individual’s thinking — when applied to a wicked real-world workplace scenario — with an analysis of the complexity of a particular workplace role.
The Lectical Scores in the figure on the left represent the complexity level scores awarded to eight job candidates, based on their performances on a developmental assessment of leader decision making (LDMA). The fit-to-role score tells us how well the Lectical Score fits the complexity range of a role. Here, the complexity range of the role is 1120–1140, represented by the vertical teal band. The circles represent the Lectical Scores of candidates. The size of these circles represents the range in which the candidate’s true level of ability is likely to fall.
The “sweet spot” for a new hire is generally at the bottom end of the complexity range of a role, in this case, 1120. There are two reasons for this.
The sweet spot is where the challenge posed by a new role is “just right” — just difficult enough to keep an employee in flow — what we call the Goldilocks zone. Placing employees in the sweet spot increases employee satisfaction, improves performance, and optimally supports learning and development.
An existing team is more likely to embrace candidates who are performing in the sweet spot. Sweet spot candidates are likely to welcome support and mentoring, which makes it easier to integrate them into an existing team than it is to integrate candidates performing at higher levels, who may be viewed as competitors.
In the figure above, teal circles represent candidates whose scores are in or very near the sweet spot — fit to role is excellent. Yellow circles represent individuals demonstrating marginal fit, and red circles represent individuals demonstrating poor fit.
We can use circle color to help us figure out who should advance to the next level in a recruitment process.
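As a rough sketch, the color-coding logic described here can be expressed as a simple rule. The role range (1120–1140) comes from the figure; the 20-point margin for marginal (yellow) fit and the candidate scores below are illustrative assumptions, not Lectica’s published cutoffs:

```python
def classify_fit(score, role_min=1120, role_max=1140, margin=20):
    """Classify a candidate's Lectical Score against a role's complexity range.

    The 20-point margin for marginal fit is an illustrative assumption;
    the article does not specify exact cutoffs for the yellow band.
    """
    if role_min <= score <= role_max:
        return "excellent"  # teal: in or very near the sweet spot
    if role_min - margin <= score <= role_max + margin:
        return "marginal"   # yellow: near the role's complexity range
    return "poor"           # red: eliminated in the first cut

# Hypothetical candidate scores (the article does not list exact values)
for name, score in [("Jewel", 1125), ("Martin", 1145), ("Jae-Eun", 1180)]:
    print(name, classify_fit(score))  # Jewel: excellent, Martin: marginal, Jae-Eun: poor
```

Note that a “poor” fit here says nothing about overall ability: a score well above the role’s range (like the hypothetical 1180) is still a poor fit to this particular role.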
The first cut
Based on the results shown above, it’s easy to decide who will advance to the next step in this process. Red circles mean, “This person is a poor fit to the complexity demands of this role.” Therefore, candidates with red circles should be eliminated from consideration for this role. Celia, Amar, Chilemba, and Jae-Eun just don’t fit.
However, this does not mean that these candidates should be ignored. Every one of the eliminated candidates has high or acceptable Clarity and VUCA scores. So, although they do not fit this role, each one may be a good fit for a different role in the organization.
It’s also worth noting that Jae-Eun demonstrates a level of skill — across measures — that’s relatively rare. When you identify a candidate with mental skills this good, it’s worth seeing if there is some way your organization can leverage these skills.
The second cut
The first cut left us with four candidates who met basic fit-to-role qualifications: Jewel, YiYu, Alistair, and Martin. The next step is to find out if their Clarity and VUCA scores are good enough for this role.
Below, you can see how we have interpreted the Clarity and VUCA scores for each of the remaining candidates, and made recommendations based on these interpretations. Notice that YiYu and Alistair are recommended with reservations. It will be important to take these reservations into account during next steps in the recruitment process.
Let’s assume that Jewel, YiYu, and Alistair move to the next step in the recruitment process. Once the number of candidates has been winnowed down to this point, it’s a good time to administer personality or culture-fit assessments, conduct team evaluations, view candidate presentations, or conduct interviews. You already know the candidates are equipped with adequate to excellent mental skills and fit-to-role. From here, it’s all about which candidate you think is likely to fit into your team.
We believe in total transparency regarding the validity and reliability of all assessments employed in the workplace. As soon as we have it, my colleagues and I publish our reliability and validity evidence in refereed journals or conference presentations, or present it on our website.
*Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings (working paper).
For some time now, people have been asking us how they can learn at least some of what we teach in our certification courses—but without the homework! Well, we’ve taken the plunge, with two new self-guided courses.
All profits from sales support Lectica’s mission to deliver the world’s best assessments free of charge to K-12 teachers everywhere!
In LAP-1 Light, we’ve brought together the lectures and much of the course material offered in the certification version of the course—Lectical Assessments in Practice for Coaches. You’ll take a deep dive into our learning model and learn how two of our most popular adult assessments—the LDMA (focused on leadership decision making) and the LSUA (focused on leaders’ understanding of themselves in workplace relationships)—are used to support leader development.
This course is perfect for coaches or consultants who are thinking about certifying down the road.
In LAP-2 Light, we’re offering all of the lectures and much of the course material from LAP-2—Lectical Assessments in Practice for Recruitment Professionals. You’ll learn about Lectica’s Human Capital Value Chain, conventional recruitment practices, how to evaluate recruitment assessments, and all about Lectica’s recruitment products—including Lectica First (for front-line to mid-level recruitment) and Lectica Suite (for senior recruitment).
This course is perfect for recruitment professionals of all kinds, or for anyone who is toying with the idea of becoming accredited in the use of our recruitment tools.
How to recruit the brain’s natural motivational cycle—the power of fit-to-role.
People learn and work better when the challenges they face in their roles are just right—when there is good fit-to-role. Improving fit-to-role requires achieving an optimal balance between an individual’s level of skill and role requirements. When employers get this balance right, they increase engagement, happiness (satisfaction), quality of communication, productivity, and even cultural health.
In the workplace, the challenges we’re expected to face should be just big enough to allow for success most of the time, but not so big that frequent failure is inevitable. My colleagues and I call this balance-point the Goldilocks zone, because it’s where the level of challenge is just right. Identifying the Goldilocks zone is important for three reasons:
First, and most obviously, it’s not good for business if people make too many mistakes.
Second, if the distance between employees’ levels of understanding and the difficulty of the challenges they face is too great, employees are less likely to understand and learn from their mistakes. This kind of gap can lead to a vicious cycle, in which, instead of improving or staying the same, performance gradually deteriorates.
Third, when a work challenge is just right, we’re more likely to enjoy ourselves—and feel motivated to work even harder. This is because challenges in the Goldilocks zone allow us to succeed just often enough to stimulate our brains to release pleasure-producing chemicals called opioids. Opioids give us a sense of satisfaction and pleasure. And they have a second effect: they also trigger the release of dopamine—the striving chemical—which motivates us to reach for the next challenge (so we can experience the satisfaction of success once again).
The dopamine-opioid cycle will repeat indefinitely in a virtuous cycle, but only when enough of our learning challenges are in the zone—not too easy and not too hard. As long as the dopamine-opioid cycle keeps cycling, we feel engaged. Engaged people are happy people—they tend to feel satisfied, competent, and motivated. 
People are also happier when they feel they can communicate effectively and build understanding with those around them. When organizations get fit-to-role right for every member of a team, they’re also building a team whose members are more likely to understand one another. This is because the complexity level of role requirements for different team members is likely to be very similar. So, getting fit-to-role right for one team member means building a team in which members are performing within a complexity range that makes it relatively—but not too—easy for members to understand one another. Team members are happiest when they can be confident that—most of the time and with reasonable effort—they will be able to achieve a shared understanding with other members.
A team representing a diversity of perspectives and skills, composed of individuals performing within a complexity range of 10–20 points on the Lectical Scale is likely to function optimally.
Getting fit-to-role right also ensures that line managers are slightly more complex thinkers than their direct reports. People tend to prefer leaders they can look up to, and most of us intuitively look up to people who think a little more complexly than we do. When it comes to line managers, if we’re as skilled as they are, we tend to wonder why they’re leading us. If we’re more skilled than they are, we’re likely to feel frustrated. And if they’re way more skilled than we are, we may not understand them fully. In other words, we’re happiest when our line managers challenge us—but not too much. (Sound familiar?)
Most people work better with line managers who perform 15–25 points higher on the Lectical Scale than they do.
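Taken together, the two rules of thumb above (a team spread of 10–20 Lectical points, and managers performing 15–25 points above their reports) can be sketched as a quick composition check. The scores below are hypothetical:

```python
def team_spread_ok(scores, max_spread=20):
    """True when team members fall within the ~10-20 point range that makes
    mutual understanding relatively -- but not too -- easy."""
    return max(scores) - min(scores) <= max_spread

def manager_fit_ok(manager_score, report_score, lo=15, hi=25):
    """True when a line manager performs 15-25 Lectical points above a report."""
    return lo <= (manager_score - report_score) <= hi

team = [1120, 1128, 1135]            # hypothetical member scores
print(team_spread_ok(team))          # spread of 15 points -> True
print(manager_fit_ok(1155, 1135))    # manager 20 points higher -> True
print(manager_fit_ok(1135, 1135))    # no gap: reports may wonder why they lead -> False
```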
Unsurprisingly, all this engagement and happiness has an impact on productivity. Individuals work more productively when they’re happily engaged. And teams work more productively when their members communicate well with one another.
The moral of the story
The moral of this story is that employee happiness and organizational effectiveness are driven by the same thing—fit-to-role. We don’t have to compromise one to achieve the other. Quite the contrary. We can’t achieve either without achieving fit-to-role.
To sum up, when we get fit-to-role right—in other words, ensure that every employee is in the zone—we support individual engagement & happiness, quality communication in teams, and leadership effectiveness. Together, these outcomes contribute to productivity and cultural health.
Getting fit-to-role right requires top-notch recruitment and people development practices, starting with the ability to measure the complexity of (1) role requirements and (2) people skills.
When my colleagues and I think about the future of recruitment and people development, we envision healthy, effective organizations characterized by engaged, happy, productive, and constantly developing employees & teams. We help organizations achieve this vision by…
reducing the cost of recruitment so that best practices can be employed at every level in an organization;
improving predictions of fit-to-role;
broadening the definition of fit-to-role to encompass the role, the team, and the position of a role in the organizational hierarchy; and
promoting the seamless integration of recruitment with employee development strategy and practice.
Csikszentmihalyi, M. (2008). Flow: The psychology of happiness. Harper-Collins.
Oishi, S., Koo, M., & Akimoto, S. (2015). Culture, interpersonal perceptions, and happiness in social interactions. Personality and Social Psychology Bulletin, 34, 307–320.
Oswald, A. J., Proto, E., & Sgroi, D. (2015). Happiness and productivity. Journal of Labor Economics, 33, 789–822.
Shortly after the President passed the Montreal Cognitive Assessment, a reader emailed with two questions:
Does this mean that the President has the cognitive capacity required of a national leader?
How does a score on this test relate to the complexity level scores you have been describing in recent posts?
A high score on the Montreal Cognitive Assessment does not mean that the President has the cognitive capacity required of a national leader. This test result simply means there is a high probability that the President is not suffering from mild cognitive impairment. (The test has been shown to detect existing cognitive impairment 88% of the time.) In order to determine whether the President has the mental capacity to understand the complex issues he faces as a national leader, we need to know how complexly he thinks about those issues.
The answer to the second question is that there is little relation between scores on the Montreal Cognitive Assessment and the complexity level of a person’s thinking. A test like the Montreal Cognitive Assessment does not require the kind of thinking a President needs to understand highly complex issues like climate change or the economy. Teenagers can easily pass this test.
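The claim that a passing score implies a “high probability” of no impairment can be made concrete with a rough Bayes calculation. The 88% detection rate (sensitivity) comes from the cited meta-analysis; the specificity and base rate below are illustrative assumptions only:

```python
def p_healthy_given_pass(sensitivity=0.88, specificity=0.90, prevalence=0.15):
    """P(no cognitive impairment | passing score), via Bayes' rule.

    sensitivity (0.88) is from the text; specificity and prevalence are
    hypothetical values chosen only to illustrate the reasoning.
    """
    p_pass_and_healthy = specificity * (1 - prevalence)    # healthy and passes
    p_pass_and_impaired = (1 - sensitivity) * prevalence   # impaired but passes
    return p_pass_and_healthy / (p_pass_and_healthy + p_pass_and_impaired)

print(round(p_healthy_given_pass(), 3))  # ~0.977 under these assumptions
```

Under these assumed numbers, a passing score leaves roughly a 98% probability of no impairment, which is all the test result can tell us.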
Tsoi, K. K., Chan, J. Y., Hirai, H. W., Wong, S. Y., & Kwok, T. C. (2015). Cognitive tests to detect dementia: A systematic review and meta-analysis. JAMA Internal Medicine, 175(9), 1450–1458. doi:10.1001/jamainternmed.2015.2152
How complex are the ideas about climate change expressed in President Trump’s tweets? The answer is, they are even less complex than ideas he has expressed about intelligence, international trade, and immigration—landing squarely in level 10. (See the benchmarks, below, to learn more about what it means to perform in level 10.)
The President’s climate change tweets
It snowed over 4 inches this past weekend in New York City. It is still October. So much for Global Warming. 2:43 PM – Nov 1, 2011
It’s freezing in New York—where the hell is global warming? 2:37 PM – Apr 23, 2013
Record low temperatures and massive amounts of snow. Where the hell is GLOBAL WARMING? 11:23 PM – Feb 14, 2015
In the East, it could be the COLDEST New Year’s Eve on record. Perhaps we could use a little bit of that good old Global Warming…! 7:01 PM – Dec 28, 2017
In all of these tweets, President Trump appears to assume that unusually cold weather is proof that climate change (a.k.a. global warming) is not real. The argument is an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the temperature right now is unusually low, then global warming isn’t happening.” Moreover, in these comments the President relies exclusively on immediate (proximal) evidence: “It’s unusually cold outside.” We see the same use of immediate evidence when climate change believers claim that a warm weather event is proof that climate change is real.
Let’s use some examples of students’ reasoning to get a fix on the complexity level of President Trump’s tweets. Here is a statement from an 11th grade student who took our assessment of environmental stewardship (complexity score = 1025):
“I do think that humans are adding [gases] to the air, causing climate change, because of everything around us. The polar ice caps are melting.”
The argument is an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the polar ice caps are melting, then global warming is real.” There is a difference between this argument and President Trump’s argument, however. The student is describing a trend rather than a single event.
Here is an argument made by an advanced 5th grader (complexity score = 1013):
“I think that fumes, coals, and gasses we use for things such as cars…cause global warming. I think this because all the heat and smoke is making the years warmer and warmer.”
This argument is also an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the years are getting warmer and warmer, then global warming is real.” Again, the difference between this argument and President Trump’s argument is that the student is describing a trend rather than a single event.
I offer one more example, this time of a 12th grade student making a somewhat more complex argument (complexity score = 1035).
“The temperature has increased over the years and studies show that the ice is melting in the north and south pole, so, yes humans are causing climate change.”
This argument is also an example of level 10, linear causal logic that can be represented as an “if, then” statement: “If the temperature has increased and studies show that the ice at the north and south poles is melting, then humans are causing climate change.” But in this case, the student has mentioned two trends (warming and melting) and explicitly uses scientific evidence to support her conclusion.
Based on these comparisons, it seems clear that President Trump’s Tweets about climate change represent reasoning at the lower end of level 10.
Reasoning in level 11
Individuals performing in level 11 recognize that climate is an enormously complex phenomenon that involves many interacting variables. They understand that any single event or trend may be part of the bigger story, but is not, on its own, evidence for or against climate change.
It concerns me greatly that someone who does not demonstrate any understanding of the complexity of climate is in a position to make major decisions related to climate change.
Benchmarks for complexity scores
Most high school graduates perform somewhere in the middle of level 10.
The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
The average complexity score (reported in our National Leaders Study) for the three U.S. presidents who preceded President Trump was 1137.
The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)
Special thanks to my Australian colleague, Aiden Thornton, for his editorial and research assistance.
This is the first in a series of articles on the complexity of national leaders’ thinking. These articles will report results from research conducted with CLAS, our newly validated electronic developmental scoring system. CLAS will be used to score these leaders’ responses to questions posed by prominent journalists.
In this first article, I’ll be providing some of the context for this project, including information about how my colleagues and I think about complexity and its role in leadership. I’ve embedded lots of links to additional material for readers who have questions about our 100+ year-old research tradition, the assessments created by Lectica (the nonprofit that owns me), and other research we’ve conducted with these assessments.
Context and research questions
Lectica creates diagnostic assessments for learning that support the development of mental skills required for working with complexity. We make these learning tools for both adults and children. Our K-12 initiative—the DiscoTest Initiative—is dedicated to bringing these tools to individual K-12 teachers everywhere, free of charge. Our adult assessments are used by organizations in recruitment and training, and by colleges and universities in admissions and program evaluation.
All Lectical Assessments measure the complexity level (aka, level of vertical development) of people’s thinking in particular knowledge areas. A complexity level score on a Lectical Assessment tells us the highest level of complexity—in a problem, issue, or task—an individual is likely to be able to work with effectively.
On several occasions over the last 20 years, my colleagues and I have been asked to evaluate the complexity of national leaders’ reasoning skills. Our response has been, “We will, but only when we can score electronically—without the risk of human bias.” That time has come. Now that our electronic developmental scoring system, CLAS, has demonstrated a level of reliability and precision that is acceptable for this purpose, we’re ready to take a look.
Evaluating the complexity of national leaders’ thinking is a challenging task for several reasons. First, it’s virtually impossible to find examples of many of these leaders’ “best work.” Their speeches are generally written for them, and speech writers usually try to keep the complexity level of these speeches low, aiming for a reading level in the 7th to 9th grade range. (Reading level is not the same thing as complexity level, but like most tests of capability, it correlates moderately with complexity level.) Second, even when national leaders respond to unscripted questions from journalists, they work hard to use language that is accessible to a wide audience. And finally, it’s difficult to identify a level playing field—one in which all leaders have the same opportunity to demonstrate the complexity of their thinking.
Given these obstacles, there’s no point in attempting to evaluate the actual thinking capabilities of national leaders. In other words, we won’t be claiming that the scores awarded by CLAS represent the true complexity level of leaders’ thinking. Instead, we will address the following questions:
When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?
Thinking complexity and leader success
At this point, you may be wondering, “What is thinking complexity and why is it important?” A comprehensive response to this question isn’t possible in a short article like this one, but I can provide a basic description of complexity as we see it at Lectica, and provide some examples that highlight its importance.
All issues faced by leaders are associated with a certain amount of built-in complexity. For example:
The sheer number of factors that must be taken into account.
Short and long-term implications/repercussions. (Will a quick fix cause problems downstream, such as global unrest or catastrophic weather?)
The number and diversity of stakeholders/interest groups. (What is the best way to balance the needs of individuals, families, businesses, communities, states, nations, and the world?)
The length of time it will take to implement a decision. (Will it take months, years, decades? Longer projects are inherently more complex because of changes over time.)
Formal and informal rules/laws that place limits on the deliberative process. (For example, legislative and judicial processes are often designed to limit the decision making powers of presidents or prime ministers. This means that leaders must work across systems to develop decisions, which further increases the complexity of decision making.)
Over the course of childhood and adulthood, the complexity of our thinking develops through up to 13 skill levels (0–12). Each new level builds upon the previous level. The figure above shows four adult complexity “zones” — advanced linear thinking (second zone of level 10), early systems thinking (first zone of level 11), advanced systems thinking (second zone of level 11), and early principles thinking (first zone of level 12). In advanced linear thinking, reasoning is often characterized as “black and white.” Individuals performing in this zone cope best with problems that have clear right or wrong answers. It is only once individuals enter early systems thinking that they begin to work effectively with highly complex problems that do not have clear right or wrong answers.
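The benchmarks published elsewhere in this series suggest that each level spans 100 points on the Lectical Scale (level 10 ≈ 1000–1099), with two 50-point zones per level. On that inference — mine, not a published cutoff table — the four adult zones can be sketched as a lookup:

```python
def adult_zone(score):
    """Map a Lectical score to the four adult complexity zones.

    Assumes each level spans 100 points (level 10 ~= 1000-1099) and each
    zone 50 points -- an inference from benchmarks, not published cutoffs.
    """
    if 1050 <= score < 1100:
        return "advanced linear thinking (second zone of level 10)"
    if 1100 <= score < 1150:
        return "early systems thinking (first zone of level 11)"
    if 1150 <= score < 1200:
        return "advanced systems thinking (second zone of level 11)"
    if 1200 <= score < 1250:
        return "early principles thinking (first zone of level 12)"
    return "outside the four adult zones shown"

print(adult_zone(1137))  # 1137 falls in early systems thinking
```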
Leadership at the national level requires exceptional skills for managing complexity, including the ability to deal with the most complex problems faced by humanity (Helbing, 2013). Needless to say, national leaders regularly face issues at or above the complexity of early principles thinking.
Complexity level and leadership—the evidence
In the workplace, the hiring managers who decide which individuals will be put in leadership roles are likely to choose leaders whose thinking complexity is a good match for their roles. Even if they have never heard the term complexity level, hiring managers generally understand, at least implicitly, that leaders who can work with the complexity inherent in the issues associated with their roles are likely to make better decisions than leaders whose thinking is less complex.
There is a strong relation between the complexity of leadership roles and the complexity level of leaders’ reasoning: in general, more complex thinkers fill more complex roles. The figure below shows how the complexity scores of lower-level and senior leaders are distributed in Lectica’s database. The scores of most senior leaders fall in or above advanced systems thinking, while those of lower-level leaders fall primarily in early systems thinking.
The strong relation between the complexity of leaders’ thinking and the complexity of their roles can also be seen in the recruitment literature. To be clear, complexity is not the only aspect of leadership decision making that affects leaders’ ability to deal effectively with complex issues. However, a large body of research, spanning over 50 years, suggests that the top predictors of workplace leader recruitment success are those that most strongly relate to thinking skills, including complexity level.
The figure below shows the predictive power of several forms of assessment employed in making hiring and promotion decisions. Assessments of mental ability have been shown to have the highest predictive power. In other words, assessments of thinking skills do a better job predicting which candidates will be successful in a given role than other forms of assessment.
The match between the complexity of national leaders’ thinking and the complexity level of the problems faced in their roles is important. While we will not be able to assess the actual complexity level of the thinking of national leaders, we will be able to examine the complexity of their responses to questions posed by prominent journalists. In upcoming articles, we’ll be sharing our findings and discussing their implications.
References
Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.
Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The predictive validity of assessment centers in German-speaking regions. Journal of Personnel Psychology, 10(2), 61-69.
Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.
Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.
Dawson, T. L. (2017, October 20). Using technology to advance understanding: The calibration of CLAS, an electronic developmental scoring system. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.
Dawson, T. L., & Thornton, A. M. A. (2017, October 18). An examination of the relationship between argumentation quality and students’ growth trajectories. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.
Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497, 51-59.
Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.
Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).
McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.
McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.
Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.
Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.
Lectical Scale (our developmental scale). The collaboration continuum has emerged from this research.
Many people seem to think of decision making as either top-down or collaborative, and tend to prefer one over the other. But several thousand decision-making leaders have taught us that this is a false dichotomy. We’ve learned two things. First, there is no clear-cut division between autocratic and collaborative decision making—it’s a continuum. And second, both more autocratic and more collaborative decision making processes have legitimate applications.
As it applies to decision making, the collaboration continuum is a scale that runs from fully autocratic to consensus-based. We find it helpful to divide the continuum into 7 relatively distinct levels, as shown below:
1. Basis for decision: personal knowledge or rules, with no consideration of other perspectives.
Best for: everyday operational decisions where there are clear rules and no apparent conflicts.
Trade-offs: quick and efficient.
2. Basis for decision: personal knowledge, with some consideration of others' perspectives (no perspective seeking).
Best for: operational decisions in which conflicts are already well-understood and trust is high.
Trade-offs: quick and efficient, but spends trust, so should be used with care.
3. Basis for decision: personal knowledge, with perspective seeking to help people feel heard.
Best for: operational decisions in which the perspectives of well-known stakeholders are in conflict and trust needs reinforcement.
Trade-offs: time consuming, but can build trust if not abused.
4. Basis for decision: personal knowledge, with perspective seeking to inform a decision.
Best for: operational or policy decisions in which the perspectives of stakeholders are required to formulate a decision.
Trade-offs: time consuming, but improves decisions and builds engagement.
5. Basis for decision: leverages stakeholder perspectives to develop a decision that gives everyone something they want.
Best for: making "deals" to which all stakeholders must agree.
Trade-offs: time consuming, but necessary in deal-making situations.
6. Basis for decision: leverages stakeholder perspectives to develop a decision that everyone can consent to (even though there may be reservations).
Best for: policy decisions in which the perspectives of stakeholders are required to formulate a decision.
Trade-offs: can be efficient, but requires excellent facilitation skills and training for all parties.
7. Basis for decision: leverages stakeholder perspectives to develop a decision that everyone can agree with.
Best for: decisions in which complete agreement is required to formulate a decision.
Trade-offs: requires strong relationships; useful primarily when decision-makers are equal partners.
As the table above shows, all 7 forms of decision making on the collaboration continuum have legitimate applications, and all can be learned at any adult developmental level. However, the most effective application of each successive form of decision making requires more developed skills. Inclusive, consent, and consensus decision making are particularly demanding, and generally require formal training for all participating parties.
The most developmentally advanced and accomplished leaders who have taken our assessments deftly employ all 7 forms of decision making, basing the form chosen for a particular situation on factors like timeline, decision purpose, and stakeholder characteristics.
(The feedback in our LDMA [leadership decision making] assessment report provides learning suggestions for building collaboration continuum skills. And our Certified Consultants can offer specific practices, tailored for your learning needs, that support the development of these skills.)
This morning, while doing some research on leader development, I googled “vertical leadership” and “coaching.” The search returned 466,000 results. Wow. Looks like vertical development is hot in the coaching world!
Two hours later, after scanning dozens of web sites, I was left with the following impression:
Vertical development occurs through profound, disruptive, transformative insights that alter how people see themselves, improve their relationships, increase happiness, and help them cope better with complex challenges. The task of the coach is to set people up for these experiences. Evidence of success is offered through personal stories of transformation.
But decades of developmental research contradict this picture. This body of evidence shows that the kind of transformative experience promised on these web sites is uncommon, and when it does occur, it rarely produces a fairytale ending. In fact, profound disruptive insights can easily have negative consequences, and most experiences that people describe as transformational are really just momentary insights. They may feel profound in the moment, but they don’t usher in any measurable change at all, much less transformative change.
"The good news is, you don’t have to work on transforming yourself to become a better leader."
The fact is, insight is fairly easy, but growth is slow, and change is hard. Big change is really, really hard. And some things, like many dispositions and personality traits, are virtually impossible to change. This isn’t an opinion based on personal experience; it’s a conclusion based on evidence from hundreds of longitudinal developmental studies conducted over the last 70 years. (Check out our articles page for some of this evidence.)
The good news is, you don’t have to work on transforming yourself to become a better leader. All you need to do is engage in daily practices that incrementally, through a learning cycle called VCoL, help you build the skills and habits of a good leader. Over the long term, this will change you, because it will alter the quality of your interactions with others, and that will change your mind—profoundly.