About Theo

Founder and Executive Director of Lectica, Inc. and founder of the DiscoTest initiative. Dr. Dawson is an award-winning scholar, researcher, educator, and test developer. She has been studying how people learn and how people think about learning for over two decades. Her dissertation, which explored the way people of different ages conceptualize education, learning, testing, and teaching, introduced a new set of methods for documenting learning sequences. This work, along with her studies in psychometrics, has provided the basis for a new model of assessment—one that focuses on helping teachers identify the learning needs of individual students. Through the DiscoTest initiative, Dawson and her colleagues have shown that it is possible to design standardized educational assessments that not only help teachers identify the learning needs of individual students, but turn the testing experience into a rich learning experience in which students practice their thinking, communication, and evaluation skills. Scholarly articles by Dawson can be found on the articles page of the <a href="https://dts.lectica.org/_about/articles.php">Lectica site</a>.

Growth curves vs. individual growth

Individual growth trajectories often don’t stick to statistically determined expectations.

The illustration above depicts the growth trajectory of a woman named Eleanore. Between the ages of 12 and 68, she completed two different developmental assessments several times. The first was the LRJA, a test of reflective judgment (critical thinking), which she completed on eight different occasions. The second was the LDMA, a test of decision-making skills, which she completed four times between the ages of 42 and 68. As you can see, Eleanore has continued to develop throughout adulthood, with periods of more and less rapid growth.

The graph on which Eleanore’s scores are plotted shows several potential developmental curves (A–H), representing typical developmental trajectories for individuals performing at different levels at age 10. You can tell right away that Eleanore is not behaving as expected. Over time, her scores have landed on two different curves (D & E), and she shows considerable growth in age ranges for which no growth is expected — on either curve.

Eleanore, who was born in 1942, was a bright child who did well in school. By the time she graduated from high school in 1960, she was in the top 15% of her class. After attending two years of community college, she joined the workforce as a legal secretary. At 23 she married a lawyer, and at 25 she gave birth to the first of two children. During the next 15 years, while she raised her children, her scores hovered closer to curve E than curve D. When her youngest entered high school, Eleanore decided it was time to complete her bachelor of science degree, which she did, part time, over several years. During this period she grew more quickly than in the previous 10 years, and her LRJA scores began to cluster around curve D.

Sadly, shortly after completing her degree (at age 43), Eleanore learned that her mother had been diagnosed with senile dementia (what we would now call Alzheimer’s disease). For the next 6 years, she cared for her ailing mother, who died only a few days before Eleanore’s 50th birthday. While she cared for her mother, Eleanore learned a great deal about Alzheimer’s — from both personal experience and the extensive research she did to help ensure the best possible care for her mother. This may have contributed to the growth that occurred during this period. Following her mother’s death, Eleanore decided to build upon her knowledge of Alzheimer’s, spending the next 6 years earning a Ph.D. focused on its origins. At the time of her last assessment, she was a respected Alzheimer’s researcher.

And now I must confess: Eleanore is not a real person. She’s a composite based on 70 years of research in which the growth of thousands of individuals has been measured over periods spanning 8 months to 25 years. Eleanore’s story has been designed to illustrate several phenomena my colleagues and I have observed in these data:

First, although statistics allow us to describe typical developmental trajectories, individual development is usually more or less atypical. Eleanore does not stay on the curve she started out on. In fact, she drops below this curve for a time, then develops beyond it in later adulthood. She also grows during age ranges in which no growth at all is expected. Both life events and formal education clearly influenced her developmental trajectory.

Second, many people develop throughout adulthood — especially if they are involved in rich learning experiences (like formal schooling), or when they are coping productively with life crises (like reflectively supporting an ailing parent).

Third, developmental spurts happen. The figure above shows a (real) growth spurt that occurred between the ages of 46 and 51. This highly motivated individual engaged in a sustained and varied learning adventure during this period — just because he wanted to build his interpersonal and leadership skills.

Fourth, developmental growth can happen late in life, given the right opportunities and circumstances. The (real) woman whose scores are shown here responded to a personal life crisis by embracing it as an opportunity to learn more about herself as person and as a leader.

My colleagues and I find the statistically determined growth curves shown on the figures in this article enormously useful in our research, but it’s important to keep in mind that they’re just averages. Many people can jump from one curve to another given the right learning skills and opportunities. On the other hand, these curves are associated with some constraints. For example, we’ve never seen anyone jump more than one of these curves, no matter how excellent their learning skills or opportunities have been. Unsurprisingly, nurture cannot entirely overcome nature.

Growth is predicted by a number of factors. Nature is a big one. How we personally approach learning is also pretty big — with approaches that feature virtuous cycles of learning taking the lead. And, of course, our growth is influenced by how optimally the environments we live, learn, and work in support learning.


Find out how we put this knowledge to work in leader development and recruitment contexts, with LAP-1 and LAP-2.


Fit-to-role, clarity, & VUCA skills: Strong predictors of senior & executive recruitment success

Mental ability is by far the best predictor of recruitment success — across the board.* During the 20th century, aptitude tests were the mental ability metrics of choice — but this is the 21st century. The workplace has changed. Today, leaders don’t need skills for choosing the correct answer from a list. They need skills for coping with complex issues without simple right and wrong answers. Aptitude tests don’t measure these skills.

Today, success in senior and executive roles is best predicted by (1) the fit between the complexity of leaders’ thinking and the complexity of their roles, (2) the clarity of their thinking in real workplace contexts, and (3) their skills for functioning in VUCA (volatile, uncertain, complex, and ambiguous) conditions.

Fit-to-role is the relation between the complexity level of an individual’s reasoning and the complexity level of a given role. Good fit-to-role increases well-being, engagement, effectiveness, and productivity.

Clarity involves the degree to which an individual’s arguments are coherent and persuasive, how well their arguments are framed, and how well their ideas are connected. Individuals who think more clearly make better decisions and grow more rapidly than individuals who think less clearly.

VUCA skills are required for making good decisions in volatile, uncertain, complex, or ambiguous contexts. They are…

  • perspective coordination—determining which perspectives matter, seeking out a diversity of relevant perspectives, and bringing them together in a way that allows for the emergence of effective solutions.
  • decision-making under complexity — employing a range of decision-making tools and skills to design effective decision-making processes for complex situations.
  • contextual thinking — being predisposed to think contextually, being able to identify the contexts that are most likely to matter in a given situation and determine how these contexts relate to a particular situation.
  • collaboration — understanding the value of collaboration, being equipped with the tools and skills required for collaboration, and being able to determine the level of collaboration that’s appropriate for a particular decision-making context.

Fit-to-role

Getting fit-to-role right increases well-being, engagement, effectiveness, and productivity. Our approach to role fit pairs an assessment of the complexity of an individual’s thinking — when applied to a wicked real-world workplace scenario — with an analysis of the complexity of a particular workplace role.

The Lectical Scores in the figure on the left represent the complexity level scores awarded to eight job candidates, based on their performances on a developmental assessment of leader decision making (LDMA). The fit-to-role score tells us how well the Lectical Score fits the complexity range of a role. Here, the complexity range of the role is 1120–1140, represented by the vertical teal band. The circles represent the Lectical Scores of candidates. The size of these circles represents the range in which the candidate’s true level of ability is likely to fall.

The “sweet spot” for a new hire is generally at the bottom end of the complexity range of a role, in this case, 1120. There are two reasons for this.

  • The sweet spot is where the challenge posed by a new role is “just right” — just difficult enough to keep an employee in flow — what we call the Goldilocks zone. Placing employees in the sweet spot increases employee satisfaction, improves performance, and optimally supports learning and development.
  • An existing team is more likely to embrace candidates who are performing in the sweet spot. Sweet spot candidates are likely to welcome support and mentoring, which makes it easier to integrate them into an existing team than it is to integrate candidates performing at higher levels, who may be viewed as competitors.

In the figure above, teal circles represent candidates whose scores are in or very near the sweet spot — fit to role is excellent. Yellow circles represent individuals demonstrating marginal fit, and red circles represent individuals demonstrating poor fit.

We can use circle color to help us figure out who should advance to the next level in a recruitment process.
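Here’s a minimal sketch of that color-coding logic. The role range (1120–1140) and the candidate names come from the example above; the scores and the cutoff for “marginal” fit are hypothetical, since the article doesn’t publish them.

```python
# Sketch of fit-to-role color coding. The role's complexity range comes from
# the example above; MARGIN and all candidate scores are hypothetical.

ROLE_RANGE = (1120, 1140)  # complexity range of the role (the teal band)
MARGIN = 10                # assumed tolerance for "marginal" (yellow) fit

def fit_to_role(score, role_range=ROLE_RANGE, margin=MARGIN):
    low, high = role_range
    if low <= score <= high:
        return "excellent"  # teal: in or very near the sweet spot
    if low - margin <= score <= high + margin:
        return "marginal"   # yellow
    return "poor"           # red: eliminated in the first cut

# Hypothetical Lectical Scores for the eight candidates discussed below.
candidates = {"Jewel": 1122, "YiYu": 1118, "Alistair": 1125, "Martin": 1131,
              "Celia": 1095, "Amar": 1100, "Chilemba": 1158, "Jae-Eun": 1170}

for name, score in candidates.items():
    print(f"{name}: {score} -> {fit_to_role(score)}")
```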

The first cut

Based on the results shown above, it’s easy to decide who will advance to the next step in this process. Red circles mean, “This person is a poor fit to the complexity demands of this role.” Therefore, candidates with red circles should be eliminated from consideration for this role. Celia, Amar, Chilemba, and Jae-Eun just don’t fit.

However, this does not mean that these candidates should be ignored. Every single one of the eliminated candidates has high or acceptable Clarity and VUCA scores. So, despite the fact that they did not fit this role, each one may be a good fit for a different role in the organization.

It’s also worth noting that Jae-Eun demonstrates a level of skill — across measures — that’s relatively rare. When you identify a candidate with mental skills this good, it’s worth seeing if there is some way your organization can leverage these skills.

The second cut

The first cut left us with four candidates who met basic fit-to-role qualifications: Jewel, YiYu, Alistair, and Martin. The next step is to find out if their Clarity and VUCA scores are good enough for this role.

Below, you can see how we have interpreted the Clarity and VUCA scores for each of the remaining candidates and made recommendations based on these interpretations. Notice that YiYu and Alistair are recommended with reservations. It will be important to take these reservations into account during the next steps in the recruitment process.

What’s next?

Let’s assume that Jewel, YiYu, and Alistair move to the next step in the recruitment process. Once the number of candidates has been winnowed down to this point, it’s a good time to administer personality or culture-fit assessments, conduct team evaluations, view candidate presentations, or conduct interviews. You already know the candidates are equipped with adequate to excellent mental skills and fit-to-role. From here, it’s all about which candidate you think is likely to fit into your team.


As soon as we have it, my colleagues and I publish our reliability and validity evidence in refereed journals and conference presentations, or post it on our web site. We believe in total transparency regarding the validity and reliability of all assessments employed in the workplace.

 


*Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Working paper.


By popular demand—two new self-guided courses from Lectica

Introducing LAP-1 & LAP-2 Light

For some time now, people have been asking us how they can learn at least some of what we teach in our certification courses—but without the homework! Well, we’ve taken the plunge, with two new self-guided courses.


All profits from sales support Lectica’s mission to deliver the world’s best assessments free of charge to K-12 teachers everywhere!


LAP-1 Light

In LAP-1 Light, we’ve brought together the lectures and much of the course material offered in the certification version of the course—Lectical Assessments in Practice for Coaches. You’ll take a deep dive into our learning model and learn how two of our most popular adult assessments—the LDMA (focused on leadership decision making) and the LSUA (focused on leaders’ understanding of themselves in workplace relationships)—are used to support leader development.

This course is perfect for coaches or consultants who are thinking about certifying down the road.

LEARN MORE

LAP-2 Light

In LAP-2 Light, we’re offering all of the lectures and much of the course material from LAP-2—Lectical Assessments in Practice for Recruitment Professionals. You’ll learn about Lectica’s Human Capital Value Chain, conventional recruitment practices, how to evaluate recruitment assessments, and all about Lectica’s recruitment products—including Lectica First (for front-line to mid-level recruitment) and Lectica Suite (for senior recruitment).

This course is perfect for recruitment professionals of all kinds, or for anyone who is toying with the idea of becoming accredited in the use of our recruitment tools.

LEARN MORE

Upgrades

Upgrades to our certification courses are available for both LAP-1 Light and LAP-2 Light!

 


This is a terrible way to learn

Honestly folks, we really, really, really need to get over the memorization model of learning. It’s good for spelling bees, trivia games, Jeopardy, and passing multiple choice tests. But it’s BORING if not torturous! And cramming more and more facts into our brains isn’t going to help most of us thrive in real life — especially in the 21st century.

As an employer, I don’t care how many facts are in your head or how quickly you can memorize new information. I’m looking for talent, applied expertise (not just factual or theoretical knowledge), and the following skills and attributes:

The ability to tell the difference between memorizing and understanding

I won’t delegate responsibility to employees who can’t tell the difference between memorizing and understanding. Employees who can’t make this distinction don’t know when they need to ask questions. Consequently, they repeatedly make decisions that aren’t adequately informed.

I’ve taken to asking potential employees what it feels like when they realize they’ve really understood something. Many applicants, including highly educated applicants, don’t understand the question. It’s not their fault. The problem is an educational system that’s way too focused on memorizing.

The ability to think

It’s essential that every employee in my organization is able to evaluate information, solve problems, participate actively in decision making, and tell the difference between an opinion and a good evidence-based argument.

A desire to listen and the skills for doing it well

We also need employees who want and know how to listen — really listen. In my organization, we don’t make decisions in a vacuum. We seek and incorporate a wide range of stakeholder perspectives. A listening disposition and listening skills are indispensable.

The ability to speak truth (constructively)

I know my organization can’t grow the way I want it to if the people around me are unwilling to share their perspectives or are unable to share them constructively. When I ask someone for an opinion, I want to hear their truth — not what they think I want to hear.

The ability to work effectively with others

This requires respect for other human beings, good interpersonal, collaborative, and conflict resolution skills, the ability to hear and respond positively to productive critique, and buckets of compassion.

Humility

Awareness of the ubiquity of human fallibility, including one’s own, and knowledge about human limitations, including the built-in mental biases that so often lead us astray.

A passion for learning (a.k.a. growth mindset)

I love working with people who are driven to increase their understanding and skills — so driven that they’re willing to feel lost at times, so driven that they’re willing to make mistakes on their way to a solution, so driven that their happiness depends on the availability of new challenges.

The desire to do good in the world

I run a nonprofit. We need employees who are motivated to do good.

Not one of these capabilities can be learned by memorizing. All of them are best learned through reflective practice — preferably 12–16 years of reflective practice (a.k.a. VCoLing) in an educational system that is not obsessed with remembering.

In case you’re thinking that maybe I’m an oddball employer, check out LinkedIn’s 2018 Workplace Learning Report and the 2016 World Economic Forum Future of Jobs Report.


National leaders’ thinking — growth trajectories

While my colleagues and I have been gathering the data we’ll use to analyze the complexity level of British prime ministers’ high-profile interview responses, I’ve been exploring alternative approaches to sharing our findings. As I wrapped up my post on developmental trajectories, it occurred to me that the Lectical® Scores of national leaders could be livened up by plotting them on a graph of growth trajectories. The result is shown above.

The growth trajectory in red (top trajectory) shows what growth would look like for individuals whose thinking would be most likely to grow to the complexity level of many of the issues faced by national leaders. In reality, very few people are on a trajectory like this one, and even when they are, it is usually limited to one area of expertise. This means that in order to cope with the most complex issues, even our most complex thinkers need the best decision-making tools and teams of highly qualified advisors.

The circles with initials in them represent the highest score (with confidence intervals) received by each leader. The data we scored were responses to high profile interviews with leading journalists. To learn more about the research and our analysis, see the articles listed below.

Caveat: It is important to keep in mind that we do not claim that the complexity level measurements we have taken represent the full capability of national leaders (with the possible exception of President Trump). Our research so far corroborates existing evidence that national leaders systematically attempt to simplify their messages, often approximating the complexity level of political stories in prominent media. Because of this, the figure shown here should be interpreted cautiously.




The rate of development

An individual’s rate of development is affected by a wide range of factors. Twin studies suggest that about 50% of the variation in Lectical growth trajectories is likely to be predicted by genetic factors. The remaining variation is explained by environmental factors, including the environment in the womb, the home environment, parenting quality, educational quality & fit, economic status, diet, personal learning habits, and aspects of personality.

Each Lectical Level takes longer to traverse than the previous level. This is because development through each successive level involves constructing increasingly elaborated and abstract knowledge networks. Don’t be fooled by the slow growth, though. A little growth can have an important impact on outcomes. For example, small advances in level 11 can make a big difference in an individual’s capacity to work effectively with complexity and change—at home and in the workplace.

The graphs above show possible learning trajectories, first for the lifespan and second for ages 10–60. Note that the highest age shown on these graphs is 60. This does not mean that individuals cannot develop after the age of 60.

The yellow circle in each graph represents a Lectical Score and the confidence interval around that score. That’s the range in which the “true score” would most likely fall. When interpreting any test score, you should keep the confidence interval in mind.
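As a toy illustration, here is how a 95% confidence interval is typically constructed from a score and its standard error. Both numbers below are invented for illustration, not Lectica’s actual statistics.

```python
score = 1050  # hypothetical Lectical Score
se = 15       # assumed standard error of measurement (illustrative only)

# A 95% confidence interval spans roughly 1.96 standard errors on each side.
low, high = score - 1.96 * se, score + 1.96 * se
print(f'95% CI: {low:.0f}-{high:.0f}')  # -> 1021-1079: where the "true score" likely falls
```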

Within individuals, growth is not tidy

When we measure the development of individuals over short time spans, it does not look smooth. The kind of pattern shown in the following graph is more common. However, we have found that growth appears a bit smoother for adults than for children. We think this is because children, for a variety of reasons, are less likely to do their best work on every testing occasion.

People don’t grow at the same rate in every knowledge area

An individual’s rate of growth depends on the level of their immersion in particular knowledge areas. A physicist may be on one trajectory when it comes to physics and quite a different trajectory when it comes to interpersonal understanding.

(Figure: report card showing jagged growth.)

Factors that affect the rate of development

  • Genetics & socio-economic status.
  • A test-taker’s current developmental trajectory. For example, as time passes, a person whose history places her on the green curve in the first two graphs is less and less likely to jump to the blue curve.
  • The amount of everyday reflective activity (especially VCoLing) the individual typically engages in (less reflective activity → less growth).
  • Participation in deliberate learning activities that include lots of reflective activity (especially VCoLing).
  • Participation in supported learning (coaching, mentoring) after several years away from formal education (can create a growth spurt).

 


National leaders’ thinking: What we’ve learned so far…

In this article, I’ll be providing a summary of results from each group of leaders observed as part of Lectica’s National Leaders’ Study. Each time my colleagues and I complete a round of research for a particular group of national leaders, the results will first be presented in a special article, then summarized here. This article will be written and rewritten over several months, with regular updates. If at any point you want to get a quick sense of what we’ve learned so far, just come back to this article for an overview.

Summary of quantitative results

The following table compares the scores received by the leaders of countries included in the National Leaders’ Study so far. (If you don’t yet know what I mean by complexity level, see the first article in this series.)

| Country | Complexity score range | Score range width | Leader average | Media average | Leader average − media average |
|---|---|---|---|---|---|
| USA | 1054–1163 | 109 | 1116 (1137 without P. Trump) | 1124 | −8 (13 without P. Trump) |
| Australia | 1111–1133 | 22 | 1125 | 1111 | 14 |

Key observations

  1. Lowest score: The average complexity level of President Trump’s interviews was 1054—near the average score received by 12th graders in a good high school.
  2. Highest score: The mean score for President Obama’s first two interviews was 1193. This is well above the average score received by CEOs in Lectica’s database and is in the ideal range for a national leader, who must be able to comprehend and work with issues that have a complexity level of 1200 and above.
  3. Fit-to-role: With the exception of Barack Obama, none of the leaders so far has demonstrated (in their interviews) a level of complexity that is a good match for the complexity level of many of the problems faced in office (1200+).
  4. Third interview scores: The scores of three out of five leaders whose scores at time 1 were above the level of average media scores—Barack Obama, Tony Abbott, and Malcolm Turnbull—dropped closer to media averages in their third interviews. We’re monitoring this potential trend.
  5. Media score comparison: The mean score for sampled U. S. media was 13 points higher than the mean score for Australian media.
  6. Leader score comparison: If we exclude President Trump as an extreme outlier, the average score for U. S. presidents (1137) was 12 points higher than the average score for Australian prime ministers (1125).

Emerging concerns

  1. Difficulty evaluating candidates: In the interest of accessibility, voters are systematically being deprived of the evidence required to evaluate the competence of candidates. High-profile interview responses of national leaders are often the only place to observe anything like the actual thinking of candidates for office, yet it is well known that candidates and leaders are trained to simplify responses to interview questions. Moreover, national leaders’ speeches are written in language that simplifies issues to make them more accessible to the general public, and many candidates have not produced written works that can be relied upon as evidence of current capacity.
  2. Danger of electing incompetent candidates: When all candidates produce responses and read speeches in which issues are systematically simplified, it becomes very difficult to distinguish between different candidates’ level of understanding. This makes it easier to elect candidates that lack the level of understanding and skill required to cope with highly complex national and international issues.

Other articles in this series


National Leaders’ thinking: Australian Prime Ministers

How complex are the interview responses of the last four Australian prime ministers? How does the complexity of their responses compare to the complexity of the U.S. presidents’ responses?

Special thanks to my Australian colleague, Aiden M. A. Thornton, PhD candidate, for his editorial and research assistance.

This is the 4th in a series of articles on the complexity of national leaders’ thinking, as measured with CLAS, a newly validated electronic developmental scoring system. This article will make more sense if you begin with the first article in the series.

Just in case you choose not to read or revisit the first article, here are a few things to keep in mind:

  • I am an educational researcher and the CEO of a nonprofit that specializes in measuring the complexity level of people’s thinking skills and supporting the development of their capacity to work with complexity.
  • The complexity level of leaders’ thinking is one of the strongest predictors of leader advancement and success. See the National Leaders Intro for evidence.
  • Many of the issues faced by national leaders require principles thinking (level 12 on the skill scale/Lectical Scale, illustrated in the figure below). See the National Leaders Intro for the rationale.
  • To accurately measure the complexity level of someone’s thinking (on a given topic), we need examples of their best thinking. In this case, that kind of evidence wasn’t available. As an alternative, my colleagues and I have chosen to examine the complexity level of prime ministers’ responses to interviews with prominent journalists.

Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders’ Series introductory article.)

The data

In this article, we examine the thinking of the four most recent prime ministers of Australia—Julia Gillard, Kevin Rudd, Tony Abbott, and Malcolm Turnbull. For each prime minister, we selected three interviews, based on the following criteria: They

  1. were conducted by prominent journalists representing respected news media;
  2. included questions that requested explanations of the Prime Minister’s perspective; and
  3. were either conducted within the Prime Minister’s first year in office or were the earliest interviews we could locate that met the first two criteria.

As noted in the introductory article of this series, we do not imagine that the responses provided in these interviews necessarily represent competence. It is common knowledge* that prime ministers and other leaders typically attempt to tailor messages to their audiences, so even when responding to interview questions, they may not show off their own best thinking. Media also tailor writing for their audiences, so to get a sense of what a typical complexity level target for top media might be, we used CLAS to score 11 articles from Australian news media on topics similar to those discussed by the four prime ministers in their interviews. We selected these articles at random—literally selecting the first ones that came to hand—from recent issues of the Canberra Times, The Age, the Sydney Morning Herald, and Adelaide Now. Articles from all of these newspapers landed in the lower range of the early systems thinking zone, with a mean score of 1109 (15 points lower than the mean for the U.S. media sample) and a range of 45 points.

Hypothesis

Based on the mean media score, and understanding that politicians generally attempt, like media, to tailor messages for their audience, we hypothesized that prime ministers would aim for a similar range. Since the mean score for the Australian media sample was lower by 15 points than the mean score for the U. S. media sample, we anticipated that the average score received by Australian prime ministers would be a bit lower than the average score received by U. S. presidents.

The results

The table below shows the complexity scores received by the four prime ministers. (Contact us if you would like a copy of the interviews.) Complexity level scores are shown in the same order as the interview listings.

All of the scores received by Australian prime ministers fell well below the complexity level of many of the problems faced by national leaders. Although we cannot assume that the interview responses we scored are representative of these leaders’ best thinking, we can assert that we can see no evidence in these interviews that these prime ministers have the capacity to grasp the full complexity of many of the issues they faced (or are currently facing) in office. Instead, their scores suggest levels of skill that are more appropriate for mid- to upper-level managers in large organizations.

| Prime minister | Interviews by date | Complexity level scores | Mean complexity level | Mean zone |
|---|---|---|---|---|
| Julia Gillard (2010–2013) | Laurie Oakes, Weekend Today, 6/27/2010; Jon Faine, ABC 774, 6/29/2010; Deborah Cameron, ABC Sydney, 7/07/2010 | 1108, 1113, 1113 | 1111 | Early systems thinking |
| Kevin Rudd (2007–2010, 2013) | Kerry O’Brien, ABC AM, 4/24/2008; Lyndal Curtis, ABC AM, 5/30/2008; Jon Faine, ABC 774 Brisbane, 6/06/2008 | 1133, 1138, 1129 | 1133 | Early systems thinking |
| Tony Abbott (2013–2015) | Alison Carabine, ABC Radio National, 12/16/2013; Ray Hadley, 1/29/2014; Chris Uhlman, ABC AM, 9/26/2014 | 1133, 1129, 1117 | 1126 | Early systems thinking |
| Malcolm Turnbull (2015–) | Michael Brissendon, ABC AM, 9/21/2015; Several journalists, 12/1/2015; Steve Austin, ABC Radio Brisbane, 1/17/2017 | 1133, 1138, 1113 | 1128 | Early systems thinking |

Comparison of U.S. and Australian results

There was less variation in the complexity scores of Australian prime ministers than in the complexity scores of U. S. presidents. Mean scores for the U. S. presidents ranged from 1054–1163 (109 points), whereas the range for Australian prime ministers was 1111–1133 (22 points). If we exclude President Trump as an extreme outlier, the mean score for U. S. Presidents was 12 points higher than for Australian prime ministers.

You may notice that the scores of two of the prime ministers who received a score of 1133 on their first interview had dropped by the time of their third interview. This is reminiscent of the pattern we observed for President Obama.

The mean score for all four prime ministers was 14 points higher than the mean for sampled media. Interestingly, if we exclude President Trump as an extreme outlier, the difference between the average score received by U. S. presidents and the mean for sampled U. S. media is almost identical, at 13 points. In other words, almost all of the difference between the mean scores of prime ministers and presidents (excluding President Trump) can be explained by media scores.

| Country | Complexity score range | Score range width | Leader average | Media average | Leader average − media average |
|---|---|---|---|---|---|
| USA | 1054–1163 | 109 | 1116 (1137 without P. Trump) | 1124 | −8 (13 without P. Trump) |
| Australia | 1111–1133 | 22 | 1125 | 1111 | 14 |
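As a quick sanity check, the Australian figures in this table follow directly from the per-leader means reported above; for the U.S. side, only the reported averages are used here, since the individual presidential means appear in an earlier article. A sketch:

```python
# Per-leader mean scores from the table above.
au_means = {"Gillard": 1111, "Rudd": 1133, "Abbott": 1126, "Turnbull": 1128}

scores = au_means.values()
print(max(scores) - min(scores))  # 22: width of the Australian score range
print(sum(scores) / len(scores))  # 1124.5: reported as 1125

print(1137 - 1125)  # 12: non-Trump U.S. leader average minus Australian leader average
print(1125 - 1111)  # 14: Australian leader average minus Australian media average
print(1137 - 1124)  # 13: non-Trump U.S. leader average minus U.S. media average
```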

The sample sizes here are too small to support a statistical analysis, but once we have conducted our analyses of the British and Canadian prime ministers, we will be able to examine these trends statistically—and find out if they look like more than a coincidence.

Discussion

In the first article of this series, I discussed the importance of attempting to “hire” leaders whose complexity level scores are a good match for the complexity level of the issues they face in their roles. I then posed two questions:

  • When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  • How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

We now have a third question to add:

  • What is the relation between the complexity level of national leaders’ interview responses and the complexity level of respected media?

So far, we have learned that when national leaders explain their positions on complex issues, they do not — with the possible exception of President Obama — demonstrate that they are capable of grasping the full complexity of these issues. On average, their explanations do not rise to the mean level demonstrated by executive leaders in Lectica’s database.

We have also learned that when national leaders explained their positions on complex issues to the press, their explanations were 13–14 points higher on the Lectical Scale than the average complexity level of sampled media articles. We will be following this possible trend in upcoming articles about the British and Canadian leaders.

Interestingly, the Lectical Scores of two prime ministers whose average scores were above the media average dropped closer to the media average in their third interviews. We observed the same pattern for President Obama. It’s too soon to declare this to be a trend, but we’ll be watching.

As noted in the article about the thinking of U. S. presidents, the world needs leaders who understand and can work with highly complex issues, and particularly in democracies, we also need leaders whose messages are accessible to the general public. Unfortunately, the drive toward accessibility seems to have led to a situation in which candidates are persuaded to simplify their messages, leaving voters with one less way to evaluate the competence of our future leaders. How are we to differentiate between candidates whose capacity to comprehend complex issues is only as complex as that of a mid-level manager and candidates who have a high capacity to comprehend and work with these issues but feel compelled to simplify their messages? And in a world in which people increasingly seem to believe that one opinion is as good as any other, how do we convince voters of the critical importance of complex thinking and the expertise it represents?


*The speeches of presidents are generally written to be accessible to a middle school audience. The metrics used to determine reading level are not measures of complexity level. They are measures of sentence length, word length, and sometimes the commonness of words. For more on reading level, see: How to interpret reading level scores.


 Other articles in this series


Fit-to-role, well-being, & productivity

How to recruit the brain’s natural motivational cycle—the power of fit-to-role.

People learn and work better when the challenges they face in their roles are just right—when there is good fit-to-role. Improving fit-to-role requires achieving an optimal balance between an individual’s level of skill and role requirements. When employers get this balance right, they increase engagement, happiness (satisfaction), quality of communication, productivity, and even cultural health.


Here’s how it works.

In the workplace, the challenges we’re expected to face should be just big enough to allow for success most of the time, but not so big that frequent failure is inevitable. My colleagues and I call this balance point the Goldilocks zone, because it’s where the level of challenge is just right. Identifying the Goldilocks zone is important for three reasons:

First, and most obviously, it’s not good for business if people make too many mistakes.

Second, if the distance between employees’ levels of understanding and the difficulty of the challenges they face is too great, employees are less likely to understand and learn from their mistakes. This kind of gap can lead to a vicious cycle, in which, instead of improving or staying the same, performance gradually deteriorates.

Third, when a work challenge is just right, we’re more likely to enjoy ourselves—and feel motivated to work even harder. This is because challenges in the Goldilocks zone allow us to succeed just often enough to stimulate our brains to release pleasure hormones called opioids. Opioids give us a sense of satisfaction and pleasure. And they have a second effect: they also trigger the release of dopamine—the striving hormone—which motivates us to reach for the next challenge (so we can experience the satisfaction of success once again).

The dopamine-opioid cycle will repeat indefinitely in a virtuous cycle, but only when enough of our learning challenges are in the zone—not too easy and not too hard. As long as the dopamine-opioid cycle keeps cycling, we feel engaged. Engaged people are happy people—they tend to feel satisfied, competent, and motivated. [1]

People are also happier when they feel they can communicate effectively and build understanding with those around them. When organizations get fit-to-role right for every member of a team, they’re also building a team whose members are more likely to understand one another. This is because the complexity level of role requirements for different team members is likely to be very similar. So, getting fit-to-role right for one team member means building a team in which members are performing within a complexity range that makes it relatively—but not too—easy for members to understand one another. Team members are happiest when they can be confident that—most of the time and with reasonable effort—they will be able to achieve a shared understanding with other members.

A team representing a diversity of perspectives and skills, composed of individuals performing within a complexity range of 10–20 points on the Lectical Scale is likely to function optimally.

Getting fit-to-role right also ensures that line managers are slightly more complex thinkers than their direct reports. People tend to prefer leaders they can look up to, and most of us intuitively look up to people who think a little more complexly than we do. [2] When it comes to line managers, if we’re as skilled as they are, we tend to wonder why they’re leading us. If we’re more skilled than they are, we are likely to feel frustrated. And if they’re way more skilled than we are, we may not understand them fully. In other words, we’re happiest when our line managers challenge us—but not too much. (Sound familiar?)

Most people work better with line managers who perform 15–25 points higher on the Lectical Scale than they do.
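A minimal sketch of these two rules of thumb as code, assuming the 10–20 point team spread and 15–25 point manager gap described above; the scores below are hypothetical.

```python
# Rules of thumb from the text above; all scores below are hypothetical.

def team_spread_ok(scores, max_spread=20):
    """Teams spanning roughly 10-20 Lectical points tend to understand one another."""
    return max(scores) - min(scores) <= max_spread

def manager_gap_ok(manager_score, report_scores, low=15, high=25):
    """Most people work best under a manager 15-25 points above them."""
    return all(low <= manager_score - s <= high for s in report_scores)

reports = [1105, 1110, 1115]          # hypothetical direct reports
print(team_spread_ok(reports))        # True: spread is 10 points
print(manager_gap_ok(1130, reports))  # True: gaps are 25, 20, and 15 points
```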

Unsurprisingly, all this engagement and happiness has an impact on productivity. Individuals work more productively when they’re happily engaged, and teams work more productively when their members communicate well with one another. [3]

The moral of the story

The moral of this story is that employee happiness and organizational effectiveness are driven by the same thing—fit-to-role. We don’t have to compromise one to achieve the other. Quite the contrary. We can’t achieve either without achieving fit-to-role.

Summing up

To sum up, when we get fit-to-role right—in other words, ensure that every employee is in the zone—we support individual engagement & happiness, quality communication in teams, and leadership effectiveness. Together, these outcomes contribute to productivity and cultural health.

Getting fit-to-role right requires top-notch recruitment and people development practices, starting with the ability to measure the complexity of (1) role requirements and (2) people skills.

When my colleagues and I think about the future of recruitment and people development, we envision healthy, effective organizations characterized by engaged, happy, productive, and constantly developing employees & teams. We help organizations achieve this vision by…

  • reducing the cost of recruitment so that best practices can be employed at every level in an organization;
  • improving predictions of fit-to-role;
  • broadening the definition of fit-to-role to encompass the role, the team, and the position of a role in the organizational hierarchy; and
  • promoting the seamless integration of recruitment with employee development strategy and practice.

[1] Csikszentmihalyi, M. (2008). Flow: The psychology of happiness. Harper-Collins.

[2] Oishi, S., Koo, M., & Akimoto, S. (2015). Culture, interpersonal perceptions, and happiness in social interactions. Personality and Social Psychology Bulletin, 34, 307–320.

[3] Oswald, A. J., Proto, E., & Sgroi, D. (2015). Happiness and productivity. Journal of Labor Economics, 33, 789–822.


How to interpret reading level scores

Flesch-Kincaid and other reading-level metrics are sometimes employed to compare the arguments made by politicians in their speeches, interviews, and writings. What are these metrics, and what do they actually tell us about these verbal performances?

Flesch-Kincaid examines sentence length, word length, and syllable count. Texts are considered “harder” when they have longer sentences and use words with more letters, and “easier” when they have shorter sentences and use words with fewer letters. For decades, Flesch-Kincaid and other reading-level metrics have been used in word processors. When you are advised by a grammar checker that the reading level of your article is too high, it’s likely that this warning is based on word and sentence length.
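For reference, this is what the standard Flesch-Kincaid grade-level formula computes from those counts. The formula itself is published; the syllable counter below is a rough vowel-group heuristic of my own, not an official syllabification:

```python
import re

def naive_syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels (a heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Short words and a short sentence yield a very low grade level (about 0.7 here).
print(flesch_kincaid_grade("He was on fire."))
```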

Other reading level indicators, like Lexiles, use the commonness of words as an indicator. Texts are considered to be easier when the words they contain are more common, and more difficult when the words they contain are less common.

Because reading-level metrics are embedded in most grammar checkers, writers are continuously being encouraged to write shorter sentences with fewer, more common words. Writers for news media, advertisers, and politicians, all of whom care deeply about market share, work hard to create texts that meet specific “grade level” requirements. And if we are to judge by analyses of recent political speeches, this has considerably “dumbed down” political messages.

Weaknesses of reading level indicators

Reading level indicators look only at easy-to-measure things like length and frequency. But length and frequency are merely proxies for what these metrics purport to measure—how easy it is to understand the meaning intended by the author.

Let’s start with word length. Words of the same length or number of syllables can have meanings that are more or less difficult to understand. The word information has 4 syllables and 11 letters. The word validity has 4 syllables and 8 letters. Which concept, information or validity, do you think is easier to understand? (Hint: one concept can’t be understood without a pretty rich understanding of the other.)

How about sentence length? These two sentences express the same meaning. “He was on fire.” “He was so angry that he felt as hot as a fire inside.” In this case, the short sentence is more difficult because it requires the reader to understand that it should be read within a context presented in an earlier sentence—”She really knew how to push his buttons.”

Finally, what about commonness? Well, there are many words that are less common but no more difficult to understand than other words. Take “giant” and “enormous.” The word enormous doesn’t necessarily add meaning; it’s just used less often. It’s not harder, just less popular. And some relatively common words are more difficult to understand than less common words. For example, evolution is a common word with a complex meaning that’s quite difficult to understand, and onerous is an uncommon word that’s relatively easy to understand.

I’m not arguing that reducing sentence and word length and using more common words don’t make prose easier to understand, but metrics that use these proxies don’t actually measure understandability—or at least they don’t do it very well.

How reading level indicators relate to complexity level

When my colleagues and I analyze the complexity level of a text, we’re asking ourselves, “At what level does this person understand these concepts?” We’re looking for meaning, not word length or popularity. Level of complexity directly represents level of understanding.

Reading level indicators do correlate with complexity level. Correlations are generally within the range of .40 to .60, depending on the sample and reading level indicator. These are strong enough correlations to suggest that 16% to 36% of what reading-level indicators measure is the same thing we measure. In other words, they are weak measures of meaning.[1] They are stronger measures of factors that impact readability, but are not related directly to meaning—sentence and word length and/or commonness.
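That 16%-to-36% range is simply the square of the correlation coefficient, which estimates the proportion of variance two measures share:

```python
for r in (0.40, 0.60):
    print(f"r = {r:.2f} -> shared variance = {r ** 2:.0%}")  # 16% and 36%
```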

Here’s an example of how all of this plays out in the real world: The New York Times is said to have a grade 7 Flesch-Kincaid reading level, on average. But complexity analyses of its articles yield scores of 1100–1145. In other words, these articles express meanings that we don’t see in assessment responses until college and beyond. This would explain why the New York Times audience tends to be college educated.

We would say that by reducing sentence and word length, New York Times writers avoid making complex ideas harder to understand.

Summing up

Reading level indicators are flawed measures of understanding. They are also dinosaurs. When these tools were developed, we couldn’t do any better. But advances in technology, research methods, and the science of learning have taken us beyond proxies for understanding to direct measures of understanding. The next challenge is figuring out how to ensure that these new tools are used responsibly—for the good of all.
