How to interpret reading level scores

Flesch-Kincaid and other reading level metrics are sometimes used to compare the arguments politicians make in their speeches, interviews, and writings. What are these metrics, and what do they actually tell us about these verbal performances?

Flesch-Kincaid examines sentence length, word length, and syllable count. Texts are considered “harder” when they have longer sentences and longer words, and “easier” when they have shorter sentences and shorter words. For decades, Flesch-Kincaid and other reading level metrics have been built into word processors. When a grammar checker advises you that the reading level of your article is too high, that warning is most likely based on word and sentence length.
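For readers who want to see the mechanics, here is a minimal sketch of the published Flesch-Kincaid grade-level formula in Python. The naive syllable counter is my own approximation (real implementations use pronunciation dictionaries), so scores will differ slightly from what a commercial grammar checker reports.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    # Real implementations use pronunciation dictionaries, so this is approximate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Published Flesch-Kincaid grade-level formula:
    # 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

print(round(flesch_kincaid_grade(
    "He was so angry that he felt as hot as a fire inside."), 1))  # roughly grade 5
```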

Other reading level indicators, like Lexiles, use the commonness of words as an indicator. Texts are considered to be easier when the words they contain are more common, and more difficult when the words they contain are less common.

Because reading-level metrics are embedded in most grammar checkers, writers are continuously being encouraged to write shorter sentences with fewer, more common words. Writers for news media, advertisers, and politicians, all of whom care deeply about market share, work hard to create texts that meet specific “grade level” requirements. And if we are to judge by analyses of recent political speeches, this has considerably “dumbed down” political messages.

Weaknesses of reading level indicators

Reading level indicators look only at easy-to-measure things like length and frequency. But length and frequency are merely proxies for what these metrics purport to measure—how easy it is to understand the meaning intended by the author.

Let’s start with word length. Words of the same length or number of syllables can have meanings that are more or less difficult to understand. The word information has 4 syllables and 11 letters. The word validity has 4 syllables and 8 letters. Which concept, information or validity, do you think is easier to understand? (Hint: one concept can’t be understood without a pretty rich understanding of the other.)

How about sentence length? These two sentences express the same meaning. “He was on fire.” “He was so angry that he felt as hot as a fire inside.” In this case, the short sentence is more difficult because it requires the reader to understand that it should be read within a context presented in an earlier sentence—”She really knew how to push his buttons.”

Finally, what about commonness? Well, there are many words that are less common but no more difficult to understand than other words. Take “giant” and “enormous.” The word enormous doesn’t necessarily add meaning; it’s just used less often. It’s not harder, just less popular. And some relatively common words are more difficult to understand than less common words. For example, evolution is a common word with a complex meaning that’s quite difficult to understand, and onerous is an uncommon word that’s relatively easy to understand.

I’m not arguing that shorter sentences, shorter words, and more common words fail to make prose easier to understand. My point is that metrics built on these proxies don’t actually measure understandability—or at least they don’t do it very well.

How reading level indicators relate to complexity level

When my colleagues and I analyze the complexity level of a text, we’re asking ourselves, “At what level does this person understand these concepts?” We’re looking for meaning, not word length or popularity. Level of complexity directly represents level of understanding.

Reading level indicators do correlate with complexity level. Correlations generally fall in the range of .40 to .60, depending on the sample and the reading level indicator. Squaring those correlations tells us that roughly 16% to 36% of what reading-level indicators measure is the same thing we measure. In other words, they are weak measures of meaning.[1] They are stronger measures of factors that affect readability but are not directly related to meaning—sentence and word length and/or commonness.
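For the arithmetic behind those percentages: the shared variance between two measures is the square of their correlation.

```latex
r = 0.40 \;\Rightarrow\; r^2 = 0.16 \;(16\%) \qquad\qquad r = 0.60 \;\Rightarrow\; r^2 = 0.36 \;(36\%)
```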

Here’s an example of how all of this plays out in the real world: The New York Times is said to have a grade 7 Flesch-Kincaid reading level, on average. But complexity analyses of their articles yield scores of 1100–1145. In other words, these articles express meanings that we don’t see in assessment responses until college and beyond. This would explain why the New York Times audience tends to be college educated.

We would say that by reducing sentence and word length, New York Times writers avoid making complex ideas harder to understand.

Summing up

Reading level indicators are flawed measures of understanding. They are also dinosaurs. When these tools were developed, we couldn’t do any better. But advances in technology, research methods, and the science of learning have taken us beyond proxies for understanding to direct measures of understanding. The next challenge is figuring out how to ensure that these new tools are used responsibly—for the good of all.


President Trump on climate change

How complex are the ideas about climate change expressed in President Trump’s tweets? The answer is, they are even less complex than ideas he has expressed about intelligence, international trade, and immigration—landing squarely in level 10. (See the benchmarks, below, to learn more about what it means to perform in level 10.)

The President’s climate change tweets

It snowed over 4 inches this past weekend in New York City. It is still October. So much for Global Warming.
2:43 PM – Nov 1, 2011

 

It’s freezing in New York—where the hell is global warming?
2:37 PM – Apr 23, 2013

 

Record low temperatures and massive amounts of snow. Where the hell is GLOBAL WARMING?
11:23 PM – Feb 14, 2015

 

In the East, it could be the COLDEST New Year’s Eve on record. Perhaps we could use a little bit of that good old Global Warming…!
7:01 PM – Dec 28, 2017

Analysis

In all of these tweets President Trump appears to assume that unusually cold weather is proof that climate change (a.k.a., global warming) is not real. The argument is an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the temperature right now is unusually low, then global warming isn’t happening.” Moreover, in these comments the President relies exclusively on immediate (proximal) evidence: “It’s unusually cold outside.” We see the same use of immediate evidence when climate change believers claim that a warm weather event is proof that climate change is real.

Let’s use some examples of students’ reasoning to get a fix on the complexity level of President Trump’s tweets. Here is a statement from an 11th grade student who took our assessment of environmental stewardship (complexity score = 1025):

“I do think that humans are adding [gases] to the air, causing climate change, because of everything around us. The polar ice caps are melting.”

The argument is an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the polar ice caps are melting, then global warming is real.” There is a difference between this argument and President Trump’s argument, however. The student is describing a trend rather than a single event.

Here is an argument made by an advanced 5th grader (complexity score = 1013):

“I think that fumes, coals, and gasses we use for things such as cars…cause global warming. I think this because all the heat and smoke is making the years warmer and warmer.”

This argument is also an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the years are getting warmer and warmer, then global warming is real.” Again, the difference between this argument and President Trump’s argument is that the student is describing a trend rather than a single event.

I offer one more example, this time of a 12th grade student making a somewhat more complex argument (complexity score = 1035).

“Humans have caused a lot of green house gasses…and these have caused global warming. The temperature has increased over the years and studies show that the ice is melting in the north and south pole, so, yes humans are causing climate change.”

This argument is also an example of level 10, linear causal logic that can be represented as an “if, then” statement: “If the temperature has increased and studies show that the ice at the north and south poles is melting, then humans are causing climate change.” In this case, however, the student has mentioned two trends (warming and melting) and explicitly uses scientific evidence to support her conclusion.

Based on these comparisons, it seems clear that President Trump’s Tweets about climate change represent reasoning at the lower end of level 10.


Reasoning in level 11

Individuals performing in level 11 recognize that climate is an enormously complex phenomenon that involves many interacting variables. They understand that any single event or trend may be part of the bigger story, but is not, on its own, evidence for or against climate change.

Summing up

It concerns me greatly that someone who does not demonstrate any understanding of the complexity of climate is in a position to make major decisions related to climate change.


Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)

 


President Trump on immigration

How complex are the ideas about immigration expressed in President Trump’s recent comments to congress?

On January 9th, 2018, President Trump spoke to members of Congress about immigration reform. In his comments, the President stressed the need for bipartisan immigration reform, and laid out three goals.

  1. secure our border with Mexico
  2. end chain migration
  3. close the visa lottery program

I have analyzed President Trump’s comments in detail, looking at each goal in turn. But first, his full comments were submitted to CLAS (an electronic developmental assessment system) for an analysis of their complexity level. The CLAS score was 1046. This score is in what we call level 10, and is a few points lower than the average score of 1053 awarded to President Trump’s arguments in our earlier research.


Here are some benchmarks for complexity scores:

  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050-1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150-1180.
  • The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
  • The difference between 1046 and 1137 represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)

Border security

President Trump’s first goal was to increase border security.

“Drugs are pouring into our country at a record pace and a lot of people are coming in that we can’t have… we have tremendous numbers of people and drugs pouring into our country. So, in order to secure it, we need a wall.  We…have to close enforcement loopholes. Give immigration officers — and these are tremendous people, the border security agents, the ICE agents — we have to give them the equipment they need, we have to close loopholes, and this really does include a very strong amount of different things for border security.”

This is a good example of a level 10, if-then, linear argument. The gist of this argument is, “If we want to keep drugs and people we don’t want from coming across the border, then we need to build a wall and give border agents the equipment and other things they need to protect the border.”

As is also typical of level 10 arguments, this argument offers immediate concrete causes and solutions. The cause of our immigration problems is that bad people are getting into our country. The physical act of keeping people out of the country is a solution to this problem.

Individuals performing in level 11 would not be satisfied with this line of reasoning. They would want to consider underlying or root causes such as poverty, political upheaval, or trade imbalances—and would be likely to try to formulate solutions that addressed these more systemic causes.

Side note: It’s not clear exactly what President Trump means by loopholes. In the past, he has used this term to mean “a law that lets people do things that I don’t think they should be allowed to do.” The dictionary meaning of the term would be more like, “a law that unintentionally allows people to do things it was meant to keep them from doing.”

Chain migration

President Trump’s second goal was to end chain migration. According to Wikipedia, chain migration (a.k.a., family reunification) is a social phenomenon in which immigrants from a particular family or town are followed by others from that family or town. In other words, family members and friends often join friends and loved ones who have immigrated to a new country. Like many U. S. citizens, I’m a product of chain migration. The first of my relatives to arrive in this country, in the 17th century, later helped other relatives to immigrate.

President Trump wants to end chain migration, because…

“Chain migration is bringing in many, many people with one, and often it doesn’t work out very well.  Those many people are not doing us right.”

I believe that what the President is saying here is that chain migration is when one person immigrates to a new country and lots of other people known to (or related to?) that person are allowed to immigrate too. He is concerned that the people who follow the first immigrant aren’t behaving properly.

To support this claim, President Trump provides an example of the harm caused by chain migration.

“…we have a recent case along the West Side Highway, having to do with chain migration, where a man ran over — killed eight people and many people injured badly.  Loss of arms, loss of legs.  Horrible thing happened, and then you look at the chain and all of the people that came in because of him.  Terrible situation.”

The perpetrator of the attack Trump appears to be referring to—Sayfullo Saipov—was a Diversity Visa immigrant. Among other things, this means he was not sponsored, so he cannot be a chain immigrant. On November 21, 2017, President Trump claimed that Saipov had been listed as the primary contact of 23 people who attempted to immigrate following his arrival in 2010, suggesting that Saipov was the first in a chain of immigrants. According to Buzzfeed, federal authorities have been unable to confirm this claim.

Like the border security example, Trump’s argument about chain migration is a good example of a level 10, if-then, linear argument. Here, the gist of his argument is, “If we don’t stop chain migration, then bad people like Sayfullo Saipov will come into the country and do horrible things to us.” (I’m intentionally ignoring President Trump’s mistaken assertion that Saipov was a chain immigrant.)

Individuals performing in level 11 would not regard a single example of violent behavior as adequate evidence that chain migration is a bad thing. Before deciding that eliminating chain migration is wise, they would want to know, for example, whether or not chain immigrants are more likely to behave violently (or become terrorists) than natural-born citizens.

The visa lottery (Diversity Visa Program)

The visa lottery was created as part of the Immigration Act of 1990 and signed into law by President George H. W. Bush. Application for this program is free; the only way to apply is to enter your data into a form on the State Department’s website. Individuals who win the lottery must undergo background checks and vetting before being admitted into the United States. (If you are interested in learning more, the Wikipedia article on this program is comprehensive and well-documented.)

President Trump wants to cancel the lottery program:

“…countries come in and they put names in a hopper.  They’re not giving you their best names; common sense means they’re not giving you their best names.  They’re giving you people that they don’t want.  And then we take them out of the lottery.  And when they do it by hand — where they put the hand in a bowl — they’re probably — what’s in their hand are the worst of the worst.”

Here, President Trump seems to misunderstand the nature of the visa lottery program. He claims that countries put forward names and that these are the names of people they do not want in their own countries. That is simply not the way the Diversity Visa Program works.

To support his anti-lottery position, Trump again appears to mention the case of Sayfullo Saipov (“that same person who came in through the lottery program”).

“But they put people that they don’t want into a lottery and the United States takes those people.  And again, they’re going back to that same person who came in through the lottery program. They went — they visited his neighborhood and the people in the neighborhood said, “oh my God, we suffered with this man — the rudeness, the horrible way he treated us right from the beginning.”  So we don’t want the lottery system or the visa lottery system.  We want it ended.”

I think that what President Trump is saying here is that Sayfullo Saipov was one of the outcasts put into our lottery program by a country that did not want him, and that his new neighbors in the U. S. had complained about his behavior from the start.

This is not a good example of a level 10 argument. In fact, it is not a good example of an argument at all. President Trump completely misrepresents the Diversity Immigrant Visa Program, leaving him with no basis for a sensible argument.

Summing up

The results from this analysis of President Trump’s statements about immigration provide additional evidence that he tends to perform in the middle of level 10, and that his arguments generally have a simple if-then structure. The analysis also reveals some apparent misunderstandings of the law and other factual information.

It is a matter for concern when a President of the United States does not appear to understand a law he wants to change.

 


President Trump on intelligence

How complex are the ideas about intelligence expressed in President Trump’s tweets?

President Trump recently tweeted about his intelligence. The media has already had quite a bit to say about these tweets. So, if you’re suffering from Trump tweet trauma this may not be the article for you.

But you might want to hang around if you’re interested in looking at these tweets from a different angle. I thought it would be interesting to examine their complexity level, and consider what they suggest about the President’s conception of intelligence.

In the National Leaders Study, we’ve been using CLAS — Lectica, Inc.’s electronic developmental scoring system—to score the complexity level of several national leaders’ responses to questions posed by respected journalists. Unfortunately, I can’t use CLAS to score tweets. They’re too short. Instead, I’m going to use the Lectical Dictionary to examine the complexity of ideas being expressed in them.


If you aren’t familiar with the National Leaders series, you may find this article a bit difficult to follow.


The Lectical Dictionary is a developmentally curated list of about 200,000 words or short phrases (terms) that represent particular meanings. (The dictionary does not include entries for people, places, or physical things.) Each term in the dictionary has been assigned to one of 30 developmental phases, based on its least complex possible meaning. The 30 developmental phases span first speech (in infancy) to the highest adult developmental phase Lectica has observed in human performance. Each phase represents 1/4 of a level (a, b, c, or d). Levels range from 5 (first speech) to 12 (the most complex level Lectica measures). Phase scores are named as follows: 09d, 10a, 10b, 10c, 10d, 11a, etc. Levels 10 through 12 are considered to be “adult levels,” but the earliest phase of level 10 is often observed in middle school students, and the average high school student performs in the 10b to 10c range.

In the following analysis, I’ll be identifying the highest-phase Lectical Dictionary terms in the President’s statements, showing each term’s phase. Where possible, I’ll also be looking at the form of thinking—black-and-white, if-then logic (10a–10d) versus shades-of-gray, nuanced logic (11a–11d)—in which these terms are embedded.
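To make the lookup step concrete, here is a minimal sketch. The tiny dictionary below borrows the phase assignments that appear in the analysis that follows (successful 09c, elected 09b, mental stability 10b, personal assets 10c), but the matching code itself is a hypothetical stand-in—the real Lectical Dictionary is far larger and the actual procedure is more involved.

```python
# Toy stand-in for the Lectical Dictionary; phase assignments are taken from the
# analysis below, but the matching procedure itself is only an illustration.
TOY_DICTIONARY = {
    "elected": "09b",
    "successful": "09c",
    "mental stability": "10b",
    "personal assets": "10c",
}

def highest_phase_terms(text: str) -> list[tuple[str, str]]:
    """Return dictionary terms found in the text, highest phase first."""
    text = text.lower()
    hits = [(term, phase) for term, phase in TOY_DICTIONARY.items() if term in text]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)

statement = "My two greatest assets have been mental stability and being, like, really smart."
print(highest_phase_terms(statement))  # [('mental stability', '10b')]
```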

The President’s statements

The first two statements are tweets made on 01–05–2018.

“…throughout my life, my two greatest assets have been mental stability and being, like, really smart.”

The two most complex ideas in this statement are the notion of having personal assets (10c), and the notion of mental stability (10b).

“I went from VERY successful businessman, to top T.V. Star…to President of the United States (on my first try). I think that would qualify as not smart, but genius…and a very stable genius at that!”

This statement presents an argument for the President’s belief that he is not only smart, but a stable genius (10b-10c). The evidence offered consists of a list of accomplishments—being a successful (09c) businessman, being a top star, and being elected (09b) president. (Stable genius is not in the Lectical Dictionary, but it is a reference back to the previous notion of mental stability, which is in the dictionary at 10b.)

The kind of thinking demonstrated in this argument is simple if-then linear logic. “If I did these things, then I must be a stable genius.”

Later, at Camp David, when asked about these Tweeted comments, President Trump explained further…

“I had a situation where I was a very excellent student, came out, made billions and billions of dollars, became one of the top business people, went to television and for 10 years was a tremendous success, which you’ve probably heard.”

This argument provides more detail about the President’s accomplishments—being an excellent (08a) student, making billions and billions of dollars, becoming a top business person, and being a tremendous success (10b) in television. Here the president demonstrates the same if-then linear logic observed in the second tweet, above.

Summing up

The President has spoken about his intelligence on numerous occasions. Across all of the instances I’ve identified, he makes a strong connection between intelligence and concrete accomplishments — most often wealth, fame, or performance (for example in school or in negotiations). I could not find a single instance in which he attributed any part of these accomplishments to external or mitigating factors — for example, luck, being born into a wealthy family, having access to expert advice, or good employees. (I’d be very interested in seeing any examples readers can send my way!)

President Trump’s statements represent the same kind of logic and meaning-making my colleagues and I observed in the interview responses analyzed for the National Leaders’ series. President Trump’s logic in these statements has a simple if-then structure, and the most complex ideas he expresses are in the 10b to 10c range. As yet, I have seen no evidence of reasoning above this range.

The average score of a US adult is in the 10c–10d range.

 


Complexity level—A primer


What is complexity level? In my work, a complexity level is a point or range on a dimension called hierarchical complexity. In this article, I’m not going to explain hierarchical complexity, but I am going to try to illustrate—in plain(er) English—how complexity level relates to decision-making skills, workplace roles, and curricula. If you’re looking for a more scholarly definition, you can find it in our academic publications. The Shape of Development is a good place to begin.

Background

My colleagues and I make written-response developmental assessments that are designed to support optimal learning and development. All of these assessments are scored for their complexity level on a developmental scale called the Lectical Scale. It’s a scale of increasing hierarchical complexity, with 13 complexity levels (0–12) that span birth through adulthood. On this scale, each level represents a way of seeing the world. Each new level builds upon the previous level, so thinking in a new complexity level is more complex and abstract than thinking at the previous level. The following video describes levels 5–12.

We have five ways of representing Lectical Level scores, depending on the context: (1) as whole levels (9, 10, 11, etc.); (2) as decimals (10.35, 11.13, etc.); (3) as 4-digit numbers (1035, 1113, etc.); (4) as quarter-level phase scores (10a, 10b, 10c, 10d, 11a, etc.); and (5) as half-level zone scores (early level 10, advanced level 10, early level 11, etc.).
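Here is a small sketch of how these representations line up with one another. The quarter and half boundaries used below (for example, treating 10.25 as the start of phase b) are my own illustrative assumptions, not Lectica's official conversion rules.

```python
def describe_score(four_digit: int) -> dict:
    """Convert a 4-digit Lectical score (e.g., 1035) into the other representations.
    Quarter/half boundaries are illustrative assumptions, not official cutoffs."""
    decimal = four_digit / 100                         # 1035 -> 10.35
    level = int(decimal)                               # whole level -> 10
    fraction = decimal - level
    phase = "abcd"[min(int(fraction * 4), 3)]          # quarter of the level -> 'b'
    zone = "early" if fraction < 0.5 else "advanced"   # half of the level -> 'early'
    return {
        "four_digit": four_digit,
        "decimal": decimal,
        "whole_level": level,
        "phase": f"{level}{phase}",
        "zone": f"{zone} level {level}",
    }

print(describe_score(1035))
# {'four_digit': 1035, 'decimal': 10.35, 'whole_level': 10, 'phase': '10b', 'zone': 'early level 10'}
```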

Interpreting Lectical (complexity level) Scores

Lectical Scores are best thought of in terms of the specific skills, meanings, tasks, roles, or curricula associated with them. To illustrate, I’m including a table below that shows…

  • Lectical Score ranges for the typical complexity of coursework and workplace roles (Role demands & Complexity demands), and
  • some examples of decision making skills demonstrated in these Lectical Score ranges.

In the last bullet above, I highlighted the term skill, because we differentiate between skills and knowledge. Lectical Scores don’t represent what people know, they represent the complexity of the skill used to apply what they know in the real world. This is important, because there’s a big difference between committing something to memory and understanding it well enough to put it to work. For example, in the 1140–1190 range, the first skill mentioned in the table below is the “ability to identify multiple relations between nested variables.” The Lectical range in this row does not represent the range in which people are able to make this statement. Instead, it represents the level of complexity associated with actually identifying multiple relations between nested variables.

[Table: Lectical score ranges for typical coursework and workplace role demands, with examples of the decision-making skills demonstrated in each range.]

If you want to use this table to get an idea of how skills increase in complexity over time, I suggest that you begin by comparing skill descriptions in ranges that are far apart. For example, try comparing the skill description in the 945–995 range with the skill descriptions in the 1250–1300 range. The difference will be obvious. Then, work your way toward closer and closer ranges. It’s not unusual to have difficulty appreciating the difference between adjacent ranges—that generally takes time and training—but you’ll find it easy to see differences between ranges that are further apart.

When using this table as a reference, please keep in mind that several factors play a role in the actual complexity demands of both coursework and roles. In organizations, size and sector matter. For example, there can be a difference as large as 1/2 of a level between freshman curricula in different colleges.

I hope you find this table helpful (even though it’s difficult to read). I’ll be using it as a reference in future articles exploring some of what my colleagues and I have learned by measuring and studying complexity level—starting with leader decision-making.



 


National leaders’ thinking: If a US President thought like a teenager…

In a series of Medium articles, my colleagues and I have been examining the complexity level of national leaders’ thinking — with a newly validated electronic developmental assessment system called CLAS. Since I posted the second article in this series, which focused on the thinking of recent U. S. presidents, I’ve been asked several times to say more about what complexity scores mean and why they matter.

So, here goes. To keep it simple (or as simple as a discussion of complexity can be), I’m going to limit myself to an exploration of the complexity scores of Presidents Trump (mean score = 1054) and Obama (mean score = 1163).

If you are unfamiliar with complexity levels, I recommend that you start by watching the short video, below. It provides a general explanation of developmental levels that will help get you oriented.

Adult complexity zones

If you’ve read the previous articles in this series (recommended), you’ve already seen the figure below. It shows the four complexity “zones” that are most common in adulthood and describes them in terms of the kinds of perspectives people performing in each zone are likely to be able to work with effectively. The first zone, advanced linear thinking, is the most common among adults in the United States. It’s also fairly common in the later years of high school—though early linear thinking (not shown here) is more common in that age range.

As development progresses, knowledge and thought move through levels of increasing complexity. Each level builds upon the previous level, which means we have to pass through all of the levels in sequence. Skipping a level is impossible, because a level can’t be built unless there is an earlier level to build upon. As we move through these levels, the evidence of earlier levels does not disappear. It leaves traces in language that can be represented as a kind of history of a person’s development. We call this a developmental profile. To produce a score, CLAS’s algorithm compares an individual’s developmental profile to the typical profiles for each possible score on the complexity scale. Right now, the CLAS algorithm is based on 20 years of rigorous research involving over 45,000 scored interviews, observations, and assessments.
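To picture the matching step, here is a deliberately simplified sketch. The reference profiles, phase proportions, and distance measure below are all invented for illustration—this is not CLAS's actual algorithm, only the general idea of comparing an observed developmental profile to typical profiles for candidate scores.

```python
import math

# Invented reference profiles: hypothetical proportions of language at each phase
# for two candidate scores. CLAS's real profiles and matching rules are not public.
REFERENCE_PROFILES = {
    1050: {"10a": 0.15, "10b": 0.45, "10c": 0.30, "10d": 0.10},
    1150: {"10c": 0.10, "10d": 0.25, "11a": 0.40, "11b": 0.25},
}

def distance(p: dict, q: dict) -> float:
    phases = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in phases))

def closest_score(observed: dict) -> int:
    """Pick the candidate score whose reference profile is nearest to the observed profile."""
    return min(REFERENCE_PROFILES, key=lambda s: distance(observed, REFERENCE_PROFILES[s]))

observed_profile = {"10a": 0.20, "10b": 0.40, "10c": 0.30, "10d": 0.10}
print(closest_score(observed_profile))  # 1050
```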

President Trump

In the second article of this series, I reported that President Trump’s average score (1054) was in the advanced linear thinking zone. Thinking in this zone is abstract and linear. People performing in this zone link ideas in chains of (more or less) logical relations. Reasoning has a “black and white” quality, in the sense that there is a strong preference for simple correct or incorrect answers. Although individuals performing in this level can often see that a situation or problem involves multiple factors, the only way they can organize their thinking about these factors is in chains of logical statements, usually with an “if, then” structure. President Trump, in his interview with The Wall Street Journal on the 25th of July, 2017, provided a typical “if, then” argument when asked about trade with the UK. He argued:

…we’re going to have a very good relationship with the U.K. And we do have to talk to the European Union, because it’s not a reciprocal deal, you know. The word reciprocal, to me, is very important. For instance, we have countries that charge us 100 percent tax to sell a Harley-Davidson into that country. And yet, they sell their motorcycles to us, or their bikes, or anything comparable, and we charge them nothing. There has to be a reciprocal deal. I’m all about that.

The complexity level of an argument can be seen in its structure and the meanings embedded in that structure. This argument has an “if, then” structure, and points to the meaning of reciprocity, which for the President seems to mean an equal exchange—”If you tax at a certain level, then we should tax at that level too.” This kind of “tit for tat” thinking is common in level 10 and below. It’s also a form of thinking that disappears above level 10. For example, in level 11, an individual would be more likely to argue, “It’s more complex than that. There are other considerations that need to be taken into account, like the impact a decision like this is likely to have on international relations or our citizens’ buying power.” President Trump, in his response, does not even mention additional considerations. This is one of the patterns in his responses that contributed to the score awarded by CLAS.

In the results reported here, a Democrat scored higher than a Republican. We have no reason to believe that conservative thinking is inherently less complex than liberal thinking. In fact, in the past, we have identified highly complex thinking in both conservative and liberal leaders.


A couple of side notes

Upon reading President Trump’s statement above, you may have noticed that, without any framing or context, the President jumped to a discussion of reciprocity. This lack of framing is a ubiquitous feature of President Trump’s arguments. I did not mention it in my discussion of complexity because it is not a direct indicator of thinking complexity. It’s more strongly connected to logical coherence, which correlates with complexity but is not fully explained by complexity.

I’d also like to note that it was difficult to find a single argument in President Trump’s interviews that contained an actual explanation. When asked to explain a position, President Trump was far more likely to (1) tell a story, (2) deride someone, (3) point out his own fame or popularity, or (4) claim that another perspective was a lie or fake news. These were the main ways in which he “backed up” his opinions. Like the absence of framing, these behaviors are not direct indicators of thinking complexity, though they may be correlated with complexity. They are more strongly related to disposition, values, and personality.

These flaws in President Trump’s thinking, combined with the complexity level of his interview responses, should raise considerable alarm. If the President Trump we see is showing us his best thinking—and a casual examination of other examples of his thinking suggests that this is likely to be the case—he clearly lacks the thinking skills demanded by his role. In fact, mid-level management roles generally require better thinking skills than those demonstrated by President Trump.


President Obama

President Obama’s mean score (1163) was in the advanced systems thinking zone. Thinking in this zone is multivariate and non-linear. People performing in this zone link ideas in complex webs of relations, connecting these webs of relations to one another through common elements. For example, they view individuals as complex webs of traits and behaviors, and groups of individuals as complex webs that include not only the intersections of the webs of their members, but their own distinct properties. Thinking in this zone is very different from thinking in the advanced linear thinking zone. Where individuals performing in the advanced linear thinking zone are concerned about immediate outcomes and proximal causes, individuals performing in the advanced systems thinking zone concern themselves with long-term outcomes and systemic causes. Here is an example from President Obama’s interview with the New York Times on March 7th, 2009, in which he explains his approach to economic recovery following the onset of the Great Recession:

…people have been concerned—understandably—about the decline in the market. Well, the reason the market’s declining is because the economy’s declining and it’s generating a lot of bad news, not surprisingly. And so what I’m focused on is fixing the underlying economy. That’s ultimately what’s going to fix the markets. …in the interim you’ve got some folks who would love to see us artificially prop up the market by just putting in more taxpayer money, which in the short term could make bank balance sheets look better, make creditors and bondholders and shareholders of these financial institutions feel better and you could get a little blip. But we’d be in the exact same spot as we were six, eight, 10 months [ago]. So, what I’ve got to do is make sure that we’re focused on the underlying economy, and … if we do that well …we’re going to get this economy moving again. And I think over the long term we’re going to be much better off.

Rather than offering a pre-determined solution or focusing on a single element of the economic crisis, President Obama anchors on the economy as a system, advocating a comprehensive long-term solution rather than band-aid solutions that might offer some positive immediate results, but would be likely to backfire in the long term. Appreciating that the economic situation presents “a very complex set of problems,” he employs a decision-making process that is “constantly… guided by evidence, facts, talking through all the best arguments, drawing from all the best perspectives, and then taking the best course of action possible.”

The complexity level of President Obama’s thinking, as represented in the press interviews analyzed for our study, is a reasonable fit for high office. Of course, we were not able to determine whether his scores in this context represent his full capabilities. An informal examination of some of his written work suggests that the “true” complexity level of his thinking may be even higher.

Discussion

Thinking complexity is not the only factor that plays a role in a president’s success. As president, Obama experienced both successes and failures, and as is usually the case, it’s difficult to say to what extent his solutions contributed to these successes or failures. But, even in the face of this uncertainty, isn’t it a no brainer that a complex problem that’s adequately understood is more likely to be resolved than a complex problem that’s not even recognized?

In his interview with the Wall Street Journal, President Trump claimed that Barack Obama, “didn’t know what the hell he was doing.” Our results suggest that it may be President Trump who doesn’t know what Obama was doing.

 


Other articles in this series

  1. The complexity of national leaders’ thinking: How does it measure up?
  2. The complexity of national leaders’ thinking: U.S. Presidents

National leaders’ thinking: The US presidents

How well does the thinking of recent US Presidents stand up to the complexity of issues faced in their role?

Special thanks to my Australian colleague, Aiden M. A. Thornton, PhD Cand., for his editorial and research assistance.

This is the second in a series of articles on the complexity of national leaders’ thinking, as measured with CLAS, a newly validated electronic developmental scoring system. This article will make more sense if you begin with the first article in the series.

Just in case you choose not to read or revisit the first article, here are a few things to keep in mind.

  • I am an educational researcher and the CEO of a nonprofit that specializes in measuring the complexity level of people’s thinking and supporting the development of their capacity to work with complexity.
  • The complexity level of leaders’ thinking is one of the strongest predictors of leader advancement and success.
  • Many of the issues faced by national leaders require principles thinking (level 12 on the skill scale, illustrated in the figure below).
  • To accurately measure the complexity level of someone’s thinking (on a given topic), we need examples of their best thinking. In this case, that kind of evidence wasn’t available. As an alternative, my colleagues and I have chosen to examine the complexity level of Presidents’ responses to interviews with prominent journalists.

The data

In this article, we examine the thinking of the four most recent Presidents of the United States — Bill Clinton, George W. Bush, Barack Obama, and Donald Trump. For each president, we selected 3 interviews, based on the following criteria: They

  1. were conducted by prominent journalists representing respected news media;
  2. included questions that requested explanations of the president’s perspective; and
  3. were either conducted within the president’s first year in office or were the earliest interviews we could locate that met the first two criteria.

As noted in the introductory article of this series, we do not imagine that the responses provided in these interviews necessarily represent competence. It is common knowledge* that presidents and other leaders typically attempt to tailor messages for their audiences, so even when responding to interview questions, they may not show off their own best thinking.

Media also tailor writing for their audiences, so to get a sense of what a typical complexity level target for top media might be, we used CLAS to score 11 articles on topics similar to those discussed by the four presidents in their interviews. We selected these articles at random — literally selecting the first ones that came to hand — from recent issues of the New York Times, Guardian, Washington Post, and Wall Street Journal. Articles from all of these newspapers landed in the middle range of the early systems thinking zone, with an average score of 1124.

Based on this information, and understanding that presidents generally attempt to tailor messages for their audience, we hypothesized that presidents would aim for a similar range.

The results

The results were mixed. Only Presidents Clinton and Bush consistently performed in the anticipated range. President Trump stood out by performing well below this range. His scores were all identical — and roughly equivalent to the average for 12th graders in a reasonably good high school. President Obama also missed the mark, but in the opposite direction. In his first interviews, he scored at the top of the advanced systems thinking zone. But he didn’t stay there. By the time of September’s interview, he was responding in the early systems thinking zone. He even mentioned simplifying communication in this interview. Commenting on his messaging around health care, he said, “I’ve tried to keep it digestible… it’s very hard for people to get… their whole arms around it.”

The Table below shows the complexity scores received by our four presidents. (All of the interviews can readily be found in the presidential archives.)

Discussion

In the first article of this series, I discussed the importance of attempting to “hire” leaders whose complexity level scores are a good match for the complexity level of the issues they face in their roles. I then posed two questions:

  1. When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  2. How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

The answer to question 1 is that the average complexity level of presidents’ responses to interview questions varied dramatically. President Trump’s average complexity level score was 1054 — near the average score received by 12th graders in a good high school. President Bush’s average score was 1107 — near the average score received by entry- to mid-level managers in a large corporation. President Clinton’s average score was 1141 — near the average score received by upper level managers in large corporations. Obama’s average score was 1163 — near the average score of senior leaders in large corporations. (Obama’s highest scores were closer to the average for CEOs in our database.)

With respect to question 2, the complexity level of presidents’ responses did not rise to the complexity level of many of the issues raised in their interviews. These issues ranged from international relations and the economy to health care and global warming. All of these are thorny problems involving multiple interacting and nested systems—early principles and above. Indeed, many of these problems are so complex that they are beyond the capability of even the most complex thinkers to fully grasp. (See my article on the Complexity Gap for more on this issue.) President Obama came closest to demonstrating a level of thinking complexity that would be adequate for coping with problems of this kind. (For more on this, see the third article in this series, If a U. S. President thought like a teenager…)

Obama also demonstrated some of the other qualities required for working well with complexity, such as skills for perspective seeking and perspective coordination, and familiarity with tools for working with complexity—but that’s another story.

In addition to addressing the two questions posed in the first article of this series, we were able to ask if these U. S. presidents seemed to tailor the complexity level of their interview responses for the audiences of the media outlets represented by journalists conducting the interviews.

First, the responses of presidents Bush and Clinton were in the same zone as a set of articles collected from these media outlets. Of course, we can’t be sure the alignment was intentional. There are other plausible explanations, including the possibility that what we witnessed was their best thinking.

In contrast, however, President Trump’s responses were well below the zone of the selected articles, making it difficult to argue that he was tailoring his responses for their audiences. Individuals whose thinking is complex are likely to find thinking at lower levels of complexity simplistic and unsatisfying. Delivering a message that is likely to lead to judgments of this kind does not seem like a rational tactic — especially for a politician.

It seems more plausible that President Trump was demonstrating his best thinking about the issues raised in his interviews. If so, his best would be far below the complexity level of most issues faced in his role. Indeed, individuals performing in the advanced linear thinking zone would not even be aware of the complexity inherent in many of the issues faced daily by national leaders.

President Obama confronted a different challenge. The complexity of thinking evident in his early interviews was very high. Even though, as with Bush and Clinton, it isn’t possible to say we witnessed Obama’s best thinking, we would argue that what we saw of President Obama’s thinking in his first two interviews was a reasonable fit to the complexity of the challenges in his role. However, it appears that Obama soon learned that in order to communicate effectively with citizens, he needed to make his communications more accessible.

In the results reported here, Democrats scored higher than Republicans. We have no reason to believe that conservative thinking is inherently less complex than liberal thinking. In fact, in the past, we have identified highly complex thinking in both conservative and liberal leaders.

We need leaders who can cope with highly complex issues, and particularly in a democracy, we also need leaders we can understand. President Obama showed himself to be a complex thinker, but he struggled with making his communications accessible. President Trump’s message is accessible, but our results suggest that he may not even be aware of the complexity of many issues faced in his role. Is it inevitable that the tension between complexity and accessibility will sometimes lead us to “hire” national leaders who are easy to understand, but lack the ability to work with complexity? And how can we even know if a leader is equipped with the thinking complexity that’s required if candidates routinely simplify communications for their audience? Given our increasingly volatile and complex world, these are questions that cry out for answers.

We don’t have these answers, and we’ve intentionally resisted going deeper into the implications of these findings. Instead, we’re hoping to stimulate discussion around our questions and the implications that arise from the findings presented here. Please feel free to chime in or contact us to further the conversation. And stay tuned. The Australian Prime Ministers are next!


*The speeches of presidents are generally written to be accessible to a middle school audience. The metrics used to determine reading level are not measures of complexity level, but reading level scores are moderately correlated with complexity level.


 



National leaders’ thinking: How does it measure up?

Special thanks to my Australian colleague, Aiden Thornton, for his editorial and research assistance.

This is the first in a series of articles on the complexity of national leaders’ thinking. These articles will report results from research conducted with CLAS, our newly validated electronic developmental scoring system. CLAS will be used to score these leaders’ responses to questions posed by prominent journalists.

In this first article, I’ll be providing some of the context for this project, including information about how my colleagues and I think about complexity and its role in leadership. I’ve embedded lots of links to additional material for readers who have questions about our 100+ year-old research tradition, the assessments made by Lectica (the nonprofit that owns me), and other research we’ve conducted with these assessments.

Context and research questions

Lectica creates diagnostic assessments for learning that support the development of the mental skills required for working with complexity. We make these learning tools for both adults and children. Our K-12 initiative—the DiscoTest Initiative—is dedicated to bringing these tools to individual K-12 teachers everywhere, free of charge. Lectica’s adult assessments are used by organizations in recruitment and training, and by colleges and universities in admissions and program evaluation.

All Lectical Assessments measure the complexity level (aka, level of vertical development) of people’s thinking in particular knowledge areas. A complexity level score on a Lectical Assessment tells us the highest level of complexity—in a problem, issue, or task—an individual is likely to be able to work with effectively.

On several occasions over the last 20 years, my colleagues and I have been asked to evaluate the complexity of national leaders’ reasoning skills. Our response has been, “We will, but only when we can score electronically—without the risk of human bias.” That time has come. Now that our electronic developmental scoring system, CLAS, has demonstrated a level of reliability and precision that is acceptable for this purpose, we’re ready to take a look.

Evaluating the complexity of national leaders’ thinking is a challenging task for several reasons. First, it’s virtually impossible to find examples of many of these leaders’ “best work.” Their speeches are generally written for them, and speech writers usually try to keep the complexity level of these speeches low, aiming for a reading level in the 7th to 9th grade range. (Reading level is not the same thing as complexity level, but like most tests of capability, it correlates moderately with complexity level.) Second, even when national leaders respond to unscripted questions from journalists, they work hard to use language that is accessible to a wide audience. And finally, it’s difficult to identify a level playing field—one in which all leaders have the same opportunity to demonstrate the complexity of their thinking.

Given these obstacles, there’s no point in attempting to evaluate the actual thinking capabilities of national leaders. In other words, we won’t be claiming that the scores awarded by CLAS represent the true complexity level of leaders’ thinking. Instead, we will address the following questions:

  1. When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  2. How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

Thinking complexity and leader success

At this point, you may be wondering, “What is thinking complexity and why is it important?” A comprehensive response to this question isn’t possible in a short article like this one, but I can provide a basic description of complexity as we see it at Lectica, and provide some examples that highlight its importance.

All issues faced by leaders are associated with a certain amount of built-in complexity. For example:

  1. The sheer number of factors/stakeholders that must be taken into account.
  2. Short and long-term implications/repercussions. (Will a quick fix cause problems downstream, such as global unrest or catastrophic weather?)
  3. The number and diversity of stakeholders/interest groups. (What is the best way to balance the needs of individuals, families, businesses, communities, states, nations, and the world?)
  4. The length of time it will take to implement a decision. (Will it take months, years, decades? Longer projects are inherently more complex because of changes over time.)
  5. Formal and informal rules/laws that place limits on the deliberative process. (For example, legislative and judicial processes are often designed to limit the decision making powers of presidents or prime ministers. This means that leaders must work across systems to develop decisions, which further increases the complexity of decision making.)

Over the course of childhood and adulthood, the complexity of our thinking develops through up to 13 skill levels (0–12). Each new level builds upon the previous level. The figure above shows four adult complexity “zones” — advanced linear thinking (second zone of level 10), early systems thinking (first zone of level 11), advanced systems thinking (second zone of level 11), and early principles thinking (first zone of level 12). In advanced linear thinking, reasoning is often characterized as “black and white.” Individuals performing in this zone cope best with problems that have clear right or wrong answers. It is only once individuals enter early systems thinking that they begin to work effectively with highly complex problems that do not have clear right or wrong answers.

Leadership at the national level requires exceptional skills for managing complexity, including the ability to deal with the most complex problems faced by humanity (Helbing, 2013). Needless to say, a national leader regularly faces issues at or above early principles thinking.

Complexity level and leadership—the evidence

In the workplace, the hiring managers who decide which individuals will be put in leadership roles are likely to choose leaders whose thinking complexity is a good match for their roles. Even if they have never heard the term complexity level, hiring managers generally understand, at least implicitly, that leaders who can work with the complexity inherent in the issues associated with their roles are likely to make better decisions than leaders whose thinking is less complex.

There is a strong relation between the complexity of leadership roles and the complexity level of leaders’ reasoning. In general, more complex thinkers fill more complex roles. The figure below shows how lower-level and senior leaders’ complexity scores are distributed in Lectica’s database. Most senior leaders’ complexity scores fall in or above advanced systems thinking, while those of lower-level leaders fall primarily in early systems thinking.

The strong relation between the complexity of leaders’ thinking and the complexity of their roles can also be seen in the recruitment literature. To be clear, complexity level is not the only aspect of leaders’ decision making that affects their ability to deal effectively with complex issues. However, a large body of research spanning more than 50 years suggests that the strongest predictors of success in leadership recruitment are those most closely related to thinking skills, including complexity level.

The figure below shows the predictive power of several forms of assessment employed in making hiring and promotion decisions. Cognitive assessments have been shown to have the highest predictive power. In other words, assessments of thinking skills do a better job of predicting which candidates will be successful in a given role than other forms of assessment.

[Figure: Predictive power graph]

The match between the complexity of national leaders’ thinking and the complexity level of the problems faced in their roles is important. While we will not be able to assess the actual complexity level of the thinking of national leaders, we will be able to examine the complexity of their responses to questions posed by prominent journalists. In upcoming articles, we’ll be sharing our findings and discussing their implications.

Coming next…

In the second article in this series, we begin our examination of the complexity of national leaders’ thinking by scoring interview responses from four US Presidents—Bill Clinton, George W. Bush, Barack Obama, and Donald Trump.

 


Appendix

Predictive validity of various types of assessments used in recruitment

The following table shows average predictive validities for various forms of assessment used in recruitment contexts. The column “variance explained” is an indicator of how much of a role a particular form of assessment plays in predicting performance—its predictive power.
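As a quick aid to interpreting the table, the “variance explained” values appear to be the squares of the predictive validities (correlations), rounded to the nearest percent, and the final column appears to report the variance explained when a predictor is combined with GMA. The minimal Python sketch below, included only for illustration, reproduces the second column under that reading, using validities taken from the first few rows of the table:

```python
# Illustration only: "variance explained" read as the square of the predictive
# validity (a correlation coefficient), rounded to the nearest percent.
validities = {
    "Complexity of workplace reasoning": 0.53,
    "Aptitude (GMA)": 0.51,
    "Work sample tests": 0.54,
    "Integrity": 0.41,
}

for assessment, r in validities.items():
    variance_explained = r ** 2  # proportion of performance variance accounted for
    print(f"{assessment}: r = {r:.2f}, variance explained ≈ {variance_explained:.0%}")
```

Running the sketch prints, for example, “Complexity of workplace reasoning: r = 0.53, variance explained ≈ 28%”, matching the table.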

| Form of assessment | Source | Predictive validity | Variance explained | Variance explained (with GMA) |
|---|---|---|---|---|
| Complexity of workplace reasoning | Dawson & Stein, 2004; Stein, Dawson, Van Rossum, Hill, & Rothaizer, 2003 | .53 | 28% | n/a |
| Aptitude (General Mental Ability, GMA) | Hunter, 1980; Schmidt & Hunter, 1998 | .51 | 26% | n/a |
| Work sample tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .54 | 29% | 40% |
| Integrity | Ones, Viswesvaran, & Schmidt, 1993; Schmidt & Hunter, 1998 | .41 | 17% | 42% |
| Conscientiousness | Barrick & Mount, 1995; Schmidt & Hunter, 1998 | .31 | 10% | 36% |
| Employment interviews (structured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .51 | 26% | 39% |
| Employment interviews (unstructured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .38 | 14% | 30% |
| Job knowledge tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .48 | 23% | 33% |
| Job tryout procedure | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .44 | 19% | 33% |
| Peer ratings | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .49 | 24% | 33% |
| Training & experience: behavioral consistency method | McDaniel, Schmidt, & Hunter, 1988a, 1988b; Schmidt & Hunter, 1998; Schmidt, Ones, & Hunter, 1992 | .45 | 20% | 33% |
| Reference checks | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .26 | 7% | 32% |
| Job experience (years) | Hunter, 1980; McDaniel, Schmidt, & Hunter, 1988b; Schmidt & Hunter, 1998 | .18 | 3% | 29% |
| Biographical data measures (Supervisory Profile Record Biodata Scale) | Rothstein, Schmidt, Erwin, Owens, & Sparks, 1990; Schmidt & Hunter, 1998 | .35 | 12% | 27% |
| Assessment centers | Gaugler, Rosenthal, Thornton, & Bentson, 1987; Schmidt & Hunter, 1998; Becker, Höft, Holzenkamp, & Spinath, 2011. Note: Arthur, Day, McNelly, & Edens (2003) found a predictive validity of .45 for assessment centers that included mental skills assessments. | .37 | 14% | 28% |
| EQ | Zeidner, Matthews, & Roberts, 2004 | .24 | 6% | n/a |
| 360 assessments | Beehr, Ivanitskaya, Hansen, Erofeev, & Gudanowski, 2001 | .24 | 6% | n/a |
| Training & experience: point method | McDaniel, Schmidt, & Hunter, 1988a; Schmidt & Hunter, 1998 | .11 | 1% | 27% |
| Years of education | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .10 | 1% | 27% |
| Interests | Schmidt & Hunter, 1998 | .10 | 1% | 27% |

References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The predictive validity of assessment centers in German-speaking regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Dawson, T. L. (2017, October 20). Using technology to advance understanding: The calibration of CLAS, an electronic developmental scoring system. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Dawson, T. L., & Thornton, A. M. A. (2017, October 18). An examination of the relationship between argumentation quality and students’ growth trajectories. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497, 51-59.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.


Dr. Howard Drossman—leadership in environmental education

For several years now, one of our heroes, professor Howard Drossman of Colorado College and the Catamount Center, has been working with Lectical Assessments and helping us build LESA, the Lectical Environmental Stewardship Assessment.

Dr. Drossman's areas of expertise include developmental pedagogy, environmental stewardship, and the development of reflective judgment. His teaching focuses on building knowledge, skill, and passion through deep study, hands-on experience, and reflection.

For example, Dr. Drossman and ACM (Associated Colleges of the Midwest) offered a 10-day faculty seminar on interdisciplinary learning called Contested Spaces. This physically and intellectually challenging expeditionary learning experience provided participants with multiple disciplinary perspectives on current issues of land stewardship in the Pikes Peak region of Colorado. 

A second, ongoing program, offered by the Catamount Center and Colorado College, is dedicated to inspiring the "next generation of ecological stewards." This program, called TREE (Teaching & Research in Environmental Education), is a 16-week residential program for undergraduate students who have an interest in teaching and the environment. Program participants live and learn in community at the Catamount Mountain Campus, which is located in a montane forest outside of Woodland Park, Colorado. Through study and practice, they cultivate their own conceptions of environmental stewardship and respect for the natural world, while building skills for creating virtuous cycles of learning and usable knowledge in K-12 classrooms.

Dr. Drossman embeds Lectical Assessments in both of these programs, using them to customize instruction, support individual development, and measure program outcomes. He is also working closely with us on the development of LESA, which is one of the first assessments we plan to bring online once our new platform, LecticaLive, is complete.

 


World Economic Forum—tomorrow’s skills

[Figure: The top 10 workplace skills of the future. Source: Future of Jobs Report, WEF 2017]

In a recent blog post—actually in several recent blog posts—I've been emphasizing the importance of building tomorrow's skills. These are the kinds of skills we all need to navigate our increasingly complex and changing world. I may not agree that all of the top 10 skills listed in the World Economic Forum report (shown above) belong in a list of skills (creativity is much more than a skill, and service orientation is more of a disposition than a skill), but the flavor of this list is generally in sync with the kinds of skills, dispositions, and behaviors required in a complex and rapidly changing world.

The "skills" in this list cannot be…

  • developed in learning environments focused primarily on correctness or in workplace environments that don't allow for mistakes; or
  • measured with ratings on surveys or on tests of people's ability to provide correct answers.

These "skills" are best developed through cycles of goal setting, information gathering, application, and reflection—what we call virtuous cycles of learning—or VCoLs. And they're best assessed with tests that focus on applications of skill in real-world contexts, like Lectical Assessments, which are based on a rich research tradition focused on the development of understanding and skill.

 
