Individual growth trajectories often don’t stick to statistically determined expectations.
The illustration above depicts the growth trajectory of a woman named Eleanore. Between the ages of 12 and 68, she completed two different developmental assessments several times. The first assessment was the LRJA, a test of reflective judgment (critical thinking), which she completed on 8 different occasions. The second assessment was the LDMA, a test of decision-making skills, which she completed four times between the ages of 42 and 68. As you can see, Eleanore has continued to develop throughout adulthood, with periods of more and less rapid growth.
The graph on which Eleanore’s scores are plotted shows several potential developmental curves (A–H), representing typical developmental trajectories for individuals performing at different levels at age 10. You can tell right away that Eleanore is not behaving as expected. Over time, her scores have landed on two different curves (D & E), and she shows considerable growth in age ranges for which no growth is expected — on either curve.
Eleanore, who was born in 1942, was a bright child who did well in school. By the time she graduated from high school in 1960, she was in the top 15% of her class. After attending two years of community college, she joined the workforce as a legal secretary. At 23 she married a lawyer, and at 25 she gave birth to the first of two children. During the next 15 years, while raising her children, her scores hovered closer to curve E than curve D. When her youngest entered high school, Eleanore decided it was time to complete her bachelor of science degree, which she did, part time, over several years. During this period she grew more quickly than in the previous 10 years, and her LRJA scores began to cluster around curve D.
Sadly, shortly after completing her degree (at age 43), Eleanore learned that her mother had been diagnosed with dementia (what we would now call Alzheimer’s disease). For the next 6 years, she cared for her ailing mother, who died only a few days before Eleanore’s 50th birthday. While she cared for her mother, Eleanore learned a great deal about Alzheimer’s — from both personal experience and the extensive research she did to help ensure the best possible care for her mother. This may have contributed to the growth that occurred during this period. Following her mother’s death, Eleanore decided to build upon her knowledge of Alzheimer’s, spending the next 6 years earning a Ph.D. focused on its origins. At the time of her last assessment, she was a respected Alzheimer’s researcher.
And now I must confess. Eleanore is not a real person. She’s a compilation based on 70 years of research in which the growth of thousands of individuals has been measured over periods spanning 8 months to 25 years. Eleanore’s story has been designed to illustrate several phenomena my colleagues and I have observed in these data:
First, although statistics allow us to describe typical developmental trajectories, individual development is usually more or less atypical. Eleanore does not stay on the curve she started out on. In fact, she drops below this curve for a time, then develops beyond it in later adulthood. She also grew during age ranges in which no growth at all was expected. Both life events and formal education clearly influenced her developmental trajectory.
Second, many people develop throughout adulthood — especially if they are involved in rich learning experiences (like formal schooling), or when they are coping productively with life crises (like reflectively supporting an ailing parent).
Third, developmental spurts happen. The figure above shows a (real) growth spurt that occurred between the ages of 46 and 51. This highly motivated individual engaged in a sustained and varied learning adventure during this period — just because he wanted to build his interpersonal and leadership skills.
Fourth, developmental growth can happen late in life, given the right opportunities and circumstances. The (real) woman whose scores are shown here responded to a personal life crisis by embracing it as an opportunity to learn more about herself as person and as a leader.
My colleagues and I find the statistically determined growth curves shown on the figures in this article enormously useful in our research, but it’s important to keep in mind that they’re just averages. Many people can jump from one curve to another given the right learning skills and opportunities. On the other hand, these curves are associated with some constraints. For example, we’ve never seen anyone jump more than one of these curves, no matter how excellent their learning skills or opportunities have been. Unsurprisingly, nurture cannot entirely overcome nature.
Growth is predicted by a number of factors. Nature is a big one. How we personally approach learning is also pretty big — with approaches that feature virtuous cycles of learning taking the lead. And, of course, our growth is influenced by how optimally the environments we live, learn, and work in support learning.
Find out how we put this knowledge to work in leader development and recruitment contexts, with LAP-1 and LAP-2.
Honestly folks, we really, really, really need to get over the memorization model of learning. It’s good for spelling bees, trivia games, Jeopardy, and passing multiple choice tests. But it’s BORING if not torturous! And cramming more and more facts into our brains isn’t going to help most of us thrive in real life — especially in the 21st century.
As an employer, I don’t care how many facts are in your head or how quickly you can memorize new information. I’m looking for talent, applied expertise (not just factual or theoretical knowledge), and the following skills and attributes:
The ability to tell the difference between memorizing and understanding
I won’t delegate responsibility to employees who can’t tell the difference between memorizing and understanding. Employees who can’t make this distinction don’t know when they need to ask questions. Consequently, they repeatedly make decisions that aren’t adequately informed.
I’ve taken to asking potential employees what it feels like when they realize they’ve really understood something. Many applicants, including highly educated applicants, don’t understand the question. It’s not their fault. The problem is an educational system that’s way too focused on memorizing.
The ability to think
It’s essential that every employee in my organization is able to evaluate information, solve problems, participate actively in decision making, and know the difference between an opinion and a good evidence-based argument.
A desire to listen and the skills for doing it well
We also need employees who want and know how to listen — really listen. In my organization, we don’t make decisions in a vacuum. We seek and incorporate a wide range of stakeholder perspectives. A listening disposition and listening skills are indispensable.
The ability to speak truth (constructively)
I know my organization can’t grow the way I want it to if the people around me are unwilling to share their perspectives or are unable to share them constructively. When I ask someone for an opinion, I want to hear their truth — not what they think I want to hear.
The ability to work effectively with others
This requires respect for other human beings, good interpersonal, collaborative, and conflict resolution skills, the ability to hear and respond positively to productive critique, and buckets of compassion.
Awareness of the ubiquity of human fallibility, including one’s own, and knowledge about human limitations, including the built-in mental biases that so often lead us astray.
A passion for learning (a.k.a. growth mindset)
I love working with people who are driven to increase their understanding and skills — so driven that they’re willing to feel lost at times, so driven that they’re willing to make mistakes on their way to a solution, so driven that their happiness depends on the availability of new challenges.
The desire to do good in the world
I run a nonprofit. We need employees who are motivated to do good.
Not one of these capabilities can be learned by memorizing. All of them are best learned through reflective practice — preferably 12–16 years of reflective practice (a.k.a. VCoLing) in an educational system that is not obsessed with remembering.
An individual’s rate of development is affected by a wide range of factors. Twin studies suggest that about 50% of the variation in Lectical growth trajectories is likely to be predicted by genetic factors. The remaining variation is explained by environmental factors, including the environment in the womb, the home environment, parenting quality, educational quality & fit, economic status, diet, personal learning habits, and aspects of personality.
Each Lectical Level takes longer to traverse than the previous level. This is because development through each successive level involves constructing increasingly elaborated and abstract knowledge networks. Don’t be fooled by the slow growth, though. A little growth can have an important impact on outcomes. For example, small advances in level 11 can make a big difference in an individual’s capacity to work effectively with complexity and change—at home and in the workplace.
The graphs above show possible learning trajectories, first, for the lifespan and second, for ages 10-60. Note that the highest age shown on these graphs is 60. This does not mean that individuals cannot develop after the age of 60.
The yellow circle in each graph represents a Lectical Score and the confidence interval around that score. That’s the range in which the “true score” would most likely fall. When interpreting any test score, you should keep the confidence interval in mind.
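As a rough sketch of how such an interval is typically computed: a common convention is to take the observed score plus or minus about two standard errors of measurement (SEM) for a 95% interval. The SEM value and the example score below are illustrative assumptions, not Lectica’s actual figures.

```python
# Illustrative only: a 95% confidence interval around a test score,
# assuming normally distributed measurement error with a known
# standard error of measurement (SEM). The SEM here is hypothetical.

def confidence_interval(score: float, sem: float, z: float = 1.96):
    """Return the (low, high) range most likely to contain the 'true score'."""
    return (score - z * sem, score + z * sem)

# Hypothetical example: a score of 1046 with an assumed SEM of 15 points.
low, high = confidence_interval(score=1046, sem=15)
print(f"95% CI: {low:.1f} to {high:.1f}")
```

The wider the interval, the less precisely the test pins down the “true score,” which is why two scores with overlapping intervals should not be treated as meaningfully different.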
Within individuals, growth is not tidy
When we measure the development of individuals over short time spans, it does not look smooth. The kind of pattern shown in the following graph is more common. However, we have found that growth appears a bit smoother for adults than for children. We think this is because children, for a variety of reasons, are less likely to do their best work on every testing occasion.
People don’t grow at the same rate in every knowledge area
An individual’s rate of growth depends on the level of their immersion in particular knowledge areas. A physicist may be on one trajectory when it comes to physics and quite a different trajectory when it comes to interpersonal understanding.
Factors that affect the rate of development
Genetics & socio-economic status.
A test-taker’s current developmental trajectory. For example, as time passes, a person whose history places her on the green curve in the first two graphs is less and less likely to jump to the blue curve.
The amount of everyday reflective activity (especially VCoLing) the individual typically engages in (less reflective activity means less growth)
Participation in deliberate learning activities that include lots of reflective activity (especially VCoLing)
Participating in supported learning (coaching, mentoring) after several years away from formal education (can create a growth spurt).
Shortly after the President passed the Montreal Cognitive Assessment, a reader emailed with two questions:
Does this mean that the President has the cognitive capacity required of a national leader?
How does a score on this test relate to the complexity level scores you have been describing in recent posts?
A high score on the Montreal Cognitive Assessment does not mean that the President has the cognitive capacity required of a national leader. This test result simply means there is a high probability that the President is not suffering from mild cognitive impairment. (The test has been shown to detect existing cognitive impairment 88% of the time.) In order to determine whether the President has the mental capacity to understand the complex issues he faces as a national leader, we need to know how complexly he thinks about those issues.
The answer to the second question is that there is little relation between scores on the Montreal Cognitive Assessment and the complexity level of a person’s thinking. A test like the Montreal Cognitive Assessment does not require the kind of thinking a President needs to understand highly complex issues like climate change or the economy. Teenagers can easily pass this test.
The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)
Tsoi KK, Chan JY, Hirai HW, Wong SY, Kwok TC. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. JAMA Intern Med. 2015;175(9):1450–8. doi:10.1001/jamainternmed.2015.2152
How complex are the ideas about immigration expressed in President Trump’s recent comments to congress?
On January 9th, 2018, President Trump spoke to members of Congress about immigration reform. In his comments, the President stressed the need for bipartisan immigration reform, and laid out three goals.
secure our border with Mexico
end chain migration
close the visa lottery program
I have analyzed President Trump’s comments in detail, looking at each goal in turn. But first, his full comments were submitted to CLAS (an electronic developmental assessment system) for an analysis of their complexity level. The CLAS score was 1046. This score is in what we call level 10, and is a few points lower than the average score of 1053 awarded to President Trump’s arguments in our earlier research.
Here are some benchmarks for complexity scores:
The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050-1080.
The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150-1180.
The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
The difference between 1046 and 1137 represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)
President Trump’s first goal was to increase border security.
“Drugs are pouring into our country at a record pace and a lot of people are coming in that we can’t have… we have tremendous numbers of people and drugs pouring into our country. So, in order to secure it, we need a wall. We…have to close enforcement loopholes. Give immigration officers — and these are tremendous people, the border security agents, the ICE agents — we have to give them the equipment they need, we have to close loopholes, and this really does include a very strong amount of different things for border security.”
This is a good example of a level 10, if-then, linear argument. The gist of this argument is, “If we want to keep drugs and people we don’t want from coming across the border, then we need to build a wall and give border agents the equipment and other things they need to protect the border.”
As is also typical of level 10 arguments, this argument offers immediate concrete causes and solutions. The cause of our immigration problems is that bad people are getting into our country. The physical act of keeping people out of the country is a solution to this problem.
Individuals performing in level 11 would not be satisfied with this line of reasoning. They would want to consider underlying or root causes such as poverty, political upheaval, or trade imbalances—and would be likely to try to formulate solutions that addressed these more systemic causes.
Side note: It’s not clear exactly what President Trump means by loopholes. In the past, he has used this term to mean “a law that lets people do things that I don’t think they should be allowed to do.” The dictionary meaning of the term would be more like, “a law that unintentionally allows people to do things it was meant to keep them from doing.”
President Trump’s second goal was to end chain migration. According to Wikipedia, chain migration (a.k.a. family reunification) is a social phenomenon in which immigrants from a particular family or town are followed by others from that family or town. In other words, family members and friends often join friends and loved ones who have immigrated to a new country. Like many U. S. citizens, I’m a product of chain migration. The first of my relatives to arrive in this country, in the 17th century, later helped other relatives immigrate.
President Trump wants to end chain migration, because…
“Chain migration is bringing in many, many people with one, and often it doesn’t work out very well. Those many people are not doing us right.”
I believe that what the President is saying here is that chain migration is when one person immigrates to a new country and lots of other people known to (or related to?) that person are allowed to immigrate too. He is concerned that the people who follow the first immigrant aren’t behaving properly.
To support this claim, President Trump provides an example of the harm caused by chain migration.
“…we have a recent case along the West Side Highway, having to do with chain migration, where a man ran over — killed eight people and many people injured badly. Loss of arms, loss of legs. Horrible thing happened, and then you look at the chain and all of the people that came in because of him. Terrible situation.”
The perpetrator of the attack Trump appears to be referring to — Sayfullo Saipov — was a Diversity Visa immigrant. Among other things, this means he was not sponsored, so he cannot have been a chain immigrant. On November 21, 2017, President Trump claimed that Saipov had been listed as the primary contact of 23 people who attempted to immigrate following his arrival in 2010, suggesting that Saipov was the first in a chain of immigrants. According to Buzzfeed, federal authorities have been unable to confirm this claim.
Like the border security example, Trump’s argument about chain migration is a good example of a level 10, if-then, linear argument. Here, the gist of his argument is that, If we don’t stop chain migration, then bad people like Sayfullo Saipov will come into the country and do horrible things to us. (I’m intentionally ignoring President Trump’s mistaken assertion that Saipov was a chain immigrant.)
Individuals performing in level 11 would not regard a single example of violent behavior as adequate evidence that chain immigration is a bad thing. Before deciding that eliminating chain migration was a wise decision, they would want to know, for example, whether chain immigrants are more likely to behave violently (or become terrorists) than natural-born citizens.
The visa lottery (Diversity Visa Program)
The visa lottery was created as part of the Immigration Act of 1990, and signed into law by President George H. W. Bush. Application for this program is free. The only way to apply is to enter your data into a form on the State Department’s website. Individuals who win the lottery must undergo background checks and vetting before being admitted into the United States. (If you are interested in learning more, the Wikipedia article on this program is comprehensive and well-documented.)
President Trump wants to cancel the lottery program
“…countries come in and they put names in a hopper. They’re not giving you their best names; common sense means they’re not giving you their best names. They’re giving you people that they don’t want. And then we take them out of the lottery. And when they do it by hand — where they put the hand in a bowl — they’re probably — what’s in their hand are the worst of the worst.”
Here, President Trump seems to misunderstand the nature of the visa lottery program. He claims that countries put forward names and that these are the names of people they do not want in their own countries. That is simply not the way the Diversity Visa Program works.
To support his anti-lottery position, Trump again appears to mention the case of Sayfullo Saipov (“that same person who came in through the lottery program”).
“But they put people that they don’t want into a lottery and the United States takes those people. And again, they’re going back to that same person who came in through the lottery program. They went — they visited his neighborhood and the people in the neighborhood said, ‘oh my God, we suffered with this man — the rudeness, the horrible way he treated us right from the beginning.’ So we don’t want the lottery system or the visa lottery system. We want it ended.”
I think that what President Trump is saying here is that Sayfullo Saipov was one of the outcasts put into our lottery program by a country that did not want him, and that his new neighbors in the U. S. had complained about his behavior from the start.
This is not a good example of a level 10 argument. In fact, it is not a good example of an argument at all. President Trump completely misrepresents the Diversity Immigrant Visa Program, leaving him with no basis for a sensible argument.
The results from this analysis of President Trump’s statements about immigration provide additional evidence that he tends to perform in the middle of level 10, and that his arguments generally have a simple if-then structure. The analysis also reveals some apparent misunderstanding of the law and other factual information.
It is a matter for concern when a President of the United States does not appear to understand a law he wants to change.
How complex are the ideas about intelligence expressed in President Trump’s tweets?
President Trump recently tweeted about his intelligence. The media has already had quite a bit to say about these tweets. So, if you’re suffering from Trump tweet trauma this may not be the article for you.
But you might want to hang around if you’re interested in looking at these tweets from a different angle. I thought it would be interesting to examine their complexity level, and consider what they suggest about the President’s conception of intelligence.
In the National Leaders Study, we’ve been using CLAS — Lectica, Inc.’s electronic developmental scoring system—to score the complexity level of several national leaders’ responses to questions posed by respected journalists. Unfortunately, I can’t use CLAS to score tweets. They’re too short. Instead, I’m going to use the Lectical Dictionary to examine the complexity of ideas being expressed in them.
If you aren’t familiar with the National Leaders series, you may find this article a bit difficult to follow.
The Lectical Dictionary is a developmentally curated list of about 200,000 words or short phrases (terms) that represent particular meanings. (The dictionary does not include entries for people, places, or physical things.) Each term in the dictionary has been assigned to one of 30 developmental phases, based on its least complex possible meaning. The 30 developmental phases span first speech (in infancy) to the highest adult developmental phase Lectica has observed in human performance. Each phase represents 1/4 of a level (a, b, c, or d). Levels range from 5 (first speech) to 12 (the most complex level Lectica measures). Phase scores are named as follows: 09d, 10a, 10b, 10c, 10d, 11a, etc. Levels 10 through 12 are considered to be “adult levels,” but the earliest phase of level 10 is often observed in middle school students, and the average high school student performs in the 10b to 10c range.
In the following analysis, I’ll be identifying the highest-phase Lectical Dictionary terms in the President’s statements, showing each item’s phase. Where possible, I’ll also be looking at the form of thinking—black-and-white, if-then logic (10a–10d) versus shades-of-gray, nuanced logic (11a–11d)—these terms are embedded in.
The President’s statements
The first two statements are tweets made on 01–05–2018.
“…throughout my life, my two greatest assets have been mental stability and being, like, really smart.
The two most complex ideas in this statement are the notion of having personal assets (10c), and the notion of mental stability (10b).
“I went from VERY successful businessman, to top T.V. Star…to President of the United States (on my first try). I think that would qualify as not smart, but genius…and a very stable genius at that!”
This statement presents an argument for the President’s belief that he is not only smart, but a stable genius (10b-10c). The evidence offered consists of a list of accomplishments—being a successful (09c) businessman, being a top star, and being elected (09b) president. (Stable genius is not in the Lectical Dictionary, but it is a reference back to the previous notion of mental stability, which is in the dictionary at 10b.)
The kind of thinking demonstrated in this argument is simple if-then linear logic. “If I did these things, then I must be a stable genius.”
Later, at Camp David, when asked about these tweeted comments, President Trump explained further…
“I had a situation where I was a very excellent student, came out, made billions and billions of dollars, became one of the top business people, went to television and for 10 years was a tremendous success, which you’ve probably heard.”
This argument provides more detail about the President’s accomplishments—being an excellent (08a) student, making billions and billions of dollars, becoming a top business person, and being a tremendous success (10b) in television. Here the president demonstrates the same if-then linear logic observed in the second tweet, above.
The President has spoken about his intelligence on numerous occasions. Across all of the instances I’ve identified, he makes a strong connection between intelligence and concrete accomplishments — most often wealth, fame, or performance (for example in school or in negotiations). I could not find a single instance in which he attributed any part of these accomplishments to external or mitigating factors — for example, luck, being born into a wealthy family, having access to expert advice, or good employees. (I’d be very interested in seeing any examples readers can send my way!)
President Trump’s statements represent the same kind of logic and meaning-making my colleagues and I observed in the interview responses analyzed for the National Leaders’ series. President Trump’s logic in these statements has a simple, if-then structure, and the most complex ideas he expresses are in the 10b to 10c range. As yet, I have seen no evidence of reasoning above this range.
The average score of a US adult is in the 10c–10d range.
What is complexity level? In my work, a complexity level is a point or range on a dimension called hierarchical complexity. In this article, I’m not going to explain hierarchical complexity, but I am going to try to illustrate—in plain(er) English—how complexity level relates to decision-making skills, workplace roles, and curricula. If you’re looking for a more scholarly definition, you can find it in our academic publications. The Shape of Development is a good place to begin.
My colleagues and I make written-response developmental assessments that are designed to support optimal learning and development. All of these assessments are scored for their complexity level on a developmental scale called the Lectical Scale. It’s a scale of increasing hierarchical complexity, with 13 complexity levels (0–12) that span birth through adulthood. On this scale, each level represents a way of seeing the world. Each new level builds upon the previous level, so thinking in a new complexity level is more complex and abstract than thinking at the previous level. The following video describes levels 5–12.
We have five ways of representing Lectical Level scores, depending on the context: (1) as whole levels (9, 10, 11, etc.), (2) as decimals (10.35, 11.13, etc.), (3) as 4 digit numbers (1035, 1113, etc.), (4) as 1/4 of a level phase scores (10a, 10b, 10c, 10d, 11a, etc.), and (5) as 1/2 of a level zone scores (early level 10, advanced level 10; early level 11, etc.).
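These five representations are mechanical transformations of one another. The sketch below converts a 4-digit score into the other four forms; the exact quarter-level and half-level cut-points used here are my own assumption (equal quarters and halves within a level), not Lectica’s published rules.

```python
# A sketch of converting a 4-digit Lectical score into the other
# representations described above. The quarter-level (phase) and
# half-level (zone) cut-points are assumed to divide each level evenly.

def representations(four_digit: int) -> dict:
    """Convert a 4-digit score (e.g. 1046) into all five representations."""
    level = four_digit // 100                # whole level, e.g. 1046 -> 10
    fraction = (four_digit % 100) / 100      # position within the level, 0.0-0.99
    phase_letter = "abcd"[min(int(fraction * 4), 3)]  # quarter-level phase
    half = "early" if fraction < 0.5 else "advanced"  # half-level zone
    return {
        "whole": level,                      # e.g. 10
        "decimal": four_digit / 100,         # e.g. 10.46
        "four_digit": four_digit,            # e.g. 1046
        "phase": f"{level}{phase_letter}",   # e.g. "10b"
        "zone": f"{half} level {level}",     # e.g. "early level 10"
    }

print(representations(1046)["phase"])  # -> 10b
```

Under these assumed cut-points, the score of 1137 mentioned earlier would fall in phase 11b, early level 11.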
Interpreting Lectical (complexity level) Scores
Lectical Scores are best thought of in terms of the specific skills, meanings, tasks, roles, or curricula associated with them. To illustrate, I’m including a table below that shows…
Lectical Score ranges for the typical complexity of coursework and workplace roles (Role demands & Complexity demands), and
some examples of decision making skills demonstrated in these Lectical Score ranges.
In the last bullet above, I highlighted the term skill, because we differentiate between skills and knowledge. Lectical Scores don’t represent what people know, they represent the complexity of the skill used to apply what they know in the real world. This is important, because there’s a big difference between committing something to memory and understanding it well enough to put it to work. For example, in the 1140–1190 range, the first skill mentioned in the table below is the “ability to identify multiple relations between nested variables.” The Lectical range in this row does not represent the range in which people are able to make this statement. Instead, it represents the level of complexity associated with actually identifying multiple relations between nested variables.
If you want to use this table to get an idea of how skills increase in complexity over time, I suggest that you begin by comparing skill descriptions in ranges that are far apart. For example, try comparing the skill description in the 945–995 range with the skill descriptions in the 1250–1300 range. The difference will be obvious. Then, work your way toward closer and closer ranges. It’s not unusual to have difficulty appreciating the difference between adjacent ranges—that generally takes time and training—but you’ll find it easy to see differences that are further apart.
When using this table as a reference, please keep in mind that several factors play a role in the actual complexity demands of both coursework and roles. In organizations, size and sector matter. For example, there can be a difference as large as 1/2 of a level between freshman curricula in different colleges.
I hope you find this table helpful (even though it’s difficult to read). I’ll be using it as a reference in future articles exploring some of what my colleagues and I have learned by measuring and studying complexity level—starting with leader decision-making.
In a recent blog post—actually in several recent blog posts—I've been emphasizing the importance of building tomorrow's skills. These are the kinds of skills we all need to navigate our increasingly complex and changing world. While I may not agree that all of the top 10 skills listed in the World Economic Forum report (shown above) belong in a list of skills (creativity is much more than a skill, and service orientation is more of a disposition than a skill), the flavor of this list is generally in sync with the kinds of skills, dispositions, and behaviors required in a complex and rapidly changing world.
The "skills" in this list cannot be developed in learning environments focused primarily on correctness, or in workplace environments that don't allow for mistakes.
These "skills" are best developed through cycles of goal setting, information gathering, application, and reflection—what we call virtuous cycles of learning—or VCoLs. And they're best assessed with tests that focus on applications of skill in real-world contexts, like Lectical Assessments, which are based on a rich research tradition focused on the development of understanding and skill.
I’ve been auditing a very popular 4.5-star Coursera course called “Learning how to learn.” It uses all of the latest research to help people improve their “learning skills.” Yet, even though the lectures in the course are interesting and the research behind the course appears to be sound, I find it difficult to agree that it is a course that helps people learn how to learn.
First, the tests used to determine how well participants have built the learning skills described in this course are actually tests of how well they have learned vocabulary and definitions. As far as I can tell, no skills are involved other than the ability to recall course content. This is problematic. The assumption that learning vocabulary and definitions builds skill is unwarranted. I believe we all know this. Who has not had the experience of learning something well enough to pass a test only to forget most of what they had learned shortly thereafter?
Second, the content of the tests at the end of the videos isn’t particularly relevant to the stated intention of the course. These tests require remembering (or scrolling back to) facts like “Many new synapses are formed on dendrites.” We do not need to learn this to become effective learners. The test item for which this is the correct answer focuses on an aspect of how learning works rather than how to learn. And although understanding how learning works might be a step toward learning how to learn, answering this question correctly tells us nothing about how the participant understands anything at all.
Third, if the course developers had used tests of skill—tests that asked participants to show how effectively they could apply the techniques described—we would be able to ask about the extent to which the course helps participants learn how to learn. Instead, the only way we have to evaluate the effectiveness of the course is through participant ratings and comments—how much people like it. I’m not suggesting that liking a course is unimportant, but it’s not a good way to evaluate its effectiveness.
Fourth, the course seems to be primarily concerned with fostering a kind of learning that helps people do better on tests of correctness. The underlying and unstated assumption seems to be that if you can do better on these tests, you have learned better. This assumption flies in the face of several decades of educational research, including our own [for example, 1, 2, 3]. Correctness is not adequate evidence of understanding or real-world skill. If we want to know how well people understand new knowledge, we must observe how they apply this knowledge in real-world contexts. If we want to evaluate their level of skill, we must observe how well they apply the skill in real-world contexts. In other words, a course—especially a course in learning how to learn—should be building usable skills that have value beyond the act of passing a test of correctness.
Fifth, the research behind this course can help us understand how learning works. At Lectica, we’ve used the very same information as part of the basis for our learning model, VCoL+7. But instead of using this knowledge to support the status quo—an educational system that privileges correctness over understanding and skill—we’re using it to build learning tools designed to ensure that learning in school goes beyond correctness to build deep understanding and robust skill.
For the vast majority of people, schooling is not an end in itself. It is preparation for life—preparation with tomorrow’s skills. It’s time we held our educational institutions accountable for ensuring that students know how to learn more than correct answers. Wherever their lives take them, they will do better if equipped with understanding and skill. Correctness is not enough.
1. FairTest; Mulholland, Q. (2015). The case against standardized testing. Harvard Political Review, May 14.
2. Schwartz, M. S., Sadler, P. M., Sonnert, G., & Tai, R. H. (2009). Depth versus breadth: How content coverage in high school science courses relates to later success in college science coursework. Science Education, 93(5), 798–826.
During the '70s and '80s I practiced midwifery. It was a great honor to be present at the births of over 500 babies and, in many cases, to follow them into childhood. Every single one of those babies was a joyful, driven, and effective "every moment" learner. Regardless of difficulty and pain, they all learned to walk, talk, interact with others, and manipulate many aspects of their environment. They needed few external rewards to build these skills—the excitement and suspense of striving seemed to be reward enough. I felt like I was observing the "life force" in action.
Unfortunately, as many of these children approached the third grade (age 8), I noticed something else—something deeply troubling. Many of the same children seemed to have lost much of this intrinsic drive to learn. For them, learning had become a chore motivated primarily by extrinsic rewards and punishments. Because this was happening primarily to children attending conventional schools (children receiving alternative instruction seemed to be exempt), it appeared that something about schooling was depriving many children of the fundamental human drive required to support a lifetime of learning and development—a drive that looked to me like a key source of happiness and fulfillment.
Understanding the problem
Following my midwifery career, I flirted briefly with a career in advertising, but by the early '90s I was back in school—in a Ph.D. program in U.C. Berkeley's Graduate School of Education—where I found myself observing the same pattern I'd observed as a midwife. Both the research literature and my own lab experience revealed the early loss of students' natural love of learning. My concern was only heightened by the newly emerging trend toward high-stakes multiple-choice testing, which my colleagues and I saw as a further threat to children's natural drive to learn.
Most of the people I've spoken to about this problem have agreed that it's a shame, but few have seen it as a problem that can be solved, and many have seen it as an inevitable consequence of either mass schooling or simple maturation. But I knew it was not inevitable. Children educated in a range of alternative environments did not appear to lose their drive to learn. Additionally, above-average students in conventional schools appeared to be more likely to retain their love of learning.
I set out to find out why—and ended up on a long journey toward a solution.
How learning works
First, I needed to understand how learning works. At Berkeley, I studied a wide variety of learning theories in several disciplines, including developmental theories, behavioral theories, and brain-based theories. I collected a large database of longitudinal interviews and submitted them to in-depth analysis, looked closely at the relation between testing and learning, and studied psychological measurement, all in the interest of finding a way to support children's growth while reinforcing their love of learning.
My dissertation—which won awards from both U.C. Berkeley and the American Psychological Association—focused on the development of people's conceptions of learning from age 5 through 85, and how this kind of knowledge could be used to measure and support learning. In 1998, I received $500,000 from the Spencer Foundation to further develop the methods designed for this research. Along the way, I built expertise in human learning and development, psychometrics, metacognition, moral education, and research methods.
In the simplest possible terms, what I learned in 5 years of graduate school is that the human brain is designed to drive learning, and that preserving that natural drive requires 5 ingredients:
a safe environment that is rich in learning opportunities and healthy human interaction,
a teacher who understands each child's interests and level of tolerance for failure,
a mechanism for determining "what comes next"—what is just challenging enough to allow for success most of the time (but not all of the time),
instant actionable feedback, and
the opportunity to integrate new knowledge or skills into each learner's existing knowledge network well enough to make it useable before pushing instruction to the next level. (We call this building a "robust knowledge network"—the essential foundation for future learning.)
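Ingredient 3—choosing a task that is just challenging enough for success most, but not all, of the time—can be illustrated with a small selection routine. Everything here is my own assumption for illustration (the function names, the logistic success model, and the 60–85% "sweet spot" thresholds); it is not Lectica's actual algorithm.

```python
import math

# Sketch of ingredient 3: choose the next task that is "just challenging
# enough" -- one the learner should succeed at most, but not all, of the
# time. The logistic model and thresholds below are illustrative
# assumptions, not Lectica's actual method.

def predicted_success(difficulty, level):
    # Simple logistic model: success probability drops as task
    # difficulty exceeds the learner's estimated level.
    return 1.0 / (1.0 + math.exp(difficulty - level))

def next_task(tasks, estimated_level, target_success=(0.6, 0.85)):
    """Pick the hardest task whose predicted success rate is in the sweet spot.

    tasks: list of (name, difficulty) pairs, with difficulty on the
    same scale as estimated_level.
    """
    lo, hi = target_success
    in_zone = [(name, predicted_success(d, estimated_level))
               for name, d in tasks
               if lo <= predicted_success(d, estimated_level) <= hi]
    # The hardest in-zone task has the lowest predicted success rate.
    return min(in_zone, key=lambda t: t[1])[0] if in_zone else None

tasks = [("review", -2.0), ("practice", -0.5), ("stretch", 0.5), ("too hard", 3.0)]
print(next_task(tasks, estimated_level=1.0))  # picks the hardest in-zone task
```

Tasks that are nearly guaranteed successes ("review") and near-certain failures ("too hard") are both excluded; the learner gets the most demanding task they can still usually complete.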
Identifying the solution
Once we understood what learning should look like, we needed to decide where to intervene. The answer, when it came, was a complete surprise. Understanding what comes next—something that can only be learned by measuring what a student understands now—was an integral part of the recipe for learning. This meant that testing, which we had originally seen as an obstacle to robust learning, was actually the solution—but only if we could build tests that would free students to learn the way their brains are designed to learn. These tests would have to help teachers determine "what comes next" (ingredient 3) and provide instant actionable feedback (ingredient 4), while rewarding teachers for helping students build robust knowledge networks (ingredient 5).
Unfortunately, conventional standardized tests were focused on "correctness" rather than robust learning, and none of them were based on the study of how targeted concepts and skills develop over time. Moreover, they were designed not to support learning, but rather to make decisions about advancement or placement, based on how many correct answers students were able to provide relative to other students. Because this form of testing did not meet the requirements of our learning recipe, we'd have to start from scratch.
Developing the solution
We knew that our solution—reinventing educational testing to serve robust learning—would require many years of research. In fact, we would be committing to possibly decades of effort without a guaranteed result. It was the vision of a future educational system in which all children retained their inborn drive for learning that ultimately compelled us to move forward.
To reinvent educational testing, we needed to:
make a deep study of precisely how children build particular knowledge and skills over time in a wide range of subject areas (so these tests could accurately identify "what comes next");
make tests that determine how deeply students understand what they have learned—how well they can use it to address real-world issues or problems (this requires that students show how they are thinking, not just what they know, which means written responses with explanations); and
produce formative feedback and resources designed to foster "robust learning" (build robust knowledge networks).
Here's what we had to invent:
A learning ruler (building on Commons and Fischer);
A method for studying how students learn tested concepts and skills (refining the methods developed for my dissertation);
A human scoring system for determining the level of understanding exhibited in students' written explanations (building upon Commons' and Fischer's methods, refining them until measurements were precise enough for use in educational contexts); and
An electronic scoring system, so feedback and resources could be delivered in real time.
It took over 20 years (1996–2016), but we did it! And while we were doing it, we conducted research. In fact, our assessments have been used in dozens of research projects, including a $25 million study of literacy conducted at Harvard, and numerous Ph.D. dissertations—with more on the way.
What we've learned
We've learned many things from this research. Here are some that took us by surprise:
Students in schools that focus on building deep understanding graduate seniors who are up to 5 years ahead (on our learning ruler) of students in schools that focus on correctness (2.5 to 3 years after taking socioeconomic status into account).
Students in schools that foster robust learning develop faster and continue to develop longer (into adulthood) than students in schools that focus on correctness.
On average, students in schools that foster robust learning produce more coherent and persuasive arguments than students in schools that focus on correctness.
On average, students in our inner-city schools, which are the schools most focused on correctness, stop developing (on our learning ruler) in grade 10.
The average student who graduates from a school that strongly focuses on correctness is likely, in adulthood, to (1) be unable to grasp the complexity and ambiguity of many common situations and problems, (2) lack the mental agility to adapt to changes in society and the workplace, and (3) dislike learning.
From our perspective, these results point to an educational crisis that can best be addressed by allowing students to learn as their brains were designed to learn. Practically speaking, this means providing learners, parents, teachers, and schools with metrics that reward and support teaching that fosters robust learning.
Where we are today
Lectica has created the only metrics that meet all of these requirements. Our mission is to foster greater individual happiness and fulfillment while preparing students to meet 21st century challenges. We do this by creating and delivering learning tools that encourage students to learn the way their brains were designed to learn. And we ensure that students who need our learning tools the most get them first by providing free subscriptions to individual teachers everywhere.
To realize our mission, we organized as a nonprofit. We knew this choice would slow our progress (relative to organizing as a for-profit and welcoming investors), but it was the only way to guarantee that our true mission would not be derailed by other interests.
Thus far, we've funded ourselves with work in the for-profit sector and income from grants. Our background research is rich, our methods are well-established, and our technology works even better than we thought it would. Last fall, we completed a demonstration of our electronic scoring system, CLAS, a novel technology that learns from every single assessment taken in our system.
The groundwork has been laid, and we're ready to scale. All we need is the platform that will deliver the assessments (called DiscoTests), several of which are already in production.
After 20 years of high-stakes testing, students and teachers need our solution more than ever. We feel compelled to scale as quickly as possible, so we can begin reinvigorating today's students' natural love of learning and ensure that the next generation of students never loses theirs. Lectica's story isn't finished. Instead, we find ourselves on the cusp of a new beginning!
A final note: There are many benefits associated with our approach to assessment that were not mentioned here. For example, because the assessment scores are all calibrated to the same learning ruler, students, teachers, and parents can easily track student growth. Even better, our assessments are designed to be taken frequently and to be embedded in low-stakes contexts. For grading purposes, teachers are encouraged to focus on growth over time rather than specific test scores. This way of using assessments pretty much eliminates concerns about cheating. And finally, the electronic scoring system we developed is backed by the world's first "taxonomy of learning," which also serves many other educational and research functions. It's already spawned a developmentally sensitive spell-checker! One day, this taxonomy of learning will be robust enough to empower teachers to create their own formative assessments on the fly.
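Because every assessment is calibrated to the same learning ruler, tracking a student's growth reduces to fitting a trend through repeated low-stakes scores. The sketch below shows the idea with a least-squares slope; the scores, ages, and scale are invented for illustration and do not come from any real Lectical data.

```python
# Because all scores sit on one calibrated "learning ruler," growth can
# be tracked as a simple trend over repeated low-stakes assessments.
# The data below is made up for illustration.

def growth_rate(assessments):
    """Least-squares slope of score vs. age: ruler units gained per year."""
    ages = [a for a, _ in assessments]
    scores = [s for _, s in assessments]
    n = len(assessments)
    mean_age = sum(ages) / n
    mean_score = sum(scores) / n
    num = sum((a - mean_age) * (s - mean_score) for a, s in assessments)
    den = sum((a - mean_age) ** 2 for a in ages)
    return num / den

# Hypothetical student: four assessments between ages 12 and 15.
history = [(12.0, 10.2), (13.0, 10.5), (14.0, 10.7), (15.0, 11.1)]
print(round(growth_rate(history), 2))  # ruler units gained per year
```

A teacher grading on growth rather than raw scores would look at this slope over time, which is also why frequent, low-stakes administration matters: more points make the trend more trustworthy.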