Shortly after the President passed the Montreal Cognitive Assessment, a reader emailed with two questions:
Does this mean that the President has the cognitive capacity required of a national leader?
How does a score on this test relate to the complexity level scores you have been describing in recent posts?
A high score on the Montreal Cognitive Assessment does not mean that the President has the cognitive capacity required of a national leader. This test result simply means there is a high probability that the President is not suffering from mild cognitive impairment. (The test has been shown to detect existing cognitive impairment 88% of the time.) In order to determine if the President has the mental capacity to understand the complex issues he faces as a national leader, we need to know how complexly he thinks about those issues.
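To see why passing the test warrants only a probabilistic conclusion, here is a minimal sketch of the underlying arithmetic. Only the 88% sensitivity figure comes from the meta-analysis cited below; the specificity and base-rate values are illustrative assumptions, not findings about this or any particular test.

```python
# Illustrative only: how a "pass" updates the probability of impairment.
# The 0.88 sensitivity is from the cited meta-analysis; the specificity
# and prevalence below are assumed values chosen for the example.

sensitivity = 0.88   # P(test flags impairment | impairment present)
specificity = 0.80   # assumed: P(test passed | no impairment)
prevalence = 0.10    # assumed base rate of mild cognitive impairment

# Total probability of passing the test
p_pass = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)

# Bayes' rule: probability of impairment given a passing score
p_impaired_given_pass = (1 - sensitivity) * prevalence / p_pass

print(f"P(impairment despite passing) = {p_impaired_given_pass:.3f}")
# ~0.016 under these assumptions: passing makes impairment unlikely,
# but it says nothing about the complexity of a person's thinking.
```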
The answer to the second question is that there is little relation between scores on the Montreal Cognitive Assessment and the complexity level of a person’s thinking. A test like the Montreal Cognitive Assessment does not require the kind of thinking a President needs to understand highly complex issues like climate change or the economy. Teenagers can easily pass this test.
The difference between President Trump’s average score of 1053 (awarded to his arguments in our earlier research) and the 1137 average of the three presidents who preceded him generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)
Tsoi, K. K., Chan, J. Y., Hirai, H. W., Wong, S. Y., & Kwok, T. C. (2015). Cognitive tests to detect dementia: A systematic review and meta-analysis. JAMA Internal Medicine, 175(9), 1450–1458. doi:10.1001/jamainternmed.2015.2152
How complex are the ideas about immigration expressed in President Trump’s recent comments to congress?
On January 9th, 2018, President Trump spoke to members of Congress about immigration reform. In his comments, the President stressed the need for bipartisan immigration reform, and laid out three goals.
secure our border with Mexico
end chain migration
close the visa lottery program
I have analyzed President Trump’s comments in detail, looking at each goal in turn. But first, his full comments were submitted to CLAS (an electronic developmental assessment system) for an analysis of their complexity level. The CLAS score was 1046. This score is in what we call level 10, and is a few points lower than the average score of 1053 awarded to President Trump’s arguments in our earlier research.
Here are some benchmarks for complexity scores:
The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050-1080.
The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150-1180.
The average complexity score (reported in our National Leaders Study) for the three U. S. presidents that preceded President Trump was 1137.
The difference between 1046 and 1137 represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)
President Trump’s first goal was to increase border security.
“Drugs are pouring into our country at a record pace and a lot of people are coming in that we can’t have… we have tremendous numbers of people and drugs pouring into our country. So, in order to secure it, we need a wall. We…have to close enforcement loopholes. Give immigration officers — and these are tremendous people, the border security agents, the ICE agents — we have to give them the equipment they need, we have to close loopholes, and this really does include a very strong amount of different things for border security.”
This is a good example of a level 10, if-then, linear argument. The gist of this argument is, “If we want to keep drugs and people we don’t want from coming across the border, then we need to build a wall and give border agents the equipment and other things they need to protect the border.”
As is also typical of level 10 arguments, this argument offers immediate, concrete causes and solutions. The cause of our immigration problems is that bad people are getting into our country. The physical act of keeping people out of the country is a solution to this problem.
Individuals performing in level 11 would not be satisfied with this line of reasoning. They would want to consider underlying or root causes such as poverty, political upheaval, or trade imbalances—and would be likely to try to formulate solutions that addressed these more systemic causes.
Side note: It’s not clear exactly what President Trump means by loopholes. In the past, he has used this term to mean “a law that lets people do things that I don’t think they should be allowed to do.” The dictionary meaning of the term would be more like, “a law that unintentionally allows people to do things it was meant to keep them from doing.”
President Trump’s second goal was to end chain migration. According to Wikipedia, chain migration (a.k.a. family reunification) is a social phenomenon in which immigrants from a particular family or town are followed by others from that family or town. In other words, family members and friends often join friends and loved ones who have immigrated to a new country. Like many U.S. citizens, I’m a product of chain migration. The first of my relatives to arrive in this country, in the 17th century, later helped other relatives immigrate.
President Trump wants to end chain migration, because…
“Chain migration is bringing in many, many people with one, and often it doesn’t work out very well. Those many people are not doing us right.”
I believe that what the President is saying here is that chain migration is when one person immigrates to a new country and lots of other people known to (or related to?) that person are allowed to immigrate too. He is concerned that the people who follow the first immigrant aren’t behaving properly.
To support this claim, President Trump provides an example of the harm caused by chain migration.
“…we have a recent case along the West Side Highway, having to do with chain migration, where a man ran over — killed eight people and many people injured badly. Loss of arms, loss of legs. Horrible thing happened, and then you look at the chain and all of the people that came in because of him. Terrible situation.”
Sayfullo Saipov, the perpetrator of the attack Trump appears to be referring to, was a Diversity Visa immigrant. Among other things, this means he was not sponsored, so he cannot have been a chain immigrant. On November 21, 2017, President Trump claimed that Saipov had been listed as the primary contact of 23 people who attempted to immigrate following his arrival in 2010, suggesting that Saipov was the first in a chain of immigrants. According to Buzzfeed, federal authorities have been unable to confirm this claim.
Like the border security example, Trump’s argument about chain migration is a good example of a level 10, if-then, linear argument. Here, the gist of his argument is, “If we don’t stop chain migration, then bad people like Sayfullo Saipov will come into the country and do horrible things to us.” (I’m intentionally ignoring President Trump’s mistaken assertion that Saipov was a chain immigrant.)
Individuals performing in level 11 would not regard a single example of violent behavior as adequate evidence that chain migration is a bad thing. Before deciding that eliminating chain migration was a wise decision, they would want to know, for example, whether or not chain immigrants are more likely to behave violently (or become terrorists) than natural-born citizens.
The visa lottery (Diversity Visa Program)
The visa lottery was created as part of the Immigration Act of 1990 and signed into law by President George H. W. Bush. Application for this program is free, and the only way to apply is to enter your data into a form on the State Department’s website. Individuals who win the lottery must undergo background checks and vetting before being admitted into the United States. (If you are interested in learning more, the Wikipedia article on this program is comprehensive and well-documented.)
President Trump wants to cancel the lottery program…
“…countries come in and they put names in a hopper. They’re not giving you their best names; common sense means they’re not giving you their best names. They’re giving you people that they don’t want. And then we take them out of the lottery. And when they do it by hand — where they put the hand in a bowl — they’re probably — what’s in their hand are the worst of the worst.”
Here, President Trump seems to misunderstand the nature of the visa lottery program. He claims that countries put forward names and that these are the names of people they do not want in their own countries. That is simply not the way the Diversity Visa Program works.
To support his anti-lottery position, Trump again appears to mention the case of Sayfullo Saipov (“that same person who came in through the lottery program”).
“But they put people that they don’t want into a lottery and the United States takes those people. And again, they’re going back to that same person who came in through the lottery program. They went — they visited his neighborhood and the people in the neighborhood said, “oh my God, we suffered with this man — the rudeness, the horrible way he treated us right from the beginning.” So we don’t want the lottery system or the visa lottery system. We want it ended.”
I think that what President Trump is saying here is that Sayfullo Saipov was one of the outcasts put into our lottery program by a country that did not want him, and that his new neighbors in the U. S. had complained about his behavior from the start.
This is not a good example of a level 10 argument. In fact, it is not a good example of an argument at all. President Trump completely misrepresents the Diversity Immigrant Visa Program, leaving him with no basis for a sensible argument.
The results from this analysis of President Trump’s statements about immigration provide additional evidence that he tends to perform in the middle of level 10 and that his arguments generally have a simple if-then structure. The analysis also reveals some apparent misunderstandings of the law and other factual information.
It is a matter for concern when a President of the United States does not appear to understand a law he wants to change.
How complex are the ideas about intelligence expressed in President Trump’s tweets?
President Trump recently tweeted about his intelligence. The media has already had quite a bit to say about these tweets, so if you’re suffering from Trump tweet trauma, this may not be the article for you.
But you might want to hang around if you’re interested in looking at these tweets from a different angle. I thought it would be interesting to examine their complexity level, and consider what they suggest about the President’s conception of intelligence.
In the National Leaders Study, we’ve been using CLAS—Lectica, Inc.’s electronic developmental scoring system—to score the complexity level of several national leaders’ responses to questions posed by respected journalists. Unfortunately, I can’t use CLAS to score tweets. They’re too short. Instead, I’m going to use the Lectical Dictionary to examine the complexity of the ideas being expressed in them.
If you aren’t familiar with the National Leaders series, you may find this article a bit difficult to follow.
The Lectical Dictionary is a developmentally curated list of about 200,000 words or short phrases (terms) that represent particular meanings. (The dictionary does not include entries for people, places, or physical things.) Each term in the dictionary has been assigned to one of 30 developmental phases, based on its least complex possible meaning. The 30 developmental phases span first speech (in infancy) to the highest adult developmental phase Lectica has observed in human performance. Each phase represents 1/4 of a level (a, b, c, or d). Levels range from 5 (first speech) to 12 (the most complex level Lectica measures). Phase scores are named as follows: 09d, 10a, 10b, 10c, 10d, 11a, etc. Levels 10 through 12 are considered to be “adult levels,” but the earliest phase of level 10 is often observed in middle school students, and the average high school student performs in the 10b to 10c range.
In the following analysis, I’ll be identifying the highest-phase Lectical Dictionary terms in the President’s statements, showing each item’s phase. Where possible, I’ll also be looking at the form of thinking—black-and-white, if-then logic (10a–10d) versus shades-of-gray, nuanced logic (11a–11d)—these terms are embedded in.
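Mechanically, the first step is a dictionary lookup. Here is a minimal sketch, in Python, of what that lookup might look like. The toy dictionary below contains only the handful of terms and phase assignments mentioned in this article; the real Lectical Dictionary holds roughly 200,000 curated entries, and the published analyses involve trained judgment rather than simple string matching.

```python
# Toy sketch of the dictionary-based analysis described above. The real
# Lectical Dictionary holds ~200,000 curated terms; this version contains
# only the terms and phase assignments mentioned in this article.

LECTICAL_DICTIONARY = {
    "assets": "10c",
    "mental stability": "10b",
    "tremendous success": "10b",
    "successful": "09c",
    "elected": "09b",
    "excellent": "08a",
}

def highest_phase_terms(text):
    """Return dictionary terms found in `text`, highest phase first."""
    lowered = text.lower()
    hits = [(term, phase) for term, phase in LECTICAL_DICTIONARY.items()
            if term in lowered]
    # Zero-padded phase labels ("09b" < "10c") sort correctly as strings.
    return sorted(hits, key=lambda hit: hit[1], reverse=True)

tweet = ("throughout my life, my two greatest assets have been "
         "mental stability and being, like, really smart.")
print(highest_phase_terms(tweet))
# -> [('assets', '10c'), ('mental stability', '10b')]
```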
The President’s statements
The first two statements are tweets made on 01–05–2018.
“…throughout my life, my two greatest assets have been mental stability and being, like, really smart.”
The two most complex ideas in this statement are the notion of having personal assets (10c), and the notion of mental stability (10b).
“I went from VERY successful businessman, to top T.V. Star…to President of the United States (on my first try). I think that would qualify as not smart, but genius…and a very stable genius at that!”
This statement presents an argument for the President’s belief that he is not only smart, but a stable genius (10b-10c). The evidence offered consists of a list of accomplishments—being a successful (09c) businessman, being a top star, and being elected (09b) president. (Stable genius is not in the Lectical Dictionary, but it is a reference back to the previous notion of mental stability, which is in the dictionary at 10b.)
The kind of thinking demonstrated in this argument is simple if-then linear logic. “If I did these things, then I must be a stable genius.”
Later, at Camp David, when asked about these tweeted comments, President Trump explained further…
“I had a situation where I was a very excellent student, came out, made billions and billions of dollars, became one of the top business people, went to television and for 10 years was a tremendous success, which you’ve probably heard.”
This argument provides more detail about the President’s accomplishments—being an excellent (08a) student, making billions and billions of dollars, becoming a top business person, and being a tremendous success (10b) in television. Here the president demonstrates the same if-then linear logic observed in the second tweet, above.
The President has spoken about his intelligence on numerous occasions. Across all of the instances I’ve identified, he makes a strong connection between intelligence and concrete accomplishments — most often wealth, fame, or performance (for example in school or in negotiations). I could not find a single instance in which he attributed any part of these accomplishments to external or mitigating factors — for example, luck, being born into a wealthy family, having access to expert advice, or good employees. (I’d be very interested in seeing any examples readers can send my way!)
President Trump’s statements represent the same kind of logic and meaning-making my colleagues and I observed in the interview responses analyzed for the National Leaders series. President Trump’s logic in these statements has a simple if-then structure, and the most complex ideas he expresses are in the 10b to 10c range. As yet, I have seen no evidence of reasoning above this range.
The average score of a US adult is in the 10c–10d range.
What is complexity level? In my work, a complexity level is a point or range on a dimension called hierarchical complexity. In this article, I’m not going to explain hierarchical complexity, but I am going to try to illustrate—in plain(er) English—how complexity level relates to decision-making skills, workplace roles, and curricula. If you’re looking for a more scholarly definition, you can find it in our academic publications. The Shape of Development is a good place to begin.
My colleagues and I make written-response developmental assessments that are designed to support optimal learning and development. All of these assessments are scored for their complexity level on a developmental scale called the Lectical Scale. It’s a scale of increasing hierarchical complexity, with 13 complexity levels (0–12) that span birth through adulthood. On this scale, each level represents a way of seeing the world. Each new level builds upon the previous level, so thinking in a new complexity level is more complex and abstract than thinking at the previous level. The following video describes levels 5–12.
We have five ways of representing Lectical Level scores, depending on the context: (1) as whole levels (9, 10, 11, etc.), (2) as decimals (10.35, 11.13, etc.), (3) as 4 digit numbers (1035, 1113, etc.), (4) as 1/4 of a level phase scores (10a, 10b, 10c, 10d, 11a, etc.), and (5) as 1/2 of a level zone scores (early level 10, advanced level 10; early level 11, etc.).
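Since these five representations all encode the same underlying number, converting among them is straightforward. The sketch below assumes that phases a–d are successive quarter-level bands and that the early/advanced zones are the lower and upper halves of a level, which follows from the descriptions above; the exact boundary conventions are my assumption.

```python
# Converting among the five score representations described above.
# Assumption: phases a-d are successive quarter-level bands (e.g.,
# 10a = 1000-1024, ..., 10d = 1075-1099), and "early"/"advanced"
# zones are the lower and upper halves of a level.

def representations(four_digit):
    level, remainder = divmod(four_digit, 100)
    phase = "abcd"[remainder // 25]          # quarter-level phase
    zone = "early" if remainder < 50 else "advanced"
    return {
        "whole level": level,                 # e.g., 10
        "decimal": four_digit / 100,          # e.g., 10.46
        "four digit": four_digit,             # e.g., 1046
        "phase": f"{level}{phase}",           # e.g., 10b
        "zone": f"{zone} level {level}",      # e.g., early level 10
    }

print(representations(1046))
# {'whole level': 10, 'decimal': 10.46, 'four digit': 1046,
#  'phase': '10b', 'zone': 'early level 10'}
```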
Interpreting Lectical (complexity level) Scores
Lectical Scores are best thought of in terms of the specific skills, meanings, tasks, roles, or curricula associated with them. To illustrate, I’m including a table below that shows…
Lectical Score ranges for the typical complexity of coursework and workplace roles (Role demands & Complexity demands), and
some examples of decision making skills demonstrated in these Lectical Score ranges.
In the last bullet above, I highlighted the term skill, because we differentiate between skills and knowledge. Lectical Scores don’t represent what people know, they represent the complexity of the skill used to apply what they know in the real world. This is important, because there’s a big difference between committing something to memory and understanding it well enough to put it to work. For example, in the 1140–1190 range, the first skill mentioned in the table below is the “ability to identify multiple relations between nested variables.” The Lectical range in this row does not represent the range in which people are able to make this statement. Instead, it represents the level of complexity associated with actually identifying multiple relations between nested variables.
If you want to use this table to get an idea of how skills increase in complexity over time, I suggest that you begin by comparing skill descriptions in ranges that are far apart. For example, try comparing the skill description in the 945–995 range with the skill descriptions in the 1250–1300 range. The difference will be obvious. Then, work your way toward closer and closer ranges. It’s not unusual to have difficulty appreciating the difference between adjacent ranges—that generally takes time and training—but you’ll find it easy to see differences between ranges that are farther apart.
When using this table as a reference, please keep in mind that several factors play a role in the actual complexity demands of both coursework and roles. In organizations, size and sector matter. For example, there can be a difference as large as 1/2 of a level between freshman curricula in different colleges.
I hope you find this table helpful (even though it’s difficult to read). I’ll be using it as a reference in future articles exploring some of what my colleagues and I have learned by measuring and studying complexity level—starting with leader decision-making.
In a recent blog post—actually in several recent blog posts—I've been emphasizing the importance of building tomorrow's skills. These are the kinds of skills we all need to navigate our increasingly complex and changing world. While I may not agree that all of the top 10 skills listed in the World Economic Forum report (shown above) belong in a list of skills (creativity is much more than a skill, and service orientation is more of a disposition than a skill), the flavor of this list is generally in sync with the kinds of skills, dispositions, and behaviors required in a complex and rapidly changing world.
The "skills" in this list cannot be…
developed in learning environments focused primarily on correctness or in workplace environments that don't allow for mistakes; or
These "skills" are best developed through cycles of goal setting, information gathering, application, and reflection—what we call virtuous cycles of learning—or VCoLs. And they're best assessed with tests that focus on applications of skill in real-world contexts, like Lectical Assessments, which are based on a rich research tradition focused on the development of understanding and skill.
I’ve been auditing a very popular 4.5-star Coursera course called “Learning How to Learn.” It uses all of the latest research to help people improve their “learning skills.” Yet, even though the lectures in the course are interesting and the research behind the course appears to be sound, I find it difficult to agree that it is a course that helps people learn how to learn.
First, the tests used to determine how well participants have built the learning skills described in this course are actually tests of how well they have learned vocabulary and definitions. As far as I can tell, no skills are involved other than the ability to recall course content. This is problematic. The assumption that learning vocabulary and definitions builds skill is unwarranted. I believe we all know this. Who has not had the experience of learning something well enough to pass a test only to forget most of what they had learned shortly thereafter?
Second, the content in the tests at the end of the videos isn’t particularly relevant to the stated intention of the course. These tests require remembering (or scrolling back to) facts like “Many new synapses are formed on dendrites.” We do not need to learn this to become effective learners. The test item for which this is the correct answer is focused on an aspect of how learning works rather than how to learn. And although understanding how learning works might be a step toward learning how to learn, answering this question correctly doesn’t tell us anything about what the participant actually understands.
Third, if the course developers had used tests of skill—tests that asked participants to show how effectively they could apply the techniques described—we would be able to ask about the extent to which the course helps participants learn how to learn. Instead, the only way we have to evaluate the effectiveness of the course is through participant ratings and comments—how much people like it. I’m not suggesting that liking a course is unimportant, but it’s not a good way to evaluate its effectiveness.
Fourth, the course seems to be primarily concerned with fostering a kind of learning that helps people do better on tests of correctness. The underlying and unstated assumption seems to be that if you can do better on these tests, you have learned better. This assumption flies in the face of several decades of educational research, including our own [for example, 1, 2, 3]. Correctness is not adequate evidence of understanding or real-world skill. If we want to know how well people understand new knowledge, we must observe how they apply this knowledge in real-world contexts. If we want to evaluate their level of skill, we must observe how well they apply the skill in real-world contexts. In other words, a course—especially a course in learning how to learn—should be building useable skills that have value beyond the act of passing a test of correctness.
Fifth, the research behind this course can help us understand how learning works. At Lectica, we’ve used the very same information as part of the basis for our learning model, VCoL+7. But instead of using this knowledge to support the status quo—an educational system that privileges correctness over understanding and skill—we’re using it to build learning tools designed to ensure that learning in school goes beyond correctness to build deep understanding and robust skill.
For the vast majority of people, schooling is not an end in itself. It is preparation for life—preparation with tomorrow’s skills. It’s time we held our educational institutions accountable for ensuring that students know how to learn more than correct answers. Wherever their lives take them, they will do better if equipped with understanding and skill. Correctness is not enough.
FairTest; Mulholland, Quinn (2015). The case against standardized testing. Harvard Political Review, May 14.
Schwartz, M. S., Sadler, P. M., Sonnert, G., & Tai, R. H. (2009). Depth versus breadth: How content coverage in high school science courses relates to later success in college science coursework. Science Education, 93(5), 798–826.
During the 70s and 80s, I practiced midwifery. It was a great honor to be present at the births of over 500 babies and, in many cases, to follow them into childhood. Every single one of those babies was a joyful, driven, and effective "every moment" learner. Regardless of difficulty and pain, they all learned to walk, talk, interact with others, and manipulate many aspects of their environment. They needed few external rewards to build these skills—the excitement and suspense of striving seemed to be reward enough. I felt like I was observing the "life force" in action.
Unfortunately, as many of these children approached the third grade (age 8), I noticed something else—something deeply troubling. Many of the same children seemed to have lost much of this intrinsic drive to learn. For them, learning had become a chore motivated primarily by extrinsic rewards and punishments. Because this was happening primarily to children attending conventional schools (children receiving alternative instruction seemed to be exempt), it appeared that something about schooling was depriving many children of the fundamental human drive required to support a lifetime of learning and development—a drive that looked to me like a key source of happiness and fulfillment.
Understanding the problem
Following my midwifery career, I flirted briefly with a career in advertising, but by the early 90s I was back in school—in a Ph.D. program in U.C. Berkeley's Graduate School of Education—where I found myself observing the same pattern I'd observed as a midwife. Both the research and my own lab experience exposed the early loss of students' natural love of learning. My concern was only increased by the newly emerging trend toward high-stakes multiple-choice testing, which my colleagues and I saw as a further threat to children's natural drive to learn.
Most of the people I've spoken to about this problem have agreed that it's a shame, but few have seen it as a problem that can be solved, and many have seen it as an inevitable consequence of either mass schooling or simple maturation. But I knew it was not inevitable. Children educated in a range of alternative environments did not appear to lose their drive to learn. Additionally, above-average students in conventional schools appeared to be more likely to retain their love of learning.
I set out to find out why—and ended up on a long journey toward a solution.
How learning works
First, I needed to understand how learning works. At Berkeley, I studied a wide variety of learning theories in several disciplines, including developmental theories, behavioral theories, and brain-based theories. I collected a large database of longitudinal interviews and submitted them to in-depth analysis, looked closely at the relation between testing and learning, and studied psychological measurement, all in the interest of finding a way to support children's growth while reinforcing their love of learning.
My dissertation—which won awards from both U.C. Berkeley and the American Psychological Association—focused on the development of people's conceptions of learning from age 5 through 85, and how this kind of knowledge could be used to measure and support learning. In 1998, I received $500,000 from the Spencer Foundation to further develop the methods designed for this research. Some of my areas of expertise are human learning and development, psychometrics, metacognition, moral education, and research methods.
In the simplest possible terms, what I learned in 5 years of graduate school is that the human brain is designed to drive learning, and that preserving that natural drive requires 5 ingredients:
a safe environment that is rich in learning opportunities and healthy human interaction,
a teacher who understands each child's interests and level of tolerance for failure,
a mechanism for determining "what comes next"—what is just challenging enough to allow for success most of the time, but not all of the time (see the sketch after this list),
instant actionable feedback, and
the opportunity to integrate new knowledge or skills into each learner's existing knowledge network well enough to make it useable before pushing instruction to the next level. (We call this building a "robust knowledge network"—the essential foundation for future learning.)*
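To make ingredient 3 concrete, here is a toy sketch of one way a "what comes next" mechanism could work. The 80% success target and all of the names below are hypothetical illustrations of the principle, not Lectica's actual algorithm.

```python
# Toy sketch of ingredient 3: choosing "what comes next" so the learner
# succeeds most (but not all) of the time. The names and the 80% target
# are hypothetical illustrations, not Lectica's algorithm.

def next_task(tasks, success_rates, target=0.80):
    """Pick the task whose observed success rate is closest to the target.

    tasks: task ids, ordered easiest to hardest.
    success_rates: task id -> this learner's observed success rate.
    """
    candidates = [t for t in tasks if t in success_rates]
    if not candidates:
        return tasks[0]  # no data yet: start with the easiest task
    return min(candidates, key=lambda t: abs(success_rates[t] - target))

tasks = ["t1", "t2", "t3", "t4"]
observed = {"t1": 0.98, "t2": 0.85, "t3": 0.55}
print(next_task(tasks, observed))
# -> "t2": challenging, but the learner still mostly succeeds
```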
Identifying the solution
Once we understood what learning should look like, we needed to decide where to intervene. The answer, when it came, was a complete surprise. Understanding what comes next—something that can only be learned by measuring what a student understands now—was an integral part of the recipe for learning. This meant that testing—which we originally saw as an obstacle to robust learning—was actually the solution—but only if we could build tests that would free students to learn the way their brains are designed to learn. These tests would have to help teachers determine "what comes next" (ingredient 3) and provide instant actionable feedback (ingredient 4), while rewarding them for helping students build robust knowledge networks (ingredient 5).
Unfortunately, conventional standardized tests were focused on "correctness" rather than robust learning, and none of them were based on the study of how targeted concepts and skills develop over time. Moreover, they were designed not to support learning, but rather to make decisions about advancement or placement, based on how many correct answers students were able to provide relative to other students. Because this form of testing did not meet the requirements of our learning recipe, we'd have to start from scratch.
Developing the solution
We knew that our solution—reinventing educational testing to serve robust learning—would require many years of research. In fact, we would be committing to possible decades of effort without a guaranteed result. It was the vision of a future educational system in which all children retained their inborn drive for learning that ultimately compelled us to move forward.
To reinvent educational testing, we needed to:
make a deep study of precisely how children build particular knowledge and skills over time in a wide range of subject areas (so these tests could accurately identify "what comes next");
make tests that determine how deeply students understand what they have learned—how well they can use it to address real-world issues or problems (requires that students show how they are thinking, not just what they know—which means written responses with explanations); and
produce formative feedback and resources designed to foster "robust learning" (build robust knowledge networks).
Here's what we had to invent:
A learning ruler (building on Commons and Fischer);
A method for studying how students learn tested concepts and skills (refining the methods developed for my dissertation);
A human scoring system for determining the level of understanding exhibited in students' written explanations (building upon Commons' and Fischer's methods, refining them until measurements were precise enough for use in educational contexts); and
An electronic scoring system, so feedback and resources could be delivered in real time.
It took over 20 years (1996–2016), but we did it! And while we were doing it, we conducted research. In fact, our assessments have been used in dozens of research projects, including a $25 million study of literacy conducted at Harvard, and numerous Ph.D. dissertations—with more on the way.
What we've learned
We've learned many things from this research. Here are some that took us by surprise:
Students in schools that focus on building deep understanding graduate seniors who are up to 5 years ahead (on our learning ruler) of students in schools that focus on correctness (2.5 to 3 years after taking socioeconomic status into account).
Students in schools that foster robust learning develop faster and continue to develop longer (into adulthood) than students in schools that focus on correctness.
On average, students in schools that foster robust learning produce more coherent and persuasive arguments than students in schools that focus on correctness.
On average, students in our inner-city schools, which are the schools most focused on correctness, stop developing (on our learning ruler) in grade 10.
The average student who graduates from a school that strongly focuses on correctness is likely, in adulthood, to (1) be unable to grasp the complexity and ambiguity of many common situations and problems, (2) lack the mental agility to adapt to changes in society and the workplace, and (3) dislike learning.
From our perspective, these results point to an educational crisis that can best be addressed by allowing students to learn as their brains were designed to learn. Practically speaking, this means providing learners, parents, teachers, and schools with metrics that reward and support teaching that fosters robust learning.
Where we are today
Lectica has created the only metrics that meet all of these requirements. Our mission is to foster greater individual happiness and fulfillment while preparing students to meet 21st century challenges. We do this by creating and delivering learning tools that encourage students to learn the way their brains were designed to learn. And we ensure that students who need our learning tools the most get them first by providing free subscriptions to individual teachers everywhere.
To realize our mission, we organized as a nonprofit. We knew this choice would slow our progress (relative to organizing as a for-profit and welcoming investors), but it was the only way to guarantee that our true mission would not be derailed by other interests.
Thus far, we've funded ourselves with work in the for-profit sector and income from grants. Our background research is rich, our methods are well-established, and our technology works even better than we thought it would. Last fall, we completed a demonstration of our electronic scoring system, CLAS, a novel technology that learns from every single assessment taken in our system.
The groundwork has been laid, and we're ready to scale. All we need is the platform that will deliver the assessments (called DiscoTests), several of which are already in production.
After 20 years of high-stakes testing, students and teachers need our solution more than ever. We feel compelled to scale as quickly as possible, so we can begin the process of reinvigorating today's students' natural love of learning and ensure that the next generation of students never loses theirs. Lectica's story isn't finished. Instead, we find ourselves on the cusp of a new beginning!
A final note: There are many benefits associated with our approach to assessment that were not mentioned here. For example, because the assessment scores are all calibrated to the same learning ruler, students, teachers, and parents can easily track student growth. Even better, our assessments are designed to be taken frequently and to be embedded in low-stakes contexts. For grading purposes, teachers are encouraged to focus on growth over time rather than specific test scores. This way of using assessments pretty much eliminates concerns about cheating. And finally, the electronic scoring system we developed is backed by the world's first "taxonomy of learning," which also serves many other educational and research functions. It's already spawned a developmentally sensitive spell-checker! One day, this taxonomy of learning will be robust enough to empower teachers to create their own formative assessments on the fly.
Adaptive learning technologies are touted as an advance in education and a harbinger of what's to come. But although we at Lectica agree that adaptive learning has a great deal to offer, we have some concerns about its current limitations. In an earlier article, I raised the question of how well one of these platforms, Knewton, serves "robust learning"—the kind of learning that leads to deep understanding and usable knowledge. Here are some more general observations.
The great strength of adaptive learning technologies is that they allow students to learn at their own pace. That's big. It's quite enough to be excited about, even if it changes nothing else about how people learn. But in our excitement about this advance, the educational community is in danger of ignoring important shortcomings of these technologies.
First, adaptive learning technologies are built on adaptive testing technologies. Today, these testing technologies are focused on "correctness." Students are moved to the next level of difficulty based on their ability to get correct answers. This is what today's testing technologies measure best. However, although being able to produce or select correct answers is important, it is not an adequate indication of understanding. And without real understanding, knowledge is not usable and can't be built upon effectively over the long term.
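To make the point concrete, here is a deliberately simplified sketch of the correctness-driven adaptation described above. Real adaptive tests typically use item response theory rather than a simple staircase, but the driver is the same: right and wrong answers.

```python
# Minimal sketch of correctness-driven adaptation (a simple up/down
# staircase). This is my simplification for illustration, not any
# particular product's algorithm.

def adapt_difficulty(level, answered_correctly, min_level=1, max_level=10):
    """Move the student up on a correct answer, down on an incorrect one."""
    if answered_correctly:
        return min(level + 1, max_level)
    return max(level - 1, min_level)

level = 5
for correct in [True, True, False, True]:
    level = adapt_difficulty(level, correct)
print(level)
# -> 7: the trajectory reflects correctness alone, with no measure
# of how (or how deeply) the student is thinking.
```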
Second, today's adaptive learning technologies are focused on a narrow range of content—the kind of content psychometricians know how to build tests for—mostly math and science (with an awkward nod to literacy). In public education during the last 20 years, we've experienced a gradual narrowing of the curriculum, largely because of high stakes testing and its narrow focus. Today's adaptive learning technologies suffer from the same limitations and are likely to reinforce this trend.
Third, the success of adaptive learning technologies is measured with standardized tests of correctness. Higher scores will help more students get into college—after all, colleges use these tests to decide who will be admitted. But we have no idea how well higher scores on these tests translate into life success. Efforts to demonstrate the relevance of educational practices are few and far between. And notably, there are many examples of highly successful individuals who were poor players in the education game—including several of the world's most productive and influential people.
Fourth, some proponents of online adaptive learning believe that it can and should replace (or marginalize) teachers and classrooms. This is concerning. Education is more than a process of accumulating facts. For one thing, it plays an enormous role in socialization. Good teachers and classrooms offer students opportunities to build knowledge while learning how to engage and work with diverse others. Great teachers catalyze optimal learning and engagement by leveraging students' interests, knowledge, skills, and dispositions. They also encourage students to put what they're learning to work in everyday life—both on their own and in collaboration with others.
Lectica has a strong interest in adaptive learning and the technologies that deliver it. We anticipate that over the next few years, our assessment technology will be integrated into adaptive learning platforms to help expand their subject matter and ensure that students are building robust, usable knowledge. We will also be working hard to ensure that these platforms are part of a well-thought out, evidence-based approach to education—one that fosters the development of tomorrow's skills—the full range of skills and knowledge required for success in a complex and rapidly changing world.
There are four keys to optimizing learning and development and ensuring that it continues over a lifetime.
Don’t cram content. Learning doesn’t work optimally when it is rushed or when learners are over-stressed. In Finland, students only go to school three 6-hour days a week, rarely have homework, and do better on PISA than students anywhere else in the world. (Unfortunately, PISA primarily measures correctness, but it’s the best international metric we have at present.) Their educational system is focused on building students’ knowledge networks. Students don’t move on to the next level until they master the current level. The Finns have figured out what our research shows—stuffing content has the long-term effect of slowing or halting development, while a focus on building knowledge networks leads to a steeper learning trajectory and a lifetime of learning and development.
Focus on the network. To learn very large quantities of information, we must effectively recruit System 1 (the fast unconscious brain). System 1 makes associations. (Think of a neural network.) When we learn content through VCoL, we network System 1, connecting new content to already networked content in a way that creates a foundation for what comes next. This does not happen robustly without VCoL, which builds and solidifies the network through application/practice and reflection. System 1 can handle vast amounts of information and processes it rapidly. It serves us well when we learn well.
Make reflection a part of every learning moment. People cannot reason well about things they don’t understand well. When we foster deep understanding through VCoL (and the +7 skills), we recruit System 2 (the slow reasoning brain) to consciously shape the creation and modification of connections in System 1—ensuring that our network of knowledge is growing in a way that mirrors “reality.” The constant practice of analytical and reflective skills not only builds a robust network, but also increases our capacity for making reasonable connections and inferences and enhances our mental agility and capacity for making useful intuitive “leaps.” We learn to think by thinking—and we think better when we have a robust knowledge network to rely on.
Educate the whole person. We believe that education should focus on the development of the entire human being. This means supporting the development of competent, compassionate, aware, and attentive human beings who work well with others. A good way to develop these qualities is through embedded practices that foster interpersonal awareness and skill, such as collaborative or shared learning. These practices provide another benefit as well. They tend to excite emotions that are known to enhance learning.
During the last 20 years—since high stakes testing began to take hold—public school curricula have undergone a massive transformation. Standards have pushed material that was once taught in high school down into the 3rd and 4th grade, and the amount of content teachers are expected to cover each year has increased steadily. The theory behind this trend appears to be that learning more content and learning it earlier will help students develop faster.
But is this true? Is there any evidence at all that learning more content and learning it earlier produces more rapid development? If so, I haven't seen it.
In fact, our evidence points to the opposite conclusion. Learning more and learning it earlier may actually be interfering with the development of critical life skills—like those required for making good decisions in real-life contexts. As the graph below makes clear, students in schools that emphasize covering required content do not develop as rapidly as students in schools that focus on fostering deep understanding—even though learning for understanding generally takes more time than learning something well enough to "pass the test."
What is worse, we're finding that the average student in schools with the greatest emphasis on covering required content appears to stop developing by the end of grade 10, with an average score of 10.1. This is the same score received by the average 6th grader in schools with the greatest emphasis on fostering deep understanding.
The graphs in this post are based on data from 17,755 LRJA assessments. The LRJA asks test-takers to respond to a complex real-life dilemma. They are prompted to explore questions about:
finding, creating, and evaluating information and evidence,
perspectives, persuasion, and conflict resolution,
when and if it's possible to be certain, and
the nature of facts, truth, and reality.
Students were in grades 4-12, and attended one or more of 56 schools in the United States and Canada.
The graphs shown above represent two groups of schools—those with students who received the highest scores on the LRJA and those with students who received the lowest scores. These schools differed from one another in two other ways. First, the highest performing schools were all private schools*. Most students in these schools came from upper middle SES (socio-economic status) homes. The lowest performing schools were all public schools primarily serving low SES inner city students.
The second way in which these schools differed was in the design of their curricula. The highest performing schools featured integrated curricula with a great deal of practice-based learning and a heavy emphasis on fostering understanding and real-world competence. All of the lowest performing schools featured standards-focused curricula with a strong emphasis on learning the facts, formulas, procedures, vocabulary, and rules targeted by state tests.
Based on the results of conventional standardized tests, we expected most of the differences between student performances on the LRJA in these two groups of schools to be explained by SES. But this was not the case. Private schools with more conventional curricula and high performing public schools serving middle and upper middle SES families did indeed outperform the low SES schools, but as shown in the graph below, by grade 12, their students were still about 2.5 years behind students in the highest performing schools. At best, SES explains only about 1/2 of the difference between the best and worst schools in our database. (For more on this, see the post, "Does a focus on deep understanding accelerate growth?")
By the way, the conventional standardized test scores of students in this middle group, despite their greater emphasis on covering content, were no better than the conventional standardized test scores of students in the high performing group. Focusing on deep understanding appears to help students develop faster without interfering with their ability to learn required content.
This will not be our last word on the subject. As we scale our K-12 assessments, we'll be able to paint an increasingly clear picture of the developmental impact of a variety of curricula.
Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.
*None of these schools pre-selected their students based on test scores.