There are four keys to optimizing learning and development and ensuring that it continues over a lifetime.
Don’t cram content. Learning doesn’t work optimally when it is rushed or when learners are over-stressed. In Finland, students only go to school three 6-hour days a week, rarely have homework, and do better on PISA than students anywhere else in the world. (Unfortunately, PISA primarily measures correctness, but it’s the best international metric we have at present.) Their educational system is focused on building students’ knowledge networks. Students don’t move on to the next level until they master the current level. The Finns have figured out what our research shows—stuffing content has the long-term effect of slowing or halting development, while a focus on building knowledge networks leads to a steeper learning trajectory and a lifetime of learning and development.
Focus on the network. To learn very large quantities of information, we must effectively recruit System 1 (the fast unconscious brain). System 1 makes associations. (Think of a neural network.) When we learn content through VCoL, we network System 1, connecting new content to already networked content in a way that creates a foundation for what comes next. This does not happen robustly without VCoL, which builds and solidifies the network through application/practice and reflection. System 1 can handle vast amounts of information and processes it rapidly. It serves us well when we learn well.
Make reflection a part of every learning moment. People cannot reason well about things they don’t understand well. When we foster deep understanding through VCoL (and the +7 skills), we recruit System 2 (the slow reasoning brain) to consciously shape the creation and modification of connections in System 1—ensuring that our network of knowledge is growing in a way that mirrors “reality.” The constant practice of analytical and reflective skills not only builds a robust network, but also increases our capacity for making reasonable connections and inferences and enhances our mental agility and capacity for making useful intuitive “leaps.” We learn to think by thinking—and we think better when we have a robust knowledge network to rely on.
Educate the whole person. We believe that education should focus on the development of the entire human being. This means supporting the development of competent, compassionate, aware, and attentive human beings who work well with others. A good way to develop these qualities is through embedded practices that foster interpersonal awareness and skill, such as collaborative or shared learning. These practices provide another benefit as well. They tend to excite emotions that are known to enhance learning.
During the last 20 years—since high-stakes testing began to take hold—public school curricula have undergone a massive transformation. Standards have pushed material that was once taught in high school down into grades 3 and 4, and the amount of content teachers are expected to cover each year has increased steadily. The theory behind this trend appears to be that learning more content and learning it earlier will help students develop faster.
But is this true? Is there any evidence at all that learning more content and learning it earlier produces more rapid development? If so, I haven't seen it.
In fact, our evidence points to the opposite conclusion. Learning more and learning it earlier may actually be interfering with the development of critical life skills—like those required for making good decisions in real-life contexts. As the graph below makes clear, students in schools that emphasize covering required content do not develop as rapidly as students in schools that focus on fostering deep understanding—even though learning for understanding generally takes more time than learning something well enough to "pass the test."
What is worse, we're finding that the average student in schools with the greatest emphasis on covering required content appears to stop developing by the end of grade 10, with an average score of 10.1. This is the same score received by the average 6th grader in schools with the greatest emphasis on fostering deep understanding.
The graphs in this post are based on data from 17,755 LRJA assessments. The LRJA asks test-takers to respond to a complex real-life dilemma. They are prompted to explore questions about:
finding, creating, and evaluating information and evidence,
perspectives, persuasion, and conflict resolution,
when and if it's possible to be certain, and
the nature of facts, truth, and reality.
Students were in grades 4-12, and attended one or more of 56 schools in the United States and Canada.
The graphs shown above represent two groups of schools—those with students who received the highest scores on the LRJA and those with students who received the lowest scores. These schools differed from one another in two other ways. First, the highest performing schools were all private schools*. Most students in these schools came from upper middle SES (socio-economic status) homes. The lowest performing schools were all public schools primarily serving low SES inner city students.
The second way in which these schools differed was in the design of their curricula. The highest performing schools featured integrated curricula with a great deal of practice-based learning and a heavy emphasis on fostering understanding and real-world competence. All of the lowest performing schools featured standards-focused curricula with a strong emphasis on learning the facts, formulas, procedures, vocabulary, and rules targeted by state tests.
Based on the results of conventional standardized tests, we expected most of the differences between student performances on the LRJA in these two groups of schools to be explained by SES. But this was not the case. Private schools with more conventional curricula and high performing public schools serving middle and upper middle SES families did indeed outperform the low SES schools, but as shown in the graph below, by grade 12, their students were still about 2.5 years behind students in the highest performing schools. At best, SES explains only about 1/2 of the difference between the best and worst schools in our database. (For more on this, see the post, "Does a focus on deep understanding accelerate growth?")
By the way, the conventional standardized test scores of students in this middle group, despite their greater emphasis on covering content, were no better than the conventional standardized test scores of students in the high performing group. Focusing on deep understanding appears to help students develop faster without interfering with their ability to learn required content.
This will not be our last word on the subject. As we scale our K-12 assessments, we'll be able to paint an increasingly clear picture of the developmental impact of a variety of curricula.
Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.
*None of these schools pre-selected their students based on test scores.
Last week, I received an inquiry about the relation between flow states (Csikszentmihalyi & colleagues) and the natural dopamine/opioid learning cycle that undergirds Lectica's learning model, VCoL+7. The short answer is that flow and the natural learning cycle have a great deal in common. The primary difference appears to be that flow can occur during almost any activity, while the natural learning cycle is specifically associated with learning. Also, flow has been associated with neurochemicals we haven't (yet?) incorporated in our conception of the natural learning cycle. We'll be tracking the literature to see if research on these neurochemicals suggests modifications.
The similarities between flow states and the dopamine/opioid learning cycle are numerous. Both involve dopamine (striving & focus) and opioids (reward). And researchers who have studied the role of flow in learning even use the term "Goldilocks Zone" to describe students' learning sweet-spot—the place where interest and challenge are just right to stimulate the release of dopamine, and where success happens just often enough to trigger the release of opioids (which stimulate the desire for more learning, to start the cycle again).
Since psychologist Mihaly Csikszentmihalyi began his studies of flow, it has been linked to feelings of happiness and euphoria, and to peak performance among workers, scientists, athletes, musicians, and many others. Flow has also been shown to deepen learning and support interest.
Flow is gradually making its way into the classroom. It's featured on UC Berkeley's Greater Good site in several informative articles designed to help teachers bring flow into the classroom.
"Teachers want their kids to find “flow,” that feeling of complete immersion in an activity, where we’re so engaged that our worries, sense of time, and self-consciousness seem to disappear."
Advice for stimulating flow is similar to our advice for teaching and learning in the Goldilocks Zone, and includes suggestions like the following:
Challenge kids—but not too much.
Make assignments relevant to students’ lives.
Encourage choice, feed interest.
Set clear goals (and give feedback along the way).
Offer hands-on activities.
If you've been following our work, these suggestions should sound very familiar.
All in all, the flow literature provides additional support for the value of our mission to deliver learning tools that help teachers help students learn in the zone.
I'm frequently asked about the relation between transformative learning and what we, at Lectica, call robust, embodied learning.
According to Mezirow, there are two kinds of transformative learning: learning that transforms one's point of view and learning that transforms a habit of mind.
Transforming a point of view: This kind of transformation occurs when we have an experience that causes us to reflect critically on our current conceptions of a situation, individual, or group.
Transforming a habit of mind: This more profound kind of transformation occurs when we become critically reflective of a generalized bias in the way we view situations, people, or groups. It is less common and more difficult than a transformation of point of view, and typically occurs only after several transformations in point of view.
Embodied learning occurs through natural and learned virtuous cycles in which we take in new information, apply it in some way, and reflect on outcomes. The natural cycles occur in a process Piaget referred to as reflective abstraction. The learned process, which we call VCoL (for virtuous cycle of learning) deliberately reproduces and amplifies elements of this unconscious process, incorporating conscious critical reflection as part of every learning cycle. These acts of critical reflection reinforce connections that are affirmed (or create new connections) and prune connections that are negated. Virtuous learning cycles, both conscious and unconscious, incrementally build a mental network that not only connects ideas, but also different parts of the brain, including those involved in motivation and emotion.
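The reinforce-and-prune dynamic described here can be illustrated with a toy model. This is my own sketch, not part of Lectica's model: the connection names, learning rate, and pruning threshold are all hypothetical.

```python
# Toy model of a virtuous learning cycle (VCoL): each cycle applies
# knowledge, reflects on the outcome, and then reinforces or weakens
# the relevant connection. All numbers here are illustrative.

def vcol_cycle(network, connection, outcome_affirmed, rate=0.2):
    """Update one connection's strength after a reflect step."""
    strength = network.get(connection, 0.0)
    if outcome_affirmed:
        # Reflection affirms the connection: strengthen it toward 1.0.
        strength += rate * (1.0 - strength)
    else:
        # Reflection negates the connection: weaken it toward 0.0.
        strength -= rate * strength
    if strength < 0.05:        # connections that stay weak are pruned
        network.pop(connection, None)
    else:
        network[connection] = strength
    return network

network = {}
for _ in range(5):             # five cycles of apply-and-reflect
    vcol_cycle(network, ("fraction", "division"), outcome_affirmed=True)
print(network)
```

Repeated affirmed cycles drive a connection asymptotically toward full strength, while repeated negations shrink it until it drops below the pruning threshold and disappears—a crude analogue of the reinforce/prune process sketched above.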
Learning through intentional virtuous cycles ensures that our mental network is constantly being challenged with new information, so alterations to point of view are possible any time we receive information that doesn't easily fit into the existing network. But this kind of learning is also part of a larger developmental process in which our mental networks undergo major reorganizations called hierarchical integrations that produce fundamental qualitative changes in the way we think.
Here are some of the similarities I see between transformative learning and our learning model:
Both are based on developmental mechanisms (reflective abstraction, assimilation, accommodation, hierarchical integration, chunking, qualitative change, and emergence) that were the hallmarks of Piagetian and Neo-Piagetian theory. The jargon and applications may be different, but the fundamental ideas are very similar.
Both are strongly influenced by the work of Habermas (communicative action) and Freire (critical pedagogy).
Both lead to a pedagogy that emphasizes the role of critical reflection and perspectival awareness in high quality learning.
Both emphasize the involvement of the whole person in learning.
Both transcend conventional approaches to learning.
Here are some differences I've identified so far:
Terminology: The two traditions describe similar ideas with different vocabularies. Overcoming this problem requires pretty active perspective seeking!
Role of critical reflection: For us, critical reflection is both a habit of mind to cultivate (In VCoL+7, it's one of the +7 skills) and a step in every (conscious) learning cycle (the "reflect" step). I'm not sure how this is viewed in Transformative learning circles.
Target: We have two learning/development targets, one is meta, the other is incremental. Our meta target is long-term development, including the major transformations that take place between levels in our developmental model. Our incremental target is the micro-learning or micro-development that prepares our neural networks for major transformations.
Measurement: As far as I can tell, the metrics used to study transformative learning are primarily focused on the subjective experience of transformation. We take a different approach by measuring the way in which learning experiences change our conceptions or the way in which we approach real-world problems. We don't ask what people think or what they learned, we ask how they think with what they learned.
We've argued for years that you can't really learn critical thinking by taking a critical thinking course. Critical thinking is a skill that develops through reflective practice (VCoL). Recently, a group of Stanford scientists reported that a reflective practice approach not only works in the short term, but it produces "sticky" results. Students who are routinely prompted to evaluate data get better at evaluating data—and keep evaluating it even after the prompts are removed.
Lectica is the only test developer that creates assessments that measure and support this kind of learning.
For many years, we’ve been arguing that learning is best viewed as a process of creating networks of connections. We’ve defined robust learning as a process of building knowledge networks that are so well connected they allow us to put knowledge to work in a wide range of contexts. And we’ve described embodied learning—a way of learning that involves the whole person and is much more than the memorization of facts, terms, definitions, rules, or procedures.
New evidence from the neurosciences provides support for this way of thinking about learning. According to research recently published in Nature, people with more connected brains—specifically those with more connections across different parts of the brain—demonstrate greater intelligence, including better problem-solving skills, than those with less connected brains. And this is only one of several research projects that report similar findings.
Lectica exists because we believe that if we really want to support robust, embodied learning, we need to measure it. Our assessments are the only standardized assessments that have been deliberately developed to measure and support this kind of learning.
During the last 20 years, children in our public schools have been required to learn important concepts earlier and earlier. This is supposed to speed up learning. But we, at Lectica, are finding that when students try to learn complex ideas too early, they don’t seem to find those ideas useful.
For example, let's look at the terms reliable, credible, and valid, which refer to different aspects of information quality. These terms used to be taught in high school, but are now taught as early as grade 3. We looked at how these terms were used by over 15,000 students in grades 4-12. These students were asked to write about what they would need to know in order to trust information from someone making a claim like, "Violent television is bad for children."
As you can see in the following graph, until grade 10, fewer than 10% of these students used the terms at all—even though the terms had been taught by grade 5. What is more, our research shows that when these terms are used before Lectical Level 10 (see video about Lectical Levels, below), they mean little more than “correct” or “true”, and it's not until well into Lectical Level 10 that people use these terms in a way that clearly shows they have distinct meanings.
Children aren't likely to find the words reliable, valid, or credible useful until they understand why some information is better than other information. This means they need to understand concepts like motivation, bias, scientific method, and expertise. We can get 5th graders to remember that they should apply the word "valid" instead of "true" when presented with a specific stimulus, but this is not the same as understanding.
Reliable, valid, and credible aren't the only words taught in the early grades that students don't find useful. We have hundreds of examples in our database.
Learning in the zone
The pattern above is what we see when students are taught ideas they aren't yet prepared to understand. When children learn ideas they're ready for—ideas that are in "the zone"—the pattern looks very different. Under these conditions, the use of a new word quickly goes from zero to frequent (or even constant, as parents of 4-year-olds know only too well). If you're a parent you probably remember when your child first learned the words "why," "secret," or "favorite." Suddenly, questioning why, telling and keeping secrets, or having favorites became the focus of many conversations. Children "play hard" with ideas they're prepared to understand. This rapidly integrates these new ideas into their existing knowledge networks. But they can't do this with an idea they aren't ready for, because they don't yet have a knowledge network that's ready to receive it.
The curve shown in the figure above shows what it would look like if these terms were taught when students were more prepared with knowledge networks that were ready to receive them. Acquisition would be relatively rapid, and students would find the terms more useful because they would be more likely to grasp aspects of their distinct meanings. For example, they might choose to use the term "reliable" rather than "factual" because they understand that these two terms mean different things.
If you're a parent, think about how many times your child is asked to learn something that isn’t yet useful. Consider the time invested, and ask yourself if that time was well spent.
An ideal educational assessment strategy—represented above in the assessment triangle—includes three indicators of learning—correctness (content knowledge), complexity (developmental level of understanding), and coherence (quality of argumentation). Lectical Assessments focus primarily on two areas of the triangle—complexity and coherence. Complexity is measured with the Lectical Assessment System, and coherence is measured with a set of argumentation rubrics focused on mechanics, logic, and persuasiveness. We do not focus on correctness, primarily because most assessments already target correctness.
At the center of the assessment triangle is a hazy area. This represents the Goldilocks Zone—the range in which the difficulty of learning tasks is just right for a particular student. To diagnose the Goldilocks Zone, educators evaluate correctness, coherence, and complexity, plus a given learner’s level of interest and tolerance for failure.
When educators work with Lectical Assessments, they use the assessment triangle to diagnose students’ learning needs. Here are some examples:
Level of skill (low, average, high) relative to expectations
This student has relatively high complexity and correctness scores, but his performance is low in coherence. Because lower coherence scores suggest that he has not yet fully integrated his existing knowledge, he is likely to benefit most from participating in interesting activities that require applying existing knowledge in relevant contexts (using VCoL).
This student’s scores are high relative to expectations. Her knowledge appears to be well integrated, but the low correctness suggests that there are gaps in her content knowledge relative to targeted content. Here, we would suggest filling in the missing content knowledge in a way that engages the learner and allows her to integrate it into her well-developed knowledge network.
The scores received by this student are high for correctness but low for complexity and coherence. This pattern suggests that the student is memorizing content without integrating it effectively into their knowledge network—and may have been doing this for some time. This student is most likely to benefit from applying their existing content knowledge in personally relevant contexts (using VCoL) until their coherence and complexity scores catch up with their correctness scores.
The scores received by this student are high for correctness, complexity, and coherence. This pattern suggests that the student has a high level of proficiency. Here, we would suggest introducing new knowledge that’s just challenging enough to keep her in her personal Goldilocks zone.
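The four diagnostic patterns above can be summarized in a small decision sketch. This is an illustrative reconstruction, not Lectica's actual diagnostic logic; the score labels and recommendation strings are hypothetical simplifications of the examples in this post.

```python
# Illustrative sketch of diagnosing learning needs from the assessment
# triangle (correctness, complexity, coherence). Scores are given
# relative to expectations as "low" or "high". All logic is hypothetical.

def recommend(correctness, complexity, coherence):
    """Map a pattern of triangle scores to a learning recommendation."""
    if coherence == "low" and complexity == "high" and correctness == "high":
        return "apply existing knowledge in relevant contexts (VCoL)"
    if correctness == "low" and complexity == "high" and coherence == "high":
        return "fill content gaps and integrate them into the network"
    if correctness == "high" and complexity == "low" and coherence == "low":
        return "apply memorized content until complexity and coherence catch up"
    if correctness == "high" and complexity == "high" and coherence == "high":
        return "introduce new knowledge in the Goldilocks Zone"
    return "evaluate further: mixed pattern"

# First example above: high complexity and correctness, low coherence.
print(recommend("high", "high", "low"))
```

In practice the diagnosis also weighs the learner's interest and tolerance for failure, which a lookup like this cannot capture; the sketch only shows how the three triangle scores combine into a pattern.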
The assessment triangle helps educators optimize learning by ensuring that students are always learning in the Goldilocks Zone. This is a good thing, because students who spend more time in the Goldilocks Zone not only enjoy learning more, they learn better and faster.
This morning, while doing some research on leader development, I googled “vertical leadership” and “coaching.” The search returned 466,000 results. Wow. Looks like vertical development is hot in the coaching world!
Two hours later, after scanning dozens of web sites, I was left with the following impression:
Vertical development occurs through profound, disruptive, transformative insights that alter how people see themselves, improve their relationships, increase happiness, and help them cope better with complex challenges. The task of the coach is to set people up for these experiences. Evidence of success is offered through personal stories of transformation.
But decades of developmental research contradict this picture. This body of evidence shows that the kind of transformative experience promised on these web sites is uncommon. And when it does occur, it rarely produces a fairytale ending. In fact, profound disruptive insights can easily have negative consequences, and most experiences that people refer to as transformational are really just momentary insights. They may feel profound in the moment, but they don’t actually usher in any measurable change at all, much less transformative change.
The fact is, insight is fairly easy, but growth is slow, and change is hard. Big change is really, really hard. And some things, like many dispositions and personality traits, are virtually impossible to change. This isn’t an opinion based on personal experience, it’s a conclusion based on evidence from hundreds of longitudinal developmental studies conducted during the last 70 years. (Check out our articles page for some of this evidence.)
The good news is, you don’t have to work on transforming yourself to become a better leader. All you need to do is engage in daily practices that incrementally, through a learning cycle called VCoL, help you build the skills and habits of a good leader. Over the long term, this will change you, because it will alter the quality of your interactions with others, and that will change your mind—profoundly.
Like the items in Lectical Assessments, PISA items involve real-world problems. PISA developers also claim, as we do here at Lectica, that their items measure how knowledge is applied. So, why do we persist in claiming that Lectical Assessments and assessments like PISA measure different things?
Part of the answer lies in questions about what’s actually being measured, and in the meaning of terms like “real world problems” and “how knowledge is applied.” I’ll illustrate with an example from Take the test: sample questions from OECD’s PISA assessments.
One of the reading comprehension items in “Take the test” involves a short story about a woman who is trapped in her home during a flood. Early in the story, a hungry panther arrives on her porch. The woman has a gun, which she keeps at her side as long as the panther is present. At first, it seems that she will kill the panther, but in the end, she offers it a ham hock instead.
What is being measured?
There are three sources of difficulty in the story. First, its Lectical phase is 10c — the third phase of four in level 10. Second, the story is challenging to interpret because it’s written to be a bit ambiguous. I had to read it twice in order to appreciate the subtlety of the author’s message. Third, it is set on the water in a rural setting, so there’s lots of language that would be new to many students. How well a student comprehends this story hinges on their level of understanding — where they are currently performing on the Lectical Scale — and how much they know about living on the water in a rural setting. Assuming they understand the content of the story, comprehension also depends on how good students are at decoding its somewhat ambiguous message.
The first question that comes up for me is whether or not this is a good story selection for the average 15-year-old. The average phase of performance for most 15-year-olds is 10a. That’s their productive level. When we prescribe learning recommendations to students performing in 10a, we choose texts that are about 1 phase higher than their current productive level. We refer to this as the “Goldilocks zone”, because we’ve found it to be the range in which material is just difficult enough to be challenging, but not so difficult that the risk of failure is too high. Some failure is good. Constant failure is bad.
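The phase arithmetic behind "one phase higher" can be sketched as follows. The four-phases-per-level labeling (a through d) comes from this post; the rollover from one level's d to the next level's a, and the function itself, are my assumptions rather than an official Lectica utility.

```python
# Sketch of "one phase higher" on the Lectical Scale, where each level
# is assumed to contain four phases, a-d, and the phase after one
# level's d is assumed to be the next level's a.

PHASES = "abcd"

def next_phase(score):
    """Return the phase one step above a score like '10a'."""
    level, phase = int(score[:-1]), score[-1]
    i = PHASES.index(phase)
    if i < len(PHASES) - 1:
        return f"{level}{PHASES[i + 1]}"
    return f"{level + 1}a"   # assumed rollover: 10d -> 11a

# A text in the Goldilocks zone for a student performing at 10a:
print(next_phase("10a"))   # 10b
```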
But this PISA story is intended to test comprehension; it’s not a learning recommendation or resource. Here, its difficulty level raises a different issue. In this context, the question that arises for me is, “What is reading comprehension, when the text students are asked to decode presents different challenges to students living in different environments and performing in different Lectical Levels?” Clearly, this story does not present the same challenge to students performing in phase 10a as it presents to students performing in 10c. Students performing in 10a or lower are struggling to understand the basic content of the story. Students performing in 10c are grappling with the subtlety of the message. And if the student lives in a city and knows nothing about living on the water, even a student performing at 10c is disadvantaged.
Real world problems
Now, let’s consider what it means to present a real-world problem. When we at Lectica use this term, we usually mean that the problem is ill-structured (like the world), without a “correct” answer. (We don’t even talk about correctness.) The challenges we present to learners reveal the current level of their understandings—there is always room for growth. One of our interns refers to development as a process of learning to make “better and better mistakes”. This is a VERY different mindset from the “right or wrong” mindset nurtured by conventional standardized tests.
What do PISA developers mean by “real world problem”? They clearly don’t mean without a “correct” answer. Their scoring rubrics show correct, partial (sometimes), and incorrect answers. And it doesn’t get any more subtle than that. I think what they mean by “real world” is that their problems are contextualized; they are simply set in the real world. But this is not a fundamental change in the way PISA developers think about learning. Theirs is still a model that is primarily about the ability to get right answers.
How knowledge is applied
Let’s go back to the story about the woman and the panther. After they read the story, test-takers are asked to respond to a series of multiple choice and written response questions. In one written response question they are asked, “What does the story suggest was the woman’s reason for feeding the panther?”
The scoring rubric presents a selection of potential correct answers and a set of wrong answers. (No partially correct answers here.) It’s pretty clear that when PISA developers ask “how well” students’ knowledge is applied, they’re talking about whether or not students can provide a correct answer. That’s not surprising, given what we’ve observed so far. What’s new and troubling here is that all “correct” answers are treated as though they are equivalent. Take a look at the list of choices. Do they look equally sophisticated to you?
She felt sorry for it.
Because she knew what it felt like to be hungry.
Because she’s a compassionate person.
To help it live. (p. 77)
“She felt sorry for it.” is considered to be just as correct as “She is a compassionate person.” But we know the ideas expressed in these two statements are not equivalent. The idea of feeling sorry for can be expressed by children as early as phase 08b (6- to 7-year-olds). The idea of compassion (as sympathy) does not appear until level 10b. And the idea of being a compassionate person does not appear until 10c—even when the concept of compassion is being explicitly taught. Given that this is a test of comprehension—defined by PISA’s developers in terms of understanding and interpretation—doesn’t the student who writes, “She is a compassionate person,” deserve credit for arriving at a more sophisticated interpretation?
I’m not claiming that students can’t learn the word compassion earlier than level 10b. And I’m certainly not claiming that there is enough evidence in students’ responses to the prompt in this assessment to determine if an individual who wrote “She felt sorry for it.” meant something different from an individual who wrote, “She’s a compassionate person.” What I am arguing is that what students mean is more important than whether or not they get a right answer. A student who has constructed the notion of compassion as sympathy is expressing a more sophisticated understanding of the story than a student who can’t go further than saying the protagonist felt sorry for the panther. When we, at Lectica, talk about how well knowledge is applied, we mean, “At what level does this child appear to understand the concepts she’s working with and how they relate to one another?”
What is reading comprehension?
All of these observations lead me back to the question, “What is reading comprehension?” PISA developers define reading comprehension in terms of understanding and interpretation, and Lectical assessments measure the sophistication of students’ understanding and interpretation. It looks like our definitions are at least very similar.
We think the problem is not in the definition, but in the operationalization. PISA’s items measure proxies for comprehension, not comprehension itself. Getting beyond proxies requires three ingredients.
First, we have to ask students to show us how they’re thinking. This means asking for verbal responses that include both judgments and justifications for those judgments.
Second, the questions we ask need to be more open-ended. Life is rarely about finding right answers. It’s about finding increasingly adequate answers. We need to prepare students for that reality.
Third, we need to engage in the careful, painstaking study of how students construct meanings over time.
This third requirement is such an ambitious undertaking that many scholars don’t believe it’s possible. But we’ve not only demonstrated that it’s possible, we’re doing it every day. We call the product of this work the Lectical™ Dictionary. It’s the first curated developmental taxonomy of meanings. You can think of it as a developmental dictionary. Aside from making it possible to create direct tests of student understanding, the Lectical Dictionary makes it easy to describe how ideas evolve over time. We can not only tell people what their scores mean, but also what they’re most likely to benefit from learning next. If you’re wondering what that means in practice, check out our demo.