VCoL+7: Can it save democracy?

Our learning model, the Virtuous Cycle of Learning with its +7 skills (VCoL+7), is more than a way of learning—it's a set of tools that help students build a relationship with knowledge that's uniquely compatible with democratic values.

Equal opportunity: With good teachers and the right metrics, VCoL makes it possible to create a truly level playing field for learning—one in which all children have a real opportunity to achieve their full learning potential.

Freedom: VCoL shifts the emphasis from learning a particular set of facts, vocabulary, rules, procedures, and definitions, to building transferable skills for thinking, communicating, and learning, thus allowing students greater freedom to learn essential skills through study and practice in their own areas of interest.

Pursuit of happiness: VCoL leverages our brain's natural motivational cycle, allowing people to retain their inborn love of learning. Thus, they're equipped not only with skills and knowledge, but also with a disposition to adapt and thrive in a complex and rapidly changing world.

Citizenship: VCoLs build skills for (1) coping with complexity, (2) gathering, evaluating, & applying information, (3) perspective seeking & coordination, (4) reflective analysis, and (5) communication & argumentation, all of which are essential for the high quality decision making required of citizens in a democracy. 

Open mindset: VCoLs treat all learning as partial or provisional, which fosters a sense of humility about one's own knowledge. A touch of humility can make citizens more open to considering the perspectives of others—a useful attribute in democratic societies.

All of the effects listed here refer primarily to VCoL itself—a cycle of goal setting, information gathering, application, and reflection. The +7 skills—reflectivity, awareness, seeking and evaluating information, making connections, applying knowledge, seeking and working with feedback, and recognizing and overcoming built-in biases—amplify these effects.

VCoL is not only a learning model for our times; it could well be the learning model that helps save democracy.

Why we need to LEARN to think

I'm not sure I buy the argument that reason developed to support social relationships, but the body of research described in this New Yorker article clearly exposes several built-in biases that get in the way of high quality reasoning. These biases are the reason why learning to think should be a much higher priority in our schools (and in the workplace). 

Transformative & embodied learning

I'm frequently asked about the relationship between transformative learning and what we, at Lectica, call robust, embodied learning.

According to Mezirow, there are two kinds of transformative learning: learning that transforms one's point of view and learning that transforms a habit of mind.

Transforming a point of view: This kind of transformation occurs when we have an experience that causes us to reflect critically on our current conceptions of a situation, individual, or group. 

Transforming a habit of mind: This is a more profound kind of transformation, one that occurs when we become critically reflective of a generalized bias in the way we view situations, people, or groups. It is less common and more difficult than a transformation of point of view, and occurs only after several transformations in point of view.

Embodied learning occurs through natural and learned virtuous cycles in which we take in new information, apply it in some way, and reflect on outcomes. The natural cycles occur in a process Piaget referred to as reflective abstraction. The learned process, which we call VCoL (for virtuous cycle of learning), deliberately reproduces and amplifies elements of this unconscious process, incorporating conscious critical reflection into every learning cycle. These acts of critical reflection reinforce connections that are affirmed (or create new connections) and prune connections that are negated. Virtuous learning cycles, both conscious and unconscious, incrementally build a mental network that connects not only ideas, but also different parts of the brain, including those involved in motivation and emotion.

Learning through intentional virtuous cycles ensures that our mental network is constantly being challenged with new information, so alterations to point of view are possible any time we receive information that doesn't easily fit into the existing network. But this kind of learning is also part of a larger developmental process in which our mental networks undergo major reorganizations called hierarchical integrations that produce fundamental qualitative changes in the way we think.

Here are some of the similarities I see between transformative learning and our learning model:

  1. Both are based on developmental mechanisms (reflective abstraction, assimilation, accommodation, hierarchical integration, chunking, qualitative change, and emergence) that were the hallmarks of Piagetian and Neo-Piagetian theory. The jargon and applications may be different, but the fundamental ideas are very similar.
  2. Both are strongly influenced by the work of Habermas (communicative action) and Freire (critical pedagogy).
  3. Both lead to a pedagogy that emphasizes the role of critical reflection and perspectival awareness in high quality learning. 
  4. Both emphasize the involvement of the whole person in learning.
  5. Both transcend conventional approaches to learning.

Here are some differences I've identified so far:

  1. Terminology: The two traditions often use different terms for similar ideas, and overcoming this problem requires pretty active perspective seeking!
  2. Role of critical reflection: For us, critical reflection is both a habit of mind to cultivate (in VCoL+7, it's one of the +7 skills) and a step in every (conscious) learning cycle (the "reflect" step). I'm not sure how this is viewed in transformative learning circles.
  3. Target: We have two learning/development targets, one is meta, the other is incremental. Our meta target is long-term development, including the major transformations that take place between levels in our developmental model. Our incremental target is the micro-learning or micro-development that prepares our neural networks for major transformations. 
  4. Measurement: As far as I can tell, the metrics used to study transformative learning are primarily focused on the subjective experience of transformation. We take a different approach by measuring the way in which learning experiences change our conceptions or the way in which we approach real-world problems. We don't ask what people think or what they learned; we ask how they think with what they learned.

Proficiency vs. growth

We've been hearing quite a bit about the "proficiency vs. growth" debate since Betsy DeVos (Trump's nominee for Education Secretary) was asked to weigh in last week. This debate involves a disagreement about how high-stakes tests should be used to evaluate educational programs. Advocates for proficiency want to reward schools whose students score higher on state tests. Advocates for growth want to reward schools whose students grow more on state tests. Readers who know about Lectica's work can guess where we'd land in this debate—we're outspokenly growth-minded.

For us, however, the proficiency vs. growth debate is only a tiny piece of a broader issue about what counts as learning. Here's a sketch of the situation as we see it:

Getting a higher score on a state test means that you can get more correct answers on increasingly difficult questions, or that you can more accurately apply writing conventions or decode texts. But these aren't the things we really want to measure. They're "proxies"—approximations of our real learning objectives. Test developers measure proxies because they don't know how to measure what we really want to know.

What we really want to know is how well we're preparing students with the skills and knowledge they'll need to successfully navigate life and work.

Scores on conventional tests predict how well students are likely to perform on future conventional tests. But scores on these tests have not been shown to be good predictors of success in life.*

In light of this glaring problem with conventional tests, the debate between proficiency and growth is a bit of a red herring. What we really need to be asking ourselves is a far more fundamental question:

What knowledge and skills will our children need to navigate the world of tomorrow, and how can we best nurture their development?

That's the question that frames our work here at Lectica.


*For information about the many problems with conventional tests, see FairTest.


How to teach critical thinking: make it a regular practice

We've argued for years that you can't really learn critical thinking by taking a critical thinking course. Critical thinking is a skill that develops through reflective practice (VCoL). Recently, a group of Stanford scientists reported that a reflective practice approach not only works in the short term, but also produces "sticky" results. Students who are routinely prompted to evaluate data get better at evaluating data—and keep evaluating it even after the prompts are removed.

Lectica is the only test developer that creates assessments that measure and support this kind of learning.

Support from neuroscience for robust, embodied learning

[Image: fluid intelligence connectome. "Human connector" by jgmarcelino from Newcastle upon Tyne, UK, via Wikimedia Commons]

For many years, we've been arguing that learning is best viewed as a process of creating networks of connections. We've defined robust learning as a process of building knowledge networks that are so well connected they allow us to put knowledge to work in a wide range of contexts. And we've described embodied learning as a way of learning that involves the whole person and is much more than the memorization of facts, terms, definitions, rules, or procedures.

New evidence from the neurosciences supports this way of thinking about learning. According to research recently published in Nature, people with more connected brains—specifically, those with more connections across different parts of the brain—demonstrate greater intelligence, including better problem-solving skills, than those with less connected brains. And this is only one of several research projects reporting similar findings.

Lectica exists because we believe that if we really want to support robust, embodied learning, we need to measure it. Our assessments are the only standardized assessments that have been deliberately developed to measure and support this kind of learning. 

How to waste students’ time

During the last 20 years, children in our public schools have been required to learn important concepts earlier and earlier. This is supposed to speed up learning. But we, at Lectica, are finding that when students try to learn complex ideas too early, they don’t seem to find those ideas useful.

For example, let's look at the terms reliable, credible, and valid, which refer to different aspects of information quality. These terms used to be taught in high school, but are now taught as early as grade 3. We looked at how these terms were used by over 15,000 students in grades 4-12. These students were asked to write about what they would need to know in order to trust information from someone making a claim like, "Violent television is bad for children."

As you can see in the following graph, until grade 10, fewer than 10% of these students used the terms at all—even though they had been taught them by grade 5. What's more, our research shows that when these terms are used before Lectical Level 10 (see the video about Lectical Levels, below), they mean little more than “correct” or “true”; it's not until well into Lectical Level 10 that people use these terms in ways that clearly show they have distinct meanings.

Children aren't likely to find the words reliable, valid, or credible useful until they understand why some information is better than other information. This means they need to understand concepts like motivation, bias, scientific method, and expertise. We can get 5th graders to remember that they should apply the word "valid" instead of "true" when presented with a specific stimulus, but this is not the same as understanding.

Reliable, valid, and credible aren't the only words taught in the early grades that students don't find useful. We have hundreds of examples in our database.

Learning in the zone

The pattern above is what we see when students are taught ideas they aren't yet prepared to understand. When children learn ideas they're ready for—ideas that are in "the zone"—the pattern looks very different. Under these conditions, the use of a new word quickly goes from zero to frequent (or even constant, as parents of 4-year-olds know only too well). If you're a parent, you probably remember when your child first learned the words "why," "secret," or "favorite." Suddenly, questioning why, telling and keeping secrets, or having favorites became the focus of many conversations. Children "play hard" with ideas they're prepared to understand. This rapidly integrates new ideas into their existing knowledge networks. But they can't do this with an idea they aren't ready for, because they don't yet have a knowledge network that's ready to receive it.

[Figure: hypothetical acquisition curve for terms taught when students are ready to understand them]

The curve in the figure above shows what it would look like if these terms were taught when students were more prepared, with knowledge networks ready to receive them. Acquisition would be relatively rapid, and students would find the terms more useful because they would be more likely to grasp aspects of their distinct meanings. For example, they might choose to use the term "reliable" rather than "factual" because they understand that the two terms mean different things.

If you're a parent, think about how many times your child is asked to learn something that isn’t yet useful. Consider the time invested, and ask yourself if that time was well spent.


Correctness, argumentation, and Lectical Level

How correctness, argumentation, and Lectical Level work together diagnostically

In a fully developed Lectical Assessment, we include separate measures of aspects of arguments such as mechanics (spelling, punctuation, and capitalization), coherence (logic and relevance), and persuasiveness (use of evidence, argument, & psychology to persuade). (We do not evaluate correctness, primarily because most existing assessments already concern themselves with it.) When educators use Lectical Assessments, they use information about Lectical Level, mechanics, coherence, persuasiveness, and sometimes correctness to diagnose students' learning needs. Here are some examples:

Level of skill (low, average, high) relative to expectations

           Lectical Level   Mechanics   Coherence   Persuasiveness   Correctness
  Case 1   high             high        low         average          high
  Case 2   high             high        high        low              low
  Case 3   low              average     low         low              high

Case 1

This student has relatively high Lectical, mechanics, and correctness scores, but their performance is low in coherence and the persuasiveness of their answers is average. Because lower coherence and persuasiveness scores suggest that a student has not yet fully integrated their new knowledge, this student is likely to benefit most from participating in activities that require them to apply their existing knowledge in relevant contexts (using VCoL).

Case 2

This student's scores, with the exception of their persuasiveness and correctness scores, are high relative to expectations. This student's knowledge appears to be well integrated, but the combination of low persuasiveness and low correctness suggests that there are gaps in their content knowledge relative to targeted content. Here, we would suggest filling in the missing content knowledge in a way that integrates it into this student's well-developed knowledge network.

Case 3

This student's correctness score is high, while their mechanics score is average and their Lectical Level, coherence, and persuasiveness scores are low. This pattern suggests that the student has been memorizing content without integrating it effectively into their knowledge network, and has been doing so for some time. This student is most likely to benefit from applying their existing content knowledge in personally relevant contexts (using VCoL) until their coherence, persuasiveness, and Lectical scores catch up with their correctness score.
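
To make the diagnostic reasoning in these three cases concrete, here's a minimal sketch in Python. Everything in it (the score encoding, the rule thresholds, and the function name) is a hypothetical illustration of the pattern-to-recommendation logic described above, not Lectica's actual diagnostic system.

```python
# Hypothetical sketch: mapping score profiles like those in the table
# above to learning recommendations. The rules are toy illustrations
# of the three diagnostic cases, not Lectica's actual algorithm.

LOW, AVERAGE, HIGH = 0, 1, 2  # skill level relative to expectations

def diagnose(lectical, mechanics, coherence, persuasiveness, correctness):
    """Suggest a learning focus for one profile of relative scores.

    mechanics is accepted only to mirror the table's columns; these
    toy rules don't consult it.
    """
    # Cases 1 & 3: high correctness paired with weaker coherence or
    # persuasiveness suggests knowledge that isn't yet well integrated.
    if correctness == HIGH and (coherence < HIGH or persuasiveness < HIGH):
        if lectical == HIGH:
            # Case 1: practice applying existing knowledge (VCoL).
            return "Apply existing knowledge in relevant contexts (VCoL)."
        # Case 3: likely memorizing without integrating it effectively.
        return ("Apply existing content knowledge in personally relevant "
                "contexts until other scores catch up with correctness.")
    # Case 2: well-integrated knowledge (high coherence) but gaps in
    # targeted content knowledge (low correctness).
    if correctness == LOW and coherence == HIGH:
        return ("Fill in missing content knowledge in a way that integrates "
                "it into the existing knowledge network.")
    return "No toy rule matches; review the full profile."

# Example: the Case 2 profile from the table
print(diagnose(HIGH, HIGH, HIGH, LOW, LOW))
```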

Leadership, vertical development & transformative change: a polemic

This morning, while doing some research on leader development, I googled “vertical leadership” and “coaching.” The search returned 466,000 results. Wow. Looks like vertical development is hot in the coaching world!

Two hours later, after scanning dozens of web sites, I was left with the following impression: 

Vertical development occurs through profound, disruptive, transformative insights that alter how people see themselves, improve their relationships, increase happiness, and help them cope better with complex challenges. The task of the coach is to set people up for these experiences. Evidence of success is offered through personal stories of transformation.

But decades of developmental research contradict this picture. This body of evidence shows that the kind of transformative experience promised on these web sites is uncommon. And when it does occur, it rarely produces a fairytale ending. In fact, profound disruptive insights can easily have negative consequences, and most experiences that people refer to as transformational are really just momentary insights. They may feel profound in the moment, but they don't actually usher in any measurable change at all, much less transformative change.


"The good news is, you don’t have to work on transforming yourself to become a better leader."


The fact is, insight is fairly easy, but growth is slow, and change is hard. Big change is really, really hard. And some things, like many dispositions and personality traits, are virtually impossible to change. This isn’t an opinion based on personal experience, it’s a conclusion based on evidence from hundreds of longitudinal developmental studies conducted during the last 70 years. (Check out our articles page for some of this evidence.)

The good news is, you don’t have to work on transforming yourself to become a better leader. All you need to do is engage in daily practices that incrementally, through a learning cycle called VCoL, help you build the skills and habits of a good leader. Over the long term, this will change you, because it will alter the quality of your interactions with others, and that will change your mind—profoundly.


What PISA measures. What we measure.

Like the items in Lectical Assessments, PISA items involve real-world problems. PISA developers also claim, as we do here at Lectica, that their items measure how knowledge is applied. So, why do we persist in claiming that Lectical Assessments and assessments like PISA measure different things?

Part of the answer lies in questions about what's actually being measured, and in the meaning of terms like "real-world problems" and "how knowledge is applied." I'll illustrate with an example from Take the test: sample questions from OECD's PISA assessments.

One of the reading comprehension items in "Take the test" involves a short story about a woman who is trapped in her home during a flood. Early in the story, a hungry panther arrives on her porch. The woman has a gun, which she keeps at her side as long as the panther is present. At first, it seems that she will kill the panther, but in the end, she offers it a ham hock instead. 

What is being measured?

There are three sources of difficulty in the story. First, its Lectical phase is 10c—the third of four phases in level 10. Second, the story is challenging to interpret because it's written to be a bit ambiguous; I had to read it twice in order to appreciate the subtlety of the author's message. Third, the story is set on the water in a rural setting, so there's lots of language that would be new to many students. How well a student comprehends this story hinges on their level of understanding—where they are currently performing on the Lectical Scale—and how much they know about living on the water in a rural setting. Assuming they understand the content of the story, it also depends on how good they are at decoding its somewhat ambiguous message.

The first question that comes up for me is whether or not this is a good story selection for the average 15-year-old. The average phase of performance for most 15-year-olds is 10a. That's their productive level. When we prescribe learning recommendations to students performing in 10a, we choose texts that are about 1 phase higher than their current productive level. We refer to this as the "Goldilocks zone", because we've found it to be the range in which material is just difficult enough to be challenging, but not so difficult that the risk of failure is too high. Some failure is good. Constant failure is bad.
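
To illustrate that "one phase higher" rule, here's a minimal sketch, assuming four phases (a through d) per Lectical Level, as described above. The representation and function name are hypothetical, for illustration only.

```python
# Hypothetical sketch of the "Goldilocks zone" rule described above:
# recommend material about one phase above a student's current
# productive level. Assumes four phases (a-d) per Lectical Level.

PHASES = "abcd"

def goldilocks_phase(level: int, phase: str) -> tuple[int, str]:
    """Return the phase one step above (level, phase)."""
    i = PHASES.index(phase)
    if i < len(PHASES) - 1:
        return level, PHASES[i + 1]
    return level + 1, PHASES[0]  # e.g., 10d rolls over to 11a

# A 15-year-old performing at 10a would be recommended 10b material.
print(goldilocks_phase(10, "a"))  # (10, 'b')
print(goldilocks_phase(10, "d"))  # (11, 'a')
```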

But this PISA story is intended to test comprehension; it's not a learning recommendation or resource. Here, its difficulty level raises a different issue. In this context, the question that arises for me is, "What is reading comprehension, when the text students are asked to decode presents different challenges to students living in different environments and performing in different Lectical Levels?" Clearly, this story does not present the same challenge to students performing in phase 10a as it presents to students performing in 10c. Students performing in 10a or lower are struggling to understand the basic content of the story. Students performing in 10c are grappling with the subtlety of the message. And if the student lives in a city and knows nothing about living on the water, even a student performing at 10c is disadvantaged.

Real world problems

Now, let's consider what it means to present a real-world problem. When we at Lectica use this term, we usually mean that the problem is ill-structured (like the world), without a "correct" answer. (We don't even talk about correctness.) The challenges we present to learners reveal the current level of their understandings—there is always room for growth. One of our interns refers to development as a process of learning to make "better and better mistakes". This is a VERY different mindset from the "right or wrong" mindset nurtured by conventional standardized tests.

What do PISA developers mean by "real world problem"? They clearly don't mean without a "correct" answer. Their scoring rubrics show correct, partial (sometimes), and incorrect answers. And it doesn't get any more subtle than that. I think what they mean by "real world" is that their problems are contextualized; they are simply set in the real world. But this is not a fundamental change in the way PISA developers think about learning. Theirs is still a model that is primarily about the ability to get right answers.

How knowledge is applied

Let's go back to the story about the woman and the panther. After they read the story, test-takers are asked to respond to a series of multiple choice and written response questions. In one written response question they are asked, "What does the story suggest was the woman’s reason for feeding the panther?"

The scoring rubric presents a selection of potential correct answers and a set of wrong answers. (No partially correct answers here.) It's pretty clear that when PISA developers ask “how well” students' knowledge is applied, they're talking about whether or not students can provide a correct answer. That's not surprising, given what we've observed so far. What's new and troubling here is that all "correct" answers are treated as though they are equivalent. Take a look at the list of choices. Do they look equally sophisticated to you?

  • She felt sorry for it.
  • Because she knew what it felt like to be hungry.
  • Because she’s a compassionate person.
  • To help it live. (p. 77)

“She felt sorry for it.” is considered just as correct as “Because she’s a compassionate person.” But we know the ideas expressed in these two statements are not equivalent. The idea of feeling sorry for another can be expressed by children as early as phase 08b (6- to 7-year-olds). The idea of compassion (as sympathy) does not appear until level 10b. And the idea of being a compassionate person does not appear until 10c—even when the concept of compassion is being explicitly taught. Given that this is a test of comprehension—defined by PISA's developers in terms of understanding and interpretation—doesn't the student who writes, “Because she’s a compassionate person,” deserve credit for arriving at a more sophisticated interpretation?

I'm not claiming that students can't learn the word compassion earlier than level 10b. And I'm certainly not claiming that there is enough evidence in students' responses to the prompt in this assessment to determine if an individual who wrote "She felt sorry for it." meant something different from an individual who wrote, "She's a compassionate person." What I am arguing is that what students mean is more important than whether or not they get a right answer. A student who has constructed the notion of compassion as sympathy is expressing a more sophisticated understanding of the story than a student who can't go further than saying the protagonist felt sorry for the panther. When we, at Lectica, talk about how well knowledge is applied, we mean, “At what level does this child appear to understand the concepts she’s working with and how they relate to one another?” 

What is reading comprehension?

All of these observations lead me back to the question, "What is reading comprehension?" PISA developers define reading comprehension in terms of understanding and interpretation, and Lectical Assessments measure the sophistication of students' understanding and interpretation. It looks like our definitions are at least very similar.

We think the problem lies not in the definition, but in the operationalization. PISA's items measure proxies for comprehension, not comprehension itself. Getting beyond proxies requires three ingredients.

  • First, we have to ask students to show us how they're thinking. This means asking for verbal responses that include both judgments and justifications for those judgments. 
  • Second, the questions we ask need to be more open-ended. Life is rarely about finding right answers. It's about finding increasingly adequate answers. We need to prepare students for that reality. 
  • Third, we need to engage in the careful, painstaking study of how students construct meanings over time.

This third requirement is such an ambitious undertaking that many scholars don't believe it's possible. But we've not only demonstrated that it's possible, we're doing it every day. We call the product of this work the Lectical™ Dictionary. It's the first curated developmental taxonomy of meanings. You can think of it as a developmental dictionary. Aside from making it possible to create direct tests of student understanding, the Lectical Dictionary makes it easy to describe how ideas evolve over time. We can not only tell people what their scores mean, but also what they're most likely to benefit from learning next. If you're wondering what that means in practice, check out our demo.