How to accelerate growth

The best way we know of to accelerate growth is to slow down and teach in ways that foster deep understanding. It may be counterintuitive, but slow learning really does accelerate growth!

In the post "If you want students to develop faster, stop trying to speed up learning," I presented evidence that schools with curricula that promote deep understanding accelerate growth relative to schools that focus more on covering required content. In this post, I'm going to explain what we've learned so far about the relationship between deep understanding and the rate of development. (I recommend reading the earlier post before trying to make sense of this one.)

Lectica's learning model, VCoL+7, emphasizes the importance of giving students ample opportunity to build deep understanding through cycles of goal setting, information gathering, application, and reflection. We argue that evidence of deep understanding can be seen in the coherence of students' arguments—you can't explain or defend an idea coherently if you don't understand it. Furthermore, because poorly understood ideas provide a weak foundation for future learning, we would hypothesize that over time students who demonstrate lower levels of understanding—through the coherence of their arguments—will grow more slowly than students who demonstrate higher levels of understanding.* 

We tested this hypothesis by examining data from a sample of 276 students attending low-SES (socio-economic status) inner-city schools. Each student took the LRJA (our reflective judgment assessment) three times over 3 1/2 years. Some of these students were in grade 4 at time 1, and some were in grade 6 at time 1. Each LRJA performance received two scores: one for its developmental level (shown on the vertical axis in the graphic below), and one for its logical coherence, rated on a 10-point scale.

We conducted a hierarchical regression analysis that examined the relation between time 1 argumentation score and developmental growth (after controlling for developmental level at time 1).
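For readers who want a concrete picture of this kind of analysis, here is a minimal sketch of a two-step (hierarchical) regression in Python. The data are synthetic and the variable names are my own illustrative stand-ins, not Lectica's actual dataset or effect sizes; the point is only to show how the unique contribution of time 1 coherence is isolated after controlling for time 1 developmental level.

```python
import numpy as np

def r_squared(y, X):
    """R-squared from an OLS fit of y on X (X must include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 276  # sample size borrowed from the study described above

# Hypothetical predictors: developmental level and coherence at time 1.
level_t1 = rng.normal(10.0, 0.5, n)
coherence_t1 = rng.normal(6.5, 1.0, n)

# Simulated growth, built so that coherence genuinely contributes
# (the 0.15 slope is an arbitrary illustrative value).
growth = (0.4 + 0.15 * (coherence_t1 - 6.5)
          - 0.1 * (level_t1 - 10.0)
          + rng.normal(0, 0.2, n))

ones = np.ones(n)
X1 = np.column_stack([ones, level_t1])                # step 1: control only
X2 = np.column_stack([ones, level_t1, coherence_t1])  # step 2: add predictor

r2_1 = r_squared(growth, X1)
r2_2 = r_squared(growth, X2)
print(f"R2 step 1: {r2_1:.3f}  R2 step 2: {r2_2:.3f}  change: {r2_2 - r2_1:.3f}")
```

The increment in R-squared between the two steps estimates how much of the variation in growth is uniquely attributable to coherence after starting level is controlled, which is the quantity the analysis described above turns on.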

For the figure below, I've borrowed the third graph from the "stop trying to speed up learning" post, faded it into the background, then superimposed growth curves predicted by the hierarchical regression model for three hypothetical students receiving time 1 coherence scores of 5.5, 6.5, and 7.5.* These values were selected because they are close to the actual time 1 coherence scores for the three groups of students in the background graphic. (Actual average time 1 scores are shown on the right.) 

Graph showing a one-year advantage in grade 8 for students whose argumentation scores in grade 4 were 2 points higher than those of other students

As you can see, the distance between grade 8 scores predicted by the hierarchical regression is a bit less than half of the difference between the actual average scores in the background image. What this means is that in grade 8, a bit less than half of the difference between students in the three types of schools is explained by depth of understanding (as captured by our measure of coherence). 

In my earlier post, "If you want students to develop faster, stop trying to speed up learning," I concluded that socioeconomic status could not be the main cause of the differences in growth curves for different kinds of schools, because two of the groups we compared were not socio-economically different. The results of the analysis shown in this post suggest that almost half of the difference is due to the different levels of understanding reflected in coherence scores. This result supports the hypothesis that it is possible to accelerate development by increasing the depth of students' understanding.

We cannot even attempt to explain the remaining differences between school groups without controlling for the effects of socio-economic status and English proficiency. We'll do that as soon as we've finished rating the logical coherence of performances from a larger sample of students representing all three types of schools featured in this analysis. Stay tuned!

Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.

*You can learn more about our developmental scale on lecticalive's skill levels page, and our argumentation scales are described in the video, New evidence that robust knowledge networks support development.


If you want students to develop faster, stop trying to speed up learning

During the last 20 years—since high stakes testing began to take hold—public school curricula have undergone a massive transformation. Standards have pushed material that was once taught in high school down into the 3rd and 4th grade, and the amount of content teachers are expected to cover each year has increased steadily. The theory behind this trend appears to be that learning more content and learning it earlier will help students develop faster.

But is this true? Is there any evidence at all that learning more content and learning it earlier produces more rapid development? If so, I haven't seen it.

In fact, our evidence points to the opposite conclusion. Learning more and learning it earlier may actually be interfering with the development of critical life skills—like those required for making good decisions in real-life contexts. As the graph below makes clear, students in schools that emphasize covering required content do not develop as rapidly as students in schools that focus on fostering deep understanding—even though learning for understanding generally takes more time than learning something well enough to "pass the test."

What is worse, we're finding that the average student in schools with the greatest emphasis on covering required content appears to stop developing by the end of grade 10, with an average score of 10.1. This is the same score received by the average 6th grader in schools with the greatest emphasis on fostering deep understanding.

The graphs in this post are based on data from 17,755 LRJA assessments. The LRJA asks test-takers to respond to a complex real-life dilemma. They are prompted to explore questions about:

  • finding, creating, and evaluating information and evidence,
  • perspectives, persuasion, and conflict resolution,
  • when and if it's possible to be certain,
  • the nature of facts, truth, and reality.

Students were in grades 4-12, and attended one or more of 56 schools in the United States and Canada.

The graphs shown above represent two groups of schools—those with students who received the highest scores on the LRJA and those with students who received the lowest scores. These schools differed from one another in two other ways. First, the highest performing schools were all private schools*. Most students in these schools came from upper middle or high SES (socio-economic status) homes. The lowest performing schools were all public schools serving low to middle SES inner city students.

The second way in which these schools differed was in the design of their curricula. The highest performing schools featured integrated curricula with a great deal of practice-based learning and a heavy emphasis on fostering understanding and real-world competence. All of the lowest performing schools featured standards-focused curricula with a strong emphasis on learning the facts, formulas, procedures, vocabulary, and rules targeted by state tests.

Based on the results of conventional standardized tests, we expected most of the differences between student performances on the LRJA in these two groups of schools to be explained by SES. But this was not the case. Private schools with more conventional curricula and high performing public schools serving middle and upper middle SES families did indeed outperform the low SES schools, but as shown in the graph below, by grade 12, their students were still about 2.5 years behind students in the highest performing schools. At best, SES explains only about half of the difference between the best and worst schools in our database. (For more on this, see the post, "Does a focus on deep understanding accelerate growth?")

By the way, the conventional standardized test scores of students in this middle group, despite their greater emphasis on covering content, were no better than the conventional standardized test scores of students in the high performing group. Focusing on deep understanding appears to help students develop faster without interfering with their ability to learn required content.

This will not be our last word on the subject. As we scale our K-12 assessments, we'll be able to paint an increasingly clear picture of the developmental impact of a variety of curricula.

Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.

*None of these schools pre-selected their students based on test scores. 


Correctness versus understanding

Recently, I was asked by a colleague for a clear, simple example that would show how DiscoTest items differ from the items on conventional standardized tests. My first thought was that this would be impossible without oversimplifying. My second thought was that it might be okay to oversimplify a bit. So, here goes!

The table below lists four differences between what Lectica measures and what is measured by other standardized assessments.1 The descriptions are simplified and lack nuance, but the distinctions are accurate.

Scores represent
  Lectical Assessments: level of understanding based on a valid learning scale
  Other standardized assessments: number of correct answers

Target
  Lectical Assessments: the depth of an individual's understanding (demonstrated in the complexity of arguments and the way the test taker works with knowledge)
  Other standardized assessments: the ability to recall facts, or to apply rules, definitions, or procedures (demonstrated by correct answers)

Format
  Lectical Assessments: paragraph-length written responses
  Other standardized assessments: primarily multiple choice or short written answers2

Responses
  Lectical Assessments: explanations, applications, and transfer
  Other standardized assessments: right/wrong judgments or right/wrong applications of rules and procedures

The example

I chose a scenario-based example that we're already using in an assessment of students' conceptions of the conservation of matter. We borrowed the scenario from a pre-existing multiple choice item.

The scenario

Sophia balances a pile of stainless steel wire and ordinary steel wire on a scale. After a few days the ordinary wire in the pan on the right starts rusting.

Conventional multiple choice question

What will happen to the pan with the rusting wire?

  1. The pan will move up.
  2. The pan will not move.
  3. The pan will move down.
  4. The pan will first move up and then down.
  5. The pan will first move down and then up.

(Go ahead, give it a try! Which answer would you choose?)

Lectical Assessment question

What will happen to the height of the pan with the rusting wire? Please explain your answer thoroughly.

Here are three examples of responses from 12th graders.

Lillian: The pan will move down because the rusted steel is heavier than the plain steel.



Josh: The pan will move down, because when iron rusts, oxygen atoms get attached to the iron atoms. Oxygen atoms don't weigh very much, but they weigh a bit, so the rusted iron will "gain weight," and the scale will go down a bit on that side.

Ariana: The pan will go down at first, but it might go back up later. When iron oxidizes, oxygen from the air combines with the iron to make iron oxide. So, the mass of the wire increases, due to the mass of the oxygen that has bonded with the iron. But iron oxide is non-adherent, so over time the rust will fall off of the wire. If the metal rusts for a long time, some of the rust will become dust and some of that dust will very likely be blown away.


The correct answer to the multiple choice question is, "The pan will move down."
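A quick back-of-the-envelope calculation shows why the pan with the rusting wire moves down. This sketch assumes the overall rusting reaction 4 Fe + 3 O2 -> 2 Fe2O3 (a simplification; real rust is a mixture of hydrated iron oxides) and standard molar masses:

```python
M_FE = 55.845   # molar mass of iron, g/mol
M_O = 15.999    # molar mass of oxygen, g/mol

mass_fe = 10.0                 # grams of iron that fully rust (illustrative)
moles_fe = mass_fe / M_FE
moles_fe2o3 = moles_fe / 2     # 2 Fe atoms per Fe2O3 unit
mass_rust = moles_fe2o3 * (2 * M_FE + 3 * M_O)

print(f"{mass_fe:.1f} g of iron becomes {mass_rust:.2f} g of rust")
```

The extra mass is exactly the mass of the oxygen that bonds to the iron, which is the reasoning Josh's answer makes explicit.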

There is no single correct answer to the Lectical Assessment item. Instead, there are answers that reveal different levels of understanding. Most readers will immediately see that Josh's answer reveals more understanding than Lillian's, and that Ariana's reveals more understanding than Josh's.

You may also notice that Ariana's written response would result in her selecting one of the incorrect multiple-choice answers, and that Lillian and Josh are given equal credit for correctness even though their levels of understanding are not equally sophisticated.

Why is all of this important?

  • It's not fair! The multiple choice item cheats Ariana of the chance to show off what she knows, and it treats Lillian and Josh as if their level of understanding is identical.
  • The multiple choice item provides no useful information to students or teachers! The most we can legitimately infer from a correct answer is that the student has learned that when steel rusts, it gets heavier. This correct answer is a fact. The ability to identify a fact does not tell us how it is understood.
  • Without understanding, knowledge isn't useful. Facts that are not supported with understanding are useful on Jeopardy, but less so in real life. Learning that does not increase understanding or competence is a tragic waste of students' time.
  • Despite clear evidence that correct answers on standardized tests do not measure understanding and are therefore not a good indicator of useable knowledge or competence, we continue to use scores on these tests to make decisions about who will get into which college, which teachers deserve a raise, and which schools should be closed. 
  • We value what we measure. As long as we continue to measure correctness, school curricula will emphasize correctness, and deeper, more useful, forms of learning will remain relatively neglected.

None of these points is particularly controversial. Most educators agree on the importance of understanding and competence. What's been missing is the ability to measure understanding at scale and in real time. Lectical Assessments are designed to fill this gap.


1Many alternative assessments are designed to measure understanding—at least to some degree—but few of these are standardized or scalable. 

2See my examination of a PISA item for an example of a typical written response item from a highly respected standardized test.

Benchmarks: education, jobs, and the Lectical Scale

I'm frequently asked about benchmarks. My usual response is something like: "Setting benchmarks requires more data than we have collected so far," or "Benchmarks are just averages; they don't necessarily apply to particular cases, but people tend to use them as if they do." Well, that last caveat will probably always hold true, but now that our database contains more than 43,000 assessments, the first response is a little less true. So, I'm pleased to announce that we've published a benchmark table that shows how educational and workplace role demands relate to the Lectical Scale. We hope you find it useful!

Meet Nate Bowling—teacher of the year

Nate is the kind of teacher every child needs and deserves. We want to (1) help all teachers build skills like Nate's, and (2) remove some of the barriers he's concerned about.

Does everyone learn in exactly the same way?

All of our assessments are calibrated to the same learning scale, called the "Lectical Scale". To people who are familiar with how most educational assessments work, this seems pretty weird. In fact, it can sound to some people like we're claiming that we make a bunch of assessments that all measure exactly the same thing. So why bother making more than one?

In fact, we ARE measuring exactly the same thing with all of our assessments, but we're measuring it in different contexts. Or put another way, we're using the same ruler to measure the development of different skills and ideas. The claim we're making is that people's ability to think about all things grows in the same fundamental way.

To understand what we mean by this, it helps to think about how thermometers work. We can use the temperature scale to describe the heat of anything. This is because temperature is a fundamental property. It doesn't change if the context changes. When we say someone's temperature is 102º Fahrenheit, we can say that they are likely to be sick. However, we cannot say what is causing them to be sick unless we make other kinds of measurements or observations.

Similarly, the Lectical Assessment System (our human scoring system) and CLAS (our computer scoring system) measure the complexity of thinking as it shows up in what people write or say. Evidence shows that complexity of thinking is a fundamental property. A Lectical Score tells us how complex someone's writing or speech is, so we can say that people who share that score demonstrate the same thinking complexity. But the Lectical Score doesn't tell us exactly what they are thinking. In fact, there are many, many ways in which two people can get the same score on one of our assessments, so in order to say what the score means on a particular test, we need to make other kinds of measurements or observations.


How we do it—the Lectical Dictionary

Almost all of today's standardized educational assessments are technologically sophisticated, but Lectical Assessments are both technologically and scientifically sophisticated. We think of our approach as the "rocket science" of educational assessment. And Lectica's mission as a whole can be thought of, in part, as an ambitious and research-intensive engineering project.

Our aim is nothing less than a comprehensive account of human learning that covers the verbal lifespan. You can think of this account as a "taxonomy of learning". At its core is the Lectical Dictionary, a continuously vetted and growing developmental inventory of the English language. We use this dictionary to support our understanding of the development of specific concepts and skills. It's also at the heart of CLAS, our electronic scoring system, and our as-yet-unnamed developmental spell checker. Every Lectical Assessment that's taken helps us increase the accuracy of the Lectical Dictionary, and every Lectical Assessment we create expands its scope. 

Complexity, Lectica, and your business

In this video, I explain (1) how you can use Lectical Assessments to find out if your leaders are up to the complexity demands of their jobs, and (2) how Lectical Assessments can help them build the skills they need to close the complexity gap.


The development of reasoning about leadership

Since 2002 (when we began our work on the Federal Government's National Leadership Project) we've been documenting the development of leaders' conceptions of leadership. We've learned a lot about how the understanding of leadership develops over time, but we haven't yet shared this knowledge widely. I'm going to remedy the situation here, by sharing a small sample of learning sequences for "reasoning about leadership". 

In the tables below, the learning sequences are described at the "zone" level. A zone is 1/2 of a Lectical Level, and there are four zones that we regularly observe in adulthood. These are illustrated in the figure below.


Lectical development is growth in the complexity of our thinking. As illustrated in the figure above, one way this increasing complexity shows up is in our ability to work effectively with increasingly broad perspectives. It also appears in our reasoning about specific ideas, including our ideas about leadership. The table below shows what we've learned so far about leaders' reasoning about leadership in general.

The development of reasoning about leadership
At each phase, good leadership is understood as:

  • advanced linear thinking: a collection of traits, dispositions, habits, or skills
  • early systems thinking: a complex set of interrelated traits, dispositions, learned qualities, and skills that are applied in particular contexts
  • advanced systems thinking: a complex and flexible set of interrelated and constantly developing skills, dispositions, learned qualities, and behaviors
  • early integrative thinking: the actualization of context-independent, consciously cultivated qualities, dispositions, and skills that have evolved through purposeful and committed engagement and reflective interaction with others

The second table shows what we know so far about how thinking about four leader skills—sharing power, courage, working with emotion, and social skills—develops across the four adult zones. Note how the conceptions at successive levels build upon one another and increase in scope. It's easy to see why individuals performing at higher levels tend to rise to the top of organizations—they can see more of the picture.

The development of reasoning about leader skills
advanced linear thinking
  • sharing power: sharing the work load with others or letting other people make some of the decisions
  • courage: the ability to face, conquer, or conceal fear, admit when you are wrong, stand up for others, believe in yourself, or stand up for what you believe is right
  • working with emotion: being able to keep staff satisfied and productive, calm down overly emotional staff, or support staff during difficult times
  • social skills: being able to listen or communicate well, control your emotions, or put yourself in the other person's shoes

early systems thinking
  • sharing power: empowering others by giving them opportunities to share responsibility, knowledge, and/or benefits
  • courage: the ability to function well in the face of fear or other obstacles, or being willing to take reasonable risks or make mistakes in the interest of a "higher" goal
  • working with emotion: being able to manage your own emotions and to maintain employee morale, motivation, happiness, or sense of well-being
  • social skills: having the skills required to foster compassionate, open, accepting, or tolerant relationships or interactions

advanced systems thinking
  • sharing power: sharing responsibility and accountability as a way to leverage the wisdom, expertise, or skills of stakeholders
  • courage: the ability to maintain and model integrity, purpose, and openness or to continue striving to fulfill one's vision or purpose—even in the face of obstacles or adversity
  • working with emotion: having enough insight into human emotion to foster an emotionally healthy culture in which emotional awareness and maturity are valued and rewarded
  • social skills: being able to foster a culture that supports optimal social relations and the ongoing development of social skills

early integrative thinking
  • sharing power: strategically distributing power by developing systems and structures that foster continuous learning, collaboration, and collective engagement
  • courage: the ability to serve a larger principle or vision by strategically embracing risk, uncertainty, and ambiguity—even in the face of internal and external obstacles or resistance
  • working with emotion: having the ability to work with others to establish systems and structures that support the emergence of, and help sustain, an emotionally healthy culture
  • social skills: being able to develop adaptive systems that respond to the emergent social dynamics of internal and external relationships

The level at which we understand leadership has been shown to have a major impact both on how we choose to lead and on the level of complexity we can work with effectively.  Lectical Assessments are designed to measure and foster this kind of growth. If you'd like to learn more or have any questions, we'd love to hear from you.

Getting the biggest bang for your leader development dollar

Most leader development programs we're aware of focus on top leadership. We assume this has something to do with a belief that top leaders are more likely to benefit from these programs than lower-level leaders. Another possible assumption is that top leadership has more impact on corporate success than leadership elsewhere in an organization. Addressing this second assumption is beyond the scope of this post, but we have some good evidence that the first assumption is faulty.

In a big longitudinal leader-development project we were involved with (with a consultancy called Clear Impact), 238 managers completed pre- and post-program LDMAs. (The LDMA is our leadership decision-making assessment.) Participants represented three management levels—supervisory, mid-level, and upper-level.

As we expected, at time 1, upper-level managers had higher scores on the LDMA than lower-level managers—by an average of .15 of a level. But by the end of the 9-month program, this difference was only .06 of a level. Lower-level managers had grown about .14 of a level, whereas upper-level managers had grown only .05 of a level. Lower-level managers were catching up!

Lower-level managers developed more than twice as rapidly as upper-level managers.
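The arithmetic behind these figures is easy to verify from the numbers quoted above (all values in fractions of a Lectical level):

```python
gap_t1 = 0.15        # upper-level minus lower-level score at time 1
growth_lower = 0.14  # 9-month growth, lower-level managers
growth_upper = 0.05  # 9-month growth, upper-level managers

# The gap shrinks by the difference in growth rates.
gap_t2 = gap_t1 - (growth_lower - growth_upper)
speed_ratio = growth_lower / growth_upper

print(f"gap at time 2: {gap_t2:.2f} levels")
print(f"lower-level managers grew {speed_ratio:.1f}x as fast")
```

The remaining gap of .06 and the growth ratio of 2.8 are exactly the "catching up" and "more than twice as rapidly" claims in the text.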

Part of the reason for the more rapid growth of the lower-level managers was that their initial scores were lower. We expect somewhat more rapid growth at lower developmental levels because development gets more difficult as we move up the developmental scale. Another reason might have been that the lower-level managers had never before participated in a leadership program, which could mean they had been developing tacit skills over the years, and the program simply helped bring those skills into conscious awareness. It's also possible that they grew more because they took the program more seriously, seeing it as a special opportunity to advance their careers.

Whatever the reason for the more rapid growth of lower-level leaders, the fact is that their decision-making skills grew—so much that they were catching up with the upper-level managers. Moreover, the more they grew, the more likely it was that their subordinates noticed improvement in their decision-making behavior.** 

From our perspective, these results represent a strong challenge to the assumption that lower-level managers are likely to benefit less from leader development programs than upper-level managers. In fact, the opposite may be true. And if it is, it's time to reconsider how we're spending that development dollar.

**Dawson, T. L., et al. (2015). Cultivating the integral mind: The relation between development and perceptions of performance in a large-scale leadership program. ITC, Sonoma, CA.