About Theo

Founder and Executive Director of Lectica, Inc. and founder of the DiscoTest initiative.

Dr. Dawson is an award-winning scholar, researcher, educator, and test developer. She has been studying how people learn and how people think about learning for over two decades. Her dissertation, which explored the way people of different ages conceptualize education, learning, testing, and teaching, introduced a new set of methods for documenting learning sequences. This work, along with her studies in psychometrics, has provided the basis for a new model of assessment—one that focuses on helping teachers identify the learning needs of individual students.

Through the DiscoTest initiative, Dawson and her colleagues have shown that it is possible to design standardized educational assessments that not only help teachers identify the learning needs of individual students, but also turn the testing experience into a rich learning experience in which students practice their thinking, communication, and evaluation skills.

Scholarly articles by Dawson can be found on the articles page of the <a href="https://dts.lectica.org/_about/articles.php">Lectica site</a>.

Learning how to learn or learning how to pass tests?

I've been auditing a very popular 4.5-star Coursera course called "Learning how to learn." It uses all of the latest research to help people improve their learning skills. Yet even though the lectures are interesting and the research behind the course appears to be sound, I find it hard to agree that it actually helps people learn how to learn.

First, the tests used to determine how well participants have built the learning skills described in this course are actually tests of how well they have learned vocabulary and definitions. As far as I can tell, no skills are involved other than the ability to recall course content. This is problematic. The assumption that learning vocabulary and definitions builds skill is unwarranted. I believe we all know this. Who has not had the experience of learning something well enough to pass a test only to forget most of what they had learned shortly thereafter?

Second, the content of the tests at the end of the videos isn't particularly relevant to the stated intention of the course. These tests require remembering (or scrolling back to) facts like "Many new synapses are formed on dendrites." We do not need to learn this to become effective learners. The test item for which this is the correct answer focuses on an aspect of how learning works rather than on how to learn. And although understanding how learning works might be a step toward learning how to learn, answering this question correctly doesn't tell us how the participant understands anything at all.

Third, if the course developers had used tests of skill—tests that asked participants to show how effectively they could apply the techniques described—we would be able to ask about the extent to which the course helps participants learn how to learn. Instead, the only way we have to evaluate the effectiveness of the course is through participant ratings and comments—how much people like it. I'm not suggesting that liking a course is unimportant, but it's not a good way to evaluate its effectiveness.

Fourth, the course seems to be primarily concerned with fostering a kind of learning that helps people do better on tests of correctness. The underlying and unstated assumption seems to be that if you can do better on these tests, you have learned better. This assumption flies in the face of several decades of educational research, including our own [for example, 1, 2, 3]. Correctness is not adequate evidence of understanding or real-world skill. If we want to know how well people understand new knowledge, we must observe how they apply this knowledge in real-world contexts. If we want to evaluate their level of skill, we must observe how well they apply the skill in real-world contexts. In other words, a course—especially a course in learning how to learn—should be building useable skills that have value beyond the act of passing a test of correctness.

Fifth, the research behind this course can help us understand how learning works. At Lectica, we've used the very same information as part of the basis for our learning model, VCoL+7. But instead of using this knowledge to support the status quo—an educational system that privileges correctness over understanding and skill—we're using it to build learning tools designed to ensure that learning in school goes beyond correctness to build deep understanding and robust skill.

For the vast majority of people, schooling is not an end in itself. It is preparation for life—preparation with tomorrow's skills. It's time we held our educational institutions accountable for ensuring that students know how to learn more than correct answers. Wherever their lives take them, they will do better if equipped with understanding and skill. Correctness is not enough.

 


[1] FairTest; Mulholland, Q. (2015, May 14). The case against standardized testing. Harvard Political Review.

[2] Schwartz, M. S., Sadler, P. M., Sonnert, G., & Tai, R. H. (2009). Depth versus breadth: How content coverage in high school science courses relates to later success in college science coursework. Science Education, 93(5), 798–826.

[3] Kontra, C., Goldin-Meadow, S., & Beilock, S. L. (2012). Embodied learning across the lifespan. Topics in Cognitive Science, 4(4), 731–739.

 


Lectica’s story: long, rewarding, & still unfolding


Lectica's story started in Toronto in 1976…

Identifying the problem

During the 70s and 80s I practiced midwifery. It was a great honor to be present at the births of over 500 babies, and in many cases, follow them into childhood. Every single one of those babies was a joyful, driven, and effective "every moment" learner. Regardless of difficulty and pain they all learned to walk, talk, interact with others, and manipulate many aspects of their environment. They needed few external rewards to build these skills—the excitement and suspense of striving seemed to be reward enough. I felt like I was observing the "life force" in action.

Unfortunately, as many of these children approached the third grade (around age 8), I noticed something else—something deeply troubling. Many of the same children seemed to have lost much of this intrinsic drive to learn. For them, learning had become a chore motivated primarily by extrinsic rewards and punishments. Because this was happening primarily to children attending conventional schools (children receiving alternative instruction seemed to be exempt), it appeared that something about schooling was depriving many children of the fundamental human drive required to support a lifetime of learning and development—a drive that looked to me like a key source of happiness and fulfillment.

Understanding the problem

After my midwifery career, I flirted briefly with a career in advertising, but by the early '90s I was back in school—in a Ph.D. program in U.C. Berkeley's Graduate School of Education—where I found myself observing the same pattern I'd observed as a midwife. Both the research literature and my own lab experience exposed the early loss of students' natural love of learning. My concern was only heightened by the newly emerging trend toward high-stakes multiple-choice testing, which my colleagues and I saw as a further threat to children's natural drive to learn.

Most of the people I've spoken to about this problem have agreed that it's a shame, but few have seen it as a problem that can be solved, and many have seen it as an inevitable consequence of either mass schooling or simple maturation. But I knew it was not inevitable. Children educated in a range of alternative environments did not appear to lose their drive to learn. Additionally, above-average students in conventional schools appeared to be more likely to retain their love of learning.

I set out to find out why—and ended up on a long journey toward a solution.

How learning works

First, I needed to understand how learning works. At Berkeley, I studied a wide variety of learning theories in several disciplines, including developmental theories, behavioral theories, and brain-based theories. I collected a large database of longitudinal interviews and submitted them to in-depth analysis, looked closely at the relation between testing and learning, and studied psychological measurement, all in the interest of finding a way to support children's growth while reinforcing their love of learning.

My dissertation—which won awards from both U.C. Berkeley and the American Psychological Association—focused on the development of people's conceptions of learning from age 5 through 85, and how this kind of knowledge could be used to measure and support learning. In 1998, I received $500,000 from the Spencer Foundation to further develop the methods designed for this research. Some of my areas of expertise are human learning and development, psychometrics, metacognition, moral education, and research methods.

In the simplest possible terms, what I learned in 5 years of graduate school is that the human brain is designed to drive learning, and that preserving that natural drive requires 5 ingredients:

  1. a safe environment that is rich in learning opportunities and healthy human interaction,
  2. a teacher who understands each child's interests and level of tolerance for failure,
  3. a mechanism for determining "what comes next"—what is just challenging enough to allow for success most of the time (but not all of the time),
  4. instant actionable feedback, and 
  5. the opportunity to integrate new knowledge or skills into each learner's existing knowledge network well enough to make it useable before pushing instruction to the next level. (We call this building a "robust knowledge network"—the essential foundation for future learning.)*

Identifying the solution

Once we understood what learning should look like, we needed to decide where to intervene. The answer, when it came, was a complete surprise. Understanding what comes next—something that can only be learned by measuring what a student understands now—was an integral part of the recipe for learning. This meant that testing—which we originally saw as an obstacle to robust learning—was actually the solution—but only if we could build tests that would free students to learn the way their brains are designed to learn. These tests would have to help teachers determine "what comes next" (ingredient 3) and provide instant actionable feedback (ingredient 4), while rewarding them for helping students build robust knowledge networks (ingredient 5).

Unfortunately, conventional standardized tests were focused on "correctness" rather than robust learning, and none of them were based on the study of how targeted concepts and skills develop over time. Moreover, they were designed not to support learning, but rather to make decisions about advancement or placement, based on how many correct answers students were able to provide relative to other students. Because this form of testing did not meet the requirements of our learning recipe, we'd have to start from scratch.

Developing the solution

We knew that our solution—reinventing educational testing to serve robust learning—would require many years of research. In fact, we would be committing to possible decades of effort without a guaranteed result. It was the vision of a future educational system in which all children retained their inborn drive for learning that ultimately compelled us to move forward. 

To reinvent educational testing, we needed to:

  1. make a deep study of precisely how children build particular knowledge and skills over time in a wide range of subject areas (so these tests could accurately identify "what comes next");
  2. make tests that determine how deeply students understand what they have learned—how well they can use it to address real-world issues or problems (requires that students show how they are thinking, not just what they know—which means written responses with explanations); and
  3. produce formative feedback and resources designed to foster "robust learning" (build robust knowledge networks).

Here's what we had to invent:

  1. A learning ruler (building on Commons [1998] and Fischer [2006]);
  2. A method for studying how students learn tested concepts and skills (refining the methods developed for my dissertation);
  3. A human scoring system for determining the level of understanding exhibited in students' written explanations (building upon Commons' and Fischer's methods, refining them until measurements were precise enough for use in educational contexts); and 
  4. An electronic scoring system, so feedback and resources could be delivered in real time (see the sketch after this list).
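
To make the shape of such a pipeline concrete, here is a deliberately toy sketch in Python. It is not CLAS and does not reflect Lectica's actual scoring methods; the heuristic, the feedback text, and the function names are all invented for illustration. The only point is the flow: a written response comes in, a level estimate comes out, and feedback keyed to that estimate is returned immediately.

```python
# Toy sketch of a real-time scoring-and-feedback loop (illustrative only).
# This is NOT Lectica's CLAS system; the scoring heuristic and feedback text
# are invented purely to show the pipeline: response -> level estimate -> feedback.

FEEDBACK = {  # hypothetical feedback keyed to coarse bands on a "learning ruler"
    "early": "Try explaining why your answer is true, using an everyday example.",
    "middle": "Good explanation. Now connect it to a related idea you already know.",
    "late": "Strong. Consider a case where your explanation might break down.",
}

def estimate_level(response: str) -> str:
    """Crude stand-in for an electronic scoring model (invented heuristic)."""
    words = response.split()
    causal_markers = sum(w.lower().strip(",.") in {"because", "so", "therefore"} for w in words)
    if causal_markers == 0:
        return "early"
    return "middle" if len(words) < 60 else "late"

def score_and_respond(response: str) -> dict:
    """Return an immediate level estimate plus formative feedback."""
    level = estimate_level(response)
    return {"level": level, "feedback": FEEDBACK[level]}

print(score_and_respond("The pan will move down because rusted steel is heavier."))
```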

It took over 20 years (1996–2016), but we did it! And while we were doing it, we conducted research. In fact, our assessments have been used in dozens of research projects, including a $25 million study of literacy conducted at Harvard, and numerous Ph.D. dissertations—with more on the way.

What we've learned

We've learned many things from this research. Here are some that took us by surprise:

  1. Students in schools that focus on building deep understanding graduate seniors who are up to 5 years ahead (on our learning ruler) of students in schools that focus on correctness (2.5 to 3 years after taking socioeconomic status into account).
  2. Students in schools that foster robust learning develop faster and continue to develop longer (into adulthood) than students in schools that focus on correctness.
  3. On average, students in schools that foster robust learning produce more coherent and persuasive arguments than students in schools that focus on correctness.
  4. On average, students in our inner-city schools, which are the schools most focused on correctness, stop developing (on our learning ruler) in grade 10. 
  5. The average student who graduates from a school that strongly focuses on correctness is likely, in adulthood, to (1) be unable to grasp the complexity and ambiguity of many common situations and problems, (2) lack the mental agility to adapt to changes in society and the workplace, and (3) dislike learning. 

From our perspective, these results point to an educational crisis that can best be addressed by allowing students to learn as their brains were designed to learn. Practically speaking, this means providing learners, parents, teachers, and schools with metrics that reward and support teaching that fosters robust learning. 

Where we are today

Lectica has created the only metrics that meet all of these requirements. Our mission is to foster greater individual happiness and fulfillment while preparing students to meet 21st century challenges. We do this by creating and delivering learning tools that encourage students to learn the way their brains were designed to learn. And we ensure that students who need our learning tools the most get them first by providing free subscriptions to individual teachers everywhere.

To realize our mission, we organized as a nonprofit. We knew this choice would slow our progress (relative to organizing as a for-profit and welcoming investors), but it was the only way to guarantee that our true mission would not be derailed by other interests.

Thus far, we've funded ourselves with work in the for-profit sector and income from grants. Our background research is rich, our methods are well-established, and our technology works even better than we thought it would. Last fall, we completed a demonstration of our electronic scoring system, CLAS, a novel technology that learns from every single assessment taken in our system. 

The groundwork has been laid, and we're ready to scale. All we need is the platform that will deliver the assessments (called DiscoTests), several of which are already in production.

After 20 years of high stakes testing, students and teachers need our solution more than ever. We feel compelled to scale as quickly as possible, so we can begin the process of reinvigorating today's students' natural love of learning and ensure that the next generation of students never loses theirs. Lectica's story isn't finished. Instead, we find ourselves on the cusp of a new beginning!

Please consider making a donation today.

 


A final note: There are many benefits associated with our approach to assessment that were not mentioned here. For example, because the assessment scores are all calibrated to the same learning ruler, students, teachers, and parents can easily track student growth. Even better, our assessments are designed to be taken frequently and to be embedded in low-stakes contexts. For grading purposes, teachers are encouraged to focus on growth over time rather than specific test scores. This way of using assessments pretty much eliminates concerns about cheating. And finally, the electronic scoring system we developed is backed by the world's first "taxonomy of learning," which also serves many other educational and research functions. It's already spawned a developmentally sensitive spell-checker! One day, this taxonomy of learning will be robust enough to empower teachers to create their own formative assessments on the fly. 

 


*This is the ingredient that's missing from current adaptive learning technologies.

 


Adaptive learning. Are we there yet?

Adaptive learning technologies are touted as an advance in education and a harbinger of what's to come. But although we at Lectica agree that adaptive learning has a great deal to offer, we have some concerns about its current limitations. In an earlier article, I raised the question of how well one of these platforms, Knewton, serves "robust learning"—the kind of learning that leads to deep understanding and usable knowledge. Here are some more general observations.

The great strength of adaptive learning technologies is that they allow students to learn at their own pace. That's big. It's quite enough to be excited about, even if it changes nothing else about how people learn. But in our excitement about this advance, the educational community is in danger of ignoring important shortcomings of these technologies.

First, adaptive learning technologies are built on adaptive testing technologies. Today, these testing technologies are focused on "correctness." Students are moved to the next level of difficulty based on their ability to get correct answers. This is what today's testing technologies measure best. However, although being able to produce or select correct answers is important, it is not an adequate indication of understanding. And without real understanding, knowledge is not usable and can't be built upon effectively over the long term.
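
As a minimal illustration of that correctness-driven mechanism, consider the sketch below. It is illustrative only; real adaptive testing engines typically rely on item response theory rather than this simple up/down rule.

```python
# Toy sketch of correctness-driven adaptive item selection (illustrative only;
# real adaptive testing engines rely on item response theory, not this up/down rule).
def next_difficulty(current_level: int, answered_correctly: bool) -> int:
    """Move up one difficulty level after a correct answer, down one after a miss."""
    return current_level + 1 if answered_correctly else max(1, current_level - 1)

level = 3
for correct in [True, True, False, True]:  # hypothetical response pattern
    level = next_difficulty(level, correct)
print("Next item difficulty:", level)      # 3 -> 4 -> 5 -> 4 -> 5
```

Notice that nothing in this loop asks how the student understands the material; a correct answer moves the student forward regardless of the reasoning behind it.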

Second, today's adaptive learning technologies are focused on a narrow range of content—the kind of content psychometricians know how to build tests for—mostly math and science (with an awkward nod to literacy). In public education during the last 20 years, we've experienced a gradual narrowing of the curriculum, largely because of high stakes testing and its narrow focus. Today's adaptive learning technologies suffer from the same limitations and are likely to reinforce this trend.

Third, the success of adaptive learning technologies is measured with standardized tests of correctness. Higher scores will help more students get into college—after all, colleges use these tests to decide who will be admitted. But we have no idea how well higher scores on these tests translate into life success. Efforts to demonstrate the relevance of educational practices are few and far between. And notably, there are many examples of highly successful individuals who were poor players in the education game—including several of the world's most productive and influential people.

Fourth, some proponents of online adaptive learning believe that it can and should replace (or marginalize) teachers and classrooms. This is concerning. Education is more than a process of accumulating facts. For one thing, it plays an enormous role in socialization. Good teachers and classrooms offer students opportunities to build knowledge while learning how to engage and work with diverse others. Great teachers catalyze optimal learning and engagement by leveraging students' interests, knowledge, skills, and dispositions. They also encourage students to put what they're learning to work in everyday life—both on their own and in collaboration with others.

Lectica has a strong interest in adaptive learning and the technologies that deliver it. We anticipate that over the next few years, our assessment technology will be integrated into adaptive learning platforms to help expand their subject matter and ensure that students are building robust, usable knowledge. We will also be working hard to ensure that these platforms are part of a well-thought-out, evidence-based approach to education—one that fosters the development of tomorrow's skills—the full range of skills and knowledge required for success in a complex and rapidly changing world.


Four keys to optimizing learning & development

There are four keys to optimizing learning and development and ensuring that it continues over a lifetime. 

  1. Don't cram content. Learning doesn't work optimally when it is rushed or when learners are over-stressed. In Finland, students only go to school three 6-hour days a week, rarely have homework, and do better on PISA than students anywhere else in the world. (Unfortunately, PISA primarily measures correctness, but it's the best international metric we have at present.) Their educational system is focused on building students' knowledge networks. Students don't move on to the next level until they master the current level. The Finns have figured out what our research shows—stuffing content has the long-term effect of slowing or halting development, while a focus on building knowledge networks leads to a steeper learning trajectory and a lifetime of learning and development.


  2. Focus on the network. To learn very large quantities of information, we must effectively recruit System 1 (the fast unconscious brain). System 1 makes associations. (Think of a neural network.) When we learn content through VCoL, we network System 1, connecting new content to already networked content in a way that creates a foundation for what comes next. This does not happen robustly without VCoL, which builds and solidifies the network through application/practice and reflection. System 1 can handle vast amounts of information and processes it rapidly. It serves us well when we learn well.
  3. Make reflection a part of every learning moment. People cannot reason well about things they don't understand well. When we foster deep understanding through VCoL (and the +7 skills), we recruit System 2 (the slow reasoning brain) to consciously shape the creation and modification of connections in System 1—ensuring that our network of knowledge is growing in a way that mirrors "reality." The constant practice of analytical and reflective skills not only builds a robust network, but also increases our capacity for making reasonable connections and inferences and enhances our mental agility and capacity for making useful intuitive "leaps." We learn to think by thinking—and we think better when we have a robust knowledge network to rely on.
  4. Educate the whole person. We believe that education should focus on the development of the entire human being. This means supporting the development of competent, compassionate, aware, and attentive human beings who work well with others. A good way to develop these qualities is through embedded practices that foster interpersonal awareness and skill, such as collaborative or shared learning. These practices provide another benefit as well. They tend to excite emotions that are known to enhance learning.

 


How to accelerate growth

The best way we know of to accelerate growth is to slow down and teach in ways that foster deep understanding. It may be counterintuitive, but slow learning really does accelerate growth!

In the post entitled "If you want students to develop faster, stop trying to speed up learning," I presented evidence that schools with curricula that promote deep understanding accelerate growth relative to schools with more of a focus on covering required content. In this post, I'm going to explain what we've learned so far about the relation between deep understanding and the rate of development. (I recommend reading the earlier post before trying to make sense of this one.)

Lectica's learning model, the Virtuous Cycle of Learning and +7 skills (VCoL+7), emphasizes the importance of giving learners ample opportunity to build deep understanding through cycles of goal setting, information gathering, application, and reflection. We argue that evidence of deep understanding can be seen in the coherence of students' arguments—you can't explain or defend an idea coherently if you don't understand it. Furthermore, because poorly understood ideas provide a weak foundation for future learning, we hypothesized that over time students who demonstrate lower levels of understanding—through the coherence of their arguments—will grow more slowly than students who demonstrate higher levels of understanding.* 

We tested this hypothesis by examining assessment results from students attending low SES (socio-economic status) inner city schools. Each student had taken the LRJA (our reflective judgment assessment) 3 times over 3 1/2 years. Some of these students were in grade 4 at time 1, and some were in grade 6 at time 1. Each LRJA performance received two scores: one for its developmental level (shown on the vertical axis in the graphic below) and one for its logical coherence, rated on a 10-point scale.

We conducted a hierarchical regression analysis that examined the relation between time 1 argumentation score and developmental growth (after controlling for developmental level at time 1).
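
For readers who want a concrete picture of this kind of analysis, here is a minimal sketch in Python using statsmodels. It is illustrative only: the file name, column names, and the hypothetical starting level are assumptions, not our actual data or code.

```python
# Minimal sketch of the hierarchical regression described above (illustrative only;
# file name, column names, and values are assumptions, not Lectica's actual code or data).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, with time 1 and time 3 scores.
df = pd.read_csv("lrja_scores.csv")  # assumed columns: level_t1, level_t3, coherence_t1

# Step 1: control for developmental level at time 1.
step1 = smf.ols("level_t3 ~ level_t1", data=df).fit()

# Step 2: add time 1 coherence (depth of understanding) to see what it explains
# over and above starting level.
step2 = smf.ols("level_t3 ~ level_t1 + coherence_t1", data=df).fit()

print("R-squared gain from coherence:", step2.rsquared - step1.rsquared)

# Predicted growth for three hypothetical students who start at the same level
# but differ in time 1 coherence (5.5, 6.5, 7.5), mirroring the curves in the figure.
hypothetical = pd.DataFrame({
    "level_t1": [10.0, 10.0, 10.0],   # assumed common starting level
    "coherence_t1": [5.5, 6.5, 7.5],
})
print(step2.predict(hypothetical))
```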

For the figure below, I've borrowed the third graph from the "stop trying to speed up learning" post, faded it into the background, then superimposed growth curves predicted by the hierarchical regression model for three hypothetical students receiving time 1 coherence scores of 5.5, 6.5, and 7.5.* These values were selected because they are close to the actual time 1 coherence scores for the three groups of students in the background graphic. (Actual average time 1 scores are shown on the right.) 

[Figure: Graph showing a one-year advantage in grade 8 for students whose argumentation scores in grade 4 were 2 points higher than those of other students.]

As you can see, the distance between grade 8 scores predicted by the hierarchical regression is a bit less than half of the difference between the actual average scores in the background image. What this means is that in grade 8, a bit less than half of the difference between students in the three types of schools is explained by depth of understanding (as captured by our measure of coherence). 

In my earlier post, "If you want students to develop faster, stop trying to speed up learning," I concluded that socioeconomic status could not be the main cause of the differences in growth curves for different kinds of schools, because two of the groups we compared were not socio-economically different. The results of the analysis shown in this post suggest that almost half of the difference is due to the different levels of understanding reflected in coherence scores. This result supports the hypothesis that it is possible to accelerate development by increasing the depth of students' understanding.

We cannot even attempt to explain the remaining differences between school groups without controlling for the effects of socio-economic status and English proficiency. We'll do that as soon as we've finished rating the logical coherence of performances from a larger sample of students representing all three types of schools featured in this analysis. Stay tuned!


Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.


*You can learn more about our developmental scale on lecticalive's skill levels page, and our argumentation scales are described in the video, New evidence that robust knowledge networks support development.

 


If you want students to develop faster, stop trying to speed up learning

During the last 20 years—since high stakes testing began to take hold—public school curricula have undergone a massive transformation. Standards have pushed material that was once taught in high school down into the 3rd and 4th grade, and the amount of content teachers are expected to cover each year has increased steadily. The theory behind this trend appears to be that learning more content and learning it earlier will help students develop faster.

But is this true? Is there any evidence at all that learning more content and learning it earlier produces more rapid development? If so, I haven't seen it.

In fact, our evidence points to the opposite conclusion. Learning more and learning it earlier may actually be interfering with the development of critical life skills—like those required for making good decisions in real-life contexts. As the graph below makes clear, students in schools that emphasize covering required content do not develop as rapidly as students in schools that focus on fostering deep understanding—even though learning for understanding generally takes more time than learning something well enough to "pass the test."

What is worse, we're finding that the average student in schools with the greatest emphasis on covering required content appears to stop developing by the end of grade 10, with an average score of 10.1. This is the same score received by the average 6th grader in schools with the greatest emphasis on fostering deep understanding.

The graphs in this post are based on data from 17,755 LRJA assessments. The LRJA asks test-takers to respond to a complex real-life dilemma. They are prompted to explore questions about:

  • finding, creating, and evaluating information and evidence,
  • perspectives, persuasion, and conflict resolution,
  • when and if it's possible to be certain, and
  • the nature of facts, truth, and reality.

Students were in grades 4-12, and attended one or more of 56 schools in the United States and Canada.

The graphs shown above represent two groups of schools—those with students who received the highest scores on the LRJA and those with students who received the lowest scores. These schools differed from one another in two other ways. First, the highest performing schools were all private schools*. Most students in these schools came from upper middle SES (socio-economic status) homes. The lowest performing schools were all public schools primarily serving low SES inner city students.

The second way in which these schools differed was in the design of their curricula. The highest performing schools featured integrated curricula with a great deal of practice-based learning and a heavy emphasis on fostering understanding and real-world competence. All of the lowest performing schools featured standards-focused curricula with a strong emphasis on learning the facts, formulas, procedures, vocabulary, and rules targeted by state tests.

Based on the results of conventional standardized tests, we expected most of the differences between student performances on the LRJA in these two groups of schools to be explained by SES. But this was not the case. Private schools with more conventional curricula and high performing public schools serving middle and upper middle SES families did indeed outperform the low SES schools, but as shown in the graph below, by grade 12, their students were still about 2.5 years behind students in the highest performing schools. At best, SES explains only about 1/2 of the difference between the best and worst schools in our database. (For more on this, see the post, "Does a focus on deep understanding accelerate growth?")

By the way, the conventional standardized test scores of students in this middle group, despite their greater emphasis on covering content, were no better than the conventional standardized test scores of students in the high performing group. Focusing on deep understanding appears to help students develop faster without interfering with their ability to learn required content.

This will not be our last word on the subject. As we scale our K-12 assessments, we'll be able to paint an increasingly clear picture of the developmental impact of a variety of curricula.


Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.


*None of these schools pre-selected their students based on test scores. 

See a version of this article on Medium.


Lectica’s Human Capital Value Chain—for organizations that are serious about human development

Lectica's tools and services have powerful applications for every process in the human capital value chain. I explain how in the following video.

For links to more information see the HCVC page on Lecticalive. For references that support claims made in the video, see the post—Introducing LecticaFirst.

 


The rate of development

An individual's rate of development is affected by a wide range of factors. Twin studies suggest that about 50% of the variation in Lectical growth trajectories is likely to be predicted by genetic factors. The remaining variation is explained by environmental factors, including the environment in the womb, the home environment, parenting quality, educational quality & fit, economic status, diet, personal learning habits, and aspects of personality.

Each Lectical Level takes longer to traverse than the previous level. This is because development through each successive level involves constructing increasingly elaborated and abstract knowledge networks. Don't be fooled by the slow growth, though. A little growth can have an important impact on outcomes. For example, small advances in level 11 can make a big difference in an individual's capacity to work effectively with complexity and change.

[Figure: Growth trajectories over the lifespan]

The graphs above show possible learning trajectories, first, for the lifespan and second, for ages 10-60. Note that the highest age shown on these graphs is 60. This does not mean that individuals cannot develop after the age of 60.

The yellow circle in each graph represents a Lectical Score and the confidence interval around that score. That's the range in which the "true score" would most likely fall. When interpreting any test score, you should keep the confidence interval in mind.  
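
As a minimal illustration of how such an interval is read (the observed score and the standard error of measurement below are assumed values, not Lectica's published figures):

```python
# Illustrative only: reading a confidence interval around an observed score.
# The observed score and standard error of measurement (SEM) are assumed values.
observed_score = 10.25   # hypothetical Lectical Score
sem = 0.15               # assumed standard error of measurement

# A conventional 95% interval spans roughly +/- 1.96 SEM around the observed score.
low, high = observed_score - 1.96 * sem, observed_score + 1.96 * sem
print(f"The 'true score' most likely falls between {low:.2f} and {high:.2f}")
```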

Test results are not tidy

When we measure development over short time spans, it does not look smooth. The kind of pattern shown in the following graph is more common. However, we have found that growth appears a bit smoother for adults than for children. We think this is because children, for a variety of reasons, are less likely to do their best work on every testing occasion.

[Figure: Report card showing jagged growth]

Factors that increase the rate of development

  • The test-taker's current developmental trajectory. (A person whose history places her on the green curve in the first two graphs is unlikely to jump to the blue curve.)
  • The amount of reflective activity (especially VCoLing) the individual typically engages in (no reflective activity, no growth)
  • Participation in deliberate learning activities that include lots of reflective activity (especially VCoLing)
  • Participating in supported learning (coaching, mentoring) after a long period of time away from formal education (can create a spurt)

 


Learning and metacognition

Metacognition is thinking about thinking. Metacognitive skills are an interrelated set of competencies for learning and thinking, and include many of the skills required for active learning, critical thinking, reflective judgment, problem solving, and decision-making. People whose metacognitive skills are well developed are better problem-solvers, decision makers and critical thinkers, are more able and more motivated to learn, and are more likely to be able to regulate their emotions (even in difficult situations), handle complexity, and cope with conflict. Although metacognitive skills, once they are well-learned, can become habits of mind that are applied unconsciously in a wide variety of contexts, it is important for even the most advanced learners to “flex their cognitive muscles” by consciously applying appropriate metacognitive skills to new knowledge and in new situations.

Lectica's learning model, VCoL+7 (the virtuous cycle of learning and +7 skills), leverages metacognitive skills in a number of ways. For example, the fourth step in VCoL is reflection & analysis, and the +7 skills include reflective disposition, self-monitoring and awareness, and awareness of cognitive and behavioral biases.

Learn more

 

Learning in the workplace occurs optimally when the learner has a reflective disposition and receives both institutional and educational support


Correctness versus understanding

Recently, I was asked by a colleague for a clear, simple example that would show how DiscoTest items differ from the items on conventional standardized tests. My first thought was that this would be impossible without oversimplifying. My second thought was that it might be okay to oversimplify a bit. So, here goes!

The comparison below lists four differences between what Lectica measures and what is measured by other standardized assessments.1 The descriptions are simplified and lack nuance, but the distinctions are accurate.

  • Scores represent. Lectical Assessments: level of understanding based on a valid learning scale. Other standardized assessments: number of correct answers.
  • Target. Lectical Assessments: the depth of an individual's understanding, demonstrated in the complexity of arguments and the way the test taker works with knowledge. Other standardized assessments: the ability to recall facts, or to apply rules, definitions, or procedures, demonstrated by correct answers.
  • Format. Lectical Assessments: paragraph-length written responses. Other standardized assessments: primarily multiple choice or short written answers.2
  • Responses. Lectical Assessments: explanations, applications, and transfer. Other standardized assessments: right/wrong judgments or right/wrong applications of rules and procedures.

The example

I chose a scenario-based example that we're already using in an assessment of students' conceptions of the conservation of matter. We borrowed the scenario from a pre-existing multiple choice item.

The scenario

Sophia balances a pile of stainless steel wire against a pile of ordinary steel wire on a scale. After a few days, the ordinary wire in the pan on the right starts rusting.

Conventional multiple choice question

What will happen to the pan with the rusting wire?

  1. The pan will move up.
  2. The pan will not move.
  3. The pan will move down.
  4. The pan will first move up and then down.
  5. The pan will first move down and then up.

(Go ahead, give it a try! Which answer would you choose?)

Lectical Assessment question

What will happen to the height of the pan with the rusting wire? Please explain your answer thoroughly.

Here are three examples of responses from 12th graders.

Lillian: The pan will move down because the rusted steel is heavier than the plain steel.


Josh: The pan will move down, because when iron rusts, oxygen atoms get attached to the iron atoms. Oxygen atoms don't weigh very much, but they weigh a bit, so the rusted iron will "gain weight," and the scale will go down a bit on that side.

Ariana: The pan will go down at first, but it might go back up later. When iron oxidizes, oxygen from the air combines with the iron to make iron oxide. So, the mass of the wire increases, due to the mass of the oxygen that has bonded with the iron. But iron oxide is non-adherent, so over time the rust will fall off of the wire. If the metal rusts for a long time, some of the rust will become dust and some of that dust will very likely be blown away.

Debrief

The correct answer to the multiple choice question is, "The pan will move down."

There is no single correct answer to the Lectical Assessment item. Instead, there are answers that reveal different levels of understanding. Most readers will immediately see that Josh's answer reveals more understanding than Lillian's, and that Ariana's reveals more understanding than Josh's.

You may also notice that Ariana's written response would result in her selecting one of the incorrect multiple-choice answers, and that Lillian and Josh are given equal credit for correctness even though their levels of understanding are not equally sophisticated.

Why is all of this important?

  • It's not fair! The multiple choice item cheats Ariana of the chance to show off what she knows, and it treats Lillian and Josh as if their level of understanding is identical.
  • The multiple choice item provides no useful information to students or teachers! The most we can legitimately infer from a correct answer is that the student has learned that when steel rusts, it gets heavier. This correct answer is a fact. The ability to identify a fact does not tell us how it is understood.
  • Without understanding, knowledge isn't useful. Facts that are not supported with understanding are useful on Jeopardy, but less so in real life. Learning that does not increase understanding or competence is a tragic waste of students' time.
  • Despite clear evidence that correct answers on standardized tests do not measure understanding and are therefore not a good indicator of useable knowledge or competence, we continue to use scores on these tests to make decisions about who will get into which college, which teachers deserve a raise, and which schools should be closed. 
  • We value what we measure. As long as we continue to measure correctness, school curricula will emphasize correctness, and deeper, more useful, forms of learning will remain relatively neglected.

None of these points is particularly controversial. Most educators agree on the importance of understanding and competence. What's been missing is the ability to measure understanding at scale and in real time. Lectical Assessments are designed to fill this gap.

 


1Many alternative assessments are designed to measure understanding—at least to some degree—but few of these are standardized or scalable. 

2See my examination of a PISA item for an example of a typical written response item from a highly respected standardized test.
