Straw men and flawed metrics

Ten years ago, Kirschner, Sweller, and Clark published an article entitled "Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching."

In this article, Kirschner and his colleagues contrast outcomes for what they call "guided instruction" (lecture and demonstration) with those from constructivism-based instruction. They conclude that constructivist approaches produce inferior outcomes.

The article suffers from at least three serious flaws:

First, the authors, in making their distinction between guided instruction and constructivist approaches, have created a caricature of constructivist approaches. Very few experienced practitioners of constructivist, discovery, problem-based, experiential, or inquiry-based teaching would characterize their approach as minimally guided. "Differently guided" would be a more appropriate term. Moreover, most educators who use constructivist approaches include lecture and demonstration where these are appropriate.

Second, the research reviewed by the authors was fundamentally flawed. For the most part, the metrics employed to evaluate different styles of instruction were not reasonable measures of the kind of learning constructivist instruction aims to support—deep understanding (the ability to apply knowledge effectively in real-world contexts). They were measures of memory or attitude. Back in 2010, Stein, Fisher, and I argued that metrics can't produce valid results if they don't actually measure what we care about (Redesigning testing: Operationalizing the new science of learning. Why isn't this a no-brainer?).

And finally, the longitudinal studies Kirschner and his colleagues reviewed had short time spans. None of them examined the long-term impacts of different forms of instruction on deep understanding or long-term development. This is a big problem for learning research—one that is often acknowledged, but rarely addressed.

Since Kirschner's article was published in 2006, we've had an opportunity to examine the difference between schools that provide different kinds of instruction, using assessments that measure the depth and coherence of students' understanding. We've documented a 3 to 5 year advantage, by grade 12, for students who attend schools that emphasize constructivist methods vs. those that use more "guided instruction."

To learn more, see:

Are our children learning robustly?

Lectica rationale


Lectica basics for schools

If you are a school leader, this post is for you. Here, you'll find information about Lectica, its mission, and our first electronically scored Lectical Assessment—the LRJA.

Background

Lectica, Inc. is a 501(c)(3) charitable corporation. Its mission is to build and deliver learning tools that help students build skills for thinking and learning. These learning tools are backed by a strong learning model—the Virtuous Cycle of Learning (VCoL+7™)—and a comprehensive vision for educational testing and learning, which you can learn more about in our white paper, Virtuous cycles of learning: Redesigning testing during the digital revolution.

We have spent over 20 years developing our methods and the technology required to deliver our learning tools—known as Lectical™ Assessments or DiscoTests®—at scale. These assessments are backed by a large body of research, including ongoing investigations of their validity and reliability. Here are some links to research reports:

The following video provides an overview of our research and mission:

Current offerings

This fall, we're introducing our first electronically scored Lectical Assessment—the LRJA (an assessment of reflective judgment/critical thinking). The LRJA can be used as a summative assessment in research and program evaluation, as a formative assessment in the classroom, or both.

The best way to learn about the LRJA is to experience it first-hand at lecticalive. Just click on this link, then select the "go straight to the demo" button. On the next page, fill in the sign-up form with the educational level of your choice. Click "submit", then click on the "autofill" button (top right, under the header) to fill the responses form with an example.

If you're interested in working with the LRJA or would like to learn more about using Lectical Assessments to optimize thinking and learning, please contact us.

Robust knowledge networks catalyze development

Lectica's learning model, VCoL+7, emphasizes the importance of giving students ample opportunity to build well-connected knowledge networks through application and reflection. We argue that evidence of the level of integration in students' knowledge networks can be seen in the quality of their argumentation. In other words, we think of poor arguments as a symptom of poor integration. In the research reported in the video below, we asked if students' ability to make good arguments predicts their rate of growth on the Lectical Scale. 
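To make the question concrete, here is a minimal sketch in Python of the general shape of such an analysis. The data and numbers below are entirely synthetic, and this is an illustration of the idea rather than our actual study design or dataset: estimate each student's growth rate over repeated assessments, then ask whether argumentation scores predict those rates.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic (invented) data: one argumentation-quality score per student,
# plus scale scores at four yearly time points.
n_students = 50
argumentation = rng.uniform(0.0, 1.0, n_students)
years = np.arange(4)

# Simulate growth whose rate loosely tracks argumentation quality.
baseline = 1050 + rng.normal(0, 15, n_students)
rate = 10 + 8 * argumentation + rng.normal(0, 2, n_students)
scores = baseline[:, None] + rate[:, None] * years  # shape: (students, years)

# Step 1: estimate each student's growth rate (slope of score over time).
slopes = np.array([np.polyfit(years, s, deg=1)[0] for s in scores])

# Step 2: does argumentation quality predict that growth rate?
r = np.corrcoef(argumentation, slopes)[0, 1]
print(f"correlation between argumentation quality and growth rate: r = {r:.2f}")
```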

Second language learning predicts the growth of critical thinking

On November 20th, 2016, we presented a paper at the ACTFL conference in Boston. In this paper, we described the results of a 4-year research project designed to address the question, "Does second language learning support the development of critical thinking as measured by the LRJA?" To learn more, view the presentation below.




Growing into compassion

Since 2008, Lectica has been working with Dr. Sharon Solloway to study and support the development of mindfulness, reflective judgment, and compassion. 

Recently, we presented some of our results at the 2016 Mind & Life (ISCS) conference. Here is a recording of our presentation:


What do we mean by “embodied” learning?

There's a lot of talk about "embodied" learning these days, and it doesn't seem like there's much consensus about what it means. Since we sometimes use the term alongside "optimal learning" and "robust learning," I think it's time we offered a clear definition.

Take a close look at the activity in the lesson shown above. I found it on the Shorewood School District's website. The lesson depicted in this photo is an excellent example of embodied learning in action. Note the many ways in which students are engaged. They are trying to solve a problem: "What do we need to do to pick up this cup?" This problem has kinesthetic, mathematical, mechanical, and collaborative components. Minimally, the students are intellectually, physically, and socially engaged. And I'm sure they're emotionally engaged as well—I can practically feel their hearts beating faster as they get closer to their goal.

These children aren't just thinking about a solution, they're living the solution. What they learn is wired into their neural net at every level. It's not just an intellectual experience. It's embodied. This is what we call optimal or robust learning. It's the kind of learning we measure, support, and reward with Lectical Assessments.

Adaptive learning, big data, and the meaning of learning

Knewton defines adaptive learning as "A teaching method premised on the idea that the curriculum should adapt to each user." In a recent blog post, Knewton's COO, David Liu, expanded on this definition. Here are some extracts:

You have to understand and have real data on content… Is the instructional content teaching what it was intended to teach? Is the assessment accurate in terms of what it’s supposed to assess? Can you calibrate that content at scale so you’re putting the right thing in front of a student, once you understand the state of that student? 

On the other side of the equation, you really have to understand student proficiency… understanding and being able to predict how that student is going to perform, based upon what they’ve done and based upon that content that I talked about before. And if you understand how well the student is performing against that piece of content, then you can actually begin to understand what that student needs to be able to move forward.
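As a concrete (and deliberately oversimplified) illustration of what "understanding student proficiency" can mean, here is a toy sketch in Python. This is not Knewton's actual model; it is just the simplest version of the idea, a Beta-Bernoulli estimate that updates a student's predicted probability of answering correctly as responses come in.

```python
class ProficiencyEstimate:
    """Toy estimate of P(correct) for one student on one concept.

    An illustrative Beta-Bernoulli model, not Knewton's method.
    """

    def __init__(self, prior_correct: float = 1.0, prior_incorrect: float = 1.0):
        # Beta(a, b) prior; a / (a + b) is the initial estimate.
        self.a = prior_correct
        self.b = prior_incorrect

    def update(self, answered_correctly: bool) -> None:
        # Each observed response shifts the estimate.
        if answered_correctly:
            self.a += 1
        else:
            self.b += 1

    @property
    def p_correct(self) -> float:
        return self.a / (self.a + self.b)


# After five responses (four correct, one not), the estimate is ~0.71.
est = ProficiencyEstimate()
for outcome in [True, True, False, True, True]:
    est.update(outcome)
print(f"estimated P(correct) = {est.p_correct:.2f}")
```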

The idea of putting the right thing in front of a student is very cool. That's part of what we do here at Lectica. But what does Knewton mean by learning?

Curiosity got the better of me, so I set out to do some investigating. 

What does Knewton mean by learning?

In Knewton's white paper on adaptive learning, the authors do a great job describing how their technology works.

To provide continuously adaptive learning, Knewton analyzes learning materials based on thousands of data points — including concepts, structure, difficulty level, and media format — and uses sophisticated algorithms to piece together the perfect bundle of content for each student, constantly. The system refines recommendations through network effects that harness the power of all the data collected for all students to optimize learning for each individual student.

They go on to discuss several impressive technological innovations. I have to admit, the technology is cool, but what is their learning model and how is Knewton's technology being used to improve learning and teaching?

Unfortunately, Knewton does not seem to operate with a clearly articulated learning model in mind. In any case, I couldn't find one. But based on the sample items and feedback examples shown in their white paper and on their site, what Knewton means by learning is the ability to consistently get right answers on tests and quizzes, and the way to learn (get more answers right) is to get more practice on the kinds of items students are not yet consistently getting right.
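In code, the selection logic I'm describing would look something like the toy Python sketch below. To be clear, this is my own hypothetical reconstruction of the inferred model, not anything taken from Knewton's systems: always serve more practice on whatever the student gets right least often.

```python
def next_skill(accuracy_by_skill: dict[str, float]) -> str:
    """Pick the skill with the lowest observed accuracy.

    A hypothetical reconstruction of the inferred "learning = right
    answers" model, not code from any real adaptive-learning product.
    """
    return min(accuracy_by_skill, key=accuracy_by_skill.get)


student = {"fractions": 0.95, "decimals": 0.60, "ratios": 0.80}
print(next_skill(student))  # -> "decimals": drill what isn't yet "right"
```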

In fact, Knewton appears to be a high tech application of the content-focused learning model that's dominated public education since No Child Left Behind—another example of what it looks like when we throw technology at a problem without engaging in a deep enough analysis of that problem.

We're in the middle of an education crisis, but it's not because children aren't getting enough answers right on tests and quizzes. It's because our efforts to improve education consistently fail to ask the most important questions, "Why do we educate our children?" and "What are the outcomes that would be genuine evidence of success?"

Don't get me wrong. We love technology, and we leverage it shamelessly. But we don't believe technology is the answer. The answer lies in a deep understanding of how learning works and what we need to do to support the kind of learning that produces outcomes we really care about. 


Meet Nate Bowling—teacher of the year

Nate is the kind of teacher every child needs and deserves. We want to (1) help all teachers build skills like Nate's, and (2) remove some of the barriers he's concerned about.

Decision-making under VUCA conditions


I was recently asked if there is a decision-making approach that’s designed specifically for situations characterized by volatility, uncertainty, complexity, and ambiguity (VUCA). I don’t know of a one-size-fits-all solution, but I can speak to what’s needed to optimize decisions made in VUCA conditions. Here are the main ingredients:

Agility

  1. The ability to adjust one’s decision-making approach to meet the demands of a particular problem: For example, some problems must be addressed immediately and autocratically, others are best addressed more collaboratively and with a greater focus on data collection and perspective seeking.
  2. The ability to make high-quality autocratic decisions: By setting up systems that keep stakeholders continuously apprised of one another’s perspectives and data, we can improve the quality of autocratic decisions by ensuring that there are fewer surprises and that rapid decisions are informed decisions.
  3. Dynamic steering: Every leader in an organization should be constantly cultivating this skill. It increases the agility of teams and organizations by building skill for efficient decision-making and timely adjustment.

The most complete information possible (under conditions in which complete information is impossible), which requires:

  1. Collaborative capacity: highly complex problems, by definition, are beyond the comprehension of even the most developed individuals. Collaborative skills ensure that leaders can effectively leverage key perspectives.
  2. Systems and structures that foster ongoing two-way communication up and down the organizational hierarchy, across departments, divisions, and teams, and between internal and external stakeholders.
  3. Systems and structures that cultivate excellent perspective-taking and -seeking skills. These include…
    • Building in opportunities for collaborative decision-making,  
    • “Double linking”—the formal inclusion, in high-stakes or policy decision-making, of representatives from lower and higher levels in the organizational hierarchy or from cross-disciplinary teams, and
    • Embedding virtuous cycles to ensure that all processes are continuously moving toward higher functioning states, and that employees are constantly building knowledge and skills.

Where appropriate, technologies for constructing models of highly complex problems:

  • For a comprehensive overview of options, see Decision Making Under Uncertainty: Theory and Application, by Mykel J. Kochenderfer.
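For a flavor of what such model-building technologies look like at their simplest, here is a toy Python sketch of value iteration on a two-state Markov decision process, one of the foundational techniques Kochenderfer covers. The states, actions, and numbers are invented for illustration; real VUCA problems call for the much richer models (POMDPs, for example) developed in the book.

```python
import numpy as np

# Toy MDP, invented for illustration.
# states: 0 = "stable", 1 = "volatile"; actions: 0 = "wait", 1 = "act"
# T[s, a, s'] = transition probability; R[s, a] = immediate reward
T = np.array([
    [[0.9, 0.1], [0.7, 0.3]],   # from "stable"
    [[0.2, 0.8], [0.6, 0.4]],   # from "volatile"
])
R = np.array([
    [1.0, 0.5],    # rewards in "stable"
    [-1.0, -0.2],  # rewards in "volatile"
])
gamma = 0.95  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' T[s, a, s'] * V[s']
    Q = R + gamma * (T @ V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal values:", V.round(2))
print("optimal policy (0=wait, 1=act):", policy)
```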

Our flagship adult assessment, the Leadership Decision-Making Assessment (LDMA), was designed for the US government to document and assess the level of sophistication individuals and teams demonstrate on key skills for making optimal decisions in VUCA conditions.