The dark? side of Lectical Assessment

Recently, members of our team at Lectica have been discussing potential misuses of Lectical Assessments, and exploring the possibility that they could harm some students. These are serious concerns that require careful consideration and discussion, and I urge readers to pitch in.

One of the potential problems we've discussed is the possibility that students will compare their scores with one another, and that students with lower scores will suffer from these comparisons. Here's my current take on this issue.

Students receive scores all the time. By third grade they already know their position in the class hierarchy, and live every day with that reality. Moreover, despite the popular notion that all students can become above average if they work hard enough, average students don't often become above average students, which means that during their entire 12 years of schooling, they rarely receive top rewards (the best grades) for the hard work they do. In fact, they often feel like they're being punished even when they try their best. To make things worse, in our current system they're further punished by being forced to memorize content they haven't been prepared to understand, a problem that worsens year by year.

Lectica's approach to assessment can't prevent students from figuring out where their scores land in the class distribution, but we can give all students an opportunity to see themselves as successful learners, no matter where their scores are in that distribution. Average or below average students may still have to live with the reality that they grow at different rates than some of their peers, but they'll be rewarded for their efforts, just the same.

I've been told by some very good teachers that it is unacceptable to use the expression "average student." While I share the instinct to protect students from the harm that can come from labels, I don't share the belief that being an average student is a bad thing. Most of us were average students—or to be more precise, 68% of us were within one standard deviation of the mean. How did being a member of the majority become a bad thing?  And what harm are we doing to students by creating the illusion that we are all capable of performing above the mean?

I don't think we hurt children by serving up reality. We hurt them when we mislead them by telling them they can all be above average, or when we make them feel hopeless by insisting that they all learn at the same pace, then punishing them when they can't keep up.

I'm not saying it's not possible to raise the average. We do it by meeting the specific learning needs of every student and making sure that learning time is spent learning robustly. But we can't change the fact that there's a distribution, and we shouldn't pretend otherwise.

Lectical Assessments are tests, and are subject to the same abuses as other tests. But they have three attributes that help mitigate these abuses. First, they allow all students without severe disabilities to see themselves as learners. Second, they help teachers customize instruction to meet the needs of each student, so more kids have a chance to achieve their full potential. And finally, they reward good pedagogy—even in cases in which the assessments are being misused. After all, testing drives instruction.


Comparison of DiscoTests with conventional tests

DiscoTests and conventional standardized tests can be thought of as complementary. They are designed to test different kinds of skills, and research confirms that they are successful in doing so. Correlations between scores on the kind of developmental assessments made by DTS and scores on conventional multiple-choice assessments are in the .40-.60 range. That means that somewhere between 16% and 36% of the kind of learning that is captured by conventional assessments is likely to overlap with the kind of learning that is captured by DiscoTests.
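To make the arithmetic explicit: the proportion of variance two measures share is the square of their correlation, so the 16%-36% figures above are simply the squares of the reported correlations. A minimal Python sketch:

```python
# Shared variance between two measures is the square of their correlation (r squared).
for r in (0.40, 0.60):
    print(f"r = {r:.2f} -> shared variance = {r ** 2:.0%}")

# Output:
# r = 0.40 -> shared variance = 16%
# r = 0.60 -> shared variance = 36%
```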

The table below provides a comparison of DiscoTests with conventional standardized tests on a number of dimensions.

| Category | DiscoTests | Conventional tests |
| --- | --- | --- |
| Theoretical foundation | Cognitive developmental theory, Dynamic Skill Theory, test theory | Test theory |
| Scale | Fischer’s Dynamic Skill Scale, an exhaustively researched general developmental scale and a member of a family of similar scales developed during the 20th century | Statistically generated scales, different for each test (though some tests are statistically linked) |
| Learning sequences | Empirical, fine-grained, and precise; calibrated to the Dynamic Skill Scale | Empirical, coarse-grained, and general |
| Primary item type | Open response | More or less sophisticated forms of multiple choice |
| Targeted skills | Reasoning with knowledge, knowledge application, making connections between new and existing knowledge, writing | Content knowledge, procedural knowledge |
| Content | Carefully selected “big ideas” and the concepts and skills associated with them | The full range of content specified in state standards for a given subject |
| Educative/formative | Yes: (1) each DiscoTest focuses on ideas and skills central to K-12 curricula; (2) test questions require students to thoughtfully apply new knowledge and connect it with their existing knowledge; (3) students receive reports with targeted feedback and learning suggestions; (4) teachers learn how student knowledge develops, both in general and for each targeted concept or skill | Not really, though they increasingly claim to be |
| Embeddable in curricula | Yes; DiscoTests are designed to be part of the curriculum | No |
| Standardized | Yes; statistically, and calibrated to the skill scale | Yes; statistically only |
| Stakes | Low; selection decisions are based on performance patterns over time on many individual assessments | High; selection decisions are often based on single assessments |
| Ecological validity | Direct tests that focus on deepening and connecting knowledge about key concepts and ideas, while developing broad skills required in adult life, such as those required for reasoning, communicating, and problem solving | Tests of proxies; focus on the ability to detect correct answers |
| Statistical reliability | .91+ for a single age cohort (distinguishes 5-6 distinct levels of performance) | For high-stakes tests, usually .95+ for a single age cohort (distinguishes 6-7 distinct levels of performance) |

Virtuous cycles of learning and instruction

What is a virtuous cycle of learning?

Ideal learning occurs in virtuous cycles—repeating cycles of goal setting, observation (taking in new knowledge), testing (applying what has been learned and getting feedback on results), and reflection (figuring out which adjustments are needed to improve one’s performance on the next attempt). This process, which occurs unconsciously from birth, can be made conscious. One recent application of the virtuous cycle is in dynamic steering, in which decisions are developed, applied, and evaluated through intentionally iterating cycles. The idea is to stretch as far as possible within a given cycle, without setting immediate goals that are completely beyond one’s reach. Success emerges from the achievement of a series of incremental goals, each of which brings one closer to the final goal. Processes of this kind lay down foundational skills that support resilience and agility. For example, the infant who learns to walk also learns to fall more gracefully, which makes learning to run much less traumatic than it might have been. And decision makers who use dynamic steering learn a great deal about what makes decisions more likely to be successful, which leads to better, faster decisions in increasingly complex or thorny situations.

[Figure: the virtuous cycle of learning, shown as four named steps with icons]

The figure above illustrates how educators can support virtuous learning cycles. There are four "steps" in this process (not necessarily in the following order; a toy code sketch follows the list):

  1. Find out what individual learners already know and how they work with their knowledge, then set provisional learning goals.
  2. Provide opportunities for learners to acquire and evaluate new information.
  3. Ask learners to apply new knowledge or skills in hypothetical or real-life situations.
  4. Provide frequent opportunities for learners to reflect upon outcomes associated with the application of new knowledge, in an environment in which ongoing learning, application, and reflection are consistently rewarded.
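For readers who like to see a process as code, here is a toy Python sketch of these steps as an iterating loop. Everything in it is invented for illustration (skill and goals are reduced to single numbers); it sketches the idea of dynamic steering, not any Lectica method.

```python
# A toy sketch of a virtuous learning cycle, with skill levels and goals
# reduced to single numbers purely for illustration.

def next_goal(skill: float, final_goal: float, stretch: float = 0.25) -> float:
    """Step 1: set a provisional goal that stretches the learner
    without being completely beyond reach (dynamic steering)."""
    return min(final_goal, skill + stretch)

def attempt(skill: float, goal: float) -> float:
    """Steps 2 and 3: acquire new information and apply it; feedback
    from the attempt closes most of the gap between skill and goal."""
    return skill + 0.8 * (goal - skill)

skill, final_goal = 1.0, 3.0
while final_goal - skill > 0.01:
    goal = next_goal(skill, final_goal)  # step 1: incremental goal
    skill = attempt(skill, goal)         # steps 2-3: acquire and apply
    # Step 4: reflection feeds the next cycle's goal setting.
    print(f"skill {skill:.2f} after pursuing goal {goal:.2f}")
```

The loop never aims directly at the final goal; it converges on it through a series of incremental goals, which is the point of the paragraph above.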


The limitations of testing

It is important for those of us who use assessments to ensure that they (1) measure what we say they measure, (2) measure it reliably enough to justify claimed distinctions between and within persons, and (3) are used responsibly. It is relatively easy for testing experts to create assessments that are adequately reliable (2) for individual assessment, and although it is more difficult to show that these tests measure the construct of interest (1), there are reasonable methods for showing that an assessment meets this standard. However, it is more difficult to ensure that assessments are used responsibly (3).

Few consumers of tests are aware of their inherent limitations. Even the best tests, those that are highly reliable and measure what they are supposed to measure, provide only a limited amount of information. This is true of all measures. The more we home in on a measurable dimension—in other words, the greater our precision becomes—the narrower the construct becomes. Time, weight, height, and distance are all extremely narrow constructs. This means that they provide a very specific piece of information extremely well. When we use a ruler, we can have great confidence in the measurement we make, down to very small lengths (depending on the ruler, of course). No one doubts the great advantages of this kind of precision. But we can’t learn anything else about the measured object. Its length usually cannot tell us what the object is, how it is shaped, its color, its use, its weight, how it feels, how attractive it is, or how useful it is. We only know how long it is. To provide an accurate account of the thing that was measured, we need to know many more things about it, and we need to construct a narrative that brings these things together in a meaningful way.

A really good psychological measure is similar. The LAS (Lectical Assessment System), for example, is designed to go to the heart of development, stripping away everything that does not contribute to the pure developmental “height” of a given performance. Without knowledge of many other things—such as the ways of thinking that are generally associated with this “height” in a particular domain, the specific ideas that are associated with this particular performance, information from other performances on other measures, qualitative observations, and good clinical judgment—we cannot construct a terribly useful narrative.

And this brings me to my final point: A formal measure, no matter how great it is, should always be employed by a knowledgeable mentor, clinician, teacher, consultant, or coach as a single item of information about a given client that may or may not provide useful insights into relevant needs or capabilities. Consider this relatively simple example: a given 2-year-old may be tall for his age, but if he is somewhat underweight for his age, the latter measure may seem more important. However, if he has a broken arm, neither measure may loom large—at least until the bone is set. Once the arm is safely in a cast, all three pieces of information—weight, height, and broken arm—may contribute to a clinical diagnosis that would have been difficult to make without any one of them.

It is my hope that the educational community will choose to adopt high standards for measurement, then put measurement in its place—alongside good clinical judgment, reflective life experience, qualitative observations, and honest feedback from trusted others.


What is a holistic assessment?

Thirty years ago, when I was a hippy midwife, the idea of holism began to slip into the counter-culture. A few years later, this much-misunderstood notion was all the rage on college campuses. By the time I was in graduate school in the nineties, there was an impassable division between the trendy postmodern holists and the rigidly old-fashioned modernists. You may detect a slight mocking tone, and rightly so. People with good ideas on both sides made themselves look pretty silly by refusing, for example, to use any of the tools associated with the other side. One of the more tragic outcomes of this silliness was the emergence of the holistic assessment.

Simply put, the holistic assessment is a multidimensional assessment that is designed to take a more nuanced, textured, or rich approach to assessment. Great idea. Love it.

It’s the next part that’s silly. Having collected rich information on multiple dimensions, the test designers sum up a person’s performance with a single number. Why is this silly? Because the so-called holistic score becomes pretty much meaningless. Two people with the same score can have very little in common. For example, let’s imagine that a holistic assessment examines emotional maturity, perspective taking, and leadership thinking. Two people receive a score of 10 that may be accompanied by boilerplate descriptions of what emotional maturity, perspective taking, and leadership thinking look like at level 10. However, person one was actually weak in perspective taking and strongest in leadership, and person two was weak in emotional maturity and strongest in perspective taking. The score of 10, it turns out, means something quite different for these two people. I would argue that it is relatively meaningless because there is no way to know, based on the single “holistic” score, how best to support the development of these distinct individuals.

Holism has its roots in system dynamics, where measurements are used to build rich models of systems. All of the measurements are unidimensional. They are never lumped together into “holistic” measures. That would be equivalent to talking about the “temperaturelength” of a day or the “lengthweight” of an object*. It’s essential to measure time, weight, and length with appropriate metrics and then to describe their interrelationships and the outcomes of these interrelationships. The language used to describe these is the language of probability, which is sensitive to differences in the measurement of different properties.

In psychological assessment, dimensionality is a challenging issue. What constitutes a single dimension is a matter for debate. For DTS, the primary consideration is how useful an assessment will be in helping people learn and grow. So, we tend to construct individual assessments, each of which represents a fairly tightly defined content space, and we use only one metric to determine the level of a performance. The meaning of a given score is both universal (it is an order of hierarchical complexity and phase on the skill scale) and contextual (it is assigned to a performance in a particular domain in a particular context, and is associated with particular content). We independently analyze the content of the performance to determine its strengths and weaknesses—relative to its level and the known range of content associated with that level—and provide feedback about these strengths and weaknesses as well as targeted learning suggestions. We use the level score to help us tell a useful story about a particular performance, without claiming to measure “lengthweight”. This is accomplished by the rigorous separation of structure (level) and content.

*If we described objects in terms of their lengthweight, an object that was 10 inches long and 2 lbs could have a lengthweight of 12, but so could an object that was 2 inches long and 10 lbs.
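To make the footnote concrete, here is a minimal Python sketch. The dimensions echo the earlier example, and the individual scores are invented for illustration:

```python
# Two invented profiles that sum to the same "holistic" score of 10.
person_one = {"emotional maturity": 3, "perspective taking": 2, "leadership": 5}
person_two = {"emotional maturity": 2, "perspective taking": 5, "leadership": 3}

for name, profile in (("person one", person_one), ("person two", person_two)):
    print(f"{name}: holistic score = {sum(profile.values())}, profile = {profile}")

# Both report a holistic score of 10, yet one needs support with
# perspective taking and the other with emotional maturity. The sum
# erases exactly the information a mentor would need.
```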


What is a developmental assessment?

A developmental assessment is a test of knowledge and thinking that is based on extensive research into how students come to learn specific concepts and skills over time. All good developmental assessments require test-takers to show their thinking by making written or oral arguments in support of their judgments. Developmental assessments are less concerned about “right” answers and more concerned with how students use their knowledge and thinking skills to solve problems. A good developmental assessment should be educative in the sense that taking it is a learning experience in its own right, and each score is accompanied by feedback that tells students what they are most likely to benefit from learning next.


A good test

In this post, I explore a way of thinking about testing that would lead to the design of tests that are very different from most of the tests students take today.

Two propositions, an observation, and a third proposition:

Proposition 1. Because adults who do not enjoy learning are at a severe disadvantage in a rapidly changing world, an educational system should do everything possible to nurture children's inborn love of learning.

Proposition 2. In K-12, the specific content of a curriculum is not as important as the development of broadly applicable skills for learning, reasoning, communicating, and participating in a civil society. (The content of the curriculum would be chosen to support the development of these skills and could—perhaps should—differ from classroom to classroom.)

Observation. Testing tends to drive instruction.

Proposition 3. Consequently, tests should evaluate relevant skills and be employed in ways that support students' natural love of learning.

Given these propositions, here is my favorite definition of a "good test."

A good test is part of the conversation between a "student" and a "teacher" that tells the teacher what the student is most likely to benefit from learning next.

I'll unpack this definition and show how it relates to the propositions listed above:

Anyone who has carefully observed an infant in pursuit of knowledge will understand the conversational nature of learning. A parent holds out a shiny spoon and an infant's arms wave wildly. Her hand makes contact with the spoon and a message is sent to her brain, "Something interesting happened!" The next day, her arm movements are a little less random. She makes contact several times, feeling the same sense of satisfaction. Her parents laugh with delight. She coos. In this way, her physical and social environment provide immediate feedback each time she succeeds (or fails). Over time, the infant uses this information to learn how to reach out and touch the spoon at will. Of course, she is not satisfied with merely touching the spoon, and, through the same kind of trial and error, supplemented with a little support from Mom and Dad, she soon learns to bring the spoon to her mouth. And the conversation goes on.

Every attempt to touch the spoon is a kind of test. Every success is an affirmation that the strategy just employed was an effective strategy, but the story does not end here. In her quest to master her environment, the infant keeps moving the bar. Once she can do so at will, touching the spoon is no longer satisfying. She moves on to the next skill—holding the spoon, and the next—bringing it to her mouth, etc. Having observed this process hundreds of times, I strongly suspect that a sense of mastery is the intrinsic reward that motivates learning, while conversation, including both social and physical interactions, acts as the fuel.

Conversation

A good educational test should have the same quality of conversation, in the form of performance and feedback, that is illustrated in the example above. In an ideal testing situation, the student shows a teacher how he or she understands new concepts and skills, then the teacher uses this information to determine what comes next.

Part of the conversation

However, a good test is part of the conversation—not the entire conversation. No single test (or kind of conversation) will do. For example, the infant reaches for the spoon because she finds it interesting, and she must be interested enough to reach out many dozens of times before she can grasp an object at will. Good parents recognize that she expresses more sustained interest if they provide her with a number of different objects—and don't try to force her to manipulate objects when she would rather be nursing or sleeping. Each act is a test embedded in a long conversation that is further embedded in a broader context.

What comes next?

In the story, I suggest that the spoon must be both interesting and within an infant's reach before it can become part of an ongoing conversation. In the same way, a good test should be both engaging and within a student's reach in order to play its role in the conversation between student and teacher.

An engaging test of appropriate skills can tell us how a student understands what he or she is learning, but this knowledge, by itself, does not tell the teacher (or the student) what comes next. To find out, researchers must study how particular concepts and skills are learned over time. Only when we have done a good job describing how particular skills and concepts are learned can we predict what a student is most likely to benefit from learning next.

So, a good test must not only capture the nature of a particular student's understanding, it must also be connected to knowledge about the pathways through which students come to understand the concepts and skills of the knowledge area it targets.

Back to conversation

I argue above that, in infancy, a sense of mastery is the intrinsic reward that motivates learning, while conversation is the fuel. If conversation is the fuel, then tests that do a good job of serving the conversational function I outline here are likely to fuel students' natural pursuit of mastery and a lifelong love of learning.

Later: But what about accountability?


Predicting trends, testing people

Mark Forman, in his response to the post entitled IQ and development, wrote about the difference between predicting trends and testing individuals. I agree that people, including many academics, do not understand the difference between using assessments to predict trends and using assessments to make judgments about individuals. There are two main issues: First, as Mark argues, questions of validity differ, depending upon whether we are looking at individuals or population trends. If we are looking at trends, determining predictive validity is a simple matter of determining if an assessment helps an institution make more successful decisions than it was able to make without the assessment. However, if a test is intended to be useful to individuals (aid in their learning, help them determine what to learn next, help them find the best place to learn, help them decide what profession to pursue, etc.), predictive validity cannot be determined by examining trends. In this case, the predictive validity of an assessment should be evaluated in terms of how well it predicts what individual test-takers can most benefit from learning next, where they can learn it, or what kind of employment they should seek—as individuals.

The second issue concerns reliability. Especially in the adult assessment field, researchers often do not understand that the levels of statistical reliability considered acceptable for studies of population trends are far from adequate for making judgments about individuals. Many of the adult assessments that are on the market today have been developed by researchers who do not understand the reliability criteria for assessments used to test individuals*. As a consequence, the reliability of these assessments is often so low that we cannot be confident that a score on a given assessment is truly different from any other score on that assessment.

*Unfortunately, there is no magic reliability number. But here are some general guidelines. The absolute minimum statistical reliability for an assessment that claims to distinguish two or three levels of performance is an alpha of .85. To claim up to 6 levels, you need an alpha of .95. You will also want to think about the meaning of these distinctions between levels in terms of confidence intervals. A confidence interval is the range in which an individual’s true score is most likely to fall.  For example, in the case of Lectical™ assessments, the statistical reliabilities we have calculated over the last 10 years indicate that the confidence interval around Lectical scores is generally around 1/4 of a level (a phase).
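To give a feel for how reliability translates into a confidence interval, here is a small Python sketch based on the standard classical-test-theory formula for the standard error of measurement, SEM = SD × sqrt(1 − reliability). The within-cohort standard deviation used below is a made-up value, chosen only to show the shape of the relationship:

```python
import math

# Classical test theory: SEM = SD * sqrt(1 - reliability). A 95%
# confidence interval spans roughly +/- 1.96 SEM around an observed score.
sd = 0.5  # hypothetical within-cohort SD, expressed in "levels" (invented value)

for alpha in (0.85, 0.95):
    sem = sd * math.sqrt(1 - alpha)
    print(f"alpha = {alpha:.2f} -> 95% CI of about +/- {1.96 * sem:.2f} levels")

# Output (with this invented SD):
# alpha = 0.85 -> 95% CI of about +/- 0.38 levels
# alpha = 0.95 -> 95% CI of about +/- 0.22 levels
```

The higher the reliability, the narrower the interval, which is what justifies claiming more distinct levels of performance.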

Advice: If statistical reliability is not reported (preferably in a peer reviewed article), don’t use the test.


IQ and development

IQ is a dimension of ability that has been defined using a form of statistical modeling called psychometrics. It is based entirely on psychometric analysis of results from tests consisting of many items, each of which has one correct answer.

IQ scores are arranged along a scale that is based upon the performances of hundreds of people who have taken the same test.

IQ is considered to be a relatively fixed characteristic of a person. People who score higher on an IQ test are considered to be more intelligent than people who score lower.

Cognitive development is a theoretically defined, evidence based dimension. Developmental level is determined by asking individuals to engage in activities that expose their reasoning. Items on developmental assessments are typically open-ended and do not focus on correct answers. They focus on how people go about seeking answers.

A single developmental dimension has been shown to underlie development in a wide range of cognitive domains, making it possible to define a non-arbitrary scale along which development progresses. Individual performances can be placed within a range on this scale.

Cognitive developmental level is not viewed as a fixed trait and is known to vary within persons, depending on knowledge area and a range of contextual variables. Individuals who demonstrate higher levels of cognitive development are viewed as more cognitively developed than those demonstrating lower levels of cognitive development.

The relation between IQ and cognitive development

Children with higher IQs learn the kind of knowledge and skills represented in IQ tests earlier than children with lower IQs. There is some evidence that cognitive development is likely to be more rapid (and have a higher “endpoint”) in people who have higher IQs.

Limitations of testing

The subject matter of IQ tests is limited, and the skill sets that are tested are narrow, so we have to be careful about making generalizations about people based on test results—especially the results of single tests. The same is true for cognitive developmental assessments. Good cognitive developmental assessments are now providing scores with a level of precision similar to that of conventional assessments, but even the most precise and accurate scores apply to performance on a single assessment in a single subject area, and do not capture the full range of capabilities of a test-taker.

The inability of any single assessment (or type of assessment) to provide an accurate account of the capabilities of an individual suggests that the best (most ethical) use of assessments involves repeated measurements across a wide range of subject areas over time.


Testing as part of learning 1

Learning isn’t easy

Yet all healthy babies pursue it with dogged determination, spending hour after hour exploring—and learning to master—their own bodies, as well as their physical and social environments.

Natural testing

When infants and young children engage their environments, they receive constant feedback about what does and does not work. For example, babies spend months learning how to control the movements of their hands. An infant will spend several weeks just learning how to bring an object to her mouth. She’ll use what she learns from successes and failures to do better next time. Feedback is instant and accurate, and the results of each attempt tell her what to try next.

Babies often act like they are addicted to learning. They will tolerate an amazing amount of failure. But without prompt feedback from their external environment, they wouldn’t get far. The same is true for older children.

Testing in schools

Ideally, educational tests model natural testing by providing students with timely and accurate feedback that tells them (and their teachers) what to try next.
