What every buyer should know about forms of assessment

In this post, I'll be describing and comparing three basic forms of assessment—surveys, tests of factual and procedural knowledge, and performative tests.

Surveys—measures of perception, preference, or opinion

What is a survey? A survey (a.k.a. inventory) is any assessment that asks the test-taker to choose from a set of options, such as "strongly agree" or "strongly disagree", based on opinion, preference, or perception. Surveys can be used by organizations in several ways. For example, opinion surveys can help maintain employee satisfaction by providing a "safe" way to express dissatisfaction before workplace problems have a chance to escalate.

Just about everyone who's worked for a large organization has completed a personality inventory as part of a team-building exercise. The results stimulate lots of water cooler discussions about which "type" or "color" employees are, but their impact on employee performance is unclear. (Fair warning: I'm notorious for my discomfort with typologies!) Some personality inventories are even used in high-stakes hiring and promotion decisions, a practice that continues despite evidence that they are very poor predictors of employee success [1].

Although most survey developers don't pretend their assessments measure competence, many do. One such item came from a survey with the words "management skills" in its title.

Claims that surveys measure competence are most common when "malleable traits"—traits that are subject to change, learning, or growth—are targeted. One example of a malleable trait is "EQ" or "emotional intelligence". EQ is viewed as a skill that can be developed, and several surveys purport to measure its development. What they actually measure is attitude.

Surveys also masquerade as assessments of skill in the measurement of "transformational learning". Transformational learning is defined as a learning experience that fundamentally changes the way a person understands something, yet it appears to be measured only with surveys. Transformational learning surveys measure people's perceptions of their learning experience, not how much they are actually changed by it.

The only survey-type assessments that can be said to measure something like skill are those—such as 360s—that ask respondents about their perceptions of another person's behavior. Although 360s inadvertently measure other things, like how much a person is liked or whether a respondent agrees with that person, they may also document evidence of behavior change. If behavior change is what interests you, a 360 may be appropriate in some cases. Keep in mind, though, that while a 360 may measure change in a target's behavior, it is also likely to measure change in a respondent's attitude that is unrelated to the target's behavior.

360-type assessments may, to some extent, serve as tests of competence, because behavior change may be an indication that someone has learned new skills. When an assessment measures something that might be an indicator of something else, it is said to measure a proxy. A good 360 may measure a proxy (perceptions of behavior) for a skill (competence).

There are literally hundreds of research articles documenting the limitations of surveys, but I'll mention only one more here: all of the survey types I've discussed are vulnerable to "gaming"—smart people can easily figure out which answers are most desirable.

Surveys are extremely popular today because, relative to assessments of skill, they are inexpensive to develop and cost almost nothing to administer. Lectica gives away several high-quality surveys because they are so inexpensive to deliver, yet organizations spend millions of dollars every year on surveys, many of which are falsely marketed as assessments of skill or competence.

Tests of factual and procedural knowledge

A test of competence is any test that asks the test-taker to demonstrate a skill. Tests of factual and procedural knowledge can legitimately be thought of as tests of competence.

The classic multiple-choice test examines factual knowledge, procedural knowledge, and basic comprehension. If you want to know whether someone knows the rules, which formulas to apply, the steps in a process, or the vocabulary of a field, a multiple-choice test may meet your needs. Developers of multiple-choice tests often claim that their assessments measure understanding, reasoning, or critical thinking. This is because some multiple-choice tests measure skills that are assumed to be proxies for understanding, reasoning, and critical thinking. They are not direct tests of these skills.

Multiple-choice tests are widely used because there is a large industry devoted to making them, but they are increasingly unpopular because of their (mis)use as high-stakes assessments. They are perceived as threatening and unfair because they are so often used to rank or select people, and they are not helpful to the individual learner. Moreover, their relevance is often questioned because they don't directly measure what we really care about—the ability to apply knowledge and skills in real-life contexts.

Performative tests

Tests that ask people to demonstrate their skills directly—in the real world, in real-world simulations, or as applied to real-world scenarios—are called performative tests. These tests usually do not have "right" answers. Instead, they employ objective criteria to evaluate performances for the level of skill demonstrated, and they often play a formative role by providing feedback designed to improve performance or understanding. This is the kind of assessment you want if what you care about is deep understanding, reasoning skill, or performance in real-world contexts.
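To make criterion-based evaluation concrete, here is a minimal sketch in Python. The criteria, the level descriptors, and the scoring rule are all invented for illustration; real rubrics and scoring systems are far more nuanced.

```python
# A toy rubric: for each criterion, an ordered list of level descriptors.
# There is no single "right" answer; a performance is located on each
# criterion's scale. All names and levels here are hypothetical.
RUBRIC = {
    "argument quality": ["asserts a position", "gives reasons", "weighs alternatives"],
    "use of evidence": ["cites none", "cites evidence", "evaluates evidence"],
}

def score_performance(ratings: dict[str, str]) -> dict[str, int]:
    """Map each observed behavior onto its level (list index) in the rubric."""
    return {criterion: RUBRIC[criterion].index(observed)
            for criterion, observed in ratings.items()}

scores = score_performance({
    "argument quality": "gives reasons",
    "use of evidence": "cites evidence",
})
print(scores)  # -> {'argument quality': 1, 'use of evidence': 1}
```

The point of the shape is that the output locates a performance on each skill dimension, which is what makes level-targeted feedback possible.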

Performative tests are the most difficult tests to make, but they are the gold standard if what you want to know is the level of competence a person is likely to demonstrate in real-world conditions—and if you're interested in supporting development. Standardized performative tests are not yet widely used because the methods and technology required to develop them are relatively new, and there is not yet a large industry devoted to making them. But they are increasingly popular because they support learning.

Unfortunately, performative tests may initially be perceived as threatening because people's attitudes toward tests of knowledge and skill have been shaped by their exposure to high-stakes multiple-choice tests. The idea of testing for learning is taking hold, but changing the way people think about something as ubiquitous as testing is an ongoing challenge.

Lectical Assessments

Lectical Assessments are performative tests—tests for learning. They are designed to support robust learning—the kind of learning that optimizes the growth of essential real-world skills. We're the leader of the pack when it comes to the sophistication of our methods and technology, our evidence base, and the sheer number of assessments we've developed.

[1] Morgeson, F. P., et al. (2007). Are we getting fooled again? Coming to terms with limitations in the use of personality tests for personnel selection. Personnel Psychology, 60, 1029-1033.

Virtuous cycles of learning and instruction

What is a virtuous cycle of learning?

Ideal learning occurs in virtuous cycles—repeating cycles of goal setting, observation (taking in new knowledge), testing (applying what has been learned and getting feedback on results), and reflection (figuring out which adjustments are needed to improve one's performance on the next attempt). This process, which occurs unconsciously from birth, can be made conscious. One recent application of the virtuous cycle is dynamic steering, in which decisions are developed, applied, and evaluated through intentionally iterated cycles. The idea is to stretch as far as possible within a given cycle, without setting immediate goals that are completely beyond one's reach. Success emerges from the achievement of a series of incremental goals, each of which brings one closer to the final goal. Processes of this kind lay down foundational skills that support resilience and agility. For example, the infant who learns to walk also learns to fall more gracefully, which makes learning to run much less traumatic than it might have been. And decision makers who use dynamic steering learn a great deal about what makes decisions more likely to be successful, which leads to better, faster decisions in increasingly complex or thorny situations.

[Figure: the virtuous cycle of learning (goal setting, observation, testing, reflection)]

The figure above illustrates how educators can support virtuous learning cycles. There are four "steps" in this process (not necessarily in the following order); a minimal sketch of the cycle follows the list:

  1. Find out what individual learners already know and how they work with their knowledge, then set provisional learning goals.
  2. Provide opportunities to acquire and evaluate new information.
  3. Ask learners to apply new knowledge or skills in hypothetical or real-life situations.
  4. Provide frequent opportunities for learners to reflect upon outcomes associated with the application of new knowledge, in an environment in which ongoing learning, application, and reflection are consistently rewarded.
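Here is that cycle as a minimal sketch in Python. Everything in it—the Learner class, the helper functions, and the toy curriculum—is a hypothetical stand-in invented for illustration, not a description of any real system.

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    knowledge: set = field(default_factory=set)
    reflections: list = field(default_factory=list)

def set_provisional_goal(learner, curriculum):
    # Step 1: find out what the learner already knows, then pick the
    # nearest concept they have not yet mastered.
    for concept in curriculum:
        if concept not in learner.knowledge:
            return concept
    return None  # final goal reached

def learn(learner, concept):
    # Step 2: the learner acquires and evaluates new information.
    learner.knowledge.add(concept)

def apply_and_get_feedback(learner, concept):
    # Step 3: apply the new knowledge in a hypothetical or real-life
    # situation and observe the result.
    return f"applied '{concept}' and observed the outcome"

def reflect(learner, feedback):
    # Step 4: reflect on outcomes to adjust the next attempt.
    learner.reflections.append(feedback)

# A toy "curriculum" of incremental goals (see the infant example later on).
curriculum = ["touch the spoon", "hold the spoon", "bring the spoon to mouth"]
learner = Learner()
while (goal := set_provisional_goal(learner, curriculum)) is not None:
    learn(learner, goal)
    reflect(learner, apply_and_get_feedback(learner, goal))

print(learner.reflections)
```

Note that each pass through the loop stretches only one increment beyond what the learner can already do, which is the essence of dynamic steering as described above.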

The limitations of testing

It is important for those of us who use assessments to ensure that they (1) measure what we say they measure, (2) measure it reliably enough to justify claimed distinctions between and within persons, and (3) are used responsibly. It is relatively easy for testing experts to create assessments that are adequately reliable (2) for individual assessment, and although it is more difficult to show that these tests measure the construct of interest (1), there are reasonable methods for demonstrating that an assessment meets this standard. Ensuring that assessments are used responsibly (3) is harder still.
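As a concrete illustration of point (2), reliability is often summarized with an internal-consistency index such as Cronbach's alpha. The sketch below is a generic textbook computation in Python, not any particular test developer's method, and the item scores are fabricated.

```python
from statistics import pvariance

def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """Cronbach's alpha: item_scores[i][p] = person p's score on item i."""
    k = len(item_scores)
    item_variances = sum(pvariance(item) for item in item_scores)
    totals = [sum(person) for person in zip(*item_scores)]  # per-person totals
    return (k / (k - 1)) * (1 - item_variances / pvariance(totals))

# Three items answered by four people (fabricated example data).
items = [
    [2, 3, 4, 5],
    [1, 3, 4, 4],
    [2, 2, 5, 5],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # -> alpha = 0.94
```

An index like this addresses only reliability; it says nothing about whether the test measures the right construct (1) or is used responsibly (3).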

Few consumers of tests are aware of their inherent limitations. Even the best tests, those that are highly reliable and measure what they are supposed to measure, provide only a limited amount of information. This is true of all measures. The more we home in on a measurable dimension—in other words, the greater our precision becomes—the narrower the construct becomes. Time, weight, height, and distance are all extremely narrow constructs. This means that they provide a very specific piece of information extremely well. When we use a ruler, we can have great confidence in the measurement we make, down to very small lengths (depending on the ruler, of course). No one doubts the great advantages of this kind of precision. But we can't learn anything else about the measured object. Its length usually cannot tell us what the object is, how it is shaped, its color, its use, its weight, how it feels, how attractive it is, or how useful it is. We only know how long it is. To provide an accurate account of the thing that was measured, we need to know many more things about it, and we need to construct a narrative that brings these things together in a meaningful way.

A really good psychological measure is similar. The LAS (Lectical Assessment System), for example, is designed to go to the heart of development, stripping away everything that does not contribute to the pure developmental “height” of a given performance. Without knowledge of many other things—such as the ways of thinking that are generally associated with this “height” in a particular domain, the specific ideas that are associated with this particular performance, information from other performances on other measures, qualitative observations, and good clinical judgment—we cannot construct a terribly useful narrative.

And this brings me to my final point: a formal measure, no matter how great it is, should always be employed by a knowledgeable mentor, clinician, teacher, consultant, or coach as a single item of information about a given client that may or may not provide useful insights into relevant needs or capabilities. Consider this relatively simple example: a given 2-year-old may be tall for his age, but if he is somewhat underweight for his age, the latter measure may seem more important. However, if he has a broken arm, neither measure may loom large—at least until the bone is set. Once the arm is safely in a cast, all three pieces of information—weight, height, and broken arm—may contribute to a clinical diagnosis that would have been difficult to make without any one of them.

It is my hope that the educational community will choose to adopt high standards for measurement, then put measurement in its place—alongside good clinical judgment, reflective life experience, qualitative observations, and honest feedback from trusted others.

Promoting development

There is a vast literature exploring ways to promote development. Much of this literature focuses on speeding up development; some of it focuses on optimizing development. Although both approaches are intended to support development, there is evidence that approaches focused on optimizing development are likely to do a better job. This is because development involves two intertwined processes: differentiation (broadening and deepening knowledge) and integration (coordinating that knowledge into more complex wholes). In plain(er) English, you get more adequate integrations at each level if you accomplish rich differentiation at the prior level.

When we code an assessment, we pay close attention to the degree to which the test-taker elaborates each of the sub-skills the assessment targets. In our personal feedback, we note areas of strength and areas that appear to require further growth. The basic idea is to bring all of the sub-skills up to an optimal level of elaboration to support the emergence of next-level integrations.

Most of the readings we suggest are targeted one to two phases (1/4 to 1/2 of a level) above the level of a given performance. This practice has been shown to provide the ideal level of challenge (scaffolding) for optimal growth. We also suggest activities like engaging in discourse with peers, journaling, cultivating a habit of reflection, and improving metacognitive skills, all of which provide support for growth.
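For concreteness, here is that targeting rule as a tiny Python sketch. The 4-phases-per-level convention follows directly from the parenthetical above (one phase = 1/4 of a level), but the scale value in the example and the function itself are hypothetical.

```python
PHASE = 0.25  # one phase = 1/4 of a level

def suggested_reading_band(performance_level: float) -> tuple[float, float]:
    """Return the (low, high) band, one to two phases above a performance."""
    return performance_level + 1 * PHASE, performance_level + 2 * PHASE

low, high = suggested_reading_band(10.5)  # hypothetical performance score
print(f"suggest readings between levels {low:.2f} and {high:.2f}")
# -> suggest readings between levels 10.75 and 11.00
```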

We do not teach people to think at higher levels. Higher levels of performance emerge when knowledge is adequately elaborated and the environment supports higher levels of thinking and performance. We focus on helping people to think better at their current level and challenging them to elaborate their current knowledge and skills—including the not-so-sexy nuts-and-bolts knowledge required for success in any context.

What is a developmental assessment?

A developmental assessment is a test of knowledge and thinking that is based on extensive research into how students come to learn specific concepts and skills over time. All good developmental assessments require test-takers to show their thinking by making written or oral arguments in support of their judgments. Developmental assessments are less concerned with "right" answers and more concerned with how students use their knowledge and thinking skills to solve problems. A good developmental assessment should be educative in the sense that taking it is a learning experience in its own right, and each score is accompanied by feedback that tells students what they are most likely to benefit from learning next.

A good test

In this post, I explore a way of thinking about testing that would lead to the design of tests that are very different from most of the tests students take today.

Two propositions, an observation, and a third proposition:

Proposition 1. Because adults who do not enjoy learning are at a severe disadvantage in a rapidly changing world, an educational system should do everything possible to nurture children's inborn love of learning.

Proposition 2. In K-12, the specific content of a curriculum is not as important as the development of broadly applicable skills for learning, reasoning, communicating, and participating in a civil society. (The content of the curriculum would be chosen to support the development of these skills and could—perhaps should—differ from classroom to classroom.)

Observation. Testing tends to drive instruction.

Proposition 3. Consequently, tests should evaluate relevant skills and be employed in ways that support students' natural love of learning.

Given these propositions, here is my favorite definition of a "good test."

A good test is part of the conversation between a "student" and a "teacher" that tells the teacher what the student is most likely to benefit from learning next.

I'll unpack this definition and show how it relates to the propositions listed above:

Anyone who has carefully observed an infant in pursuit of knowledge will understand the conversational nature of learning. A parent holds out a shiny spoon and an infant's arms wave wildly. Her hand makes contact with the spoon and a message is sent to her brain, "Something interesting happened!" The next day, her arm movements are a little less random. She makes contact several times, feeling the same sense of satisfaction. Her parents laugh with delight. She coos. In this way, her physical and social environment provide immediate feedback each time she succeeds (or fails). Over time, the infant uses this information to learn how to reach out and touch the spoon at will. Of course, she is not satisfied with merely touching the spoon, and, through the same kind of trial and error, supplemented with a little support from Mom and Dad, she soon learns to bring the spoon to her mouth. And the conversation goes on.

Every attempt to touch the spoon is a kind of test. Every success is an affirmation that the strategy just employed was effective, but the story does not end there. In her quest to master her environment, the infant keeps moving the bar. Once she can touch the spoon at will, doing so is no longer satisfying. She moves on to the next skill (holding the spoon), then the next (bringing it to her mouth), and so on. Having observed this process hundreds of times, I strongly suspect that a sense of mastery is the intrinsic reward that motivates learning, while conversation, including both social and physical interactions, acts as the fuel.

Conversation

A good educational test should have the same quality of conversation, in the form of performance and feedback, that is illustrated in the example above. In an ideal testing situation, the student shows a teacher how he or she understands new concepts and skills, then the teacher uses this information to determine what comes next.

Part of the conversation

However, a good test is part of the conversation—not the entire conversation. No single test (or kind of conversation) will do. For example, the infant reaches for the spoon because she finds it interesting, and she must be interested enough to reach out many dozens of times before she can grasp an object at will. Good parents recognize that she expresses more sustained interest if they provide her with a number of different objects—and don't try to force her to manipulate objects when she would rather be nursing or sleeping. Each act is a test embedded in a long conversation that is further embedded in a broader context.

What comes next?

In the story, I suggest that the spoon must be both interesting and within the infant's reach before it can become part of an ongoing conversation. In the same way, a good test must be both engaging and within a student's reach in order to play its role in the conversation between student and teacher.

An engaging test of appropriate skills can tell us how a student understands what he or she is learning, but this knowledge, by itself, does not tell the teacher (or the student) what comes next. To find out, researchers must study how particular concepts and skills are learned over time. Only when we have done a good job describing how particular skills and concepts are learned can we predict what a student is most likely to benefit from learning next.

So, a good test must not only capture the nature of a particular student's understanding, it must also be connected to knowledge about the pathways through which students come to understand the concepts and skills of the knowledge area it targets.
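As a minimal sketch of this idea in Python: if research has mapped the typical order in which understandings develop, a student's assessed position on that pathway can be used to look up what comes next. The pathway content below is invented for illustration (it borrows the spoon story), not an actual research-based progression.

```python
# An ordered learning pathway for one (hypothetical) skill, from research
# describing how the skill is typically learned over time.
PATHWAY = [
    "touches the spoon by accident",
    "touches the spoon at will",
    "grasps and holds the spoon",
    "brings the spoon to her mouth",
]

def what_comes_next(current_step: str) -> str | None:
    """Return the next step on the pathway, or None at the end."""
    i = PATHWAY.index(current_step)
    return PATHWAY[i + 1] if i + 1 < len(PATHWAY) else None

print(what_comes_next("touches the spoon at will"))
# -> grasps and holds the spoon
```

The hard part, of course, is not the lookup but the research that produces a trustworthy pathway in the first place.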

Back to conversation

I argue above that, in infancy, a sense of mastery is the intrinsic reward that motivates learning, while conversation is the fuel. If conversation is the fuel, then tests that do a good job serving the conversational function I outline here are likely to fuel students' natural pursuit of mastery and a lifelong love of learning.

Later: But what about accountability?