Check out this post at Docere est Discere (Musings on language and teaching).
Mark Forman, in his response to the post entitled "IQ and development," wrote about the difference between predicting trends and testing individuals. I agree that people, including many academics, do not understand the difference between using assessments to predict trends and using assessments to make judgments about individuals. There are two main issues: First, as Mark argues, questions of validity differ, depending upon whether we are looking at individuals or population trends. If we are looking at trends, determining predictive validity is a simple matter of determining whether an assessment helps an institution make more successful decisions than it was able to make without the assessment. However, if a test is intended to be useful to individuals (aid in their learning, help them determine what to learn next, help them find the best place to learn, help them decide what profession to pursue, etc.), predictive validity cannot be determined by examining trends. In this case, the predictive validity of an assessment should be evaluated in terms of how well it predicts what individual test-takers can most benefit from learning next, where they can learn it, or what kind of employment they should seek—as individuals.
The second issue concerns reliability. Especially in the adult assessment field, researchers often do not understand that the levels of statistical reliability considered acceptable for studies of population trends are far from adequate for making judgments about individuals. Many of the adult assessments that are on the market today have been developed by researchers who do not understand the reliability criteria for assessments used to test individuals*. As a consequence, the reliability of these assessments is often so low that we cannot be confident that a score on a given assessment is truly different from any other score on that assessment.
*Unfortunately, there is no magic reliability number. But here are some general guidelines. The absolute minimum statistical reliability for an assessment that claims to distinguish two or three levels of performance is an alpha of .85. To claim up to six levels, you need an alpha of .95. You will also want to think about the meaning of these distinctions between levels in terms of confidence intervals. A confidence interval is the range in which an individual’s true score is most likely to fall. For example, in the case of Lectical™ assessments, the statistical reliabilities we have calculated over the last 10 years indicate that the confidence interval around Lectical scores is generally around 1/4 of a level (a phase).
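The link between alpha and the confidence interval in this footnote can be made concrete with classical test theory, where the standard error of measurement (SEM) is the scale's standard deviation times the square root of one minus the reliability. The sketch below uses that formula; the score, standard deviation, and scale are illustrative assumptions, not figures from any particular assessment.

```python
import math

def confidence_interval(score, sd, reliability, z=1.96):
    """Return the 95% confidence interval around an observed score.

    Classical test theory: SEM = sd * sqrt(1 - reliability), and the
    test-taker's true score is likely to fall within about z * SEM
    on either side of the observed score.
    """
    sem = sd * math.sqrt(1 - reliability)
    margin = z * sem
    return score - margin, score + margin

# Illustrative values only: a scale scored in "levels" with a
# population standard deviation of 1.0 level.
for alpha in (0.85, 0.95):
    low, high = confidence_interval(score=10.0, sd=1.0, reliability=alpha)
    print(f"alpha = {alpha}: true score likely between {low:.2f} and {high:.2f}")
```

Note how raising alpha from .85 to .95 substantially narrows the interval, which is why claiming to distinguish more levels of performance demands higher reliability.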
Advice: If statistical reliability is not reported (preferably in a peer reviewed article), don’t use the test.
The NTS is an interactive online survey that asks about (1) the legitimate purposes of testing and (2) how well today’s tests serve these purposes. In addition to completing a set of survey questions, respondents are offered an opportunity to write about their personal experiences with testing and share alternative testing resources. When respondents have completed the survey, they can view their results and compare them to national averages. Anyone who visits the site can read respondents’ stories, explore the resources, and track national results. Please participate in the NTS, and use your email lists and social networks to spread the word! Feel free to circulate the NTS poster or the poster announcing the NTS launch event. Contact Zachary Stein if you have questions or would like to become involved.
NTS launch event: Testing the limits of testing
Thursday, May 28th, 2009, 4:00 – 5:30 pm
Zachary Stein, Marc Schwartz, and Theo L. Dawson
The launch event will occur just prior to the opening of the second annual conference of the International Mind, Brain, and Education Society (IMBES) at the Sheraton Society Hill Hotel in Philadelphia, Pennsylvania. At this event, speakers will present preliminary data from the NTS, examine the limits of current test development methods, and explore new approaches to assessment, incorporating the perspectives of stakeholder groups who have participated in the survey so far.
More information is available on the NTS site.
Admission to the launch is FREE and open to the public, but space is limited. To attend, you must obtain a ticket from the NTS web site.
The conference will also feature a workshop on testing:
Educational testing for the 21st century: Challenges, models, and solutions
Saturday, May 30th, 2009, 10:45 am – 3:45 pm
Kurt Fischer, Marc Schwartz, Theo Dawson, Zachary Stein
The most basic form of educational testing takes the form of a “conversation” between an individual student and a teacher in which the student reveals what he or she is most likely to benefit from learning next. This kind of conversation increasingly takes a back seat to standardized forms of assessment that are designed to rank students for purposes that are dissociated from learning itself. Testing has lost its roots. The statistically generated rankings of standardized tests tell us very little about the specific learning needs of individual students. And it is becoming increasingly apparent that the kind of knowledge required to succeed on a typical standardized test bears little resemblance to the kind of knowledge required for adult life. The challenge we now face is creating the kind of mass-customization that revives the educative role of assessments in the local dialogue between teachers, students, and the curriculum, while maintaining the advantages of standardization. Simply stated: we need tests that help teachers meet the learning needs of individual students–tests teachers ought to teach to. In this workshop, we explore perspectives on these issues from the classroom, cognitive developmental science, psychometrics, and philosophy and offer a concrete vision for the future of assessment. The workshop is intended for educators, administrators, researchers, and policy makers. It is FREE to those who register for the entire IMBES conference. If you are interested in attending only the workshop, the fee is $80 before April 28th, and $95 after April 28th.
Test developers face a tension between construct and ecological validity. If a test is (1) measuring what it intends to measure (construct validity) and (2) what it is measuring is of value (ecological validity), it is considered to be a valid test. Sounds pretty straightforward, but it's not. That's partly because construct and ecological validity often compete with one another—and it is a challenge to find the right balance.
For example, it seems pretty obvious that math items should be about math and reading comprehension items should be about reading comprehension. So, to make sure a math test has construct validity—is about math—you ought to limit the amount of reading required to understand your test items, right?
But what if what you really want to know is how students tackle real-world math problems, which often require the ability to understand the context in which mathematical problems are encountered. After all, there are good reasons to think that a skill a student can apply in real-world contexts is superior to a skill a student can only exhibit on a test that is stripped of context. If you followed this line of reasoning and composed your test of questions that reflect how knowledge is used in the world outside of the classroom, it would have ecological validity.
Here lies the tension between construct and ecological validity: While including context in your math test would increase its ecological validity, doing so would increase the risk of reducing its construct validity by making it less clear exactly what is being measured. This might be reflected in lowered scores for students who can do math but aren't good readers or are unfamiliar with the kind of situations described in test questions. A result like this can look a lot like discrimination—especially when the stakes are high.
In sum, the more you strip away context, the more you risk lowering ecological validity. The more context you add, the more you risk lowering construct validity. Today, there is a strong tendency to prioritize construct validity over ecological validity, primarily because the stakes of many tests are very high, which increases our focus on anything that seems to interfere with fairness. Without intending to, test developers, policy-makers, parents, and teachers have contributed to the creation of tests with decreasing ecological validity—and there is no doubt that teachers are teaching to these tests. The implication? What students are learning in our public schools is increasingly irrelevant to competence in the real world.
This is a cause for concern.
Confidence in testing
I doubt there is a person in the Western world over the age of 4 who hasn’t taken a psychological or educational test. Yet very few of us know one of the most important facts about these tests—their scores are always imprecise.
When you measure the height of a child, you can be pretty confident that the measurement you make is correct within a fraction of an inch on either side. And if you check the time on your mobile phone, you can be pretty certain that it is accurate within a fraction of a minute on either side. Rulers and clocks are well-calibrated measures that we can use with great confidence if we use them correctly. The same is true of measures of temperature, speed, frequency, and weight.
But even measurements made with these metrics are more or less precise. They’re correct within a range. These ranges are called confidence intervals. The confidence interval around the measurement of a child’s height would be expressed as something like “82 centimeters plus or minus 1/2 of a centimeter.” Statisticians would say that the child’s true height is likely to be somewhere in this range.
Scores on educational and psychological tests have confidence intervals too. But there is a difference between these confidence intervals and those for physical measurements. The confidence intervals around scores on psychological and educational tests are larger than the confidence intervals around measurements in the physical world. How much larger? Let’s look at an example.
The psychological and educational tests with the smallest confidence intervals are those made by high-stakes test developers like ETS. For their high stakes tests — the ones used to make decisions like who gets to go to which college — they set the highest standard. This standard, if it were applied to measuring height, would allow us to say something along the lines of, “We’re confident that this child is 82 centimeters tall, give or take 8 centimeters.”
Now, you may argue that 8 centimeters isn’t all that much, but if you’re buying a car seat or deciding who gets to ride a roller coaster, it could be the difference between life and death. Measurement precision matters.
The more imprecise our measurements are — the bigger the confidence intervals around them — the more careful we need to be about the kinds of decisions we make with them. When it comes to educational and psychological assessment, I think we’re far too careless. Too many people who buy and use assessments don’t know enough about statistics to make well-informed assessment decisions.
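One concrete way to exercise that care is to ask whether two observed scores are actually distinguishable, given a test's measurement error. The sketch below uses the classical-test-theory standard error of the difference between two scores from the same test; the standard deviation and reliability values are illustrative assumptions, not figures from any real test.

```python
import math

def scores_distinguishable(score_a, score_b, sd, reliability, z=1.96):
    """Check whether two observed scores differ by more than chance.

    The standard error of the difference between two scores from the
    same test is SEM * sqrt(2), where SEM = sd * sqrt(1 - reliability).
    Two scores are distinguishable (at roughly 95% confidence) only if
    they differ by more than z times that standard error.
    """
    sem = sd * math.sqrt(1 - reliability)
    se_diff = sem * math.sqrt(2)
    return abs(score_a - score_b) > z * se_diff

# Illustrative: two test-takers 5 points apart on a scale with
# SD = 15 and a reliability of .90.
print(scores_distinguishable(100, 105, sd=15, reliability=0.90))  # → False
```

Under these assumed numbers, two scores must differ by about 13 points before we can be 95% confident that they reflect different true scores — a 5-point gap tells us essentially nothing about which test-taker is "ahead."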
Fortunately, I believe we can remedy this! And it seems to me that the best place to begin is with confidence, so, in the next article in this series I’m going to share a super-easy way to figure out how much confidence you can have in any test’s scores.
How do you know if the score you get on a test is accurate?
This depends on what you mean by accurate. If you mean, “Can I be sure that the score I receive on a test is an accurate representation of my performance?” all you need to know is if the test was scored accurately.
However, if you mean, “Can I be sure that the score I receive on a test is an accurate representation of my true abilities, competence, attitudes, dispositions, or opinions?” you can’t. It is impossible for a single test, no matter how good it is, to guarantee an accurate assessment of any of these things. In fact, it takes multiple assessments and multiple kinds of assessments to build up anything like an accurate picture.
I think what test developers do is a black box for most of us. We tend to assume that there is some kind of special insight test developers have by virtue of their psychometric tools (or knowledge about learning) that allows them to get inside of our minds and make accurate judgments about what’s going on in there. But there isn’t. All a single test can measure is performance on the items on that test. The rest is inference.
Realizing this, you may wonder why single scores on single tests are being used to make high stakes decisions about anything.
There are three main players in the creation of most standardized tests. They are the (1) discipline experts, (2) item developers, and (3) psychometricians. The discipline experts are usually PhDs who specialize in particular areas—like science, math, writing, or history. They know a lot about their content areas and have done research on teaching and learning in these areas. They also may be teachers of teachers.
A group of discipline experts works together to decide what material should be covered in lessons and on tests.* Discipline experts set standards through organizations like the National Research Council, and may or may not be affiliated with test developers.
The item developers create test questions. They usually have a bachelor's or master's degree in a particular subject area. Many have not taught, and few are experts in learning and development. Item developers design test questions that cover the content of the standards. Almost all of the items designed by item developers are multiple choice, which means that they have right and wrong answers, and thus, must focus on “factual” knowledge.
The third players are psychometricians. They put together groups of items and examine how well these work together to measure students’ knowledge of the subject at hand. Generally, psychometricians know relatively little about learning and development and do not work closely with item developers or discipline experts.
Although discipline experts may include skills for thinking and learning in their standards, these skills are not measured on standardized tests, because they cannot be evaluated with multiple choice items. And although discipline experts may focus on student understanding in their standards, research has shown that up to 50% of the students who get a multiple choice item correct cannot demonstrate understanding by providing an adequate explanation of their answer.
Cognitive psychologists know about the problems that stem from how standardized tests are made. Many of them are not fans of standardized tests, partly because of the limitations of multiple choice items, partly because the tests are inadequately grounded in evidence about how students actually learn concepts and skills, and partly because the tests push teachers to emphasize breadth over depth and memorization of facts over skills for thinking and learning.
In future posts, after explaining a bit more about how tests work, I will examine an alternative testing model based on research into how students actually learn concepts and skills over time.
*These decisions, for the most part, are based on informed opinion and are usually not informed by systematic research into how students actually learn concepts over the course of K-12. (This is because research into how students learn concepts over time is relatively rare. Most educational research focuses on learning in a narrow age range, attempts to identify “general mechanisms” for learning, or looks for better ways to teach.)