This is a terrible way to learn

Honestly folks, we really, really, really need to get over the memorization model of learning. It’s good for spelling bees, trivia games, Jeopardy, and passing multiple choice tests. But it’s BORING if not torturous! And cramming more and more facts into our brains isn’t going to help most of us thrive in real life — especially in the 21st century.

As an employer, I don’t care how many facts are in your head or how quickly you can memorize new information. I’m looking for talent, applied expertise (not just factual or theoretical knowledge), and the following skills and attributes:

The ability to tell the difference between memorizing and understanding

I won’t delegate responsibility to employees who can’t tell the difference between memorizing and understanding. Employees who can’t make this distinction don’t know when they need to ask questions. Consequently, they repeatedly make decisions that aren’t adequately informed.

I’ve taken to asking potential employees what it feels like when they realize they’ve really understood something. Many applicants, including highly educated applicants, don’t understand the question. It’s not their fault. The problem is an educational system that’s way too focused on memorizing.

The ability to think

It’s essential that every employee in my organization is able to evaluate information, solve problems, participate actively in decision making and know the difference between an opinion and a good evidence-based argument.

A desire to listen and the skills for doing it well

We also need employees who want and know how to listen — really listen. In my organization, we don’t make decisions in a vacuum. We seek and incorporate a wide range of stakeholder perspectives. A listening disposition and listening skills are indispensable.

The ability to speak truth (constructively)

I know my organization can’t grow the way I want it to if the people around me are unwilling to share their perspectives or are unable to share them constructively. When I ask someone for an opinion, I want to hear their truth — not what they think I want to hear.

The ability to work effectively with others

This requires respect for other human beings, good interpersonal, collaborative, and conflict resolution skills, the ability to hear and respond positively to productive critique, and buckets of compassion.

Humility

Awareness of the ubiquity of human fallibility, including one’s own, and knowledge about human limitations, including the built-in mental biases that so often lead us astray.

A passion for learning (a.k.a. growth mindset)

I love working with people who are driven to increase their understanding and skills — so driven that they’re willing to feel lost at times, so driven that they’re willing to make mistakes on their way to a solution, so driven that their happiness depends on the availability of new challenges.

The desire to do good in the world

I run a nonprofit. We need employees who are motivated to do good.

Not one of these capabilities can be learned by memorizing. All of them are best learned through reflective practice — preferably 12–16 years of reflective practice (a.k.a. VCoLing) in an educational system that is not obsessed with remembering.

In case you’re thinking that maybe I’m an oddball employer, check out LinkedIn’s 2018 Workplace Learning Report and the 2016 World Economic Forum Future of Jobs Report.


President Trump passed the Montreal Cognitive Assessment

Shortly after the President passed the Montreal Cognitive Assessment, a reader emailed with two questions:

  1. Does this mean that the President has the cognitive capacity required of a national leader?
  2. How does a score on this test relate to the complexity level scores you have been describing in recent posts?

Question 1

A high score on the Montreal Cognitive Assessment does not mean that the President has the cognitive capacity required of a national leader. This test result simply means there is a high probability that the President is not suffering from mild cognitive impairment. (The test has been shown to detect existing cognitive impairment 88% of the time [1].) To determine whether the President has the mental capacity to understand the complex issues he faces as a national leader, we need to know how complexly he thinks about those issues.

Question 2

The answer to the second question is that there is little relation between scores on the Montreal Cognitive Assessment and the complexity level of a person’s thinking. A test like the Montreal Cognitive Assessment does not require the kind of thinking a President needs to understand highly complex issues like climate change or the economy. Teenagers can easily pass this test.

Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U.S. presidents who preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)

[1] Tsoi, K. K., Chan, J. Y., Hirai, H. W., Wong, S. Y., & Kwok, T. C. (2015). Cognitive tests to detect dementia: A systematic review and meta-analysis. JAMA Internal Medicine, 175(9), 1450-1458. doi:10.1001/jamainternmed.2015.2152

 


Statistics for all: significance vs. significance

There’s a battle out there no one’s tweeting about. It involves a tension between statistical significance and practical significance. If you make decisions that involve evaluating evidence—in other words, if you are human—understanding the distinction between these two types of significance will significantly improve your decisions (both practically and statistically).

Statistical significance

Statistical significance (a.k.a. “p”) is a calculation made to determine how confident we can be that a relationship between two factors (variables) is real. The lower a p value, the more confident we can be. Most of the time, we want p to be less than .05.

Don’t be misled! A low p value tells us nothing about the size of a relationship between two variables. When someone says that statistical significance is high, all this means is that we can be more confident that the relationship is real.
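
To see the difference in action, here is a minimal sketch in Python. It isn’t from this article; it assumes numpy and scipy are installed, and all of the numbers are made up for illustration. The same weak correlation produces a low p only when the sample is large, which is exactly why a low p tells you about confidence, not size.

```python
# A minimal sketch (illustrative, not from the article). Assumes numpy and
# scipy are available. The same weak true correlation (about 0.1) is tested
# at three sample sizes: r barely changes, but p shrinks as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def weak_correlation_test(n, true_r=0.1):
    """Draw n pairs with a weak true correlation and return (r, p)."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return stats.pearsonr(x, y)

for n in (30, 300, 30_000):
    r, p = weak_correlation_test(n)
    print(f"n = {n:>6}   r = {r:+.3f}   p = {p:.4f}")
# Expected pattern: r hovers near 0.1 throughout, while p typically drops
# below .05 only at the larger sample sizes. Low p = confidence, not size.
```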

Replication

Once we know we can be confident that a relationship between two variables is real, we should check to see if the research has been replicated. That’s because we can’t be sure a statistically significant relationship found in a single study is really real. After we’ve determined that a relationship is statistically significant and replicable, it’s time to consider practical significance. Practical significance has to do with the size of the relationship.

Practical significance

To figure out how practically significant a relationship is, we need to know how big it is. The size of a relationship, or effect size, is evaluated independently of p. For a plain English discussion of effect size, check out this article, Statistics for all: prediction.
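
Here is another illustrative sketch (again assuming numpy and scipy, with made-up data rather than anything from our research). It computes Cohen’s d, one common effect size, directly from two samples; notice that the effect size stays tiny even when p is comfortably below .05.

```python
# A minimal sketch (illustrative data, not from the article). Two large
# groups with a trivially small true difference: the t-test can come out
# "significant," yet the standardized effect size stays near zero.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Standardized mean difference between two groups (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.00, scale=1.0, size=5000)
group_b = rng.normal(loc=0.05, scale=1.0, size=5000)  # tiny true difference

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p = {p_value:.4f}   Cohen's d = {cohens_d(group_a, group_b):.3f}")
# With samples this large, p is often below .05 even though d is only about
# 0.05: statistically significant, but of little practical significance.
```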

Importance

The greater the size of a relationship between two variables, the more likely the relationship is to be important — but that’s not enough. To have real importance, a relationship must also matter. And it is the decision-maker who decides what matters.

Examples

Let’s look at one of my favorite examples. The results of high stakes tests like the SAT and GRE — college entrance exams made by ETS — have been shown to predict college success. Effect sizes tend to be small, but the effects are statistically significant — we can have confidence that they are real. And evidence for these effects has come from numerous studies, so we know they are really real.

If you’re the president of a college, there is little doubt that these test scores have practical significance. Improving prediction of student success, even a little, can have a big impact on the bottom line.

If you’re an employer, you’re more likely to care about how well a student did in college than how they did prior to college, so SAT and GRE scores are likely to be less important to you than college success.

If you’re a student, the size of the effect isn’t important at all. You don’t make the decision about whether or not the school is going to use the SAT or GRE to filter students. Whether or not these assessments are used is out of your control. What’s important to you is how a given college is likely to benefit you.

If you’re me, the size of the effect isn’t very important either. My perspective is that of someone who wants to see major changes in the educational system. I don’t think we’re doing our students any favors by focusing on the kind of learning that can be measured by tests like the GRE and SAT. I think our entire educational system leans toward the wrong goal—transmitting more and more “correct” information. I think we need to ask if what students are learning in school is preparing them for life.

Another thing to consider when evaluating practical significance is whether or not a relationship between two variables tells us only part of a more complex story. For example, the relationship between ethnicity and the rate of developmental growth (what my colleagues and I specialize in measuring) is highly statistically significant (real) and fairly strong (moderate effect size). But this relationship completely disappears once socioeconomic status (wealth) is taken into account. The first relationship is misleading (spurious). The real culprit is poverty. It’s a social problem, not an ethnic problem.

Summing up

Most discussions of practical significance stop with effect size. From a statistical perspective, this makes sense. Statistics can’t be used to determine which outcomes matter. People have to do that part, but statistics, when good ones are available, should come first. Here’s my recipe (with a short code sketch after the list):

  1. Find out if the relationship is real (p < .05).
  2. Find out if it is really real (replication).
  3. Consider the effect size.
  4. Decide how much it matters.
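
For readers who like to see things spelled out, here is how the recipe might look as a small Python function. This is a sketch, not a Lectica tool: pearsonr handles step 1, step 2 has to come from other studies (so it is passed in as a flag), and the cutoff used for "matters" in step 4 is an arbitrary placeholder for a judgment only the decision-maker can make.

```python
# A sketch of the four-step recipe (assumptions: scipy is available; the
# "matters" threshold is purely illustrative and is really a human call).
import numpy as np
from scipy import stats

def evaluate_relationship(x, y, replicated, matters_cutoff=0.3):
    """Walk the four steps for a simple two-variable relationship."""
    r, p = stats.pearsonr(x, y)
    return {
        "1. real (p < .05)": bool(p < 0.05),
        "2. really real (replicated elsewhere)": bool(replicated),
        "3. effect size (r)": round(float(r), 2),
        "4. big enough to matter here": bool(abs(r) >= matters_cutoff),
    }

# Usage with made-up data: a modest true relationship between x and y.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.2 * x + rng.normal(size=200)
print(evaluate_relationship(x, y, replicated=True))
```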

My organization, Lectica, Inc., is a 501(c)(3) nonprofit corporation. Part of our mission is to share what we learn with the world. One of the things we’ve learned is that many assessment buyers don’t seem to know enough about statistics to make the best choices. The Statistics for all series is designed to provide assessment buyers with the knowledge they need most to become better assessment shoppers.

 


National leaders’ thinking: How does it measure up?

Special thanks to my Australian colleague, Aiden Thornton, for his editorial and research assistance.

This is the first in a series of articles on the complexity of national leaders’ thinking. These articles will report results from research conducted with CLAS, our newly validated electronic developmental scoring system. CLAS will be used to score these leaders’ responses to questions posed by prominent journalists.

In this first article, I’ll be providing some of the context for this project, including information about how my colleagues and I think about complexity and its role in leadership. I’ve embedded lots of links to additional material for readers who have questions about our 100+ year-old research tradition, Lectica’s (the nonprofit that owns me) assessments, and other research we’ve conducted with these assessments.

Context and research questions

Lectica creates diagnostic assessments for learning that support the development of mental skills required for working with complexity. We make these learning tools for both adults and children. Our K-12 initiative—the DiscoTest Initiative—is dedicated to bringing these tools to individual K-12 teachers everywhere, free of charge. Our adult assessments are used by organizations in recruitment and training, and by colleges and universities in admissions and program evaluation.

All Lectical Assessments measure the complexity level (a.k.a. level of vertical development) of people’s thinking in particular knowledge areas. A complexity level score on a Lectical Assessment tells us the highest level of complexity—in a problem, issue, or task—an individual is likely to be able to work with effectively.

On several occasions over the last 20 years, my colleagues and I have been asked to evaluate the complexity of national leaders’ reasoning skills. Our response has been, “We will, but only when we can score electronically—without the risk of human bias.” That time has come. Now that our electronic developmental scoring system, CLAS, has demonstrated a level of reliability and precision that is acceptable for this purpose, we’re ready to take a look.

Evaluating the complexity of national leaders’ thinking is a challenging task for several reasons. First, it’s virtually impossible to find examples of many of these leaders’ “best work.” Their speeches are generally written for them, and speech writers usually try to keep the complexity level of these speeches low, aiming for a reading level in the 7th to 9th grade range. (Reading level is not the same thing as complexity level, but like most tests of capability, it correlates moderately with complexity level.) Second, even when national leaders respond to unscripted questions from journalists, they work hard to use language that is accessible to a wide audience. And finally, it’s difficult to identify a level playing field—one in which all leaders have the same opportunity to demonstrate the complexity of their thinking.

Given these obstacles, there’s no point in attempting to evaluate the actual thinking capabilities of national leaders. In other words, we won’t be claiming that the scores awarded by CLAS represent the true complexity level of leaders’ thinking. Instead, we will address the following questions:

  1. When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  2. How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

Thinking complexity and leader success

At this point, you may be wondering, “What is thinking complexity and why is it important?” A comprehensive response to this question isn’t possible in a short article like this one, but I can provide a basic description of complexity as we see it at Lectica, and provide some examples that highlight its importance.

All issues faced by leaders are associated with a certain amount of built-in complexity. For example:

  1. The sheer number of factors/stakeholders that must be taken into account.
  2. Short and long-term implications/repercussions. (Will a quick fix cause problems downstream, such as global unrest or catastrophic weather?)
  3. The number and diversity of stakeholders/interest groups. (What is the best way to balance the needs of individuals, families, businesses, communities, states, nations, and the world?)
  4. The length of time it will take to implement a decision. (Will it take months, years, decades? Longer projects are inherently more complex because of changes over time.)
  5. Formal and informal rules/laws that place limits on the deliberative process. (For example, legislative and judicial processes are often designed to limit the decision making powers of presidents or prime ministers. This means that leaders must work across systems to develop decisions, which further increases the complexity of decision making.)

Over the course of childhood and adulthood, the complexity of our thinking develops through up to 13 skill levels (0–12). Each new level builds upon the previous level. The figure above shows four adult complexity “zones” — advanced linear thinking (second zone of level 10), early systems thinking (first zone of level 11), advanced systems thinking (second zone of level 11), and early principles thinking (first zone of level 12). In advanced linear thinking, reasoning is often characterized as “black and white.” Individuals performing in this zone cope best with problems that have clear right or wrong answers. It is only once individuals enter early systems thinking that they begin to work effectively with highly complex problems that do not have clear right or wrong answers.

Leadership at the national level requires exceptional skills for managing complexity, including the ability to deal with the most complex problems faced by humanity (Helbing, 2013). Needless to say, a national leader regularly faces issues at or above early principles thinking.

Complexity level and leadership—the evidence

In the workplace, the hiring managers who decide which individuals will be put in leadership roles are likely to choose leaders whose thinking complexity is a good match for their roles. Even if they have never heard the term complexity level, hiring managers generally understand, at least implicitly, that leaders who can work with the complexity inherent in the issues associated with their roles are likely to make better decisions than leaders whose thinking is less complex.

There is a strong relation between the complexity of leadership roles and the complexity level of leaders’ reasoning. In general, more complex thinkers fill more complex roles. The figure below shows how lower and senior level leaders’ complexity scores are distributed in Lectica’s database. Most senior leaders’ complexity scores are in or above advanced systems thinking, while those of lower level leaders are primarily in early systems thinking.

The strong relation between the complexity of leaders’ thinking and the complexity of their roles can also be seen in the recruitment literature. To be clear, complexity is not the only aspect of leadership decision making that affects leaders’ ability to deal effectively with complex issues. However, a large body of research, spanning over 50 years, suggests that the top predictors of workplace leader recruitment success are those that most strongly relate to thinking skills, including complexity level.

The figure below shows the predictive power of several forms of assessment employed in making hiring and promotion decisions. Assessments of mental ability have been shown to have the highest predictive power. In other words, assessments of thinking skills do a better job predicting which candidates will be successful in a given role than other forms of assessment.

The match between the complexity of national leaders’ thinking and the complexity level of the problems faced in their roles is important. While we will not be able to assess the actual complexity level of the thinking of national leaders, we will be able to examine the complexity of their responses to questions posed by prominent journalists. In upcoming articles, we’ll be sharing our findings and discussing their implications.

Coming next…

In the second article in this series, we begin our examination of the complexity of national leaders’ thinking by scoring interview responses from four US Presidents—Bill Clinton, George W. Bush, Barack Obama, and Donald Trump.


References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Dawson, T. L. (2017, October 20). Using technology to advance understanding: The calibration of CLAS, an electronic developmental scoring system. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Dawson, T. L., & Thornton, A. M. A. (2017, October 18). An examination of the relationship between argumentation quality and students’ growth trajectories. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497, 51-59.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.


World Economic Forum—tomorrow’s skills

The top 10 workplace skills of the future.

Sources: Future of Jobs Report, WEF 2017

In a recent blog post—actually in several recent blog posts—I've been emphasizing the importance of building tomorrow's skills. These are the kinds of skills we all need to navigate our increasingly complex and changing world. While I may not agree that all of the top 10 skills listed in the World Economic Forum report (shown above) belong in a list of skills (creativity is much more than a skill, and service orientation is more of a disposition than a skill), the flavor of this list is generally in sync with the kinds of skills, dispositions, and behaviors required in a complex and rapidly changing world.

The "skills" in this list cannot be…

  • developed in learning environments focused primarily on correctness or in workplace environments that don't allow for mistakes; or
  • measured with ratings on surveys or on tests of people's ability to provide correct answers.

These "skills" are best developed through cycles of goal setting, information gathering, application, and reflection—what we call virtuous cycles of learning—or VCoLs. And they're best assessed with tests that focus on applications of skill in real-world contexts, like Lectical Assessments, which are based on a rich research tradition focused on the development of understanding and skill.

 


If you want students to develop faster, stop trying to speed up learning

During the last 20 years—since high stakes testing began to take hold—public school curricula have undergone a massive transformation. Standards have pushed material that was once taught in high school down into the 3rd and 4th grade, and the amount of content teachers are expected to cover each year has increased steadily. The theory behind this trend appears to be that learning more content and learning it earlier will help students develop faster.

But is this true? Is there any evidence at all that learning more content and learning it earlier produces more rapid development? If so, I haven't seen it.

In fact, our evidence points to the opposite conclusion. Learning more and learning it earlier may actually be interfering with the development of critical life skills—like those required for making good decisions in real-life contexts. As the graph below makes clear, students in schools that emphasize covering required content do not develop as rapidly as students in schools that focus on fostering deep understanding—even though learning for understanding generally takes more time than learning something well enough to "pass the test."

What is worse, we're finding that the average student in schools with the greatest emphasis on covering required content appears to stop developing by the end of grade 10, with an average score of 10.1. This is the same score received by the average 6th grader in schools with the greatest emphasis on fostering deep understanding.

The graphs in this post are based on data from 17,755 LRJA assessments. The LRJA asks test-takers to respond to a complex real-life dilemma. They are prompted to explore questions about:

  • finding, creating, and evaluating information and evidence,
  • perspectives, persuasion, and conflict resolution,
  • when and if it's possible to be certain, and
  • the nature of facts, truth, and reality.

Students were in grades 4-12, and attended one or more of 56 schools in the United States and Canada.

The graphs shown above represent two groups of schools—those with students who received the highest scores on the LRJA and those with students who received the lowest scores. These schools differed from one another in two other ways. First, the highest performing schools were all private schools*. Most students in these schools came from upper middle SES (socio-economic status) homes. The lowest performing schools were all public schools primarily serving low SES inner city students.

The second way in which these schools differed was in the design of their curricula. The highest performing schools featured integrated curricula with a great deal of practice-based learning and a heavy emphasis on fostering understanding and real-world competence. All of the lowest performing schools featured standards-focused curricula with a strong emphasis on learning the facts, formulas, procedures, vocabulary, and rules targeted by state tests.

Based on the results of conventional standardized tests, we expected most of the differences between student performances on the LRJA in these two groups of schools to be explained by SES. But this was not the case. Private schools with more conventional curricula and high performing public schools serving middle and upper middle SES families did indeed outperform the low SES schools, but as shown in the graph below, by grade 12, their students were still about 2.5 years behind students in the highest performing schools. At best, SES explains only about 1/2 of the difference between the best and worst schools in our database. (For more on this, see the post, "Does a focus on deep understanding accelerate growth?")

By the way, the conventional standardized test scores of students in this middle group, despite their greater emphasis on covering content, were no better than the conventional standardized test scores of students in the high performing group. Focusing on deep understanding appears to help students develop faster without interfering with their ability to learn required content.

This will not be our last word on the subject. As we scale our K-12 assessments, we'll be able to paint an increasingly clear picture of the developmental impact of a variety of curricula.


Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.


*None of these schools pre-selected their students based on test scores. 

See a version of this article on Medium.


Learning and metacognition

Metacognition is thinking about thinking. Metacognitive skills are an interrelated set of competencies for learning and thinking, and include many of the skills required for active learning, critical thinking, reflective judgment, problem solving, and decision-making. People whose metacognitive skills are well developed are better problem-solvers, decision makers and critical thinkers, are more able and more motivated to learn, and are more likely to be able to regulate their emotions (even in difficult situations), handle complexity, and cope with conflict. Although metacognitive skills, once they are well-learned, can become habits of mind that are applied unconsciously in a wide variety of contexts, it is important for even the most advanced learners to “flex their cognitive muscles” by consciously applying appropriate metacognitive skills to new knowledge and in new situations.

Lectica's learning model, VCoL+7 (the virtuous cycle of learning and +7 skills), leverages metacognitive skills in a number of ways. For example, the fourth step in VCoL is reflection & analysis, and the +7 skills include reflective disposition, self-monitoring and awareness, and awareness of cognitive and behavioral biases.


 

Learning in the workplace occurs optimally when the learner has a reflective disposition and receives both institutional and educational support.


Introducing Lectica First: Front-line to mid-level recruitment assessment—on demand

The world’s best recruitment assessments—unlimited, auto-scored, affordable, relevant, and easy

Lectical Assessments have been used to support senior and executive recruitment for over 10 years, but the expense of human scoring has prohibited their use at scale. I’m delighted to report that this is no longer the case. Because of CLAS—our electronic developmental scoring system—we plan to deliver customized assessments of workplace reasoning with real time scoring. We’re calling this service Lectica First.

Lectica First is a subscription service.* It allows you to administer as many Lectica First assessments as you’d like, any time you’d like. It’s priced to make it possible for your organization to pre-screen every candidate (up through mid-level management) before you look at a single resume or call a single reference. And we’ve built in several upgrade options, so you can easily obtain additional information about the candidates that capture your interest.

learn more about Lectica First subscriptions


The current state of recruitment assessment

“Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills” (Hunter, Schmidt, & Judiesch, 1990).

Most conventional workplace assessments measure either ability (knowledge & skill) or perspective (opinion or perception). These assessments examine factors like literacy, numeracy, role-specific competencies, leadership traits, personality, and cultural fit, and are generally delivered through interviews, multiple choice tests, or Likert-style surveys.

Lectical Assessments are tests of mental ability (or mental skill). High-quality tests of mental ability have the highest predictive validity for recruitment purposes, hands down. The latest meta-analytic study of predictive validity shows that tests of mental ability are by far the best predictors of recruitment success.

Personality tests come in a distant second. In their meta-analysis of the literature, Tett, Jackson, and Rothstein (1991) reported an overall relation between personality and job performance of .24 (with conscientiousness as the best predictor by a wide margin). Translated, this means that only about 6% of the variation in job performance is predicted by personality traits. These numbers do not appear to have been challenged in more recent research (Johnson, 2001).
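
The arithmetic behind that 6% figure is just the square of the correlation. Here is a quick plain-Python check using the coefficients quoted in this post (the assumption being that "predicted" is read as variance explained, i.e., R squared):

```python
# Convert validity coefficients (correlations) into variance explained.
# The coefficients below are the ones quoted in this post; treating
# "predictive power" as R squared is the assumption being illustrated.
validities = {
    "personality (Tett, Jackson, & Rothstein, 1991)": 0.24,
    "mental ability, lower bound cited": 0.45,
    "mental ability, upper bound cited": 0.54,
}

for label, r in validities.items():
    print(f"{label}: r = {r:.2f} -> about {r ** 2:.0%} of performance explained")
# 0.24 squared is roughly 6% (matching the claim above); the .45 to .54 range
# cited later in this post for aptitude (IQ) tests works out to roughly 20-29%.
```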

Predictive validity of various types of assessments used in recruitment

The following figure shows average predictive validities for various forms of assessment used in recruitment contexts. The percentages indicate how much of a role a particular form of assessment plays in predicting performance—its predictive power. When deciding which assessments to use in recruitment, the goal is to achieve the greatest possible predictive power with the fewest assessments.

In the figure below, assessments are color-coded to indicate which are focused on mental (cognitive) skills, behavior (past or present), or personality traits. It is clear that tests of mental skills stand out as the best predictors.

Source: Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.

Why use Lectical Assessments for recruitment?

Lectical Assessments are “next generation” assessments of mental ability, made possible through a novel synthesis of developmental theory, primary research, and technology. Until now, multiple-choice ability tests have been the most affordable option for employers. But despite being far more predictive than other types of tests, these tests suffer from important limitations. Lectical Assessments address these limitations. For details, take a look at the side-by-side comparison of Lectica First tests with conventional aptitude tests, below.

Accuracy
  • Lectica First: Level of reliability (.95–.97) makes them accurate enough for high-stakes decision-making. (Interpreting reliability statistics)
  • Aptitude tests: Varies greatly. The best aptitude tests have levels of reliability in the .95 range. Many recruitment tests have much lower levels.

Time investment
  • Lectica First: Lectical Assessments are not timed. They usually take from 45–60 minutes, depending on the individual test-taker.
  • Aptitude tests: Varies greatly. For acceptable accuracy, tests must have many items and may take hours to administer.

Objectivity
  • Lectica First: Scores are objective. (Computer scoring is blind to differences in sex, body weight, ethnicity, etc.)
  • Aptitude tests: Scores on multiple choice tests are objective. Scores on interview-based tests are subject to several sources of bias.

Expense
  • Lectica First: Highly affordable.
  • Aptitude tests: Expensive.

Fit to role: complexity
  • Lectica First: Lectica employs sophisticated developmental tools and technologies to efficiently determine the relation between the complexity of role requirements and the level of mental skill required to meet those requirements.
  • Aptitude tests: Lectica’s approach is not directly comparable to other available approaches.

Fit to role: relevance
  • Lectica First: Lectical Assessments are readily customized to fit particular jobs, and are direct measures of what’s most important—whether or not candidates’ actual workplace reasoning skills are a good fit for a particular job.
  • Aptitude tests: Aptitude tests measure people’s ability to select correct answers to abstract problems. It is hoped that these answers will predict how good a candidate’s workplace reasoning skills are likely to be.

Predictive validity
  • Lectica First: In research so far: predicts advancement (uncorrected R = .53**, R² = .28), National Leadership Study.
  • Aptitude tests: The aptitude (IQ) tests used in published research predict performance (uncorrected R = .45 to .54, R² = .20 to .29).

Cheating
  • Lectica First: The written response format makes cheating virtually impossible when assessments are taken under observation, and very difficult when taken without observation.
  • Aptitude tests: Cheating is relatively easy and rates can be quite high.

Formative value
  • Lectica First: High. Lectica First assessments can be upgraded after hiring, then used to inform employee development plans.
  • Aptitude tests: None. Aptitude is a fixed attribute, so there is no room for growth.

Continuous improvement
  • Lectica First: Our assessments are developed with a 21st century learning technology that allows us to continuously improve the predictive validity of Lectica First assessments.
  • Aptitude tests: Conventional aptitude tests are built with a 20th century technology that does not easily lend itself to continuous improvement.

* CLAS is not yet fully calibrated for scores above 11.5 on our scale. Scores at this level are more often seen in upper- and senior-level managers and executives. For this reason, we do not recommend using Lectica First for recruitment above mid-level management.

**The US Department of Labor’s highest category of validity, labeled “Very Beneficial,” requires regression coefficients of .35 or higher (R > .34).

References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.

Zeidner, M., Matthews, G., & Roberts, R. D. (2004). Emotional intelligence in the workplace: A critical review. Applied psychology: An International Review, 53(3), 371-399.


Why we need to LEARN to think

I'm not sure I buy the argument that reason developed to support social relationships, but the body of research described in this New Yorker article clearly exposes several built-in biases that get in the way of high quality reasoning. These biases are the reason why learning to think should be a much higher priority in our schools (and in the workplace). 


How to teach critical thinking: make it a regular practice

We've argued for years that you can't really learn critical thinking by taking a critical thinking course. Critical thinking is a skill that develops through reflective practice (VCoL). Recently, a group of Stanford scientists reported that a reflective practice approach not only works in the short term but also produces "sticky" results. Students who are routinely prompted to evaluate data get better at evaluating data—and keep evaluating it even after the prompts are removed.

Lectica is the only test developer that creates assessments that measure and support this kind of learning.
