This is a terrible way to learn

Honestly, folks, we really, really, really need to get over the memorization model of learning. It’s good for spelling bees, trivia games, Jeopardy, and passing multiple-choice tests. But it’s BORING if not torturous! And cramming more and more facts into our brains isn’t going to help most of us thrive in real life — especially in the 21st century.

As an employer, I don’t care how many facts are in your head or how quickly you can memorize new information. I’m looking for talent, applied expertise (not just factual or theoretical knowledge), and the following skills and attributes:

The ability to tell the difference between memorizing and understanding

I won’t delegate responsibility to employees who can’t tell the difference between memorizing and understanding. Employees who can’t make this distinction don’t know when they need to ask questions. Consequently, they repeatedly make decisions that aren’t adequately informed.

I’ve taken to asking potential employees what it feels like when they realize they’ve really understood something. Many applicants, including highly educated applicants, don’t understand the question. It’s not their fault. The problem is an educational system that’s way too focused on memorizing.

The ability to think

It’s essential that every employee in my organization is able to evaluate information, solve problems, participate actively in decision making, and know the difference between an opinion and a good evidence-based argument.

A desire to listen and the skills for doing it well

We also need employees who want and know how to listen — really listen. In my organization, we don’t make decisions in a vacuum. We seek and incorporate a wide range of stakeholder perspectives. A listening disposition and listening skills are indispensable.

The ability to speak truth (constructively)

I know my organization can’t grow the way I want it to if the people around me are unwilling to share their perspectives or are unable to share them constructively. When I ask someone for an opinion, I want to hear their truth — not what they think I want to hear.

The ability to work effectively with others

This requires respect for other human beings; good interpersonal, collaborative, and conflict-resolution skills; the ability to hear and respond positively to productive critique; and buckets of compassion.

Humility

Awareness of the ubiquity of human fallibility, including one’s own, and knowledge about human limitations, including the built-in mental biases that so often lead us astray.

A passion for learning (a.k.a. growth mindset)

I love working with people who are driven to increase their understanding and skills — so driven that they’re willing to feel lost at times, so driven that they’re willing to make mistakes on their way to a solution, so driven that their happiness depends on the availability of new challenges.

The desire to do good in the world

I run a nonprofit. We need employees who are motivated to do good.

Not one of these capabilities can be learned by memorizing. All of them are best learned through reflective practice — preferably 12–16 years of reflective practice (a.k.a. VCoLing) in an educational system that is not obsessed with remembering.

In case you’re thinking that maybe I’m an oddball employer, check out LinkedIn’s 2018 Workplace Learning Report and the 2016 World Economic Forum Future of Jobs Report.


Introducing Lectica First: Front-line to mid-level recruitment assessment—on demand

The world’s best recruitment assessments—unlimited, auto-scored, affordable, relevant, and easy

Lectical Assessments have been used to support senior and executive recruitment for over 10 years, but the expense of human scoring has prohibited their use at scale. I’m delighted to report that this is no longer the case. Because of CLAS—our electronic developmental scoring system—we plan to deliver customized assessments of workplace reasoning with real-time scoring. We’re calling this service Lectica First.

Lectica First is a subscription service.* It allows you to administer as many Lectica First assessments as you’d like, any time you’d like. It’s priced to make it possible for your organization to pre-screen every candidate (up through mid-level management) before you look at a single resume or call a single reference. And we’ve built in several upgrade options, so you can easily obtain additional information about the candidates that capture your interest.

Learn more about Lectica First subscriptions.


The current state of recruitment assessment

“Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills” (Hunter, Schmidt, & Judiesch, 1990).

Most conventional workplace assessments measure either ability (knowledge & skill) or perspective (opinion or perception). These assessments examine factors like literacy, numeracy, role-specific competencies, leadership traits, personality, and cultural fit, and are generally delivered through interviews, multiple-choice tests, or Likert-style surveys.

Lectical Assessments are tests of mental ability (or mental skill). High-quality tests of mental ability have the highest predictive validity for recruitment purposes, hands down; the latest meta-analytic study of predictive validity shows that they are by far the best predictors of recruitment success.

Personality tests come in a distant second. In their meta-analysis of the literature, Tett, Jackson, and Rothstein (1991) reported an overall relation between personality and job performance of .24 (with conscientiousness as the best predictor by a wide margin). Translated, this means that only about 6% of job performance is predicted by personality traits. These numbers do not appear to have been challenged in more recent research (Johnson, 2001).
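For readers who want the arithmetic behind that 6%: the proportion of variance in job performance explained by a predictor is the square of its validity coefficient, so

\[ r^2 = (0.24)^2 \approx 0.058 \approx 6\% \]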

Predictive validity of various types of assessments used in recruitment

The following figure shows average predictive validities for various forms of assessment used in recruitment contexts. The percentages indicate how much of a role a particular form of assessment plays in predicting performance—its predictive power. When deciding which assessments to use in recruitment, the goal is to achieve the greatest possible predictive power with the fewest assessments.

In the figure below, assessments are color-coded to indicate whether they focus on mental (cognitive) skills, behavior (past or present), or personality traits. Tests of mental skills clearly stand out as the best predictors.

Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.

Why use Lectical Assessments for recruitment?

Lectical Assessments are “next generation” assessments of mental ability, made possible through a novel synthesis of developmental theory, primary research, and technology. Until now, multiple-choice ability tests have been the most affordable option for employers. But despite being far more predictive than other types of tests, these tests suffer from important limitations. Lectical Assessments address these limitations. For details, take a look at the side-by-side comparison of Lectica First tests with conventional aptitude tests, below.

Accuracy
Lectica First: Level of reliability (.95–.97) makes them accurate enough for high-stakes decision-making. (Interpreting reliability statistics)
Aptitude tests: Varies greatly. The best aptitude tests have levels of reliability in the .95 range. Many recruitment tests have much lower levels.

Time investment
Lectica First: Lectical Assessments are not timed. They usually take from 45–60 minutes, depending on the individual test-taker.
Aptitude tests: Varies greatly. For acceptable accuracy, tests must have many items and may take hours to administer.

Objectivity
Lectica First: Scores are objective (computer scoring is blind to differences in sex, body weight, ethnicity, etc.).
Aptitude tests: Scores on multiple choice tests are objective. Scores on interview-based tests are subject to several sources of bias.

Expense
Lectica First: Highly affordable.
Aptitude tests: Expensive.

Fit to role: complexity
Lectica First: Lectica employs sophisticated developmental tools and technologies to efficiently determine the relation between the complexity of role requirements and the level of mental skill required to meet those requirements.
Aptitude tests: Lectica’s approach is not directly comparable to other available approaches.

Fit to role: relevance
Lectica First: Lectical Assessments are readily customized to fit particular jobs, and are direct measures of what’s most important—whether or not candidates’ actual workplace reasoning skills are a good fit for a particular job.
Aptitude tests: Aptitude tests measure people’s ability to select correct answers to abstract problems. It is hoped that these answers will predict how good a candidate’s workplace reasoning skills are likely to be.

Predictive validity
Lectica First: In research so far: predicts advancement (uncorrected R = .53**, R² = .28), National Leadership Study.
Aptitude tests: The aptitude (IQ) tests used in published research predict performance (uncorrected R = .45 to .54, R² = .20 to .29).

Cheating
Lectica First: The written response format makes cheating virtually impossible when assessments are taken under observation, and very difficult when taken without observation.
Aptitude tests: Cheating is relatively easy and rates can be quite high.

Formative value
Lectica First: High. Lectica First assessments can be upgraded after hiring, then used to inform employee development plans.
Aptitude tests: None. Aptitude is a fixed attribute, so there is no room for growth.

Continuous improvement
Lectica First: Our assessments are developed with a 21st century learning technology that allows us to continuously improve the predictive validity of Lectica First assessments.
Aptitude tests: Conventional aptitude tests are built with a 20th century technology that does not easily lend itself to continuous improvement.

* CLAS is not yet fully calibrated for scores above 11.5 on our scale. Scores at this level are more often seen in upper- and senior-level managers and executives. For this reason, we do not recommend using Lectica First for recruitment above mid-level management.

**The US Department of Labor’s highest category of validity, labeled “Very Beneficial,” requires regression coefficients of .35 or higher (R > .34).
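The same arithmetic applies to the predictive validity row in the comparison above: squaring the uncorrected correlations gives the proportion of performance variance explained, and both sets of figures sit well above the Department of Labor’s .35 threshold.

\[ R^2 = (0.53)^2 \approx 0.28 \qquad\text{and}\qquad (0.45)^2 \text{ to } (0.54)^2 \approx 0.20 \text{ to } 0.29 \]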

References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.

Zeidner, M., Matthews, G., & Roberts, R. D. (2004). Emotional intelligence in the workplace: A critical review. Applied Psychology: An International Review, 53(3), 371-399.
