The world’s best recruitment assessments—unlimited, auto-scored, affordable, relevant, and easy
Lectical Assessments have been used to support senior and executive recruitment for over 10 years, but the expense of human scoring has prevented their use at scale. I’m delighted to report that this is no longer the case. Thanks to CLAS, our electronic developmental scoring system, we can now deliver customized assessments of workplace reasoning with real-time scoring. We’re calling this service Lectica First.
Lectica First is a subscription service.* It allows you to administer as many Lectica First assessments as you’d like, any time you’d like. It’s priced to make it possible for your organization to pre-screen every candidate (up through mid-level management) before you look at a single resume or call a single reference. And we’ve built in several upgrade options, so you can easily obtain additional information about the candidates that capture your interest.
The current state of recruitment assessment
“Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills” (Hunter, Schmidt, & Judiesch, 1990).
Most conventional workplace assessments measure either ability (knowledge & skill) or perspective (opinion or perception). These assessments examine factors like literacy, numeracy, role-specific competencies, leadership traits, personality, and cultural fit, and are generally delivered through interviews, multiple choice tests, or likert-style surveys.
Lectical Assessments are tests of mental ability (or mental skill). Among recruitment tools, high-quality tests of mental ability have the highest predictive validity, hands down: the latest meta-analytic research shows that they are by far the best predictors of recruitment success.
Personality tests come in a distant second. In their meta-analysis of the literature, Tett, Jackson, and Rothstein (1991) reported an overall correlation between personality and job performance of .24 (with conscientiousness the best predictor by a wide margin). Translated, this means that personality traits predict only about 6% of the variance in job performance. These numbers do not appear to have been challenged by more recent research (Johnson, 2001).
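The jump from a correlation of .24 to "about 6%" comes from squaring the correlation coefficient: R² gives the proportion of outcome variance a predictor explains. A minimal sketch of that arithmetic (the `variance_explained` helper is ours, not part of any cited study):

```python
def variance_explained(r: float) -> float:
    """Proportion of outcome variance explained by a predictor
    whose correlation with the outcome is r (i.e., R squared)."""
    return r ** 2

# Tett, Jackson, & Rothstein (1991): personality vs. job performance
print(round(variance_explained(0.24) * 100, 1))  # -> 5.8 (about 6%)

# National Leadership Study: Lectical score vs. advancement
print(round(variance_explained(0.53) * 100, 1))  # -> 28.1 (about 28%)
```

The same conversion underlies the R² figures quoted later in this article (e.g., R = .53 corresponding to R² = .28).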
Predictive validity of various types of assessments used in recruitment
The following figure shows average predictive validities for various forms of assessment used in recruitment contexts. The percentages indicate how much of a role each form of assessment plays in predicting performance, that is, its predictive power. When deciding which assessments to use in recruitment, the goal is to achieve the greatest possible predictive power with the fewest assessments.
In the figure below, assessments are color-coded to indicate which are focused on mental (cognitive) skills, behavior (past or present), or personality traits. It is clear that tests of mental skills stand out as the best predictors.
Why use Lectical Assessments for recruitment?
Lectical Assessments are “next generation” assessments of mental ability, made possible through a novel synthesis of developmental theory, primary research, and technology. Until now, multiple-choice-style ability tests have been the most affordable option for employers. But despite being far more predictive than other types of tests, these tests suffer from important limitations. Lectical Assessments address these limitations. For details, see the side-by-side comparison of Lectica First tests with conventional tests, below.
| | Lectica First assessments | Conventional aptitude tests |
|---|---|---|
| Accuracy | Level of reliability (.95–.97) makes them accurate enough for high-stakes decision-making. (Interpreting reliability statistics) | Varies greatly. The best aptitude tests have levels of reliability in the .95 range. Many recruitment tests have much lower levels. |
| Time investment | Lectical Assessments are not timed. They usually take 45–60 minutes, depending on the individual test-taker. | Varies greatly. For acceptable accuracy, tests must have many items and may take hours to administer. |
| Objectivity | Scores are objective (computer scoring is blind to differences in sex, body weight, ethnicity, etc.). | Scores on multiple-choice tests are objective. Scores on interview-based tests are subject to several sources of bias. |
| Fit to role: complexity | Lectica employs sophisticated developmental tools and technologies to efficiently determine the relation between the complexity of role requirements and the level of mental skill required to meet those requirements. | Lectica’s approach is not directly comparable to other available approaches. |
| Fit to role: relevance | Lectical Assessments are readily customized to fit particular jobs, and directly measure what matters most: whether candidates’ actual workplace reasoning skills are a good fit for a particular job. | Aptitude tests measure people’s ability to select correct answers to abstract problems, in the hope that these answers will predict the quality of a candidate’s workplace reasoning skills. |
| Predictive validity | In research so far, they predict advancement (uncorrected R = .53**, R² = .28; National Leadership Study). | The aptitude (IQ) tests used in published research predict performance (uncorrected R = .45 to .54, R² = .20 to .29). |
| Cheating | The written-response format makes cheating virtually impossible when assessments are taken under observation, and very difficult when taken without observation. | Cheating is relatively easy, and rates can be quite high. |
| Formative value | High. Lectica First assessments can be upgraded after hiring, then used to inform employee development plans. | None. Aptitude is treated as a fixed attribute, so there is no room for growth. |
| Continuous improvement | Our assessments are developed with a 21st-century learning technology that allows us to continuously improve the predictive validity of Lectica First assessments. | Conventional aptitude tests are built with a 20th-century technology that does not easily lend itself to continuous improvement. |
* CLAS is not yet fully calibrated for scores above 11.5 on our scale. Scores at this level are more often seen in upper- and senior-level managers and executives. For this reason, we do not recommend using Lectica First for recruitment above mid-level management.
**The US Department of Labor’s highest category of validity, labeled “Very Beneficial,” requires regression coefficients of .35 or higher (R > .34).
Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.
Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The predictive validity of assessment centers in German-speaking regions. Journal of Personnel Psychology, 10(2), 61-69.
Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.
Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.
Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.
Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).
McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.
McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.
Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.
Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.
Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.
Zeidner, M., Matthews, G., & Roberts, R. D. (2004). Emotional intelligence in the workplace: A critical review. Applied Psychology: An International Review, 53(3), 371-399.