The world's best recruitment assessments—unlimited, auto-scored, affordable, relevant, and easy
Lectical Assessments have been used to support senior and executive recruitment for over 10 years, but the expense of human scoring has prohibited their use at scale. I'm DELIGHTED to report that this is no longer the case. Thanks to CLAS—our electronic developmental scoring system—this fall we plan to deliver customized assessments of workplace reasoning with real-time scoring. We're calling this service LecticaFirst.
LecticaFirst is a subscription service.* It allows you to administer as many LecticaFirst assessments as you'd like, any time you'd like. It's priced to make it possible for your organization to pre-screen every candidate (up through mid-level management) before you look at a single resume or call a single reference. And we've built in several upgrade options, so you can easily obtain additional information about the candidates who capture your interest.
The current state of recruitment assessment
"Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills" (Hunter, Schmidt, & Judiesch, 1990).
Most conventional workplace assessments focus on one of two broad constructs—aptitude or personality. These assessments examine factors like literacy, numeracy, role-specific competencies, leadership traits, and cultural fit, and are generally delivered through interviews, multiple-choice tests, or Likert-style surveys. Emotional intelligence is also sometimes measured, but thus far has not produced results that can compete with aptitude tests (Zeidner, Matthews, & Roberts, 2004).
Like Lectical Assessments, aptitude tests are tests of mental ability (or mental skill). High-quality tests of mental ability have the highest predictive validity for recruitment purposes, hands down. Hunter and Hunter (1984), in their systematic review of the literature, found an effective range of predictive validity for aptitude tests of .45 to .54. Translated, this means that about 20% to 29% of success on the job was predicted by mental ability. These numbers do not appear to have changed appreciably since Hunter and Hunter's 1984 review.
Personality tests come in a distant second. In their meta-analysis of the literature, Tett, Jackson, and Rothstein (1991) reported an overall relation between personality and job performance of .24 (with conscientiousness as the best predictor by a wide margin). Translated, this means that only about 6% of job performance is predicted by personality traits. These numbers do not appear to have been challenged in more recent research (Johnson, 2001).
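The "translated" step in the two paragraphs above is simply squaring the correlation: a validity coefficient r explains r² of the variance in job performance. A minimal sketch in Python, using only the coefficients already cited above:

```python
# Variance explained is the square of the validity coefficient (r).
# The coefficients below come from the studies cited in the text.
validities = {
    "aptitude, low end (Hunter & Hunter, 1984)": 0.45,
    "aptitude, high end (Hunter & Hunter, 1984)": 0.54,
    "personality (Tett, Jackson, & Rothstein, 1991)": 0.24,
}

for name, r in validities.items():
    print(f"{name}: r = {r:.2f}, variance explained = {r ** 2:.0%}")
# aptitude spans roughly 20%-29%; personality explains about 6%
```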
Predictive validity of various types of assessments used in recruitment
The following table shows average predictive validities for various forms of assessment used in recruitment contexts. The column "variance explained" is an indicator of how much of a role a particular form of assessment plays in predicting performance—its predictive power. When deciding which assessments to use in recruitment, the goal is to achieve the greatest possible predictive power with the fewest assessments. That's why I've included the last column, "variance explained (with GMA)." It shows what happens to the variance explained when an assessment of General Mental Ability is combined with the form of assessment in a given row. The best combinations shown here are GMA and work sample tests, GMA and integrity, and GMA and conscientiousness.
| Form of assessment | Source | Predictive validity | Variance explained | Variance explained (with GMA) |
|---|---|---|---|---|
| Complexity of workplace reasoning | Dawson & Stein, 2004; Stein, Dawson, Van Rossum, Hill, & Rothaizer, 2013 | .53 | 28% | n/a |
| Aptitude (General Mental Ability, GMA) | Hunter, 1980; Schmidt & Hunter, 1998 | .51 | 26% | n/a |
| Work sample tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .54 | 29% | 40% |
| Integrity | Ones, Viswesvaran, & Schmidt, 1993; Schmidt & Hunter, 1998 | .41 | 17% | 42% |
| Conscientiousness | Barrick & Mount, 1995; Schmidt & Hunter, 1998 | .31 | 10% | 36% |
| Employment interviews (structured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .51 | 26% | 39% |
| Employment interviews (unstructured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .38 | 14% | 30% |
| Job knowledge tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .48 | 23% | 33% |
| Job tryout procedure | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .44 | 19% | 33% |
| Peer ratings | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .49 | 24% | 33% |
| Training & experience: behavioral consistency method | McDaniel, Schmidt, & Hunter, 1988a, 1988b; Schmidt & Hunter, 1998; Schmidt, Ones, & Hunter, 1992 | .45 | 20% | 33% |
| Reference checks | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .26 | 7% | 32% |
| Job experience (years) | Hunter, 1980; McDaniel, Schmidt, & Hunter, 1988b; Schmidt & Hunter, 1998 | .18 | 3% | 29% |
| Biographical data measures | Supervisory Profile Record Biodata Scale: Rothstein, Schmidt, Erwin, Owens, & Sparks, 1990; Schmidt & Hunter, 1998 | .35 | 12% | 27% |
| Assessment centers | Gaugler, Rosenthal, Thornton, & Bentson, 1987; Schmidt & Hunter, 1998; Becker, Höft, Holzenkamp, & Spinath, 2011 | .37 | 14% | 28% |
| EQ | Zeidner, Matthews, & Roberts, 2004 | .24 | 6% | n/a |
| 360 assessments | Beehr, Ivanitskaya, Hansen, Erofeev, & Gudanowski, 2001 | .24 | 6% | n/a |
| Training & experience: point method | McDaniel, Schmidt, & Hunter, 1988a; Schmidt & Hunter, 1998 | .11 | 1% | 27% |
| Years of education | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .10 | 1% | 27% |
| Interests | Schmidt & Hunter, 1998 | .10 | 1% | 27% |

Note: Arthur, Day, McNelly, & Edens (2003) found a predictive validity of .45 for assessment centers that included mental skills assessments.
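As a rough check on the last column: the "variance explained (with GMA)" figures are consistent with squaring the combined (multiple-predictor) validity coefficients reported by Schmidt & Hunter (1998)—about .63 for GMA plus work samples, .65 for GMA plus integrity, and .60 for GMA plus conscientiousness. A hedged sketch in Python, assuming that relationship:

```python
# Combined validities for two-predictor composites, as reported in
# Schmidt & Hunter (1998); squaring them reproduces the table's last column.
combined = {
    "GMA + work sample tests": 0.63,
    "GMA + integrity": 0.65,
    "GMA + conscientiousness": 0.60,
}

for name, r in combined.items():
    print(f"{name}: R = {r:.2f}, variance explained = {r ** 2:.0%}")

# Going the other way: infer the implied multiple R from a table entry.
implied_r = 0.32 ** 0.5  # 32% for GMA + reference checks
print(f"implied R for GMA + reference checks: {implied_r:.2f}")
```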
The figure below shows the predictive power information from this table in graphical form. Assessments are color coded to indicate which are focused on mental (cognitive) skills, behavior (past or present), or personality traits. It is clear that tests of mental skills stand out as the best predictors.
Why use Lectical Assessments for recruitment?
Lectical Assessments are "next generation" assessments, made possible through a novel synthesis of developmental theory, primary research, and technology. Until now, multiple-choice-style aptitude tests have been the most affordable option for employers. But despite being more predictive than personality tests, aptitude tests still suffer from important limitations. Lectical Assessments address these limitations. For details, take a look at the side-by-side comparison of LecticaFirst tests with conventional tests, below.
| | Lectical Assessments | Conventional aptitude tests |
|---|---|---|
| Accuracy | Level of reliability (.95–.97) makes them accurate enough for high-stakes decision-making. (Interpreting reliability statistics) | Varies greatly. The best aptitude tests have levels of reliability in the .95 range. Many recruitment tests have much lower levels. |
| Time investment | Lectical Assessments are not timed. They usually take 45–60 minutes, depending on the individual test-taker. | Varies greatly. For acceptable accuracy, tests must have many items and may take hours to administer. |
| Objectivity | Scores are objective (computer scoring is blind to differences in sex, body weight, ethnicity, etc.). | Scores on multiple-choice tests are objective. Scores on interview-based tests are subject to several sources of bias. |
| Expense | Highly competitive subscription, from $6–$10 per existing employee annually. | Varies greatly. |
| Fit to role: complexity | Lectica employs sophisticated developmental tools and technologies to efficiently determine the relation between role requirements and the level of reasoning skill required to meet those requirements. | Lectica's approach is not directly comparable to other available approaches. |
| Fit to role: relevance | Lectical Assessments are readily customized to fit particular jobs, and are direct measures of what's most important—whether or not candidates' actual workplace reasoning skills are a good fit for a particular job. | Aptitude tests measure people's ability to select correct answers to abstract problems. It is hoped that these answers will predict how good a candidate's workplace reasoning skills are likely to be. |
| Predictive validity | In research so far: predicts advancement (R = .53**, R² = .28), National Leadership Study. | The aptitude (IQ) tests used in published research predict performance (R = .45 to .54, R² = .20 to .29). |
| Cheating | The written-response format makes cheating virtually impossible when assessments are taken under observation, and very difficult when taken without observation. | Cheating is relatively easy and rates can be quite high. |
| Formative value | High. LecticaFirst assessments can be upgraded after hiring, then used to inform employee development plans. | None. Aptitude is treated as a fixed attribute, so there is no room for growth. |
| Continuous improvement | Our assessments are developed with a 21st-century learning technology that allows us to continuously improve the predictive validity of LecticaFirst assessments. | Conventional aptitude tests are built with a 20th-century technology that does not easily lend itself to continuous improvement. |
* CLAS is not yet fully calibrated for scores above 11.5 on our scale. Scores at this level are more often seen in upper- and senior-level managers and executives. For this reason, we do not recommend using LecticaFirst for recruitment above mid-level management.
** The US Department of Labor's highest category of validity, labeled "Very Beneficial," requires validity coefficients of .35 or higher (R ≥ .35).
Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.
Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.
Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.
Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.
Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.
Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick & A. M. Ryan (Eds.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).
McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.
McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.
Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.
Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.
Zeidner, M., Matthews, G., & Roberts, R. D. (2004). Emotional intelligence in the workplace: A critical review. Applied Psychology: An International Review, 53(3), 371-399.