Fit-to-role, clarity, & VUCA skills: Strong predictors of senior & executive recruitment success

Mental ability is by far the best predictor of recruitment success — across the board.* During the 20th century, aptitude tests were the mental ability metrics of choice — but this is the 21st century. The workplace has changed. Today, leaders don’t need skills for choosing the correct answer from a list. They need skills for coping with complex issues without simple right and wrong answers. Aptitude tests don’t measure these skills.

Today, success in senior and executive roles is best predicted by (1) the fit between the complexity of leaders’ thinking and the complexity of their roles, (2) the clarity of their thinking in real workplace contexts, and (3) their skills for functioning in VUCA (volatile, uncertain, complex, and ambiguous) conditions.

Fit-to-role is the relation between the complexity level of an individual’s reasoning and the complexity level of a given role. Good fit-to-role increases well-being, engagement, effectiveness, and productivity.

Clarity involves the degree to which an individual’s arguments are coherent and persuasive, how well their arguments are framed, and how well their ideas are connected. Individuals who think more clearly make better decisions and grow more rapidly than individuals who think less clearly.

VUCA skills are required for making good decisions in volatile, uncertain, complex, or ambiguous contexts. They are…

  • perspective coordination—determining which perspectives matter, seeking out a diversity of relevant perspectives, and bringing them together in a way that allows for the emergence of effective solutions.
  • decision-making under complexity — employing a range of decision-making tools and skills to design effective decision-making processes for complex situations.
  • contextual thinking — being predisposed to think contextually, being able to identify the contexts that are most likely to matter in a given situation and determine how these contexts relate to a particular situation.
  • collaboration — understanding the value of collaboration, being equipped with the tools and skills required for collaboration, and being able to determine the level of collaboration that’s appropriate for a particular decision-making context.

Fit-to-role

Getting fit-to-role right increases well-being, engagement, effectiveness, and productivity. Our approach to role fit pairs an assessment of the complexity of an individual’s thinking — when applied to a wicked real-world workplace scenario — with an analysis of the complexity of a particular workplace role.

The Lectical Scores in the figure on the left represent the complexity level scores awarded to eight job candidates, based on their performances on a developmental assessment of leader decision making (LDMA). The fit-to-role score tells us how well the Lectical Score fits the complexity range of a role. Here, the complexity range of the role is 1120–1140, represented by the vertical teal band. The circles represent the Lectical Scores of candidates. The size of these circles represents the range in which the candidate’s true level of ability is likely to fall.

The “sweet spot” for a new hire is generally at the bottom end of the complexity range of a role, in this case, 1120. There are two reasons for this.

  • The sweet spot is where the challenge posed by a new role is “just right” — just difficult enough to keep an employee in flow — what we call the Goldilocks zone. Placing employees in the sweet spot increases employee satisfaction, improves performance, and optimally supports learning and development.
  • An existing team is more likely to embrace candidates who are performing in the sweet spot. Sweet spot candidates are likely to welcome support and mentoring, which makes it easier to integrate them into an existing team than it is to integrate candidates performing at higher levels, who may be viewed as competitors.

In the figure above, teal circles represent candidates whose scores are in or very near the sweet spot — fit to role is excellent. Yellow circles represent individuals demonstrating marginal fit, and red circles represent individuals demonstrating poor fit.

We can use circle color to help us figure out who should advance to the next level in a recruitment process.
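To make the color coding concrete, here’s a minimal sketch in Python. The role’s complexity range (1120–1140) comes from the example above, but the 10-point margin for "marginal" fit and the candidate scores are hypothetical illustrations, not Lectica’s actual scoring rules.

```python
# Hypothetical sketch of the fit-to-role color coding described above.
# The 10-point "marginal" margin and the sample scores are assumptions.

def fit_to_role(score, role_min=1120, role_max=1140, margin=10):
    """Classify a candidate's Lectical Score against a role's complexity range."""
    if role_min <= score <= role_max:
        return "teal"    # in the range: excellent fit
    gap = role_min - score if score < role_min else score - role_max
    if gap <= margin:
        return "yellow"  # near the range: marginal fit
    return "red"         # far from the range: poor fit

for name, score in [("Candidate A", 1122), ("Candidate B", 1114), ("Candidate C", 1085)]:
    print(name, fit_to_role(score))
# Candidate A teal
# Candidate B yellow
# Candidate C red
```

In practice the circle size (the confidence band around a score) would also matter; this sketch classifies the point estimate only.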

The first cut

Based on the results shown above, it’s easy to decide who will advance to the next step in this process. Red circles mean, “This person is a poor fit to the complexity demands of this role.” Therefore, candidates with red circles should be eliminated from consideration for this role. Celia, Amar, Chilemba, and Jae-Eun just don’t fit.

However, this does not mean that these candidates should be ignored. Every one of the eliminated candidates has high or acceptable Clarity and VUCA scores. So, although they did not fit this role, each one may be a good fit for a different role in the organization.

It’s also worth noting that Jae-Eun demonstrates a level of skill — across measures — that’s relatively rare. When you identify a candidate with mental skills this good, it’s worth seeing if there is some way your organization can leverage these skills.

The second cut

The first cut left us with four candidates who met basic fit-to-role qualifications: Jewel, YiYu, Alistair, and Martin. The next step is to find out if their Clarity and VUCA scores are good enough for this role.

Below, you can see how we have interpreted the Clarity and VUCA scores for each of the remaining candidates, and made recommendations based on these interpretations. Notice that YiYu and Alistair are recommended with reservations. It will be important to take these reservations into account during next steps in the recruitment process.

What’s next?

Let’s assume that Jewel, YiYu, and Alistair move to the next step in the recruitment process. Once the number of candidates has been winnowed down to this point, it’s a good time to administer personality or culture-fit assessments, conduct team evaluations, view candidate presentations, or conduct interviews. You already know the candidates are equipped with adequate to excellent mental skills and fit-to-role. From here, it’s all about which candidate you think is likely to fit into your team.


As soon as we have it, my colleagues and I publish our reliability and validity evidence in refereed journals or conference presentations, or present it on our website. We believe in total transparency regarding the validity and reliability of all assessments employed in the workplace.

 


*Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.


By popular demand—two new self-guided courses from Lectica

Introducing LAP-1 & LAP-2 Light

For some time now, people have been asking us how they can learn at least some of what we teach in our certification courses—but without the homework! Well, we’ve taken the plunge, with two new self-guided courses.


All profits from sales support Lectica’s mission to deliver the world’s best assessments free of charge to K-12 teachers everywhere!


LAP-1 Light

In LAP-1 Light, we’ve brought together the lectures and much of the course material offered in the certification version of the course—Lectical Assessments in Practice for Coaches. You’ll take a deep dive into our learning model and learn how two of our most popular adult assessments—the LDMA (focused on leadership decision making) and the LSUA (focused on leaders’ understanding of themselves in workplace relationships)—are used to support leader development.

This course is perfect for coaches or consultants who are thinking about certifying down the road.

LEARN MORE

LAP-2 Light

In LAP-2 Light, we’re offering all of the lectures and much of the course material from LAP-2—Lectical Assessments in Practice for Recruitment Professionals. You’ll learn about Lectica’s Human Capital Value Chain, conventional recruitment practices, how to evaluate recruitment assessments, and all about Lectica’s recruitment products—including Lectica First (for front-line to mid-level recruitment) and Lectica Suite (for senior recruitment).

This course is perfect for recruitment professionals of all kinds, or for anyone who is toying with the idea of becoming accredited in the use of our recruitment tools.

LEARN MORE

Upgrades

Upgrades to our certification courses are available for both LAP-1 Light and LAP-2 Light!

 


This is a terrible way to learn

Honestly folks, we really, really, really need to get over the memorization model of learning. It’s good for spelling bees, trivia games, Jeopardy, and passing multiple choice tests. But it’s BORING if not torturous! And cramming more and more facts into our brains isn’t going to help most of us thrive in real life — especially in the 21st century.

As an employer, I don’t care how many facts are in your head or how quickly you can memorize new information. I’m looking for talent, applied expertise (not just factual or theoretical knowledge), and the following skills and attributes:

The ability to tell the difference between memorizing and understanding

I won’t delegate responsibility to employees who can’t tell the difference between memorizing and understanding. Employees who can’t make this distinction don’t know when they need to ask questions. Consequently, they repeatedly make decisions that aren’t adequately informed.

I’ve taken to asking potential employees what it feels like when they realize they’ve really understood something. Many applicants, including highly educated applicants, don’t understand the question. It’s not their fault. The problem is an educational system that’s way too focused on memorizing.

The ability to think

It’s essential that every employee in my organization is able to evaluate information, solve problems, participate actively in decision making and know the difference between an opinion and a good evidence-based argument.

A desire to listen and the skills for doing it well

We also need employees who want and know how to listen — really listen. In my organization, we don’t make decisions in a vacuum. We seek and incorporate a wide range of stakeholder perspectives. A listening disposition and listening skills are indispensable.

The ability to speak truth (constructively)

I know my organization can’t grow the way I want it to if the people around me are unwilling to share their perspectives or are unable to share them constructively. When I ask someone for an opinion, I want to hear their truth — not what they think I want to hear.

The ability to work effectively with others

This requires respect for other human beings, good interpersonal, collaborative, and conflict resolution skills, the ability to hear and respond positively to productive critique, and buckets of compassion.

Humility

Awareness of the ubiquity of human fallibility, including one’s own, and knowledge about human limitations, including the built-in mental biases that so often lead us astray.

A passion for learning (a.k.a. growth mindset)

I love working with people who are driven to increase their understanding and skills — so driven that they’re willing to feel lost at times, so driven that they’re willing to make mistakes on their way to a solution, so driven that their happiness depends on the availability of new challenges.

The desire to do good in the world

I run a nonprofit. We need employees who are motivated to do good.

Not one of these capabilities can be learned by memorizing. All of them are best learned through reflective practice — preferably 12–16 years of reflective practice (a.k.a. VCoLing) in an educational system that is not obsessed with remembering.

In case you’re thinking that maybe I’m an oddball employer, check out LinkedIn’s 2018 Workplace Learning Report and the 2016 World Economic Forum Future of Jobs Report.


Fit-to-role, well-being, & productivity

How to recruit the brain’s natural motivational cycle—the power of fit-to-role.

People learn and work better when the challenges they face in their roles are just right—when there is good fit-to-role. Improving fit-to-role requires achieving an optimal balance between an individual’s level of skill and role requirements. When employers get this balance right, they increase engagement, happiness (satisfaction), quality of communication, productivity, and even cultural health.

video version

Here’s how it works.

In the workplace, the challenges we’re expected to face should be just big enough to allow for success most of the time, but not so big that frequent failure is inevitable. My colleagues and I call this balance-point the Goldilocks zone, because it’s where the level of challenge is just right. Identifying the Goldilocks zone is important for three reasons:

First, and most obviously, it’s not good for business if people make too many mistakes.

Second, if the distance between employees’ levels of understanding and the difficulty of the challenges they face is too great, employees are less likely to understand and learn from their mistakes. This kind of gap can lead to a vicious cycle, in which, instead of improving or staying the same, performance gradually deteriorates.

Third, when a work challenge is just right, we’re more likely to enjoy ourselves—and feel motivated to work even harder. This is because challenges in the Goldilocks zone allow us to succeed just often enough to stimulate our brains to release pleasure hormones called opioids. Opioids give us a sense of satisfaction and pleasure. And they have a second effect: they also trigger the release of dopamine—the striving hormone—which motivates us to reach for the next challenge (so we can experience the satisfaction of success once again).

The dopamine-opioid cycle will repeat indefinitely in a virtuous cycle, but only when enough of our learning challenges are in the zone—not too easy and not too hard. As long as the dopamine-opioid cycle keeps cycling, we feel engaged. Engaged people are happy people—they tend to feel satisfied, competent, and motivated. [1]

People are also happier when they feel they can communicate effectively and build understanding with those around them. When organizations get fit-to-role right for every member of a team, they’re also building a team whose members are more likely to understand one another. This is because the complexity level of role requirements for different team members is likely to be very similar. So, getting fit-to-role right for one team member means building a team in which members are performing within a complexity range that makes it relatively—but not too—easy for members to understand one another. Team members are happiest when they can be confident that—most of the time and with reasonable effort—they will be able to achieve a shared understanding with other members.

A team representing a diversity of perspectives and skills, composed of individuals performing within a complexity range of 10–20 points on the Lectical Scale is likely to function optimally.

Getting fit-to-role right also ensures that line managers are slightly more complex thinkers than their direct reports. People tend to prefer leaders they can look up to, and most of us intuitively look up to people who think a little more complexly than we do. [2] When it comes to line managers, if we’re as skilled as they are, we tend to wonder why they’re leading us. If we’re more skilled than they are, we are likely to feel frustrated. And if they’re way more skilled than we are, we may not understand them fully. In other words, we’re happiest when our line managers challenge us—but not too much. (Sound familiar?)

Most people work better with line managers who perform 15–25 points higher on the Lectical Scale than they do.
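The two spacing heuristics above can be sketched as simple checks. The thresholds (a 10–20-point spread within a team, a 15–25-point manager-to-report gap) come from the text; the function names and example scores are hypothetical.

```python
# Illustrative checks for the team-composition heuristics described above.
# Thresholds come from the article; the example scores are invented.

def team_spread_ok(scores):
    """True when team members span roughly 10-20 points on the Lectical Scale."""
    spread = max(scores) - min(scores)
    return 10 <= spread <= 20

def manager_gap_ok(manager_score, report_score):
    """True when a line manager performs 15-25 points above a direct report."""
    return 15 <= manager_score - report_score <= 25

team = [1105, 1112, 1120]           # spread of 15 points
print(team_spread_ok(team))          # True
print(manager_gap_ok(1135, 1112))    # gap of 23 points: True
print(manager_gap_ok(1115, 1112))    # gap of 3 points: False
```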

Unsurprisingly, all this engagement and happiness has an impact on productivity. Individuals work more productively when they’re happily engaged. And teams work more productively when their members communicate well with one another. [3]

The moral of the story

The moral of this story is that employee happiness and organizational effectiveness are driven by the same thing—fit-to-role. We don’t have to compromise one to achieve the other. Quite the contrary. We can’t achieve either without achieving fit-to-role.

Summing up

To sum up, when we get fit-to-role right—in other words, ensure that every employee is in the zone—we support individual engagement & happiness, quality communication in teams, and leadership effectiveness. Together, these outcomes contribute to productivity and cultural health.

Getting fit-to-role right requires top-notch recruitment and people development practices, starting with the ability to measure the complexity of (1) role requirements and (2) people skills.

When my colleagues and I think about the future of recruitment and people development, we envision healthy, effective organizations characterized by engaged, happy, productive, and constantly developing employees & teams. We help organizations achieve this vision by…

  • reducing the cost of recruitment so that best practices can be employed at every level in an organization;
  • improving predictions of fit-to-role;
  • broadening the definition of fit-to-role to encompass the role, the team, and the position of a role in the organizational hierarchy; and
  • promoting the seamless integration of recruitment with employee development strategy and practice.

[1] Csikszentmihalyi, M., Flow, the psychology of happiness. (2008) Harper-Collins.

[2] Oishi, S., Koo, M., & Akimoto, S. (2015) Culture, interpersonal perceptions, and happiness in social interactions, Pers Soc Psychol Bull, 34, 307–320.

[3] Oswald, A. J., Proto, E., & Sgroi, D. (2015). Happiness and productivity. Journal of labor economics, 33, 789-822.


Statistics for all: Prediction

Why you might want to reconsider using 360s and EQ assessments to predict recruitment success


Measurements are often used to make predictions. For example, they can help predict how tall a 4-year-old is likely to be in adulthood, which students are likely to do better in an academic program, or which candidates are most likely to succeed in a particular job.

Some of the attributes we measure are strong predictors, others are weaker. For example, a child’s height at age 4 is a pretty strong predictor of adult height. Parental height is a weaker predictor. The complexity of a person’s workplace decision making, on its own, is a moderate predictor of success in the workplace. But the relation between the complexity of their workplace decision making and the complexity of their role is a strong predictor.

How do we determine the strength of a predictor? In statistics, the strength of predictions is represented by an effect size. Most effect size indicators are expressed as decimals and range from .00 to 1.00, with 1.00 representing 100% accuracy of prediction. The effect size indicator you’ll see most often is r-square. If you’ve ever been forced to take a statistics course—;)—you may remember that r represents the strength of a correlation. Before I explain r-square, let’s look at some correlation data.

The four figures below represent four different correlations, from weakest (.30) to strongest (.90). Let’s say the vertical axis (40–140) represents the level of success in college, and the horizontal axis (50–150) represents scores on one of four college entrance exams. The dots represent students. If you were trying to predict success in college, you would be wise to choose the college entrance exam that delivered an r of .90.

Why is an r of .90 preferable? Well, take a look at the next set of figures. I’ve drawn lines through the clouds of dots (students) to show regression lines. These lines represent the prediction we would make about how successful a student will be, given a particular score. It’s clear that in the case of the first figure (r = .30), this prediction is likely to be pretty inaccurate. Many students perform better or worse than predicted by the regression line. But as the correlations increase in size, prediction improves. In the case of the fourth figure (r = .90), the prediction is most accurate.

What does a .90 correlation mean in practical terms? That’s where r-square comes in. If we multiply .90 by .90 (calculate the square), we get an r-square of .81. Statisticians would say that the predictor (test score) explains 81% of the variance in college success. The 19% of the variance that’s not explained (1.00 – .81 = .19) represents the percent of the variance that is due to error (unexplained variance). Taking the square root of .19 gives the amount of error (.44).

Even when r = .90, error accounts for 19% of the variance.
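The arithmetic above is easy to reproduce. Here’s a quick Python check using the same numbers, plus the recruitment-assessment range discussed later in this article:

```python
# Effect-size arithmetic: r -> r-square -> unexplained variance -> error.
import math

r = 0.90
r_square = r ** 2                # variance explained by the predictor
unexplained = 1.0 - r_square     # variance due to error
error = math.sqrt(unexplained)   # error, back in correlation units

print(round(r_square, 2))     # 0.81
print(round(unexplained, 2))  # 0.19
print(round(error, 2))        # 0.44

# Same arithmetic for the most predictive recruitment assessments (r = .50-.54):
for r in (0.50, 0.54):
    print(f"r = {r}: explains {r ** 2:.0%}, leaves {1 - r ** 2:.0%} unexplained")
```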

Correlations of .90 are very rare in the social sciences—but even correlations this strong are associated with a significant amount of error. It’s important to keep error in mind when we use tests to make big decisions—like who gets hired or who gets to go to college. When we use tests to make decisions like these, the business or school is likely to benefit—slightly better prediction can result in much better returns. But there are always rejected individuals who would have performed well, and there are always accepted individuals who will perform badly.

For references, see: The complexity of national leaders’ thinking: How does it measure up?

Let’s get realistic. As I mentioned earlier, correlations of .90 are very rare. In recruitment contexts, the most predictive assessments (shown above) correlate with hire success in the range of .50–.54, predicting 25%–29% of the variance in hire success. That leaves a whopping 71%–75% of the variance unexplained, which is why the best hiring processes not only use the most predictive assessments, but also consider multiple predictive criteria.

On the other end of the spectrum, there are several common forms of assessment that explain less than 9% of the variance in recruitment success. Their correlations with recruitment success are lower than .30. Yet some of these, like 360s, reference checks, and EQ, are wildly popular. In the context of hiring, the size of the variance explained by error in these cases (more than 91%) means there is a very big risk of being unfair to a large percentage of candidates. (I’m pretty certain assessment buyers aren’t intentionally being unfair. They probably just don’t know about effect size.)

If you’ve read my earlier article about replication, you know that the power-posing research could not be replicated. You also might be interested to learn that the correlations reported in the original research were also lower than .30. If power-posing had turned out to be a proven predictor of presentation quality, the question I’d be asking myself is, “How much effort am I willing to put into power-posing when the variance explained is lower than 9%?”

If we were talking about something other than power-posing, like reducing even a small risk that my child would die of a contagious disease, I probably wouldn’t hesitate to make a big effort. But I’m not so sure about power-posing before a presentation. Practicing my presentation or getting feedback might be a better use of my time.

Summing up (for now)

A basic understanding of prediction is worth cultivating. And it’s pretty simple. You don’t even have to do any fancy calculations. Most importantly, it can save you time and tons of wasted effort by giving you a quick way to estimate the likelihood that an activity is worth doing (or product is worth having). Heck, it can even increase fairness. What’s not to like?


My organization, Lectica, Inc., is a 501(c)3 nonprofit corporation. Part of our mission is to share what we learn with the world. One of the things we’ve learned is that many assessment buyers don’t seem to know enough about statistics to make the best choices. The Statistics for all series is designed to provide assessment buyers with the knowledge they need most to become better assessment shoppers.

Statistics for all: Replication

Statistics for all: What the heck is confidence?

Statistics for all: Estimating confidence

 


Lectica’s Human Capital Value Chain—for organizations that are serious about human development

Lectica's tools and services have powerful applications for every process in the human capital value chain. I explain how in the following video.

For links to more information see the HCVC page on Lecticalive. For references that support claims made in the video, see the post—Introducing LecticaFirst.

 


Introducing Lectica First: Front-line to mid-level recruitment assessment—on demand

The world’s best recruitment assessments—unlimited, auto-scored, affordable, relevant, and easy

Lectical Assessments have been used to support senior and executive recruitment for over 10 years, but the expense of human scoring has prohibited their use at scale. I’m delighted to report that this is no longer the case. Because of CLAS—our electronic developmental scoring system—we plan to deliver customized assessments of workplace reasoning with real-time scoring. We’re calling this service Lectica First.

Lectica First is a subscription service.* It allows you to administer as many Lectica First assessments as you’d like, any time you’d like. It’s priced to make it possible for your organization to pre-screen every candidate (up through mid-level management) before you look at a single resume or call a single reference. And we’ve built in several upgrade options, so you can easily obtain additional information about the candidates that capture your interest.

learn more about Lectica First subscriptions


The current state of recruitment assessment

“Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills” (Hunter, Schmidt, & Judiesch, 1990).

Most conventional workplace assessments measure either ability (knowledge & skill) or perspective (opinion or perception). These assessments examine factors like literacy, numeracy, role-specific competencies, leadership traits, personality, and cultural fit, and are generally delivered through interviews, multiple choice tests, or Likert-style surveys.

Lectical Assessments are tests of mental ability (or mental skill). The latest meta-analytic study of predictive validity shows that tests of mental ability are, hands down, the best predictors of recruitment success.

Personality tests come in a distant second. In their meta-analysis of the literature, Tett, Jackson, and Rothstein (1991) reported an overall relation between personality and job performance of .24 (with conscientiousness as the best predictor by a wide margin). Translated, this means that only about 6% of job performance is predicted by personality traits. These numbers do not appear to have been challenged in more recent research (Johnson, 2001).

Predictive validity of various types of assessments used in recruitment

The following figure shows average predictive validities for various forms of assessment used in recruitment contexts. The percentages indicate how much of a role a particular form of assessment plays in predicting performance—its predictive power. When deciding which assessments to use in recruitment, the goal is to achieve the greatest possible predictive power with the fewest assessments.

In the figure below, assessments are color-coded to indicate which are focused on mental (cognitive) skills, behavior (past or present), or personality traits. It is clear that tests of mental skills stand out as the best predictors.

Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.

Why use Lectical Assessments for recruitment?

Lectical Assessments are “next generation” assessments of mental ability, made possible through a novel synthesis of developmental theory, primary research, and technology. Until now, multiple-choice ability tests have been the most affordable option for employers. But despite being far more predictive than other types of tests, they suffer from important limitations. Lectical Assessments address these limitations. For details, take a look at the side-by-side comparison of LecticaFirst tests with conventional tests, below.

  • Accuracy — LecticaFirst: Level of reliability (.95–.97) makes them accurate enough for high-stakes decision-making (see Interpreting reliability statistics). Aptitude: Varies greatly. The best aptitude tests have levels of reliability in the .95 range; many recruitment tests have much lower levels.
  • Time investment — LecticaFirst: Lectical Assessments are not timed. They usually take from 45–60 minutes, depending on the individual test-taker. Aptitude: Varies greatly. For acceptable accuracy, tests must have many items and may take hours to administer.
  • Objectivity — LecticaFirst: Scores are objective (computer scoring is blind to differences in sex, body weight, ethnicity, etc.). Aptitude: Scores on multiple choice tests are objective; scores on interview-based tests are subject to several sources of bias.
  • Expense — LecticaFirst: Highly affordable. Aptitude: Expensive.
  • Fit to role: complexity — LecticaFirst: Lectica employs sophisticated developmental tools and technologies to efficiently determine the relation between the complexity of role requirements and the level of mental skill required to meet those requirements. Aptitude: Lectica’s approach is not directly comparable to other available approaches.
  • Fit to role: relevance — LecticaFirst: Lectical Assessments are readily customized to fit particular jobs, and are direct measures of what’s most important—whether or not candidates’ actual workplace reasoning skills are a good fit for a particular job. Aptitude: Aptitude tests measure people’s ability to select correct answers to abstract problems, in the hope that these answers will predict how good a candidate’s workplace reasoning skills are likely to be.
  • Predictive validity — LecticaFirst: In research so far, predicts advancement (uncorrected R = .53**, R2 = .28), National Leadership Study. Aptitude: The aptitude (IQ) tests used in published research predict performance (uncorrected R = .45 to .54, R2 = .20 to .29).
  • Cheating — LecticaFirst: The written response format makes cheating virtually impossible when assessments are taken under observation, and very difficult when taken without observation. Aptitude: Cheating is relatively easy, and rates can be quite high.
  • Formative value — LecticaFirst: High. Lectica First assessments can be upgraded after hiring, then used to inform employee development plans. Aptitude: None. Aptitude is a fixed attribute, so there is no room for growth.
  • Continuous improvement — LecticaFirst: Our assessments are developed with a 21st century learning technology that allows us to continuously improve their predictive validity. Aptitude: Conventional aptitude tests are built with a 20th century technology that does not easily lend itself to continuous improvement.

* CLAS is not yet fully calibrated for scores above 11.5 on our scale. Scores at this level are more often seen in upper- and senior-level managers and executives. For this reason, we do not recommend using Lectica First for recruitment above mid-level management.

**The US Department of Labor’s highest category of validity, labeled “Very Beneficial,” requires regression coefficients of .35 or higher (R > .34).

References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alterna­tive predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. R. Barrick, Murray R. (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). Working paper: The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.

Zeidner, M., Matthews, G., & Roberts, R. D. (2004). Emotional intelligence in the workplace: A critical review. Applied Psychology: An International Review, 53(3), 371-399.
