Fit-to-role, well-being, & productivity

How to recruit the brain’s natural motivational cycle—the power of fit-to-role.

People learn and work better when the challenges they face in their roles are just right—when there is good fit-to-role. Improving fit-to-role requires achieving an optimal balance between an individual’s level of skill and role requirements. When employers get this balance right, they increase engagement, happiness (satisfaction), quality of communication, productivity, and even cultural health.


Here’s how it works.

In the workplace, the challenges we’re expected to face should be just big enough to allow for success most of the time, but not so big that frequent failure is inevitable. My colleagues and I call this balance point the Goldilocks zone, because it’s where the level of challenge is just right. Identifying the Goldilocks zone is important for three reasons:

First, and most obviously, it’s not good for business if people make too many mistakes.

Second, if the distance between employees’ levels of understanding and the difficulty of the challenges they face is too great, employees are less likely to understand and learn from their mistakes. This kind of gap can lead to a vicious cycle, in which, instead of improving or staying the same, performance gradually deteriorates.

Third, when a work challenge is just right, we’re more likely to enjoy ourselves—and feel motivated to work even harder. This is because challenges in the Goldilocks zone allow us to succeed just often enough to stimulate our brains to release pleasure chemicals called endogenous opioids. Opioids give us a sense of satisfaction and pleasure. And they have a second effect: they also trigger the release of dopamine—the striving neurotransmitter—which motivates us to reach for the next challenge (so we can experience the satisfaction of success once again).

When enough of our learning challenges are in the zone—not too easy and not too hard—the dopamine-opioid cycle repeats indefinitely, becoming a virtuous cycle. As long as the cycle keeps turning, we feel engaged. Engaged people are happy people—they tend to feel satisfied, competent, and motivated. [1]

People are also happier when they feel they can communicate effectively and build understanding with those around them. When organizations get fit-to-role right for every member of a team, they’re also building a team with members who are more likely to understand one another. This is because the complexity level of role requirements for different team members is likely to be very similar. So, getting fit-to-role right for one team member means building a team in which members are performing within a complexity range that makes it relatively—but not too—easy for members to understand one another. Team members are happiest when they can be confident that—most of the time and with reasonable effort—they will be able to achieve a shared understanding with other members.

A team representing a diversity of perspectives and skills, composed of individuals performing within a complexity range of 10–20 points on the Lectical Scale, is likely to function optimally.

Getting fit-to-role right also ensures that line managers are slightly more complex thinkers than their direct reports. People tend to prefer leaders they can look up to, and most of us intuitively look up to people who think a little more complexly than we do. [2] When it comes to line managers, if we’re as skilled as they are, we tend to wonder why they’re leading us. If we’re more skilled than they are, we’re likely to feel frustrated. And if they’re way more skilled than we are, we may not understand them fully. In other words, we’re happiest when our line managers challenge us—but not too much. (Sound familiar?)

Most people work better with line managers who perform 15–25 points higher on the Lectical Scale than they do.
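The two rules of thumb above (team members within a 10–20 point complexity range of one another, and line managers 15–25 points above their direct reports) can be expressed as a simple check. This is a minimal sketch; the function names and the hard-coded thresholds are illustrative only, not part of any Lectica tool:

```python
# Hypothetical sketch of the two fit-to-role heuristics described above.
# Scores are complexity scores on the Lectical Scale (e.g., 1053, 1137).

def team_spread_ok(scores):
    """A team works best when members' scores span a 10-20 point range:
    close enough for mutual understanding, spread enough for a
    diversity of perspectives."""
    spread = max(scores) - min(scores)
    return 10 <= spread <= 20

def manager_gap_ok(manager_score, report_score):
    """Most people work best with line managers who score 15-25 points
    higher than they do."""
    gap = manager_score - report_score
    return 15 <= gap <= 25
```

For example, under these thresholds a direct report scoring 1050 fits best with a line manager somewhere between 1065 and 1075.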

Unsurprisingly, all this engagement and happiness has an impact on productivity. Individuals work more productively when they’re happily engaged. And teams work more productively when their members communicate well with one another. [3]

The moral of the story

The moral of this story is that employee happiness and organizational effectiveness are driven by the same thing—fit-to-role. We don’t have to compromise one to achieve the other. Quite the contrary. We can’t achieve either without achieving fit-to-role.

Summing up

To sum up, when we get fit-to-role right—in other words, ensure that every employee is in the zone—we support individual engagement & happiness, quality communication in teams, and leadership effectiveness. Together, these outcomes contribute to productivity and cultural health.

Getting fit-to-role right requires top-notch recruitment and people development practices, starting with the ability to measure the complexity of (1) role requirements and (2) people’s skills.

When my colleagues and I think about the future of recruitment and people development, we envision healthy, effective organizations characterized by engaged, happy, productive, and constantly developing employees & teams. We help organizations achieve this vision by…

  • reducing the cost of recruitment so that best practices can be employed at every level in an organization;
  • improving predictions of fit-to-role;
  • broadening the definition of fit-to-role to encompass the role, the team, and the position of a role in the organizational hierarchy; and
  • promoting the seamless integration of recruitment with employee development strategy and practice.

[1] Csikszentmihalyi, M. (2008). Flow: The psychology of happiness. Harper-Collins.

[2] Oishi, S., Koo, M., & Akimoto, S. (2008). Culture, interpersonal perceptions, and happiness in social interactions. Personality and Social Psychology Bulletin, 34, 307–320.

[3] Oswald, A. J., Proto, E., & Sgroi, D. (2015). Happiness and productivity. Journal of Labor Economics, 33, 789–822.


President Trump passed the Montreal Cognitive Assessment

Shortly after the President passed the Montreal Cognitive Assessment, a reader emailed with two questions:

  1. Does this mean that the President has the cognitive capacity required of a national leader?
  2. How does a score on this test relate to the complexity level scores you have been describing in recent posts?

Question 1

A high score on the Montreal Cognitive Assessment does not mean that the President has the cognitive capacity required of a national leader. This test result simply means there is a high probability that the President is not suffering from mild cognitive impairment. (The test has been shown to detect existing cognitive impairment 88% of the time [1].) In order to determine whether the President has the mental capacity to understand the complex issues he faces as a national leader, we need to know how complexly he thinks about those issues.

Question 2

The answer to the second question is that there is little relation between scores on the Montreal Cognitive Assessment and the complexity level of a person’s thinking. A test like the Montreal Cognitive Assessment does not require the kind of thinking a President needs to understand highly complex issues like climate change or the economy. Teenagers can easily pass this test.

Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U.S. presidents who preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)

[1] JAMA Intern Med. 2015 Sep;175(9):1450-8. doi: 10.1001/jamainternmed.2015.2152. Cognitive Tests to Detect Dementia: A Systematic Review and Meta-analysis. Tsoi KK, Chan JY, Hirai HW, Wong SY, Kwok TC.

 


President Trump on climate change

How complex are the ideas about climate change expressed in President Trump’s tweets? The answer is, they are even less complex than ideas he has expressed about intelligence, international trade, and immigration—landing squarely in level 10. (See the benchmarks, below, to learn more about what it means to perform in level 10.)

The President’s climate change tweets

It snowed over 4 inches this past weekend in New York City. It is still October. So much for Global Warming.
2:43 PM – Nov 1, 2011

 

It’s freezing in New York—where the hell is global warming?
2:37 PM – Apr 23, 2013

 

Record low temperatures and massive amounts of snow. Where the hell is GLOBAL WARMING?
11:23 PM – Feb 14, 2015

 

In the East, it could be the COLDEST New Year’s Eve on record. Perhaps we could use a little bit of that good old Global Warming…!
7:01 PM – Dec 28, 2017

Analysis

In all of these tweets President Trump appears to assume that unusually cold weather is proof that climate change (a.k.a. global warming) is not real. The argument is an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the temperature right now is unusually low, then global warming isn’t happening.” Moreover, in these comments the President relies exclusively on immediate (proximal) evidence: “It’s unusually cold outside.” We see the same use of immediate evidence when climate change believers claim that a warm weather event is proof that climate change is real.

Let’s use some examples of students’ reasoning to get a fix on the complexity level of President Trump’s tweets. Here is a statement from an 11th grade student who took our assessment of environmental stewardship (complexity score = 1025):

“I do think that humans are adding [gases] to the air, causing climate change, because of everything around us. The polar ice caps are melting.”

The argument is an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the polar ice caps are melting, then global warming is real.” There is a difference between this argument and President Trump’s argument, however. The student is describing a trend rather than a single event.

Here is an argument made by an advanced 5th grader (complexity score = 1013):

“I think that fumes, coals, and gasses we use for things such as cars…cause global warming. I think this because all the heat and smoke is making the years warmer and warmer.”

This argument is also an example of simple level 10, linear causal logic that can be represented as an “if, then” statement: “If the years are getting warmer and warmer, then global warming is real.” Again, the difference between this argument and President Trump’s argument is that the student is describing a trend rather than a single event.

I offer one more example, this time of a 12th grade student making a somewhat more complex argument (complexity score = 1035).

“The temperature has increased over the years and studies show that the ice is melting in the north and south pole, so, yes humans are causing climate change.”

This argument is also an example of level 10, linear causal logic that can be represented as an “if, then” statement: “If the temperature has increased and studies show that the ice at the north and south poles is melting, then humans are causing climate change.” But in this case, the student has mentioned two trends (warming and melting) and explicitly uses scientific evidence to support her conclusion.

Based on these comparisons, it seems clear that President Trump’s Tweets about climate change represent reasoning at the lower end of level 10.


Reasoning in level 11

Individuals performing in level 11 recognize that climate is an enormously complex phenomenon that involves many interacting variables. They understand that any single event or trend may be part of the bigger story, but is not, on its own, evidence for or against climate change.

Summing up

It concerns me greatly that someone who does not demonstrate any understanding of the complexity of climate is in a position to make major decisions related to climate change.


Benchmarks for complexity scores

  • Most high school graduates perform somewhere in the middle of level 10.
  • The average complexity score of American adults is in the upper end of level 10, somewhere in the range of 1050–1080.
  • The average complexity score for senior leaders in large corporations or government institutions is in the upper end of level 11, in the range of 1150–1180.
  • The average complexity score (reported in our National Leaders Study) for the three U.S. presidents who preceded President Trump was 1137.
  • The average complexity score (reported in our National Leaders Study) for President Trump was 1053.
  • The difference between 1053 and 1137 generally represents a decade or more of sustained learning. (If you’re a new reader and don’t yet know what a complexity level is, check out the National Leaders Series introductory article.)

 


National leaders’ thinking: How does it measure up?

Special thanks to my Australian colleague, Aiden Thornton, for his editorial and research assistance.

This is the first in a series of articles on the complexity of national leaders’ thinking. These articles will report results from research conducted with CLAS, our newly validated electronic developmental scoring system. CLAS will be used to score these leaders’ responses to questions posed by prominent journalists.

In this first article, I’ll be providing some of the context for this project, including information about how my colleagues and I think about complexity and its role in leadership. I’ve embedded lots of links to additional material for readers who have questions about our 100+ year-old research tradition, Lectica’s (the nonprofit that owns me) assessments, and other research we’ve conducted with these assessments.

Context and research questions

Lectica creates diagnostic assessments for learning that support the development of mental skills required for working with complexity. We make these learning tools for both adults and children. Our K-12 initiative—the DiscoTest Initiative—is dedicated to bringing these tools to individual K-12 teachers everywhere, free of charge. Our adult assessments are used by organizations in recruitment and training, and by colleges and universities in admissions and program evaluation.

All Lectical Assessments measure the complexity level (aka, level of vertical development) of people’s thinking in particular knowledge areas. A complexity level score on a Lectical Assessment tells us the highest level of complexity—in a problem, issue, or task—an individual is likely to be able to work with effectively.

On several occasions over the last 20 years, my colleagues and I have been asked to evaluate the complexity of national leaders’ reasoning skills. Our response has been, “We will, but only when we can score electronically—without the risk of human bias.” That time has come. Now that our electronic developmental scoring system, CLAS, has demonstrated a level of reliability and precision that is acceptable for this purpose, we’re ready to take a look.

Evaluating the complexity of national leaders’ thinking is a challenging task for several reasons. First, it’s virtually impossible to find examples of many of these leaders’ “best work.” Their speeches are generally written for them, and speech writers usually try to keep the complexity level of these speeches low, aiming for a reading level in the 7th to 9th grade range. (Reading level is not the same thing as complexity level, but like most tests of capability, it correlates moderately with complexity level.) Second, even when national leaders respond to unscripted questions from journalists, they work hard to use language that is accessible to a wide audience. And finally, it’s difficult to identify a level playing field—one in which all leaders have the same opportunity to demonstrate the complexity of their thinking.

Given these obstacles, there’s no point in attempting to evaluate the actual thinking capabilities of national leaders. In other words, we won’t be claiming that the scores awarded by CLAS represent the true complexity level of leaders’ thinking. Instead, we will address the following questions:

  1. When asked by prominent journalists to explain their positions on complex issues, what is the average complexity level of national leaders’ responses?
  2. How does the complexity level of national leaders’ responses relate to the complexity of the issues they discuss?

Thinking complexity and leader success

At this point, you may be wondering, “What is thinking complexity and why is it important?” A comprehensive response to this question isn’t possible in a short article like this one, but I can provide a basic description of complexity as we see it at Lectica, and provide some examples that highlight its importance.

All issues faced by leaders are associated with a certain amount of built-in complexity. For example:

  1. The sheer number of factors that must be taken into account.
  2. Short and long-term implications/repercussions. (Will a quick fix cause problems downstream, such as global unrest or catastrophic weather?)
  3. The number and diversity of stakeholders/interest groups. (What is the best way to balance the needs of individuals, families, businesses, communities, states, nations, and the world?)
  4. The length of time it will take to implement a decision. (Will it take months, years, decades? Longer projects are inherently more complex because of changes over time.)
  5. Formal and informal rules/laws that place limits on the deliberative process. (For example, legislative and judicial processes are often designed to limit the decision making powers of presidents or prime ministers. This means that leaders must work across systems to develop decisions, which further increases the complexity of decision making.)

Over the course of childhood and adulthood, the complexity of our thinking develops through up to 13 skill levels (0–12). Each new level builds upon the previous level. The figure above shows four adult complexity “zones”: advanced linear thinking (second zone of level 10), early systems thinking (first zone of level 11), advanced systems thinking (second zone of level 11), and early principles thinking (first zone of level 12). In advanced linear thinking, reasoning is often characterized as “black and white.” Individuals performing in this zone cope best with problems that have clear right or wrong answers. It is only once individuals enter early systems thinking that they begin to work effectively with highly complex problems that do not have clear right or wrong answers.
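As a rough illustration of how scores relate to these zones, here is a toy mapping. It assumes each level spans 100 points (so level 10 covers 1000–1099) with the zone boundary at the midpoint; that simplification is my own, not Lectica’s official scoring scheme:

```python
# Toy mapping from a complexity score to the four adult zones named
# above. ASSUMPTION: each level spans 100 points and splits into two
# zones at the midpoint -- an illustrative simplification only.

ADULT_ZONES = {
    (10, "second"): "advanced linear thinking",
    (11, "first"): "early systems thinking",
    (11, "second"): "advanced systems thinking",
    (12, "first"): "early principles thinking",
}

def adult_zone(score):
    level, offset = divmod(score, 100)   # e.g., 1137 -> level 11, offset 37
    half = "second" if offset >= 50 else "first"
    return ADULT_ZONES.get((level, half), f"level {level}, {half} zone")
```

Under this mapping, a score of 1053 lands in advanced linear thinking and 1137 lands in early systems thinking, which is consistent with the benchmark figures quoted elsewhere in this series.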

Leadership at the national level requires exceptional skills for managing complexity, including the ability to deal with the most complex problems faced by humanity (Helbing, 2013). Needless to say, a national leader regularly faces issues at or above early principles thinking.

Complexity level and leadership—the evidence

In the workplace, the hiring managers who decide which individuals will be put in leadership roles are likely to choose leaders whose thinking complexity is a good match for their roles. Even if they have never heard the term complexity level, hiring managers generally understand, at least implicitly, that leaders who can work with the complexity inherent in the issues associated with their roles are likely to make better decisions than leaders whose thinking is less complex.

There is a strong relation between the complexity of leadership roles and the complexity level of leaders’ reasoning. In general, more complex thinkers fill more complex roles. The figure below shows how lower-level and senior leaders’ complexity scores are distributed in Lectica’s database. Most senior leaders’ complexity scores are in or above advanced systems thinking, while those of lower-level leaders are primarily in early systems thinking.

The strong relation between the complexity of leaders’ thinking and the complexity of their roles can also be seen in the recruitment literature. To be clear, complexity is not the only aspect of leadership decision making that affects leaders’ ability to deal effectively with complex issues. However, a large body of research, spanning over 50 years, suggests that the top predictors of workplace leader recruitment success are those that most strongly relate to thinking skills, including complexity level.

The figure below shows the predictive power of several forms of assessment employed in making hiring and promotion decisions. The cognitive assessments have been shown to have the highest predictive power. In other words, assessments of thinking skills do a better job predicting which candidates will be successful in a given role than other forms of assessment.

Predictive power graph

The match between the complexity of national leaders’ thinking and the complexity level of the problems faced in their roles is important. While we will not be able to assess the actual complexity level of the thinking of national leaders, we will be able to examine the complexity of their responses to questions posed by prominent journalists. In upcoming articles, we’ll be sharing our findings and discussing their implications.

Coming next…

In the second article in this series, we begin our examination of the complexity of national leaders’ thinking by scoring interview responses from four US Presidents—Bill Clinton, George W. Bush, Barack Obama, and Donald Trump.

 


Appendix

Predictive validity of various types of assessments used in recruitment

The following table shows average predictive validities for various forms of assessment used in recruitment contexts. The column “variance explained” is an indicator of how much of a role a particular form of assessment plays in predicting performance—its predictive power.

Form of assessment | Source | Predictive validity | Variance explained | Variance explained (with GMA)
Complexity of workplace reasoning | Dawson & Stein, 2004; Stein, Dawson, Van Rossum, Hill, & Rothaizer, 2003 | .53 | 28% | n/a
Aptitude (General Mental Ability, GMA) | Hunter, 1980; Schmidt & Hunter, 1998 | .51 | 26% | n/a
Work sample tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .54 | 29% | 40%
Integrity | Ones, Viswesvaran, & Schmidt, 1993; Schmidt & Hunter, 1998 | .41 | 17% | 42%
Conscientiousness | Barrick & Mount, 1995; Schmidt & Hunter, 1998 | .31 | 10% | 36%
Employment interviews (structured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .51 | 26% | 39%
Employment interviews (unstructured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .38 | 14% | 30%
Job knowledge tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .48 | 23% | 33%
Job tryout procedure | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .44 | 19% | 33%
Peer ratings | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .49 | 24% | 33%
Training & experience: behavioral consistency method | McDaniel, Schmidt, & Hunter, 1988a, 1988b; Schmidt & Hunter, 1998; Schmidt, Ones, & Hunter, 1992 | .45 | 20% | 33%
Reference checks | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .26 | 7% | 32%
Job experience (years) | Hunter, 1980; McDaniel, Schmidt, & Hunter, 1988b; Schmidt & Hunter, 1998 | .18 | 3% | 29%
Biographical data measures | Supervisory Profile Record Biodata Scale: Rothstein, Schmidt, Erwin, Owens, & Sparks, 1990; Schmidt & Hunter, 1998 | .35 | 12% | 27%
Assessment centers | Gaugler, Rosenthal, Thornton, & Bentson, 1987; Schmidt & Hunter, 1998; Becker, Höft, Holzenkamp, & Spinath, 2011 | .37 | 14% | 28%
EQ | Zeidner, Matthews, & Roberts, 2004 | .24 | 6% | n/a
360 assessments | Beehr, Ivanitskaya, Hansen, Erofeev, & Gudanowski, 2001 | .24 | 6% | n/a
Training & experience: point method | McDaniel, Schmidt, & Hunter, 1988a; Schmidt & Hunter, 1998 | .11 | 1% | 27%
Years of education | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .10 | 1% | 27%
Interests | Schmidt & Hunter, 1998 | .10 | 1% | 27%

Note: Arthur, Day, McNelly, & Edens (2003) found a predictive validity of .45 for assessment centers that included mental skills assessments.
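The “variance explained” column is simply the square of the predictive validity (r²), expressed as a percentage. A quick sanity check:

```python
# Variance explained is the squared predictive validity (r^2),
# expressed as a percentage and rounded to the nearest point.

def variance_explained(r):
    return round(r * r * 100)

# Spot-checks against rows of the table above:
#   complexity of workplace reasoning: r = .53 -> 28%
#   conscientiousness:                 r = .31 -> 10%
#   reference checks:                  r = .26 ->  7%
```

(The “with GMA” column combines two predictors, so it cannot be reproduced from a single r this way.)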

References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Dawson, T. L. (2017, October 20). Using technology to advance understanding: The calibration of CLAS, an electronic developmental scoring system. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Dawson, T. L., & Thornton, A. M. A. (2017, October 18). An examination of the relationship between argumentation quality and students’ growth trajectories. Proceedings from Annual Conference of the Northeastern Educational Research Association, Trumbull, CT.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497, 51-59.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. R. Barrick, Murray R. (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.


From Piaget to Dawson: The evolution of adult developmental metrics

I've just added a new video about the evolution of adult developmental metrics to YouTube and LecticaLive. It traces the evolutionary history of Lectica's developmental model and metric.

If you are curious about the origins of our work, this video is a great place to start. If you'd like to see the reference list for this video, view it on LecticaLive.

 

 


Decision making & the collaboration continuum

Over the years, my colleagues and I have studied the decision making of thousands of leaders assessed on the Lectical Scale (our developmental scale). The collaboration continuum has emerged from this research.

Many people seem to think of decision making as either top-down or collaborative, and tend to prefer one over the other. But several thousand decision-making leaders have taught us that this is a false dichotomy. We’ve learned two things. First, there is no clear-cut division between autocratic and collaborative decision making—it’s a continuum. And second, both more autocratic and more collaborative decision making processes have legitimate applications.

As it applies to decision making, the collaboration continuum is a scale that runs from fully autocratic to consensus-based. We find it helpful to divide the continuum into 7 relatively distinct levels, as shown below:


Level | Basis for decision | Applications | Trade-offs

LESS COLLABORATION

Fully autocratic | Personal knowledge or rules; no consideration of other perspectives | Everyday operational decisions where there are clear rules and no apparent conflicts | Quick and efficient
Autocratic | Personal knowledge, with some consideration of others’ perspectives (no perspective seeking) | Operational decisions in which conflicts are already well understood and trust is high | Quick and efficient, but spends trust, so should be used with care
Consulting | Personal knowledge, with perspective seeking to help people feel heard | Operational decisions in which the perspectives of well-known stakeholders are in conflict and trust needs reinforcement | Time consuming, but can build trust if not abused
Inclusive | Personal knowledge, with perspective seeking to inform a decision | Operational or policy decisions in which the perspectives of stakeholders are required to formulate a decision | Time consuming, but improves decisions and builds engagement
Compromise-focused | Leverages stakeholder perspectives to develop a decision that gives everyone something they want | Making “deals” to which all stakeholders must agree | Time consuming, but necessary in deal-making situations
Consent-focused | Leverages stakeholder perspectives to develop a decision that everyone can consent to (even though there may be reservations) | Policy decisions in which the perspectives of stakeholders are required to formulate a decision | Can be efficient, but requires excellent facilitation skills and training for all parties
Consensus-focused | Leverages stakeholder perspectives to develop a decision that everyone can agree with | Decisions in which complete agreement is required | Requires strong relationships; useful primarily when decision-makers are equal partners

MORE COLLABORATION

As the table above shows, all 7 forms of decision making on the collaboration continuum have legitimate applications. And all can be learned at any adult developmental level. However, the most effective application of each successive form of decision making requires more developed skills. Inclusive, consent-focused, and consensus-focused decision making are particularly demanding, and generally require formal training for all participating parties.

The most developmentally advanced and accomplished leaders who have taken our assessments deftly employ all 7 forms of decision making, basing the form chosen for a particular situation on factors like timeline, decision purpose, and stakeholder characteristics.
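The kind of situational selection these leaders perform can be sketched as a simple rule set. The factor names and branching rules below are illustrative assumptions for demonstration only; they are not part of the LDMA or any formal model.

```python
# Illustrative sketch: choosing a form of decision making from the
# collaboration continuum based on situational factors. The factor names
# and rules are assumptions for demonstration, not LDMA logic.

def choose_decision_form(urgent: bool, trust_high: bool,
                         stakeholder_conflict: bool,
                         stakeholder_input_required: bool,
                         full_agreement_required: bool) -> str:
    """Return a plausible form of decision making for a situation."""
    if full_agreement_required:
        # Complete agreement is required to formulate the decision.
        return "consensus-focused"
    if stakeholder_input_required:
        # Stakeholder perspectives are needed to formulate the decision.
        return "inclusive" if urgent else "consent-focused"
    if stakeholder_conflict:
        # Perspective-seeking helps people feel heard and rebuilds trust.
        return "consulting"
    if urgent and trust_high:
        # Quick and efficient, but spends trust.
        return "autocratic"
    return "consulting"

print(choose_decision_form(urgent=True, trust_high=True,
                           stakeholder_conflict=False,
                           stakeholder_input_required=False,
                           full_agreement_required=False))
# prints "autocratic"
```

The point of the sketch is only that the choice of form is conditional on the situation; a skilled leader weighs these factors fluidly rather than mechanically.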


(The feedback in our LDMA [leadership decision making] assessment report provides learning suggestions for building collaboration continuum skills. And our Certified Consultants can offer specific practices, tailored for your learning needs, that support the development of these skills.) 

 


Leadership, vertical development & transformative change: a polemic

This morning, while doing some research on leader development, I googled “vertical leadership” and “coaching.” The search returned 466,000 results. Wow. Looks like vertical development is hot in the coaching world!

Two hours later, after scanning dozens of web sites, I was left with the following impression: 

Vertical development occurs through profound, disruptive, transformative insights that alter how people see themselves, improve their relationships, increase happiness, and help them cope better with complex challenges. The task of the coach is to set people up for these experiences. Evidence of success is offered through personal stories of transformation.

But decades of developmental research contradict this picture. This body of evidence shows that the kind of transformative experience promised on these web sites is uncommon. And when it does occur, it rarely produces a fairytale ending. In fact, profound disruptive insights can easily have negative consequences, and most experiences that people refer to as transformational are really just momentary insights. They may feel profound in the moment, but don't actually usher in any measurable change at all, much less transformative change.

 

"The good news is, you don’t have to work on transforming yourself to become a better leader."

 

The fact is, insight is fairly easy, but growth is slow, and change is hard. Big change is really, really hard. And some things, like many dispositions and personality traits, are virtually impossible to change. This isn't an opinion based on personal experience; it's a conclusion based on evidence from hundreds of longitudinal developmental studies conducted during the last 70 years. (Check out our articles page for some of this evidence.)

The good news is, you don’t have to work on transforming yourself to become a better leader. All you need to do is engage in daily practices that incrementally, through a learning cycle called VCoL, help you build the skills and habits of a good leader. Over the long term, this will change you, because it will alter the quality of your interactions with others, and that will change your mind—profoundly.

 


Decision-making under VUCA conditions

VUCA

I was recently asked if there is a decision-making approach designed specifically for situations characterized by volatility, uncertainty, complexity, and ambiguity (VUCA). I don't know of a one-size-fits-all solution, but I can speak to what's needed to optimize decisions made in VUCA conditions. Here are the main ingredients:

Agility

  1. The ability to adjust one's decision-making approach to meet the demands of a particular problem: For example, some problems must be addressed immediately and autocratically, others are best addressed more collaboratively and with a greater focus on data collection and perspective seeking.
  2. The ability to make high-quality autocratic decisions: By setting up systems that keep stakeholders continuously apprised of one another's perspectives and data, we can improve the quality of autocratic decisions by ensuring that there are fewer surprises and that rapid decisions are informed decisions.
  3. Dynamic steering: Every leader in an organization should be constantly cultivating this skill. It increases the agility of teams and organizations by building skill for efficient decision-making and timely adjustment.

The most complete information possible (under conditions in which complete information is impossible), which requires:

  1. Collaborative capacity: highly complex problems, by definition, are beyond the comprehension of even the most developed individuals. Collaborative skills ensure that leaders can effectively leverage key perspectives.
  2. Systems and structures that foster ongoing two-way communication up and down the organizational hierarchy, across departments, divisions, and teams, and between internal and external stakeholders.
  3. Systems and structures that cultivate excellent perspective-taking and -seeking skills. These include…
    • Building in opportunities for collaborative decision-making,  
    • “Double linking”—the formal inclusion, in high-stakes or policy decision-making, of representatives from lower and higher levels in the organizational hierarchy or from cross-disciplinary teams, and
    • Embedding virtuous cycles to ensure that all processes are continuously moving toward higher functioning states, and that employees are constantly building knowledge and skills.

Where appropriate, technologies for constructing models of highly complex problems:

  • For a comprehensive overview of options, see Decision Making Under Uncertainty: Theory and Application, by Mykel J. Kochenderfer.
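Dynamic steering (point 3 under Agility above) can be pictured as a short decide/act/measure/adjust loop: make a workable decision quickly, observe the outcome, and correct course at frequent intervals. The sketch below is a toy illustration; the function names, the 80%-of-plan outcome model, and the 0.5 adjustment factor are all invented assumptions, not a prescribed method.

```python
# Toy sketch of dynamic steering: decide on a workable (not perfect) plan,
# act, measure the outcome, and make timely incremental adjustments.
# All names and numbers here are illustrative assumptions.

def dynamic_steering(target: float, initial_plan: float,
                     measure, adjust, cycles: int = 5) -> float:
    """Iteratively steer a plan toward a target using outcome feedback."""
    plan = initial_plan
    for _ in range(cycles):
        outcome = measure(plan)     # act, then observe the result
        error = target - outcome    # how far off course are we?
        plan = adjust(plan, error)  # timely, incremental correction
    return plan

# Example: steering a weekly output plan toward a target of 100 units,
# where actual output runs at only 80% of whatever is planned.
final_plan = dynamic_steering(
    target=100.0,
    initial_plan=100.0,
    measure=lambda plan: 0.8 * plan,
    adjust=lambda plan, error: plan + 0.5 * error,
)
print(round(final_plan, 1))
```

After a handful of cycles the plan has drifted upward until actual output sits close to the target; the value of the loop is that no single decision needs to be right, only correctable.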

Our flagship adult assessment, the Leadership Decision-Making Assessment (LDMA), was designed for the US government to document and assess the level of sophistication individuals and teams demonstrate on key skills for making optimal decisions in VUCA conditions.

 


Jaques’ Strata and Lectical Levels

We often receive queries about the relation between Lectical Levels and the Strata defined by Jaques. The following table shows the relation between Lectical Levels and Strata as Jaques defined them in Requisite Organization. These relations were determined by using the Lectical Assessment System to score Jaques' definitions. We have not yet had an opportunity to compare results from scoring the same material with both the Lectical Assessment System and a scoring system based on Jaques' definitions, as we have done in other comparisons of scoring systems. Note that our interpretation of Jaques' Strata definitions may differ from the interpretations of other researchers, leading to differences between this theoretical comparison and an empirical one.

Strata by Lectical Level

References

Jaques, E. (1996). Requisite organization (2 ed.). Arlington, VA: Cason Hall.


Vertical development & leadership skills

What is vertical development?

In our view, learning involves two interrelated processes—the accumulation of knowledge and the organization of that knowledge into mental maps and the neural nets that support them. Over time, if we engage in activities that promote development, our mental maps become increasingly complex. More complex mental maps allow for more complex thinking. This increasing capacity to handle complexity is sometimes called vertical development.

Vertical development and leadership

As leaders move into more senior positions, the task demands of their role increase in complexity. They must juggle more (and more complex) perspectives, cope with more ambiguity, and make an increasing number of adaptive decisions. It's no surprise that more complex thinkers are more likely to rise into senior management roles.

For 15 years, we've been building learning tools that support vertical development by diagnosing leaders' current capabilities and making targeted learning recommendations. The first step in this process is measuring the developmental level of leaders' skills on the Lectical® Scale. The figure below shows how the performances of lower-level managers (n = 1108) and senior managers (n = 222) on the LDMA (our decision-making assessment) are distributed on this scale. As you can see, the distribution of senior managers is higher on the Lectical Scale than the distribution of lower-level managers. In fact, senior leaders, on average, are several years ahead of lower-level managers in their vertical development. This means they are considerably better at working with complexity.

management level by Lectical Level
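The group comparison described above can be illustrated with a small sketch. The scores below are invented stand-ins on an arbitrary scale, purely for demonstration; actual LDMA data is not reproduced here.

```python
# Illustrative sketch: comparing the score distributions of two manager
# groups on a common developmental scale. These scores are invented
# stand-ins, not actual LDMA data.
from statistics import mean

lower_level = [10.8, 11.0, 11.1, 11.2, 11.3, 11.4]  # hypothetical scores
senior = [11.2, 11.4, 11.5, 11.6, 11.7, 11.9]       # hypothetical scores

gap = mean(senior) - mean(lower_level)
print(f"lower-level mean: {mean(lower_level):.2f}")
print(f"senior mean:      {mean(senior):.2f}")
print(f"gap:              {gap:.2f}")
```

With real data, a positive gap of this kind is what the figure above depicts: the senior-manager distribution sits higher on the scale.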

Lectical Assessments are designed to advance vertical development—to help build the capacity of individuals and teams to meet the demands of an increasingly complex world. In the hands of competent coaches, mentors, and educators, Lectical Assessments double the rate of vertical development that typically occurs in effective leadership programs. This is possible because they support the natural learning cycle by providing learning suggestions that are "just right."

To learn more about the relation between vertical development and job complexity see the post: Task demands and capabilities.

To learn more about the way we think about learning and assessment, listen to this interview with Dr. Dawson: The ideal relationship between learning and assessment.

To learn more about the research with Lectical Assessments, visit our Validity and reliability page.

Source: 2014_0339_all_LDMA_scores.xlsx
