If you want students to develop faster, stop trying to speed up learning

During the last 20 years—since high-stakes testing began to take hold—public school curricula have undergone a massive transformation. Standards have pushed material that was once taught in high school down into the 3rd and 4th grades, and the amount of content teachers are expected to cover each year has increased steadily. The theory behind this trend appears to be that learning more content and learning it earlier will help students develop faster.

But is this true? Is there any evidence at all that learning more content and learning it earlier produces more rapid development? If so, I haven't seen it.

In fact, our evidence points to the opposite conclusion. Learning more and learning it earlier may actually be interfering with the development of critical life skills—like those required for making good decisions in real-life contexts. As the graph below makes clear, students in schools that emphasize covering required content do not develop as rapidly as students in schools that focus on fostering deep understanding—even though learning for understanding generally takes more time than learning something well enough to "pass the test."

What is worse, we're finding that the average student in schools with the greatest emphasis on covering required content appears to stop developing by the end of grade 10, with an average score of 10.1. This is the same score received by the average 6th grader in schools with the greatest emphasis on fostering deep understanding.

The graphs in this post are based on data from 17,755 LRJA assessments. The LRJA asks test-takers to respond to a complex real-life dilemma. They are prompted to explore questions about:

  • finding, creating, and evaluating information and evidence,
  • perspectives, persuasion, and conflict resolution,
  • when and if it's possible to be certain,
  • the nature of facts, truth, and reality.

Students were in grades 4-12, and attended one or more of 56 schools in the United States and Canada.

The graphs shown above represent two groups of schools—those with students who received the highest scores on the LRJA and those with students who received the lowest scores. These schools differed from one another in two other ways. First, the highest performing schools were all private schools*. Most students in these schools came from upper middle or high SES (socio-economic status) homes. The lowest performing schools were all public schools serving low to middle SES inner city students.

The second way in which these schools differed was in the design of their curricula. The highest performing schools featured integrated curricula with a great deal of practice-based learning and a heavy emphasis on fostering understanding and real-world competence. All of the lowest performing schools featured standards-focused curricula with a strong emphasis on learning the facts, formulas, procedures, vocabulary, and rules targeted by state tests.

Based on the results of conventional standardized tests, we expected most of the differences between student performances on the LRJA in these two groups of schools to be explained by SES. But this was not the case. Private schools with more conventional curricula and high performing public schools serving middle and upper middle SES families did indeed outperform the low SES schools, but as shown in the graph below, by grade 12, their students were still about 2.5 years behind students in the highest performing schools. At best, SES explains only about 1/2 of the difference between the best and worst schools in our database. (For more on this, see the post, "Does a focus on deep understanding accelerate growth?")

By the way, the conventional standardized test scores of students in this middle group, despite their greater emphasis on covering content, were no better than the conventional standardized test scores of students in the high performing group. Focusing on deep understanding appears to help students develop faster without interfering with their ability to learn required content.

This will not be our last word on the subject. As we scale our K-12 assessments, we'll be able to paint an increasingly clear picture of the developmental impact of a variety of curricula.


Lectica's nonprofit mission is to help educators foster deep understanding and lifelong growth. We can do it with your help! Please donate now. Your donation will help us deliver our learning tools—free—to K-12 teachers everywhere.


*None of these schools pre-selected their students based on test scores. 

 

Lectica’s Human Capital Value Chain—for organizations that are serious about human development

Lectica's tools and services have powerful applications for every process in the human capital value chain. I explain how in the following video.

For links to more information see the HCVC page on Lecticalive. For references that support claims made in the video, see the post—Introducing LecticaFirst.

 

The rate of development

An individual's rate of development is affected by a wide range of factors. Twin studies suggest that about 50% of the variation in Lectical growth trajectories is likely to be predicted by genetic factors. The remaining variation is explained by environmental factors, including the environment in the womb, the home environment, parenting quality, educational quality & fit, economic status, diet, personal learning habits, and aspects of personality.

Each Lectical Level takes longer to traverse than the previous level. This is because development through each successive level involves constructing increasingly elaborated and abstract knowledge networks. Don't be fooled by the slow growth, though. A little growth can have an important impact on outcomes. For example, small advances in level 11 can make a big difference in an individual's capacity to work effectively with complexity and change.

Growth trajectories over the lifespan

The graphs above show possible learning trajectories, first, for the lifespan and second, for ages 10-60. Note that the highest age shown on these graphs is 60. This does not mean that individuals cannot develop after the age of 60.

The yellow circle in each graph represents a Lectical Score and the confidence interval around that score. That's the range in which the "true score" would most likely fall. When interpreting any test score, you should keep the confidence interval in mind.  
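
As a quick sketch of how such an interval is typically computed in psychometrics (this is standard practice, not a description of Lectica's specific procedure, and the numbers below are purely hypothetical):

\[
\mathrm{SEM} = SD \times \sqrt{1 - r_{xx}}, \qquad 95\%\ \mathrm{CI} \approx \text{observed score} \pm 1.96 \times \mathrm{SEM}
\]

For example, with a hypothetical score spread of SD = 1.0 Lectical Level and reliability \(r_{xx} = .95\), the SEM would be about 0.22, giving an interval of roughly ±0.44 around the observed score.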

Test results are not tidy

When we measure development over short time spans, it does not look smooth. The kind of pattern shown in the following graph is more common. However, we have found that growth appears a bit smoother for adults than for children. We think this is because children, for a variety of reasons, are less likely to do their best work on every testing occasion.

Report card showing jagged growth 

Factors that increase the rate of development

  • The test-taker's current developmental trajectory. (A person whose history places her on the green curve in the first two graphs is unlikely to jump to the blue curve.)
  • The amount of reflective activity (especially VCoLing) the individual typically engages in (no reflective activity, no growth)
  • Participation in deliberate learning activities that include lots of reflective activity (especially VCoLing)
  • Participation in supported learning (coaching, mentoring) after a long period away from formal education (can create a spurt)

 

Learning and metacognition

Metacognition is thinking about thinking. Metacognitive skills are an interrelated set of competencies for learning and thinking, and include many of the skills required for active learning, critical thinking, reflective judgment, problem solving, and decision-making. People whose metacognitive skills are well developed are better problem-solvers, decision makers and critical thinkers, are more able and more motivated to learn, and are more likely to be able to regulate their emotions (even in difficult situations), handle complexity, and cope with conflict. Although metacognitive skills, once they are well-learned, can become habits of mind that are applied unconsciously in a wide variety of contexts, it is important for even the most advanced learners to “flex their cognitive muscles” by consciously applying appropriate metacognitive skills to new knowledge and in new situations.

Lectica's learning model, VCoL+7 (the virtuous cycle of learning and +7 skills), leverages metacognitive skills in a number of ways. For example, the fourth step in VCoL is reflection & analysis, and the +7 skills include reflective disposition, self-monitoring and awareness, and awareness of cognitive and behavioral biases.

Learn more

 

Learning in the workplace occurs optimally when the learner has a reflective disposition and receives both institutional and educational support.

Correctness versus understanding

Recently, I was asked by a colleague for a clear, simple example that would show how DiscoTest items differ from the items on conventional standardized tests. My first thought was that this would be impossible without oversimplifying. My second thought was that it might be okay to oversimplify a bit. So, here goes!

The table below lists four differences between what Lectica measures and what is measured by other standardized assessments.1 The descriptions are simplified and lack nuance, but the distinctions are accurate.

|   | Lectical Assessments | Other standardized assessments |
| --- | --- | --- |
| Scores represent | level of understanding, based on a valid learning scale | number of correct answers |
| Target | the depth of an individual's understanding (demonstrated in the complexity of arguments and the way the test taker works with knowledge) | the ability to recall facts, or to apply rules, definitions, or procedures (demonstrated by correct answers) |
| Format | paragraph-length written responses | primarily multiple choice or short written answers2 |
| Responses | explanations, applications, and transfer | right/wrong judgments or right/wrong applications of rules and procedures |

The example

I chose a scenario-based example that we're already using in an assessment of students' conceptions of the conservation of matter. We borrowed the scenario from a pre-existing multiple choice item.

The scenario

Sophia balances a pile of stainless steel wire against a pile of ordinary steel wire on a scale. After a few days, the ordinary wire in the pan on the right starts rusting.

Conventional multiple choice question

What will happen to the pan with the rusting wire?

  1. The pan will move up.
  2. The pan will not move.
  3. The pan will move down.
  4. The pan will first move up and then down.
  5. The pan will first move down and then up.

(Go ahead, give it a try! Which answer would you choose?)

Lectical Assessment question

What will happen to the height of the pan with the rusting wire? Please explain your answer thoroughly.

Here are three examples of responses from 12th graders.

Lillian: The pan will move down because the rusted steel is heavier than the plain steel.

Josh: The pan will move down, because when iron rusts, oxygen atoms get attached to the iron atoms. Oxygen atoms don't weigh very much, but they weigh a bit, so the rusted iron will "gain weight," and the scale will go down a bit on that side.

Ariana: The pan will go down at first, but it might go back up later. When iron oxidizes, oxygen from the air combines with the iron to make iron oxide. So, the mass of the wire increases, due to the mass of the oxygen that has bonded with the iron. But iron oxide is non-adherent, so over time the rust will fall off of the wire. If the metal rusts for a long time, some of the rust will become dust and some of that dust will very likely be blown away.

Debrief

The correct answer to the multiple choice question is, "The pan will move down."

There is no single correct answer to the Lectical Assessment item. Instead, there are answers that reveal different levels of understanding. Most readers will immediately see that Josh's answer reveals more understanding than Lillian's, and that Ariana's reveals more understanding than Josh's.

You may also notice that Ariana's written response would result in her selecting one of the incorrect multiple-choice answers, and that Lillian and Josh are given equal credit for correctness even though their levels of understanding are not equally sophisticated.

Why is all of this important?

  • It's not fair! The multiple choice item cheats Ariana of the chance to show off what she knows, and it treats Lillian and Josh as if their levels of understanding are identical.
  • The multiple choice item provides no useful information to students or teachers! The most we can legitimately infer from a correct answer is that the student has learned that when steel rusts, it gets heavier. This correct answer is a fact. The ability to identify a fact does not tell us how it is understood.
  • Without understanding, knowledge isn't useful. Facts that are not supported with understanding are useful on Jeopardy, but less so in real life. Learning that does not increase understanding or competence is a tragic waste of students' time.
  • Despite clear evidence that correct answers on standardized tests do not measure understanding and are therefore not a good indicator of useable knowledge or competence, we continue to use scores on these tests to make decisions about who will get into which college, which teachers deserve a raise, and which schools should be closed. 
  • We value what we measure. As long as we continue to measure correctness, school curricula will emphasize correctness, and deeper, more useful, forms of learning will remain relatively neglected.

None of these points is particularly controversial. Most educators agree on the importance of understanding and competence. What's been missing is the ability to measure understanding at scale and in real time. Lectical Assessments are designed to fill this gap.

 


1Many alternative assessments are designed to measure understanding—at least to some degree—but few of these are standardized or scalable. 

2See my examination of a PISA item for an example of a typical written response item from a highly respected standardized test.

Benchmarks: education, jobs, and the Lectical Scale

I'm frequently asked about benchmarks. My most frequent response is something like: "Setting benchmarks requires more data than we have collected so far," or "Benchmarks are just averages; they don't necessarily apply to particular cases, but people tend to use them as if they do." Well, that last caveat will probably always hold true, but now that our database contains more than 43,000 assessments, the first response is a little less true. So, I'm pleased to announce that we've published a benchmark table that shows how educational and workplace role demands relate to the Lectical Scale. We hope you find it useful!

Introducing LecticaFirst: Front-line to mid-level recruitment assessment—on demand


The world's best recruitment assessments—unlimited, auto-scored, affordable, relevant, and easy

Lectical Assessments have been used to support senior and executive recruitment for over 10 years, but the expense of human scoring has prohibited their use at scale. I'm DELIGHTED to report that this is no longer the case. Because of CLAS—our electronic developmental scoring system—we plan to deliver customized assessments of workplace reasoning with real-time scoring this fall. We're calling this service LecticaFirst.

LecticaFirst is a subscription service.* It allows you to administer as many LecticaFirst assessments as you'd like, any time you'd like. It's priced to make it possible for your organization to pre-screen every candidate (up through mid-level management) before you look at a single resume or call a single reference. And we've built in several upgrade options, so you can easily obtain additional information about the candidates that capture your interest.

learn more about LecticaFirst subscriptions


The current state of recruitment assessment

"Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills" (Hunter, Schmidt, & Judiesch, 1990).

Most conventional workplace assessments focus on one of two broad constructs—aptitude or personality. These assessments examine factors like literacy, numeracy, role-specific competencies, leadership traits, and cultural fit, and are generally delivered through interviews, multiple choice tests, or Likert-style surveys. Emotional intelligence is also sometimes measured, but thus far it is not producing results that can compete with aptitude tests (Zeidner, Matthews, & Roberts, 2004).

Like Lectical Assessments, aptitude tests are tests of mental ability (or mental skill). High-quality tests of mental ability have the highest predictive validity for recruitment purposes, hands down. Hunter and Hunter (1984), in their systematic review of the literature, found an effective range of predictive validity for aptitude tests of .45 to .54. Translated, this means that about 20% to 29% of success on the job was predicted by mental ability. These numbers do not appear to have changed appreciably since Hunter and Hunter's 1984 review.

Personality tests come in a distant second. In their meta-analysis of the literature, Tett, Jackson, and Rothstein (1991) reported an overall relation between personality and job performance of .24 (with conscientiousness as the best predictor by a wide margin). Translated, this means that only about 6% of job performance is predicted by personality traits. These numbers do not appear to have been challenged in more recent research (Johnson, 2001).
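
To make the conversion explicit: "variance explained" is simply the square of the validity coefficient, so the percentages quoted above follow directly from the correlations:

\[
r^2: \qquad 0.45^2 \approx 0.20, \qquad 0.54^2 \approx 0.29, \qquad 0.24^2 \approx 0.06
\]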

Predictive validity of various types of assessments used in recruitment

The following table shows average predictive validities for various forms of assessment used in recruitment contexts. The column "variance explained" is an indicator of how much of a role a particular form of assessment plays in predicting performance—its predictive power. When deciding which assessments to use in recruitment, the goal is to achieve the greatest possible predictive power with the fewest assessments. That's why I've included the last column, "variance explained (with GMA)." It shows what happens to the variance explained when an assessment of General Mental Ability is combined with the form of assessment in a given row. The best combinations shown here are GMA and work sample tests, GMA and integrity, and GMA and conscientiousness.
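
A note on how we read that last column: assuming it reports the square of the multiple correlation \(R\) obtained when GMA is combined with the predictor in that row (our reading of the Schmidt & Hunter incremental-validity figures, not something stated in the table itself), the percentages can be converted back to correlations. For example:

\[
R = \sqrt{R^2}: \qquad \sqrt{0.40} \approx 0.63 \ (\text{GMA + work samples}), \qquad \sqrt{0.42} \approx 0.65 \ (\text{GMA + integrity})
\]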

| Form of assessment | Source | Predictive validity | Variance explained | Variance explained (with GMA) |
| --- | --- | --- | --- | --- |
| Complexity of workplace reasoning | Dawson & Stein, 2004; Stein, Dawson, Van Rossum, Hill, & Rothaizer, 2003 | .53 | 28% | n/a |
| Aptitude (General Mental Ability, GMA) | Hunter, 1980; Schmidt & Hunter, 1998 | .51 | 26% | n/a |
| Work sample tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .54 | 29% | 40% |
| Integrity | Ones, Viswesvaran, & Schmidt, 1993; Schmidt & Hunter, 1998 | .41 | 17% | 42% |
| Conscientiousness | Barrick & Mount, 1995; Schmidt & Hunter, 1998 | .31 | 10% | 36% |
| Employment interviews (structured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .51 | 26% | 39% |
| Employment interviews (unstructured) | McDaniel, Whetzel, Schmidt, & Maurer, 1994; Schmidt & Hunter, 1998 | .38 | 14% | 30% |
| Job knowledge tests | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .48 | 23% | 33% |
| Job tryout procedure | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .44 | 19% | 33% |
| Peer ratings | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .49 | 24% | 33% |
| Training & experience: behavioral consistency method | McDaniel, Schmidt, & Hunter, 1988a, 1988b; Schmidt & Hunter, 1998; Schmidt, Ones, & Hunter, 1992 | .45 | 20% | 33% |
| Reference checks | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .26 | 7% | 32% |
| Job experience (years) | Hunter, 1980; McDaniel, Schmidt, & Hunter, 1988b; Schmidt & Hunter, 1998 | .18 | 3% | 29% |
| Biographical data measures (Supervisory Profile Record Biodata Scale) | Rothstein, Schmidt, Erwin, Owens, & Sparks, 1990; Schmidt & Hunter, 1998 | .35 | 12% | 27% |
| Assessment centers | Gaugler, Rosenthal, Thornton, & Bentson, 1987; Schmidt & Hunter, 1998; Becker, Höft, Holzenkamp, & Spinath, 2011 | .37 | 14% | 28% |
| EQ | Zeidner, Matthews, & Roberts, 2004 | .24 | 6% | n/a |
| 360 assessments | Beehr, Ivanitskaya, Hansen, Erofeev, & Gudanowski, 2001 | .24 | 6% | n/a |
| Training & experience: point method | McDaniel, Schmidt, & Hunter, 1988a; Schmidt & Hunter, 1998 | .11 | 1% | 27% |
| Years of education | Hunter & Hunter, 1984; Schmidt & Hunter, 1998 | .10 | 1% | 27% |
| Interests | Schmidt & Hunter, 1998 | .10 | 1% | 27% |

Note: Arthur, Day, McNelly, & Edens (2003) found a predictive validity of .45 for assessment centers that included mental skills assessments.

The figure below shows the predictive power information from this table in graphical form. Assessments are color coded to indicate which are focused on mental (cognitive) skills, behavior (past or present), or personality traits. It is clear that tests of mental skills stand out as the best predictors.

Predictive power graph

Why use Lectical Assessments for recruitment?

Lectical Assessments are "next generation" assessments, made possible through a novel synthesis of developmental theory, primary research, and technology. Until now, multiple-choice aptitude tests have been the most affordable option for employers. But despite being more predictive than personality tests, aptitude tests still suffer from important limitations. Lectical Assessments address these limitations. For details, take a look at the side-by-side comparison of LecticaFirst tests with conventional tests, below.

| Dimension | LecticaFirst | Aptitude |
| --- | --- | --- |
| Accuracy | Level of reliability (.95–.97) makes them accurate enough for high-stakes decision-making. (Interpreting reliability statistics) | Varies greatly. The best aptitude tests have levels of reliability in the .95 range. Many recruitment tests have much lower levels. |
| Time investment | Lectical Assessments are not timed. They usually take from 45–60 minutes, depending on the individual test-taker. | Varies greatly. For acceptable accuracy, tests must have many items and may take hours to administer. |
| Objectivity | Scores are objective. (Computer scoring is blind to differences in sex, body weight, ethnicity, etc.) | Scores on multiple choice tests are objective. Scores on interview-based tests are subject to several sources of bias. |
| Expense | Highly competitive subscription (from $6–$10 per existing employee annually). | Varies greatly. |
| Fit to role: complexity | Lectica employs sophisticated developmental tools and technologies to efficiently determine the relation between role requirements and the level of reasoning skill required to meet those requirements. | Lectica's approach is not directly comparable to other available approaches. |
| Fit to role: relevance | Lectical Assessments are readily customized to fit particular jobs, and are direct measures of what's most important—whether or not candidates' actual workplace reasoning skills are a good fit for a particular job. | Aptitude tests measure people's ability to select correct answers to abstract problems. It is hoped that these answers will predict how good a candidate's workplace reasoning skills are likely to be. |
| Predictive validity | In research so far: predicts advancement (R = .53**, R² = .28), National Leadership Study. | The aptitude (IQ) tests used in published research predict performance (R = .45 to .54, R² = .20 to .29). |
| Cheating | The written response format makes cheating virtually impossible when assessments are taken under observation, and very difficult when taken without observation. | Cheating is relatively easy and rates can be quite high. |
| Formative value | High. LecticaFirst assessments can be upgraded after hiring, then used to inform employee development plans. | None. Aptitude is a fixed attribute, so there is no room for growth. |
| Continuous improvement | Our assessments are developed with a 21st century learning technology that allows us to continuously improve the predictive validity of LecticaFirst assessments. | Conventional aptitude tests are built with a 20th century technology that does not easily lend itself to continuous improvement. |

* CLAS is not yet fully calibrated for scores above 11.5 on our scale. Scores at this level are more often seen in upper- and senior-level managers and executives. For this reason, we do not recommend using LecticaFirst for recruitment above mid-level management.

**The US Department of Labor's highest category of validity, labeled "Very Beneficial," requires regression coefficients of .35 or higher (R > .34).


References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125-153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61-69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775-788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28-42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83-120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283-309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327-330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599-616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175-184.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: Using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.

Zeidner, M., Matthews, G., & Roberts, R. D. (2004). Emotional intelligence in the workplace: A critical review. Applied Psychology: An International Review, 53(3), 371-399.

Decision making & the collaboration continuum

When we create a Lectical Assessment, we make a deep (and never ending) study of how the skills and knowledge targeted by that assessment develop over time. The research involves identifying key concepts and skills and studying their evolution on the Lectical Scale (our developmental scale). The collaboration continuum has emerged from this research.

As it applies to decision making, the collaboration continuum is a scale that runs from fully autocratic to consensus-based. Although it is a continuum, we find it useful to think of the scale as having 7 relatively distinct levels, as shown in the table below:


Levels are ordered from least collaborative (top) to most collaborative (bottom).

| Level | Basis for decision | Applications | Limitations |
| --- | --- | --- | --- |
| Fully autocratic | personal knowledge or rules, no consideration of other perspectives | everyday operational decisions where there are clear rules and no apparent conflicts | quick and efficient |
| Autocratic | personal knowledge, with some consideration of others' perspectives (no perspective seeking) | operational decisions in which conflicts are already well-understood and trust is high | quick and efficient, but spends trust, so should be used with care |
| Consulting | personal knowledge, with perspective-seeking to help people feel heard | operational decisions in which the perspectives of well-known stakeholders are in conflict and trust needs reinforcement | time consuming, but can build trust if not abused |
| Inclusive | personal knowledge, with perspective seeking to inform a decision | operational or policy decisions in which the perspectives of stakeholders are required to formulate a decision | time consuming, but improves decisions and builds engagement |
| Compromise-focused | leverages stakeholder perspectives to develop a decision that gives everyone something they want | making "deals" to which all stakeholders must agree | time consuming, but necessary in deal-making situations |
| Consent-focused | leverages stakeholder perspectives to develop a decision that everyone can consent to (even though there may be reservations) | policy decisions in which the perspectives of stakeholders are required to formulate a decision | can be efficient, but requires excellent facilitation skills and training for all parties |
| Consensus-focused | leverages stakeholder perspectives to develop a decision that everyone can agree with | decisions in which complete agreement is required to formulate a decision | requires strong relationships; useful primarily when decision-makers are equal partners |

As the table shows, all 7 forms of decision making on the collaboration continuum have legitimate applications. And all can be learned at any adult developmental level. However, the most effective application of each successive form of decision making requires more developed skills. Inclusive, consent, and consensus decision making are particularly demanding, and consent decision making requires formal training for all participating parties.

The most developmentally advanced and accomplished leaders who have taken our assessments deftly employ all 7 forms of decision making, basing the form chosen for a particular situation on factors like timeline, decision purpose, and stakeholder characteristics. 

 

(The feedback in our LDMA [leadership decision making] assessment report provides learning suggestions for building collaboration continuum skills. And our Certified Consultants can offer specific practices, tailored for your learning needs, that support the development of these skills.) 

 

VCoL & flow: Can Lectical Assessments increase happiness?

Last week, I received an inquiry about the relation between flow states (Csikszentmihalyi & colleagues) and the natural dopamine/opioid learning cycle that undergirds Lectica's learning model, VCoL+7. The short answer is that flow and the natural learning cycle have a great deal in common. The primary difference appears to be that flow can occur during almost any activity, while the natural learning cycle is specifically associated with learning. Also, flow has been associated with neurochemicals we haven't (yet?) incorporated into our conception of the natural learning cycle. We'll be tracking the literature to see if research on these neurochemicals suggests modifications.

The similarities between flow states and the dopamine/opioid learning cycle are numerous. Both involve dopamine (striving & focus) and opioids (reward). And researchers who have studied the role of flow in learning even use the term "Goldilocks Zone" to describe students' learning sweet spot—the place where interest and challenge are just right to stimulate the release of dopamine, and where success happens just often enough to trigger the release of opioids (which stimulate the desire for more learning, starting the cycle again).

Since psychologist Mihaly Csikszentmihalyi began his studies of flow, it has been linked to feelings of happiness and euphoria, and to peak performance among workers, scientists, athletes, musicians, and many others. Flow has also been shown to deepen learning and support interest.

Flow is gradually making its way into the classroom. It's featured on UC Berkeley's Greater Good site in several informative articles designed to help teachers bring flow into the classroom.

"Teachers want their kids to find “flow,” that feeling of complete immersion in an activity, where we’re so engaged that our worries, sense of time, and self-consciousness seem to disappear."

Advice for stimulating flow is similar to our advice for teaching and learning in the Goldilocks Zone, and includes suggestions like the following:

  • Challenge kids—but not too much. 
  • Make assignments relevant to students’ lives.
  • Encourage choice, feed interest.
  • Set clear goals (and give feedback along the way).
  • Offer hands-on activities.

If you've been following our work, these suggestions should sound very familiar.

All in all, the flow literature provides additional support for the value of our mission to deliver learning tools that help teachers help students learn in the zone.


VCoL+7: Can it save democracy?

Our learning model, the Virtuous Cycle of Learning and its +7 skills (VCoL+7) is more than a way of learning—it's a set of tools that help students build a relationship with knowledge that's uniquely compatible with democratic values. 

Equal opportunity: In the company of good teachers and the right metrics, VCoL makes it possible to create a truly level playing field for learning—one in which all children have a real opportunity to achieve their full learning potential.

Freedom: VCoL shifts the emphasis from learning a particular set of facts, vocabulary, rules, procedures, and definitions, to building transferable skills for thinking, communicating, and learning, thus allowing students greater freedom to learn essential skills through study and practice in their own areas of interest.

Pursuit of happiness: VCoL leverages our brain's natural motivational cycle, allowing people to retain their inborn love of learning. Thus, they're equipped not only with skills and knowledge, but also with a disposition to adapt and thrive in a complex and rapidly changing world.

Citizenship: VCoLs build skills for (1) coping with complexity, (2) gathering, evaluating, & applying information, (3) perspective seeking & coordination, (4) reflective analysis, and (5) communication & argumentation, all of which are essential for the high quality decision making required of citizens in a democracy. 

Open mindset: VCoLs treat all learning as partial or provisional, which fosters a sense of humility about one's own knowledge. A touch of humility can make citizens more open to considering the perspectives of others—a useful attribute in democratic societies.

All of the effects listed here refer primarily to VCoL itself—a cycle of goal setting, information gathering, application, and reflection. The +7 skills—reflectivity, awareness, seeking and evaluating information, making connections, applying knowledge, seeking and working with feedback, and recognizing and overcoming built-in biases—amplify these effects.

VCoL is not only a learning model for our times; it could well be the learning model that helps save democracy.