I’ve written elsewhere that, of the 3,500 people who have answered the subscriber survey I put out a year or so ago, a strong majority are educators wearied from years of high-stakes accountability and the over-sciencing of teaching.
But with that latter descriptor — the “over-sciencing” of teaching — I want to be clearer because, as my summer work for the Character Lab Teacher Innovator Grant has shown me, there is a kind of science that invigorates and invents and inspires, a science that is alive. And then there is the kind of science that stultifies and stifles and demotivates, a science that has teachers leaving our profession in droves.
The difference between the two sciences is that the living one operates within a world of fresh, testable, adapting hypotheses, whereas the dying one — the one that is killing us as teachers — operates on philosophies and theories posing as science. Let’s look at each of these in turn.
I. The science that stultifies
Paul Tough’s work in and around his book How Children Succeed was my first introduction to the Cognitive Hypothesis. Namely, this hypothesis holds that the best way to ensure that children grow up to be successful adults is to make them smart, or “cognitively able,” or good at reading and writing and math and science.
This hypothesis, which has really become a societal phenomenon, manifests itself in everything from high-stakes preschool admissions to a billion-dollar testing and test-prep industry to teaching kindergarteners to read (when I was an elementary kid in the 80s, reading was a 1st grade skill). Oh, and there’s one other way we clearly see the Cognitive Hypothesis in action: tying teacher pay or job security or evaluation points, heavily, to data from cognitive tests. 
My thinking is rough draft here, but what I’ll say about the Cognitive Hypothesis is this:
- Anecdotally, every adult knows someone who is way smart and way less successful than they could be. These anecdotes suggest that IQ alone does not cause success.
- Instinctually, when I ask workshop participants or keynote audiences to picture a student they’ve worked with recently whom they feel is very likely to succeed, and then ask participants to share adjectives describing why they chose the student they pictured, it is exceedingly rare that the first adjective that comes to an audience member’s mind is “smart” or “intelligent” or “scores well on tests.”
- Empirically, a rapidly growing body of evidence from research in psychology, and some really interesting work from a Nobel Prize-winning economist, makes a strong case that factors apart from IQ — “noncognitive factors,” they are often called, though I tend to refer to them, as Tough does, as simply character — may be as predictive of long-term human flourishing as cognitive factors, if not more so.
But here’s an even bigger problem, getting back to the purposes of this post, with the Cognitive Hypothesis: it is something that is largely done to educators rather than a work they are invited into.
And I’m no science teacher, but I know enough of science to know this: You don’t do hypotheses to people; you don’t do hypotheses at all. That is not the kind of science that makes young people grow up to become science teachers. That is not the kind of science that will ever engage teachers as researchers and scientists in their craft.
II. The science that invigorates
No: the kind of science that I think teaching is dying for is the kind that is based on simple principles (see Fig. 1).
Here are a few steps to breathing new, science-y life into our schools:
- Step 1: Give teachers permission and encouragement to form and act upon hypotheses.
- Step 2: Ensure the quality of these hypotheses by vetting them. Mentors or committees who have themselves gone through this kind of research process would be ideal for the vetting process.
- Step 3: Reward the work of designing and carrying out studies to test these hypotheses. My colleague Doug Stark, author of Mechanics Instruction that Sticks, even goes so far as to say that this kind of extra work would be one of the only sensible uses of merit pay.
- Step 4: Require teachers to share this work in a way that makes sense. No dust-collecting reports: I’m talking about conference sessions or blog articles or toolkits or programs. The goal is either the spread of promising hypotheses or the spread of lessons learned from failed ones.
And, to be clear, these steps aren’t my idea — they are essentially what Character Lab’s Teacher Innovator Grant does (the next call for Teacher Innovator Grant proposals will be in September; stay tuned). There’s no reason forward-thinking schools can’t replicate the TIG model or something like it.
Picking apart a case study from my own classroom last year
This kind of science should be the opposite of exclusive; let’s take a look at an “experiment” I conducted last year to test a hypothesis about making my students better at life. (Click for video; you can get the point of the experiment from the first three minutes or so of the video.)
Let’s break down my science game here.
What I did well:
- I wasn’t conducting some random experiment that had no connection to my regular work. This activity was an opportunity for my students to practice at least two elements from the NFO Framework: Grow Character and Speak Purposefully and Often. If I were to learn this fall that Haddie’s kindergarten teacher will be conducting random experiments totally disconnected from what kindergarteners need to be doing, I would be alarmed. On the other hand, if I were to learn that Haddie’s teacher will be seeking to measure the impact of a specific intervention she wants to try, I’d be game.
- The intervention I was testing wasn’t time-intensive. Setting these goals, which, you’ll notice, I didn’t limit to school-related topics, is obviously not a part of my district’s English 9 or World History curriculum — nor do I think it should be. And yet there’s an obvious case to be made for including this goal-setting activity if it doesn’t sideline the curriculum to an unreasonable extent. The 10 minutes (maximum) that this activity took each week was a decision I could explain to parents and colleagues with a clear conscience.
- The intervention targeted a specific element of a specific character strength (see Fig. 2). I wasn’t just trying to make kids “grittier”; I was trying to increase their grit by increasing their mastery of short-term goal-keeping. Many of us know that, for both our students and ourselves, defining a long-term aspiration (e.g., I want to be a doctor) is simple compared to defining and keeping a short-term goal that leads to that long-term aspiration (e.g., I need to study three nights for my history test on Friday).
- I didn’t wait for permission to try something I thought had a chance of promoting my students’ long-term flourishing. You shouldn’t, either — but remember that your curriculum does matter, so design interventions that are as quick as they can be.
What I did not do well:
- I had no plan for measuring the results of the intervention. I can’t tell you if this activity was successful or not. I can say that a lot of kids liked it; I can say it gave me some great insight into what makes my students tick. But that’s about it. Aaaaand that’s not much in terms of deciding whether this intervention is worth the ten minutes of instructional time each week.
- In short, I was not thinking scientifically. I had no clearly defined hypothesis, and as a result no hope of making this a study I could really learn from. I was doing what great teachers do, but I was missing out on a huge opportunity to learn from what I was doing.
Looking ahead to next year
Compare the goal-setting example above with this coming school year’s Pop-Up Debate study that I’m doing with the enormous help of Character Lab, especially Character Lab’s Michelle McNamara, who is my Obi Wan of conducting research.
Specifically, my study will measure the growth of three specific pieces of three different character strengths: zest, social intelligence, and grit. The study is built on three hypotheses organized by what Michelle and I have taken to calling Phases One and Two.
Phase One: Overcoming public speaking anxiety
Hypothesis 1: Three weeks of Think-Pair-Share and Daily Facts mini-lessons, followed by three successful pop-up debates, will increase zest in students by decreasing anxiety around public speaking.
To measure this, we’ll use two sources of data from before and after the intervention period:
- a student survey consisting of various psychological measures (Michelle was huge in developing this)
- and observational data from one or more teachers.
Phase Two: Getting better at getting better
Hypothesis 2: Nine pop-up debates throughout the rest of the semester will allow students to have a more accurate perception of 1) their own abilities (a component of social intelligence) and 2) the nature of deliberate practice (a component of grit).
To measure this, we’ll use data from the end of Phase One and data taken at the end of Phase Two. Again, we’ll rely on both student- and teacher-reported information.
Hypothesis 3: The growth projected in Hypothesis 2 will be greater if students are allowed to review and reflect on their debate performances through the use of film.
For one of my classes, we’ll spend ten minutes on the day following pop-up debates with a laptop cart so that we can access yesterday’s debate footage, find a clip of our own speaking performance, and complete a simple reflection form: What did I do well? What do I need to work on next time? How much did I practice prior to this week’s debate? How much do I plan to practice prior to next week’s?
That, in a nutshell, is next year’s study. It is not perfect, but it’s a heck of a lot better than last year’s goal-setting “experiment.”
Not about perfection
The kind of science education needs cannot be about perfection. It needs to encourage the kinds of quasi-experimentation that I did last year with goal-setting, and it needs to build capacity in teachers to grow into conducting better and smarter research throughout their careers.
Right now, education seems to be stuck in a science that’s all about proving that my edu-strategy or philosophy is right; we need to abandon that, instead cultivating a science that promotes inventing, tinkering — and, yes, measuring — within nourishing professional communities and toward Everests much taller than measures of cognitive achievement.
1. Let me just say right now that using the word “cognitive” to just mean IQ / reading / math / tested stuff is problematic — namely because the character strengths (or “success skills,” as David Conley smartly calls them) involve lots of cognition.
2. David Conley helped me clarify this thought when we met during my recent visit to Portland, OR.
Thank you to Michelle McNamara of Character Lab for letting me be her research apprentice this year.