In the next week or two, The Washington Post will be publishing a piece I’ve written about some recent examples of schools paying students cash for attendance and for academic work.
While I was writing it, I revisited a well-known study by Roland Fryer that I’ve previously posted about (see The Problem With “Bribing Students” and More On The Problem With “Bribing Students”). One finding of that study often cited by supporters of this “cash incentive” idea was that paying second graders in Dallas resulted in “significant” gains on standardized reading comprehension tests, and that a significant portion of that gain remained a year later.
Something about that finding always sounded fishy to me, but I just didn’t have it in me to plow through a nearly 200-page scholarly research paper. It’s also questionable whether I would have understood what I was reading.
Fortunately, though, Dr. Stephen Krashen, the internationally-known language and literacy scholar, was interested and willing to analyze the research. Here is what he discovered in reviewing the section on the Dallas “success”:
Comments on Fryer, R. 2011. Financial Incentives and Student Achievement: Evidence from Randomized Trials. Quarterly Journal of Economics 126(4): 1755-1798.
S. Krashen, March 1, 2012
It is not correct to assume that this study demonstrated that incentives work and that the effect is lasting. Fryer (2011) paid second graders in Dallas $2 for each book they read, provided they then passed the AR test on that book. Children were generally allowed to take each test only once, and had to score at least 80% correct to get credit. The duration of the study was one academic year. Students in the incentive program were compared to controls who were not in the program.
Incentives produced higher scores on the Iowa test for only one component, reading comprehension. Increases in vocabulary and language were not significant. When students were tested one year later, the effects were half the size of the original effects and not significant.
This is hardly an overwhelming victory for incentives.
There are three major problems with this study:
The students were second graders. Second graders are not always independent readers. The easiest Goosebumps, for example, is at the third grade level.
They didn’t read very much: the average student earned $13.81. At $2 a book, this means that the students who got incentives read and passed AR tests on fewer than seven books during the entire year. And these are books for second graders, which means none of them were massive tomes. Is it possible that the comparison students read even less? (See below.)
MOST SERIOUS. The incentive group did better than the comparison students on one subtest, but we must ask “compared to what?” What did the comparison students do? The real question is whether an AR program with financial rewards is better than a literature-based, print-rich program without incentives. Would children have done as well or better if they had just read the books, without taking tests and getting paid?
This is the major flaw of all AR research, as I have argued in my reviews of AR research (see citations below).
AR has four components: (1) access to books, (2) time to read books, (3) tests, and (4) rewards. The complete program is consistently compared to “traditional” instruction and is often (but not always) better. It is no surprise to see a program with all four components do better than one with none of them, but is this just because of the access to books and the time dedicated to reading? Did the tests and prizes add anything?
There has been no attempt to see if components (3) and (4) add anything, no attempt to compare (1,2,3,4) with just (1,2). There is overwhelming evidence that the combination of (1) and (2) is in fact enough to produce excellent results, superior to traditional programs (Krashen, 2004), but the AR people have shown no interest in testing this simpler hypothesis.
Summary: Five out of six results were not statistically significant. The one significant result could have been because of more reading, not because of the tests and financial rewards.
Krashen, S. 2002. Accelerated reader: Does it work? If so, why? School Libraries in Canada 22(2): 24-26, 44.
Krashen, S. 2003. The (lack of) experimental evidence supporting the use of accelerated reader. Journal of Children’s Literature 29 (2): 9, 16-30.
Krashen, S. 2004a. A comment on Accelerated Reader: The pot calls the kettle black. Journal of Adolescent and Adult Literacy 47(6): 444-445.
Krashen, S. 2004b. The Power of Reading. Portsmouth: Heinemann and Westport: Libraries Unlimited.
Krashen, S. 2005. Accelerated reader: Evidence still lacking. Knowledge Quest 33(3): 48-49.
Krashen, S. 2007. Accelerated reading: Once again, evidence still lacking. Knowledge Quest 36(1): 11-17.
Thanks to Dr. Krashen for identifying the flaws in this report…
Earlier today, I wrote a post titled Does Getting Better At Metacognition Physically Alter The Brain? In it, I described some interesting studies done on metacognition using MRIs.
I contacted the study’s author, Dr. Stephen Fleming, with a question, and he graciously responded very quickly. Here’s my email and his answer:
I’m a high school teacher in California, and write a blog with over 25,000 daily subscribers — mostly educators.
I’ve recently learned about your research on metacognition, and have posted on my blog about it.
Helping my students learn about the physical impact learning has on their brains has had an important effect on them. I saw that in your 2010 paper on metacognition, which I write about in my blog, you found that people with greater metacognitive ability had a more developed prefrontal cortex, but you weren’t sure whether it had developed because of their practice of metacognition or whether they were just born with it.
Since 2010, have you determined which it was? As I write in my post, it would be a great asset for teachers if we could help our students see that their brains actually change as they practice metacognition.
DR. FLEMING’S RESPONSE
Many thanks for your interest in our research, and for featuring our article on your blog.
Unfortunately we still do not know the answer to your question. There are two main challenges in carrying out this study. First, one would have to develop a reliable method for training metacognitive function in isolation of other changes in cognitive skill, such as decision-making, memory, etc. As yet I do not know of such a protocol, but would love to hear your ideas on this.
And second, longitudinal measures of brain structure and function would be required at different stages during the training. This is certainly feasible, but a caveat is that the field is still developing in its understanding of what different types of MRI measures mean for brain function. For example, we don’t know how the measure of structure we used in our paper (voxel-based morphometry) affects the functional properties of a particular brain region.
This would be a great study to carry out, and I would love to know the results!
In my own research, I am currently focussed on understanding the computations underlying metacognition at the individual level. Hopefully we can then use this knowledge to examine questions about differences between individuals.
So it looks like we’ll have to wait a while for the answer…
Thanks to Dr. Fleming for his gracious response!
I’ve posted a lot about the importance of metacognition, and how I try to help students recognize its importance and apply it.
The Wellcome Trust in the United Kingdom just published a report on a very interesting study on metacognition — Metacognition – I know (or don’t know) that I know.
It’s apparently one of the few studies done on the topic with MRIs. The researchers were able to link metacognition to a small part of the brain. Here’s the most interesting part of the report:
The findings, published in ‘Science’ in September 2010, linked the complex high-level process of metacognition to a small part of the brain. The study was the first to show that physical brain differences between people are linked to their level of self-awareness or metacognition.
Intriguingly, the anterior prefrontal cortex is also one of the few parts of the brain with anatomical properties that are unique to humans and fundamentally different from our closest relatives, the great apes. It seems introspection might be unique to humans.
“At this stage, we don’t know whether this area develops as we get better at reflecting on our thoughts, or whether people are better at introspection if their prefrontal cortex is more developed in the first place,” says Steve.
Boy, if scientists find that practicing metacognition physically alters the brain, that sure would be a great addition to my brain lessons (see The Best Resources For Showing Students That They Make Their Brain Stronger By Learning).
The study referenced in the report took place in 2010. I’ve contacted Dr. Fleming to see if he has developed any further conclusions since that time.