From the National Bureau of Economic Research, hat tip to Inside Higher Ed, an indication that overtesting is no bueno.
Ian Fillmore and Devin G. Pope of the University of Chicago studied student performance on the AP exam and found:
. . . strong evidence that a shorter amount of time between exams is associated with lower scores, particularly on the second exam. Our estimates suggest that students who take exams with 10 days of separation are 8% more likely to pass both exams than students who take the same two exams with only 1 day of separation.
This is of particular interest to me for a variety of reasons. Since the passage of NCLB, testing in grades 2-12 seems to occur at an astonishing frequency. Not only are there state tests in ELA and math and, in some grades, social studies and science, but there are usually interim district tests (benchmarks, call them what you will) administered once or more per quarter in both ELA and math, along with the classroom teacher’s tests and quizzes in every content area. Then there are supplementary tests administered in programs such as Accelerated Reader (please don’t consider this mention an endorsement, more on this later).
Testing is not instruction. It seems obvious, but it needs to be said. When kids are being tested, they’re not learning.
If you asked why all the tests, teachers and district personnel would say that they need to test in order to find out whether kids are learning. Which might be true if they weren’t testing quite so much.
The more testing, the less instruction, the more homework. The burden of instruction is offloaded onto the children. They’re supposed to be teaching themselves. This, in spite of a growing body of research that tells us how ineffective homework is:
The results of national and international exams raise further doubts. One of many examples is an analysis of 1994 and 1999 Trends in Mathematics and Science Study (TIMSS) data from 50 countries. Researchers David Baker and Gerald LeTendre were scarcely able to conceal their surprise when they published their results last year: “Not only did we fail to find any positive relationships,” but “the overall correlations between national average student achievement and national averages in [amount of homework assigned] are all negative.”
(No one likes hearing that about homework. A teacher I know tells me that when she assigns less homework, parents complain. They worry that their kids aren’t working hard enough. As a parent, I was often astounded by the amount of homework expected of my children. Clearly no teacher ever sat down and worked his or her way through the material, or the teacher would have discovered that the time on task was excessive.)
Not to mention the other obvious problems with such a scheme–I mean, have you ever launched some ambitious self-study program? To muster up the wherewithal is daunting enough for a grown-up of strong will, and yet, we expect this of a child who 1) lacks the body of knowledge and skills required for such self-study and 2) has yet to develop that kind of self-discipline.
What’s sad is that the overtesting deprives kids of the joy of demonstrating what they’ve learned. When teaching is sound and kids are learning, they can’t wait to show you what they know. That’s when we know that the instruction is working.
Fillmore, Ian, and Devin G. Pope. “The Impact of Time Between Cognitive Tasks on Performance: Evidence from Advanced Placement Exams.” NBER. National Bureau of Economic Research, Oct. 2012. Web. 09 Oct. 2012.
Kohn, Alfie. “The Truth About Homework.” Education Week, 6 Sept. 2006. Web. 09 Oct. 2012.
“The Impact of Time Between Tests.” Inside Higher Ed, 9 Oct. 2012. Web. 09 Oct. 2012.
UPDATE: fixed a bad copy-cut-paste.