Changing Incentives Creates Security Risks
One of the things I am writing about in my new book is how security equilibriums change. They often change because of technology, but they sometimes change because of incentives.
An interesting example of this is the recent scandal in the Washington, DC, public school system over teachers changing their students’ test answers.
In the U.S., under the No Child Left Behind Act, students have to pass certain tests; otherwise, schools are penalized. In the District of Columbia, things went further. Michelle Rhee, chancellor of the public school system from 2007 to 2010, offered teachers $8,000 bonuses—and threatened them with termination—for improving test scores. Scores did increase significantly during the period, and the schools were held up as examples of how incentives affect teaching behavior.
It turns out that many of those score increases were faked: teachers cheated by changing their students’ wrong answers to correct ones after the tests were collected. That’s how the cheating was discovered. Researchers examined the actual test papers and found far more erasures than usual, and many more erasures from wrong answers to correct ones than could be explained by anything other than deliberate manipulation.
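The detection method described above amounts to an outlier test on erasure counts. Here is a minimal sketch of the idea; the actual analysis and data are not in the post, so the classroom names, counts, and the z-score threshold below are all invented for illustration:

```python
from statistics import mean, stdev

def flag_outliers(erasures, threshold=2.0):
    """Flag classrooms whose wrong-to-right erasure counts sit
    improbably far above the district average (simple z-score test)."""
    mu = mean(erasures.values())
    sigma = stdev(erasures.values())
    return {room: (count - mu) / sigma
            for room, count in erasures.items()
            if sigma > 0 and (count - mu) / sigma > threshold}

# Hypothetical wrong-to-right erasure counts per classroom.
counts = {"Room 101": 4, "Room 102": 6, "Room 103": 5,
          "Room 104": 3, "Room 105": 38, "Room 106": 5}
print(flag_outliers(counts))  # only Room 105 stands out
```

A real forensic analysis would be more careful (per-student baselines, multiple testing corrections), but the principle is the same: honest erasures are common and roughly random, while systematic wrong-to-right changes concentrate in a few classrooms.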
Teachers were always able to manipulate their students’ test answers, but before, there wasn’t much incentive to do so. With Rhee’s changes, there was a much greater incentive to cheat.
The point is that whatever security measures prevented teacher cheating before the financial incentives and threats of firing were no longer sufficient afterwards. By significantly increasing the cost of cooperation (teachers of poorly performing students risked being fired) and the benefit of defection ($8,000 bonuses), Rhee created a security risk. She should have increased security measures to restore balance to those incentives.
This is not isolated to DC. It has happened elsewhere as well.
tobias d. robison • April 14, 2011 6:55 AM
Bruce,
I believe this particular cheating problem is an instance of an interesting general one: you can use a score to measure something only if the score is not also tied to some side purpose involving financial gain. My favorite example is chess ratings. These are obtained objectively by statistical analysis of win/loss results, and they can be highly accurate. Their accuracy has been compromised by tournament prizes. For example, if a tournament offers a desirable prize for the best result by a player rated under 2,000, it is well known that some players will deliberately lower their ratings below 2,000 before the tournament in order to qualify.
tobias d. robison
http://precision-blogging.blogspot.com
http://ravensGift.com
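The rating manipulation the commenter describes falls out directly from the standard Elo update rule. A minimal sketch, with illustrative ratings and an assumed K-factor of 20 (real federations use more elaborate variants):

```python
def elo_update(rating_a, rating_b, score_a, k=20):
    """Return player A's new Elo rating after one game against player B.
    score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    return rating_a + k * (score_a - expected_a)

# A 2010-rated player who deliberately loses to weaker opponents
# can drop below a 2,000 prize cutoff in just a few games.
r = 2010
for _ in range(3):
    r = elo_update(r, 1800, 0)  # throw a game to an 1800-rated player
print(round(r))  # now under 2,000
```

Because losses to lower-rated opponents cost the most points, a sandbagger sheds rating quickly, which is exactly why the prize cutoff distorts an otherwise accurate statistical measure.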