An Earful on the Science and Policy of Risk

Wednesday, December 26th, 2007

The New York Times reports that Europe is gradually approaching the point of decision on whether to allow genetically modified corn. Because of WTO rules on such things, the EU is required to base its policy on “science” if it wants to keep out mutant corn from the US. This reflects a deep, deep misunderstanding of what science can contribute to policy.

Let’s start with the basics. There are two sorts of error we can make in this uncertain world, Type I (the risk of believing something to be the case when it is not) and Type II (the risk of not believing something to be the case when it is). Science is, among other things, a human enterprise organized around the systematic minimization of Type I error. Experimental protocols are about this, and so are the conventions we follow in determining statistical significance. This obsession comes at the cost of permitting greater Type II error, but that’s OK. Science operates on the basis of a vast division of labor, where each scientist’s work depends on the reliability of the methods and results carried over from what others have done. One false conclusion, if not noticed in time, could invalidate the efforts of an entire research community. This is why a serious Type I error is a potential career ender, whereas an avoidable Type II glitch simply diminishes a researcher’s list of accomplishments.
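
To make the asymmetry concrete, here is a toy simulation, a sketch of my own and not anything from the literature: the sample size of 20, the true effect of 0.3, and the choice of a t-test are all arbitrary assumptions, but they show how the conventional 5% significance threshold pins down the Type I rate while leaving the Type II rate free to float much higher.

```python
# Toy sketch: how a 5% significance convention caps Type I error
# while tolerating a much larger Type II error. Sample size, effect
# size, and the t-test itself are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # the conventional threshold: caps Type I error
trials = 10_000

false_positives = 0   # Type I: "finding" an effect when none exists
false_negatives = 0   # Type II: missing a real effect

for _ in range(trials):
    # Null is true: the mean really is 0
    null_sample = rng.normal(loc=0.0, scale=1.0, size=20)
    if stats.ttest_1samp(null_sample, 0.0).pvalue < alpha:
        false_positives += 1

    # Null is false: a small real effect of 0.3
    effect_sample = rng.normal(loc=0.3, scale=1.0, size=20)
    if stats.ttest_1samp(effect_sample, 0.0).pvalue >= alpha:
        false_negatives += 1

print(f"Type I rate:  {false_positives / trials:.3f}")  # pinned near alpha
print(f"Type II rate: {false_negatives / trials:.3f}")  # floats far higher
```

With these made-up numbers the false-positive rate hugs 5% while the miss rate runs somewhere around 75%; the asymmetry is built into the convention itself.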

This single-minded insistence on avoiding Type I error is the reason why science is the one truly progressive human activity. Today’s science is better than yesterday’s, and tomorrow’s will be better than today’s. You can’t say this about poetry or politics.

(It is also, in the end, why economics is not a “real” science: it is no big deal for an economist to claim something to be true and to later discover that it isn’t.)

Policy, on the other hand, has to take Type II error as seriously as Type I. Take the Bt corn case before the EU, for example. It is a problem if regulators falsely think Bt corn is dangerous and ban it, but it is also a problem if they falsely think it is safe and allow it to be used. A reasonable first cut is the standard cost-benefit approach: weigh each sort of error by its probability and its cost. Thus the expected cost of banning Bt corn is the probability of Type I error (falsely believing it to be harmful) times the economic cost of forgoing the technology, whereas the expected cost of not banning it is the probability of Type II error times the cost of the damage the corn would do in that case. You go for the lowest-cost option.
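
Spelled out as a sketch, the rule is just a comparison of two expected values. The probabilities and money figures below are placeholders I made up for illustration, not estimates for Bt corn.

```python
# A minimal sketch of the expected-cost rule described above.
# All probabilities and costs are invented placeholders.

def expected_cost_of_ban(p_type1: float, forgone_benefits: float) -> float:
    """Expected cost of banning: the chance the ban is a mistake
    (Type I: the corn was actually safe) times the benefits given up."""
    return p_type1 * forgone_benefits

def expected_cost_of_approval(p_type2: float, harm_if_dangerous: float) -> float:
    """Expected cost of approving: the chance the approval is a mistake
    (Type II: the corn was actually harmful) times the damage done."""
    return p_type2 * harm_if_dangerous

ban_cost = expected_cost_of_ban(p_type1=0.7, forgone_benefits=2e9)             # 1.4e9
approve_cost = expected_cost_of_approval(p_type2=0.1, harm_if_dangerous=10e9)  # 1.0e9

print(f"E[cost | ban]     = {ban_cost:.2e}")
print(f"E[cost | approve] = {approve_cost:.2e}")
print("Lowest-cost option:", "ban" if ban_cost < approve_cost else "approve")
```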

(There is an even better approach, as I argued here, based on the fullest possible utilization of information.)

The difference should be obvious. Science is radically asymmetric in the way it treats uncertainty: avoiding a false positive is everything. Policy is more balanced: failure to see is potentially as harmful as seeing what isn’t there. If you happen to be the sort of person, as I am, who thinks environmental risks are particularly important to avoid, you might tilt the policy calculus on issues like Bt corn toward less Type II error, even at the expense of more Type I.
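
Continuing the made-up numbers from the sketch above, that tilt can be expressed as a weight greater than one on the harm side of the comparison. The weight is an invented knob for illustration, nothing more.

```python
# A sketch of the "precautionary tilt": inflate the expected harm of
# approval by a weight w > 1 before comparing, so that at the margin
# more products are kept off the market. The weight is a made-up knob.

def tilted_decision(ban_cost: float, approve_cost: float, w: float = 3.0) -> str:
    """Return 'ban' or 'approve' after weighting the Type II side by w."""
    return "ban" if ban_cost < w * approve_cost else "approve"

# With the placeholder figures above (ban_cost = 1.4e9,
# approve_cost = 1.0e9), the untilted rule said "approve";
# a weight of 3 on the harm side flips it to "ban".
print(tilted_decision(1.4e9, 1.0e9))  # -> "ban"
```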

Science has one job to do. Policy has another. They follow different rules.