I was inspired to write this by something I read yesterday. Anyone who has been involved with public health policy understands the Precautionary Principle and really doesn’t need this primer, and since the principle is so often discussed in the context of mobile phones, I’d kind of hoped that some degree of understanding might have been absorbed by osmosis by the people involved in that debate. It seems I was wrong, and it continues to be invoked in the sense of “I think there’s a problem -> Invoke the Precautionary Principle and Stop it Now!!!”
It doesn’t work that way.
The Precautionary Principle is in fact a well-grounded legal tool, to the extent that in Europe, at least, one does not invoke it on a case-by-case basis but must always apply it. Applying it, though, doesn’t mean you always have to take action, and that’s an important distinction.
The PP (I’m going to call it that from now on because it’s easier to type) really is just common sense. It’s rooted in what we mean by “proof”: in science we commonly work to a 95% confidence level, which means we have to be pretty damn sure of something before we consider it “proven”. Now, that’s fine if we’re looking for gravitational waves from coalescing black holes (for the Higgs boson they went further and demanded 5-sigma), but it’s not really appropriate for environmental agents, where you’d want to be a bit more, well, cautious. There’s quite a good analogy in law: for a criminal conviction we look for “proven beyond reasonable doubt”, but in a civil case it’s the “balance of probabilities”, or “more-likely-than-not”.
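To put rough numbers on those three thresholds (my back-of-envelope comparison, not anything formal in the PP literature; the standards aren’t strictly commensurable, since the first is a probability that the claim is true while the other two are false-positive rates, but the orders of magnitude make the point):

$$
\underbrace{P(\text{claim true}) > 0.5}_{\text{civil standard}}
\qquad
\underbrace{p < 0.05}_{\text{``95\% confidence''}}
\qquad
\underbrace{p \approx 3 \times 10^{-7}}_{5\sigma\text{ (one-tailed)}}
$$

Demanding particle-physics certainty before acting on a suspected environmental hazard would set the bar roughly six orders of magnitude higher than the civil courts do; the PP simply says the civil end of that scale is the right place to start thinking.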
So under the PP, you’d consider taking action if it looks more likely than not that inaction could lead to significant harm, even without proof beyond reasonable doubt, or proof at 95% scientific confidence levels, or whatever. One subtlety is that the “balance of probabilities” shifts with evidence: if you’re pouring stuff into a lake that you know is toxic to fish, you shouldn’t wait until you start seeing dead fish to decide it’s more likely than not that you’re killing them. That’s a reasonable assumption based on what you already know about the stuff you’re pouring in; pour in enough of it and fish will very probably die. You hardly need the PP to tell you not to do it.

Where the PP properly comes into play is when you either don’t know much of anything (a new agent on the block) or you know enough to know that something is probably, but not definitely, a problem. The second is the easier case to start to deal with: it’s time to consider action. The first is actually quite difficult, because if you don’t know anything at all, what is the balance of probabilities? It then usually comes down to a pragmatic decision about outcomes.
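One way to make “the balance shifts with evidence” precise – my gloss, not anything in the formal PP literature – is Bayes’ theorem in odds form, where H is the hypothesis of harm and E is the evidence in hand:

$$
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
$$

In the lake example, what you already know about the chemical puts the prior odds past even money before any dead fish turn up; the evidence only multiplies odds that already favour harm. In the genuinely-unknown case the prior odds are close to undefined, which is exactly why it collapses into a pragmatic decision about outcomes. Either way, a balance that tips toward harm still only tells you to consider action – which leads to the second misunderstanding about the PP.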
The PP is not a blank cheque to take action to ban an agent or prevent an activity. If the balance of probabilities suggests that it’s time to do something, the something you do is take a look at what action would be appropriate. That means balancing the costs and benefits of any action against the costs (and potential benefits) of doing nothing. By “costs” here I don’t primarily mean financial costs (though they may be a factor) but any societal or personal detriment accruing from action or inaction.
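In stylised decision-theoretic terms – again my framing, not official PP doctrine – the requirement is nothing more exotic than picking the option with the least expected detriment, with “do nothing” explicitly on the menu:

$$
A^{*} \;=\; \underset{A \,\in\, \{\text{interventions}\} \,\cup\, \{\text{do nothing}\}}{\arg\min} \; \mathbb{E}\big[\,\text{detriment}(A)\,\big]
$$

where “detriment” bundles health, societal and (secondarily) financial harms. That’s quite hard to appreciate in the abstract unless you see a concrete example, so here is one.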
Across the years, and across many epidemiological studies, there has been an association between magnetic field exposure above 0.4 µT and an increased risk of childhood leukaemia. It’s not established as causative, it’s an association, but it seems quite robust. Largely as a result of this finding, IARC assigned a 2B category (“possible carcinogen”) to power frequency fields in 2002. Now, “possible” doesn’t meet the PP threshold of more-likely-than-not, so one wouldn’t expect a public health-driven change to the way we distribute or use electricity. Nevertheless, it makes for an interesting case study in “what if”: in the UK the SAGE process considered what remedial action would be appropriate if one took as axiomatic the doubling of childhood leukaemia risk indicated in the epidemiological studies. The answer was “almost nothing”. Some obvious measures emerged as no-brainers, like not routing cables over schools where an equal-cost alternative is available, and arranging the phases of conductor bundles to minimise fields, but actions such as moving existing lines or relocating the people who live under them turned out to be too hard to justify – not just in terms of financial cost but in terms of lives: people are injured and die (statistically) when major engineering works take place.
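To see why the sums come out that way, the standard population attributable fraction is instructive (my illustration, not SAGE’s actual calculation), where p is the fraction of children exposed above 0.4 µT and RR is the relative risk:

$$
\mathrm{PAF} \;=\; \frac{p\,(RR-1)}{1 + p\,(RR-1)}
\;\;\xrightarrow{\;RR\,=\,2\;}\;\;
\frac{p}{1+p} \;\approx\; p \quad \text{for small } p
$$

Only a small percentage of homes see average fields above 0.4 µT, so even granting the doubling of risk, the fraction of childhood leukaemia cases that could ever be prevented is of that same small order – and that modest ceiling is what the costs, disruption and construction casualties of major engineering works have to be weighed against.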
The original article that triggered me to write this claimed that mobile phones were harmful and that the precautionary principle should be applied. Now, I have no idea what the author had in mind by that, and I suspect he doesn’t really know either. But putting aside the fact that phones are not, on the balance of probabilities, a public health issue anyway, we should consider what the outcomes of any actions to restrict their use would be. Putting aside also the fact that most people would only give up their smartphone under duress – and you’d have to have some pretty damn good evidence to make them do it – we would be removing an often-essential communications tool, and that itself has public health consequences. Mobile phones are a literal lifeline for many, and a safety net for many others. I rely on one myself when cycling in remote rural areas, and we are increasingly seeing a shift to telemedicine and to the use of phones to deliver healthcare to isolated communities. Here in the UK we’re shifting our whole emergency services communications – fire, police, ambulance and so on – from a dedicated digital radio system to something piggy-backed on the mobile phone network. Reducing the efficacy of that would have serious consequences.
So in summary:
- Don’t be afraid of the PP, but don’t use it as a blunt instrument either.
- It’s not there to be invoked on a case-by-case basis; you should always think about what it requires.
- The PP suggests action only when the balance of probabilities weighs in that direction.
- The action that the PP requires is not the automatic introduction of control measures, but rather a consideration of what is appropriate, weighing the risks and benefits of action against those of inaction.