Are some risks just too big to take?
As human capability reaches the point where we think we can remould the fundamentals of nature itself, what's guiding us - and how can we avoid becoming the architects of our own extinction?
Dr Rupert Read, Reader in Philosophy at the University of East Anglia, argues for the Precautionary Principle: an idea that some claim is limiting and anti-‘progressive’, while others consider it the only logical way to prevent global catastrophe.
What is the Precautionary Principle?
The Precautionary Principle states that where any action has the potential to cause widespread harm, the burden of proof concerning the absence of harm falls on those advocating the action. While often exercised in the areas of science and technology – particularly those, like genetic engineering and climate science, which have an innate potential for catastrophe – it can also be applied to financial regulation, public health and the highest levels of government policy.
Together with Dr Nassim Nicholas Taleb, Distinguished Professor of Risk Engineering at the New York University Tandon School of Engineering, and others, Dr Read has developed the Precautionary Principle into something akin to a logical position. In the paper The Precautionary Principle (with Application to the Genetic Modification of Organisms), Dr Read and co-authors place it into a statistical and risk-analysis framework.
In the paper, the domain of the Precautionary Principle is limited to situations in which risk is "fat-tailed" (essentially, unpredictable and difficult to mitigate) and "systemic" (as opposed to localised). No matter how small the risk, any action that carries the risk of "ruin" – defined as harm with far-reaching and permanent consequences, such as complete environmental collapse – cannot and must not be allowed.
The rationale for this is that when we deal with ruin, the usual cost-benefit analyses associated with risk management cease to make sense. A policy of allowing risks with ruinous potential "sometimes", however rarely, guarantees destruction because any probability becomes a certainty over a long enough timeline.
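The arithmetic behind this claim can be made concrete. The sketch below is purely illustrative (the function name and the figures are mine, not the paper's): it computes the chance that ruin occurs at least once over repeated independent exposures, each carrying some small per-period probability.

```python
# Illustrative sketch: how a tiny per-period probability of ruin
# compounds toward near-certainty over repeated exposures.
def prob_of_ruin(p_per_period: float, periods: int) -> float:
    """Probability that ruin occurs at least once across `periods`
    independent exposures, each with probability `p_per_period`."""
    return 1 - (1 - p_per_period) ** periods

# A 0.1% chance of ruin per period looks negligible in isolation...
print(round(prob_of_ruin(0.001, 1), 4))      # 0.001
# ...but over 1,000 periods ruin becomes more likely than not...
print(round(prob_of_ruin(0.001, 1000), 4))   # 0.6323
# ...and over 10,000 periods it is effectively guaranteed.
print(round(prob_of_ruin(0.001, 10000), 4))
```

Because ruin is by definition irreversible, the survival term (1 − p)ⁿ is the only quantity that matters, and it decays towards zero no matter how small p is.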
Any gains from such a policy are, therefore, cancelled out due to what Dr Read calls "the value placed on a future that ceases to exist": for the destruction of the future would, obviously, far more than cancel out any benefits from an action or policy. Thus there is an asymmetry between benefits and harms: sufficiently grave harms must be weighted much more heavily than claimed benefits.
The advocates of any ruin-potential action must, then, show that the risk of ruin is zero before that action can be taken, provided that there is an alternative possible course of action which does not expose one to such existential risk. While the Precautionary Principle is often interpreted as a way of managing computable risks, this formulation of it is something more fundamental, and it is increasingly relevant in an era when the gap between the human ability to affect the world on the one hand and human understanding on the other is greater than ever.
Applying the Precautionary Principle
One area in which Dr Read believes there is an urgent need to apply the Precautionary Principle is the controversial issue of genetically modified organisms (GMOs). This is because Dr Read believes that they pose a distinct systemic risk: in his view, GMOs have the potential to spread uncontrollably, cross-breeding with wild plants until their genetics become irreversibly entangled with the ecosystem. The effects of this, together with their long-term impact on human health, remain unknown.
As Dr Read considers the risk to be potentially catastrophic, he argues that the burden of proof that GMOs will not cause widespread ecological damage rests with those who would advocate their use. In his opinion, all the stated benefits of GMOs – alleged increased crop yields, resistance to disease and tolerance for extreme environments – become moot until their safety can be proven beyond reasonable doubt.
Taleb and Read argue that the evidence assembled so far for the safety of GMOs is itself moot, because, while often perfectly good on its own terms, it is simply statistically insignificant compared to what would be needed to show safety. For catastrophic ‘black swan’ events are by definition exceedingly rare: showing that some crop has been grown without harm for a period of (say) ten years is a drop in the ocean compared to the timescales on which nature operates.
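One way to see why a short harm-free record proves so little is the statisticians' "rule of three": if no adverse event has been observed in n independent trials, the per-trial event probability can still, at roughly 95% confidence, be as high as 3/n. The sketch below is a hypothetical illustration of mine, not a calculation from Taleb and Read's paper.

```python
# Illustrative sketch using the "rule of three": after n event-free
# observations, an approximate 95% upper confidence bound on the
# per-observation event probability is 3/n.
def rule_of_three_upper_bound(event_free_years: int) -> float:
    """Approximate 95% upper bound on the yearly event probability,
    given this many years with no observed harm."""
    return 3.0 / event_free_years

# Ten harm-free years cannot rule out a yearly catastrophe
# probability as high as 30%:
print(rule_of_three_upper_bound(10))   # 0.3
# Even a century of clean data leaves room for a 3% yearly risk:
print(rule_of_three_upper_bound(100))  # 0.03
```

Combined with the compounding of small probabilities over ecological timescales, a decade of uneventful cultivation simply cannot bound the risk of a rare catastrophic event.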
Dr Read's views on GMOs are not universally held - in fact, there is a tendency for both the pro- and anti-GMO sides of the debate to claim there is a lack of evidence to support the opposing view.
The point of the Precautionary Principle is precisely to enable that debate to move on beyond rival claims about extant evidence, and to develop into a more mature reflection upon how the absence of evidence of harm does not equate to evidence of absence of harm, and upon how long-termist a view one needs to take if one is to be relatively confident that one isn't accidentally creating a situation that may well prove to be harmful.
Moreover, this leads to a broader aspect of the Precautionary Principle. It is not simply a call for more evidence and more rigorous testing before any major, top-down re-engineering of the world around us can take place – in fact, the false certainty that can stem from this approach is considered almost as dangerous as rushing in blind.
Rather, it argues that we should favour the creation of systems that are resistant to systemic failure in the first place – ones that have internal barriers and boundaries that prevent localised shocks from propagating into widespread disasters. Somewhat paradoxically, we avoid ruin only when we treat it as if it were an inevitability, and look to build an alternative system that can pre-empt that inevitability.
This aspect neatly links the Precautionary Principle to less tangible, but no less critical areas such as the economy. Take the global financial crisis: though it had various specific causes, the interdependence and homogeneity of the world's markets were above all what caused a number of local banking failures in 2007-08 to snowball into a global crisis. Financial gigantism and 'monoculture' fragilised a vital system on which we all depend.
Worse, the risks – subprime lending and the growing housing bubble, among many others – were known and, at the time, tolerated as acceptable. In retrospect, we know that the financial system came extremely close to ruin, and could do so again. As Dr Read would argue, to think we have already learned adequately from the mistakes made and will manage the risks better next time is practically to invite another similar crisis, or worse.
A precautionary approach would shift the burden of proof to those seeking to take increased risks (e.g. those wishing to create exotic financial products), rather than those trying to reduce exposure. In doing so, and by striving to ensure that the effects of any risks are localised and not systemic, new structures can emerge or be created that are more resilient to ruin.
The Precautionary Principle in policy
The Precautionary Principle already exists in public policy: it was an important guiding principle of the World Charter for Nature adopted by United Nations member states in 1982, and is integrated into a number of international treaties.
While it currently tends to pertain to environmental issues – particularly anthropogenic climate change and conservation – Dr Read believes there is scope for it to be implemented in a number of other areas. He is currently working with organisations including the National Audit Office, the Bank of England's Prudential Regulation Authority, and with members of the Westminster and European Parliaments to consider the precautionary approach in their policy and regulatory frameworks.