A mantra of government policy reporting in recent weeks is that policy is only following the science. The science recommended the lockdown, and the science will let us know when it can be relaxed, apparently. It tells us what tests need to be done for whom and what protective equipment is needed for whom. How exactly can it do this?
Consider personal protective equipment (PPE). If everyone working in a hospital wore the kind of positive pressure full body suit worn by researchers in the highest level of biosecurity laboratory (level 4), transmission of virus to NHS workers would be close to zero. Scientists and engineers have worked hard to make this the case. Does this mean that following the science requires equipping everyone in the NHS this way? Of course not. The expense would be enormous, many jobs would be difficult or impossible dressed in this cumbersome way, and many NHS workers are anyhow exposed to relatively low risks of coronavirus transmission.
Policy must be based on a mixture of goals, costs and facts. Leaving aside costs for the moment, a common idea is that policy-makers decide on the goals and scientists provide them with facts relevant to achieving those goals. Even this simple model immediately makes clear why one could not just follow the science. If the goal is to minimise the loss of life then scientists will try to discover how much social distancing and so on will continue to contribute to this end. If the goal is to minimise economic disruption, very different facts will be required. If, as is more likely, some balance will be struck between the importance of these policy goals, a more complex set of options will need to be provided by scientific experts. None of these options amounts to just “following the science”.
In reality the simple model just mentioned is hopelessly simplistic, and the relations between science and policy are much more intricate. Nothing, one might imagine, is more objective a fact than a death. But a death caused by coronavirus, something that one imagines policymakers would wish to prevent as far as possible, is another matter. It is likely that for anyone who dies infected with COVID-19, that infection made some contribution to their death. But while in many cases it will be overwhelmingly the most significant cause, in others it will be a minor factor. Some people would have died anyhow within a day, or a week, or a year and this, I suppose, might be relevant to how bad the contribution of COVID-19 will seem. So when scientists are asked to estimate how many people will die from the virus under certain policy conditions, how are they to define dying from the virus?
What this case points to is the fact that contrary to an assumption implicit in the simple model (policymakers decide the goals, scientists determine the relevant facts), “facts” are often inevitably impregnated with specific value judgements. There is no simple fact of how many people will die or have died from COVID-19, and how this fact is shaped must depend on some kind of negotiation between concerned parties.
Another problem that the simple model overlooks is that science is always more or less uncertain. I do not mean only to point out that science—like everything else—is fallible, but rather that uncertainty is an integral feature of many of the products of science. Consider disease testing. No test is perfect. Every test is liable to produce a certain proportion of false positives, people who are said to have a disease, but don’t, and false negatives, people who test negative despite having the disease. The probability of each of these occurrences is something that responsible scientists constructing a test will always try to assess as accurately as possible. It will often be possible for scientists constructing a test to trade off between these kinds of error, so they may need to know from policymakers, or at least from a different set of experts, what the relative harms are of each of these kinds of error. And if it proves impossible to reduce the error rate far enough, the test may actually be worse than useless. People often suppose that screening for cancer, say, is always a good idea. But false positives sometimes lead to invasive and dangerous further investigation, and can do harm that outweighs the good done by early detection of disease. Determining the value of a test requires complicated discussions between various experts with both biomedical and policy-related expertise.
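The arithmetic behind this point can be made concrete with Bayes' rule. The sketch below uses purely illustrative figures (the 95% sensitivity and specificity and the 1% prevalence are assumptions chosen for illustration, not data from any real test): even a test that is right 95% of the time produces mostly false positives when the disease it screens for is rare.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result reflects true disease, via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative numbers only: a test with 95% sensitivity and 95% specificity,
# applied to a population in which 1% actually have the disease.
ppv = positive_predictive_value(0.95, 0.95, 0.01)
print(f"{ppv:.1%}")  # about 16%: roughly five in six positives are false alarms
```

This is why a screening programme can be worse than useless: whether those false alarms, and whatever follows from them, outweigh the benefit of early detection is exactly the kind of question that cannot be settled by the test statistics alone.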
It is perhaps obvious by now that economics provides no fully objective way out of these problems. There is no economic way of assessing the costs of various policies to deal with a pandemic that does not attribute a value to a human life, perhaps different values to different human lives depending on age, or expected years of healthy life. Anyone who finds this unbearably distasteful must eschew economic evaluation of such a situation altogether. But equally, no one should suppose that imputing such a value is something that can be done merely by following the science. Those who still imagine that economics is a fully objective science might consider the problem of measuring inflation. Different goods undergo different changes in price and are bought by different groups of people. Which “basket” of goods should we use to measure aggregate changes in price? This matters deeply to pensioners or union negotiators, and different measures will affect their economic well-being. But no resolution can be found just from following the science.
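The basket problem can be shown in a few lines. The sketch below computes a simple Laspeyres-style price index under two different baskets; all the prices and spending weights are invented for illustration, but the structural point is real: the same price changes yield very different "inflation" depending on whose basket you weight by.

```python
def price_index(prices_then, prices_now, basket):
    """Cost of a fixed basket now, relative to its cost in the base period."""
    cost_then = sum(qty * prices_then[good] for good, qty in basket.items())
    cost_now = sum(qty * prices_now[good] for good, qty in basket.items())
    return cost_now / cost_then

# Purely illustrative prices: food rises sharply, electronics fall.
prices_2019 = {"food": 10.0, "rent": 500.0, "electronics": 300.0}
prices_2020 = {"food": 12.0, "rent": 510.0, "electronics": 270.0}

# A pensioner's spending is weighted towards food and rent...
pensioner_basket = {"food": 40, "rent": 1, "electronics": 0.2}
# ...while a younger household buys relatively more electronics.
younger_basket = {"food": 25, "rent": 1, "electronics": 1.5}

print(f"pensioner basket: {price_index(prices_2019, prices_2020, pensioner_basket):.4f}")
print(f"younger basket:   {price_index(prices_2019, prices_2020, younger_basket):.4f}")
```

With these made-up numbers the pensioner's basket shows roughly 8.75% inflation while the younger household's shows about 1.25%. Neither figure is "the" inflation rate; choosing between baskets is a value-laden decision, not a scientific discovery.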
The upshot of all this is that policy decisions of the kind that must be made in response to a major crisis such as the COVID-19 pandemic require discussions between a variety of different experts: virologists, epidemiologists and physicians of course; but also social scientists, bioethicists, political theorists and representatives of particularly affected groups. There is no solution that does not make assumptions about the kinds of questions important to all of these groups.
And this finally is what makes it so important that these decisions are made in an open and openly accountable way. One of the worst consequences of the myth that policy can just “follow the science” is that it suggests that sufficiently senior and respected scientists can simply tell us what science has discovered we should do. And we are used to the idea that science may be very hard for ordinary mortals to understand fully. But when policy is seen as a negotiation between a number of more or less factual and more or less normative perspectives, we should all want to see how these various perspectives and considerations have been weighed against each other. That, ultimately, is what we hope our politicians will do for us, and it is on that basis that we shall ultimately judge them to have done well or badly. They cannot simply decide the goals and let the scientists point the way to achieve them, because the goals and the science constantly change one another.
Thanks for this article. As a scientist I find it difficult to swallow this broad statement that keeps getting touted about of 'following the science'. The science is research, which can be interpreted in many ways, even though the data behind it may be as accurate as it can be. I feel as though the media likes to suppress those scientific articles suggesting something different from their narrative.
The only reasonable thing would be to at least make the public aware of the papers and findings that are directing the decisions being made. Peer review, questioning findings, and repeatable results are important aspects of research. If they don't share what is being found, how are people expected to be knowledgeable in their choices? Especially when I can find articles like: https://www.sciencedaily.com/releases/2019/09/190903134732.htm and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4420971/
Both of these show that mask wearing is more nuanced than media reports suggest. 97% of particles get through a cloth mask according to one of those studies, and the other shows that wearing an N95 may be no more effective than a normal surgical mask... So why the media uproar and guidance on wearing them? I'm picking on mask wearing because it's an easy one. The evidence (prior to COVID) fairly clearly suggests that cloth masks are next to useless. But there are some recent articles about how maybe they're helpful; I can't help but wonder if it's finding science to fit the policy rather than the other way around?
Astute observation that the intention of a study will shape the type of results it yields - the question at the heart of a study is so important and valuable that it will guide the conclusions taken from it. For instance: how far does a breath spread? vs how far can viable particles spread? vs what is the concentration of viable particles being spread? vs what is the effectiveness in practice of wearing a cloth mask in a certain environment in reducing contagion between individuals? If we took the first question as gospel it might give us a different answer from the last one as to what is more practical and valuable to us at the present time.
John, well-written piece. Though I'm far from being involved in science or policy, your blog certainly puts things in perspective for all to read, understand and implement.
Theory-ladenness of facts or observations is pretty generally accepted in the philosophy of science world these days and it would certainly be a good thing if people concerned with the application of science to real world problems thought about it a bit more.
It would be wonderful to hear about theory-ladenness--or even value-ladenness-- at the daily briefings, but I won't hold my breath!
John, I thought this was excellently put and very interesting.
It reminded me of something I read in Nelson Goodman's (1978) 'Ways of Worldmaking' years ago. He says, 'facts are small theories, and true theories are big facts'. He also quoted Hanson's (1958) ' Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science' in which he suggests that facts are 'theory-laden'.
Perhaps the government could discuss this at their daily briefings!