One place in which social epistemologists tracking contemporary manifestations of the ‘two cultures’ problem can still find rich material is the encounter between biological scientists and the ‘security community’. And one of the most recent sites for this encounter is the security implications of genome editing. This was the subject of an international workshop last week organised by the InterAcademy Partnership, the European Academies Science Advisory Council, the German National Academy of Sciences (Leopoldina), and the US National Academies of Sciences, Engineering and Medicine, and hosted by the Volkswagen Foundation.

Genome editing – and particularly the CRISPR-type systems that have emerged since 2012 – has been identified as a potentially transformative technology in almost every field of potential application and, not surprisingly therefore, in national security. We considered this in last year's Nuffield Council report Genome Editing: an ethical review, although our assessment was circumspect. For the last couple of years, in fact, security implications have been an obligatory (if oblique) reference in most policy-orientated reports about genome editing and, in return, genome editing has had a placeholder in most national security risk registers, albeit without a substantial debate taking shape in the public sphere. The gambit that threatened to change that was played by the former US Director of National Intelligence, James Clapper, in testimony accompanying the 2016 Worldwide Threat Assessment of the US Intelligence Community report. Although the short paragraph on genome editing was itself fairly measured (if one can set aside the inexplicable reference to the furore over “unregulated editing of the human germline… which might create inheritable genetic changes”), what piqued people's interest was probably the fact that these nine lines were included in a section headed “WEAPONS OF MASS DESTRUCTION AND PROLIFERATION”. Although most commentators rated this intervention as hyperbolic, the curve of attention nevertheless began to rise.

Last week’s meeting, held at the Herrenhausen Palace in Hannover, Germany, brought together biological scientists, security researchers and policy officials, including representatives of the major agencies of key national governments and international institutions. The audience, then, was mixed but, in keeping with the backgrounds of the main organisers, had a high proportion of scientists.[i] The meat of the workshop involved plenary presentations and breakout sessions on, first, assessing and, then, addressing the security implications of genome editing in four domains: human cells, agriculture, ecosystems (gene drives) and microorganisms. The outcomes of the breakout sessions were reported back along with the findings of a policy 'mini-hackathon', organised by the Global Young Academy working group on DIY SynBio, held alongside the main meeting. A further plenary session considered questions of public communication and engagement, and it was here that, in the plenary context at least, the differences in framing really emerged.

In his opening address, Volker ter Meulen, the IAP Chair, began by admonishing his fellow scientists in something like the following terms: "We have to take responsibility for the reasonably foreseeable consequences of our work." In the context of the meeting, his words became something of a trope. They were prayed in aid of extenuation in one of the breakout sessions (‘we’ can't reasonably foresee or control the consequences of our work) but resurfaced later as an injunction during the discussion of scientific research culture in the second plenary (‘we’ have an obligation to reason and to foresee). In the final plenary the ambiguity of the scientists’ role became apparent: ‘we’ must reason and foresee as scientists but also as members of a society to which science affords the benefits of its knowledge and which, by the fact of being a society, enjoins that ‘we’ reason together with others about science.

In an interdisciplinary meeting of this sort – particularly of this sort – the ‘we’ was bound to be difficult to configure, and it was one of the several ambiguities that marked out the interior limits of discourse over the three days. By hypothesis, the definition of ‘we scientists’ is complicated by the nature of some genome editing systems themselves, by their alleged accessibility and ease of use. At the beginning of recombinant DNA technology, in 1975, it was probably possible to get most of the world’s potential users of the technique into a single Californian conference centre. With genome editing (and especially CRISPR-type systems) this is clearly no longer the case. One of the questions that bubbled away beneath the meeting was quite how accessible genome editing will be: what knowledge, skills, facilities and resources are required to deploy it? (The suggestion that it might become so refined that ‘any idiot’ could do it was aired and disputed during the discussions.) Whatever the final limits, it seems quite clear that the range of potential users extends well beyond those who form a coherent educated, socialised, institutionalised and idealised ‘scientific community’ (or even a ‘republic of science’).[ii] This has consequences for any claim about the sufficiency of self-regulation. The discussion also drew attention to the lack of homogeneity of research culture. (In one session, Abhi Veerakumarasivam drew attention to a diversity of values and cultural factors that were not mutually transparent, and Indira Nath and Ursula Jenal raised the issue of perverse incentives.) We know that the conditions under which many scientists work – the competitive pursuit of funding, career progression, publications and patents – can lead to unintended consequences that threaten the quality and integrity of science. When we researched this at the Nuffield Council we found that the problem was widely recognised by researchers but not owned by any of them: a situation of ‘organised irresponsibility’, where responsibility is diffused around a system but not vested in any one place.[iii] So the notion of a ‘scientific community’ (a Wissenschaftsgemeinschaft) has both external margins and internal margins with which to contend.

On the other hand, it seemed to me that we had been very quick to describe genome editing as a ‘technology’ (as in the title of the workshop). I, for one, am not sure that we have any genome editing technologies as yet. Perhaps they are emerging in animal husbandry and vector/pest control, but the description remains problematic: realised technologies (which we described, in Emerging Biotechnologies, as productive assemblages of knowledges, practices, products and applications) require much more than just a promising scientific technique. The insect vector control strategies discussed, for example, require skills with gene drives and genome editing along with breeding capability and genomics knowledge; and they require sites of application with favourable economic, geographical, institutional, jurisdictional, climatic and meteorological factors (among others). Indeed, the tenor of many of the contributions at the meeting was to suggest that we were engaging in a kind of ‘genome editing exceptionalism’, when genome editing in fact added little or nothing of significance to the security risks of existing techniques (to gene drives, for example). As one might expect, there was a strong push from the scientists for regulation to be applied to genome editing products rather than to processes of production (although the position of ‘technologies’, whether as products or processes, remained problematic).

A further ambiguity concerned the role of evidence and foresight. Returning to the maxim with which Volker ter Meulen sought to enjoin participation in the workshop, the ‘security’ focus must always seem exorbitant: how do we reason about the unforeseeable and foresee the unreasonable? In the set-up of the workshop there was an emphasis on working from evidence, while for the ‘security community’ simply extrapolating from known risks is to ignore important sources of threat. Thus it was that the meeting proceeded, with the biologists and security wonks practising insouciance and archness by turns. The scientists accused the wonks of a lack of understanding; the wonks accused the scientists of a lack of imagination. Nevertheless, this cannot – must not – have been futile and, indeed, it wasn’t. An achievement of the meeting may turn out to have been to begin the formation of a network, to identify some key nodes and interstices, within which discussions of the security implications of genome editing can take some sort of shape, albeit one that still needs to find a mood or mode of discourse in which it can engage comfortably.[iv]

[i] I had the privilege of being a member of the organising committee for this meeting, although in this respect I was an outlier.

[ii] In “The Republic of Science”, Michael Polanyi (brother of the economist, Karl), who was concerned with the relation between scientific freedom and the public good, wrote: “The more widely the republic of science extends over the globe, the more numerous become its members in each country and the greater the material resources at its command, the more clearly emerges the need for a strong and effective scientific authority to reign over this republic.” Polanyi, M (1962) Minerva 1(1), pp. 54-73.

[iii] This term (Die organisierte Unverantwortlichkeit) was coined by the sociologist Ulrich Beck; see: Ecological Politics in the Age of Risk (Cambridge: Polity Press, 1995).

[iv] A place where this can happen would be something like security studies of science and technology, of which Filippa Lentzos, who wrote our 2015 Nuffield Council background paper on Dual Use in Biology and Biomedicine and spoke at the Hannover meeting, is a notable exponent. But this needs to find its reflected image in the domain of policy, and this image, or this reflection, was not in evidence at the workshop.
