There have been some interesting developments around the ethics and governance of artificial intelligence (AI) in recent days. First we read that Google’s DeepMind has set up an Ethics and Society research unit, with the rationale that “AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work. … We are committed to deep research into ethical and social questions, the inclusion of many voices, and ongoing critical reflection.” The unit has a number of Fellows (‘independent advisors’), including Oxford’s Nick Bostrom, to “help provide oversight, critical feedback and guidance for our research strategy and work program”.
Meanwhile, a fringe event on AI at the Tory Party conference was attended by Digital Minister Matt Hancock and Damian Collins MP, Chair of the Commons’ Digital, Culture, Media and Sport Committee. Addressing the meeting, Antony Walker, Deputy CEO of techUK, referred to the Nuffield Council on Bioethics and suggested that “a similar institution should be set up to inform and drive public debate around the use of data and AI. This would provide a forum but also an independent and expert body to ensure that academics, businesses and policymakers all have access to the best, impartial information on norms and standards around AI development.” The most interesting thing about this contribution is that it suggests industry would welcome some independent advice and, presumably, governance.
This discussion is not new, of course. Elon Musk, not known to be afraid of technology development, has been expressing his anxieties about AI, and the need for governance and regulation, for quite a while. And the Commons Science and Technology Committee published a report on Robotics and AI exactly twelve months ago. It proposed “a standing Commission on Artificial Intelligence be established … to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression.” The Government responded to the Committee in December 2016 by noting that “The Royal Society is currently examining the implications of Machine Learning, alongside the Royal Society and British Academy work on Data Governance. These projects aim to develop recommendations for data governance arrangements, including ensuring the UK remains a world leader in the use and governance of artificial intelligence.”
The Royal Society published its report on Machine Learning in April 2017 and, with the British Academy, a report on Data Management and Use in June 2017. Where to take these issues next remains the subject of lively debate, with contributors from all sectors. I should just add that the use of person-related data and the use of AI are distinct questions with different sets of issues and concerns, but it will be difficult to keep them apart.
So what of the recent moves at DeepMind and the techUK proposal? If techUK is seriously interested in having the benefit of an independent advisory body, DeepMind’s unit is not going to provide it. As a research unit it will no doubt be an important source of knowledge and information, but being internal to DeepMind (itself part of Google) it will not have the independence that is essential for building public confidence. It is flattering that techUK should look to the Nuffield Council on Bioethics as a model institution. Our remit, of course, is limited to bio-related developments, and whilst AI will have applications in bio-fields (on which we will maintain a watching brief), the scope of AI and data use goes much wider than that. I’m sure techUK will also be watching for the outcomes of the work that the Royal Society and the British Academy are pursuing, along with partners including the Nuffield Foundation (one of our funders). That, in my view, is where the ethical and expert advisory structures are more likely to emerge. Government would no doubt welcome this, with others doing the heavy lifting, but a further question is whether the Government will also look to bring in some more hard-edged regulatory systems for data and AI. The Conservative Party Manifesto promised a ‘framework for data ethics’, saying that “we will institute an expert Data Use and Ethics Commission to advise regulators and parliament on the nature of data use and how best to prevent its abuse.” Whether this remains the Government’s intention is an open question, given its other current preoccupations and the number of manifesto pledges that have already been sidelined.
This still leaves the question of how the wider public plays into the issue. When we published our report on biological and health data in 2015, we stressed the need for transparency and public participation, warning that projects that fail to take account of people’s preferences and values may continue to be challenged and to fall short of public confidence, even where they could deliver significant public good. The problems that arose with the care.data initiative, and with the DeepMind collaboration with the Royal Free hospital, illustrated that very clearly. Those cases, and our report, were specifically about the use of data in biological research and healthcare, but in developing the technology, the applications and the governance arrangements for AI and wider data use, it is increasingly clear that the public should not be left out of the conversation.
The future of AI looks both great and scary.