Artificial intelligence (AI) in healthcare and research
Bioethics Briefing Note
Limits of AI
AI depends on digital data, so inconsistencies in the availability and quality of data restrict its potential. Significant computing power is also required to analyse large and complex data sets. While many are enthusiastic about the possible uses of AI in the NHS, others point to practical challenges, such as the fact that medical records are not consistently digitised across the NHS, and the lack of interoperability and standardisation in NHS IT systems, digital record keeping, and data labelling. There are also questions about the extent to which patients and doctors are comfortable with digital sharing of personal health data.
Humans have attributes that AI systems might not be able to authentically possess, such as compassion. Clinical practice often involves complex judgments and abilities that AI currently is unable to replicate, such as contextual knowledge and the ability to read social cues. There is also debate about whether some human knowledge is tacit and cannot be taught.* Claims that AI will be able to display autonomy have been questioned on the grounds that this is a property essential to being human and by definition cannot be held by a machine.
*Wachter R (2015) The digital doctor: hope, hype and harm at the dawn of medicine’s computer age. An example of the risks posed by hard-to-detect software errors in healthcare is the Therac-25 scandal of 1985–87, when faulty computerised radiation therapy equipment delivered accidental overdoses to patients in Canada and the US, several of them fatal.