Artificial intelligence (AI) in healthcare and research

Policy Briefing

Published 15/05/2018

Ethical and social issues

Many ethical and social issues raised by AI overlap with those raised by data use, automation, and reliance on technology more broadly, as well as with issues that arise from the use of assistive technologies and ‘telehealth’.

Reliability and safety

Reliability and safety are key issues where AI is used to control equipment, deliver treatment, or make decisions in healthcare. AI could make errors and, if an error is difficult to detect or has knock-on effects, this could have serious implications. In one widely cited example reported in 2015, a machine-learning model trained to predict which pneumonia patients were likely to develop complications, and therefore should be hospitalised, erroneously indicated that patients with asthma could be sent home. In the historical data the model learned from, asthma patients had routinely been admitted to intensive care and so had better recorded outcomes; unable to take this context into account, the model treated asthma as a marker of low risk.
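
The mechanism behind this kind of failure can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (all data are synthetic and all variable names invented): a risk model trained on records in which asthma patients received aggressive treatment, and therefore had better outcomes, learns that asthma ‘lowers’ risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)        # 1 = patient has asthma
severity = rng.normal(0.0, 1.0, n)    # unobserved illness severity

# Historical practice: asthma patients were treated aggressively,
# which reduced their chance of a bad outcome in the recorded data.
logit = 0.8 * severity - 1.5 * (asthma == 1)
bad_outcome = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# A naive risk model sees only asthma status, not the treatment pathway.
model = LogisticRegression().fit(asthma.reshape(-1, 1), bad_outcome)
print(f"learned asthma coefficient: {model.coef_[0, 0]:.2f}")
# The coefficient comes out negative: the model concludes asthma is
# 'protective' and would deprioritise hospitalising asthma patients.
```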

The performance of symptom checker apps that use AI has been questioned. For example, their recommendations have been found to be overly cautious, potentially increasing demand for unnecessary tests and treatments.

Transparency and accountability

It can be difficult or impossible to determine the underlying logic that generates the outputs produced by AI. Some AI systems are proprietary and deliberately kept secret; others are simply too complex for a human to understand. Machine-learning technologies can be particularly opaque because of the way they continuously adjust their own parameters and rules as they learn. This creates problems for validating the outputs of AI systems and for identifying errors or biases in the data.
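
A small, hypothetical sketch of why such systems resist inspection (synthetic data; scikit-learn’s MLPClassifier is used purely for illustration): once trained, the model’s ‘logic’ is nothing more than arrays of learned numbers.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                       # ten anonymous 'clinical' features
y = (X[:, 0] + X[:, 3] * X[:, 7] > 0).astype(int)    # a hidden nonlinear rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

# The fitted 'logic' is ~1,400 learned weights with no clinical meaning
# a human reader can recover by inspection:
print([w.shape for w in model.coefs_])   # [(10, 32), (32, 32), (32, 1)]
print(model.coefs_[0][0])                # raw numbers, not human-readable rules
```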

The new EU General Data Protection Regulation (GDPR) states that data subjects have the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. It further states that information provided to individuals when data about them are used should include “the existence of automated decision-making, (…) meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. However, the scope and content of these provisions – for example, whether and how AI can be made intelligible – and how they will apply in the UK remain uncertain and contested. Related questions include who is accountable for decisions made by AI and how anyone harmed by the use of AI can seek redress.

Data bias, fairness, and equity

Although AI applications have the potential to reduce human bias and error, they can also reflect and reinforce biases in the data used to train them. Concerns have been raised about the potential of AI to lead to discrimination in ways that may be hidden or which may not align with legally protected characteristics, such as gender, ethnicity, disability, and age. The House of Lords Select Committee on AI has cautioned that datasets used to train AI systems are often poorly representative of the wider population, and that systems trained on them could, as a result, make unfair decisions that reflect wider prejudices in society. The Committee also found that biases can be embedded in the algorithms themselves, reflecting the beliefs and prejudices of AI developers. Several commentators have called for increased diversity among developers to help address this issue.
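
The following hypothetical sketch illustrates this mechanism with synthetic data (the groups, the ‘biomarker’ feature, and the thresholds are all invented): a model trained mostly on one group performs markedly worse for an underrepresented group whose pattern differs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_group(n, threshold):
    """One synthetic 'biomarker'; the disease threshold differs by group."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Training data: 95% group A (threshold 0.0), only 5% group B (threshold 1.0).
Xa, ya = make_group(1900, 0.0)
Xb, yb = make_group(100, 1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Equal-sized test sets expose the gap that an aggregate accuracy figure hides.
for name, threshold in [("group A", 0.0), ("group B", 1.0)]:
    Xt, yt = make_group(1000, threshold)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```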

The benefits of AI in healthcare might not be evenly distributed. AI might work less well where data are scarce or more difficult to collect or render digitally. This could affect people with rare medical conditions, or others who are underrepresented in clinical trials and research data, such as Black, Asian, and minority ethnic populations.

Trust

The collaboration between DeepMind and the Royal Free Hospital in London led to public debate about commercial companies being given access to patient data. Commentators have warned that there could be a public backlash against AI if people feel unable to trust that the technologies are being developed in the public interest.

At a practical level, both patients and healthcare professionals will need to be able to trust AI systems if they are to be implemented successfully in healthcare. Clinical trials of IBM’s Watson Oncology, a tool used in cancer diagnosis, were reportedly halted in some clinics because doctors outside the US did not have confidence in its recommendations and felt that the model reflected an American-specific approach to cancer treatment.

Effects on patients

AI health apps have the potential to empower people to evaluate their own symptoms and care for themselves when possible. AI systems that aim to support people with chronic health conditions or disabilities could increase people’s sense of dignity, independence, and quality of life; and enable people who may otherwise have been admitted to care institutions to stay at home for longer. However, concerns have been raised about a loss of human contact and increased social isolation if AI technologies are used to replace staff or family time with patients.

AI systems could have a negative impact on individual autonomy: for example, if they restrict choices based on calculations about risk or what is in the best interests of the user. If AI systems are used to make a diagnosis or devise a treatment plan, but the healthcare professional is unable to explain how these were arrived at, this could be seen as restricting the patient’s right to make free, informed decisions about their health.* Applications that aim to imitate a human companion or carer raise the possibility that the user will be unable to judge whether they are communicating with a real person or with technology. This could be experienced as a form of deception or fraud.

Effects on healthcare professionals

Healthcare professionals may feel that their autonomy and authority are threatened if their expertise is challenged by AI. The ethical obligations of healthcare professionals towards individual patients might also be affected by the use of AI decision support systems, given that these might be guided by other priorities or interests, such as cost efficiency or wider public health concerns.

As with many new technologies, the introduction of AI is likely to mean the skills and expertise required of healthcare professionals will change. In some areas, AI could enable automation of tasks that have previously been carried out by humans. This could free up health professionals to spend more time engaging directly with patients. However, there are concerns that the introduction of AI systems might be used to justify the employment of less skilled staff.** This could be problematic if the technology fails and staff are not able to recognise errors or carry out necessary tasks without computer guidance. A related concern is that AI could make healthcare professionals complacent, and less likely to check results and challenge errors.

Data privacy and security

AI applications in healthcare make use of data that many would consider to be sensitive and private. These are subject to legal controls. However, other kinds of data that are not obviously about health status, such as social media activity and internet search history, could be used to reveal information about the health status of the user and those around them. The Nuffield Council on Bioethics has suggested that initiatives using data that raise privacy concerns should go beyond compliance with the law to take account of people’s expectations about how their data will be used.

AI could be used to detect cyber-attacks and protect healthcare computer systems. However, there is the potential for AI systems to be hacked to gain access to sensitive data, or spammed with fake or biased data in ways that might not easily be detectable.
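
As a hypothetical illustration of the second risk (all data are synthetic; no real system or dataset is implied), the sketch below shows how injecting fabricated, mislabelled records into a model’s training data can quietly degrade its decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_patients(n):
    """Two synthetic 'clinical' features; true rule: high values mean high risk."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

X_train, y_train = make_patients(2000)
X_test, y_test = make_patients(2000)

clean = LogisticRegression().fit(X_train, y_train)
print(f"clean model accuracy:    {clean.score(X_test, y_test):.2f}")

# An attacker injects 1,000 fabricated high-risk records mislabelled as low risk.
X_fake = rng.normal(loc=1.5, size=(1000, 2))
y_fake = np.zeros(1000, dtype=int)
poisoned = LogisticRegression().fit(np.vstack([X_train, X_fake]),
                                    np.concatenate([y_train, y_fake]))
print(f"poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
# Accuracy drops, yet each fabricated record looks like a plausible patient.
```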

Malicious use of AI

While AI has the potential to be used for good, it could also be used for malicious purposes. For example, there are fears that AI could be used for covert surveillance or screening. AI technologies that analyse motor behaviour (such as the way someone types on a keyboard) and mobility patterns detected by tracking smartphones could reveal information about a person’s health without their knowledge. AI could also be used to carry out cyber-attacks at a lower financial cost and on a greater scale. This has led to calls for governments, researchers, and engineers to reflect on the dual-use nature of AI and prepare for possible malicious uses of AI technologies.
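
As a minimal, hypothetical sketch of the kind of signal involved (the function and the example timestamps are invented for illustration), inter-key timing features can be computed from raw keystroke timestamps alone, without the user’s awareness.

```python
import statistics

def typing_features(press_times_ms):
    """Summarise the gaps between successive key presses (milliseconds)."""
    gaps = [b - a for a, b in zip(press_times_ms, press_times_ms[1:])]
    return {
        "mean_gap_ms": statistics.mean(gaps),
        "gap_variability_ms": statistics.stdev(gaps),
    }

# Timestamps (ms) of ten key presses, as a background logger might record them.
print(typing_features([0, 110, 240, 330, 480, 560, 700, 790, 930, 1010]))
```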

*See Mittelstadt B (2017) The doctor will not see you now, in Otto P and Gräf E (Editors) 3TH1CS: a reinvention of ethics in the digital age?

**See Wachter R (2015) The digital doctor: hope, hype and harm at the dawn of medicine’s computer age.
