Freshfields Risk & Compliance

UK NHS pilots AI tool aimed at reducing bias in healthcare datasets – a step toward ‘algorithmovigilance’?

The UK’s Department of Health and Social Care has announced that NHS England will lead a ‘world first’ pilot using algorithmic impact assessments (AIAs) on healthcare datasets in a bid to eradicate algorithmic bias in healthcare, in what could be a step by the UK government towards so-called ‘algorithmovigilance’.

This latest announcement forms part of the UK government’s agenda to tackle health inequalities, and follows its recent confirmation that Professor Dame Margaret Whitehead will lead a ‘landmark’ independent review into possible ethnic bias in the design and use of medical devices (please see here for our related blog).

AIAs are a form of algorithmic accountability mechanism designed to support researchers in assessing and accounting for the potential risks and biases of AI systems and algorithms before those researchers can access NHS data. The AIAs, which are largely untested in a public health context, will form part of the data access process for the National COVID-19 Chest Imaging Database and the NHS AI Lab’s National Medical Imaging Platform.

The AIAs were designed by the Ada Lovelace Institute, an independent AI research institute. In research published alongside the UK Government’s announcement, the Institute explained that, without proper accountability mechanisms, data-driven healthcare innovations, such as AI, risk producing harmful outcomes and exacerbating existing health and social inequalities.

Innovation Minister Lord Kamall said (press release available here):

“While AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities … This pilot once again demonstrates the UK is at the forefront of adopting new technologies in a way that is ethical and patient-centred.”

The AIAs were commissioned by the NHS AI Lab, which is working to counter health inequalities related to AI as part of its remit to accelerate the safe adoption of AI in healthcare. Brhmie Balaram, Head of AI Research and Ethics at the NHS AI Lab, commented:

“Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.

“The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”

The Ada Lovelace Institute cautioned that, while AIAs are an emerging mechanism with the potential to mitigate harm and maximise the benefits of AI systems, they are not a complete solution for accountability and should work alongside other methods, such as audits or transparency registers. The Institute’s full research report is available here.

AI in healthcare is, unsurprisingly, under the UK government spotlight more generally (consistent with international regulatory peers). For example, the UK government announced an extensive AI Work Programme last year, and we also await the outcome of the Medicines and Healthcare products Regulatory Agency’s recent consultation on regulating AI as a medical device (you can read our related blog here).

In the U.S., lawmakers have yet to pass comprehensive legislation regulating AI in healthcare. Last year, the Food and Drug Administration (FDA) released its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, which describes a multi-pronged approach to advancing the FDA’s oversight of AI/ML-based medical software. In addition to the FDA, certain U.S. states are in various stages of enacting laws and regulations relating to AI in healthcare.

However, in the absence of comprehensive U.S. government regulation of AI in healthcare, there have been increasing calls for institutional commitments within the healthcare industry, particularly in the area of algorithmovigilance. ‘Algorithmovigilance’, a term coined by Peter Embi, President and Chief Executive Officer of the U.S. Regenstrief Institute, refers to scientific methods and activities relating to the evaluation, monitoring, understanding and prevention of bias in healthcare algorithms. Dr. Embi recently commented:

“Algorithmic performance changes as it is deployed with different data, different settings and different human-computer interactions. These factors could turn a beneficial tool into one that causes unintended harm, so these algorithms must continually be evaluated to eliminate the inherent and systemic inequalities that exist in our healthcare system. Therefore, it’s imperative that we continue to develop tools and capabilities to enable systematic surveillance and vigilance in the development and use of algorithms in healthcare.”

You can also read more on the regulation of AI here, as well as our insights on data ethics here.
