Out-Law News 2 min. read
04 Mar 2022, 1:35 pm
Plans for the NHS in England to begin a pilot for algorithmic impact assessments (AIAs) will be “key to building public confidence” in the use of artificial intelligence (AI) in healthcare, according to one expert.
AIAs are structured assessment processes that help developers of AI systems analyse the potential benefits and drawbacks of the systems they design. The Department of Health and Social Care (DHSC) said it hoped the assessment process, designed by the Ada Lovelace Institute, would ensure that potential risks AI systems pose to patients and the public, such as algorithmic bias, are addressed before NHS data is accessed.
Cerys Wyn Davies, medtech and digital health sector expert at Pinsent Masons, said the use of AIAs was “key to building public confidence in AI technology and trust in its use in managing health. While artificial intelligence has the potential to support clinicians and healthcare staff to deliver better care and treatment for people, it could also exacerbate existing health inequalities if concerns such as algorithmic bias aren’t accounted for.”
“Access to data is essential in the development of AI systems and improved transparency and accountability are seen as key to achieving a society that is willing to share personal health data for the greater public good,” she added.
The pilot will see AIAs trialled across a number of the NHS AI Lab’s initiatives in England, including the data access process for the National Covid-19 Chest Imaging Database (NCCID) – a centralised database that supports coronavirus researchers. AIAs will also be trialled at the proposed National Medical Imaging Platform (NMIP), which will test the screening and diagnostic capacities of AI technologies.
Helen Cline, health and life sciences law expert at Pinsent Masons, said: “The use of AIAs is intended to complement and build from existing regulatory requirements imposed on proposed medical AI products. The Medicines and Healthcare products Regulatory Agency (MHRA) requires organisations to assign a risk category to such products, which can sometimes be challenging – so as one component of an accountability and regulatory framework, AIAs may prove useful for assessing the risks associated with AI systems being developed for medical purposes.”
The DHSC said the NHS will support researchers and developers to engage patients and healthcare professionals at an early stage of AI development, when there is greater flexibility to make adjustments and respond to concerns. It said that supporting patient and public involvement as part of the development process will lead to improvements in patient experience and in the clinical integration of AI.
Cline added: “Further updates to regulation in this area are expected soon, with the MHRA proposing a new early access pathway for software and AI as a medical device which could also potentially assist in clarifying risk profile. The consultation on this proposal and a broader review of medtech regulation in the UK closed in November 2021 with the UK government response expected in April.”