
Artificial Intelligence in Life Sciences: Regulating AI Technologies and the Product Liability Implications

This third article in our Artificial Intelligence in Life Sciences series looks at the regulatory and legal landscape around the use of AI in life sciences and some of the implications for product liability.

Regulatory developments

There has been some form of regulation of artificial intelligence (AI) technologies in many life sciences applications in the EU and UK for several years. The EU’s lead on AI regulation began in the medical devices sphere with the introduction of the Medical Devices Regulation (EU) 2017/745 (“MDR”) and the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (“IVDR”); the MDR, belatedly, became applicable on 26 May 2021, with the IVDR to follow on 26 May 2022.

More recently, the EU has led the charge in proposing the first comprehensive regulatory framework to govern the risks posed by emerging digital technologies, including AI. Following the publication of the European Commission’s (“the Commission”) White Paper on AI and a series of consultations and expert group discussions, the Commission published its long-awaited proposal for a regulation laying down harmonised rules on AI, also referred to as the ‘Artificial Intelligence Act’, on 21 April 2021. The proposal is designed to complement existing EU legislation, such as the General Data Protection Regulation, and aims to extend the applicability of existing sectoral product safety legislation to certain high-risk AI systems to ensure consistency.

The proposed regulation adopts a risk-based approach, imposing strict controls and extensive risk management obligations on the highest-risk forms of AI. These include requirements to undergo conformity assessments; to draw up and maintain technical documentation; to implement quality management systems; and to affix CE markings indicating conformity with the proposed regulation before products are placed on the market. The regulation has wide-ranging applicability and will affect AI providers and users both inside and outside the EU. Although this is familiar territory for life sciences companies, it is important that resources are put in place to respond to this additional regulatory burden, if and when it comes into force.

If the proposed regulation does come into force, it will not be implemented in the UK owing to Brexit. Nevertheless, UK businesses offering AI technologies in the EU will be directly affected, and will be required to comply with the regulation when selling their products there.

The EU’s drive to implement global standards for new technologies has also had a domino effect in the UK:

  • On 16 September 2021, the Medicines & Healthcare products Regulatory Agency (“MHRA”) published a “Consultation on the future regulation of medical devices in the United Kingdom”, which ran until 25 November 2021. The Consultation set out proposed changes to the UK medical device regulatory framework with the aim to “develop a world-leading future regime for medical devices that prioritises patient safety while fostering innovation”.
  • In conjunction with the Consultation, the MHRA also published Guidance, “Software and AI as a Medical Device Change Programme”, which pledges to deliver bold change to provide a regulatory framework that gives a high degree of protection for patients and the public, while ensuring that the UK is the home of responsible innovation for medical device software.
  • On 22 September 2021, the UK launched its first National Artificial Intelligence (AI) Strategy to “help it strengthen its position as a global science superpower and seize the potential of modern technology to improve people’s lives and solve global challenges such as climate change and public health”. The Strategy includes plans for a white paper on AI governance and regulation.

Product liability risks

Although there is a human hand behind AI technologies, the intangible nature of many AI applications raises questions as to who or what will be accountable for the consequences of their use, particularly when the development of such applications involves a myriad of persons, including software developers and data analysts.

In the UK, and depending on the specific circumstances, claims relating to product liability may be brought in negligence, breach of contract or pursuant to the Consumer Protection Act 1987 (CPA), the implementing legislation which transposed the EU Product Liability Directive 85/374/EEC (PLD) into UK law. The CPA imposes liability on a producer for damage caused by a defective product, often referred to as “no-fault” liability.

Section 3 of the CPA provides that a product is defective if the safety of the product is “not such as persons generally are entitled to expect”. In assessing the safety of a product, the court will take into account all of the circumstances it considers factually and legally relevant to the evaluation of safety, on a case-by-case basis. These factors may include safety marks, regulatory compliance and warnings. A claimant bringing a claim under the CPA must prove the existence of a defect and that the defect caused the damage.

The unique features and characteristics of AI technologies present challenges to the existing liability framework. For example, questions are raised as to whether AI-based software or data is a “product”, as defined by the CPA, or a service. This distinction is particularly relevant in the context of AI technologies that combine physical hardware with cloud-based software, such as a smart medical device, where such software is often subject to automated modification. Similarly, questions may be asked as to which person(s) should be considered the producer for the purposes of the CPA. Is it the software developer, the engineer, or the user responsible for updating the software?

The EU is seeking to address whether the PLD is fit for purpose and whether, and if so how and to what extent, it should be adapted to address “the challenges posed by emerging digital technologies, ensuring, thereby, a high level of effective consumer protection, as well as legal certainty for consumers and businesses”. Draft legislative changes could be available by Q3 2022.

The UK is taking similar steps to assess whether its existing product safety and liability regimes meet the challenges of AI technologies. The UK Government opened a consultation via the UK Product Safety Review to explore possible changes to existing product safety laws to ensure the framework is fit for the future, acknowledging that the provisions of the CPA do not reflect emerging technologies such as AI. Furthermore, potential reform of the CPA is being mooted by the Law Commission as part of its 14th Programme of Law Reform; the Law Commission has invited views as to whether the CPA should be extended to cover technological developments.

In the final article in our Artificial Intelligence in Life Sciences – Revolution, Risk, and Regulation series, we will consider what action life sciences companies can take to ‘future-proof’ themselves against some of the key issues and risks.

Meet the authors

Jenny Yu

Chemicals and Life Sciences Industry Leader, UK & Ireland

Paula Margolis

Corporate Affairs Lawyer, Kennedys

Samantha Silver

Partner, Kennedys
