Freshfields Risk & Compliance

AI litigation and regulatory enforcement: where are we and what’s on the horizon?

Introduction

As AI becomes increasingly advanced, so does the risk that companies may face litigation or regulatory investigations concerning their use or development of the technology. Governments and regulators are sharpening their focus on both the opportunities and risks posed by AI, meaning that providers of AI will likely soon face new regulatory regimes and forms of liability. For now, before these specific regimes come into force, AI-related claims and investigations are being brought under pre-existing, AI-neutral regulations and causes of action, primarily in the fields of human rights and anti-discrimination, intellectual property, and data protection. In this blog, we take a closer look at these early AI-related cases and highlight potential areas of liability on the horizon.

Types of AI-related claims 

Copyright and intellectual property 

Copyright infringement represents a key area of AI litigation, with cases mostly concerning situations where copyright-protected material has allegedly been used to train AI models without the copyright owner’s permission. For example, an ongoing English High Court claim alleges that an AI developer used the claimant’s images, without its consent, to train and develop its image generator tool. The claim also alleges that the defendant’s output images reproduce a substantial part of the claimant’s copyrighted works and/or bear its brand markings. Similar examples exist both in the UK and abroad: in December 2023, a well-known AI software provider faced a claim in the US that its large language model (LLM) had been developed using others’ copyrighted material.

Data protection and privacy

Data protection law also forms the basis of several AI-related claims and investigations. A number of cases concern a perceived failure by providers of AI systems to adequately protect users’ data privacy. For example, in autumn 2023 the UK Information Commissioner’s Office (ICO) issued a preliminary enforcement notice to a social media platform over its potential failure to assess the privacy risks posed by its generative AI chatbot. This follows a £7.5 million fine issued by the ICO to an AI developer in 2022 for unlawfully scraping billions of facial images from the internet and using those images to offer AI-based facial recognition.

Outside the UK, some companies face similar data protection claims across a number of jurisdictions, under the GDPR in Europe and also in the US. The EU GDPR in particular is providing fertile regulatory ground for AI-related claims. As well as claims brought under the GDPR in relation to data scraping, we have seen a number of claims based on Article 22 GDPR, which protects individuals from being subject to automated decision-making in certain circumstances.

Human rights and anti-discrimination

Human rights and anti-discrimination laws are also frequently relied upon to bring AI-related claims, primarily on the basis that AI systems allegedly discriminate unfairly against protected classes of people. For example, in 2022 a claim was brought in the UK under the Equality Act on the basis that the facial-recognition feature of a food delivery app placed people from ethnic minority groups at a disadvantage when seeking work from the app provider, because false positive and false negative results were more common among app users from those groups. Similarly, in the US a number of class action lawsuits have been brought on the basis that AI-powered platforms routinely discriminate against minorities, for example in tenant screening for housing applications or in job applications.

The outlook for 2024 and beyond

New avenues of liability

Beyond the target areas outlined above, new avenues of liability are starting to emerge. For example, ‘AI washing’ appears to be a particular focus of US regulators at the moment. Regulators such as the SEC and FTC have warned companies against making misleading statements about their AI capabilities in advertising, akin to greenwashing in the ESG space, and the SEC recently settled charges with two investment firms over alleged false AI claims. We can expect global regulators to apply similar scrutiny, and civil litigation may then follow where companies are deemed to have overstated or misstated the use of AI in their offerings (or made claims that cannot be adequately substantiated).

Increased scrutiny of algorithms underlying AI decisions

We have recently seen a number of cases in the US and in Europe concerning the accuracy of automated decision-making. For example, two separate US class action lawsuits brought at the end of 2023 concerned the use of AI algorithms to determine health insurance coverage. The claimants alleged that faulty algorithms were overriding doctors’ determinations of the healthcare that patients needed. Looking ahead, corporations should increasingly expect to be held accountable for the accuracy of any algorithms that underpin the services they offer to customers.

New legislative and regulatory regimes

Governments around the world are considering the need for entirely new legislative and regulatory regimes in response to the rapid growth of AI. For example, in March 2024, the European Parliament approved the new AI Act, which aims to introduce a sliding scale of EU-wide rules based on the perceived risk of AI systems (the higher the perceived risk of a particular AI system, the harsher the restrictions and compliance requirements that the Act will impose on that system). The Act will create a number of new regulatory and advisory bodies to assist in its implementation and, importantly, will apply extra-territorially (as long as the relevant AI system is placed on the EU market, or its use affects people located in the EU).

The European Commission also released a proposal for an AI Liability Directive in September 2022, which is intended to complement the AI Act by introducing a new liability regime that makes it easier for consumers to claim for damage allegedly caused by AI-enabled products and services. For more information on the EU’s approach to AI and liability, see our previous post here.

The UK’s approach

The UK government does not appear to be moving to introduce general AI legislation anytime soon. In February 2024, the government published the results of its ‘pro-innovation approach to AI’ consultation, announcing a £10 million package to boost regulators’ AI capabilities, £1.5 billion to build the “next generation of supercomputers in the public sector” and “an £80 million boost in AI research”. The consultation response further confirms that the government will consider a range of possible requirements and interventions to encourage the responsible and safe deployment of highly capable AI, and will establish a steering committee to oversee a new central AI regulatory coordination function.

This follows on from the government’s March 2023 white paper on AI regulation, which set out how the UK will encourage regulators to apply five ‘framework principles’ when approaching AI. These principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Targeted legislation will be introduced in specific sectors where the framework principles are not deemed sufficient to address the advancement of AI; for example, the Automated Vehicles Bill has just passed through Parliament and is awaiting Royal Assent (see our previous post on this here).

Furthermore, on 1 April 2024, the UK Department for Science, Innovation and Technology (DSIT) and the US Department of Commerce signed a Memorandum of Understanding (MoU) on cooperation in developing tests for the most advanced AI models. The MoU is effective immediately and provides for: (a) plans to build a common approach to AI safety testing; (b) the sharing of capabilities to ensure effective risk management; (c) the performance of at least one joint testing exercise on a publicly accessible model; and (d) tapping into a collective pool of expertise by exploring personnel exchanges between the UK and US AI Safety Institutes. Of note, the MoU also contains a commitment to develop similar partnerships with other countries in the future to promote AI safety worldwide.

The UK’s lighter-touch approach may ultimately be favourable for businesses, as compared to the proposed direction of travel under the EU AI Act. However, a general election looms in the UK and, following comments from the Labour Party leader, a new Labour government could be expected to introduce tougher AI regulation. 

Conclusion

AI systems continue to develop rapidly; regulators are focusing sharply on areas such as data protection and misleading advertising in the form of AI washing; and various jurisdictions are honing their approach to AI (even if not by introducing broad-ranging legislation). The combined effect is that we anticipate a surge in AI-related claims and regulatory enforcement in the near future, both in the existing areas explored above and in new causes of action yet to come to the fore. Businesses will need to stay on top of these developments in order to assess their risk profile and ensure compliance with the developing (and at times divergent) regimes in various jurisdictions. We will continue to provide updates on key developments as they emerge. If you have any questions, please do get in touch.

Tags

ai, litigation, regulatory