
Freshfields Risk & Compliance

5 minute read

AI Investigations: First lessons for compliance programmes

Utilising AI systems means capturing value while managing risk. Inadequate AI governance can give rise to a wide range of compliance risks that go beyond the newly adopted EU AI Act (see our EU AI Act unpacked blog series). In this blog, we look at how AI is driving investigations and how organisations may structure their AI compliance accordingly.

Investigations into AI

AI is on the radar of regulatory authorities, consumer protection organisations and private claimants across the globe (for a deep dive on AI litigation and regulatory enforcement, see our previous blog post). Several common themes of interest emerge:

  • Product regulation: Emerging laws and regulations focus on AI as a product, setting standards for consumer safety and corporate transparency. 
  • Consumer protection: The potential impact of AI systems on individuals draws the attention of consumer protection authorities such as the US Federal Trade Commission and non-profit organisations such as the Federation of German Consumer Organisations (Verbraucherzentrale Bundesverband).
  • Data protection: Regulators such as the European Data Protection Board, the Italian Garante and the UK ICO are increasingly attuned to the risks AI may pose with respect to data protection, in particular regarding the legal basis for processing personal data, measures to obtain rectification of inaccurate data, the accuracy and transparency of data (eg inputs and outputs of AI systems), and potential harm (with a specific focus on the protection of children).
  • Intellectual property: The use of potentially copyrighted material in training AI models may present novel IP issues.
  • Human rights and anti-discrimination: Potential biases in AI systems may lead to unfair treatment of individuals.
  • AI washing: Regulators are increasingly alleging that companies’ statements about their use of AI are inaccurate, similar to greenwashing (see our previous blog on AI washing for more details).
  • Competition: Regulators may be concerned about potential effects on competition given the inputs needed to build and maintain AI products. 

Illustrative enforcement patterns in the US

These global regulatory themes are also reflected in current developments in the US. Recent months have seen an uptick in investigations into AI use, as well as new AI-related legislation and regulatory activity that may serve as the basis for future investigations. Several of these developments follow up on requirements from the White House’s recent Executive Order on AI. For example:

  • Health data. The Department of Health and Human Services has issued a final rule clarifying the application of non-discrimination requirements in health programs and activities in the context of AI and related tools like predictive analytics. This follows an investigation by the California Attorney General into potential bias associated with hospitals’ use of algorithmic decision-making tools.
  • Voice imitation. The Federal Communications Commission has issued a Notice of Inquiry to gather information and commentary regarding the risks of AI technologies that may seek to emulate human voices in robocalls, presumably to evaluate whether additional regulations under the Telephone Consumer Protection Act might be warranted.
  • Housing. The Department of Housing and Urban Development has released guidance for applying existing prohibitions against discrimination in the context of AI’s use for screening tenants and advertising housing opportunities.
  • State and local law. A number of states and localities have passed AI regulations that could lead to compliance investigations. These range from specific use cases, like a New York City law regulating use of AI in making employment decisions, to more general laws like a comprehensive AI bill recently passed in Colorado and set to take effect in early 2026.

Lessons learned for compliance programmes

Organisations should take into account the risks presented by the above investigation patterns and regulatory developments when designing their compliance programmes. Across jurisdictions, it is increasingly common for regulations to expressly require AI-specific risk assessments, and in the EU liability can even extend to management if a business decision to use (or not use) or deploy AI systems is not based on appropriate information.

Where possible, organisations should embed AI compliance structures by updating existing enterprise risk management (ERM) systems or similar compliance programmes, and should consider establishing an AI committee and/or a management role dedicated to AI. In the process, AI risks may be addressed by following three seemingly simple steps, illustrated in the sketch after the list below:

  1. Identify AI risk areas.
  2. Mitigate identified AI risks.
  3. Monitor AI systems continuously.
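
As a minimal, hypothetical sketch of how these three steps might translate into a working record (all class, field and method names below are illustrative assumptions, not drawn from any regulation or framework), an AI risk register could look like this:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Hypothetical register entry for one AI system; names are illustrative."""
    name: str
    risk_areas: list[str]                                        # step 1: identify
    mitigations: dict[str, str] = field(default_factory=dict)    # step 2: mitigate
    monitoring_log: list[tuple[date, str]] = field(default_factory=list)  # step 3

    def mitigate(self, risk: str, measure: str) -> None:
        """Record a mitigation measure against a previously identified risk."""
        if risk not in self.risk_areas:
            raise ValueError(f"unidentified risk area: {risk}")
        self.mitigations[risk] = measure

    def log_review(self, note: str) -> None:
        """Append a dated monitoring note (step 3: continuous monitoring)."""
        self.monitoring_log.append((date.today(), note))

# Example usage with illustrative values
record = AISystemRecord(
    name="CV screening tool",
    risk_areas=["bias/discrimination", "data protection"],
)
record.mitigate("bias/discrimination", "quarterly fairness audit")
record.log_review("Q3 fairness audit completed; findings documented")
```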

Identifying risk areas is challenging and requires literacy in AI systems as well as knowledge of the regulations and investigations relevant to the organisation.

  • For Europe, organisations need to understand their statutory obligations, such as under the EU AI Act, by evaluating the risk categories of an AI system that they intend to use (see our previous blogs on risk categories under the AI Act and the resulting key governance obligations); a simplified sketch of this tiering follows this list.
  • Organisations should also keep abreast of US state laws that may require risk assessments specific to AI. 
  • General data protection obligations under the GDPR, the UK GDPR or US state law may also apply to the use of AI systems. Organisations should identify whether they plan to use an AI system to process personal data, whether an AI system was trained with personal data and, under the GDPR, whether they act as a controller or a processor.
  • Intellectual property obligations need to be examined in the context of training AI systems or using AI output that may reflect materials subject to copyright protection. Organisations should also be aware that using an AI system may risk infringing third-party rights.
  • Safeguarding human rights and preventing discrimination requires ongoing monitoring of AI systems, especially where they interact with or otherwise affect natural persons. When marketing the use of AI systems, organisations should bear in mind the risk that public statements may be viewed as inaccurate or misleading; assessing this risk can require a comprehensive understanding of both the technical features of an AI system and the general public’s familiarity with technical terms.
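
To make the EU AI Act’s tiered approach more tangible, the sketch below maps example use cases to the Act’s risk tiers (prohibited practices, high risk, limited risk, minimal risk). The tiers are real; the keyword sets and mapping logic are deliberately simplified assumptions and are no substitute for a legal assessment:

```python
# Illustrative only: the EU AI Act's risk tiers are real, but this keyword
# mapping is a hypothetical simplification, not a legal classification.
PROHIBITED_PRACTICES = {
    "social scoring by public authorities",   # cf. the Act's prohibitions
}
HIGH_RISK_USE_CASES = {
    "employment screening",                   # examples inspired by the
    "credit scoring",                         # Act's high-risk categories
    "education admission",
}
TRANSPARENCY_USE_CASES = {"chatbot", "deepfake generation"}

def indicative_risk_tier(use_case: str) -> str:
    """Return an indicative EU AI Act tier for a described use case."""
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited practice: may not be placed on the EU market"
    if use_case in HIGH_RISK_USE_CASES:
        return "high risk: conformity assessment and governance obligations"
    if use_case in TRANSPARENCY_USE_CASES:
        return "limited risk: transparency obligations"
    return "minimal risk: voluntary codes of conduct"

print(indicative_risk_tier("employment screening"))  # -> high risk: ...
```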

Data governance plays a key role in mitigating AI-related risks (see our previous blog on what boards need to know about data governance). In updating compliance frameworks, companies may leverage synergies with existing compliance obligations, such as data protection impact assessments under the GDPR or cybersecurity measures. Organisations may also want to review their commercial contracts relating to the use of AI systems in order to manage potential liabilities.

AI systems constantly learn and evolve, which is a source of great value for organisations. By the same token, AI governance requires regular monitoring of AI systems and documentation of findings to preserve the value they add.

Organisations may want to draw inspiration from existing guidance and standards issued by authorities and international organisations when setting up or adapting their AI governance, for example the US NIST AI Risk Management Framework, the ISO/IEC 42001 standard for AI management systems and the OECD AI Principles.

Key takeaways

  • Organisations developing, using or otherwise interacting with AI are well advised to identify risks related to AI.
  • Identifying risks when deploying AI systems challenges organisations to assemble appropriate information about, and literacy in, AI systems.
  • Keep an eye out for ongoing developments in this rapidly developing space, but don’t reinvent the wheel: existing compliance frameworks or an ERM system may be leveraged to efficiently update compliance programmes where necessary to meet new AI-specific regulations.

 

Tags

eu ai act, global enforcement outlook 2024, compliance, ai, corporate crime, investigations