


AI in financial services – Innovation, regulation and practical considerations

AI is a topic that is both new and old. Financial services firms have been grappling with the capabilities of AI and its use cases for some time, but the technology is developing rapidly, regulatory frameworks are emerging and new use cases are being considered in the financial services sector. On 20 November, Brock Dahl, Adam Gillert, Rikki Haria, Satya Staes Polet, Katie Sa and Julia Utzerath presented a financial services webinar on AI, discussing the emerging framework for the regulation of AI and the practical points that arise in adopting AI within financial services businesses.

In terms of the regulatory landscape, the EU’s AI Act is the most comprehensive regime to be introduced so far and will come into effect in phases, starting in February 2025 for prohibited AI practices, with most other rules taking effect by August 2026. The AI Act will apply to a wide range of businesses from inside and outside the EU where the AI system or AI model is placed on the EU market, its use affects people located in the EU, or output produced by an AI system is used in the EU. The EU has classified different types of AI systems by risk: some are prohibited outright, while others viewed as high or limited risk are permitted with additional safeguards. Financial services firms, whether developing AI in-house or buying it in, need to consider carefully which AI systems and models are adopted, for what use case, and how to meet any additional compliance requirements.

In the US, the recent elections introduce some uncertainty into the forthcoming federal regulatory landscape. Experts anticipate significant changes to certain current federal initiatives, including the AI Executive Order and the National Security Memorandum issued during the Biden administration. But the states will continue to develop statutes and regulation in line with the prolific work they have done in the space to date, perhaps accelerating that work in response to developments at the federal level.

The UK has taken a different approach from the EU and has not introduced a general law specifically targeting AI. Existing regulators (including sector-specific regulators such as the Financial Conduct Authority) are expected to use their existing powers to regulate the use of AI proportionately, taking into account overarching principles such as safety, transparency, fairness and contestability. The UK government is expected to propose AI legislation imposing some requirements on the handful of companies developing the most powerful AI models, but otherwise to focus on promoting innovation and growth through the use of AI.

In addition to regulatory considerations, firms implementing AI models within their businesses face a range of practical considerations. We mention a few examples below.

  • Governance and oversight of AI use – A key consideration is how to set up governance controls and scrutiny around the use of AI, and how to ensure that this reflects the values and acceptable standards of the business more generally.
  • Contracting with counterparties – Firms need to consider the appropriate level of due diligence and evidence required when contracting with suppliers or for products that may not have a lengthy track record. Questions around apportionment of risk also need to be considered to manage potential litigation risk, and firms will want to ensure that their expectations are reflected in contract terms, for example on how input data is used, the confidentiality of outputs, and required governance and oversight.
  • Competition and consumer protection – Regulators are actively exploring whether the use and deployment of AI may raise competition or consumer protection concerns, such as collusion (e.g. between trading models), access to data, whether outputs contain false or misleading information, and the “fairness” of customer outcomes. This ties in with other obligations imposed by financial regulators, such as the UK FCA’s consumer duty. Firms should be thinking about what disclosures and information may need to be provided to customers around their use of AI, and what steps to take to monitor and audit an AI system’s performance and outputs (including from third-party AI models) on an ongoing basis.
  • Biases and discriminatory outcomes – AI systems are being used in areas such as HR and customer management. Firms need to consider how to prevent AI systems from producing biased or discriminatory outcomes, and how to monitor this over time.

A recording of the webinar is available on request. Please contact one of the speakers listed below or your usual Freshfields contact to discuss any of these topics in more detail.

Brock Dahl, Partner, Washington

Rikki Haria, Partner, London

Satya Staes Polet, Partner, Brussels

Katie Sa, Associate, London

Adam Gillert, Senior Knowledge Lawyer, London

Julia Utzerath, Senior Knowledge Lawyer, Düsseldorf

Tags

fintech, ai