Artificial intelligence (AI) is already reshaping financial services, but regulators are moving quickly to ensure that opportunity does not outpace risk management. Recent developments from the Financial Conduct Authority (FCA) and the UK Parliament’s Treasury Committee suggest that the safe, transparent, and accountable deployment of AI is fast becoming not just best practice, but a regulatory expectation.
Background
Both the Bank of England and the FCA adopt a technology-agnostic approach to supervision, meaning they neither prescribe nor prohibit the use of particular technologies.
In its 2024 AI Update, the FCA confirmed that its outcomes-focused approach to regulation and supervision applies equally to AI. This means the FCA is relying on existing regulatory and legislative frameworks to mitigate many of the risks associated with the use of AI in UK financial services and markets. The FCA sees this as regulation that enables innovation. By focusing on outcomes rather than rigid rules, the FCA allows firms some flexibility in how they adopt new technologies like AI, while still holding them accountable for the fair treatment of customers and operational resilience.
No New Rules (For Now): FCA’s AI Approach Explained
On 9 September 2025, the FCA launched a new webpage entitled “AI and the FCA: our approach”, which consolidates its position on the safe and responsible adoption of AI across UK financial markets.
The webpage reinforces the FCA’s consistent message: it will not introduce AI-specific regulations but will instead rely on existing frameworks – specifically the Consumer Duty and the accountability and governance requirements under the Senior Managers and Certification Regime.
The new webpage also serves as a hub of resources for firms, including:
- related publications and policy documents – such as the FCA’s 2024 AI Update and its letter to the Prime Minister (January 2025) setting out its supervisory priorities for the year;
- FCA’s AI Lab – a dedicated initiative offering support to firms developing AI solutions, with a focus on safe experimentation and responsible deployment;
- AI-related research and collaboration – highlighting the FCA’s work with other regulators and international partners, and its use of AI tools to become a “smarter regulator” itself; and
- AI Live Testing – signposting the FCA’s commitment to real-world testing of AI systems, with the first cohort of firms due to begin in October 2025.
For firms, the message is clear: while there is no standalone AI rulebook, the FCA expects them to embed AI within their existing governance, accountability, and consumer protection obligations. The webpage provides both reassurance and practical direction, underlining that regulatory flexibility does not mean a lack of scrutiny.
AI Live Testing: FCA Opens the Sandbox Door
On 9 September 2025, the FCA published its Feedback Statement FS25/5, confirming it will proceed with AI Live Testing – a new initiative under its AI Lab that enables firms to work directly with the regulator, receiving tailored support to develop, assess and deploy AI systems live in UK financial markets. Applications for the first cohort of AI Live Testing closed on 15 September 2025. The FCA expects to start working with participating firms in October 2025. The application window for the second cohort will open before the end of 2025.
The Feedback Statement also summarises industry responses to the FCA’s April 2025 Engagement Paper on AI Live Testing. The FCA received 67 responses from regulated firms (including banks, insurers, and wealth managers) and non-regulated firms (including AI specialists, RegTech firms, academics, and trade associations). Feedback was strongly supportive, with respondents viewing AI Live Testing as a way to enhance transparency, bridge the gap between principles and practice, and reduce the regulatory uncertainty that often stalls AI projects.
The Feedback Statement also highlights a number of issues that respondents face in relation to AI deployment, model performance and evaluation, data governance, and accountability. A few stand out:
- high consumer impact: respondents want AI systems directly interacting with consumers or making significant decisions – such as credit approvals or financial advice – at the top of the testing agenda;
- standards and governance: many called for industry-wide benchmarks and FCA-led working groups to align regulatory expectations with AI-driven systems; and
- stress-testing: respondents urged the FCA to prioritise resilience testing under adverse conditions – such as market volatility, adversarial inputs, and cybersecurity threats. Financial crime scenarios were highlighted as a priority.
Whilst there was strong support for AI Live Testing, respondents also offered views on how the FCA could further support firms in adopting AI safely and responsibly. Specifically, respondents recommended:
- development of standardised benchmarks for AI performance and outcome-focused monitoring;
- incorporation of stress-testing and scenario simulations, including adversarial conditions;
- clearer guidance from the FCA on regulatory expectations so that firms better understand what they can do without introducing unintended discrimination or regulatory risk; and
- promotion of collaboration and knowledge-sharing, including anonymised case studies and alignment with global standards such as the NIST AI Risk Management Framework (AI RMF).
The Feedback Statement indicates that AI Live Testing represents more than a traditional sandbox, reflecting a shift toward greater regulatory collaboration. The FCA is moving from a primarily rule-setting role to one that involves working directly with firms to explore practical solutions. For industry participants, this provides an opportunity to contribute to the development of emerging standards, address areas of compliance uncertainty, and engage with the regulator on responsible approaches to AI adoption.
Treasury Committee Turns Spotlight on Major Tech Companies' Role in Financial Services
On 17 September 2025, the House of Commons Treasury Committee wrote to six major technology firms, seeking clarity on their role in providing AI services to the UK financial sector. The letters form part of an ongoing inquiry into AI’s impact on banking, pensions, and markets. Questions cover a wide range of issues, including these companies’ AI strategies, transparency measures, bias mitigation, contingency planning, and engagement with the FCA and Bank of England. Notably, the Committee asks how these companies would respond if designated as “critical third parties” – a status that could impose heightened regulatory obligations and resilience requirements.
The letters also ask for views on the Artificial Intelligence (Regulation) Bill, a Private Member’s Bill reintroduced in the House of Lords in March 2025. The Bill proposes a significant shift in the UK’s AI governance model, moving away from the current sector-based, principles-driven approach toward a more centralised framework. Key provisions include the creation of a dedicated AI Authority to oversee compliance, codification of the UK’s five AI principles – safety, transparency, fairness, accountability, and contestability – into binding duties, and a requirement for businesses developing or deploying AI to appoint a designated AI Officer responsible for ensuring safe and ethical use of AI. If enacted, the Bill would bring the UK closer to the EU’s risk-based model under the AI Act, marking a departure from the government’s current “pro-innovation” stance. While its passage remains uncertain, the Bill underscores growing pressure for statutory oversight and raises critical questions for businesses about governance, operational resilience, and compliance readiness.
Responses were due by 1 October 2025, and while immediate rule changes are unlikely, the tone suggests that policymakers are considering potential future measures.