Introduction
Hong Kong’s financial regulators, including the Securities and Futures Commission (SFC) and the Hong Kong Monetary Authority (HKMA), have recently issued rules on the use of AI in the financial sector.
These rules give effect to the policy direction of the Financial Services and the Treasury Bureau (FSTB), a Hong Kong government ministry, as articulated in its policy statement of 28 October 2024[1]. The FSTB policy statement urges financial institutions to adopt a risk-based approach at every stage of an AI system’s lifecycle and to establish a comprehensive AI governance framework.
SFC Circular on use of generative AI language models
On 12 November 2024, the SFC issued a circular on the use of generative AI language models[2] (the SFC Circular).
The SFC Circular is organised around four core principles.
- Core Principle 1 – senior management oversight: the SFC Circular stresses the importance of the close involvement of senior management throughout the full lifecycle of an AI system. Senior management must ensure that qualified individuals are in place to oversee model development (if using an in-house AI model) and model procurement (if using an external AI model).
- Core Principle 2 – AI model risk management: licensed corporations engaging in in-house AI model development or customisation must (i) ensure adequate validation of results before deployment, (ii) conduct end-to-end testing, and (iii) establish protocols for continuous performance monitoring.
The SFC Circular deems the provision of investment recommendations, investment advice and investment research using AI models to be “high risk” activities, and therefore subject to stricter requirements. For all such high-risk use cases, licensed corporations must additionally:
- continuously monitor and review AI output for accuracy
- use a “human-in-the-loop” approach to review all AI model output before any outcome is actioned
- test the AI model’s consistency by varying prompts that convey the same meaning
- disclose to customers that they are interacting with an AI model (and not a human) each time they do so
- notify the SFC of any changes made to the AI model.
- Core Principle 3 – cybersecurity and data risk management: licensed corporations are required to implement measures to address the cybersecurity risks associated with AI models. These include monitoring measures (for example, regularly conducting adversarial testing on AI models) as well as protection measures (for example, encrypting non-public data).
- Core Principle 4 – third-party providers: licensed corporations must ensure clear risk allocation between themselves and third-party AI providers and, where possible, seek indemnities.
While the SFC Circular is mandatory and took effect immediately, the SFC recognises that licensed corporations may need time to update their policies to meet the new requirements. The SFC has said that it will take a pragmatic approach in assessing compliance with the SFC Circular in the short term.
HKMA circulars on AI risk management
The HKMA has issued two notable circulars on AI in recent years.
On 19 August 2024, the HKMA issued a circular on customer protection aspects of the use of generative AI[3] (the GenAI Circular). The GenAI Circular specifically requires institutions to:
- allow customers to opt out or request human intervention in customer-facing applications
- implement systems to continuously monitor generative AI output to ensure quality and prevent harmful or misleading results.
The GenAI Circular follows on from an earlier circular of the HKMA from 5 November 2019, regarding the use of big data analytics and AI (BDAI)[4]. This circular had emphasised the accountability of the senior management and boards of financial institutions for all aspects related to AI models, including the procurement, use and deployment of AI.
The BDAI circular also requires pre-deployment validation of AI models and mandates the inclusion of human intervention options in high-risk cases, for example when lending decisions are made.
While the circulars themselves are not legally binding, they set out the HKMA’s expectations of financial institutions in their use of AI.
The Insurance Authority’s approach to AI
Unlike the HKMA’s targeted AI circulars, the Insurance Authority (IA) has not yet provided specific guidelines for the use of AI in the insurance sector. However, the IA has recently indicated[5] that it is developing a framework for AI adoption specific to the insurance industry.
Approach adopted by other financial regulators in the region
Financial regulators across Asia have adopted varied approaches to AI governance in the financial sector. While Hong Kong is employing a relatively prescriptive framework, Singapore continues to emphasise self-regulation, and South Korea is imposing binding rules, although at a less detailed level than the rules recently introduced in Hong Kong.
Singapore
On 12 November 2018, the Monetary Authority of Singapore (MAS) introduced the Fairness, Ethics, Accountability, and Transparency (FEAT) Principles, a high-level framework guiding financial institutions in the use of AI in financial products and services.
The FEAT Principles encourage:
- measures to avoid AI-driven decisions from being used to disadvantage or discriminate against any particular individual or group of people
- regularly reviewing AI decisions to ensure models function as intended
- proactively disclosing the use of AI systems
- providing channels for individuals to appeal AI-driven decisions.
To operationalise these principles, the MAS partnered with the “Veritas Consortium” – a group of financial institutions and technology firms. This collaboration produced the Veritas Toolkit, an open-source toolkit that enables financial institutions to assess their AI use case and determine whether it aligns with the FEAT Principles. The Veritas Consortium is now developing a toolkit specifically for GenAI technology.
South Korea
On 8 July 2021, the Financial Services Commission (FSC) issued mandatory guidelines for financial institutions using AI in customer-facing services and products. The internal use of AI is excluded from the scope of the guidelines.
Key requirements include:
- Internal controls: establishing three separate internal control mechanisms: (i) an AI ethics committee to manage potential risks, (ii) a designated oversight body, and (iii) a separate risk management system for privacy issues.
- Human oversight over automated decision-making: ensuring human oversight to effectively supervise, control and, if necessary, intervene in the decision-making process. Fully autonomous AI decision-making systems are prohibited.
- Consumer rights: providing clear and accessible explanations of AI-based services to consumers.
- Outsourcing: monitoring third-party designed AI systems through a periodic reporting system to ensure that the AI system complies with the guidelines.
These guidelines, while mandatory, are not detailed. Although the FSC was expected to collate feedback from financial industry groups and issue a more detailed set of guidelines, it has instead focussed on specific areas, including:
- introducing a mandatory verification system that evaluates AI-driven credit scoring models, based on their selection of credit-approval algorithms and statistical validity, before deployment
- providing security guidelines for AI chatbots
- introducing a regulatory sandbox programme for testing generative AI.
Japan
Japan, by contrast, has no AI-specific guidelines from financial regulators. Instead, Japan’s Ministry of Internal Affairs and Communications, in collaboration with the Ministry of Economy, Trade and Industry, has issued the AI Guidelines for Business, a set of general non-binding guidelines for using AI in a business context. Japan is also reported to be considering legally binding regulations on developers of large-scale AI systems.
See our earlier briefing[6] on the general state of development of AI regulation in Asia, including Hong Kong, Singapore, South Korea and Japan.
[1] Available here: P2024102800154_475819_1_1730083937115.pdf (info.gov.hk)
[2] Available here: Circular to licensed corporations - Use of generative AI language models | Securities & Futures Commission of Hong Kong (sfc.hk)
[3] Available here: https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2024/20240819e1.pdf
[4] Available here: https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2019/20191105e1.pdf
[5] Available here: https://www.ia.org.hk/en/infocenter/press_releases/20241029.html
[6] Available here: https://technologyquotient.freshfields.com/post/102jl88/eu-ai-act-unpacked-the-spillover-effect-in-asia-part-1-binding-ai-regulation-i