The big opportunity
Harnessing big data and the power of artificial intelligence (AI) may be a match made in heaven for the insurance industry. At the core of insurance, particularly underwriting, is the analysis of large volumes of data to make informed decisions about the likelihood of outcomes based on past experience – a natural assignment for AI.
There are, of course, other uses for AI, such as improving customer experience during the lifetime of a policy – for example, chatbots for personalised quotes and faster claims handling. For products like health insurance, the use of AI to analyse medical claims has the potential to reduce health insurance fraud. Where partnerships exist between insurers and technology-based firms using data from wearable devices, the expected result is fewer claims payouts and more attractive premiums for customers in the future. In the world of car insurance, AI continues to give consumers access to products that better suit them and that are priced and updated according to their individual risk factors.
Insurtechs are rapidly tapping into this potential for the insurance industry; however, it comes with challenges and an increasing regulatory focus.
Challenges of AI for the insurance industry
But AI brings its own challenges for the industry. For a start, there are significant concerns around the use of data and data privacy. AI systems rely on large quantities of data: the underwriting process itself depends on the gathering and analysis of data to create personalised policies and to eliminate repetitive tasks and unnecessary delays. Both the source data from the potential insured and wider big data allow firms to provide targeted risk analysis. The huge volume of often sensitive personal data required to maximise the benefits of AI means firms must safeguard this material and obtain the necessary consent for its use.
Failure to do so can expose the insurer to severe financial penalties under the GDPR, the EU's data protection framework. However, this regulation was drafted before the rise of AI and, as such, does not adequately cover the ethical challenges that AI's rapid growth brings. From a consumer perspective, this often sensitive information is vulnerable to cyber criminals, and managing this risk will therefore require constant action from the insurer and any third-party service providers.
The lack of regulation, unavailability of trusted data and the public perception of risk are the biggest barriers to the widespread uptake and insurability of new and evolving technologies in the insurance industry, according to a survey conducted earlier this year and published in March by the International Underwriting Association (IUA).
There is hope, though, in the form of international standard-setting. The EU Artificial Intelligence Act (EU AI Act) is expected to come into effect in 2024. This will be the first international regulation for AI, but it will not cover all firms everywhere and is limited in its application. The scope, instruments and governance framework introduced by the proposal are still being debated and refined by European co-legislators. The hope is that the EU AI Act will become a global standard. Indeed, UK regulators have noted the need to avoid regulatory fragmentation, both domestically and internationally, and to harmonise where possible.
As recognised by the BoE, governance is crucial to the safe adoption of AI in financial services. It ensures accountability and puts in place the set of rules, controls and policies for a firm's use of AI. Good governance can ensure effective risk management and help address many of the data- and model-related issues raised in the previous chapters. On the other hand, poor governance can increase challenges and produce risks for consumers, firms and the financial system.
AI also raises complex ethical issues. The automated analysis of damage and the speed with which claims can be processed continue to appeal to firms and consumers alike. But discrimination is not always obvious: where AI is deployed in the underwriting process and starts to adjust premiums taking, for example, gender into account, there is a significant risk that the insurer will breach anti-discrimination laws.
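The premium-adjustment concern above can, in principle, be monitored by comparing the premiums a model quotes across groups defined by a protected attribute. The following is a minimal sketch of such a parity check; the pricing data, group labels and the "four-fifths"-style threshold are illustrative assumptions, not anything drawn from the regulations discussed here.

```python
# Hypothetical sketch: screening model-quoted premiums for indirect
# discrimination across a protected attribute (here labelled "F"/"M").
# All figures and the 0.8 threshold are illustrative assumptions.

def premium_parity_ratio(premiums, groups):
    """Return the ratio of the lowest to the highest group-mean premium.

    A ratio well below 1.0 suggests one group is being quoted
    systematically higher premiums and warrants investigation.
    """
    by_group = {}
    for premium, group in zip(premiums, groups):
        by_group.setdefault(group, []).append(premium)
    means = [sum(vals) / len(vals) for vals in by_group.values()]
    return min(means) / max(means)

# Illustrative quotes from a hypothetical pricing model
premiums = [520.0, 480.0, 610.0, 590.0]
groups = ["F", "F", "M", "M"]

ratio = premium_parity_ratio(premiums, groups)
if ratio < 0.8:  # illustrative threshold, loosely echoing the "four-fifths rule"
    print(f"Potential disparate impact: parity ratio {ratio:.2f}")
```

A check like this only flags outcomes for review; it does not itself establish whether a pricing difference is lawfully justified, which remains a legal question.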
The regulators, without a doubt, recognise the value of AI and the focus is to ensure that the financial services sector is able to harness the value of AI for the benefit of society at large.
The pandemic accelerated the overall pace of AI adoption; however, the gap between the rapid progress being made by insurtechs and the existing regulatory framework means the need for clear ethical rules to protect consumers is high on the agenda. UK regulators are studying the issue and considering the need for action, but have yet to issue any specific guidance on the use of AI and big data.
Together, the Bank of England and the FCA established the AI Public-Private Forum (AIPPF) in October 2020 to further the dialogue between the public sector, the private sector, and academia on AI. Earlier this year, the AIPPF published its final report on the various barriers to adoption, challenges, and risks related to the use of AI in financial services. This was followed by the publication by the Bank of England of a Discussion Paper which considers the current regulatory framework and explores how key existing sectoral legal requirements and guidance in UK financial services apply to AI.
The discussion covers a number of topics, at the heart of which is the question of how policy can mitigate AI risks while facilitating beneficial innovation. Is there a role for technical and, indeed, global standards? If so, what should they look like?
These are questions the industry continues to work on together, with the aim of creating a framework that encourages innovation while also protecting consumers.