Freshfields Risk & Compliance

GenAI and board minute-taking

Many companies are exploring ways in which generative AI tools may be employed to assist with board minute-taking. Such tools include: real-time transcription that distinguishes between different speakers; organisation and summarisation of discussions and actions; identification and categorisation of key information; and, in some instances, personalisation, whereby the AI is trained to recognise business-specific terminology. Whilst these developments bring many benefits, generative AI remains a relatively new technology, and it is important to recognise that its use and deployment can create risks that need to be identified and managed. Some of these risks are explored below.

As set out in our blog, Building Your Company’s AI Governance Framework (freshfields.com), introducing a programme to manage the use of AI is one of the key actions an organisation considering deploying AI should take.

Accuracy?

As discussed in our blog, Board minutes: Not just an administrative formality (freshfields.com), a key requirement of board minutes is accuracy, yet generative AI solutions may ‘hallucinate’, generating information that is not wholly accurate. This risk arises where outputs generated by an AI model become untethered from the source materials, including, for example, users’ prompts and input reference texts. There has, however, been continued effort across the AI and academic communities, including technical breakthroughs, to detect, measure and mitigate such risks. Generative AI may also lack the contextual understanding (including the business and industry expertise of the human attendees) needed to frame discussion correctly, as well as the ability to identify the most pertinent points raised. Biases in the underlying AI models may also affect the reliability of the output.

Loss of confidential information? 

Many AI tools use the data input into them (whether via text, voice, image or file uploads) as training data to further improve and develop the underlying model and broader products. Without proper protections, this may lead to leakage of confidential information, either directly via model outputs or indirectly through models being trained on confidential inputs. For certain confidential board-level discussions (eg where particularly sensitive information is discussed), it may therefore not be appropriate to use generative AI tools at all, or it may be appropriate to use them only (a) subject to appropriate contractual confidentiality obligations placed on the model supplier, or (b) via a locally hosted instance of the tool (to which the supplier does not have access).

Data privacy?

Companies will need to take measures to ensure that their use of AI tools complies with their privacy and data protection obligations. For example, it may be prudent, for transparency reasons, to notify board meeting attendees when using an AI tool that continuously monitors and processes attendees’ discussions.

Cyber risks?

Incorporating generative AI models into company systems can raise novel cyber risks. Companies should assess the extent to which generative AI usage increases their exposure to cyber attacks and how best to continue to protect against cyber breaches (not least given investor bodies’ focus on cyber security as a key area of risk for all companies).

 

For more information on this topic, including in respect of disclosure obligations and privilege considerations, please ask your usual Freshfields contact for a copy of our recent client briefing: The use of AI in minute-taking for UK listed companies.

Tags

ai, corporate governance