
Freshfields Risk & Compliance

Reposted from Freshfields Technology Quotient

WorkLife 2.0: A robot in control - should AI have a voice on the Board?

Artificial intelligence (AI) is ubiquitous in modern decision-making, for both individuals and corporations. Companies can deploy AI for greater efficiency and innovation, while on a personal level it can suggest music we might like, groceries we might need and news we should read. For companies able to harness machine learning and AI, the economic potential is significant: the McKinsey Global Institute has estimated that AI technologies could unlock between USD9.5 trillion and USD15.4 trillion in annual business value worldwide. It is hard to think of a sector, private or public, that has not deployed AI in some capacity.

However, where should we draw the line between a company that harnesses AI and a company that is led by it?

In May 2014, the Hong Kong venture capital firm Deep Knowledge Ventures appointed an artificially intelligent representative named Vital to its board of directors. Vital (Validating Investment Tool for Advancing Life Sciences), chosen for its ability to pick up on market trends, was given the right to vote on whether the firm should invest in a specific company. The goal was to bring Vital to a stage where it could operate autonomously and be given an equal vote with the other board members on all financial decisions made by the company.

The appointment represented a material advance in the role of AI in corporate decision-making, but whether it can also be seen as an advance for corporate governance is more complicated, giving rise to some critical questions about ethics, accountability and potential discrimination.

The ethical question

Directors have legal responsibilities and fiduciary duties to their companies. These obligations form the ethical framework which underpins each decision a director makes on behalf of a company. A critical feature of a directorship is the special relationship of trust, assurance and confidence between the director and his/her company and its shareholders. Can such a relationship be built with a non-human party? It is perhaps hard to imagine a non-human feeling or exhibiting integrity in the same way as is expected of its human counterparts. What about the obligation on directors to balance the interests of the company’s stakeholders? Is that better done by humans, or can AI more easily weigh the competing interests of the varying stakeholders that must be constantly balanced in decision-making?

A further element of the ethical matrix informing a director’s decisions is, of course, the director’s own set of values. Society increasingly expects more of directors and business leaders. With greater employee activism, directors and companies are at times being pushed to think critically about their decisions and not purely about the financials. For example, last year employees at Ford lobbied the CEO to stop supplying vehicles to US police forces in connection with the Black Lives Matter movement. But could a company’s employees and shareholders rely on a non-human director to be sensitive to such matters? How would this sensitivity be built into an algorithm, and could an AI director adapt to changing societal trends through machine learning?

Since January 2017, at least 22 different sets of ethical principles for robotics and AI have been published. New ethical standards are emerging all the time, notably from the British Standards Institution and the IEEE Standards Association, and a growing number of companies have announced AI strategies developed with national advisory or public bodies to help shape these principles. Whilst this development will be interesting to follow over the next few years, it is a reminder to companies that ethical responsibility cannot be overlooked when deploying AI.

Who would be accountable?

When a director fails to discharge his/her duties, the director is often held personally accountable, with potential criminal liability for certain offences.

But who is responsible for the actions or decisions of AI that has been designed to exercise a level of autonomy? Should it be the developer or programmer or, when the AI is deployed on a company’s Board, the other directors?

The issue of accountability is complicated further by the threat of hacking. In the UK government’s Cyber Security Breaches Survey 2020, almost half of all businesses surveyed reported cyber security breaches or attacks in the previous 12 months. When the decision-making entity on a company’s Board is the victim of a cyber-attack, who should be held accountable for illegal or unconscionable decisions made? And does the risk of such an attack outweigh the potential benefits of appointing a non-human entity to the Board?

The risk of discrimination

One known risk of relying on AI is the potential for inadvertent discrimination. AI and machine learning models informed only by data from certain demographic groups may reflect societal biases and lead to unfair treatment of under-represented groups; a simple illustration of this mechanism follows below. Advice for companies looking to prevent such discrimination was explored in this recent blogpost. This is a further issue for companies to consider before giving AI a ‘voice’ on the Board. By doing so without the appropriate checks and balances, a company could inadvertently install unconscious biases and prejudices at the very top of its organisation, influencing its future strategies and key decisions. Further, with the greater focus in recent years on gender balance on Boards, how does an AI appointment affect female representation on the Board? Is the AI tool truly gender neutral or, if it has been programmed by men using largely male data sets, should the AI director count as a male member of the Board?
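To make the mechanism concrete, here is a minimal, hypothetical sketch (not from the original post) of how a model trained largely on one demographic group can misclassify an under-represented group more often. The group labels, synthetic data and proportions are illustrative assumptions only.

```python
# Hypothetical illustration: a classifier trained mostly on one group
# performs worse on an under-represented group. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic data: one feature, with the true outcome depending on
    the feature differently for each group (a different boundary)."""
    X = rng.normal(0, 1, size=(n, 1))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# 95% of training data comes from group A; group B is under-represented.
X_a, y_a = make_group(950, shift=0.0)   # majority group
X_b, y_b = make_group(50, shift=1.0)    # minority group, different boundary

model = LogisticRegression()
model.fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh samples: the learned boundary tracks the majority
# group, so error concentrates on the under-represented group.
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    X_test, y_test = make_group(10_000, shift)
    acc = (model.predict(X_test) == y_test).mean()
    print(f"{name}: accuracy = {acc:.2%}")
```

Because the learned decision boundary is fitted overwhelmingly to the majority group’s data, the minority group bears most of the error, which is precisely the kind of imbalance a company could be embedding at Board level.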

Is greater regulation the answer? 

Greater regulation of AI technology may help to resolve these questions. After identifying various gaps in the regulation of AI technology, the EU intends to publish more comprehensive legislation, including new rules for those who build and deploy AI. Three resolutions (on a framework of ethical aspects of artificial intelligence, robotics and related technologies; on a civil liability regime for artificial intelligence; and on intellectual property rights for the development of artificial intelligence technologies) adopted by the European Parliament on 20 October 2020 give an indication of what can be expected from the new regime. For example, a European ‘certification of ethical compliance’ is envisaged, which would help to build an ecosystem of trust and encourage the social acceptance of AI.

Whilst the UK currently has no AI-specific legislation, it signed up, alongside 41 other countries, to the OECD Principles on Artificial Intelligence in May 2019 and continues to develop guidance and advice for all bodies deploying AI and algorithmic decision-making. In November 2020, the government’s advisory body, the Centre for Data Ethics and Innovation (CDEI), published a review of bias in algorithmic decision-making, which includes a number of recommendations for all organisations making decisions on the basis of data and urges the government, regulators and industry to work together to ensure AI technologies are deployed fairly.

It is clear that in 2021, appointing an AI director to the Board would still be a dramatic step for any organisation. And while the potential gains could be material, so are the risks. Such a decision should therefore only be made after careful, holistic consideration of all the factors and potential issues. Ultimately, a company may decide that, given the current state of regulation and technology, instead of giving AI a seat on the Board it will still access the benefits of AI by strategically deploying AI tools within its organisation. What the future holds is another question, however, and perhaps we are seeing the first baby steps towards ‘artificial’ Boards.

Tags

ai, corporate