On 12 July 2024, the Artificial Intelligence Act ("AI Act") was published in the Official Journal of the European Union. It will enter into force on 1 August 2024 and will be directly applicable law in the Member States, requiring no further national implementation.
Article 113 of the AI Act provides for a transitional period, meaning that the AI Act will be generally applicable 24 months after its entry into force. However, a few individual provisions will be applicable earlier or later than that. In particular, AI systems with unacceptable risk will be prohibited six months after the AI Act comes into force, i.e. from 2 February 2025.
The AI Act is expected to have a significant impact on the use of AI systems by companies and businesses. Its impact will extend beyond the EU, as it will also apply to foreign companies which provide, import, distribute or manufacture AI systems in the EU market. In an employment context, it will shape the use of AI in recruitment, decision-making and disciplinary action, among other areas, and it may set the tone for future regulation in other jurisdictions.
We have provided a brief summary of the main provisions of the AI Act relevant to employers below.
Risk-based approach
The AI Act follows a risk-based approach. The higher the risk, the stricter the regulation. AI systems are categorised into the following risk categories:
- AI systems with unacceptable risk;
- AI systems with high risk;
- AI systems with special transparency obligations; and
- AI systems with minimal risk.
Prohibited: AI systems with unacceptable risk
In the employment context, the AI systems that will be completely banned include, in particular, systems:
- Designed to infer the emotions of a natural person in the workplace, unless used for medical or safety purposes, such as monitoring a pilot's fatigue.
- Which categorise individual natural persons based on their biometric data to deduce or infer sensitive characteristics such as race, political opinions, trade union membership, religious beliefs or sexual orientation.
Strictly regulated: AI systems with high risk
The category of so-called high-risk AI systems will become particularly relevant in employment law in the future. This is because high-risk AI systems include, in particular, AI systems used for the following purposes:
- Recruitment or selection of applicants (e.g. placement of targeted job adverts, analysing and filtering of applications and evaluation of applicants).
- Decisions on promotions and terminations, the allocation of tasks and the monitoring and evaluation of performance and behaviour.
This covers many, but by no means all, areas of application of AI in the workplace. For example, it does not include:
- AI systems for approving holiday requests, language assistance and translation programmes, or AI-based training.
Important distinction: Deployer or provider of a high-risk AI system?
The obligations imposed on employers when using an AI system depend, in each individual case, on whether they are classified as a provider or a deployer under the AI Act:
- If employers do not develop AI systems themselves, but purchase and use existing AI systems, they will generally be categorised as deployers.
- There are good arguments that employers should not be classified as deployers if they merely allow or tolerate their employees' use of freely accessible AI systems via a browser (e.g. AI-based translation systems). However, this has not yet been clarified.
- If employers are not satisfied with the AI systems available on the market and instead substantially modify ("customise") high-risk AI systems themselves, they can be classified as providers.
Obligations for deployers of high-risk AI systems
As deployers of high-risk AI systems, employers will be subject to obligations under the AI Act, such as:
- Use and monitoring of the high-risk AI system in accordance with its instructions for use, and immediate reporting of any risks detected.
- Provision of relevant and sufficiently representative input data, for example in an AI-based recruiting system that supports the selection of suitable candidates.
- Establishment of human oversight by persons with appropriate competence, training and authority.
- Retention of the logs automatically generated by the high-risk AI system.
- Involvement of employee representatives (this obligation already applies in some Member States, e.g. Germany).
Additional obligations for providers of high-risk AI systems
- If employers are classified as providers in individual cases, they must fulfil additional obligations. In particular, this includes ensuring that the high-risk AI system meets the general requirements for trustworthy AI. They must also undergo a conformity assessment procedure and introduce a quality management system.
- Providers of high-risk AI systems are also subject to strict registration, documentation, and information obligations.
What measures can employers take now?
- Identifying future obligations under the risk-based approach, both for AI systems already in use and when planning and introducing new systems.
- Developing a robust concept for the early involvement of the works council.
- Setting rules for the use of AI in the workplace and offering training courses to build a basic technical understanding of AI.
- Appointing an "AI Officer" or persons responsible for human oversight.
What are the penalties for offences?
The AI Act sets maximum thresholds for fines; the detailed rules on penalties will be determined by the Member States. For employers, the following penalties are most relevant:
- The use of a prohibited AI system can result in fines of up to EUR 35 million or up to 7% of the previous financial year's total worldwide turnover, whichever is higher.
- Fines of up to EUR 15 million or up to 3% of the previous financial year's total worldwide turnover, whichever is higher, can be imposed for a breach of the aforementioned obligations as a deployer of high-risk AI systems.
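To illustrate the "whichever is higher" mechanic, the minimal Python sketch below computes the applicable fine cap for an undertaking. The function name and the example turnover figure are our own illustrative assumptions; only the caps themselves (EUR 35 million / 7% and EUR 15 million / 3%) come from the AI Act.

```python
def fine_cap_eur(worldwide_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_share: float) -> float:
    """Upper limit of a fine under the 'whichever is higher' rule.

    worldwide_turnover_eur: total worldwide turnover of the undertaking
        for the preceding financial year, in EUR.
    fixed_cap_eur: fixed cap, e.g. 35_000_000 (prohibited practices)
        or 15_000_000 (deployer breaches).
    turnover_share: turnover-based cap, e.g. 0.07 or 0.03.
    """
    return max(fixed_cap_eur, worldwide_turnover_eur * turnover_share)

# Hypothetical example: an undertaking with EUR 800 million worldwide
# turnover that used a prohibited AI system. 7% of turnover (EUR 56m)
# exceeds the EUR 35m fixed cap, so the higher figure applies.
print(fine_cap_eur(800_000_000, 35_000_000, 0.07))  # 56000000.0
```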
When will the AI Act be applicable?
- The AI Act enters into force on 1 August 2024 and, as an EU regulation, is directly applicable in the Member States, meaning that no further national implementation is required in principle.
- However, it provides for a transitional period: with a few exceptions, it will be fully applicable 24 months after it comes into force (i.e. from 2 August 2026).
- AI systems with unacceptable risk will be prohibited six months after entry into force (i.e. from 2 February 2025).