An AI system is a machine-based system that operates with varying levels of autonomy and that infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions. Providers who have placed or are planning to place AI systems on the EU market, as well as users (deployers) of AI systems, should ask the following questions:
- Is the system an AI system within the meaning of the AI Act?
- If so, into which risk category does it fall?
- Is such an AI system allowed, and what requirements apply to it?
- Am I compliant with such requirements?
These questions are especially important in the fields of employment, education, critical infrastructure, health services, banking, insurance, and justice.
_____
Risk-based approach
The AI Act applies to a wide range of persons, including but not limited to:
- providers of AI systems or general-purpose AI models;
- deployers (users) of AI systems that have their establishment or are located within the EU.
As all AI systems are different and some of them may be used to cause harm, the AI Act takes a risk-based approach and divides AI systems into four groups based on the risk they create.
An AI system can fall into one of the following categories (see the illustrative sketch after this list):
- unacceptable risk,
- high risk,
- limited risk,
- minimal risk.
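For readers approaching compliance from an engineering angle, the tiering can be pictured as a simple lookup from risk category to the broad treatment the AI Act attaches to it. The sketch below is purely illustrative and not a legal classification tool: the tier names follow the Act, but the one-line treatment attached to each tier is shorthand for the rules described in the rest of this overview, and any real classification requires legal analysis of the system's intended purpose and context of use.

```python
from enum import Enum

# Purely illustrative shorthand for the AI Act's four risk tiers and the broad
# treatment each tier receives; this is not a legal classification tool.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # prohibited practices
    HIGH = "high risk"                  # allowed, subject to strict requirements
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal risk"            # free use

TREATMENT = {
    RiskTier.UNACCEPTABLE: "banned on the EU market",
    RiskTier.HIGH: "allowed only if the high-risk requirements are met",
    RiskTier.LIMITED: "allowed, but people must be informed they are dealing with AI",
    RiskTier.MINIMAL: "allowed without additional AI Act obligations",
}

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {TREATMENT[tier]}")
```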
______
Unacceptable risk. AI systems falling into the unacceptable-risk category are banned under the AI Act, as they pose a threat to people’s fundamental rights.
Such AI systems include:
- social scoring (classifying people based on their social behaviour or personal characteristics);
- making risk assessments of natural persons in order to predict or assess the risk of them committing criminal offences, based solely on profiling or the assessment of their personality traits;
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- inferring the emotions of a natural person in the workplace or in education institutions;
- manipulating human behaviour or exploiting people’s vulnerabilities;
- categorising persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;
- ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.
Use of ‘real-time’ remote biometric identification systems by law enforcement remains allowed in exhaustively listed and narrowly defined situations, such as the targeted search for a missing person or the prevention of a terrorist attack.
_____________
High risk. Such AI systems may negatively affect safety or fundamental rights. The classification as high-risk depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.
Such systems include, among others, systems used in the following areas:
- biometrics, such as remote biometric identification systems, biometric categorisation systems, systems used for emotion recognition;
- critical infrastructure (digital infrastructure, transport, supply of water, gas, heating, electricity);
- education and vocational training;
- employment, workers’ management, and access to self-employment;
- access to and enjoyment of essential private services and essential public services and benefits, for example:
  - evaluating the eligibility of natural persons for essential public assistance benefits and services, including healthcare services;
  - evaluating the creditworthiness of natural persons or establishing their credit score;
  - risk assessment and pricing in relation to natural persons in the case of life and health insurance;
  - evaluating and classifying emergency calls by natural persons, or dispatching, or establishing priority in the dispatching of, emergency first response services;
- law enforcement;
- migration, asylum, and border control management;
- administration of justice and democratic processes;
- AI used as a safety component of certain products (e.g. AI applications in robot-assisted surgery), in particular products that are required to undergo a third-party conformity assessment before being placed on the EU market (CE marking).
High-risk AI systems are not banned; however, the following requirements must be met (a more detailed overview is provided in the AI Act):
- adequate risk assessment and mitigation systems;
- high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- detailed technical documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- logging of activity to ensure traceability of results;
- clear and adequate information to the users;
- appropriate human oversight measures to minimise risks;
- high level of robustness, security and accuracy.
If a company is planning to place a high-risk AI system on the market, or plans to use one in the aforementioned sensitive fields (such as banking, employment, insurance or healthcare), it is essential to consult experts or become familiar with the requirements, as the consequences of non-compliance with the AI Act can be rather harsh (explained below).
______
Limited risk. This refers to the risks associated with a lack of transparency in AI usage. AI systems with limited risk are subject to transparency obligations to ensure that humans are informed when AI is being used. For example, when AI systems such as chatbots are used, humans should be made aware that they are interacting with a machine, unless this is obvious from the circumstances. The use of deepfakes should likewise be transparent, so that people know the content has been artificially generated or manipulated.
Minimal risk. The AI Act allows free use of minimal risk AI. These include applications such as AI-enabled video games or spam filters. Most of the AI systems currently used in the EU fall into this category.
_____
Non-compliance
In the case of non-compliance with the requirements of the AI Act, there are several possible consequences.
- A person who places a banned, unacceptable-risk AI system on the market can be fined up to EUR 35,000,000 or 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- A fine of up to EUR 15,000,000 or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher, may be imposed if any other requirement set out in the AI Act is breached.
- A fine of up to EUR 7,500,000 or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher, may be imposed if providers of AI systems supply incorrect or misleading information to the competent authorities in response to a request for information.
The exact amount of the fine is decided case by case, taking into account the breach and its consequences. In the case of small or medium-sized enterprises and start-ups, each fine is capped at the lower of the two figures (the fixed amount or the turnover percentage) referred to above, as illustrated in the sketch below.
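As a rough illustration of how these ceilings combine, here is a minimal sketch that computes the upper bound of a fine for a given breach category and worldwide annual turnover. The figures mirror the thresholds listed above; the breach-category keys and the function name are hypothetical labels chosen for the example, and the actual fine is set case by case and may be far lower than the ceiling.

```python
# Illustrative sketch of the AI Act fine ceilings described above; the category
# keys are hypothetical labels, and real fines are determined case by case.

FINE_CEILINGS = {
    "prohibited_practice": (35_000_000, 0.07),   # banned (unacceptable-risk) systems
    "other_breach": (15_000_000, 0.03),          # other AI Act requirements
    "incorrect_information": (7_500_000, 0.01),  # false information to authorities
}

def max_fine_eur(breach: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the fine ceiling: the higher of the fixed amount and the turnover
    percentage, or the lower of the two for SMEs and start-ups."""
    fixed_amount, pct = FINE_CEILINGS[breach]
    turnover_based = pct * worldwide_turnover_eur
    return min(fixed_amount, turnover_based) if is_sme else max(fixed_amount, turnover_based)

# Example: a company with EUR 600 million turnover that places a banned system on
# the market faces a ceiling of max(EUR 35m, 7% of EUR 600m) = EUR 42 million.
print(max_fine_eur("prohibited_practice", 600_000_000))        # 42000000.0
print(max_fine_eur("prohibited_practice", 600_000_000, True))  # 35000000.0
```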
Other relevant information
Click here to learn more about the AI checklist for business.
Click here to learn about GDPR requirements when using AI.
Click here to read more about the status of text and data mining exceptions in the Baltic states.