EU AI Act - What you should know about the world's first regulation on the use of Artificial Intelligence

News | Internet and Innovations | 20 December 2023

After a 40-hour negotiation, EU lawmakers reached agreement on the EU AI Act on December 8, 2023, concluding the debate on the world's first comprehensive regulation of the use of artificial intelligence. More broadly, 2023 can be called the year of artificial intelligence regulation: world leaders discussed AI regulation at the G7 Summit for the first time [1]; the White House issued the first executive order on the use of AI [2]; and the first world summit dedicated exclusively to AI-related issues was held in the United Kingdom [3]. What sets the EU AI Act apart from all of these events is that it is binding and covers the second-largest economy in the world, the European Union market. The adopted Act will have a notable impact on the regulation of artificial intelligence worldwide, similar to that of the General Data Protection Regulation (GDPR).

 

According to EU lawmakers, the main goal of the Act is to ensure that AI systems used in the EU are safe and respect fundamental rights and EU values [4].

 

The Main Parts of the Agreement:

 

Definition and Scope

 

The EU agreed on a specific definition of artificial intelligence to ensure that the regulation is clear and does not extend to conventional software. The Act adopts the definition of AI proposed by the OECD:

 

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment” [5].

 

The AI Act applies across the European Union and its member states and covers both the public and private sectors. There are, however, several exceptions: the Act does not apply to AI systems created for military, defense, or national security purposes, or for research and development (R&D). The Act bans the use of certain AI systems outright, such as biometric categorization systems that use sensitive characteristics and data, systems that involve behavioral manipulation, and emotion recognition systems in the workplace and in education. Nevertheless, on national security grounds, law enforcement may, for example, use biometric identification systems in public spaces to investigate and/or prevent crime. The EU strictly defined 16 types of crime for which courts may authorize the use of the AI systems mentioned above, including terrorism, human trafficking, and the sexual exploitation of children [6].

 

It should be noted that France was especially supportive of these exceptions. France, alongside Germany and Italy, generally opposed the adoption of the Act, believing that such legislation would hamper innovation and the growth of startups in the EU, ultimately leaving them unable to compete with American, British, and Chinese companies in the field of artificial intelligence [7].

 

 

Classification of AI Systems

 

The EU AI Act defines three main categories of artificial intelligence systems according to the risk they pose:

 

Limited risk: AI systems such as chatbots, which generate text or images, pose a limited risk to people and are therefore subject to a "soft transparency" obligation: such systems must label the relevant products and content as AI-generated. One of the goals of this requirement is to ensure that users are informed [8].
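To illustrate what such a labeling obligation could look like in practice, here is a minimal sketch in Python. The function name, label wording, and metadata fields are hypothetical assumptions: the Act prescribes the duty to disclose, not any specific format.

```python
# Minimal sketch of a "soft transparency" label for AI-generated content.
# Function name, label wording, and metadata fields are illustrative
# assumptions, not a format prescribed by the EU AI Act.

def label_ai_output(text: str, model_name: str) -> dict:
    """Wrap a chatbot's raw output with an AI-generated-content disclosure."""
    return {
        "content": text,
        "disclosure": f"This content was generated by an AI system ({model_name}).",
        "ai_generated": True,  # machine-readable flag for downstream systems
    }

reply = label_ai_output("Here is a summary of your document...", "example-chatbot")
print(reply["disclosure"])
```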

 

High-risk AI: AI systems that pose a higher risk to human safety and health are regulated much more strictly. Alongside transparency, they must fulfill the following requirements:

- Assess compliance with, and impact on, fundamental rights;
- Register the AI system in the public EU database;
- Implement risk and quality management systems;
- Practice data governance (e.g. reducing bias);
- Ensure transparency (e.g. instructions for use, technical documentation);
- Provide human oversight (e.g. the so-called human-in-the-loop principle, meaning that no system operates without human supervision);
- Ensure accuracy, robustness, and cybersecurity (e.g. testing and monitoring).


These systems include:

- Medical devices;
- Systems built into vehicles;
- Recruitment, HR, and worker management systems;
- Education and vocational training systems;
- Systems governing access to services (e.g. insurance, banking);
- Systems used to manage critical infrastructure (e.g. water, gas);
- Emotion recognition systems;
- Biometric identification systems;
- Law enforcement, border control, migration, and asylum systems;
- Systems used in the administration of justice.

 

Therefore, large tech companies will have to operate on the EU market in compliance with these regulations, although how strict those regulations will be depends partly on the companies themselves. EU legislators tied the stringency of the obligations to the capability of an AI model, which is in turn measured by the computing power used to create and develop it. Since companies know how much computing power the development of their AI models will require, it is partly up to them how stringent the regulations applying to them will be. This was a kind of compromise from the EU with regard to the private sector [9].
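As a rough illustration of how a developer might estimate which side of a compute threshold a model falls on, the sketch below applies the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens, and compares the result with the 10^25 floating-point-operations figure discussed during the negotiations for general-purpose models with systemic risk. The model sizes are hypothetical and the heuristic is an approximation, not part of the Act's text.

```python
# Back-of-the-envelope estimate of training compute for an AI model,
# using the common heuristic FLOPs ~= 6 * parameters * training tokens.
# The threshold reflects the 10**25 FLOP figure discussed in the
# negotiations; the model sizes below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6.0 * n_parameters * n_tokens

models = {
    "small assistant (7B params, 2T tokens)": (7e9, 2e12),
    "frontier-scale model (1T params, 15T tokens)": (1e12, 15e12),
}

for name, (params, tokens) in models.items():
    flops = estimate_training_flops(params, tokens)
    tier = "above" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({tier} the 1e25 threshold)")
```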

 

Foundation models, the base models on which artificial intelligence applications are built, also fall under the high-risk category. These include the machine learning algorithms and statistical models that underpin complex AI systems such as generative AI.

 

Prohibited (unacceptable) risk: AI systems that pose a clear threat to fundamental human rights are strictly prohibited. This group includes:

- Social scoring systems;
- Emotion recognition systems in the workplace and in education;
- AI that manipulates human behavior;
- Untargeted scraping of facial images for facial recognition databases;
- Biometric categorization systems using sensitive characteristics;
- Crime prediction (predictive policing) systems;
- Law enforcement use of real-time biometric identification in public spaces.



However, as noted above, the Act allows law enforcement agencies, with court approval, to use certain otherwise prohibited AI applications to investigate or prevent strictly defined crimes.
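To summarize the classification above, the following sketch models the tiers as a simple lookup table. The category names mirror this article, and the use-case list is abridged and illustrative, not an exhaustive legal mapping.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (narrow law-enforcement carve-outs)
    HIGH = "high"              # allowed subject to strict requirements
    LIMITED = "limited"        # allowed with transparency obligations

# Abridged, illustrative mapping of use cases to the tiers described above.
USE_CASE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "untargeted facial-image scraping": RiskTier.PROHIBITED,
    "medical device": RiskTier.HIGH,
    "recruitment system": RiskTier.HIGH,
    "critical infrastructure management": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}

def classify(use_case: str) -> Optional[RiskTier]:
    """Return the listed risk tier for a use case, or None if not listed here."""
    return USE_CASE_TIERS.get(use_case)

print(classify("chatbot"))              # RiskTier.LIMITED
print(classify("weather forecasting"))  # None (not covered by this sketch)
```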

 

Act Enforcement Mechanisms

 

It may take several more weeks or months to finalize the Act's text and technical details. Afterwards, the Act will have to be formally approved by the European Parliament and the Council of the EU, following which it will enter into force. Under the EU AI Act, certain articles (especially those concerning prohibited AI) will apply 6 months after entry into force, while the Act in its entirety will become binding only from 2025, with companies having 2 years to meet the new standards established by the EU.

 

To monitor the enforcement of the Act effectively, a new body, the AI Office, has been established within the European Commission and tasked with overseeing enforcement across the EU. At the national level, this authority will rest with the relevant governmental agencies. To coordinate between member-state governments and the EU, an AI Board was created, consisting of representatives of the member states; it will serve as a coordination platform helping states adapt the Act to their national context and legislation. In addition, a scientific panel of independent experts will advise the AI Office on technical and methodological issues (especially regarding general-purpose artificial intelligence). Finally, an Advisory Forum will bring together all stakeholders, including civil society organizations, the private sector, and academia, to provide expert advice and recommendations to the AI Office. As such, the interests of all stakeholders will be considered.

 

The fines for non-compliance with the requirements of the Act are quite severe: depending on the size of the company and the gravity of the violation, they range from 1.5% to 7% of the company's global annual turnover [11].
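For a sense of scale, the short calculation below applies these percentage bands to a hypothetical company; the turnover figure is invented purely for illustration.

```python
# Illustrative fine range under the EU AI Act's percentage bands
# (1.5% to 7% of global annual turnover); the turnover is hypothetical.

MIN_RATE, MAX_RATE = 0.015, 0.07

def fine_range(global_turnover_eur: float) -> tuple:
    """Return the (minimum, maximum) fine implied by the percentage bands."""
    return global_turnover_eur * MIN_RATE, global_turnover_eur * MAX_RATE

low, high = fine_range(10_000_000_000)  # hypothetical EUR 10 billion turnover
print(f"Fine range: EUR {low:,.0f} to EUR {high:,.0f}")
# -> Fine range: EUR 150,000,000 to EUR 700,000,000
```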

 

EU AI Act and Georgia

 

The introduction of artificial intelligence systems in the Georgian public sector is still at an early stage, although the private sector already offers examples of the successful use of these technologies, such as remote verification systems, automatic document identification systems, communication automation programs, and many other tools. Nevertheless, IDFI's 2021 study, Artificial Intelligence: International Tendencies and Georgia - Legislation and Practice, showed that Georgia has no normative acts or ethical norms regulating artificial intelligence systems. The only exception is the Decree of the President of the National Bank of Georgia approving the regulation on risk management of data-based statistical, artificial intelligence, and machine learning models, which governs the use of artificial intelligence in the financial sector.

 

There is no definition of artificial intelligence at the legislative level in Georgia, which complicates not only risk management and regulation in this field, but also its study in general.

 

On the path of Georgia's European integration, it will be important to take into account the relevant regulations and directives of the European Union. It is therefore important for the country to start working on a regulatory legal framework for artificial intelligence today, taking into account the main principles of the EU AI Act.

 
