The Council of Europe Framework Convention on Artificial Intelligence: Core Content and Obligations for Georgia

News | Civic Tech and Innovations 30 September 2024

In September 2024, Georgia signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This is the first binding international treaty of its kind to establish a legal framework for balancing the use of artificial intelligence systems with the protection of human rights, support for technological progress, and promotion of innovation. The convention outlines foundational principles governing AI use in signatory states, ensuring that AI advancements align with democratic values and the rule of law.



Overview of the CoE Convention

 

The Council of Europe began work on the Artificial Intelligence Convention in 2019, with the Committee on Artificial Intelligence (CAI)—an intergovernmental body—leading the drafting process. Participants included the 46 member states of the Council of Europe, observers (including the USA, Japan, the European Union, Australia, and Argentina), and 68 international stakeholders from civil society, academia, the private sector, and various international organizations.

 

This is the first international document on artificial intelligence and human rights to establish overarching standards for safeguarding human rights in the use of AI systems. The convention becomes legally binding once ratified at the national level by each signatory country. Alongside Georgia, representatives from the European Union, the United States, the United Kingdom, Andorra, Iceland, Norway, Israel, San Marino, and Moldova have also signed the document.



Fundamental Principles of the Convention

 

Activities within the lifecycle of AI systems must comply with the following fundamental principles:

 

 - Human dignity and individual autonomy

 - Equality and non-discrimination

 - Respect for privacy and personal data protection

 - Transparency and oversight

 - Accountability and responsibility

 - Reliability

 - Safe innovation



Remedies, Procedural Rights and Safeguards

 

 - Document relevant information regarding AI systems and their use, and make it available to affected persons;

 - Ensure that this information is sufficient to enable the persons concerned to challenge decisions made through, or substantially based on, the use of the system, and to challenge the use of the system itself;

 - Provide an effective possibility to lodge a complaint with the competent authorities;

 - Provide effective procedural guarantees, safeguards and rights to affected persons where an AI system significantly impacts the enjoyment of human rights and fundamental freedoms;

 - Provide notice that one is interacting with an AI system and not with a human being.



Risk and Impact Management Requirements 

 

 - Carry out risk and impact assessments of actual and potential impacts on human rights, democracy and the rule of law, in an iterative manner;

 - Establish sufficient prevention and mitigation measures based on the results of these assessments;

 - Allow the authorities to introduce bans or moratoria on certain applications of AI systems ("red lines").



The convention, like the EU AI Act, adopts the definition of artificial intelligence provided by the Organisation for Economic Co-operation and Development (OECD), defining AI as follows:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.



The Scope of the Convention

 

The Framework Convention covers the use of AI systems by public authorities – including private actors acting on their behalf – and private actors.

 

The Convention offers Parties two modalities to comply with its principles and obligations when regulating the private sector: 

 

 - Parties may opt to be directly obliged by the relevant Convention provisions;

 - Alternatively, they may take other measures to comply with the treaty's provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law.

 

As with the EU Artificial Intelligence Act, signatory states to the Council of Europe Convention are not obligated to apply its provisions to national security, defense, or the research and development of artificial intelligence. However, in cases of AI testing, respect for human rights, democracy, and the rule of law must still be upheld.



Implementation of the Convention and Future Plans

 

To support the implementation of the Convention and enhance coordination with the Council of Europe, a Conference of the Parties has been established, comprising representatives from each signatory country. This platform serves a dual purpose: firstly, it facilitates the convention's implementation by enabling information exchange among parties and offering feedback to the Council of Europe. Secondly, it functions as a supervisory body, requiring countries to submit a report on relevant reforms within two years of signing the Convention, with subsequent periodic reporting thereafter.

 

By signing the Council of Europe Convention, states have demonstrated their commitment to using and developing artificial intelligence in alignment with human rights, democracy, and the rule of law. Upon ratification at the national level, the convention’s provisions will become legally binding. Each signatory is obligated to establish an independent supervisory mechanism to monitor compliance with the Convention. This body will also be tasked with raising public awareness about AI usage, fostering fact-based public discussions, and conducting consultations with interested stakeholders.

 

The Ministry of Justice of Georgia has prioritized two main areas: integrating modern technologies into justice and service delivery, and developing legal regulations to ensure that technology and artificial intelligence use fully aligns with human rights standards. To support these goals, the Ministry has announced plans to open an Artificial Intelligence Law Center in the near future.

 

 
