Artificial Intelligence: International Trends and Georgia - Legislation and Practice

Policy Document | 19 February 2021

Technological advances have substantially increased the availability of the technical and software means required to create services rooted in artificial intelligence. Artificial Intelligence (AI) has the potential to transform virtually every traditional approach to work or business process management for the better. Within the framework of the project "Promoting Greater Transparency and Ethical Standards of Using Artificial Intelligence (AI) in Georgia," financially supported by the International Center for Not-for-Profit Law, IDFI aims to study the functioning of artificial intelligence systems in the public sector. We are paying special attention to the challenges surrounding the use of this technology by public institutions in Georgia.

 

The use of artificial intelligence (AI) systems in the public sector can yield significant benefits in terms of simplifying decision-making, improving service delivery, and introducing many other innovative approaches. The use of artificial intelligence, however, carries a number of risks: it is linked to challenges in terms of transparency, accountability, freedom of expression, and the right to privacy. At the same time, in countries such as Georgia, where oversight mechanisms for law enforcement agencies are relatively weak and there are questions about the independence of the judicial branch, the problem of balancing the risks associated with artificial intelligence is becoming increasingly critical to address.

 

In the main part of the study, the information obtained on the use of artificial intelligence by public institutions is discussed in parallel with the basic standards of ethical artificial intelligence and the main principles of current Georgian legislation.

 

Methodology

 

The use of artificial intelligence by government agencies is linked with a number of practical as well as legislative problems. For example, there is no unified register of AI-based information systems, no normative definition of artificial intelligence, and no specific legislation on AI.

 

In order to identify possible cases of the use of artificial intelligence in a given public institution, IDFI used publicly available information indicating that said institution was using services containing elements of artificial intelligence in its operations. In addition, we requested information from the agencies that, given their specific functions, were highly likely to be using the aforementioned technology. After identifying these public institutions, IDFI addressed them with a request for relevant public information. 54 agencies received information request letters to this end.

 

It should be noted that Georgian legislation does not define the concept of artificial intelligence as such. At the same time, there is not always a clear line between artificial intelligence and other types of functional algorithms. To dispel this ambiguity and reduce the likelihood of receiving misinformation, before requesting public information, IDFI held a meeting with representatives of the relevant public bodies, one of the purposes of which was to familiarize them with the public information request in advance in order to ensure accurate answers to our inquiries.

 

In parallel with the requests for public information, we studied the existing regulatory framework governing artificial intelligence. Namely, while it is true that there is no special legislation in Georgia regulating the use of AI software by public institutions, the Constitution of Georgia, as well as other legislative acts, nevertheless sets out a number of normative requirements that would apply to the use of artificial intelligence by a public institution.

 

General Analysis of the Information Acquired as a Result of Requests for Public Information

 

IDFI requested information about IT systems, software, and artificial intelligence systems created and used by the 54 public institutions identified at the initial stage. The research process primarily focused on programs containing artificial intelligence. More specifically, IDFI requested the following information from public institutions:

 

1. Name, purpose, creator, and users of the software;

2. Description and parameters of the software, and its use in the decision-making process;

3. User manual/instruction and the technical manual;

4. Legal acts regulating the use of the software;

5. Rules ensuring compliance with ethical standards and personal data processing requirements;

6. Audit report and conclusion on the operation of the software.

 

IDFI received responses from 36 of the 54 agencies that were sent a request for information. Only 12 of these 36 agencies provided information on the software they used; most of the remaining responses were limited to a statement that the agency did not use artificial intelligence, while a significant number of the target agencies did not respond to the request for public information at all. On several occasions, such responses raised the suspicion that the agencies had taken advantage of the ambiguity of the term "artificial intelligence" to avoid disclosing information.

 

 

Taking into account the difficulty of distinguishing artificial intelligence systems from other automated software, we also received information from individual agencies on software that does not contain elements of artificial intelligence but is nevertheless highly automated. For example, the Revenue Service has introduced the Automated System for Customs Data (ASYCUDA), a VAT refund system, and an electronic application processing system. 12 agencies responded to the first and second points of the request for public information (i.e., the names and general descriptions of the programs). Of these, 8 agencies provided information only on software tools that do not have the features of artificial intelligence-based systems. According to the analysis of the responses received from public institutions, the use of artificial intelligence was confirmed in 4 agencies, while information on the use of artificial intelligence by the Prosecutor's Office of Georgia became available from public sources.

 

[Table: Institutions that use systems containing artificial intelligence]

 

 

Out of the 4 confirmed cases of the use of artificial intelligence, user manuals and regulatory acts for the corresponding information systems were provided only by the National Center for Education Quality Development, while information on ethics and personal data protection standards was shared only by the National Center for Education Quality Development and the National Tourism Administration. It should be noted that an audit report and conclusion on the functioning of the software were not received from any of the agencies: according to some of them, no audit of the systems and programs in question had been conducted, while others left the point unaddressed.

 

Conclusion

 

The introduction of artificial intelligence systems in the Georgian public sector is at an early stage of development, although in the private sector there are many successful examples of the use of this technology, such as remote verification systems, automatic document identification systems, communication automation programs, and many other tools. This sector has become particularly profitable in the face of the pandemic due to increased demand for remote services. Therefore, the potential for large-scale use of artificial intelligence in terms of increasing efficiency and cost-effectiveness in various processes has already become evident. However, there are also risks that can arise as a result of systemic misuse, technical glitches, and mismanagement of personal data.

 

Artificial intelligence is not just another electronic assistance tool. It substantially increases the governing capacity of the state, thereby increasing the temptation for its illicit use. This risk is particularly high in developing democracies. The study revealed that law enforcement is the only area of the public sector where the introduction of artificial intelligence is proceeding steadily, which indirectly indicates how imminent these risks are.

 

The present study highlights the lack of normative acts regulating artificial intelligence systems and of documents defining ethical norms in the target agencies. In order for the public sector to make the most of these technologies, access to information and transparency regarding the systems are as critical as technological readiness: the public must be informed about how these systems function, the risks of bias must be excluded, and an external observer must be able to scrutinize a system's possible shortcomings for it to gain a high degree of trust. The study has shown that information on the use of artificial intelligence is so scarce that it is difficult not only to control its use but also to exercise the right to a fair trial.

 

The public sector has two major roles to play in the development of artificial intelligence and therefore faces a dual challenge. First, it should promote the formation of a national ecosystem of startups and industry built around AI, attract investors and donors, use AI applications in different sectors, and achieve socio-economic growth and prosperity through artificial intelligence. At the same time, the government should create a regulatory framework that balances and reduces the threats, risks, and challenges associated with artificial intelligence systems, and that provides effective mechanisms for enforcing the adopted legal and ethical standards. It is also important to establish procedures for auditing the operation of artificial intelligence systems, to define responsibilities, and to make the results of such inspections available to the public. Appropriate steps in this direction should be taken from the very beginning of the introduction of artificial intelligence.

 


 
