Recent technological developments in the field of artificial intelligence (AI), and in particular the emergence of Generative AI, have inevitably brought significant change to the legal industry. Generative AI refers to machine learning models that can generate new textual, visual or auditory content based on the data on which they were trained.

Despite the challenges its use poses, the integration of this form of artificial intelligence into the legal services industry can significantly enhance both the quality of the services provided by legal professionals and the delivery of justice within a rather "obsolete" current legal system.

(A) Advantages of AI for legal professionals

In the legal profession in particular, AI is a primary tool for increasing the productivity of legal professionals, mainly by reducing the time spent on legal issues of minor significance and/or administrative tasks. Modern artificial intelligence tools make it possible to conduct legal research across extensive databases of legislation, case law and legal texts in seconds, effectively accelerating legal research and improving efficiency, for example by identifying precedent on a legal issue or court decisions relevant to a pending case. At the same time, one of the innovations of Artificial Intelligence is the automatic generation of original legal texts, such as contracts and legal motions, and the automated proofreading thereof, which reduces the time required for their drafting and review.

Many law firms and consulting companies around the world have already invested in the adoption and use of Artificial Intelligence tools, such as Lexis+ AI, Harvey AI and CoCounsel AI, to improve the delivery of their services.

In addition, one of the capabilities that Artificial Intelligence now offers is so-called "predictive analytics", i.e. the ability of an AI algorithm to analyze historical court case data in order to predict the likely outcome of litigation. This would allow a strategic weighing of interests before initiating legal proceedings that may prove "unnecessary" and costly for the client, thus contributing to the reduction of the volume of lawsuits already accumulated before the Greek courts.
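By way of illustration only, the following minimal Python sketch shows what such a predictive analytics model might look like in practice: a simple classifier trained on a handful of hypothetical historical cases, each described by a few numeric features, which then estimates the probability of a favourable outcome for a new case. The features, figures and model choice are assumptions made for demonstration purposes and do not correspond to any specific legal-tech product.

```python
# Illustrative sketch only: a toy "predictive analytics" model that estimates
# the likelihood of a favourable litigation outcome from historical case data.
# The features and figures below are hypothetical assumptions for demonstration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Hypothetical historical cases: claim amount (EUR), number of precedents
# supporting the claim, and whether the outcome was favourable (1) or not (0).
history = pd.DataFrame({
    "claim_amount":          [10_000, 250_000, 5_000, 75_000, 40_000, 500_000, 12_000, 90_000],
    "supporting_precedents": [3, 0, 4, 1, 2, 0, 5, 1],
    "favourable_outcome":    [1, 0, 1, 0, 1, 0, 1, 0],
})

X = history[["claim_amount", "supporting_precedents"]]
y = history["favourable_outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train a simple classifier on the historical cases.
model = LogisticRegression().fit(X_train, y_train)

# Estimate the probability of success for a hypothetical new case.
new_case = pd.DataFrame({"claim_amount": [60_000], "supporting_precedents": [2]})
print(f"Estimated probability of a favourable outcome: {model.predict_proba(new_case)[0, 1]:.0%}")
```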

As regards the judicial system, Artificial Intelligence tools could effectively reduce the significant delays in the issuance of court decisions by providing data on the backlog of pending cases as well as on the estimated time required for issuing decisions, thus allowing appropriate workload management and more efficient administrative work in the courts.

In a rapidly evolving world, Artificial Intelligence could take over a great fraction of legal work, offering the potential to completely reshape the industry. Its adoption will allow legal professionals to focus on higher-order legal tasks such as negotiation techniques and strategy development, while also enabling them to invest in strengthening client relationships and improving the quality of their services. In any case, human intervention will remain necessary to ensure the accuracy and validity of the output generated by AI applications.

However, despite its significant role in the transformation of the legal industry, Artificial Intelligence raises reasonable concerns regarding the protection of human rights, the right to privacy, the protection of personal data, transparency and possible discrimination (“algorithmic bias”). For this reason, the primary objective should be to balance the potential risks of its use with its undeniable benefits by adopting an effective legal framework that can ensure its proper use and beneficial operation.   

(B) Regulatory framework for Artificial Intelligence systems – New Regulation (EU) 2024/1689 

With the adoption of the recent Regulation (EU) 2024/1689 (the "Regulation"), published on 12.07.2024 in the Official Journal of the European Union, a single regulatory framework for artificial intelligence (AI) has been introduced for the first time.

This is a milestone Regulation, which adopts a human-centred approach that protects fundamental human rights and freedoms while promoting innovation and investment in the AI sector. The majority of its provisions will apply in all EU Member States from 2 August 2026.

In particular, the Regulation introduces a system for assessing the risk level of AI systems and establishes corresponding obligations for providers who place AI systems on the market or put them into service, for importers, distributors and product manufacturers, as well as for deployers, i.e. any natural or legal person using an AI system under its authority.

Among the obligations established by the Regulation are ensuring the quality of the data entered into the system, as well as the transparency, accuracy and security of AI systems.

Regarding the use of AI systems in the legal industry and especially in judicial practice, the following provisions introduced by the Regulation are relevant: 

(i) High-risk AI systems (Article 6)

The Regulation classifies certain AI systems as high risk, subject to strict risk assessment and risk management procedures (Article 6 of the Regulation).

For a system to be classified as high-risk, it must pose a significant risk of harm to the health, safety or fundamental rights of natural persons and materially influence the outcome of the decisions adopted. Among others, high-risk systems include AI systems "intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution" (Article 6(2) of the Regulation).

High-risk AI systems must comply with the requirements set out in the Regulation (Article 8). In this context, an integrated risk management system is established for high-risk AI systems, through which risks are recorded and assessed, incident logs are kept, information is communicated to the competent authorities and obligations are imposed on the persons involved. Providers of such systems must comply with requirements such as:

  • The preparation of a risk analysis and the implementation of mitigation measures;
  • Compliance with quality, safety and transparency standards; and
  • Ensuring human oversight and review of the decisions taken by AI systems.

(ii) Provisions for General Purpose Models

The Regulation also introduces rules on general-purpose AI models (Article 51), aiming to enhance transparency and prevent abuse. General-purpose AI models (models trained on large amounts of data using self-supervision at scale, which display significant generality and competently perform a wide range of distinct tasks – Article 3(63)) may, under certain conditions, be classified as models with systemic risk, which must meet specific transparency and accountability requirements. Typical examples of general-purpose models are the well-known and already widely used AI chatbots.

To ensure compliance with the above obligations, the Regulation establishes a system of sanctions providing for administrative fines in the event of infringement and allows Member States to introduce additional sanctions. The amount of the fine depends on the type of infringement and is calculated either as a fixed amount or as a percentage of the total worldwide annual turnover, whichever is higher.
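As a purely arithmetical illustration of the "whichever is higher" rule, the short Python sketch below compares a fixed ceiling with a percentage of annual turnover and returns the greater of the two. The tier used (EUR 15,000,000 or 3% of total worldwide annual turnover) reflects one of the tiers provided for in the Regulation, while the turnover figure is a hypothetical assumption for illustration only.

```python
# Illustrative arithmetic only: the "whichever is higher" rule for administrative fines.
# The tier used (EUR 15,000,000 or 3% of total worldwide annual turnover) is one of the
# tiers provided for in the Regulation; the turnover figure is a hypothetical example.

def maximum_fine(fixed_ceiling_eur: float, percentage: float, annual_turnover_eur: float) -> float:
    """Return the higher of the fixed ceiling and the percentage of annual turnover."""
    return max(fixed_ceiling_eur, percentage * annual_turnover_eur)

# Hypothetical undertaking with EUR 800 million worldwide annual turnover.
fine_cap = maximum_fine(fixed_ceiling_eur=15_000_000, percentage=0.03, annual_turnover_eur=800_000_000)
print(f"Maximum applicable fine: EUR {fine_cap:,.0f}")  # EUR 24,000,000 (3% exceeds the fixed ceiling)
```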

In view of the above, the new Regulation (EU) 2024/1689 on artificial intelligence is a decisive step and perhaps a first "attempt" to regulate a rapidly evolving new technology. The ultimate goal is to ensure that the development and use of AI remain in line with fundamental human values, in particular human rights, democracy and the rule of law.