Anna Pelizzari

How the AI Act Enhances the Need for EU Regulation on Evidence in Criminal Proceedings

In January 2020, a shoplifter stole several watches from a shop in Detroit. The authorities used facial recognition technology to identify the thief based on the store’s security footage. They subsequently arrested a man named Robert Williams after the system flagged him as a positive match. However, it turned out that he was innocent. The system had misidentified him, leading Robert to sue the city for his wrongful arrest.


In August 2023, neighbours complained about a strong smell coming from the house of a man named Samuel Grover. When the police investigated, they uncovered a drug production operation in his garage. Samuel had been manufacturing methamphetamine for over a year, relying on ChatGPT as his main source for instructions and guidance. To demonstrate this, for the first time in history, the court summoned ChatGPT itself as a witness during the criminal proceedings. The prosecution questioned the system through a projected screen, asking about the specific queries Samuel had made and establishing his malicious intent.


One of these stories is real. The other, while fictional, is not far-fetched. ChatGPT has not been called to testify in court – yet. AI usage is visibly growing, and one field where this is evident is criminal justice. In recent years, law enforcement authorities have increasingly relied on AI-driven tools. These are particularly useful for burdensome tasks like gathering and processing evidence. Law enforcement agencies have always faced numerous challenges: budget and time constraints, a dangerous margin for human error, and the ever-evolving nature of crime. AI is a valuable asset in overcoming these difficulties due to its ability to process vast amounts of data quickly and efficiently. The "Artificial Intelligence and Robotics for Law Enforcement" report summarises some examples of AI use: autonomous robotic patrol systems, devices that predict where and what types of crimes are likely to occur, and computer vision software to identify stolen cars. Additional tools include facial recognition, as seen in the Robert Williams case, and biometric surveillance.


If criminal proceedings arise, AI-generated evidence can be used in court to support the authorities’ claims. However, this raises a critical issue: how does introducing this kind of evidence affect the defence rights of the accused? The cornerstone of criminal procedure is the right to a fair trial, enshrined in international instruments such as Article 6 of the ECHR and Article 14 of the ICCPR, and reflected in many Member States’ national rules. Equality of arms is an inherent feature of a fair trial, ensuring that both the prosecution and the defence have equal opportunities to present their case and challenge the other side’s arguments. This is accomplished by ensuring access to all relevant evidence.


While the introduction of AI has increased efficiency, it has also added complexity and opacity to the system. A gap in equality arises when defence teams are confronted with high-tech evidence that legal expertise alone cannot make sense of. Equality of arms cannot be achieved if one side presents traditional evidence, such as documents and eyewitnesses, while the other exhibits screens full of code carrying an air of authority.


The complexity of modern evidence also poses challenges for judges. When evaluating evidence, they rely on a proportionality assessment, considering whether a piece of evidence serves a legitimate purpose and whether its inclusion outweighs any potential harm. If judges do not understand the technology presented to them, they will struggle to carry out this analysis with the required depth. The discussion goes beyond the evaluation of evidence; it questions whether judges are adequately prepared to incorporate AI outputs into their decisions. In a recent case in Colombia, a judge ruling on an appeal of a "tutela" claim, a judicial mechanism for the protection of fundamental rights, turned to ChatGPT to “extend the arguments of his decision.” The judge did not state whether he had verified the information provided, which opened a discussion on the credibility of that information, and therefore of the decision itself.


Unregulated AI usage threatens to drop an atomic bomb on traditional criminal law principles. Despite the growing importance of this issue, regulatory measures are struggling to keep pace. At the international level, the OECD created the first intergovernmental standards on AI, encouraging States to incorporate the use of AI into their systems while respecting human rights and democratic values. The Council of Europe adopted a text in 2018 setting out ethical principles on the use of AI in judicial systems. AI-specific regulation and case law in the EU Member States are still emerging. Although these texts may be a helpful point of departure, they remain very limited.


This leads us to the key point: why should the EU regulate this matter, rather than leave it to the discretion of the Member States? Traditionally, most evidence-related issues have been left to national law. No comprehensive EU regulation exists, and none of the directives on criminal procedural safeguards, adopted between 2010 and 2016, specifically address evidence. The result is significant variation in how evidence is gathered, what is admissible in court, and the consequences of exclusion. This discrepancy may be acceptable when dealing with paper documents or telephone tapping, but it falls short in the context of AI evidence, where sensitive biometric data and facial recognition are at stake. Nor are States all on the same page: in June 2023, the French Sénat passed a law authorising the testing of facial recognition in France for a period of three years, as part of the fight against terrorism. Minimum standards must be set; otherwise, there is a danger of leaving room for harmful applications.


Currently, there is no EU regulatory framework specifically addressing the use of AI evidence by legal authorities and the imbalance it creates. Although there is legislation concerning evidence in criminal proceedings, establishing rules on electronic evidence, on custodial sentences, and on the appointment of legal representatives for the purpose of gathering electronic evidence, it does not account for the complexities of AI-generated data. And although the European Court of Human Rights has produced some case law on mass surveillance, nothing has been established on the limits of AI use by law enforcement agencies.


A light at the end of the tunnel came with the highly anticipated "Artificial Intelligence Act" (AI Act), which came into force on 1 August 2024. The regulation introduced a safety net of measures to ensure that AI tools are developed and used in accordance with the Union’s values, principles, and fundamental rights. Recital 59 acknowledges the issue of AI evidence, stating that “certain uses of AI systems [by law enforcement authorities] are characterised by a significant degree of power imbalance.” It highlights “the difficulty of challenging their results in court, particularly by natural persons under investigation.”


The AI Act sheds light on an ongoing discussion about the need for solutions. One obvious answer is to make these technologies more accessible to the public. The Regulation made important progress in this area by imposing transparency obligations for certain AI systems, which it categorises by risk level: unacceptable, high, and lower risk. Unacceptable-risk practices are outright banned, while high-risk systems must meet specific conditions before deployment. Most AI systems used in law enforcement fall into the high-risk category, meaning they must comply with the requirements set out from Article 8 onwards, including establishing a quality management system to ensure transparency, using high-quality data to train AI systems, and conducting a mandatory fundamental rights impact assessment. The Regulation also prohibits AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.


However, transparency and a few warnings in the AI Act recitals are not enough. The EU needs to take further regulatory action to clarify the treatment of AI evidence in proceedings. This requires establishing minimum standards for all Member States regarding the admissibility of such evidence, its probative value, and the consequences of its exclusion. The last part of this article proposes further ideas on how the AI Act, or an entirely new dedicated regulation, could better legislate on AI in criminal law.


For starters, additional measures can be devised to ensure the reliability of the data. An important notion in this regard is the chain of custody, referring to the process of documenting how digital evidence has been handled in the course of an investigation. Detailed records must be kept of how data is obtained and processed, including the location and reliability of sources; this involves using timestamps and logging all actions related to the data, such as its collection, alteration, and transfer.
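To make this more concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical names and a toy data model) of what such a record-keeping scheme could look like: every action on a piece of evidence is logged with a timestamp, the actor and the action taken, and the entries are chained together with hashes so that later tampering becomes detectable. A real forensic system would of course rely on established standards, write-once storage and signed entries rather than this simplification.

```python
# Illustrative sketch of a hash-chained chain-of-custody log.
# All names are hypothetical; this is not a production forensic tool.
import hashlib
import json
from datetime import datetime, timezone

class CustodyLog:
    def __init__(self, evidence_id: str):
        self.evidence_id = evidence_id
        self.entries = []  # ordered record of every action on the item

    def record(self, actor: str, action: str, details: str) -> dict:
        entry = {
            "evidence_id": self.evidence_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,      # who handled the data
            "action": action,    # e.g. "collected", "transferred", "analysed"
            "details": details,  # source, location, tool used, etc.
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        # Hash the entry together with the previous hash, so that altering
        # an earlier record breaks the chain and becomes detectable.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and check the links between entries."""
        prev = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Example: documenting how a piece of AI-derived evidence was handled.
log = CustodyLog("case-042/facial-recognition-match")
log.record("Officer A", "collected", "export of the facial-recognition match report")
log.record("Lab B", "transferred", "copied to forensic server, checksum verified")
log.record("Analyst C", "analysed", "reviewed match score and source frames")
print(log.verify())  # True while the log is intact
```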


Regarding admissibility, a new evidentiary rule should be added to existing national frameworks. Such provisions typically follow a structured format, setting out a general principle of admissibility followed by exclusionary rules, for instance for evidence obtained illegally or deemed prejudicial to the defendant’s rights. An example of such a clause can be found in this insightful article:

“The court shall exclude otherwise relevant evidence when its source is data from a technological source the reliability and accuracy of which cannot reasonably be determined.”


Another ground for inadmissibility should be when a party to the proceedings resorts to AI to gather evidence and strengthen their arguments but does not inform the court. The same risk exists for judicial decisions themselves, as the Colombian example shows. It is worth discussing whether this could provide a basis for annulment of the decision.


If the evidence is deemed inadmissible, the court must inform the relevant authorities so that corrective measures can be taken. It is not enough to simply set the evidence aside; a declaration of inadmissibility should be both the first and the last concerning evidence from a given device, which means the underlying errors must be identified and corrected. Specific criteria also have to be set for judges to evaluate the probative value of AI evidence, such as the reliability of the algorithms, the representativeness of the data used to train the systems, and the systems’ transparency.


In addition to drafting regulations, it is crucial to equip legal professionals with technical knowledge. There is no use in establishing extensive AI rules if judges and lawyers lack the expertise to apply them in practice. This training should not begin in the courtroom, but rather be instituted from the start of their academic journey. It would include elective, if not mandatory, university courses, certification programmes and seminars with AI experts, so that attorneys and judges understand how AI tools work. This is the only way to avoid arbitrary interpretations and to prevent professionals from having to learn the intricacies of AI on the fly during proceedings, which the complexity of these systems does not allow.


These are just a few of the many measures worth thinking about to ensure that all parties have a fair opportunity to challenge the validity and authenticity of AI evidence. The purpose of this article is not to dismiss the benefits of AI in the criminal justice system. It is true that these tools have made mistakes in the past, and they will continue to do so in the future. However, for every Robert Williams misidentified by a machine, there are hundreds of others who are spared from false accusations by human witnesses with unreliable memories. Acknowledging this imbalance doesn’t mean it’s impossible to fix; on the contrary, it recognises that AI is here to stay, and these challenges need to be anticipated and addressed now.


Striking a balance between the valuable information AI provides and the rights of vulnerable defendants is not just desirable; it is essential. This way, we’ll be better prepared for the day when ChatGPT is called to testify in a courtroom, ready to view it as a powerful asset rather than a threat.


