The Digital Rights project, co-funded by the European Union, was presented by Béatrice Maccarini, Project Manager at Agenfor International Foundation. This program is part of the digital transformation of criminal justice, placing fundamental rights at the heart of innovation. Its goal is clear: to ensure that technology supports justice without ever replacing legal reasoning.

Designed for judges, lawyers, and prosecutors, it offers tools to use digital solutions responsibly and in compliance with European standards. The project is based on the principles of Directives 2013/48/EU and (EU) 2016/343, guaranteeing the right of access to a lawyer and the presumption of innocence, even in the face of challenges such as digital evidence, cross-border cooperation, and data-driven decisions.

Digital Rights also aims to be a platform for dialogue among Member States, promoting the sharing of experiences and adapting technological developments to legal requirements. Finally, it supports the creation of innovative tools such as the Virtual Judicial Academy, a space where technology helps professionals learn and collaborate without ever reducing procedural rights, for a justice system that is more transparent, accessible, and fair.


Looking back at the day of October 22, 2025

The session dedicated to responsibilities, risks, and skills in digital justice opened the second day of the conference.

Antonio De Nicolo reminded the audience that AI applied to justice is classified as “high risk” under the AI Act, due to the threats it poses to democracy and fundamental rights. He emphasized the ban on predictive justice and the irreplaceable role of judges in interpreting facts.

Giovanni Canzio, former President of the Italian Court of Cassation, broadened the debate by stressing that Europe advocates an anthropocentric approach based on transparency and human accountability, in contrast to the American model (market-driven deregulation) and the Chinese model (social control). He warned of the risk that Europe could become merely a consumer of technologies if it limits itself to producing rules without investing in innovation and training.

Serena Quattroccolo analyzed the impact of digital tools on the right to defense and the presumption of innocence, particularly in the context of imposed videoconferencing.

Finally, Sami Kodia addressed the challenges of AI-based legal translation, illustrating the dangers of linguistic errors and the need for human validation. All contributions converged on one point: innovation is unavoidable, but justice must remain human, transparent, and compliant with fundamental rights.


The discussion continued with the session “AI and Criminal Proceedings: Regulatory Framework and Ethical Questions,” moderated by Vasiliki Artinopoulou, Professor of Criminology at Panteion University of Athens and Director of the EPLO Institute on Crime & Criminal Justice. The session highlighted the major issues raised by the use of artificial intelligence in criminal proceedings.

Speakers emphasized that AI applied to justice is classified as “high-risk” under the AI Act, due to the threats it poses to democracy and fundamental rights. While these technologies can facilitate certain technical tasks such as legal research, transcription, or anonymization, they must never replace human decision-making.

Predictive justice is strictly prohibited: a person cannot be judged on anticipated behaviour or statistical profiles, but only on verifiable facts. The debates also highlighted the risks associated with the opacity of algorithms, the difficulty of explaining their results in court, and the protection of the right to a fair trial.

The issue of mandatory videoconferencing was also addressed, with a reminder of the safeguards provided by Article 6 of the ECHR to preserve physical presence and direct interaction with the judge. A consensus emerged: innovation is essential, but it must remain human-centered and ensure transparency, traceability, and the possibility of appeal, so that digital justice stays aligned with fundamental principles.


During her presentation entitled “The right to defence and the presumption of innocence in the age of algorithmic evidence”, Serena Quattroccolo, Professor of Italian and European Criminal Procedure and Vice-Dean for International Programmes at the University of Turin, addressed a key issue: protecting the right to defence and the presumption of innocence in the face of the growing use of artificial intelligence in criminal proceedings.

She emphasized that these safeguards, enshrined in Directives (EU) 2016/343 and 2013/48/EU, are at the heart of the European model of a fair trial. AI must never compromise these rights, particularly in terms of transparency, traceability, and human oversight.

The professor warned against the opacity of algorithms and the difficulty for lawyers to challenge results generated by automated systems, which threatens the balance of adversarial proceedings.

She also highlighted the risks associated with mandatory videoconferencing, which can limit direct interaction between the defendant and the judge—an element that is essential to ensure impartiality and fairness in the trial.

In her view, digitalisation should not reduce procedural rights but rather strengthen them by offering tools that improve access to justice and the quality of safeguards. Her message is clear: innovation must remain at the service of the individual, respect the principles of the ECHR and ensure the possibility of appeal against any decision influenced by technology.


Predictive technologies and algorithmic bias: two perspectives on the same risk

“Predictive technologies promise efficiency, but at what cost?” asks Sarah Holland-Kunkel, a human rights researcher. She warns of algorithmic biases that can distort judicial decisions: “These systems learn from historical data, which is often discriminatory. They can reproduce inequalities and compromise the fairness of the trial.” To illustrate her point, she cites the American software COMPAS, used to assess the likelihood of reoffending, which has been accused of discriminating against certain social groups. In her view, algorithmic logic, shaped by invisible biases, can turn justice into an opaque mechanism that is difficult to challenge.

Federico Cappelletti, a lawyer in Venice and co-chair of the European Law Observatory, shares this concern but adds a legal dimension: “Predictive justice is incompatible with our constitutional principles. No algorithm can replace the human reasoning behind a decision.” He points out that Italian law 132/2025 prohibits any delegation of judicial decision-making to a machine and insists on the need for independent audits and clear rules. “AI can assist, but never decide. Responsibility must remain human.”

Both experts agree on one point: innovation is inevitable, but it must be regulated to prevent the pursuit of efficiency from compromising fundamental rights and the fairness of the trial.


Mirko Jatsch, Senator for Justice and the Constitution of the Free Hanseatic City of Bremen, presented the challenges and progress of digitalization in the judicial system, focusing on the German context and European perspectives. He explained that digital transformation aims to modernize judicial processes, improve efficiency, and ensure better access to justice. In Germany, the national strategy is based on nine guiding principles, including the digitization of case files, secure communication, and system interoperability.

One of the major challenges concerns electronic case management in correctional facilities, where practices often remain hybrid (paper and digital). He emphasized the need for federal and European cooperation to harmonize standards and avoid fragmented solutions. Digitalization also includes the digital participation of inmates, with tools to access their rights, submit requests, and track their procedures.

Mirko Jatsch highlighted the importance of training judicial and prison staff to develop digital skills and support this change. He presented innovative projects, such as the use of virtual reality for staff training and preparation for complex scenarios, inspired by practices already tested in the health and security sectors. Finally, he stressed that digitalization should not be limited to technology: it must respect principles of security, confidentiality, and fundamental rights, while promoting transparency and trust in the judicial system.


The final discussion reaffirmed a strong consensus: innovation is essential for modernising justice, but it must remain at the service of humanity. Participants emphasised an anthropocentric approach based on transparency, traceability and human control, in accordance with the AI Act.

Predictive justice is strictly prohibited: no decision can be based on anticipated behaviour or statistical profiles, but only on verifiable facts. Fundamental rights—the right to defense and the presumption of innocence—must be guaranteed, in line with European Directives (EU) 2016/343 and 2013/48/EU.

The risks associated with algorithmic bias require independent audits, corrective mechanisms, and full transparency. Digitalization—whether through videoconferencing, AI-assisted translation, or virtual reality for training—must strengthen access to justice without compromising procedural safeguards. Training professionals is essential to master these tools and anticipate their impacts.

Finally, European cooperation and data protection are priorities to avoid fragmentation and maintain public confidence. The message is clear: innovate without compromising democratic values.

The European consortium

This project is funded by the European Union