There is a conversation happening across the European legal sector right now, and it keeps getting stuck in the same place. On one side: AI tools that promise faster document review, cheaper contract analysis, and broader access to justice. On the other: legitimate concerns about transparency, accountability, bias, and the integrity of the justice system. The discourse treats these as opposing forces — efficiency versus the rule of law — as though you must sacrifice one to gain the other.
That framing is not just unhelpful. It is actively dangerous. It lets firms adopt AI tools without proper governance because "we need to stay competitive," and it lets other firms refuse any adoption because "the risks are too high." Both positions lead to the same outcome: clients get worse service.
Only 31% of judicial operators consider themselves expert or very familiar with AI, according to a UNESCO survey — yet 92% call for mandatory AI training. The gap between what is needed and what exists defines the current moment.
The Three Conversations That Dominate — And What They Miss
The current debate among judges, bar associations, IT architects, and public innovation offices revolves around three themes. Each is necessary. None is sufficient.
Capacity and resources. How much does it cost to deploy generative AI? Where does the technical expertise come from? Goldman Sachs projects that generative AI could lift annual productivity growth by 1.5 percentage points and add roughly $7 trillion to global GDP over a decade. Venture funding for LegalTech startups hit roughly $700 million between January 2023 and February 2024. Public sector initiatives are also moving — Germany's Baden-Württemberg deployed an AI assistant (OLGA) for case categorization, and Spain's Ministry of Justice built Delfos, a GenAI-powered judicial search engine.
Technological possibilities. What can the tools actually do? Contract analysis, document drafting, legal research, predictive analytics, automated legal advice for underserved populations. The potential is real and well-documented.
Overcoming conservatism. How do we get sceptical judges and cautious clients to accept AI-driven processes? This includes building explainability, running pilot projects, and demonstrating tangible results.
Transparency Is Not a Feature Request — It Is a Legal Requirement
When legal professionals discuss AI transparency, they often treat it as a nice-to-have: a dashboard here, an explainability report there. This misses the regulatory reality.
The EU AI Act imposes specific transparency obligations, with particularly strict requirements for high-risk AI systems — and several categories land squarely in legal practice. Administration of justice (Annex III, point 8) explicitly covers AI intended to assist judicial authorities in fact-finding or applying the law. Employment decisions, access to essential services, and creditworthiness assessments all trigger high-risk classification.
For Swiss firms, the picture is more layered. The Swiss Federal Act on Data Protection (nFADP) has its own concept of "high-risk profiling" with no GDPR equivalent. Article 321 StGB on professional secrecy applies regardless of which data regulation is in scope. A firm routing client data through a US-hosted AI tool may be in breach of professional secrecy before any AI-specific regulation enters the picture.
The European Ethical Charter on the use of AI in judicial systems — adopted by the Council of Europe's European Commission for the Efficiency of Justice (CEPEJ) in 2018 — sets five principles that remain the benchmark:
- Respect for fundamental rights in AI design and implementation
- Non-discrimination — preventing AI from creating or deepening inequalities
- Quality and security — certified sources, secure environments, multi-disciplinary model design
- Transparency, impartiality, and fairness — accessible methods, external audits
- User control — no prescriptive AI; users must remain informed actors
These are not aspirational statements. They are the framework against which any AI deployment in the justice domain will be evaluated.
The Accountability Gap Is Where Firms Will Get Hurt
Accountability in AI-driven legal processes is genuinely complex. Traditional legal frameworks were not designed for situations where an algorithm contributes to a legal outcome. Three layers of responsibility need clear definition — and most firms have defined none of them.
Developer accountability. AI vendors must ensure their systems meet ethical standards and legal requirements. This includes regular audits and updates. Ask your vendor for their conformity declaration and audit schedule. If they cannot produce one, that tells you something.
Practitioner responsibility. Legal professionals using AI must understand the technology and its limitations. They must interpret AI outputs critically and integrate these insights into their practice responsibly. A UNESCO survey found that only 31% of judicial operators considered themselves expert or very familiar with AI — 41% described their knowledge as moderate, 20% as slight, and 7% acknowledged knowing nothing.
Institutional oversight. Governance structures must oversee the deployment and use of AI systems. Review boards, performance monitoring, compliance with legal and ethical standards — these are not optional add-ons.
The gap between what is needed and what exists is wide. Most firms deploying AI tools today have no formal accountability framework that addresses all three layers. When an AI-assisted output produces harm — a biased recommendation, a hallucinated case citation, a privacy breach — the question of who is responsible remains unanswered until it is litigated. By then, the reputational and financial damage is done.
Human Perception of Justice Still Matters
There is a dimension that technologists consistently underweight: how people experience justice. A legally correct outcome delivered by an opaque process feels unjust. A well-explained decision that takes slightly longer builds trust in the entire system.
The digital divide compounds this problem. Lutz identifies three levels: access to technology, digital skills and participation, and outcomes in terms of benefits and harms. AI-driven legal services risk deepening each level if they are designed primarily for the provider's convenience rather than the client's comprehension.
Legal design — applying human-centered design principles to legal systems and services — offers a practical methodology here. It is not about making things pretty. It is about ensuring that interfaces, communications, and processes are comprehensible to people with varying levels of technical proficiency. When a client receives an AI-assisted legal recommendation, they need to understand what influenced it, what alternatives exist, and what recourse they have if they disagree.
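As a sketch of what that could look like in a system's data model, the explanation owed to the client can be treated as a required artifact rather than an afterthought. The field names here are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ClientExplanation:
    """Record attached to any AI-assisted recommendation before it reaches a client."""
    recommendation: str
    key_factors: list[str]   # what influenced the output, in plain language
    alternatives: list[str]  # options considered and set aside, with one-line reasons
    recourse: str            # how the client can contest or escalate
    human_reviewer: str      # the named professional who signed off
```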
What This Means for Your Practice
If you are running a law firm or legal department in the DACH region, here is what I would focus on — in order of urgency.
Audit your current AI tools against the EU AI Act risk classification. Identify which tools fall into high-risk categories. For each one, verify whether your vendor has completed a conformity assessment and can produce technical documentation per Annex IV. If not, you have a compliance gap that needs closing before August 2026.
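A minimal sketch of such an inventory, in Python. The category set and tool entries are illustrative assumptions, not an authoritative reading of Annex III:

```python
from dataclasses import dataclass

# Illustrative subset of Annex III high-risk categories relevant to legal practice.
HIGH_RISK_CATEGORIES = {
    "administration_of_justice",     # Annex III, point 8
    "employment_decisions",
    "access_to_essential_services",
    "creditworthiness_assessment",
}

@dataclass
class AiTool:
    name: str
    vendor: str
    use_cases: set[str]
    conformity_assessment_done: bool = False
    annex_iv_docs_available: bool = False

    def compliance_gaps(self) -> list[str]:
        """Return open gaps for high-risk tools; an empty list means no flags here."""
        if not self.use_cases & HIGH_RISK_CATEGORIES:
            return []
        gaps = []
        if not self.conformity_assessment_done:
            gaps.append("no conformity assessment")
        if not self.annex_iv_docs_available:
            gaps.append("no Annex IV technical documentation")
        return gaps

# Hypothetical entries for illustration.
inventory = [
    AiTool("ContractReviewer", "VendorA", {"contract_analysis"}),
    AiTool("CaseTriage", "VendorB", {"administration_of_justice"}),
]

for tool in inventory:
    for gap in tool.compliance_gaps():
        print(f"{tool.name}: {gap}")  # close these before August 2026
```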
Establish a three-layer accountability framework. Document who is responsible at the vendor level, the practitioner level, and the institutional level. Define protocols for error detection, correction, and client notification. This does not need to be a 200-page policy document — it needs to be clear, assigned, and enforced.
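One way to keep that framework clear, assigned, and enforced is to store it as structured records that can be checked mechanically. A sketch with hypothetical roles and protocols:

```python
from dataclasses import dataclass

@dataclass
class AccountabilityEntry:
    layer: str                # "vendor" | "practitioner" | "institutional"
    responsible: str          # a named person or role, never just "the firm"
    error_protocol: str       # where detection and correction steps are defined
    client_notification: str  # who informs the client, and by when

# Hypothetical register for a single AI tool; all names are placeholders.
register = [
    AccountabilityEntry("vendor", "VendorA compliance contact",
                        "quarterly audit report", "n/a"),
    AccountabilityEntry("practitioner", "supervising partner",
                        "citation check before filing", "within 48 hours of a confirmed error"),
    AccountabilityEntry("institutional", "AI review board",
                        "monthly incident review", "per escalation policy"),
]

# Enforcement rule: a tool missing any of the three layers does not go live.
assert {e.layer for e in register} == {"vendor", "practitioner", "institutional"}
```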
Invest in AI literacy across the firm. The 92% of judicial operators calling for mandatory AI training are right. Your associates, partners, and support staff need to understand what the tools do, what they cannot do, and how to critically evaluate their outputs. This is not a one-day workshop — it is an ongoing programme.
Put the client perspective at the centre. Before deploying any new AI use case, ask: does this serve the client or does it serve the firm? If the honest answer is "primarily the firm," redesign the use case until it genuinely benefits the client. Efficiency gains that accrue only to the provider while the client bears the risk of opaque processes are not sustainable.
The Deepfake and Hallucination Problem Is Not Theoretical
The risks of generative AI in legal practice are not limited to bias and opacity. Two specific failure modes deserve attention because they are already occurring and have direct implications for the rule of law.
Deepfakes in legal proceedings. Generative AI can produce fabricated audio, video, and documentary evidence that is increasingly difficult to distinguish from authentic material. Courts are not yet equipped to systematically detect AI-generated evidence. For litigation departments, this means both offensive risk (opposing parties submitting fabricated evidence) and defensive risk (your own AI tools inadvertently generating content that could be challenged as inauthentic). The evidentiary chain must now include provenance verification for any AI-touched material.
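At the intake level, a provenance log can start with something as simple as hashing every item of material when it arrives and recording its source. A minimal sketch, with the caveat that a hash proves integrity since intake, not authenticity of origin:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_provenance(evidence: Path, source: str, log_file: Path) -> str:
    """Hash an evidence file at intake and append an entry to a JSON-lines log."""
    digest = hashlib.sha256(evidence.read_bytes()).hexdigest()
    entry = {
        "file": evidence.name,
        "sha256": digest,
        "source": source,  # who supplied the material, e.g. opposing counsel
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    with log_file.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return digest

# Re-hashing later and comparing against the log detects any alteration after intake.
```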
Hallucinations in legal research. Stanford research has documented that leading AI legal research tools hallucinate — they fabricate case citations, invent statutory provisions, or misstate holdings. A 2024 study assessed the reliability of major AI legal research platforms and found non-trivial hallucination rates even in tools specifically designed for legal professionals. If your associates are using AI research tools without systematic verification of every citation, you are carrying risk that no professional liability insurer has priced correctly.
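The practical countermeasure is a hard gate: no AI-suggested citation reaches a filing until it resolves against a trusted source. A sketch of the shape such a gate could take, using Swiss BGE citations as the example; the verified set stands in for whatever official reporter or database your firm actually queries:

```python
import re

# Simplified pattern for Swiss Federal Supreme Court citations (BGE);
# real citation formats vary widely and need more robust parsing.
BGE_PATTERN = re.compile(r"\bBGE\s+\d+\s+[IVX]+\s+\d+\b")

# Hypothetical stand-in for a query against an official reporter database.
VERIFIED = {"BGE 145 III 72"}

def unverified_citations(draft: str) -> list[str]:
    """Return cited cases that could not be resolved; an empty list clears the gate."""
    return [c for c in BGE_PATTERN.findall(draft) if c not in VERIFIED]

draft = "As held in BGE 145 III 72 and BGE 199 IV 9, the duty applies."
print(unverified_citations(draft))  # ['BGE 199 IV 9'] -> flag before filing
```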
Both failure modes share a common root: generative AI systems optimize for plausibility, not for truth. In contexts where truth is a legal requirement — and in the justice system, it always is — this optimization mismatch creates systemic risk.
The Rule of Law Is Not the Obstacle — It Is the Competitive Advantage
The firms that will lead in the next decade are not those that adopt AI fastest. They are those that adopt AI in ways that demonstrably uphold transparency, accountability, and fairness. In a market where trust is the most valuable currency a law firm holds, a rigorous governance framework is not a cost centre — it is a differentiator.
The rule of law and legal tech efficiency are not opposing forces. They are complementary constraints. Build within both, and you build something durable. Sacrifice one for the other, and you build something that will not survive its first serious test.
The question is not whether your firm will use AI. The question is whether, when it matters, you can explain exactly how you used it, who was responsible, and why the client was protected. That is the standard. Everything else is noise.