If you're using an AI tool to review contracts, screen candidates, or assist with legal research — and your firm operates in Switzerland or Germany — you are already inside a compliance framework that has real teeth. The EU AI Act's high-risk provisions take effect in August 2026. For Swiss firms serving EU clients, a second layer applies on top: the Swiss Federal Act on Data Protection (FADP) and, for any tool handling client communications, the professional secrecy rules under Art. 321 of the Swiss Criminal Code (StGB). These are not the same regulation, and compliance with one does not guarantee compliance with the other.
Two Frameworks, Not One
German firms are straightforwardly inside the EU AI Act. Swiss firms face a dual obligation:
- EU AI Act: Applies if you serve EU-resident clients or deploy AI tools developed for the EU market — which covers virtually every major legal AI platform.
- Swiss FADP (nFADP, in force since 1 September 2023): Applies to all Swiss domestic operations. The FADP is not a copy of GDPR. It has its own definitions, its own penalty structure (up to CHF 250,000 per responsible individual), and — critically for law firms — its own concept of "high-risk profiling" with no GDPR equivalent.
- Art. 321 StGB (Anwaltsgeheimnis): Applies regardless of which data regulation is in scope. Breach of professional secrecy is punishable by up to three years' imprisonment or a monetary penalty. The offence is subject to the ordinary Swiss criminal-law limitation rules.
A firm using a US-hosted AI tool to draft client documents may be in breach of Art. 321 StGB before the EU AI Act even comes into the picture.
The Four-Tier Risk Framework
The EU AI Act sorts AI systems into four tiers:
Unacceptable risk (banned outright). Social scoring by public authorities, AI exploiting psychological vulnerabilities, and most real-time remote biometric identification in public spaces. No legal tool should touch this category — but confirm your vendors are not using a banned technique in their underlying model stack.
High-risk (Annex I and Annex III). This is the tier that matters most to law firms. High-risk systems require mandatory conformity assessments, technical documentation, human oversight mechanisms, and registration in the EU database before deployment. Several Annex III categories land directly in legal practice.
Limited risk (transparency obligations). Chatbots and AI content tools that interact with humans must disclose that the user is talking to AI. Most general-purpose legal research assistants and client-facing intake tools land here — unless a higher-risk factor applies.
Minimal risk. Spam filters, AI-assisted document formatting, and most recommendation engines. These only need to satisfy general product safety and applicable data protection rules.
High-Risk Categories That Affect Legal Practice
Three Annex III categories hit law firms directly:
Employment and HR management (Annex III, point 4). Any AI used to recruit, filter candidates, assess performance, or make promotion or termination decisions is high-risk. If your firm uses AI to screen lateral hire applications or allocate work based on associate metrics, that system is high-risk. The obligation sits with the deployer — your firm — not just the vendor.
Administration of justice and democratic processes (Annex III, point 8). AI intended to assist judicial authorities in fact-finding or applying the law is explicitly listed. The scope is still being interpreted, but firms doing litigation support or regulatory advice should monitor how national regulators apply this category.
Access to essential services (Annex III, point 5). AI evaluating creditworthiness, insurance risk, or eligibility for public benefits is high-risk. Law firms advising financial institutions or operating in structured finance need to check whether tools embedded in client workflows trigger this classification.
What High-Risk Classification Requires
If a system is high-risk, your firm as deployer must:
- Conduct or verify a conformity assessment. For Annex III systems, vendor self-assessment is usually permitted — but the documentation must be thorough and auditable. Ask your vendor for their conformity declaration now.
- Maintain technical documentation per Annex IV: system purpose, architecture, training data provenance, performance metrics, and known limitations.
- Implement human oversight. The system must allow a human to understand, monitor, and override its outputs. Assign responsibility explicitly and document staff training.
- Register the system in the EU database for high-risk AI systems (Articles 49 and 71) before use.
- Establish a risk management system (Article 9) covering the full lifecycle, including post-market monitoring.
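The five deployer obligations above lend themselves to a simple per-system tracking structure. A minimal sketch, with illustrative status labels:

```python
# The five deployer obligations, phrased as checklist items (wording is ours).
DEPLOYER_OBLIGATIONS = [
    "conformity assessment verified (vendor declaration on file)",
    "Annex IV technical documentation obtained",
    "human oversight assigned and training documented",
    "system registered in EU high-risk database",
    "risk management and post-market monitoring in place",
]

def compliance_gaps(status: dict) -> list[str]:
    """Return the obligations not yet satisfied for one high-risk system."""
    return [item for item in DEPLOYER_OBLIGATIONS if not status.get(item, False)]

# Example: only the first two obligations are done, so three gaps remain.
gaps = compliance_gaps({
    DEPLOYER_OBLIGATIONS[0]: True,
    DEPLOYER_OBLIGATIONS[1]: True,
})
```

Keeping the gap list per system makes the August 2026 status reportable at any time, which is exactly what client due diligence requests will ask for.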
Penalties for deployer violations reach up to €15 million or 3% of global annual turnover, whichever is higher.
At a glance:
- €15M or 3% of global annual turnover, whichever is higher: maximum EU AI Act deployer penalty.
- CHF 250,000: maximum FADP penalty, levied on the responsible individual, not the firm.
- Art. 321 StGB: a separate Swiss secrecy regime that can create criminal-law exposure before the AI Act analysis is even complete.
The Swiss Dimension: SAV Guidelines and Anwaltsgeheimnis
For Swiss firms, the EU AI Act compliance question is nested inside a more immediate one: does your use of this AI tool comply with Art. 321 StGB?
The Swiss Bar Association (SAV/FSA) issued AI guidelines in June 2024 — the most practical compliance framework available to Swiss law firms. The SAV identifies three pathways for using AI tools that process client data without breaching professional secrecy:
- Internal or local deployment. The AI runs within the firm's own infrastructure. Client data never leaves the firm's network. This is the strongest protection and the clearest pathway.
- Compliant outsourcing. The provider qualifies as a Hilfsperson (auxiliary person) under Art. 321 StGB. The Federal Court confirmed in BGE 145 II 229 that cloud providers can qualify — but the bar is high. The provider must implement all reasonably expected measures to prevent secrecy violations, sub-delegation to sub-contractors is prohibited, and liability limitation clauses that cover only intentional acts and gross negligence are not sufficient.
- Informed client consent. The client expressly waives the protection. The Federal Court has questioned whether consent can genuinely be "informed" given the opacity of AI systems — treat this as the weakest pathway and use it only where the first two are not available.
No model training on client data. No client-identifiable data in public LLMs. These are hard lines under Art. 321 StGB, regardless of which EU or Swiss data regulation applies.
A Practical Three-Step Checklist for August 2026
For a 10-20 lawyer firm with no dedicated compliance officer, here is a proportionate starting point:
Step 1: Build an AI tool inventory. List every AI tool your firm uses that touches client data or informs a legal or HR recommendation. Include tools that staff use individually — a lawyer using ChatGPT to draft correspondence is not exempt from Art. 321 StGB. For each tool, record: vendor name, hosting location (Swiss/EU/US), and current use cases.
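For Step 1, the inventory can be as simple as one structured record per tool. A minimal sketch (field names and the vendor are illustrative, not a prescribed format):

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row of the firm's AI tool inventory (fields per Step 1)."""
    vendor: str
    hosting: str                     # "CH", "EU", or "US" — drives the Art. 321 analysis
    use_cases: list[str] = field(default_factory=list)
    touches_client_data: bool = False

inventory = [
    # "ExampleVendor" is a hypothetical entry, not a real product.
    AIToolRecord("ExampleVendor", "US", ["contract review"], touches_client_data=True),
    AIToolRecord("InternalTool", "CH", ["document formatting"]),
]

# US-hosted tools that touch client data deserve the first look under Art. 321 StGB:
flagged = [t for t in inventory if t.touches_client_data and t.hosting == "US"]
```

Sorting the flagged entries to the top of the review queue keeps the exercise proportionate for a small firm.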
Step 2: Check each vendor's EU AI Act conformity status. Contact each vendor and ask: (a) Does this system appear in any Annex III high-risk category? (b) If yes, can you provide your conformity declaration? (c) Does your data processing agreement explicitly prohibit model training on our client data? If a vendor cannot answer these questions, that is itself a compliance signal.
Step 3: Appoint one partner as AI compliance owner. This is not a full-time role. Estimate two to three hours per month: reviewing new tool requests, maintaining the inventory, and keeping the firm's DPA templates current. Document the appointment. If regulators or clients ask about your AI governance, the answer "Partner X is responsible and here is the inventory" is infinitely better than silence.
August 2026 is close. Firms that begin this exercise now will be ready to document compliance, satisfy client due diligence requests, and — for Swiss firms especially — avoid enforcement risk that does not wait for EU deadlines.
LLM Security Risks Specific to Legal AI
Most legal AI platforms (CASUS, Harvey, Lexis AI, Omnilex) use Retrieval-Augmented Generation (RAG) — the AI retrieves relevant documents from a database, then generates answers grounded in those sources. This architecture creates four attack vectors that go beyond standard LLM risks:
- Vector database exposure. Embedding vectors encode information about original documents. An adversary with access to the vector store can infer relationships between matters or partially reconstruct document content.
- Retrieval pattern leakage. Co-retrieved documents reveal sensitive associations — if querying a client's employment dispute consistently co-retrieves a specific regulatory filing, the association itself is information.
- Document attribution. Source citations in AI responses can inadvertently expose confidential documents to users who should not have access.
- Prompt injection. Crafted inputs can cause the AI to bypass safety controls, misclassify documents, or extract information from other matters in the same system.
The mitigation for law firms is specific: access controls must be per-matter, not per-user. A lawyer authorised to use the research tool should not automatically have access to embeddings from every client matter in the firm's database.
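The per-matter principle means the filter belongs in the retrieval layer itself, not only at login. A minimal sketch of the idea (the store layout, ACL structure, and names are hypothetical, not any vendor's API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, vector_store, user, matter_acl, top_k=5):
    """Restrict candidates to matters the user is authorised on BEFORE
    ranking, so embeddings from other matters never enter the results."""
    allowed = matter_acl.get(user, set())
    candidates = [c for c in vector_store if c["matter_id"] in allowed]
    candidates.sort(key=lambda c: cosine(query_vec, c["embedding"]), reverse=True)
    return candidates[:top_k]

store = [
    {"matter_id": "M-001", "embedding": [1.0, 0.0], "text": "employment dispute memo"},
    {"matter_id": "M-002", "embedding": [0.9, 0.1], "text": "unrelated client filing"},
]
acl = {"lawyer_a": {"M-001"}}  # lawyer_a is NOT authorised on matter M-002

results = retrieve([1.0, 0.0], store, "lawyer_a", acl)
# Only M-001 chunks come back, however similar M-002's embedding is.
```

The design point is that filtering happens before similarity ranking: a post-hoc filter on the ranked results would still let unauthorised embeddings influence retrieval behaviour and leak co-retrieval patterns.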
Client Contestability Rights — A New Obligation
Under EU AI Act Article 86(1), individuals have the right to an explanation from the deployer when a high-risk AI system "produces legal effects or similarly significantly affects" them. This means clients must be able to challenge AI-influenced legal advice, case assessments, or risk evaluations with "clear and meaningful explanations of the AI system's role in the decision-making process."
For Swiss firms, this right layers on top of GDPR Article 22(3) rights to contest automated decisions (where GDPR applies) and the FADP's own rules on automated individual decision-making (Art. 21 FADP). Practically, if your firm's AI tool influences case strategy or risk assessment, affected clients can demand to know how the AI contributed to the recommendation — and dispute it. Document the AI's role in every matter where it materially influences advice.
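Documenting the AI's role can be mechanical: one structured entry per matter where the tool materially influenced advice. A minimal sketch (field names and values are illustrative):

```python
import datetime
import json

def log_ai_contribution(matter_id, tool, role, human_reviewer):
    """One auditable record per AI-influenced recommendation, ready to
    answer an Article 86-style explanation request."""
    entry = {
        "matter_id": matter_id,
        "tool": tool,
        "role": role,                # what the AI contributed, in plain language
        "human_reviewer": human_reviewer,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(entry)

record = log_ai_contribution(
    "M-2026-014",                    # hypothetical matter reference
    "research-assistant",
    "summarised precedent; lawyer verified citations and set strategy",
    "partner_x",
)
```

A plain-language `role` field matters here: Article 86 asks for "clear and meaningful explanations", which a model-internals dump does not satisfy.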
Does Your Legal AI System Qualify as High-Risk?
Need help classifying your firm's AI tools? Get in touch for a structured compliance review.