Here is the scenario that keeps happening in Swiss law firms: one partner discovers ChatGPT, starts using it to draft client memos, and tells no one. Three months later, the firm has an Art. 321 StGB exposure — breach of professional secrecy — and no documented process to show a regulator or a claimant that due diligence was exercised.
That is not a compliance problem. That is a liability problem in professional secrecy clothing. And it is entirely preventable with four weeks of structured work.
This guide is written for a 10-lawyer firm with no compliance officer. If you have one, the same framework applies — it just takes less time.
Swiss professional secrecy applies independently of your AI policy
Criminal exposure attaches to the individual lawyer, so unapproved AI use needs documented controls.
Your Swiss Baseline: SAV Guidelines, Not Just EU AI Act
Most governance guides for law firms start with the EU AI Act. That is the right framework for high-risk AI classification, deployer obligations under Article 26, and documentation requirements. But for a Swiss firm, the SAV (Swiss Bar Association) AI Guidelines adopted on 14 June 2024 are your immediate operational baseline — they are binding professional rules, not aspirational regulation.
The SAV guidelines define three compliance pathways for any AI tool that processes client data:
Pathway 1 — Internal/Local deployment. AI runs within your firm's own network. Client data never leaves your infrastructure. This is the cleanest compliance path and the strongest protection against Art. 321 StGB exposure.
Pathway 2 — Compliant outsourcing. You use a cloud-hosted tool, but the provider meets the SAV cloud recommendations and qualifies as a Hilfsperson under Art. 321. The lawyer remains personally liable. The provider must not sub-delegate data processing to a third party — the Federal Court (BGE 145 II 229) confirmed that sub-delegation breaks the Hilfsperson chain.
Pathway 3 — Informed client consent. The client expressly waives their secrecy protections for the specific AI use. The Federal Court has expressed skepticism about whether "informed" consent is achievable in practice given AI's "black box" nature. This is the weakest pathway and should not be your primary compliance strategy.
Your AI governance framework is, in operational terms, a system for ensuring every tool in your firm follows one of these three pathways. Build your Acceptable Use Policy around them.
The Pre-Vetted Starting Point: Swiss-Hosted Tools
Before you classify every tool against FADP and SAV, note that three tools have been purpose-built for Swiss compliance and represent a safe harbor starting point:
- CASUS — Swiss-hosted AI associate for contract drafting and review
- Omnilex — Swiss-hosted legal research workspace with 700,000+ Swiss court decisions
- DeepLaw — Swiss-hosted legal research, ISO 27001:2022 certified, FADP compliant, queries not used for training
A firm that uses only these three tools for client-facing work does not need to conduct a full FADP analysis per tool. The Swiss hosting, Swiss operator context, and certified compliance posture cover the core requirements. This is your lowest-friction starting point.
The complication arises with every other tool in your stack: Microsoft Copilot, generic ChatGPT, a document automation platform with US-hosted processing, a contract review add-in with unclear sub-processor chains. Each of those needs classification.
The 4-Week Implementation Plan
Week 1 — Inventory
Build a spreadsheet with one row per AI tool. The fields:
| Field | Why it matters |
|---|---|
| Tool name | Identification |
| Vendor and legal entity | Jurisdiction of incorporation |
| Use case | What task the tool performs |
| Data processed | Personal, confidential, privileged — which categories? |
| Client data yes/no | Binary: does this tool ever receive client information? |
| Swiss/EU hosted | Data sovereignty question |
| SAV pathway | Which of the 3 pathways does this tool follow? |
| Owner | Named partner responsible |
| Review date | Next scheduled assessment |
Include everything: the legal research tool, the contract review add-in, the transcription service used for client calls, the AI-assisted billing tool, the drafting assistant in your Word plugin. If it processes text or documents, it belongs on this list.
Week 2 — Classify
For each tool on your inventory, answer one question: does this tool receive identifiable client data, and if so, which SAV pathway does it follow?
Any tool that sends client data to a US-hosted model with no Data Processing Agreement and no documented qualification as a Hilfsperson is a red flag. This includes free-tier use of ChatGPT, Claude.ai, or Gemini — services where queries may be used for training and where sub-processor chains are opaque.
The classification output is simple: Green (compliant, pathway confirmed), Amber (needs a DPA or vendor clarification), Red (stop using for client data until resolved). Most firms find two or three red flags in week two. That is normal and fixable.
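The triage rules above can be written down as a few lines of logic. This is a sketch of the decision order, not a substitute for legal judgment — the parameter names are placeholders, and a real assessment would also check the Hilfsperson qualification under Pathway 2:

```python
def classify(receives_client_data: bool,
             hosting: str,
             has_dpa: bool,
             pathway_confirmed: bool) -> str:
    """Week-2 triage sketch: Green / Amber / Red per the rules above."""
    if not receives_client_data:
        return "Green"   # Tool never sees client data
    if hosting not in ("CH", "EU") and not has_dpa:
        return "Red"     # e.g. free-tier ChatGPT: stop using for client data
    if pathway_confirmed:
        return "Green"   # SAV pathway documented and verified
    return "Amber"       # Needs a DPA or vendor clarification

# A US-hosted tool with no DPA that receives client data is a red flag:
print(classify(True, "US", False, False))   # Red
print(classify(True, "CH", True, True))     # Green
```

Note the ordering: the "no client data" exit comes first, so a purely internal tool never trips the hosting check.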
When classifying each tool, run a six-question vendor due diligence sub-check before assigning a Green status:
- Is the vendor SOC 2 Type II or ISO 27001 certified?
- Does the vendor's DPA explicitly prohibit use of client queries for model training?
- What are the full sub-processor chains, and where does data reside geographically?
- Does the vendor provide documentation of training data provenance?
- What is the vendor's financial viability — is this a thin wrapper over GPT or a substantive product?
- Is there a data deletion and exit provision?
This is not bureaucracy. It is the documented evidence of professional diligence in tool selection. If a vendor cannot answer these questions, that silence is itself a compliance signal. Seventy new generative AI companies entered the legal market in 2025 alone; many are GPT wrappers with no institutional durability. A firm that greenlit a tool it cannot describe under regulatory examination is in a materially worse position than one that never approved it at all.
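One way to make the sub-check enforceable is to treat it as an all-or-nothing gate before a Green status is assigned. A minimal sketch — the question keys are our shorthand for the six questions above:

```python
# Shorthand keys for the six due-diligence questions above (our naming).
DUE_DILIGENCE_QUESTIONS = [
    "certified_soc2_or_iso27001",
    "dpa_prohibits_training_on_queries",
    "subprocessor_chain_and_residency_disclosed",
    "training_data_provenance_documented",
    "substantive_product_not_thin_wrapper",
    "data_deletion_and_exit_provision",
]

def may_assign_green(answers: dict[str, bool]) -> bool:
    """Green only if every question is answered and answered 'yes'.
    A missing answer counts as 'no' -- vendor silence is a signal."""
    return all(answers.get(q, False) for q in DUE_DILIGENCE_QUESTIONS)
```

The design choice worth copying even into a paper process: an unanswered question defaults to a failing answer, never to a pass.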
Week 3 — Draft the AUP
Your Acceptable Use Policy should be two pages, not twenty. A policy that runs to thirty pages will not be read. A policy that fits on two pages will be.
Core rules every Swiss law firm AUP needs:
No client data in unapproved tools. Fee earners may not input client personal data, confidential information, or privileged communications into any AI tool not on the approved list. This is not a preference — it is an Art. 321 StGB obligation.
Verify before delivery. Every AI output that reaches a client, enters a filing, or advises on a material matter must be reviewed by a qualified lawyer. The AI output is a draft, not a product. Swiss courts do not accept "the AI said so" as professional diligence.
A specific failure mode worth naming: AI sycophancy. Models are designed to be helpful, and "helpful" sometimes means telling users what they want to hear rather than what they need to hear. OpenAI rolled back a GPT-4o update specifically because it was "too nice" — producing validation rather than critique. A lawyer who asks an AI to review their own draft contract and receives an affirming response is not getting independent review. The AUP verification requirement should be explicit on this point: AI validation of your own work product is not the same as independent review. The reviewer must actively prompt for flaws, not ask for confirmation.
Disclose AI use. Add a standard line to your engagement letters: "We use AI tools in delivering our services. All AI-assisted work is reviewed and verified by qualified counsel. A summary of our AI governance policy is available upon request." This converts your governance framework from an internal document into a client-facing credential.
This disclosure is not optional optics. Under SAV Pathway 2, the lawyer remains personally liable for the tool's use of client data. The engagement letter clause is the evidentiary record that the client was informed and that the specific tool choice was documented and disclosed. A firm that processes client confidential data through an AI tool without disclosure has no documented basis to defend that choice if the data is later compromised or misused — and no path to invoking client consent as a mitigating factor.
One AI owner. A named partner is responsible for maintaining the approved tool list, handling escalations, and conducting the quarterly review. In a 10-lawyer firm, this is a two-hour-per-quarter role, not a full-time position.
Week 4 — Sign, Train, Document
All partners sign the AUP. Then a one-hour training session for all staff: what the approved tools are, what the policy requires, what the red lines are, and how to escalate a question. Document attendance.
This documentation matters. Under the EU AI Act Article 26(6) and under Swiss professional diligence standards, your ability to demonstrate that staff received training and that a governance framework existed is material evidence if a complaint or claim arises.
The Governance Framework as Client Credential
The Blickstein Group's 2025 Law Department Operations survey found that almost two-thirds of law firm clients disagree that their firms are innovative. Most law firms have increased their technology investment significantly — the problem is that the investment is invisible to clients.
Your governance framework is one of the cheapest and fastest ways to close that perception gap. Add one sentence to your engagement documentation: "This firm operates under a documented AI governance policy, available upon request." Send a one-page summary proactively to your top five clients when you finalise the policy.
Most clients will never ask to see the policy. What they register is that the firm has one — and bothered to tell them. That signals an approach to AI that is disciplined, documented, and client-focused. In a market where two-thirds of clients think their firms are falling behind, a one-page policy summary sent proactively changes the perception overnight.
The Risk of Not Having Governance
The risk of inaction is concrete, not theoretical. One partner using ChatGPT to draft a client memo involving a pending dispute can create professional-secrecy exposure under Art. 321 StGB even before any data-protection authority becomes involved. The criminal-law analysis attaches to the individual lawyer, not just the firm entity.
The nFADP (Swiss revised Data Protection Act, in force since September 2023) adds a parallel track: intentional violations carry penalties of up to CHF 250,000 on the responsible natural person.
The governance framework is not bureaucracy. It is the documented evidence that the firm exercised professional diligence in how it deployed AI. In a profession where personal liability is the default, documented diligence is the only defence.
EU AI Act Integration
For firms serving clients in EU member states, the EU AI Act's deployer obligations under Article 26 apply directly. Your governance framework already satisfies the core requirements: risk assessment through the inventory and classification process, human oversight through your AUP verification requirements, logging through your file documentation practice, and incident reporting through your quarterly review process.
The one addition for EU Act compliance: a defined escalation path for serious incidents to the relevant national supervisory authority. Document where that path leads for each jurisdiction where you have client matters.
ISO 42001: The Governance Standard That Gives You Legal Cover
ISO/IEC 42001 — the international AI Management System standard — is becoming directly relevant for law firms. Colorado SB 205 (effective 1 February 2026) provides a concrete "safe harbour" defence: deployers of high-risk AI who comply with both NIST AI RMF and ISO 42001 can raise an affirmative defence against enforcement. This is the first legislation to give ISO 42001 compliance explicit legal weight, and it sets a precedent other jurisdictions will follow.
For Swiss firms already maintaining FADP privacy policies, the integration is practical: ISO 42001 Control A.2.3 explicitly states that "the privacy domain intersects with artificial intelligence" and recommends updating existing privacy policies to cover AI — not building a parallel governance document. Your existing FADP compliance work is the starting point, halving implementation effort.
Vendor Evaluation: The 22-Question Procurement Framework
The Trustworthy AI Procurement framework provides a structured scoring methodology for evaluating any legal AI tool. Each vendor is assessed across six dimensions — need assessment, data quality, fairness, transparency, safety, and accountability — each scored 1-5. All dimensions must score 3 or above to proceed; any dimension scoring 1 requires remediation before contract.
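The gating rule can be stated precisely. This sketch assumes one aggregate score per dimension; the framework's 22 underlying questions are not reproduced here:

```python
# Six scoring dimensions from the procurement framework; one aggregate
# 1-5 score per dimension is an assumption of this sketch.
DIMENSIONS = ["need_assessment", "data_quality", "fairness",
              "transparency", "safety", "accountability"]

def procurement_decision(scores: dict[str, int]) -> str:
    """Apply the gate: any score of 1 requires remediation before
    contract; every dimension must score 3+ to proceed."""
    if any(scores[d] == 1 for d in DIMENSIONS):
        return "remediate before contract"
    if all(scores[d] >= 3 for d in DIMENSIONS):
        return "proceed"
    return "do not proceed"   # at least one dimension scored 2

print(procurement_decision({d: 4 for d in DIMENSIONS}))   # proceed
```

The remediation check runs first on purpose: a vendor scoring 1 on safety should not slip through just because every other dimension is strong.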
For a firm without a compliance officer, the three highest-priority questions to ask any AI vendor:
- Data provenance: What data was used to train the model, and does it include client data from other firms?
- Sub-processor transparency: What is the complete chain of data processors between your interface and the model inference endpoint?
- Exit strategy: If we terminate, how is our data deleted, and can we migrate to a different vendor without losing work product?
If a vendor cannot answer these questions within five business days, that silence is a governance signal.
Ready to build your firm's AI governance framework? Get in touch — I help law firms design governance structures that work in practice, not just on paper.