Law firms occupy an unusual position in the European AI regulatory landscape. They are simultaneously subject to AI regulation as deployers of AI tools and advisors on AI regulation for their clients. Getting the firm's own compliance right is not just a legal obligation — it is a reputational imperative.
Double compliance burden
Law firms face GDPR and the EU AI Act simultaneously: because virtually every AI tool a firm deploys processes personal or client-confidential data, the dual obligation is the default, not the exception.
The overlap between GDPR and the EU AI Act
GDPR and the EU AI Act share significant conceptual ground — both are built around transparency, accountability, and data subject rights — but they operate on different axes:
| | GDPR | EU AI Act |
|---|---|---|
| Focus | Personal data processing | AI system risk |
| Trigger | Any processing of personal data | Using or providing an AI system |
| Key obligation | Lawful basis, consent, rights | Risk classification, documentation, oversight |
For a law firm deploying an AI tool that processes client personal data (which most do), both frameworks apply simultaneously.
Risk classification under the EU AI Act
The EU AI Act introduces a four-tier risk classification:
- Unacceptable risk — Prohibited outright (social scoring, real-time biometric surveillance)
- High risk — Requires conformity assessment, documentation, human oversight
- Limited risk — Transparency obligations only
- Minimal risk — No specific obligations
For most law firm AI use cases:
- AI-assisted legal research → likely limited risk (transparency obligations)
- AI in employment law matters → potentially high risk (employment and HR management is a listed high-risk category)
- Predictive litigation tools → likely high risk (administration of justice is listed)
- Contract drafting assistants → likely limited or minimal risk
Practical compliance steps for law firms
1. Maintain an AI inventory
Document every AI tool in use across the firm, including (a sketch of a sample record follows this list):
- Vendor and product name
- Use case and practice group
- Data processed (personal / sensitive / client confidential)
- Risk classification (self-assessed)
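The inventory does not need to be sophisticated; a spreadsheet or one structured record per tool is enough, as long as every tool is captured. Below is a minimal sketch of what a single record might look like, using hypothetical field names and an invented example tool; adapt the fields to your firm's own register.

```python
from dataclasses import dataclass

# Illustrative sketch of one AI inventory record. The class and field
# names are hypothetical, not a prescribed schema; a spreadsheet with
# the same columns works just as well.
@dataclass
class AIInventoryEntry:
    vendor_and_product: str      # Vendor and product name
    use_case: str                # Use case and practice group
    data_processed: list[str]    # personal / sensitive / client confidential
    risk_classification: str     # self-assessed: high / limited / minimal

# Hypothetical example entry for a research tool used by the litigation group
example_entry = AIInventoryEntry(
    vendor_and_product="ExampleVendor Research Assistant",
    use_case="AI-assisted legal research (litigation)",
    data_processed=["personal", "client confidential"],
    risk_classification="limited",
)
```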
2. Update your client engagement terms
If you use AI tools in delivering client services, your engagement letters should:
- Disclose that AI tools may be used
- Specify the categories of AI use
- Confirm your human oversight procedures
3. Appoint an AI governance lead
Firms of any meaningful size should designate an individual responsible for AI governance. This role overlaps with — but is not identical to — the Data Protection Officer role under GDPR.
4. Train your lawyers
EU AI Act compliance is not solely an IT or compliance department matter. Every lawyer using AI tools needs to understand the basic obligations around transparency and human oversight.
The opportunity in compliance
Firms that develop strict AI compliance frameworks will find themselves well-positioned to advise clients facing the same challenges. The internal work of getting your own house in order becomes the experiential foundation for a practice area.
Integrating DPIA with AI Impact Assessments
Firms already conducting Data Protection Impact Assessments under the GDPR or the Swiss FADP can extend them to cover AI-specific risks rather than building a separate framework. ISO 42001 explicitly endorses this approach: AI system impact assessments focused on privacy "may need to be integrated into the organization's broader risk management program." A unified DPIA/AI impact assessment avoids duplicative work.
The DPIA component should go into more depth than a general AI assessment, examining the purpose of data collection, the method and scope of processing, data sensitivity classification, the affected data subjects, the processing context, and opportunities for individual participation. For a 10-lawyer firm, one unified document covering both data protection and AI governance is more practical, and more defensible, than two separate assessments.
Secondary Use: The Hidden Vendor Risk
When an AI vendor's terms are ambiguous about model training, a particular risk materialises: secondary use, the repurposing of data collected from your clients for AI model training without consent. Three concrete threats:
- Training new models on client data without explicit prohibition in the DPA
- Inference risk — AI algorithms predicting characteristics your client would prefer to keep private, even if the individual never provided that information directly
- Web-scraping training data that may include documents your firm published on behalf of clients
Article 6(1) of the EU Data Act reinforces this: third parties receiving data must process it "only for the purposes and under the conditions agreed with the user." If your AI vendor cannot produce a DPA clause explicitly prohibiting model training on your client data, do not use the tool.
Adriana Adafinoaiei advises law firms on GDPR compliance, EU AI Act implementation, and legal technology governance. Get in touch to discuss your firm's compliance posture.