There is a problem that every contract professional knows but rarely names: the implementation gap.
Contracts are drafted by lawyers for lawyers — optimized for dispute resolution, not for the warehouse manager who needs to know what "commercially reasonable efforts" means for Tuesday's delivery.
The result is predictable. Obligations get missed, not because people are careless, but because the contract is unreadable to the people who are supposed to perform it. The gap between what the contract says and what actually happens on the ground is not a failure of compliance. It is a failure of communication design.
The question that matters now: can generative AI help close this gap? The answer, based on recent research and practical testing, is a qualified yes — with important caveats that most AI vendors will not tell you.
What the Implementation Gap Actually Costs
In a typical procurement relationship, the contract sits in a folder. The operational team works from memory, email threads, and whatever the project manager understood during the kick-off meeting. The legal terms that define delivery standards, record-keeping obligations, and escalation procedures are effectively invisible.
This is not an edge case. A WorldCC study found that the disconnect between contract terms and operational execution is one of the most persistent problems in commercial relationships. The contract is supposed to be a management tool. In practice, it is an artifact that gets signed and forgotten.
The cost is real: disputes that arise not from bad faith but from genuine ignorance of what was agreed. Delivery failures that could have been prevented if the right person had known about a notification requirement buried in clause 14.3(b). Relationship damage that accumulates quietly until it becomes a formal claim.
What GenAI Can Actually Do
When you ask a modern AI assistant — Claude, ChatGPT, or similar — to help bridge the implementation gap, you get surprisingly competent initial results. Both Claude and ChatGPT, when prompted on this topic, immediately identified the core issue: contracts need to be translated from legal instruments into operational guides.
Here is what AI does well today:
Extract and classify obligations. Give an AI assistant a set of general terms and conditions, and it can reliably identify which clauses create record-keeping requirements, which require communication between the parties, and which need mutual agreement. In practical tests with actual commercial GTCs, ChatGPT produced results comparable to what a human contract designer would deliver — with the important caveat that it occasionally over-interpreted clauses (reading a confidentiality obligation as a "record-keeping" requirement, for example).
Generate audience-specific guides. This is where AI genuinely impressed. When asked to reorganize contract content for operational, financial, and legal audiences separately, ChatGPT correctly identified that safe offloading procedures belong in the operational guide, credit terms in the financial guide, and governing law in the legal guide. The ability to sift contract content by audience relevance is a meaningful capability.
Simplify language without losing meaning. AI has been doing this well for some time through tools like Grammarly and DeepL Write. The newer generation of models goes further — they can suggest FAQ-style headings, apply layered structures to contract text, and produce genuinely readable summaries.
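The extraction-and-classification pass described above can be sketched mechanically. This is a minimal illustration only, assuming a naive keyword heuristic in place of a real model call — the categories and keyword lists are invented for the example, not a validated taxonomy:

```python
# Illustrative keyword heuristics only -- a real pipeline would send each
# clause to an AI assistant; these categories and keywords are assumptions.
CATEGORIES = {
    "record-keeping": ["record", "retain", "documentation", "log"],
    "notification": ["notify", "notice", "inform"],
    "mutual-agreement": ["mutual agreement", "agreed in writing", "consent of both"],
}

def classify_clause(clause_text: str) -> list[str]:
    """Return the obligation categories a clause appears to trigger."""
    text = clause_text.lower()
    return [cat for cat, keywords in CATEGORIES.items()
            if any(kw in text for kw in keywords)]

clauses = {
    "14.3(b)": "The Buyer shall notify the Seller in writing within 5 days.",
    "8.1": "The Supplier shall retain quality records for ten years.",
}

for ref, text in clauses.items():
    print(ref, classify_clause(text))
```

The point of the sketch is the workflow shape — clause in, obligation categories out, clause reference preserved — which is exactly where the over-interpretation caveat above bites: a human still has to check each assignment.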
Where AI Quietly Fails
The failures are more interesting than the successes, because they reveal the boundary between what AI can process and what it cannot.
It introduces knowledge that is not in the document. In one test, ChatGPT was asked to create a user guide for a set of purchase terms. It produced an impressive guide — but it also added advice about how to accept an order, a process that was never mentioned in the original document. When challenged, ChatGPT initially defended its addition as based on "general contract law knowledge," then gradually backtracked. The final version included a cautious hedge: "under some interpretations of contract law, starting to fulfill the order may sometimes imply acceptance."
This is the hallucination problem in its most dangerous form for legal documents: the AI does not invent nonsense. It invents plausible legal advice that may or may not be correct under the applicable law. A non-lawyer reading the guide would have no way to distinguish the AI's additions from the actual contract terms.
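One practical mitigation is a mechanical grounding check: verify that every clause reference the AI cites actually appears in the source document, and flag anything that does not. A minimal sketch, assuming a simple "clause N.N(x)" reference convention — the pattern and sample texts are illustrative:

```python
import re

# Matches references like "clause 14.3(b)"; adjust for your drafting style.
CLAUSE_REF = re.compile(r"\bclause\s+\d+(?:\.\d+)*(?:\([a-z]\))?", re.IGNORECASE)

def ungrounded_references(source: str, ai_guide: str) -> list[str]:
    """Return clause references cited in the AI-generated guide that
    never appear in the source contract."""
    source_refs = {ref.lower() for ref in CLAUSE_REF.findall(source)}
    return [ref for ref in CLAUSE_REF.findall(ai_guide)
            if ref.lower() not in source_refs]

contract = "Clause 14.3(b) requires written notice of delivery delays."
guide = ("Notify the seller per clause 14.3(b). "
         "Acceptance may be implied under clause 2.1.")  # 2.1 is invented

print(ungrounded_references(contract, guide))  # flags the invented reference
```

A check like this cannot catch unreferenced additions — the "how to accept an order" advice above cited nothing — but it does catch the cheapest class of fabrication, and forces the reviewer's attention onto what remains.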
It cannot access tacit knowledge. The sociologist Etienne Wenger studied how insurance claims offices actually work and found that written procedures are only one component of operational knowledge. Communities of practice rely on "implicit relations, tacit conventions, subtle cues, untold rules of thumb" that are never documented. When AI rewrites a contract for operational use, it has no access to this layer of knowledge. It does not know that in your organization, the procurement team handles force majeure notifications informally before escalating, or that the quality standard referenced in clause 8 is interpreted differently at the Hamburg facility than in Zurich.
It misses strategic ambiguity. Good contract drafting sometimes involves deliberate vagueness — leaving a clause open-ended to preserve negotiation flexibility, or relying on implied terms that favor one party under the applicable law. AI will attempt to clarify everything, because clarity is what it optimizes for. A human contract designer would recognize when ambiguity is intentional and leave it intact.
The Textuality Problem
There is a useful framework from text linguistics — the seven standards of textuality defined by de Beaugrande and Dressler — that helps explain where AI's limitations lie. AI handles cohesion (grammatical connections) and coherence (conceptual logic) well. It is reasonably good at informativity (signaling what is new or unusual) and situationality (understanding the contract context).
But it struggles with intentionality and acceptability — understanding the strategic attitudes of the parties toward the text. It cannot reliably detect when one party has drafted a clause to create leverage, or when a particular formulation signals flexibility versus rigidity. These are precisely the aspects that matter most in contract implementation, where the question is not "what does this clause say?" but "what does this clause mean in the context of our actual business relationship?"
The Prompt Engineering Lever
The chapter's authors make a point worth emphasizing: the quality gap between a lazy prompt and a structured one is enormous. Asking "what's wrong with this contract?" produces generic output. Specifying role, context, task, format, and constraints produces something you can actually use.
In the context of implementation guides, a structured prompt might look like: "You are an experienced contract manager reviewing software licensing terms for the operations team at a mid-sized Swiss manufacturer. Extract all clauses that create notification obligations for the buyer, organize them by timeline urgency, and present each one as a plain-language action item with the original clause reference."
The difference is not marginal. It is the difference between output you discard and output that forms 80% of your deliverable. The authors compare this to the difference between asking someone to "make supper" versus providing a detailed recipe — you can only skip the detailed instructions with a cook who already knows your tastes, and AI does not know your audience.
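The role/context/task/format/constraints structure can be captured in a small template helper, so every guide request carries the same fields. A minimal sketch — the field names and the rendered wording are illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class StructuredPrompt:
    role: str            # who the AI should act as
    context: str         # audience, industry, business situation
    task: str            # the specific extraction or rewriting job
    output_format: str   # how results should be presented
    constraints: str     # jurisdiction, sources, what not to do

    def render(self) -> str:
        return "\n".join([
            f"You are {self.role}.",
            f"Context: {self.context}",
            f"Task: {self.task}",
            f"Format: {self.output_format}",
            f"Constraints: {self.constraints}",
        ])

prompt = StructuredPrompt(
    role="an experienced contract manager",
    context="software licensing terms for the operations team "
            "at a mid-sized Swiss manufacturer",
    task="extract all clauses that create notification obligations "
         "for the buyer, organized by timeline urgency",
    output_format="plain-language action items with original clause references",
    constraints="use only the supplied contract text; apply Swiss OR, not EU law",
)
print(prompt.render())
```

Encoding the structure once means the jurisdiction constraint and the "supplied text only" constraint travel with every request instead of depending on whoever types the prompt that day.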
Three techniques matter most for contract implementation work. Direct insertion works for short clause sets. Retrieval-augmented generation (RAG) lets you connect the AI to your firm's internal knowledge base of policies, prior interpretations, and compliance requirements. Few-shot learning — showing the AI examples of what good output looks like — is particularly effective for establishing consistent formatting and severity classification across a portfolio of contract guides.
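The retrieval step in a RAG setup can be sketched in a few lines. This is a toy illustration, assuming word-overlap scoring as a stand-in for embedding similarity and an invented three-entry knowledge base — a production system would use a vector store:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens (a crude stand-in for embedding similarity)."""
    return set(re.findall(r"[a-z]+", text.lower()))

# Invented examples of the internal knowledge a firm might maintain.
knowledge_base = [
    "Internal policy: force majeure notifications go to procurement first.",
    "Prior interpretation: the clause 8 quality standard follows ISO 9001 audits.",
    "Compliance note: retain delivery records for ten years under Swiss OR.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Return the top_k knowledge-base snippets most relevant to the query,
    ready to be prepended to the model prompt as grounding context."""
    q = tokens(query)
    return sorted(knowledge_base,
                  key=lambda doc: len(q & tokens(doc)),
                  reverse=True)[:top_k]

print(retrieve("Who handles force majeure notifications?"))
```

The design point survives the simplification: the AI answers from your firm's documented interpretations rather than from generic legal knowledge, which is exactly the tacit-knowledge layer the plain model lacks.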
Practical Recommendations for Swiss and DACH Legal Teams
If you are working with contracts in a Swiss or EU context, here is how to use AI effectively for implementation support — without creating new risks.
Use AI for the first draft of contract guides, not the final version. AI-generated obligation lists and audience-specific summaries are a strong starting point. But every output needs review by someone who knows the specific business relationship, the applicable law, and the organizational context.
Invest in prompt engineering. The quality difference between a simple prompt ("summarize this contract") and a structured prompt (specifying role, context, task, format, and constraints) is significant. Provide the AI with explicit information about the audience, the industry, and the applicable jurisdiction. In a Swiss context, specify whether Swiss OR, EU regulations, or both apply. Do not assume the AI will default to the correct legal framework.
Use RAG for organizational context. If your organization maintains a knowledge base of internal policies, previous contract interpretations, or compliance requirements, use retrieval-augmented generation to give the AI access to this context. The implementation gap is partly an information design problem, and RAG helps the AI produce outputs that reflect your specific organizational reality rather than generic legal knowledge.
Maintain a human-readable layer. As AI increasingly assists both buyers and sellers in contract preparation and evaluation, there is a growing risk that contracts become optimized for machine processing rather than human understanding. Resist this. The implementation gap exists because humans cannot operationalize what they cannot read. Every contract should maintain a layer that is readable without AI assistance — and this layer should be the primary reference for implementation personnel.
Build layered contract documents. The most effective approach combines the full legal text with human-readable explanations and high-level action points. AI is well-suited to generating the explanatory and action-point layers, provided a human validates the output against the actual contract terms and the specific business context.
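The layered structure described above can be modeled as one record per clause, with the legal text kept verbatim as the authoritative layer and the other two layers AI-drafted and human-validated. The field names and sample content are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class LayeredClause:
    ref: str                # original clause reference (authoritative anchor)
    legal_text: str         # binding wording, always preserved verbatim
    explanation: str = ""   # plain-language layer: AI-drafted, human-reviewed
    action_points: list[str] = field(default_factory=list)  # operational layer

    def render(self) -> str:
        lines = [f"Clause {self.ref}: {self.legal_text}",
                 f"  In plain terms: {self.explanation}"]
        lines += [f"  - ACTION: {a}" for a in self.action_points]
        return "\n".join(lines)

clause = LayeredClause(
    ref="14.3(b)",
    legal_text="The Buyer shall give written notice of delivery delays "
               "within five (5) business days.",
    explanation="Tell the seller in writing within five working days "
                "if a delivery is late.",
    action_points=["Warehouse manager emails supplier contact within 5 days."],
)
print(clause.render())
```

Keeping all three layers in one record makes the validation step concrete: a reviewer compares the explanation and action points against the legal text sitting directly above them, clause by clause.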
The AI-on-Both-Sides Problem
There is a scenario already emerging in procurement: complex bids prepared by sellers with AI assistance, evaluated by buyers also using AI. When both sides of a contract relationship use AI for drafting and analysis, a new risk appears — contracts that are technically coherent but optimized for machine processing rather than human understanding.
This matters because the implementation gap is fundamentally about human comprehension. If both parties' AI systems can parse and interpret the contract perfectly, but the warehouse manager and the delivery coordinator cannot, the gap widens rather than narrows. The layered approach — full legal text plus human-readable explanations plus high-level action points — becomes not just a best practice but a necessity.
Under the EU AI Act, contracts involving AI-assisted decision-making in areas like employment, insurance, or lending may trigger additional transparency requirements. Swiss firms operating under both the FADP and EU frameworks need to ensure that their AI-assisted contract processes maintain a human-interpretable layer — not just for operational effectiveness, but for regulatory compliance.
What Comes Next
AI assistants will inevitably become more sophisticated. The models that today require careful prompting to produce useful contract guides will soon handle these tasks with less human input. But the fundamental limitation remains: AI operates on reified knowledge — what is written down — and contracts exist in a world of tacit knowledge, relationship history, and strategic intent.
The organizations that will benefit most from AI in contract implementation are those that treat it as a communication design tool, not a legal oracle. The implementation gap is ultimately a human problem: people who need to act on contract terms cannot access or understand those terms. AI can make the terms more accessible. It cannot replace the judgment needed to apply them in context.
That judgment — knowing what a clause means in practice, for this relationship, in this industry, under this applicable law — remains a human responsibility. And for legal professionals advising on contract design, that is where the real value lies.