The Portrait of Edmond Belamy sold at Christie's for USD 432,500 in 2018. It was produced by a collective of three people — none of them artists — using a Generative Adversarial Network built on code written by a 19-year-old developer who received no share of the proceeds. The code was open source, so the use was legal. But that fact did not settle any of the harder questions: who authored the work, whether it qualified as art, and who deserved to be compensated.
Eight years later, those questions remain largely open. And as generative AI tools move from curiosity to commodity, the pressure on both the legal system and the art world to answer them has only increased.
Two Systems, Two Logics
The core problem is not that copyright law is outdated — though parts of it are. The core problem is that copyright law and the art system operate on entirely different principles, and AI exposes the gap between them.
Copyright law asks: is there a human author? Is the work original — meaning, does it reflect the author's own intellectual creation through free and creative choices? If yes, protection attaches. The standard is low. Courts have consistently refused to assess artistic merit. In Bleistein v. Donaldson Lithographing Co. (1903), the US Supreme Court warned against judges constituting themselves "final judges of the worth of pictorial illustrations." More recently, a US district court managed to sidestep the question of whether a banana taped to a wall was art, finding only that no original expression was infringed.
The art world asks something entirely different. Art, in sociological terms, requires a creative act connected to human perception, an intention to create, and recognition within a recursive network of institutions — critics, galleries, exhibitions, competitions. A work is art when the art system says it is, through ongoing communication about art. A copyright certificate does not make something art, and the absence of copyright does not make something not-art.
The Authorship Problem
Under EU law, copyright protection assumes a human author. This is not always stated explicitly, but it is embedded throughout the framework. The Copyright Term Directive defines protection duration as "life of the author and for 70 years after his death." In Painer, Advocate General Trstenjak confirmed that "only human creations are therefore protected, which can also include those for which the person employs a technical aid, such as a camera."
AI is not a camera. When someone types a prompt into a text-to-image model, the relationship between human input and creative output is far more attenuated than pointing a lens and pressing a button. The question becomes: at what point do free and creative choices occur, and who makes them?
In GAN-based systems like the one used for the Belamy portrait, creative choices are distributed across multiple actors — the programmer who designed and fine-tuned the model, the person who curated the training dataset, and the person who selected the final output. Each may have contributed original expression, but in different ways and at different stages.
With modern diffusion models, the distribution shifts again. The prompter has some creative control, but the model's outputs are shaped overwhelmingly by the training data and architecture. Original expressions from training data creators may be mixed — often invisibly — with whatever originality the prompter contributes.
The Training Data Problem
Copyright infringement involving AI models occurs in two distinct ways.
First, an AI system can generate outputs that are substantially similar to works in its training data. With current text-to-image models, this is relatively rare — but it happens, particularly with works that appear frequently in the training dataset.
Second, the training process itself may constitute infringement. When a model ingests copyrighted works to learn patterns, that use of the work may violate the reproduction right. Article 4 of the EU's Copyright in the Digital Single Market (CDSM) Directive permits text and data mining of lawfully accessible works for commercial purposes, but subject to an opt-out mechanism. Authors can reserve their rights. In practice, though, enforcing this opt-out is extremely difficult, and many details of its implementation remain unresolved.
Article 53(1) of the EU AI Act adds a transparency requirement: providers of general-purpose AI models must publish sufficiently detailed summaries of the content used for training. But determining whether a specific work was used in training a particular model remains technically challenging.
The result is a significant gap between the rights that exist on paper and the rights that can be enforced in practice. Artists have responded with self-help measures — removing works from platforms, using data poisoning tools to destabilize model outputs, and deploying "copyright traps" designed to detect unauthorized training.
Why Legislation Alone Is Not Enough
Copyright legislation is inherently reactive. It adapts to new technologies through amendments and judicial interpretation, referencing international treaties that are themselves difficult to change. The current framework was not designed for a world where millions of works can be ingested, recombined, and reproduced by software in seconds.
More fundamentally, copyright law reduces every dispute to a legal question: was there infringement? Was there a breach of contract? Courts are not equipped to address the interests that matter within the art system — recognition as an artist, advancement of artistic communication, preservation of the creative process as a distinctly human endeavor.
This does not mean legislation is irrelevant. It means legislation must be supplemented by other instruments.
Contracts as Structural Connectors
Contracts sit at the intersection of the legal system, the economic system, and the art system. They are not just legal documents — they are instruments that can shape how art is created, recognized, and traded.
In the traditional art market, contracts perform multiple functions simultaneously. A gallery contract defines what the artist will create, how revenue is split, and — critically — obligates the gallery to promote the work to collectors, critics, and curators. A publishing contract designates an agent as a gatekeeper who identifies quality and connects the author with a publisher. A recording contract mirrors a production sequence, defining advances, royalties, and publication rights.
These contracts create what systems theory calls structural couplings — connections between autonomous systems that allow each to internalize operations from the other. When art is commercialized, there must be coupling between the art system (recognition), the legal system (property and rights), and the economic system (payment).
AI disrupts these couplings. Mass-produced AI content does not flow through galleries, agents, or publishers. There is no gatekeeper assessing quality. There is no contract sequence that makes the creative process visible. The structural connections that traditionally linked art to law to commerce are being bypassed entirely.
The Swiss and EU Dimension
For practitioners in Switzerland, the copyright framework is the Swiss Federal Act on Copyright and Related Rights (URG). The URG requires a human author and an individual character (individueller Charakter) — conceptually similar to the EU's originality standard. Works generated entirely by AI without meaningful human creative input will not qualify for protection under the URG.
The EU's CDSM Directive and the AI Act introduce additional obligations for providers and deployers of AI systems that process copyrighted content. Swiss firms operating cross-border — which is most firms in the DACH region — must navigate both frameworks simultaneously.
What to Do Now
If you advise creators, publishers, or platforms in the DACH region, here are four concrete steps:
Audit existing contracts for AI clauses. Most contracts in the creative sector predate generative AI. They do not address whether AI-generated content is covered, who owns AI-assisted outputs, or whether training on the contracted work is permitted. Update them.
Implement opt-out mechanisms. For clients who are rights holders, ensure their works carry machine-readable opt-out signals as specified under the CDSM Directive. Document the opt-out and monitor compliance — but do not rely on it as the sole enforcement tool.
Design transparency into creative workflows. Contracts should require disclosure of AI involvement in the creative process. This serves both the legal system (originality assessment) and the art system (recognition of the creative act). The creative sector is already moving in this direction — the SAG-AFTRA agreements from 2023 are a template.
Explore alternative dispute resolution. Copyright litigation is expensive, uncertain, and limited to legal questions. Mediation and arbitration — particularly through specialist institutions like WIPO — can address broader interests including artistic recognition. For individual artists who cannot afford litigation, collective dispute resolution mechanisms are essential.
The gap between what copyright law protects and what artists actually need is widening. Closing it requires instruments that work across systems — not just within the legal one.