A key theme of Walled Culture the book (free digital versions available) is that copyright, born in an analogue age of scarcity, works poorly in today’s digital world of abundance. One manifestation of that is how lawmakers struggle to adapt the existing copyright rules to deal with novel technological developments, like the new generation of AI technologies. The EU’s AI Act marks a major step in regulating artificial intelligence, but it touches on copyright only briefly, leaving many copyright-related questions open. The process of aligning national copyright laws with the AI Act provides an opportunity for EU Member States to flesh out some of the details, and that is what Italy has done with its new “Disposizioni e deleghe al Governo in materia di intelligenza artificiale” (Provisions and delegations to the Government regarding artificial intelligence). The Communia blog explains the two main provisions. The first specifies that only works of human creativity are eligible for protection under Italian copyright law:
It codifies a crucial principle: while AI can be a tool in the creative process, copyright protection remains reserved for human-generated intellectual effort. This positions Italian law in alignment with the broader international trend, seen in the EU, U.S., and UK, of rejecting full legal authorship rights for non-human agents such as AI systems. In practice, this means that works solely generated by AI without significant human input will likely fall outside the scope of copyright protection.
The second provision deals with the legality of text and data mining (TDM) activities used in the training of AI models:
This provision essentially reaffirms that text and data mining (TDM) is permitted under certain conditions, namely where access to the source materials is lawful and the activity complies with the existing TDM exceptions under EU copyright law.
The Italian AI law is about clarifying existing copyright law to deal with issues raised by AI. But some EU countries want to go much further in their response to generative AI, and bring in an entirely new kind of copyright. Both Denmark and the Netherlands are proposing to give people the copyright to their body, facial features, and voice. The move is intended as a response to the rising number of AI-generated deepfakes, where aspects such as someone’s face, body, and voice are used without their permission, often for questionable purposes, and sometimes for criminal ones. There are good reasons for tackling deepfakes, as noted in an excellent commentary by P. Bernt Hugenholtz regarding the proposed Danish and Dutch laws:
Fake porn and other deepfake content is causing serious, and sometimes irreversible, harm to a person’s integrity and reputation. Fake audio or video content might deceive or mislead audiences and consumers, poison the public sphere, induce hatred, manipulate political discourse and undermine trust in science, journalism, and the public media. Like misinformation more generally, deepfakes pose a threat to our increasingly fragile democracies.
The problem is not that new laws are being brought in, but that the Danish and Dutch governments are proposing to use the wrong legal framework – copyright – to do so:
If concerns over privacy and reputation are the main reasons for regulating deepfakes, any new rules should be grounded in the law of privacy. If preserving trust in the media or safeguarding democracy are the dominant concerns, deepfakes ought to be addressed in media regulation or election laws. The Danish and Dutch bills address and alleviate none of these concerns.
It’s a classic example of copyright maximalism, where wider and stronger copyright laws are seen as the solution to everything. As well as being a poor fit for the problem, taking this approach would bring with it a real harm:
both deepfake bills conceive the new right to control deepfakes as a marketable, exploitable right, subject to monetization by way of licensing.
…
The message both bills convey is not that deepfakes are taboo, but that deepfakes amount to a new licensing opportunity.
In other words, the copyright maximalist approach makes everything about money, not morals. Ironically, taking such an approach would weaken copyright itself, as Communia’s submission to the Danish consultation on the deepfake proposal explains:
the proposal risks undermining the coherence of copyright law itself by introducing doctrinal inconsistencies. Copyright protects original expressive works, not a person’s indicia of personal identity, such as their image, voice or other physical characteristics. It is awarded for a limited duration in order to incentivise the creation of new works, and the existing corpus of limitations and exceptions has been designed with this premise in mind. Extending copyright to subject matter of an entirely different nature, for which marketisation is not an intended objective, will inevitably create legal uncertainty.
Communia points out a further reason not to take the copyright route for protecting people against deepfakes. The Danish bill would grant performing artists a new and wide-ranging copyright in their performances that would have a negative impact on the public domain:
the proposed extension of protection to subject matter that does not constitute a performance of an artistic or literary work raises significant concerns as to scope and proportionality. The introduction of a new exclusive right with such a wide scope would unduly restrict the Public Domain, interfering with the lawful access and reuse of subject matter that is currently out-of-copyright and that should remain as such, in the absence of clear economic evidence that such expansion is needed.
Moreover, as Communia notes:
The recitals of the draft [Danish] bill themselves acknowledge that multiple legal bases for acting against deepfakes already exist, including within criminal law. If individuals face difficulties in asserting their rights under the current framework, the appropriate course of action would be for the legislator to clarify the existing legal position. Introducing an additional and conceptually flawed layer of protection risks creating confusion and may ultimately prove counterproductive.
There’s no doubt that the harms caused by AI-generated deepfakes need tackling. The situation is made worse by advanced AI apps explicitly designed to make deepfake generation as easy as possible, such as OpenAI’s Sora, which are currently entering the market. But introducing a new kind of copyright is the wrong way to do it.
Featured image by Stable Diffusion.
