Denmark is set to introduce new deepfake legislation aiming to grant its citizens copyright protection over their own faces, voices, and other personal attributes, according to a recent New York Times article.
The goal: make unauthorized and potentially hard-to-spot AI-generated deepfakes illegal to create and/or distribute.
A deepfake is a photo, video, or audio recording that has been altered or manipulated, often to misrepresent an individual for deceptive purposes. Most commonly created using AI models trained on large datasets of real recordings and/or images, deepfakes have been used to replicate facial expressions, voice patterns, and other natural behaviors. Such creations pose significant risks, including misinformation, reputational harm and defamation, non-consensual pornography, fraud, identity theft, and other threats that erode trust in the media. These concerns are especially relevant given the global rise of AI, which poses a serious threat to personal identity.
Denmark’s novel proposal may help fill gaps in existing legal frameworks, which have struggled to address the misuse of personal identity in the era of generative AI. At the same time, the potential adoption of the proposed legislation raises the question of whether the U.S. lags in addressing these issues, as it currently lacks comprehensive federal legislation regulating deepfakes.
The Law at Face-Value
At the heart of Denmark’s proposed deepfake legislation is a plan to hold online platforms accountable by requiring prompt removal of harmful deepfakes or subjecting websites to penalties under EU digital regulations—an initiative that has already gained broad political support and is expected to advance later this year.
The proposal specifically amends sections of the Danish Copyright Act to expand protections beyond performing artists and their artistic performances to individuals seeking intellectual property protection against the unauthorized distribution of realistic, AI-generated imitations of their physical traits. This marks a significant shift, as it effectively treats a person’s likeness as a copyrightable work, enabling individuals to assert copyright claims over their own identity. Under this framework, individuals would be empowered to issue takedown notices, initiate infringement proceedings, and seek damages. Importantly, the law includes exceptions for parody, satire, and social criticism to protect freedom of expression.
The amendments to the Danish Copyright Act raise questions as to whether copyright can, or should, be used as a tool to protect personal identity. They also invite comparison to jurisdictions like the U.S., where free speech protections are stronger and likeness rights are primarily governed by state-level “right of publicity” laws. And while the U.S. does offer name, image, and likeness protections in the form of these right of publicity laws, there are critical differences. Specifically, the right of publicity in the U.S. varies significantly by state, as there is no federal right of publicity, and it generally applies only when a person’s identity is used for commercial gain. By contrast, Denmark’s model would create automatic, nationwide copyright protections over one’s likeness—treating it as a form of intellectual property, similar to how creative works by artists and writers are protected in the U.S. under the Copyright Act. Moreover, the Danish proposal applies broadly to unauthorized uses, not just the commercial exploitation that is the primary focus of most state right of publicity laws.
While the Danish model appears promising, the U.S. would likely face significant challenges in adopting a similar federal approach to deepfakes. This is due in part to the absence of a federal right of publicity, meaning such a system would require new federal legislation. Additionally, any effort to implement these protections would need to navigate complex issues involving First Amendment rights and the limitations of U.S. copyright law, which currently protects creative works—not individual identities.
Do Cosmetic Alterations Cause a Wrinkle?
In an era marked by widespread cosmetic alterations like Botox®, fillers, and plastic surgery, questions arise as to how one’s “likeness” is defined and whether Denmark has considered the possible ramifications of its legislation. In the U.S., many states’ right of publicity laws already protect individuals’ identities as recognizable in public. Specifically, some courts evaluate whether the likeness is recognizably you as currently represented, meaning cosmetic alterations may negate certain rights of publicity—such as name, image, and likeness—if your old likeness is no longer recognizable as you. Whether state right of publicity protections cover an individual’s current appearance, past appearances, or both is not clear, particularly because “likeness” is defined state-to-state, with some states offering broader protections that may cover AI-generated images resembling you, while others offer narrower interpretations.
In the midst of evolving appearances, this raises the question: how would Denmark’s legislation apply to AI-generated images of individuals whose appearance has since changed? Would the legislation permit individuals to retain rights to their old appearances, such that AI-generated images of one’s past appearance are still protected? Considering that the U.S. state-specific right of publicity framework may or may not protect an individual’s appearance at a specific point in time, some ambiguity exists around old vs. new appearances.
Under Denmark’s legislation, it is unclear whether individuals will maintain a copyright-like right over their traits regardless of when the likeness was captured. For now, the plain reading of the amendments suggests it won’t matter whether an AI-generated deepfake is based on your pre- or post-enhancement appearance or current look—any realistic imitation of your likeness would be protected.
While this is a thought-provoking law, clarification will ultimately be required. It will be interesting to see how this legislation progresses and whether the U.S. takes note of how deepfake laws continue to unfold in other countries.
For your copyright questions, please email us at info@thip.law.