
On 26 June 2025, the Danish government proposed amendments to its copyright law to strengthen protections against deepfakes. The Ministry of Culture secured broad cross-party support to submit a bill that enables individuals to take legal action if their image, voice, or likeness is used in realistic digital imitations without consent. The bill would empower people and artists to demand the removal of such deepfake content from platforms and potentially to seek damages under Danish law.

The proposed changes target the unauthorized sharing of realistic, digitally generated reproductions of individuals or of artists’ performances. These protections would extend for 50 years after a person’s death. The bill introduces sections 65 and 73a to the Danish Copyright Act, enhancing legal safeguards for artists and individuals against AI-generated imitations. While parody and satire would remain permitted, platforms such as Meta, TikTok, and YouTube could face significant fines for hosting illegal deepfake content.

Culture Minister Jakob Engel-Schmidt emphasized that the bill sends a clear message: people have a right to control their own image, voice, and body, especially in an age where generative AI can replicate them with ease. The proposal builds on broader European efforts and aligns with the EU’s Digital Services Act (DSA), which requires platforms to remove illegal content upon notification or face penalties.

Deepfakes have surged in recent years. Identity verification firm Sumsub reported a tenfold increase in deepfake fraud from 2022 to 2023, while Deep Media estimated that more than 500,000 deepfake videos and audio clips were shared on social media in 2023. Deepfakes can cause psychological harm and enable blackmail and financial fraud. In one case, UK engineering firm Arup lost over AU$38 million when scammers used a deepfake of a senior executive to authorize money transfers.

Despite its ambitious scope, critics argue the law’s reach would be limited, since content hosted outside Denmark would remain accessible. However, with Denmark holding the EU Council presidency from July to December 2025, the government hopes to influence broader European media policy.

Globally, governments are taking note. Australia’s eSafety Commissioner has warned that detection tools are lagging behind deepfake creation technologies. Australians affected by non-consensual deepfake content can seek help from eSafety, which investigates image-based abuse, including digitally altered intimate imagery.

Denmark’s proposed law is seen as a potential blueprint for Europe and beyond in tackling deepfake misuse. 
