Does India also need to take a page from Denmark’s proposed amendment?

Granting copyright over one’s face and voice

**Deepika Shekhawati

Over the last decade, substantial investments in artificial intelligence have enabled machines to interact with humans in a natural, conversational manner. However, this advancement has also given rise to challenges, including the creation and misuse of deepfakes. Deepfakes are realistic, AI-manipulated videos that replicate an individual's face or voice so convincingly that they appear genuine. They spread very quickly and are hard to control. They can be used to create social tension, spread fake news, promote hate, and mislead people with false information. Such misuse can also increase cases of breach of privacy, defamation, revenge porn, cyber fraud, and online scams. It is therefore important for the government to make a clear law that defines the proper use of deepfakes and strictly punishes their misuse.

Deepfakes harm not only one's privacy but also intellectual property such as eBooks, movies, and songs. This issue was recently highlighted by the actor Dhanush, who raised concerns about the alteration of his film's ending, stating that "the re-release of Raanjhanaa with an AI-modified ending has profoundly disturbed me and this change was made despite my clear objections." AI privacy concerns arise from issues such as the collection of sensitive data without consent, unauthorised use of data, unchecked surveillance, data exfiltration, and data leakage, concerns that were largely absent in earlier times. This highlights how the understanding of privacy has changed with the advent of AI.

In this scenario, Denmark has proposed an amendment to its copyright law granting individuals ownership rights over their face, body, and voice to counter the misuse of AI-generated deepfakes. This move marks a shift from viewing identity as merely a privacy concern to recognising it as a property right. India currently does not have a specific law addressing deepfakes, though existing laws are sometimes applied to deal with related issues. This raises the question: in the absence of a dedicated law, does India also need one?

Denmark’s Proposed Amendment

In the wake of the exponential misuse of AI, the Danish government has taken a step to mitigate the negative effects of AI-generated content, specifically deepfakes, by proposing an amendment to its current copyright law that would endow people with a copyright over their own body, facial features, and voice. The EU's Artificial Intelligence Act defines deepfakes as "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful." In simple terms, deepfakes are digital representations that present a highly realistic imitation of a person's face and voice; they are created by AI tools and manipulate reality.

The proposed amendment gives an automatic right from birth; there is no need for registration. It allows for proactive enforcement and gives individuals digital sovereignty over their own identity. This right would continue to exist for a period extending 50 years beyond the individual’s death. The idea is to move beyond simply protecting people from harm and instead to empower them with active control over how their identity is used in the digital world. 

Under the proposed changes, Section 65A would make it unlawful to create or publicly share deepfakes of a performing artist or their artistic performance. Similarly, Section 73(a) would prohibit the creation or public disclosure of deepfakes that replicate anyone's personal or physical traits, such as their appearance or voice, without their consent. The Copyright Act generally protects a persona only when it appears in a protectable work, but the proposed amendment extends this protection to all natural persons, not just creators or artists. The protection applies only to public disclosure. Section 73(a) also carries an exception: caricature, satire, parody, pastiche, criticism of power, and social criticism are permitted unless they constitute misinformation posing a serious threat to the rights and interests of another person. The current Danish legal regime upholds two general legal principles: a person's image cannot be used commercially without their consent, and a publicly known individual may be entitled to compensation for such unauthorised use. The proposed amendment seeks to broaden the scope of this existing framework on personality rights.

Jakob Engel-Schmidt, Denmark's culture minister, said the proposed law would send an unambiguous message that everyone has the right to their own body, face, and voice, rights that generative AI has recently violated on an extreme scale, as reported by The Guardian. This underscores that individual identity is not limited to privacy but also extends to property rights. The amendment would give Danish citizens the ability to act against such violations, exercising not just their right to privacy but also a property right over their identity.

Exercising this right would have the following consequences: (a) the affected person can demand that intermediaries take down the offending video or audio; (b) the affected person can claim compensation even without proving damages; and (c) significant penalties may be imposed on technology platforms and hosting providers if they do not respond promptly after being alerted to unlawful content. The EU Digital Services Act contains similar provisions.

Other states have enacted laws on similar lines, but the Danish proposal has a broader ambit and would be the first of its kind to "treat identity rights as copyright-protected assets." Across Europe, countries such as France and the UK, along with the EU, have introduced strict laws regulating AI-generated deepfakes, focusing on consent, labelling, and penalties for misuse. In May, the U.S. passed the Take It Down Act, which criminalises nonconsensual intimate or sexual deepfakes and harmful impersonations.

However, unlike Denmark, none yet allow individuals to claim copyright over their own face or voice. The criminalisation of deepfakes focuses on punishing offenders, in cases such as non-consensual intimate images or harmful impersonations, and on protecting victims from online abuse. Denmark's approach is different: it is rights-based. It treats a person's face, voice, and body as their own property under the law. Any AI-generated realistic imitation shared in public without consent would be illegal, and Danish citizens would have a clear legal right to demand its removal and to seek compensation if their identity is misused.

India’s Approach to Deepfakes: Is a New Law Necessary?

The Internet in India Report 2024, jointly compiled by the Internet and Mobile Association of India and Kantar, suggests India will cross 900 million internet users by 2025; as of 2024, India had 886 million active internet users. The likelihood of misuse of Generative Adversarial Networks (GANs) is therefore particularly high in India. There is no doubt that this model for creating highly realistic audio or video can be put to good use, such as in education, research, and entertainment. However, the possibility of misuse outweighs these positive applications.

In the recent case of Sadhguru Jagadish Vasudev v. Igor Isakov & Others (2025), a temporary injunction was sought against 'rogue websites' violating the personality and publicity rights of the plaintiff by creating deepfakes that were false, misleading, and unlawful. The court upheld and protected the personality rights of the individual, highlighting that if this modern technology is not regulated, its misuse would spread like a pandemic. The injunction also extended to the creation of pornographic content by influencers. Current laws, such as the Information Technology Act, 2000, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and the Bharatiya Nyaya Sanhita, can be used against offences arising out of AI-generated content.

The government has issued three advisories to intermediaries (companies offering online services) addressing the growing concerns over AI-generated deepfakes: first, content has to be removed within 36 hours of a complaint, and content that is sexual in nature or impersonates another individual has to be removed within 24 hours after the individual produces a receipt of the complaint, as per Rule 3(2)(b) of the 2021 IT Rules; second, there must be strict compliance with the advisory, failing which intermediaries would lose safe harbour protection under Sections 79(1) and 79(2)(c) of the IT Act; and last, deepfake content would be watermarked. Though these advisories are a step forward in protecting citizens, they do not carry the same force as legislation, rules, regulations, and notifications. India needs a law that does not breach the privacy of individuals online through active monitoring and, at the same time, provides a robust way to deal with deepfakes.

Recently, the Minister of State for Electronics and Information Technology said that Sections 66C, 66D, 66E, 67A, and 67B of the IT Act can be used against AI misuse. Similarly, the BNS can also be invoked, and since the DPDP Act has not yet come into effect, the 2021 IT Rules and CERT-In can be used against AI misuse. However, the DPDP Act does not apply to data that is publicly available, which raises greater concern for influencers and people whose accounts are public.

The misuse of AI extends to the misuse of an individual's intellectual property, such as movies, songs, videos, and eBooks; altering and morphing these without the owner's consent can be against the law. A report by the Economic Times underscores that with the increase in the misuse of AI-powered "nudify" apps, there would be a rise in the crime of sextortion against minors.

There is no specific law in India regulating personality rights. However, the concept of personality rights has been recognised and upheld by the Indian judiciary under existing intellectual property frameworks, as seen in the Daler Mehndi case, where dolls imitating him were sold without consent for commercial gain, and the Anil Kapoor case, where his personality attributes were misused and tarnished over the internet. It is important to note that copyright protection applies only when a person’s persona appears in a protectable work. As a result, many harms to personality, such as unauthorised commercial exploitation, fall outside the scope of copyright law.

While copyright law safeguards creative works, it does not protect inherent personal traits such as name, likeness, or voice unless these appear in an authored work. Consequently, copyright law alone is inadequate to fully protect a person’s face, image, or voice, highlighting the need for a sui generis law.

Conclusion

The development of AI has brought both positive and negative consequences. Whenever something is done with ill intention or without consent, there is a high chance that it will pose serious ethical and legal issues. The current problem we encounter is deepfakes, which are often created out of revenge or without consent. The danger of this technology is that it makes the fabricated video look highly realistic. If not regulated, its misuse would spread like a pandemic, and the people would be the ones to suffer.

Governing social media is difficult due to three main reasons: the huge number of users, limitations of technology, and business priorities. Platforms like Facebook, YouTube, and Twitter generate massive amounts of content daily, making it hard to monitor everything. AI tools for moderation are not perfect and can make mistakes, while human moderators face stress and cultural or language challenges. At the same time, social media companies focus on user engagement and profits, which can conflict with stricter content control.

Most Generative AI companies have safety guidelines that bar harmful content, such as non-consensual sexual content or content promoting self-harm. However, these are insufficient for various reasons, and there are also Generative AI tools with no such safety guidelines at all, paving the way for misuse. The complete removal of AI would serve no purpose; what India, and indeed the world, needs is a robust legal framework to deal with these issues and narrow the ambit of pre-existing laws.

**Deepika Shekhawati is a third-year law student at Nirma University.

**Disclaimer: The views expressed in this blog do not necessarily align with the views of the Vidhi Centre for Legal Policy.