Safeguarding Digital Identity in the Age of Deepfakes: An Analytical Study of AI Regulation in India with Special Reference to Personality Rights Jurisprudence


Summary

This guest post explores how deepfake and generative AI technologies challenge India's legal frameworks on digital identity and celebrity personality rights. It compares domestic and global regulatory models, advocating for a balanced system to safeguard digital identities without hindering AI innovation.

Introduction

The rapid rise of Generative AI has fundamentally reshaped the landscape of digital identity, forcing legal systems around the world to confront the unprecedented challenge of deepfake technology. In India, this shift is especially significant: traditional disputes over the unauthorized use of celebrity names and likenesses have evolved into a far more complex struggle involving highly realistic digital replicas that mislead consumers and undermine a celebrity's autonomy to exercise their right of publicity. What began with cases centred on unauthorized domain names has expanded into a broader conflict over personality rights, in which a celebrity's name, likeness, voice, and personality traits are digitally recreated and commercially exploited to falsely associate them with the endorsement of goods and services to which they never consented.

This escalating harm has placed Indian courts under immense pressure to rethink how celebrity personality rights and digital identity may be safeguarded in the age of deepfake technologies. While the judiciary has attempted to recognize celebrity personality rights by locating their legal basis in constitutional privacy, the remedy of passing off under trademark law, and the infringement of moral rights under copyright law, the absence of a sui generis statutory framework is bound to create implementation and enforcement roadblocks. The draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, relating to synthetically generated information, released by the Ministry of Electronics and Information Technology (MeitY) on October 22, 2025, are a commendable initiative toward regulating deepfakes. However, these proposed amendments to the IT Rules raise pressing questions about the procedural mandates governing delegated legislation and about constitutional freedoms. Against this backdrop, global comparisons such as Denmark's rights-based model and the EU's tiered compliance framework under the AI Act highlight the growing need for India to build a comprehensive system aimed at holistically safeguarding a person's digital identity in the age of deepfakes.

A Two-Track Jurisprudence on Celebrity Personality Rights: Conflating Domain Name Disputes with Deepfake Misuse

The recent jurisprudence on personality rights in India has unfolded along two distinct legal pathways. While both deal with the unauthorized use of celebrity identity, the first involves disputes over domain names and the second relates to the use of Generative AI and deepfake technology to create personas replicating celebrities. In the former, individuals register a celebrity's name, or iconic dialogues they are known to be associated with, as a website URL without permission. Cases involving Aishwarya Rai Bachchan, Abhishek Bachchan, Anil Kapoor, Jackie Shroff, Shilpa Shetty Kundra, Salman Khan, Asha Bhosle, Hrithik Roshan and Akshay Kumar show how their names function like brand identifiers, with the domain name itself becoming the commodity misused for profit. When someone registers a domain using a celebrity's name, they exploit the built-in commercial value of that identity. Legally, this often amounts to the economic tort of passing off and to moral rights infringement, because the domain name creates a misleading association and capitalizes on the celebrity's goodwill. Indian courts, however, grant relief under trademark law, since domain names are recognized subject matter of trademark and passing off claims.

A very different set of challenges arises in the latter category, driven by the rise of deepfake technology and GenAI tools that facilitate synthetically generated or fabricated information. These cases reflect a more sophisticated form of identity theft, misuse, and impersonation. Here, deepfake technology and GenAI tools are used to create videos or audio clips that replicate a celebrity's face, voice, and likeness with striking realism. Suits involving Rajat Sharma, Sadhguru, and Ankur Warikoo demonstrate how these digital replicas are used to endorse goods or services without consent. The risks are heightened by the culture of celebrity and influencer endorsements across social media platforms. The harm runs deeper here: the fabricated content portrays the celebrity as actively endorsing something they never approved, damaging public trust and exploiting the unique commercial value of their persona.

These deepfake disputes expose two major legal gaps in India's current framework. The first is determining how personality traits such as likeness, voice, and persona fit within existing forms of intellectual property. Since these attributes are not goods or services, applying traditional passing off principles as a remedy becomes challenging. When the misuse involves exploiting the economic value of a celebrity's identity, the issue aligns more closely with economic harm in the form of misappropriation or unjust enrichment, which may be actionable under the economic tort of passing off. Where the celebrity's digital likeness is used to promote a product or service, however, the more precise statutory tool is the concept of false trade description under Section 2(1)(i) of the Trade Marks Act, 1999, which directly addresses deceptive claims of endorsement, read with unfair trade practices under Section 2(47) of the Consumer Protection Act, 2019.

The second major gap concerns the growing importance of false endorsement through celebrity images or voices. This focuses specifically on misleading consumers into believing that a celebrity has approved, reviewed, or supported a product. GenAI tools embedded with deepfake technologies that misuse a celebrity's personality attributes fit squarely within this category, because the fabricated video or audio portrays the celebrity promoting goods or services they never engaged with or consented to. Trademark remedies can, at best, only be stretched to cover such cases, unlike disputes over false domain name use where they apply more naturally. Recently, the actor Shah Rukh Khan successfully obtained a trademark registration over his name and its acronym. If his name or acronym were used to devise a domain name URL, he could be safeguarded through a passing off remedy in trademark law, since his name clearly qualifies as subject matter of a mark. Passing off protects even unregistered marks where misrepresentation and harm to goodwill are shown; yet where celebrity names or iconic dialogues are used as website URL indicators, reasonable doubts remain as to whether consumers would treat such websites as genuinely associated with the celebrity, and a preponderance of probabilities may suggest otherwise. An analogy may be drawn to celebrity fan pages on social media platforms such as YouTube, Facebook, and Instagram: a fan page is easily distinguishable from a celebrity's official page. Likewise, in the case of domain names, consumers can usually decipher the source of such websites, so the likelihood that they would assume the URL belongs to the celebrity is doubtful.

Therefore, while domain name disputes deal with simple misuse of a celebrity’s name, deepfake cases pose a far more complex threat, which pushes the limits of IP jurisprudence and raises serious concerns about privacy, dignity, and digital autonomy.

Limits of Delegated Legislation and the Rise of False Endorsement Jurisprudence

One of the strongest criticisms of the draft amendments to the Information Technology Rules, 2021 concerns the limits of delegated legislation. The amendments attempt to regulate an entirely new technological landscape, particularly GenAI and deepfakes, through subordinate rules rather than by updating the parent statute, the Information Technology Act, 2000. Rules are supposed to implement the objectives of the Act, not expand them. Yet the proposed changes introduce new and onerous obligations, such as mandatory labelling and metadata tagging for all Synthetically Generated Information (SGI) and a requirement that large social media platforms verify user declarations before content is even uploaded.

The definition of SGI is extremely broad, potentially covering harmless photo filters or simple edited images, which would then carry the same regulatory burden as harmful deepfakes. At the same time, the task of accurately detecting and verifying synthetic content remains technologically unreliable. By tying platform liability to the accuracy of these tools, the amendments place social media companies in a difficult position. Faced with the risk of losing safe harbour protection under Section 79 of the IT Act, platforms are likely to remove content aggressively to avoid penalties. This kind of over-compliance would inevitably affect satire, parody, artistic work, and other legitimate forms of expression, creating a clash with the constitutional right to freedom of speech under Article 19(1)(a).

In the absence of a clear legal framework specifically governing deepfakes, courts have begun turning to existing intellectual property and constitutional tools. The judiciary has increasingly treated deepfake misuse as a violation of the right to privacy under Article 21. Meanwhile, commercial entities and public figures have begun proactively using trademark law and the tort of false endorsement to protect themselves, such as in TATA Group’s action in the Taj Lake Palace dispute and Shah Rukh Khan’s extensive trademark filings.

Two Models of Deepfake Regulation: Ownership vs. Oversight

The global effort to respond to deepfakes has revealed a striking difference in how countries think about digital identity. Denmark and India, in particular, showcase two very different models: one grounded in personal ownership and the other in regulatory compliance.

Denmark has taken a bold and straightforward approach by proposing amendments to its Copyright Act that would treat a person’s face, voice, and likeness as a form of intellectual property. This reframes identity as something an individual owns in the same way they own creative work. Under this model, every person gains a copyright-like right over their image and voice. If someone creates a deepfake without consent, the affected person can demand its removal and seek compensation just as a copyright holder would in a piracy case. The violation is simply the unauthorized use of personal property, and the individual can act immediately without having to first prove financial or reputational harm.

On the other hand, India’s current approach to regulating SGI under the new IT Rules depends mainly on two mechanisms: metadata insertion and user declaration verification. The metadata requirement asks AI tools to place a permanent identifier or digital fingerprint inside the content file. While this sounds effective, it is easy for anyone with basic technical knowledge to remove, change, or fake metadata using widely available software.
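
To see why editable metadata offers such weak protection, consider the minimal sketch below. It assumes the Python Pillow imaging library and a hypothetical file name, and simply shows that re-saving only an image's pixel data silently discards any provenance label stored in the file's metadata; real labelling schemes are more elaborate, but the underlying fragility is the same.

```python
# Illustrative only: shows how trivially file-level metadata can be stripped.
# Assumes the Pillow library and a hypothetical file "labelled_deepfake.png"
# whose provenance label lives purely in its metadata fields.
from PIL import Image

original = Image.open("labelled_deepfake.png")
print(original.info)  # may contain an "AI-generated" label or similar tag

# Re-saving only the pixel data discards every metadata field.
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))
stripped.save("unlabelled_copy.png")

print(Image.open("unlabelled_copy.png").info)  # typically {} – the label is gone
```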

The second requirement, user declaration verification, places the responsibility on social media platforms to check whether a user has honestly disclosed that their upload contains AI generated elements. Platforms are expected to use AI detection tools to verify this. But deepfake detection technology is still evolving, often inaccurate, and prone to false negatives, which means harmful content can slip through undetected. Together, these two mechanisms create a system that looks strong on paper but is easy to bypass in practice.
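
The gap this creates can be illustrated with a short, purely hypothetical decision flow; the function names and threshold below are illustrative and not drawn from the draft rules. A detector false negative means an undeclared deepfake is published without any label, while an aggressive threshold pushes platforms toward over-blocking legitimate content.

```python
# A minimal sketch of the declaration-verification flow described above.
# All names and the threshold are hypothetical; real detectors return
# probabilistic scores and are prone to false negatives.
from dataclasses import dataclass

@dataclass
class Upload:
    user_declared_synthetic: bool
    detector_score: float  # 0.0 (likely real) .. 1.0 (likely synthetic)

def moderate(upload: Upload, threshold: float = 0.8) -> str:
    detector_flags_synthetic = upload.detector_score >= threshold
    if upload.user_declared_synthetic:
        return "publish with SGI label"
    if detector_flags_synthetic:
        return "block or require declaration"  # over-blocking risk for satire/parody
    # False-negative path: an undeclared deepfake with a low detector score
    # sails through with no label at all.
    return "publish unlabelled"

print(moderate(Upload(user_declared_synthetic=False, detector_score=0.35)))
# -> "publish unlabelled", even if the content is in fact a deepfake
```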

To address these weaknesses, the newly proposed mandate introduces a more advanced method of traceability that cannot be easily altered or removed by the end user. Instead of relying on editable metadata, the updated system requires AI models to embed a unique, encrypted code within the content file. This code cannot be understood or used unless it matches a synchronized decryption key held by the platform where the content is uploaded. When someone uploads AI generated content, the platform can immediately decrypt the embedded signature and confirm its origin, without depending on a user’s honesty or unreliable detection tools. This approach shifts the technical responsibility to the AI generator itself, ensuring that traceability is built into the content at the moment of creation.
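
The draft does not spell out the cryptography, so the sketch below is only one possible reading of such a scheme, expressed with a keyed hash (HMAC) over a shared secret rather than any mechanism the proposal actually names; every identifier, payload field, and key in it is assumed for illustration.

```python
# One possible technical reading of the proposal above, sketched with a
# keyed HMAC rather than any specific scheme named in the draft rules.
# The shared secret, payload fields and function names are all assumptions.
import hashlib
import hmac
import json

SHARED_KEY = b"key-provisioned-to-generator-and-platform"  # hypothetical

def embed_provenance(content: bytes, model_id: str) -> dict:
    """Generator side: bind a provenance tag to the exact content bytes."""
    payload = {"model_id": model_id, "sha256": hashlib.sha256(content).hexdigest()}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Platform side: recompute the tag; any edit to content or payload fails."""
    body = json.dumps(record["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, record["tag"])
    matches_content = record["payload"]["sha256"] == hashlib.sha256(content).hexdigest()
    return untampered and matches_content

video = b"...synthetic video bytes..."
record = embed_provenance(video, model_id="gen-model-x")
print(verify_provenance(video, record))            # True
print(verify_provenance(video + b"edit", record))  # False – tampering detected
```

Because the tag is bound to a hash of the exact content bytes, any edit to the file or to its provenance record fails verification, which is the property the proposal appears to be reaching for.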

Conclusion

The Digital Personal Data Protection Act, 2023 marks an important early step in strengthening digital rights in India, but the deepfake crisis makes it clear that this alone is not enough. India remains dependent on an aging IT Act, 2000 and a judicial patchwork to deal with problems that rapid technological change has long since outpaced. At the same time, the answer cannot be a blanket ban on AI tools. Such an approach would stifle innovation and discourage the very small and medium enterprises and startups that are driving India's AI transformation.

The recent India AI Guidelines, 2025, a pro-innovation governance framework, showcase the potential to devise strategic goals around AI. However, alongside governance, regulation is equally important. These guidelines are far from exhaustive. While their underlying principles assist in decision-making, it is imperative to take cognizance of, and ward off, the threats posed by the reckless use of GenAI and deepfake technologies.

What India needs instead is a balanced and proportionate regulatory model, similar to the risk-tiered system under the European Union's AI Act, which classifies AI tools based on the level of risk they pose. Aligning governance guidelines that support evolving AI technologies with regulations aimed at mitigating the significant threats deepfakes pose to privacy and personality rights is key to ensuring that citizens are protected without slowing down innovation.

References

    1. Press Information Bureau. (2024, August 3). Government of India notifies amendments to Information Technology Rules. Press Information Bureau. https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719&reg=3&lang=2
    2. Ministry of Electronics and Information Technology. (2023, April 6). Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as updated). Government of India. https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-Media-Ethics-Code-Rules-2021-updated-06.04.2023-.pdf
    3. Bar & Bench. (2024, November 8). Delhi High Court grants relief to Aishwarya Rai; says breach of personality rights undermines celebrity’s dignity. Bar & Bench. https://www.barandbench.com/news/litigation/delhi-high-court-grants-relief-to-aishwarya-rai-says-breach-of-personality-rights-undermines-celebritys-dignity
    4. Bar & Bench. (2024, November 12). Delhi High Court grants interim injunction to Abhishek Bachchan to protect his personality rights. Bar & Bench. https://www.barandbench.com/news/litigation/delhi-high-court-grants-interim-injunction-to-abhishek-bachchan-to-protect-his-personality-rights
    5. SCC Online. (2024, August 2). Bombay High Court grants ad-interim injunction to Arijit Singh to protect his personality rights. SCC Online. https://www.scconline.com/blog/post/2024/08/02/bomhc-grants-ad-interim-injunction-to-arijit-singh-to-protect-his-personality-rights/
    6. SCC Online. (2024, May 21). Delhi High Court restrains entities infringing Jackie Shroff’s publicity and personality rights. SCC Online. https://www.scconline.com/blog/post/2024/05/21/dhc-restrains-entities-infringing-jackie-shroff-publicity-personality-rights/
    7. SCC Online. (2025, December 29). Bombay High Court grants relief to Shilpa Shetty against AI-generated deepfake content. SCC Online. https://www.scconline.com/blog/post/2025/12/29/bom-hc-shilpa-ai-generated-deepfake-content-scc-times/
    8. The Indian Express. (2024, September 12). Delhi High Court grants protection to Salman Khan’s personality rights. The Indian Express. https://indianexpress.com/article/legal-news/salman-khan-delhi-high-court-personality-rights-10414810/
    9. SCC Online. (2025, October 3). Bombay High Court grants interim injunction to Asha Bhosle protecting her personality rights. SCC Online. https://www.scconline.com/blog/post/2025/10/03/bombay-hc-grants-interim-injunction-to-asha-bhosle-protecting-her-personality-rights-orders-blocking-of-infringing-websites-platforms-and-youtube-videos/
    10. The Times of India. (2024, September 21). Delhi High Court protects Hrithik Roshan’s personality rights against unauthorised AI content. The Times of India. https://timesofindia.indiatimes.com/entertainment/hindi/bollywood/news/delhi-high-court-protects-hrithik-roshans-property-rights-ai-content-unauthorized-commercial-links-to-be-taken-down/articleshow/124570820.cms
    11. SCC Online. (2025, October 20). Bombay High Court condemns circulation of Akshay Kumar deepfake video. SCC Online. https://www.scconline.com/blog/post/2025/10/20/bombay-hc-condemns-akshay-kumar-deepfake-video/
    12. LiveLaw. (2024, July 15). Delhi High Court rules in favour of journalist Rajat Sharma; restrains use of “Baap Ki Adalat”. LiveLaw. https://www.livelaw.in/high-court/delhi-high-court/delhi-high-court-rules-in-favour-of-journalist-rajat-sharma-restrains-use-of-baap-ki-adalat-260124
    13. SCC Online. (2025, June 2). Delhi High Court grants interim protection to Sadhguru’s personality rights against AI misuse. SCC Online. https://www.scconline.com/blog/post/2025/06/02/dhc-grants-interim-protection-to-sadhgurus-personality-rights-restrains-misuse-through-ai/
    14. SCC Online. (2025, May 29). Delhi High Court grants John Doe injunction to Ankur Warikoo against deepfake AI misuse. SCC Online. https://www.scconline.com/blog/post/2025/05/29/delhi-high-court-ankur-warikoo-john-doe-injunction-deepfake-ai-misuse-legal-news/
    15. Ministry of Law and Justice. (1999). The Trade Marks Act, 1999. Government of India. https://www.indiacode.nic.in/bitstream/123456789/15427/1/the_trade_marks_act%2C_1999.pdf
    16. Ministry of Law and Justice. (2019). The Consumer Protection Act, 2019. Government of India. https://www.indiacode.nic.in/bitstream/123456789/16939/1/a2019-35.pdf
    17. IndiaFilings. (n.d.). Trademark search results for “SRK”. IndiaFilings. https://www.indiafilings.com/search/srk-tm-970166
    18. Ministry of Law and Justice. (2000). The Information Technology Act, 2000 (as amended). Government of India. https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
    19. Legislative Department. (2024). Constitution of India (updated edition). Government of India. https://lddashboard.legislative.gov.in/sites/default/files/coi/COI_2024.pdf
    20. LiveLaw. (2024, October 10). Delhi High Court orders takedown of alleged deepfake video shot at Taj Lake Palace, Udaipur. LiveLaw. https://www.livelaw.in/high-court/delhi-high-court/delhi-high-court-orders-take-down-of-alleged-deepfake-video-on-taj-lake-palace-udaipur-307826
    21. Vidhi Centre for Legal Policy. (2024, June 18). Does India also need to take a page from Denmark’s proposed amendment? Vidhi Centre for Legal Policy. https://vidhilegalpolicy.in/blog/does-india-also-need-to-take-a-page-from-denmarks-proposed-amendment/
    22. Ministry of Electronics and Information Technology. (2024, June). Advisory on generative AI and digital content. Government of India. https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
    23. European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act). EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

Authored by Dr. Gunjan Chawla Arora, Assistant Professor of Law & Head, Centre for Intellectual Property Rights, Institute of Law, Nirma University, and Ashwika M.M, a third-year B.A. LL.B. (Hons.) student at the Institute of Law, Nirma University.

About the Authors

  1. Dr. Gunjan Chawla Arora, Assistant Professor of Law & Head, Centre for Intellectual Property Rights, Institute of Law, Nirma University, has over a decade of experience as an academician in Intellectual Property Rights law. She has written extensively on the interface of IP with digital piracy, sustainable fashion, metaverse, and GenAI.

  2. Ashwika M.M is a third-year B.A. LL.B. (Hons.) student at the Institute of Law, Nirma University. She has a keen interest in Intellectual Property Law and actively engages in research and practical learning in this field. She is a merit scholar and an accomplished mooter.

    Disclaimer: The views, opinions, and information expressed in this blog post are solely those of the author(s) and do not reflect or represent the views, opinions, or positions of the organization. Readers are encouraged to conduct their own independent research and consult qualified experts before making any decisions based on this material.
