BEWARE! Is it YOU or AI? Gen AI and Personality Rights Enigma

The meteoric rise of generative AI, from crafting lifelike images to mimicking voices and producing text, has blurred the lines between human creativity and algorithmic imitation. Consequently, courts have grown increasingly vigilant against the misuse of AI to replicate the likeness of public figures, as seen in recent cases over unauthorized digital reproductions of Anil Kapoor, Arijit Singh, and Aishwarya Rai. Traditional legal frameworks like the Copyright Act now appear ill-equipped to address such complexities. Is AI legally liable for infringement? Should it be treated as an intermediary, or even an abettor or facilitator in defamation cases? These questions reach beyond conventional copyright principles. While the substantial similarity test compares outputs with training data, personality rights infringements are often immediately evident in AI-generated content. Other unresolved issues are whether AI systems are bound by Indian judicial decisions and whether the law can evolve quickly enough. Will jurisprudence adapt in time, or will another surge in AI advancement erode the protection of personality rights? Let’s explore.

Hey Grok and mates, do you follow John Does?

Indian jurisprudence was already grappling with the recognition of personality rights, and the rise of AI and deepfake technology has added to that conundrum: misuse of the likeness of well-known personalities has increased, and the problem is expanding beyond celebrities, prompting legal action. The Anil Kapoor case (discussed here) saw the Delhi High Court grant unprecedented protection to his personality, encompassing his image, voice and likeness.

However, despite this John Doe order, AI platforms still generate the likeness of these personalities, and such outputs could be further disseminated by parties with mala fide intentions, amounting to a grave violation of the court order. One such prompt was given to Grok, which responded as below.

The Bombay High Court in Arijit Singh’s case (previously discussed here) dealt with the misuse of the singer’s voice and likeness through technologies like voice cloning and digital avatars. It clamped down on the use of artificial intelligence voice models, voice conversion tools, synthesized voices or caricatures that imitate, mimic or represent a person’s traits, and restricted such deepfakes, face morphing and GIFs on any medium or format, including but not limited to physical media and virtual media such as websites, the metaverse, or social media.

However, despite the Court’s directive, Arijit Singh’s voice continues to be cloned and generated on one of the numerous platforms against which the Court pronounced its order. We put in a prompt on this platform, named Jammable, to create a cover of the song Maan Meri Jaan by King in the voice of Arijit Singh. You can find the result here.

Beyond Voice and Likeness: How AI Learns to Replicate

While personality rights violations through voice cloning and image generation are readily apparent from the output, this may not be the case when images or videos are generated directly from user prompts, given the several intricacies involved. Understanding how AI creates such content reveals deeper concerns about copyright infringement and accountability.

When you ask an AI system to generate an image or even “ghiblify” a photograph, it may not directly reproduce any single image; rather, it draws on patterns learned from the plethora of similar images in its training data to create a new one. This could amount to mosaic plagiarism, where the essence of the original works is fragmented and reassembled into something new yet unmistakably derivative. Patronus AI, a leading AI evaluation firm used by Fortune 500 companies, found troubling evidence of this. In a 100-prompt study, state-of-the-art generative models reproduced copyrighted content at alarming levels, with OpenAI alone doing so in 44% of cases. The full results can be found here.
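To make the mechanism concrete, here is a minimal, hypothetical Python sketch, assuming the open-source diffusers and imagehash libraries: it generates an image from a text prompt and then screens the output against a reference work using a perceptual hash. The model name, file names and threshold are illustrative assumptions, not a description of how any particular platform, or any court's test, actually works.

import torch
from PIL import Image
import imagehash
from diffusers import StableDiffusionPipeline

# Load an open text-to-image model; it ships learned parameters, not stored copies of images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A prompt asking the model to imitate a distinctive visual style.
prompt = "a family photograph re-imagined in a hand-drawn animation style"
generated = pipe(prompt).images[0]
generated.save("generated.png")

# Compare the output against a reference image supplied by a rights-holder (hypothetical file).
reference = Image.open("reference_artwork.png")
distance = imagehash.phash(generated) - imagehash.phash(reference)

# A small Hamming distance means the output is perceptually close to the reference;
# the cut-off of 10 is an arbitrary illustration, not a legal standard.
if distance <= 10:
    print(f"Output is perceptually close to the reference (distance={distance}); flag for review.")
else:
    print(f"Output appears distinct from the reference (distance={distance}).")

A perceptual hash is, of course, only a crude proxy for the legal test of substantial similarity; the point is simply that some output-side screening is technically feasible.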

The ambiguity grows sharper in copyright law when we notice that the Indian Copyright Act does not recognize AI-generated works under Section 2(y) or address their protection under Section 13. The closest provision, Section 2(ffc), defines a “computer programme.” This section must be interpreted in light of its historical context and purpose. Introduced via the 1994 amendment, it was designed to protect conventional, deterministic software: algorithms with explicit, human-authored rules. However, unlike rule-based programmes with fixed instructions, generative AI is far more complex, as it learns patterns from vast datasets to create new content, making it adaptive and placing it beyond the Act’s current scope.
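To illustrate the distinction this argument turns on, here is a toy Python sketch (using scikit-learn, with made-up data) contrasting a rule-based routine of the kind the 1994 amendment contemplated with a model whose behaviour is learned from data. Real generative AI is vastly more complex, but the difference in kind is the same.

from sklearn.linear_model import LogisticRegression

# Rule-based: every output follows from explicit, human-authored instructions.
def classify_by_rule(word_count: int) -> str:
    return "long-form" if word_count > 1000 else "short-form"

# Learned: the decision boundary comes from training data, not written rules,
# so the same code behaves differently depending on what it was trained on.
X_train = [[120], [300], [1500], [2400]]            # toy word counts
y_train = ["short-form", "short-form", "long-form", "long-form"]
model = LogisticRegression().fit(X_train, y_train)

print(classify_by_rule(800))          # fixed outcome, fully traceable to the rule above
print(model.predict([[800]])[0])      # outcome depends on learned parameters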

Therefore, if AI-generated works are not covered under the present scheme of the Act, there is no copyright protection for these works. If an AI platform does not itself possess ownership in the output it generates, how can the chain of title flow to the user? Several AI platforms like OpenAI and MidJourney assign ownership to the user without possessing ownership themselves, thereby conferring a defective title on the user. This raises a fundamental question: who is truly the author of AI-generated works, and who should be held responsible for any deemed infringement?

As a test for infringement, Copinger and Skone James reiterate that for a work to be deemed an infringement, it would have to be slavishly copied from some other existing work, with the author having expended no more than negligible skill, labour, or judgment in its creation. AI systems fall foul of this test, as they are not capable of possessing skill or judgment: they lack the requisite creative and emotional quotient of the human brain and simply react to user prompts to generate outputs.

By applying the aforementioned tests, we can see that if a user intentionally enters a prompt to generate a copyrighted artistic work, the mens rea on his part could be asserted. Furthermore, if the user goes on to publish the image, the actus reus on his part is complete, making him liable for the same. However, the AI platform itself cannot be innocent in this matter: even if the reproduction is not directly perceptible to the user, the storage of copyrighted material within the model architecture or internal databases, whether as part of the training process or as residual memory, can amount to infringement, as it violates the exclusive rights granted to authors under Section 14 of the Indian Copyright Act, particularly the right to store the artistic work in any medium by electronic or other means.

Dear AI: Do you qualify as an Intermediary?

In order to understand whether AI can be regulated under existing laws, we must also analyse if it qualifies as an intermediary under the IT Act, 2000. The Act defines an intermediary, with respect to any electronic records, as any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record. Traditionally, this includes internet service providers, search engines, telecom providers and social media platforms.

While we can infer from this graphic that generative AI platforms act more as direct service providers through active user interaction, the waters become murky when we consider how these platforms function in practice, which may bring them within the ambit of an intermediary under the IT Act.

Looking at how the above prompts and AI responses read, we would not call their interaction with us passive. The challenge lies in attributing liability, particularly when AI-generated content infringes upon existing laws.

Nonetheless, Section 75 of the IT Act extends the jurisdiction of Indian courts to offences committed under the Act by any person anywhere in the world, so long as a computer, computer system or computer network located in India was involved. Even if we presume generative AI to be an intermediary, the Delhi High Court in Christian Louboutin SAS v Nakul Bajaj highlighted that an intermediary’s safe harbour privilege can be revoked if it crosses the line into active participation with the user or abets the commission of unlawful acts, including violation of IP rights.

AI is the future, but what about the future of AI?

Generative AI remains a technological marvel of innovation in a minefield of legal uncertainty, and its assimilation into existing law remains a challenge. The EU Artificial Intelligence Act, 2024 imposes transparency obligations on deployers of AI systems that generate or manipulate image, audio or video content constituting a deepfake, requiring disclosure that the content has been artificially generated or manipulated. This raises a further question: even if disclosures are made, does the responsibility end there? Who shall take accountability? In India, even disclosures in this regard are not mandated.

If not a dedicated AI legislation, it is high time that the IT Act includes generative AI platforms as intermediaries, to hold them accountable for violation of rights based on the actual knowledge test laid down in MySpace v Super Cassettes. Filtering mechanisms using algorithmic auditing could be mandated, which would not only serve as a valid defence but also create accountability on these platforms by preventing personality rights infringements; a simple sketch of such a filter follows below. Accountability of social media platforms for the publication of generative AI content must be strengthened. Mandatory disclosures for AI-generated content and governance of training datasets could be obligated. It’s high time to embrace AI with caution!
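By way of illustration only, here is a hypothetical Python sketch of the kind of pre-generation filter and audit trail such a mandate might contemplate. The registry of protected names, the matching rule and the log file are assumptions made up for this example, not any platform’s or regulator’s actual mechanism.

import json
import re
from datetime import datetime, timezone

# Hypothetical registry of personalities protected by court orders (illustrative only).
PROTECTED_PERSONALITIES = {"anil kapoor", "arijit singh", "aishwarya rai"}

def screen_prompt(prompt: str) -> dict:
    """Block a generation request if it names a protected personality, and log the decision."""
    matches = [name for name in PROTECTED_PERSONALITIES
               if re.search(r"\b" + re.escape(name) + r"\b", prompt.lower())]
    decision = {
        "prompt": prompt,
        "blocked": bool(matches),
        "matched_names": matches,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Persist every decision so the platform's filtering can be audited later.
    with open("audit_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(decision) + "\n")
    return decision

if __name__ == "__main__":
    print(screen_prompt("create a cover of Maan Meri Jaan in the voice of Arijit Singh"))
    print(screen_prompt("a generic landscape painting at sunset"))

Such a filter would obviously need to be far more sophisticated in practice, but it shows that the actual knowledge and due diligence standards discussed above can be given a concrete, auditable technical form.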

Authored by: Mr. Himanshu M J and Mr. Advit Shrivastava

Mr. Himanshu M J is an alumnus of Symbiosis Law School, Pune, with a strong interest in copyright and media law. He currently works as a Legal Executive at Bhansali Productions, specialising in Intellectual Property and Media & Entertainment law. He has worked hands-on across various stages of production, assisting with contracts, compliance, and litigation for projects. In this role, he works alongside production teams to oversee copyright-driven agreements and delve into detailed legal questions concerning various aspects of IP law.

Driven by a deep interest in the creative industry and the policy frameworks governing it, he continues to explore emerging questions in copyright law and how it responds to new technologies, digital platforms, and evolving industry practices, especially where creativity and law intersect. He aims to contribute to clearer and more practical conversations on copyright within the media and entertainment sector.

Mr. Advit Shrivastava writes at the intersection of law, creativity, and contemporary culture. A graduate of Symbiosis Law School, Pune, with a BBA LLB (Hons.) degree, he currently works with Bhansali Productions Pvt. Ltd. in Mumbai as a Legal Executive. His professional focus lies in Copyright and Media & Entertainment law, where he regularly handles drafting of contracts, compliance and due diligence, legal research, as well as assisting in litigation matters.

His interest in the creative industries extends beyond formal practice. Advit is deeply attuned to the ways in which legal frameworks evolve in response to new modes of storytelling, rapidly developing digital platforms, and emergent technologies, including the accelerating influence of generative AI. He is particularly drawn to questions about how law can responsibly support innovation while safeguarding artistic expression. Bringing together his practical experience and a sensitivity to the cultural shifts shaping the media landscape, Advit aims to contribute informed commentary on the legal dimensions of modern entertainment. His writing reflects a commitment to clarity, nuance, and a genuine curiosity about the future of creativity.
