INTRODUCTION
The world of copyright law has been in a state of suspense since Disney and NBC Universal filed a lawsuit against Midjourney, a generative AI platform. The studios allege that Midjourney’s image generation services are trained on vast datasets that potentially incorporate copyrighted works, and that the platform generates images blatantly copying famous characters such as Darth Vader, the Minions and Elsa without any permission.
This dispute hinges on a set of complex yet highly relevant legal questions: does using copyrighted data for AI training constitute fair use? If so, who is the author of the AI-generated image? And what is the liability of a platform that facilitates the creation of a substantially similar derivative work?
In Disney v. Midjourney, the U.S. court emphasised that the unauthorised use of copyrighted works for AI training could constitute copyright infringement, especially where the AI output substantially replicates distinctive elements of the original characters. The fair use defence is highly fact-sensitive, turning on the purpose and character of the use, the nature of the copyrighted work, the amount taken, and the effect of the use on the market for the original.
Suppose a similar dispute arose in India: say, the makers of “Chhota Bheem” or “Motu Patlu” sued an Indian generative AI platform. Who would the Indian copyright regime protect, the original creators or the AI platform? Though India lacks specific AI legislation, its established doctrines concerning human authorship and the powerful, non-waivable nature of moral rights provide a distinct legal basis. This analysis posits that India’s current copyright regime, particularly through Section 57 of the Copyright Act, 1957, enshrines moral rights, allowing authors to claim authorship of their work and to prevent any distortion, mutilation or modification that could harm their reputation or the integrity of the work. Unlike U.S. copyright law, which primarily protects economic rights, this provision gives Indian creators a distinct legal avenue to challenge AI-generated reproductions that misrepresent or replicate their characters without authorisation. In the context of generative AI, this means that even if the output is technically produced by an AI, the creators of characters like “Chhota Bheem” or “Motu Patlu” could invoke Section 57 to assert control over the integrity of their creations and prevent unauthorised reproductions or distortions.
AUTHORSHIP AND ORIGINALITY: THE HUMAN REQUIREMENT
Like most international regimes, Indian copyright law is fundamentally human-centric. Section 2(d) of the Copyright Act, 1957 defines the “author” in terms of a natural person; notably, sub-clause (vi) provides that, for computer-generated works, the author is the person who causes the work to be created.
This human requirement was further reinforced by the Supreme Court in Eastern Book Company v. D.B. Modak. Traditionally, the “sweat of the brow” doctrine recognised copyright on the basis of the effort or labour expended in creating a work, regardless of originality. The Court moved away from this approach, holding that a work must contain a “modicum of creativity”, that is, a minimum level of originality or intellectual judgment in its creation, to qualify for copyright protection.
In a Midjourney-type scenario, the AI lacks consciousness and cannot be the author; the determination of authorship rests on the individual who designs the prompt. If the user merely types “Chhota Bheem” and an image of the character is generated, this lack of creativity means the output is an autonomous reproduction with no human author, potentially leaving the output unprotected. But there is a catch: if the user employs complex prompting, curatorial selection and iterative editing, they may argue that they have infused the work with “a modicum of creativity” and claim authorship over that output. Notwithstanding anything said above, this authorship claim is instantly rebutted if the output is a replica or a substantial copy of the original copyrighted character, leading directly to infringement.
If a user engages in iterative prompting or curatorial selection, or meaningfully modifies the AI output, the question that arises is whether this human contribution meets the “modicum of creativity” threshold, or whether the work remains a derivative copy. Indian courts have yet to clarify this boundary, making it a fertile area for judicial interpretation.
INFRINGEMENT: THE PROTECTION OF FICTIONAL CHARACTERS
If a similar case arises in India, its outcome will turn on whether fictional characters are protected under copyright law. Indian courts have consistently recognised that distinctive, well-developed fictional characters are protected under the Copyright Act as part of the original artistic or literary work.
In Raja Pocket Books v. Radha Pocket Books, the Delhi High Court, while granting an injunction, held that the character “Nagesh” was nothing but a substantial and clear reproduction of the distinctively dressed and powered “Nagraj”. Applying the “lay observer test” and the doctrine of fading memory, the court protected the audio-visual representation of the character and held that similarities in visual portrayal, name and function would create an impression of copying.
In practice, it is seen that even minor alterations by the user may not shield them from infringement if the AI-generated character clearly evokes the copyrighted work. Indian law prioritises the integrity and recognition of the original character, allowing creators to enforce both economic and moral rights.
PLATFORM LIABILITY AND THE DATASET TRANSPARENCY GAP
Another substantial question is who will be liable for infringement in a similar case, the user or the platform; this is further complicated by Section 79 of the Information Technology Act, 2000. An Indian generative AI platform could initially claim safe harbour protection for third-party content, i.e., user-generated output, provided it adheres to the due diligence guidelines. However, under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (“2025 Amendment”), platforms that enable the creation of Synthetically Generated Information (SGI) now face proactive obligations: they must ensure that AI-generated content is embedded with a permanent metadata identifier and carries a visible label covering not less than 10% of the surface area. Failure to comply may lead to loss of safe harbour, making the platform directly liable.
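To make the labelling obligation concrete, the following is a minimal illustrative sketch, not a prescribed implementation, of how a platform might stamp a visible banner covering roughly 10% of an image’s surface area and attach a persistent metadata identifier. The library used (Pillow), the function name and the metadata field are assumptions for illustration only; the 2025 Amendment does not mandate any particular tool or format.

```python
# Illustrative sketch only: one way a platform might satisfy the SGI
# labelling duty (visible label >= 10% of surface area + metadata identifier).
# Library choice and field names are assumptions, not prescribed by the Rules.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, identifier: str) -> None:
    img = Image.open(src_path).convert("RGB")
    width, height = img.size

    # Visible label: a banner whose height is 10% of the image height,
    # so its area equals 10% of the total surface area.
    banner_height = max(1, height // 10)
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, height - banner_height, width, height], fill="black")
    draw.text((10, height - banner_height + 5),
              "AI-GENERATED CONTENT (SGI)", fill="white")

    # Persistent metadata identifier embedded in a PNG text chunk.
    # A real deployment would likely use a more robust provenance scheme.
    meta = PngInfo()
    meta.add_text("sgi-identifier", identifier)
    img.save(dst_path, format="PNG", pnginfo=meta)

# Example (hypothetical paths and identifier):
# label_synthetic_image("output.png", "output_labelled.png", "platform-xyz/2025-0001")
```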
The real battlefield for platform liability has shifted from the output to the training process, as seen in ANI Media Pvt. Ltd. v. OpenAI Inc. & Anr., where the Delhi High Court framed four seminal issues that will define the future of AI in the Indian IP landscape:
1. Storage as infringement: whether the act of storing copyrighted data constitutes a violation of the owner’s exclusive rights.
2. Output infringement: whether generating a response based on that data constitutes a fresh act of infringement.
3. Fair dealing: whether AI training can be protected under the fair dealing exception in Section 52 of the Copyright Act, which is traditionally narrower than the fair use doctrine of other jurisdictions.
4. Jurisdiction: whether Indian courts can hold a foreign entity liable where the servers are located outside India but the harm occurs within Indian territory.
The Court clarified that liability can arise not only from infringing output but also from storing and processing copyrighted data without authorisation. Fair dealing is narrowly construed, and foreign entities can be held accountable for harm caused in India, widening enforcement against AI developers.
This implies that Indian platforms may need proactive dataset auditing, licensing, or metadata compliance. Additionally, even foreign AI developers can face Indian liability if the output causes harm locally.
Additionally, the DPIIT Committee’s Working Paper proposes a “One Nation, One License” model mandating a statutory licence for AI developers. If implemented, an AI content generator would no longer be able to argue “fair dealing”; instead, it would be legally required to pay a revenue-linked royalty to a centralised collective, the Copyright Royalty Collection and Administration Tribunal (CRCAT), for the “right to train”, i.e., the lawful permission to ingest, store and analyse copyrighted works for the purpose of training AI models, irrespective of whether the final output directly reproduces the original work.
CONCLUSION AND WAY FORWARD
The question of whether Disney v. Midjourney would have a different outcome under Indian law can be answered in the affirmative. Even though India faces the same problem regarding the AI model’s training data, a major obstacle in proving direct infringement against the developer, Indian judicial precedent and the statutory framework are well equipped to provide redress against the infringing output and the facilitating platform.
The rigid “modicum of creativity” test established in Eastern Book Company v. D.B. Modak effectively prevents an AI user from claiming authorship over a simple character replica. Moreover, the clear protection courts have extended to distinctive fictional characters ensures that any substantially similar image or creation of a character such as “Chhota Bheem” or “Motu Patlu” would constitute copyright infringement under Section 51 of the Indian Copyright Act. An Indian platform hosting the infringing content would be obliged to remove it swiftly upon notice, in accordance with the “safe harbour” provisions of Section 79 of the IT Act.
As a means of ensuring future resilience, government intervention is needed. Section 2(d) of the Copyright Act should be amended to require substantial human creative control for AI-assisted works to qualify for protection. Regulatory measures, such as dataset transparency requirements for high-risk generative AI, will also be necessary to enable creators to enforce their rights directly against the systemic copying that occurs during the AI’s training phase. Finally, a clear codification of character merchandising rights would create a more robust mechanism for the commercial enforcement of creators’ rights. Additionally, following the DPIIT’s December 2025 Working Paper, India is likely to implement a “One Nation, One License” framework. This would mandate that AI developers pay revenue-linked royalties to a centralised society (CRCAT) for the “right to train”, effectively ending the era of unlicensed scraping.
Authored by: Mr. Ashutosh Mishra, Dharmashastra National Law University (DNLU), Jabalpur