A public interest litigation (PIL) has been filed in the Supreme Court seeking a direction to the Union Government to frame a regulatory and licensing framework for Artificial Intelligence (AI) systems. The petition contends that such a framework must be mandatory for AI capable of generating synthetic images, videos, and audio impersonations of real individuals. The petition, filed by advocate Aarati Sah, urges the apex court to issue a writ of mandamus to the Ministry of Electronics and Information Technology (MeitY) and the Department of Telecommunications (DoT) to establish a statutory mechanism for the responsible deployment of AI technologies and to ensure accountability from digital intermediaries such as Meta Platforms and Google.
According to the petition, the unregulated spread of AI-generated content, popularly known as deepfakes, has led to serious violations of privacy, dignity, and reputation. “The unchecked use of AI tools capable of cloning voices and images has already caused immense harm to individuals and poses an imminent threat to public trust, social harmony, and national security,” the petition emphasises.
The core demands of the petition are to:
- Direct the government to create a regulatory and licensing framework for AI systems.
- Mandate digital platforms (like Meta and Google) to implement transparent and time-bound mechanisms for the removal of AI-generated impersonations.
- Constitute an Expert Committee to recommend ethical AI standards.
The petitioner cited a recent surge in deepfake incidents targeting public figures, including celebrities and journalists, noting that the Delhi and Bombay High Courts have granted interim protections in several such cases. Among those who obtained relief are Akshay Kumar, Kumar Sanu, and journalist Sudhir Chaudhary, the petition records.

Drawing comparisons with international practice, the petition points out that jurisdictions such as the European Union, the United States, China, and Singapore have implemented regulatory regimes to curb the misuse of AI-generated content through risk-based classification, labelling, and enforcement systems. India, it argues, lacks comparable legal safeguards. The plea contends that government inaction violates citizens’ fundamental rights under Articles 14, 19, and 21 of the Constitution. It also accuses platforms such as Meta and Google of failing to act swiftly on complaints of deepfake misuse, rendering grievance redressal mechanisms ineffective. It must be noted, however, that relief has so far been quite selective: those critical of the government or establishment seldom obtain it.
The petitioner has contended that judicial intervention is now essential to prevent further harm and to safeguard citizens’ digital dignity. Reiterating its three core demands, the PIL asks the apex court to direct the Union Government to frame and notify a comprehensive AI regulatory and licensing framework, to mandate digital platforms to establish transparent, time-bound mechanisms for the removal of AI-generated impersonations, and to constitute an expert committee comprising government officials, jurists, technologists, and civil society members to recommend ethical AI standards. The petition emphasises that deepfakes have the potential to “destroy lives, reputations, and institutions within moments” and warns that without immediate intervention they could be weaponised to influence elections, incite communal discord, and undermine public faith in democratic institutions. The petition is filed through Adv Anilendra Pandey.
Current Regulatory Context
In parallel to this judicial action, the Union Government has taken swift steps to address deepfakes by proposing draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
These proposed amendments directly tackle some of the PIL’s concerns by aiming to:
- Mandate Labelling: Require all AI-generated or synthetic content to be clearly and permanently labelled. For videos and images, the label must cover at least 10% of the screen area, and for audio, it must be included in the first 10% of the playback duration.
- Increase Platform Accountability: Require social media platforms to obtain user declarations on whether the content they upload is synthetic and use technical measures to verify this.
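The two 10% thresholds in the proposed labelling rule can be expressed as simple checks. The sketch below is illustrative only: the function names and data shapes are assumptions for this example, not part of any official specification or compliance tool.

```python
# Illustrative sketch of the 10% labelling thresholds described in the
# draft amendments to the IT Rules, 2021. Function names and parameters
# are assumptions made for this example, not an official specification.

def visual_label_compliant(frame_w, frame_h, label_w, label_h):
    """True if the label covers at least 10% of the frame area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

def audio_label_compliant(total_duration_s, label_end_s):
    """True if the audio disclosure ends within the first 10% of playback."""
    return label_end_s <= 0.10 * total_duration_s

# Example: a 1920x1080 video frame with a 640x360 label banner,
# and a 120-second audio clip whose disclosure ends at the 10-second mark.
print(visual_label_compliant(1920, 1080, 640, 360))  # 230400 >= 207360 -> True
print(audio_label_compliant(120.0, 10.0))            # 10s <= 12s -> True
```

A smaller watermark (say 100x100 pixels on a full-HD frame) would fail the area check, which is the kind of case the mandatory-labelling requirement appears aimed at.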
This push for a legal framework from the government side, coupled with the PILs filed in both the Supreme Court and the Delhi High Court, confirms that AI regulation and the threat of deepfakes are now a top priority in India’s legal and policy sphere.