
New Delhi | The government on Wednesday proposed changes to IT rules, mandating the clear labelling of AI-generated content and increasing the accountability of large platforms like Facebook and YouTube for verifying and flagging synthetic information to curb user harm from deepfakes and misinformation.
The IT Ministry noted that deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create "convincing falsehoods", where such content can be "weaponised" to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The proposed amendments to IT rules provide a clear legal basis for labelling, traceability, and accountability related to synthetically generated information.
Apart from clearly defining synthetically generated information, the draft amendment, on which comments from stakeholders have been sought by November 6, 2025, mandates labelling, visibility, and metadata embedding for synthetically generated or modified information to distinguish such content from authentic media.
The stricter rules would increase the accountability of significant social media intermediaries (those with 50 lakh or more registered users) in verifying and flagging synthetic information through reasonable and appropriate technical measures.
The draft rules require platforms to label AI-generated content with prominent markers and identifiers covering at least 10 per cent of the visual display, or the initial 10 per cent of the duration of an audio clip.
They also require significant social media platforms to obtain a user declaration on whether uploaded information is synthetically generated, deploy reasonable and proportionate technical measures to verify such declarations, and ensure that AI-generated information is clearly labelled or accompanied by a notice indicating the same.
The draft rules further prohibit intermediaries from modifying, suppressing, or removing such labels or identifiers.
"In Parliament as well as many forums, there have been demands that something be done about deepfakes, which are harming society...people using some prominent person's image, which then affects their personal lives, and privacy...Steps we have taken aim to ensure that users get to know whether something is synthetic or real. It is important that users know what they are seeing," IT Minister Ashwini Vaishnaw said, adding that mandatory labelling and visibility will enable clear distinctions between synthetic and authentic content.
Once the rules are finalised, any compliance failure could mean loss of the safe harbour protection enjoyed by large platforms.
With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly, the IT Ministry said.
Accordingly, the IT Ministry has prepared draft amendments to the IT Rules, 2021, with an aim to strengthen due diligence obligations for intermediaries, particularly significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content.
The draft introduces a new clause defining synthetically generated content as information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.
A note by the IT Ministry said that globally, and in India, policymakers are increasingly concerned about fabricated or synthetic images, videos, and audio clips (deepfakes) that are indistinguishable from real content and are being blatantly used to produce non-consensual intimate or obscene imagery, mislead the public with fabricated political or news content, or commit fraud or impersonation for financial gain.
The latest move assumes significance as India is among the top markets for global social media platforms, such as Facebook, WhatsApp and others.
A senior Meta official said last year that India has become the largest market for Meta AI usage. In August this year, OpenAI CEO Sam Altman said that India, currently the company's second-largest market, could soon become its largest globally.
Asked whether the changed rules would also apply to content generated on OpenAI's Sora or Google's Gemini, sources said that in many cases videos are generated but not circulated; the obligation is triggered when a video is posted for dissemination. The onus in such cases would fall on the intermediaries displaying the media to the public and on the users hosting the media on the platforms.
On the treatment of AI content on messaging platforms such as WhatsApp, sources said that once such content is brought to a platform's notice, the platform will have to take steps to prevent it from going viral.
India has witnessed an alarming rise in AI-generated deepfakes, prompting court interventions. Recent viral cases include misleading ads depicting a fake arrest of Sadhguru, which the Delhi High Court ordered US digital giant Google to remove.
Earlier this month, Aishwarya Rai Bachchan and Abhishek Bachchan sued YouTube and Google, seeking Rs 4 crore in damages over alleged AI deepfake videos.
New Delhi | The proposed amendments to the IT Rules that would mandate clear labelling of AI-generated content and place greater onus on large social media platforms are a clear step toward ensuring authenticity in digital content, Mahesh Makhija, Partner and Technology Consulting Leader at EY India, said on Wednesday.
Labelling AI-generated material and embedding non-removable identifiers will help users distinguish real content from synthetic, serving as the foundation for responsible AI adoption, he said.
"...these measures will give businesses the confidence to innovate and scale AI responsibly. The next step should be to establish clear implementation standards and collaborative frameworks between government and industry, to ensure the rules are practical, scalable, and supportive of India's AI leadership ambitions," Makhija added.
Akshay Garkel, Partner, Grant Thornton Bharat, believes that the proposed amendment is a timely and necessary move, especially as deepfakes are getting worryingly convincing.
"It's good to see the government and law enforcement taking the issue seriously and acting to curb this menace," Garkel said.
New Delhi | The IT ministry has notified an amendment to the IT Rules to streamline content takedown procedures, stipulating that intimations to social media platforms for the removal of 'unlawful information' can be issued only by senior officials and must specify precise details and reasons, in a move aimed at bringing transparency, clarity and precision to such actions.
Further, all intimations issued under Rule 3(1)(d) will be subject to a monthly review by an officer not below the rank of Secretary of the appropriate government, to ensure that such actions remain "necessary, proportionate, and consistent with law".
Under Rule 3(1)(d), intermediaries are required to remove unlawful information upon receiving actual knowledge either through a court order or notification from the appropriate government.
"On the Rule 3 (1) (d)...accountability of the Government increases with this change and we will be giving a reasoned intimation whenever any such order is passed. The orders will be passed at a fairly senior level, joint secretary and above, DIG and above," IT Minister Ashwini Vaishnaw told mediapersons.
The IT ministry, in a note, said that the government has notified amendments to Rule 3(1)(d) of the IT Rules, 2021 to enhance transparency, accountability and safeguards.
A review undertaken by the IT ministry had highlighted the need for additional safeguards and checks and balances to ensure senior-level accountability, precise specification of unlawful content, and periodic review of government directions at a higher level.
Accordingly, "any intimation to intermediaries for removal of unlawful information can now only be issued by a senior officer not below the rank of Joint Secretary, or equivalent, or, where such rank is not appointed, a Director or an officer equivalent in rank — and, where so authorised, acting through a single corresponding officer in its authorised agency, where such agency is so appointed".
In case of police authorities, only an officer not below the rank of Deputy Inspector General of Police (DIG), specially authorised, can issue such intimation.
The amendments also require such intimations to specify reasons and details.
"The intimation must clearly specify the legal basis and statutory provision, the nature of the unlawful act, and the specific URL/identifier or other electronic location of the information, data or communication link (content) to be removed," the IT ministry said.
As per the ministry, the amendments strike a balance between the constitutional rights of citizens and the legitimate regulatory powers of the state, ensuring that enforcement actions are transparent and do not lead to arbitrary restrictions.
They aim for greater transparency and accountability through clear guidelines on who can issue directions and how, along with periodic review, ensuring that checks and balances are in place.
"By mandating detailed and reasoned intimations, intermediaries will have better guidance to act in compliance with law," the ministry said.
Further, the reforms ensure proportionality and uphold the principles of natural justice, while reinforcing lawful restrictions under the IT Act, 2000.