India has amended its Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules to tighten compliance requirements for deepfakes and other “synthetically generated information” (SGI), including a three-hour deadline for platforms to remove unlawful content after receiving an official order. The changes also introduce faster handling for certain urgent user complaints, along with new labeling and traceability requirements for AI-generated audio and video.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 were notified on February 10, 2026, and are set to come into force on February 20, 2026. The amendments update the IT Rules, 2021 and focus on synthetic audio, visual, and audio-visual content that can appear real.
What the new deadlines require
The amendments shorten the timeline for platforms to act once they receive “actual knowledge” of unlawful content through a court order or a reasoned notice from an authorized government officer. Under the revised rule, the platform must remove or disable access to the specified unlawful information within three hours of receiving that order or notice.
A separate, faster track applies to certain complaints from users, including content involving nudity, sexual content, morphed images, or impersonation, as described in the rules. For these complaints, intermediaries must remove or disable access within two hours of receiving the complaint, according to the MeitY FAQ document.
The rules also reduce the general timeline for resolving user grievances to seven days from receipt, and set a 36-hour timeline for certain urgent grievance cases tied to requests for removal or disabling access.
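To make the tiered timelines concrete, below is a minimal Python sketch of how a platform's compliance tooling might compute the deadline for each notice category. The category names and the function are illustrative assumptions for this article, not terms or mechanisms defined in the rules.

```python
from datetime import datetime, timedelta, timezone

# Illustrative windows drawn from the amended rules as described above.
# The category labels are hypothetical, not terminology from the rules.
REMOVAL_WINDOWS = {
    "official_order": timedelta(hours=3),         # court order / authorized officer notice
    "urgent_user_complaint": timedelta(hours=2),  # nudity, morphed images, impersonation
    "urgent_grievance": timedelta(hours=36),      # urgent removal/disabling grievances
    "general_grievance": timedelta(days=7),       # all other user grievances
}

def removal_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act on a notice."""
    return received_at + REMOVAL_WINDOWS[category]

# Example: an official takedown order received at 09:00 UTC must be
# actioned by 12:00 UTC the same day.
order_received = datetime(2026, 2, 21, 9, 0, tzinfo=timezone.utc)
print(removal_deadline("official_order", order_received))
# 2026-02-21 12:00:00+00:00
```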
Labels, metadata, and traceability for AI content
The amendments create a dedicated due diligence framework for SGI through a new rule focused specifically on synthetically generated information. Intermediaries that enable or facilitate the creation, modification, publication, transmission, sharing, or dissemination of SGI are required to deploy technical measures to prevent unlawful SGI.
For SGI that is not prohibited, the framework requires clear and prominent labeling so users can identify it as synthetically generated. The MeitY FAQ document also states that permissible SGI should be embedded with permanent metadata or technical provenance mechanisms, including a unique identifier, to the extent technically feasible, to help identify the content as SGI and the computer resource used to create or alter it.
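As a rough illustration of what a provenance record with a unique identifier could look like, the following Python sketch attaches a JSON metadata record to a hash of the media bytes. It uses only the standard library; the field names and tool name are assumptions, not a format the rules prescribe, and industry provenance standards such as C2PA address the same problem far more rigorously.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_provenance_record(content: bytes, tool_name: str) -> dict:
    """Build an illustrative SGI provenance record; field names are hypothetical."""
    return {
        "sgi": True,                                   # marks the item as synthetically generated
        "unique_id": str(uuid.uuid4()),                # unique identifier for this item
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generating_tool": tool_name,                  # the computer resource used to create/alter it
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

synthetic_video = b"...synthetic media bytes..."
record = build_provenance_record(synthetic_video, tool_name="example-gen-model")
print(json.dumps(record, indent=2))
```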
The rules also include an anti-tampering safeguard aimed at preventing the suppression or removal of SGI labels and embedded identifiers.
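One plausible way to make label stripping detectable, sketched below under the assumption that the platform holds a secret signing key, is to bind the media bytes and the provenance record together with an HMAC: if the SGI label or identifier is removed or altered, verification fails. This is an illustration of the general idea, not a mechanism the rules specify.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"platform-held-secret-key"  # hypothetical; in practice use a managed key

def sign_record(content: bytes, record: dict) -> str:
    """Bind the media bytes and their provenance record with an HMAC tag."""
    payload = content + json.dumps(record, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(content: bytes, record: dict, tag: str) -> bool:
    """Fail if the label/identifier was stripped or the record was altered."""
    return hmac.compare_digest(sign_record(content, record), tag)

media = b"...synthetic media bytes..."
record = {"sgi": True, "unique_id": "demo-id"}
tag = sign_record(media, record)

assert verify_record(media, record, tag)       # intact record verifies
record.pop("sgi")                              # tampering: the SGI label is removed
assert not verify_record(media, record, tag)   # verification now fails
```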
What counts as “synthetically generated information”
The MeitY FAQ defines SGI as audio, visual, or audio-visual information that is artificially or algorithmically created or altered using a computer resource in a way that appears real, authentic, or true and is likely to be perceived as indistinguishable from a real person or a real-world event.
The same document states that the definition includes exclusions intended to avoid covering routine, good-faith editing or enhancement that does not materially alter or misrepresent the underlying content. It also notes exclusions for certain good-faith document preparation and accessibility-related uses, as long as they do not create false documents or false electronic records.
The amendments primarily target synthetic media such as images, video, and audio; pure text outputs, on their own, fall outside the SGI definition, according to the MeitY FAQ.
Enforcement pressure and debate over speed
The updated timelines are among the most aggressive shifts in the amendments, with the takedown window cut from 36 hours to three hours after notification, and a two-hour deadline for specific complaint categories such as non-consensual intimate imagery, as described by Policy Circle.
TechCrunch reported that the amendments bring deepfakes under a formal regulatory framework and compress compliance timelines, including a three-hour deadline for official takedown orders and a two-hour window for certain urgent user complaints. The same report included comments from legal and civil society voices raising concerns that shorter timelines could increase compliance burdens and push platforms toward automated removals, reducing the scope for human review.
What it could mean for platforms and creators
Mint reported that the amendments introduce strict guidelines for AI-generated content, including prominent labeling requirements and the need to take down non-compliant content within two to three hours. The changes are aimed at speeding up response and improving transparency around AI-generated audio and video.
According to the MeitY FAQ, intermediaries must also inform users at least once every three months about key obligations and the consequences of rule violations, and intermediaries that facilitate SGI creation must additionally warn users about potential legal penalties for unlawful SGI-related violations.
