Context: Amended IT Rules require disclosure of AI-generated synthetic media and warn platforms that non-compliance could cost them safe harbour; the changes, notified by the government, take effect on February 20.
- The Union government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, requiring photorealistic AI-generated content to be prominently labelled. The changes, which will come into force on February 20, also significantly shorten timelines for the takedown of illegal material.
- Under the new rules, social media platforms will have two to three hours to remove certain categories of unlawful content, a sharp reduction from the earlier 24-36 hours.
- Content deemed illegal by a court or an “appropriate government” will have to be taken down within three hours, while sensitive content, such as non-consensual nudity and deepfakes, must be removed within two hours.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, define synthetically generated content as “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or a real-world event.” The final definition is narrower than the one released in a draft version of the rules in October 2025. As with the existing IT Rules, failure to comply could result in loss of safe harbour, the legal principle that sites allowing users to post content cannot automatically be held liable for that content the way the publisher of a book or periodical can.
- The rules include a carve-out for touch-ups that smartphone cameras often perform automatically.
- Platforms will be required to seek disclosures from users on whether their content is AI-generated. If no such disclosure is received for synthetically generated content, a government official said, firms would have to either proactively label the content or, in the case of non-consensual deepfakes, take it down.
- The amended rules mandate that AI-generated imagery be labelled “prominently”. While the draft version specified that such a disclosure would have to cover 10% of any image, platforms have been given more leeway after they pushed back on so specific a mandate, the official said.
Safe harbour
- “Provided that where [a social media] intermediary becomes aware, or it is otherwise established, that the intermediary knowingly permitted, promoted, or failed to act upon such synthetically generated information in contravention of these rules, such intermediary shall be deemed to have failed to exercise due diligence under this sub-rule,” the rules say, signalling a potential loss of safe harbour.
- The rules also partially roll back an amendment notified in October 2025, which had limited each State to designating a single officer authorised to issue takedown orders. States may now notify more than one such officer, an “administrative” measure to accommodate States with large populations, the official said.