India is reportedly shifting from a “light-touch” approach to AI regulation by forming a Technology and Policy Expert Committee and an interministerial AI Governance and Economic Group. The new framework targets deepfakes, cybersecurity threats, and platform accountability across vital sectors. India is also revising intermediary rules to require clearer labeling of AI-generated content by 2026.
Justice Nagarathna urged authorities to consider legislating specifically against deepfakes and AI-enabled forms of child abuse, pointing to rising threats to girls’ safety. Speaking at the Supreme Court’s Juvenile Justice Committee stakeholders consultation with UNICEF India, she highlighted the need for stronger safeguards and a safer environment for the girl child.
India is considering a major shift in AI governance, moving away from its earlier “light-touch” approach as risks mount around cybersecurity, deepfakes, and threats to critical sectors. A Mint report says a six-member Technology and Policy Expert Committee (TPEC) and a 10-member inter-ministerial AI Governance and Economic Group (AIGEG) are preparing fresh guidelines, potentially reshaping policy direction. The push comes amid Grok controversies, tighter intermediary rules, and court action over deepfakes.
YouTube has expanded its likeness protection program, offering Hollywood celebrities and entertainers a free deepfake detection tool. Designed to help artists identify AI-manipulated impersonations, the feature supports faster review and removal requests for misleading content. The move targets a growing wave of AI-generated deception as creators seek stronger protection of their careers and public image.