Europe's proposed AI Act is under scrutiny amid worries that AI and algorithmic decision-making are becoming embedded in welfare systems and social safety nets. Critics say these systems can quietly shape eligibility and access to support, undermining social and economic rights unless safeguards, transparency, and accountability are strong enough to protect people from errors and bias.
India is considering a major shift in AI governance, moving away from its earlier “light-touch” approach as risks mount around cybersecurity, deepfakes, and threats to critical sectors. A Mint report says a six-member TPEC and a 10-member inter-ministerial AIGEG are preparing fresh guidelines, potentially reshaping policy direction. The push comes amid Grok controversies, tighter intermediary rules, and court action over deepfakes.
The Reserve Bank of India is in talks with global regulators and local banks to assess cybersecurity risks linked to Anthropic’s new AI model, Mythos. RBI officials are exploring safeguards and could seek direct access to the system to test for vulnerabilities, while also pushing for data localization to protect Indian customers’ information.
A new proposal argues AI should be allowed to self-regulate, but only where the stakes are lower, while a tougher, EU-style risk-based approach sets higher compliance bars for high-consequence systems. It also calls for sector-specific rules that reflect real context, balancing citizen rights, consumer welfare, innovation, economic interests, and national and geopolitical security.