A new proposal argues that AI should be allowed to self-regulate, but only where the stakes are lower, while a tougher, EU-style risk-tiered approach sets higher compliance bars for high-consequence systems. It also calls for sector-specific rules that reflect real-world context, balancing citizen rights, consumer welfare, innovation, economic interests, and national and geopolitical security.