OpenAI has released Privacy Filter, an open-weight model that detects and redacts personally identifiable information (PII) in text before data leaves an enterprise environment. Built from a gpt-oss variant, the context-aware model runs locally on laptops or in browsers and supports inputs of up to 128,000 tokens, targeting "privacy-by-design" for enterprises and developers.
The goal is to protect sensitive data during AI training and processing without relying on cloud infrastructure, enabling fast, high-throughput privacy pipelines. OpenAI cautions, however, against treating the tool as a full safety guarantee.
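To illustrate where such a tool sits in a pipeline, here is a minimal sketch of a local detect-and-redact step. This is a hypothetical regex-based stand-in, not Privacy Filter itself, which the article describes as a context-aware model; the pattern names and placeholder format are assumptions for illustration.

```python
import re

# Hypothetical illustration only: Privacy Filter is a context-aware model,
# not a regex pass. This sketch shows the shape of a local redaction step
# applied before any text leaves the enterprise environment.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +1 (555) 123-4567."))
# → Contact Jane at [EMAIL] or [PHONE].
```

Because the step runs locally, redacted text can be forwarded to training or cloud processing while the raw PII never leaves the machine.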