OpenAI will regularly release its model safety evaluation results.

Date: 15/05/2025
On May 14th, local time, OpenAI announced the launch of a safety evaluations hub for its models. OpenAI stated that, as part of a proactive effort to improve safety transparency, the hub's content will be updated regularly. These results reflect only some dimensions of OpenAI's safety work and are intended as periodic snapshots. For a comprehensive assessment of model safety and performance, the hub's evaluation data should be read alongside the system card descriptions, Preparedness Framework assessment reports, and dedicated research publications released with each model.