Research: AI companies' safety measures fall far short of standards

Date: 04/12/2025
According to the latest edition of the AI Safety Index, released Wednesday by the Future of Life Institute, the safety practices of major artificial intelligence companies such as Anthropic, OpenAI, xAI, and Meta fall "far short of emerging global standards." The institute said that safety assessments conducted by an independent panel of experts found that, although these companies are all racing to develop superintelligence, none of them has established a comprehensive strategy for controlling such advanced systems.

The research was released amid growing public concern, following multiple incidents of suicide and self-harm linked to AI chatbots, about the societal impact of systems with reasoning abilities that may exceed those of humans.

Max Tegmark, MIT professor and president of the Future of Life Institute, said: "Despite recent concerns about AI-driven cyberattacks and AI-induced mental health harms and self-harm, AI companies in the United States remain less regulated than restaurants, and they are continuing to lobby against binding safety standards."