Caught in a whirlwind of Canadian public opinion, OpenAI pledges to strengthen safety measures.
On February 26th local time, OpenAI made several formal commitments to the Canadian government aimed at comprehensively strengthening its safety measures. The company's Global Policy Lead, Anne Olery, said that OpenAI had implemented a series of policy changes "several months ago." These changes are wide-ranging and include consulting "mental health, behavioral, and law enforcement experts" to assess when chatbot conversations pose credible risks. Olery emphasized that under the strengthened safety mechanisms, once a banned account is identified, OpenAI will refer it to the relevant law enforcement agencies so that potential risks are addressed promptly.

In addition, OpenAI promised to establish a direct and efficient communication channel with Canadian law enforcement agencies. This means that if OpenAI is concerned a ChatGPT user may be planning real-world violence, it can quickly notify the police, allowing law enforcement to intervene and defuse the danger before it materializes.