The Japanese government has introduced draft AI guidelines requiring operators of large AI models to reduce error rates.
On the 12th, the Japanese government announced draft guidelines for the use of artificial intelligence. The core provisions require companies to proactively disclose information, prevent the spread of highly realistic "deepfake" images and other inappropriate content, and urge the public to correctly understand the characteristics and potential risks of AI, such as bias and misuse for criminal purposes.

The guidelines are based on the Artificial Intelligence Law, which came into effect in September, and summarize basic precautions for citizens and companies using AI. Although not legally binding, they are intended to ease public concerns about generative AI.

The draft also requires research institutions and development companies to formulate and disclose policies for collecting training data in order to reduce the risk of privacy infringement and biased model output. It emphasizes the need to strengthen efforts to prevent AI from generating fictitious or erroneous information, as such issues could have a widespread impact on society and business.

The government also calls on the public to improve their AI literacy. Article 15 of the AI Law states that efforts should be made to promote AI-related education and learning so that the public gains a broad understanding of the relevant technologies.

The AI Law was passed by the Japanese parliament on May 28, 2025, partially implemented on June 4, and fully came into effect on September 1. The Japanese government plans to compile a draft of the "Basic AI Plan" by the end of the year and release specific action plans in early 2026.