The Cyberspace Administration of China launches the special action "Clear and Rectify the Abuse of AI Technology".

Date: 30/04/2025
By: GMT Eight
In order to standardize AI services and applications, promote the healthy and orderly development of the industry, and safeguard the legitimate rights and interests of citizens, the Cyberspace Administration of China (CAC) recently issued a notice deploying a three-month nationwide special action, "Clear and Rectify the Abuse of AI Technology."

An official from the CAC stated that the special action will be carried out in two phases. The first phase will focus on strengthening governance at the source of AI technology, cleaning up and rectifying non-compliant AI applications, tightening the management of AI-generated synthetic content and its labeling, and pushing website platforms to improve their detection and authenticity-verification capabilities. The second phase will focus on prominent problems such as using AI technology to produce and spread rumors, false information, and vulgar content, to impersonate others, and to run online "water army" operations; related illegal and harmful information will be cleaned up in a concentrated manner, and offending accounts, MCN agencies, and website platforms will be punished.

The first phase will primarily address the following six prominent issues:

1. Non-compliant AI products: providing generative-AI content services to the public without completing large-model registration or filing procedures; offering features such as "one-click undressing" that violate law and ethics; cloning or editing others' voice or facial biometric information without authorization, infringing their privacy.

2. Teaching and selling non-compliant AI products: offering tutorials on creating fake videos and audio with non-compliant AI products; selling illegal products such as "voice synthesizers" and "face-swapping tools"; marketing, hyping, or promoting information about non-compliant AI products.

3. Inadequate training-data management: using data that infringes others' intellectual property, privacy, or other rights; using false, invalid, or inaccurate content obtained from the internet; using illegally sourced data; failing to establish a training-data management mechanism or to regularly inspect and remove non-compliant data.

4. Weak security measures: failing to establish content-review and intent-recognition measures appropriate to the scale of the business; lacking an effective mechanism for handling violating accounts; failing to conduct regular security self-assessments; social platforms exercising unclear or lax control over AI auto-reply services connected through API interfaces.

5. Failure to implement content-labeling requirements: service providers not adding implicit or explicit labels to deeply synthesized content, or not offering and prompting users to use explicit labeling functions; content-distribution platforms not monitoring and identifying synthetic content, allowing false information to mislead the public.

6. Security risks in key areas: filed AI products offering question-and-answer services in key areas such as healthcare, finance, and services for minors without industry-specific security review and control measures, leading to problems such as "AI prescriptions," "investment inducement," and "AI hallucinations" that mislead students and patients and disrupt the order of financial markets.

The second phase will primarily address the following seven prominent issues:

1. Using AI to spread rumors: fabricating rumors about current events, politics, public policy, social livelihood, international relations, or emergencies, or making baseless predictions and malicious interpretations of major policies; fabricating causes, developments, and details during sudden incidents and disasters; impersonating official press conferences or news reports to spread rumors; using AI-generated content to maliciously amplify cognitive bias.

2. Using AI to spread false information: splicing together unrelated images and videos to create mixed true-and-false information; blurring key elements such as the time, place, and people involved in events, and recirculating old news as new; producing and spreading exaggerated or pseudoscientific content in fields such as finance, education, justice, and healthcare; misleading and deceiving netizens with AI fortune-telling and divination, promoting superstition.

3. Using AI to publish pornographic and vulgar content: using AI functions such as "undressing" or image generation to produce synthetic pornography, indecent images and videos of others, or sexually provocative and suggestive content; creating and spreading gory and violent scenes, distorted human bodies, and hyper-realistic monsters; generating "erotic stories" and novels, posts, and notes with obvious sexual innuendo.

4. Using AI to impersonate others for infringement and illegal acts: using deepfake techniques such as face-swapping and voice cloning to counterfeit experts, entrepreneurs, and celebrities, deceiving and even profiting from netizens; using AI to mock, defame, distort, or caricature public figures or historical figures; impersonating relatives and friends with AI to commit online fraud and other illegal acts; improperly using AI to "resurrect the dead" and abusing information about the deceased.

5. Using AI for online "water army" activities: using AI to farm accounts, registering and operating social accounts in bulk while mimicking real people; using AI content farms or AI plagiarism to generate and publish low-quality, homogeneous text to attract traffic; using AI group-control software and social bots to mass-produce likes, comments, and posts, manipulating engagement metrics, steering comment sections, and manufacturing trending topics.

6. Non-compliant AI products, services, and applications: creating and disseminating counterfeit or copycat AI websites and applications; AI applications offering non-compliant services, such as tools that expand "trending search topics" into articles, or AI companionship and chat software serving vulgar, soft-porn dialogue; selling or promoting non-compliant AI applications and synthetic-content generation services or courses to attract traffic.

7. Using AI to infringe the rights and interests of minors: AI applications that induce addiction in minors, or that contain content harmful to minors' physical and mental health even in minors' mode.

A relevant official from the CAC emphasized that local cyberspace departments should fully recognize the importance of the special action for preventing the risks of AI-technology abuse and safeguarding the legitimate rights and interests of netizens. They should fulfill their territorial management responsibilities, supervise website platforms in accordance with the requirements of the special action, improve mechanisms for reviewing AI-generated synthetic content, strengthen technical detection capabilities, and carry rectification through to the end. They should also step up publicity on AI-related policies, popularize AI literacy, guide all parties to correctly understand and apply AI technology, and continue to consolidate a consensus on governance.

This article is reprinted from the WeChat public account "Net Security China." GMTEight Editor: Liu Jiayin.