by ZHANG Xu
When Chinese billiards player WANG Sinuo discovered pornographic videos online that appeared to feature her, she issued a statement on Aug. 30, denouncing the clips as AI-generated and saying she is pursuing legal action.
Wang's experience is just the tip of the iceberg. Beyond deepfakes, AI-driven schemes range from fabricating data for fake reviews to sophisticated identity scams, potentially giving rise to civil liability, commercial fraud, or even criminal fraud.
Cases like these have prompted Beijing to step up oversight. On Sept. 1, the Cyberspace Administration of China (CAC) and three other agencies began enforcing new rules requiring all AI-generated text, images, videos, and audio to carry identifiers.
Labels can be explicit—such as visible watermarks shown to users—or implicit, embedded in metadata for tracing and verification.
The regulations also set out detailed technical guidance for explicit labels. For instance, visible watermarks on videos must cover at least 5% of the shorter side of the screen and remain visible for no less than two seconds.
The new regulation is the latest in a series of regulatory guardrails Beijing has introduced in recent years—ranging from deepfake rules to stricter algorithm oversight—to manage the rapid rise of China's AI sector.
ZHU Fuyong, professor at the School of Artificial Intelligence Law, Southwest University of Political Science and Law, told Jiemian News the new rules complement existing laws and regulations and "will progressively close legal loopholes in the industry."
China's major online video platforms are already moving to comply. Bilibili launched AI labeling tools in late August, while Kuaishou and Douyin, the Chinese sibling of TikTok, followed with their own features in September. Douyin said it would add identifiers automatically if creators fail to do so and has built functions to write metadata directly into AI content.
Striking the right balance between fostering innovation and containing risk remains the central challenge of AI governance. The current clampdown on AI-generated content has caused collateral damage, with creators reporting that original paintings, articles, and videos are being mistakenly flagged as AI-made and subsequently throttled or removed.
Zhu advocated a framework of "graded supervision, mandatory dual labeling, and clear accountability" to make rules predictable and enforceable. "The key is translating proportionality and responsibility-sharing into practical, actionable standards," he said.
Zhu urged AI providers to proactively manage non-compliant content and advised users to trade only labeled content, report suspicious activity, and preserve evidence if they suffer fraud or financial loss. Users who "knowingly use AI-generated content for illegal purposes", he suggested, must also bear responsibility.