China enforces AI labeling rules as deepfakes proliferate

Major video platforms roll out tools to comply with new regulation requiring identifiers on all AI-generated content.

From Jiemian News.

by ZHANG Xu

When Chinese billiards player WANG Sinuo discovered pornographic videos online that appeared to feature her, she issued a statement on Aug. 30, denouncing the clips as AI-generated and saying she is pursuing legal action.

Wang's experience is just the tip of the iceberg. Beyond deepfakes, AI-driven schemes range from fabricating data for fake reviews to sophisticated identity scams, potentially constituting civil liabilities, commercial fraud, or even criminal fraud.

Cases like these have prompted Beijing to step up oversight. On Sept. 1, the Cyberspace Administration of China (CAC) and three other agencies began enforcing new rules requiring all AI-generated text, images, videos, and audio to carry identifiers.

Labels can be explicit—such as visible watermarks shown to users—or implicit, embedded in metadata for tracing and verification.

The regulations also set out detailed technical guidance for explicit labels. For instance, visible watermarks on videos must cover at least 5% of the shorter side of the screen and remain visible for no less than two seconds.
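The numeric thresholds above can be expressed as a simple check. The sketch below is illustrative only; the function name, return shape, and interpretation are assumptions, and the regulation's official text governs the actual requirements.

```python
def min_explicit_label(width_px: int, height_px: int) -> dict:
    """Illustrative sketch of the explicit-label thresholds described above:
    a visible watermark spanning at least 5% of the screen's shorter side,
    shown for no less than two seconds. (Hypothetical helper, not an
    official compliance tool.)"""
    shorter_side = min(width_px, height_px)
    return {
        "min_watermark_px": shorter_side * 0.05,  # 5% of the shorter side
        "min_duration_s": 2.0,                    # visible for >= 2 seconds
    }
```

For a standard 1920x1080 video, for example, the watermark would need to span at least 54 pixels under this reading of the rule.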

The new regulation is the latest in a series of regulatory guardrails Beijing has introduced in recent years—ranging from deepfake rules to stricter algorithm oversight—to manage the rapid rise of China's AI sector.

ZHU Fuyong, professor at the School of Artificial Intelligence Law, Southwest University of Political Science and Law, told Jiemian News the new rules complement existing laws and regulations and "will progressively close legal loopholes in the industry." 

China's major online video platforms are already moving to comply. Bilibili launched AI labeling tools in late August, while Kuaishou and Douyin, the Chinese sibling of TikTok, followed with their own features in September. Douyin said it would add identifiers automatically if creators fail to do so and had built functions to write metadata directly into AI content.

Striking the right balance between fostering innovation and containing risk remains AI governance's central challenge. The current clampdown on AI-generated content has caused collateral damage, with creators reporting that original paintings, articles, and videos are being mistakenly flagged as AI-made and subsequently throttled or removed.

Zhu advocated a framework of "graded supervision, mandatory dual labeling, and clear accountability" to make rules predictable and enforceable. "The key is translating proportionality and responsibility-sharing into practical, actionable standards," he said.

Zhu urged AI providers to proactively manage non-compliant content and advised users to trade only labeled content, report suspicious activity, and preserve evidence if they suffer fraud or financial loss. Users who "knowingly use AI-generated content for illegal purposes", he suggested, must also bear responsibility.
