Social media companies in China are complying with a stringent AI labeling regulation intended to make clear, to both human users and automated systems, which content is authentic and which is machine-generated.

Social media companies in China are implementing labels for content produced by artificial intelligence, making these labels easily discernible to users and embedding a digital watermark that can be read by automated systems when scanning content.


In a bid to combat misinformation and fraud, a growing number of countries are introducing stricter regulations on AI-generated content. China is at the forefront: its Cyberspace Administration has announced penalties, so far unspecified, for using AI to spread misinformation or manipulate public opinion.

China's social media giants, including Tencent Holdings' WeChat, ByteDance's TikTok sibling Douyin, Weibo, and Rednote, are now required to apply a watermark or explicit indicator to AI content and to embed machine-readable metadata for web crawlers. Users, too, are expected to declare AI-generated content and label it as such when they upload it.
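The platforms' actual formats are not public, but the dual-labeling idea can be sketched roughly: pair a visible tag for humans with a machine-readable metadata flag for crawlers. The field names below are illustrative assumptions, not the real platform schema.

```python
# Hypothetical sketch of dual labeling; field names are assumptions,
# not any platform's actual format.

def label_ai_content(post: dict) -> dict:
    """Attach a human-visible tag and a machine-readable metadata flag."""
    labelled = dict(post)
    # Visible indicator for human readers, prepended to the caption.
    labelled["caption"] = "[AI-generated] " + labelled.get("caption", "")
    # Machine-readable flag that a crawler can check without parsing text.
    labelled["metadata"] = {**labelled.get("metadata", {}),
                            "ai_generated": True}
    return labelled

post = {"caption": "Sunset over Shanghai", "metadata": {"author": "demo"}}
labelled = label_ai_content(post)
print(labelled["caption"])                    # → [AI-generated] Sunset over Shanghai
print(labelled["metadata"]["ai_generated"])   # → True
```

The point of carrying both forms is that the visible tag survives screenshots while the metadata flag survives automated redistribution, so each audience gets a signal it can actually read.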

Similar initiatives are under way elsewhere. The European Union, for instance, will apply Article 50 of the EU AI Act from August 2026, requiring the labeling of AI-generated content. Vietnam, South Korea, Peru, El Salvador, and the USA are among the other nations discussing or implementing such rules.

In line with these global efforts, platforms are reserving the right to delete content uploaded without appropriate labeling. Google, for example, is taking steps to help users identify AI-edited images: its Pixel 10 phones attach C2PA credentials to camera photos so users can tell whether an image has been edited with AI.

Safeguards against the manipulation of AI-generated content are becoming more widespread, but they are not watertight: reports suggest that numerous users are finding ways to bypass these controls.

To aid in this effort, the Internet Engineering Task Force has proposed a new AI header field for disclosing AI-generated content via metadata. The proposal does not guarantee that humans can easily identify AI-generated content, but it can help algorithms do so.
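Since the article does not spell out the proposed field's name or values, the following is only an illustrative sketch of how a crawler might consume such a header; "AI-Disclosure" and its values are assumptions, not the IETF draft's actual syntax.

```python
# Illustrative only: the header name "AI-Disclosure" and its values
# are assumptions; the article names no concrete field syntax.
from email.message import Message

# Values this sketch treats as an AI disclosure.
AI_VALUES = {"ai-originated", "ai-modified", "true"}

def is_disclosed_as_ai(headers: Message) -> bool:
    """Crawler-side check for a hypothetical AI-disclosure response header."""
    return headers.get("AI-Disclosure", "").lower() in AI_VALUES

resp = Message()
resp["Content-Type"] = "text/html"
resp["AI-Disclosure"] = "ai-originated"
print(is_disclosed_as_ai(resp))  # → True
```

Putting the disclosure in a header rather than the page body is what makes it useful to algorithms: a crawler can classify a response without rendering or parsing the content at all.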

Each affected platform has posted messages informing users about the new requirement to label AI-generated content. Users are given the option to flag AI-generated content that is not correctly labeled.

As the world grapples with the increasing prevalence of AI-generated content, the implementation of stricter controls in China might lead to a similar situation in Western countries. The proposed AI header field is just one of many initiatives aimed at ensuring transparency and accountability in the digital age.
