
AI Content Labelling Laws in Asia: What Developers Need to Know (2026)


Asia is enforcing AI content labelling laws faster than any other region. China’s rules have been live since September 2025. South Korea’s AI Basic Act followed in January 2026. India introduced 3-hour takedown deadlines in February. Vietnam became the first Southeast Asian country with a standalone AI law on March 1.

The EU AI Act’s transparency rules won’t be enforceable until August 2026. If you’re building AI products that generate text, images, audio, or video for users in Asia, you’re already behind on compliance.

Here’s what each country requires and what you need to implement.

China: Mandatory Dual Labelling (Live Since September 2025)

China has the most detailed AI content labelling regime in the world. The Measures for Labeling Artificial Intelligence-Generated Content, issued by the Cyberspace Administration of China (CAC), require two layers of identification on every piece of AI-generated content that could mislead the public.

Layer 1 — Visible labels: Text, audio cues, or graphic overlays that users can immediately recognize. Chatbot responses, face-swapped videos, voice clones, and synthetic images must carry a clear “AI-generated” marker in Chinese characters.

Layer 2 — Embedded metadata: Hidden watermarks containing the provider’s name, a unique content identifier, and encrypted data that survives compression, cropping, and redistribution.

Platform obligations: Platforms must detect incoming content, categorize it into three tiers (confirmed, possible, or suspected AI-generated), and reinforce or add labels accordingly. The CAC’s 2025 “Qinglang” enforcement campaign has already targeted unlabelled deepfakes.

Technical standard: China published GB 45438-2025, a national standard specifying labelling methods for different content formats.

What developers need to implement:

  • Visible “AI-generated” labels on all synthetic content
  • Metadata embedding with provider identification and content IDs
  • Watermarking that persists through common transformations (crop, compress, resize)
  • Content classification system (confirmed/possible/suspected AI-generated)

Who this applies to: Any AI service provider whose content reaches Chinese users, including through platforms that redistribute content.
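As a rough sketch, the two layers can be produced together at generation time. The provider name, field names, and visible marker text below are placeholders for illustration, not the exact wording or metadata schema that GB 45438-2025 specifies, and real deployments would embed the metadata into the media file itself rather than returning it as a string:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

# Hypothetical provider name and marker text; GB 45438-2025 defines the
# actual required formats per content type.
PROVIDER = "example-ai-provider"
VISIBLE_LABEL = "AI生成"  # visible "AI-generated" marker in Chinese characters

def label_content(content: bytes) -> dict:
    """Produce both labelling layers for one piece of generated content."""
    metadata = {
        "provider": PROVIDER,
        "content_id": str(uuid.uuid4()),
        "sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return {
        "visible_label": VISIBLE_LABEL,             # Layer 1: shown to users
        "embedded_metadata": json.dumps(metadata),  # Layer 2: travels with the file
    }

result = label_content(b"synthetic image bytes")
print(result["visible_label"])  # AI生成
```

The content hash gives platforms a stable identifier for the three-tier classification step even when the visible label has been cropped away.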

South Korea: AI Basic Act Labelling Rules (Live Since January 2026)

South Korea’s Framework Act on Artificial Intelligence (AI Basic Act) took effect on January 22, 2026. Article 31 requires labelling of synthetic content that is “indistinguishable from reality.”

The key distinction: South Korea ties labelling obligations to the degree of realism.

  • Photorealistic deepfakes (video, audio, images that could be mistaken for real): Must display visible labels identifying them as AI-generated.
  • Clearly artificial content (cartoons, stylized artwork, obvious AI art): Only requires invisible digital watermarks.

Advertising rules: All AI-generated or AI-assisted advertisements must be labelled. Portal and platform operators must provide labelling tools and notify content providers of their obligations.

What developers need to implement:

  • Realism detection to determine which labelling tier applies
  • Visible labels for photorealistic synthetic content
  • Invisible watermarks for all AI-generated content
  • Labelling tools if you operate a platform where users create AI content
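The tiering decision can be sketched as a simple mapping from a realism assessment to obligations. The numeric score and the 0.8 cutoff are assumptions for illustration; Article 31 does not define a numeric threshold, so where you draw the line is a judgment call your legal team should review:

```python
# Illustrative tiering logic for South Korea's realism-based rules.
# The realism score and 0.8 threshold are hypothetical, not statutory.

def labelling_requirements(realism_score: float) -> dict:
    """Map a model's self-assessed realism score to labelling obligations."""
    photorealistic = realism_score >= 0.8  # assumed cutoff for "mistakable for real"
    return {
        "visible_label": photorealistic,  # required only for photorealistic output
        "invisible_watermark": True,      # required for all AI-generated content
    }

print(labelling_requirements(0.95))  # deepfake-like output: both layers
print(labelling_requirements(0.2))   # stylized art: watermark only
```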

India: 3-Hour Takedown Deadlines (Live Since February 2026)

India’s IT (Intermediary Guidelines and Digital Media Ethics Code) Rules Amendment took effect on February 20, 2026. It introduces the concept of “Synthetically Generated Information” (SGI) and has the sharpest enforcement teeth in Asia.

Labelling requirements: Platforms must implement reasonable technical and organizational measures to detect deepfakes, apply AI content labels, and deploy provenance technologies.

Takedown deadlines:

  • Non-consensual intimate deepfakes: Must be removed within 2 hours
  • Other unlawful AI-generated content (misinformation, impersonation, forged documents): Must be removed within 3 hours

The penalty: Miss the deadline and the platform loses its safe harbour protection, exposing it to direct legal liability. This is not a fine — it’s a complete removal of legal protection.

What developers need to implement:

  • AI content detection and labelling pipeline
  • Provenance tracking for synthetic media
  • Rapid response system capable of 2-3 hour takedown windows
  • Content moderation infrastructure that can handle deepfake reports at scale
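A minimal sketch of the deadline arithmetic your response system needs. The category names are illustrative labels for this example, not statutory terms from the IT Rules:

```python
from datetime import datetime, timedelta, timezone

# Deadline windows from India's IT Rules Amendment; the dict keys are
# illustrative category names, not legal terminology.
TAKEDOWN_WINDOWS = {
    "non_consensual_intimate": timedelta(hours=2),
    "other_unlawful_sgi": timedelta(hours=3),
}

def removal_deadline(category: str, reported_at: datetime) -> datetime:
    """Return the latest time by which the content must be removed."""
    return reported_at + TAKEDOWN_WINDOWS[category]

report_time = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)
print(removal_deadline("non_consensual_intimate", report_time))
# 2026-03-01 14:00:00+00:00
```

In practice the deadline would feed a priority queue that escalates reports well before the window closes, since the clock starts at the report, not at triage.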

Vietnam: First Southeast Asian AI Law (Live Since March 2026)

Vietnam’s Law on Artificial Intelligence took effect on March 1, 2026, making it the first country in Southeast Asia with a comprehensive AI legal framework.

Key requirements:

  • Risk-based regulatory model with mandatory human oversight
  • Transparency requirements for AI systems
  • Labelling of AI-generated content, particularly deepfakes
  • Applies to both domestic and foreign developers and deployers

Grace periods: Legacy systems in health, education, and finance have compliance grace periods extending to September 2027.

What developers need to implement:

  • Content labelling for generative AI outputs
  • Human oversight mechanisms for high-risk AI applications
  • Transparency documentation for AI systems deployed in Vietnam

The EU Is Behind

For context, the EU AI Act’s Article 50 transparency obligations won’t become enforceable until August 2026. The EU’s Code of Practice on marking and labelling AI-generated content is expected to be finalized in May or June 2026.

The EU approach will require:

  • Machine-readable format marking for AI-generated synthetic content
  • Deepfake labelling for users
  • A common “EU icon” for AI-generated content

But none of this is enforceable yet. If you’re building for global markets, Asia’s rules are the ones you need to comply with today.

What This Means for Developers

If your AI product generates content that reaches users in any of these markets, here’s the minimum you need:

Content labelling pipeline

Every piece of AI-generated content needs both visible and invisible markers. The visible label tells users. The invisible metadata tells platforms and regulators.

Watermarking that survives transformation

China specifically requires watermarks that persist through compression, cropping, and redistribution. This rules out simple overlay approaches. You need embedded watermarking at the content level.

The C2PA (Coalition for Content Provenance and Authenticity) standard is emerging as the technical backbone for this. Over 6,000 enterprises have joined the Content Authenticity Initiative. Implementing C2PA is the closest thing to a universal compliance approach.
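To make the idea concrete, here is a simplified provenance record loosely modelled on C2PA concepts (a claim generator, assertions, and a content hash). This is not a spec-compliant C2PA manifest; real manifests are embedded in the media file and cryptographically signed, so production systems should use an actual C2PA SDK rather than hand-rolled JSON:

```python
import hashlib
import json

def provenance_record(content: bytes, generator: str) -> str:
    """Build a simplified, C2PA-inspired provenance record (illustrative only)."""
    record = {
        "claim_generator": generator,  # which AI system produced the content
        "assertions": [
            {"label": "ai_generated", "value": True},
        ],
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, indent=2)

print(provenance_record(b"example output", "example-image-model/1.0"))
```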

Rapid response infrastructure

India’s 2-3 hour takedown windows mean you need automated detection and response systems. Manual review processes won’t scale to these deadlines.

Realism classification

South Korea’s tiered approach (photorealistic vs clearly artificial) means your system needs to assess the realism of its own outputs and apply different labelling rules accordingly.

Regional compliance mapping

Each country has different requirements, different enforcement timelines, and different penalties. A single global labelling approach won’t work. You need to detect the user’s jurisdiction and apply the appropriate rules.
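A jurisdiction lookup can start as simply as a table keyed by country code. The rule flags below are a simplified reading of the requirements described in this article, not legal advice; falling back to the strictest combined profile for unknown jurisdictions is one defensive design choice:

```python
# Simplified per-jurisdiction rule flags based on the requirements above.
# Illustrative only; real compliance mapping needs legal review.
RULES = {
    "CN": {"visible_label": True, "embedded_metadata": True, "takedown_hours": None},
    "KR": {"visible_label": "if_photorealistic", "embedded_metadata": True, "takedown_hours": None},
    "IN": {"visible_label": True, "embedded_metadata": True, "takedown_hours": 3},  # 2h for intimate deepfakes
    "VN": {"visible_label": True, "embedded_metadata": True, "takedown_hours": None},
}

def rules_for(country_code: str) -> dict:
    """Look up labelling rules; default to the strictest profile if unknown."""
    strictest = {"visible_label": True, "embedded_metadata": True, "takedown_hours": 2}
    return RULES.get(country_code, strictest)

print(rules_for("KR"))
```

Defaulting to the strictest profile means a failure to geolocate a user degrades toward over-labelling rather than non-compliance.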

Timeline Summary

Country     | Law                    | Effective | Key Requirement
China       | CAC Labelling Measures | Sep 2025  | Dual labelling (visible + metadata)
South Korea | AI Basic Act           | Jan 2026  | Realism-tiered labelling
India       | IT Rules Amendment     | Feb 2026  | 2-3 hour takedown deadlines
Vietnam     | AI Law                 | Mar 2026  | Risk-based labelling + human oversight
EU          | AI Act Article 50      | Aug 2026  | Machine-readable marking (not yet enforced)

What’s Coming Next

The Philippines intends to push an AI regulatory framework during its 2026 ASEAN chairmanship. Indonesia’s AI presidential regulations are expected by mid-2026. Malaysia’s AI governance bill is under review.

The trend is clear: AI content labelling is becoming a baseline regulatory requirement across Asia. If you’re building generative AI products, implementing content provenance and labelling now is not optional — it’s the cost of doing business in the world’s fastest-growing AI markets.

FAQ

Do AI content labelling laws apply to developers outside Asia?

Yes. China, India, and Vietnam’s rules apply to any AI service whose content reaches users in those countries, regardless of where the developer is based. If your AI product generates content consumed by users in these markets, you need to comply.

What is C2PA and should I implement it?

C2PA (Coalition for Content Provenance and Authenticity) is an open standard for attaching provenance information to digital files. It records whether content was AI-generated and by which system. With 6,000+ members including major tech companies, it’s the closest thing to a universal compliance approach for content labelling.

What happens if I don’t label AI-generated content?

Penalties vary by country. In India, platforms lose safe harbour protection (direct legal liability). In China, the CAC actively enforces through campaigns targeting unlabelled deepfakes. In South Korea, violations of the AI Basic Act carry regulatory penalties.

Does this apply to text-only AI outputs like chatbots?

China’s rules apply to any AI-generated content that could mislead the public, which can include text. South Korea’s visible labelling rules focus on content “indistinguishable from reality,” which primarily targets audio, video, and images. India’s rules cover all “Synthetically Generated Information.” If your chatbot could be mistaken for a human, labelling is required in most jurisdictions.

When will the EU enforce similar rules?

The EU AI Act’s Article 50 transparency obligations become enforceable in August 2026. A Code of Practice on labelling is expected to be finalized in May-June 2026. Until then, Asia’s rules are the most advanced enforceable AI content labelling requirements in the world.