[Image: Made with AI]
When Meta launched its ‘Made with AI’ labels in May 2024, the platform believed it had found a solution to transparency in synthetic content. Within two months, Meta had to rename the feature to ‘AI info’ after users complained about confusion: a photo brightened using AI enhancement showed the same warning as a completely synthetic image of a fictional event. The lack of distinction made the labels nearly meaningless.
On October 22, 2025, India's Ministry of Electronics and Information Technology responded to this industry-wide struggle with draft amendments that could alter how brands and platforms handle AI in advertising. The proposed rules require AI-generated content to carry visible labels covering at least 10% of the visual display area or 10% of the audio duration, with metadata traceability built into every piece of synthetic content.
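To make the threshold concrete, here is a minimal sketch of the arithmetic involved, assuming a full-width rectangular label band; the draft rules specify coverage, not shape or placement, so the band is an illustrative choice.

```python
# Illustrative arithmetic for the draft 10% coverage rule.
# Assumption: a full-width label band; the draft sets area and duration
# thresholds, not shape or placement.
import math

def min_label_band_height(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Height of a full-width band covering at least `coverage` of the frame."""
    required_area = coverage * width_px * height_px
    return math.ceil(required_area / width_px)  # reduces to coverage * height

def min_audio_disclosure_seconds(duration_s: float, coverage: float = 0.10) -> float:
    """Minimum audible disclosure for an audio clip of `duration_s` seconds."""
    return coverage * duration_s

# A 1080x1080 social post would need a band at least 108 px tall;
# a 60-second spot would need 6 seconds of audible disclosure.
print(min_label_band_height(1080, 1080))     # 108
print(min_audio_disclosure_seconds(60.0))    # 6.0
```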
The regulatory approach mirrors India's handling of social media platforms. When the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules took effect in 2021, platforms initially viewed them as burdensome. The rules required appointment of grievance officers, monthly compliance reports, and content takedowns within strict timelines. Yet over time, platforms integrated these requirements into standard operating procedures. The same evolution is expected with AI labelling, though the technical demands are more complex.
The proposed framework requires social media companies to obtain user declarations on whether uploaded content is AI-generated and deploy technical measures to verify accuracy before displaying appropriate labels. Research indicates consumers are increasingly ready for this transparency. Studies show that over 95% of consumers are more likely to trust brands that openly disclose their use of AI.
From compliance burden to creative advantage
Shweta Tiwari, VP of brand management at Liqvd Asia, sees the mandatory labelling requirement as forcing a long-overdue conversation about authenticity.
"With India proposing mandatory labels for AI-generated content, I actually see this as a huge opportunity for the industry to stay close to honest credibility and innovation," Tiwari says. "Instead of treating a label as a disclaimer, imagine turning it into a design element or a narrative cue that signals progress. For example, we could say 'Co-created with AI to reimagine tomorrow', which reframes compliance as creativity."
The shift requires restructuring across advertising workflows. Tiwari compares it to following house rules that at first feel like compliance but soon turn into instinct. Agencies will need clear declaration processes with every asset carrying metadata, provenance, and audit trails to differentiate human-made content from AI-assisted work.
The challenge for agencies is building guardrails that are simple yet tight enough to make adaptation easy. This involves technical infrastructure that wasn't necessary even two years ago: asset management systems now need to track AI involvement at granular levels, and creative teams need protocols for documenting which elements used AI assistance versus full generation.
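What granular tracking might look like in practice is sketched below; the record fields and the three-level involvement scale are hypothetical, not drawn from the draft rules.

```python
# Hypothetical asset-provenance record for an agency asset-management system.
# Field names and the involvement scale are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class AIInvolvement(Enum):
    NONE = "human_made"          # no AI assistance
    ASSISTED = "ai_assisted"     # e.g. AI-enhanced retouching
    GENERATED = "ai_generated"   # fully synthetic output

@dataclass
class AssetProvenance:
    asset_id: str
    involvement: AIInvolvement
    tools_used: list[str] = field(default_factory=list)  # e.g. ["image-gen-model"]
    declared_by: str = ""                                # who signed off
    declared_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AssetProvenance(
    asset_id="campaign-042/hero.png",
    involvement=AIInvolvement.GENERATED,
    tools_used=["image-gen-model"],
    declared_by="creative-lead@example.com",
)
print(record.involvement.value)  # "ai_generated"
```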
Manas Gulati, Founder and CEO of ARM Worldwide, shares that transparency has long been an essential part of the agency’s creative process. Gulati says, "AI tools are deeply integrated across research, design, and campaign development, but always with clear communication to clients about where and how they are used. This culture of openness not only builds trust but also enhances creative precision."
As disclosure norms mature, Gulati explains that the focus will shift toward operational excellence, embedding metadata directly into production systems, establishing auditable workflows, and treating disclosure placement with the same rigour as storytelling itself.
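As one hedged illustration of embedding metadata directly into production systems, the sketch below writes a provenance record into a PNG text chunk with Pillow at export time; the `ai_provenance` key and its JSON payload are assumptions for this sketch, not a mandated format or an established standard.

```python
# Sketch: embedding a provenance record into a PNG's metadata at export.
# The "ai_provenance" key and the JSON payload are illustrative assumptions.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def export_with_provenance(src_path: str, dst_path: str, record: dict) -> None:
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps(record))  # custom text chunk
    img.save(dst_path, pnginfo=meta)

export_with_provenance(
    "hero.png",
    "hero_declared.png",
    {"involvement": "ai_generated", "tool": "image-gen-model"},
)

# Reading the record back from the exported file:
print(Image.open("hero_declared.png").text["ai_provenance"])
```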
The technical requirements parallel what happened when India mandated traceability for WhatsApp messages in 2021. The platform resisted, arguing that identifying the first originator of a message would require breaking end-to-end encryption and compromising user privacy, and challenged the mandate in the Delhi High Court.
Platforms set the precedent, governments follow
The gap between platform self-regulation and effective transparency may explain why India is now mandating what companies previously implemented voluntarily. YouTube introduced disclosure requirements in March 2024, requiring creators to label realistic content that viewers could mistake for real people, places or events. The creator-driven disclosure model has obvious vulnerabilities: an investigation by 404 Media found that a viral true crime series on YouTube was entirely AI-generated, from the narration to the imagery, yet carried no disclosure labels. The series had accumulated millions of views before anyone noticed.
Snapchat took a different approach in April 2024, using contextual icons and symbols throughout the app. The platform added watermarks to AI-generated images, a small ghost logo with a sparkle icon visible when images are exported or shared. Snapchat also implemented AI red-teaming, partnering with HackerOne on over 2,500 hours of testing to identify potential flaws in generative image models.
Despite these varied approaches, platform labelling remains inconsistent and often ineffective. User surveys conducted throughout 2024 indicated widespread confusion about what the labels meant, when they would appear, and whether their absence meant content was human-created or simply unlabelled AI.
Shreya Badola, Group Account Director at Wit and Chai Group, argues, "The proposed AI labelling rule isn't a creative killjoy; it's the industry's long-overdue reality check." Badola adds, "For too long, AI in advertising has lived in the grey zone between magic and mystery. Now, with mandatory disclosure, brands will need to show their receipts, not to restrict imagination, but to restore trust."
The shift will change how agencies work at every level, from briefing to approvals. Metadata tagging, traceability, and AI audit trails will become standard operating procedure. Badola notes that the 10% label will be part of the design challenge itself, requiring creative teams to consider visibility requirements during conceptualisation.
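One way such an AI audit trail could be made tamper-evident is a simple hash chain over workflow events, so that any retroactive edit breaks the chain. A minimal sketch follows; the event fields are illustrative assumptions.

```python
# Sketch: a tamper-evident audit trail as a hash chain over approval events.
# Event fields ("step", "by") are illustrative assumptions.
import hashlib
import json

def append_event(trail: list[dict], event: dict) -> None:
    """Append an event whose hash covers both its payload and its predecessor."""
    prev = trail[-1]["hash"] if trail else "0" * 64
    payload = json.dumps({"prev": prev, **event}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    trail.append({**event, "prev": prev, "hash": digest})

trail: list[dict] = []
append_event(trail, {"step": "brief_approved", "by": "account-director"})
append_event(trail, {"step": "ai_assets_declared", "by": "creative-lead"})
print(trail[-1]["hash"][:16])  # the chained digest ties each step to the last
```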
India's experience with social media regulation offers a preview of how enforcement might unfold. When the intermediary guidelines took effect in 2021, compliance was initially patchy. Yet by 2023, most platforms had established India-specific compliance teams, appointed resident grievance officers, and created systems to respond to government requests within mandated timelines.
India joins global AI governance with distinctive priorities
India's proposed framework positions the country within a growing international movement toward AI regulation, though each jurisdiction reflects different priorities.
The European Union's AI Act, which entered into force in August 2024, remains the most comprehensive framework globally. The legislation uses a risk-based classification system with four tiers: unacceptable risk, which bans applications including social scoring and real-time biometric identification; high risk, requiring registration in an EU database for systems used in education, employment and law enforcement; transparency requirements for generative AI; and minimal risk. The act prohibits cognitive behavioural manipulation, requires general-purpose AI models like GPT-4 to undergo thorough evaluations, and mandates that serious incidents be reported to the European Commission.
Companies operating in the EU have to navigate a complex compliance timeline. The ban on unacceptable-risk AI systems took effect in February 2025. Codes of practice apply nine months after entry into force. Rules on general-purpose AI transparency apply after 12 months, and high-risk systems have 36 months to comply.
The United States has pursued a fragmented approach. President Donald Trump's administration revoked previous AI executive orders and signed new directives aimed at removing barriers to AI development. Federal action remains limited to voluntary commitments from AI companies and sector-specific bills rather than comprehensive legislation. States like Colorado have passed their own AI acts, creating a patchwork regulatory landscape. The approach prioritises innovation over standardisation.
Japan released its AI Basic Act in draft form, prioritising innovation through "agile governance" that provides non-binding guidance and defers to private sector self-regulation. The bill, which passed Japan's lower chamber in April 2025, requires companies to cooperate with government safety measures and permits public listing of companies whose AI use violates human rights.
The United Kingdom has delayed plans to regulate AI. Currently, the UK relies on existing sectoral laws to impose guardrails on AI systems. The government released an AI Opportunities Action Plan with the intent to support AI development domestically while building a model champion to compete with foreign companies.
India's proposed framework borrows elements from multiple models but reflects distinctly local concerns. The 10% visibility requirement for labels is an explicit, quantifiable standard, and legal experts reportedly say this is one of the first times any country has set such an exact requirement for label visibility. The focus on metadata traceability and user declarations mirrors EU transparency requirements.
However, India currently lacks the EU's comprehensive risk classification system, high-risk AI registry, or enforcement mechanisms. The country's draft Digital India Act, intended to replace the IT Act of 2000, would regulate high-risk AI systems, but the legislation is under development with no clear timeline for passage.
Bridging the cost gap and creating new roles
Building these compliance systems requires substantial investment. Upgrading asset management platforms, adding validation layers, and implementing automated tools that embed metadata during content creation all demand resources that larger agencies can absorb but smaller creators struggle to afford.
Badola explains, "While the small brands might feel the pinch, this also opens the door for shared infrastructure and SaaS-led compliance tools that make transparency affordable and not intimidating. Much like the data-privacy era post-GDPR bolstered collaboration and not competition, this will be the great equaliser."
When Europe's data protection regulation took effect in 2018, smaller companies struggled with compliance costs. Over time, third-party tools emerged to handle consent management and data mapping. The compliance burden became manageable through shared infrastructure.
Tiwari recommends exactly this kind of ecosystem support, arguing that open, low-cost tools or frameworks at the industry level, ideally led by bodies like IAMAI or ASCI, would help ensure standardisation and a fair common ground.
“I would urge platforms to offer free APIs or SDKs for tagging/watermarking, ensuring easy access to smaller entities. The bigger vision should be to democratise compliance, so transparency doesn’t become a privilege but a shared standard,” she says.
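What a free tagging SDK might expose at its simplest is sketched below: a helper that stamps a label band sized to the 10% threshold onto an image using Pillow. The band's styling and wording are assumptions; the draft rules set coverage, not design.

```python
# Sketch of a minimal watermarking helper: a full-width band sized to cover
# 10% of the frame. Styling and label wording are illustrative assumptions.
from PIL import Image, ImageDraw

def stamp_ai_label(src_path: str, dst_path: str, coverage: float = 0.10) -> None:
    img = Image.open(src_path).convert("RGBA")
    band_h = int(coverage * img.height)  # full-width band => 10% of the area
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.rectangle(
        [0, img.height - band_h, img.width, img.height],
        fill=(0, 0, 0, 160),  # translucent black band along the bottom edge
    )
    draw.text(
        (12, img.height - band_h + band_h // 3),
        "AI-generated content",
        fill=(255, 255, 255, 255),
    )
    Image.alpha_composite(img, overlay).convert("RGB").save(dst_path)

stamp_ai_label("hero.png", "hero_labelled.jpg")
```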
The transparency mandate is pushing brands toward a fundamental rethinking of how they communicate AI use. Gulati notes that agencies and brands that embed disclosure within their storytelling will not dilute their message but strengthen it.
“‘Declared AI’ will soon be seen as a hallmark of authenticity, signalling that technology and creativity can coexist with honesty and integrity,” he mentions.
Badola believes that the “smartest brands won't hide AI behind disclaimers” and instead weave it into their storytelling. "Imagine a line that reads: 'Co-crafted by humans, fine-tuned by AI' being transparent, confident, and still creatively irresistible."
Tiwari shares this view, arguing that when a brand says 'This world was imagined with AI, to bring human creativity to life', it earns trust. To make her point, she invokes David Ogilvy's observation that the consumer isn't a moron: audiences aren't averse to AI, and they are already smart enough to catch it. What they are averse to is being misled.
The regulatory shift is also creating entirely new professional roles within agencies.
Leaders anticipate that roles like AI Compliance Lead, Provenance Engineer and Authenticity Strategist will become integral to creative teams as enablers of ethical creative storytelling.
Badola notes that hybrid roles designed to ensure technology enhances rather than erases emotion will emerge. As India's advertising market moves toward $17 billion by 2027, Gulati says that the balance between innovation and integrity will become the true differentiator. The next creative edge will not lie in how quietly AI is used but in how confidently and transparently it is revealed.
The proposed regulations are India's attempt to shape AI's role in different sectors, including advertising, before negative consequences become ingrained. The ministry has invited suggestions from the public and industry by November 6. Whether the 10% visibility requirement proves effective will determine if India's framework succeeds where voluntary platform initiatives have struggled.