The draft amendments, published on 22 October, seek to crack down on the rising menace of deepfakes. However, creators and other industry stakeholders have called for some alterations to what the ministry has proposed. Mint breaks down the proposals and their impact on AI users.
Will India’s attempt to regulate AI really curb creators?
Over the past four days, creators, through policy think-tanks, have raised concerns that India’s attempt to curb AI-manipulated content does not offer enough nuance. The draft, which is now in the public domain, requires any platform creating or distributing AI content to use watermarks and invisible tags, known as ‘metadata’, to identify where AI has been used.
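To make that tagging requirement concrete, here is a minimal sketch in Python using the Pillow imaging library: it writes an invisible disclosure tag into a PNG file’s metadata. The ‘ai-generated’ and ‘generator’ keys are assumptions for illustration; the draft does not prescribe a specific schema.

```python
# A minimal sketch of invisible metadata tagging for an AI-generated image.
# The key names below are hypothetical; the draft rules do not specify
# a metadata format, only that invisible tags must identify AI use.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png")

meta = PngInfo()
meta.add_text("ai-generated", "true")           # hypothetical disclosure key
meta.add_text("generator", "example-model-v1")  # hypothetical provenance field

# The tag travels with the file but stays invisible to anyone viewing the image.
img.save("generated_tagged.png", pnginfo=meta)
```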
While almost everyone agrees with this approach in principle, creators worry about the proposed 10% rule, under which Meity has suggested that any AI content must carry a disclaimer covering ‘10% of the visible surface area’ of the content. This means that for a text snippet of 100 words, at least 10 words must disclose that the snippet was generated by AI.
Similarly, for an image created with AI, a watermark disclosing that it was made with AI must cover 10% of its visible area.
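To illustrate the arithmetic for images, the sketch below (again Python with Pillow) stamps a full-width band whose height is 10% of the image’s height, which covers exactly 10% of the visible surface area. The band’s placement and wording are assumptions; the draft specifies only the 10% coverage.

```python
# A sketch of the proposed 10% rule for images: a full-width band whose
# height is 10% of the image height covers exactly 10% of the surface area,
# since width * (0.10 * height) = 0.10 * (width * height).
# The band style and wording are illustrative assumptions, not from the draft.
from PIL import Image, ImageDraw

def add_disclosure_band(src: str, dst: str, text: str = "Generated with AI") -> None:
    img = Image.open(src).convert("RGB")
    w, h = img.size
    band_h = max(1, round(0.10 * h))  # band height: 10% of image height
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, h - band_h, w, h], fill="black")
    draw.text((10, h - band_h + 5), text, fill="white")
    img.save(dst)

add_disclosure_band("generated.png", "generated_labelled.png")
```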
Musicians and artists who create original content with AI said the rule does not account for the intent behind a piece of content, or for how AI is used in conjunction with human skill. Furthermore, a mandatory watermark covering 10% of a piece may “destroy the creative effect” of the work, potentially affecting the adoption of AI for commercial purposes.
A senior official directly associated with the proposed AI regulation framework said Meity has taken note of these concerns about the early framework, and that much will depend on what stakeholders say in their submissions to the ministry, and what the ministry decides thereafter.
What about AI providers like OpenAI, Google and others?
They, too, are accountable under India’s proposed AI rules, the official cited above affirmed. This has created a fresh round of concerns—OpenAI, Google, Anthropic and others build the foundational AI models that are then used by other companies to build further sub-models, or applications.
Any platform that uses watermarks to denote AI-generated content risks having all such content labelled, no matter what the content is.
The IT ministry has also acknowledged that the current draft of the AI regulations does not differentiate between truly harmful AI content, such as political misinformation created through an AI platform, and a harmless group photograph in which certain elements have been modified using AI.
Mandatory AI tagging, according to AI engineers, can be challenging for AI platforms to handle, since many of them already face privacy concerns over how they monitor and use data. If they were to analyze every piece of content generated through them for labelling, AI providers would need to play the role of content monitors making subjective calls, which industry stakeholders believe will be complicated.
A bigger point is that, for the first time, AI platforms are being clubbed under the same regulation as social media intermediaries.
In effect, the Centre is stating that AI services such as OpenAI’s ChatGPT and Google’s Gemini are intermediaries, too. If this move is formalized, the very nature of AI usage through these services may change, since platforms will then need to be more stringent in tracking users who generate misinformation and in taking action against their accounts.
Is the law in line with what Europe has already done?
India’s AI regulation is much shorter than the European Union’s Artificial Intelligence Act, which the bloc began enforcing in August last year. Chapter IV, Article 50 of that Act requires AI-generated content to be identified as such on social media platforms via metadata tagging. In this way, the EU puts the onus on social media platforms to monitor content through a moderation team and assess whether a piece of content is harmless in intent or amounts to misinformation and manipulation.
Is all of this technically feasible?
Yes. A senior official at Meity said last week that the Centre consulted tech firms before framing the rules, and the firms expressed confidence that tagging AI content is technically possible, either through voluntary disclosures from users distributing the content or at source, when the content is generated.
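As a rough sketch of what a voluntary disclosure captured at the point of sharing might look like, the Python snippet below attaches a user-declared flag to an upload payload and shows a platform-side check. All field names here are hypothetical, since the draft rules do not specify a schema.

```python
# A sketch of a voluntary user disclosure captured at upload time, before a
# piece of content is shared. Field names are hypothetical assumptions; the
# draft rules mandate the disclosure but do not prescribe a format.
upload_payload = {
    "file": "holiday_photo.png",
    "user_declared_ai": True,         # voluntary disclosure by the uploader
    "ai_tool_used": "example-model",  # optional detail, also an assumption
}

def requires_label(payload: dict) -> bool:
    """Platform-side check: label content the user has declared as AI-made."""
    return bool(payload.get("user_declared_ai"))

print(requires_label(upload_payload))  # True
```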
Senior executives at Meta, OpenAI and Google, which run the most prominent social media and AI platforms in Instagram, ChatGPT and YouTube, have all said they are evaluating the impact these rules may have on the usage of AI itself. All three firms are betting massive amounts of capital on the rise of AI among users, and getting AI moderation right will be critical for their business growth as well.
Would users face penalties for using AI?
While the liability to tag and identify AI-generated content will rest with the platforms, users will have to voluntarily disclose if they are sharing any content that has been generated or altered with AI. The penalties are not direct, but social media platforms are expected to fold AI usage rules into their respective community guidelines, clarifying to users what is legal and what is not.
As a result, users sharing AI content that is identified as a ‘deepfake’, or as content that uses AI to distort a fact into an alternate version, may initially be warned by the platform.
Repeat offenders who face multiple content takedown orders, or warnings from the platforms themselves, may face stricter action, such as being banned from a platform, or worse. However, India’s AI rules, for now, do not prescribe specific punitive measures for users who repeatedly propagate misleading AI content.
How will the AI law be enforced?
Alongside framing rules to curb the spread of AI-driven deepfakes, the IT ministry last Wednesday also notified an amendment to the social media takedown mechanism in India. Starting 1 November, only officers at the rank of Joint Secretary and above at the Centre, or Deputy Inspector General and above in the police forces, will be allowed to issue takedown orders against content on social media platforms.
Further, the official cited above said the rules will work through voluntary disclosures mandated on social media platforms before a piece of content is shared. If any content is reported, it may be liable for takedown under existing regulations for social media intermediaries.
The Centre will also review the effectiveness of this mechanism on a monthly basis, the official said, adding that there will be room for altering the law as needed—since AI is an evolving field.
“AI deepfakes proliferation, impact and harm, be it to person or national security has now reached a critical scale, sufficient for the Centre to consider more robust and standalone AI laws. Criminal laws for instance act as deterrence and are not just intended for punishment and specificity and availability of AI specific criminal provisions may therefore be more efficacious to combat harms,” said N.S. Nappinai, senior counsel at the Supreme Court and founder of cybersecurity advocacy platform Cyber Saathi.