“These platforms are now being used to create fake human faces, hyper-realistic scenes, and emotionally manipulative visuals that feed into scams, smear campaigns, and even political disinformation,” said Srinivas.
Everyday users at risk
The most pressing dangers for everyday users lie in sophisticated phishing attacks enhanced by fake imagery, the risk of personal photos being scraped and turned into deepfakes, and the psychological toll of being targeted by manipulated content on social media and messaging platforms.
“Victims often don’t realise they’ve been targeted until the damage is done,” Srinivas added. “We’re seeing a rise in non-consensual content and AI-powered fraud, both of which are escalating rapidly.”
Five major crimes fuelled by AI-generated imagery
Srinivas listed five real-world scenarios demonstrating the threat posed by AI-generated visuals:
- Deepfake Business Email Compromise (BEC) – Fraudsters use AI avatars to impersonate CEOs or CFOs in emails or video calls, tricking employees into transferring funds or sharing confidential data.
- Sextortion and Image-Based Abuse – Criminals manipulate selfies into explicit images to blackmail victims, particularly targeting women and minors, often before the victim is even aware the images exist.
- Political Deepfakes During Elections – Synthetic images depicting fake rallies or violence are circulated to mislead voters and incite unrest, especially in fragile democracies with low media literacy.
- Bypassing Biometric Security – AI-generated faces and iris patterns are now capable of fooling facial and iris recognition systems, threatening banking, border security, and ID verification processes.
- Marketplace and Identity Fraud – Fake profiles using AI-generated headshots are used to scam users on platforms like Airbnb, Fiverr, and dating apps, fuelling fraud and laundering operations.
A blow to biometric security and surveillance
Generative AI’s ability to mimic facial and iris biometrics is also causing deep concern in sectors like fintech and surveillance. In financial services, synthetic identities can be used to bypass Know Your Customer (KYC) protocols, while in public surveillance, fake faces make it easier to evade detection or commit fraud.
“When a fake face can pass as real, the integrity of biometrics itself comes under threat,” Srinivas warned.
Deepfakes and disinformation in South Asia
The danger is particularly pronounced in South Asia, where political tension, rapid information spread via WhatsApp and Telegram, and emotional polarisation make AI-generated images an ideal vector for disinformation.
“A single fake image of a politician at a fabricated protest or a violent incident can go viral in minutes,” Srinivas said. “These visuals influence public opinion far faster than facts can catch up, making them powerful tools of manipulation.”
He also highlighted the weaponisation of deepfakes in cyber extortion schemes targeting journalists, activists, and women, a tactic increasingly exploited by foreign actors to sow discord and mistrust.
Are platforms doing enough?
While platforms such as OpenAI and xAI have introduced basic safeguards like content filters and limited watermarking, Srinivas believes these efforts are not yet sufficient.
“The technology is far ahead of the safety mechanisms. Anyone can generate convincing fake IDs or human faces with minimal effort,” he noted.
He proposed a multi-pronged strategy to mitigate the risks:
- Mandatory watermarking and metadata tagging for every AI-generated image (a minimal sketch of the tagging idea follows this list).
- Risk-based access restrictions on high-sensitivity prompts.
- Public-facing detection tools for real-time verification.
- Transparent reporting and abuse response systems.
- Cross-industry collaboration with law enforcement and regulators to create shared standards for detection and accountability.
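To make the first of those recommendations concrete, here is a minimal sketch of metadata tagging using the Python imaging library Pillow. It writes and reads a plain-text provenance label in a PNG file; the key names are hypothetical, and real provenance schemes such as C2PA rely on cryptographically signed manifests precisely because plain tags like these are trivial to strip, which is why Srinivas argues tagging must be mandatory and standardised rather than voluntary.

```python
# Minimal sketch of metadata tagging with Pillow (https://pillow.readthedocs.io).
# Illustrative only: the tag keys are hypothetical, not a standard, and a
# plain-text tag can be removed by anyone re-saving the image.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a provenance label in a PNG's text metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical key, not a standard
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

def read_provenance(path: str) -> dict:
    """Read back any PNG text metadata; absence proves nothing on its own."""
    return dict(Image.open(path).text)  # .text collects the PNG text chunks

# Usage (paths and model name are placeholders):
# tag_as_ai_generated("output.png", "output_tagged.png", "example-model-v1")
# print(read_provenance("output_tagged.png"))
```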
Here’s what you should do
Srinivas emphasised the need for law enforcement, journalists, and educators to urgently adapt.
Law enforcement agencies must strengthen their cyber forensic capabilities, update legal frameworks to handle synthetic content crimes, and collaborate internationally to track cross-border campaigns.
Journalists must integrate tools that verify visuals and metadata into everyday reporting. “In today’s media ecosystem, seeing is no longer believing,” Srinivas said. “Newsrooms must treat visual verification as seriously as fact-checking.”
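As one small, concrete illustration of such tooling, the sketch below (assuming Python with the Pillow library) dumps an image's EXIF metadata, typically among a journalist's first checks. AI-generated images usually carry no camera EXIF, although its absence alone proves nothing, since screenshots and platform re-uploads are also stripped; newsrooms combine this kind of check with reverse image search and provenance verifiers.

```python
# Minimal verification sketch, assuming Pillow is available. EXIF is one
# weak signal among many, not a deepfake detector.

from PIL import Image, ExifTags

def exif_report(path: str) -> dict:
    """Return human-readable EXIF tags, e.g. camera model and capture time."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Usage (path is a placeholder):
# tags = exif_report("submitted_photo.jpg")
# if not tags:
#     print("No EXIF found: treat provenance as unverified, not as proof of fakery.")
# else:
#     print(tags.get("Model"), tags.get("DateTime"))
```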
Educators, he stressed, have a pivotal role in shaping the next generation’s ability to distinguish real from fake. “AI literacy and visual critical thinking must be part of the school curriculum. Students and teachers alike need to be equipped to understand how these tools work—and how they can be abused.”
Conclusion
In a world increasingly shaped by synthetic content, Srinivas believes that truth itself is under threat—but not beyond defence.
“With foresight, collaboration, and investment in digital resilience, we can equip our societies to protect the public from misinformation and restore trust in what we see,” he said.