When the Screen Becomes the Weapon: Online Harassment, AI, and What It Means for Your Workplace

In November 2023, a video went viral across Indian social media. In it, a young woman walks into a lift, dressed in a fitted bodysuit. The footage looked completely real, and it was shared thousands of times before anyone paused to question it. When actor Rashmika Mandanna saw it, she was looking at her own face on someone else’s body. The video had been digitally manipulated to graft her likeness onto a British-Indian influencer’s clip, so seamlessly that most viewers never noticed. The accused was arrested under the Indian Penal Code and the IT Act. But the video had already travelled far beyond any court’s reach.

What made this case a turning point was not just who the victim was. It was how little it took. A publicly available photo, a free AI tool, and the decision to use it to humiliate someone. That decision is being made every day now, in workplaces, in professional networks, in group chats where colleagues think no one is watching. Online harassment has found a powerful new instrument in generative AI, and every workspace, physical or digital, is already within its reach.

Online Harassment Has a New Toolkit

Deepfakes, which are AI-generated or AI-altered images, videos, and audio, are no longer a niche technological curiosity. The number of deepfake files skyrocketed from 500,000 in 2023 to an estimated 8 million by 2025, and the tools to create them require almost no technical skill. A social media profile, a few public photos, a voice note — that is enough raw material.

Deepfake-related cybercrime cases in India have increased as well. According to a McAfee survey, 75% of Indians have consumed some form of deepfake content in the last twelve months, and 88% have encountered deepfake scams. These are not abstract numbers. The harm is real, immediate, and overwhelmingly gendered.

It Is Happening to Ordinary Women, Not Just Celebrities

The Rashmika Mandanna case made national headlines and triggered an FIR. But that incident, visible precisely because of who the victim was, represents a much larger crisis. A 2025 report based on cases submitted to Meri Trustline, a helpline by the Rati Foundation, found that 92% of women reporting deepfake abuse are ordinary women, not celebrities.

The content being created is not limited to viral videos. It includes morphed intimate images circulated in WhatsApp groups, fake profiles built from stolen LinkedIn photos, voice notes doctored to put words in someone’s mouth, and threats to upload manipulated imagery unless a demand is met. These are tools of intimidation, and they are showing up in professional contexts with increasing frequency.

In the workplace, deepfakes can be weaponised to harass, intimidate, retaliate, or destroy reputations, often with limited recourse under traditional employment policies. A fabricated image of a female colleague shared in an office group chat. A cloned voice note made to sound like an employee saying something compromising. An altered photograph used to discredit a woman who raised a complaint. Each of these scenarios is plausible. Several are already documented.

This Is Workplace Harassment Under POSH

India’s Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013, commonly known as the POSH Act, defines sexual harassment to include any unwelcome act that creates a hostile, intimidating, or offensive work environment. The law extends to the “extended workplace,” meaning any location where work-related interaction occurs. Digital channels are not an exception.

When a colleague’s image is morphed into obscene content and shared through a work group, that is sexual harassment under POSH. When a woman receives AI-generated explicit content from a co-worker, that is a violation. When someone’s voice is cloned to fabricate a conversation that then circulates in professional networks, there is both a POSH complaint and a criminal offence at play.

The challenge is that most Internal Committees have been trained to handle verbal and physical complaints. Digital harassment, especially when it involves social media, anonymous accounts, or content originating outside office hours, is a new frontier that existing IC training rarely covers. The gap between what the law covers and what organisations are prepared to investigate is significant, and closing it is now a compliance priority.

What Indian Law Says

India does not yet have a standalone deepfake law. But existing frameworks offer more protection than most people realise. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, notified by MeitY in February 2026, now impose a strict three-hour takedown window for AI-generated content flagged as harmful, making this one of the most stringent platform liability provisions globally.

Key provisions currently available to victims:

Under Bhartiya Nyaya Sanhita (BNS) 2023:

  • Section 336 covers forgery using AI-altered media, carrying up to 7 years imprisonment
  • Section 79 addresses outraging modesty, including through morphed images, with up to 3 years imprisonment
  • Section 356 deals with defamation through published imagery, carrying up to 2 years imprisonment
  • Section 351(3) covers criminal intimidation using morphed imagery as a threat, with up to 7 years imprisonment

Under the Information Technology Act, 2000:

  • Section 66C covers identity theft through misuse of someone’s digital likeness, carrying up to 3 years imprisonment and a Rs 1 lakh fine
  • Section 66E addresses violation of privacy through publishing someone’s imagery without consent
  • Section 67A criminalises publishing sexually explicit synthetic content, with up to 7 years on repeat conviction

Courts have been responsive. In December 2025, Delhi and Mumbai courts granted emergency orders in favour of NTR Jr., R. Madhavan, and Shilpa Shetty, blocking the spread of AI-generated deepfakes and voice clones, and making clear that intermediaries must quickly remove AI-driven impersonations once notified.

What to Do If It Happens to You

A fast response matters enormously, and under the 2026 IT Rules, acting quickly triggers the platform’s legal obligation to remove content within hours. Whether the victim is you, a colleague, or someone who approaches HR, the steps are the same.

  • Preserve evidence first. Screenshot the content, note the URL, and save any profile details of the person who uploaded it. Do not delete anything.
  • File a complaint on the National Cyber Crime Reporting Portal (cybercrime.gov.in) and at the nearest cyber police station. For sexual deepfakes, FIR registration is mandatory.
  • Report directly to the platform, triggering their takedown obligation under the 2026 IT Rules.
  • Escalate through POSH channels if the content involves any workplace connection, such as a colleague, a manager, or a shared professional network.
  • Seek urgent High Court relief in serious cases. Indian courts have granted nearly immediate takedown orders for deepfake materials that are potentially damaging, often within 12 to 18 hours.
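
For teams that want to operationalise the first step, evidence preservation, one common practice is to record a cryptographic hash of every saved screenshot or video file alongside when and where it was captured, so its integrity can be demonstrated later. The sketch below is a minimal illustration of that idea; the filenames, URL, and log fields are hypothetical, not prescribed by any statute or portal.

```python
# Minimal evidence-log sketch (illustrative only): hash each saved file
# with SHA-256 and record the source URL and capture time, so the file's
# integrity can be verified if it is later submitted to police or an IC.
import hashlib
import json
from datetime import datetime, timezone

def log_evidence(path: str, source_url: str) -> dict:
    """Hash a saved file and return a log entry describing it."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,
        "source_url": source_url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example with a placeholder file (names and bytes are illustrative).
with open("screenshot_001.png", "wb") as f:
    f.write(b"placeholder image bytes")

entry = log_evidence("screenshot_001.png", "https://example.com/offending-post")
print(json.dumps(entry, indent=2))
```

The point of hashing at capture time is simple: if the same file produces the same hash months later, no one can credibly argue the evidence was altered in the interim.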

Building a Workplace That Takes Online Sexual Harassment Seriously

Digital conduct is not a grey area anymore. Organisations have a clear duty, ethical and legal, to treat online harassment with the same weight as anything that happens in a conference room. That means updating POSH policies to explicitly name digital and AI-generated harassment, training Internal Committees to investigate such complaints properly, and communicating without ambiguity that creating, sharing, or threatening someone with manipulated content is grounds for disciplinary action.

The women in your teams are navigating a professional environment where their faces, voices, and identities can be weaponised by anyone with a smartphone and a motive. Recognising that as a workplace safety issue, not just a social media problem, is the first and most important shift an organisation can make.