In the age of the broligarchs, women in public life are subjected to increasingly sophisticated online violence – from viral acts of “nudification” to AI-generated depictions of rape – designed to humiliate them into silence and strip them of their agency.
One UK MP who campaigns for tougher laws against such violations, Labour’s Jess Asato, was recently the subject of an AI-generated video which depicted her being chloroformed in preparation for rape.
Now, our new report, published this week by UN Women, offers a disturbing insight into the ways this violence manifests, and reveals the severity of its impacts on survivors, with alarming rates of mental health diagnosis and self-censorship identified.
Our research is based on a global survey of 641 women journalists, human rights defenders and activists across 119 countries conducted in late 2025, and shows that emergent technologies – including generative AI and nudification tools – are becoming increasingly integral to the deliberate targeting and silencing of women in public life by lowering technical barriers to abuse while amplifying its reach.
History shows that when authoritarian power consolidates, women who speak truth to power – from journalists to human rights defenders and politicians – are always among the first to be publicly abused, discredited and pushed out of public life.
Today, the authoritarian playbook is being propped up by digital infrastructures controlled by a small and unelected class of tech billionaires, many of whom have traded user safety and privacy for obscene profits and proximity to authoritarian power.
In turn, the broligarchical fusion of ultra-wealthy tech bros with political oligarchs who flirt with fascism has allowed the strategic use of online violence against women in public life to be digitised, automated and deployed at unprecedented scale and speed.
Nearly a quarter of the women we surveyed said that they had experienced AI-assisted online violence of some kind, while 6% said they had been targeted via deepfakes or other manipulated imagery. Twenty-seven per cent reported having been sexually harassed via private messages, and 12% said they had experienced the non-consensual sharing of personal images, including those of a sexual or intimate nature.
These forms of abuse were considered relatively rare just a few years ago. But they are now a defining feature of the digital landscape for women in public life, with increasingly stark impacts.
Forty-one per cent of women in all forms of public life who responded to our survey said they had started self-censoring on social media to avoid further victimisation. And one-fifth reported self-censoring at work as a result of online violence they had faced.
With World Press Freedom Day falling this weekend, the rates of self-censorship among women journalists specifically – whose work depends on robust freedom-of-expression protections – are particularly alarming. Almost 22% of this group reported self-censoring at work, while approximately 45% said that they self-censor online. This represents a 50% increase in self-censorship from just five years ago, when we surveyed an overlapping group of respondents for Unesco.
The chilling effect is demonstrable. And the damage is real.
Nearly a quarter of the women journalists and media workers we surveyed said they had been diagnosed with anxiety or depression connected to the online violence they had experienced, and almost 13% reported being diagnosed with post-traumatic stress disorder (PTSD).
At the same time, the number of women journalists reporting acts of online violence to the police has doubled over the past five years, with 22% of these respondents saying that they had referred an incident to law enforcement, compared with 11% in 2020.
As we have previously reported, our survey also reveals an alarming escalation in the trajectory of online violence to offline harm. The number of women journalists and media workers who reported offline attacks, harassment and abuse linked to online violence has more than doubled since 2020.
One Indian journalist and community organiser who participated in the survey highlighted the economic impacts of online violence. She said that the “relentless pressure” she felt while being subjected to repeated online attacks had pushed her to resign from her job. Subsequently struggling with “severe financial problems”, she said she had been forced to subsist on rice porridge – an outcome she described as “a direct consequence of being forced into silence and out of work”.
And yet the infrastructures facilitating this harm continue to operate without consequence, profiting even when the abuse unfolds in plain sight or is perpetrated by powerful public figures. For example, when Grok – the AI chatbot owned by Elon Musk – was used to flood X with “nudified” images of women and girls, Musk’s initial response was to post laughing emojis. When Ashley St Clair, the mother of one of Musk’s children, who had been a repeated target of such nudification, requested the removal of the images – some of which were based on photos taken when she was a minor – her pleas went unanswered.

When the same men who run the world’s most powerful online platforms also control the tools being used to silence women in public life and force them into the shadows, the need for corporate accountability, matched by the enforcement of effective legislative and regulatory remedies, has never been more urgent.
Women in public life are often among the first to be targeted when fascism looms, but they are never the last.
Dr Julie Posetti is director of the Information Integrity Initiative at The Nerve – the digital forensics lab established by the Nobel peace prize-winning journalist Maria Ressa. She is also professor of journalism and chair of the Centre for Journalism and Democracy at City St George’s, University of London
Kaylee Williams is a senior researcher at the Information Integrity Initiative and a PhD candidate at Columbia University studying technology-facilitated gender-based violence, with a particular emphasis on generative AI and non-consensual intimate imagery
Tipping point: Online violence impacts, manifestations, and redress in the AI age was co-authored by Dr Lea Hellmueller, Dr Pauline Renaud, Nabeelah Shabbir and Dr Nermine Aboulez