Approximately one-sixth of Congresswomen have become victims of sexually explicit deepfakes generated by artificial intelligence.

A recent study by The American Sunlight Project (ASP), an organization focused on countering disinformation and promoting pro-democracy policies, finds that more than two dozen members of Congress, all but one of them women, have been the victims of nonconsensual intimate deepfake imagery. The findings point to a stark gender disparity in how this technology is used and to its harmful effects on women's participation in politics and civic life.

ASP identified around 35,000 instances of nonconsensual intimate imagery (NCII) depicting 26 members of Congress, nearly all of them women. Most of the imagery was swiftly removed once researchers shared their findings with the affected lawmakers.

"We need to address this new digital landscape, recognizing that the internet has facilitated numerous harmful activities disproportionately impacting women and marginalized communities," highlighted Nina Jankowicz, a renowned expert in online disinformation and harassment, and the founder of The American Sunlight Project.

Nonconsensual intimate imagery, commonly known as deepfake porn (though advocates prefer the former term), can be produced with generative AI or by superimposing a person's face onto media featuring adult performers. At present, little policy exists to restrict its creation and dissemination.

ASP shared its findings exclusively with The 19th. To collect the data, the organization built a custom search engine that looked for members of the 118th Congress by name or nickname across 11 popular deepfake websites. The study found no discernible relationship between political affiliation or geographic location and the likelihood of being targeted. Gender, however, was by far the most significant factor: women in Congress were 70 times more likely than their male counterparts to be targeted.
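For readers curious what that kind of name-based monitoring might look like in practice, here is a minimal, purely hypothetical sketch. ASP has not published its code, and the site URLs, member list, and helper function below are invented for illustration only.

```python
# Hypothetical illustration of the methodology the article describes: searching
# a fixed list of sites for each member's name and nicknames and tallying mentions.
# The site URLs, member names, and query format are invented placeholders,
# not ASP's actual tooling or data.
import requests

# Members of the 118th Congress mapped to name variants (placeholder entries).
MEMBERS = {
    "Jane Doe": ["Jane Doe", "Rep. Jane Doe"],
}

# Deepfake sites to monitor (placeholder URLs; ASP searched 11 real sites).
SEARCH_URLS = [
    "https://deepfake-site.example/search?q={query}",
]


def count_mentions(aliases):
    """Return how many times any alias appears in the sites' search results."""
    total = 0
    for url_template in SEARCH_URLS:
        for alias in aliases:
            try:
                resp = requests.get(url_template.format(query=alias), timeout=10)
            except requests.RequestException:
                continue  # site unreachable; skip it
            if resp.ok:
                total += resp.text.lower().count(alias.lower())
    return total


if __name__ == "__main__":
    for member, aliases in MEMBERS.items():
        print(f"{member}: {count_mentions(aliases)} mentions")
```

A real pipeline would also need de-duplication and human review, since raw keyword counts can overstate the number of distinct items of imagery.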

ASP declined to disclose the identities of the affected lawmakers to avoid encouraging searches for the imagery. However, it contacted every affected office to alert them and to offer resources on online harms and mental health support. Researchers noted that although most of the content was swiftly removed, it could still be reshared or uploaded elsewhere. In some instances, search result pages remained indexed on Google even after most of the content had been removed.

"The prompt removal might be coincidental. Regardless of the exact cause of content removal – whether 'cease and desist' demands, copyright infringement claims, or other interventions with deepfake content hosting sites – it underscores a substantial disparity in privilege," noted the study. "Individuals, particularly women, who can't afford the resources that Members of Congress enjoy, would struggle to secure a rapid response from deepfake content creators and disseminators if they initiated a removal request on their own."

According to the findings, around 16% of all female members of Congress – roughly 1 in 6 congresswomen – have been the victims of AI-generated nonconsensual intimate imagery.

Jankowicz, who has endured online harassment and threats for her work fighting disinformation, first learned about her deepfake abuse through a Google Alert in 2023.

"You could be portrayed in these compromising, intimate situations without your consent," she stated. "Though you might pursue a copyright claim against the original poster – as occurred in my case – these videos proliferate across the internet beyond your control, without any consequences for those amplifying or producing deepfake porn. This poses a constant risk for anyone in the public eye, engaged in public discourse, but particularly for women and women of color."

Image-based sexual abuse can have catastrophic psychological consequences for victims, impacting not only politicians but also everyday individuals, including children. Over the past year, cases of high school girls being targeted for image-based sexual abuse have been reported in states like California, New Jersey, and Pennsylvania. Officials have shown varying levels of response, while the FBI has issued a warning that distributing such imagery of minors is illegal.

The full implications of deepfakes for society are not yet understood, but research suggests that 41% of women aged 18 to 29 self-censor their online activity out of fear of harassment.

"This represents a formidable threat to democracy and free speech if nearly half the population is self-censoring due to the fear of harassment they may experience," observed Sophie Maddocks, research director at the Center for Media at Risk at the University of Pennsylvania.

There is no federal law establishing criminal or civil penalties for individuals creating and distributing AI-generated nonconsensual intimate imagery. A handful of states have implemented legislation in recent years, including civil penalties, but not criminal ones.

AI-generated nonconsensual intimate imagery also poses a national security threat, creating opportunities for extortion and for extracting geopolitical concessions. The ripple effects could reach policymakers even if they are not themselves the targets of the imagery.

"My hope is that this awareness among members inspires them into action, recognizing that this afflicts American women and imperils their own colleagues," remarked Jankowicz.

Image-based sexual abuse poses a distinct threat to women seeking political office. During a tight race for the Virginia legislature, a Republican operative shared unauthorized recordings of sexually explicit livestreams featuring Democrat Susanna Gibson and her husband with The Washington Post. After her defeat, Gibson said that numerous young women had told her they were hesitant to enter politics for fear that private images would be used to harass them. Gibson has since founded a nonprofit dedicated to fighting image-based sexual abuse, along with an associated political action committee to support women candidates targeted by violations of their intimate privacy.

Research by Maddocks has shown that women who advocate publicly are more likely to become targets of digital sexual abuse.

“It seems we’re stuck with an outdated, ‘women should be seen and not heard’ mentality – the notion that womanhood is incompatible with public speech,” Maddocks said. “When women voice their opinions publicly, it almost seems to invite attempts to shame, humiliate, and silence them. Understanding that motivation to shame and silence is crucial to grasping how sexual harm manifests in relation to congresswomen.”

ASP is advocating for Congress to enact federal legislation. The Disrupt Explicit Forged Images and Nonconsensual Edits Act of 2024, or DEFIANCE Act, would allow victims to sue anyone who produces, shares, or receives such content. The Take It Down Act would impose criminal liability for the same conduct and require tech companies to remove deepfakes. Both bills have received bipartisan support in the Senate but face obstacles in the House over free speech concerns and how harm is defined.

“Neglecting to pass any of these bills during this legislative session would be a failure of duty by Congress,” Jankowicz commented. “This form of AI-driven harm is already impacting everyday Americans – it’s not a future concern or an abstract idea. It demands swift action.”

In the absence of legislative action, the White House has partnered with the private sector to develop innovative strategies to curb image-based exploitation. However, skeptics question the ability of Big Tech to self-regulate, given its past track record of inflicting harm via its platforms.

“The message sent by these acts of abuse isn't just directed at the individual woman being targeted,” Jankowicz pointed out. “It's to all women – a warning that raising your voice in public may bring about such consequences.”

If you've suffered from image-based sexual exploitation, the Cyber Civil Rights Initiative provides a list of legal resources.

This article originally appeared on The Markup and was licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
