
AI-generated "bikini interview" videos spreading rapidly online, fueling gender discrimination


In the digital age, artificial intelligence (AI) has become a powerful tool, capable of creating vast amounts of content. However, a growing concern is the proliferation of AI-generated content that reflects and amplifies misogyny on popular social media platforms.

Last year, Alexios Mantzarlis discovered 900 Instagram accounts of likely AI-generated "models," predominantly female and scantily clad. These accounts cumulatively amassed 13 million followers and posted over 200,000 images. The AI-generated clips are so realistic that some users question whether the featured women are real.

One such viral video depicted a fake marine trainer named "Jessica Radcliffe" being attacked by an orca. The clip spread across platforms like TikTok, Facebook, and X, sparking global outrage from users who believed the woman was real.

Despite the outrage, no single company or individual has been publicly identified as responsible for the mass production of these AI-generated, sexualized clips laced with misogynistic, locker-room humor on platforms like Instagram and TikTok. Such content is typically produced systematically by multiple anonymous or dispersed actors driven by financial motives.

These creators offer paid courses on how to monetize viral AI-generated material on platforms like YouTube and TikTok. The trend is part of an increasing problem of AI-generated content competing with and eclipsing authentic content on the internet.

Nirali Bhatia, a cyber psychologist based in India, has described this trend as "AI-mediated gendered harm" and "fueling sexism." Emmanuelle Saliba of GetReal Security has stated that unlabeled AI-generated content erodes the little trust that remains in visual content.

AI fakery is proliferating online, and the numbers are now likely much larger than what was found last year. The clips often feature scantily clad female interviewers and contain sexist and offensive language. Some of these viral clips promote adult chat apps.

As platforms reduce their reliance on human fact-checkers and scale back content moderation, financially incentivized slop is becoming increasingly challenging to police. Content creators turn to AI video production as gig work, making it difficult to regulate and curb the spread of such content.

YouTube recently announced that creators of "inauthentic" and "mass-produced" content would be ineligible for monetization. AI consultant Divyendra Jadoun stated that AI doesn't invent misogyny, but it reflects and amplifies what's already there. This statement underscores the need for platforms and AI developers to take a proactive stance against the creation and spread of harmful and misogynistic content.

The trend of AI-generated misogynistic content is causing concern about its potential harm to women. As AI technology continues to evolve, it is crucial that we address these issues and work towards creating a safer and more equitable digital environment.
