
Taylor Swift, X and AI-Generated Images

Emma Burgess

Taylor Swift has become the latest target of deepfake creators, casting a spotlight on an emerging threat: increasingly, individuals in the public eye are fighting a new method of harassment designed to humiliate celebrities and strip them of their right to privacy. Sexually explicit deepfake videos of Taylor Swift have surfaced online, attracting millions of views.

Graphika, a company that studies disinformation, found the images began circulating in a community on 4chan, a message board with a reputation for hosting offensive, explicit content and hate speech. The content then spread to other platforms, such as X.

The AI boom has led to a new method of commodifying exploitation and coercion.

In response, X blocked all searches for ‘Taylor Swift’ in what Joe Benarroch, head of business operations at X, described as a ‘temporary action’. X claims to have a ‘zero-tolerance policy’ towards non-consensual nudity and aims to create a ‘safe and respectful environment’ for its users. However, many critics argue that social media platforms simply aren’t doing enough to protect their users from these new advances in artificial intelligence.

The AI boom has led to a new method of commodifying exploitation and coercion. Private companies have been set up to make personalised deepfakes for clients, and the technology has advanced to make creating them simpler. This means the tools are now easier than ever to access and exploit.

Victims face impersonation of their identities, exploitation, and coercion

Victims of non-consensual deepfake videos and images have reported that attempting to remove this content from the internet is incredibly difficult and costly. Victims face impersonation of their identities, exploitation, and coercion as predators use counterfeit images and videos to manipulate them into complying with demands.

However, the emerging threat of deepfake content has caught the attention of US policymakers. White House Press Secretary Karine Jean-Pierre told ABC News that the government was ‘alarmed’ by the advancements in deepfake technology. Yet many fans are outraged that there is currently no federal law prohibiting the creation and distribution of non-consensual deepfakes.

Politician Joe Morelle used the momentum generated by headlines about Swift, a high-profile celebrity, as a driving force for a bill that would make the non-consensual sharing of digitally altered explicit images a federal crime. The No AI FRAUD Act would include punishments such as fines and prison sentences.

Creating deepfake pornography destroys the lives of victims.

But does the punishment fit the crime? Creating deepfake pornography destroys the lives of victims, who suffer mentally, financially, and socially because of images and videos spread online. Most deepfake creators operate anonymously, and the incentive to run a business creating deepfakes far outweighs the minimal repercussions of a fine.

Many argue that social media companies play a vital role in the battle against deepfakes, both by taking more responsibility for the content circulated on apps such as TikTok and X and by clearly labelling posts to alert users that material may have been artificially altered and should not be taken at face value.

Emma Burgess


Featured image courtesy of Fachrizal Maulana via Unsplash. Image license found here. No changes were made to this image.


