
Deepfakes put individual rights at risk

A stock image of a fake news generator technology concept. As deepfakes become more accessible to the general public, they are increasingly used to damage the reputations of public figures. WRIGHTSTUDIO – STOCK.ADOBE.COM

Social media has played a crucial part in shaping the identity of Generation Z. However, a sinister new technology has entered the chat: deepfakes. Simply having a presence on the internet puts you at risk of being placed in a compromising position. A sexually explicit image could be generated using your likeness, or your voice could be used in a scam phone call. The culprit? Artificial intelligence (AI).

I believe strict regulations and restrictions need to be placed on AI-generated content on social media platforms before this problem gets out of hand. Although these AI generators may seem fun and convenient to use, they are doing more harm than good.

There have been numerous incidents of individuals abusing AI to create provocative and disturbing images of people, especially public figures. This issue was at the forefront of social media discourse during the last week of January, when such photos of pop superstar Taylor Swift spread across X (formerly Twitter). This is just the latest incident raising questions about the uses and ethics of AI.

These advancements gave rise to deepfakes, media fabricated to “portray people doing or saying things they have never done.” They are made so realistically that it is hard to tell whether they are genuine or fake. The deepfakes of Swift placed her in multiple provocative poses with NFL players, and many of her fans came to her defense, furious with X for allowing the photos to surface and spread.

The foundations of AI date back to Alan Turing, who in 1950 proposed a test of machine intelligence called the Imitation Game. From there, other scientists advanced AI research; between the 1990s and 2010s, the field took great strides. In 1997, IBM’s Deep Blue became the first computer program to defeat a reigning world chess champion, Garry Kasparov.

AI is only getting more advanced. In the past five years, it’s become mainstream and accessible to the general public — allowing bad actors to abuse the technology in pursuit of internet fame, revenge and vindictiveness. 

There needs to be legal protection for anyone who has deepfakes made of them. Deepfakes are essentially a new-age form of defamation, falsely portraying an individual. How would you feel if a deepfake were made defaming you or a loved one? Encountering a fake image or video of yourself online, doing explicit things or saying something you never said, would be horrifying. Not only does this affect people personally, but it could also cause larger-scale problems. For example, if a fake video of a government official or leader spread across multiple platforms, it could incite violence or war, and even compromise elections.

Recently, an audio clip of an alleged phone call in President Joe Biden’s voice circulated the internet; it was later found to be an AI-generated robocall. The fake call depicted Biden discouraging Democrats from taking part in the primary election.

This is not only happening in the United States, but internationally as well. Slovakia’s October 2023 election saw deepfake audio recordings deployed against Michal Šimečka, leader of the pro-Western Progressive Slovakia party. The most infamous clip depicted Šimečka talking about rigging the election and even doubling the price of beer. These deepfake audio clips spread across social media and may have contributed to his loss in the election.

Thankfully, federal lawmakers in the U.S. have already introduced four bills “specifically targeting the use of deep fakes and other manipulated content in federal elections and at least four others that address such content more broadly,” according to the Brennan Center for Justice. On the state level, lawmakers are addressing the issue by banning or regulating the use of deceptive deepfake media in election advertisements. Such laws have already been passed in a few states.

I believe laws should be put in place requiring deepfakes to carry disclaimers at their beginning and end so that false information does not begin to circulate.

There also needs to be strict regulation on all forms of deceptive media. In 2019, California passed a law restricting the distribution of deceptive depictions of a candidate within 60 days of an election, including manipulated images, videos or audio intended to hurt the candidate’s reputation or deceive voters. These laws need to be implemented at the federal level because of the threat AI poses to our individual rights and our country’s political practices.

In my opinion, AI is currently doing more harm than good, and it needs to be regulated to prevent further damage. It is also important to approach anything you see on the internet with caution and awareness. AI can assist us in the future, but left unregulated, it could also be a threat. As time passes, AI will only become more advanced; we must learn how to manage it before it gets out of hand.
