
The real and dangerous impacts of AI, technology and media

Artificial Intelligence Technology Graphic. AI is one of the most rapidly growing forms of technology. Patrick Leyseele/FLICKR VIA CC BY-SA 2.0

What comes to mind when you hear the word “technology”? Maybe it’s a robot, a cyborg, or even something as mundane as the smartphones we carry in our pockets every day.

In its broadest definition, technology consists of a variety of objects, ideas and developments, but can essentially be described as any application of scientific knowledge that is used to make human lives easier. 

One of the most prominent and rapidly growing forms of technology is artificial intelligence, or AI. AI is now taking our civilization toward a phase it has never seen before, and this transformation can largely be identified by three trends.

The first is a growing concern over man-made systems becoming increasingly competent, as they are completing tasks that we previously thought could only be done by humans. These include imitating human speech patterns, translating one language into another, diagnosing diseases, drafting important documents and other advanced technological processes.

However, these speech- and text-related capabilities come with negative effects: biases can be built into the algorithms used to create the intelligence in the first place. If those algorithms are biased, the results they produce may be partial and inaccurate.

Another observed trend is the growing omnipresence of AI systems.

It can be anticipated that AI will soon be embedded in everyday objects that we never expected to become integrated with it. This can already be seen in smart homes, where everyday utilities such as lights, microwaves and refrigerators no longer have to be operated manually. While this does make our lives easier and daily activities much more convenient, there are concerns regarding privacy, as well as potential safety and security breaches, should the technology fail to work at critical moments.

The third and final trend is that our society is becoming increasingly quantified, meaning we generate more data each day than we ever have before. Thus, the individuals and institutions that control this data end up with deep insight into our lived experience. For example, social media and other networking sites are often designed so that users are encouraged to overshare details of their lives, whether it be important personal information or a seemingly inconsequential update on vacation plans.

However, there is a dark side to such consistent sharing of personal details. Personal data, such as important dates and places, tend to be the answers to security questions for financial accounts and retirement funds. Similarly, what we type into our search engines, which eventually appears in our search history, generates data that allows digital platforms to suggest advertisements based on our recent online activity.

The fact that all three of these trends are only accelerating indicates that the current pace of technological change will fundamentally alter the way we live.

We have never lived in a world where our lives are reduced to data to the extent that they are now, and while this might usher in certain benefits, such as further advancements in technology, the cons arguably outweigh the pros. Protecting personal privacy has become more difficult, and several examples of data breaches and misuses serve to emphasize this point.

Therefore, instead of looking at these machines merely through the lens of consumers, we must ask ourselves deeper questions about the consequences that the growth in technology will have on us. 

One of the most prominent of these impacts is that those who own and control the most powerful digital systems will increasingly exercise power over those who do not. Currently, the largest beneficiaries of these technological changes are not ordinary people, but rather tech corporations and governments, which can use the advanced equipment of the digital age for surveillance and the enforcement of policies.

While this can be used for good, it largely seems to have chillingly dystopian consequences. For example, during the Black Lives Matter protests that took place in the summer of 2020 in the wake of George Floyd’s death, Dataminr, a New York-based AI startup, scanned millions of social media accounts and sent crucial information to police departments and law enforcement agencies so that they could track protests happening in nearby areas. Another major repercussion of this advanced technology is what many call perception control.

All of us largely rely on third-party organizations — such as search engines and news publications — to inform us about things that are happening in the world, and increasingly, these sources are becoming mediated by digital technologies. When we watch a news channel or look for updates on current affairs by using a search engine, we are at the mercy of those technologies to decide which small slice of reality we will be presented with, which in turn determines our idea of what has occurred. It shapes our innermost thoughts as well as our collective understanding of what is important to us. While this can be positive when it comes to learning about world events, or simply becoming more educated and aware of certain topics, it can influence our innate biases and prejudices.

Media and technology have the power to affect our values, attitudes and behaviors, and while this isn’t inherently wrong or dangerous, it can be when it is used to incite beliefs or actions that harm specific communities and demographics. This can most notably be seen in the spread of misinformation, as false claims increasingly circulate on social media platforms. If false information goes viral, it can have disastrous consequences, fueling misconceptions and undermining the passage of effective legislation.

When it comes to individual actions that we can take to curtail these technological abuses of power, everyday safety strategies can be employed, such as enabling firewalls on any new device and installing and running anti-spyware and anti-virus software. Unfamiliar files and attachments should not be opened unnecessarily, and different passwords should be used across your accounts and devices.
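
For readers who want a concrete picture of that last habit, the short Python sketch below shows one way to generate a unique, random password for each account instead of reusing a single one. The account names are hypothetical, and the snippet is only an illustration of the general idea, not a substitute for a proper password manager.

import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character from letters, digits and punctuation using a
    # cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Hypothetical accounts; in practice, a password manager would store these securely.
accounts = ["email", "banking", "social media"]
passwords = {account: generate_password() for account in accounts}

for account, password in passwords.items():
    print(f"{account}: {password}")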

That being said, personal safety actions can do very little to eliminate the threats posed by large corporations and technological systems. To tackle these, we have to rely on collective means and methods.

The most obvious mechanism is the state, with law and legislation being the most important tools we can use. However, it is a challenge not to give the state too much power, particularly when it comes to AI and digital systems; we must strike a balance between faith in politics and pessimism about it.

Technology can become political, and while it might be easy or even popular to be skeptical of the government, especially when it tries to correct issues that are the result of private ordering, it is necessary to ensure that we control technology and not the other way around.
