Just a few months ago, billionaire and newly quasi-political figure Elon Musk sat down with Texas U.S. Senator Ted Cruz to discuss a wide range of issues, from the current political climate to technology and where it is foreseeably headed.
In one exchange, Musk expressed, with striking calm and neutrality, that AI will soon have the capacity to do far more than human intelligence can. To quote his words directly from the conversation with Cruz: “In 10 years, AI could probably do anything better than the human can cognitively,” also predicting that “AI will be smarter than the smartest human.”
In contrast to Musk’s calm, a feeling of urgency rushed through me: despair, not for myself, but at the extreme logical conclusion that is bound to follow if guardrails are not put in place.
I came to the discussion about artificial intelligence fairly late compared to my peers. It wasn’t until late 2023 that I began extensively exploring and researching large language models, then moving on to the ethical use of AI, not just in school but in the consequential policy shaping AI and the world. Through that close, critical study, I came to genuinely admire these advanced pieces of software (and hardware, for that matter), as well as the space being made for these technical fields to flourish. I was optimistic, optimistic about what was to come.
In class, I would often debate my peers and defend the benefits AI would soon bring to the world. I would address fears of replacement and job loss by placing them alongside past anxieties over inventions like the telephone or the smartphone. I would assure people that AI would bring more jobs, not fewer, and that our attention could then shift to matters of greater concern.
Over time, though, that optimism began to fade as I observed what could be interpreted as negative consequences of artificial intelligence. The first and perhaps most alarming is the rise of autonomous weapons, like those used by the Ukrainian and Russian armies. Though these systems are still rare and largely experimental, AI-powered weapons are growing more autonomous, raising the question: how far will this be taken? If every weapons system is gradually replaced with autonomous, non-empathetic machinery, human oversight will slowly but certainly be stripped away, eliminating any moral hesitation, any ability to reconsider, any last-second choice to spare a life. That opens the door to catastrophic escalation of war, and at that point, preventing civilian harm becomes the least of our worries.
Another negative consequence, one already visible today, is job displacement driven by artificial intelligence. It has been reported that by 2030, 14 percent of the global workforce, some 375 million workers, will have been displaced from their jobs and left searching for new employment or even entirely new skills. That is just one statistic among many, and it speaks to what is likely to come.

Even more startling, Elon Musk has projected that there could be more than 10 billion robots inhabiting the planet by 2040. The scale of automation is growing faster than most of us can comprehend.
I am worried not only about the external effects this will have on our reality; as many of you know, external circumstances inevitably shape our inner lives to some degree. The fact that AI cannot be avoided makes the case against it even stronger.
There are trends correlating depression with societies overly dependent on technological devices. One possible explanation is a growing discontent with reality, traded for a false one built from media apps and sites. What happens when that disconnect is pushed even further? The goalposts keep moving, and we will eventually enter an age of non-human experience.
AI serves different purposes for different people, but the common hope is that it will solve problems humans cannot yet foresee. But what if, one day, AI could “solve” emotion itself? We already have antidepressants for medical reasons, but what if AI hypothetically devised a cure for every feeling other than happiness? Is that desirable? Does it take away from the human experience? I would argue yes, but there is room for ethical argument on both sides.
What if we become so dependent on AI that our brains could be compared to those of animals? According to an MIT Media Lab study, participants who consistently relied on large language models for their writing showed significantly reduced brain activity in regions tied to creativity, memory, and critical thinking.
This is especially concerning alongside data from ScholarshipOwl, whose survey of 12,000 high school and college students concluded that 97 percent rely on tools such as ChatGPT to get through school. The decline of our cognitive ability is near; what makes us human could cease to exist. And then what? Mission accomplished.
Could it be that, in the end, our species will look back on everything that unfolded and conclude that we handed over all that made us human, just to chase a lucid vision we were never meant to touch? Could it be?
