
All Too Predictably, Reality Is Puncturing The AI Hype Bubble 

After so much hype and trillions of dollars of investment, the AI bubble seems like it might finally be bursting. Much of the market crash that happened last Monday was concentrated among technology companies, all of which are deeply invested in AI. The biggest loser was the famed chip company Nvidia, which, according to writer Chris Taylor, “lost a trillion dollars of valuation, 30% of the total, since its 2024 high.”

Taylor goes on to list the reasons for this downturn: excessive hype from AI gurus like Sam Altman, burnout from consumers who are now turned off by products using AI, and impatient investors like Goldman Sachs wanting a quicker return on their investment. So, as with most bubbles, AI businesses overpromised and underdelivered, and the market corrected itself.

Unfortunately, it doesn’t end there. AI technology seems to be hitting a wall in its advancement. As many people have observed, AI data centers suck up colossal amounts of energy, putting a huge strain on electrical grids. They also require equally colossal amounts of capital (running in the billions of dollars) and continuous funding — for reference, ChatGPT is estimated to cost OpenAI $700,000 a day to operate.

Worse still, Taylor explains that AI programs “have run out of stuff to train on, and the more they are trained on ‘the internet,’ the more the internet contains a body of work written by AI — degrading the product in question.” Originally, a Large Language Model (LLM) like ChatGPT could review the vast quantity of human-made content across the internet and take that data to produce a unique essay that would meet the parameters of its users. But now, there is so much AI-generated content online that any essay that the LLM produces will become increasingly derivative, defective, and incomprehensible. Garbage in, garbage out.

At the root of all these issues is a profound collective misunderstanding about AI. The problem is so obvious that it’s difficult to see, especially for the high-IQ geniuses in Silicon Valley: artificial intelligence is not human intelligence. For too long, technologists have spoken about AI as though it were a synthetic brain that mirrors the functions of the human brain. As such, they describe computers learning, communicating, and performing administrative tasks as though they were conscious workers.

And, if one listens to Techno-Optimists like Marc Andreessen and Amjad Masad, AI can even do the work of life coaches and teachers. Andreessen predicted that “Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.” Not only does AI understand human beings, but it understands them infinitely better than human beings understand themselves and one another.

Coupled with this misguided personalizing of AI is the misconception that human minds are simply organic computers with deficient programming. The soul is commonly reduced to bits of data that can be downloaded or uploaded. No doubt, Andreessen imagines students sitting at the feet of their robot instructors quietly learning from them in the same way a computer installs a new application, except they don’t have a loading bar hovering above their heads.

Because of these flawed analogies, many assumed that AI would replace most white-collar workers and eventually assume control of most organizations — after all, why have fallible mortals at the helm when one could have an infinitely wise and competent AI overlord handling everything? The real challenge, according to experts, was dealing with the displaced, unemployed masses who would inevitably become obsolete and making sure that the robot overlords remained kind to human beings. In this regard, the only real difference between the optimists and the pessimists was that one camp believed AI would create a utopia while the other believed it would do the opposite.

Of course, neither of these possibilities is true because AI doesn’t have the capacity to replace people, and it never did. At best, it is a powerful tool that has more in common with a spell/grammar check application or a search engine than it does with even the dumbest, most unimaginative human being.

This is why using AI to make money has been such a failure. Employers thought they could have an AI program do the work of their employees more efficiently and at a fraction of the price. However, the inputs and accompanying training required to complete even the most basic tasks are overwhelming. Rather than learn how to program complex operations into a robot, most managers would be better off simply telling their workers what to do and trusting in their ability to do it. It also makes for a happier workplace.

The same goes for using AI to cure the current loneliness epidemic. Perhaps a few desperate souls might not mind the company of an AI companion, but this will only feed the very worst aspects of those struggling to form meaningful relationships. Instead of serving as practice for real interaction, it reinforces the anti-social habits that make a person lonely in the first place. Rather than making sacrifices and developing empathy, the lonely user is indulging delusions and cultivating extreme narcissism.

Ironically, the greatest benefit of AI is demonstrating just how special and irreplaceable human beings are. Consequently, its financial and cultural success will depend on whether it can raise human potential by becoming a helpful tool or do the opposite by becoming an expensive burden.


The Federalist

