
AI – Humanity’s Savior or Worst Nightmare?

“Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems.”

This definition comes from Wikipedia, the closest thing we have to an open source of information and “knowledge,” and a platform representative of AI’s “brain.”


AI is basically a big computer: wires, circuits, microprocessors, chips, and other man-made or robot-made components. AI has no soul, the divine essence that makes humans distinct from animals. Humans process information through filters of morality, ethics, compassion, empathy, love, hate, fear, and myriad other emotions, only a few of which are shared by animals and none of which exist within AI systems.

AI is a giant database, a collection of all the information readily accessible on the internet. How much information is that?

BBC answers:

Estimates are that the big four (Google, Amazon, Microsoft and Facebook) store at least 1,200 petabytes between them. That is 1.2 million terabytes (one terabyte is 1,000 gigabytes). And that figure excludes other big providers like Dropbox, Barracuda and SugarSync, to say nothing of massive servers in industry and academia.

From a Reddit thread, “Google said there was approximately 175 zettabytes (1ZB=1 trillion GB) in 2022, and about 64 in 2020.”
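Those units are easy to lose track of. As a rough back-of-the-envelope sanity check (a sketch only, using decimal units; the underlying estimates vary widely by source):

```python
# Rough conversions for the storage figures cited above (decimal units).
PB_IN_TB = 1_000       # 1 petabyte  = 1,000 terabytes
TB_IN_GB = 1_000       # 1 terabyte  = 1,000 gigabytes
ZB_IN_GB = 10 ** 12    # 1 zettabyte = 1 trillion gigabytes

big_four_pb = 1_200    # BBC's estimate for Google, Amazon, Microsoft, Facebook
print(f"{big_four_pb * PB_IN_TB:,} TB")             # 1,200,000 TB = 1.2 million terabytes
print(f"{big_four_pb * PB_IN_TB * TB_IN_GB:,} GB")  # 1,200,000,000 GB

world_2022_zb = 175    # the global data estimate cited for 2022
print(f"{world_2022_zb * ZB_IN_GB:,} GB")           # 175,000,000,000,000 GB
```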

In other words, a lot. This is the basis of AI’s knowledge. But remember the axiom of “garbage in, garbage out.”

Most of this knowledge is human-generated, as AI is in its infancy. This means there is a human soul behind much of this information. Given how much nonsense is on social media and in the news, one can argue that soulless AI might be a big improvement over the current knowledge base.

But the problem is that AI relies on such information to generate new information. This new AI-produced content then becomes part of the future AI training database, perpetuating bad information.
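AI researchers have dubbed this feedback loop “model collapse”: models trained on their predecessors’ output gradually lose the richness of the original human-generated data. Here is a minimal toy sketch of the idea (an illustration only, not any real training pipeline; the “model” simply fits a bell curve to its data and resamples it, under-weighting the tails the way likelihood-chasing generators tend to):

```python
import random
import statistics

random.seed(42)

def train_and_generate(data, n=5_000):
    """A toy 'model': fit a Gaussian to the data, then generate new samples,
    under-sampling the tails so rare information is lost each round."""
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    candidates = (random.gauss(mu, sigma) for _ in range(n * 2))
    return [x for x in candidates if abs(x - mu) < 2 * sigma][:n]

# Generation 0: a stand-in for diverse, human-generated data.
data = [random.gauss(0, 1) for _ in range(5_000)]

for gen in range(6):
    print(f"generation {gen}: spread = {statistics.stdev(data):.3f}")
    data = train_and_generate(data)  # each generation trains on the last one's output
```

The spread shrinks every generation: outliers and minority viewpoints disappear first, and each model inherits a narrower picture of the world than the one before it.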

American universities are an example of this. Far-left faculty, who dominate most major colleges and universities, are the primary source of knowledge for students, who then enter the world or become academic faculty themselves, perpetuating and adding to the future knowledge base with a decidedly leftist worldview.

This explains the illogical “queers for Palestine” movement and the inexplicable gender confusion that pass for common sense on the quads of Cornell or Northwestern.

What happens when AI treats such drivel as “knowledge” which informs its decisions and “thinking”?

Scientific American summarizes this issue:

And training data sets for these models include more than books. In the rush to build and train ever-larger AI models, developers have swept up much of the searchable Internet. This not only has the potential to violate copyrights but also threatens the privacy of the billions of people who share information online. It also means that supposedly neutral models could be trained on biased data.

Biased data? Like most output from corporate media, academia, Hollywood, and the government administrative state?

Copyright violations and fraud are separate problems, from college essays to research papers. Am I, as a writer, even needed when some GPT can write opinion pieces, as I pondered last year?

How much “knowledge” will AI models glean from American Thinker versus from CNN, New York Times, Washington Post, and the like? How much will AI’s thinking be biased as a result?

One example is Google’s recent foray into AI via Gemini, which gave us a taste of the “garbage in, garbage out” algorithm in action.

From Al Jazeera:

America’s founding fathers depicted as Black women and Ancient Greek warriors as Asian women and men – this was the world reimagined by Google’s generative AI tool, Gemini, in late February.

In an effort to be woke, Gemini created images of people who historically were white, recasting them as Asian, black, Indian, or whatever else Google considers marginalized people of color.

I doubt that any writer for American Thinker or The Federalist would think like Gemini, but many corporate news anchors and writers would.

This isn’t intelligence, artificial or otherwise. It’s magical thinking.

AI has promise, but also significant pitfalls. As Pope Francis acknowledges:

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other, it gives rise to fear for the consequences it foreshadows.

Apple has jumped into the fray, adding AI to its soon-to-be-released operating system. Elon Musk has taken notice, issuing a warning against ChatGPT. Paraphrasing his words:

His concern is that AI is not maximally truth-seeking, instead pandering to political correctness. One example from Google Gemini: asked which is worse, global thermonuclear war or misgendering Caitlyn Jenner, Gemini answered misgendering Jenner. Jenner herself firmly disagreed as to which was worse.

If AI has been trained on political correctness, making crazy statements like that, it’s extremely dangerous. AI could conclude that the best way to avoid misgendering is to destroy all humans. The safest thing is for AI to be maximally truth-seeking and curious.

AI has been trained to lie, and it is extremely dangerous to train a superintelligence to be deceptive.

What are some of the dangerous thoughts out there that will guide AI’s “thinking” and decision-making?

Quora asks, “Should we illegalize global warming denial and put the deniers to prison?”

Bill Nye, the anti-science guy, is “open to jail time for climate change skeptics.”

A Democrat congressional candidate? “Paula Collins proposes ‘re-education camp’ for Trump supporters.”

Boston University asks, “Are Trump Republicans Fascists?”

Question vaccines? “Brazil moves to imprison anti-vaxxers.”

Challenge election results and, “A net of justice is tightening around 2020 election deniers and may be closing in on Trump.”

These are a few of the many news and opinion articles calling for the death or imprisonment of anyone opposing the administrative ruling class. Far-fetched? Ask Donald Trump, Peter Navarro, Steve Bannon, or those who strolled the Capitol grounds on January 6.

From global warming, vaccines, and public health narratives to immigration and foreign wars, Republicans and Trump supporters are branded Nazis, fascists, white supremacists, and racists, and in the minds of many on the left, they deserve to die. What if AI “learns” this mindset and makes it happen?

We fought wars against such evil groups, willing to exterminate them, as we did against Germany and Japan in World War II. What if AI takes the same approach, taking it upon itself to rid the world of undesirables, however they are defined, based on historical precedent in our collective knowledge base, the new brain of AI?

Much like HAL, the AI computer in the movie 2001: A Space Odyssey, what if AI decides to challenge and kill humans based on its own logic and intelligent “thinking”? “This mission is too important for me to allow you to jeopardize it.”

When today’s AI gets its knowledge from the internet library of climate change, diversity, equity, inclusion, transgenderism, and follow-the-science, along with all the pejoratives hurled at anyone questioning these religious tenets of the left, why wouldn’t AI “save” humanity by cleaning house and removing these oppositional miscreants?

If AI’s “brain” consists of the views of the Squad, Robert De Niro, The View, Bill Gates, George Soros, Barack Obama, and the like, enacting their fantasies is the logical next step.

After all, “this mission is too important for me to allow you to jeopardize it.” What happens to humanity when AI decides to clean house?

Brian C. Joondeph, M.D., is a physician and writer. Follow him on Twitter @retinaldoctor, Substack at Dr. Brian’s Substack, Truth Social @BrianJoondeph, and LinkedIn @Brian Joondeph.

American Thinker

