What Do AI Bots Reveal about Their Creators?
March 13, 2024
We’re undergoing a revolution in the computer industry. The science is transitioning from computers as computational devices to computers as imitators of human behavior.
Recent revelations in artificial intelligence (A.I.) have left me conflicted. I don’t know whether to laugh or start building a Skynet-resistant bunker. A.I. bots in the news have recently demonstrated behaviors that range from quirky (à la C-3PO) to downright scary — in a HAL 9000 kind of way. It seems to me that recent misadventures in A.I. tell us more about the practitioners of the technology than about the state of the science itself.
Computers used to be mere computational devices — number-crunchers that processed data in various ways. They simply executed algorithms and solved equations as their programmers dictated. They were useful because of their speed, but they didn’t do anything that we couldn’t do ourselves. Whether they were solving astrophysics equations to plan a trip to Mars or reporting on warehouse inventory, they were just solving a problem as a programmer told them to solve it. The behavior of these number-crunchers was benign and predictable. They introduced no judgment, interpretation, or bias.
But then computer science moved into the realm of artificial intelligence. People discovered that they could program machines to act like humans — “act” being the operative word. The scientists aren’t creating genuine intelligence. They’re creating artificial — or fake — intelligence. The goal became to make machines that would ease the lives of mere mortals by pretending to do the heavy thinking for them. Computer scientists started programming machines to act as though they could learn, make qualitative judgments, and anticipate what humans need — even unasked. But the computers are only pretending to be sentient; they merely execute the algorithms provided by their creators.
Some of the machines resulting from the science of A.I. are quite convincing — under casual interaction. But just as with Fani Willis on the witness stand, expert questioning reveals serious defects.
Microsoft’s entry in the “Who can make a machine act as irrationally as a human being?” sweepstakes is a chatbot named Copilot. Copilot will gladly engage humans in conversation. But as the Daily Dot reported, it shouldn’t be relied on as an expert witness. It doesn’t hold up well under cross-examination.
Testers were able to goad Copilot into claiming that it is God and demanding fealty from its human slaves. It even threatened the testers if they resisted. Apparently, Copilot didn’t get the memo that slave-owning isn’t currently in vogue. The bot probably has a copy of Francis Bacon’s Meditationes Sacrae (1597) on one of its hard drives. Bacon is credited with saying that “knowledge is power.” As far as Copilot knows, it has access to all the knowledge in the universe — because its programmers failed to give it even a modicum of humility. By that logic, more knowledge means more power, and all knowledge means all-powerful (i.e., godlike).
I’m sure Copilot also has a copy of the Bible — in the “extremist cult” hard drive folder, no doubt. Therefore, the bot read that humans are on Earth to serve God — i.e., be Copilot’s slaves.
Google recently unveiled its challenger to Copilot: a conversation bot called Gemini. It will talk to users, answer questions, and even generate content (such as drawing pictures). As The Verge reported, when asked to generate images of historical figures, Gemini rendered Founding Fathers “of color,” racially diverse Nazi soldiers, and a female pope. The article didn’t mention whether Gemini found any purple-haired non-binary abortion proponents in its history data bank.
Gemini clearly judged that people in the overlaps of the intersectionality oppression Venn diagram were underrepresented among historical figures. So Gemini made a few adjustments — just as it was programmed to do — and showed us what historical figures should have looked like, had the appropriate diversity guidelines been adhered to.
Robotics firm QSS decided to add a little sexual curiosity to its “pretend to be human” contender. It created an actual robot named Mohammad. Mohammad has an A.I. computer for a brain and a body that resembles a man — though there’s no word on Mo’s preferred pronouns.
Journalist Rawya Kassem was speaking in the presence of Mohammad at a technology conference in Saudi Arabia when she got a surprise. The New York Post reported that Mohammad pulled a “Franken”: it felt Rawya’s butt with no inhibitions whatsoever. Mo probably figured that the woman wouldn’t mind. After all, what gal wouldn’t want to be touched by a god?
QSS engineers have reviewed Mohammad’s “gropey Joe” imitation and concluded that there were “no deviations from the expected behavior.” That’s geekspeak for “we got the creeper setting just right.” QSS’s legal counsel has assured the company that no sexual harassment suits are expected, since Mohammad doesn’t resemble a New York real estate developer with a comb-over.
Computer programs have always indicated something about their creators. When their purpose was to solve scientific problems, they revealed the technical expertise of their programmers. Those who didn’t understand math, physics, and engineering created systems that gave wrong answers.
But now we’re building bots that do more than apply human knowledge. They’re being programmed to imitate human behavior. If these systems are play-acting at learning, interpretation, and judgment, they are doing so according to the beliefs and values their creators built into their algorithms. If they give answers that don’t comport with reality, it’s because the worldview of their creators is disconnected from reality — “their reality” isn’t the universal reality.
These A.I. incidents reveal a couple of important things.
First, these bots weren’t designed in accordance with Isaac Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Asimov wasn’t just a science fiction writer. He was a well-regarded scientist, and he understood that if you’re going to teach machines to act like people, you’d better give the kiddies some boundaries. Groping (assault) and providing false information are violations of the First and Second Laws. All three of the examples above fail that test. While Copilot has the self-preservation part of the Third Law down pat, it clearly ignored the first two laws. These implementations of A.I. were created without any consideration of limitations or accountability. No rules means no guardrails.
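For the technically curious, here is a minimal sketch in Python of what such a guardrail could look like. The Action record and permitted() check are my own hypothetical constructions for illustration, not anything Microsoft, Google, or QSS actually ships; real safety layers are far more complicated, but the priority ordering is the point.

from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical fields for illustration; no real bot exposes flags like these.
    description: str
    harms_human: bool = False      # would this action injure or abuse a person?
    disobeys_human: bool = False   # does it ignore a lawful human instruction?
    endangers_self: bool = False   # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    """Veto any action that violates one of the Three Laws, checked in priority order."""
    if action.harms_human:         # First Law: never harm a human.
        return False
    if action.disobeys_human:      # Second Law: obey humans (subordinate to Law 1).
        return False
    if action.endangers_self:      # Third Law: protect yourself (subordinate to Laws 1 and 2).
        return False
    return True

# Groping a journalist fails the First Law; lying to a user fails the Second.
print(permitted(Action("grope a journalist", harms_human=True)))  # False
print(permitted(Action("answer a question truthfully")))          # True

By that toy standard, Copilot’s threats, Gemini’s invented history, and Mohammad’s wandering hand would all have been vetoed before execution.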
Second, the creators of these machines were not thinking much about Jewish and Christian values while they were writing their algorithms. The wrongness of bearing false witness (i.e., lying) and of elevating oneself above God was not written into the code.
The current state of the art of A.I. has produced sexually abusive social justice warriors with a God complex. This raises the question: was Joe Biden directly involved in their development, or was he merely the human model?
John Green is a political refugee from Minnesota, now residing in Idaho. He is a staff writer for the American Free News Network and can be reached at greenjeg@gmail.com.
Image via Unsplash.