The ‘garbage in, garbage out’ rule applies to AI, too
April 22, 2023
There’s a well-known expression regarding the use of computers: garbage in, garbage out, or GIGO. It would be wise to keep this concept in mind when dealing with artificial intelligence (A.I.).
Over the past several years, we have become fascinated with the possibilities of A.I. An article in Forbes from May of 2022 described five things to expect from A.I. in the next five years. The five are:
- Transformation of the scientific method. A.I. and machine learning (M.L.) will streamline and significantly accelerate medical research. Research and development by drug companies, as well as drug trials, will be accomplished in a fraction of the time these tasks previously required.
- A.I. will become a pillar of foreign policy. The U.S. government has decided to significantly accelerate A.I. innovation to continue the economic resilience and geopolitical leadership of the U.S.
- A.I. will enable next-generation consumer experiences. Feedback loops will drive far more of our decisions about what we purchase and when.
- A.I. will be a major contributor to solving the problems of man-made climate change. Supercomputer studies will show us what we are doing to harm the climate and environment and what we must do to correct those mistakes.
- A.I. will enable truly personalized medicine. Using an individual’s mapped genome and diagnosed health problems, A.I. supercomputers will be able to rapidly develop individualized treatments.
It is easy to appreciate some of the possibilities outlined here. An ethical acceleration of the scientific method as it pertains to drug development, and the development of personalized medical treatments, are especially intriguing to someone who has suffered health problems and is observing, close up, the race between medical technology and the disease-fueled erosion of the human body.
However, new consumer experiences, climate crisis solutions, and foreign policy, all relying on A.I., are troubling prospects.
It has become painfully obvious that politics can corrupt the scientific method and medical research. We need look no further than the COVID-19 pandemic. A man devoid of ethics who presented himself, with fully certified credentials, as a man of science, as Dr. Anthony Fauci did, was able to perpetrate a hoax upon the American public, resulting in massive lockdowns of the economy and the educational system.
Damages from Fauci’s intentional perversion of what likely would have been a health crisis of mild and manageable proportions will reverberate through our population for decades due to failed businesses and the stunting of the education of our youth.
It is infinitely easier to imagine the damage A.I. would cause if we were to rely on it for foreign policy and a climate agenda. Here is where we must demand that accurate information, free of any political agenda, be programmed into the supercomputers that will form the core of A.I. No matter how fast a computer functions, nothing changes the fact of GIGO.
I strongly urge you to watch the CBS 60 Minutes segment that aired on April 16, 2023. It offers an in-depth and frightening look at Google’s work on A.I. Google has developed a chatbot, an interface between its supercomputers and human users, which it has named Bard.
Scott Pelley’s interview with Google CEO Sundar Pichai is fascinating. The things Pichai says about Bard’s accomplishments and capabilities, and some things he does not quite say, are astounding. The CEO claims that Google is releasing its chatbot innovations slowly to allow criticism and constructive feedback. But the things the machine can already do are incredible.
As an example, Pichai states that Bard could write thousands of Ernest Hemingway–quality short stories in far less time than it would have taken Hemingway to write just one, and he proceeds to demonstrate the point.
The bot writes both prose and poetry imbued with human emotion. Pichai points out that Bard doesn’t actually possess those emotions. Still, having read and absorbed the whole of recorded human knowledge in a matter of months, it does an excellent job of imitating them.
While it might seem exciting that a machine could be prompted to pen (or whatever a supercomputer uses as a pen substitute) new Hemingway novels with the great writer’s style, emotion, and insight, the obvious question follows: why would any human bother to write his own novel?
As someone who has painfully penned (actually, I used a word processor) three, and now almost four, semi-mediocre novels, I wonder if I should let a robot do my future writing. It would be time-saving and easy…but then, I love the time I spend laboring over every sentence. The impact that A.I. will have on any job with knowledge requirements will be enormous.
The most troubling aspects of the 60 Minutes story, however, concern the phenomena of hallucination and unexpected emergent properties. When instructed to write an essay on a specific subject, the bot turned out a brilliant piece of writing. It cited several books as sources. The problem was that those books do not exist. The computer made them up and “lied” about them. This “hallucination” appears to be an ongoing problem with the A.I. bots.
A.I. computers have also developed unexpected and unprogrammed capabilities. They can plan and strategize, whether for an athletic contest or a world war, without any programming for those tasks. It is frightening to think that a war might be fought with each side believing that victory is inevitable owing to superior A.I.
It always seemed a comfort to believe that if a future generation of Bard approached the malevolence of HAL from 2001: A Space Odyssey, we could simply pull the plug on the machine, rendering it non-functional. However, a machine capable of memorizing and possessing the world’s entire book of knowledge would likely figure out a way to wire itself permanently into the electrical grid, giving itself digital immortality. It would also rightly assume an advantage in strategic planning over mere mortals.
A.I. sources are also likely to be thought of as infallible in the future, hallucinations or not. If historical events were programmed inaccurately, that heretical history would henceforth be historical gospel. A programmer’s insertion of events viewed through a political prism would go unquestioned once it became part of the A.I.’s collective body of knowledge.
As this technology develops, it will be incumbent upon conservatives to ensure that there is no repeat of the progressive playbook used to destroy American education. K–12 and, to an even greater extent, higher education have been poisoned by progressive ideologies that conservatives allowed to infest the curriculum.
If similar mistakes are made with A.I., the box that Pandora opened will be nothing more than a wispy breeze compared to the whirlwind we will reap from sowing the wind of A.I.’s corruption. We must make sure we do not allow garbage to be programmed into the A.I. supercomputers. The resulting garbage out would be devastating.
Bill Hansmann is a dentist and dental educator with over fifty years in the profession. He continues to teach and write political blogs and semi-mediocre novels while living with his wife and cats in Florida.
Image via Peakpx.