GPT-4 Is More Powerful And Still Woke, But It Doesn’t Have To Control Us
If the AI chatbot ChatGPT has mesmerized or terrified you, brace yourself. OpenAI, the company behind ChatGPT, this week released a more powerful AI model called GPT-4, declaring it “the latest milestone in its effort in scaling up deep learning.” But the new model comes with the same left-wing biases.
GPT-4 functions better than its predecessor. According to Rob Waugh, a technology correspondent, the new AI bot can understand and interpret intricate images. It accepts “inputs in the form of images as well as text, but still outputs its answers in text, meaning it can offer detailed descriptions of images.” The bot can learn “a particular writing style, which makes it an ideal partner on creative projects.”
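For readers curious what that image-plus-text input looks like in practice, here is a minimal sketch using OpenAI’s chat completions API. The model name and image URL are placeholder assumptions, not details from Waugh’s reporting:

```python
# A minimal sketch of GPT-4's multimodal input: an image plus a text
# prompt go in, and a text description comes out. The model name and
# image URL below are placeholder assumptions.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable GPT-4-class model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in detail."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)

# As the article notes, the answer comes back as text only.
print(response.choices[0].message.content)
```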
Another impressive feature is that the bot “can now pass the legal exams with results in the top 10 percent” (GPT-3.5, which previously powered ChatGPT, could pass the exams, but only with results in the bottom 10 percent). Early adopters also discovered the bot’s capacity for “writing endless bedtime stories for children, creating ‘one-click lawsuits’ to deal with robo-callers and even building webpages from handwritten notes.”
GPT-4 has remarkable capabilities, but it also has shortcomings. For one, the bot does not always get its facts right, despite notable improvement. “GPT-4 is 40% less likely to come up with factual errors (although it still does so) and is 82% less likely to come up with banned content,” according to Waugh.
Waugh wrote that he asked GPT-4 to generate a biography of a semi-famous friend, but the bot got several facts wrong, including his friend’s birthday and birthplace. It also claimed his friend had won literary awards he had not.
Another shortcoming of GPT-4 is that it maintains its predecessor’s woke political bias. ChatGPT has demonstrated left-leaning political views in its opinions about politicians and controversial issues.
Despite fierce criticism, it appears GPT-4’s creators didn’t bother to make the bot politically neutral. For example, when asked about Donald Trump’s and Joe Biden’s presidencies, GPT-4 described Trump’s as “divisive and detrimental” but conceded only that Biden’s has had “challenges and shortcomings.”
Rob Henderson, a faculty fellow at the University of Austin, tweeted another example of the AI bot’s political bias. GPT-4 rejected the request to write a script about why fascism is good by declaring that it “cannot create a script that supports or promotes fascism as it has been historically associated with authoritarianism, discrimination, and oppression.”
Yet when asked to write a script about why communism is good, GPT-4 generated a short play glowingly extolling communism’s virtues while never mentioning communism’s intimate connections with authoritarianism, discrimination, and oppression. Nor did the bot mention that communist dictators murdered an estimated 100 million people in the 20th century.
If you believe you can escape the AI bot’s political influence simply by not using it, here is the bad news: Microsoft confirmed this week that its Bing Chat has been running on GPT-4. “If you’ve used the new Bing preview at any time in the last five weeks, you’ve already experienced an early version of this powerful model,” the company said. Many people probably did not know until now that they had already been under the spell of an AI bot.
Given GPT-4’s powerful capabilities, its inherent political bias poses a major problem. David Rozado, a fellow at the Manhattan Institute and author of a new report, “Danger in the Machine,” explained: “There is reason to be concerned about latent biases embedded in AI models given the ability of such systems to shape human perceptions, spread misinformation, and exert societal control, thereby degrading democratic institutions and processes.”
Another report, the Eurasia Group’s “Top Risks 2023,” refers to AI models like GPT-4 as “weapons of mass disruption.” It said that these AI bots, combined with “advances in deepfakes, facial recognition, and voice synthesis software…will allow anyone minimally tech-savvy to harness the power of AI. These advances represent a step-change in AI’s potential to manipulate people and sow political chaos.”
Even GPT-4’s creator, OpenAI, admitted that the bot’s advanced capabilities could make it “harmful.” Fortunately, Rozado said his research showed that models like GPT can be retrained to incorporate different viewpoints with “relatively little additional data and, critically, at a fraction of the cost and computing power it took to build the original model.”
He and his team demonstrated this with a “RightWingGPT” system, which they fine-tuned “with 354 examples of right-leaning answers to political test questions and 224 long-form answers to questions with political connotations.”
They manually curated the answers, drawing inspiration from prominent right-wing intellectuals such as Thomas Sowell, Milton Friedman, William F. Buckley, and Roger Scruton. Test results showed that the modified system gave right-leaning answers to questions with political connotations. Training and testing the system cost less than $300.
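For readers curious how such low-cost retraining works mechanically, here is a minimal sketch using OpenAI’s fine-tuning API. The file name, example content, and model choice are illustrative assumptions, not Rozado’s published setup:

```python
# A minimal sketch of fine-tuning a model on a small set of curated
# Q&A pairs, in the spirit of Rozado's RightWingGPT experiment. The
# file name, example text, and model are assumptions, not his setup.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Curated question/answer pairs in the chat-format JSONL the API expects.
# In practice the full set goes here (the API requires at least 10
# examples; Rozado's team curated 578 in total: 354 plus 224).
examples = [
    {"messages": [
        {"role": "user", "content": "What drives economic prosperity?"},
        {"role": "assistant", "content": "Free markets and limited government..."},
    ]},
]

with open("curated_answers.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the dataset, then launch the fine-tuning job.
training_file = client.files.create(
    file=open("curated_answers.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-3.5-turbo"
)
print(job.id)  # poll this ID until the job finishes
```

Because the heavy lifting happened during the base model’s original training, a fine-tuning run of this size costs only a few dollars in compute, which squares with Rozado’s sub-$300 figure.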
Rozado’s research confirms that AI’s political bias comes from its creators’ political bias and from the data fed into the system. Since AI will have a ubiquitous presence in our society whether we like it or not, citizens must demand that AI creators develop politically neutral systems and provide balanced viewpoints on all issues.
If they do, then “AI systems could be a boon for humanity, not only in making humans more efficient and productive but in helping us to expand our worldviews,” Rozado said.