
‘It Could Go Quite Wrong:’ OpenAI CEO Sam Altman Testifies Before Congress on ChatGPT’s Potential Dangers

OpenAI CEO Sam Altman recently testified before Congress about the potential risks and implications of AI technologies such as ChatGPT, which have surged in popularity in recent months.

The Daily Mail reports that Sam Altman, CEO of ChatGPT developer OpenAI, recently addressed Congress on the potential dangers and ramifications of ChatGPT and other artificial intelligence (AI) technologies that have grown enormously popular in recent months. During the hearing, legislators raised concerns about the significant impact AI models could have on human history, comparing their potential effects to those of the printing press or the atomic bomb.

ChatGPT and OpenAI emblems are displayed on February 21, 2023. (Beata Zawrzel/NurPhoto via Getty Images)

“If this technology goes wrong, it could go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening,” Altman stated, acknowledging his fear that misuse of AI technology could cause significant harm. The hearing marks the first in a series of planned discussions aimed at establishing rules for AI, a process lawmakers believe should have begun earlier. They drew parallels with the rise of social media, noting their past failure to regulate it in its early stages, which led to problems such as the exploitation of children online.

OpenAI’s ChatGPT, a free chatbot tool, can produce responses that are remarkably human-like. The hearings are intended to cover strategies for ensuring that risks are disclosed, creating evaluative scorecards, and making AI models like ChatGPT transparent.

The impact of AI on jobs was one of the main topics discussed during the hearing, amid worries that a coming industrial revolution could eliminate jobs. In his opening remarks, Sen. Richard Blumenthal (D-CT) said an impending industrial revolution that displaces workers would be “the biggest nightmare.”

While acknowledging that some jobs might be automated by AI technology, Altman pointed out that new jobs would be created as a result. “I believe that there will be far greater jobs on the other side of this, and the jobs of today will get better,” he said.

In the wider debate, some have referred to AI superintelligence as the “nuclear weapons of software,” reflecting broader societal anxieties about its effects and potential risks. “It’s almost akin to a war between chimps and humans,” said Kevin Baragona, one of the signatories of an open letter published by the Future of Life Institute calling for a pause in the development of ChatGPT-like AI.

He added: “The humans obviously win since we’re far smarter and can leverage more advanced technology to defeat them. If we’re like the chimps, then the AI will destroy us, or we’ll become enslaved to it.”

Breitbart News previously reported on the open letter:

The open letter urges AI labs and independent experts to work together to create and put into practice a set of shared safety protocols for the design and development of advanced AI. To guarantee that AI systems adhering to them are risk-free, these protocols would be strictly audited and monitored by outside experts who are not affiliated with the company. The signatories stress that the proposed pause is only a temporary retreat from the dangerous race toward increasingly unpredictable black-box models with emergent capabilities, not a general halt to AI development.

The letter urges the creation of stronger governance systems in addition to the establishment of safety protocols. In addition to provenance and watermarking systems to help distinguish between authentic and fake content and track model leaks, these systems ought to include new regulatory bodies devoted to AI oversight and tracking. The experts also recommend increasing public funding for technical AI safety research and holding suppliers accountable for harm caused by AI.

Read more at the Daily Mail here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan
