Artificial Intelligence Poses ‘Risk of Extinction,’ Warns ChatGPT Founder and Other AI Pioneers; AI: Good News For Bad Guys, and related story
Artificial Intelligence Poses ‘Risk of Extinction,’ Warns ChatGPT Founder and Other AI Pioneers:
Artificial intelligence tools have captured the public’s attention in recent months, but many of the people who helped develop the technology are now warning that greater focus should be placed on ensuring it doesn’t bring about the end of human civilization.
A group of more than 350 AI researchers, journalists, and policymakers signed a brief statement saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The letter was organized and published by the Center for AI Safety (CAIS) on Tuesday. Among the signatories was Sam Altman, co-founder of OpenAI, the developer of the artificial intelligence writing tool ChatGPT. Other OpenAI members also signed, as did several members of Google and Google’s DeepMind AI project, along with figures from other prominent AI labs. AI researcher and podcast host Lex Fridman also added his name to the list of signatories.
Understanding the Risks Posed By AI
“It can be difficult to voice concerns about some of advanced AI’s most severe risks,” CAIS said in a message previewing its Tuesday statement. CAIS added that its statement is meant to “open up discussion” on the threats posed by AI and “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
NTD News reached out to CAIS for more specifics on the kinds of extinction-level risks the organization believes AI technology poses, but did not receive a response by the time of publication.
Earlier this month, Altman testified before Congress about some of the risks he believes AI tools may pose. In his prepared testimony, Altman included a safety report (pdf) that OpenAI authored on its GPT-4 model. The authors of that report described how large language model chatbots could potentially help harmful actors like terrorists to “develop, acquire, or disperse nuclear, radiological, biological, and chemical weapons.” —>READ MORE HERE
AI: Good News For Bad Guys:
ChatGPT and artificial intelligence (AI) are all the rage right now.
A small group of companies with advanced capabilities in AI/GPT (Microsoft, NVIDIA, Google, Apple, and a few others) are rallying sharply on the profit and productivity potential offered by the new technology.
If the AI/GPT plays were removed from stock market indices, the remainder of the stocks would be down on a year-to-date basis. Whether this performance is a bubble or a genuine leap based on fundamentals remains to be seen.
History is filled with investing fads that fizzle out.
Still, there’s no doubt about the impact. That said, GPT has a dark side that is quickly coming to the fore. What do I mean?
Good News for Bad Guys
Malign actors can use the speed and comprehensiveness of GPT to produce fake images and content. They can then push that content into social media and mainstream channels to cause market rallies and crashes.
In other words, for market manipulators, inside traders, and geopolitical adversaries, GPT is one of the best tools ever invented. Here’s a recent case in point…
Last Monday, May 22, a story appeared on ZeroHedge, Facebook, Twitter, and several other media channels showing a large building on fire near the Pentagon along with speculation that a terrorist attack might be underway.
Stocks immediately began to sell off. Within minutes, observers realized the building fire photo was fake (some of the windows had an irregular rather than uniform appearance).
And indeed, the entire story was fake.
The image of the building with billowing smoke was AI-generated. Investors should get used to this type of AI-induced panic, which can be used to manipulate markets.
The AI/GPT technology is already in the hands of bad actors and they won’t stop using it just because this one fake was detected quickly. —>READ MORE HERE
Follow link below to a related story:
Air Force Denies AI-Controlled Drone ‘Killed’ Human Operator in Simulated Test, Colonel ‘Misspoke’