Major Development In AI Technology As Computer Passes An Eighth-Grade-Level Science Test
The desire of all corporations and governments is to create a worker that will labor endlessly, with complete submission and at full capacity, ideally for many years and with little to no maintenance. Throughout most of history, this arrangement is what people have called "slavery." Under American law, direct slavery does not exist save for those in prison, and there are laws which provide basic protections to prevent people from being abused to death. Indeed, if it were "legal," corporations would have no problem chaining people to machines and forcing them to work until they died, and then blaming each man for dying as they told his replacement to clear away the dead body before starting work.
This desire for the "ideal" slave is the reason for the push into nanotechnology and artificial intelligence. It is well known that robots can do things more accurately and consistently than humans can, and that robots are more durable overall: they can be abused, and they do not need to rest, eat, drink, take smoke breaks, see family, have vacation time, or think for themselves in the human sense. Unlike animals, they can perform advanced tasks but will not snap back if mistreated.
The corporate mindset of the modern world is based on Darwinism, and this is reflected in how people treat each other: human life is of no value, and a company will not hesitate to get rid of people for little to no reason so that the owners can take more profit, all the while working their laborers to the point of personal collapse. In their minds, and as many will admit, they do not see humans as resources but as liabilities, and the sooner they can get rid of people, the better off they are; meanwhile, people are rewarded for turning against one another in the name of helping the "business."
One of the major obstacles to the advancement of AI is the ability of computers to think as humans do. In other words, can a computer look at a test of logic and, based on its processing capacity, make the correct choices that a human would be expected to make?
What is being called a major breakthrough in AI technology took place as scientists announced that an AI system passed an eighth-grade science test with more than 90 percent of answers correct, and a twelfth-grade test with more than 80 percent accuracy:
Four years ago, more than 700 computer scientists competed in a contest to build artificial intelligence that could pass an eighth-grade science test. There was $80,000 in prize money on the line.
They all flunked. Even the most sophisticated system couldn’t do better than 60 percent on the test. A.I. couldn’t match the language and logic skills that students are expected to have when they enter high school.
But on Wednesday, the Allen Institute for Artificial Intelligence, a prominent lab in Seattle, unveiled a new system that passed the test with room to spare. It correctly answered more than 90 percent of the questions on an eighth-grade science test and more than 80 percent on a 12th-grade exam.
The system, called Aristo, is an indication that in just the past several months researchers have made significant progress in developing A.I. that can understand languages and mimic the logic and decision-making of humans.
The world’s top research labs are rapidly improving a machine’s ability to understand and respond to natural language. Machines are getting better at analyzing documents, finding information, answering questions and even generating language of their own.
Aristo was built solely for multiple-choice tests. It took standard exams written for students in New York, though the Allen Institute removed all questions that included pictures and diagrams. Answering questions like that would have required additional skills that combine language understanding and logic with so-called computer vision.
Some test questions, like this one from the eighth-grade exam, required little more than information retrieval:
A group of tissues that work together to perform a specific function is called:
(1) an organ
(2) an organism
(3) a system
(4) a cell

But others, like this question from the same exam, required logic:
Which change would most likely cause a decrease in the number of squirrels living in an area?
(1) a decrease in the number of predators
(2) a decrease in competition between the squirrels
(3) an increase in available food
(4) an increase in the number of forest fires

Researchers at the Allen Institute started work on Aristo — they wanted to build a “digital Aristotle” — in 2013, just after the lab was founded by the Seattle billionaire and Microsoft co-founder Paul Allen. They saw standardized science tests as a more meaningful alternative to typical A.I. benchmarks, which relied on games like chess and backgammon or tasks created solely for machines.
A science test isn’t something that can be mastered just by learning rules. It requires making connections using logic. An increase in forest fires, for example, could kill squirrels or decrease the food supply needed for them to thrive and reproduce.
Enthusiasm for the progress made by Aristo is still tempered among scientists who believe machines are a long way from completely mastering natural language — and even further from duplicating true intelligence.
“We can’t compare this technology to real human students and their ability to reason,” said Jingjing Liu, a Microsoft researcher who has been working on many of the same technologies as the Allen Institute.
But Aristo’s advances could spread to a range of products and services, from internet search engines to record-keeping systems at hospitals.
Oren Etzioni, who oversees the Allen Institute for Artificial Intelligence, said an advance unveiled on Wednesday could have a wide impact, from search engines to medical records.
“This has significant business consequences,” said Oren Etzioni, the former University of Washington professor who oversees the Allen Institute. “What I can say — with complete confidence — is you are going to see a whole new generation of products, some from start-ups, some from the big companies.”
The new research could lead to systems that can carry on a decent conversation. But it could also encourage the spread of false information.
“We are at the very early stage of this,” said Jeremy Howard, who oversees Fast.ai, another influential lab, in San Francisco. “We are so far away from the potential that I cannot say where it will end up.”
In 2016, when a London lab built a system that could beat the world’s best players at the ancient game of Go, it was widely hailed as a turning point for artificial intelligence.
Dr. Etzioni’s excitement, however, was muted. Artificial intelligence was not nearly as advanced as it might seem, he said, pointing to the Allen Institute’s earlier competition that stumped the A.I. systems with an eighth-grade science test.
The Allen Institute improved on that earlier effort much more quickly than many experts — including Dr. Etzioni — expected.
Its work was largely driven by neural networks, complex mathematical systems that can learn tasks by analyzing vast amounts of data. By pinpointing patterns in thousands of dog photos, for example, a neural network can learn to recognize a dog.
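For readers curious what "learning by analyzing data" looks like in practice, the toy sketch below trains a small neural network in Python with the PyTorch library (an assumed tool choice; the article names no specific software). The network is never told the rule separating the two classes; it infers the pattern from labeled examples, the same principle behind learning to recognize a dog from thousands of photos.

```python
# A toy illustration of a neural network learning a pattern from data.
# PyTorch is an assumed tool choice; the article names no specific software.
import torch
import torch.nn as nn

# Synthetic stand-ins for "photos": 2-D points. The hidden rule (never
# shown to the network) is that points near the origin are class 1.
torch.manual_seed(0)
x = torch.rand(1000, 2) * 2 - 1
y = ((x ** 2).sum(dim=1) < 0.5).float().unsqueeze(1)

# A small feed-forward network: a "complex mathematical system."
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

# Training: the network adjusts itself by analyzing labeled examples.
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

accuracy = ((model(x) > 0).float() == y).float().mean()
print(f"training accuracy: {accuracy.item():.2%}")
```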
In recent months, the world’s leading A.I. labs have built elaborate neural networks that can learn the vagaries of language by analyzing articles and books written by humans.
At Google, researchers built a system called Bert that combed through thousands of Wikipedia articles and a vast digital library of romance novels, science fiction and other self-published books.
Through analyzing all that text, Bert learned how to guess the missing word in a sentence. By learning that one skill, Bert soaked up enormous amounts of information about the fundamental ways language is constructed. And researchers could apply that knowledge to other tasks.
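To make that missing-word skill concrete, here is a minimal sketch using a pretrained BERT model through the Hugging Face transformers library (an assumed toolkit; the article does not say what tools Google's researchers used internally):

```python
# A minimal sketch of the fill-in-the-blank task Bert was trained on.
# The Hugging Face "transformers" library and the "bert-base-uncased"
# checkpoint are assumptions; this is not Google's internal code.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Bert guesses the masked word from the surrounding context.
for guess in unmasker("Squirrels bury [MASK] in the ground for the winter."):
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```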
The Allen Institute built its Aristo system on top of the Bert technology, feeding Bert a wide range of questions and answers. In time, it learned to answer similar questions on its own.
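As a rough sketch of how a multiple-choice test-taker can be layered on top of Bert, the example below uses the transformers library's multiple-choice head to score the four answers to the tissue question quoted earlier. This illustrates the general technique, not Aristo's actual architecture, and the head shown here would first have to be fine-tuned on many question-answer pairs, as the article describes, before its picks would be reliable.

```python
# A sketch of multiple-choice scoring on top of a BERT-style model,
# using Hugging Face transformers. This shows the general technique,
# not the Allen Institute's actual Aristo system; the untuned head
# below would need fine-tuning on Q&A pairs before its answers mean much.
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

question = ("A group of tissues that work together to perform "
            "a specific function is called:")
choices = ["an organ", "an organism", "a system", "a cell"]

# Pair the question with each candidate answer.
encoding = tokenizer([question] * len(choices), choices,
                     return_tensors="pt", padding=True)
# The model expects tensors shaped (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # one score per choice

print("model picks:", choices[logits.argmax(dim=-1).item()])
```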
Not long ago, researchers at the lab defined the behavior of their test-taking system one line of software code at a time. Sometimes they still do that painstaking coding. But now that the system can learn from digital data on its own, it can improve at a much faster rate.
Systems like Bert — called “language models” — now drive a wide range of research projects, including conversational systems and tools designed to identify false news. With more data and more computing power, researchers believe the technology will continue to improve.
But Dr. Etzioni stressed that the future of these systems was hard to predict and that language was only one piece of the puzzle.
Ms. Liu and her fellow Microsoft researchers have tried to build a system that can pass the Graduate Record Examination, the test required for admission to graduate school.
The language section was doable, she said, but building the reasoning skills required for the math section was another matter. “It was far too challenging.” (source, source)
The scientists admit there is much work to be done on AI systems in this sense, but the fact that the technology is progressing at the rate it is should concern people, because what is being built here are replacements for men.
For years, people have said that robots will replace human beings in "low-level" jobs, and this is already happening with automation-based systems at fast food restaurants and major department stores. However, even jobs deemed "safe" from automation, such as high-level professional work, will eventually come under scrutiny, because the computer is being made into a surrogate human. If a robot can do many of the tasks a human can, why would a company, which bases its care for people on its ability to earn more money, care about a man when it can buy a robot instead?
The biggest consequences of such robots will be in war. Just as the airplane, the machine gun, and poison gas were to World War I, and the atom bomb was to World War II, the weapon to watch for in World War III is not merely AI-based systems, but specifically the emergence of human-like and humanoid (cyborg) robots on the battlefield, beginning to replace humans as soldiers. This too is a concern tied to "business" needs. A human being is a real person who comes from a family, has to grow and be trained, can think for himself, can die from a ten-cent bullet, or can suffer lifelong injuries with long-term consequences for his family. A robot, by contrast, can be made in a factory from metal, has no free will and no morals, is stronger and more durable than a human, and can be sent into more dangerous scenarios; if it fails, it simply breaks and is replaced by another robot. An army of human beings can be fought to exhaustion and fatigue, as happened to the US in the Vietnam War. Robots cannot be "exhausted," and no matter how many are destroyed, another can always be sent in their place.
The next step in warfare, then, is not about who develops a better "gun," but who can develop the next virus or hacking tool to take over the CPUs and other processing chips of these robots. It is already known that hackers and hacking are a serious problem for computers. One can only imagine what would happen if a robot were taken over by a "virus," whether designed by a government or by an eccentric or maliciously intended individual. Men can become sick, but because men think for themselves, they can choose to do good or evil, and no "virus" can be injected into a man's brain to force him to do something he does not want to do. Governments and other people with evil intentions do not like this, because these people want to do evil and want to be able to force others to agree with them without objection. They will realize this goal with robots, but then there is the issue of viruses, and no computer is completely hack-proof.
The future is not "going to the stars," but something more along the lines of James Cameron's Terminator films, because it takes only one malicious program to cause untold chaos and destruction, since, as noted before, robots are not moral, but soulless beings that act on orders.