The Catastrophic Threat of AI

WORDS BY JOSH WOOLLER

There are many dangers faced by humanity: the threat of nuclear annihilation, a superbug that wipes out large swathes of the population, mass economic collapse. Yet within the canon of human eschatology, a ‘robot takeover’ is probably a threat that only rates a laugh. While the Terminator movies may well be a hyperbolic take on the threat robots present to humankind, many of the world’s top scientists are warning of the risk that artificial intelligence, or AI, could pose in the future.
 
In April of this year, SpaceX founder Elon Musk described AI as the “biggest threat” that we currently face. In 2014, theoretical physicist Stephen Hawking warned that the evolution of AI could potentially “spell the end for the human race”. What’s more, far from being a risk in the distant future, some scientists believe that we could have general AI within the next decade.
 
In fact, only three propositions need to be accepted in order to conclude that it is possible to develop superintelligent AI:

1. Intelligence is the product of information processing.
2. We will continue to improve our intelligent machines.
3. Information processing and intelligence are not unique to the biological material that forms our brains.
 
So what does a general artificial intelligence look like?
 
General artificial intelligence is distinct from the narrow AI we have today because it is able to learn and make improvements to itself. Currently, the world’s best chess computer would not be able to play draughts; a general AI, by contrast, would be proficient at both board games. If scientists are able to develop a machine that can learn from itself and build further iterations of itself, while also commanding the full spectrum of human intelligence, then the rest is simply a matter of mathematics. Electronic circuits function at about one million times the rate of the biochemical circuits found in human brains. According to neuroscientist Sam Harris, a superintelligent general intelligence with access to the Internet (and so to all of human knowledge), running for a week, could do the equivalent of “20,000 years of human intellectual work”.
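As a rough back-of-envelope check of that figure, taking the million-fold speed ratio above purely as the illustration Harris intends it to be: one week of machine time at a million times human speed corresponds to about 1,000,000 weeks of human-speed thought, and 1,000,000 weeks divided by 52 weeks per year comes to roughly 19,200 years, which rounds to the 20,000 years Harris cites.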
 
As Sam Harris asks, “how would it be possible to constrain a mind making this sort of progress?” Within AI communities this is known as the ‘Control Problem’, and it is the premise of most apocalyptic AI movies. In essence, the problem is that, given the pace at which a superintelligent AI would be advancing, any divergence between human goals and the goals of the AI could end in disaster.
 
Even without the threat of the ‘Control Problem’, a superintelligent AI still raises questions about what the future of humanity would look like. Think, for example, of the economic and political ramifications. We are talking about a machine that is the perfect labour-saving device, something that would be impossible to absorb into the current structure of society. Not only that, a superintelligent AI could potentially become the perfect weapon of war. If the Russian or Chinese government were to learn that Silicon Valley was on the cusp of inventing such an AI, it might seem logical to take drastic action. Why? Because we are dealing with what Harris describes as a “winner take all scenario”, in which the first people to invest in this technology are essentially light years ahead of any other society technologically.
 
It seems that there are at least two prospects. Either AI will destroy us or it will inspire us to destroy ourselves.
