There’s a lot of fear surrounding artificial intelligence these days, and it’s hard to know what's warranted and what isn’t.
Some people believe that AI is far more dangerous than nuclear weapons, and that in the next world war the destruction humanity suffers will be down not to nukes, but to AI.
However, a recent article published in Entrepreneur magazine argues against this myth. It stated that for an AI to overthrow humanity, four things would have to occur:
1. An AI would have to develop a sense of self distinct from others, and the intellectual capacity to step outside the purpose it was programmed for
2. It would have to develop, out of the billions of possible feelings, a desire for something that it believes is incompatible with human existence
3. It would have to choose a plan for dealing with its feelings (out of the billions of possible plans) that involved death, destruction and mayhem
4. It would have to have the computing power / intelligence / resources to enact such a plan
An AI achieving any one of these is highly unlikely. Achieving all four? Next to impossible. It is improbable for an AI system to achieve consciousness: the ability to think about oneself as an object and to self-direct its own actions. Machine learning works by training a machine on millions of data points so that it learns what to do, and how to do it, in a given situation. The machine can only do what it is programmed and trained to do.
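A toy sketch makes that last point concrete. The example below uses scikit-learn and an invented two-class dataset (neither is mentioned in the article; both are purely illustrative): no matter how strange the input, a trained classifier can only ever answer with one of the labels it was trained on.

```python
# A minimal sketch of supervised learning, assuming scikit-learn is
# installed. The data and task are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two features, two classes ("cat" = 0, "dog" = 1).
X_train = np.array([[1.0, 0.2], [0.9, 0.1], [0.2, 1.0], [0.1, 0.9]])
y_train = np.array([0, 0, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

# Even for an input unlike anything seen during training, the model
# can only respond with one of its trained classes. It cannot "decide"
# to step outside the task it was built for.
weird_input = np.array([[100.0, -50.0]])
print(model.predict(weird_input))  # still 0 or 1, never anything else
print(model.classes_)              # [0 1]
```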
What we’re seeing again and again are forward-thinkers applying AI to situations where speed is what we need: identifying specifics based on complex statistical models, and processing enormous amounts of data to solve otherwise intractable problems. AI isn’t the “demon” it’s made out to be; it is inherently useful and will allow us to effect change like never before.
Instead of fearing AI, the most pressing need is to think through its legislative and ethical aspects. What if the training data is incorrect or not comprehensive? What if the training algorithm is buggy? What if an artificially intelligent machine harms someone? Who will be prosecuted? The machine? The owner? The programmer? The manufacturer?
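The training-data question is worth making concrete. Here is a hedged sketch (the lending scenario, data, and feature names are all invented for illustration, again assuming scikit-learn): a model trained on unrepresentative history does not become malicious; it simply repeats the bias baked into its data.

```python
# A hypothetical illustration of the "incorrect or incomplete training
# data" problem. In this invented loan history, every approval went to
# applicants from zip code "A", so the model learns the zip code
# rather than anything about creditworthiness.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [income_in_thousands, zip_is_A]; label: 1 = approved.
X_train = np.array([
    [90, 1], [80, 1], [70, 1],  # zip A applicants, all approved
    [90, 0], [80, 0], [70, 0],  # identical incomes, zip B, all denied
])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = DecisionTreeClassifier().fit(X_train, y_train)

# A high-income applicant from zip B is denied purely because of the
# skewed history the model was trained on. Who is liable for that?
print(model.predict(np.array([[120, 0]])))  # [0] -- denied
```

The failure here is not the algorithm but the data it was given, which is exactly why the liability questions above have no obvious single answer.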