Scientists have warned that humanity risks losing control of artificial intelligence as the technology continues to develop.
AI software is becoming more common, with companies such as Amazon trialling self-driving vehicles.
Experts recently made a major breakthrough with a revolutionary new AI system that never stops learning.
But as the technology develops, an international group of researchers has warned of the growing dangers posed by autonomous software.
In a study published in the Journal of Artificial Intelligence Research, author Manuel Cebrian said: “A super-intelligent machine that controls the world sounds like science fiction.
“But there are already machines that carry out certain important tasks independently without the programmers fully understanding how they learned it […], a situation that could at some point become uncontrollable and dangerous for humanity.”
According to the study, scientists trialled two methods of controlling AI software – isolating it from the internet, and programming it with an algorithm that prevents it from causing harm.
The first method stopped the technology from performing its basic functions, but worryingly, no algorithm appears to be able to guarantee that harm is prevented.
As reported by Entrepreneur, researcher Iyad Rahwan said: “If we decompose the problem into basic rules of theoretical computing, it turns out that an algorithm that instructed an AI not to destroy the world could inadvertently stop its own operations.
“If this happened, we would not know if the containment algorithm would continue to analyse the threat, or if it would have stopped to contain the harmful AI.
“In effect, this makes the containment algorithm unusable.”
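Rahwan’s argument mirrors the classic halting-problem diagonalisation: any algorithm that claims to decide whether a program will cause harm can be defeated by a program built to do the opposite of whatever the checker predicts. A minimal sketch of that self-reference, with entirely hypothetical function names (none of these appear in the study itself):

```python
def adversary(checker):
    """Given any claimed 'harm checker', construct a program it misjudges."""
    def program():
        # Behave "harmfully" (return True) exactly when the checker
        # predicts this very program is safe.
        return not checker(program)
    return program

def optimistic_checker(prog):
    # A candidate containment algorithm that declares every program safe.
    return False

p = adversary(optimistic_checker)
print(optimistic_checker(p))  # False: the checker says "safe"
print(p())                    # True: the program "harms" anyway
```

Whatever verdict a candidate checker gives, the adversarial program inverts it, so no checker can be right about every program – the informal core of why a general containment algorithm is argued to be impossible.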
It comes as artificial intelligence researchers working on facial recognition systems claim their software can now predict someone’s politics or sexual orientation from facial measurements alone.
The research, published this week in the Nature journal Scientific Reports, was conducted by Stanford University’s Michal Kosinski.
The paper explains that the algorithm studies body language. “Head orientation and emotional expression stood out,” Kosinski writes.
“Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust.”