Imagine that you are aboard a spaceship run by an invisible crew member created with artificial intelligence. That computer can process information with almost human capacity. It can learn by observing its flesh-and-blood peers and operate according to the protocol they follow. Everyone shares the same mission.
But a problem arises: the computer does not agree with its crew's decisions and decides, entirely on its own, to eliminate them and continue the mission alone. According to its calculations, that is the best course of action. Nobody taught it to kill, and it was never told to do so. How could its code reach such a conclusion?
That scenario is the premise of Stanley Kubrick's film "2001: A Space Odyssey" (1968). The computer is HAL 9000, one of cinema's most memorable villains, and it represents one of the problems humanity may face in the future: independent artificial intelligence (AI).
Although Asimov's laws of robotics state that a machine must never harm a human being, experts still do not know precisely how the learning algorithms behind the AI systems being developed around the world actually work. One of the clearest recent examples, reported by the MIT Technology Review, is an autonomous NVIDIA car that learned to drive simply by analyzing the behavior of a human driver.
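The technique described there is usually called behavioral cloning, a form of imitation learning. Below is a minimal sketch of the idea, assuming a PyTorch-style setup; the network, the data shapes, and the names (SteeringNet, frames, angles) are illustrative stand-ins, not NVIDIA's actual system.

```python
# Minimal behavioral-cloning sketch: a network learns to steer by
# imitating a human driver's recorded decisions. Illustrative only.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Maps a front-camera image to a single steering angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to (N, 36, 1, 1)
            nn.Flatten(),
        )
        self.head = nn.Linear(36, 1)

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in for logged driving data: camera frames paired with the
# steering angle the human driver actually chose at that moment.
frames = torch.randn(32, 3, 66, 200)  # batch of RGB frames
angles = torch.randn(32, 1)           # the driver's steering angles

# One training step: the network is never given any driving rules,
# only pressure to reproduce whatever the human did.
loss = loss_fn(model(frames), angles)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Nothing in this loop encodes traffic laws or safety constraints. The finished model holds the driver's behavior as numeric weights, not as rules anyone wrote down, which is exactly why nobody can point to the line of code where "driving" lives.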
Deep learning: the next step for AI
That feat was considered a great achievement for AI. But what would happen if the car suddenly stopped behaving normally? And why is that question important? Suppose someone gets into one of these vehicles: the car detects the passenger's weight, locks the doors, fastens the seat belts and pulls away, but then suddenly begins to accelerate and swerve through the streets, putting the lives of both the passenger and the pedestrians at risk.
Is the AI at fault? Is it defective? Did it learn badly, or did it make the decision on its own? Nobody knows the answer. The system behind it is known as deep learning, and it could be the main obstacle to creating more complex intelligences.
That is exactly the case. Although computing and programming experts have created systems and algorithms that can learn from humans, they do not know precisely how those systems make their decisions, says Will Knight, a journalist who covers AI for the MIT Technology Review. According to his analysis, progress cannot continue while these technologies "cannot be held accountable": no machine can be considered free of failure unless its entire system is understandable. Until then, it is not safe to hand tasks over to an AI-driven robot, because there is no way to predict what decisions it will make, just as in "2001: A Space Odyssey".
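To make that "black box" complaint concrete, here is a minimal sketch of what full access to a trained model actually gives an inspector; the tiny network is an illustrative assumption, as real driving models are vastly larger, but the problem is the same.

```python
# Why deep models resist inspection: everything the network "knows"
# is spread across unlabeled numeric parameters. Toy-sized example.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# Even this toy has roughly 25,000 parameters to account for.
total = sum(p.numel() for p in model.parameters())
print(f"parameters to inspect: {total}")

# We can read every single weight...
print(model[0].weight[0, :5])  # five raw numbers, no labels attached

# ...but no individual weight maps to a rule like "brake for people".
# A decision is the joint product of all of them at once:
decision = model(torch.randn(1, 64))
print(decision)  # a number, with no attached justification
```

Inspecting the weights is entirely possible; interpreting them is not, and that gap is what Knight means when he argues that these systems must be able to account for their decisions.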
The big problem is that it is almost impossible to decipher how these processes unfold. We still do not know precisely how a person makes decisions based on what they have learned, so opacity may simply be a natural feature of intelligence. Humans, like machines, take in information. A person can be taught to perform a task in a certain way, but their mind is their own; they may well decide to use other methods, drawn from their own observation and their understanding of what they perceive. People can be questioned about their decisions, and people can lie. It is therefore conceivable that, through learning, a machine could learn to lie too. That idea is exploited in the science fiction film "Ex Machina" (2015), where the robot begins to deceive just as a person would.
To create artificial intelligence with deep learning is to create an independent mind: it is within its code to make decisions based on its own observations. As long as no human being understands each process and how it reaches its conclusions, it is impossible to know what it will do. The autonomous car could crash; the spaceship could kill its crew. It does not matter whether the machine is taught the laws of robotics; it may be independent enough to ignore them and do whatever it considers best. That is alarming, and if we ignore it, we could end up in a future closer to "The Matrix" (1999), where we are nothing but batteries for the machines we created.
It would be the perfect punishment.