Humans May Not Always Grasp Why AIs Act. Don’t Panic

in #ai · 7 years ago

Humans are inscrutable too. Existing rules and regulations can apply to artificial intelligence

There is an old joke among pilots that says the ideal flight crew is a computer, a pilot and a dog. The computer’s job is to fly the plane. The pilot is there to feed the dog. And the dog’s job is to bite the pilot if he tries to touch the computer.

There is a snag, though. Machine learning works by letting computers train themselves, adapting their own programming to the task at hand. People struggle to understand exactly how those self-written programs do what they do. When algorithms are handling trivial tasks, such as playing chess or recommending a film to watch, this "black box" problem can safely be ignored. When they are deciding who gets a loan, whether to grant parole or how to steer a car through a crowded city, it is potentially harmful. And when things go wrong, as even the best systems inevitably will, customers, regulators and the courts will want to know why.
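To make the point concrete, here is a minimal sketch of why a trained model resists explanation. It is entirely illustrative: the synthetic "loan" data, the feature count and the scikit-learn model are assumptions of this sketch, not anything referenced above. The point is simply that the model's "reasoning" ends up stored as thousands of unlabelled numbers.

```python
# Illustrative sketch only: a small model trained on made-up data, whose learned
# parameters are just large arrays of numbers with no human-readable meaning.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                 # 1,000 hypothetical applicants, 20 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic "approve / decline" labels

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X, y)

# The model now makes accurate decisions on its training data...
print("training accuracy:", model.score(X, y))
# ...but its "explanation" is only this: thousands of unlabelled weights per layer.
print("parameters per layer:", [w.size for w in model.coefs_])
```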

For some people this is a reason to hold back AI. France’s digital-economy minister, Mounir Mahjoubi, has said that the government should not use any algorithm whose decisions cannot be explained. But that is an overreaction. Despite their futuristic sheen, the difficulties posed by clever computers are not unprecedented. Society already has plenty of experience dealing with problematic black boxes; the most common are called human beings. Adding new ones will pose a challenge, but not an insuperable one. In response to the flaws in humans, society has evolved a series of workable coping mechanisms, called laws, rules and regulations. With a little tinkering, many of these can be applied to machines as well.

Be open-minded
Start with human beings. They are even harder to understand than a computer program. When scientists peer inside their heads, using expensive brain-scanning machines, they cannot make sense of what they see. And although humans can give explanations for their own behaviour, they are not always accurate. It is not just that people lie and dissemble. Even honest humans have only limited access to what is going on in their subconscious mind. The explanations they offer are more like retrospective rationalisations than summaries of all the complex processing their brains are doing. Machine learning itself demonstrates this. If people could explain their own patterns of thought, they could program machines to replicate them directly, instead of having to get them to teach themselves through the trial and error of machine learning.

Away from such lofty philosophy, humans have worked with computers on complex tasks for decades. As well as flying aeroplanes, computers watch bank accounts for fraud and adjudicate insurance claims. One lesson from such applications is that, wherever possible, people should supervise the machines. For all the jokes, pilots are vital in case something happens that is beyond the scope of artificial intelligence. As computers spread, companies and governments should ensure the first line of defence is a real person who can overrule the algorithms if necessary.
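As a rough illustration of that "real person who can overrule the algorithms", here is a hedged sketch of a human-in-the-loop pattern. The `Decision` class, the confidence threshold and the `human_review` callback are hypothetical names invented for this sketch, not anything described in the article.

```python
# Hypothetical sketch: act on a model's output automatically only when it is
# confident; otherwise defer the decision to a human reviewer.
from dataclasses import dataclass

@dataclass
class Decision:
    approve: bool
    confidence: float

def decide(model_output: Decision, human_review) -> bool:
    # High-confidence outputs go through automatically.
    if model_output.confidence >= 0.95:
        return model_output.approve
    # Everything else is routed to a person, who has the final say.
    return human_review(model_output)

# Example: the reviewer callback stands in for a loan officer or case worker.
final = decide(Decision(approve=True, confidence=0.80),
               human_review=lambda d: False)
print("final decision:", final)
```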

Even when people are not "in the loop", as with entirely self-driving cars, today's liability laws can help. Courts may struggle to assign blame when neither an algorithm nor its programmer can properly account for its actions. But it is not necessary to know exactly what went on in a brain, whether silicon or biological, to decide whether an accident could have been avoided. Instead courts can ask the familiar question of whether a different course of action might reasonably have prevented the mistake. If so, liability could fall on whoever sold the product or runs the system.

There are other worries. A machine trained on old data might struggle with new circumstances, such as changing cultural attitudes. There are examples of algorithms which, after being trained by people, end up discriminating on the basis of race and sex. But the choice is not between prejudiced algorithms and fair-minded humans. It is between biased humans and the biased machines they create. A racist human judge may go uncorrected for years. An algorithm that advises judges might be applied to thousands of cases each year, generating so much data that biases can be spotted and corrected quickly.
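A small sketch of how that auditing might look in practice: because algorithmic decisions are logged at scale, simple statistics can surface a skew that might go unnoticed in individual human rulings. The decision log, group labels and column names below are hypothetical; the arithmetic is just an approval rate per group.

```python
# Hypothetical sketch: compare outcome rates across groups in a log of
# algorithmic decisions. A large, persistent gap is a flag to investigate.
import pandas as pd

# Imagine this frame holds thousands of logged recommendations.
decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "A", "B", "A", "B"],
    "approved": [1,    1,   0,   0,   1,   1,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("gap between groups:", rates.max() - rates.min())
```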

AI is bound to suffer some troubles — how could it not? But it also promises extraordinary benefits and the difficulties it poses are not unprecedented. People should look to the data, as machines do. Regulators should start with a light touch and demand rapid fixes when things go wrong. If the new black boxes prove tricky, there will be time to toughen the rules.
