Programming Ethical AI

in #ai · 5 years ago

Artificial intelligence is supposed to make our lives easier, but unfortunately the applications developed so far sometimes have human bias built right into the programming. This is true of driverless cars, which don't seem to detect people with darker skin tones as reliably as they detect those with lighter skin tones. It's also true of facial recognition software used by law enforcement, which shows racial bias and other shortcomings.

In order for AI to work properly, we have to understand how human bias gets fed into the system and how it can be compounded if it's not caught right away. Even small biases we wouldn't otherwise notice can multiply once an algorithm learns to treat them as the law of the land.

This is what happened when Amazon tried to use artificial intelligence to pre-screen job candidates. It fed ten years' worth of successful job candidates' resumes into the algorithm and got a shocking result: not only did the algorithm learn to prefer male candidates, it also completely discounted resumes that listed women as references.
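To see how a screening model can absorb bias like this without gender ever being an explicit input, here is a minimal, purely hypothetical sketch. The "resumes," token names, and hiring outcomes are all invented; the scoring rule (hire rate per token) is a deliberately crude stand-in for a real learned model, not Amazon's actual system.

```python
from collections import defaultdict

# Invented historical data: each entry is (resume tokens, hired?).
# Because past hires skewed male, a token associated with female
# candidates ends up correlated with rejection.
history = [
    (["python", "chess_club"], 1),
    (["java", "chess_club"], 1),
    (["python", "golf"], 1),
    (["java", "golf"], 1),
    (["python", "womens_chess_club"], 0),
    (["java", "womens_chess_club"], 0),
]

def token_scores(data):
    """Score each token by the hire rate of resumes containing it."""
    hires, totals = defaultdict(int), defaultdict(int)
    for tokens, hired in data:
        for t in tokens:
            totals[t] += 1
            hires[t] += hired
    return {t: hires[t] / totals[t] for t in totals}

scores = token_scores(history)
# The proxy token only ever appears on rejected resumes, so the model
# learns it as a strong negative signal -- bias, laundered through data.
print(scores["womens_chess_club"])  # 0.0
print(scores["chess_club"])         # 1.0
```

The point of the sketch is that nothing in the code mentions gender; the bias arrives entirely through the historical outcomes the model is trained to imitate.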

In order for AI to be useful, the information fed into the algorithm needs to be free from bias, or the algorithm needs to be explicitly programmed to ignore it. Learn more about bias in AI from the infographic below.
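One crude version of "explicitly programmed to ignore bias" is to strip known proxy features from the input before the model ever sees them. This is a hypothetical sketch with an invented token list; in practice proxies are rarely this obvious, and finding them is the hard part.

```python
# Invented list of tokens known to act as proxies for a protected attribute.
PROXY_TOKENS = {"womens_chess_club", "sorority"}

def strip_proxies(tokens):
    """Remove known proxy tokens so the model cannot score them."""
    return [t for t in tokens if t not in PROXY_TOKENS]

print(strip_proxies(["python", "womens_chess_club"]))  # ['python']
```

Filtering like this only catches the proxies you already know about, which is why auditing the training data itself matters just as much.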
Infographic Source: https://cybersecuritydegrees.com/ethical-ai/
[Infographic: EthicalAI.png]


Thanks for sharing this information. Unethical AI may be an issue for specific groups at first, but eventually it becomes a problem for the whole world. As our society leans more on AI, it's critical we sort out the issues of bias and ethics now, before the consequences of these problems multiply, as you mentioned. I can only hope for the best.