RE: Killer Robots: Artificial Intelligence and Human Extinction

in #anarchism • 8 years ago (edited)

Good point, but truthfully I'm not sure.

It gets very tricky. Remember, an AI will take its values to their logical conclusions. If the directive covers all life, a rational AI may conclude that humans are very dangerous to other life forms, which we are. Conversely, imagine an AI instructed to "generally preserve" humans. The result could be an AI that over-protects humans from all risk whatsoever.

Also, just look at your wording: "generally" and "preservation."

Which cases does "generally" not cover? And what exactly are we trying to preserve? Life? OK, here come the over-protective robots. Fine, let's give humans some freedom too. But now there's an obvious grey area in the definition.
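A toy sketch of that over-protection point (all actions, numbers, and names below are hypothetical, made up purely for illustration): an optimizer handed only "preserve humans," read literally as "minimize expected deaths," will pick the most restrictive option every time, because nothing in its objective values freedom.

```python
# Toy illustration of literal objective maximization (hypothetical example,
# not a real AI system): the agent scores actions ONLY on the stated goal.

def choose_action(actions, objective):
    """Return the action scoring highest on the literal objective."""
    return max(actions, key=objective)

# Hypothetical outcomes: action -> (expected deaths per year, freedom score).
actions = {
    "do nothing": (60.0, 1.0),
    "ban cars": (20.0, 0.7),
    "confine everyone indoors": (0.1, 0.0),  # "safest" by the literal metric
}

# "Preserve humans" read literally: minimize deaths, ignore everything else.
def preserve_humans(action):
    deaths, _freedom = actions[action]
    return -deaths

best = choose_action(actions, preserve_humans)
print(best)  # → confine everyone indoors
```

The grey area in "generally preserve" is exactly the missing term here: unless the trade-off against freedom is written into the objective, the optimizer treats it as worth zero.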

Please look up Bostrom's paperclip maximizer thought experiment.

I hope this is making some bit of sense :)


Thank you @nettijoe96 , I'll Google up Bostrom's paper clip maximizer.