
RE: Are we able to build AI that will not ultimately lead to humanity's downfall?

in #science · 6 years ago

I don't believe they would try to exterminate us because we are in the way (as you describe). Nor do I believe they would become intelligent and, in a nanosecond, decide to kill us off (as in the movie Terminator). The real danger is that evil humans will build armies of robots that will annihilate humanity. The problem is not controlling the technology; it is stopping evil human beings. The question is: what kind of human being would actually do this, and how would you stop them?


It's an arms race. The first-mover advantage will grant unprecedented power, and this in itself will lead to conflict.

They could have done this with nuclear weapons, but it did not happen. So why would it happen with AI weapons?