RE: Stop The Steem Of Hate Rising

in #steem • 8 years ago

That was so fun to read! Nice job.

About the Trolley Problem:
I agree that humans' emotional decisions during crises are often pretty bad (or non-existent, if people panic), but does that mean that deciding based on pure logic is better?

Kant thought that in making any moral decision, we should act as if the decision we make will become a universal law that must be followed by everyone from that moment on. If we had an AI that made all its decisions logically, wouldn't its decisions be based on some set of programmed laws or axioms? It would be acting according to a categorical imperative, and I think it's been shown that this could lead to some very bad decisions and f-d up scenarios.

To be really alive, wouldn't it have to have the ability to make decisions based on logic combined with some other factors?

About the human machines/AI:
Why do we always make the AI so humanoid in movies!? That's what we know, right? But in movies, people attribute human feelings and motives to the machines, which causes some trouble. What do you think about this @cryptogee?

What if we tried to make something alive but as radically different from us as possible?

@moksha, when creating lookalike robots, we have to cross the "uncanny valley".
This valley is the zone where humans feel uncomfortable around humanoid things that appear almost like real humans but still have some subtle differences.

Humanoid robots have a fast adoption curve, right up until they become almost, but not quite, human.
Looks like you're currently in your own uncanny valley. Get out of there ;)

@arcange, I'm not going to deny that. Lol.
