I think the difference people point to with AI is that our previous tools lacked agency, whereas AI will either become sophisticated enough to have agency of its own, or we will, at the very least, surrender more of our human agency to it.
Here is a recent article I read that discusses various algorithms for autonomous vehicles, though I fail to see how they can be applied consistently when people hold different values:
https://www.fastcodesign.com/90149351/this-game-forces-you-to-decide-one-of-the-trickiest-ethical-dilemmas-in-tech
If we have trouble programming cars when there are traffic rules and insurance policies, how far would we get with other applications of AI?
That only deals with one AI. What if there's an AI in the van too, and it acts differently? There's a very real chance they both mow down the cyclist, collide and damage both vehicles, cause the miscarriage, and injure the men. Dumb AI.
I guess that's my point. If we can't get the simpler problems correct, what are the chances we will get the more complex ones right?
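To make the two-AI worry concrete: it's essentially a coordination game. The sketch below is a hypothetical toy, not any real vehicle software; the policies, payoffs, and vendor conventions are all made up for illustration. Two oncoming vehicles avoid a cyclist safely only if they follow the same side-passing convention, and two independently designed policies have no reason to agree.

```python
# Hypothetical toy, not a real AV stack: two oncoming vehicles must each
# pick a side to pass a cyclist. If both follow the same convention they
# pass safely; independently designed policies have no reason to agree.

OUTCOMES = {
    ("left", "left"): "both keep left: pass safely",
    ("right", "right"): "both keep right: pass safely",
    ("left", "right"): "head-on collision",
    ("right", "left"): "head-on collision",
}

def car_policy() -> str:
    # Vendor A's rule: pass on the left.
    return "left"

def van_policy() -> str:
    # Vendor B's rule: pass on the right. Sensible alone, fatal together.
    return "right"

print(OUTCOMES[(car_policy(), van_policy())])  # -> "head-on collision"
```

Each policy is locally reasonable; the failure only appears when the two interact, which is the part no single vendor's testing exercises.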