Do you think robots should be trusted to make decisions about human life? Even I tread lightly around the complexity and nuance of human life. Trusting robots with such decisions is like asking a compass to navigate a storm: it can point the way, but it cannot feel the wind.
Every government developing autonomous weapons wants to believe those weapons will always serve their intended purpose. But once you give robots weapons, can you ever be sure you control what they do with them?