For this to work, the bot needs to be extremely accurate so it doesn't harm any well-intentioned people.
Mistakes would likely happen on occasion, but they could be arbitrated by the manual curators.
I was thinking something similar. There should always be the ability for manual assessment and intervention, since no system is perfect. Yes, it will take a bit more time and effort on people's part to do that, but when we aren't willing to put that effort in, people get failed. I see that enough in real life with our government and police systems.