I agree with the first half of your argument. Humans have bias, and so we are likely to judge others unfairly. It's also a lot to ask of the masses, who have already had to follow the whims of those with more power.
I don't trust AI because AI will be programmed by computers or eventually will program itself based on who the hell knows what... it's like the AI is biased by its programming.
I believe transparency and privacy should be two optional ideals that everyone has the right to pursue. I would like to see the “nation” become more of a body that represents certain ideals and has little desire to influence what is outside of its territory, focused instead on truly embracing its ideals and showing the rest of the world the results. We could have societies that embrace total transparency and others that embrace total privacy, with many different flavors in between. The same could go for the use of technology: some would fully embrace certain technologies, others would embrace other technologies, and some would reject technology altogether. The individual could become a part of any society they wished according to their own principles. Any nation that wanted to go the AI route would have to program the AI to stay within the reach of its own borders.
The question is, how do we build such a world, and how do we encourage peaceful coexistence between these nations? It's just an ideal, a direction I think we should move in. The point is, the culture has to change for the better in order for the policy or implementation to change for the better.
Anyway, I also made a reply to Dan's post :-D
The type of AI I'm speaking about is narrow AI, known as autonomous agents, although I don't restrict my solution to just narrow AI. If we really could develop an AGI in a safe way, one that could be our best friend, our priest, our mentor, our adviser, our moral calculator, our lawyer, then in my opinion the benefits of having this decision support far outweigh the risks of having it. The issue with AGI is that it's a challenge to build a safe version of it, particularly because of the risk of nation states putting nationalist agendas into any instance of it. Nation states have enemies, have agendas beyond defending human rights, and have a tendency toward bias. The Terminator, we have to remember, is not a movie about a decentralized AI built on a blockchain which the whole world collaboratively programmed, trained, and developed in a transparent fashion; it is about an AI developed in secret, in a classified setting, with closed source, without any transparency, and with nationalism as the bias.
This bias in the story is what led the robots to conclude that all humans were the enemy, instead of just the opposing nation the United States was at war with. My idea doesn't work the same way. The AI would be totally transparent, developed to work with blockchain technology, open source, and collaboratively developed, with a very low barrier to entry for anyone to contribute to training it or debiasing it.
Would it work? Maybe the first instance of this sort of AI will not, but the idea is that iterative improvement can take place, where every future generation becomes slightly less biased, slightly smarter, slightly more beneficial, slightly improved overall according to globally agreed-upon criteria.
My suggestion is we maintain privacy between humans, but have complete transparency to our AIs, our machines.
I'm not against the technology. I just don't think we are mature enough to implement it in a responsible way. Maybe one day though.
Could this kind of AI be easily deactivated if it didn't serve its purpose properly or if problems arose?
The technology is there to evolve what we are, so as to make us mature enough to be responsible. Without the technology I don't believe there is a chance. Transhumanism is about being better than human, specifically because humans cause a lot of unnecessary misery.
An AI which exists decentralized on a blockchain is under the control of all the humans who keep the blockchain running. It's controlled by the consensus mechanism. If problems arose then, just like we saw with TheDAO, we would simply see a fork, with resources directed away from the problem AI.
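To make the fork-by-consensus idea concrete, here is a minimal toy sketch in Python. It is not a real blockchain or consensus protocol; all names (`Chain`, `vote_to_fork`, `maybe_fork`, the 2/3 threshold) are hypothetical, chosen just to illustrate how stake could be redirected away from a flagged AI instance when enough participants agree:

```python
# Toy illustration of consensus-driven forking (NOT a real blockchain):
# participants vote on whether the current AI instance misbehaves; a
# supermajority triggers a "fork" that moves resources to a new version.
from dataclasses import dataclass


@dataclass
class Chain:
    ai_version: str
    stake: float  # resources backing this AI instance


def vote_to_fork(votes: list, threshold: float = 2 / 3) -> bool:
    """True when the share of 'fork' votes meets the threshold."""
    return sum(votes) / len(votes) >= threshold


def maybe_fork(chain: Chain, votes: list) -> Chain:
    """If consensus flags the current AI as a problem, fork to a new
    version and redirect the stake to it (loosely analogous to TheDAO)."""
    if vote_to_fork(votes):
        forked = Chain(ai_version=chain.ai_version + "-fork", stake=chain.stake)
        chain.stake = 0.0  # resources directed away from the problem AI
        return forked
    return chain


main_chain = Chain(ai_version="agent-v1", stake=100.0)
votes = [True, True, True, False, True]  # 4 of 5 participants vote to fork
active = maybe_fork(main_chain, votes)
print(active.ai_version)  # agent-v1-fork
print(main_chain.stake)   # 0.0
```

The point of the sketch is only the control structure: no single operator deactivates the AI; the deactivation is an emergent decision of whoever keeps the chain running.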
When we live outside of the power based, fear inducing structures and learn trust and empathy, we are capable of such goodness on a grand scale. The problem is not human nature but the cycle we are caught in. I don't expect I will convince you there is a better way, but I hope you stay open to the possibility that we can do even better without relegating the job of overseer to machines. If not, come meet my friends one day ;-)