
RE: Total transparency benefits the top of the pyramid and may not actually work as intended (there are costs)

in #politics7 years ago (edited)

You assume AI is some centralized thing controlled by "them" which you'll be forced to give power to. I never said give any power to the AI. I said allow people the option of consulting with AI. In other words, I do not promote, for example, government-built AI which everyone then has to hand all decision making over to. I promote decentralized personal AI, where each of us has our own decision-support network that recommends decisions to us. It cannot decide for you, and it cannot control or manipulate you; all it can do is help you decide. When it is decentralized and done right, each individual gets to decide how much or how little to trust the AI, and only people who trust it completely to make decisions on their behalf will have the AI doing that.
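A minimal sketch of what such a recommendation-only personal AI might look like, assuming a hypothetical `DecisionSupport` class, an `Option` type, and a user-set `trust_level`. None of these names come from a real system; the point is only to illustrate that the agent ranks options against a value model the owner chose, and the owner keeps the final decision unless they explicitly opt into full delegation.

```python
# Hypothetical sketch: a recommendation-only personal decision-support agent.
# Names (DecisionSupport, Option, trust_level) are illustrative, not from any real library.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Option:
    name: str
    # Score supplied by the owner's personally chosen model of their own values.
    value_alignment: float  # 0.0 (conflicts with my morals) .. 1.0 (fully aligned)


class DecisionSupport:
    def __init__(self, value_model: Callable[[Option], float], trust_level: float = 0.0):
        # value_model is selected and owned by the individual, not a central provider.
        # trust_level is how much the owner delegates: 0.0 = advice only, 1.0 = full delegation.
        self.value_model = value_model
        self.trust_level = trust_level

    def recommend(self, options: List[Option]) -> List[Option]:
        """Rank options by the owner's own value model; never acts on them."""
        return sorted(options, key=self.value_model, reverse=True)

    def decide(self, options: List[Option],
               owner_choice: Callable[[List[Option]], Option]) -> Option:
        ranked = self.recommend(options)
        if self.trust_level >= 1.0:
            # Only an owner who explicitly opted into full trust lets the AI pick.
            return ranked[0]
        # Default: the AI advises, the person decides.
        return owner_choice(ranked)


# Example: the owner keeps trust_level at 0 and always makes the final call.
assistant = DecisionSupport(value_model=lambda o: o.value_alignment, trust_level=0.0)
choices = [Option("share publicly", 0.3), Option("share with friends", 0.8)]
picked = assistant.decide(choices, owner_choice=lambda ranked: ranked[0])
print(picked.name)  # the owner happened to accept the top recommendation
```

The design choice the sketch tries to capture is that delegation is opt-in per individual: nothing decides on the owner's behalf unless the owner has set their own trust level to full.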

Transparency pushed on everyone is a different deal. Not everyone is prepared to live under that, because not everyone is familiar with life in a small town. Personally, if I have to be transparent, I would trust my machines and my AI more than any person, provided the AI has exactly my morals because I selected it. That AI, which I can trust and be transparent to, could see everything, be my character witness, and solve the same problem Dan wants to solve using the crowd. The difference is that the AI can solve the problem without loss of privacy, without unnecessary suffering, and actually improve my ability to be moral (Dan's transparency does nothing to help people be better).

It is easy to punish people in the short term for being bad (or just stupid), but it is very hard to help people become less bad over time. The issue I have with Dan's solution is that it's more of the same of what we already have: it makes it easier to punish "wrongdoers" and find fault, but offers absolutely no help for people to improve, be moral, or avoid being punished. In other words, it's a new trap, with the blockchain capturing every mistake for later punishment. My idea is to help people reduce the risk of being caught in these traps in the first place, by using the technology to help them make wiser decisions.