You are viewing a single comment's thread from:

RE: Should we let our soon to be AI assistants tell us who to vote for?

in #politics · 7 years ago (edited)

They are actually less controlled by the public. It's easier to buy a token from an exchange than it is to buy Google shares, is it not? Also, Google shares are very expensive and not widely distributed. In fact, almost half of Americans don't own any stocks at all.

Centralized AI is controlled by a public, but not necessarily the public. First, if the code isn't transparent, then you don't know for sure what the AI is doing. Second, suppose you want to trust an AI to tell you whom to vote for, but its code is not known to you and the company behind it is influenced by public opinion. How can you be 100% certain that the advice you get from the AI isn't also influenced by that same public opinion, which moves the share price and sways the shareholders? The AI could, for example, suggest the candidates best for the profitability of Google, for the benefit of Google shareholders, and not actually in your best interest.

On a blockchain, you would be able to know that your personal AI assistant isn't going to be backdoored, that the results or feedback it gives you aren't going to be censored, and that the AI isn't going to violate your personal morality in favor of what the public thinks is right at the time. It's the balance between being an individual and being part of a collective.

To have an individual mind, in my opinion, you need independence. You cannot get independence if you trust a centralized organization to think for you. As for whether decentralization is dangerous or safe, it's actually potentially safer: an AI which is ethically aware would know the same public sentiment that any corporation knows, but it would be like your personal corporation in a sense.

If we think of intelligent agents as corporations in a legal sense, then these are our little personal businessmen, who can be just as legal and just as ethical as Google, but who follow our personal morality, which is something Google cannot give us. Google can only give us the collectively defined "right and wrong" based on what is perceived as good and bad in public sentiment. If a person can only think through Google, then that person can never take the risk of going against the currently perceived "right" and "wrong" defined by public sentiment. That potentially makes all thoughts censored by the collective, and all suggestions on whom to vote for could be skewed as well.

Sometimes it might be important to have the ability to give more weight to your personal moral goals than to the mere public perception of right and wrong. In that instance, you would need a decentralized AI, so you could ask it for advice on, for example, whom to vote for, even if the best person to vote for is unpopular, controversial, ugly, etc.