Very interesting, and a very good article. Do you think AI in the hands of companies like Google is less dangerous because these companies are more controlled by the public?
They are actually less controlled by the public. It's easier to buy a token from an exchange than it is to buy Google shares, is it not? Google shares are also very expensive and not widely distributed. In fact, almost half of Americans don't own any stocks at all.
Centralized AI is controlled by a public, but not necessarily the public. First, if the code isn't transparent, then you don't know for sure what the AI is doing. Second, if you want to trust an AI to tell you who to vote for, and its code is not known to you, and the company is also influenced by public opinion, then how can you be 100% certain that the advice you get from the AI isn't also influenced by that same public opinion which moves the share price or sways the shareholders? The AI could, for example, suggest the candidates best for the profitability of Google, for the benefit of Google shareholders, and not actually in your best interest.
On a blockchain you would be able to know that your personal AI assistant isn't backdoored, that the results or feedback it gives you won't be censored, and that the AI won't violate your personal morality in favor of whatever the public thinks is right at the time. It's the balance between being an individual and a collective.
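To make the "not backdoored" part concrete: if a model's weights are published and their hash is recorded on-chain, anyone can check that the copy they are actually running matches the public record. A minimal sketch in Python, assuming such a record exists; the file path and the published digest here are hypothetical placeholders, not values from any real contract:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical: the digest the community recorded on-chain when the
# model was released. In practice you would read this from a smart
# contract or a signed transaction rather than hard-coding it.
PUBLISHED_HASH = "<digest recorded on-chain>"

def model_is_untampered(model_path: str, published_hash: str) -> bool:
    """True only if the local model bytes match the public record."""
    return sha256_of_file(model_path) == published_hash
```

The point isn't this particular code; it's that the check is something you can run yourself instead of trusting the company's word.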
To have an individual mind, in my opinion, you need independence, and you cannot get independence if you trust a centralized organization to think for you. In terms of dangerous versus safe, a decentralized AI is actually potentially safer, because an AI which is ethically aware would know the same public sentiment that any corporation knows, but it would act like your personal corporation, in a sense.
If we think of intelligent agents as corporations in a legal sense, then these are our little personal businessmen, who can be just as legal and just as ethical as Google, but who follow our personal morality, which is something Google cannot give us. Google can only give us the collectively defined "right and wrong" based on what public sentiment perceives as good and bad. If a person can only think through Google, then that person can never take the risk of going against the currently perceived "right" and "wrong" defined by public sentiment. That potentially makes all thoughts censored by the collective, and all suggestions on who to vote for could be skewed the same way.
Sometimes it might be important to be able to give more weight to your personal moral goals than to the mere public perception of right and wrong. In that case you would need a decentralized AI, so you can ask it for advice on, for example, who to vote for, even if the best person to vote for is unpopular, controversial, ugly, etc.
To put it more simply, since the first response was long and not very clear:
This is the difference between the mainframe (Google) and the personal computer. People used to wonder why anyone would ever need a personal computer; well, now we know why. The same is true for personal agents, or personal AI. If you want to be able to trust it with your most personal problems, then perhaps you want to know, as close to 100% as possible, that it's aligned to you. Making it safe requires self-regulation, which means your personal AI has to be ethically aware, morally aware, and legally aware, while pursuing your goals as you define them within that awareness.
So your agent might be given a goal, like making money for you as a personal business, and the agent would then, with full understanding of the laws, of ethical standards, and of your morals, be able to generate content and earn an income for you. Suppose there is a current debate going on about, for example, the morality of pornography, and the agent is smart enough to monitor public opinion on that topic to determine the best time to enter that market. If nothing in your personal morality restricts it from doing so, then when public sentiment shifts in favor, it will enter that market, perhaps by buying stock in companies in that industry, or more directly.
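The decision rule being described is simple enough to sketch. Here the blocklist, the threshold, and the sentiment value are all invented placeholders, not a real data feed; a real agent would need actual legal and ethical models behind each check. The ordering is the point: personal morality is a hard veto, and public sentiment only times the entry.

```python
# Toy sketch of the decision rule above. Blocklist, threshold, and
# sentiment values are hypothetical placeholders, not a real feed.

PERSONAL_MORAL_BLOCKLIST = {"weapons"}  # markets the owner forbids outright
SENTIMENT_THRESHOLD = 0.6               # enter only above 60% public approval

def should_enter_market(market: str, public_sentiment: float) -> bool:
    """Personal morality is a hard veto; sentiment only times the entry."""
    if market in PERSONAL_MORAL_BLOCKLIST:
        return False                    # the owner's morality overrides profit
    return public_sentiment >= SENTIMENT_THRESHOLD

print(should_enter_market("adult_entertainment", 0.45))  # False: too early
print(should_enter_market("adult_entertainment", 0.72))  # True: sentiment shifted
```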
So what is the point? The point is that you don't need Google to keep you safe. You just need an AI smart enough to be aware of the same laws, ethics, and morals that Google as a company is aware of. The rest is just cost-benefit calculation on which actions are worth the risk and which are not.
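That cost-benefit calculation is, at its simplest, an expected-value comparison. A toy example, with the probability and dollar figures invented purely for illustration:

```python
# Minimal expected-value version of that cost-benefit calculation.
# The probability and dollar figures are invented for illustration.

def expected_value(p_success: float, gain: float, loss: float) -> float:
    """Expected payoff: probability-weighted gain minus probability-weighted loss."""
    return p_success * gain - (1.0 - p_success) * loss

ev = expected_value(p_success=0.7, gain=1000.0, loss=400.0)  # 700 - 120 = 580
print(ev, ev > 0)  # 580.0 True: the action is judged worth the risk
```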
Well... who knows where all this leads. Steemit is a good example to study... even simple bots that are not "carriers" of AI, and only do things for their "owners" or makers, react faster, never forget, and are not emotional... they are changing the way we interact... not very romantically... anyway, it might be better than centralized manipulation by a capitalistically-minded company like Google.