Should we let our soon-to-be AI assistants tell us who to vote for?

in #politics, 7 years ago (edited)

Intelligent agents that help us vote better are on the way

We are currently entering the phase where AI is making its impact on politics. During the Trump election we saw the growth of fake news, AI Twitter bots, and AI journalism. In the near future we will all have access to AI assistants which will be able to analyze the facts while also understanding our self-interest (goals), morality, and the law. This next generation of AI assistants may come in both centralized, less-than-transparent forms and in decentralized, blockchain-enabled forms. The major question we now have to ask is what impact this breed of AI will have on the 2018–2020 elections, not just in the United States but worldwide.

An AI assistant can also be considered an autonomous agent. It can act on behalf of the operator and become part of their "digital body" and "extended mind". This AI assistant may not have the legal authority to directly vote on our behalf, but it can research 24/7 and continuously update its suggestion of who we should vote for.

The problem with centralized AI offering "assistance"

When a company like Apple, Google, or others offers a centralized solution, we have to recognize that every company is biased. The bias comes from the fact that these are public companies with known shareholders who can be pressured by the general public. So if Google were, for example, to offer a voting assistant which could suggest who we should vote for, then politics could get involved to the point where people cannot trust Google or its advice. On the other hand, if a decentralized AI exists which can offer AI assistant technology, then that AI can truly be trusted to become the extended mind of the individual. That AI could suggest who an individual should vote for without the opacity and manipulation risks of a centralized version.

The importance of moral, ethical and legal efficiency

The final point I want to make is based on a previous post [1] where I introduced the concept of legal efficiency. I defined legal efficiency as follows:

Legal efficiency is a term I use to represent a measurable benefit whereby a project or organization can achieve its goals with minimal legal costs. Legal costs can be measured in the form of legal risks (risk of lawsuits, regulatory risk, etc.). For a project to be efficient, its resources must not be wasted, and legal expenses cost projects a lot.

Put more simply, an intelligent agent is more legally efficient if it can get more done (achieve its objectives) with less red tape and regulatory resistance. In essence, if the intelligent agent can follow the law perfectly while also completing your tasks, then its legal efficiency, and your own, increases: it can do more with less cost measured in terms of legal risk, as sketched below.
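To make that concrete, here is a minimal sketch in Python of legal efficiency as a ratio of objectives achieved to legal risk cost. Everything here (the function name, the scoring, the numbers) is hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: legal efficiency as goals achieved per unit of legal risk.
# None of this is a real API; the numbers are made up for illustration.

def legal_efficiency(objectives_achieved: float, legal_risk_cost: float) -> float:
    """Higher is better: more objectives completed per unit of expected legal cost."""
    if legal_risk_cost <= 0:
        return float("inf")  # no legal exposure at all
    return objectives_achieved / legal_risk_cost

# Two agents complete the same 10 tasks, but one takes on far more legal risk:
print(legal_efficiency(10, 2.0))  # 5.0  -> more legally efficient
print(legal_efficiency(10, 8.0))  # 1.25 -> less legally efficient
```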

Legal efficiency alone isn't enough, because it's not enough just to do what is legal. Reputation matters, and because of the importance of reputation there is a cost to being unethical and immoral. For this discussion I distinguish between "ethics" and "morality": ethics are the professional or societal ethical standards (what society expects you to do in a given situation), while morality is the personal opinion you hold on what is right and wrong.

An intelligent agent is ethically efficient if it is ethically aware, which means it has to be aware of current public sentiment (public expectations) and of social norms. An intelligent agent which can be respectful of both public sentiment and social norms (both of which are always changing) can avoid unnecessary conflicts (reduce costs), and the result is ethical efficiency.

An intelligent agent is morally efficient if it can accomplish the goals of the individual while adhering as much as possible to their personal morality, doing cost-benefit analysis on potential conflicts among personal morality, public sentiment (ethical standards), and the law. If an intelligent agent can somehow become smart enough to align as closely as possible with the personal morality of the individual it's acting on behalf of, then it is morally efficient, though of course this can be hard to achieve.
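One way to picture that cost-benefit analysis is the sketch below. It is purely illustrative: the weights, scores, and action names are all invented, and a real agent would need far richer models of law, sentiment, and personal morality:

```python
# Hypothetical sketch of the cost-benefit analysis described above.
# Scores are in [0, 1]; weights reflect how much the operator cares about each factor.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    benefit: float      # expected progress toward the operator's goals
    legality: float     # 1.0 = clearly legal, 0.0 = clearly illegal
    ethical_fit: float  # alignment with current public sentiment / social norms
    moral_fit: float    # alignment with the operator's personal morality

def score(a: Action, w_legal=0.5, w_ethics=0.2, w_moral=0.3) -> float:
    """Benefit discounted by weighted legal, ethical, and moral costs."""
    cost = (w_legal * (1 - a.legality)
            + w_ethics * (1 - a.ethical_fit)
            + w_moral * (1 - a.moral_fit))
    return a.benefit - cost

actions = [
    Action("publish an investigative article", 0.8, legality=0.9, ethical_fit=0.6, moral_fit=0.9),
    Action("buy voter data of dubious origin", 0.9, legality=0.2, ethical_fit=0.3, moral_fit=0.4),
]
best = max(actions, key=score)
print(best.name)  # the agent suggests the action with the best risk-adjusted benefit
```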

So what can we ultimately have? We can have intelligent agents which ideally become extensions of our mind and body, helping our actions be as moral, ethical, and legal as possible while also accomplishing our goals. In a sense, more benefit with less risk, empowering the individual to achieve their objectives.

References

  1. https://steemit.com/blockchain/@dana-edwards/legal-efficiency-in-blockchain-projects

Freedom of choice must not be taken away by AI.

Great post. Just to add that sometimes public sentiment can be seriously flawed by the government's manipulation of the media. If the AI relies on public sentiment, it will be biased towards what the government wants us to believe. A current example is the (in fact illegal) mass migration into central Europe, where we see a strong bias in the public media towards the benefits of the migration and suppression of its negative consequences. I have no idea how it could ever be accomplished that an AI screens through the internet, picks up all of the very contrary information, and summarizes this into an opinion. Probably as impossible as defining the truth out of 1000 contradicting facts.

Exactly why I mentioned personal morality. Sometimes it is the case that public sentiment (and ethics) is manipulated by propaganda from many different foreign governments and/or the local government. The AI has to not "rely" on public sentiment but factor it into its cost-benefit analysis. It's the same thing corporations, and to some extent individuals, do.

Everything is very nice, but you are using the AI just to maintain the status quo we have had until now, which is not the best! I will ask just one thing... if we start to use AI for everything... what do we need humans for? Maybe it would be better to fix how the government works, how politics is done, and so on... but of course it is easier to let others think for us and do the dirty work so we no longer feel guilty about our mistakes.

Nice post and great information 👍

Very interesting. Very good article. Do you think AI in the hands of companies like Google is less dangerous because these companies are more controlled by the public?

They are less controlled by the public, actually. It's easier to buy a token from an exchange than it is to buy Google shares, is it not? Also, Google shares are very expensive and aren't widely distributed. In fact, almost half of Americans don't own any stocks at all.

Centralized AI is controlled by a public but not necessarily the public. First, if the code isn't transparent, then you don't know for sure what the AI is doing. Second, if you want to trust an AI to tell you who to vote for, yet its code is not known to you and the company is influenced by public opinion, then how can you be 100% certain that the advice you get from the AI isn't also influenced by that same public opinion which influences the share price or the shareholders? The AI could, for example, suggest the candidates best for the profitability of Google, for the benefit of Google shareholders, and not actually in your best interest.

On a blockchain you would be able to know that your personal AI assistant isn't going to be backdoored, that the results or feedback it gives you aren't going to be censored, and that the AI isn't going to violate your personal morality in favor of what the public thinks is right at the time. It's the balance between being an individual and a collective.

To have an individual mind, in my opinion, you need independence, and you cannot get independence if you trust a centralized organization to think for you. If it's decentralized then, in terms of dangerous versus safe, it's actually potentially safer: an AI which is ethically aware would know the same public sentiment that any corporation knows, but it would be like your personal corporation in a sense.

If we think of intelligent agents as corporations in a legal sense, then these are our little personal businessmen who can be just as legal and just as ethical as Google, but following our personal morality, which is something Google cannot give us. Google can only give us the collectively defined "right and wrong" based on what is perceived as good and bad in public sentiment. If a person can only think through Google, then that person can never take the risk of going against the currently perceived "right" and "wrong" defined by public sentiment; potentially all thoughts are censored by the collective, and all suggestions on who to vote for could be skewed.

Sometimes it might be important to have the ability to give more weight to your personal moral goal than to the mere public perception of right and wrong. In that instance you would need a decentralized AI, so you can ask it for advice on, for example, who to vote for, even if the best person to vote for is unpopular, controversial, ugly, etc.

To put it most simply, because the first response was long and not clear:

  • Centralized Google can be thought of as a person, a friend whom you ask questions but who has his own self-interest.
  • Decentralized AI can be individualized and personalized, so that it becomes the extension of your mind and body; it is part of your digital self.

So this is the difference between the mainframe (Google) and the personal computer. People used to wonder why anyone would ever need a personal computer; well, now we know why. The same is true for personal agents, or personal AI. If you want to be able to trust it to deal with your most personal problems, then perhaps you want to know 100% that it's aligned to you as much as possible. Making it safe requires self-regulation, which means your personal AI has to be ethically aware, morally aware, and legally aware, but it pursues your goals as you define them within that awareness.

So your agent might be given a goal to accomplish a task for you, like making money as a personal business, and the agent would then, with full understanding of the law, of ethical standards, and of your morals, be able to generate content and make an income for you. Suppose there is a current debate going on about, for example, the morality of pornography, and the agent is smart enough to monitor public opinion on that topic to determine the best time to enter that market. If there is nothing specific in your personal morality restricting it from doing so, then when public sentiment shifts in favor of pornography it will enter that market, perhaps by buying stocks in companies in that industry, or more directly.
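As a toy illustration of that kind of gating, here is a sketch where the agent only acts when sentiment crosses a threshold and no personal moral constraint blocks it. The sentiment score, the threshold, and the blocklist are all assumptions invented for this example:

```python
# Toy sketch of sentiment-gated market entry; the sentiment score is assumed
# to come from some external monitoring system, which is not shown here.

def should_enter_market(topic: str,
                        public_sentiment: float,  # fraction of favorable opinion, 0..1
                        moral_blocklist: set,
                        sentiment_threshold: float = 0.6) -> bool:
    """Act only if the operator's morality permits it AND sentiment has shifted in favor."""
    if topic in moral_blocklist:
        return False  # personal morality overrides public opinion
    return public_sentiment >= sentiment_threshold

# The operator has no restriction on this topic, and sentiment is 65% favorable:
print(should_enter_market("adult entertainment", 0.65, moral_blocklist=set()))  # True
```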

So what is the point? The point is you don't need Google to keep you safe. You just need AI smart enough to have awareness of the same laws, ethics, and morals Google as a company is aware of. The rest is just cost-benefit calculations on which actions are worth the risk.

Well... who knows where all this leads. Steemit is a good example to study... even simple bots that are not "carriers" of AI and only do things for their "owners" or makers react faster, never forget, and are not emotional... they are changing the way of interaction... not very romantically... anyway, it might be better than centralized manipulation from a capitalistically minded company like Google.

For me that would be going too far; I don't want AI voting for me or even thinking for me. I like to research facts and things for myself and come to my own conclusions, which is why I watch very little MSM: it is filled with bias and propaganda. I would rather gather the facts on my own and then use critical thinking skills to base an opinion on.

Feel free to make your own mistakes. If and when this AI is ready then you'll be one of the people who reject it.

That I will, along with microchips inside my body

You know what I want AI to do? I want it to analyze all the great inventions man has made and tell me what I should invent next that would be successful. I then want another AI to make that invention. If you are a programmer who can program such an AI, HMU for this non-paid position.
Thank you! XD

I'm wondering if voting might not turn out better if we effectively forced people to think through what their real motivations and goals are before voting.

But of course that would probably not survive a constitutional challenge. The general belief is that even idiots have to be allowed to vote. :(

It is supposed to be easier to vote, with less ID required than you need to buy a bottle of beer or wine. Sad state of affairs.

The situation would be quite different in the US if voting were comparable to steemit.com, where those who pay the biggest taxes get the most weight in their vote.

I understand the logic completely: if you don't pay any taxes, why should you be allowed to vote on how high confiscatory tax rates are?

But of course intelligent agents are never going to help us get to the nirvana I am imagining.


Yeah, I think you need to let him


Awesome post!
