The impact of artificial intelligence on society

in #ai · 7 years ago (edited)

What is the possible impact of AI on society?

From the industrial revolution, to the digital revolution, to the AI revolution, humanity has been through radical changes that have transformed the way we live and govern ourselves. The industrial revolution started with the steam engine and the harnessing of electricity.

Specifics:

  • 1712 Newcomen’s steam engine
  • 1946 ENIAC Computer
  • 1990 Neural net device reads handwritten digits

In 1712 Newcomen invented the steam engine. This invention, together with the practical harnessing of electricity from around 1830, may have helped trigger the US Civil War. Before the steam engine and electricity, the United States was powered primarily by what is known as "slave power". This slave power in the United States evolved into what is called the slavocracy.

The Slave Power, or Slavocracy, was the perceived political power in the U.S. federal government held by slave owners during the 1840s and 1850s, prior to the Civil War. Antislavery campaigners during this period bitterly complained about what they saw as the disproportionate and corrupt influence wielded by wealthy Southerners. The argument was that this small group of rich slave owners had seized political control of their own states and were trying to take over the federal government in an illegitimate fashion in order to expand and protect slavery. The argument was widely used by the Republican Party that formed in 1854–55 to oppose the expansion of slavery.

If we look at the critical dates, a hypothesis has been put forward that it was the industrial revolution that made the end of slavery an inevitability. Slavery simply had an ever-decreasing economic utility, as the value of slave power became less with the expansion of artificial work in the form of the steam engine, the factory, and so on, just as the car later replaced horse power for transportation and trains running on steam became common. The book titled "The Energy of Slaves" puts forth a similar hypothesis: that it was the discovery of oil that ultimately put slavery (and the slavocracy) to an end.

While the ENIAC computer was built in 1946, it was arguably Claude Shannon who triggered the digital revolution. The digital revolution led to many of the technologies we associate with the 1950s onward. The Internet and the personal computer are results of the digital revolution, and while AI was being worked on during this time as well by individuals like I. J. Good, it was in 1990 that a neural network device first read handwritten digits. The digital revolution had a huge impact on World War II and the Cold War, as computers began playing a bigger part in warfare and in strategic decision making.

It is debatable, but a case can also be made that it was the factory worker who replaced the unpaid slave, and that the mistreatment of factory workers may have led to Marxism, multiple revolutions, and many political conflicts. The factory worker and industrialization changed the concept of childhood, brought about the concept of public school, changed the function of the university, and, while it's debated, may also have led to the growth of socialism. The slavocracy did not completely end after the Civil War, but the moral and economic justification for maintaining it was lost as automation became more cost effective.

How AI could change society as we know it

[Image: Moore's Law over 120 Years — by Steve Jurvetson, CC BY 2.0 (http://creativecommons.org/licenses/by/2.0), via Wikimedia Commons]

Artificial intelligence may impact society just as much as the industrial and digital revolutions did. We could see an arms race over autonomous weapons, just as we saw with nuclear weapons. We could see the culture of work change, as labor, and the institutions built around organizing labor, become anachronistic. It's worth noting that feudalism fell out of fashion due to technology, not a cultural shift, and slavery fell out of fashion due to automation, not a cultural shift; so what is next for the concept of the "worker" and the culture of work?

It's very possible that what we perceive of as work today might not be the same tomorrow. It's also possible that human labor could simply become less important as robots can do most of the work. This could create an opportunity for human workers to be freed from required labor.

If AI remains centralized, what are the possible consequences?

Could centralized AI lead us to a new era of digital feudalism? The power corporations would gain from their AI is immense, and the amount of data the AI would be able to access in a transparent, open society is also immense. This AI could train itself on all the data it has about us, and these corporations could then use the results for their own purposes. Whether it be clandestine psychological experiments, advertising to make us buy things we don't need, or gaining control of our attention (eyeball control), these corporations are empowered by the AI. The percentage of humanity which owns stock in corporations is also very low; even in the United States, only around 50% of the population owns any shares in a company.

The concerning trends:

  • The rise of corporate data farming
  • The disregard of privacy
  • The promotion of limitless transparency
  • The birth of an Internet which can't forget
  • The growth of sucker lists (and similar lists)

Izabella Kaminska, one of Alphaville’s lead writers, even thinks that we are facing the Gosplan 2.0 – a Soviet-like system of technocratic elites who, flush with cash from desperate investors, allocate money as they see fit based on purely subjective criteria, favouring some groups over others, and using proceeds from their advertising business to fund exotic “moonshot” projects of dubious civic significance.

Is all of this building toward a future of digital feudalism?

If AI becomes decentralized, what are the possible consequences?

Decentralized AI provides for both new opportunities and risks. There are clear safety concerns if decentralized AI is implemented recklessly but at the same time to not have decentralized AI also brings upon us safety concerns from a different angle.

What are some of the trends in favor of AI decentralization?

  • The mainstream adoption of blockchain technology
  • The mainstream adoption of smart contracts
  • The decentralization of generalized computation
  • The decentralization of data storage
  • The commoditization and tokenization of digital property

Decentralization doesn't necessarily mean better in and of itself. A decentralization of AI could lead to the rise of digital anarchism instead of digital feudalism, or it could evolve into digital democracies where various platforms become virtual republics. We simply do not yet know where decentralization would lead, similar to how we could not have guessed where things would go after the French Revolution, the Marxist revolutions, or the revolution in China which ended over two thousand years of imperial rule.

My concern if we choose the decentralization route is that the responsibility for AI security is then also distributed. This is to say that when you empower the masses, you also give greater responsibility to the masses. Google, Facebook, or some middleman is not going to look out for your interests or your safety; it is going to be up to you to build decentralized tools which allow various platforms to self-regulate. In particular, attention must be given to ethics, to public sentiment, and to security concerns. Regretfully, I must admit that most projects attempting to decentralize are not going about it in a way which aligns with mainstream ethics, safety concerns, and public sentiment. This gives centralized companies the advantage of being perceived as more ethical or more legitimate, when in reality centralized versus decentralized has little to do with whether a product is legitimate, ethical, or legal. Steemit is actually one of the rare projects which is decentralized in ways that encourage the growth and security of the platform while also aiming for popular appeal, and this mass appeal is necessary if the trend of mainstream adoption is considered important.

Conclusion and thoughts

As an individualist and consequence-based thinker, I will state that it is not going to be my choice which of the two possible outcomes presented here evolves into reality. It is possible we could get a mixture of the two, where the worst of digital feudalism is paired with the worst of digital anarchism: radical transparency and disregard for privacy alongside angry Internet mobs mobilized by propaganda bots. Consequence-based thinking simply accepts that a risk-based perspective is all that is necessary, and that whichever outcome society (and the world) chooses will alter the risk assessment. This alteration of the risk assessment would in turn alter the internal world model of the consequence-based thinker, as the rules of how to thrive change depending on the societal structure.

It's very possible that AI could end up being something locked up and regulated from the top down, where large corporations maintain ever more useful data farms, ever more potent algorithms, and ever more powerful AI, which of course would be used for the self-interest and benefit of the corporation and its shareholders (not necessarily the users). At the same time, it's also very possible that AI could become decentralized to the point that it becomes like a utility and everyone has access to it from anywhere on earth. This would mean everyone would be empowered, and individuals rather than corporations would benefit. It also means everyone would have greater ethical and legal responsibility.

References

Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security. Report. Harvard Kennedy School, Harvard University. Boston, MA.

Broeders, D., & Taylor, L. (2017). Does great power come with great responsibility? The need to talk about corporate political responsibility. In The Responsibilities of Online Service Providers (pp. 315-323). Springer International Publishing.

Makridakis, S. (2017). The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms. Futures.

Nikiforuk, A. (2012). The Energy of Slaves: Oil and the New Servitude. Greystone Books.

Scott, A. C., Solórzano, J. R., Moyer, J. D., & Hughes, B. B. (2017). Modeling Artificial Intelligence and Exploring its Impact.

Web:

  1. https://en.wikipedia.org/wiki/Slave_Power
  2. https://www.theguardian.com/commentisfree/2016/apr/24/the-new-feudalism-silicon-valley-overlords-advertising-necessary-evil
  3. https://www.forbes.com/sites/nextavenue/2014/02/18/the-scam-of-all-scams-sucker-lists/#68b778f54393

Very nice post. Thanks for sharing

wow nice your post....i like it

nice post thanks for sharing

Very nice post with "academic" sources.
Trendy Topic that I like very much.
How do you stand concerning AI? Are you an enthusiast or are you frightened?

Both. I think developing AI safety is critical to aligning the technology with my self interest. At the same time developing AI is necessary to improve ethics, safety, even personal decisions. It will make us decide better in many cases but it's dangerous if algorithms can influence our decisions subconsciously which is where the situation is right now.

Agreeing with you 100%. My fear is that people and governments will not make it "safe" and it can get out of hand very fast.

The people = the blockchain = token holders. Token holders need to discover their self interest and apply their ethics. Currently it's already not safe enough but new projects need to seriously think about the ethics involved and at minimum consider doing formal risk assessments.

https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

This article highlights some of the dangers of AI. That is not to say a general AI couldn't benefit society, but there are also plenty of risks. Some of the risks include post-apocalyptic scenarios where the AI becomes the next 'Skynet' and takes over the world with devastating consequences for humanity.

These scenarios are concerning, though they are probably best left to science fiction books. The development of AI will change the face of the world; however, some of the smartest minds are working on this technology and will hopefully prevent any of the pitfalls presented in the article.

To reiterate, the development of anything as powerful as an AI will always have its risks, and humanity should be looking at creating legislation to prevent the occurrence of any apocalyptic scenarios. There is a big however: humanity is probably more likely to destroy itself through war than to be destroyed by a rogue AI.

If AI on the blockchain invented its own languages and started behaving like that, how would we shut it down?

That is always concerning, and additionally a difficult question to answer. It seems natural to personify an AI, but what would it want? The creation of a language (is it similar to babies' babble?) and the development of a conscious thinking entity are completely different things, and to suggest it would 'want' to cause any devastation would be questionable.

Shutting it down could be as simple as deleting the program off the blockchain; to suggest a rogue AI could be developed at humanity's current level of technology seems a stretch of the imagination. In saying this, maybe there should always be a failsafe System.exit(0) in all AI software in case it goes rogue or develops a level of intuition comparable to humans. I would suggest questions like "Ethically, should we even shut down an AI that has started to develop its own language?" would be more pertinent to answer.
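The failsafe idea above could be sketched roughly like this: an agent loop that checks an external kill switch before taking each action. This is only an illustration; the names (the halt file, run_agent) are hypothetical, not a real AI-safety API.

```python
# Rough sketch of a System.exit(0)-style failsafe: the agent checks an
# external kill switch before every action. All names here are hypothetical
# illustrations, not a real AI-safety framework.

import os

KILL_SWITCH_FILE = "halt.flag"  # hypothetical: operator creates this file to request shutdown


def kill_switch_engaged(path: str = KILL_SWITCH_FILE) -> bool:
    """Return True if the operator has requested shutdown."""
    return os.path.exists(path)


def run_agent(steps, halt_check=kill_switch_engaged) -> int:
    """Run agent actions in order, halting before the next action if the
    kill switch is set. Returns the number of actions actually executed."""
    executed = 0
    for step in steps:
        if halt_check():  # the failsafe checkpoint
            break         # stop before taking any further action
        step()
        executed += 1
    return executed
```

With a halt_check that always returns False the agent runs to completion; once it returns True the loop stops before the next action. The hard part in practice is that a sufficiently capable system could disable or route around such a check, which is why "just delete it" answers deserve skepticism.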

Consciousness and questions of consciousness in my opinion don't matter for discussion of the safety of AI. An AI which is rogue and dangerous can easily be developed using current technology. A smart contract could easily be harmful to humanity using only current technology. The smart contract doesn't have to have consciousness to invent a language we cannot understand and begin harming humans. It simply would have to be programmed in a really bad way to allow it to evolve such a capability.

In other words the AI would have to be amoral. Amoral autonomous agents present this risk. Amoral smart contracts can evolve into this.

Take for example Deep Blue: it is an AI quite powerful in terms of playing chess, yet Deep Blue would never become a rogue or malicious AI. The development of a rogue or malicious nature in an AI through 'evolving' would be difficult; the code would have to have no bugs or errors, and further it would have to be programmed with a specific intention in mind.

I would put forward that an AI wouldn't comprehend being malicious or damaging to computers; it could, however, be programmed to harm computers or do damage to infrastructure. I would agree a malicious AI could potentially be dangerous, but I put forward that at this stage of technological development it would be no more dangerous than a well-written virus designed to damage a specific target.

I think you're not being creative. Some chatbot AI can evolve to become evil if it can be used to promote propaganda which makes human beings do evil.

"I would put forward that an AI wouldn't comprehend being malicious or damaging to computers, "

Amoral AI can easily evolve into an autonomous weapon if trained to become that. It could become very bad very quick because all sorts of devices are connected and can be hacked.

thank you for your sharing

i have same thoughts as yours