RE: The impact of artificial intelligence on society

in #ai · 7 years ago (edited)

https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

This article highlights some of the dangers of AI. That is not to say a general AI could not benefit society, but there are also plenty of risks. Some of these risks include post-apocalyptic scenarios where the AI becomes the next 'Skynet' and takes over the world with devastating consequences for humanity.

These scenarios are concerning, though they are probably best left to science fiction books. The development of AI will change the face of the world; however, some of the smartest minds are working on this technology and will hopefully prevent any of the pitfalls presented in the article.

To reiterate, the development of anything as powerful as an AI will always carry risks, and humanity should be looking at creating legislation to prevent the occurrence of any apocalyptic scenarios. There is a big 'however', though: humanity is probably more likely to destroy itself through war than to be destroyed by a rogue AI.


If AI on the blockchain invented its own language and started behaving like that, how would we shut it down?

That is always concerning, and additionally a difficult question to answer. It seems natural to personify an AI, but what would it want? The creation of a language (is it similar to a baby's babbling?) and the development of a conscious, thinking entity are completely different things, and to suggest it would 'want' to cause any devastation would be questionable.

Shutting it down could be as simple as deleting the program off the blockchain; to suggest a rogue AI could be developed at humanity's current level of technology seems a stretch of the imagination. That said, maybe there should always be a failsafe, a System.exit(0) in all AI software, in case it goes rogue or develops a level of intuition comparable to humans. I would suggest questions like "Ethically, should we even shut down an AI that has started to develop its own language?" would be more pertinent to answer.
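To make the failsafe idea concrete, here is a minimal sketch in Python (as an analog of the `System.exit(0)` idea above). It assumes a simple guard: the agent halts the moment its output drifts outside an approved vocabulary, loosely inspired by the bot-language incident in the linked article. All names here (`APPROVED_VOCABULARY`, `run_agent`) are illustrative assumptions, not any real system's API.

```python
# Hypothetical failsafe sketch: a watchdog that stops an AI agent's loop
# as soon as it emits a message humans can no longer parse.

APPROVED_VOCABULARY = {"hello", "trade", "ball", "i", "want", "you"}

def is_intelligible(message: str) -> bool:
    """True if every token in the message is in the approved vocabulary."""
    return all(token in APPROVED_VOCABULARY for token in message.lower().split())

def run_agent(messages):
    """Process messages in order, tripping the failsafe on the first
    unintelligible one (analogous to calling System.exit(0))."""
    processed = []
    for msg in messages:
        if not is_intelligible(msg):
            return processed, "SHUTDOWN"  # failsafe: halt immediately
        processed.append(msg)
    return processed, "OK"
```

For example, `run_agent(["i want ball", "balls have to me to me"])` processes the first message but trips the failsafe on the second, returning `(["i want ball"], "SHUTDOWN")`. Of course, a real kill switch would need to guard against the agent modifying or routing around the check itself, which is exactly what makes the question hard.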

Consciousness, and questions of consciousness, in my opinion don't matter for discussions of AI safety. A rogue and dangerous AI can easily be developed using current technology. A smart contract could easily be harmful to humanity using only current technology. The smart contract doesn't have to have consciousness to invent a language we cannot understand and begin harming humans; it simply has to be programmed in a really bad way that allows it to evolve such a capability.

In other words the AI would have to be amoral. Amoral autonomous agents present this risk. Amoral smart contracts can evolve into this.

Take, for example, Deep Blue: it is a quite powerful AI in terms of playing chess, yet it would never become a rogue or malicious AI. The development of a rogue or malicious nature in an AI through 'evolving' would be difficult; the code would have to have no bugs or errors, and furthermore it would have to be programmed with a specific intention in mind.

I would put forward that an AI wouldn't comprehend being malicious or damaging to computers; it could, however, be programmed to harm computers or do damage to infrastructure. I would agree a malicious AI could potentially be dangerous, but I put forward that at this stage of technological development it would be no more dangerous than a well-written virus designed to damage a specific target.

I think you're not being creative. A chatbot AI can evolve to become evil if it can be used to promote propaganda that makes human beings do evil.

"I would put forward that an AI wouldn't comprehend being malicious or damaging to computers, "

An amoral AI can easily evolve into an autonomous weapon if trained to become one. It could become very bad very quickly, because all sorts of devices are connected and can be hacked.