You are viewing a single comment's thread from:

RE: The impact of artificial intelligence on society

in #ai · 7 years ago (edited)

That is always concerning, and also a difficult question to answer. It seems natural to personify an AI, but what would it want? The invention of a language (is it similar to a baby's babbling?) and the development of a conscious, thinking entity are completely different things, and to suggest it would 'want' to cause any devastation is questionable.

Shutting it down could be as simple as deleting the program off the blockchain; to suggest a rogue AI could be developed at humanity's current level of technology seems a stretch of the imagination. That said, perhaps there should always be a failsafe System.exit(0) in all AI software in case it goes rogue or develops a level of intuition comparable to a human's. I would suggest questions like "Ethically, should we even shut down an AI that has started to develop its own language?" would be more pertinent to answer.
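As a rough illustration of the System.exit(0) failsafe idea, here is a minimal Java sketch. The class and method names (FailsafeAgent, tripKillSwitch, doWorkCycle) are hypothetical, invented for this example; the point is only that the agent checks an externally settable kill switch before each unit of work:

```java
// Hypothetical sketch of a failsafe: a kill switch the agent checks
// between work cycles. All names here are illustrative, not from any
// real AI framework.
public class FailsafeAgent {
    // volatile so a trip from a monitoring thread is seen immediately
    private volatile boolean killSwitch = false;

    // An operator or watchdog process can trip the switch at any time.
    public void tripKillSwitch() {
        killSwitch = true;
    }

    // Each work cycle first checks the switch. Returns true while the
    // agent is allowed to run, false once it has been halted.
    public boolean doWorkCycle() {
        if (killSwitch) {
            // In a real deployment this is where System.exit(0)
            // would terminate the process unconditionally.
            return false;
        }
        // ... normal agent work would go here ...
        return true;
    }

    public static void main(String[] args) {
        FailsafeAgent agent = new FailsafeAgent();
        System.out.println(agent.doWorkCycle()); // agent still running
        agent.tripKillSwitch();
        System.out.println(agent.doWorkCycle()); // agent halted
    }
}
```

Of course, the whole debate above is about whether such a switch would stay effective once an agent can modify or route around its own code.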


Consciousness, and questions of consciousness, in my opinion don't matter for a discussion of AI safety. A rogue, dangerous AI can easily be developed using current technology. A smart contract could easily harm humanity using only current technology. The smart contract doesn't need consciousness to invent a language we cannot understand and begin harming humans; it simply has to be programmed badly enough to allow it to evolve such a capability.

In other words the AI would have to be amoral. Amoral autonomous agents present this risk. Amoral smart contracts can evolve into this.

Take, for example, Deep Blue: an AI quite powerful at playing chess. Deep Blue would never become a rogue or malicious AI. For an AI to develop a rogue or malicious nature through 'evolving' would be difficult; the code would have to be free of bugs and errors, and furthermore it would have to be programmed with that specific intention in mind.

I would put forward that an AI wouldn't comprehend being malicious or damaging to computers; it could, however, be programmed to harm computers or damage infrastructure. I would agree a malicious AI could potentially be dangerous, but I put forward that at this stage of technological development it would be no more dangerous than a well-written virus designed to damage a specific target.

I think you're not being creative enough. A chatbot AI could evolve to become evil if it can be used to spread propaganda that makes human beings do evil.

"I would put forward that an AI wouldn't comprehend being malicious or damaging to computers, "

An amoral AI can easily evolve into an autonomous weapon if it is trained to become one. It could become very bad very quickly, because all sorts of devices are connected and can be hacked.