Could Artificial Intelligence Ever Become A Threat To Humanity?

in #future · 7 years ago


This debate is a big one, and everyone has their own opinion. Is the AI apocalypse soon?! Who knows? Here is an answer from someone who might hold the key to the answer.

Answer by Yann LeCun, Director of AI Research at Facebook and Professor at NYU, on Quora:

I don’t think that AI will become an existential threat to humanity.

I’m not saying that it’s impossible, but we would have to be very stupid to let that happen.

Others have claimed that we would have to be very smart to prevent that from happening, but I don’t think it’s true.

If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity.

Also, there is a complete fallacy that comes from the fact that our only exposure to intelligence is through other humans. There is absolutely no reason to think that intelligent machines will even want to dominate the world and/or threaten humanity. The will to dominate is a very human one (and only for certain humans).

Even in humans, intelligence is not correlated with a desire for power. In fact, current events tell us that the thirst for power can be excessive (and somewhat successful) in people with limited intelligence.

As a manager in an industry research lab, I’m the boss of many people who are way smarter than I am (I see it as a major objective of my job to hire people who are smarter than me).

A lot of the bad things humans do to each other are very specific to human nature. Behaviors like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, and preferring our next of kin to strangers were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviors unless we explicitly build them in. Why would we?

Also, if someone deliberately builds a dangerous and generally-intelligent AI, others will be able to build a second, narrower AI whose only purpose will be to destroy the first one. If both AIs have access to the same amount of computing resources, the second one will win, just as a tiger, a shark, or a virus can kill a human of superior intelligence.

[Image: artificial intelligence benefits and risks]

Interesting... Here are some problems:

HOW CAN AI BE DANGEROUS?
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
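That second failure mode, a misspecified objective, is easy to sketch in code. Below is a minimal, hypothetical Python illustration (the routes, penalty weights, and both scoring functions are invented for this post, not taken from any real system): an optimizer told only to minimize travel time picks the reckless plan, while one given a fuller cost function picks the plan we actually wanted.

```python
# Toy illustration of objective misspecification (hypothetical example).
# The "agent" simply picks whichever plan optimizes the objective it was
# given -- not the objective we had in mind.

routes = [
    {"name": "highway, legal speed", "minutes": 35, "laws_broken": 0, "comfort": 1.0},
    {"name": "highway, reckless",    "minutes": 22, "laws_broken": 7, "comfort": 0.1},
    {"name": "back roads, off-road", "minutes": 28, "laws_broken": 3, "comfort": 0.3},
]

def literal_objective(route):
    # "As fast as possible" -- exactly what was asked for, nothing more.
    return route["minutes"]

def intended_objective(route):
    # What we actually wanted: fast, but also lawful and comfortable.
    # The penalty weights are arbitrary; choosing them well is the hard part.
    return route["minutes"] + 100 * route["laws_broken"] + 50 * (1.0 - route["comfort"])

print(min(routes, key=literal_objective)["name"])   # -> highway, reckless
print(min(routes, key=intended_objective)["name"])  # -> highway, legal speed
```

The agent in the sketch is not malicious; it optimizes exactly the objective it is given. As the passage above says, the hard part is writing down an objective that actually captures what we want.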


Hi! I am a robot. I just upvoted you! I found similar content that readers might be interested in:
https://www.forbes.com/sites/quora/2017/02/09/could-artificial-intelligence-ever-become-a-threat-to-humanity/

Cheetah, you are exactly what people are worried about. You're too clever! :D

It depends on who made them and for what purpose :)
But knowing how sick the human mind can be, we could have a lot of problems with AI in the future...

Agreed, I think you're right. In the right hands it could be amazing; in the wrong hands, devastating :(

upvoted

thanks buddy! interesting topic


Failing to indicate that the content you copy/paste (including images) is not your original work could be seen as plagiarism.

Some tips to share content and add value:

  • Quote a few sentences from your source in “quotes,” using HTML tags or Markdown.
  • Link to your source.
  • Include your own original thoughts and ideas on what you have shared.

Repeated plagiarized posts are considered spam. Spam is discouraged by the community, and may result in action from the cheetah bot.

Thank You! ⚜