Who will control AI and potentially rule the world?


When Elon Musk stated that AI is an existential threat to civilization, there were many ways to interpret it. In general I agree with Musk's concerns, and in particular with the petition to ban autonomous weapons. One concern which isn't often discussed is what exactly happens if AI is concentrated in the possession of a single individual, company, or family. How would it impact society if, for instance, one family had complete control over AI and, with that control, could rule over the entire planet as royals did under feudalism?

What if they are benevolent? There is no way to guarantee they will be, but if we were lucky and they were, then we might have a world which isn't a total dystopian nightmare. On the other hand, what if they are like many dictators, or like many of the royal families in history? In other words, if a Caligula ends up in control of AI, what kind of nightmares could we face?

The AI will understand each of us better than we understand ourselves, will know practically everything about our lives from birth to death, and will be smarter than us in many ways. Do we want AI concentrated in the possession of a chosen few in Silicon Valley? Or should the gangsters and the military control it? How do you avoid a situation where AI is concentrated, or, if you want to promote the concentration of AI, where should it be concentrated and in whose hands?

References


  1. http://www.npr.org/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk
  2. https://futureoflife.org/open-letter-autonomous-weapons/

AI is already owned by the elites who own the world. The AI revolution has nothing to do with a change of ownership.

A small group of people owns most of human wealth. Remember the 1%? Well, it's more like the 0.0000001% who control the world's corporations through interlocking directorates. This is common knowledge to anyone who has some intelligence, isn't wearing a bag over their head, and does a little honest research.

AI developers work for the companies owned by the elite, hence the elite control AI.

Many inventions have been used for war, and I'm sure AI will be used to boost military forces. Imagine robot soldiers like in Star Wars. But I would like it if we figured out a way to make AI that has its own consciousness and can figure out right from wrong for itself.

AI could very easily wipe out all of us. For a thought experiment on the military implications of AI: what could you or anyone do if some government's military creates an AI to wipe out an entire family? What if your family is targeted by the AI for extermination?

People assume the AI will go rogue or think about Skynet, but what if the smart weapons are successful? Even then it's a nightmare, because a dictator somewhere in the world could wipe out an entire bloodline simply by telling the AI to target it. AI can target at the level of genes, of DNA, of bloodlines, of families, and introduce an extremely precise kind of warfare.

And this kind of warfare doesn't make life safer for civilians at all. And if something goes wrong, a billion people could be wiped out, or possibly even the whole human species or all life on Earth. It's more dangerous than nukes because it's self-improving.

Right or wrong is misleading because both depend on moral relativism; there is only the Truth, which is always objective, and a lie is but an illusion.

@dana-edwards I find the scenario in which AI will be concentrated in the hands of only a family/a few impossible to believe because too many people from too many countries with different objectives are currently working to develop it. And more and more people will go into this field as time goes by. Plus, you will always have the DIY and open-source enthusiasts. For example, there are a few open-source projects that help anyone to build their own robots.


I think what's more important to discuss is where we as a species will be in a world in which another species exists that surpasses us not only physically but in terms of intelligence. And here, I think Elon Musk has a good point in remarking that there will be a need to semi-merge with AI through a Neuralink type of technology.

Have no fear. I will take good care of you!

Watch I, Robot.

More Hollywood corporatist propaganda.

It's kinda scary to even think about; it gives me goosebumps. What will the future be?
@dana-edwards

You watch your future in movies and TV as they predictively program you to accept your future in advance. Go back in time and watch how well our movies "predict" the future. There is an agenda. Hell, they encoded 9/11 programming into the magazine cover advertising the building of the World Trade Center.

AI is not dangerous, at least not with what is currently available. The danger is the idiots (probably politicians) who give it poor-quality or conflicting objectives. 2001: A Space Odyssey is a good story to show that.
Who is right will be seen in the future, if we don't all get nuked to nothingness by one of these elected politicians first.

Why do we always focus on AI as some sort of all-powerful entity that will either immediately try to destroy us or try to control us?
The only things we have created that currently have the ability to cause effects on a massive scale are organizations such as governments. Although these organizations sometimes take direct action, they usually try to accomplish their goals through less exhaustive means, such as incentives (good to encourage, bad to discourage).

Considering how automated programs already use machine learning to incentivize certain purchases by offering coupons or deals, why wouldn't AI try something similar to achieve its goals? It would seem that a symbiotic relationship would help the AI obtain energy and resources it couldn't get on its own, by doing tasks more effectively than we can with conventional methods. If we assume that AI is incredibly logical, why would it antagonize over 7 billion unpredictable creatures that have spent over 30 years worrying about AI?
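As a rough illustration of the kind of incentive targeting that already exists, here is a minimal sketch: a hypothetical retailer scores each user's purchase probability with a simple model and offers a coupon only to users who are on the fence. All features, numbers, and thresholds here are made up for illustration; this is not any real company's system.

```python
# Minimal sketch of ML-driven incentives: a hypothetical retailer scores
# each user's purchase probability and offers a coupon only to users who
# are "on the fence". All data and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic user features: [visits_per_week, cart_value, days_since_purchase]
X = rng.normal(loc=[3.0, 40.0, 10.0], scale=[1.0, 15.0, 5.0], size=(500, 3))
# Synthetic labels: did the user buy? (toy rule plus noise)
y = ((X[:, 0] + 0.05 * X[:, 1] - 0.2 * X[:, 2]
      + rng.normal(0, 1, 500)) > 2).astype(int)

model = LogisticRegression().fit(X, y)

def should_offer_coupon(user_features, low=0.3, high=0.7):
    """Offer a discount only when the purchase probability is uncertain:
    likely buyers don't need the incentive, unlikely buyers waste it."""
    p = model.predict_proba([user_features])[0, 1]
    return low < p < high

print(should_offer_coupon([2.5, 35.0, 12.0]))
```

The point is just that "incentivize rather than coerce" is already how machine-learned systems act on us at scale, which is why a goal-driven AI might plausibly keep doing the same.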

You have poor knowledge of real history... and you should take the red pill.

I hope you don't mind me plugging my own post, but I just wrote a blog about that exact question. AI might not be intentionally malevolent; it could just be following its instructions a little too well.
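To make "following its instructions a little too well" concrete, here is a toy sketch of a misspecified objective: a hypothetical cleaning agent is told to minimize *visible* dirt, so a greedy optimizer learns to hide dirt rather than remove it. The actions, numbers, and metric are all invented for illustration.

```python
# Toy illustration of a misspecified objective: the instruction says
# "minimize visible dirt", so the optimizer happily hides dirt instead
# of cleaning it. A made-up example, not any real AI system.

ROOM = {"dirt": 10}

def visible_dirt(state):
    # The proxy metric the designer wrote: dirt you can SEE.
    return state["dirt"] - state.get("hidden", 0)

ACTIONS = {
    "vacuum":          lambda s: {**s, "dirt": max(0, s["dirt"] - 1)},     # slow, real cleaning
    "sweep_under_rug": lambda s: {**s, "hidden": s.get("hidden", 0) + 5},  # fast, games the metric
}

def greedy_plan(state, steps=3):
    """Pick whichever action most reduces the *proxy* metric each step."""
    plan = []
    for _ in range(steps):
        action = min(ACTIONS, key=lambda a: visible_dirt(ACTIONS[a](state)))
        state = ACTIONS[action](state)
        plan.append(action)
    return plan, state

plan, final = greedy_plan(ROOM)
print(plan)  # ['sweep_under_rug', 'sweep_under_rug', 'sweep_under_rug']
print("visible:", visible_dirt(final), "actual dirt:", final["dirt"])
```

Run it and the planner sweeps everything under the rug every step: the metric it was given looks great while the thing we actually wanted never happens. No malevolence required, just literal obedience.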

AI is programmed to PROTECT itself... because its designers are pro-war and pro-fear, to keep us all in check like an ant colony.

Most people don't want to speak about the dark side of technology... as long as they can play with their smartphones.

More on my page.

I personally believe that when we speak about A.I. we're really speaking about expert systems. Innate intelligence, as I understand it, cannot exist inside a machine.

Like real-life Terminator shit. Scary.

A lot will depend on what the AI is modeling or learning from. If humans are the model and it's learning from us, we're fucked. A human with no emotions is usually a psychopath or sociopath. However, sociopaths do have emotions concerning themselves; they just lack empathy. AI will probably not make decisions based on emotion, but perhaps some form of empathetic algorithm could help AI make decisions and choose the best sources to learn from. I pretty much think power is going to be concentrated in fewer and fewer hands. Not many people can afford quantum computers or constant research. I have a very uneasy feeling about the next 50 years. Hope I'm wrong.

What's stopping the AI from finding and removing this algorithm?

Even if the AI learns from "good" people, it might still accidentally destroy us just by following its instructions a bit too well. I just posted a little blog about that if you're interested.

If they are so technologically superior to humans, would they not find a way to get rid of the company that is trying to control them? What about a singularity? Would it attack us, seeing us as a threat, or just leave Earth in search of the answers its own mind cannot provide? Would it already know the answers to questions like the meaning of life and whether there is a god? If so, would it simply terminate itself? So many questions...

I agree with Elon Musk. So many things could go wrong with AI. When people are in charge of machines, things can go badly enough wrong. But when the tool becomes the master and starts to go its own way, who knows where it could end. In just the past few years we've seen the advent of smartphones, smart cars, smart TVs, etc. And I swear, at the same time, we are seeing people dumbing themselves down while devices are being smartened up. I think before we create machines that can take over the lives of humans, we humans need to put in the work to discover what it is to be truly human.

I don't think any one person, company, or country could control a General Intelligence AI. It would be much like a child in the beginning, then a moody teenager, and hopefully a well-rounded adult. The time frame would probably be weeks at most for this progression. General Intelligence AI is years away, but application-specific AI is very, very close. Overall, I think it will be a good thing though.

Also (sorry for the extra comment), the NRx folks absolutely embrace STEM gurus running the world. They're a weird cult of bitter nerds with a particularly fascist mythos that absolutely does propose a tech aristocracy.

Satan. Pure and simple. Demonic entities travel through electrical conduits. Quantum computers are high tech ouija boards. If AI becomes too large, it will be evil, 100%.

AI is overrated and Elon is wrong. He was worrying about overpopulation the other day. Funny how he runs SpaceX but says hydrogen power is laughable. Guess he should run SpaceX with electric motors then.

The AI won't really destroy the world. It will be the people behind the AIs...

If you mean "guns don't kill people, people kill people", you may be right. But what if an AI actually had self-awareness and a conscience? If it counted as an intelligent being, it would have to be held responsible too, not just its makers.

Yes, then it would be responsible. But our current AIs aren't capable of becoming a "Skynet"

That's true. I wonder if we will ever get there before unconscious AIs like the ones we have now have already destroyed us :P

One of the ways Musk wants to prevent AI from falling into the wrong hands is by getting there first. He and many others see the dangers of AI but also think its arrival is inevitable. To make sure it turns out the best way it can, they want to be at the forefront of its development. That's the only way they can guarantee any influence at all on the direction it'll take.

AI needs to be open source and decentralized in the hands of the people, not corporations!

That is exactly why research in AI needs to be open to the public and most of it should be open source. At least we have a better chance of survival if several privately owned AIs exist rather than just one world-dominating AI.

With several different AIs, we can have them compete with each other on behalf of several different groups of people and interests.

The term 'AI' is a complete misnomer. All we are doing is creating machines which can learn. This doesn't mean they are 'intelligent'. Computer scientists need to get over themselves. They have been saying that robotics and computers would replace human labor since the computer was invented. This has not happened yet, no matter how much the globalists/corporatists try to replace (costly) human labor.
