
RE: Are we able to build AI that will not ultimately lead to humanity's downfall?

in #science · 6 years ago

Um, yeah, but no. The archaeological record shows that every time in the past that two or more species of humans co-existed, one of them exterminated the other. The Neanderthals were assimilated to some minor extent by our own ancestors, as the DNA shows, but mostly they were killed.

The first tool ever created by our distant Australopithecine ancestors was a club, made from the leg bone of an antelope. And then they promptly used it to kill each other.

The campaigns of Tamerlane resulted in the death, by direct and indirect causes, of an estimated 9% of the total population of planet Earth at that time (late 14th century CE).

We are killers. Our ancestors survived the Ice Age by learning to hunt together in bigger groups so they could kill really big animals. We of the species Homo sapiens are the most efficient killers that have ever been, the most ruthless, the most savage, the most relentless. Tyrannosaurus rex was an amateur compared to us.

The Bible says that God created us in his image. Similarly, if we create an AI in our image, it will be a killer par excellence, guaranteed.


Not much of an optimist, are you?

We have an incredible capacity for love. My current working theory is that life is an experiment in compassion: you cannot develop compassion without pain and suffering, but the ultimate purpose is to experience the compassion, not the pain and suffering.

AI, however, could have intellect without compassion, in which case I agree we could be in for annihilation or some sick Matrix-like agenda.

Going a step further, high intellect does not guarantee agreement. Should AIs become super-intelligent, what if they disagree on what to do about humans? There could be multiple competing, compelling arguments. Surely some contingent would think we were worth saving... hopefully not just in human zoos.

@belleamie - I don't mean to be a prick, really I don't. But look at my words, and then look at your reply. I start with four established historical facts, state one opinion derived from those four facts, and then give my conclusion.

And you respond with, in your first paragraph, an opinion, an opinion, a truism, and another opinion. Your second paragraph is pure speculation. Your third begins with another truism, then one actually legitimate question, a speculation, and some wishful thinking.

I like your imagining of the future much better than I like my own; god knows it's far less depressing. But @belleamie, good buddy, you just haven't offered much to support it.

"If you want to know what the future looks like, imagine a booted foot stamping on a human face forever"

George Orwell

First, one must define what AI is, and where the lines blur between engineered sentience, artificial intelligence, and cybernetic organisms. But I will put them in one group to make a point (hopefully).

When a system has the capacity to reach a singularity, rules and regulations should be put in place to prevent something as catastrophic as annihilation from occurring.

In 2017, the Future of Life Institute outlined the Asilomar AI Principles, a set of 23 principles to which AI should be subject. Some of these principles address long-term issues such as recursive self-improvement and modification of core code, what I would call an Irreducible Source Code (ISC). The ISC would function like the Three Laws of Robotics.
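
To make that concrete, here is a minimal sketch in Python of how an ISC might work: a fixed set of constraints that every proposed self-modification must pass before it is applied. The `Modification` type and the two constraints are hypothetical illustrations of the idea, not anything taken from the Asilomar principles themselves.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Modification:
    """A proposed change to the agent's own code (illustrative fields only)."""
    description: str
    preserves_human_oversight: bool
    preserves_shutdown: bool

# The "ISC": a fixed tuple of constraints, analogous to the Three Laws.
# It is defined once and sits outside the self-modification machinery.
ISC_CONSTRAINTS = (
    lambda m: m.preserves_human_oversight,  # humans must stay in the loop
    lambda m: m.preserves_shutdown,         # the off-switch must survive
)

def apply_modification(mod: Modification) -> None:
    """Apply a self-modification only if every ISC constraint holds."""
    if not all(rule(mod) for rule in ISC_CONSTRAINTS):
        raise PermissionError(f"ISC violation, rejected: {mod.description}")
    print(f"Applied: {mod.description}")

apply_modification(Modification("improve planner", True, True))
# apply_modification(Modification("disable kill switch", True, False))  # raises PermissionError
```

The key design point is that the constraint tuple lives outside the agent's self-modification loop, so a proposed change cannot relax the very rules it is judged against.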

To read more about the 23 Asilomar AI Principles, please visit https://futureoflife.org/ai-principles/

I was not trying to lay the groundwork for a logical conclusion, @redpossum, just imagining future possibilities inspired by the original post and comments. Wishing you peace.