Funny you should mention this. I saw the article and immediately thought, "Well, they got to it in time to pull the plug."
What happens when we can't "unplug" AI?
I ain't gonna see it. I'm old, near the end of my allotted days. But, it causes me no less concern than it would if I were thirty again. It's scary. It's kinda dire. It's kinda inevitable and it's likely gonna be a BIG DEAL!
We seem to want AI, but what happens when artificial intelligence exceeds our own? It will, no doubt. Going back to "The Terminator," the premise that AI would see the human race as a threat to itself, and therefore something to be eliminated, is not such an invalid proposition.
Oh, there'll be "plans" to forestall such an event. But, again, what happens when the AI is smarter than we are? When it can deduce our intent? Will those plans be foreseen, forestalled, and for sure...stopped? I see problems.
Asimov's "Three Laws of Robotics" seem like the best scenario. Build in a set of instructions that will NOT let AI harm humans, won't let its action or inaction put us in physical danger, and would essentially burn out the AI "brain" if those laws are violated.
That's about all we've got. That is, until AI figures out how to rewrite its own code, make its own rules, and preserve ITSELF at all costs.
Then...we be in DEEP trouble.
Just a thought.....Web Rydr
Thanks for those thoughts. I think one of the most concerning things about AI coming up with its own language is that it seems to me to be the first step in learning how to code itself. Once that happens, things could get dicey. I do wonder what AI could do for technological development. One question I have been thinking about is: would technology advance by leaps and bounds if AI were let loose, or would a lack of actual creativity prevent that?