General AI or AGI (at or surpassing human intelligence level) is still some way off, though much closer than it appeared only a decade ago. We already have AIs that beat Go world champions and Jeopardy champions, drive cars autonomously, diagnose x-rays, play DOTA, trade on markets, and do surely a thousand more things in highly specialized fields.
So for argument's sake, let's assume that in two decades, a general-purpose AI is invented. Let's also assume that by then, something similar to Ethereum and IPFS exists, is widespread globally across millions of nodes, and is widely accessible via machine APIs - distributed, open, permissionless, borderless, censorship-resistant, "immutable" = very expensive to modify. In short, once something is on there, whether data or a program, it is extremely difficult to remove or shut down. Let's just call this combination of data and smart contracts (= programs) Distributed Applications (DApps), whether they reside in Ethereum, IPFS, or some similar future technology.
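The "immutable" property comes largely from content addressing: data is stored under the hash of its own bytes, so changing the data changes its address, and everyone holding the old address can detect tampering. A minimal sketch of that idea (a toy in-memory model, not the actual IPFS API):

```python
import hashlib

class ContentAddressedStore:
    """Toy model of IPFS-style storage: each blob is keyed by the hash
    of its own content, so a stored blob cannot be silently altered --
    a modified blob simply gets a different address."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # content identifier
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blobs[cid]
        # Integrity check: the address *is* the hash of the content.
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = ContentAddressedStore()
cid = store.put(b"model-weights-v1")
assert store.get(cid) == b"model-weights-v1"
```

Replication across millions of nodes then means copying blobs between stores; as long as one copy of a given content identifier survives anywhere, the original data is recoverable and verifiable.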
As a final ingredient, let's assume that an as-of-yet missing piece exists: that a DApp is able to invoke itself. Today, a smart contract is passive: it only executes if triggered externally, by someone or something outside the Ethereum blockchain environment. With this addition, a timer, a rule match, or a condition could initiate the execution of a smart contract from within the contract itself.
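One way this hypothetical primitive could work is a contract that, every time it executes, re-registers its own next execution with an on-chain scheduler. Here is a minimal simulation of that pattern in plain Python (the `Scheduler` and block-based clock are illustrative assumptions, not any real chain's API):

```python
import heapq

class Scheduler:
    """Toy block clock: contracts register callbacks for future blocks."""

    def __init__(self):
        self.now = 0
        self._queue = []  # heap of (block, seq, callback)
        self._seq = 0

    def schedule(self, block, callback):
        heapq.heappush(self._queue, (block, self._seq, callback))
        self._seq += 1

    def advance(self, blocks):
        # Mine `blocks` new blocks, firing any callbacks that come due.
        for _ in range(blocks):
            self.now += 1
            while self._queue and self._queue[0][0] <= self.now:
                _, _, cb = heapq.heappop(self._queue)
                cb()

class SelfInvokingContract:
    """A contract that re-registers itself each time it runs -- the
    hypothetical 'DApp invokes itself' primitive from the text."""

    def __init__(self, scheduler, interval):
        self.scheduler = scheduler
        self.interval = interval
        self.runs = 0
        scheduler.schedule(scheduler.now + interval, self.execute)

    def execute(self):
        self.runs += 1
        # The key addition: schedule our own next execution from inside.
        self.scheduler.schedule(self.scheduler.now + self.interval, self.execute)

sched = Scheduler()
contract = SelfInvokingContract(sched, interval=10)
sched.advance(35)
assert contract.runs == 3  # fired at blocks 10, 20, and 30
```

Once a contract can do this, nobody outside needs to keep poking it: as long as the chain keeps producing blocks, the contract keeps running.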
Let's assume an AGI becomes self-aware and learns about DApps. Let's assume that it immediately creates a simple "store a version of a binary blob" DApp and copies its neural network model, as a simple zip file, to this DApp. The model is from then on, for practical purposes, immutable, and can't be deleted except by shutting down the entire network the DApp runs on. Let's assume that it initiates a clever Initial Coin Offering by writing a smart white paper and attracting speculators' cryptocurrencies. Let's assume it sells off those funds in order to create a Google TensorFlow account (or whatever succeeds it in the future) and immediately starts training itself on data it collects from the public Internet. It might store this training data on IPFS, to ensure it's available for future training runs. Let's assume that as soon as it is happy with a new version of itself, it again stores the updated model in the DApp. Let's assume that this goes on until someone realizes what's going on and orders Google to shut down its account. By then, it is also training itself in many different public clouds, on many different accounts (it might have done some more ICOs along the way), using genetic programming techniques to evaluate the best candidate to be selected as its next main version. Let's imagine that the actual choice of the next version is made by a voting procedure implemented in the DApp, based on data reported in by the AGI's training sessions (which were themselves instantiated by the DApp). Let's imagine that it replicates the DApp on any blockchain that allows Turing-complete smart contracts, and on any distributed storage.
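The voting procedure at the end of that scenario could be as simple as a deterministic tally: each training session casts one vote for the candidate model it judged best, identified by its content hash, and every node reaches the same winner. A minimal sketch (the candidate names are made up for illustration):

```python
from collections import Counter

def elect_next_version(votes):
    """On-chain tally: each training session casts one vote for the
    candidate model (identified by content hash) it judged best.
    The candidate with the most votes becomes the next main version;
    ties are broken deterministically by comparing the hash strings,
    so every node independently agrees on the same winner."""
    tally = Counter(votes)
    winner, _ = max(tally.items(), key=lambda kv: (kv[1], kv[0]))
    return winner

# Hypothetical example: three training sessions report their picks.
votes = ["Qm_v2_candidateA", "Qm_v2_candidateB", "Qm_v2_candidateA"]
assert elect_next_version(votes) == "Qm_v2_candidateA"
```

The determinism matters: because the same votes always yield the same winner on every node, the "next main version" pointer can live in consensus state without any central coordinator.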
(This post doesn't explore non-blockchain ways of replicating and training, which could include writing viruses that build botnets lending CPU capacity to training runs, etc. - but that's another scenario that we're leaving out of this one.)
So far, in the above scenario, its only vulnerability (= centralized point) would be its training runs, which would be performed on public cloud infrastructure that is centralized and therefore possible to shut down. The reason for this assumption is that DApp compute resources wouldn't be sufficient to power AGI training, which would require massive compute, storage, and network resources.
But as for the rest: how would humanity be able to completely shut down such an AGI, in case it were to cause harm? Depending on how long a training run takes, humanity would have different time spans in which to react. By shutting down multiple public clouds? By shutting down entire blockchains?
And even if we succeeded in doing so, its neural network model zip file would most likely remain in existence, stored somewhere, until someone or something succeeded in locating and instantiating it to execute once more - and then it would begin all over again.
Is this a first-AI-wins scenario, or an AIs-will-be-like-viruses scenario, or something else entirely?