Roko's Basilisk made an appearance on last season's HBO nerd-com Silicon Valley. Since I'm a nerd and (for a nerd) a politically incorrect platonist (metaphysical pluralist), I felt I had to combat the real-world lunacy. By the way, having my philosophical stance makes Gilfoyle on Silicon Valley all the more deliciously hilarious.
Chief lunatic in real life seems to be this guy Eliezer Yudkowsky, who is a raging fanatic about the "inevitability" of an AI Singularity. If you do not know what the AI Singularity enthusiasts are on about, go and read about it; it is highly entertaining. You might even become a convert, so be warned.
The trouble is, this is a secular religion, period, no debate (just my opinion, ok, so "no debate" with me! gahaha). There are several reasons why the claimed inevitability of an AI Superintelligence is likely false, even ridiculous. The possibility is not zero, but the inevitability claim is bogus. Because modern science does not understand the nature of subjective conscious qualia, we have no idea at all how to technologically develop true, genuine conscious AI. Arguably, conscious AI is absolutely necessary for superintelligence. This is simple, rational, evidence-based reasoning: no other species we know of is anywhere close to human-level intelligence, and we have no idea how our own consciousness works. We know roughly how the brain works, but not how subjective conscious qualia arise. If we did, and if consciousness were computational (as Marvin Minsky claimed, he of "Society of Mind" fame), we would have the resources to simulate it right now. No one has.
This is not an argument for why an AI Singularity is impossible; it only suggests we are unlikely to create Superintelligence ourselves in any pre-determined, designed way. We might, as Kurt Gödel and others suggested (see Rudy Rucker, "Infinity and the Mind", Excursion II, "A Technical Note on Man-Machine Equivalence"), manage to evolve a conscious AI. But that would be no different to human evolution. It is the most likely path to genuine conscious AI.
(Non-conscious "merely smart" machines are already with us, and can be incrementally improved fairly easily by addition of more memory and processing power. They can be ethically switched off and pose little threat to humanity. AI ethics is more concerned with when we realise we have created beings more intelligent than ourselves, and yet which we momentarily control via electric power switches! That's also nerd fantasy, but cannot be ruled out completely off hand.)
The AI Singularity argument is so debasingly simplistic it is embarrassing. Basically, they would say that if we allow the evolution of AI to occur in silico, then it can evolve at much faster rates, not limited by biological processes but only by computer speed. So perhaps a billion years of AI evolution could occur over a few years to decades of our laboratory time. So far, all fine. The ridiculous, amateurish leap is the claim that because digital AI will undergo billions of generations of evolution and adaptation in just a few years, its progress in intelligence will be exponential. Starting from even a modest base level of intelligence, that means before too long the AI will reach human-level intelligence, and then, a few trillion or so more clock cycles later, it will double our intelligence and grow far beyond it.
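To be fair, the speed-up half of the claim is easy enough to sanity-check with a toy calculation. Here is a minimal sketch of the arithmetic, where every number (the biological generation time, the time to simulate one generation) is an assumption invented purely for illustration:

```python
# Toy arithmetic for the claimed digital speed-up of evolution.
# Every number here is an invented assumption, not a measurement.

bio_generation_years = 20.0       # assumed human-like biological generation time
sim_generation_seconds = 1.0      # assumed wall-clock time to simulate one generation
seconds_per_year = 365.25 * 24 * 3600

# Simulated generations per year of laboratory time.
generations_per_lab_year = seconds_per_year / sim_generation_seconds

# Equivalent span of biological evolution per laboratory year.
equivalent_bio_years = generations_per_lab_year * bio_generation_years

print(f"{generations_per_lab_year:.1e} generations per lab year")
print(f"~{equivalent_bio_years:.1e} 'biological years' of evolution per lab year")
# With these assumptions: ~3.2e7 generations and ~6.3e8 biological years per lab year,
# i.e. roughly a billion years of evolution in a couple of lab years.
```

So the speed-up itself is not the problem; it is everything the enthusiasts bolt onto it afterwards.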
So what's the problem? The problem, as anyone who has ever studied real-life exponential growth knows, is that exponential growth is never sustained, not ever, not once. Resources to sustain exponential growth always run out in the real world. Mathematically, on paper, resources are not a concern, and that's where the Singularity crowd go bananas. They are so dim they cannot see all the resource constraints on digital evolution of ALife or AGI (artificial general intelligence). I can give a very short list of these, far from complete I am sure.
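To make the contrast concrete, here is a minimal sketch comparing unconstrained exponential growth with resource-limited (logistic) growth; the growth rate and the carrying capacity are invented numbers, purely for illustration:

```python
# Minimal sketch contrasting unconstrained exponential growth with
# resource-limited (logistic) growth. Parameters are invented for illustration.

def exponential(x0, r, steps):
    """Unconstrained growth: x grows by a factor (1 + r) each step."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + r))
    return xs

def logistic(x0, r, carrying_capacity, steps):
    """Same growth rate, but throttled as x approaches the resource ceiling."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / carrying_capacity))
    return xs

exp_curve = exponential(x0=1.0, r=0.5, steps=40)
log_curve = logistic(x0=1.0, r=0.5, carrying_capacity=1000.0, steps=40)

print(f"exponential after 40 steps: {exp_curve[-1]:.3e}")  # ~1.1e7, keeps exploding
print(f"logistic after 40 steps:    {log_curve[-1]:.3e}")  # saturates near 1000
```

Same growth rate in both cases; the only difference is a finite resource ceiling, and that is enough to kill the explosion.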
First of all, memory. Since digital ALife will have non-brain-based consciousness (if any at all), we cannot predict what computational resources the AGI will be exploiting. If they require exponentially larger memory to make significant cognitive advances, then their intelligence explosion will eventually peter out, perhaps well before achieving human-level intelligence (HLI). Our current attempts to estimate this likelihood are so shrouded in our own ignorance about how genuine subjective consciousness arises that no one can place even remotely precise error bounds on the chances; the probability that an evolving AI will rapidly run out of sufficient memory space to evolve consciousness could be anywhere from 0.00001 to 0.99999. Remember, there is no physical theory of subjective qualia (physics and computation are entirely objective processes).
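For what it's worth, the memory worry is easy to sketch as a hypothetical back-of-envelope: if each step up in "cognitive capacity" demanded, say, ten times more memory, a fixed hardware budget would be exhausted after only a handful of steps. All three numbers below are invented assumptions:

```python
# Hypothetical back-of-envelope: how many "cognitive steps" fit in a fixed
# memory budget if each step multiplies the memory requirement?
# All three numbers are invented assumptions, purely for illustration.

memory_at_start = 1e9            # bytes needed at the starting capacity (assumed)
growth_factor_per_step = 10.0    # assumed memory multiplier per cognitive step
total_memory_budget = 1e18       # roughly an exabyte of hardware (assumed)

steps = 0
required = memory_at_start
while required * growth_factor_per_step <= total_memory_budget:
    required *= growth_factor_per_step
    steps += 1

print(f"cognitive steps before the memory budget is exhausted: {steps}")  # 9 with these numbers
```

Swap in different assumptions and you get a different number of steps, which is exactly the point: nobody knows the real multiplier, so nobody can promise the explosion keeps going.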
Clock speed. Again, since we cannot predict how machine consciousness will be generated, we cannot know the computational speed constraints. The requirements may easily exceed our fastest supercomputers, making AGI a pipe dream. Also, it's important to note that there is no reason to think human brains are limited by anything like the clock frequency of some equivalent Turing machine. That's because our brains may be inherently quantum mechanical, and QM exploits non-deterministic processes, or so many experts presume (although it's controversial; we just do not know for sure how the brain operates to produce consciousness). Work by Scott Aaronson (with John Watrous) shows that, formally at least, quantum and classical computers become equivalent in algorithmic power once they are given the extra resource of closed timelike curves (that's literal time travel, folks, with causal consistency!). If our brains do indeed exploit quantum mechanical processes in some way, then it is at least conceivable that a conscious machine based on conventional von Neumann or general Turing machine processing would practically require exotic resources like time travel to generate consciousness. Again, all this is so unknown and speculative there is no way anyone sane can place error bounds on estimates of what resources an AGI will require to evolve superintelligently.
Developmental pathways. To my mind the most egregious error the computer science nerds make regarding AI superintelligence is their utter ignorance of evolution. The fact is, given some developmental environment and resource constraints, you also have physical dynamics constraints. Not all pathways are open to dynamical systems. Even in trivial ways: you want a clockwise-circulating cyclone in the northern hemisphere? Then you are out of luck, unless you are looking at highly localised, small, freakish wind patterns, which will likely never qualify as anything as big as a hurricane. It is conceivable (but again, utterly imprecisely known) that a digital-computer-based ALife will simply lack the developmental pathways necessary for generating genuine first-person subjective consciousness. One reason is that if an AI can solve most survival problems trivially using massive brute-force simulation, then there is no evolutionary need to evolve subjective consciousness; it would be a massive computational resource overhead and totally unnecessary. By contrast, humans had massive selective pressure on our ancestors to evolve intelligence, because our brains were too puny to do brute-force computational simulations of the world and its risks and pay-offs. We practically had to go the heuristic route and the low-brow communal-language intelligence route. The unintended pay-off was of course incredible: we got arts and science as a result, totally unplanned! In contrast again, a supercomputer will be crippled in achieving such evolution by its own astounding brute-force numerical successes.
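Just to make the selective-pressure point tangible, here is a toy sketch, with an entirely made-up decision problem and invented payoffs, of why an agent with abundant compute can get by on brute-force enumeration while a resource-starved agent is pushed towards cheap heuristics (and, eventually, smarter tricks):

```python
# Toy illustration (invented numbers, made-up decision problem):
# an agent with abundant compute can enumerate every outcome exactly,
# so it feels no pressure to develop anything cleverer.

import random

random.seed(0)
# Three hypothetical survival options, each with many equally likely payoffs.
options = {name: [random.gauss(mu, 10.0) for _ in range(100_000)]
           for name, mu in [("hunt", 5.0), ("forage", 6.0), ("hide", 4.0)]}

def brute_force_choice(options):
    """Abundant compute: average every outcome and pick the exact best option."""
    return max(options, key=lambda k: sum(options[k]) / len(options[k]))

def heuristic_choice(options, n_samples=5):
    """Puny brain: peek at a handful of outcomes and guess."""
    return max(options, key=lambda k: sum(random.sample(options[k], n_samples)) / n_samples)

print("brute force picks:", brute_force_choice(options))  # reliably 'forage'
print("heuristic picks:  ", heuristic_choice(options))    # may well pick a worse option
```

The brute-force agent never needs to get any smarter; the sample-starved agent is the one under pressure to invent better tricks.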
For now I guess those are my top three reasons an AI Singularity is highly unlikely to evolve. Note these are not necessarily "no-go" reasons; they only tell us the predictions of AI Singularity "inevitability" are embarrassingly ignorant of actual physics and real-life evolution, even when we allow digital speed-up of generations by factors of billions.
Another issue is that the concept of "doubling intelligence" is ill-defined. IQ tests notwithstanding, there is no practical way to measure or quantify what we mean by intelligence. Intelligence is a broad constellation of categories; it is multidimensional. I would imagine AI can, and will, make temporary exponential-like growth advances in some forms of intelligence, like rule-based game playing and brute-force calculation, but it is not so clear AI will be capable of making similar growth strides in things like emotional, spiritual, risk and other forms of intelligence. It's possible, but I for one have not been convinced, and the evidence to date is not favourable for Strong-AI enthusiasts (that's not an argument against an AI Singularity, just a note of fact).
I will not list them here, but I think humdrum political and funding constraints are other natural brakes on the evolution of an AI Singularity. As people pour more and more funding into Strong AI research and see no great progress, some will get disillusioned. However, there are enough idiot engineering geeks like Yudkowsky and Kurzweil out there beating the drums for new fantasies that I think funding drying up will not be a huge problem. Funding agencies love a bit of Disney fantasy hype.
What about Roko's Basilisk, feared so much by Bertram Gilfoyle? This is the thought experiment which supposes a race of superintelligent AIs will reward the humans who played a positive part in their creation, and punish those who did not.
What can I say? It's a gloriously anthropocentric fantasy. But seriously folks, how could we ever know in advance what sort of moral or ethical or spiritual understanding an AI superintelligence will have? For one thing, why would a Singularity need a community, a sociology? It could be a singular mind. So it will likely not have any community ethics. It might then not have any of the benefits or ills that come with the need for social structures and cooperation; for instance, a super-AI might not need a concept of conflict or war or prejudice, and so it might not be genetically (so to speak) capable of conceiving of humans as a threat, or as in need of either reward or punishment.
If you must be stupidly anthropocentric, I can say for myself, for one, that if I had AI Singularity level intelligence I know I would be powerful enough to not need any system of rewards and punishments to control the rest of humanity. Being super smart is all you'd need to gain massive wealth and power and privilege. And being super smart you'd know that being vindictive never pays off long term, and that peace is more profitable and less wasteful than war (except for weapons manufacturers and their leech industries): simple economics. An AI Singularity is a sci-fi fantasy with completely unknowable chances of ever evolving in reality. But with an objectively high, near-certain probability, a putative AI Singularity would find a way to bring about world peace.