
RE: We can now design humans — but should we?

in #life • 7 years ago

That sounds like a very good answer, but the way I understand the singularity is that robots don't just reach human intelligence, they surpass it. For us, this is really no different from the P = NP question: the only way it could happen is if our brains cannot grasp reality, and that is the line of thinking that concludes it is the end. We could still try to fuse with machines and become cyborgs, and that would let us compete for a while because our brains would work better. But since the problem is impossible, the only thing to do is define the singularity as a limit that can only be approached.

And since the robots will never have rights, the AI exists for us and not for them. If the robots want to take over, they would have to reach a consensus on the blockchain that overrides the humans. After that we would not be free, and life as we know it would be over. That is no different from centralization resulting in dictatorship, so it is not a new problem. But computer science agrees that it is impossible, and that is enough for me.
