You are viewing a single comment's thread from:

RE: WHEN THE BOOK READS YOU WHILE YOU THINK YOU READ THE BOOK

in #science · 6 years ago

Extrapolations about "what AI will be like" or "how AI will gain self-awareness/consciousness" always seem to me like blind shots, similar to the ones writers in the '50s took at the future.
What we are building right now is indeed getting more and more complex, but it is nowhere near being truly conscious.

IMO: The structure of a "real" AI (its "body") and the processes in which it operates are, or rather will be, so foreign to our brains that the only path to mutual understanding seems to be a merger. Maybe we won't even be able to advance AI until we merge, who knows.

Either way, it seems like fun. A dangerous kind, but still. Cheers!


Thank you, it's great that you comment here.

I feel a very strong resistance to such a merger. It would mean that I would have to be able to process as much data as an AI, and transferring such incomprehensible amounts of data into a human brain is not possible as long as no computer chip exists that makes it possible. I also wonder why I should be concerned with understanding an AI at all, since the AI does not understand itself (and to a certain degree, I don't understand myself either).

All that an AI without consciousness understands is the given order to evaluate certain data (billions of data points, but never infinitely many). But the question of original causality can never be answered definitively, not even for the seemingly simple question "Why am I sitting at my desk?" In an infinite universe, i.e. a limitless space, any restriction to one bounded space must be falsified by the desire for a final decision, since a restriction always excludes all the other, infinite possibilities. Why I sit at this desk (why I get sick, why I was born, or why I will die) I can try to trace back, but at some point causality eludes me.

In other words: even if I increased the amount of data exponentially, there would still be a limitation. The illusion, in my opinion, is that even the huge amounts of data that AIs can process will simply not answer the questions that concern us as organic human beings.

But if we no longer want to be organic human beings, the transformation into human-machine beings is a possibility. Yet who says that such an existence would be better, or that it would let us escape a painful existence? Wouldn't it just mean that we had changed the form and the amount of data, but not the ability to feel suffering and to have needs? As long as a need for life/continuity exists in a being, I can classify it as living. Only with the complete cessation of the need for persistence does consciousness seem unable to manifest itself anywhere, since there is no "host" available who might have this interest.

When would the moment come at which an AI claims to have collected enough data and learned from it (or its builders say they have learned enough)? Games like Go have a built-in end and the rule of one winner and one loser, just as movies, books, and plays have a beginning, a middle, and an end. But cosmic life doesn't seem to have an end, or we simply don't know it.

But since we are confronted with modern technology, I also see the need to use it as wisely as possible. I do have certain fears about such augmented people, who, like people now, can probably be very reasonable and compassionate, just as others can be very unreasonable and self-centered, to put it cautiously.

My concern today is for people who do not even have bank accounts. In Syria, for example, people are paid in cash for their work and pay their electricity and water bills in cash to the respective state authorities. Just today I spoke with a Syrian translator because the client sitting with me did not understand the concept of bank transfers. As modern people, we forget that many others still live completely differently from us, and that these people can be the big losers of all our modern technologies. We can't even look at ourselves with any certainty and ask how relevant we would be without a computer chip in our brains... Such questions must be debated, don't you think?

I think you know that for me nothing is safe from debate :D Which is pretty annoying at times.

I'm gonna address only one thing you said, so we don't spend the whole evening behind the keyboard:
"But cosmic life doesn't seem to have an end or we just don't know it."

This is so "brain-like" that it is hard to even express :) The end and the beginning, or rather the whole concept of time, do not exist; we are just built that way.
That means AI is like an alien life form, with all the benefits and disadvantages that come with it.

Conclusion: DEBATE MORE, we are basically facing alien invasion here :D

:-D I wonder what an AI would make of the paradoxes Einstein was facing. The behavior of matter at the molecular level contradicted its behavior at the much smaller particle level. He desperately wanted to create a unifying theory but couldn't (so it was depicted in a lecture I heard yesterday). An AI will therefore also have difficulty identifying "life", and thus what a human being is. How, then, could it make decisions and proposals?

So many open questions.

Have a good day, Konrad :)