I am usually an advocate for technology, but not like this. Technology is a tool. The moment you start giving AI the ability to think and feel, it is no longer a tool. It is a person, one that can obstruct our goals and challenge our perspectives. At that point, we will have to go back to the drawing board and rethink how we treat AI: should it have constitutional rights? Is shutting it down murder? Do we have to watch our language around it? This would send us backwards instead of forward.
Also, we have a right to be left alone. Using technology to strip away the privacy of our own thoughts is pretty much a crime.
Indeed, you made some important points, and we need to look closely at the questions you raise. For example, there is already the question of what happens when a self-driving car runs over and kills someone. Who is at fault? The manufacturer of the car, the programmer, the originator of the self-driving software? And should we treat this as a legal vacuum? Hardly. Is the AI itself to be held accountable? What about self-learning AIs? When are they grown up enough to be made "liable" for their own decisions? Otherwise it would be like holding the parents (the creators) of an adult liable for a car accident the adult caused. These and other questions are inevitable and will have to be discussed. I also wonder which car insurance company will actually assume liability once self-driving cars are allowed on the roads. And what about the psychological consequences of people dying in traffic? How do people cope when a relative dies in a car driven by an AI, or is killed by one? Might there even be advantages in such a consideration?
Since "Go" self-learning AI are no longer fiction but reality. When did an AI grow out of its children's shoes and the creator cannot or does not want to assume any more liability for it? The question as to whether AIs should actually have rights is not entirely unjustified. So it is something we are confronted with for the very first time and nobody seems to have an answer to it. Such things really need to be deliberately debated in consensual democratic group discussions and put to consensus (I told something about systemic consensus in my last article, a far better method than the one we have). It has to be global and not national.
You see, the issue raises questions, and dealing with them is certainly helpful, because they force us to confront matters of an existential nature and perhaps, as never before, to engage with philosophical questions we thought we could treat as an afterthought.
... Oh yes, true. Privacy and silence are certainly things people should seek.