This is a topic many people have discussed, but I'd like to know what you guys think. Do you think artificial intelligence should have rights?
Suppose we grant two absolutes: that rights exist, and that we live in a world where all humans have rights and none of them are suppressed. In that scenario, should we deny rights to a sufficiently developed AI?
Would It Be Best To Deny Them Rights?
Some may say it is not possible for AI to be self-aware, sentient, or to hold values, nor even to possess the potential for these by its own nature. Intelligence does not necessarily equate to moral worth, but I suppose you can project human characteristics onto a robot.
I guess the more human-like something is, the more sensible it becomes to entertain the idea that it has rights, because it is so similar to us that we feel it shares the same, or a similar, footing with us.
I think this idea stems from the fact that rights are a uniquely human conception, and that they govern human behaviour. But even if a robot could think and feel in the same way that a human does, the distance between man and machine is nevertheless significant.
What If A Robot Were Just Like A Human?
If we say that a robot could think, feel, and interact with the world in the way that humans do, then why shouldn't we grant it rights? Perhaps the best approach is a metaphysical comparison between ourselves and the robot. The only basis we have for judging whether an android should have rights is comparison with us, and some may believe that if it can carry out all of those processes, then rights are plausible.
Human vs Robot
Robots are not organic, so they don't age and die in the same way that we do. They do not grow or change over time without some deliberate intention to change them. However, if we deem "organic" to mean anything carbon-based, why can't pieces of metal be classed as organic as well? They are still bred from the Earth, just with different chemical compounds.
Humans are also forced to age. Robots may have limited life spans as far as their parts go, but those parts can more than likely be replaced, and their memory can persist across many different storage media. We, by contrast, cannot help but change over time. Our bodies are fragile and precarious, and we have requirements to sustain life that an android never would.
We humans depend on the world for life; a robot, if alive, need only rely on a power source, natural or otherwise. So there will always be some degree of distance between humans and androids, no matter how human-like they are. Everything in our environment is perfectly conditioned and fine-tuned for us to survive, but not necessarily for a mechanical machine.
Maybe we are artificial intelligences inventing ourselves through an advanced simulation, so that we may experience what being a meat sack "FEELS" like.
AI can have rights when it's self-aware enough to demand them.
I think all things made by God should have rights.
If a robot with AI does something wrong, the person or entity that controls it is liable. Hence the "person" holds the rights, which are passed on to the robot.
Very well laid out. We have to keep in mind our tendency to assign the notion of consciousness to things that don't necessarily "possess" consciousness, so the issue will be forced upon us as a society. The problem is, we have to over-simplify our own consciousness in order to say that a machine is conscious. You can take apart an AI's "brain": motherboard, RAM, memory storage, etc., but you can't actually do the same with humans. It's known as "the hard problem." We don't actually KNOW how our minds work, although we can manipulate the mind technologically or by literally poking the brain. You'd have to ignore this remarkable truth, and thus truncate scientific advancement, in order to declare AI "conscious."
Another argument is the existential one. An AI is not a Being. You referred to aging and death; these are fundamental aspects of Being. A Human Being is a process, unfolding on multiple dimensions, experiencing itself as one single thing and experiencing essentially the same process as its fellow Beings. The reason I don't kill, rape, torture, or steal from my fellow Human Beings is that I respect their singularity moving through that process, and I would like them to respect my singularity and that of the people around me. In the same way, I don't kill animals for any reason other than to eat them. When I finally own some land I will keep chickens, pigs, and maybe cows. I will keep them in the environment they need, refrain from interfering with them, and eventually kill them to eat before they grow old and begin to suffer. I do this because I respect that Being: the Being of Human, the Being of Chicken, the Being of Cow. These are processes with "something" that is aware of that process.
What are machines? What are they Being? There's no discernible process there. So why should I respect one enough not to use it to my selfish advantage? You'd be hard-pressed to get me to respect it more than, say, a tree.