Assuming there is no apocalyptic event that sets humanity back technologically (as in the novel “A Canticle for Leibowitz”), it is likely that cybernetics and direct biological manipulation will at some point advance far enough to allow at least some individuals to become substantially different from present-day humans. For example, people may merge with artificial intelligence by integrating the brain with specialized computers, use direct biological manipulation to enhance cognitive functioning and perception, adopt specialized prosthetic devices, or even upload their consciousness into a computer.
Regardless of the manipulations undertaken, enhanced individuals will be substantially different from humans as they are now, and given the stubborn divisions between haves and have-nots that exist, it isn’t a foregone conclusion that everyone will have access to such technology; indeed, there is a good chance that, at least for some period, contemporary humans will co-exist with these enhanced beings (at what point will it still be accurate to call them human?). This raises a legitimate ethical question: what, if any, ethical obligations will such enhanced individuals have towards those who are unenhanced?
Most individuals at least act as if humans have certain ethical obligations towards one another (e.g., refraining from slavery, slaughter, robbery, and rape). Those who violate these general ethical precepts are either criminals, deranged, or employed by governments. That doesn’t even take into account the altruism that exists at least at the family level and which, through ideals such as patriotism and nationalism, can be expanded to encompass larger groups of people.
Will extremely enhanced humans – for example, those for whom it is no longer possible to determine whether they are biological beings, cybernetic beings, or entirely machine-based (as when a consciousness has been uploaded into a supercomputer) – have any obligations towards non-enhanced humans? Is it possible that they will see non-enhanced humans as dead weight competing with their enhanced selves for resources? Will they see non-enhanced humans as slave material?
How these questions are answered in practice (regardless of the theoretical conclusions drawn from them) will likely determine the future trajectory of the species and its successors. If these enhanced humans or post-humans feel no ethical obligations towards non-enhanced humans, they may well find it most convenient to eliminate humans or, at a minimum, dominate them. They will likely have fewer ethical hang-ups about such behavior because they would be dominating a lesser species (in the same manner that humans domesticate and dominate other animals). It is difficult to predict whether such a being, upon encountering an obviously weaker and less capable creature, will recognize its own origins in that creature and allow it to live in peace, or will see only an object to dominate or to eliminate from resource competition. Without altruistic tendencies towards those who are unenhanced, humanity will be the first species currently known to have created, through its own intellect, its own successor.
I am interested in hearing what others think about this. There are articles about what AI will or could do to humans, but there isn't much that explores what a self-created successor species might do to the remaining humans. Also, to give credit where credit is due, here is a link to the site I got the image from: click here