First of all, congratulations on the well-deserved curie. Now, to the blog. I waited until the end of my day because I knew this would take time and concentration. This is not the sort of blog one spits out an answer to. The ideas discussed provoke much thought. There are some impressions, however, I would like to share:
- I don't think I would trust anyone to make the machine intelligence you envision. The Buddhist concepts might be ideal, but unless you can get Buddhist monks to engineer these machines, I don't think the ideals would be incorporated.
- That the machine would have no sense of self seems like a good thing, eliminating ego and the drive toward the things that separate us from others. But ironically, I think it is precisely a sense of self that gives us compassion. When we see ourselves in others, we feel their suffering; that is the root of empathy. If we didn't have a sense of self, we wouldn't feel the pain of others so deeply.
I can see how writing this and doing the research was exhausting. I listened to Buddhism and Science and fell asleep partway through, but I will put it on again during my waking hours so that I absorb it better.
This very thoughtful piece obviously deserves another reading. So much of what you discuss interests me. But for now, this is my reaction.
You do a lot of thinking. I wonder: would AI have moods, and aren't moods important?