An excellent response to Dan's post and lots to think about again. Thanks also for the support for my position in the comments.
I definitely agree with you that transparency benefits the socially wealthy far more than the rest of us, and that as things stand some people can already do terrible things and still achieve success in our world. I accept that humanity will always include some immoral elements, and that we should aim to limit them as much as possible through collective vigilance, but I fear the unknown with AI. Perhaps through such discussion we might decide it is the best route after all, or find an outcome with fewer unknowns.
Again, lots to think about. :)
I want the algorithms or AI to know, but for that knowledge to stay private (human access should be restricted). This way my own bias will not get in the way of making the best possible decisions from the information that exists.
I'm not trying to be a smart-arse, but it is clear that you have a bias towards AI in terms of its ability to lead the way in this area, and that you would like to see such a process implemented because you feel it is the best route forward. I would suggest, though, that you should perhaps propose a discussion to discover whether all of us feel this is the best route forward.
The difference is that with the AI option you can decide whether you want AI assistance at all, how much to trust it, what values to configure it with, and so on. The solution from Dan's article is not opt-in and not voluntary: radical transparency gives us no choice about whether to join, so we would have to develop AI just to live in that environment with no privacy, given the massive increase in complexity that extreme transparency and an unforgiving blockchain would bring.
If you would rather take your chances in a world where anything you say or do can turn the crowd against you, that is of course an option you have. I just want people to have more options than leaving things up to chance.