
RE: Does Freedom Require Radical Transparency or Radical Privacy?

in #eos · 7 years ago

In nature brains forget over time. A blockchain never forgets and remembers everything.

Right, and they remember very imperfectly to begin with. Each remembering, each retelling, is an alteration of the memory: narrative is added, motivations are speculated on, and both relevant and spurious detail is introduced, up to and including complete fabrication.

The human element cannot analyze the data without bias and cannot make use of the data.

I agree, I try to bring this up as much as possible in my discussions. Data is nothing without interpretation and even just looking at so-called "raw" data implies an interpretation. There's no such thing as "just the facts".

I don't know about your AI solution; it seems like the start of a great but terrifying SciFi movie. AI (so far) can only work at the bidding of people, and in any case will only ever work, at least indirectly, at their bidding. The bias you mention is in everything we touch, including AI.

"Good" and "bad" people can be narrowed down, normal and abnormal. Do we want to encourage normalcy to the maximum degree? [...] There are no "baddies". So the idea of baiting and ferreting them out, if that is the only motivation behind transparency then that in my opinion is evil. Bad is merely your own subjective definition.

While I agree with @alexander.alexis about the overreach of subjectivity here and in your closing statement, at the core I think you are right. I would go so far as to say that we are all baddies, rather than that none of us are. Isn't this what the issue with Twitter-scale social shaming is all about? Anyone can fall foul of the mob for an indefensible throwaway comment. Will we now all be judged by the entirety of the online population? For anything you could possibly say, I'm certain I can find thousands of people who would shout at you for it. In "the world as a village", this is how it works.

So, ironically, in the world of radical transparency, secrets would be even more important.


I agree, I try to bring this up as much as possible in my discussions. Data is nothing without interpretation and even just looking at so-called "raw" data implies an interpretation. There's no such thing as "just the facts".
I don't know about your AI solution; it seems like the start of a great but terrifying SciFi movie. AI (so far) can only work at the bidding of people, and in any case will only ever work, at least indirectly, at their bidding. The bias you mention is in everything we touch, including AI.

Quite true.

I would go so far as to say that we are all baddies, rather than that none of us are.

And maybe this is why everyone being open to the scrutiny of everyone else would help, rather than hinder, our moral evolution. Fault-finding humans, like flea-picking apes! Grading each other's tests.

But you go on to talk about this intensely PC climate of ours. I see your point. But I would hope people would grow more intelligent than that!

My point is that humans can never be moral, so no moral evolution will come from punishment cults or "fault-finding humans". Why?

Humans can never be perfect, will always make mistakes, and will always be biased. This is why, in my opinion, transhumanism is the only path to improving the morality and ethics of the individual. We have to move beyond being mere humans making human-level decisions, and instead start to receive decision support from intelligent machines. In the same way, humans are notoriously bad at arithmetic, and have found that calculators improve the precision of engineering beyond what human computers could achieve.

Bias is in everything humans touch, but not everything is equally biased, and there are ways of reducing bias over time. The point is that you can reduce the bias of an algorithm or of a dataset over time (randomized sampling, for instance), but this does not happen naturally just by giving humans lots of data to deal with.

Humans will need machines to help the debiasing process, and these machines will help debias the artificial intelligence iteratively over time. For the first generation there will be bias, but with each new generation the level of bias, measured against global criteria, should decrease.
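The randomized-sampling point above can be sketched with a toy example (the population, group sizes, and numbers here are all invented for illustration): a convenience sample that over-represents one group skews an estimate, while a uniform random sample lands close to the true value.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: a trait measured across two groups,
# where the smaller group has a higher average value.
population = [random.gauss(50, 10) for _ in range(9000)] + \
             [random.gauss(70, 10) for _ in range(1000)]
true_mean = statistics.mean(population)

# Biased collection: the small group is over-represented
# (e.g. it was simply easier to reach).
biased_sample = population[-1000:] + population[:1000]
biased_mean = statistics.mean(biased_sample)

# Debiasing step: draw a uniform random sample instead.
random_sample = random.sample(population, 2000)
random_mean = statistics.mean(random_sample)

print(f"true mean   : {true_mean:.1f}")
print(f"biased mean : {biased_mean:.1f}")  # pulled toward the over-sampled group
print(f"random mean : {random_mean:.1f}")  # much closer to the true mean
```

The random sample's error shrinks as the sample grows, while the biased sample stays skewed no matter how large it is — which is the sense in which bias can be reduced by design rather than by sheer volume of data.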

I'm not a big believer in transhumanism, either in the ancient styles (ascending to heaven to be perfect with God) or in the modern style of evolutions of the mind. So as far as "moral evolution" goes, I do not believe something fundamental to us must change for us to be moral. The systems we create to liberate and bind us are extra-human. Note, however, that I take this position as a sceptic: I just don't see the evidence for it, and I'm happy to be wrong.

That always puts me at odds with many technologists. I would strongly oppose the fault-finding humans as flea-picking apes. There is a wisdom the old ways of thought had: voluntary surrender, choice in choosing the inevitable. We see it today, sure, but corrupted. I do not think the systems we're creating free our minds; they manipulate us.

Or, to make a concrete point, social anxiety is not something we can dismiss. Total transparency would make basket cases of the majority of humanity.