
RE: Does Freedom Require Radical Transparency or Radical Privacy?

in #eos · 7 years ago

I agree; I try to bring this up as much as possible in my discussions. Data is nothing without interpretation, and even just looking at so-called "raw" data implies an interpretation. There's no such thing as "just the facts".
I don't know about your AI solution; it sounds like the start of a great but terrifying sci-fi movie. AI (so far) can only work at the bidding of people, and even at its most autonomous it will still work at least indirectly at their bidding. The bias you mention is in everything we touch, including AI.

Quite true.

I would go so far as to say that we are all baddies, rather than none of us are.

And maybe this is why everyone being open to the scrutiny of everyone else would help, rather than hinder, our moral evolution. Fault-finding humans, like flea-picking apes! Grading each other's tests.

But you go on to talk about this intensely PC climate of ours. I see your point, but I would hope people will grow more intelligent than that!


My point is that humans can never be moral, so there will be no moral evolution that comes from punishment cults or "fault-finding humans". Why?

Humans can never be perfect, will always make mistakes, and will always be biased. This is why, in my opinion, transhumanism is the only path to improving the morality and ethics of the individual. We have to move beyond being mere humans who make human-level decisions, and instead start to receive decision support from intelligent machines, in the same way that humans, notoriously bad at arithmetic, found that calculators improve the precision of engineering beyond what human computers could achieve.

Bias is in everything humans touch, but bias isn't equal in everything: not everything is equally biased, and there are ways of reducing bias over time. The point is that you can reduce the bias of an algorithm or of a dataset over time (randomized sampling, for instance; see the sketch below), but this does not happen naturally just by giving humans lots of data to deal with.
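As a minimal sketch of why randomized sampling reduces bias (the population, numbers, and scenario here are all hypothetical, purely for illustration), compare a "convenience" sample against a uniformly random one:

```python
import random

random.seed(42)

# Hypothetical population: 10,000 ages, skewed toward younger people.
population = [random.triangular(18, 90, 30) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Biased "convenience" sample: only the 1,000 youngest respond.
convenience_sample = sorted(population)[:1000]

# Uniformly randomized sample of the same size.
random_sample = random.sample(population, 1000)

def mean(xs):
    return sum(xs) / len(xs)

print(f"true mean age:      {true_mean:.1f}")
print(f"convenience sample: {mean(convenience_sample):.1f}  (biased low)")
print(f"random sample:      {mean(random_sample):.1f}  (close to the true mean)")
```

The random sample lands near the true mean without anyone hand-correcting it; the convenience sample stays wrong no matter how large it gets.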

Humans will need machines to help with the debiasing process, and these machines will help debias the artificial intelligence iteratively over time. For the first generation there will be bias, but the point is that with each new generation the level of bias, measured against global criteria, should decrease. A toy version of that loop is sketched below.
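Here is a toy sketch of that generational loop, under entirely made-up assumptions (two groups, parity as the "global criterion", reweighted sampling as the correction); it illustrates the shape of the idea, not any real debiasing pipeline:

```python
import random

random.seed(0)

# Hypothetical training pool: 90% of examples from group A, 10% from group B.
pool = ["A"] * 9_000 + ["B"] * 1_000

def bias(sample):
    """Toy bias metric: distance of group B's share from parity (0.5)."""
    return abs(0.5 - sample.count("B") / len(sample))

# Each generation re-samples with weights adjusted by the previous
# generation's measured bias, nudging group B's share toward parity.
group_weight = {"A": 1.0, "B": 1.0}
for generation in range(6):
    sample = random.choices(pool, weights=[group_weight[x] for x in pool], k=1_000)
    share_b = sample.count("B") / len(sample)
    print(f"generation {generation}: bias = {bias(sample):.3f}")
    group_weight["B"] *= 0.5 / max(share_b, 1e-9)  # correction step
```

The measured bias shrinks generation by generation rather than vanishing all at once, which is the point of the paragraph: the correction is iterative, not a one-time fix.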

I'm not a big believer in transhumanism, whether in the ancient style (ascending to heaven to be perfect with God) or in the modern style of evolutions of the mind. So as for "moral evolution", I do not believe something fundamental to us must change for us to be moral. The systems we create to liberate and bind us are extra-human. Note, however, that I take this position as a sceptic: I just don't see the evidence for it, and I'm happy to be wrong.

That always puts me at odds with many technologists. I would strongly oppose fault-finding humans acting as flea-picking apes. There is something the old ways of thought had wisdom about: voluntary surrender, the choice of choosing the inevitable. We see it today, sure, but corrupted. I do not think the systems we're creating free our minds; they manipulate us.

Or, to make a concrete point, social anxiety is not something we can dismiss. Total transparency would make basket cases of the majority of humanity.