You are viewing a single comment's thread from:

RE: Total transparency benefits the top of the pyramid and may not actually work as intended (there are costs)

in #politics · 7 years ago

I really agree with you here, and I'm glad you decided to gather your thoughts into a full post response. I hadn't thought of it exactly the way you present it here, but it's inspired me.

For instance, I really like data being available but scrubbed in some way. It's really useful to have huge amounts of medical data, but it's not so great if that data is tied to individual identities. I also favor one-way functions that reduce the dimensionality and/or granularity (detail) of information, perhaps also redacting some of it.
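A minimal sketch of what that scrubbing could look like, assuming a hypothetical medical dataset (the field names, salt, and bucketing choices below are all illustrative, not any real de-identification standard):

```python
import hashlib

# Hypothetical patient records; names and fields are illustrative only.
records = [
    {"name": "Alice Example", "age": 34, "zip": "90210", "diagnosis": "asthma"},
    {"name": "Bob Example", "age": 67, "zip": "10001", "diagnosis": "diabetes"},
]

SALT = b"research-dataset-v1"  # assumed secret salt, held only by the data custodian

def scrub(record):
    """One-way pseudonymize the identity and coarsen the remaining detail."""
    # One-way function: identity -> opaque token (irreversible without the salt)
    token = hashlib.sha256(SALT + record["name"].encode()).hexdigest()[:12]
    return {
        "id": token,
        "age_band": f"{record['age'] // 10 * 10}s",  # reduce granularity: 34 -> "30s"
        "region": record["zip"][:3] + "XX",          # redact fine-grained location
        "diagnosis": record["diagnosis"],            # keep the medically useful field
    }

scrubbed = [scrub(r) for r in records]
```

The useful aggregate signal (diagnosis rates by age band and region) survives, while the identity only exists behind a salted one-way hash.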

It's really important on a blockchain to know what all the accounts are doing, but it is great that we don't necessarily know who is doing it. Or, put the other way around, it's good that when I see someone on the street I don't know what their blockchain account or public key is.
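This asymmetry can be sketched with a toy address scheme (purely illustrative; real chains use their own scheme-specific hashing and encoding): the ledger shows what an address does, but the address is a one-way digest of a key, not a person.

```python
import hashlib

def address_from_pubkey(pubkey_bytes: bytes) -> str:
    # Hypothetical address scheme: a truncated SHA-256 digest of the public key.
    # The mapping is one-way: the address reveals nothing about who holds the key.
    return "acct_" + hashlib.sha256(pubkey_bytes).hexdigest()[:16]

pubkey = b"\x04" + b"\x11" * 64  # stand-in bytes for a real elliptic-curve public key
addr = address_from_pubkey(pubkey)

# Anyone can verify that a given key corresponds to an address...
assert address_from_pubkey(pubkey) == addr
# ...but nothing on-chain links the address back to a street-level identity.
```

So the chain stays auditable (every action of `addr` is public) while the person behind it stays pseudonymous.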

Regarding @dan's imagined world of totalitarian transparency, I think this is one of the most original thoughts you have on it:

> The idea of total transparency sounds most beautiful to the wealthiest in society. When I use the word "wealth" I'm not speaking in the narrow sense where the richer you are, the more on top you are, the wealthier you are. By wealth I mean not just net worth but your traits.

Exactly. As I said in one of my comments, in some way we are all abnormal. We may be the most average person and still have a couple of weird quirks. That's what a norm is: not "this is what a normal person is like" but "these are the things most of us find acceptable."

We still need norms, and in any case we cannot get rid of them, but forcing people to expose their abnormalities will favor a "positive" average (an average of only the positive traits).


In my opinion, true freedom comes from having both privacy (between humans) and transparency (to the AI/machines). That is, if the AI is collaboratively developed, decentralized, and trending toward being unbiased. I don't assume bias will be removed overnight, just as I don't assume any software will be bug-free. But if we design it so that iterative improvement trends toward less and less bias, then we will eventually reach a point where we are satisfied with some minimum level of bias, and it will certainly be better than humans. Humans don't typically become less biased as they get older or as they are exposed to more knowledge; some even ignore the latest knowledge when it conflicts with their worldviews or feelings.

Good point; it could work in theory. I wouldn't dismiss it outright, but I would need to see it in action.