Thoughts here and in the previous posts assume that there exists an objective (correct) evaluation of human work. Any talk about "noise" would not make sense otherwise.
@jamesmart already made a similar point, but I will say it in my own words: there's no such thing as objective (or correct) evaluation of human work.
Let's say we have an ideal AI that evaluates contributions. The result would still be subjective. Why? Because any such algorithm would have to apply some criteria, and that would bias it toward those criteria as opposed to other values. So in the end, the biases of the algorithm's designer would make the whole evaluation biased. Different designers have different values, which lead them to prefer different criteria. How do we choose the criteria, then?
Therefore, the best we can do is: 1) attempt to reach consensus on what we as a community value, and 2) try to rank each other according to that consensus. Fractally consensus meetings attempt to do both in the same process.
I think the analogy of the fractally process as a measurement tool which, given enough samples, approaches some correct measurement is misleading, since it assumes that a "correct" measurement exists as a physical reality, free from human interpretation.