The methods are thorough, complicated, and account for almost everything, no doubt about it.
This conversation, however, brought to mind an article or a lecture I read/saw long ago (I can't remember which) about "noise" and the filtering of data.
It was about how, despite "filtering" data as much as possible, we'd still end up with some "noise" left over, which we usually ignore and throw away. But when looked at again and reassessed as part of the bigger scheme of things, that noise proved to be an integral and essential part of reality. Without it, the actual "interesting" data wouldn't give the whole picture or make much sense.
(I'm sorry I couldn't find it, and I probably explained it a little messily in order to keep it short.)
What I think @agmoore was partly talking about is that the filtering done by the trigger mechanisms, and all the other consequential filtering that takes place, could be omitting some interesting "stuff".
@lemouth, maybe you needed to include more info about what "gets ignored" and why, and why the low-energy events are considered uninteresting.
"Minimum bias" also, is just that, "minimum".
Moving on to the very interesting brain analogy and the filtering of our senses: such "filtering" does an amazing job of giving us useful data for "our needs", but it's far from the full reality. In the end, our perception is very limited, and we detect only a tiny part of what's "out there".
To sum up and to clarify: do you think there are limitations in how the data is collected/processed at the LHC that could make a "big difference" in the results?
Isn't this noise simply entering the uncertainties inherent to any measurement? At least that is how I see it in particle physics.
This is precisely what minimum bias event recording is aimed at. We randomly record events as they come, without any filtering. However, these events are dominated by Standard Model contributions, so it would be very hard to see any (rare) signal in them. It is more of a control check of the Standard Model (or of its quantum chromodynamics corner dedicated to the strong interactions).
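The idea can be sketched with a toy simulation (this is not actual trigger code; the event labels, counts, and keep fraction are all invented for illustration). Because minimum-bias recording keeps a random fraction of events with no selection on their content, the recorded sample simply mirrors the incoming stream, which is overwhelmingly QCD:

```python
import random

def minimum_bias_record(events, keep_fraction):
    # Keep each event with a fixed probability, independent of its content
    # (no trigger decision at all).
    return [e for e in events if random.random() < keep_fraction]

random.seed(1)
# Toy event stream: QCD events vastly outnumber a hypothetical rare signal.
events = ["qcd"] * 999_990 + ["rare"] * 10
random.shuffle(events)

recorded = minimum_bias_record(events, keep_fraction=0.01)
# The recorded sample is almost entirely QCD; the rare signal is
# essentially absent from it.
```

With a keep fraction of 1%, on average only 0.1 of the 10 rare events survive, so the rare signal effectively never makes it into the minimum-bias sample.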
This corresponds to event configurations for which we are sure the Standard Model is correct (because we have a century of data in this regime). According to the Standard Model, the data will be dominated by events originating from quantum chromodynamics interactions, which occur at a huge rate. Therefore, any rare phenomenon would simply be invisible (i.e. well hidden in the error bars).
I do not think so. What we filter away are configurations for which we know exactly how the Standard Model behaves, and for which we know the Standard Model is the right theory. As no new phenomenon has been observed there in the last 100 years, if any new phenomena exist in that regime, they must be rare.
Actually, they should be so rare that we would not notice their presence, given the typical size of the error bars on any measurement at the LHC. Therefore, even if there are new phenomena in what we filter away, they would be beyond our capacity of detection (and are thus not relevant).
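The "hidden in the error bars" argument can be made concrete with a back-of-the-envelope Poisson estimate (all numbers here are invented for illustration, not real LHC figures). A counting measurement with expected background N carries a statistical uncertainty of roughly sqrt(N), so a rare signal much smaller than sqrt(N) is indistinguishable from a fluctuation:

```python
import math

# Toy numbers: a huge QCD background and a hypothetical rare signal.
background = 10_000_000                    # expected background events
signal = 500                               # hypothetical new-physics events
stat_uncertainty = math.sqrt(background)   # ~ Poisson error, about 3162 events

significance = signal / stat_uncertainty
# significance is about 0.16 standard deviations: far below any discovery
# threshold, so the excess is invisible inside the error bar.
```

Only when the signal grows to a few times sqrt(N), or when a trigger selection suppresses the background, does the excess become statistically visible.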
I hope this clarifies things. Otherwise, feel free to come back to me. :)