Some time ago on Twitter I commented about how it seemed like 2020 was the year that science became politics. It makes sense in a world where everything is both polarized and politicized.
As I’ve thought about it more in recent weeks, I’m beginning to believe that much of the problem lies with science itself. Or, more specifically – with data.
Back in centuries and millennia past, there was no good access to data. Much of our scientific progress came from thinkers who noticed something that could not yet be measured, formed a hypothesis, and then set out to test whether it was true. The thought – the scientific breakthrough – came before the data.
Today, the opposite is true. We have access to so much data, yet we have no idea what to do with it. It’s backward from how it used to be. Instead of having the breakthrough first and then looking for evidence, now we start with the numbers and then try to find significance.
This dynamic – where we take the data first and then try to assign a meaning to it – has led to a lot of bad science.
Numbers can be made to say almost anything. You can come up with 100 data points that say X is good and 100 more that say X is bad. This is sometimes done deliberately and with malicious intent, but not always. There is just too much data, and it leaves us confused.
So we have all these numbers – this information – and we try to make it mean something. We get caught up in it, and we try to assign meaning that isn’t really there. It’s like a mirage in a desert.
The fact is that you just can’t trust most numbers anymore. Everything should be taken with a grain of salt. There should be no one source or study that you listen to without hesitation. You can’t rely on this vague notion of ‘data’.
It’s an uncomfortable place to be. The numbers might be legitimate, but the conclusions drawn from them are very often not. Nobody gets to be an authority anymore.
Take a look at the chart below from a 2016 article by FiveThirtyEight about how you can’t trust nutrition information. Note the correlations.
These correlations are statistically significant from the sample, yet they mean absolutely nothing.
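This isn’t magic – it’s the multiple-comparisons problem, and you can reproduce it at home. The sketch below (my own illustration, not from the FiveThirtyEight piece; the sample sizes and variable counts are made-up assumptions) generates pure random noise, then tests every pair of variables for correlation. At a p < 0.05 threshold, roughly 5% of the tests come up “significant” even though there is, by construction, nothing there.

```python
# Multiple-comparisons sketch: test enough unrelated variables against
# each other and some correlations will be "statistically significant"
# purely by chance. Every number below is pure random noise.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects = 30    # sample size, like a small survey
n_variables = 50   # e.g. 50 unrelated "foods" and "outcomes" (hypothetical)

data = rng.normal(size=(n_variables, n_subjects))

tests = 0
false_positives = 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        r, p = pearsonr(data[i], data[j])  # correlation and its p-value
        tests += 1
        if p < 0.05:
            false_positives += 1

print(f"{tests} tests, {false_positives} 'significant' at p < 0.05")
# With a 5% threshold, we expect on the order of 5% of the 1,225 pairs
# to clear it despite the data being noise.
```

Run a study that asks about dozens of foods and dozens of outcomes, report only the pairs that cleared the threshold, and you get headlines from static.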
This is bad science, and it’s representative of why ‘science’ is coming to be seen as less reliable. Experts are just as guilty of this as anyone. A lot of genuinely intelligent people – experts – derive meaning from charts just as nonsensical as this one.
There are often two camps of people on this kind of stuff: the ‘trust the experts’ people and the ‘do your own research’ folks. Neither approach is correct. The experts present the bad data as fact, and the DYOR people just look for different experts who match their preconceived notions.
Obviously, plenty of good also comes from our ability to measure so much. But the rise in available information has led to a world where sources that we have traditionally believed to be ‘fact’ cannot be completely relied on. You need to be more skeptical.
There’s too much data out there. And it’s making us all dumber.