Jason Mitchell has a profoundly silly essay on his site explaining that because experimental errors exist, you should never, ever, ever challenge the validity of experimental social science results. If that sounds like it’s self-contradictory, well, congratulations. You are officially smarter than at least one Harvard professor.
Specifically, his argument is that if you take result A, found through an experiment, and you repeat the experiment and do not find result A, it is more likely that you screwed up the experiment than that the result wasn’t true, so all you’ve done is prove that experimenters are fallible, and we all knew that, so why are you wasting your time?
The glaringly obvious hole in this logic is that if you don’t believe any experimental results, why did you believe result A to begin with? Mitchell does, to his credit, address this, although it takes him a while to get there. Here is his take:
Although the notion that negative findings deserve equal treatment may hold intuitive appeal, the very foundation of science rests on a profound asymmetry between positive and negative claims. Suppose I assert the existence of some phenomenon, and you deny it; for example, I claim that some non-white swans exist, and you claim that none do (i.e., that no swans exist that are any color other than white). Whatever our a priori beliefs about the phenomenon, from an inductive standpoint, your negative claim (of nonexistence) is infinitely more tenuous than mine. A single positive example is sufficient to falsify the assertion that something does not exist; one colorful swan is all it takes to rule out the impossibility that swans come in more than one color. In contrast, negative examples can never establish the nonexistence of a phenomenon, because the next instance might always turn up a counterexample. Prior to the turn of the 17th century, Europeans did indeed assume that all swans were white. When European explorers observed black swans in Australia, this negative belief was instantly and permanently confuted. Note the striking asymmetry here: a single positive finding (of a non-white swan) had more evidentiary value than millennia of negative observations. What more, it is clear that the null claim cannot be reinstated by additional negative observations: rounding up trumpet after trumpet of white swans does not rescue the claim that no non-white swans exists. This is because positive evidence has, in a literal sense, infinitely more evidentiary value than negative evidence.
So basically he believes in a world in which any positive result is fundamentally more likely to be real than any negative result. Let's point out a couple of flaws in this: his analogy is stupid. Specifically, it's not analogous. Here is the proper analogy:
Common Wisdom: All swans are white.
Counterintuitive experimental result: No, I saw one that was black.
Now the reproduction test is not about, as Mitchell puts it, “rounding up trumpet after trumpet of white swans.” It is, rather, going to the place where black swans are purported to exist, looking diligently for one, and not finding it.
And further, we might note that it's not obviously true that one sighting of a black swan indicates it exists. Perhaps there is fraud. Perhaps there is not fraud, but an observer mistook a cormorant (a black, long-necked waterfowl) for a black swan. Perhaps the observer saw a swan that was covered in oil. If we only ever have a single sighting of the purported black swan, and the result is never reproduced, we should in fact wonder whether that sighting is valid.
It is also far from clear that social science experimental results fit cleanly into his positive/negative evidence model in the first place.
But Mitchell is not wrong that in many cases, there is an asymmetry of evidentiary value. If black swans exist, but are rare, we should expect that a large number of diligent observers may fail to find them. And here’s where Mitchell falls flat on his face: he doesn’t seem to have ever heard of Bayesian reasoning. This tool allows us to exit the absurd binary hole that Mitchell digs for us. We can state not just our belief, but our certainty. And different evidence can alter our certainty different amounts. In other words, where Mitchell fails is when he says that “This is because positive evidence has, in a literal sense, infinitely more evidentiary value than negative evidence.” It’s the infinitely part that is not true. Positive evidence may have more evidentiary value than negative evidence, but it’s finitely more. And, indeed, when the positive evidence is pretty flimsy to begin with (as, for example, when one is doing social science research and the asserted positive results are quite possibly a random fluke rather than something concrete like a black bird), it may have very little more evidentiary value than negative evidence.
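Mitchell's claim can be stated precisely, which makes the error easy to see. Bayes' rule tells us how much an observation E should shift our confidence in a hypothesis H:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

Evidence is "infinitely" strong only in the special case where P(E | ¬H) = 0, i.e., when the observation would be impossible if the hypothesis were false. A sighting that might be fraud, an oiled swan, or a misidentified cormorant does not meet that bar, so its evidentiary value is finite.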
When we hear of the existence of black swans, and then of a failure to find another example of a black swan, we can say, “I am now less sure than I was that black swans exist.” If there are five failures to find them, we can say, “I am much less sure than I was that black swans exist.” If there are ten failures, we can say, “I now think that, on balance, the evidence suggests they do not exist,” while remaining open to persuasion should another expedition turn one up.
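To make this concrete, here is a minimal Bayesian-updating sketch in Python. The numbers are illustrative assumptions, not anything from Mitchell's essay: a prior of 0.5 that black swans exist after the first reported sighting, and a 40% chance that any single diligent expedition would find one if they do exist.

```python
def update_on_failed_search(prior, detection_rate):
    """Posterior probability that black swans exist, given one
    expedition that searched for them and found none (Bayes' rule)."""
    p_fail_if_exist = 1.0 - detection_rate  # they exist, but the search missed them
    p_fail_if_not = 1.0                     # they don't exist, so the search must fail
    numerator = p_fail_if_exist * prior
    evidence = numerator + p_fail_if_not * (1.0 - prior)
    return numerator / evidence

# Assumed prior: 50/50 after the first (unreplicated) sighting.
belief = 0.5
for expedition in range(1, 11):
    belief = update_on_failed_search(belief, detection_rate=0.4)
    print(f"after {expedition} failed searches: P(exist) = {belief:.3f}")
```

Each failed search multiplies the odds by a finite factor (here 0.6), so confidence drains gradually rather than flipping to zero, which is exactly the continuity the essay is arguing for.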
Bayesian reasoning opens us up to a world of continuity. The way that most people approach most things in their life is a series of atomic decisions. I have belief B. Someone introduces evidence E. I now look at B and either discard B and believe not-B, or find E unconvincing and continue to believe B. And then I continue on my way. If someone now produces evidence E2, I once again reexamine B, in a context-less void. And that means that people don’t change their minds unless someone can produce a really compelling piece of evidence.
You see this with climate change. Someone’s prior belief is (reasonably!) that humans do not have a significant effect on the weather. You show them a series of historical temperatures. They (reasonably!) say, “Oh, but that could be a natural cycle.” They don’t change their belief. You show them a computer model. They (reasonably!) say, “Oh, but your computer model is massively less complicated than the real world, and doesn’t predict very well.” They don’t change their belief. Etc.
And, lest anyone think I’m being partisan here, you can see the same thing with, oh, say, GMO foods, where environmentalists are utterly intractable in the face of overwhelming evidence that GMO foods are harmless.
With a Bayesian mindset, you can reconcile the notion of accepting evidence into your worldview without changing all your beliefs at a moment’s notice. By explicitly conditioning yourself to think of your beliefs not as binary monoliths, but as shades of certainty, you can accept evidence into your worldview without either becoming a hopeless waffler or needing to remember with perfect clarity everything you’ve ever been told. You can say to yourself, “Self, I should be less certain of this thing, even if I don’t recall the exact details of the argument the next time it comes up.” Lossy compression for the mind.
Also, you’ll be less likely to write a self-congratulatory, toolish essay like Jason Mitchell’s.