Reproducible results

The other day, my friend and colleague Dennis was frustrated by some sloppy mistakes he found in an old magazine article. I will take his word for it that the article has some mistakes (oh the irony…), but I would like to reiterate this takeaway:

If you don’t give people your code or your data, and don’t open your methods up for scrutiny, can you be 100% sure that you did not make a mistake?

Stated another way, should I trust someone’s results if I cannot reproduce them? With the published code and data, I can easily validate the results myself. Moreover, the supplementary material presents a different, and often more pragmatic, view of a work’s arguments and methods.
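To make that concrete, here is a minimal sketch of what such a validation pass can look like: re-run the released analysis on the released data and compare against the numbers reported in the article. The file names, the `analysis` module, and the `run_analysis` entry point are hypothetical placeholders, not anything from the article in question.

```python
import numpy as np
import pandas as pd

from analysis import run_analysis  # hypothetical module released alongside the article

data = pd.read_csv("published_data.csv")         # data released by the authors (hypothetical name)
reported = pd.read_csv("published_results.csv")  # numbers reported in the article (hypothetical name)

# Re-run the analysis with the seed the authors document, so the runs are comparable.
reproduced = run_analysis(data, seed=42)

# Exact equality is too much to ask across machines; agreement within a small
# tolerance is usually the realistic bar for "the results reproduce".
if np.allclose(reproduced["estimate"], reported["estimate"], rtol=1e-6):
    print("Results reproduce within tolerance.")
else:
    print("Discrepancy found; worth a closer look before trusting either side.")
```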

Then again, the exercise of independent validation and verification is crucial to peer review. Relying on published data and code can be a convenient crutch for a reviewer, but a weak crutch may break later on. Avoiding such crutches lends a review more confidence and adds an important element of independence.

In Dennis’ case, he had no crutch and thus was able to spot the mistakes. Had he quickly glanced over the author’s published simulation1 and seen matching results, would he have marched on only to discover his misunderstanding two hours before his deadline?