Kris T. Huang, MD, PhD, CTO

All models are wrong.

But some are useful.

The idea that “All models are wrong” is generally attributed to statistician George Box [1], but it certainly isn’t new. Even though all models are wrong, some are clearly quite useful. A good model should encapsulate insight: it gives meaning to observed data, explains its predictions, and sheds light on the system’s underlying mechanism.

This traditional scientific notion of a model stands in fairly stark contrast to today’s growing reliance on so-called “black box” deep learning solutions for increasingly important decision processes. It’s not that deep learning methods themselves are poorly understood; rather, the models they produce are, at least for the time being, not interpretable, i.e. they cannot explain the calculations that led to their predictions.
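
To make the contrast concrete, here is a minimal sketch (an illustration added here, not taken from the post) using scikit-learn: a linear model’s coefficients read directly as per-feature effects, while even a small neural network spreads its prediction across more than a thousand weights that offer no such reading.

```python
# Minimal sketch (assumed example): an interpretable model vs. an
# opaque one, fit to the same synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

linear = LinearRegression().fit(X, y)
# Interpretable: each coefficient states how the prediction moves per
# unit change in that feature.
print("linear coefficients:", linear.coef_)        # ~ [ 2.0, -1.0, 0.0 ]

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
# Opaque: the prediction is a composition of many weights; no single
# parameter explains why a given prediction was made.
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP parameters:", n_params)                 # over a thousand
```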

The need for insight doesn’t just apply to results; it also applies to model formation.

Continue reading “All models are wrong.”

Kris T. Huang, MD, PhD, CTO

Make no mistake: neural networks are powerful tools. This class of algorithms single-handedly drove drastic and rapid advances in tasks like classification, speech recognition, and natural language processing, ending the second “AI winter” that lasted from the late 1980s until around the late 2000s.

Continue reading “Medicine, Deep Learning, and Black Boxes”