Kris T. Huang, MD, PhD, CTO


All models are wrong.

But some are useful.


The idea that “all models are wrong” is generally attributed to the statistician George Box [1], but it certainly isn’t new. Even though all models are wrong, some are clearly quite useful, and a good model should encapsulate insight: it gives meaning to observed data and an explanation for its predictions. A good model should provide insight into a system’s mechanism.

This more traditional scientific notion of a model stands in fairly stark contrast to today’s growing reliance on so-called “black box” deep learning solutions for increasingly important decision processes. It’s not that deep learning methods themselves are not understood; rather, the models deep learning produces are, at least for the time being, not interpretable, i.e., they cannot explain the calculations that led to their predictions.
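To make the contrast concrete, here is a minimal sketch of what interpretability can look like. The feature names and data are purely illustrative (not from any real dataset): an ordinary least squares fit yields coefficients that are themselves explanations, whereas the millions of weights in a deep network admit no such direct reading.

```python
import numpy as np

# Hypothetical toy data: predict a score from two named features.
# (Names and values are illustrative, not from any real dataset.)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))          # columns: [age_scaled, dose_scaled]
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

# An interpretable model: ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each coefficient is itself an explanation: a unit increase in the
# feature changes the prediction by roughly this amount.
for name, c in zip(["age_scaled", "dose_scaled"], coef):
    print(f"{name}: {c:+.2f}")
```

A deep network fit to the same data could predict just as well, but its weights would not map onto human-readable statements like these.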

The need for insight doesn’t just apply to results; it also applies to model formation.

Continue reading “All models are wrong.”


“Water, water, every where, Nor any drop to drink.”

The Rime of the Ancient Mariner (1834 text)
by Samuel Taylor Coleridge


Kris T. Huang, MD, PhD, CTO

Deep learning requires data. Lots of it. There’s lots of medical data, almost 25 exabytes according to IEEE Big Data Initiatives [1], so where’s the problem? The problem is that more than 95% of medical data is unstructured, stored as raw pixels (over 90%) or free text, which puts it essentially out of reach of large-scale analysis.
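One common response to data scarcity is data augmentation: generating label-preserving variants of the examples you already have. The sketch below is a deliberately minimal illustration using simple flips and rotations on a stand-in array; real medical-imaging pipelines must be far more careful about which transforms preserve clinical meaning.

```python
import numpy as np

# Minimal augmentation sketch (illustrative only): produce several
# label-preserving variants of one "image" via flips and rotation.
def augment(image: np.ndarray) -> list[np.ndarray]:
    return [
        image,
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
    ]

image = np.arange(16).reshape(4, 4)  # stand-in for a 4x4 pixel image
variants = augment(image)
print(len(variants))  # four labeled examples from one
```

Each variant inherits the original’s label for free, which is exactly why augmentation is attractive when labeled medical data is scarce.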

Continue reading “Data Augmentation”

Kris T. Huang, MD, PhD, CTO

Inspired by the notion of biological neurons, deep neural networks (DNNs) loosely mimic the networked structure of a (very) simplified brain. DNNs have revolutionized and automated a number of tasks once considered next to intractable, yet we appear to be reaching a plateau as we bump up against their limitations. Opaque models, susceptibility to adversarial attacks, and large data requirements are among the weaknesses research has uncovered, quietly reminding us that although pure connectionist models like DNNs mimic biological systems, they remain, for the time being, rough approximations.
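The “loose mimicry” can be seen in the basic unit DNNs stack by the millions: a single artificial neuron computing a weighted sum of its inputs and firing through a nonlinearity. The weights and inputs below are arbitrary illustrative values.

```python
import numpy as np

# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a nonlinearity (here ReLU).
def neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
    z = np.dot(w, x) + b       # weighted sum of incoming signals
    return max(0.0, z)         # ReLU: output only above threshold

x = np.array([0.5, -1.0, 2.0])   # incoming "dendritic" signals
w = np.array([0.8, 0.2, 0.1])    # "synaptic" weights
print(neuron(x, w, b=0.0))
```

Everything a DNN does is built from layers of this simple operation, which is both the source of its power and a reminder of how far it is from a real neuron.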

Continue reading “Can An Old Neural Net Learn New Tricks?”

Kris T. Huang, MD, PhD, CTO

Deep learning is a tool. Machine perception is a potential resulting ability: the capacity of a machine to interpret data in a manner similar to humans. Being (very) loosely patterned after biological systems, deep neural networks (DNNs) accomplish certain tasks, like image classification or playing Go, with apparently human-like, at times even superhuman, skill. With performance like that, it is easy to believe (i.e., extrapolate) that their behavior is human-like, or perhaps in some way better.

Continue reading “If Deep Learning Were Human…”