Image processing algorithms for microscopy are often tested on small spherical particles, such as metallic nanoparticles in electron microscopy or fluorescent beads in light microscopy. In three-dimensional imaging, resolution along the third (axial) dimension is usually worse than in the other two, so when you take a picture of something spherical, it comes out elongated along that axis.
We can use image processing to restore the particle's true shape, and this kind of sample is often used as a test. In a 3D image the particles appear elongated, as below (it is a 2D picture showing particles distributed in 3D space).
We can run a computer program to try to reconstruct the correct shape of the particles. Below is the result of running such a program. If the particles in the output become spherical again, then our reconstruction system is wonderful… right?
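The toy test above can be sketched in a few lines of numpy. This is a minimal illustration, not the program used in the post: it assumes the blur is a Gaussian point-spread function that is three times wider along z than along x and y, and it restores the bead with Richardson–Lucy deconvolution, one common choice for this kind of reconstruction. The check at the end is exactly the test described here: does the restored particle look round again?

```python
import numpy as np

def blur(vol, sigmas):
    # Gaussian blur applied in Fourier space (assumes periodic boundaries)
    f = np.fft.fftn(vol)
    for axis, sigma in enumerate(sigmas):
        freq = np.fft.fftfreq(vol.shape[axis])
        shape = [1] * vol.ndim
        shape[axis] = -1
        f *= np.exp(-2.0 * (np.pi * sigma * freq) ** 2).reshape(shape)
    return np.fft.ifftn(f).real

def fwhm(profile):
    # Crude width estimate: number of samples above half the maximum
    return int((profile > profile.max() / 2).sum())

# Synthetic spherical bead of radius 5 in a 48^3 volume
n = 48
z, y, x = np.indices((n, n, n)) - n // 2
bead = (z**2 + y**2 + x**2 <= 5**2).astype(float)

# Hypothetical PSF: three times wider along z than along x and y,
# so the imaged bead is elongated in z
psf_sigmas = (3.0, 1.0, 1.0)
observed = np.clip(blur(bead, psf_sigmas), 0.0, None)

# Richardson-Lucy deconvolution (the Gaussian PSF is symmetric,
# so the mirrored kernel equals the kernel itself)
estimate = np.ones_like(observed)
for _ in range(25):
    ratio = observed / (blur(estimate, psf_sigmas) + 1e-12)
    estimate *= blur(ratio, psf_sigmas)

mid = n // 2
print("axial extent, observed vs. restored:",
      fwhm(observed[:, mid, mid]), fwhm(estimate[:, mid, mid]))
print("lateral extent, observed:", fwhm(observed[mid, mid, :]))
```

The restored bead's axial extent shrinks back toward its lateral extent, which is the "success" we expected. Notice, though, that the deconvolution was handed the very PSF used to create the blur; the sketch succeeds for the same reason the real test is limited.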
The problem is that we haven’t learned any new information. We already knew the shape of the particles; we’ve only confirmed our expectations, which is good but not the entire story. In real life, things become more complicated. We want to learn something we don’t already know. For example, we may want to learn something new about a cellular structure. If the cell is alive, things might be moving. A system that works quite well on the known may not (or may) work on the unknown, and drawing conclusions about such a complicated system is much more difficult.
This seems to be what happened to Malaysia Airlines Flight 370. The tracking systems were designed and tested for the known. A plane goes from one place to another, from Kuala Lumpur to Beijing, or from Chicago to San Francisco, on predetermined paths, emitting its own radar signature. In retrospect, the tracking procedure was tested on what amounts to a toy system: usually we know exactly where the plane is going, and furthermore the plane tells us where it is. Under typical conditions the measurement procedure confirms our expectations, but under unusual circumstances the tracking system breaks down with dramatic results.