All models are wrong

Since mental models are so important to trouble hacking, it seems only appropriate to start with this observation from mathematician George Box. 
"All models are wrong; some models are useful."

Box, G. E. P. (1979), "Robustness in the strategy of scientific model building", in Launer, R. L.; Wilkinson, G. N. (eds.), Robustness in Statistics, Academic Press, pp. 201–236.
Box was saying that systems in the real world are so complex that any model of them must be a simplification. You have to ignore some aspects of the real world in any model, and this makes your model wrong. But a wrong model can still be useful for making predictions about the real world; it just depends on whether the simplifications in the model matter to the behaviors you are trying to predict.

This is as true for software systems as it is for physical systems. The behavior of a software system results from a complex interplay between hardware, users, data, and time. And here's the key point: when we test software as part of the development process, we are really testing a model. A deployed software system may be vastly more complex than our model; it may have many more users, or much more data, or run for much longer.

If our test systems are models, then we know that they are wrong. It should come as no great surprise, then, that users will find defects that the test process didn't. Nonetheless, our test models can be useful; we may still be able to make predictions about a deployed system based on our testing. The point is to be aware of the fundamental wrongness of our test models, and to work to improve those models if we discover that they differ from the deployed software in important ways.

How does this apply to trouble hacking? Well, as you begin investigating a problem, you have a model in your head of how the system works, and you compare what you observe from the deployed system to what you expect from your model. But since your model (like all models) is wrong, there are two things that you should expect to happen. First, your model is going to evolve as you learn more about the problem. Second, your model influences how you interpret what you observe from the system; if you hold onto a model for too long, you might miss the important clues that will lead you to solving the problem.
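To make the "test model at scale" point concrete, here is a toy sketch (the function, IDs, and bug are all hypothetical, invented for illustration): a username generator that silently assumes user IDs fit in four digits. The test data, like most test fixtures, is small enough that the assumption holds, so the model predicts the system is correct.

```python
def make_username(user_id: int) -> str:
    # Hypothetical defect: the format quietly assumes IDs fit in
    # four digits, which happens to be true of every test fixture.
    return f"user{user_id % 10000:04d}"

# The test model: a handful of small IDs, as a test database might hold.
test_ids = [1, 42, 737]
assert len({make_username(i) for i in test_ids}) == len(test_ids)  # passes

# The deployed system: far more users than the model assumed.
deployed_ids = [123, 10123]
names = {make_username(i) for i in deployed_ids}
# Both IDs collapse to "user0123" -- a collision the test model
# could never have revealed, because its simplification hid it.
assert len(names) == 1
```

The test wasn't badly written; its model of the deployed system simply omitted the one dimension (population size) on which the defect depended.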
I’ll look at these two topics in more depth in future posts.