Often modelling simply adds rigour to thought experiments
Results from computer models should always be scrutinized carefully (just like any other result in science, really). It is essential that all source code (including version information), parameters, and hardware information be made public.
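As a minimal sketch of what publishing that information can look like in practice (the file name and parameter values here are invented purely for illustration), a simulation script can dump its environment and inputs alongside the results:

```python
import json
import platform
import subprocess
import sys

# Hypothetical simulation parameters -- in a real project these would be
# the actual inputs to the model.
PARAMS = {"oxygen_inflow": 0.8, "nutrient_inflow": 1.5, "dt": 0.01}

def provenance():
    """Collect the environment details a reader needs to rerun the model."""
    info = {
        "python": sys.version,
        "platform": platform.platform(),
        "machine": platform.machine(),
        "parameters": PARAMS,
    }
    # Record the exact source revision if the code lives in a git checkout.
    try:
        info["git_commit"] = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        info["git_commit"] = "unknown (not a git checkout)"
    return info

if __name__ == "__main__":
    with open("run_metadata.json", "w") as fh:
        json.dump(provenance(), fh, indent=2)
```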
I often find that the failure of a simulation is the most interesting part. People postulate a mechanism to explain some phenomenon and might leave it at that: a just-so story. By building a computer simulation you can test ideas more rigorously, and you sometimes find that the proposed mechanism cannot produce the observed effect. Once you have ruled out numerical instability or plain programming errors, you can safely state that the proposed mechanism does not explain the observed behaviour, and that the theory therefore requires some adaptation.
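One routine way to rule out numerical instability is a convergence check: rerun the same integration with successively smaller time steps and confirm the answer stops changing. A minimal sketch, using a toy one-variable model and an explicit Euler step (both invented here, just to show the pattern):

```python
def simulate(dt, t_end=10.0):
    """Explicit Euler integration of a toy model dx/dt = -x + 1.
    Stands in for whatever mechanism is actually being tested."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (-x + 1.0)
    return x

# Halve the time step until the result stabilises.
previous = simulate(0.1)
for dt in (0.05, 0.025, 0.0125):
    current = simulate(dt)
    print(f"dt={dt:.4f}  x(t_end)={current:.6f}  change={abs(current - previous):.2e}")
    previous = current
```

If the quantity of interest keeps drifting as the time step shrinks, the "failure" may be an artefact of the numerics rather than a property of the mechanism.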
Likewise, leaving out parts of a system, either because they are too hard to model or because you want to start with a simple model and add complexity later (always a good approach), can also give important information. I have done some modelling of bacterial communities in the intestine (insert "your model is shit" joke here), and found that quite a few properties can be captured well enough by fairly simple models. For example, even without modelling an immune system, the ratio of aerobic to anaerobic bacteria could be reproduced very well from decent estimates of the amounts of nutrients and oxygen entering the system. Practically nothing else had ANY impact on that ratio: not the choice of ODE/PDE solver, the hardware, the time steps, or any other parameter. Not that surprising to biologists perhaps, but medical researchers had assumed the immune system controlled the intestinal microflora. It may well do so, but apparently it is not needed to set the aerobe/anaerobe ratio.
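To make the flavour of such a model concrete, here is a deliberately crude, hypothetical chemostat-style sketch (not the actual model; every rate constant is made up) in which aerobes need both oxygen and nutrients while anaerobes need only nutrients:

```python
from scipy.integrate import solve_ivp

# Hypothetical sketch, NOT the real model: aerobes (A) consume oxygen (O)
# and nutrients (N); anaerobes (B) consume nutrients only. All constants
# are invented for illustration.
O_IN, N_IN = 0.5, 2.0   # inflow of oxygen and nutrients (assumed values)
WASHOUT = 0.1           # dilution/washout rate

def rhs(t, y):
    A, B, O, N = y
    grow_A = 1.0 * A * O * N   # aerobic growth needs both O and N
    grow_B = 0.8 * B * N       # anaerobic growth needs N only
    return [
        grow_A - WASHOUT * A,
        grow_B - WASHOUT * B,
        O_IN - grow_A - WASHOUT * O,            # oxygen used by aerobes
        N_IN - grow_A - grow_B - WASHOUT * N,   # nutrients used by both
    ]

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.1, O_IN, N_IN], rtol=1e-8)
A, B = sol.y[0, -1], sol.y[1, -1]
print(f"steady-state aerobe/anaerobe ratio: {A / B:.3f}")
```

In a sketch like this, the steady-state ratio is pinned down by the inflow terms O_IN and N_IN; swapping the solver or tightening the tolerances only changes how accurately you compute the same number, which mirrors the observation above.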
By contrast, if your simulation explains things nicely, you have not learnt that much: only that your mechanism is plausible. Proof requires a lot more than mere simulation.