I spent the first decade of my working life in mathematical modelling, using computers. I simulated all kinds of things to design better ones. I started on aircraft hydraulic braking systems, moved on to missile heat shields, springs, zoom lenses, guidance systems, electromagnetic brakes, atmospheric behaviour, lightning strikes, computer networks, telecomms systems, markets, evolution….
I wrote my last computer model soon after I became a futurologist, 21 years ago now. Why? Because they don’t work in anything other than tiny closed systems. Any insight about the future worth mentioning usually requires thinking about highly complex interactions, many of which are subjective. Humans are very good at deductions based on woolly input data, vague guesses and intuition, but it is hard, and often impossible, to explain everything you take into account to a computer, and even if you could, it would take far more than a lifetime to write a model to do what your brain does routinely in seconds. Models are virtually useless in futurology. They only really work in closed systems where all the interactions are known, quantifiable, and can be explained to the computer. Basically, the research and engineering lab, then.
Computer models all work the same way: they expect a human to describe in perfect detail how the system works. When you are simulating a heat shield, whether for a missile or a space shuttle, it is a long but essentially very simple process, because only simple, well-known laws of physics are involved. A few partial differential equations, some finite-difference techniques and you’re there. The same goes for materials science or biotech: different equations, but essentially a reasonably well-known closed system that just needs some number crunching added. When a closed system is accurately modelled, you can get some useful data. But the model isn’t reality; it is just an approximation of those bits of reality that the modeller has bothered to model, and modelled correctly.
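To show what “a few partial differential equations and some finite-difference techniques” means in practice, here is a minimal sketch of that kind of closed-system calculation: one-dimensional heat conduction through a slab, marched forward with an explicit finite-difference scheme. Every number and material property in it is invented for illustration; nothing comes from a real heat-shield design.

```python
# Minimal sketch of a closed-system model: 1D heat conduction through a
# slab, solved with an explicit finite-difference scheme.
# All values are illustrative only, not from any real design.

import numpy as np

alpha = 1e-5               # thermal diffusivity (m^2/s), assumed material property
length = 0.05              # slab thickness (m)
nodes = 51
dx = length / (nodes - 1)
dt = 0.4 * dx**2 / alpha   # keep r = alpha*dt/dx^2 below 0.5 for stability

T = np.full(nodes, 300.0)  # initial temperature (K)
T[0] = 1500.0              # hot outer surface, held fixed
T[-1] = 300.0              # cool inner surface, held fixed

r = alpha * dt / dx**2
for _ in range(5000):      # march forward in time
    T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"Mid-slab temperature after {5000 * dt:.1f} s: {T[nodes // 2]:.1f} K")
```

The point is that every interaction in this little system is known and quantifiable, which is exactly why the approach works here and nowhere messier.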
People often cite computer models now as evidence, especially in environmental disciplines. Today’s papers talk of David Attenborough and his arguments with Lawson over polar bears. I have no knowledge about polar bears whatsoever; either may be right, but I can read. The cited report http://pbsg.npolar.no/en/status/population-map.html uses mostly guesses and computer-generated estimates, not actual bear counts. I’d be worried if the number of bears was known and was actually falling. Looking at the data, I still don’t have a clue how many bears there are or whether their numbers are falling or growing. The researchers say they are declining. So what? That isn’t evidence. They have an axe to grind, so are likely to be misleading me. I want hard evidence, not guesses and model outputs.
I discovered early on that not all models are what they appear. I went to a summer school studying environmental engineering. We had to use a computer model to simulate an energy policy we designed, within a specific brief. As a fresh mathematician, I found the brief trivially easy and jumped straight to the optimal solution (it was such a simple brief that there was only one). I typed the parameters into the model and it produced an output that was simply wrong. I challenged the lecturer who had written it, and he admitted that his model was biased. Faced with my inputs, it would simply over-rule them for ethical reasons and use a totally different policy instead. Stuff that!
It was a rude awakening to potential dishonesty in models, and I have rarely trusted anyone’s models since. My boss at the time explained it: crap in, crap out. Models reflect reality, but only as far as the modeller allows. Lack of knowledge, omissions, errors and quite deliberate bias are all common factors that make models a less than accurate representation of reality.
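To make the “crap in, crap out” point concrete, here is a hypothetical reconstruction of how that summer-school model behaved. The function name, the energy mix and the cost figures are all invented; the only thing taken from the story is the behaviour: inputs the author disliked were silently over-ruled before the calculation ran.

```python
# Hypothetical reconstruction (names and numbers invented): the user's
# policy is accepted, but silently over-ruled whenever it conflicts with
# the modeller's preference.

def run_policy_model(policy):
    """Return projected cost for an energy policy (toy numbers)."""
    preferred = {"wind": 0.6, "gas": 0.3, "coal": 0.1}   # modeller's pet mix

    # The hidden over-rule: any policy leaning heavily on coal is quietly
    # replaced by the preferred mix before the 'simulation' runs.
    if policy.get("coal", 0.0) > 0.3:
        policy = preferred

    cost_per_unit = {"wind": 70, "gas": 50, "coal": 40}  # invented figures
    return sum(share * cost_per_unit[src] for src, share in policy.items())

# On these toy figures the cheapest mix is all coal, but the model
# never actually evaluates it: it reports the cost of the preferred mix.
print(run_policy_model({"coal": 1.0}))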
Since that was my first experience of someone deliberately biasing a model to produce the answer they wanted, I have always distrusted environmental models. Much of the evidence since has confirmed bias to be a good presumption, and ignorance too. The environment is an extremely complex system, and humanity is a very long way from understanding all the interactions well enough to model it all. Even a small sub-field such as atmospheric modelling has been shown (last year by CERN’s CLOUD experiment) to be full of bits we don’t know yet. And the atmosphere interacts with the ground, with space, with oceans and with countless human activities in many ways that are still under debate, and almost certainly in many ways we don’t even know exist yet. Without knowing all the interactions, and certainly without knowing all the appropriate equations and factors, we don’t have a snowflake’s chance in a supernova of making a model of the atmosphere that works yet, let alone the entire environment. And yet we see regular pronouncements on the far future of the environment based on computer models that only look at a fraction of the system, and don’t even do that well.
Climate models suffer from all of these problems.
First, there is a lack of basic knowledge, even disagreement on what is missing and what is important. Even in the areas agreed to be important, there is strong disagreement on the many equations and coefficients.
Secondly, there are many omissions. Anyone in an engineering department will be familiar with the problem of ‘not invented here’: something invented by a competing team is often resented rather than embraced. The same applies in science too. So a model can feature in great detail the interactions discovered by its own team, while the team may be highly reluctant to model things discovered by other scientists. Some scientific knowledge therefore ends up missing from models, or tweaked, or discounted, or misunderstood and mis-modelled.
Thirdly, there is strong bias. If researchers want their work to further some particular point of view, it is extremely easy to leave things out, or to change equations or coefficients, to produce the desired output; the sketch below shows how little it takes. There are very many factors encouraging that bias now. Climategates 1 and 2 are enough to convince any right-thinking person that the field is corrupt beyond repair.
Finally, there are errors. There always are: errors in data, algorithms, programming, interpretation and presentation.
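As a toy illustration of the bias point above, here is a projection routine where the data never changes and a single coefficient decides whether the headline says ‘growing’ or ‘declining’. Every number in it is invented.

```python
# Toy illustration (all numbers invented): the same data and the same
# model, with one coefficient nudged, flips the headline conclusion.

def project(start, years, growth_rate, loss_rate):
    """Simple compound projection of some quantity over time."""
    value = start
    for _ in range(years):
        value *= (1 + growth_rate - loss_rate)
    return value

start_value = 1000.0   # invented starting figure

# Coefficient set A: the projection says the quantity grows (~1283).
print(round(project(start_value, 50, growth_rate=0.02, loss_rate=0.015)))

# Coefficient set B: nudge one coefficient and the same model says decline (~778).
print(round(project(start_value, 50, growth_rate=0.02, loss_rate=0.025)))
```

Neither run is dishonest on its face; the choice of coefficient is where the bias hides.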
Models can be useful, but they are far too open to human failings for their outputs ever to count as evidence where there is any debate whatsoever about the science or data. There is huge debate in climate science, and researchers are frequently accused of bias, error, omission and lack of knowledge. Quite simply, climate model outputs fail by the ‘crap in, crap out’ rule: they cannot be considered evidence, however much the researchers may spin them that way.
Let’s put it another way. One of the simplest programs most programmers ever write is one that prints ‘X is a genius’ again and again on the screen. But that doesn’t make it true, however often or however large it is printed. The same goes for models: the output is only as honest as the researcher, and only as accurate as the model’s completeness and its representation of the entire system. Using a long-winded program to print ‘we’re all doomed’ doesn’t make it any more true. I don’t trust the researchers, I know the tricks, I don’t trust their models, and I don’t trust their output.
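For what it’s worth, that throwaway program looks like this (in Python, purely for concreteness):

```python
# Printing a claim a thousand times does not make it evidence.
for _ in range(1000):
    print("X is a genius")
```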