A lot of the graphs being used by the Anthropogenic Global Warming crowd are not based on actual data; they are based on computer models derived from proxy measurements.
There is an interesting post at TechDirt about the problems with this code.
Frequent Errors In Scientific Software May Undermine Many Published Results
It's a commonplace that software permeates modern society. But it's less appreciated that increasingly it permeates many fields of science too. The move from traditional, analog instruments, to digital ones that run software, brings with it a new kind of issue. Although analog instruments can be -- and usually are -- inaccurate to some degree, they don't have bugs in the same way as digital ones do. Bugs are much more complex and variable in their effects, and can be much harder to spot. A study in the F1000 Research journal by David A. W. Soergel, published as open access using open peer review, tries to estimate just how much of an issue that might be. He points out that software bugs are really quite common, especially for hand-crafted scientific software:
It has been estimated that the industry average rate of programming errors is "about 15-50 errors per 1000 lines of delivered code". That estimate describes the work of professional software engineers -- not of the graduate students who write most scientific data analysis programs, usually without the benefit of training in software engineering and testing. The recent increase in attention to such training is a welcome and essential development. Nonetheless, even the most careful software engineering practices in industry rarely achieve an error rate better than 1 per 1000 lines. Since software programs commonly have many thousands of lines of code (Table 1), it follows that many defects remain in delivered code -- even after all testing and debugging is complete.
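The quoted error rates make the scale of the problem easy to work out with back-of-the-envelope arithmetic. A quick sketch (the codebase size here is hypothetical, and the rates are the ones quoted above):

```python
# Estimate residual defects from codebase size and an error rate
# per 1000 lines of code (KLOC). Figures are illustrative only.
def expected_defects(lines_of_code, errors_per_kloc):
    """Expected defect count for a given size and error rate."""
    return lines_of_code / 1000 * errors_per_kloc

# A hypothetical 20,000-line analysis program:
print(expected_defects(20_000, 15))  # industry low end: 300.0 defects
print(expected_defects(20_000, 1))   # best-case engineering: 20.0 defects
```

Even at the best-case rate of 1 error per 1000 lines, a modest scientific codebase still ships with dozens of latent bugs.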
There is much more at the site. Many of the climate models are written in the R statistical language, and since most of the code is written by the researchers' grad students, it rarely gets any rigorous testing.
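For contrast, here is a minimal sketch of the kind of automated testing that scientific analysis code often skips, using Python's built-in unittest module. The `annual_anomaly` function is a made-up stand-in for a typical data-analysis routine, not anything from an actual climate model:

```python
# A minimal example of unit-testing a data-analysis function.
# annual_anomaly is hypothetical, invented for illustration.
import unittest

def annual_anomaly(temps, baseline):
    """Mean deviation of a temperature series from a baseline mean."""
    return sum(temps) / len(temps) - sum(baseline) / len(baseline)

class TestAnnualAnomaly(unittest.TestCase):
    def test_zero_anomaly_against_itself(self):
        # A series compared with itself should show no anomaly.
        data = [14.1, 14.3, 13.9]
        self.assertAlmostEqual(annual_anomaly(data, data), 0.0)

    def test_known_offset(self):
        # A series uniformly 0.5 degrees above baseline.
        base = [14.0, 14.0, 14.0]
        warm = [14.5, 14.5, 14.5]
        self.assertAlmostEqual(annual_anomaly(warm, base), 0.5)

if __name__ == "__main__":
    unittest.main()
```

Tests this simple catch sign errors, off-by-one slicing, and unit mix-ups -- exactly the defect classes the Soergel paper is worried about.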
When I worked at MSFT, I managed software test labs and they really put their code through the wringer.
One of the key indicators of the quality of the climate models is that they do not hindcast well at all. We have a couple hundred years of decent historical data - feed those into the models and the output looks nothing like the recorded climate.
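A hindcast check like this amounts to running the model over the historical period and measuring how far its output drifts from the record. A toy sketch, where both the "model output" and the "recorded" series are invented placeholders rather than real climate data:

```python
# Sketch of a hindcast skill check: root-mean-square error between
# a model's output and the historical record. All numbers are made up.
import math

def rmse(predicted, observed):
    """Root-mean-square error between a hindcast and the record."""
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
    )

recorded = [13.8, 13.9, 14.0, 14.2, 14.1]  # hypothetical historical means
hindcast = [13.8, 14.0, 14.3, 14.6, 14.9]  # hypothetical model output

# A model with real skill tracks the record closely; an RMSE large
# relative to natural variability is the red flag described above.
print(round(rmse(hindcast, recorded), 3))
```

The point of the check is that the inputs are known and the answer is known, so there is no wiggle room: either the model reproduces the recorded climate or it does not.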