If you fit a parabola to a data set, you're going to get a parabola. If you fit a logarithm curve to a data set, you're going to get a log curve. When you're curve fitting, your own assumptions dictate the shape of the result, and that result probably won't predict the future (a few toy sketches below the excerpt make the point concrete):
Climate change prophecy hangs its hat on computer climate models. The models have gigantic problems. According to Kevin Trenberth, once in charge of modeling at the National Center for Atmospheric Research, “[None of the] models correspond even remotely to the current observed climate [of the Earth].” The models can’t properly model the Earth’s climate, but we are supposed to believe that, if carbon dioxide has a certain effect on the imaginary Earths of the many models, it will have the same effect on the real Earth.
The climate models are an exemplary representation of confirmation bias, the psychological tendency to suspend one’s critical faculties in favor of welcoming what one expects or desires. The models have numerous adjustable parameters that climate scientists can tune to give a “good” result. Technically, a good result would be one where the model output matches past climate history. But that good result competes with another kind of good result: a prediction of a climate catastrophe. That sort of “good” result has elevated the social and financial status of climate science into the stratosphere...
Testing a model against past history and assuming that it will then predict the future is a methodology that invites failure. The failure starts when the modeler adds more adjustable parameters to enhance the model. At some point, one should ask if we are fitting a model or doing simple curve fitting. If the model has degenerated into curve fitting, it very likely won’t have serious predictive capability.
A strong indicator that climate models are well into the curve-fitting regime is the use of ensembles of models. The Intergovernmental Panel on Climate Change (IPCC) averages together numerous models (an ensemble) in order to make a projection of the future. Asked why they do this rather than try to pick the best model, they say that the ensemble method works better. Why would averaging worse models with the best model make the average better than the best? This is contrary to common sense. But according to the mathematics of curve fitting, if different methods of fitting the same (multidimensional) data are used, and each method is independent but imperfect, averaging the fits together will indeed give a better result. It works better because of a mathematical artifact: the models have so many adjustable parameters that they can fit nearly anything.
One may not be surprised that the various models disagree dramatically, one with another, about the Earth’s climate, including how big the supposed global warming catastrophe will be. But no model, except perhaps one from Russia, denies the future catastrophe.
Read the whole thing.
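To make the curve-fitting point at the top concrete, here is a toy sketch in Python. The numbers are made up and have nothing to do with climate data; the point is only that the same handful of points, fit once with a parabola and once with a logarithm, give very different answers the moment you step outside the data.

```python
# Toy sketch: the chosen functional form dictates the extrapolation.
# The data points below are made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 0.8, 1.2, 1.4, 1.6])

# Parabola: y = a*x^2 + b*x + c, fit by least squares.
a, b, c = np.polyfit(x, y, 2)

# Logarithm: y = p*ln(x) + q, also fit by least squares.
p, q = np.polyfit(np.log(x), y, 1)

x_future = 20.0
print("parabola predicts:", a * x_future**2 + b * x_future + c)
print("log curve predicts:", p * np.log(x_future) + q)
# Both curves pass reasonably close to the five points, yet their
# predictions at x = 20 differ wildly. The assumption baked into the
# chosen form, not the data, drives the forecast.
```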
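The excerpt's point about adding adjustable parameters is the classic overfitting story. The sketch below uses synthetic data, not anything from an actual climate model: as the polynomial degree (the number of adjustable parameters) rises, the fit to the "past" keeps improving while the forecast of the held-out "future" typically gets worse.

```python
# Sketch of model fitting degenerating into curve fitting.
# Synthetic data: a simple trend plus noise, nothing from a real model.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(40, dtype=float)
x = years / 40.0                    # rescaled time, keeps the fits well conditioned
data = 2.0 * x + rng.normal(0.0, 0.5, x.size)

train, future = slice(0, 30), slice(30, 40)   # fit on the "past", test on the "future"

for degree in (1, 3, 9):
    coeffs = np.polyfit(x[train], data[train], degree)
    fit = np.polyval(coeffs, x)
    hindcast = np.sqrt(np.mean((fit[train] - data[train]) ** 2))
    forecast = np.sqrt(np.mean((fit[future] - data[future]) ** 2))
    print(f"degree {degree}: hindcast RMSE {hindcast:.2f}, forecast RMSE {forecast:.2f}")
# As parameters are added, the hindcast error shrinks while the forecast
# error typically grows: the signature of curve fitting rather than modeling.
```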
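As for why averaging an ensemble can beat its best member, the mathematics is easy to demonstrate with made-up numbers. In the sketch below, each "model" is just the true value plus its own independent, unbiased error, and the ensemble mean beats a typical single model by roughly the square root of the ensemble size. The catch is the assumption itself: errors shared by all the models, such as a common structural bias, do not average away.

```python
# Sketch of the ensemble-averaging argument with synthetic "models".
import numpy as np

rng = np.random.default_rng(1)
true_value = 2.0
n_models, n_trials = 10, 5000

# Each model = truth + its own independent, zero-mean error.
estimates = true_value + rng.normal(0.0, 1.0, size=(n_trials, n_models))

single_rmse = np.sqrt(np.mean((estimates[:, 0] - true_value) ** 2))
ensemble_rmse = np.sqrt(np.mean((estimates.mean(axis=1) - true_value) ** 2))

print(f"single model RMSE:  {single_rmse:.2f}")    # about 1.0
print(f"10-model mean RMSE: {ensemble_rmse:.2f}")  # about 1/sqrt(10), i.e. 0.32
# The improvement depends entirely on the errors being independent and
# unbiased; a bias shared by every model survives the averaging intact.
```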
1 comment:
Thank you for explaining the differences. Not being a math type, I didn't know all of that. Makes sense to me.