As someone interested in the science of climate, I often see claims that the current paths used by researchers to investigate our earth are not 'science'. That is, part of the program of climate science uses physical models based on both classical and quantum physics to reproduce past climatic conditions and to predict future ones. Some believe that since some of these model outputs cannot be 'falsified', the entire endeavor is un-scientific.
I do agree that there is some truth to parts of these claims. Prof. Roger Pielke Sr. has long been critical of aspects of climate model use and makes important points. In fact, just today, in response to a paper published in Geophysical Research Letters, Pielke the Elder pointed out,
'Moreover, they (the authors of said paper) write “[t]he credibility of model-simulated cold extremes is evaluated through both bias scores relative to reanalysis data in the past and multi-model agreement in the future.” The testing against reanalysis data for the period 1991-2000 is robust science. However, bias scores using “multi-model agreement in the future” is a fundamentally incorrect approach.'
Now, in this post, Pielke the Elder pulls no punches as to how he feels about this paper and the research program that he believes it more broadly represents. He calls the approach 'scientifically flawed', says it 'fails as robust science', and argues that it represents a 'failure of the scientific method'.
But he never says that it is not 'science'. To him, it's just bad 'science'.
I think this is an important distinction to make in the context of this notion of 'science'. Some commenters are stuck on the idea that there are very strict rules, mostly coming from esoteric and mostly useless (in my opinion) philosophical texts, that parse out the exact way in which 'science' is to be done. A course of investigation that follows this path can bear the name 'science', while anything less is disparaged and not worth a second glance. These proponents of a squeaky-clean notion of 'science' often point to the simplest of physical laws to make their case, most times confusing the finished law with the messy process by which that physical knowledge was actually discovered and verified.
What one finds in the process of doing actual scientific research, however, is that the process is quite different from these 'rules of science'. One often finds there is not a clear-cut 'answer' to a specific situation. The data are consistent with multiple models, or the models cannot account for the full variation that the data spell out, or several other, more complex situations emerge from a given research track. In such a situation, which is very, very common in scientific research, the practitioner finds that the 'rules of science' are not very helpful. Specific hypotheses cannot be rigorously tested, yet the structure of the scientific community necessitates 'a paper'. It becomes a matter of the researcher's judgment as to what explanation seems most reasonable.
Sometimes that judgment is good. Other times it's not so good.
From that point, other researchers can pursue the problem further, with different methods and approaches that may shed light on an aspect unseen originally. Replication can at times strengthen confirmation, but it can also create more confusion, given what is already known in that situation. And so the process continues.
Which brings us to the larger point. 'Science' is what needs to be done to 'solve' a particular research situation. That might mean using a computer simulation of a physical or biological model with known errors. It might mean using an experimental technique not originally designed for the experiment you want to run. Ideally, one would find the absolute best technique for a given research 'job' and use that technique to its utmost ability. Unfortunately, one often finds oneself in a situation where any data helps better inform one's judgment about what is happening. Therefore, we are willing to use techniques that suffer from poorer resolution of the pertinent dynamics, or numerical calculations with known problems. It may simply be a resource problem, but that can't stop us from trying to do 'science'.
All of that said, if one is using a known problematic technique, that fact has to be made very transparent when the work is reported to the community. In the case of the paper highlighted by Pielke the Elder, the group in question is overconfident in the approach they're using. They are not forthcoming with the fact that the models they use are not good at predicting the parameters in question.
I think that's what makes their paper 'bad science'. But it's 'science' nonetheless.