Friday, December 28, 2012

From the APA: Philosophers and climate change


by Massimo Pigliucci

It's that time of year: the period between Christmas and New Year's Eve, when for some bizarre reason the American Philosophical Association has its annual meeting. This year it's in Atlanta, and I made it down here to see what may be on offer for a philosopher of science. This first post is about how philosophers see climate change, at least as reflected in an APA session chaired by Eric Winsberg, of the University of South Florida. The two speakers were Elisabeth Lloyd (Indiana) and Wendy Parker (Ohio), with critical commentary by Kevin Elliott (South Carolina).

The first talk was by Lloyd, who began by addressing the claim — by climate scientists — that the robustness of their models is a good reason to be confident in the results of said models. Broadly, however, philosophers of science do not consider robustness per se to be confirmatory. To put it simply, models could be robust and wrong.

Still, Lloyd argued that robustness is an indicator that a model is, in fact, more likely to be true. She began by referring to a point made by some theoretical ecologists: good models will predict the same outcome in spite of being built on different specific assumptions.

Lloyd stressed that there are different concepts of robustness. One is measurement robustness, the most famous example of which is the estimation, by as many as 13 different methods, of Avogadro's number. The concept used by Lloyd, however, is model robustness, which deals with the causal structure of the various climate models. The focus, then, shifts from the outcome (measurement robustness) to the internal structure of the models themselves.

Climate models are a way of articulating theory, because the equations that describe atmospheric dynamics are not analytically solvable. Lloyd went into some detail concerning how these models are actually built, pointing out how often predictions of crucial variables (like global mean surface temperature) are the result of six to a dozen different models, incorporating a range of parameter values. A set of models, or model "type," is characterized by a common core of causal processes underlying the phenomena to be modeled. An interesting example is that when climate models do not include greenhouse gases (but are limited to, say, solar and volcanic effects), they are indeed robust, as a set, but their output does not match the available empirical data.

The point is that if a model set includes greenhouse gases as a core causal component, all models in the set produce empirically accurate estimates of the output variables — that is, the set is robust — for a range of parameter values of the other components of the model. Moreover, the individual parameter values in a given model within the set are themselves supported by empirical evidence. The result is a strong inference to the best explanation supporting particular models within a given causal core set.
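To make Lloyd's point concrete, here is a toy numerical sketch (mine, not anything presented at the session; the "model," the sensitivity range, and all numbers are invented purely for illustration). A set of crude anomaly models sharing a greenhouse causal core tracks a synthetic "observed" record across a range of parameter values, while a set restricted to natural forcings agrees with itself — is robust — and still misses the data:

```python
# Toy illustration (not a real climate model) of causal-core robustness:
# a model set can be internally consistent yet fail to match the data
# unless the right causal component is included. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Synthetic stand-in for the "observed" temperature anomaly record:
# a slow warming trend plus noise.
observed = 0.007 * (years - 1900) + rng.normal(0, 0.05, years.size)

def toy_model(sensitivity, include_ghg):
    """Crude anomaly 'model': natural forcing only, or natural + greenhouse."""
    natural = 0.05 * np.sin((years - 1900) / 11.0)        # solar-like cycle
    ghg = 0.007 * (years - 1900) if include_ghg else 0.0  # greenhouse trend
    return sensitivity * (natural + ghg)

sensitivities = np.linspace(0.8, 1.2, 6)   # range of parameter values in the set

for include_ghg in (False, True):
    runs = np.array([toy_model(s, include_ghg) for s in sensitivities])
    spread = runs.std(axis=0).mean()                       # agreement within the set
    misfit = np.abs(runs.mean(axis=0) - observed).mean()   # fit to the "data"
    print(f"greenhouse core: {include_ghg!s:5}  spread within set: {spread:.3f}  "
          f"mean misfit to 'observations': {misfit:.3f}")
```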

While I find Lloyd's analysis convincing, it seems to me that, in a roundabout way, it reaches the conclusion that it isn't robustness per se that should generate confidence in a model (or set of models), but rather robustness together with multiple lines of evidence pointing toward the empirical adequacy both of the model's output and of its specific parameter settings.

Wendy Parker gave the second talk, tackling the role of computer simulations as a form of theoretical practice in science. She referred to Fox Keller, according to whom describing simulations as "computer experiments" qualitatively changes the landscape of what counts as theorizing in science. Parker is interested in the sense in which simulation outcomes can be thought of as "observations," and the models themselves as observing instruments.

She began with a description of numerical weather prediction, which started in the 1950s, before modern digital computers. The data anchoring the analysis of weather models today are produced by satellites and local stations. While the models are set up as regular grids over the territory, the data are of course not comparably evenly spread. Forecasters therefore use various methods of "data assimilation," which take into account not only the available empirical data, but also the previous forecast for a given area. The goal is to achieve a better fit than the previous forecast alone would provide, yielding a best estimate of the current state of the system.
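As a rough illustration of what data assimilation does, here is a minimal sketch under my own assumptions (not an operational scheme; the grid, station values, and error variances are all made up): a background forecast on a regular grid is blended with a handful of sparse observations, each weighted by its assumed error variance:

```python
# Minimal sketch of the data-assimilation idea: blend a prior forecast
# ("background") with sparse observations to get a best estimate of the
# current state on a regular grid. Toy numbers, not an operational scheme.
import numpy as np

n_grid = 10
background = np.full(n_grid, 15.0)   # previous forecast, e.g. temperature (C)
bkg_var = 1.0                        # assumed background error variance

# Observations exist only at a few grid points (stations are unevenly spread).
obs_points = np.array([2, 5, 8])
obs_values = np.array([16.2, 14.1, 15.7])
obs_var = 0.25                       # assumed observation error variance

analysis = background.copy()
for idx, y in zip(obs_points, obs_values):
    gain = bkg_var / (bkg_var + obs_var)   # weight given to the observation
    analysis[idx] = background[idx] + gain * (y - background[idx])

print("background:", background)
print("analysis:  ", np.round(analysis, 2))
```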

The resulting sequence of constantly updated weather snapshots was soon seized upon by climate scientists to help bridge the gap between weather and climate. This process of re-analysis of weather data, integrated with additional empirical data not originally taken into account by the forecasters, is now a common practice of data assimilation in climate science (the example discussed by Parker in detail is a procedure known as 4DVAR).
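For readers curious about the flavor of a procedure like 4DVAR, here is a heavily simplified scalar sketch (my own toy example; real variational assimilation works on enormous state vectors with adjoint models, and the dynamics and numbers below are invented): one picks the initial state whose model trajectory best fits both the prior forecast and the observations collected over a whole time window:

```python
# Scalar caricature of the 4D-Var idea: minimize a cost function that
# penalizes distance from the prior AND misfit to observations spread
# over an assimilation window, given the model dynamics. Toy numbers only.
import numpy as np
from scipy.optimize import minimize_scalar

def model(x0, t):
    """Toy dynamics: the state decays toward 10.0 over time."""
    return 10.0 + (x0 - 10.0) * np.exp(-0.1 * t)

x_background = 14.0                         # prior estimate of the initial state
obs_times = np.array([1.0, 3.0, 6.0])
obs_values = np.array([13.1, 12.0, 10.9])   # made-up observations in the window
bkg_var, obs_var = 1.0, 0.25

def cost(x0):
    # Background term: stay close to the prior forecast...
    j = (x0 - x_background) ** 2 / bkg_var
    # ...plus observation terms: the trajectory from x0 should pass near the data.
    j += np.sum((model(x0, obs_times) - obs_values) ** 2 / obs_var)
    return j

result = minimize_scalar(cost)
print("analysed initial state:", round(result.x, 2))
```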

The point is that re-analysis data sets are simulation outputs, which however are treated as observational data — though some researchers keep them distinct from actual observational data by referring to them in published papers as "reference data" or some such locution. The problem begins when some climate scientists think of assimilation models themselves as "observing instruments" in the absence of actual measurement instruments on the ground. (Interestingly, there are documented cases of assimilation models "seeing" atmospheric phenomena that were not registered by sparsely functioning ground instruments and that were later confirmed by satellite imagery.)

Parker wants to reject the claim that models should be thought of as observing instruments, while she is sympathetic to the conceptualization of simulation outcomes as "observations" of atmospheric processes.

Her objection to thinking of assimilation models as observing instruments is that, although they are informative and, indirectly, empirical (because at some previous iteration empirical data did enter into them), they are not "backward-looking" as true observations are (i.e., you don't "observe" something that hasn't happened yet), and so are best thought of as predictions.

Parker's argument for nonetheless considering simulation outcomes as observations is that they are empirical (indirectly), backward-looking (partially, because model assimilation also uses observations made at times subsequent to the initial model projections), and informative. That is, they fulfill all three criteria that she laid out for something to count as an observing or measuring procedure. Here Parker is building on van Fraassen's view of measuring as "locating in logical space."

While I enjoyed Parker's talk, in the end I was not convinced. To begin with, we are left with the seemingly contradictory conclusion that assimilation models are not observation instruments, and yet produce observations. Second, van Fraassen's idea was meant to apply to measurement, not observation. Parker acknowledged that van Fraassen distinguishes the two, but she treated them as effectively the same. Lastly, it is not clear what hinges on the distinction Parker is pushing, and indeed quite a bit of confusion may arise from overly blurring the line between actual (that is, empirical) observations and simulation outcomes. Still, the underlying issue of the status of simulations (and their quantitative outputs) as theoretical tools in science remains interesting.

The session was engaging — regardless of whether one agrees with the specific claims made by the participants — because it showcased some of the philosophical dimensions of ongoing and controversial scientific research. It is epistemologically interesting to reflect, as Lloyd did, on the role of different conceptualizations of robustness in modeling; and it is thought-provoking to explore, as Parker did, the roles of computer simulations at the interface between theory and observation in science. Who knows, even climate scientists themselves may find something to bring home (both in their practice and in their public advocacy, which Lloyd commented upon) from this sort of philosophical analysis!

2 comments:

  1. Thanks for giving such a detailed and thoughtful discussion of our session! Inevitably, I’d like to clarify a couple of things.

    First, it’s not my aim to argue that scientists should be describing the results of data assimilation as “observations” or “measurements” – it may well be that doing so would bring more confusion than benefit. Rather, what interests me is how some practices involving simulation (like 4DVar) are in fact more difficult to clearly distinguish from traditional measuring practices than we might expect. While I think it’s intriguing that practitioners do sometimes refer to the results of data assimilation as “observations”, my interest is really in the concept of measurement, rather than observation. I failed to make this clear in my talk (even the title was misleading in this regard!); I will try to make it much clearer as I get the paper in shape.

    Lastly, regarding the seemingly contradictory conclusions: I don't want to claim that assimilation models produce observations, at least not on their own; I did suggest that a data assimilation system as a whole, defined to include traditional observing instruments, might be considered a complex observing system/instrument whose results might be considered measurements of atmospheric properties.

    Thanks again for your post!

    1. Wendy,

      thanks for the clarifications, much appreciated!

