Saturday, December 29, 2012

Science and metaphysics


by Massimo Pigliucci

Afternoon time at the annual meeting of the American Philosophical Association. I’m following the session on science and metaphysics, chaired by Shamik Dasgupta (Princeton). The featured speakers are Steven French (Leeds-UK), James Ladyman (Bristol-UK), and Jonathan Schaffer (Rutgers-New Brunswick). I have developed a keen interest in this topic of late, though as an observer and commentator, not a direct participant in the discussion. Let’s see what is going to transpire today. A note of warning: what follows isn't for the (metaphysically) faint of heart, and it does require at least some familiarity with fundamental physics.

We started with French on enabling eliminativism, or what he called taking a Viking approach to metaphysics. (The reference to Vikings is meant to evoke an attitude of plundering what one needs and leaving the rest; less violently, this is a view of metaphysics as helping itself to a varied toolbox.) French wishes to reject the claim made by others (for instance, Ladyman) that aprioristic metaphysics should be discontinued. However, he does agree with critics that metaphysics should take science seriously.

The problem French is concerned with, then, is how to relate the scientific to the ontological understanding of the world. Two examples he cited were realism about wave functions and the kind of ontic structural realism favored by Ladyman and his colleague Ross.

Ontic structural realism comes in at least two varieties: eliminativist (we should eliminate objects entirely from our metaphysics, particles are actually "nodes" in the structure of the world) and non-eliminativist (which retains a "thin" version of objects, via the relations of the underlying structure).

French went on to talk about three tools for the metaphysician: dependence, monism, and an account of truth making.

Dependence. The idea is that, for instance, particles are "dependent" for their existence on the underlying structure of the world. A dependent object is one whose features are derivative from something else. In this sense, eliminativism looks viable: one could in principle "eliminate" (ontologically) elementary particles by cashing out their features in terms of the features of the underlying structure, effectively doing away with the objects themselves.

The basic idea, to put it as French did, is that "if it is of the essence, or nature or constitution of X that it exists only if Y exists, so that X is dependent on Y in the right sort of way, then X can be eliminated in favor of Y + structure."

As French acknowledged, however (though he didn't seem sufficiently worried about it, in my opinion), the eliminativist still needs to provide an account of how we recover the observable properties of objects above the level of fundamental structure.

Monism. This is the (old) idea that the world is made of one kind of fundamental stuff, a view recently termed "blobjectivism" (everything reduces to a fundamental blob). As French put it, this is saying that yes, electrons, for instance, have charges, but there really are no electrons, there is just the blob (that is, the structure).

A number of concerns have been raised against monism, and French commented on a few. For instance, monism can't capture permutations in state space. To which the monist responds that monistic structure includes permutation invariance. This, however, strikes me as borderline begging the question, since the monist can always deploy a catch-all "it's already in the structure" response to any criticism. But how do we know that the blob really does embody this much explanatory power?

Truthmakers. French endorses something called Cameronian truthmaker theory, according to which < X exists > might be made true by something other than X. Therefore, the explanation goes, < X exists > might be true according to theory T without X being an ontological commitment of T.

Perhaps this will be made clearer by looking at one of the objections to this account of truth making: the critic can reasonably ask how it is possible that there appear to be things like tables, chairs, particles, etc. if these things don't actually exist. French's response is that one just needs to piggyback on the relevant physics, though it isn't at all settled that "the relevant physics" actually says that tables, chairs and particles don't exist in the strong eliminativist sense of the term (as opposed to, say, existing as spatio-temporal patterns of a certain kind, accessible at the relevant level of analysis).

Next we moved to Ladyman, on "between eliminativism and monism: the radical middle ground." He acknowledged that structural realism is accused by some of indulging in mystery mongering, but Ladyman responded (correctly, I think) that it is physics that threw up stuff —  like fundamental relations and structure — that doesn't fit with classical metaphysical concepts, and the metaphysician now has to make some sense of the new situation.

Ladyman disagrees with French's eliminativism about objects, suggesting that taking structure seriously doesn't require doing away with objects. The idea is that there actually are different versions of structuralism, which depend on how fundamental relations are taken to be. Ladyman also disagrees with the following speaker, Schaffer, who is an eliminativist about relations, giving ontological priority to one object and its intrinsic properties (monism). Ladyman's (and his colleague Ross') position is summarized as one of being non-eliminativist about metaphysically "thin" individuals, giving ontological priority to relational structures.

One of the crucial questions here is whether there is a fundamental level to reality, and whether consequently there is a unidirectional ontological dependence between levels of reality. Ladyman denies a unidirectional dependence. For instance, particles and their state depend on each other (that is, one cannot exist without the other), the interdependence being symmetrical. The same goes for mathematical objects and their relations, for instance the natural numbers and their relations.

As for the existence of a fundamental level, we have an intuition that there must be one, partly because the reductionist program has been successful in science. However, Ladyman thinks that the latest physics has rendered that expectation problematic. Things have gotten more and more messy in fundamental physics of late, not less so. Consequently, for Ladyman the issue of a fundamental level is an open question, which therefore should not be built into one's metaphysical system — at least not until physicists settle the matter.

Are elementary quantum particles individuals? Well, one needs to be clear on what one means by individual, and also on the relation between the concept of individuality and that of object. This is a question that is related to that old chestnut of metaphysics, the principle of identity of indiscernibles (which establishes a difference between individuals — which are not identical, and therefore discernible — and mere objects). However, Ladyman collapses individuals into objects, which is why he is happy to say that — compatibly with quantum mechanics — quantum particles are indeed objects. The idea is that particles are intrinsically indiscernible, but they are (weakly) discernible in virtue of their spatio-temporal locality. 
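
For readers who want the logical form behind "weak discernibility" (this is the standard formulation from the discernibility literature, my gloss rather than anything on Ladyman's slides): two objects a and b are weakly discernible just in case some irreflexive relation holds between them, even if no one-place predicate tells them apart:

$$\exists R \,\big(R(a,b) \wedge \neg R(a,a) \wedge \neg R(b,b)\big)$$

The stock quantum example is the relation "has opposite spin component to," which holds between two electrons in the singlet state but not between either electron and itself.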

Ladyman, incidentally, is of course aware of the quantum principle of non-locality, which makes the idea of precisely individuated particles problematic. But he doesn't think that non-locality licenses a generic holism where there is only one big blob in the world; individuality can be recovered by thinking in terms of a locally confined holism. Again, that strikes me as sensible in terms of the physics (as I understand it), and it helps recover a (thin, as he puts it) sense in which there are objects in the world.

Finally, we got to Schaffer, who argued against ontic structural realism of the type proposed by either French or Ladyman. He wants to defend the more classical view of monism instead. He claimed that that is the actual metaphysical picture that emerges from current interpretations of quantum mechanics and general relativity.

His view is that different mathematical models — both in q.m. and in g.r. — are best thought of as just being different notations related by permutations, corresponding to a metaphysical unity. In a sense, these different mathematical notations "collapse" into a unified picture of the world.

Schaffer's way to cash out his project is by using the (in)famous Ramsey sentences, which are sentences that do away with labels, not being concerned with specific individuals. Now, one can write the Ramsey sentences corresponding to the equations of general relativity, which according to Schaffer yields a picture of the type that has been thought of since at least Aristotle: things come first, relations are derivative (i.e., one cannot have structures or relations without things that are structured or related). If this is right, of course, the ideas that there are only structures (eliminativism à la French) or that structures are ontologically prior to objects (Ladyman) are incorrect.
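
For readers unfamiliar with the device, here is the textbook recipe (my gloss, not Schaffer's own notation): write the theory as a single sentence $T(t_1,\dots,t_n)$ whose theoretical terms are $t_1,\dots,t_n$, then replace each theoretical term with a bound variable:

$$T^{R} = \exists x_1 \cdots \exists x_n \; T(x_1,\dots,x_n)$$

The Ramsey sentence $T^{R}$ says only that there exist some things standing in the relations the theory describes, without naming them, which is why it is a natural tool for anyone who wants to keep a theory's structural content while remaining neutral about the identity of the individuals involved.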

So, Schaffer thinks of Ramsey sentences as describing structural properties, which he takes to be the first step toward monism. Second, says Schaffer, what distinguishes abstract structures from the structure describing the universe is that something bears the latter. That something is suggested to be the largest thing we can think fits the job, that is, the universe as a whole. He calls this picture monistic structural realism: there is a cosmos (the whole), characterized by parts that bear the structures qualitatively described by the Ramsey translation of standard physical theories like relativity and quantum mechanics. Note that this is monism because — thanks to the Ramsey translation — the parts are interchangeable, related by the mathematical permutations mentioned above.

Okay, does your head spin by now? This is admittedly complicated stuff, which is why I added explanatory links to a number of the concepts deployed by the three speakers. I found the session fascinating, as it gave me a feeling for the current status of discussions in metaphysics, particularly of course as far as it concerns the increasingly dominant idea of structural realism, in its various flavors. Notice too that none of the participants engaged in what Ladyman and Ross (in their Every Thing Must Go, about which I have already commented) somewhat derisively labeled "neo-Scholasticism": the entire discussion took seriously what comes out of physics, with all participants conceptualizing metaphysics as the task of making sense of the broad picture of the world that science keeps uncovering. That seems to me to be the right way of doing metaphysics, and one that may (indeed should!) appeal even to scientists.

The philosophy of genetic drift


by Massimo Pigliucci

This morning I am following a session on genetic drift at the American Philosophical Association meetings in Atlanta. It is chaired by Tyler Curtain (University of North Carolina-Chapel Hill), the speaker is Charles Pence (Notre Dame), and the commenters are Lindley Darden (Maryland-College Park) and Lindsay Craig (Idaho). [Note: I’ve written myself about this concept, for instance in chapter 1 of Making Sense of Evolution. Check also these papers in the journal Philosophy & Theory in Biology: Matthen and Millstein et al.]

The title of Charles' talk was "It's ok to call genetic drift a force," a position — I should state right at the beginning — with which I actually disagree. Let the fun begin! Drift has always been an interesting and conceptually confusing issue in evolutionary biology, and of course it plays a crucial role in mathematical population genetic theory. Drift has to do with stochastic events in the generation-to-generation sampling of gametes within populations. The strength of drift is inversely proportional to population size, which also means it acts antagonistically to natural selection (whose efficacy, relative to drift, increases with population size).
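
To make the "sampling of gametes" idea concrete, here is a minimal Wright-Fisher simulation (my own toy illustration, not anything from the talk); the only point it makes is that the scatter among replicate populations, which is what drift amounts to, shrinks as population size N grows.

import random

def wright_fisher(N, p0=0.5, generations=100):
    """Follow the frequency of one allele in a diploid population of size N."""
    p = p0
    for _ in range(generations):
        # Each generation, 2N gametes are drawn at random from the current gene pool.
        copies = sum(random.random() < p for _ in range(2 * N))
        p = copies / (2 * N)
    return p

for N in (10, 100, 1000):
    finals = [wright_fisher(N) for _ in range(200)]
    mean = sum(finals) / len(finals)
    var = sum((x - mean) ** 2 for x in finals) / len(finals)
    print(f"N = {N:4d}   variance among 200 replicates: {var:.3f}")

The per-generation variance in allele frequency is p(1-p)/(2N), which is the precise sense in which drift's strength is inversely proportional to population size; the printed numbers simply show the cumulative effect of that scaling.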

Charles pointed out that one popular interpretation of drift among philosophers is "whatever causes fail to differentiate based on fitness." The standard example is someone being struck by lightning, the resulting death clearly having nothing to do with that individual's fitness. I'm pretty sure this is not what population geneticists mean by drift. If that were the case, a mass extinction caused by an asteroid (that is, a cause that has nothing to do with individual fitness) would also count as drift. Indeed, discussions of drift — even among biologists — often seem to conflate a number of phenomena that have little to do with each other, other than the very generic property of being "random."

What about the force interpretation then? This is originally due to Elliott Sober (1984), who developed a conceptual model of the Hardy-Weinberg equilibrium in population genetics based on an analogy with Newtonian forces. H-W is a simple equation that describes the genotypic frequencies in a population where no evolutionary processes are at work: no selection, no mutation, no migration, no assortative (i.e., non-random) mating, and infinite population size (which implies no drift).
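
For reference, the equation in question, in its textbook one-locus, two-allele form (my addition, since the talk presumably took it for granted): with alleles $A$ and $a$ at frequencies $p$ and $q = 1 - p$, the genotype frequencies after one round of random mating are

$$f(AA) = p^2, \qquad f(Aa) = 2pq, \qquad f(aa) = q^2,$$

and both allele and genotype frequencies then stay put, generation after generation, so long as the assumptions just listed hold.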

The force interpretation is connected to the (also problematic, see Making Sense of Evolution, chapter 8) concept of adaptive landscape in evolutionary theory. This is a way to visualize the relationship between allelic frequencies and selection: the latter will move populations "upwards" (i.e., toward higher fitness) on any slope in the landscape, while drift will tend to shift populations randomly around the landscape.

The controversy about thinking of drift as a force began in 2002 with a paper by Matthen and Ariew, followed by another one by Brandon in 2006. The basic point was that drift inherently does not have a direction, and therefore cannot be analogized to a force in the physical (Newtonian) sense. As a result, the force metaphor fails.

Stephens (2004) claimed that drift does have direction, since it drives populations toward less and less heterozygosity (or more and more homozygosity). Charles didn't buy this, and he is right. Stephens is redefining "direction" for his own purposes, as heterozygosity does not appear on the adaptive landscape, making Stephens' response entirely artificial and not consonant with accepted population genetic theory.

Filler (2009) thinks that drift is a force because it has a mathematically specific magnitude and can unify a wide array of seemingly disparate phenomena. Another bad answer, I think (and, again, Charles also had problems with this). First off, forces don't just have magnitude, they also have direction, which, again, is not the case for drift. Sober was very clear on this, since he wanted to think of evolutionary "forces" as vectors that can be combined or subtracted. Second, it seems that if one follows Filler far too many things will begin to count as "forces" that neither physicists nor biologists would recognize as such.

Charles' idea is to turn to the physicists and see whether there are interesting analogs of drift in the physical world. His chosen example was Brownian motion, the random movement of small particles suspended in a fluid, like dust grains. Brownian motion is well understood and rigorously described mathematically. Charles claimed that the equation for Brownian motion "looks" like the equation for a stochastic force, which makes it legitimate to carry the force approach over to drift.
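
I take it the equation Charles has in mind is something like the standard Langevin description of a Brownian particle (this is my reconstruction, not his slide):

$$m\frac{dv}{dt} = -\gamma v + \xi(t), \qquad \langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\,\xi(t') \rangle = 2\gamma k_B T\,\delta(t - t'),$$

where the random term $\xi(t)$ sits on the right-hand side exactly where a deterministic force would, which is what invites the "stochastic force" reading. The corresponding diffusion treatment of drift, by contrast, is usually written in terms of the per-generation variance in allele frequency, $p(1-p)/(2N)$, with no force term in sight.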

But I'm pretty sure that physicists themselves don't think of Brownian motion as a force. Having a mathematical description of stochastic effects (which we do have, both for Brownian motion and for drift — and by the way, the two look very different!) is not the same as having established that the thing one is modeling is a force. Indeed, Charles granted that one could push back on his suggestion, and reject that either drift or Brownian motion are forces. I'm inclined to take that route.

A second set of objections to the idea of drift as a force (other than it doesn't have direction) is concerned with the use of null models, or inertial states, in scientific theorizing. H-W is supposed to describe what happens when nothing happens, so to speak, in populations of organisms. According to Brandon, however, drift is inherent in biological populations, so that drift is the inertial state itself, not one of the "forces" that move populations away from such state.

Charles countered that for a Newtonian system gravity also could be considered "constitutive," the way Brandon thinks of drift, but that would be weird. Charles also objected that it is no good to argue that one could consider Newtonian bodies in isolation from the rest of the universe, because similar idealizations can be invoked for drift, most famously the above-mentioned assumption of infinite population size. This is an interesting point, but I think the broader issue here is the very usefulness of null models in science in general, and in biology in particular (I am skeptical of their use, at least as far as inherently statistical problems of the kind dealt with by organismal biology are concerned; see chapter 10 of Making Sense).

Broadly speaking, one of the commentators (Darden) questioned the very benefit of treating drift as a force, considering that biologists have obviously been able to model drift using rigorous mathematical models that simply do not require a force interpretation. Indeed, not even selection can always be modeled as a vector with intensity and direction: neither the case of stabilizing selection nor that of disruptive selection fits easily in that mold, because in both instances selection acts to (respectively) decrease or increase a trait's variance, not its mean. Moreover, as I pointed out in the discussion, assortative mating is also very difficult to conceptualize as a vector with directionality, which makes the whole attempt at thinking of evolutionary "forces" ever more muddled and not particularly useful. Darden's more specific point was that while it is easy to think of natural selection as a mechanism, it is hard to think of drift as a mechanism (indeed, she outright denied that it is one), which again casts doubt on what there is to gain from thinking of drift as a force. The second commentator (Craig) also questioned the usefulness of the force metaphor for drift, even if it is defensible along the lines outlined by Pence and others.

Even more broadly, haven't physicists themselves moved away from talk of forces? I mean, let's not forget that Newtonian mechanics is only an approximation of relativity theory, and that "forces" in physics are actually interpreted in terms of fields and associated particles (as in the recently much discussed Higgs field and particle). Are we going to reinterpret this whole debate in terms of biological fields of some sort? Isn't it time that biologists (and philosophers of biology) let go of their physics envy (or their envy of philosophy of physics)?

Friday, December 28, 2012

From the APA: Metaethical antirealism, evolution and genetic determinism


by Massimo Pigliucci

The Friday afternoon session of the American Philosophical Association meeting from which I am blogging actually had at least three events of interest to philosophers of science: one on race in population genetics, one on laws in the life sciences, and one on the strange combination of (metaethical) antirealism, evolution and genetic determinism. As is clear from the title of this post, I opted for the latter... It featured three speakers: Michael Deem (University of Notre Dame), Melinda Hall (Vanderbilt), and Daniel Demetriou (Minnesota-Morris).

Deem went first, on "de-horning the Darwinian dilemma for realist theories of value" (no slides, darn it!). The point of the talk was to challenge two claims put forth by Sharon Street: a) that the normative realist cannot provide a scientifically acceptable account of the relation between evolutionary forces acting on our evaluative judgments and the normative facts realists think exist; b) that the "adaptive link account" provides a better explanation of this relation than any realist tracking account. (Note: much of this text is from the handout distributed by Deem.)

The alleged dilemma consists in this: by hypothesis, evolutionary forces have played a significant role in shaping our moral evaluative attitudes. If so, how is the moral realist to make sense of the hypothesis while holding on to moral realism? Taking the first horn, the realist could deny any relation between evolution and evaluative judgments. But this would either mean skepticism about evaluative judgments or lead to the view that evolved normative judgments coincidentally align with moral facts, neither option being palatable to the moral realist.

The second horn leads the realist to accepting the link with evolution. But this means that s/he would have to claim that tracking normative truths is somehow biologically adaptive, a position that is hard to defend on scientific grounds.

According to Street there are two positions available here: the tracking account (TA) says that we grasp normative facts because doing so in the past has augmented our ancestors' fitness. The adaptive link account (ALA) says that we make certain evaluative judgments because these judgments forged adaptive links between the responses of our ancestors and the environments in which they lived. Note that the difference between TA and ALA is that the former talks of normative facts, the latter of evaluative judgments.

Street prefers ALA on the grounds that it is more parsimonious and clear, and that it sheds more light on the phenomenon to be explained (i.e., the existence of evaluative judgments). Deem doesn't think this is a good idea, because within the ALA evaluative judgments play a role analogous to hard-wired adaptations in other animals, which seems implausible; and because it is mysterious why selection would favor evaluative judgments.

Deem then went on to propose a modified ALA: humans possess certain evaluative tendencies because these tendencies forged adaptive links between the responses of our ancestors and their environments. Note that the difference between the standard ALA and the realist ALA is that the former talks of evaluative judgments, the latter of evaluative tendencies. (This distinction makes perfect sense to me: judgments are the result, at least in part, of reflection; tendencies can be thought of as instinctual reactions or propensities. So, for instance, humans have both, while other primates — as far as we know — possess only propensities, and are incapable of judgments.)

To put it in his own words, Deem claims that "the realist can show that his/her position is compatible with evolutionary biology and can provide an account of the relation between the evolutionary forces that shaped human evaluative attitudes and independent normative facts. ... [However] it seems evolutionary theory underdetermines the choice between realism and antirealism in metaethics."

Okay, I take it that Deem's idea is to reject the suggestion that evolution makes it unnecessary to resort to the realist idea that there are normative facts. Perhaps so, in a way similar to that in which an evolutionary account of our abilities at mathematical reasoning wouldn't exclude the possibility of mathematical realism ("Platonism"). But one needs a positive reason to contemplate an objective ontological status for moral truths, and I think the case for that is far less compelling than the analogous case for mathematical objects (one of the reasons being that while mathematical abstractions truly seem to be universal, moral truths would still apply only to certain kinds of social organisms capable of self-reflection).

Melinda Hall talked about "untangling genetic determinism: the case of genetic abortion" (another talk without slides, or even a handout!). She is interested in abortion in cases where medical evidence predicts that the infant will be severely disabled. Given such information, is it moral to terminate the pregnancy ("genetic" abortion, a type of negative genetic selection) or, on the contrary, is it moral to continue it?

The basic idea seems to be that genetic abortion is conceptually linked to genetic determinism, i.e., an overemphasis on the importance of genetic factors in development. In turn, Hall argued, the decision to terminate pregnancies in such cases contributes to stigmatizing the disabled community, as well as to reducing the social resources allocated to it.

Disability has both a social and a biological component, and if a lot of the negative effects of disabilities on life quality are the result of social construction, then the main issue is social and not biological. Disability advocates claim that it is problematic to make a single trait (the disability, whatever it is) the overriding criterion on the basis of which to make the decision to abort.

There is thus apparently a tension — which Hall sought to defuse — between the usually pro-choice attitude of disability advocates and the restriction on the mother's reproductive rights implied by objecting to "genetic abortion."

A reasonable (I think) worry is that "gene mania," i.e., the quest for purely or largely biological explanations for human behavior, may encourage the search for simplistic solutions to problems that are in reality complex and in good part social-environmental. My own worry about Hall and some of her colleagues' approach, however, is the opposite danger that disability advocates may seriously underestimate the biological basis of disabilities, which may in turn lead to an equally problematic tendency to reject medical preventive solutions. (Indeed, Hall at one point made the parenthetical comment that disabilities may not be a "problem" at all. I think that's willful rejection of the painful reality in which many human beings live.)

Hall went on to invoke the nightmarish social scenario depicted in the scifi movie Gattaca. I don't object to using scifi scenarios as evocative thought experiments, but of course there is a huge disanalogy between the situation in Gattaca and the issue of disabilities. Gattaca's "inferiors" were actually normal human beings, pitted against genetically enhanced ones. Disabled people are, in a very important sense, the mirror image of the movie's enhanced humans, since they lack one or another species-normal functionality typical of humans.

Though Hall qualified this, disability advocates apparently worry that "negative genetic selection" may nurture a societal attitude that it may one day be possible to eliminate disability, which somehow could turn into decreased social support for disabled people. Frankly, I think that's an egregious non sequitur, and moreover it flies in the face of the empirical evidence that Western societies, at least, have significantly increased the allocation of resources to the disabled (see, for instance, the Americans with Disabilities Act).

This whole discussion seems to be predicated on an (unstated and, I think, indefensible) equivalency or near-equivalency between the moral status of a fetus that is likely to develop into a disabled person and that of the person him/herself. As the commentator for the paper (Daniel Moseley, UNC-Chapel Hill) pointed out, it is hard to see what is morally wrong in parents' decision to abort a fetus that has a high likelihood — based on the best medical evidence available — of developing a disability that would be hard to live with, regardless of whatever support society will provide (as it ought to) to the disabled person resulting from that pregnancy, should the parents decide not to abort.

Finally, Daniel Demetriou spoke about "fundamental moral disagreement, antirealism, and honor." (Yay! Slides!!) He took on Doris and Plakias' argument that moral realism predicts fundamental moral agreement (analogously, say, to agreement about mathematical or scientific facts). However, empirically there is plenty of evidence for moral disagreements, for instance in the case of the "culture of honor" among whites in the American South. Doris and Plakias turn this into an argument against moral realism (i.e., there are fundamental disagreements about moral norms because there is no objective fact of the matter about moral norms).

There are indeed interesting data showing that white Southerners respond more violently to insult and aggression. The alleged explanation is that these people inherited (culturally, not genetically) a culture of honor, which comes from their pastoralist ancestors. More broadly, according to some authors an honor culture is likely adaptive in pastoralist social environments, where goods are easily stolen and a reputation for prompt and violent reaction may function as an effective deterrent (as opposed to, say, the situation in agricultural societies, where goods like crops are not easily stolen).

Interestingly, African pastoralists, as well as pastoralists in Sardinia and in Crete, consider raiding from other livestock owners a way to prove their honor as young men. The same goes for the Scottish highlands, again highlighting the connection between honor and violence.

Demetriou, however, is not convinced by this account, raising a number of objections, including the fact that pastoralist societies are still concerned with fairness, as in the concept of fair fighting. Fairness in fighting would not be a good deterrent against aggression, contra the above thesis. Moreover, there are several honor cultures that are not in fact violent. Instead, Demetriou put forth a "competition ethic account" of honor, where honor has to do with social reputation.

Metaethically, Demetriou agreed that honor really is different from the liberal ethics of welfare, favoring prestige instead. Similarly, liberalism favors cooperative principles, while honor ethics favors competition. So for Demetriou the honor outlook is much more fundamentally different from the liberal ethos than even the story based on the effectiveness of violence would suggest.

However, the author concluded, moral realism has no problem with the divergence between liberalism and honor, since it is possible to accommodate the difference by invoking pluralism of a realist sort. Well, yes, though it seems to me that this strategy is capable of accommodating pretty much any set of data demonstrating empirical divergence of ethical systems... Moreover, one of Demetriou's comments toward the end was a bit confusing. He wondered why a white Southerner who has grown up in an honor culture couldn't "wake up" to a liberal approach, perhaps (his examples) after watching the right movie or reading the right book. But wait, that seems to imply no pluralism at all, but rather a situation in which the person steeped in the honor culture was simply wrong and realized, under the proper conditions, that he was so. That, of course, may be, but it is a very different defense of realism against the empirically driven antirealist argument. Which one is it? Actual pluralism, or the idea that there is one correct moral system and some people are simply in error about it?

Overall this felt like a somewhat disjointed session, particularly because the second talk had hardly anything at all to do with antirealism, while neither the first nor especially the last talk had much to do with genetic determinism. But such is the way of many APA sessions, and each of the three talks did raise interesting questions about the relationship between ethics and science. It has been pretty uncontroversial for a while among moral philosophers that their discipline (just like every other branch of philosophy, I would argue) had better take seriously the best scientific evidence relevant to whatever philosophical issues are under discussion. The much more interesting and thorny question is what exactly the implications of the science are for ethical and even metaethical positions, as well as — conversely — what the implications of our ethical theories are for the way science itself is conducted and scientific advice is implemented in our society.

From the APA: Philosophers and climate change


by Massimo Pigliucci

It's that time of year: the period between Christmas and New Year's Eve, when for some bizarre reason the American Philosophical Association has its annual meeting. This year it's in Atlanta, and I made it down here to see what may be on offer for a philosopher of science. This first post is about how philosophers see climate change, at least as reflected in an APA session chaired by Eric Winsberg, of the University of South Florida. The two speakers were Elisabeth Lloyd (Indiana) and Wendy Parker (Ohio), with critical commentary by Kevin Elliott (South Carolina).

The first talk was by Lloyd, who began by addressing the claim — by climate scientists — that the robustness of their models is a good reason to be confident in the results of said models. Broadly, however, philosophers of science do not consider robustness per se to be confirmatory. To put it simply, models could be robust and wrong.

Still, Lloyd argued that robustness is an indicator that a model is, in fact, more likely to be true. She began by referring to a point made by some theoretical ecologists: good models will predict the same outcome in spite of being built on different specific assumptions.

Lloyd stressed that there are different concepts of robustness. One is that of measurement robustness, the most famous example of which is the estimation, by as many as 13 different methods, of Avogadro's number. The concept used by Lloyd, however, is one of model robustness, which deals with the causal structure of the various climate models. The focus, then, shifts from the outcome (measurement robustness) to the internal structure of the models themselves.

Climate models are a way of articulating theory, because the equations that describe atmospheric dynamics are not analytically solvable. Lloyd went into some detail concerning how these models are actually built, pointing out how often predictions of crucial variables (like global mean surface temperature) are the result of six to a dozen different models, incorporating a range of parameter values. A set of models, or model "type," is characterized by a common core of causal processes underlying the phenomena to be modeled. An interesting example is that when climate models do not include greenhouse gases (but are limited to, say, solar and volcanic effects), they are indeed robust, as a set, but their output does not match the available empirical data.

The point is that if a model set includes greenhouse gases as a core causal component, all models in the set produce empirically accurate estimates of the output variables — that is, the set is robust — for a range of parameter values of the other components of the model. Moreover, it is the case that individual parameter values in a given model within the set are themselves supported by empirical evidence. The result is a strong inference to the best explanation supporting particular models within a given causal core set.
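
Here is a deliberately cartoonish way to see that logic (my toy sketch; nothing in it resembles an actual climate model): two model sets share different causal cores, each is run across a range of values for a nuisance parameter, and only the set whose core includes the forcing term is both robust and empirically adequate.

def toy_model(years, forcing_per_year, natural_offset):
    """A crude 'temperature anomaly' after a given number of years."""
    return forcing_per_year * years + natural_offset

OBSERVED = 0.8               # pretend observed anomaly after 100 years (degrees C)
OFFSETS = (-0.1, 0.0, 0.1)   # range of values for the non-core parameter

for label, forcing in (("greenhouse core", 0.008), ("no greenhouse core", 0.0)):
    runs = [toy_model(100, forcing, off) for off in OFFSETS]
    spread = max(runs) - min(runs)                  # agreement within the set (robustness)
    misfit = abs(sum(runs) / len(runs) - OBSERVED)  # empirical adequacy
    print(f"{label:18s}  spread = {spread:.2f}   misfit = {misfit:.2f}")

Both sets are robust in the trivial sense of agreeing among themselves across parameter values; only one of them also matches the pseudo-observations, which is the sense in which robustness and empirical adequacy pull together in the argument.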

While I find Lloyd's analysis convincing, it seems to me that, in a roundabout way, it actually reaches the conclusion that it isn't robustness per se that should generate confidence in a model (or set of models), but rather robustness together with multiple lines of evidence pointing toward the empirical adequacy both of the outcome of the model and of its specific parameter settings.

Wendy Parker gave the second talk, tackling the role of computer simulations as a form of theoretical practice in science. She referred to Fox Keller, according to whom describing simulations as "computer experiments" qualitatively changes the landscape of what counts as theorizing in science. Parker is interested in the sense in which simulation outcomes can be thought of as "observations," and the models themselves as observing instruments.

She began with a description of numerical weather prediction, which started in the 1950s, before modern digital computers. The data anchoring the analysis of weather models today are produced by satellites and local stations. While the models are set up as regular grids over the territory, the data are of course not comparably evenly spread. Forecasters therefore use various methods of "data assimilation," which take into account not only the available empirical data, but also the previous forecasts for a given area. The goal is to achieve a better estimate of the state of the system than either the observations or the previous forecast alone would provide.
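
The simplest possible illustration of that idea is a one-number "analysis" step that weights the prior forecast and a new observation by the inverse of their error variances (my toy sketch; the 4DVAR machinery Parker discussed is vastly more elaborate).

def analysis(forecast, forecast_var, obs, obs_var):
    """Combine a prior forecast with an observation into a best estimate."""
    gain = forecast_var / (forecast_var + obs_var)   # how much to trust the observation
    estimate = forecast + gain * (obs - forecast)
    estimate_var = (1 - gain) * forecast_var         # the estimate is more certain than either input
    return estimate, estimate_var

# E.g., the model forecast 21.0 C (error variance 2.0) for a grid point, while a
# nearby station reports 19.5 C (error variance 0.5): the analysis leans toward
# the more reliable observation.
print(analysis(forecast=21.0, forecast_var=2.0, obs=19.5, obs_var=0.5))
# -> (19.8, 0.4)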

The resulting sequence of constantly updated weather snapshots was soon seized upon by climate scientists to help bridge the gap between weather and climate. This process of re-analysis of weather data, integrated by additional empirical data not originally taken into account by the forecasters, is now a common practice of data assimilation in climate science (the example discussed by Parker in detail is that of a procedure known as 4DVAR).

The point is that re-analysis data sets are simulation outputs, which however are treated as observational data — though some researchers keep them distinct from actual observational data by referring to them in published papers as "reference data" or some such locution. The problem begins when some climate scientists think of assimilation models themselves as "observing instruments" in the absence of actual measurement instruments on the ground. (Interestingly, there are documented cases of assimilation models "seeing" atmospheric phenomena that were not registered by sparsely functioning ground instruments and that were later confirmed by satellite imagery.)

Parker wants to reject the claim that models should be thought of as observing instruments, while she is sympathetic to the conceptualization of simulation outcomes as "observations" of atmospheric processes.

Her objection to thinking of assimilation models as observing instruments is that, although they are informative and, indirectly, empirical (because at some previous iteration empirical data did enter into them), they are not "backward-looking" as true observations are (i.e., you don't "observe" something that hasn't happened yet), and so their outputs are best thought of as predictions.

Parker's argument for nonetheless considering simulation outcomes as observations is that they are empirical (indirectly), backward-looking (partially, because model assimilation also uses observations made at times subsequent to the initial model projections), and informative. That is, they fulfill all three criteria that she laid out for something to count as an observing or measuring procedure. Here Parker is building on van Fraassen's view of measuring as "locating in logical space."

While I enjoyed Parker's talk, in the end I was not convinced. To begin with, we are left with the seemingly contradictory conclusion that assimilation models are not observing instruments, and yet produce observations. Second, van Fraassen's idea was meant to apply to measurement, not observation. Parker acknowledged that van Fraassen distinguishes the two, but she treated them as effectively the same. Lastly, it is not clear what hinges on making the distinction that Parker is pushing, and indeed quite a bit of confusion may arise from blurring the distinction between actual (that is, empirical) observations and simulation outcomes too much. Still, the underlying issue of the status of simulations (and their quantitative outputs) as theoretical tools in science remains interesting.

The session was engaging — regardless of one's agreement or not about specific claims made by the participants — because it showcased some of the philosophical dimensions of ongoing and controversial scientific research. It is epistemologically interesting, as Lloyd did, to reflect on the role of different conceptualizations of robustness in modeling; and it is thought provoking, as Parker did, to explore the roles of computer simulations at the interface between theory and observation in science. Who knows, even climate scientists themselves may find something to bring home (both in their practice and in their public advocacy, which was commented upon by Lloyd) from this sort of philosophical analysis!

Friday, December 21, 2012

Lakatos Award for Wolfgang Spohn

The London School of Economics and Political Science announces that the Lakatos Award, of £10,000 for an outstanding contribution to the philosophy of science, has been won by Wolfgang Spohn of the University of Konstanz for his book The Laws of Belief: Ranking Theory and its Philosophical Implications (Oxford University Press, 2012).

The Lakatos Award is given for an outstanding contribution to the philosophy of science, widely interpreted, in the form of a book published in English during the previous five years.  It was made possible by a generous endowment from the Latsis Foundation.  The Award is in memory of the former LSE professor, Imre Lakatos, and is administered by an international Management Committee organised from the LSE.

The Committee, chaired by John Worrall, decides the outcome of the Award competition on the advice of an international, independent and anonymous panel of Selectors who produce detailed reports on the nominated books.
________________________________________________________________________

Nominations can now be made for the 2013 Lakatos Award, and must be received by Friday 19th April 2013. The 2013 Award will be for a book published in English with an imprint from 2008-2013 inclusive. A book may, with the permission of the author, be nominated by any person of recognised standing within the profession.  (The Management Committee is not empowered to nominate books itself but only to respond to outside nominations.)

For further details of the nomination procedure or more information on the Lakatos Award 2013, contact the Administrator, Tom Hinrichsen, at t.a.hinrichsen@lse.ac.uk

Monday, December 17, 2012

CfP for the 11th Graduate Conference at the University of Western Ontario

The University of Western Ontario is organizing its 11th Graduate Conference in Philosophy of Mind, Language, and Cognitive Science (May 23-25, 2013). The Call for Papers can be found there. The deadline is March 1, 2013.

Friday, December 14, 2012

Conf: Evolution, Intentionality and Information (Bristol, May 2013)



Conference: Evolution, Intentionality and Information.
University of Bristol, May 29th-31st 2013.

A three-day inter-disciplinary conference at the University of Bristol. This is the inaugural event in the ERC-funded project 'Darwinism and the Theory of Rational Choice', directed by Professor Samir Okasha. The aim of the conference is to discuss the use of 'intentional', 'strategic' and 'informational' concepts in evolutionary biology.

Plenary speakers: Evelyn Fox-Keller, Daniel Dennett, Joan Roughgarden, Eva Jablonka, David Haig, Denis Noble, Ken Binmore, Samir Okasha

Contributed papers are welcome.

For further information and details of how to register, please see the conference website

Monday, December 10, 2012

Fellowships: Philosophy of Science (Pittsburgh)

The application deadline of December 15 is approaching for senior, postdoctoral, and visiting fellowships at the Center for Philosophy of Science, University of Pittsburgh, for the academic year 2013-2014.

For more details, see the "Joining" section of the Center's web site (http://www.pitt.edu/~pittcntr).

Monday, November 26, 2012

Job: Postdoc AOS: Metaphysics of Science (IHPST, Paris)

A postdoctoral position will be available at IHPST (Institut d'Histoire et de Philosophie des Sciences et des Techniques, Paris) within the French ANR-funded project "Metaphysics of Science," for one year (September/October 2013 - August/September 2014), renewable for a second year (September/October 2014 - August/September 2015).
The successful candidate must pursue research, and already have some expertise, in at least one of the three domains that form the focus of the project: 1) Levels of reality, 2) Individual objects in physics and biology, and 3) Dispositions in psychology and physics.

The post-doc will be expected to present his/her research at conferences and seminars, and to publish in peer-reviewed journals.

He or she will work at IHPST in Paris and will provide organizational support for the activities of the teams. Residence in Paris is strictly mandatory.

Major tasks will be to:
1) run the Metaphysics of Science seminar on a regular basis,
2) help organize the workshops of the research project,
3) create and maintain a website on the metaphysics of experimental sciences, which will provide tools of cooperation within the team and help disseminate the results of our research,
4) constitute a database on metaphysics of science.

Applicants must have a doctorate in philosophy. Knowledge of French is not required, but fluency in English is.

Salary will be approximately 2000 € net (2500 € gross) per month.

Application material:
-A cover letter addressed to Max Kistler, Metascience coordinator
-A CV with a list of publications
-A writing sample (e.g., a publication or a dissertation chapter)
-Three letters of recommendation
-A statement of research agenda that fits into one of the areas of the project (2-3 pages)

Applications should be submitted electronically, in a single PDF file, to:
Max Kistler: mkistler@univ-paris1.fr

Deadline for submission of application: 15 February 2013.
Candidates will be informed of the decision by 31 March 2013.

For further information, please contact Max Kistler.

Job: Assistant Professor AOS: Philosophy of Physics (LMU Munich)

The  Chair of Philosophy of Science at the Faculty of Philosophy, Philosophy of Science and Study of Religion and the Munich Center for Mathematical Philosophy (MCMP, http://www.mcmp.philosophie.uni-muenchen.de/index.html) at LMU Munich seek applications for an Assistant Professorship in Philosophy of Physics. The position is for three years with the possibility of extension for another three years. The appointment will be made within the German A13 salary scheme (under the assumption that the civil service requirements are met), which means that one has the rights and perks of a civil servant. The starting date is October 1, 2013.

The appointee will be expected (i) to do philosophical research in the philosophy of physics and to lead a research group in this field, (ii) to teach five hours a week in philosophy of physics and/or a related field, and (iii) to take on management tasks. The successful candidate will have a PhD in philosophy and some teaching experience.

Applications (including a cover letter that addresses, among other things, one's academic background and research interests; a CV; a list of publications; a list of taught courses; a sample of written work of no more than 5000 words; and a description of a planned research project of 1000-1500 words) should be sent by email (ideally everything requested in one PDF document) to sabine.krueger@lrz.uni-muenchen.de by December 10, 2012. Additionally, two confidential letters of reference addressing the applicant's qualifications for academic research should be sent to the same address directly by the referees.

Contact for informal inquiries: Professor Stephan Hartmann (Stephan.Hartmann@lrz.uni-muenchen.de)

Wednesday, October 31, 2012

CFP: Philosophy of the Social Sciences (Venice, September 2013)

THE EUROPEAN NETWORK FOR THE PHILOSOPHY OF THE SOCIAL SCIENCES & THE PHILOSOPHY OF SOCIAL SCIENCE ROUNDTABLE

Call for Papers:

First joint European/American Conference University of Venice Ca' Foscari
3-4 September, 2013

The European Network for the Philosophy of the Social Sciences and the Philosophy of Social Science Roundtable invite contributions to their first joint conference. Contributions from all areas within the philosophy of the social sciences, from both philosophers and social scientists, are encouraged.

Keynote speakers:

  *   Cristina Bicchieri (University of Pennsylvania)
  *   Nancy Cartwright (University of Durham / University of California San Diego)

Submissions:

  *   An abstract of no more than 1000 words suitably prepared for blind reviewing should be submitted electronically through the Easychair system at https://www.easychair.org/conferences/?conf=enpossrt2013. Only one abstract per person may be submitted.
  *   Deadline for submission: 27 January, 2013
  *   Date of notification of acceptance: 15 March, 2013

Local organizers:

  *   Eleonora Montuschi, Luigi Perissinotto (University of Venice Ca' Foscari, Dept. of Philosophy and Cultural Heritage, Philosophy Section).

Conference homepage:
For more information about the conference see www.enposs.eu

Publication:

  *   Selected papers from the Conference will be published in an annual special issue of the journal Philosophy of the Social Sciences

ENPOSS:
The purpose of the European Network of Philosophy of the Social Sciences is to promote, encourage and facilitate academic discussion and research in the philosophy of the social sciences broadly conceived.
Steering Committee: Alban Bouvier (Paris), Byron Kaldis (Athens), Thomas Uebel (Manchester), Julie Zahle (Copenhagen), and Jesús Zamora-Bonilla (Madrid).

PSSRT:

The Philosophy of Social Science Roundtable serves as a forum for communication among philosophers and social scientists who share an interest in discussion of epistemology, explanatory paradigms, and methodologies of the social sciences.

Programme Committee: James Bohman (St. Louis), Mark Risjord (Atlanta), Paul Roth (Santa Cruz), Stephen Turner (Tampa), Alison Wylie (Seattle)

Tuesday, October 30, 2012

CFP: Models and Decisions (Munich, April 2013)

***************************************
6th Munich-Sydney-Tilburg conference on

MODELS AND DECISIONS

Munich Center for Mathematical Philosophy

10-12 April 2013

http://www.lmu.de/ModelsAndDecisions2013

****************************************
Mathematical and computational models are central to decision-making in a wide variety of contexts in science and policy: they are used to assess the risk of large investments, to evaluate the merits of alternative medical therapies, and are often key in decisions on international policies – climate policy being one of the most prominent examples. In many of these cases, they assist in drawing conclusions from complex assumptions. While the value of these models is undisputed, their increasingly widespread use raises several philosophical questions: What makes scientific models so important? In which way do they describe, or even explain, their target systems? What makes models so reliable? And: what are the import, and the limits, of using models in policy making? This conference will bring together philosophers of science, economists, statisticians and policy makers to discuss these and related questions. Experts from a variety of fields will exchange first-hand experience and insights in order to identify the assets and the pitfalls of model-based decision-making. The conference will also address and evaluate the increasing role of model-based research in scientific practice, both from a practical and from a philosophical point of view.

We invite submissions of extended abstracts of 1000 words by 15 December 2012. Decisions will be made by 15 January 2013.

KEYNOTE SPEAKERS: Luc Bovens (LSE), Itzhak Gilboa (Paris and Tel Aviv), Ulrike Hahn (Birkbeck), Michael Strevens (NYU), and Claudia Tebaldi (UBC)

ORGANIZERS: Mark Colyvan, Paul Griffiths, Stephan Hartmann, Kaerin Nickelsen, Roland Poellinger, Olivier Roy, and Jan Sprenger

PUBLICATION: We plan to publish selected papers presented at the conference in a special issue of a journal or with a major book publisher (subject to the usual refereeing process). The submission deadline is 1 July 2013. The maximal paper length is 7000 words.

GRADUATE FELLOWSHIPS: A few travel bursaries for graduate students are available (up to 500 Euro). See the website for details.

Special Issue: Kuhnian Perspectives on the Life and Human Sciences

To mark the 50th anniversary of the publication of Thomas Kuhn's The Structure of Scientific Revolutions, a Kuhn-and-revolutions-themed special issue of articles from Studies in History and Philosophy of Biological and Biomedical Sciences is now available for free downloading at the journal's website:

http://www.journals.elsevier.com/studies-in-history-and-philosophy-of-science-part-c-studies-in-history-and-philosophy-of-biological-and-biomedical-sciences/

In the main journal, articles from the current (September 2012) issue include:

* Anna Maerker on Florentine anatomical wax models in eighteenth-century Vienna
* Roberta Millstein on Darwin, race and sexual selection
* Leon Rocha on Needham, Daoism and Science and Civilization in China

Monday, October 29, 2012

From the naturalism workshop, part III

And we have now arrived at the commentary on the final day of the workshop on “Moving Naturalism Forward,” organized by cosmologist Sean Carroll. It was my turn to do an introductory presentation on the relationship between science and philosophy, and on the idea of scientism. (Part I of this commentary is here, part II here.)

I began by pointing out that it doesn’t help anyone if we play semantic games with terms like “science” and “philosophy.” In particular, “science” cannot be taken to be simply whatever deals with facts, just like “philosophy” isn’t whatever deals with thinking. So for instance, facts about the planets in the solar system are scientific facts, but the observation that I live in Manhattan near the Queensborough Bridge is just a fact; science has nothing to do with it. Similarly, John Rawls’ book A Theory of Justice, to pick an arbitrary example, is real philosophy, while L. Ron Hubbard’s nonsense about Dianetics isn’t, even though he thought of it as such.

So science becomes a particular type of structured social activity, characterized by empirically driven hypothesis testing about the way the world works, peer review, technical journals, and so on. And philosophy is about deploying logic and general tools of reasoning and argument to reflect on a broad range of subject matters (epistemology, ethics, aesthetics, etc.) and to reflect on other disciplines (“philosophies of”).

Another important thing to get straight: philosophy is not in the business of advancing science. We’ve got science for that, and it works very well. Some philosophy is “continuous” with science, but most is not. Also, philosophy makes progress by exploring logical space, not by making empirical discoveries.

I then brought up the Bad Boy of physics, Richard Feynman, who famously said: “Philosophy of science is about as useful to scientists as ornithology is to birds.” True enough (except when it comes to ornithologists helping to avoid the extinction of some bird species), but surely that does not imply that ornithology is thereby useless.

Next, I moved to a discussion of scientism. I suggested that in the strong sense this is the view that only scientific claims, or only questions that can be addressed by science, are meaningful. In a weaker sense, it is the view that the methods of the natural sciences can and should be applied to any subject matter. I think the first one is indefensible, and that the second one needs to be qualified and circumscribed. For instance, there are plenty of areas where science has little or nothing interesting to say: mathematics, logic, aesthetics, ethics, literature, just to name a few.

It is, of course, true that a number of philosophers have said, and continue to say, bizarro things about science, or even about philosophy itself (Thomas Nagel and Jerry Fodor come to mind as recent examples). But a pretty good number of scientists are on record as having said bizarro things about philosophy, or even about science itself (Lawrence Krauss, and more recently Freeman Dyson).

What I suggested as a way forward is that we should work toward re-establishing the classical notion of scientia, which means knowledge in the broader sense, including contributions from science, philosophy, math, and logic. There is also an even broader concept of understanding, which is relevant to human affairs. And I think that understanding requires not only scientia, but also other human activities such as art, music, literature, and the broader humanities. As you can see, I was trying to be very ecumenical...

In the end, I submitted that skirmishes between scientists and philosophers are not just badly informed and somewhat silly, they are anti-intellectual, and do not help the common cause of moving society toward a more rational and compassionate state than it finds itself in now.

The discussion that followed was very interesting. Alex Rosenberg did stress that philosophers interested in science need to pay close attention to what goes on in the lab, to which both Sean Carroll and Janna Levin responded that there are very good examples of important conceptual contributions made by philosophers to physics, particularly in the area of interpretations of quantum mechanics. Rosenberg also pointed out that some philosophers — for instance Samir Okasha — have contributed to biology, for instance in the area of debates about levels of selection.

We then talked about the issue of division of intellectual labor, with Dennett stressing the ability (and dangers!) of philosophers to take a bird’s eye view of things that is often unavailable to scientists. This, I commented, is because scientists are justifiably busy with writing grant proposals, doing lab work, and interacting closely with graduate students. That was my own experience as a practicing evolutionary biologist. As a philosopher, I rarely write grant proposals, I don’t have to run a lab or do field work, and my interactions with graduate students are often in the form of visits to coffee houses and wine bars. All of which affords me the “luxury” (really, it’s my job) to read, think and write more broadly than I could when I was a practicing scientist.

Along similar lines, Sean Carroll remarked — again going back to actual examples from physics — that scientists concern themselves primarily with how to figure things out, postponing the broader question of what those things mean. That’s another area where good philosophy can be helpful. Rebecca Goldstein added that philosophy is hard to do well, and that scientists should be more respectful and less dismissive of what philosophers do. Janna Levin observed that much of the fracas in this area is caused by a few prominent, senior (quasi-senile?) scientists and philosophers, but that in reality most scientists have a healthy degree of respect for philosophy.

At this point Coyne asked a reasonable question: we have talked about contributions that philosophers have made to science, what about the other way around? Several people offered the examples of Einstein, Bell and Feynman (ironically, the same guy who made the philosophy-as-ornithology comment mentioned above), the latter for instance on the concept of natural law.

That was it, folks. What did I take from the experience? At least the following points:

* On naturalism in general: we agreed that there are different shades of philosophical naturalism, and that reasonable people may disagree about the degree of, say, reductionism or determinism that the view entails.

* On determinism: given that even the physicists aren’t sure, yet, whether quantum mechanics is best interpreted deterministically or not (not to mention the interpretation of any more fundamental theory), the question is open.

* On reductionism: Rosenberg’s extreme reductionism-nihilism was clearly, well, extreme within this group. Most participants agreed that one can, indeed should, still talk about morality and responsibility in meaningful terms.

* On emergence: there was, predictably, disagreement here, even among the physicists. Carroll seemed the most sympathetic to the concept, repeatedly talking, for instance, about the emergence of the Second Law of thermodynamics from statistical mechanics. Even Weinberg agreed that there are emergent phenomena in a robust sense of the term, but of course he preferred a “weak” concept of emergence, according to which the reductionist can write a promissory note that “in principle” things could be explained by a fundamental law. It was unclear what such a principle might be, or even why that fundamental law couldn’t itself be considered emergent from something else (the “it’s turtles all the way down” problem).

* On meaning: following Goldstein, most of us agreed that there is meaning in human life, which comes out of the sense that we matter in society and to our fellow human beings. Flanagan’s concept of “eudaimonics” was, I think, most helpful here.

* On free will and moral responsibility: the debate between incompatibilists (Coyne, Rosenberg) and compatibilists (most of the rest, led of course by Dennett) continued. But we agreed that “free will” is far too loaded a concept, with Flanagan’s suggestion that we go back to the ancient Greeks’ categories of voluntary and involuntary action being particularly useful, I think. Even Coyne agreed that there is a Dennett-like sense in which we can think of morally competent vs morally incompetent agents (say, a normal person and one with a brain tumor affecting his behavior), thereby rescuing a societally and legally relevant concept of morality and responsibility.

* Relationship between science and philosophy: people seemed in broad agreement with my presentation (again, including Jerry), from which it follows that science and philosophy are partially continuous and partially independent disciplines, the first one focused on the systematic study of empirical data about the world, the second more concerned with conceptual clarification and meta-analysis (“philosophy of”). We also agreed that there are indeed good examples of philosophers of science playing a constructive role in science, and vice versa of scientists who have contributed to philosophy of science (take that, Krauss and Hawking!).

This, added to the positive effect of meeting one’s intellectual adversaries in person, sharing meals and talking over a beer or a glass of wine, has definitely made the workshop as a whole a stupendous success. Stay tuned for the full video version on YouTube...

Saturday, October 27, 2012

From the naturalism workshop, part II


by Massimo Pigliucci

Second day of the workshop on “Moving Naturalism forward,” organized by cosmologist Sean Carroll. Today we started with Steven Weinberg (Nobel in physics) introducing his thoughts about morality. Why is a physicist talking about morality, you may ask? Good question, I reply, but let’s see...

The chair of the session was Rebecca Goldstein, who mentioned that she doesn’t find the morality question baffling at all. For her, moral reasoning is something that we have been doing for a long time, and moreover one where philosophy has clearly made positive and incremental contributions throughout human history. She of course accepts the idea of a naturalistic origin for morality, but immediately added that evolutionary psychological accounts are simply not enough. In the process, she managed to both appreciate and criticize the work of Jonathan Haidt on the different dimensions of liberal vs conservative moral reasoning.

Weinberg agreed with Goldstein’s broad claim that we can reason about morality, but was concerned with the question of whether we can ground morality using science, and particularly the theory of evolution. He declared that he has been “thoroughly annoyed” by Sam Harris’ book on scientific answers to moral questions. He went on to observe that most people don’t actually have a coherent set of moral principles, nor do they need one. Weinberg said that early on in his life he was essentially a utilitarian, thinking that maximization of happiness was the logical moral criterion. Then he read Huxley’s Brave New World, and was disabused of such a simplistic notion. Which is yet another reason he didn’t find Harris compelling, considering that the latter is a self-described utilitarian.

Weinberg also criticized utilitarianism by rejecting Peter Singer-style arguments to the effect that more good would be done in the world by living on bare minimum necessities and giving away much of your income to others. Weinberg argued instead that we owe loyalty to our family and friends, and that there is nothing immoral about preferring their welfare to the welfare of strangers. Indeed, although I don’t think he realized it, he was essentially espousing a virtue ethics / communitarian type of ethics. Weinberg concluded from his analysis that we “ought to live the unexamined life” instead, because that’s what the human condition leads us to.

Goldstein’s response was that we don’t need grounding postulates to engage in fruitful moral reasoning, and I of course agree. I pointed out that ethics is about developing reasonable ways to think about moral issues, starting with (and negotiating) certain assumptions about human life. In my book, for instance, Michael Sandel’s writings are excellent examples of how to engage in fruitful moral reasoning without having to settle the sort of metaethical issues that worry Weinberg (interestingly, and gratifyingly, I saw Jerry Coyne nodding somewhat vigorously while I was making my points). Dennett added that there are ways of thinking through issues that do not involve fact finding, but rather explore the logical consequences of certain possible courses of action — which is why moral philosophy is informed by facts (even scientific facts), but not determined by them. And for Dennett, of course, we — meaning humanity at large — are the ultimate arbiters of what works and doesn’t work in the ethical realm.

Dawkins agreed with Goldstein that there has been moral progress, and that we live in a significantly improved society in the 21st century compared to even recent times, let alone of course the Middle Ages. Dawkins also mentioned Steven Pinker’s work demonstrating a steady decrease in violence throughout human history (Goldstein humorously pointed out that Pinker got the idea from her). Dawkins also made the good point that we talk about morality as if it were only a human problem because all other species of Homo went extinct. Had that not been the case, we might be having a somewhat different conversation.

Both Weinberg and Goldstein agreed that a significant amount of moral progress comes from literature, and more recently movies. Things like Uncle Tom’s Cabin, or Sidney Poitier’s role in Guess Who’s Coming to Dinner, have the power to help change people’s attitudes about what is right and what is wrong.

Which led to my comment about Hume and Aristotle. I think — with these philosophers — that moral reasoning is grounded in a broadly construed conception of human nature. Aristotle emphasized the importance of community environment, and particularly of one’s family and early education environment; but also of reflection and conscious attempts at improving. Hume agreed that basic human instincts are a mix of selfish and cooperative ones, but also argued that human nature itself can change over time, as a result of personal reflection and community wide conversations.

Carroll noted a surprising amount of agreement in the group about the fact that morality arose naturally because we are large brained social animals with certain needs, emotions and desires; but also about the fact that factual information and deliberate reflection can both improve our lot and the way we engage in moral reasoning. Owen Flanagan, however, pointed out that most people outside of this group do think of morality in a foundational sense, which is untenable from a naturalistic perspective. Owen went on to remind people that David Hume — after the famous passage warning about the logical impossibility of deriving ought from is — went on to engage in quite a bit of moral reasoning nonetheless, simply doing so without pretending that he was demonstrating things.

Weinberg claimed that he cannot think of a way to change other people’s minds about moral priorities when there is significant disagreement. But Dennett pointed out that we do this all the time: we engage in societal conversations with the aim of persuading others, and in so doing we are changing their nature. That is, for instance, how we made progress on issues such as women’s rights, gay rights, or animal welfare (as Goldstein had already pointed out).

Terrence Deacon remarked that there was an elephant in the room: how is it that this group agrees so broadly about morality, if a good number of them are also fundamental reductionists? Isn’t moral reasoning an emergent property of human societies? That is indeed a good question, and I always wonder how people like Coyne or Rosenberg (or Harris, who was invited but couldn’t make it to the workshop) can at the same time hold essentially nihilistic views about existence and yet talk about good and bad things and what we should (ought?) do about them. Carroll agreed that we should be using the emergence vocabulary when talking about societies and morality. In his mind, the stories we tell about atoms are different from the stories we tell about ethics; the first ones are descriptive, the latter ones become prescriptive. To use his kind of example, we can use the term “wrong” both when someone denies the existence of quarks and when someone kills an innocent person, but that word indicates different types of judgments that we need to keep distinct.

Simon DeDeo asked what sort of explanation we have for saying that, say, Western society has gotten “better” at ethical issues. (We all agreed that, more or less, it has.) We don’t seem to have anything like, say, the evolutionary explanation of what makes a bird “better” at flying. But Don Ross replied that we do have at least partial explanations, for instance drawing on the resources of game theory. In response to Ross, DeDeo pointed out that game theory can only give an account of morality within a consequentialist framework. Both Ross and (interestingly) Alex Rosenberg disagreed. Dennett helped clarify things here, making a distinction between what he called “second rate” (or naive) consequentialism, which is a bad idea easily criticized on philosophical grounds, and the broader point that of course consequences matter to human ethical decision making. In general, I think we are still doing fairly poorly in the area we would need in order to answer DeDeo’s question: a good theory of cultural evolution. But of course that doesn’t mean it cannot be done or will not be done at some point (as is well known, I’m skeptical of memetic-type theories in this respect).

In the second part of the morning session we moved to consider the concept of meaning, with Owen Flanagan giving the opening remarks. He pointed out that the historical realization that we are “just” animals caused problems within the context of the preceding cultural era, during which human beings were thought of as special direct creations of gods. Owen brought us back 2,500 years, to Aristotle and the ancient Greeks’ concept of eudaimonia, the life that leads to human flourishing. Aristotle noted that people have different ideas of the good life, but also that there are some universals (or nearly so). One of these is that no normal person wishes to have a life without friends. Flanagan thinks — and I agree — that we can use the Aristotelian insight to build a discipline of “eudaimonics,” one that is both descriptive and normative. The good life is about the confluence of the true, the beautiful and the good (all lower case letters, of course).

An example I brought up of modern-day analysis of a concept that Aristotle would have been familiar with is the comparison between people’s day-to-day self-reported happiness vs their overall conception of meaning in their life when it comes to having children. Turns out that having children actually significantly decreases day-to-day happiness, but it also increases the long-term positive meaning that most people attribute to their lives.

Rebecca Goldstein argued that novelists have a unique perspective on the issue of meaning, because of the process involved in devising characters and their stories. She claims that writing novels taught her that a major component of flourishing and meaning is the idea of an individual mattering to other people. (Again, Aristotle would not have been surprised.) Rebecca connected this to the question she is often asked about how she can find meaning in life as an atheist. She had a hard time even understanding the question, until she realized that of course for theists meaning is defined universally by an external agency on the basis that we “matter” to the gods. So the atheist is still using the idea that mattering and meaning are connected, she just does away with the external agency.

Dennett suggested that we as atheists need to think of projects and organizations that help secular people feel that they matter in more productive ways than, say, joining a crusade to kill the infidels. Janna Levin brought up the example of a flourishing of science clubs in places like New York City, which provide a community for intellectual kin (and of course there are also a good number of philosophy meetups!). Still, I argued (and Carroll, Goldstein, Coyne, and Flanagan agreed) that attempts in that direction — like the various Societies for Ethical Culture — have largely been a failure. Secularists, especially in Europe, find meaning and feel that they matter because they live in a society they feel comfortable in and are active members of. Just like the ancient Greeks’ concept of a polis that citizens could be proud of and contribute to. It’s the old Western idea of civic pride, if you will.

I need to note at this point that — just as in the case of morality discussed above — the nihilists / reductionists in the group didn’t seem to have any problem meaningfully talking about meaning, so to speak, even though their philosophy would seem to preclude that sort of talk altogether... (The exception was Rosenberg, who stuck to his rather extreme nihilist guns.)

The afternoon session was devoted to free will, with Dennett giving the opening remarks. His first point was that there is a difference between the “manifest image” and the “scientific image” of things. For instance, there is a popular / intuitive conception of time (manifest image), and then there is the philosophical and/or scientific conception of time. But it is still the case that time exists. Why, then, asked Dennett, do so many neuroscientists flat out deny the existence of free will (“it’s an illusion”), rather than replacing the common image with a scientific one?

Free will, for Dennett, is as real as time or, say, colors, but it’s not what some people think it is. And indeed, some views of free will are downright incoherent. He suggested that nothing we have learned from neuroscience shows that we haven’t been wired (by evolution) for free will, which means that we also get to keep the concept of moral responsibility. That said, contra-causal free will would be a miracle, and we can’t help ourselves to miracles in a naturalistic framework.

Citing a Dilbert cartoon, Dennett said that the zeitgeist is such that people think that it follows from naturalism that we are “nothing but moist robots.” But this, for Dennett, is confusing the ideology of the manifest image with the manifest image itself. An analogy might help: one could say that if that is what you mean by color (i.e., what science means by that term), then color doesn’t exist. But we don’t say that, we re-conceptualize color instead. For instance: it makes perfect sense to distinguish between people who have the competence and will to sign a contract, and those who don’t. We have to draw these distinctions because of practical social and political reasons, which however does not imply that we are somehow cutting nature at its joints in a metaphysical sense. Moreover, Dennett pointed out that experiments show that if people are told that there is no free will they cheat more frequently, which means that the conceptualization of free will does have practical consequences. Which in turn puts some responsibility on the shoulders of neuroscientists and others who go around telling people that there is no free will.

Jerry Coyne gave the response to Dennett’s presentation, not buying into the practical dangers highlighted by the latter (Jerry seemed to think that these effects are only short-term; that may be, but I don’t think that undermines Dennett’s point). Coyne declared himself to be an incompatibilist (no surprise there), accusing compatibilists of conveniently redefining free will in order to keep people from behaving like beasts. However, Jerry himself admitted to having changed his definition of free will, and I think in an interesting direction. His old definition was the standard idea that if the tape of the history of the universe were to be played again you would somehow be able to make a different decision, which would violate physical determinism. Then he realized that quantum indeterminacy could, in principle, bring in indeterminism, and could even affect your conscious choices (through quantum effects percolating up to the macroscopic level). So he redefined free will as the idea that you are able to make decisions independently of your genes, your environments and their interactions. To which Dennett objected that that’s a pretty strange definition of free will, which no serious compatibilist philosopher would subscribe to.

Jerry then plunged into his standard worry, the same that motivates authors like Sam Harris: we don’t want to give ground to theologically-informed views of morality, and incompatibilism about free will (“we are the puppets of our genes and our environments”) is, in his view, the best way to deny them that ground. Dennett was visibly shaking his head throughout (so was I, inwardly...).

In the midst of all of this, Jerry mentioned the (in)famous Libet experiments, even though they have been taken apart both philosophically and, more recently, scientifically. Which Dennett, Flanagan, and Goldstein immediately pointed out.

During the follow up discussion Weinberg declared his leaning toward Dennett’s position, despite his (Weinberg’s) acceptance of determinism. We weigh reasons and we arrive at conscious decisions, and we know this by introspection — although he pointed out that of course this doesn’t mean that all our own desires are transparent and introspectively available. Weinberg did indeed paint a picture very similar to Dennett’s: we may never arrive — given the same circumstances — at a different decision, but it is still our decision.

Rosenberg commented that we have evidence that we cannot trust our introspection when it comes to conscious decision making, again citing Libet. Both Dennett and Flanagan once more pointed out that those experiments were taken conceptually apart (by them) decades ago (and, I reminded the group, questioned on empirical grounds more recently). Dennett did agree that introspection is not completely reliable, but he remarked that that’s quite different from claiming that we cannot rely on it at all.

Owen Flanagan discussed experiments about conceptions of free will done on undergraduate students. The students were given a definition of free will and then asked questions about whether the person made the decision and was responsible for her actions. The majority of subjects turned out to be both determinists and compatibilists, which undermines the popular idea that the commonsense concept of free will is contra-causal.

I pointed out, particularly to Jerry and Alex Rosenberg, that incompatibilists seem to discard or bracket out the fact that the human brain evolved to be a decision making, reason-weighing organ. If that is true, then there is a causal story that involves the brain, and my decisions are mine in a very strong sense, despite being the result of my lifelong gene-environment interactions (and the way my conscious and unconscious brain components weigh them).

Sean Carroll also objected to Coyne, using an interesting analogy: if Jerry applied his argument toward incompatibilism to fundamental physics, he would have to conclude that there is an incompatibility between statistical mechanics and the second law of thermodynamics. But, Sean suggested, that would be a result of confusing language that is appropriate for one level of analysis with language that is appropriate for another level. (Though he didn’t say so, I would go even further, following up on the previous day’s discussion, and suggest that free will is an emergent property of the brain in a similar sense to which the second law is an emergent property of statistical mechanics — and on the latter even Steven Weinberg agreed!)

Terrence Deacon asked why we insist on using the term “free” will, and Jerry had previously invited people to drop the darn thing. I suggested, and Owen elaborated on it, that we should instead use the terms that cognitive scientists use, like volition or voluntary vs involuntary decision making. Those terms both capture the scientific meaning of what we are talking about and retain the everyday implication that our decisions are ours (and we are therefore responsible for them). And dropping “free” also avoids any confusion with contra-causal mystical-theological mumbo jumbo.

Dennett, in response to a question by Coyne about the evolution of free will, pointed out two interesting things. First, if we take free will to be the ability of a complex brain to exercise conscious decision making, then it is a matter of degrees, and other species may have partial free will. Second, and relatedly, human beings themselves are not born with free will: we develop competence to make (morally relevant, among others) decisions during early life, in part as the result of education and upbringing.

Jerry at some point brought up the case of someone who commits a murder because a brain tumor interfered with his brain function. But I commented that it is strange to take those cases — where we agree that there is a malfunction of the brain — and make them into arguments to reject moral responsibility. Dennett agreed, talking about brains being “wired right” or “wired wrong,” which is a matter of degree, and which translates into degrees of moral responsibility (lowest for the guy affected by the tumor, highest for the person who kills for financial or other gain). Jerry, interestingly, brought up the case of a person who becomes a violent adult because of childhood traumas. But Dennett and I had a response that is in line with our conception of the brain as a decision making organ with higher or lower degrees of functionality: the childhood trauma imposes more constraints (reduces free will) on the brain’s development than a normal education, but fewer than a brain tumor. Consequently, the resulting adult bears an intermediate degree of moral responsibility for his actions.

The second session of the afternoon was on consciousness, introduced by David Poeppel. He claimed — as a cognitive scientist — that there are good empirical reasons to reject the conclusion that Libet’s experiments (again!) undermine the idea of conscious decision making. At the same time, he did point to research showing that quite a bit of decision making in our brain is in fact invisible or inaccessible to our consciousness.

Dennett brought up experiments on priming in psychology, where the subjects are told not to say whatever word they are going to be primed for. Turns out that if the priming is too fast for conscious attention to pick it up, the subjects will in fact say the word, contravening the directions of the experimenter. But if the time frame is sufficiently long for consciousness to come in, then people are capable of stopping themselves from saying the priming word. The conclusion is that this is good evidence that conscious decision making is indeed possible, and that we can study its dynamics (and limits).

Rosenberg warned that we have good evidence leading us to think that we cannot trust our conscious judgments about our motives and mental states. Indeed, as Dennett pointed out, of course there is self-deception, rationalization, ideology, and self-fooling. But it is also the case that it is only through conscious reasoning that we get to articulate and reflect on our thoughts. We need consciousness to pay attention to our reasons for doing things. Conscious reasons can be subjected to a sort of “quality control” that unconscious reasons are screened off from. For Dennett human beings are powerful thinking beings because they can submit their own thinking to analysis and quality control.

And of course Daniel Kahneman’s work on type I (fast, unconscious) vs type II (slow, conscious) thinking came up. Poeppel pointed out that sometimes type I thinking is not just faster, but better than type II. To which Dennett replied that if you are about to have brain surgery you might prefer the surgeon to make considered decisions based on his type II system rather than quick type I decisions. Of course, which system does a better job is probably situation dependent, and at any rate is an empirical question.

Carroll asked whether it is actually possible to distinguish conscious from unconscious thoughts, to which both Poeppel and Goldstein replied yes, and we are getting better at it. Indeed, this has important practical applications, as for instance anesthesiologists have to be able to tell whether there is conscious activity in a patient’s brain before an operation begins. However, the best evidence indicates that consciousness is a systemic (emergent?) property, since it disappears below a certain threshold of brain-wide activity.

Dennett brought up the example of the common experience of thinking that we understand something, until we say it out loud and realize we don’t. No mystery there: we are bringing in “more agents” (or, simply, more and more deliberate cognitive resources) into the task, so it isn’t surprising that we get a better outcome as a result.

Rosenberg asked if we were going to talk about the “mysterian” stuff about consciousness, things like qualia, aboutness, and what it is like to be a bat. I commented that the only sensible lesson I could take out of Nagel’s famous bat-paper is not that first person experiences are scientifically inexplicable, but that the only way to have them is to actually have them. Dennett, however, remarked that he pointedly asked Nagel: if you had a twin brother who was a philosopher, would you be able to imagine what it is like to be your brother? To which Nagel, unbelievably I think, answered no. Of course we are perfectly capable of imagining what it is like to be another being who is biologically relevantly similar to ourselves.

Flanagan brought up Colin McGinn’s “mysterian” position about consciousness, pointing out that there is no equivalent in neuroscience or philosophy of mind of, say, Gödel’s incompleteness theorems or Heisenberg’s indeterminacy principle. Similarly, Owen was dismissive (rightly, I think) of David Chalmers’ dualism based on mere conceivability (of unconscious zombies who behave like conscious beings, in his case).

I asked, provocatively, if people around the table think that consciousness is an illusion. Jerry immediately answered yes, but the following discussion clarified things a bit. Turns out — and Dennett was of great help here — that when Jerry says that consciousness is an epiphenomenon of brain functioning he actually means something remarkably close to what I mean by consciousness being an emergent property of the brain. We settled on “phenomenon,” which is the result of evolution, and which has functions and effects. This, of course, as opposed to the sense of “epiphenomenon” in which something has no effect at all, and which in this context leads to an incoherent view of consciousness (but one that the “mysterians” really like).

At this point Rosenberg introduced yet another controversial topic: aboutness. How is it possible, from a naturalist’s perspective, to have “Darwinian systems” like our brains that are capable of semantic reference (i.e., meaning)? Terrence Deacon responded that the content of thought, its aboutness, is not a given brain state, but brain states are necessary to make semantic reference possible. Don Ross, in this context, invoked externalism: a brain state in itself doesn’t have a stable meaning or reference; that stable meaning is acquired only by taking into account the larger system that includes objects external to us. Dennett’s example was that externalism is obviously true for, say, money: bank accounts have numbers that represent money, but they are not money, and the system works only because the information internal to the bank’s computers refers to actual money present in the outside world.

Rosenberg seemed bothered by the use of intentional language in describing aboutness. But Dennett pointed out that intentionality is needed to explain a certain category of phenomena, just like — I suggested — teleological talk is necessary (or at the very least very convenient) to refer to adaptations and natural selection. And here I apparently hit the nail on the head: Rosenberg rejects the (naturalistic) concept of teleology, while Dennett and I accept it. That is why Rosenberg has a problem with intentional language and Dennett and I don’t.

And that, as it turns out, was a pretty good place to end the second day. Tomorrow: scientism and the relationship between science and philosophy.