As I read the paper there are two argumentative goals, but I’m not sure how they’re related. There’s the bit where you argue against literalism and for agnosticism (roughly, that our best explanations are ontologically neutral in the sense that no ontological commitments can simply be read off of them). And then there’s the bit at the end where you advocate for some version of the semantic view of theories—though really I think you want to advocate for the centrality of models in science, rather than the classical semantic view.
For clarification, I take it that you’re associating literalism with the axiomatic view of theories, so that undermining literalism is supposed to support the move to the semantic view. Is this correct? (As a side note, if the only support for literalism were the axiomatic view, that would automatically count against it in my opinion, since effectively nobody believes the axiomatic view anymore. And no one should.)
Suppose we accept some form of model-based explanation in science. This involves taking models to be the central descriptive and explanatory constructs in science, rather than theories. And suppose we read models in something like Giere’s way, as abstracta that bear various similarity relations to real target systems in the world. Does this help avoid the objections to literalism? I think it doesn’t.
To see this, first note that we can couch these models in terms of either high-level or low-level properties, just as we can the descriptions that give rise to them. Issues about proportionality and explanatory aptness, then, can just as easily arise when we are considering models as when we are considering theories. The issues now just turn on which abstract entities make it into the model, rather than which terms comprise the sentences of the theory; but in either case we can raise exactly the same question: how are these elements of the model/theory related to the real structures, properties, and causal powers in the target system?
To put the point more directly and forcefully, a model may fit or fail to fit the world in various ways. (See Giere’s discussion of similarity mapping here.) Even a model that is successful in various ways may not correspond well with the structure of a real-world system. To know whether a model’s success is due to its tracking real structures, we need to adopt a range of methods to confirm the existence and nature of the entities that the model talks about. We cannot simply read off from the structure of the model any facts about the target system. This is the equivalent of treating models literally, and it is just as much a mistake here as when we were dealing with linguistically formulated theories.
So my question is: how does moving to models help us avoid the problem of literalism, when we face the very same sort of mapping problem in deciding whether a model is sufficiently similar to the real world target system that it is intended to characterize? The ontological commitments of good models are no more transparent than are those of theories.
Hi Colin!! This was a great paper—really interesting! It relates to some issues about ontological commitment that I’ve also been thinking about recently.
I confess that I had the same worries as Dan: I couldn’t quite see how the two bits of the paper hang together—the choice between literalism and agnosticism just seems independent of the semantic view of scientific theories.
I was also a bit worried about the construal of literalism.
I thought that literalism was a bit more exacting than you make out. Literalism says that an ontological commitment appears when you have a term that ineliminably contributes explanatory value. In other words, if you were to eliminate the term from science, or try to paraphrase it away, you would end up with a significantly worse science—net explanatory value would go down significantly.
If a term passes this criterion, then this shows that it does ‘explanatory work’, which literalism assumes is the hallmark of hooking onto something real in the world.
I guess that I wasn’t sure how the example you gave threatened this.
On the HH model, the ratio GNa/GK doesn’t seem to be ontologically committing for a literalist, since this ratio talk can easily be paraphrased without incurring explanatory loss (e.g. as the ratio of two conductances).
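Just to make the paraphrase concrete, the textbook form of the Hodgkin-Huxley ionic current is

\[ I_{\mathrm{ion}} = \bar{g}_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}}) + \bar{g}_{\mathrm{K}}\, n^{4}\,(V - E_{\mathrm{K}}) + \bar{g}_{L}\,(V - E_{L}), \]

(I’m using the usual textbook symbols, which may not match the paper’s notation). Whichever conductances the ratio is taken over—the maximal \(\bar{g}\)’s or the gated terms \( \bar{g}_{\mathrm{Na}} m^{3} h \) and \( \bar{g}_{\mathrm{K}} n^{4} \)—it is just a quotient of quantities that already appear separately in the equation, so talk of the ratio can be traded for talk of the two conductances without explanatory loss.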
In contrast, if one were to eliminate or paraphrase talk of high-level geometrical properties (the shape of the square peg), that would have a massive negative effect on the kinds of explanations one could give in science—they’d end up being horribly and infinitely disjunctive in many cases.
So GNa/GK fails the test of being explanatorily ineliminable, while Putnam’s geometrical properties pass that test...?