Vincent F. Hendricks and Duncan Pritchard (eds.), New Waves in Epistemology, Palgrave Macmillan, 2008, 383pp., $32.95 (pbk), ISBN 9780230537880.
Reviewed by Dennis Whitcomb, Western Washington University
This collection contains three kinds of papers: mainstream, formal, and programmatic. The mainstream papers use tools like conceptual analysis and historical scholarship to illuminate the nature and extent of our knowledge and justified belief. The formal papers bring tools like modal logic and probability theory to bear on these same topics and related topics like proper reasoning. The programmatic papers outline new approaches to epistemological research. All the papers appear here for the first time. As usual for this sort of volume, the quality varies, with the best papers being quite good.
The volume’s first four papers — those by Tim Black, Duncan Pritchard, Michael Bergmann, and Nikolaj Nottelmann — fall squarely into the mainstream camp. Black develops a novel version of the “sensitivity” principle that we know only what we wouldn’t believe if it were false. According to Black’s version of this principle, we sensitively believe (and know) that we aren’t brains in vats. Pritchard outlines the anti-luck epistemology developed in his recent book (Pritchard 2005), demonstrating some of the power of his version of the “safety” principle that we know only what wouldn’t be false if we were to believe it. Bergmann also outlines work from a recent book (Bergmann 2006), work that impressively develops an externalist account of justification inspired by Thomas Reid. Nottelmann examines epistemic blameworthiness, excusability, and doxastic voluntarism; on these issues he delineates a variety of distinctions and applications.
Several of the formal papers are closely connected to mainstream work — in particular those by Boudewijn de Bruin, Erik Olsson, Paul Egré, and Berit Brogaard and Joe Salerno. In de Bruin’s paper we find an introduction to some central issues in epistemic and doxastic logic: the standard formalisms; applications of these formalisms (for instance in describing skeptical arguments and Moore’s paradox); and problems with these formalisms (for instance, problems about their status as idealizations and the sense, if any, in which they are normative). Olsson also connects formal and mainstream work. The last fifteen years have seen an explosion of theorizing about coherence, much of which has been informed by probability theory. Olsson summarizes this work, deftly and gently introducing its main strands and locating fruitful avenues for further research. Egré’s paper starts with an exploration of several Nozick-inspired modal conditions on knowledge — conditions in the ballpark of safety and sensitivity. It then focuses on a particular condition among these, namely Williamson’s (2000) “margin for error” principle, and tries to resist the widely discussed anti-luminosity argument in which Williamson employs that principle. In the final paper that closely connects mainstream and formal work, Brogaard and Salerno nicely canvass several approaches to Fitch’s knowability paradox. They then develop their own approach; its basic idea is that the inference from “all truths are knowable” to “all truths are known” is invalid due to the context-sensitivity of the term “someone” in the expressions “someone knows that p” and “it is possible that someone knows that p”.
Readers with a taste for more deeply technical material will appreciate the papers by Troy Catterson, Franz Huber, and Jeffrey Helzner. Catterson instructively discusses several puzzles for Hintikka-style epistemic logics. He tries to resolve some of those puzzles, and argues that the others cannot be resolved — at least not so long as we take propositions to be either sets or classes. Huber and Helzner both attempt to construct general structures for classifying and evaluating formal models of proper reasoning such as the various forms of Bayesianism and belief revision theory. Helzner focuses on Isaac Levi’s models of proper reasoning, situating them in a general structure and exploring some of their ramifications. Huber identifies two competing desiderata for theory acceptance, namely plausibility and informativeness. He lays down principles for weighing these desiderata, develops out of those principles a view about theory acceptance that incorporates strands of Bayesianism and formal learning theory, and applies that view to a variety of problems in the philosophy of science.
All of this work of Huber’s is very interesting, but the paper is full of printing errors in places where it matters. Quantifiers are replaced by ellipses, or other symbols, or left out entirely; the double-arrows for entailment are replaced by the epsilon symbol; and other similar troubles abound, all without explanation. The same sorts of troubles appear (less dramatically) in the paper by Brogaard and Salerno and (much less dramatically) in the paper by Catterson (as well as the papers by Riggs and Neta, which I will discuss below). One gets the sense that these errors resulted from technical glitches in the late stages of the printing process. In any case, all of the papers in question are well worth reading. In order to do so without frustration, you might wait for a corrected version of the volume to appear, or you might try to hunt down unsullied drafts on the internet (as I eventually did). With Huber’s paper, you could just read the expanded version (Huber 2008), which seems to be printing-error-free.
The volume finishes with programmatic papers by Finn Spicer, Wayne Riggs, and Ram Neta. Spicer argues that the human mind contains a module for reasoning about epistemic states, that this module encodes a tacit theory about what knowledge is, and that this module runs Gigerenzer-style fast and frugal heuristics for ascribing knowledge. Riggs outlines a value-theoretic approach to epistemology, attempting to make full use of a wide range of notions from axiology. Neta explores a couple of versions of epistemological naturalism and defends his own.
Riggs’ program pushes epistemology deep into normative territory, putting value-theoretic notions at the heart of the enterprise. Neta’s program pushes epistemology deep into naturalistic territory, putting naturalistic notions at the heart of the enterprise. It might seem as if these programs are at odds with one another. It is often claimed, at least, that naturalistic epistemologies cannot succeed because they cannot countenance normativity. Taken together, the papers by Riggs and Neta cast doubt on these claims.
Riggs’ approach is as thoroughly normative as epistemology gets, and Neta’s is as thoroughly naturalistic as epistemology gets. And as it turns out, the two approaches are sufficiently similar to be better classified as allies than adversaries. In their alliance we see naturalism and normativity standing not apart but together. And in seeing as much, we see that the supposed conflict between naturalism and normativity is not as deep as is often believed. Or so I’d like to suggest. Let me start developing this suggestion by outlining Riggs’ and Neta’s programs.
Riggs thinks epistemologists should appeal to work on such matters as the distinction between intrinsic and extrinsic value, the various species of extrinsic value (such as instrumental, indicative, and contributory extrinsic value), and the various items instantiating these various sorts of value. Leveraging this work, he invites us to think about which properties are intrinsic epistemic values, which properties are extrinsic (and which species of extrinsic) epistemic values, and which items instantiate these epistemic values.
Candidates for intrinsic epistemic values include the standard so-called “epistemic goals”: truth, lack of falsehood, and even such properties as understanding and wisdom. Candidates for extrinsic epistemic values include such properties as reliability (an instrumental extrinsic value) and coherence (perhaps an indicative extrinsic value, indicative of truth — or perhaps a contributory extrinsic value, contributing to understanding). Clearly, these different candidate values are instantiated by different sorts of objects. Truth is a property of (among other things) beliefs, reliability is a property of processes, and coherence is a property of bodies of beliefs (and perhaps other things like sensations). Who knows what understanding and wisdom are properties of; maybe they’re properties of persons.
To fill out the details of this structure is to engage in an epistemological research project in its own right. That project does not focus on answering the skeptic, or on describing the nature of knowledge, or on describing the nature of some condition on knowledge such as justification. Instead it focuses on epistemic value directly, for its own theoretical sake, and not for the sake of one of these other projects. It is thus best described as constituting its own subfield: epistemic value theory. This subfield has produced some of the most exciting work done in recent epistemology. It has interacted with other subfields too; in this interactive capacity it is often called “value-driven epistemology”. This interactive work includes, for example, discussions of the “swamping problem” for reliabilism — a problem perhaps more widely discussed in recent years than any other problem for any theory of the nature of knowledge. At the core of this “swamping problem” we find issues about the nature of intrinsic and extrinsic value, and the relationships between them. We thus see epistemic value theory applied to another subfield of epistemology, namely the theory of the nature of knowledge. Riggs aptly suggests that we should continue to pursue these sorts of applications by taking value first and leveraging it for epistemological progress more generally.
Believe it or not, Neta’s naturalism comes very close to constituting such an application. Neta starts by describing and rejecting two contemporary naturalistic programs in epistemology. The first of these is Kornblith’s (2002) program of taking knowledge to be a natural kind whose essence has been empirically revealed by cognitive ethology to consist in reliably produced true belief. The second such program is Bishop and Trout’s (2004) program of appealing to various branches of psychology and statistics (and related disciplines) to identify and recommend reasoning strategies that reliably produce true belief across a robust range of significant problems. In place of these two (and other) programs for naturalizing epistemology, Neta suggests a new program that he calls “teleological naturalism”.
According to teleological naturalism, cognitive systems have goal-states in the very same way other bodily systems (such as the digestive system) have goal-states. The goal-state of the digestive system is nourishment. In the same way, the goal-state of the cognitive system is knowledge. Now it turns out, Neta tells us, that there are variations in the goal-states of the cognitive systems of different species, and of different organisms of the same species, and even of the same organism at different times. In some cases, the goal-state of a cognitive system amounts to reliably produced true belief. In other cases, the goal-state of a cognitive system amounts to “true beliefs that are responsibly based upon good reasons” (344). What knowledge really is, then, is the multiply realizable higher-level state whose lower-level realizers consist in all of these goal-states: the goal-states of all cognitive systems. Other epistemic goods as well, goods like rationality and justification, are also to be defined in terms of cognitive systems.
By theorizing about knowledge and these other epistemic goods in terms of cognitive systems, Neta sets the groundwork for a substantively naturalistic research program. Just as we naturalistically study the digestive system, we can naturalistically study the cognitive system — which of course we do, in all the various branches of cognitive science. In naturalistically studying these systems, we can naturalistically study their goal-states — the various realizers of knowledge — and thereby study knowledge itself. But despite its status as a natural goal-state naturalistically studied, knowledge still has enormous value. Nourishment certainly does, despite its status as a natural goal-state — and, we may assume, knowledge is in this respect similar. Nothing in the status of either of these goal-states as natural keeps them from having value. Neta even goes on to claim that knowledge, in addition to being the goal-state of cognitive systems, is partly constitutive of human well-being.
There are several questions worth asking about this remarkably Aristotelian picture. For example: will definitions of the various bodily systems end up appealing to their goal-states? If so, then the definition of knowledge as the goal-state of cognitive systems is bound to be circular. Or at least, it is bound to be circular if it is true. But how confident should we be that it is true? How confident should we be that every cognitive system has (a realizer of) knowledge as its goal-state? Couldn’t some cognitive systems have as their goal-states desire satisfaction, biological fitness, or some other such state that is obviously not (a realizer of) knowledge? Isn’t this at least possible, perhaps even actual? And if not, then why not? There are other questions worth asking too, but I’ll leave them aside.
In their place I want to focus on how deeply normative Neta’s program is, despite its full-fledged naturalism. We can see in Neta’s program the beginnings of an answer to Riggs’ call for a three-fold identification of intrinsic epistemic values, extrinsic epistemic values of the various kinds, and epistemic value bearers. We’ve got intrinsic epistemic values: the goal-states of cognitive systems. These intrinsic values suggest extrinsic values too — namely whatever properties cause and/or indicate and/or contribute to the intrinsic epistemic values. And on top of that, we’ve got value bearers, objects instantiating these values. These turn out to include (among other things) beliefs, since belief is a necessary condition on the knowledge at which cognitive systems aim. We thus find in Neta’s naturalism an implementation of Riggsian epistemic value theory.
All right, I admit it: this is all a bit procrustean. But only a bit. There is a real connection between the programs on offer from Riggs and Neta, despite the status of these programs as full-blown versions of normative and naturalistic approaches to epistemology respectively. Neta finds, in the natural world, systems whose goal-states he explicitly takes to be of value; and starting with this locus of value we can begin to engage the task on offer from Riggs. The connection between these two programs casts doubt, I think, on the idea that there is a particularly deep conflict between normativity and naturalism in epistemology.
Alston, William. 2005. Beyond “justification”: dimensions of epistemic evaluation. Cornell University Press.
Bergmann, Michael. 2006. Justification without awareness. Oxford University Press.
Bishop, Michael and J.D. Trout. 2004. Epistemology and the psychology of human judgment. Oxford University Press.
DePaul, Michael. 2001. “Value monism in epistemology”, in Matthias Steup (ed), Knowledge, truth, and duty. Oxford University Press.
Goldman, Alvin. 1986. Epistemology and cognition. Harvard University Press.
Haddock, Adrian, Alan Millar, and Duncan Pritchard (eds). Forthcoming. Epistemic value. Oxford University Press.
Harman, Gilbert. 1986. Change in view. MIT Press.
—. 2000. Explaining value. Oxford University Press.
Huber, Franz. 2008. “Assessing theories, Bayes style”, Synthese 161: 89-118.
Kornblith, Hilary. 2002. Knowledge and its place in nature. Oxford University Press.
Kvanvig, Jonathan. 2005. “Truth is not the primary epistemic goal”, in Matthias Steup and E. Sosa (eds), Contemporary debates in epistemology. Blackwell.
Pritchard, Duncan. 2005. Epistemic luck. Oxford University Press.
—. 2007. “Recent work on epistemic value”, American Philosophical Quarterly 44: 85-110.
Quine, W.V.O. 1969. “Epistemology naturalized”, in his Ontological relativity and other essays. Columbia University Press.
Williamson, Timothy. 2000. Knowledge and its limits. Oxford University Press.
Zagzebski, Linda. 1996. Virtues of the mind. Cambridge University Press.
See the references in Bishop and Trout (2004: 110-111). Of course, there are many kinds of naturalized epistemology. Often, it is only some of these kinds that theorists have in mind when they claim that naturalized epistemology cannot countenance normativity. In particular, they often have in mind only Quine’s (1969) eliminative brand of naturalized epistemology. Often, that is, but not always. Sometimes the target of these critiques turns out to be epistemological naturalism more generally; again see the references in Bishop and Trout.
Actually, we’ve seen naturalism and normativity together in a number of epistemological projects. Goldman (1986), Harman (1986), and Bishop and Trout (2004) are all both paradigmatically normative and paradigmatically naturalistic. There are other examples too, and Neta’s paper is one of them.
See Harman (2000: 103-116, 137-150) for an overview of this value-theoretic terminology.
See e.g. Zagzebski (1996), DePaul (2001), Kvanvig (2005), and Alston (2005). The last of these writings is most widely known for its argument that we should stop theorizing about justification. But it should be better known for its positive replacement program, which turns out to be epistemic value theory. Of course, one need not stop theorizing about justification to do epistemic value theory; the two programs are quite compatible. I’m just pointing out that Alston’s recent work has the positive program as well as the negative one, and that it should get more attention for that fact.
For an introduction to the swamping problem literature see Pritchard (2007); for a collection of essays largely focusing on the swamping problem, see Haddock et al. (forthcoming).
Compare Neta’s paper to Book VI of the Nicomachean Ethics.