Posts filed under ‘Epistemology’
One of my first posts on this blog was about essentialism. For some reason, Google has since picked up the keywords in the article, and it is a popular result for people searching for “essentialism,” “philosophy,” and “biology.” This has mystified me, because it simply is not a very informative post. If people are looking for a reference on any of those subjects, they would do better to look elsewhere.
For a long time, I have been vowing to write a better and more original post on essentialism in philosophy. This is that post, but it is still not a reference on philosophy, essentialism, or biology. For more information on essentialism, I would encourage everyone to check out John Wilkins’ brief discussion of the subject over at TalkOrigins.
This post concerns essentialism not in biology, but in philosophy. And I do not mean Platonic or Aristotelian essentialism; although the roots of essentialism in biology lie in part with the ancient Greeks, I think few philosophers today would say they consider truth in the same way as Plato or Aristotle. I want to discuss essentialism as the term came to be used in biology in the 1930s and ’40s, and in philosophy since philosophers turned their interests to modern biology. Ernst Mayr contrasted essentialist or typological thinking with population thinking in biology, as I mentioned in my previous post, but in discussing essentialism I think philosophers have failed to do the same.
What do I mean? Well, essentialism is primarily about identity, truth, and our capacity to recognize truth in a world in transition. Essentialism concerns knowledge of the characteristics that identify something, whether that thing is a species, a concept, or a person. In contrast to this, Mayr placed population thinking, whereby no single organism is a perfect model of a species, because that is not what a species is. A species consists of a population of organisms with varying traits. This makes life difficult, but not as difficult as looking for the perfect gazelle and accordingly classifying all other gazelles as flawed, but close enough.
Of course, philosophy doesn’t usually work in terms of gazelles or other organisms. Philosophy deals in concepts, explanations, and other such abstractions. Because of this, I think that philosophers have traditionally focused more on essentialism than population thinking. Philosophers of biology bring both up at the same time, as I have, but when essentialism is discussed in other philosophical contexts, population thinking is not mentioned. As in my original post on essentialism, philosophers in more traditional fields like epistemology and metaphysics discuss essentialism as something to be avoided: a pitfall, perhaps, or a fallacy. Sometimes it seems as if “essentialism” is simply equated with “oversimplification.”
Population thinking should be involved when essentialism is discussed, however, because it is an alternative to essentialist/typological thinking. How, then, should population thinking figure into the minds of, say, epistemologists? On the one hand, there is what the concept of population thinking actually consists of, which is interesting for sure. On the other, perhaps population thinking has something to say about how we should go about seeking knowledge. In most accounts of knowledge, truth is the goal, the primary ingredient, always an essential part. But truth itself resists explanation; it is what is accurate to the world, intuitive, plausible, and correct. The other bits of knowledge should be truth-seeking or conducive to finding the truth. And, unlike the populations of organisms that make up a species, there can be only one truth. There are arguments for truth pluralism, to be sure, but I can’t say they convince me. A problem for epistemologists and humans everywhere is that finding the truth, singular and perfect, is extremely difficult.
My proposal, then, is this: we should certainly not stop looking for truth, but perhaps we need some waystations before we arrive at it. Population thinking may be able to help us find these waystations. As has been pointed out and rehashed by many philosophers, relying on natural selection to find the truth may not be the best way to go about the search, because fitness ensures survival, not necessarily knowledge. Natural selection alone has not shaped humans into the ultimate truth-seekers. But, to paraphrase one of Karl Popper’s famous metaphors, perhaps humanity is lost on an endless, darkened plain. About this plain are scattered lanterns, all with different ranges of illumination. We can only pick up one at a time, and it is difficult (although not impossible) to tell whether one we come upon provides more light than the one we hold. So we go about picking up lanterns, and sometimes, after a short or long distance, we have to go back to one we dropped along the way.
So we are lost in a population of lanterns. But perhaps some may be judged, on sight, as better in some ways than others. We might not be able to judge for truth (as I am not sure we know truth when we see it), but perhaps we can specify the fitness criteria of the better lanterns before we set ours down and pick a new one up. Perhaps, given the range of theories, scientific and philosophical, it would be best to leave aside truth for a while and set our energies towards defining new optima and examining our populations of theories for those.
I’ve been reading a lot of Hempel lately, in addition to all the articles, essays, studies, and polls on evolution and creationism. Because of this, I have more than the usual blend of 20th century philosophy of science and epistemology brewing in my brain. Anila Asghar and Brian Alters’ study (forthcoming) introduces an interesting situation for probability theory and Bayesianism, on one of its most controversial topics: that of prior probabilities.
In the study, one of the differences between North American Muslim science teachers and their Pakistani counterparts was that of the separation or blend of science and religion in the classroom. Such separation often seems to extend out of the classroom and into personal beliefs. North American Muslim science teachers favor the separation of science and religion, and one interviewee in particular espouses something similar to Stephen Jay Gould’s idea of Non-Overlapping Magisteria. Pakistani science teachers, on the other hand, see no need for such separation. The textbook chapters on evolution begin with Qur’anic verses, and the provincial standards of education outline the need to teach science within the context of the Qur’an.
To tie this into contemporary philosophy of science and probability theory, consider the problem of prior probabilities. A brief outline of the problem is this: before we are faced with evidence for or against a theory, we all have our own prior degree of belief in the facticity of the theory undergoing a test. Because these are subjective probabilities, they are difficult to describe. When faced with the results of a test, it’s difficult to see how someone is to alter their subjective probability from the prior probability to the updated, post-test probability. Bayesians have done a lot of footwork around this problem, but not much of it is convincingly conclusive. I’m not convinced of the entire Bayesian program of describing/reconstructing rationality as a series of probability calculations, but those are thoughts for another time.
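To make the update step concrete, here is a minimal sketch of Bayesian conditioning in Python. All of the numbers (the two priors and the shared likelihoods) are illustrative assumptions of mine, not figures from the study; the point is only the mechanics of moving from a prior to a posterior.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior P(H) and the likelihoods of evidence E."""
    numerator = prior * p_e_given_h
    # Total probability of the evidence under H and not-H combined.
    evidence = numerator + (1 - prior) * p_e_given_not_h
    return numerator / evidence

# Two evaluators assess the same theory H against the same test result E.
# The likelihoods P(E|H) and P(E|~H) are shared; only the priors differ.
p_e_given_h, p_e_given_not_h = 0.8, 0.3

posterior_a = bayes_update(0.7, p_e_given_h, p_e_given_not_h)  # high prior
posterior_b = bayes_update(0.2, p_e_given_h, p_e_given_not_h)  # low prior

print(round(posterior_a, 2), round(posterior_b, 2))  # 0.86 0.4
```

Identical evidence moves both evaluators toward the theory, but to quite different posteriors; the gap between them narrows without vanishing in a single update, which is one precise sense in which the priors matter.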
For now, I’m interested in how the attitudes of the teachers in the study correspond to the idea of prior probabilities. If North American Muslim science teachers really believe in the separation of science and religion, then their prior probability for a given scientific theory should be free of influence from their religious beliefs. But Pakistani science teachers believe in a blend of science and religious explanation, so their priors for a given scientific theory should be influenced by their religious beliefs. If true, this is an important situation that Bayesians need to consider. The same theory is being tested, but there is an explicit religious influence in the prior probabilities of one evaluator and not in another. This is so even though there is great demographic similarity between the evaluators—both are Muslim science teachers of Pakistani descent. The main difference, in many ways, comes down to where they teach. That this should have such an impact on prior probabilities by way of the introduction of a large category of thought in one situation and its absence in another is a large effect for a relatively small difference (or at least a small difference in terms of Bayesian calculations).
I’m not sure how much work Bayesians have done regarding the effects of culture on prior probabilities, but the teachers described by Asghar and Alters make a strong case for its study. It shouldn’t be news that non-scientific beliefs have a different impact on scientific beliefs depending on cultural context. When reconstructing rationality as a probabilistic process, though, I think this important idea has been largely overlooked. Both Bayesians and their critics should make sure to include this cultural dimension in their probability calculus, on pain of obscurity and irrelevance.
How much does a mind’s embodiment have to do with its recognition of other minds? The question might seem to come out of science fiction, as science fiction is full of examples of intelligence that has a different physical incarnation from our own. Most of the time, the denizens of science fiction universes seem to have little trouble recognizing other intelligent entities as intelligent, regardless of their appearance. The bodies range from the nearly identical, in the case of the humanoid replicants in Blade Runner, to the gargantuan and fantastic, as in the case of the vessel that houses HAL in 2001. In most cases, the illusion is believable. Give an object a voice, preferably one that seems to correlate with some movement, and the audience is easily led to think some mental process is behind it.
Not so when the audience leaves the theater and, with it, their suspension of disbelief. We deal with robots all the time. I’m interacting with one as I type this, and with others when I use an ATM or call the automated voice that tells me which numbers to press when I want to pay my phone bill. At no point in my regular interaction with these devices am I convinced that what I’m dealing with has intelligence. No artificial entity in the world today can convince us that there is consciousness behind the voice and movement.
I’ve written about this before—it was the topic that introduced my philosophy of mind class a year and a half ago. My interest in embodied cognition came about because I wanted to investigate the question. I predicted then, as I do now, that our recognition of any differently embodied mind (artificial or otherwise) would be clouded by bias. We want to look into eyes like our own and perceive, somehow, a spark that appeals to our intuition.
The paper I wrote focused on what an approach enlightened by theories of embodied cognition might do to help us understand how our bodily bias might affect our recognition of minds with embodiment different from our own. It made heavy use of Nagel’s famous “What Is It Like to Be a Bat?” and Andy Clark’s work, as well as some sources from professors at UCSD’s embodied cognition lab. It was a solid final paper for my first philosophy of mind class.
This semester, I’ve been auditing a graduate class on embodied cognition at the University of Edinburgh. Today was the second to last seminar of the semester, and I couldn’t stop wondering about my original question afterwards. Near the end of this term, how much closer am I to answering that question?
I think that I now have a better understanding of the question itself. The problem of other minds has many facets, but my question is most concerned with one facet in particular. We intuitively recognize cognition, to different degrees, in many places external to ourselves. Sometimes, this intuition is overanalyzed. This is the mistake Wittgenstein says wayward philosophy makes when it declares that animals do not talk because they do not think, instead of considering that “they simply do not talk” (Philosophical Investigations §25). It is easy to ascribe a measure of cognition to a dog, and the reality of other minds is presupposed by many of our interactions, at least those interactions with things that have cognition in an intuitively obvious fashion.
That, of course, is the rub. Some dogs have big, watery eyes. They make sounds and assume postures similar to our own when we feel a certain way. It is fairly easy to simulate, in one’s own mind, what the dog might be feeling. What goes into this simulation in our minds? We can’t have the same phenomenal experience as a dog, after all. How much is our biology directly responsible for that instant, non-theory-laden simulation (if that is indeed what happens) of what it is like to be a dog? To what degree does our environment contribute? These sorts of questions are central to embodied cognition theorists, many of whom have their sights set on the higher question of what, exactly, constitutes cognition.
The course has thoroughly covered Jesse Prinz’ theory of emotions, body image, body schema, and lately the phenomenology of agency. These are all topics that can be informed by embodied cognition theory. The problem of other minds has appeared periodically in all of my classes this semester, and I get the feeling that philosophers are getting past the apparent truth of “they simply do not talk” to what happens when we attribute agency, or consciousness, to ourselves and others. Hopefully, when we understand how we recognize consciousness in those beings that have embodiment similar to our own we will also be on the way to an understanding of consciousness that allows us to see through our bodily bias.
By now I’ve finished the second chapter of Evidence and Inquiry, and I think I’ve gathered some material for a foundherentist defense of Bayesianism. One of Haack’s prime objections to Lewis’ formulation of foundationalism is that he demands certainty in order for a belief to be justified. Repeatedly, however, Haack declares that beliefs need only be justified to “some degree.”
“A’s belief that p cannot be justified to any degree, non-relatively, unless, eventually, the chain ends with a belief or beliefs which is or are justified to some degree independently of further beliefs. But it is not required that the basic belief or beliefs eventually reached be completely justified independently of any further beliefs.” (Haack, 43)
This is the primary reason why foundherentism beats out foundationalism as a coherent epistemological system. While I haven’t yet investigated Haack’s other writings for the same Bayesian thread, this doesn’t seem like a bad line of inquiry to pursue. Also, Haack is so precise in her definition of terms (and so critical of others’ lack of precision, as in her somewhat sarcastic critique of Lewis’ use of terms on p. 38) that I doubt it’s a coincidence that she phrases her argument with “to some degree.”
If anyone needed evidence that I try to make as many connections as possible between different readings, this blog would be a smoking gun. Today’s philosophy mash-up comes from the beginning of the second chapter of Susan Haack’s Evidence and Inquiry: Towards Reconstruction in Epistemology. As usual, I need to read more of the book, and more of Haack’s other writings, before I can fully flesh out my idea. At this point, however, it seems to me that Haack’s foundherentism may be very sympathetic to Bayesian theories of confirmation.
I wrote a paper this past semester on Clark Glymour’s take on Bayesianism, and was generally sympathetic to his criticism. Bayesianism seems altogether too subjective a system for something that is supposed to establish whether evidence confirms an idea (or whether one is justified in believing an idea on the basis of evidence). My views on Bayesianism will probably be fleshed out further on this blog at a later time, like most of the things I post about.
While I don’t like Bayesian ideas, I do think that foundherentism has something to it. It will be interesting to see 1) whether the correlation between foundherentism and Bayesianism has more to it and 2) whether I look more or less favorably on Bayesian ideas after reading Haack.