Our Bodily Bias

March 11, 2008 at 8:19 pm 2 comments

How much does a mind’s embodiment have to do with its recognition of other minds? The question might seem to come out of science fiction, as science fiction is full of examples of intelligence with a physical incarnation different from our own. Most of the time, the denizens of science fiction universes seem to have little trouble recognizing other intelligent entities as intelligent, regardless of their appearance. The bodies range from the nearly identical, as with the humanoid replicants in Blade Runner, to the gargantuan and fantastic, as with the vessel that houses HAL in 2001. In most cases, the illusion is believable. Give an object a voice, preferably one that seems to correlate with some movement, and the audience is easily led to think some mental process is behind it.
Not so when the audience leaves the theater and, with it, their suspension of disbelief. We deal with robots all the time. I’m interacting with one while I type this, when I use an ATM, or when I call the automated voice that tells me what numbers to press to pay my phone bill. At no point in my regular interaction with these devices am I convinced that what I’m dealing with has intelligence. There is no knockdown artificial entity in the world today that can convince us there is consciousness behind the voice and movement.

I’ve written about this before—it was the topic that introduced my philosophy of mind class a year and a half ago. My interest in embodied cognition came about because I wanted to investigate the question. Then, as now, I predicted that our recognition of any differently embodied mind (artificial or otherwise) would be clouded by bias. We want to look into eyes like our own and perceive, somehow, a spark that appeals to our intuition.

The paper I wrote focused on how an approach informed by theories of embodied cognition might help us understand the way our bodily bias affects our recognition of minds embodied differently from our own. It made heavy use of Nagel’s famous “What Is It Like to Be a Bat?” and Andy Clark’s work, as well as some sources from professors at UCSD’s embodied cognition lab. It was a solid final paper for my first philosophy of mind class.

This semester, I’ve been auditing a graduate class on embodied cognition at the University of Edinburgh. Today was the second-to-last seminar of the semester, and I couldn’t stop wondering about my original question afterwards. Near the end of this term, how much closer am I to answering it?

I think that I now have a better understanding of the question itself. The problem of other minds has many facets, but my question is most concerned with a particular one. We intuitively recognize cognition, to different degrees, in many places external to ourselves. Sometimes, this intuition is overanalyzed. This is the mistake Wittgenstein says wayward philosophy makes when it declares that animals do not talk because they do not think, instead of recognizing that “they simply do not talk” (Philosophical Investigations §25). It is easy to ascribe a measure of cognition to a dog, and the reality of other minds is presupposed by many of our interactions—at least those interactions with beings that have cognition in an intuitively obvious fashion.

That, of course, is the rub. Some dogs have big, watery eyes. They make sounds and assume postures similar to our own when we feel a certain way. It is fairly easy to simulate, in one’s own mind, what the dog might be feeling. What goes into this simulation? We can’t have the same phenomenal experience as a dog, after all. How much is our biology directly responsible for that instant, non-theory-laden simulation (if that is indeed what happens) of what it is like to be a dog? To what degree does our environment contribute? These sorts of questions are central to embodied cognition theorists, many of whom have their sights set on the higher question of what, exactly, constitutes cognition.

The course has thoroughly covered Jesse Prinz’s theory of emotions, body image and body schema, and, lately, the phenomenology of agency. These are all topics that can be informed by embodied cognition theory. The problem of other minds has appeared periodically in all of my classes this semester, and I get the feeling that philosophers are getting past the apparent truth of “they simply do not talk” to what happens when we attribute agency, or consciousness, to ourselves and others. Hopefully, once we understand how we recognize consciousness in beings embodied like ourselves, we will also be on the way to an understanding of consciousness that lets us see through our bodily bias.


Entry filed under: Embodied Cognition, Epistemology, Philosophy, Wittgenstein.


2 Comments

  • 1. ungtss  |  May 11, 2008 at 4:08 pm

    Interesting …

    For my part, I think that intelligence is defined by the capacity not only to respond to stimuli, but to respond to stimuli with respect to certain preexisting desires.

    Thus you don’t just see a hamburger; you see a hamburger with either preexisting hunger and desire to eat it, disgust because you don’t like hamburgers, lack of interest because you don’t want to die of heart disease, etc.

    And when we speak of “bodily bias,” I think what we’re really doing is tuning into the tell-tale signs of those pre-existing, apparently spontaneous desires. It’s easy to spot those signs in “bodies,” because we’re aware of how we respond when WE “want” something, “fear” something, etc.

    When a dog sees you pick his bowl up and go to the garage, he gives off every indication of excitement, anticipation, and hunger. He may dance, or run, or start to make noises. It’s those spontaneous, emotional reactions that make him appear intelligent to us.

    And nobody preprogrammed those desires and associations into the dog. He “learned” that “bowl=food” from experience. And we know that too.

    Computers and ATMs don’t show those reactions. It’s clear that when we press a button there is always the same response. We don’t have any experience with computers wanting things.

    Now IM would be a good test for bodily bias. If somebody IMs us out of the blue, do we automatically assume they’re unintelligent? No. Is it possible? Yes. But at least today, it’s easy to “see through” artificial attempts at conversation, because we can’t “program” those spontaneous desires and pre-existing reactions.

    We can’t make computers appear to “want” anything.

    For what it’s worth anyway.

  • 2. Bicameralism  |  June 19, 2008 at 11:31 am

    Somehow i missed the point. Probably lost in translation 🙂 Anyway … nice blog to visit.

    cheers, Bicameralism!!!


Everything on this blog should be taken as a draft, the spilling over of mental activity flung far and wide. The author is a graduate of Hampshire College in Amherst, MA who enjoys many things but devotes most of this space to matters academic.