Posts filed under ‘Ethics’
I was reading Ethics, morality, and legality of robotic wars over on Salman’s blog, and I left a comment over there, but I have more to say. I kind of feel like I might be expending too much typing energy on this topic when I could be writing another page of my Div III, but what are you gonna do…
The short of my comment over at that spot is this: I’m worried. Worried about what discussion of the new technology of war might affect, and what it might cost us if we don’t talk about it. I don’t want it to cost us discussion of the ethics of war in general. Aw, hell, I’ll just cut and paste the comment:
I don’t want people worrying about robots killing at the expense of worrying about people killing.
I don’t know how to anticipate how one will affect the other– Singer has some speculation on this, as in his discussion of the depersonalization or disconnection of both the warrior and adversary. But it seems to me that war has been getting less personal for its entire history. Trench warfare and chemical warfare in World War One, air warfare in World War Two, the atomic bomb, .50 caliber sniper rifles, Tomahawk missiles, etc. are all technological progressions in war that have led to depersonalization and disconnection. All Quiet on the Western Front was written not about the last few major wars, but about the one at the very beginning of the 20th century.
Maybe that should be the first issue to impact speculation on the effect of robotic warfare– that it doesn’t just change the context of war, but that the context of war has already changed. The situations in which drones are used are different from WWII, Vietnam, or even the first Gulf War. Increasing perceptions of disconnect and depersonalization have been developing throughout the last century (for another literary instantiation of this, I would recommend Anthony Swofford’s Jarhead). Maybe robotic warfare isn’t as revolutionary as its technological trappings would have us believe, and we should take the opportunity of the shock caused by the novelty of robotic weapons to re-open discussion about the ethics of war, period.
Now that I’m writing over here, in my own space, I’m going to say more. I get the feeling that in many discussions of technological development, discussions over things like Transhumanism and Uploading and whatnot, people become distanced from where a lot of the market and funding for the technological front lies. Since the second world war at the very least, much of that front has been taken up by the military.
Now, for anyone reading this from my hometown of Tucson, I doubt this comes as a surprise. A lot of people in Tucson are employed by either Raytheon or Davis-Monthan AFB, and the marriage of technology and the military is in everyone’s backyards. Out here on the East Coast, I feel like that sort of thing isn’t as prevalent. In any case, the idea that the technology of the glittering tomorrow may first be put to use killing other people is something that gets swept somewhat under the carpet. True, in books like Radical Evolution it is no secret that a lot of the technology under discussion is funded by or connected in some way with DARPA, but the discussion centers more on what on our side of war may be technologically improved, not on what may happen with technological improvements in war itself, warts and all.
What I’m saying is this: it’s attractive and easy to talk about the great things that technological improvements bring, even while acknowledging the military impetus behind technological development. It’s much more difficult to ask whether our path of technological development is progressing in the right way. This is more than an ethical question, I think, and it’s certainly more than a question about being comfortable with how the technology I use as a consumer is developed. It would be easy, again, to take a radical stance and align oneself with the Luddites of old in protest of the link between technology and death.
The difficult questions, I think, are these: Is there a way in which technology could make better progress without its relationship with the military? Should we, or how should we, look to divorce technology and war?
In “The Deep Ecology Movement: Some Philosophical Aspects” and other essays, Arne Naess wrote that an anthropocentric system of ethics is not a sound foundation for deep ecology. This was true for Naess even if such an anthropocentric ethic seemed to support the goals of the deep ecology movement. On this point, I think that Naess has it dead wrong. While all the possible foundations that Naess mentions provide a very intuitive basis for believing in the deep ecology platform, it is possible to have an anthropocentric ethic and still believe that the goals of deep ecology are important to pursue.
Right out of the gate, it seems like the deep ecology platform is opposed to an anthropocentric ethic. The first tenet of the platform, after all, refers to the intrinsic value of all human and non-human life. How can an anthropocentric ethic recognize the intrinsic value of non-human life? If recognizing the intrinsic value of non-humans means that we must equate their value with that of humans, then I’m afraid Naess has me. It would seem a contradiction in terms to think that an anthropocentric ethic could work in such a way and remain anthropocentric. If, on the other hand, we can recognize the intrinsic value of non-human life and then acknowledge that different beings have different value, and that the flourishing of a being means something different for each kind of being, then an anthropocentric ethic can work for deep ecology. I see no reason why this take on intrinsic value is incorrect.
Naess is concerned, however, that even this variety of anthropocentric thinking provides too shaky a foundation for the deep ecology platform. He writes in “The Deep Ecology Movement: Some Philosophical Aspects” that such a foundation is not effective enough at producing belief in the deep ecology movement. The deep ecological ethic “would surely be more effective if it were acted upon by people who believe in its validity, rather than its usefulness.” This brings to my mind Richard Rorty’s call for sentimental education as a background for ethics. Rorty identifies the difficulty human rights ethicists have in posing effective arguments to those racists or sexists who believe that those they persecute are less than human. He proposes an education that emphasizes empathy and sentimentality as a means of promoting human rights that bypasses the arguments and deaf ears. The difference between Rorty’s call and Naess’ is that Rorty is open about his advocacy of sentimentality on the basis of its usefulness and Naess is not.
By promoting certain kinds of foundations on the basis of their usefulness and then refusing to count a pragmatic ethic among them, Naess is being somewhat inconsistent. It’s fair to say that the usefulness of a foundation is not Naess’ only criterion for an adequate foundation for the deep ecology platform, but it should be acknowledged as one criterion among many. For Naess, however, acknowledging utility as a sound ethical criterion falls into the category of shallow (read: narrow-minded) ecology. I think that this is to the detriment of what should be the big tent of the deep ecology movement, especially as an anthropocentric ethic can include belief in the intrinsic value of non-humans and can be quite effective in motivating ethical action.
This is reposted from an email that I sent off to some friends in response to some questions about Arne Naess– so if you guys are reading this, feel free to respond here, too.
The reason why I think that deep ecology doesn’t quite qualify as a philosophy is that it ties philosophical positions to political action directly, and refuses to differentiate between the two. While this is part of what makes it interesting, it also makes it philosophically vulnerable. Because deep ecology was designed as a movement, it has weak philosophical foundations. When asked to defend his value of interconnectedness, for example, Naess falls back on Spinoza’s metaphysics. Since Spinoza’s metaphysics have a substantial supernatural component, I think they’re untenable. I also think that his ideas regarding substance are quasi-mystical at best, and nonsensical at worst. There are a lot of better ways to defend the idea that by endorsing the values of deep ecology, you’re also endorsing an idea that will help the progress of the human race in general. We don’t need enlightenment or 19th century philosophy to back us up on this point– 20th and 21st century philosophy can do the job just fine.
For example, consider evolutionary ethics. There are a couple people out there who are trying to blend work in evolutionary psychology on the nature of altruism with traditional systems of ethics. It’s important to note that this kind of work is mostly descriptive, and not prescriptive, so it’s not the strongest kind of ethics. What it does describe, however, is some basic reasons why ethical action is important to humans as a species. Beyond that, we can take cultural and pragmatic hints and flesh out the sort of ethics we think are important, and they will become important (kind of like hauling yourself up by your bootstraps) just because they are things we value. Our ethics will then become twofold– one part descriptive and very naturalistic, one part prescriptive and pragmatic. Knowledge of the first will help inform how we want to develop the second, until we can answer the question of how we should act.
With that sort of system, we don’t need to rely on a Spinozistic metaphysics or the other quasi-mystical principles that Naess is into in order to get to the goals that Naess wants. Since I agree with his goals, but not his foundation, this is just where I want to be. To return to the value of interconnectedness, let’s take a critical look at Naess’ foundation. He believes in the interconnectedness of beings because of a unity of substance in the world– since all beings are made out of one kind of substance, we’re all connected through our similar qualities. Some beings have a different, sort of divine substance which enables conscious action, and we as humans are also made up of this substance. Because of this, we can improve the overall quality of substance by maximizing the flourishing of all beings. Naess has a very special definition of flourishing that differs only slightly from the idea of utilitarian good, but since they’re mostly analogous to one another I won’t go into it here.
But we can defend the value of interconnectedness without all that talk about substances by taking a more naturalistic turn. First, we have some biological similarity with other beings. This similarity is closest with other primates, then with other mammals, and spreads out from there. We are also increasingly concerned with sustainable development, partially because we’re starting to realize (as a political whole, hopefully) that our lifestyles depend upon better stewardship of the resources we use to maintain them. As our interests are similar to the interests of some other creatures on the planet, and also tied up with the interests of other non-human beings, it makes sense pragmatically to place more value on how our goods are tied up with the goods of non-humans. If we want better lifestyles for increasing numbers of people, it seems like this is a value that will help us achieve that goal. All of that teleological ethical thinking is valid, and it doesn’t rely on Naess’ more spaced-out thinking. That is where we should all want to be.
I posted yesterday about a code of ethics created for scientists by Sir David King. My review of the code was pretty negative.
I decided to do some more homework on Sir King’s code today. As part of this (and I’m kicking myself for publishing before having a more thorough understanding of the topic), I discovered this Letter from the UK Council for Science and Technology. The letter is followed by a draft of the code. While longer than the version reported by the BBC here, it remains a document about which I am pessimistic.
More interesting than the draft of the code is the letter which the council released prior to the code’s circulation. Both were published in May of 2005 (I was still in high school, so I think it’s forgivable that I missed the announcement). Little seems to have changed between this draft and the published version, but it appears that the code went up for a six-month period of something resembling peer review. Institutions were asked for their views on the usefulness of a universal code of ethics for the scientific community.
I wonder if I can dig up the responses (presuming there were any) to the “peer review”? It would be interesting to be able to look inside the guts of a policy paper written by the scientific community in a way that resembles normal scientific discourse. Hopefully I’ll be able to do that soon, in part 3.
Sir David King does, apparently. In this BBC article, Sir King (Chief Scientific Adviser to the UK government) outlines a code of ethics for scientists. The code is constructed out of the following seven points.
Act with skill and care, keep skills up to date
Prevent corrupt practice and declare conflicts of interest
Respect and acknowledge the work of other scientists
Ensure that research is justified and lawful
Minimise impacts on people, animals and the environment
Discuss issues science raises for society
Do not mislead; present evidence honestly
Certainly sounds like a good list. It’s full of common sense, and several points seem to already be well incorporated into how the scientific community operates. “Respect and acknowledge the work of other scientists”, for example, is already built into the concept of citation and peer review. Both concepts are included for practical reasons, so tacking them on as ethical considerations seems like an easy thing to do. Likewise, there are very practical reasons for conducting lawful research (it’s hard to continue a career following a felony conviction) and keeping skills up to date.
Unfortunately (as has been pointed out at Adventures in Ethics and Science), these ethical guidelines probably won’t have much effect. A universal code of ethics for scientists is a good idea, but one developed in the terms of policy talking points sounds like one doomed to have little practical impact.
Let’s look at King’s example, as quoted in the article:
“Place yourself in the position of a scientist who works for a tobacco company, and the company asks you to counter evidence about the health impacts of tobacco.
“That scientist would be able to look at the code and say, ‘I can’t do that’.”
I’m fairly confident that, as it stands, the scientists employed by tobacco companies can already say that they follow a code of ethics with a straight face. It’s probably included in Philip Morris’s mission statement. In fact, let’s take a look at that mission statement:
Our mission is to be the most responsible, effective and respected developer, manufacturer and marketer of consumer products, especially products intended for adults. Our core business is manufacturing and marketing the best quality tobacco products to adults who use them.
Well, damn. It sounds like the tobacco companies already have a code of ethics. Their mission is to be “responsible” and “effective”, and to manufacture and market tobacco products only “to adults who use them.” Why would a company like that ask a scientist to blatantly violate a scientific code of ethics like Sir King’s? And why would a scientist who swore up and down to Sir King’s code of ethics agree to do such a thing, if asked?
Probably because, with a certain amount of talking, they could find a justification for doing so. What if (to borrow a banner waved with fervor by the ID movement) the scientist decided to do research on how tobacco might not have some of the harmful effects ascribed to it, in the name of intellectual freedom? The public and the scientific community say one thing, but they might be wrong! They might be doing the wrong tests! How sure are people, anyway, about those statistics linking increased probability of lung cancer to cigarette addiction? It’s possible, surely, that all these studies have been conducted in a manner unfair to the tobacco industry. On that basis, wouldn’t Philip Morris say that they have a moral mandate to conduct new research? And conduct it until they got the results they wanted?
Seven bullet points does not a compelling code of ethics make. People do not perform unethical research merely because nobody has yet come along and outlined a code of ethics for them. People perform unethical research before, during, and after reading ethical theory with far more universal, convincing, and thorough arguments than “science would be better if someone proposed a universal code of ethics.” And I do not doubt that people with a copy of Sir King’s code on their wall will, before long, do something that violates the spirit of the text.
What, then, would give it some teeth? A better argument for why scientists should follow this particular code (or any code) would be a good start. Unfortunately, arguments like that are difficult to create and generate far less publicity than an announcement from the UK’s Chief Scientific Adviser.
I think the question is only valid in relation to educational settings in which a grade is given. For schools which use written evaluations (Waldorf schools, New School in Florida, Hampshire College in Mass., Evergreen in Washington, and (formerly) UC Santa Cruz, among others) in place of grades, there can be no problem regarding extra credit. This is so because there is no out-and-out point system of evaluation. Students can be motivated (and encouraged) to do extra work or an extended number of paper revisions as a way of extending their own learning and then exhibiting that learning to the professor. Students who do not do such extra work miss out on those additional benefits, but they are just that: additional.
I suppose the implication of this for more traditional grading systems is this: if the choice to do extra work is ultimately left to the student, free of pressure or constraint (i.e., “I have to do this extra paper so I can bump the C on the exam up to a B”), then extra work (and accompanying credit) is beneficial. Unfortunately, that lack of pressure seems awfully unrealistic within the trappings of a more conventional setting. Viva alternative education!