
Techné: Research in Philosophy and Technology

Editor-in-Chief: Joseph C. Pitt, Virginia Tech, Vol. 11, no. 1 (Fall 2007). Previous editors: Paul Durbin 1995/97; Peter Tijmes 1997/99; Davis Baird 2000/07

Volume 7, Number 3
Spring 2004


Ethical Colonialism

Joseph C. Pitt
Virginia Tech

The issue of finding an appropriate ethical system for this technological culture is an important one. The book that Jozef Keulartz, Michiel Korthals, Maartje Schermer and Tsjalling Swierstra have put together provides an excellent discussion of the need for a new vocabulary for ethics vis-à-vis technology, and they make a compelling case for pragmatism. That said, I have some concerns. I raise them in the spirit of continuing the exploration of the pragmatic ethics they have begun, not merely to be contentious. For some years I have been disaffected with the status of work in ethics—the little that I come in contact with—and I find the discussion in Pragmatist Ethics for a Technological Culture not merely promising but invigorating. My two concerns are: (1) what I perceive to be a move towards ethical colonialism, and (2) the authors' conception of pragmatism.

At a conference held at Virginia Tech on the metaethics of moral value in March 2003, Sara Williams Holtzman gave a fascinating talk in which she wanted to accord mountains moral status. In the discussion that followed, I proposed that she was engaged in ethical imperialism. After reading Pragmatist Ethics for a Technological Culture, alas, I feel compelled to accuse Keulartz, Korthals, Schermer and Swierstra of being unknowing allies of Holtzman and equally committed to ethical imperialism, or, more to the point, ethical colonialism.1

What do I mean by ethical colonialism? It is the attempt to treat everything in the world as an actor endowed with moral value. It is to deny that there are other types of values, such as epistemic values, which have their own integrity and can operate in an ethically neutral framework. Consider this claim from the opening essay: "The example of the pill makes it clear that technological artifacts possess a written-in or built-in normativity. They embody particular options and restrictions, and thus reinforce or alter existing role divisions and power relationships" (p. 9).

There is a certain seductive allure to this Latourian idea that material things have a kind of agency. Because artifacts exist in our field of action, we are forced to accommodate them, and that, it is mistakenly claimed, gives them agency. I, however, find it somewhat problematic to attribute agency to a tree simply because I have to walk around it.

The idea that technological artifacts possess normativity follows the same line of thought as Latourian material agency. That objects are to be used in a certain way, or come to be used in a certain way, however, does not mean they possess normativity. Nevertheless, the authors in their Introduction note that, "Technological artifacts carry a script or scenario within them; they require particular role patterns and role divisions and lay down a specific 'geography of responsibilities'" (p. xvii). Now, why is that the case?

The example they use to cash this idea out is the birth control pill—they point to the fact that it has given women control over their bodies and over when they will bear children, altering the power relation between men and women. They also point out that the pill has facilitated the separation of reproduction from sexual activity, and that its use has altered our perception of family planning. All of that is to the good—no objections from me here—but they neglect to mention that the pill was developed by a Catholic researcher for the express purpose of regularizing the menstrual cycles of women who were having difficulty becoming pregnant because of irregular cycles. The point was to have more children, not fewer. If the pill embodies normativity, it is the norms of a tradition in a male-dominated society. Surely that is not the desired outcome of the example.

Let me try an alternative account, one that is not in conflict with the desired end of a pragmatic ethic, but one that places normativity in people, not in things. I do not object to the authors' claim that technological artifacts have normative significance. But that normative significance is a direct function of how people choose to view them and use them. It is the use to which artifacts are put that exhibits the normativity of the users, not of the things. And this is a very pragmatic point of view. As I argued in Thinking About Technology (2000), in discussions of technologies the emphasis should be on the decision-makers, not on the objects. Whatever normativity there is with respect to the designing, manufacturing and use of artifacts is to be found in the values, both epistemic and non-epistemic, involved in making the decisions to do this rather than that, to use this material rather than that, and to use this tool rather than that. Decision-making is a value-dominated activity. With respect to technological artifacts, whenever they are employed it is because a decision has been made to do so. Making decisions is an inherently value-laden activity since it always involves making a choice. Understanding the normative dimension of technological artifacts requires an analysis of the factors that played into the relevant decisions. If you want to understand the normative dimension of the birth control pill, ask a woman why she chose to use it. Ask the manufacturer why those materials rather than others—or why this shape for the dispenser rather than another. If I may be so bold, there is, therefore, not one normative dimension to technological artifacts but many—and that in part is why there is so much discussion over the technologies that promise the most. The more a technology promises, the more choices have to be made.

The insight that our authors are at risk of losing is that new technological artifacts open up possibilities for human action—which is what I think they mean by the "geography of responsibilities." The responsibilities, however, are not in the object; they are ours. For example, it is our collective responsibility to come up with a protocol or two regarding human cloning. It makes no sense to say the responsibility lies…where? In the process? But what is the process other than what people do? Somewhat in jest, I propose to my students that now that human cloning is possible, not only will someone do it, but men are now facing extinction. Talk about changing the power relations!—if the only thing men are needed for is reproduction, then we are no longer needed. Given recent events, peace might have a better chance if men did not control the decision-making process in our government. Women control the economy—they are smarter than we are and they live longer—we are done for.

To return to the issue at hand—the normativity of technological artifacts—it seems to me that by placing the normativity in the people making the decisions rather than in the technologies, we make the possibility of the authors' end-state, creative democracy, more attainable. Why do I think this? As I understand the authors' argument, it goes like this: (1) life in a technological world is dominated by change—therefore there can be no universal principles (I agree); (2) new technologies make for new possibilities for action (right again, especially since possibilities for action raise the specter of new moral problems); (3) the best way to decide what to do with new possibilities and new moral problems is to have as many people as possible in the discussion (idealistic and probably not really a good idea because not everyone has something contributory to say on every issue); (4) new possibilities and new solutions require a new way of speaking—therefore we need creativity, and the best way to get that is to have everyone at the table (not clear).

As I see it, what makes change problematic is that change, especially technological change, threatens a person's perception of the good life and in so doing challenges his or her values. No one likes to have his or her values challenged. Our values are at the core of who we are—not all our values, but the basic ones, like, for example, protection for our children. If a proposed technological change, like locating a nuclear power plant next to my house, is perceived by me as a potential threat to my children, then you are right to expect me to object. And because our values are rarely arrived at by rational argument or rational decision, the possibility of rational discourse when values clash is very low unless something intervenes. Therefore, just bringing people to the table is not enough—it is not enough to get them to listen to one another, and it is not enough to generate the kind of creativity needed for making decisions about the new possibilities and the nature of the new moral problems these possibilities bring. Several things are at issue here. The first concerns getting people to engage. The second concerns the meaning of "creativity".

Getting people to listen to someone with a different point of view is a rock-bottom problem; it permeates all peoples and societies. It is the same problem as understanding the possibilities a new technology offers—the problem of overcoming our fear of the unknown. Often people refuse to engage in discussion with someone who holds radically different views from theirs because they are afraid of the challenge the new ideas may pose to their own views, views for which they know they have no defense. That is, they are afraid of what they don't know or of what they fear they may lose. Likewise, it is not so much what people know about a technological innovation that makes them object to it—it is rather what they don't know that makes it so difficult to have a reasoned discussion. So I agree, something like creativity is called for—but I think we can be a bit more precise. If I am right and the problem is fear of the unknown, then what we need to do is to eliminate the fear by making the unknown more familiar, or, rather, to make the technology appear more familiar, so that what it can do is not so threatening. I think the way to do this, to reduce the fear of the unknown, is to use metaphor. But to see why this may work, we need to get a better handle on what we mean by "creativity".

Elsewhere I offer the following account of creativity: To be creative is to produce variation given the constraints of the materials and other parameters within which you engage in the deliberate design and manufacture of the means to manipulate the environment to meet humanity's changing needs and goals (Pitt forthcoming).

The problem in seeking creative solutions to ethical problems posed by technological innovation is in not knowing where to start. My suggestion is to start with the way we talk about our technologies and their possible ramifications. If we can come to a common language in which to discuss the problems, we may have a chance of actually finding solutions. But, unlike the positivists, I am not proposing that we develop a formal language from the start. Rather, I think we should circle the problems, trying out different ways of talking about them until we find one that satisfies all parties. The way to begin this process is through metaphor.

Metaphor, by its very nature, gives meaning to the new by associating it with something already understood. Irrespective of what Al Gore had to do with it, calling the world-wide web the information superhighway was very helpful to many people in coming to grips with the potential of this new technology. It also helped to open up some of the ethical issues, like privacy. Calling it an "information" highway raises the red flag that should come up whenever issues of information are discussed. Further, because it is not just a case of potential eavesdropping, the manner in which this phrase serves as a metaphor becomes clearer. By using metaphor, however, we are not doing what our authors do not want us to do, which is to live in the past. The fact that we can rely on what is already understood does not entail that we stop there. Metaphor extends the use of language; it changes the meaning of words—words that had a familiar meaning now mean even more. The material world is not the only thing that changes constantly—so does language—just try understanding a 16-year-old today. But because it looks like the old language, we are often not aware of the extent to which language changes—unless you are French and keep a careful watch over the purity of the mother tongue. But how do you say website in French?

The way to find the right metaphor or metaphors is by applying the pragmatic method. The pragmatist's first maxim is "consider the consequences". If we keep in mind the consequences of using this metaphor rather than that, we can work our way toward a metaphor that captures what concerns us in terms of the consequences of allowing this new technology. For ethics, this does not reduce to mere consequentialism. The consequences the pragmatist considers are not just the effects of his or her actions; they are also the effects of those actions on his or her knowledge base and on his or her values and goals. Translated, this says that in evaluating the choice of A over B or C over D, there is more to consider than merely the physical consequences; there are also consequences for your vision of the good life, for the rock-bottom values that constitute what you hold most dear, what you hope all people value. If you lie, what does that do to your self-image? If you use a gun to kill deer, what does that say about your conception of a civilized human being? If you use 5,000-pound bombs and withering machine gun fire to kill enemy soldiers and civilians alike for no clear reason, what does that say about the character of a country's leaders? What a pragmatist does is consider the consequences and then, using a feedback loop, return to his or her assumptions, values, knowledge and beliefs, and readjust in the wake of what has transpired. Yes, it has a bit of relativism associated with it—but the second maxim of pragmatism helps to derail the slide to total relativism.

The second maxim is: the ultimate arbiter is the community. However you adjust your values, there are still the values of the community, which override and with which the values of an individual must co-exist; these must be considered as well. But what happens when the community fails, when the election process is subverted, when the leadership does not listen to its people? I don't know, but that is a problem for all, not exclusively for pragmatism.

So in closing, let me summarize:

Technological artifacts do not contain values or normativity.
Democracy will not solve all our problems because you have to get people to listen to others—not just talk at them.
Using metaphor to demystify the new may help in getting people to actually communicate so they can talk reasonably about new technologies and old.
Nevertheless, the two basic pragmatic maxims can serve as a basis for an evolving ethical system.
  1. Consider the consequences.
  2. The community is the ultimate arbiter.

References

Keulartz, J., Korthals, M., Schermer, M., & Swierstra, T. Pragmatist Ethics for a Technological Culture. Dordrecht: Kluwer Academic Publishers, 2003.

Pitt, J. "Successful Design in Engineering and Architecture: A Plea for Standards." In Creativity in Engineering, Mathematics and the Arts, edited by Hans-Joachim Braun. Forthcoming.

________. Thinking About Technology. New York: Seven Bridges, 2000.


_____________________________________

1 Andrew Garnar suggested the shift from "imperialism" to "colonialism". He is correct that the idea I am after is the subjugation of new lands and the imposition of a new set of values in place of indigenous ones.

