

Volume 1, Numbers 3-4
Spring 1996

TECHNOLOGICAL FIXES FOR MORAL DILEMMAS

Ted Lockhart, Michigan Technological University


Critics of technology sometimes point out that new or prospective technologies raise difficult moral questions, and they insinuate that this is a reason not to develop or introduce those technologies in the first place. Biological or medical technologies like genetic engineering or artificial organs often receive criticism of this sort.1 At best, the critics' argument is incomplete, for they need to explain why difficult moral issues should be averted. Do we have a moral obligation to prevent morally problematic situations? If so, what is its basis? And if the reasons for rejecting morally troublesome technologies are not moral reasons, then what kind of reasons are they and why are they important?

However, as interesting as these questions are, I wish to raise a more important question, viz.: if the fact that a technology would create difficult moral decisions is reason enough to discourage the development of that technology, then does it not follow that the fact that a technology would enable us to avert difficult moral decisions is good reason to encourage its development? Although neither critics nor defenders of technology have raised this question, it seems an appropriate one to ask. And if the answer is yes, it suggests a new way in which technologies might be defended on moral grounds.

In this essay, I shall argue that some technologies do enable us to avert morally problematic situations. I shall do so by citing examples of both past and present technologies that have this characteristic. Then I shall consider why the matter of creating and avoiding moral dilemmas is important and how we should weigh this in our decisions about which technologies to develop and employ. Finally, I shall discuss a controversial technology that is on the horizon, viz., a biomedical technology that would allow us to slow down or reverse the aging process in humans. I shall argue that when we consider whether a life-span-extending technology would avert moral dilemmas, we shall find good reason to favor the development of that technology.

TECHNOLOGIES THAT AVERT MORAL DILEMMAS

Some of the best examples of technologies that avert moral dilemmas2 are biomedical technologies. One is genetic screening, which prevents moral dilemmas associated with aborting fetuses with serious genetic diseases, like Tay-Sachs disease and Lesch-Nyhan syndrome. It does so by warning potential parents who are carriers of those diseases so that they may use contraception. Another dilemma-averting technology is general anesthesia. Surgery before the advent of anesthesia is vividly depicted in the following 19th-century account of the repair of a dislocated hip:

Big drops of perspiration, started by the excess of agony, bestrew the patient's forehead, sharp screams burst from him in peal after peal; all his struggles to free himself and escape the horrid torture, are valueless, for he is in the powerful hands of men then as inexorable as death. . . . At last the agony becomes too great for human endurance, and with a wild, despairing yell, the sufferer relapses into unconsciousness.3

As Martin Pernick notes in his essay, "The Calculus of Suffering in 19th-Century Surgery," "For many early-19th-century surgical students, learning to inflict pain . . . constituted the single hardest part of their professional training."4 The development of general anesthesia saved physicians from having to decide whether the torture of surgery outweighed its benefits for their patients.

Other examples of moral-dilemma-averting biomedical technologies abound. Polio vaccination has eliminated the epidemics of polio that terrorized populations only a few decades ago. It enabled physicians to bypass agonizing decisions about putting polio victims in iron lungs from which they might never escape.5 And a whole class of dilemma-preventing technologies comprises those that enable physicians to detect and treat serious illness in its early stages, before people's lives are seriously imperiled. These technologies include biopsies and radiography to detect cancer while it is still operable or treatable.

However, biomedical technologies are not the only technologies that forestall moral dilemmas. The invention of the safety lamp in the early 1800s dramatically lowered the incidence of mine explosions and thus prevented painful decisions about when to terminate efforts to rescue trapped miners.6 The invention of the chronometer allowed ships to determine their longitude in the open ocean and thus prevented shipwrecks of the kind that lost bearings had previously caused. It is likely that this prevented some "sinking lifeboat" dilemmas that would otherwise have occurred.7 And the development of the air bag for automobiles is today preventing serious injuries in automobile accidents and thus also averting the ensuing dilemmas about terminating life support for the victims of those accidents.

Critics of technology will perhaps note that in all of these examples the moral dilemmas that the technologies prevent are themselves the products of technology. For example, the dilemmas averted by genetic screening would not occur in the first place if prenatal diagnosis of genetic disorders had not been developed. These critics may wish to argue that technology at best only fixes problems that it causes.8 However, even if this were true, it would not mean that technology never produces a net gain, morally speaking. There are other considerations that may apply. A particular technology may, after all, produce great benefits for those who use it, which may make its development and use morally justifiable. Furthermore, even without technology, it may be impossible for us to avoid all moral dilemmas. Therefore, we should not jump too quickly to the conclusion that every preventable moral dilemma should be prevented. Even if we could prevent all moral dilemmas by abolishing all technologies, it is extremely doubtful that such a course of action would be morally justified. What, then, should we say about moral dilemmas and about which of them we should seek to prevent? This is the question that we now need to address.

SHOULD WE AVERT MORAL DILEMMAS?

What we decide on a particular occasion often determines to a large extent what kinds of decisions we shall make in the future. For example, one's decision to become a surgeon means that she will make future decisions that are very different from the decisions she would make if she became a librarian. This is true of moral decisions in particular. What one chooses to do on a particular occasion can, and often does, affect the number and kinds of moral problems that she will encounter in the future.

What policy should we adopt toward future moral dilemmas? Should we strive to avert moral dilemmas, or minimize the number of them that we must resolve? Is there a moral justification for such a policy? And what if it is necessary to commit a morally wrong action in order to prevent future moral dilemmas? Can this ever happen? And if so, how can there be moral justification for doing what is morally wrong?

Some philosophers would argue that we should not care whether our actions avert moral dilemmas. After all, if our overriding objective is to act morally, then when we have determined that a certain action would be morally right we need not consider anything further about the action, such as whether it will increase or decrease the incidence of moral dilemmas. In particular, we need not worry whether the technologies that we introduce will lead to or bypass moral dilemmas.

This argument is straightforward and plausible. I believe, however, that it is mistaken. To see why, let us imagine that instead of choosing our actions one by one we must choose all of our future actions simultaneously. I shall use the phrase "course of action" to refer to a person's sequence of actions over some period of time. Let us imagine that each of us is choosing our entire future lifetime course of action, all of the individual actions that constitute it, instead of just our next action. Of course, choosing a lifetime course of action would include choosing the next action, since that action will be the first action in the sequence. But we are, for the moment, considering the entire sequence of actions and not emphasizing any action in the sequence over the others.

Clearly there are moral differences among courses of action. For example, if a course of action included a much higher proportion of morally right individual actions than some other course of action, then the former would seem, ceteris paribus, to be better, morally speaking, than the latter. And if we were to choose between the two courses of action on moral grounds, we would choose the former.

But should we always choose courses of action that have the highest proportion of morally right individual actions? This rule seems appropriate from a deontological perspective, according to which our moral obligation is to obey specific moral rules. On such a view, our "moral success" over a series of actions would depend on how frequently those actions were morally right. However, from a consequentialist moral perspective, which tells us to maximize (expected) utility, it seems more appropriate to evaluate courses of action on the basis of the total (expected) utility that they produce. This is so because, according to a consequentialist moral theory, the whole reason for acting in a particular way is to produce as much utility as possible.
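To fix ideas, the two pure standards can be stated roughly as follows, where the notation is merely illustrative and nothing in the argument turns on its details. Let a course of action c consist of individual actions a_1, . . ., a_n. The deontological standard ranks courses of action by the proportion of morally right actions they contain, while the consequentialist standard ranks them by total expected utility:

    D(c) = \frac{1}{n}\bigl|\{\, i : a_i \text{ is morally right} \,\}\bigr|, \qquad U(c) = \sum_{i=1}^{n} EU(a_i),

where EU(a_i) is the expected utility of the individual action a_i.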

What if our moral theory has both consequentialist and deontological components? For example, our theory might regard utility maximization as a prima facie obligation but also recognize other prima facie obligations such as distributive justice or nonmaleficence. Here it seems that our moral assessment of courses of action should depend on both the proportion of morally right individual actions and the total amount of utility produced, where the two factors would be weighted to reflect their relative importance according to the theory.
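On a simple illustrative formalization of such a mixed theory (the weights and the additive form are only placeholders for whatever the theory actually specifies), the overall moral value of a course of action c might be written

    V(c) = w_D \, D(c) + w_U \, U(c), \qquad w_D, w_U > 0,

where D(c) and U(c) are the proportion and utility measures introduced above, and the weights w_D and w_U express the relative importance that the theory assigns to its deontological and consequentialist components.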

We are now in a position to see what is wrong with the above argument that concluded that we should concentrate on the rightness or wrongness of our current actions and ignore whether our actions lead to future moral dilemmas. Suppose that a decision-maker's best available lifetime course of action, morally speaking, is one that would be initiated by a morally wrong individual action. Then the decision-maker would have moral justification for performing an individual action that is morally wrong, by virtue of that action's being part of the best course of action. Here there are really two moral standards, viz., the familiar right-and-wrong standard for individual actions and a new standard for courses of action. But to put it this way is somewhat misleading, because it suggests that the two moral standards apply to two altogether different things, viz., individual actions and courses of action. Obviously, a person performs a course of action only by performing each individual action in the sequence. If one embarks on a particular course of action, she must first perform the initial action. Therefore, a moral standard for courses of action is, at the same time, a moral standard for individual actions as well. And if the initial action in the prescribed course of action turns out to be morally wrong, then the two moral standards give us conflicting prescriptions.

Two questions now arise: (1) In the final analysis, which of the two moral standards should a maker of moral decisions follow? And (2) how is it possible for the best course of action to be initiated by a morally wrong action? The correct answer to the first question, in my view, is that ultimately one should follow the standard for courses of action. To see why, suppose that we had to choose between two individual actions x and y. And suppose that x would be morally right but would be followed by a course of action consisting of 10,000 morally wrong individual actions. Furthermore, let us assume that y would be morally wrong but would be followed by a course of action consisting entirely of 10,000 morally right individual actions. Surely, it would make more sense from a moral point of view to choose y and the course of action that it initiates. But this means choosing the courses-of-action standard over the individual-actions standard in this instance of conflicting standards. Therefore, in cases of conflict, the standard for courses of action is the one we should follow.
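The arithmetic behind this judgment is worth making explicit. Let c_x and c_y be the courses of action initiated by x and y respectively, each comprising 10,001 individual actions. In the proportion terms used above,

    D(c_x) = \frac{1}{10001} \approx 0.0001, \qquad D(c_y) = \frac{10000}{10001} \approx 0.9999.

The individual-actions standard prescribes x, since x alone is morally right, while the courses-of-action standard prescribes y, since c_y is nearly impeccable and c_x is almost entirely wrong. The conflict between the two standards could hardly be starker.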

This brings us to the second question, i.e., how can the best course of action, morally speaking, be initiated by a morally wrong action? This seems impossible, for according to the "ought implies can" principle, whatever is morally right for us to do must always be possible for us to do. And if doing what is morally right is possible in each moral decision, then it is possible in every moral decision that we make. Therefore, among the courses of action that we can perform there is at least one that is morally impeccable, one that consists of individual actions all of which are morally right. Surely, a morally impeccable course of action would be better, morally speaking, than any morally imperfect course of action. Hence, the best available course of action must consist entirely of morally right individual actions. In the real world, however, we know that we shall not always do what is morally right. We know this for at least two reasons. First, even though it is always within our power to do what is right, we know from past experience that we shall occasionally freely choose to do what is wrong. Second, in many situations, we are unsure which of our alternatives are morally right. And since we are morally fallible, sometimes our moral judgments are mistaken. Thus moral uncertainty is a second reason why we should expect some of our future actions to be morally wrong.

We can now see why, in general, we should try to avoid moral dilemmas. Moral dilemmas are situations in which all of our options are undesirable or problematic in some respects, and we find it more difficult to do what is morally right or to know which of our alternatives would be morally right. Because of this, we are less likely to do what is morally right when we find ourselves in moral dilemmas. And moral uncertainty is especially likely to occur in moral dilemmas because moral dilemmas typically raise difficult and controversial ethical issues. As a general rule, the more often we encounter moral dilemmas the more often we shall fail to do what is morally right.
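This claim, too, admits a rough formalization, under the single assumption (plausible, I think, though not argued for here) that our probability of acting wrongly is higher in dilemmas than in ordinary moral decisions. If p_i is the probability of acting wrongly in the i-th decision of a lifetime of n decisions, then the expected number of wrong actions is

    E[W] = \sum_{i=1}^{n} p_i.

Since dilemmas contribute larger values of p_i to this sum, a lifetime containing fewer dilemmas has, other things being equal, a smaller E[W] and hence a higher expected proportion of morally right actions.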

The moral standard for courses of action implies that, in general, we should try to stay out of situations in which we are at great risk of acting wrongly. The qualifier "in general" is needed because, if utility production is morally relevant, then we must include it in our evaluation of courses of action. In some situations, the best course of action, morally speaking, may tolerate some moral dilemmas in order to increase utility. Thus, the moral standard for courses of action would not necessarily always prescribe minimizing the number of moral dilemmas that we shall encounter. But if a certain type of dilemma occurs often, it may be best to prevent or minimize its occurrence, even if doing so requires some actions that are morally wrong or suspect.

Suppose, for example, that a certain disease is moderately contagious and causes all who contract it to die a horrible, painful death. Because of this, people in the final painful stages of the disease often request that their deaths be hastened by the physician's active intervention. Physicians and families of those patients are thus placed in the awful position of having to decide whether to accede to those requests. Researchers have finally produced a vaccine that effectively protects people against infection. Unfortunately, the vaccine brings temporary but severe discomfort to those who receive it. Because of this discomfort and also because many people think (perhaps wishfully) that they will not contract the disease, many individuals do not volunteer to be vaccinated. However, unless a critical mass of the population is vaccinated, there will be an ever-worsening epidemic that will cause the painful deaths of many people. These dilemmas would be largely averted if vaccination against the disease were mandatory for everyone. Therefore, even if a mandatory vaccination program would be morally wrong because it would infringe excessively on individual liberty (and this is of course debatable), our moral standard for courses of action might prescribe such a program. It would prescribe the program if the moral costs of that infringement were offset by the moral benefits derived from preventing moral dilemmas. Thus, the morally wrong action of mandating that everyone be vaccinated would be morally justified in light of the moral standard for courses of action.

This illustration shows how it is possible for a morally objectionable or dubious technology to be morally justified by enabling moral agents to short-circuit recurrent moral dilemmas. This happens whenever averting those moral dilemmas results in courses of action that are better, morally speaking, than courses of action that expose us to those dilemmas. Can we use this notion of a moral standard for courses of action to resolve current disputes about technology? Are some technologies that are now on the drawing board like the mandatory vaccination program in the above example? In the next section, I shall argue that one such technology is a technology that would greatly extend the human life span, a technology which biomedical research may deliver to us in the relatively near future.

LIFE-SPAN-EXTENDING TECHNOLOGIES

Among the most heartrending moral decisions that we ever have to make are decisions about medical or custodial care of an aging spouse or parent. Too often old age brings serious debilities, such as Alzheimer's disease or crippling arthritis. It also makes us increasingly vulnerable to such assaults as heart attacks, strokes, and cancer. Decisions about continuing heroic and costly medical intervention when the patient has little or no chance of returning to good health are often moral dilemmas of the most distressing sort.

Deciding whether to commit one's aging spouse or parent to a nursing home is another tragic choice that many of us have to face. Often the only alternatives to the nursing home option are (1) to care for the spouse or parent oneself or (2) to hire professional nurses or custodians to provide at-home care. However, even if the demands of one's job do not make the first alternative impossible, taking on such a task may quickly lead to physical or emotional exhaustion. And the second alternative is often prohibitively expensive. But living in a nursing home is rarely as desirable as living in one's own home or in that of a close relative. Spouses or parents who are placed in nursing homes often feel abandoned by and resentful of their families, whom they may perceive as selfish and ungrateful. And nursing homes are often dismal environments with high concentrations of people with serious physical or mental impairments.

These are the sad facts of life for many of us today. In the industrialized world, where hunger and poverty have been subdued to a considerable degree, the tribulations associated with growing old are a major source of people's unhappiness and anxiety. And as populations age the problems will only get worse. Demographers predict that early next century, when the vanguard of the first post-World War II generation reaches retirement age, the demand for medical and economic resources will greatly increase at a time when the ratio of the labor force size to the over-65 population will be small by historical comparison.11 These changes threaten to cause considerable social, political, and economic upheaval.

The moral costs of allowing these trends to continue will be enormous. The number of decisions that we shall have to make about who will receive expensive long-term medical or custodial care will increase dramatically. Many of these will be agonizing decisions that will test our emotional capacities. One solution that has been suggested is to limit access to highly costly biomedical technologies, such as dialysis and organ transplantation, on the basis of age. Some countries have already enacted such policies.12 However, even if age restrictions on certain kinds of medical procedures become the accepted norm, this will not resolve the moral dilemmas associated with such practices. We shall continue to have serious moral misgivings, for example, about denying a heart transplant to a 66-year-old retiree who has just begun to enjoy the retirement that she has long worked for and anticipated.

We can reasonably expect, therefore, that without some radical departure from the path we are on, moral dilemmas associated with medical care for the aged will become increasingly prevalent in the coming decades. Is there a way in which technology might allow us to avert some of these awful decisions? I believe that quite possibly there is. Although scientists are not yet sure what causes us to age, evidence indicates that the maximum life span of some species, including some mammals, can be significantly lengthened by relatively simple methods.13 With the acceleration of progress that has occurred in the biological sciences in recent years, there is a reasonable chance that, within the next two or three decades, and with a concerted, adequately financed research effort perhaps even sooner, scientists will understand the aging process well enough to design an effective life-span-extending technology for human beings.

A number of objections have been raised against any attempt to interfere with the human aging process. Some critics have pointed to the overpopulation and the economic and social disruptions that an effective anti-aging technology would cause. Others have claimed that intellectual and cultural stagnation would result from the strict limits that would have to be placed on reproduction if an anti-aging technology were widely used.15 Another objection alleges that a research program to develop an effective anti-aging technology would unjustly divert resources from more important and urgent biomedical needs.16 Yet another criticism is based on the notion of a natural death.17 And no doubt many people would oppose anti-aging technology on the grounds that its development would be an act of hubris and an illicit violation of our nature as mortal beings.

Although I do not find any of these arguments convincing, I grant that the issue of the moral rightness of developing anti-aging technologies is problematic and that its correct resolution is uncertain. However, if what I have said about courses of action is correct, the critical issue is whether the moral standard for courses of action supports developing such a technology. To decide, we must determine whether courses of action that include developing life-span-extending technologies would be better, morally speaking, than courses of action that do not include developing those technologies.

We must bear in mind that an effective life-span-extending technology would spare us many agonizing life-and-death decisions about medical and nursing care for the aged. It would do so by significantly increasing the human life span,18 thus greatly expanding people's opportunities for reaching their life goals. Once an individual has reached her main goals, it may be easier for her, and for others, to accept her death. We shall still have to make life-and-death decisions, but, generally speaking, they will less often be the tragic choices that our brief life span forces on us in the present era.

Of course, we can foresee only a few of the details of the lifetime courses of action that we shall set into motion by our collective decision to develop, or not to develop, an anti-aging technology. Consequently, we shall have to apply the moral standard for courses of action on the basis of our imprecise and indefinite perceptions of the available alternatives and our summary judgments about which ones are best, morally speaking. However, we must consider the magnitude of the problems that our aging populations present now and will increasingly present in the future, as well as the risks that we shall make bad decisions when we decide matters of life and death for the aged. I believe that, if we do so, we shall conclude that seeking to avert or minimize those dilemmas is likely to be the better course of action. The long-term moral benefits of reducing the number of those life-and-death decisions that we shall have to make will outweigh any short-term moral improprieties.

CONCLUSIONS

In this essay I have contended that, although some technologies may raise moral questions that are difficult to answer, others may enable us to avert moral dilemmas. I have argued also that we should recognize a moral standard for courses of action in addition to the ordinary moral standard for individual actions and that we should follow the courses-of-action standard whenever there is conflict between the two. This means that, in general, we should prevent moral dilemmas by choosing courses of action that minimize their frequency. Technologies that enable us to avert moral dilemmas may conform to the moral standard for courses of action even when they violate the ordinary moral standard of right and wrong. A biomedical technology that extends the human life span is a future technology that, I believe, fits into this category.

If my arguments are sound, they show that there is an important moral norm that philosophers of technology have not yet recognized. It advises us to pursue morally optimal courses of action and thus generally supports technologies that enable us to avert morally problematic decisions. Perhaps this essay will provoke courses of action that will begin to rectify this omission.

NOTES

1. For example, Jeremy Rifkin in several of his writings has raised ethical questions about genetic engineering, where it is clear from his rhetorical style that, at the very least, he is proposing that research be suspended until those questions are resolved. Given the unlikelihood that any publicly debated ethical question will ever be answered to the satisfaction of the vast majority of the population, his proposal amounts to suspending genetic engineering research indefinitely. See Declaration of a Heretic (Boston: Routledge and Kegan Paul, 1985) and Algeny (New York: Viking Press, 1983). David Suzuki and Peter Knudtson also express moral trepidation about biotechnology in Genethics (Cambridge, MA: Harvard University Press, 1990), revised edition, in which they write:

The clash between genetics technologies and human values has already resulted in a deluge of difficult ethical problems. And it will continue to do so in the future, in ways that are simply impossible to anticipate (p. 333).

Their rhetorical purpose appears to be the same as Rifkin's.

2. I use the term "moral dilemma" in its traditional meaning, i.e., a moral decision in which all the alternatives are undesirable or objectionable to a significant degree. I do not have in mind the meaning that it has been given in recent philosophical discussion, i.e., a decision in which an agent ought to do x and also ought to do y but can do only one of them.

3. Quote attributed to a 19th-century anesthesia promoter by Betty MacQuitty, in The Battle for Oblivion: The Discovery of Anaesthesia (London: George G. Harrap, 1969), p. 68, and quoted in Martin S. Pernick, "The Calculus of Suffering in 19th-Century Surgery," The Hastings Center Report, 13 (April 1983): 26-36. Pernick's essay is reprinted in Judith Walzer Leavitt and Ronald L. Numbers, eds., Sickness and Health in America (Madison, WI: University of Wisconsin Press, 1985), pp. 98-112. The quote appears on p. 99 of Leavitt and Numbers.

4. Pernick, in Leavitt and Numbers, Sickness and Health, p. 99.

5. The notion that permanent incarceration in the iron lung was the usual fate of polio victims before the advent of polio vaccine is convincingly discredited by James H. Maxwell in "The Iron Lung: Halfway Technology or Necessary Step?," The Milbank Quarterly 64 (1986): 3-29. For an interesting discussion of the merits of the iron lung as a medical technology, see Maxwell's essay and Lewis Thomas's "Response to James H. Maxwell's Essay, 'The Iron Lung'," The Milbank Quarterly 64 (1986): 30-33.

6. See Arthur Birembant, "The Mining Industry,"