
Techné: Research in Philosophy and Technology

Editor-in-Chief: Joseph C. Pitt, Virginia Tech; previous editors: Paul Durbin 1995/97, Peter Tijmes 1997/99, Davis Baird 2000/07

Volume 10, Number 1, Fall 2006



From Challenger to Columbia: What lessons can we learn from the report of the Columbia accident investigation board for engineering ethics?

Junichi Murata
University of Tokyo
Department of History and Philosophy of Science
Tokyo, Japan

Abstract

One of the most important tasks of engineering ethics is to give engineers the tools they need to act ethically and to prevent the disastrous accidents that could result from their decisions and actions. The space shuttle Challenger disaster is referred to as a typical case in almost every textbook. It is treated as a case from which engineers can learn important lessons, as it shows impressively how engineers should act as professionals to prevent accidents. The Columbia disaster followed seventeen years later, in 2003. According to the report of the Columbia accident investigation board, the main cause of the accident lay not in individual actions that violated certain safety rules but in the history and culture of NASA: a culture that desensitized managers and engineers to potential hazards as they dealt with problems of uncertainty. This view of the disaster is based on Diane Vaughan's analysis of the Challenger disaster, which highlighted the organizational factors and culture within NASA that contributed to that accident. Building on the insightful analysis of the Columbia report and the work of Diane Vaughan, we search for an alternative view of engineering ethics. We focus on the inherent uncertainty of engineers' work with respect to hazard precaution. We discuss claims that the concept of professional responsibility, which plays a central role in orthodox engineering ethics, is too narrow and that we need a broader and more fundamental concept of responsibility. This responsibility should be attributed to every person related to an organization (governments, managers, engineers, etc.) and might therefore be called "civic virtue". Only on the basis of this broad concept of responsibility as civic virtue can we find a possible way to prevent disasters and reduce the hazards that seem to be an inseparable part of the use of complex technological systems.

Keywords: engineering ethics, risk, safety, space shuttle accidents, civic virtue

One of the most important characteristics of technology is that we can use it to produce instruments that lighten our workloads and/or make our working conditions safer. It is also well known, however, that the meaning of technology cannot be reduced to this instrumental role (Tenner 1996). For example, during the processes of production and use of technology, unintended situations sometimes arise which can be a source of creativity but can also lead to technological failures and accidents. How can we interpret this unpredictable and unmanageable aspect of technology? I think this problem is pivotal to the philosophy of technology.

In the view of technological determinism, the processes of technological development and the introduction of a technology into a society are seen in hindsight. This hindsight allows us to interpret them as processes dominated by technological rationality and efficiency, while the unpredictable and unmanageable aspect of technology remains out of focus. In contrast to this deterministic view, the social constructivist approach focuses on technological unpredictability and unmanageability and finds that these aspects provide interpretative flexibility and a chance for users of a technology to take the initiative and develop the technology in a new direction. Thus, our perspective on the philosophy of technology depends on how we characterize these aspects of technology, or on which facet of these aspects we focus (Murata 2003a; 2003b).

A similar situation can be found in discussions of the ethics of technology. One of the most important tasks of those dealing with the ethics of building and using technology is to clearly define methods that can be used to predict and control the process of technological development and thereby minimize the potential of a new technology to cause harm. However, if an unpredictable and uncontrollable character is essential to the processes of developing and using technology, those dealing with the ethics of technology are confronted with an apparently contradictory task, i.e. that of predicting and controlling an unpredictable and uncontrollable process (Murata 2003c).

In spite of these circumstances, in discussions of "engineering ethics," a topic which has recently become very popular in the field of applied ethics, this issue has not been sufficiently emphasized as a fundamental and central problem of the field, although it is sometimes touched on. Orthodox textbooks of engineering ethics commonly describe the difficulties engineers meet in their workplaces in such a way that what engineers as professionals must do is clear from the beginning. The difficult ethical problem arises later, when the question of how an engineer is to realize a task comes up against various disturbing factors arising from circumstances outside the particular technological domain in question. If we regard the problems raised in the field of engineering ethics in this way, the essential character of uncertainty is neglected and the contradictory character of the task of engineering ethics is left out of consideration.

In this paper, I would like to consider the status and the significance of this problem of uncertainty in the field of engineering ethics, taking as examples two famous technological disasters: the two space shuttle accidents. How can ethicists deal with the unpredictable, uncontrollable and creative character of technology? And what do engineering ethicists need to do to deal with this problem? These are the questions that I will address in this paper.

I will start with an analysis of the report of the Columbia accident investigation board (Report 2003). This report clearly demonstrates that the essential cause of the accident was to be found not in the failure of individual decisions of engineers or managers, as is usual in the orthodox view of accidents, but rather in structural features, such as the history and culture of NASA, in which the methods used to deal with various uncertainties, dangers and risks were institutionalized while, at the same time, the organizational sensitivity to possible dangers was gradually paralyzed. If we take this interpretation of accidents seriously, an engineering ethics based on such concepts as "professional responsibility" in the narrow sense is insufficient, as this narrow definition focuses too much on individual decisions and professional actions and overlooks the role of history and culture as an implicit background to each action within an organization. In contrast to the usual perspective on engineering ethics, in this paper I focus on a much more fundamental and wider dimension of ethics, in which an ethical virtue such as sensitivity to possible danger plays a central role as a cultural element. This virtue can be called "civic virtue", as it can be attributed not only to "professionals" but to every person involved in a technological system. Only when we choose to look at responsibility in this broad sense can we find a possible way to cope with the apparently contradictory problem of preventing hazards that are inevitable in complex technological systems such as the space shuttle program. This is, I think, one of the most important lessons we must learn from the report of the Columbia accident investigation board.

1. Columbia: report of the accident investigation board

On February 1, 2003, the space shuttle Columbia disintegrated in flames over Texas a few minutes before its scheduled landing in Florida, after a 16-day mission in space. This flight, designated STS-107, was the space shuttle program's 113th flight and Columbia's 28th. It was the second disastrous accident in the history of the space shuttle program, which began in 1981 with the first flight of the same orbiter, Columbia. The first was the loss of Challenger, which exploded just 73 seconds after launch on January 28, 1986.

Immediately after the Columbia accident, the "Columbia accident investigation board" was organized to conduct a wide-ranging investigation. In August 2003, about seven months after the accident, the board's voluminous report was published. In the report we find analyses of many kinds of documents and debris and a far-reaching examination of the organizational problems of NASA, which are rooted in its long history.

The report clearly identifies the physical cause of the accident: a breach in the thermal protection system on the leading edge of the orbiter's left wing, caused by a piece of insulating foam that separated from the external tank 81.7 seconds after launch and struck the wing. During re-entry this breach allowed superheated air to penetrate the leading edge insulation and progressively melt the aluminum structure of the left wing, weakening the structure until increasing aerodynamic forces caused loss of control, failure of the wing, and break-up of the orbiter (Report 2003: 9, 49ff.).

The board set up to seek the cause of the Columbia disaster did not rest with identifying the physical cause; it also looked widely into other areas, especially organizational factors. The report takes a very critical stance towards NASA's fundamental attitude to the shuttle program in general, an attitude arising from the history of NASA and rooted in its culture.

“The Board recognized early on that the accident was probably not an anomalous, random event, but rather likely rooted to some degree in NASA’s history and the human space flight program’s culture.” (Report 2003: 9, 97, 177)

The report makes it clear that the unusual foam-debris strike on the orbiter right after launch was not overlooked by the people at NASA; rather, it was a focus of attention for many engineers from the very beginning of Columbia's flight.

As soon as Columbia reached orbit, engineers belonging to the photo working group began reviewing liftoff imagery recorded by video and film cameras and noticed that a large piece of debris from the left bipod area of the external tank had struck the orbiter's left wing. Because they did not have sufficiently well-resolved pictures to determine the potential damage, and had never before seen such a large piece of debris strike an orbiter so late in ascent, the engineers decided to ask for ground-based imagery of Columbia, requesting that shuttle program managers get in contact with the Department of Defense (Report 2003: 140f.).

Having heard that a large piece of debris had struck the orbiter's wing, and anxious about the possibility of a disaster resulting from it, engineers from various sections began to analyze and discuss the issue. They even constituted a debris assessment team and continued to work through the holidays. They also tried to obtain imagery of the current state of the orbiter's left wing in flight, informing managers of their concern and of their requests for a good image of the wing damage. In all, the engineers made three attempts to obtain such imagery; each attempt was unsuccessful, however, because it was not made through the formal hierarchical route, and the request was ultimately declined by a chief manager of the mission management team.

Some engineers were frustrated by this result, but they could not make the mission managers listen to their concerns. In the formal meetings led by the mission managers, the engineers could not demonstrate the hazard presented by the impact of the debris and were unable to persuade managers to take action, because they did not, and could not, acquire the right kind of detailed information. The mission managers, whose main interest lay in keeping the flight on schedule, did not pay much attention to the foam strike. Above all, they relied on the presupposition that many previous flights had been successful in spite of debris strikes, and that in this sense the debris strike could be considered an "in-family" and "turnaround" issue, an "accepted risk" rather than an event with significance for "safety of flight". In this situation engineers found themselves in "an unusual position of having to prove that the situation was unsafe—a reversal of the usual requirement to prove that a situation is safe" (Report 2003: 169).

The Report focuses attention on various organizational and cultural factors as the main causes of the accident, i.e. reliance on past success as a substitute for sound engineering practices, organizational barriers that prevented effective communication of critical safety information and stifled professional differences of opinion, and so on (Report 2003: 9, 177).

“In the Board’s view, NASA’s organizational culture and structure had as much to do with this accident as the External Tank foam. Organizational culture refers to the values, norms, beliefs, and practices that govern how an institution functions”. (Report 2003: 177)

Reading through the report, we cannot help but notice the similarities between the story of the Columbia accident and that of the Challenger. In fact, in many places the report compares these two accidents and finds in the later one "echoes of Challenger".

“As the investigation progressed, Board member Dr. Sally Ride, who also served on the Rogers Commission, observed that there were “echoes” of Challenger in Columbia. Ironically, the Rogers Commission investigation into Challenger started with two remarkably similar central questions: Why did NASA continue to fly with known O-ring erosion problems in the years before the Challenger launch, and why, on the eve of the Challenger launch, did NASA managers decide that launching the mission in such cold temperatures was an acceptable risk, despite the concerns of their engineers?” (Report 2003:195)

Reading the report, we can ask exactly the same questions concerning Columbia: Why did NASA continue to fly with a known debris-strike problem in the years before the Columbia launch? And why, during the flight of Columbia after the debris strike, did NASA managers decide that the re-entry of Columbia with the strike damage was an acceptable risk, overriding the concerns of their engineers?

What lessons did the managers and engineers in NASA learn from the Challenger accident? Or had they in fact learned nothing from the Challenger accident?

These questions and doubts only intensify when we consider the status of the Challenger accident in the field of engineering ethics. In almost every textbook of engineering ethics we find the story of the Challenger accident presented as a typical case in which the ethical problems of engineering can be seen and from which something can be learnt. Every student who has attended a course in engineering ethics since Challenger remembers at least the word "O-ring", the sentence one of the managers at Morton Thiokol said to his engineering colleague at the decisive moment of decision making ("take off your engineering hat and put on your management hat"), and the engineers' hero, Roger Boisjoly, who stuck to his professional conscience until the last moment.

The time between Challenger and Columbia saw the inception and rise of a new discipline, engineering ethics. During this time everyone working in the field of technology began to hear about the ideas being discussed in engineering ethics, and practitioners and students should have become more conscious than before that safety is a high-priority objective in their professional field. For example, many professional groups came to expect their members to work according to codes of ethics, either setting a code in place or overhauling one where a field already had a code of ethics. Looking at these circumstances, we are led to ask what role engineering ethics can play in a real workplace. In the face of the fact that almost the same accident can happen precisely where the lessons of the first accident should have been learned, can we still argue that engineering ethics has a meaningful role? What is lacking? And what is wrong in orthodox engineering ethics?

In order to tackle these questions, I would like to examine how the Challenger accident is dealt with in popular textbooks.

2. Challenger: two stories

We now have at least two different versions of the Challenger accident. One is the orthodox version, on which discussions in orthodox textbooks of engineering ethics are based. The other is a revisionist version, which seems more realistic but is difficult to use for engineering ethics. In comparing these two stories, I hope to find some hints on how to revise orthodox engineering ethics.

(1) Story 1, a paradigm case of engineering ethics

It is widely recognized that engineering ethics should be classified as professional ethics; in other words, because engineers have special knowledge and influence as engineers, they have a special responsibility to prevent dangerous results caused by their actions. In this sense, engineers must be much more ethically careful when they act as engineers than when they act in everyday situations. Various concepts and issues belonging to engineering ethics are characterized under this presupposition. For example, professional codes of ethics for engineers are interpreted as rules which explicitly determine what engineers have to do to fulfill their special professional responsibility as engineers. Concepts such as honesty, loyalty, informed consent and whistle-blowing are considered to have a role similar to that of codes of ethics, i.e. they are to guide people in deciding and acting so as to fulfill their professional responsibility as engineers (Harris et al. 2000: chaps. 1 and 6; Davis 1998: chap. 4; Johnson 2001: chap. 3).

At first sight the Challenger accident seems to give us an impressive example that we can use to learn what it is like to act ethically as a professional engineer in a concrete situation.

The fundamental presuppositions of the story in the orthodox version are given below:

(a) Engineers knew the problem concerning the O-rings very well. "Chief O-ring engineer Roger Boisjoly knew the problems with the O-ring all too well. More than a year earlier he had warned his colleagues of potentially serious problems." (Harris et al. 2000: 4f.)

(b) Although the data given on the eve of the launch were incomplete, it was clear that a correlation existed between temperature and the resiliency of the O-rings. "The technical evidence was incomplete but ominous; there appeared to be a correlation between temperature and resiliency". (Harris et al. 2000: 4f.)

(c) With respect to value evaluation there was a clear difference, or conflict, between engineers and managers. Engineers regarded safety as more important than schedule or profit, and managers prioritized these in the reverse order. "Turning to Robert Lund, the supervising engineer, Mason directed him to "take off your engineering hat and put on your management hat."" (Harris et al. 2000: 4f.)

(d) Boisjoly is considered a role model for engineers, although his action was unsuccessful. "It was his professional engineering judgment that the O-rings were not trustworthy. He also had a professional obligation to protect the health and safety of the public. [----] Boisjoly had failed to prevent the disaster but he had exercised his professional responsibilities as he saw them." (Harris et al. 2000: 4f.)

Under these presuppositions, the story seems to show us dramatically how important it is that engineers, in carrying out their responsibility, stick to their professional knowledge and obligations and resist the various influences that come from outside engineering.

On the other hand, we cannot ignore the fact that this story has decisive weaknesses.

One of the problems with a story based on these presuppositions is that it is understandable only in hindsight. If we take the actual situation at the time seriously, we cannot easily presuppose that Boisjoly really understood the problem of the O-rings very well.

First, up to the teleconference on the eve of the Challenger launch, Boisjoly believed that the joint was an acceptable risk because of the redundancy of the second O-ring (Vaughan 1996: 187). Second, if he had really known the problem of the O-rings, he could have demonstrated the correlation between temperature and resiliency in a much more definite and persuasive way, and above all not on the eve of the launch but much earlier. We cannot characterize someone's belief as genuine knowledge, even if it turns out to be true afterwards, as long as it could not be persuasively demonstrated to be true when that demonstration was demanded. Third, it is only a groundless supposition that Boisjoly might have done more than he really did: he himself "felt he had done everything he could," and "it is also questionable that Boisjoly believed in the fatal consequence of the launch under the expected condition, as even Boisjoly acted as if he expected the mission to return" (Vaughan 1996: 380). In addition, even the managers would not have allowed the launch if there had been clear evidence of the possible danger. What is missing in the story is an understanding of the character of technological knowledge and judgment.

What is characteristic of engineers' activity is that engineers must judge and make decisions in uncertain situations in which no clear or definitive answer can be found in advance. In this sense, Boisjoly's judgment must be regarded as one possible judgment among others, and therefore the conflict over whether the launch should be approved was to be found not only between the engineers and managers but also among the engineers themselves (Vaughan 1996: 324ff., 334, 356).

In addition to this problem, the story is also problematic in its narrative of Boisjoly's behavior, because it ends only with admiration of that behavior; we find no recommendations or suggestions as to how Boisjoly could have acted to prevent the accident. The story might indicate that even if engineers act ethically as engineers, accidents cannot be avoided. In other words, it could suggest that an engineer can be counted as sufficiently ethical in a self-contained way, independent of the ultimate results.

In this context, we can find an interesting episode in the report of the Columbia investigation board. During the Columbia flight, after finding out that the request for imagery of the orbiter from an outside source had been cancelled by the managers, one engineer wrote an e-mail in which he emphasized that the debris damage could possibly bring about a very dangerous result, citing one of NASA's mottoes, "If it is not safe, say so". Considering the content of the e-mail, we can imagine that the engineer must have known very well what an engineer should do in such a situation. However, he did not send it. Instead he printed it out and shared it with a colleague.

“When asked why he did not send this e-mail, Rocha replied that he did not want to jump the chain of command.” (Report 2003: 157)

“Further, when asked by investigators why they were not more vocal about their concerns, Debris Assessment Team members opined that by raising contrary points of view about Shuttle mission safety, they would be singled out for possible ridicule by their peers and managers.” (Report 2003: 169)

It seems that there was no "Boisjoly", at least no Boisjoly à la story 1, in the case of Columbia. However, if a possible lesson of Challenger is that even Boisjoly could not prevent an accident, it is understandable that engineers became skeptical about their chances of preventing accidents by being more vocal and committing themselves to a kind of (near) whistle-blowing action.

Of course, this is not a logically necessary conclusion from the orthodox Challenger narrative. However, it cannot be denied that the possibility of drawing such a conclusion remains as long as the story gives us no indication of what Boisjoly could have done to prevent a possible accident beyond what he actually did.

(2) Story 2: the normalization of deviance

If we leave the perspective in which hindsight is dominant and go back to the real situation in the past, when engineers were confronted with various uncertainties, we find a very different story. This revised story, originally told by the sociologist Diane Vaughan, is so impressive and persuasive that many researchers use it to criticize the orthodox story (Vaughan 1996; Collins and Pinch 1998; Lynch and Kline 2000).

By focusing on the process of the (social) construction of "acceptable risk", which plays a decisive role in engineers' judgments and decisions, this revised story gives us an answer to the questions raised above: why NASA continued to fly with a known O-ring erosion problem in the years before the Challenger launch, and why, on the eve of the Challenger launch, NASA managers decided that launching the mission in such cold temperatures was an acceptable risk, despite the concerns of their engineers.

First of all, we must acknowledge that there is no absolute certainty in the realm of engineering and that we can never objectively know the amount of risk. Larry Wear, an engineer at the Marshall Space Flight Center, expressed this situation in the following way:

“Any airplane designer, automobile designer, rocket designer would say that [O-ring] seals have to seal. They would all agree on that. But to what degree do they have to seal? There are no perfect, zero-leak seals. All seals leak some. [----] How much is acceptable? Well, that gets to be very subjective, as well as empirical.” (Vaughan 1996: 115)

At least until the eve of the Challenger launch, engineers regarded the problems concerning the O-rings as an acceptable risk. For this interpretation to change, there would have had to be some decisive evidence. Boisjoly thought the apparent correlation between temperature and resiliency was sufficient evidence, but others did not. How should the dispute have been settled? Exactly in the way the engineers and managers settled it on the eve of the launch.

“Without hindsight to help them the engineers were simply doing the best expert job possible in an uncertain world. We are reminded that a risk-free technology is impossible and that assessing the working of a technology and the risks attached to it are always inescapable matters of human judgment.” (Collins and Pinch 1998: 55)

In this way, we can find nothing special in the activities of the engineers on the eve of the launch. Engineers and managers acted according to the established rules, just as in a normal flight readiness review. According to one NASA manager, "with all procedural systems in place, we had a failure" (Vaughan 1996: 347). At least in this sense, they did their best as usual, but failed.

However, if what the engineers did on the eve of the launch can be considered the best action engineers could take, we again become perplexed by the conclusion of the story. Was the accident inevitable? Was there no way to prevent it? And was there no lesson to be learnt from this story?

Perhaps the only lesson would be that accidents are inevitable in a complex technological system. While Vaughan's conclusion seems close to this pessimistic view, she tries to draw some lessons from her story. The lessons we should learn, however, are to be drawn not only from the events on the eve of the launch but also from the preceding process of judgments and actions, in which the degree of acceptable risk concerning the O-rings was gradually increased. To explain this process, Vaughan proposed the term "normalization of deviance" (Vaughan 1996).

In many launches before that of Challenger, engineers had found various cases of erosion and blow-by of the O-rings. However, what they found was interpreted not as an indicator of a safety problem but as evidence of acceptable risk, and as a result they widened the range of acceptability step by step. "The workgroup calculated and tested to find the limits and capabilities of joint performance. Each time, evidence initially interpreted as a deviance from expected performance was reinterpreted as within the bounds of acceptable risk" (Vaughan 1998: 120). Once such a process of normalization of deviance has begun and become gradually institutionalized, it is very difficult to stop this "cultural construction of risk" (Vaughan 1998: 120). The only possible way to interfere with the process is to change the culture in which it is embedded and regarded as self-evident; this requires a paradigm shift, such that anomalies neglected in the former paradigm come into focus (Vaughan 1996: 348, 394).

This brings us close to the conclusions drawn by the Columbia accident investigation board, which stated that NASA's history and culture should be considered the ultimate causes of the accident.

What then can we do to change the culture of an organization and prevent possible accidents?

It seems there is no special method immediately available. Changing an organizational structure and introducing new rules and guidelines would be possible measures, and such measures were taken within NASA after the Challenger accident. However, there is no guarantee that these measures will create a better situation in which accidents are prevented. On the contrary, there is even a possibility of introducing a new hazard, just as we often find in cases of design change. It is well known that any design change, no matter how seemingly benign or beneficial, has the potential to introduce a possibility of failure (Petroski 1994: 57).

“Perhaps the most troubling irony of social control demonstrated by this case [structural change done by NASA after the Challenger accident] is that the rules themselves can have unintended effects on system complexity and, thus, mistake. The number of guidelines— and conformity to them—may increase risk by giving a false sense of security” (Vaughan 1996: 420).

This comment, which Vaughan wrote before the Columbia accident, was unfortunately borne out by that accident.

3. Normal accidents and responsibility as civic virtue

The second, revised story of the Challenger accident seems much more realistic than the first, but the conclusion derived from it seems much worse, or at least more pessimistic. Can we draw some lessons concerning engineering ethics from it? In an attempt to find an ethics that takes the lessons derived from the second story seriously, I will consider several ideas proposed by two thinkers, Charles Perrow and John Ladd (Perrow 1999; Ladd 1991).

(1) Normal accidents

On the basis of his analysis of the accident at the Three Mile Island nuclear plant, Charles Perrow proposed the term "normal accident" to characterize what happens with high-risk technologies. In highly complex technological organizations whose components are tightly coupled, accidents occur in an unpredictable, inevitable and incomprehensible way.

These elements of unpredictability, inevitability and incomprehensibility do not mark a merely factual limit that we could overcome with new knowledge or technologies. Rather, every effort to overcome this limit cannot but make an organization more complex, and thereby produces new possible dangers.

“If interactive complexity and tight coupling—system characteristics—inevitably will produce an accident, I believe we are justified in calling it a normal accident, or a system accident. The odd term normal accident is meant to signal that, given the system characteristics, multiple and unexpected interactions of failure are inevitable” (Perrow 1999: 5).

The term “normal accident” is very insightful, as it indicates that in high-risk technologies the normal processes of engineers’ activities in the workplace must be regarded both as processes for producing products and, simultaneously, as processes that produce new hazards. If engineers wanted to avoid committing themselves to such processes and to take as conservative a position as possible, the only way for them to work would be to restrict themselves to the laboratory. In this context, Vaughan cites an interesting expression, “engineering purist”, used by Marshall’s engineers to characterize an engineer who works only in a laboratory, does not have to make decisions, and can take the most conservative position in the world (Vaughan 1996: 88). In contrast to this “engineering purist”, an engineer who works and makes decisions in a real workplace, where not only “purely” technical problems but various conditions such as cost and schedule must be taken into consideration, can never take the most conservative position.

If we take these circumstances seriously, we cannot but change our view of the meaning of engineers’ everyday activities. For example, on this normal accident view, every decision process about a certain acceptable risk must also be regarded as a process that simultaneously produces another possible risk, and therefore previous success cannot be used as a justification for accepting increased risk. Perhaps this sounds a little extreme, but we find this kind of warning in the statements of working engineers. Petroski emphasizes that when engineers design new things, past success is no guarantee of the success of the new design, and he cites the following statements of engineers: “Engineers should be slightly paranoiac during the design stage”; “I look at everything and try to imagine disaster. I am always scared. Imagination and fear are among the best engineering tools for preventing tragedy” (Petroski 1994: 3, 31). If engineers could maintain this kind of view at every step of their work, the process of the normalization of deviance would not remain invisible but would inevitably come to the fore.

What is important here is that this kind of change in attitude cannot be realized simply by changing explicit rules or institutional structures, since the main point is that such changes always have the potential to produce new risks. It is remarkable that the recommendations made by the Columbia accident investigation board focus on this point.

For example, the report uses normal accident theory to analyze the causes of the accident. It indicates the need for a change of culture within NASA and makes the following proposals.

“The [Space Shuttle] Program must also remain sensitive to the fact that despite its best intentions, managers, engineers, safety professionals, and other employees can, when confronted with extraordinary demands, act in counterproductive ways” (Report 2003: 181).

“Organizations that deal with high-risk operations must always have a healthy fear of failure—operations must be proved safe, rather than the other way around” (Report 2003: 190).

These sentences suggest clearly where we should look for resources to change the culture in question: surely not in ethics in the narrow sense of the word, since the “best intentions” people might have cannot by themselves prevent failures. Rather, “sensitivity” to possible accidents and “a healthy fear of failure” must play the decisive role.

What kind of ethics would we have if we took these indications seriously?

(2) Civic virtue

On the basis of his analysis of one typical case of a normal accident, the catastrophe at the Union Carbide chemical plant at Bhopal in India, John Ladd attempts to identify the ethical dimension indicated by such cases by proposing the interesting concept of “civic virtue”.

Ladd introduces a distinction within the concept of responsibility. The first is a narrow, legal and negative concept of responsibility, also characterized as job-responsibility or task-responsibility. If someone does not fulfill this responsibility, he or she will be blamed. On this understanding, the concept of responsibility is used exclusively: the concept of non-responsibility plays as important a role as the concept of responsibility, and the question of who is responsible and who is not is central. “We hear claims of responsibility voiced in hearing ‘It’s my job, not his’ as well as disclaimers of responsibility in hearing ‘It’s his job, not mine’” (Ladd 1991: 81).

In contrast to this, the second concept of responsibility is characterized as broad, moral and positive.

According to this concept, if someone does not fulfill a responsibility, it does not follow that he or she will be blamed. In other words, if someone is responsible for something in this sense, this does not exclude others from also being responsible. In this sense, the “collective responsibility” of a large part of the population for the same thing is possible (Ladd 1991: 81). This moral responsibility is “something positively good, that is, something to be sought after” and “something that good people are ready and willing to acknowledge and to embrace” (Ladd 1991: 82).

Ladd calls this kind of responsibility “civic virtue”. It is characterized, firstly, as a moral virtue, because it contains as an essential factor an attitude of concern for the welfare of others, i.e. humanity. Secondly, because this attitude of caring for and regarding others is a virtue everyone should exercise in relationships with others, this responsibility is characterized as civic.

“Our attitude towards whistle blowing illustrates how far we have gone in turning our values upside down: the concern for safety, which should motivate all of us, has been relegated to the private realm of heroes, troublemakers and nuts. Our society assumes that it is a matter of individual choice (and risk) to decide whether or not to call attention to hazards and risks instead of being, as it should be, a duty incumbent on all citizens as responsible members of society.

This is where virtue comes in, or what in the present context I shall call civic virtue. Civic virtue is a virtue required of all citizens. It is not just something optional—for saints and heroes.” (Ladd 1991: 90)

To this last sentence we could add the word ‘engineers.’ On this view, preventing hazards and risks is not the special responsibility of engineers as professionals but rather the universal responsibility of all citizens.

If we relate the indications derived from the concept of the normal accident in the last section to this discussion, we can give some content to the concept of responsibility as civic virtue.

“Sensitivity” to possible danger and “a healthy fear of failure”, which can be regarded as essential factors constituting a culture of safety, must be central features of civic virtue, features which can contribute to the prevention of possible accidents.

As long as we remain in the dimension of negative responsibility, it is difficult to identify someone to blame in the case of a normal accident; it is also unhelpful to do so, since replacing the individuals blamed would not necessarily change the culture of the organization. If we look at the situation from the standpoint of positive responsibility, we find many irresponsible acts, such as lack of concern, negligence in the face of signs of a hazard, and so on, which are rooted in a general culture, exactly as in the cases of Challenger and Columbia. In this way, from the viewpoint of civic virtue, we can take into account the collective responsibility of an organization and indicate the need to change its culture in order to prevent accidents. In this sense, the concept of civic virtue is to be understood both as a virtue belonging to individuals and as a virtue belonging to organizations.

In addition, as Vaughan and the report of the Columbia accident investigation board emphasize, the organizational culture of NASA is constrained and constituted by external political and economic factors decided by the U.S. Congress and the White House. From the point of view of civic virtue, we can extend the scope of collective responsibility to the people belonging to these institutions, since these people, as responsible citizens, can no more evade responsibility than can NASA administrators, middle-level managers and engineers. Civic virtue must be ascribed to every responsible and related person. In this sense, if one wants to promote a hazard-aware environment in which people work, act and govern ethically, it is necessary to cultivate not only professional responsibility but, more importantly, civic virtue within the organization and society. Such civic virtue is rooted in a capacity to respond to and care for others and thus constitutes a fundamental dimension of ethics.

4. Conclusion

Considering all of these discussions, what lessons can we learn for engineering ethics?

Firstly, as already indicated, engineering ethics is usually characterized as professional ethics. This kind of ethics may be very helpful for producing honest, loyal and “responsible” engineers who can solve the various problems they confront in the workplace and accomplish their work as engineers. However, as long as engineering ethics remains in the dimension of professional ethics, centered on the actions of the individual engineer, it will fail in its ultimate goal of enabling engineers to think about, and take responsibility for, preventing the possible disasters that could result from their everyday normal practices. To fulfill this role, engineering ethics must encompass factors rooted in a more fundamental dimension than that of professional ethics.

Secondly, “engineering ethics” is commonly classified as ethics on the micro level, in contrast to the “ethics of technology”, in which philosophical and political problems concerning the relationship between technology and society are discussed on the macro level. Surely we must not confuse the different levels of discussion. However, when it comes to preventing disastrous “normal” accidents, and when the causes of normal accidents are rooted in organizational culture, which is inseparably connected with macro-level factors, we cannot leave the discussion of engineering ethics within the micro and individual dimension but must extend it and connect it to the discussions taking place on the macro level. The concepts of “culture” and “civic virtue” can be used to mediate between the two levels of discussion, thus extending and making more fruitful the field of “engineering ethics”.

REFERENCES

Collins, Harry and Trevor Pinch. 1998. The Golem at Large: What You Should Know about Technology. Cambridge: Cambridge University Press.

Davis, Michael. 1998. Thinking Like an Engineer: Studies in the Ethics of a Profession. Oxford: Oxford University Press.

Harris, Charles, Michael Pritchard and Michael Rabins. 2000. Engineering Ethics: Concepts and Cases, second edition. Belmont, CA: Wadsworth.

Johnson, Deborah G. 2001. Computer Ethics. Englewood Cliffs: Prentice Hall.

Ladd, John. 1991. Bhopal: An Essay on Moral Responsibility and Civic Virtue. Journal of Social Philosophy 22(1): 73-91.

Lynch, William and Ronald Kline. 2000. Engineering Practice and Engineering Ethics. Science, Technology and Human Values 25(2): 195-225.

Murata, Junichi. 2003a. Creativity of Technology: An Origin of Modernity? In Modernity and Technology, edited by Thomas Misa, Philip Brey, and Andrew Feenberg. Cambridge, MA: The MIT Press.

Murata, Junichi. 2003b. Philosophy of Technology, and/or, Redefining Philosophy. UTCP Bulletin 1. University of Tokyo, Center for Philosophy: 5-14.

Murata, Junichi. 2003c. Technology and Ethics—Pragmatism and the Philosophy of Technology. The Proceedings for the UTCP International Symposium on Pragmatism and the Philosophy of Technology, Volume 2: 60-70.

Perrow, Charles. 1999. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press.

Petroski, Henry. 1994. Design Paradigms: Case Histories of Error and Judgment in Engineering. Cambridge: Cambridge University Press.

Report. 2003. Columbia Accident Investigation Board, Report Volume 1, August 2003. Washington D.C.: Government Printing Office.

Tenner, Edward. 1996. Why Things Bite Back: Technology and the Revenge Effect. London: Fourth Estate.

Vaughan, Diane. 1996. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: The University of Chicago Press.

