Techné: Research in Philosophy and Technology

Editor-in-Chief: Joseph C. Pitt, Virginia Tech. Previous editors: Paul Durbin 1995/97; Peter Tijmes 1997/99; Davis Baird 2000/07.

Volume 8, Number 2, Spring 2005


Great Uncertainty About Small Things

Sven Ove Hansson
Royal Institute of Technology


Introduction

Much of the public discussion about nanotechnology concerns possible risks associated with the future development of that technology. It would therefore seem natural to turn to the established discipline for analyzing technological risks, namely risk analysis, for guidance about nanotechnology. It turns out, however, that risk analysis does not have much to contribute here.

The reason for this is that the tools of risk analysis have been tailored to issues other than those presently encountered in connection with nanotechnology. Risk analysis was developed as a means to evaluate well-defined dangers associated with well-known technologies, such as the risk that a bomb explodes accidentally or that exposure to a specific chemical substance gives rise to cancer (Rechard 1999; Hansson 1993). The characteristic activity of risk analysts is to estimate the probabilities of such events. Elaborate methodologies have been developed to estimate the probabilities of events that depend on complex chains of technological events, such as nuclear accidents, or on biological processes that are only partially known, such as chemical carcinogenesis.

These methodologies are not of much help when we are dealing with the issues commonly associated with nanotechnology. Critics of nanotechnology typically refer to unrealized possibilities, such as that nanotechnological devices can be used for eavesdropping and other privacy intrusions, that nanorobots can replace soldiers, that nanodevices can be implanted to control a human being, or that self-replicating nanosystems may eventually replace the human race instead of serving us. These are certainly serious concerns, but nobody knows today whether or not any of these types of nanodevices will ever be technologically feasible. Nor do we know what these hypothetical technologies would look like if they are ever realized.1 Therefore, discussions of such dangers differ radically from how risk analysis is conducted. The tools developed in that discipline cannot be used when so little is known about the possible dangers that no meaningful probability assessments are possible.

In the terminology of risk analysis, the possible dangers of nanotechnology should be treated as uncertainties rather than risks. The distinction between risk and uncertainty derives from decision theory. By decision-making under risk is meant that we know what the possible outcomes are and what their probabilities are. In decision-making under uncertainty, probabilities are either not known at all or known only with insufficient precision (Knight 1933, pp. 19-20; Luce & Raiffa 1957, p. 13). In most decision-theoretical treatments of uncertainty, it is assumed that, apart from probabilities, most other features of the situation are well-defined and known. In real life it is not unusual to encounter situations of great uncertainty, by which is meant that other types of information than probabilities are lacking as well. Hence, in decision-making under great uncertainty we may not know what options there are to choose between, what the possible consequences of the options are (and not only what the probabilities of these consequences are), whether or not information from others (such as experts) can be relied upon, or how one values (or should value) different outcomes (Hansson 1996).
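The decision-theoretic distinction can be made concrete with a small sketch. The payoff numbers and the choice of maximin as the rule for uncertainty are illustrative assumptions of this example, not taken from the article: under risk, known probabilities permit an expected-value calculation; under uncertainty, a probability-free rule such as maximin (choose the option whose worst outcome is least bad) must be used instead.

```python
# Illustrative sketch (numbers and rules are this example's assumptions).
# Under RISK the probabilities of the outcomes are known, so a
# probability-weighted calculation is possible. Under UNCERTAINTY no
# probabilities are available, so a probability-free rule is needed.

def expected_value(outcomes, probabilities):
    """Decision under risk: probability-weighted average of the outcomes."""
    return sum(p * o for p, o in zip(probabilities, outcomes))

def maximin(options):
    """Decision under uncertainty: pick the option with the best worst case.
    `options` maps option names to lists of possible payoffs."""
    return max(options, key=lambda name: min(options[name]))

# Risk: a gamble whose outcome probabilities are known.
ev = expected_value([100, -50], [0.5, 0.5])  # 25.0

# Uncertainty: the same payoffs, but no probabilities to weight them.
choice = maximin({"gamble": [100, -50], "abstain": [0, 0]})  # "abstain"
```

The point of the contrast is that once probabilities are withdrawn, the expected-value calculation is simply unavailable, and the decision rule has to work on the outcomes alone.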

The effects of future, as yet unrealized technologies are in most cases subject to great uncertainty. Nanotechnology is an unusually clear example of this. As already mentioned, the technological feasibility of the nanoconstructions under ethical debate is in most cases uncertain. Furthermore, many of the possible future nanotechnologies are so different from previous technologies that historical experience provides very little guidance in judging how people will react to them. The development and use of new technologies are largely determined by human reactions to them, reactions that exert their influence through mechanisms such as markets, politics, and social conventions (Rosenberg 1995).

It is not only the negative but also the positive effects of nanotechnology and other future technologies that are subject to great uncertainty. The most fervent proponents of nanotechnology have argued that it can solve many of humanity's most pressing problems: Nanotechnology can make cheap solar energy available, thus solving the energy problem. Nanoscale devices injected into the bloodstream can be used to attack cancer cells or arterial plaques, thus eradicating major diseases. Synthetic human organs can be constructed that replace defective ones. According to leading cryonics companies, nanotechnology will be used to bring back cryopreserved persons to life.2 These predictions are all subject to great uncertainty in the same way and for the same reasons as the more dire predictions referred to above. However, whereas expounders of the positive predictions seem fully aware of the uncertainty inherent in the negative predictions, and vice versa, both groups tend to deemphasize the uncertain nature of their own predictions.

In dealing with the usual topics of risk analysis, namely reasonably well-defined event types and event chains, experts in particular fields of science and engineering, such as toxicology, structural mechanics, and nuclear technology, can provide much of the information that is needed to assess the risks and guide decision-making. In issues of great uncertainty, such as the positive and negative effects of future nanotechnology, the problem-solving potential of such specific knowledge is smaller. Instead, issues such as the structure and validity of arguments become more important. These are issues for philosophers specializing in informal logic and argumentation analysis. Therefore, uncertainty analysis offers a promising, although unexplored, area for applied philosophy. It is the purpose of the present contribution to introduce a systematic approach to one central topic in uncertainty analysis that is particularly relevant for debates on nanotechnology, namely the critical appraisal of arguments referring to the (mere) possibility of positive or negative future developments.

Mere possibility arguments

Public debates about future technologies are often conducted in terms of what future developments are possible. Nanotechnology is a typical example of this. Opponents of nanotechnology claim that we should refrain from developing it since it can lead to disastrous outcomes. Its most enthusiastic proponents maintain that we must develop it since it can solve many of the problems that are plaguing humanity. I will use the term mere possibility argument (MPA) to denote an argument in which a conclusion is drawn from the mere possibility that the choice of an option, behaviour, or course of action may lead to, or be followed by, certain consequences.

Clearly, the 'can' of some MPAs is accessible to disambiguation. Consider the following dialogue:

I: "It would be wise of you to stop smoking. Otherwise the cigarettes can kill you."
II: "But there are thousands of things that could kill me, and I cannot avoid all of them. In the last few months, the newspapers have contained articles saying that eggs, meat, milk and I think even more foodstuffs can be deadly. I cannot stop eating all of these."
I: "There is a big difference. These food-related dangers are all quite uncertain. But scientists have shown that about half of the smokers die prematurely because of smoking."

Here, the first speaker puts forward an MPA, which the second speaker tries to neutralize (with a type of argument that we will return to in section 4). The first speaker then substantiates the argument, by transforming it from an MPA to a probabilistic statement. This is a common argument pattern. When MPAs are put under attack, their proponents often try to reconstruct them to make them more conclusive.

Although the disambiguation (and probabilistic reconstruction) of MPAs is an important form of argumentation, the focus of the present article is on argumentation that remains on the level of mere possibilities. There are two reasons for this. First, it is a judicious research strategy to study argumentation on the MPA level before ways to go beyond that level are introduced. Secondly, in nanotechnology it is often not possible to go beyond the MPA level of argumentation.

There are two major variants of MPAs:

The mere possibility argument (MPA), negative version:
A can lead to B.
B should not be realized.
Thus, A should not be realized.

The mere possibility argument (MPA), positive version:
A can lead to B.
B should be realized.
Thus, A should be realized.

To exemplify the negative version, let A be the development of nanotechnology and B the emergence of new technological means for mind control. To exemplify the positive version, again let A be the development of nanotechnology, but let B be the construction of nanodevices that efficiently remove arterial plaques.

It is important to realize that argumentation based on mere possibilities need not be faulty. There are situations in which it seems reasonable to let an MPA have a decisive influence on a decision. Suppose that on a visit to an arms factory, a person takes up a just finished pistol, puts it against his head and shows intention to pull the trigger, just for the fun of it. Then someone says: 'Do not pull the trigger. You never know, it can be loaded.' Although there is no reason at all to believe that the pistol is loaded, it would seem reasonable to heed the warning. However, there are also many cases in which it is rational to reject a mere possibility argument or consider it overruled. Suppose, for instance, that someone wants to stop research aimed at constructing nanodevices capable of carrying drugs to their target organ and releasing them there. The argument given for stopping this research is the MPA that these devices may turn out to have severe toxic effects that will only be discovered after they have been in use for many years. This argument is much less persuasive than the argument in the previous case that the pistol might be loaded, for the simple reason that we also need to take into account the possibility that such devices can be used to cure diseases more efficiently than currently available therapies.

A major problem with MPAs is that an unlimited number of them can be created. Due to the chaotic nature of causation, mere possibility arguments can be constructed that assign extreme positive or negative consequences to almost any action that we can take. As one example of this, almost any action that we take can give rise to social conflicts that in the end provoke a war. However, this applies to all actions (and omissions). Therefore, in the absence of reasons to consider it more credible for some of the options we are considering than for others, this is an unspecific (or background) uncertainty that should be excluded from most decision-guiding deliberations. Generally speaking, we need to distinguish between unspecific MPAs that can mostly be disregarded and more specific MPAs that need to be considered in relation to the particular issue under discussion. This distinction can be made by considering other possible future technologies than that under discussion, and determining whether or not the MPA is equally applicable to (some of) them as to the technology for which it was proposed.

A systematic analysis of MPAs is needed in order to protect us against at least two (sometimes overlapping) fallacies. The first of these consists in acting or reasoning on the basis of the previously formulated possibilities only, i.e. on the MPAs that have been brought to our attention rather than on those that are specific to the situation. The second fallacy consists in making a biased selection of MPAs, so that one pays attention to those MPAs that support one's own preconceived viewpoint, but neglects those that speak against it.

In order to avoid such mistakes, and facilitate a rational use of MPAs, two tests will be introduced in the following two sections. The two tests are both based on existing patterns of argumentation, and they can be seen as systematizations of these patterns. They both aim at clarifying whether or not a proposed MPA is relevant for its intended purpose.

The test of alternative effects

An MPA can be defeated by a counterargument showing that we have at least as strong reasons to consider the possibility of an effect that is opposite to the one originally postulated.

Negative MPA, defeated by alternative effect:
A can lead to B.
B should not be realized.
Thus, A should not be realized.
However:
B′ is not less plausible than B in the case of A.3
It is at least as urgent to realize B′ as not to realize B.
Thus, A should be realized.4

Positive MPA, defeated by alternative effect:
A can lead to B.
B should be realized.
Thus, A should be realized.
However:
B′ is not less plausible than B in the case of A.
It is at least as urgent not to realize B′ as to realize B.
Thus, A should not be realized.
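The structure shared by these defeat schemas can be sketched in code. The `Effect` class, the ordinal plausibility ranks, and the signed urgency values below are assumptions of this illustration, not the article's own formalism; the article deliberately leaves the comparative judgments informal, and the sketch merely makes explicit which comparisons the schema requires.

```python
# Illustrative encoding of the test of alternative effects (the data
# model is an assumption of this sketch, not the article's formalism).
# A defeating B' must be (i) not less plausible than B in the case of A,
# and (ii) at least as urgent in the opposite evaluative direction.
from dataclasses import dataclass

@dataclass
class Effect:
    name: str
    plausibility: int  # ordinal comparison only, not a probability
    urgency: int       # > 0: urgent to realize; < 0: urgent to avoid

def defeats_by_alternative_effect(b: Effect, b_alt: Effect) -> bool:
    """Does B' defeat the mere possibility argument built on B?"""
    return (b_alt.plausibility >= b.plausibility      # not less plausible
            and abs(b_alt.urgency) >= abs(b.urgency)  # at least as urgent
            and b_alt.urgency * b.urgency < 0)        # opposite direction

# Negative MPA: brain implants used for mind control (urgent to avoid);
# alternative effect: implants restoring motor control (urgent to realize).
mind_control = Effect("mind control", plausibility=1, urgency=-5)
motor_control = Effect("restored motor control", plausibility=1, urgency=5)
```

Here `defeats_by_alternative_effect(mind_control, motor_control)` evaluates to `True`, mirroring the brain-implant example discussed in the text.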

For a simple example, consider the argument that the development of new nanotechnology (A) may lead to the construction of devices that can be implanted into the human brain, and then used to control behaviour (B). This is a negative MPA. In evaluating it, we also need to look into alternative uses of this technology, such as the implantation of devices with which disabled persons can regain motor control and sensory contact with their body.

The test of alternative effects consists in searching for defeating arguments of these forms. For an example, consider the argument against nanotechnology that is based on the possibility that flying robots, the size of insects, may be developed, and that these can be used for purposes of military attack (Altmann 2001). A possible counterargument can be based on an alternative effect of that technology: If flying robots can be developed, then it is equally possible that they can be used for intelligence purposes. Under the assumption that mutual access to reliable intelligence reduces the risk of war, this may contribute to the avoidance of military conflict.5

In this case it would be natural for the person who put forward the first MPA to modify it by pointing out that insect-sized robots could be used for attack, not only by states but also by terrorists. To this, however, it could be retorted that the employment of such robots for intelligence purposes could radically reduce the capabilities of terrorist organizations to hide away. It is not obvious whether or not the argument referring to military uses of flying nanorobots can ultimately be reconstructed in a form that resists the test of alternative effects. This is not the place to resolve this controversy. What is important, however, is that the application of this test will induce a careful analysis of the MPA and its presuppositions.

The test of alternative causes

The other major way to defeat or weaken an MPA is to show that the postulated cause A is not decisive for the possibility that B will occur. As we noted above, if B is not a specific effect of A, but equally possible in the absence of A, then it should be excluded from consideration. Therefore, counterarguments against MPAs can be constructed along the following lines:

Negative MPA, defeated by alternative cause:
A can lead to B.
B should not be realized.
Thus, A should not be realized.
However:
B′ is not less plausible in the case of not-A than B in the case of A.6
It is at least as urgent not to realize B′ as not to realize B.7
Thus, A should be realized.

Positive MPA, defeated by alternative cause:
A can lead to B.
B should be realized.
Thus, A should be realized.
However:
B′ is not less plausible in the case of not-A than B in the case of A.
It is at least as urgent to realize B′ as to realize B.8
Thus, A should not be realized.

The test of alternative causes consists in searching for defeating arguments of this type. For example, consider the argument against nanotechnology that it can give rise to a 'nano divide', i.e. growing inequalities between those who have and those who do not have access to nanotechnology. This argument is equally plausible for any new technology that has a potential to improve certain aspects of our lives. We already have, on the global level, large 'divides' in terms of sanitation, food technology, medical technology, ICT, etc. It can reasonably be argued that any new technology (including technologies that will receive more resources if we refrain from funding nanotechnology) can be expected to follow the same pattern. Therefore the 'nano divide' is a non-specific effect that does not seem to pass the test of alternative causes.

For another example, consider the statement, sometimes used as an argument in favour of nanotechnology, that it can provide us with means for cheap desalination. The problem with this argument is that we do not know what technologies (if any) can be used to achieve this aim. In particular, we do not know if nanotechnology or some other technology (such as biotechnology) will most probably provide the solution. The prospect of finding means for cheap desalination can possibly be used as an argument for furthering scientific and technological development in general. However, in the absence of a credible outline of a technological solution it cannot be used as an argument for furthering a specific technology such as nanotechnology.

Conclusion

The systematic application of the two tests introduced above helps us to avoid the two fallacies mentioned in Section 2. Both tests involve a search for new, analogous MPAs, thereby rectifying the fallacy of reasoning only on the basis of previously formulated possibilities. Furthermore, in both cases this search focuses on finding new MPAs that constitute arguments against the given MPAs, thereby providing a remedy against the fallacy of considering only those MPAs that point in one direction, namely that of one's preconceived opinions. In combination, the two tests will eliminate many untenable MPAs. This makes it possible to focus discussions on a smaller number of such arguments, which can then be subjected to a more detailed analysis.9 The two tests should be seen only as a first step. In order to analyze more fully the discourse on nanotechnology (or other subjects dominated by issues of great uncertainty), an extensive study of actual argumentation is needed, as a basis for a much more comprehensive discussion of the validity of the various arguments in actual use.



References

Altmann, J. Military Uses of Microsystem Technologies. Dangers and Preventive Arms Control. Münster: Agenda Verlag, 2001.

Hansson, S.O. "The False Promises of Risk Analysis." Ratio 6 (1993): 16-26.

________. "Decision-Making Under Great Uncertainty." Philosophy of the Social Sciences 26 (1996): 369-386.

Knight, F.H. Risk, Uncertainty and Profit. London: London School of Economics and Political Science, 1933.

Kroes, P. & Meijers, A. "The Dual Nature of Technical Artifacts - Presentation of a New Research Programme." Techné 6, 2 (2002): 4-8.

Luce, R.D. & Raiffa, H. Games and Decisions. New York: Wiley, 1957.

Rechard, R.P. "Historical Relationship Between Performance Assessment for Radioactive Waste Disposal and Other Types of Risk Assessment." Risk Analysis 19, 5 (1999): 763-807.

Rosenberg, N. "Why Technology Forecasts Often Fail." Futurist, July/August 1995: 16-21.

Notes

1 These technologies have been characterized only in terms of their functional, not their physical, characteristics. On the functional characterization of technologies, see Kroes and Meijers 2002.

2 See, e.g., http://www.alcor.org/.

3 This holds if, in the case of A, either (i) B′ is at least as plausible as B, or (ii) B′ and B cannot be distinguished in terms of plausibility. Therefore, this clause does not require that the MPA be reconstructed in terms of plausibility (which would, arguably, be a way to reintroduce probabilities through the backdoor). The function of this clause is instead to prevent the use of MPA-level argumentation when there is contravening probabilistic or quasi-probabilistic information.

4 Strictly speaking, if it is equally urgent to realize B′ as not to realize B, then the argument does not suffice to conclude that A should be realized, only to invalidate the argument that A should not be realized. The corresponding caveat applies to the other defeating arguments outlined in this and the following section.

5 This is not an uncontroversial assumption. Note however that the original MPA relies on another controversial assumption, namely that access to more efficient weapons increases either the risks or the consequences of war.

6 As a special case, B′ and B can be identical.

7 This line can be omitted if B′ and B are identical.

8 This line can be omitted if B′ and B are identical.

9 In (Hansson 1996) some criteria are given for identifying serious cases of high-level consequence-uncertainty, including novelty, lack of spatial and temporal limitations, and interference with complex systems in balance. These criteria can also be used in the evaluation of MPAs.


