
Volume 4, Number 4, Summer 1999

PREFERENCES AND VALUE ASSESSMENTS IN CASES OF DECISION UNDER RISK

Alois Huning, University of Düsseldorf

Mankind has begun to take an active part in the evolution of nature, even in the evolution of our own human nature. The well-known philosopher of technology Hans Sachsse once even said that we have helped nature in its evolution. But it is not yet certain whether our interference is indeed a help. Very often we do not know for sure what the consequences of our activities will be. Nevertheless we have to decide and to act; and even when we want to avoid action, we have to recognize that not deciding and not acting also have consequences. Regarding the future, we know and must take into account that all our activities, and all our abstentions from activity, are decisions under risk.

The notion of risk carries a whole series of associations and connotations. Risk is often defined as a combination of the level of possible damage or loss and the probability that this damage or loss will occur. We speak of environmental or ecological risks, of the user's risk, of the consumer's risk, of future risks, of risky consequences, of the risk of possible abuse, of the risk of failure. All this indicates the general ambiguity of technology and, more generally, of every human activity.
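This standard definition can be stated compactly. As a minimal sketch (my illustration, not the author's formalism): if an undesired event has probability p of occurring and would cause a loss of magnitude L, the risk is commonly expressed as the expected loss,

\[ R = p \cdot L, \qquad \text{or, over several possible outcomes,} \qquad R = \sum_i p_i L_i . \]

A one-in-a-thousand chance of losing 10,000 units thus yields R = 0.001 × 10,000 = 10. Such figures make risks comparable across options, though reducing a risk to a single number already compresses many of the value judgments this paper is concerned with.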

It is our everyday experience that we live in a world that is no longer--if it ever was--pure virgin nature, but a world that has been shaped by free human decisions and actions. And we can only survive in this world because of and by means of our technical interventions. Every technology is an intervention in the course of nature, shaped and used according to human purposes that follow from needs and wants, and these interventions in turn have effects on ourselves. How far are we allowed to go in our transformation of nature? It is certain that our technical actions cannot be without limits; we cannot and may not permit risks that destroy nature as our means of life, that deprive us of the basis of our existence.

It has always been an ethical and responsible task to decide about the development and use of techniques and technologies. This becomes especially evident in our present times, when we experience more than ever in history the continuously increasing power of humans over nature and the increasing manipulation of nature's evolution, and when we must register, ever more precisely, the harmful consequences of our decisions and actions. The transformation of our world through science and technology thus forces us to reflect on advantages and disadvantages, to take up our responsibilities in our time, and to consider carefully what we are allowed to do and what not.

Doing this, we very soon notice that ethically responsible action in the field of technology cannot easily, if at all, be guided by categorical or absolute claims or principles. There is not always, indeed there is ever more rarely, a clear vision of good and evil; and our decisions are not always made between these extremes. Even with the very best moral intentions, the risk that lies in our uncertainty about results remains. Very often it is unrealistic to think in terms of either-or, of good or evil; instead of this choice we have to weigh a "more or less," because we decide in favor of a positive result that we want to achieve while knowing of negative consequences, or at least risks, that we believe we can accept for the sake of the consequences we intend.

Having to deal with risks that above all have their origin in our uncertainty about the future, we are obliged to develop and practice an ethics of value assessment, an ethics that weighs, measures, and evaluates different goods and evils, values and losses.

Such an ethics of value assessment, as the realization of responsibility in practice, can only be a probabilistic ethics of human rights on the basis of natural law. That is the thesis I want to illustrate. After briefly mentioning three fundamental statements, I will concentrate on what I mean by "probabilism" in cases of decision under risk.

The first statement concerns the phenomenon that I, like everybody else, live in one world together with others--with other human beings, with animals, plants, and material realities--in one world that is common to all of us.

Regarding the foundations of natural law, it is important to note that this does not mean an absolutely fixed status of the nature of reality; there are changes in the course of history even in the relatively constant fundamental realities. "Nature" is not an eternal idea, but is becoming; physis is growth, change in the course of time. The historicity of human practice has to be integrated into a concept of nature that includes evolution. Marx could have said: since man is a product of his work and of his technology, his nature is continuously changed by his own actions. This idea can be found already in the philosophy of Aristotle, where the entelecheia does not exist at once and forever in its final perfection; it presents itself in changing historical phenomena, in a space-time appearance, as a kind of step towards perfection. This introduction of historicity into the very nature of human beings brings with it a certain relativity even for the fundamental value of human life and the right to life.

The second fundamental statement is that the right to life and the other claims that follow from the first statement can only be guaranteed if I grant everybody else what I myself expect from them. Since everybody finds himself in the same existential situation, everybody has to acknowledge this so-called Golden Rule in its positive as well as in its negative formulation. Whoever does not accept the consequences of the Golden Rule, whoever does not acknowledge this rule, can be forced to follow it by means of sanctions, in order to preserve the conditions of the fundamental value of life. This rule can be formulated in different versions of ethical theory: in contractual ethics as well as in different kinds of utilitarian ethics, in the ethics of justice, or in Habermas's ethics of communication as the transcendental condition of participation in the community of argumentation. Even in the early form of the famous categorical imperative it seems to be the basis, when Kant says that we have to "act in such a way that the maxim of (our) will can simultaneously apply as the basis for a universal law."

In order to pass from formal rules to concrete orders we need a third basic statement. Our decisions and actions are morally justified if they are reasonable, if they correspond with reason--reason here operating within the framework of the entire order of reality.

This correspondence with reason can only be a probabilistic one, because all our actions concern the future, which can never be known with absolute certainty.

Reasonableness in a probabilistic sense means that there are good positive reasons for the legitimation or justification of an action, even if there are also reasons that support omitting this action or taking another; this, of course, does not apply if the fundamental right to life is placed in unjustifiable danger.

Against the background of these fundamental assumptions--my life in one world with others, the Golden Rule, and the principle of reasonableness--I will now explain what I understand by "probabilism" as the method of stating preferences and making value assessments, of evaluating goods and values in cases of decision under risk.

A theory is probabilistic if it relies not on certain, or certainly true and valid, reasons but on likely and good ones. The Latin word probabilis means "likely," "well-founded," "approvable." Since, especially with regard to the future, we never have complete certainty, we can never be entirely without doubt. Of course there are cases where there is almost no doubt, and we can make moral decisions without hesitation; these are the cases where doubt and risk and indecision amount to, or come close to, zero.

Probabilism is not a theory for deciding between good and evil, but for decisions between more or less good or bad. Probabilism is the theory that opts for freedom of decision in all cases where there are good reasons for the decision, even when there are also arguments for the contrary. The question is a matter of balance, of weighing the importance of the arguments. How much weight must the arguments have to allow me to proceed to a decision or an action? And how much weight do the risks that I can foresee carry? Is any reason good enough? Or must the good reasons outweigh the counter-arguments, and to what extent? This is the discussion between probabilism and probabiliorism--the latter demanding a higher probability on the side for which the decision is made.
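The contrast can be made explicit. As a rough formalization (my own gloss, not the author's notation): let \(w_+\) be the weight of the reasons in favor of an action and \(w_-\) the weight of the reasons against it. Then

\[ \text{probabilism: the action is permissible if } w_+ \ge \theta \text{ for some substantial threshold } \theta > 0, \text{ even when } w_+ < w_- ; \]
\[ \text{probabiliorism: the action is permissible only if } w_+ > w_- . \]

Rigorism and tutiorism, discussed below, would in effect demand that \(w_-\) be close to zero, that is, practical certainty.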

Probabilism refers to the old Roman rule lex dubia non obligat (when there is doubt about a legal ruling, there is no obligation), or, in ethics, obligatio dubia obligatio nulla (when there is doubt about an obligation, there is no obligation).

I am convinced that probabilism is an ethical attitude which can guide individual and social practice, at least for so-called normal risks, while probabiliorism, which demands weightier arguments, is the right theory for extraordinary risks. (And probabiliorism must not go as far as rigorism or tutiorism--theories which demand certainty to qualify an action as morally justified and acceptable.) Here a remark is necessary: it is not self-evident whether a given risk is a normal or an extraordinary one.

Certainty in the evaluation of risks is logically impossible, since risks are linked to our uncertainty about the future. Therefore no rigorism can be justified. We Germans still suffer from Kant, who was such a rigorist. He once said: Quod dubitas, ne feceris--what you doubt, do not do. On this view, one would not be allowed to do anything at all that carries the danger or the risk of being wrong or harmful, or of having unwanted or harmful consequences.

It is quite interesting that Hans Jonas, in his book The Principle of Responsibility, has taken up this rigorism (and that certain social groups follow him) when, for example, in the face of technological risks he claims that preference should be given to negative or pessimistic prognoses. A preference for negative prognoses means precisely this: as long as you have doubt, you may not act. That would mean that every assessment of technology--which regards the future--must end in an arrest of technology. But we must live with technology, because we want not only to survive, but to lead a life worthy of human beings; and we want to live not only today, but tomorrow as well.

The consequence is that we must follow the theories of probabilism and probabiliorism as middle positions between laxism (which allows anything that can be done) and rigorism (which bans all activity and progress that involves risk).

But how much weight must my arguments have to favor an action? My answer is this: for my own decisions and actions, and for extraordinary risks, I need more or weightier arguments; for the evaluation of the actions of others, and for normal risks, I can only demand probabilistic reasons. This can be expressed by another principle of Roman law, in dubio pro reo: in case of doubt, favor the defendant (or the one whose freedom is at stake). This has certain consequences for legal rulings, since the law cannot make the strictest moral exigency a norm for everybody; the law can only guarantee peace and order in a society.

We have to lead our morally responsible life under the conditions of a contingent world, where we have to choose and find a balance between good and evil. As Wilhelm Korff puts it in his teaching on social ethics, whatever contributes to the success of our human existence is the result of processes of optimization. The weighing of goods and values is what gives ethics its reality and seriousness in daily practice. It also includes the right, or the obligation, to choose the lesser evil if an evil consequence cannot be excluded.

Examples of the risks and opportunities of new technologies are discussed these days above all in the fields of information technology and genetic engineering; but in principle such discussions have accompanied the development of technology throughout history--for example, the introduction of electricity and electric light, or the changes in mobility from horse and carriage to car, railway, and airplane. A famous example is the introduction of water pipelines into private households in England. The immediate and intended effect was better health and longer life expectancy--people died at about sixty years of age instead of forty--but people then suffered in their later years from kidney diseases caused by the lead pipes. The change from lead to copper pipelines freed them from kidney trouble but brought on other diseases, especially for babies. And nobody yet knows what kind of evil will follow from the change to plastic pipes, which avoid the evils of lead and copper. But there are good reasons for these innovations, whose consequences we cannot yet measure. We decide and act under risk; the future is not ours to see. But that does not have the fatalistic consequence Que sera, sera, since it is we who decide and act, and what will be depends on us.

As philosophers, we have to act as a kind of attorney, pleading for the values of humanity and for preferences and value assessments in the real world that will open up opportunities for a human and humane future, for ourselves and for future generations. We must help all the peoples of the world to learn and acknowledge that we all live in one world, in which we take part together with others. We can only find our orientation in a reasonable evaluation of dangers and opportunities, where our decisions and actions always take place under conditions of risk.

Copyright 1996, by the Society for Philosophy and Technology.

Copying is permitted for noncommercial use by academic libraries, computer conferences, and individual scholars. Libraries are authorized to add the journal to their collection, in electronic and/or printed form, at no charge. This message must appear on all copied material. Any commercial use requires permission.