SPT v4n3 - Humans and Future Communication Systems
Guest Editor: Evandro Agazzi
Guest Editor: Hans Lenk

HUMANS AND FUTURE COMMUNICATION SYSTEMS
Bernulf Kanitscheider, University of Giessen
In his famous work, The Poverty of Historicism (1957), Karl Popper proved the impossibility of forecasting scientific discoveries and technical inventions. The main reason is logical: the discovery of tomorrow cannot be known today; otherwise it would not be a future one. The second reason is of a systems-theoretical nature: creative intuition includes an accidental character. The origin of ideas, like the origin of every spontaneous order, lies in phase transitions within the neurochemistry of the brain, and these are never deterministic processes and are therefore unpredictable. Creativity and its outcome, novel ideas, cannot be subjected to forecasting, on account of the inherent stochastic elements.
Niels Bohr had already formulated this in his famous quip: "Prediction is very difficult, especially about the future." Despite Bohr's skeptical position, there exists a host of scientists who attempt to discern some traits of future social or technical development. Most of what futurologists maintain can only be regarded as entertainment, but a few authors try to elaborate pattern predictions; they use a kind of plausible reasoning that evades Popper's strict logical argument, because pattern predictions never claim exact solutions of theoretical or technical problems (see Hayek, 1964). There are tendencies and structures within the processes of scientific and technical discovery, and these patterns can be used to extrapolate into the future.
This is not the same as analytical computation on the basis of deterministic laws, as in celestial mechanics. It is of utmost importance, and not just an idle game, to have some faint idea of how nature, society, and individual men will fit together in times to come. Only if we have some outline knowledge of future problems are we able to cope with them. Extrapolation from the present state of technological development, however, involves great uncertainty. How great can easily be gauged from the science fiction novels written at the beginning of the century. At that time, writers of science fiction were quite confident about the future fate of society and urban life. They outlined new towns hovering in space in which men could realize all their desires and fantastic illusions, unhampered by terrestrial restrictions.
Today, predictions mirror an atmosphere of gloom, because the spirit of the times has changed considerably. The prospects of novelists have become more pessimistic; they tell of environmental catastrophes or nuclear winters. Be that as it may, on a few occasions science fiction writers have predicted what seemed to be impossible. For example, the idea of a geostationary communications satellite was described by Arthur C. Clarke in 1945, nearly two decades before its realization.
The geostationary satellite exhibits a clear-cut application of the laws of gravity; it does not operate outside the realm of physics. This example shows that there are two types of extrapolation: one within the domain of scientific theories, the other outside the corroborated rules of scientific rationality. Visions of the future that are worth mentioning operate within the world of science; they respect the laws of thermodynamics, quantum mechanics, and relativity, as well as the pertinent biochemical and neurological laws that govern living beings. It seems to me boring to play with logically possible worlds governed by arbitrary laws, but it is exciting to ponder possible processes within the realm of our scientific laws.
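As an aside on the physics Clarke relied upon, the altitude of such a satellite is fixed by elementary Newtonian mechanics; the following textbook calculation is added here only as an illustration. Gravity must supply exactly the centripetal force for one revolution per sidereal day:

\[ \frac{GMm}{r^{2}} = m\,\omega^{2} r \quad\Longrightarrow\quad r = \left(\frac{GM}{\omega^{2}}\right)^{1/3} \approx 42\,164\ \mathrm{km}, \]

with \(GM \approx 3.986 \times 10^{14}\ \mathrm{m^{3}/s^{2}}\) for the Earth and \(\omega = 2\pi / 86\,164\ \mathrm{s}\). That corresponds to an altitude of roughly 35,800 km above the equator, the one orbit in which a satellite appears to hover motionless in the sky.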
Jules Verne is the epitome of the visionary of future applications of scientific laws. It was he who opened new technical roads within the margins of classical physics. Although the idea of space travel appears as early as the seventeenth century, e.g., with Cyrano de Bergerac, it was Jules Verne who, with his exact accounts, anticipated the conquest of space. Through his novels, his audience got an idea of what it would feel like to be a space traveller, to move in a free-floating orbit, to live in a state of weightlessness. In our time, authors like Isaac Asimov and Stanislaw Lem have tried to fathom the possible states of reality within the domain of scientific law. Some famous scientists, like Fred Hoyle, and philosophers, like Olaf Stapledon, have also attempted to sketch utopias.
From a philosophical point of view, one publication gave all speculation about fictitious events a new direction: William Gibson's Neuromancer of 1984. In this utopian novel Gibson coined the terminology that was later applied in the technology of cyberspace and that led to the crucial concept of virtual reality (see also Angulo, 1995). The introduction of this key term engendered a lively epistemological debate on the ontology of technological objects. On the customary view of epistemology, objective knowledge consists in the analysis, description, and explanation of traits that belong to an autonomous nature existing independently of human sense perception. In every realist epistemology there is a clear-cut distinction between the subject of knowledge inside the minding animal and a real system as the object and aim of cognition. Both kinds of reality, the spiritual reality of the mind and the concrete reality of the world of material things, were neatly separated.
Of course, there were always figments of the imagination, e.g., the fantasies of novels, fairy tales, legends, myths, and fables. But however lively they were, it was quite clear that the inventor of those stories created possible worlds, never real events. Only children would occasionally ask whether Little Red Riding Hood (Rotkäppchen) refers to real or virtual situations. No adult could doubt that the fairy of the Tales of the Thousand and One Nights refers to nothing that exists.
Only philosophers brooded about the existence of the possible, e.g., Aristotle on the sea battle tomorrow; and a few of them caviled at the reality of future events, doubting even the objectivity of time. Extravagant philosophers like the Austrian thinker Alexius Meinong (1915) claimed the existence of impossible worlds, at least as something about which we can speak; obviously we are talking about something when we quarrel about the existence of impossible things. Remember that Lewis Carroll's White Queen had believed as many as six impossible things before breakfast. Outside the realm of philosophy and the protecting walls of a university institute, however, it is very dangerous for survival to confound virtual and real objects. To live with illusions in wild deserts means to be in jeopardy of one's life. In the struggle for existence there was every reason to be aware of the distinction between real and virtual entities. But with the coming of civilization and the relief of survival stress, that situation changed.
The radio receiver seemed to initiate a novel type of reality, communicating to the listener occurrences which are far away and which, moreover, the listener can no longer check. Broadcasting was not especially true-to-life, not much more than a representation of a sequence of events, like the description in a book. But a vividly written book could bring the world described by the novelist into the mental presence of the reader. Then, with the development of television, a process began that can be regarded as a slippery slope towards blurring the distinction between natural and virtual reality (see Carrascosa, 1992). It reached bottom when the Gulf War became, for the viewer, nothing but a telenovela, like the daily episodes of a crime thriller.
An epistemological purist will argue that, from a logical point of view, the existence of the war and its representation on television belong to distinct ontological levels. But this ontological difference is a theoretical one. With the growing intensity of the representation, the two kinds of reality get blurred, at least phenomenologically. In principle, a member of the audience keen to dig out the ontological truth could interview a technician, but for the normal media addict there is no way to discriminate between media virtuality and original thinghood. The difference between these two kinds of reality will in the future be of purely theoretical importance, like, for example, the apparent movement of the sun around the earth: only professional astronomers can explain to the layman the true picture of the motion of the celestial bodies hidden beneath the phenomena.
Visionaries of technical evolution outline breathtaking prospects for future worldwide communication. In the twenty-first century, we will have millions of connected computers which deliver, by means of thousands of satellites, pictures of every point on the planet; even the depths of the sea will show up before our eyes. The future television viewer will be sent on a virtual voyage; he or she will experience, without leaving home, landscapes and inspiring impressions of faraway regions, becoming an electronic globetrotter well fed with bits of information of mixed ontological status. Excursions to Deimos and Phobos, the Martian moons, or to the outer margins of planetary systems will be presented in a way that is partly real and partly fantasy. There will be a kind of quantum superposition, on the screen, of adventures made of figments and of representations.
Are all of these displacements of reality to be lamented? As I see it, there are pros and cons.
The advantages can be regarded as an embellishment of the nasty traits of reality, and therefore as a relief for humans. The manipulation of the original ugliness of daily life will bring some comfort. Some critics, however, argue that the virtualization of reality will encourage a flight from existing problems without solving them. Already psychologists warn about television, pointing out that addicts live a second-hand life: they do not experience actual adventures but only feel emotions (e.g., fear) without the dangers that normally threaten life in such situations. The television viewer acts like a coward, eschewing the challenges of a real adventuresome life. It can be expected that people with little strength and courage will take refuge in electronic virtual worlds. And the retreat to an ecological niche of virtuality debilitates the original vitality. That, at least, is what is feared.
On the other hand, one of the exciting prospects of future communication systems is social. It begins with participatory television. The viewer takes part in the occurrence; he interacts with the telenovela and can steer it in a desired direction. Thereby a new kind of social participation can be gained, and a heavy impact on social structure is to be expected. All citizens of a state, although separated in space, are connected by the media system, and they will influence one another in the most intimate way. Defenders of the interactive screen hope to transform the passive "couch potato" into a creative person. In future interactive television, the viewer will be set in the thick of things. He can intervene in the virtual reality and may, willy-nilly, become a central figure of the telenovela. But there is more: film makers and movie directors are already planning computer substitutes for the heroes of the screen. Think of the first fully computer-animated feature film, Toy Story, produced by Pixar for Walt Disney Pictures.
If it turns out this way, the reality shown can be regarded as virtual in a double sense: the plot has been invented, and, beyond that, the actors are not living persons but software products constructed by a clever engineer. At this point it becomes quite clear that, at least phenomenologically, we are encountering a continuous transition between various types of reality. In the recently opened Center for Art and Media Technology in Karlsruhe, we encounter an unending variety of virtual possibilities: "Interactive Plant Growing," where, with a delicate touch, a host of never-seen flowers are raised; "The Tree of Knowledge," which can be made to sprout on verbal command; "The Legible City," where a virtual bicycle ride takes you through a New York built of words; and an interactive girl, who can be animated by the right questions.
A highlight of the current way of constructing virtual worlds is the new technology of cyberspace. A cyberworld contains a program to augment natural reality with new components, hitherto nonexisting entities, or to connect a person with objects of desire heretofore unattainable. Here we see the proper dynamics of technology at work. Not only objects are at stake but also persons, communication with whom can be established although they died long ago. The cyberworld includes in its domain every possible and thinkable system of emotional and even sexual desire. Even the psychology of partnership is affected by virtual reality if, for example, one or both partners get deeply involved with a computer-produced creature.
Does it make sense for a wife to become jealous of her husband's computer interaction with a virtual Claudia Schiffer, or, vice versa, if she enters into a relationship with a virtual David Copperfield? Obviously we need a redefinition of jealousy, or rather a bifurcation of the semantics of the concept, one branch concerning natural and one concerning virtual persons. A few years ago there was a discussion on CompuServe about this novel moral problem engendered by cybersex. The participants agreed on this demarcation: "If you don't put it in, it ain't cheating" (Gers, 1997). But it seems quite clear that this is nothing but a preliminary and purely conventional definition.
What technicians have in mind is not primarily personal satisfaction but a global linking of individual consciousnesses, coupled in a neuronal-electronic grid. The neural network of each brain gets linked to a worldwide information net, where it can communicate with every other brain, however distant. We may call this a joining together of all human brains into a global intelligence, in which humankind merges into one thinking entity with a capacity for reasoning much higher than that of the individuals thinking separately. What philosophical conclusion can be drawn from this situation if the prospects of the technological visionaries are fulfilled? Will it be a nightmare to lose independence of thinking, or should we be pleased that new margins of intellectual activity have opened?
My reactions are very mixed, to say the least. The new modes of communication foster the free play of intellectual forces, and a high level of communication reduces xenophobia, because persons who exchange valuable information and therefore have to rely on each other tend to be less hostile. The communication network will surmount language barriers, and the existing babel of tongues is one of the main obstacles to the peaceful coexistence of populations with their different cultural and social heritages. Foreign social structures will no longer hamper friendly contacts, because the network delivers the necessary translation.
The other side of the coin is permanent connection and availability. It will probably be all but impossible to disconnect oneself from the network and return to the individuality of the hoary days of yore. The momentum of technics tends to enslave humans, as does every social network. Disconnection from the network will require justification: whoever cuts the communication channel will later be asked why he did so, and he will have to defend himself.
More and more of our contemporaries get nervous when they realize the persecution that the future has in store for them. Will it be possible to flee from all communication to a faraway island, or is there no escape from the social control connected with the inherent obligation to remain within reach? Behavior control is the great menace that everybody has in mind when confronted with a global communication system.
Petty bourgeois society already tends toward behavior control, and it is to be feared that a global communication system will enhance this tendency. Deviant behavior has always been suppressed by defenders of middle-class values and bourgeois conformism, so one has to expect that the novel means of control will strengthen discrimination against outsiders within society, those who live "beyond the fringe." Cynics have remarked that the abbreviation PC can be read in two different ways: in computing it stands for Personal Computer, but ideologically it can be read as Political Correctness. Skeptics of communication technology have assumed hidden connections between these two notions. Obviously there is a grain of truth in this play on words, because communication exerts an equalizing force on the behavior of its participants.
Beyond that, there is a minority of recalcitrant people who refuse to regard it as marvelous to be connected to the whole world, because abroad we encounter the same stupid and boring people as in our own vicinity. To have the trivialities of life with which we are quite familiar become the norm in a worldwide web appears to them a monstrosity. These critics deny that, on the opposite side of the planet, we will meet anything worth being permanently connected with; they say that our local area is already crowded with narrow-minded contemporaries. In any case, the expected kinds of worldwide communication systems are in need of additional regulation. Otherwise the permanent connection of all individuals will lead to a loss of freedom or to persecution.
The ambivalence of the new technologies can also be judged from the point of view of evolutionary biology. We are primates with brains conditioned in the Stone Age; we are Pleistocene hunter-gatherers whose mold of social behavior was set long ago. These historical brains will be combined with the superstructure of universal communication, and it can be doubted whether the two programs can be brought into alignment at all. We should be watchful; what is at stake is nothing less than the fate of mankind. Will we be luckier with the new channels of information? I am not quite sure. We are not only the subject but also the object of these new experiences.
At first glance everything seems to promise simplicity. With a pencil we underline an interesting book announced in a journal; the pencil is connected by an electronic device to a bookstore. Within a few minutes it becomes clear whether the desired book is in stock. We need not search any longer, and so we relieve traffic congestion in the city.
The other side of the coin: there is no incentive to leave the information terminal, no stimulus to pursue anything (other than members of the opposite sex). The future way of life thereby becomes more and more boring, without risk or adventure; even the traffic jam loses its hair-raising aspects, because we were warned long ago to avoid the blocked highway. A stoppage of traffic has its funny sides; many a time it has brought people into close contact with one another.
Queuing up has its social aspects too; it is possible to exchange the latest news or to make friends with the people waiting in line. Although connected with a host of persons electronically, people imprisoned in an information terminal are likely to be lonely, unless, fed up with virtuality, they jump back into real life.
Travelling by plane becomes much easier, without wasted time: our alarm clock, for example, can be linked with the airport, so that if there is a departure delay we need not hang around in the waiting room but can sleep two hours longer.
To gain time and to be spared nerve-racking waiting periods sounds quite promising. We are eager to save time, money, and energy, and we are interested in relieving anxieties. If computers can take over the burdens of daily life, it would be a great relief: we could live, unburdened, up to our spiritual and emotional level of freedom.
However, it may be that this way of enjoying the freedom of self-realization is only the dream of a few intellectuals. Many contemporaries might well suffer from extreme boredom once they no longer had to organize daily life. The enthusiasts of technology promise us the moon, but, as I see it, unhindered mechanization is like a trip to nowhere in particular, a mystery tour. Presumably success or failure will depend on personal open-mindedness and flexibility, and surely a few will fall into bad ways.
There is another realm of investigation in which the new technology of communication is absolutely mandatory and of an existential character for mankind: global ecology (see Mohmann, 1991). Mankind has grown to an unmanageable size, and the complexity of this oversized system has engendered emergent problems that cannot be handled by natural brains alone. The population explosion, the scarcity of raw materials, and the disposal of garbage bring to the fore a complex situation with which the historical biological neural network is unable to cope. Here we encounter exciting proposals from neuroinformatics to solve those problems which are too difficult for human brains. Artificial neural networks have learning abilities far beyond the biological hardware of primates; they can be used to amplify human intelligence in order to overcome the limited theoretical possibilities of our inherited brain. The advocates of neuroinformatics are quite confident that, with the enhanced intelligence of the coming generations of neural nets, we will be able to cope with the problems of terrestrial ecology, with controlling the atmosphere and the oceans. Neurocomputers and parallel computers will serve as amplification devices, with the long-term aim of substituting for the limited hardware of our primate brain (see Eckmiller, 1990).
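To make the notion of "learning ability" concrete, here is a minimal sketch, not taken from the author's text: a tiny artificial neural network, written in Python with the numpy library, that learns the XOR function from four examples by adjusting its connection weights. The network size, the task, and every name in the code are illustrative assumptions only, not a description of the neurocomputers discussed above.

    # Minimal illustrative sketch (assumption: numpy is available); not the
    # author's system, just a toy network that learns XOR from examples.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: the XOR truth table, a task no single linear rule can solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of four units and one output unit; weights start random.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    lr = 1.0  # learning rate
    for step in range(10000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)      # hidden activations
        out = sigmoid(h @ W2 + b2)    # network output

        # Backward pass: gradients of the squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent update of all weights and biases.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]

The point of the toy example is only that the system acquires its competence from instruction by examples rather than from fixed, inherited wiring, which is the contrast with the Darwinian brain drawn below.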
From a philosophical point of view, one has to keep in mind that the introduction of novel rational entities means a ceding of competence. In certain areas, e.g., the worldwide economy or global ecology, we would have to concede authority to a superior intelligent being. There is no escape from trusting its proposals, because we can only criticize what we understand: if the global neurocomputer operates beyond human understanding, we are well advised to submit to its suggestions.
Up to now, no global neurocomputer exists, and it is an ethical question whether we should build such a thinking device. Maybe within a few years we will be confronted with an overwhelming choice: submission of mankind to an artificial higher intelligence, or the destruction of mankind. At that point we would encounter a lively debate on the survival of mankind. There have been attempts to estimate the risk of extinction. The famous cosmologist Brandon Carter has put forward a doomsday argument. In the formulation of John Leslie (1996), we ought to be reluctant to believe that we are exceptionally early among all the humans who will ever have lived; there is thus some reason to think that humankind will not survive for many more centuries, let alone colonize the galaxy using computer intelligence.
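The logic behind the Carter-Leslie argument can be sketched in a simplified form (this is the rough "Copernican" version of the argument, not Leslie's full Bayesian treatment). Treat one's own birth rank \(r\) among all \(N\) humans who will ever live as a random draw; then with 95 per cent probability one's rank does not fall in the earliest twentieth of the sequence:

\[ P\!\left(\frac{r}{N} > 0.05\right) = 0.95 \quad\Longrightarrow\quad N < 20\,r \ \text{with 95 per cent confidence}. \]

With roughly \(r \approx 6 \times 10^{10}\) humans born so far, the total would be bounded by about \(1.2 \times 10^{12}\); on this reasoning one should hesitate to expect a galaxy-colonizing future containing vastly more observers than have already existed.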
I do not want to propagate the feeling of doom that is nowadays the order of the day, but some global problems are real. Artificially intelligent entities evolve by learned instruction, in a Lamarckian way so to speak, whereas natural biological entities vary only by very slow Darwinian evolution, on a time scale measured in millions of years. A Darwinian increase in the capabilities of the primate brain will always come too late to solve the pending worldwide problems. Neurologists are convinced that the human brain has a disposition to enlarge, probably to double its size within the next million years; there are even quantitative models of the phylogenetic growth of the hominid brain, developed by Otto-Joachim Grüsser (1985). But in the face of urgent global problems it is clear to everybody that, by the time the brain might have reached sufficient efficiency, these difficulties will long since have outgrown it. For this reason we can hardly avoid counting on computer technology in order to survive (be it only for some time, if Brandon Carter's doomsday argument is correct).
It seems to me a philosophical consequence of utmost anthropological importance that humans in the long run cannot cope with the very problems caused by their own species. Obviously, like any other animal species, we are not self-contained. We must rely not only on tools (Werkzeuge) but also, and in the same way, on thinking devices (Denkzeuge) in order to secure survival.
H. G. Wells, the famous futurologist, had a foreboding of the new situation when he uttered the remarkable sentence: "The most exciting fact which derives from future science is that man is not the ultimate; as I see it the most fascinating question is what comes after men." This vision perhaps sounds frightening; within our anthropocentric worldview, there is hardly room for a successor of mankind, be it natural or artificial. In any case, we need not understand Wells in the sense that computers will supersede mankind at one stroke, but in the sense that we cannot evade serious cooperation with AI devices. There will surely be a long period of transition. What will happen very far in the future—i.e., on a cosmological time scale—is beyond even theoretical speculation.
REFERENCES
Angulo, M. M. 1995. Random Access Memories: Mechanism and Metaphor in the Fiction of William Gibson. Ann Arbor: University Microfilms.
Carrascosa, J. L. 1992. Quimeras del Conocimiento. Madrid: Fundesco.
Eckmiller, R., ed. 1990. Advanced Neural Computers. Amsterdam: Elsevier.
Gers, F. 1997. "Von Chat zum Bett." Der Spiegel, special number 3, p. 34.
Gibson, W. 1984. Neuromancer. New York: Ace.
Grüsser, O. J., and L. R. Weiss. 1985. "Quantitative Models on Phylogenetic Growth of the Hominid Brain." In Hominid Evolution: Past, Present and Future. New York: Alan R. Liss. Pp. 457-464.
Hayek, F. von. 1964. "The Theory of Complex Phenomena." In M. Bunge, ed., The Critical Approach to Science and Philosophy. New York: Free Press. Pp. 332-349.
Leslie, J. 1996. The End of the World. London: Routledge.
Meinong, A. 1915. Über Möglichkeit und Wahrscheinlichkeit. Leipzig.
Mohmann, H. 1991. "Internationales Umweltrecht und globale Umweltpolitik." Spektrum der Wissenschaft. June. Pp. 68-80.
Popper, K. R. 1957. The Poverty of Historicism. London: Routledge.