SPT v7n3 - Man and Machine in the 1960s


Volume 7
Number 3
Spring 2004

Man and Machine in the 1960s 1

Sungook Hong
University of Toronto
Seoul National University

"Remember your humanity and forget the rest." (From the invitation to the first Pugwash Conference, 1957)

Introduction

In 1960, the father of cybernetics, Norbert Wiener, published a short article titled "Some Moral and Technical Consequences of Automation" in Science. Wiener here distinguished between industrial machines in the time of Samuel Butler (1835-1902, the author of Erewhon, a novel on the dominance of humans by machines) and intelligent machines of his own time. Machines circa 1960 had become very effective and even dangerous, Wiener stated, since they possessed "a certain degree of thinking and communication" and transcended the limitations of their designers. Describing game-playing and learning machines in detail, he contemplated a hypothetical situation in which such cybernetic machines were programmed to push a button in a "push-button" nuclear war. Simply by following the programmed rules of the game, Wiener warned, these machines would probably do anything to win a nominal victory, even at the cost of human survival. Because machines had become so fast and smart, and their actions so irrevocable, humans, unlike those of the industrial age, "may not know, until too late, when to turn it off." The fictional dominance of humans by machines, which Butler had worried about and vividly depicted in Erewhon, had been transformed into a reality (Wiener 1960, pp. 1355-1358).

Wiener's essay symbolized the beginning of a new conception of the man-machine relationship in the 1960s. The sixties witnessed the Cuban missile crisis, the Apollo Project, the counter-culture movement, the Vietnam War and student protests, political assassinations, the civil rights and feminist movements, the publication of Rachel Carson's Silent Spring (1962), and the beginning of the environmental movement. To some the sixties was a Golden Age or "mini-renaissance"; to others it was the age of the disintegration of traditional values.

The sixties was also an age of new science and technology, for which people had high hopes as well as the deepest fears. "Quarks. Quasars. Lasers. Apollo. Heart transplants. Computers. Nylon. Color TV. Pampers. The Pill. LSD. Napalm. DDT. Thalidomide. Mutual Assured Destruction. Star Trek. Dr. Strangelove. The Sixties had them all" (Moy 2001, p. 305). In molecular genetics, the structure and function of RNA and the mechanism of genetic coding were discovered. In technology, satellite communication became popular, and man landed on the moon in 1969. The introduction of contraceptive pills in the early 1960s helped trigger the sexual revolution, and progress in other medical technologies—the synthesis of insulin, new vaccines, and the transplantation of organs (tissue, kidney, heart, lung, and larynx)—was notable. Electrical technologies such as color TVs and music players, as well as computers, became more popular. Some technologies, however, were tightly linked to war. Dwight Eisenhower explicitly warned in his farewell address in January 1961 about the dangers of the "military-industrial complex." The "electronic battlefield" introduced by the US government during the Vietnam War simulated enemy movement electronically. The rapid stockpiling of hydrogen bombs heightened fears of total annihilation (Mendelsohn 1994). Criticizing "man-made pollutants that threaten to destroy life on this earth," Rachel Carson's Silent Spring compared nuclear fallout to pesticides like DDT, itself a product of America's "total war against human and insect enemies" during WWII. Films such as Fail-Safe (1964), Dr. Strangelove (1964), 2001: A Space Odyssey (1968), and Colossus (1970) depicted frightening relationships between humans and new technologies such as the control box of the nuclear bombers (the fail-safe box), the Doomsday Machine, and intelligent computers. 2

This paper will discuss new conceptions and ideas about the relationship between man and machine that emerged in the 1960s. Although the domination of humans by machines has always been a feature of critical commentaries on technology, the relationship between man and machine in the 1960s had some unique features. Automation with cybernetic and flexible machines, which had begun in the 1950s, created widespread concerns and debates in the 1960s. Cybernetics, systems theory, and intelligent computers blurred the strict boundary between machine and organism, forcing people to rethink the nature of human intelligence and understanding. Machine and technology became part of what it meant to be human. However, man-made machines—chemical pollutants and defoliants, the electronic battlefield, and the hydrogen bomb in particular—began to threaten the very existence of humans. In this paradoxical and uncertain context, an alternative essence of humanity was sought to save humans from the threat of automation and total annihilation. I will show that the heart of humanity shifted from the realm of intelligence to that of emotions and feelings.

Cybernetics, Mumford and the "Megamachine"

According to Norbert Wiener's cybernetics, there was no essential difference between man's intentional movements and a torpedo in motion that follows its target. Both could be explained in terms of control by the feedback of information. 3 In the 1960s, cybernetic ideas became more popular for several reasons. First, Wiener's popular book, God and Golem, Inc., was widely read and reviewed. The book outlined some provocative implications of cybernetics for religion, including the claim that, in making a learning (cybernetic) machine, humans became equivalent to God. Second, from the late 1950s, second-wave cybernetics was proposed by Heinz von Foerster. Second-wave cybernetics developed a reflexive aspect by including the observer in the system, and it was later extended into the theory of the self-organizing system. Several symposiums were held in the late 1950s and 1960s on the self-organizing cybernetic system (Wiener 1964). 4 Third, Manfred Clynes and Nathan Kline, who worked for the American space program, coined the term "cyborg" in 1960. It stood for a "cybernetic organism," a hybrid system of both artifact and organism. They believed that it could give man the freedom "to explore, to create, to think, and to feel" in a highly mechanized environment like a spaceship. Before long, cyborg became a very popular term. Alvin Toffler's Future Shock, published in 1970, had a section entitled "The Cyborgs Among Us." 5

The idea of the cyborg captured the attention and imagination of the public in the 1960s largely because it was proposed at a time of intense concern over automation. Mechanization began with the Industrial Revolution, when workers came to be called "factory hands," or parts of the machine. Andrew Ure and Charles Babbage were well known as ardent supporters of factory mechanization (Babbage 1963, p. 54). However, the mechanization that began with the Industrial Revolution was considered in the 1960s to be only the first stage of automation, that is, the stage of dependent machines. The second stage—the stage of semi-automatic machines—began in the early twentieth century. In the 1950s, the third stage of full automation with computerized automatic machinery and cybernetic devices was initiated at an accelerated speed. For example, throughout the US, only seven general-purpose digital machines were used in manufacturing in 1951, but within ten years the number had increased to 7,500. Numerically controlled (NC) machines had been well developed and were widely used, from the aerospace industry to offices. In the mid-1960s, computers were used in the electric-power industry, cement companies, natural and artificial materials industries, automotive industries, food processing, and papermaking companies. Automation was even applied to the post office and supermarkets. However, the impact of automation upon society was not yet certain. There was as much optimism as pessimism among the concerned. Optimists argued that automation would "ensure future technological progress, increase productivity and ease the strain on workers" by freeing them from drudgery and monotonous labor. 6 Pessimists and critics, on the other hand, argued that automation would replace workers with machines, increase the number of the unemployed, and transform the remaining workers into parts of the machinery. It was the machine's logic, not human needs, that dictated automation. As one commentator put it, "automation not only frees human operators from routine work; it also frees the machinery from the restrictions imposed on it by man's limitations" (Santesmases 1961, p. 111).

The criticism directed towards the automated factory was extended to technological society and technical rationality in general. The psychoanalyst Erich Fromm deplored the notion that the ideal man for modern capitalist society was an "automaton, the alienated man" (Fromm 1965/6, p. 31). Jacques Ellul's The Technological Society, first published in French in 1954 and translated into English in 1964, blamed modern technology for emphasizing technological efficiency over humane values. Throughout the book, he emphasized the loss of human autonomy due to automated machinery: "The combination of man and technics is a happy one only if man has no responsibility; ... technique prevails over the human being ... Human caprice crumbles before this necessity; there can be no human autonomy in the face of technical autonomy" (Ellul 1964, pp. 136-38). The American sociologist C. Wright Mills also characterized individuals in mass society as "cheerful robots" (Mills 1959, chap. 9, sec. 3). Herbert Marcuse, in his widely read One Dimensional Man, criticized technological rationality as a form of control and domination (Marcuse 1964). 7

Lewis Mumford was another influential critic. In a short article published in Technology and Culture in 1964, "Authoritarian and Democratic Technics," Mumford divided technics into two types: authoritarian technics, which is system-centered and seeks uniformity and standardization; and democratic technics, which is human-centered and values variety and ecological complexity. Criticizing the inventors of nuclear bombs and computers as "pyramid builders of our own age," Mumford pointed out that "through mechanization, automation, cybernetic direction, this authoritarian technics has at last successfully overcome its most serious weakness." That weakness was nothing but "its original dependence upon resistant and sometimes actively disobedient" humans. Mumford reasoned that the more technology becomes system-centered, the more it becomes autonomous or alive, escaping from human control, even from the control of "technical and managerial elites." Authoritarian technics was a "megatechnics" or "megamachine" which had both technoscientific and bureaucratic apparatuses (Mumford 1964, pp. 1, 5). 8

The alternative to the megamachine lay in injecting "the rejected parts of human personality" into science and technology, and in cutting "the whole system back to a point at which it will permit human alternatives, human interventions, and human destinations for entirely different purposes from those of the system itself" (pp. 7-8). Men must be disobedient: be a Thoreau rather than a Marx. To support his argument on the significance of human elements, Mumford described two interesting episodes. The first was the huge electric power failure in the northeastern US in 1965. Mumford cited a magazine article reporting that the failure turned the entire city of New York dark and dead, but suddenly "the people [in New York] were more alive than ever." The second episode was the experience of the US astronaut John Glenn. His spaceship was programmed to control itself automatically, but when its automatic control began to malfunction, Glenn insisted on controlling it manually, sending a message to the earth: "Let man take over!" This was the message that Mumford wanted to propagate (Mumford 1970, p. 412). 9

Mumford was a founding member and leading figure of the Regional Planning Association of America, which advocated the idea of a "garden city," for which harnessing technology for the common good, as well as the revival of an "organic balance," was crucial (Wojtowicz 1996, pp. 1-3). Mumford's humanistic, ecological vision, however, collapsed in the 1960s: as technology became autonomous, humans lost autonomy and became mechanized. "Instead of functioning actively as an autonomous personality, man will become a passive, purposeless machine-conditioned animal" (Smith 1994, p. 29).

His pessimism was shared by many. The economist John Kenneth Galbraith wrote in The New Industrial State that "we are becoming the servants...of the machine we have created to serve us" (Winner 1977, p. 14). René Dubos, a famous microbiologist, also remarked that "technology cannot theoretically escape from human control, but in practice it is proceeding on an essentially independent course" (ibid.). The relationship between master and slave was reversed. One commentator asked: "Will the human be slave to machine, performing machine-required drudgery, or will he be master of a tireless, efficient slave?" (Meng 1968, p. 417) To pessimists such as Mumford and Ellul, men were no longer the masters of technology. This theme was vividly expressed in the movie Colossus: The Forbin Project (1970). Here, the US scientist Dr. Forbin constructs an intelligent computer, Colossus, which, once constructed, rapidly develops an independent intelligence, gets out of control, takes over the world, and enslaves humans. Technology out of control was also a theme in 2001: A Space Odyssey (1968). Isaac Asimov had introduced the famous "three laws of robotics" in 1942, the first of which was that "a robot may not injure human beings." Asimov's fictional fear had now become real to Mumford and others in the 1960s. 10

Long before, some thinkers had already felt that men had become a "hand" of the mechanical system. In "Signs of the Times" (1829), Thomas Carlyle stated that "men are grown mechanical in head and heart, as well as in hand." Andrew Ure's Philosophy of Manufactures (1835) described the factory as a "vast automaton, composed of various mechanical and intellectual organs, acting in uninterrupted concert for the production of a common object, all of them being subordinated to a self-regulated moving force" (Mazlish 1993). Karl Marx also noted that "an organized system of machines, to which motion is communicated by the transmitting mechanism from a central automaton, is the most developed form of production by machinery. Here we have, in the place of the isolated machine, a mechanical monster whose body fills whole factories, and whose demon power, at first veiled under the slow and measured motions of his giant limbs, at length breaks out into the fast and furious whirl of his countless working organs" (Channell 1991, p. 85).

Mumford's megamachine was, however, different from the mechanization of the Industrial Revolution. It was much more than an automated machine or a mechanized factory. The megamachine was technocracy plus bureaucracy, with its own methods, philosophy, and religion. It was essentially uncontrollable and invincible. Mumford had not been as pessimistic in Technics and Civilization (1934), where he insisted that we should absorb "the lessons of objectivity, impersonality, neutrality, [and] the lessons of the mechanical realm" (Mumford 1934, p. 363). But after having witnessed "mechanization, automation, [and] cybernetic direction" (Mumford 1964, p. 5) endow authoritarian technics with immense power, Mumford had become pessimistic and critical. The reason for Mumford's change had to do with the emerging paradigm of "cyberscience."

"Cyberscience" and Blurring the Man-Machine Boundary

Several historians of science and technology have recently noted that some new branches of science and engineering reinforced each other in the 1950s and 1960s, creating a powerful "discourse of information." The impact of this discourse was most apparent in molecular biology and genetics. Marshall Nirenberg, one of the scientists who successfully solved the problem of genetic coding, predicted in 1966 that man would be able to "program his own cells with synthetic information" (Nirenberg 1967, p. 633) in the near future. In 1970, the Nobel laureate François Jacob proudly announced that "heredity is described today in terms of information, message, and code" (Jacob 1974, p. 1) and that "the program [of modern biology] is a model borrowed from electronic computers; it equates the genetic material of an egg with the magnetic tape of a computer" (p. 9). This blurred the boundary between man and machine, said Jacob. Now, "the machine can be described in terms of anatomy and physiology" just as much as "organs, cells and molecules are united by a communication network" (pp. 253-54) that makes possible exchanges of signals and messages.

Jacob's novel description reflected the transformations of molecular biology in the 1950s and 1960s. The historian of science Lily Kay discussed the combined influence of Wiener's cybernetics, Claude Shannon's information theory, and John von Neumann's automata upon molecular biology. Shannon's definition of information as negative entropy was stripped of the semantic values (i.e., the meanings) that information carries in ordinary language, leaving only its technical (i.e., syntactic) values. Information was thus a metaphor, Kay argued, and the information discourse in molecular biology functioned as a "metaphor of metaphor" which transformed the human genome into a sort of text "without a referent" (Kay 1997, p. 28). Evelyn Fox Keller disagreed with Kay about the extent to which these new sciences affected molecular biology. Keller argued that the traffic of information from information theory and cybernetics to molecular biology was almost useless, owing to the differences between genetic information and information defined as negative entropy. But Keller also acknowledged the importance of "cyberscience"—information theory, cybernetics, systems analysis, operations research, and computer science—for providing new metaphors such as information, message, coding, and feedback to molecular biologists. Computers, rather than clocks and steam engines, had become the new model for the organism (Keller 1994).
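Shannon's technical sense of "information," the one at issue in Kay's and Keller's arguments, can be stated compactly; the formula below is standard textbook material rather than a quotation from either author. For a source emitting symbols $x$ with probabilities $p(x)$, the amount of information is measured by

$$ H = -\sum_{x} p(x) \log_2 p(x), $$

a purely statistical, syntactic quantity: it depends only on the probabilities of the symbols, never on what the symbols mean. It is precisely this indifference to meaning that allowed "information" to travel into molecular biology as a metaphor, a text, in Kay's phrase, "without a referent."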

In the same vein, the historian of technology David Channell claimed that a new "bionic world view" or a new concept of the "vital machine" emerged in the second half of the twentieth century because of the combined effect of the development of systems building, cybernetics, computer science, artificial intelligence, new biomedical engineering such as artificial organs and electronic prosthetic devices, and genetic engineering. Many of the important developments that Channell described took place in the 1960s. Channell particularly emphasized the impact of Ludwig von Bertalanffy's systems theory, which was popularized in the 1960s by the Society for General Systems Research (founded in 1954). The most interesting feature of systems theory was that some components of a particular system would function in a more organic way than others, while other components were more mechanical. In other words, a system consisted of both organic and mechanical components, and it was therefore neither wholly organic nor wholly mechanical. This certainly blurred the boundary between the organic and mechanical realms (Channell 1991). 11 The systems idea, based on cybernetics and computer science, was used widely to explain biological, ecological, social, military, and world systems (Bertalanffy 1968). 12

Artificial intelligence (AI), or the "thinking machine," was another field where the boundary between man and machine was blurred. Some primitive thinking machines had been available since the early 1950s, but various ideas and working models of AI machines were proposed from the late 1950s onward. Frank Rosenblatt's Perceptron (1958) was modeled after the human neural network and was advertised as the first machine capable of producing original ideas. The cybernetician Gordon Pask invented a machine with a sense organ for detecting a magnetic field. Around the same time, John McCarthy of the Massachusetts Institute of Technology (MIT) endeavored to create a computer program that would enable a machine to reason verbally. "Adaptive robots" able to deal with changing situations were developed in 1960. In 1961, the US Air Force designed a machine that could distinguish between reward and punishment. Pask designed a programmable "teaching machine" that could perform the work of a tutor, and the Raytheon Company built a machine ("Cybertrons") that could solve problems for which no formula was known. The same year, automatic speech-recognition and translating machines marked a significant advance. A reflex machine that could be conditioned was developed in 1963, and MIT's ambitious pattern-recognizing machine, the Pandemonium, was also constructed. The idea of dynamic programming for the use of computers in decision-making was proposed in 1966. In 1969, General Electric's "Cybernetic Anthropomorphous Machine"—a gigantic cyborg robot that moved by following the movements of a human operator—surprised the world. 13 Yet, in spite of such remarkable achievements, AI machines were viewed with suspicion and considered more threatening than promising. With the blurring of the boundary between humans and thinking machines, man's ego could be affected. A kind of Frankensteinian fear that one's own creation would eventually destroy oneself was also widespread, as seen in Felicia Lamport's poem, "A Sigh for Cybernetics."


Thinking machines are outwitting their masters,
Menacing mankind with ghastly disasters.
These mechanized giants designed for compliance
Exhibit their open defiance of science
By daily committing such gross misdemeanors
That scientists fear they'll make mincemeat of Wieners
(Lamport 1961, p. 57).

Also intriguing about "cyberscience" in the 1960s was the military support for its development. Information science, cybernetics, operations research, and computers were created as solutions to the increasing complexity of military operations during WWII. Information science was developed in the effort to maximize the efficiency of communication. Wiener's cybernetics was devised to effectively control a new hybrid system, the anti-aircraft predictor. Operations research was exercised for the efficiency of military maneuvers. Electronic computers were built for the calculation of projectiles and the atomic bomb. The link between cyberscience and the military continued well into the sixties. The Perceptron (a neural-network computer) designed by Rosenblatt was funded by the Navy's Office of Naval Research; the Navy was eager to solve the problem of the increasing complexity of military operations by developing a computer that could learn. The Navy, as well as the Air Force, sponsored several symposiums on the self-organizing system in the 1960s. This line of research was further developed by von Foerster at the University of Illinois, whose laboratory was fully supported by the military. The Air Force also supported symposiums on bionics. The Air Force had been interested in communication networks in complex systems, and, as is well known now, it was the Air Force that commissioned ARPA (the Advanced Research Projects Agency) to devise the first computer network, the Arpanet, which later became the backbone of the Internet. ARPA's IPTO (Information Processing Techniques Office) supported computer science and research on artificial intelligence such as MIT's Project MAC. 14

However, although such military-supported research on artificial intelligence, communication theory, and systems theory eventually changed our understanding of the relationship between man and machine, the military technology that had the strongest impact on the popular psychology of the man-machine relationship was the nuclear bomb. Nuclear bombs made the total annihilation of human beings possible, and people had to learn how to live with such horrible weapons. The most crucial problem of the sixties was "survival." 15

Nuclear Weapons Out-of-Control

In the 1950s, important advances in nuclear weapons were made. Not only did nuclear bombs become more powerful and smaller, but they were installed in the warheads of inter-continental ballistic missiles (ICBMs). ICBMs were made possible by the development of solid-fuel rockets and accurate guidance systems. As ICBMs gradually replaced old bombers, the nature of the decision to start a nuclear attack or counter-attack changed dramatically: from giving an order to launch bombers to pushing a button to shoot missiles. The accompanying psychological shift was tremendous. In the case of bombers, the president would have two or three hours before the bombers reached the point of no return. In the case of missiles, the ultimate decision to push the button had to be made in just a few minutes, and was obviously irrevocable: once fired, missiles could not be recalled. John F. Kennedy was perhaps the first president with the power, and the burden, to push the button. His inaugural address in January 1961 began with the statement that "the world is very different now; for man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life." But he seemed to sense the irony that peace in the nuclear age stood on an "uncertain balance of terror," since Kennedy, in the same address, stressed that "only when our arms are sufficient beyond doubt can we be certain beyond doubt that they will never be employed" (Kennedy 1988, pp. 82-84).

In 1960, almost half of Americans believed that the Soviet Union was far superior to the United States in nuclear capabilities. Although this alleged "missile gap" turned out to be non-existent, Kennedy decided to accelerate the strategic missile program, adding 220 new nuclear missiles and doubling the production capacity of ICBMs. The United States apparently thought that it was in a defensive position, but American armament was seen by the Soviet Union as monopolizing power at best, and as preparing an aggressive attack at worst. A crisis took place first in Berlin in 1961, and then in Cuba in 1962, where the Soviets had installed offensive missiles. When the Cuban crisis was at its peak, the world was on the brink of nuclear war. Although the US Secretary of Defense Robert McNamara later recalled that he had estimated the chance of a nuclear war as one in fifty, Kennedy privately estimated the chance as one in five (Chang & Kornbluh 1992). Bertrand Russell declared at the time that "We'll all be dead in a week," and students protested shouting "Invasion means Retaliation means Annihilation" (Rorabaugh 2002, p. 49).

From the 1950s on, and for some years after 1962, peace stood on a delicate balance of terror. RAND analysts like Bernard Brodie and Albert Wohlstetter had developed the idea of nuclear deterrence on the ground that if each side had enough "second-strike" forces, neither side would attack the other first, and a condition for strategic stability (i.e., peace) would be created. As Winston Churchill aptly pointed out, "safety [is] the sturdy child of terror and survival the twin brother of annihilation" (Wohlstetter 1969). In the early 1960s, the Kennedy administration developed a new nuclear strategy called "flexible response." The US Secretary of Defense Robert McNamara changed the target of the US counter-strike from the enemy's military forces to a wider range of targets, including its industrial capacity and populated cities. He systematized these principles and named them "Mutual Assured Destruction" in the mid-1960s. The doctrine of Mutual Assured Destruction became an official nuclear strategy of the United States for some time, and it also became the strategic ground on which the Test Ban Treaty (July 1963) and the Non-Proliferation Treaty (1968) were signed. Mutual Assured Destruction was soon called "MAD" by its critics (Shapley 1993).

Renowned scientists—fifty-two Nobel prize winners—declared in the "Mainau Declaration" (1956) that "All nations must come to the decision to renounce force as a final resort of policy. If they are not prepared to do this, they will cease to exist" (Newman 1961, p. 198). The horrible impact of nuclear weapons on people's lives was also highlighted by the publication of a study by Robert Lifton, who had lived in Hiroshima for four months in 1962 and investigated the survivors' lives seventeen years after the bombing. There were many chilling stories and recollections in Lifton's studies, but the most horrible phenomenon was the survivors' intimate identification with the dead, incorporating the atomic disaster into "their beings, including all of its elements of horror, evil, and particularly of death" (Lifton 1963, p. 482). He then repeated a question that had been asked a few times before: would the survivors envy the dead? Lifton's answer was no, because "they would be incapable of such feelings" (Lifton 1967, p. 31). Nuclear-bomb survivors "would not so much envy as...resemble the dead" (p. 541). Nuclear war destroyed even the possibility of symbolic survival.

However, uncertainty dominated horror. There was a deep and essential uncertainty on the issue of nuclear weapons. The strategic analyst Herman Kahn, the notorious "defense intellectual" at the RAND Corporation, dismissed the opinions of anti-nuclear scientists as "nonsense" and a "layman's view." His "objective" and "quantitative" studies, performed from a "systems analysis point of view," showed that if a nuclear war occurred between the US and the Soviet Union, only forty to eighty million US civilians would be killed. His point was that after the end of the war, civilization and the economy could be rapidly rebuilt by the survivors: "[My] thesis [is] that if proper preparations have been made, it would be possible for us or the Soviets to cope with all the effects of a thermonuclear war, in the sense of saving most people and restoring something close to the prewar standard of living in a relatively short period of time" (Kahn 1960a, p. viii). The figure he provided, forty to eighty million, was significant because, according to Kahn's research, most Americans regarded ten to sixty million casualties as acceptable in the case of a total nuclear war; sixty million (one-third of the total population at that time) was the upper limit. Kahn thus claimed that only forty million US civilians would be killed if the country carefully prepared for the war. For all this, the US had to have enough nuclear capability to launch a first strike in case the Soviet Union made an outrageous provocation. But this was not enough, since it might simply induce the Soviet Union to attack the US rather than provoke it. The US therefore had to have enough retaliatory capacity to make the enemy's first attack unattractive. This thinking, Kahn proclaimed, was rational and logical. Anti-nuclear scientists were rendered irrational and illogical. 16

Another strategic analyst, Albert Wohlstetter, criticized scientists' involvement in strategic decisions. He quoted Bertrand Russell's famous letter (1955), in which Russell said, "I enclose a statement, signed by some of the most eminent scientific authorities on nuclear warfare," and then criticized it because "among the ten physicists, chemists, and a mathematical logician who were included, not one to my knowledge had done any empirical study of military operations likely in a nuclear war" (Wohlstetter 1962/3, p. 468). Scientists had built the hydrogen bombs, but this did not automatically make them experts on the strategy of nuclear warfare. The issue of countermeasures in military conflicts, which involved political, military, and strategic (rather than technological) decisions, had to be dealt with by a new discipline and new experts who relied upon "the [quantitative] method of science," not on "the authority of science." 17 Up to the early 1950s, scientists such as James Conant, Robert Oppenheimer, and Ernest Lawrence remained influential, but from then on, nuclear strategies were developed by the new defense intellectuals: Paul Nitze (an ex-banker), Bernard Brodie, Henry Kissinger, Wohlstetter, Robert McNamara, Herman Kahn, and Alain Enthoven (of McNamara's Whiz Kids). In the early 1960s, Kahn was the most public, and notorious, expert in this new field of strategic analysis. He eventually became the model for Dr. Strangelove in Stanley Kubrick's movie of that name (1964). 18

The uncertainty of nuclear war was magnified by technology itself. Kahn identified five different ways in which a nuclear war could start; the most likely trigger was an accident such as a false alarm, a mechanical error, or a human error. In the same article, he discussed the Doomsday Machine, a computerized machine that could destroy the entire earth, and the Doomsday-in-a-Hurry Machine, a Doomsday Machine for a different situation. Although Kahn concluded that such a Doomsday-machine family lacked strategic utility, it was very similar to the secret nuclear strategy of the 1950s known as the Sunday Punch (Kahn 1960b). Further, Kahn's discussion of the Doomsday machines and nuclear accidents was interpreted by most as indicating the probability that a nuclear war could be initiated by machines alone (Kahn 1962). Wiener also warned that in a cybernetic war the decision to start the war would be made by an "electronic brain" (Wiener 1960). People feared that the computerized nuclear system, including its devastating bombs, had become so complicated and so far out of control that a small mechanical error could trigger the entire system (Illson 1959). Eugene Burdick and Harvey Wheeler's novel Fail-Safe (published in 1962 and made into a movie in 1964) begins with the burn-out of a tiny condenser, which triggers a minor malfunction of the control system at Strategic Air Command headquarters in Omaha and eventually leads to the dropping of a hydrogen bomb on Moscow. Worry about technology out of control was real and widespread. For instance, residents of Pomona, California, worried that opening their automatic garage doors might accidentally launch guided missiles, because test launches of missiles near the city had caused their garage doors to fly open (Rorabaugh 2002, p. 39).

Since the late 1950s, nuclear weapons had "proliferated" like living organisms. The arms race was partially accelerated by the "potential volatility of military technology" (Zoppo 1965/6, p. 599). The situation became more complicated because such technical uncertainty and uncontrollability could be, and in fact was, used strategically to make a nuclear threat credible to the enemy. This was evident in a US military official's statement that when the US military chose nuclear weapons, "we largely abandon to terms and results dictated by the nature of nuclear weapons" (Berry 1989, p. 56). John F. Kennedy stated more than once that the push-button weapon system was capable of both mechanical and human errors, and that a nuclear holocaust could occur through an act of inadvertence or irrationality. In his speech before the United Nations in September 1961, Kennedy explicitly warned that "every man, woman, and child lives under a nuclear sword of Damocles, hanging by the slenderest of threads, capable of being cut at any moment by accident or miscalculation or by madness" (Lapp 1968, p. 36). Politicians like Richard Nixon and Robert McNamara hinted that the US might start a nuclear war irrationally. 19 It was in this context that Erich Fromm lamented that the world was full of "impotent men directed by virile machines" (Fromm 1960, p. 1020). Paraphrasing Emerson's line that "things are in the saddle and ride mankind," Fromm claimed that "we still have a chance to put man back into the saddle" (Fromm 1965/6, p. 34). However, this solution would not make everyone happy, in particular those who thought that "evil is not in things but in man. ... To control the Bomb is absurd, [...] what we need to control is man" (Rougemont 1958, p. 48). 20

Whose opinion should be trusted: that of the Nobel laureates, or that of the RAND Corporation? Kahn argued that anti-nuclear scientists were neither logical nor rational, 21 while SANE (the National Committee for a Sane Nuclear Policy) protested that any discussion of the actual use of nuclear bombs was insane and irrational. Kahn's system might be based upon a rational relationship among its parts, but seen as a whole, it seemed totally irrational. Rationality and sanity were in dispute. C. P. Snow noted that "between a risk [in the restriction of nuclear armament] and a certainty [in the total disaster], a sane man does not hesitate." But what could an ordinary sane man do? There was one little thing he could do: build a nuclear shelter. The Kennedy administration promoted a civil defense program that included the building of nuclear shelters. In 1961, a student at Radcliffe College wrote in her essay that "the construction of shelters has become...a fad, like the suburban swimming pool; for the rich, [it is] a new luxury, for the handyman, a do-it-yourself toy." She added, however, that "the Bomb...is a sign of schizophrenic inconsistency" and that the shelter thus represented "not a reasoned effort to survive but a senseless gesture." Schizophrenia was an apt metaphor for the mental state of humans living in the nuclear age (Adler 1961-2, pp. 53-56).

Schizophrenic Man, Sane Robots

Schizophrenia was frequently invoked in discussions of the nuclear bomb in the 1960s. It symbolized the inhuman condition of the sixties. Fromm stated that "in the nineteenth century inhumanity meant cruelty; in the twentieth century it means schizoid self-alienation" (Fromm 1965/6, p. 33). Recall that Gregory Bateson, under the influence of Wiener's cybernetic ideas, had proposed a very interesting theory of schizophrenia in 1956. A person might become schizophrenic if he had been forced to endure, while very young, a "double bind"—a totally contradictory situation in which he could not win no matter what he did. A typical double-bind situation was created in a family with a contradictory mother and the absence of a strong father (Bateson et al. 1956). 22 Many people viewed the sixties as something like a double-bind situation. Nuclear horror and conflicting authorities "mobilize[d] and actualize[d] this world of [the] schizophrenic," in which the total destruction of the world dominated and the boundary between reality and fantasy was obliterated. 23

In his book Burning Conscience (1962), the Austrian philosopher Günther Anders wrote that the reality and the image of nuclear mass murder had created the "raging schizophrenia of our day," in which people act like "isolated and uncoordinated beings." Here schizophrenia was more than a metaphor. The book was a collection of the correspondence between Anders and the "hero of Hiroshima," Major Claude Robert Eatherly, who was at that time suffering from "the delayed action of the atomic bomb on its possessors" (Eatherly & Anders 1962). In the 1950s, Eatherly had twice attempted suicide, been arrested for fraud, and alternated between court appearances and mental hospitals; he had been diagnosed as schizophrenic. Bertrand Russell was sympathetic to Eatherly, emphasizing that the insanity existed in the society, not in him. Defining the condition of mankind as the "technification of our being," Anders explained in his first letter to Eatherly that he who had been a screw in a "military machine" now wanted to be a human again. Being schizophrenic was the only way to revive his humanity, or to live sanely in a crazy world. 24

In one of his letters to Anders, Eatherly spoke somewhat hopefully of nuclear scientists.

I would like to ask you some questions. Could we trust those nuclear scientists to delay their work and paralyze the political and military organizations? Would they be willing to risk their first love by giving up all the grants, laboratories and government support, and to unite and demand a trusted guardian for their brainchild? If they could do this, then we would be safe (p. 22).

Could science and scientists save people? A detailed study of nuclear armaments by two scientific advisors reached the pessimistic conclusion that "this [nuclear] dilemma has no technical solution" (Wiesner & York 1964, p. 35), meaning that the solution to the problem could not be found in the (techniques of the) natural sciences. A respectable scholar, Theodore von Laue, was also pessimistic. To survive, he urged, "we must resolutely turn away from our reliance on science" (von Laue 1963, p. 5). Further, scientists were sometimes depicted as suffering from a sort of schizophrenia. For example, René Dubos stressed in his George Sarton lecture of 1960 that "many modern scientists suffer from the schizophrenic attitude," because of the disparity between scientists' claims about the usefulness of science and criticisms from anti-science activists who described scientists as "thoroughly dehumanized" and "mechanized" (Dubos 1961, pp. 1209-10).

Dubos's comment is interesting because it links a schizophrenic attitude to the "dehumanized" and the "mechanized." Since the psychiatrist Victor Tausk's classic paper on the "influencing machine" in schizophrenia, it had been well known that some schizophrenic patients felt as if they were influenced by a mechanical environment (hence the name "influencing machine"), and further that part of their body was projected onto the environment, resulting in the loss of the "ego boundary" between the human self and the mechanical environment (Tausk 1991). The March 1959 issue of Scientific American reported a surprising story about Joey, a "Mechanical Boy," who thought of himself as a machine or robot while suffering from severe schizophrenia. Joey behaved as if he were a machine controlled by a remote control of his own fantasy. The imaginary machines that surrounded him had emotions and will power. He connected himself to various tubes and motors, but sometimes destroyed them with fury and then immediately connected himself up to bigger tubes and machines. Dr. Bruno Bettelheim, who treated Joey, discovered that his parents had transformed him into a type of machine by treating him "mechanically," without love or tenderness. Joey wanted these machines to dominate his mind and body because it had become too painful for him to be a human. The doctor therefore tried to revive the sense of human trust and feeling inside him. As Joey made progress, he gradually regained control of the mechanical environments around him. Then he became able to relate emotionally to people. Bettelheim remarked that "robots cannot live and remain sane," because they will become "golems" and "destroy their world and themselves." Before this happened to Joey, humanity went back into the saddle and saved him (Bettelheim 1959).

But machines entered the scene again. Five years later, in 1965, the New York Times published an article reporting the use of a machine, the Computerized Typewriter (Edison Responsive Environmental Learning System), to successfully treat autism, for which standard psychotherapy had failed and no cure or cause was known. The Computerized Typewriter was a human-like machine: it talked, listened, responded, and drew pictures, but it never punished. The doctor who treated autistic children with the machine had noted that many of these children had an abnormal preoccupation with mechanical objects. Several boys who had refused to speak to humans began talking with the machine. After a year of therapy, they began to respond to human conversation, and some were able to return to school (Sullivan 1965). It was an irony that one man-made machine—the nuclear bomb—pushed people into a (metaphorically) schizophrenic mentality, while another machine—the communication device called the Computerized Typewriter—in effect treated it.

Conclusion: From Intelligence to Emotions

In the 1960s, people perceived and expressed new relationships between man and machine. Automation, systems theory, cybernetics, genetics, information theory, artificial intelligence, computers, and atomic weapons all contributed to these new visions. The visions ranged from optimism to apocalyptic pessimism. Some were close to reality, while others were metaphoric, imaginary, and fantastic. The underlying philosophical questions, however, were similar: How could we retain our essential humanity in such a mechanized age? What could make us more than machines? As the epigraph at the beginning of this paper records, the first Pugwash Conference urged participants to "remember your humanity and forget the rest," because "if you can do so, the way lies open to a new Paradise," but "if you cannot, there lies before you the risk of universal death." But what was, and where lay, our essential humanity? (Wohlstetter 1962/3, p. 471)

Since the time of Aristotle, Western people have believed that "the soul" or "the self" could distinguish humans from non-humans. 25 The manifestation of the soul's capacity is most clearly expressed in Descartes's cogito—a capacity of reasoning or intelligent thinking. Animals could feel, but they could not think. Animals were machines. Therefore, non-animal, man-made machines—mechanical clocks, Vaucanson's defecating duck, the self-acting loom, the steam engine, and the telegraph—could not think either. But would this distinction remain valid in the age of automation, cybernetics, intelligent computers, self-reproducing automata, and the computerized Doomsday machine? Not only did machines show intelligence, but biologists discovered that bacterial cells had a kind of intelligence too. 26

What kind of machine is man? A biomedical scientist asserted that man was just a machine with a certain amount of disorder built into his DNA (Potter 1964). Cyberneticians and AI experts claimed that there was no essential difference between man and machines. In a popular exposition of the Turing machine and automata, John Kemeny concluded that "there is no conclusive evidence for an essential gap between man and a machine [like an electronic computer]; for every human activity we can conceive of a mechanical counterpart" (Kemeny 1955, p. 67). Using an evolutionary metaphor, Bruce Mazlish emphasized that the distinction between man and machine had almost disappeared, epitomizing it in the discourse on the "fourth discontinuity." Throughout human history, Mazlish argued, there had been three great thinkers who destroyed man's naive self-love: Copernicus, who abolished the discontinuity between the earth and the universe; Darwin, who eliminated the discontinuity between man and animals; and Freud, who erased the discontinuity between the conscious and the unconscious. But "a fourth and major discontinuity, or dichotomy, still exists in our time; it is the discontinuity between man and machine" (Mazlish 1967, p. 3). This discontinuity would be eliminated in the near future, Mazlish continued, if we realized a continuity between man and machines. Herbert Simon also noted that "as we begin to produce mechanisms that think and learn, [man] has ceased to be the species uniquely capable of complex, intelligent manipulation of his environment" (Diebold 1965, p. 152). So did Wiener, who claimed that "machines can and do transcend some of the limitations of their designers" (Wiener 1960). John von Neumann, the designer of the stored-program computer and the first automata, pointed out that, to survive technology, we must exercise three (not one) essential human qualities: patience, flexibility, and intelligence. Intelligence alone was not enough, because some machines could think (von Neumann 1955).

Even if computers were said to think, it seemed that they could not feel. Emotions and feelings could thus be considered something uniquely human. However, the idea that emotions interfere with rational thought is as old as Western intellectual history. Cicero thought that emotions were hasty opinions, and Augustine believed that God and the angels do not have emotions. In the 1960s, "defense intellectuals" nearly unanimously blamed emotion as an erroneous human impulse that would ruin the rational choice of the optimum strategy. The senior RAND researcher Bernard Brodie had warned in the 1950s that strategic analysis should not be tainted by the human emotions induced by imagining the horrors of nuclear war. Kahn asserted:

It is not that the problems [of war and peace] are not inherently emotional. They are. It is perfectly proper for people to feel strongly about them. But while emotion is a good spur to action, it is only rarely a good guide to appropriate action. In the complicated and dangerous world in which we are going to live, it will only increase the chance of tragedy if we refuse to make and discuss objectively whatever quantitative estimates can be made (Kahn 1960a, p. 47 n.1).

Robert McNamara told an interviewer in 1963: "You can never substitute emotion for reason. I still allow a place for intuition in this process, but not emotion. They say I am a power grabber. But knowledge is power, and I am giving them knowledge, so they will have more power" (Shapley 1993). He also developed a way to intellectualize emotions. 27 McNamara's use of neutral and emotionless language such as "spasm response," "megadeath," and "counterforce collateral damage" partly served this purpose. Arthur Koestler likewise proclaimed that our irrational beliefs and paranoia were anchored in emotion, and further that we must develop techniques to cure the split between reason and emotion (which he called "schizophysiology") in order to survive as a species in the nuclear age. He proposed artificial tampering with human (emotional) nature as one possible measure to cure this schizophysiology (Koestler 1967, pp. 289, 327). The psychologist Paul T. Young, in the conclusion of his textbook on emotion, declared that understanding emotions under the threat of nuclear war and total destruction was immensely important, because such an understanding "can help to make us tolerant of the weaknesses and limitations of others" (1973, p. 444). He nevertheless put more emphasis on the role of reason than on that of emotion, stating that "the ultimate destiny of man on this planet may depend on whether or not reason and sanity can dominate behavior." Such a segregation of reason from emotion was supported not only by the old dictum that the seat of thought is in the brain whereas that of emotion is in the heart, but also by a contemporary neurophysiological theory of the brain, the Papez-MacLean theory of emotions. Briefly, the theory distinguished between the limbic system (the reptilian and primitive mammalian brain) and the neocortex (the "human thinking-cap" responsible for language and symbolic thinking), and considered the limbic system the seat of emotions and the neocortex the seat of reason. The gap between reason and emotion was the gap between "the newest and most highly developed part of the brain" and "the crude and primitive system" (Koestler 1967, pp. 283-289).

Throughout the 1960s, however, new perspectives on human emotion gained wider public recognition and support, although this deserves more study than can be presented here. First of all, the physiological basis of human emotions was explored further. The sixties was the period of chemical drugs. In 1958, for example, dimethylaminoethanol (DMAE) was discovered to have an anti-depressive effect, and this "happy drug" was widely used in the 1960s. In 1963, the tranquilizer Valium was introduced. The psychologist James Olds, who had found the "pleasure centers" in the brains of rats in the 1950s, reported that human brains had similar points which, if stimulated, produced quasi-orgasmic sexual pleasure. Manfred Clynes, who had coined the term cyborg for an organism-machine hybrid in 1960, claimed ten years later that human emotions could be artificially created by inducing a musculo-skeletal signature in the nervous system. In 1970, he proposed "Cyborg II" as a man-machine hybrid for space travel, but this time the new cyborg embodied a mechanism for producing artificial emotions (Clynes 1995).

Secondly, and more importantly for our discussion, the cognitive theory of emotion was proposed. Pascal once said that "the heart has reasons that reason does not know at all." Contrary to the dichotomy between reason and emotion, the new cognitive theory proposed that emotions served a cognitive function supplementary to, and along with, rational thought. Human emotion and reason were not two separate entities, but overlapped considerably. Emotion, according to the new theory, was a kind of appraisal or assessment, which frequently involved a human action toward or away from a person, thing, or situation. To perceive something emotionally meant that it was appraised as desirable or undesirable, valuable or harmful to us. Our fear of nuclear war was caused by our concern that our lives and dignity were endangered by it, and by our uncertainty about how to cope with it. Emotion represented a person's concern; or, more precisely, emotion was created out of the interaction between our concerns and situational meanings. Emotion was dependent upon interpersonal, social, and cultural contexts. Emotion was also dependent upon the ways in which we view the situation, and the possible choices open to us. Fear, anger, remorse, happiness, and jealousy were seen as a kind of reason. 28

In this sense, emotion was uniquely human. Isaac Asimov asserted in a popular article that where the computer could not equal the human brain was in its feelings. J. Bronowski accepted as truth that the computer is not merely a set of gears, wheels, or electrical circuits, but a flexible, self-regulating (feedback) machine with mechanized inputs and outputs. The computer as a feedback machine can think "I won't do this again because it hit others last time," Bronowski argued, because the hitting can be calculated using Newton's formal laws. There is, however, an essential difference between humans and computers: only humans are able to think "I won't do this again because it embarrassed others last time." Bronowski explained that since we recognize ourselves in others in this mode of thought, this is knowledge of the self (Bronowski 1965, pp. 20-23). We can see, however, that it is a thought imbued with human emotion: one's concern about others, as well as about oneself, is reflected in this knowledge. In the same vein, an expert in artificial intelligence opposed the idea that humans and computers were identical:

If machines really thought as men do, there would be no more reason to fear them than to fear men. But computer intelligence is indeed "inhuman": it does not grow, has no emotional basis, and is shallowly motivated. These defects do not matter in technical applications, where the criteria of successful problem solving are relatively simple. They become extremely important if the computer is used to make social decisions, for there our criteria of adequacy are as subtle and as multiply motivated as human thinking itself (Neisser 1963, p. 197).

Human intelligence had an emotional basis and was deeply motivated. In human beings, emotions and feelings were intertwined with reason and intelligence.

Emphasizing emotion also became a popular theme in science fiction. Philip K. Dick's Do Androids Dream of Electric Sheep? (1968), later made into the movie Blade Runner, depicted the essential quality of humanity as having shifted from reason and rationality to emotions and feelings. 29 As Alfred North Whitehead had maintained, more people in the sixties believed that "our experience of love and beauty" was not just feeling, but "moments of metaphysical insights" (Matson 1964, pp. 256-57). The case of Joey the Mechanical Boy was even more dramatic. After several years of treatment, Joey left the hospital for a Memorial Day parade carrying a sign that said: "Feelings are more important than anything under the sun" (Bettelheim 1959, p. 126). In the age of smart machines and nuclear holocaust, emotions and feelings had become the essence of humanity.

References

Abbate, J. Inventing the Internet . Cambridge, Mass.: MIT Press, 1999.

Adler, R. "We Shall All Char Together..." New Politics 1 (1961-2): 53-56.

Arieti, S. "Studies of Thought Processes in Contemporary Psychiatry." American Journal of Psychiatry 120 (1963): 58-64.

Arnold, M. Emotion and Personality, 2 vols. New York: Columbia University Press, 1960.

Babbage, C. On the Economy of Machinery and Manufactures . New York: Augustus M. Kelley, 1963.

Bateson, G., Jackson, D.D., Haley, J. & Weakland, J. "Toward a Theory of Schizophrenia." Behavioral Science 1 (1956): 251-264.

Berry, W. "Property, Patriotism, and National Defense." in D. Hall (ed.), The Contemporary Essay . New York: St. Martin's Press, 1989.

Bettelheim, B. "Joey: A 'Mechanical Boy'." Scientific American 200 (March 1959): 117-126.

Bowker, G. "How to be Universal: Some Cybernetic Strategies, 1943-70." Social Studies of Science 23 (1993): 107-127.

Bronowski, J. The Identity of Man . Garden City: The Natural History Press, 1965.

Carroll, J.D. "Noetic Authority." Public Administration Review 29 (1969): 492-500.

Chang, L. & Kornbluh, P. (eds.). The Cuban Missile Crisis, 1962: A National Security Archive Document Reader . New York: New Press, 1992.

Channel, D.F. The Vital Machine . Oxford: Oxford University Press, 1991.

Clynes, M.E. "Cyborg II: Sentic Space Travel." in Gray, C.H. (ed.) The Cyborg Handbook . London: Routledge. 1995, pp. 35-39.

Cohn, C. "Sex and Death in the Rational World of Defense Intellectuals." Signs 12 (1987): 687-718.

Damasio, A.R. Descartes' Error: Emotion, Reason, and the Human Brain . New York: Putnam, 1994.

Diebold, J. "Automation: Perceiving the Magnitude of the Problem." Cybernetica 8 (1965): 150-156.

Dubos, R. "Scientist and Public." Science 133 (1961): 1207-1211.

Eatherly, C. & Anders, G. Burning Conscience . New York: Monthly Review Press, 1962.

Edwards, P. The Closed World: Computers and the Politics of Discourse in Cold War America . Cambridge, MA: MIT Press, 1997.

Ellis, A. Reason and Emotion in Psychotherapy . New York: Lyle Stuart, 1962.

Ellul, J. The Technological Society . New York: Vintage Books, 1964.

Fromm, E. "The Case for Unilateral Disarmament." Daedalus 89 (1960): 1015-1028.

________. "The Present Human Condition." American Scholar 25 (1955/6): 29-35.

Galison, P. "The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision." Critical Inquiry 21 (1994): 228-266.

Gray, C.H. (ed.). The Cyborg Handbook . New York and London: Routledge, 1996.

Hayles, N.K. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics . Chicago: University of Chicago Press, 1999.

Hong, S. "How Unfaithful Offspring? Reflections on Technology, Its Origins, and Its Trajectories." Perspectives on Science 6 (1998): 259-87.

Illson, M. "Mankind Warned of Machine Peril: Robot 'Brain' Can Destroy Its Creators and Users, Prof. Wiener Declares." New York Times (17 May 1959).

Jacob, F. The Logic of Living Systems . London: Allen Lane, 1974.

Kahn, H. On Thermonuclear War . Princeton: Princeton University Press, 1960a.

________. "The Arms Race and Some of Its Hazards." Daedalus 89 (1960b): 744-780.

________. Thinking about the Unthinkable . New York: Avon Books, 1962.

Kay, L.E. "Cybernetics, Information, Life: The Emergence of Scriptural Representations of Heredity." Configurations 5 (1997): 23-91.

Keller, E.F. "The Body of a New Machine: Situating the Organism between Telegraphs and Computers." Perspectives on Science 2 (1994): 302-323.

________. "Marrying the Pre-modern to the Post-modern: Computers and Organism after WWII." manuscript.

Kemeny, J.G. "Man Viewed as a Machine." Scientific American 192 (April 1955): 58-67.

Kennedy, J. "Inaugural Address (20 Jan. 1961)." in Sarkesian, S. & Vitas, R. (eds.), US National Security Policy and Strategy: Documents and Policy Proposals . New York: Greenwood Press, 1988.

Koestler, A. "The Urge to Self-Destruction." in The Heels of Achilles: Essays 1968-1973 . London: Hutchinson. 1974, pp. 11-25.

Lamport, F. "A Sigh for Cybernetics." Harper's Magazine 222 (Jan. 1961).

Lapp, R.E. The Weapon Culture . New York: Norton, 1968.

Lazarus, R. Psychological Stress and the Coping Process . New York: McGraw-Hill, 1966.

LeDoux, J. The Emotional Brain . New York: Simon & Schuster, 1996.

Lifton, R.J. "Psychological Effects of the Atomic Bomb in Hiroshima: The Theme of Death." Daedalus 92 (1963): 462-497.

________. Death in Life . New York: Random House, 1967.

Marcuse, H. One-Dimensional Man . Boston: Beacon Press, 1964.

Matson, F.W. The Broken Image: Man, Science and Society . New York: George Braziller, 1964.

Mazlish, B. "The Fourth Discontinuity." Technology and Culture 8 (1967): 1-15.

________. The Fourth Discontinuity: The Co-Evolution of Humans and Machines . New Haven: Yale University Press, 1993.

Mendelsohn, E. "The Politics of Pessimism: Science and Technology circa 1968." in Ezrahi, Y. et al. (eds.), Technology, Pessimism, and Postmodernism . Dordrecht: Kluwer. 1994, 151-173.

Meng, J. "The Computer Game." American Scientist 56 (1968): 414-419.

Michael, D. "Review of On Thermonuclear War." Science 133 (3 March 1961).

Miller, D.L. "The Myth of the Machine: I. Technics and Human Development." in Hughes, T.P. & Hughes, A.C. (eds.), Lewis Mumford: Public Intellectual . Oxford: Oxford University Press. 1990, pp. 152-163.

Mills, C.W. The Sociological Imagination . New York: Oxford University Press, 1959.

Mindell, D. Between Human and Machine: Feedback, Control, and Computing before Cybernetics . Baltimore and London: Johns Hopkins University Press, 2002.

Moy, T. "The End of Enthusiasm: Science and Technology." in Farber, D. & Bailey, B. (eds.), The Columbia Guide to America in the 1960s . New York: Columbia University Press, 2001.

Mumford, L. The Myth of the Machine: I. Technics and Human Development . New York: Harvest Books, 1967.

________. The Pentagon of Power . New York: Harcourt Brace Jovanovich, 1970.

________. "Authoritarian and Democratic Technics." Technology and Culture 5 (1964): 1-8.

________. Technics and Civilization . New York: Harcourt, Brace and Company, 1934.

Neisser, U. "The Imitation of Man by Machine." Science 139 (18 Jan. 1963): 193-197.

Newman, J.R. "Two Discussions of Thermonuclear War." Scientific American 204 (March 1961): 197-204.

Nirenberg, M.W. "Will Society be Prepared?" Science 157 (11 Aug. 1967): 633.

Pitt, J. "The Autonomy of Technology." in Gayle L. Ormiston (ed.), From Artifact to Habitat: Studies in the Critical Engagement of Technology . Bethlehem: Lehigh University Press. 1990, 117-131.

Potter, V.R. "Society and Science." Science 146 (20 Nov. 1964): 1018-1022.

Rorabaugh, W.J. Kennedy and the Promise of the Sixties . Cambridge: Cambridge University Press, 2002.

de Rougemont, D. "Man v. Technics?" Encounter 10 (1958): 43-52.

Russell, E. War and Nature: Fighting Humans and Insects with Chemicals from World War I to Silent Spring . Cambridge: Cambridge University Press, 2001.

Santesmases, J.G. "A Few Aspects of the Impact of Automation on Society." Impact of Science on Society 11 (1961): 107-126.

Schelde, P. Androids, Humanoids, and Other Science Fiction Monsters: Science and Soul in Science Fiction Films . New York and London: New York University Press, 1993.

Schneid, O. The Man-Made Hell: A Search for Rescue . Toronto: Source Books, 1970.

Segal, H. "Silence Is the Real Crime." in Levine, H.B., Jacobs, D. & Rubin, L.J. (eds.), Psychoanalysis and the Nuclear Threat: Clinical and Theoretical Studies . Hillsdale, NJ: The Analytic Press. 1998, 35-58.

Seligman, B. Most Notorious Victory: Man in an Age of Automation . New York: The Free Press, 1966.

Shapley, D. Promise and Power: The Life and Times of Robert McNamara . Boston: Little, Brown and Company, 1993.

Simpson, G. "Western Man under Automation." International Journal of Comparative Sociology 5 (1964): 199-207.

Smith, M.R. "Technological Determinism in American Culture." in Smith, M. R. & Marx, L. (eds.), Does Technology Drive History? The Dilemma of Technological Determinism . Cambridge: MIT. 1994, 1-35.

de Sousa, R. The Rationality of Emotion . Cambridge, MA: MIT Press, 1991.

Stephen, D.H. Cyborg: Evolution of the Superhuman . New York: Harper, 1965.

Sullivan, R. "Computerized Typewriter Leads Schizoid Children Toward Normal Life by Helping Them to Read." New York Times (12 March 1965).

Tausk, V. "On the Origin of the 'Influencing Machine' in Schizophrenia." in Roazen, P. (ed.), Sexuality, War, and Schizophrenia: Collected Psychoanalytic Papers . New Brunswick: Transaction Publishers. 1991, 185-219.

Toffler, A. Future Shock . New York: Random House, 1970.

von Laue, T.H. "Modern Science and the Old Adam." Bulletin of the Atomic Scientists 19:1 (1963): 2-5.

von Neumann, J. "Can We Survive Technology?" Fortune (June 1955): 106-152.

Weaver, W. "The Moral Un-Neutrality of Science." Science 133 (1961): 255-62.

Weart, S. Nuclear Fear: A History of Images . Cambridge: Harvard University Press, 1988.

Wiener, N. Cybernetics: or, Control and Communication in the Animal and the Machine . Cambridge, MA: MIT Press, 1948.

________. "Some Moral and Technical Consequences of Automation." Science 131 (6 May 1960): 1355-1358.

________. God and Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion . London: Chapman & Hall, 1964.

Wiesner, J.B. & York, H.F. "National Security and the Nuclear-Test Ban." Scientific American 211 (October 1964): 27-35.

Winner, L. Autonomous Technology . Cambridge, MA: MIT Press, 1977.

Wohlstetter, A. "The Delicate Balance of Terror." in Hahn, W.F. & Neff, J.C. (eds.), American Strategy for the Nuclear Age . New York: Anchor Books. 1960, pp. 197-218.

________. "Scientists, Seers, and Strategy." Foreign Affairs 41 (1962/3): 466-478.

Wojtowicz, R. Lewis Mumford and American Modernism . Cambridge: Cambridge University Press, 1996.

Young, P.T. Emotion in Man and Animal , 2nd ed. Huntington, NY: Krieger, 1973.

Zoppo, C.E. "Nuclear Technology, Multipolarity, and International Stability." World Politics 18 (1965/6): 579-606.


______________________________

1 I thank Evelyn Fox Keller, Kyung-Soon Im, Hong-Ryul So, Janis Langins, and the anonymous referees for their helpful comments on the draft of this paper.

2 For an interesting discussion of the development of pesticides in the context of a "total war" against insect enemies, and of Carson's use of nuclear fallout as a metaphor for chemical pollutants, see Russell (2001). Movies of the 1960s are well analyzed in Edwards (1997).

3 For Wiener's "first-wave" cybernetics, see Wiener (1948) and Galison (1994). For a prehistory of cybernetics, including research at Bell Labs and the Sperry Gyroscope Company before 1945, see Mindell (2002).

4 Geof Bowker (1993) has pointed out that such religious claims—to create new life or to destroy the earth—were among the strategies that cyberneticians used to make cybernetics universal. For second-wave cybernetics, see Hayles (1999, pp. 6-11, 72-76).

5 For Clynes and Kline's cyborg of 1960 and cyborgs since then, refer to Gray (1996). See also Stephen (1965); Toffler (1970).

6 For such periodization, see Simpson (1964) . For the status of automation in the mid-1960s, see Seligman (1966) .

7 For Marcuse in the 1960s, see Mendelsohn (1994) .

8 For Mumford's megamachine, see also Mendelsohn (1994, pp. 167-170) and Miller (1990).

9 See Mendelsohn (1994, p. 169). A later investigation revealed that the 1965 electric failure was largely due to the multiplied complexity of the electric power system, which had outgrown human control. See Carroll (1969).

10 The term robot means a slave or servant in Czech or Polish. The Latin servus, from which servo (in servomechanism) derives, also means a slave. The term robot was first used in Karel Čapek's play R.U.R. (1921). The term robotics was first used by Isaac Asimov in his short story "Runaround" (1942), which introduced his famous "Three Laws of Robotics." See also Edwards (1997, pp. 315-327). For a critical discussion of the notion of technology-out-of-control, see Hong (1998).

11 The interconnectedness among the parts of a (technological) system, as well as its holistic nature, had been recognized as early as Edison, but Bertalanffy's systems theory differed from these older ideas in arguing that some components of a system are mechanical while others are organic.

12 But when the systems idea was used in such an overarching way, Bertalanffy soon deplored, it became just another "technique to shape man and society ever more into the 'mega-machine'" (Winner 1977, p. 289).

13 See the following articles for further information: "Computer Develops Capacity to 'Learn'," New York Times (4 Jan. 1958); "Thinking Machine Projected at MIT," New York Times (25 Nov. 1958); A. M. Andrew, "The Perceptron: A Brain Design," ibid., p. 1392; "Reason-Machine Seen for the Future," New York Times (24 March 1959); "Automation Gets a Language Tool," New York Times (26 Feb. 1959). See also "A Robot Machine Learns by Error," New York Times (15 Aug. 1961); W. K. Taylor, "Adaptive Robots," The New Scientist 8 (29 Sep. 1960): 846-848; Gordon Park, "Machines that Teach," The New Scientist 9 (11 May 1961): 308-311; "Robot Multiplies Man's Strength," New York Times (3 Apr. 1969).

14 See Edwards (1997) ; Keller (manuscript) . For the ARPA and IPTO, see Abbate (1999) .

15 Theodore M. Hesburgh's comment on Charles P. Snow's lecture; see Weaver (1961). In 1968, Arthur Koestler summed up the crisis of the time in a single sentence: "From the dawn of consciousness until the middle of our century man had to live with the prospect of his death as an individual; since Hiroshima, mankind as a whole has to live with the prospect of its extinction as a biological species" (Koestler 1974, p. 11).

16 See James Newman (1961). Kahn's weakness, as another reviewer pointed out, lay in his neglect of the "human condition in the post-attack period," such as "the behavior of groups, individuals, and leaders under extreme threat, in the face of sudden disaster, or in ambiguous situations" (Michael 1961, p. 635). In Dr. Strangelove (1964), Stanley Kubrick parodied Kahn by having General Turgidson claim that there are "two admittedly unfortunate scenarios—one where we've got 10 million killed and another where we've got 150 million." I thank an anonymous referee for this phrase.

17 In Wohlstetter (1962/3), he criticized C.P. Snow, Edward Teller, and Hans Bethe, who had a "hostility to the fact of hostility itself" and tended to "think of harmony rather than conflict" (p. 474).

18 Newman's statement is riveting: Is there really a Herman Kahn? It is hard to believe. Doubts cross one's mind almost from the first page of this deplorable book: no one could write like this; no one could think like this. Perhaps the whole thing is a staff hoax in bad taste. The evidence as to Kahn's existence is meager. ... Kahn may be the Rand Corporation's General Bourbaki, the imaginary individual used by a school of French mathematicians to test outrageous ideas. The style of the book certainly suggests teamwork. It is by turns waggish, pompous, chummy, coy, brutal, arch, rude, man-to-man, Air Force crisp, energetic, tongue-tied, pretentious, ingenuous, spastic, ironical, savage, malapropos, square-bashing and moralistic. ... How could a single person produce such a caricature? ( 1961 , p. 197)

19 See also Weart (1988) .

20 See also Pitt (1990) , where he stated that "those who fear reified technology really fear other individuals; it is not the machine that is frightening, [but] it is what some individuals will do with the machine" (p. 129).

21 A concerned pacifist stated that "in spite of all this Mr. Kahn is no warmonger, on the contrary; individual passages sound, though sacrificing consistency, rather unprejudiced and almost pacifistic" ( Schneid 1970 , p. 256).

22 Bateson reconstructed the theory of schizophrenia in terms of the breakdown in the system of meta-communication.

23 For a contemporary analysis, see Segal (1998), esp. pp. 42-44 (on her discussion of "the world of schizophrenics" created by the prospect of atomic war).

24 Eatherly's condition was diagnosed as follows: "An obvious case of changed personality. Patient completely devoid of any sense of reality. Fear complex, increasing mental tensions, emotional reactions blunted, hallucinations" ( Eatherly & Anders 1962 , p. xviii).

25 The search for the soul in the mechanized age is a central theme of many works of science fiction. See Schelde (1993, p. 126).

26 For the "bacterial intelligence," see Potter (1964 , p. 1020).

27 An interesting feminist analysis of the emotionless language of defense intellectuals can be found in Cohn (1987).

28 For the new cognitive theory of emotions, see Ellis (1962); Arieti (1963); Arnold (1960); Lazarus (1966). The seeds of the "emotional revolution" were sown in the 1960s, but it flourished fully only in the 1990s. For recent discussions of human emotion, see de Sousa (1991); Damasio (1994); LeDoux (1996).

29 In this movie, as Hayles points out, one of the most interesting scenes involves the use of the Voigt-Kampff test to tell humans from androids. The test detects human emotions linked to memories and thoughts (Hayles 1999, p. 175).