

Volume 3, Number 3
Spring 1998

TELLER'S TECHNICAL NEMESES:
THE AMERICAN HYDROGEN BOMB AND ITS DEVELOPMENT WITHIN A TECHNOLOGICAL INFRASTRUCTURE

Anne Fitzpatrick, George Washington University


In World War II the U.S. Army contracted with the University of Pennsylvania's Moore School of Electrical Engineering to develop a new, large electronic computer, among the first of its kind, in the hope that the machine would be able to perform ballistics calculations for the war effort. The machine was not completed before the end of the war, however, and the Army was not even its first user. The first calculation ever run on the Electronic Numerical Integrator and Computer (or ENIAC, as it was known) was for the Los Alamos nuclear weapons laboratory. This "Super problem" was the first attempt to calculate the feasibility of a thermonuclear bomb. The problem, however, was too complicated for the ENIAC, with its roughly 1,000 bits of memory and 18,000 vacuum tubes, and only a very simplified version of the calculation was run, revealing very little about how such a weapon might work.

Although Los Alamos was exploring hydrogen weapons during and right after the Second World War, why did the U.S. not successfully test a thermonuclear bomb until 1952? I will argue that the American thermonuclear weapons program was, early on, entrenched in a technological infrastructure which affected the pace and initial results of the project, and I will demonstrate how one particular aspect of this infrastructure, computing, influenced the practice of nuclear weapons research, design, and development. The reasons it took over ten years for weapons scientists to develop this very large, and potentially very deadly, technology were many, but a lack of adequate scientific instruments was an outstanding one; not least among those instruments were computers.

Philosophical and historical studies of technology are still underrepresented within contemporary studies of science. Yet focusing on technology is crucial to understanding modern science, since the latter is ever more dependent on the former. Scientific computing emerged in the 1940s and 1950s as one of the most prominent parts of the technological infrastructure of modern physics, helping to shape the course of nuclear weapons research and development. While sociologist Donald MacKenzie has suggested that the fantastic computational needs of nuclear weapons research and development strongly shaped modern computer development and even computer architecture, it is important to note that the relationship between computing and nuclear weapons science did not flow in only one direction.

Modern scientific practice is based on elaborate instrumentation. Scientific computing, for example, is routinely used for simulations of complex scientific processes. Joe Pitt has argued that it is the technological infrastructure of science, rather than science itself, which is responsible for the monumental changes taking place in science today. Thus, science on its own is not responsible for how knowledge changes. Pitt (1995) has claimed that "The picture of science as the major mover responsible for the transformation of knowledge is inaccurate because it leaves out the role of technology. . . . Mature sciences are increasingly characterized by their technological infrastructure." This is particularly true of twentieth-century, big-government-sponsored physics, where much emphasis was placed on building elaborate reactors, accelerators, and other instruments. The nuclear weapons complex (a nascent enterprise in the 1940s) was a central part of this trend, and indeed represented one of the first large government-sponsored scientific programs to become embedded in a technological infrastructure.

More specifically, physicist Edward Teller's "Super" thermonuclear weapon theory posed a nearly intractable problem in 1942. The technology with which weapons scientists could determine the weapon's viability would not be completed for nearly ten years.

Los Alamos's employment of the ENIAC for a hydrogen weapon calculation was not only novel in 1945; it signaled the beginning of a crucial relationship between the nuclear weapons complex and computers. In the postwar years, weapons scientists initially saw electronic digital computers as necessary to developing a thermonuclear weapon. Moreover, computing was the bottleneck that, in historical perspective, draws attention to the technological infrastructure within which the atomic laboratory had to operate.

The idea for a thermonuclear weapon occurred to physicist Enrico Fermi in 1941, one afternoon after lunching with Edward Teller in New York, when the Italian physicist pondered aloud whether an atomic weapon, which was already in prospect, might be used as a trigger for a deuterium (D) weapon. In principle, a bomb that fused hydrogen into helium would be far more economical and would produce a much greater explosion than a fission device. Inspired by Fermi's suggestion, Teller took up the cause of exploring a fusion weapon (see Rhodes, 1986, and Rhodes, 1995).

Igniting deuterium alone would require temperatures of hundreds of millions of degrees, so Teller's colleague Emil Konopinski suggested that tritium (H-3) be added to the mixture to lower the ignition temperature. These ideas became the basis of the "Super" weapon—a fusion bomb that in principle would create an explosion on the order of megatons.
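The reactions at stake can be written out explicitly; the figures below are standard textbook values rather than numbers drawn from the article's sources:

    \begin{align*}
      \mathrm{D} + \mathrm{T} &\rightarrow {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}) \\
      \mathrm{D} + \mathrm{D} &\rightarrow \mathrm{T} + p + 4.0\ \mathrm{MeV} \\
      \mathrm{D} + \mathrm{D} &\rightarrow {}^{3}\mathrm{He} + n + 3.3\ \mathrm{MeV}
    \end{align*}

The deuterium-tritium reaction not only releases more energy per event; it also proceeds far more readily at attainable temperatures, which is why a tritium admixture lowers the temperature needed for ignition.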

The idea was exceedingly difficult to understand. During the war Teller and his group attempted to work out the Super theory analytically (by hand), only to find that it was more complex than anyone, even Teller, had originally imagined. One major obstacle to the Super appeared in the form of energy dissipation: the incredible speed of all the reactions inside the deuterium would make it difficult to deliver the energy needed to reach the ignition point in a short enough time. Furthermore, the inverse Compton effect would cool the electrons in the heated hydrogen through collisions with photons coming from the fission initiator.

Hydrogen bomb calculations involve charged particles in addition to neutrons. Ignition of the Super required heating the material to a critical temperature rather than assembling a critical mass (see Metropolis and Nelson, 1982, p. 355). Thus, calculating whether or not the Super could be ignited was a high hurdle; so was calculating whether or not a cylinder of deuterium, once ignited, would sustain a propagating burn. These two aspects constituted what came to be known at Los Alamos as the "Super Problem."

How could the Super problem be calculated? Enter mathematician John von Neumann, who had worked at Los Alamos during the war and was not only aware of but intrigued by the prospect of a fusion weapon. Von Neumann arranged to have the Super problem run on the ENIAC.

Only the first part (ignition) of the Super problem was run on the ENIAC in 1945 and 1946. The entire calculation was meant to predict the behavior of deuterium-tritium systems corresponding to various initial temperature distributions and tritium concentrations. Collectively, the calculations attempted to predict whether or not a self-sustaining nuclear reaction would occur and ignite a cylinder of pure deuterium.

The calculations were only one-dimensional. Because of the ENIAC's memory limits, several effects had to be left out of the problem. Though the Los Alamos problem was the most complicated calculation of its time, even using 95 percent of the ENIAC's control capacity, it did not truly answer the question of whether or not a Super could be ignited, much less whether it would propagate (see Harlow and Metropolis, 1983). Mathematician Stanislaw Ulam once gave his opinion on this calculation:

The magnitude of the problem was staggering. In addition to all the problems of fission . . . neutronics, thermodynamics, hydrodynamics, new ones appeared vitally in the thermonuclear problems: the behavior of more materials, the question of time scales and interplay of all the geometrical and physical factors. . . . It was apparent that numerical work had to be undertaken on a vast scale (quoted in Aspray, 1990, p. 47).
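To get a feel for the mismatch between problem and machine, consider a rough sketch of the storage arithmetic. The zone counts and variables per zone below are hypothetical illustrations of my own, not figures from the Los Alamos calculation; only the ENIAC's roughly 1,000 bits of internal memory comes from the account above.

    # Rough illustration with hypothetical grid sizes (not the actual Super problem):
    # how much storage one snapshot of a finite-difference mesh would need,
    # compared with the ENIAC's roughly 1,000 bits of internal memory.

    ENIAC_BITS = 1000        # approximate internal (accumulator) storage of the ENIAC
    WORD_BITS = 34           # a 10-decimal-digit number carries about 34 bits of information

    def mesh_storage_bits(zones_per_axis, dimensions, variables_per_zone=5):
        """Bits needed to hold one time step of a mesh with the given shape."""
        zones = zones_per_axis ** dimensions
        return zones * variables_per_zone * WORD_BITS

    for dims in (1, 2):
        bits = mesh_storage_bits(zones_per_axis=100, dimensions=dims)
        print(f"{dims}-D mesh, 100 zones per axis: {bits:,} bits "
              f"(about {bits // ENIAC_BITS:,}x the ENIAC's internal memory)")

On these assumed numbers even a one-dimensional snapshot exceeds the machine's internal storage many times over, and a two-dimensional one is hopeless, which is consistent with the restriction to one dimension, the omission of several effects, and the need to carry intermediate results outside the machine on punched cards.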

The difficulty of the problem exceeded the technology of the time. Even Teller, optimistic about the Super's feasibility, realized this and acknowledged the ENIAC's limitations. He recommended that attention be paid to developments in high-speed electronic calculators; the thermonuclear calculations done so far indicated that the complexity of the problems required at least an instrument like the ENIAC. In 1946, however, there simply were no other large machines available to Los Alamos besides the ENIAC. The Super problem would have to wait.

Far from Los Alamos, construction of large computers was underway, but slowly, and most would not be ready for several years. Since no new computer seemed likely to be available quickly enough, the Los Alamos laboratory decided not to wait and began building its own electronic digital computer, intended to be an exact copy of the one von Neumann was building at the Institute for Advanced Study (IAS) in Princeton.

Teller and his colleagues had hoped that either the IAS computer or the Los Alamos equivalent would be able to carry out a full simulation of the Super by 1949 or 1950, but construction of both computers fell far behind schedule. Growing impatient for any electronic machine to be ready, Ulam and University of Wisconsin mathematician C. J. Everett attempted to solve the ignition part of the Super problem with slide rules and hand computers, doing simplified calculations; still, they were not certain whether the device would ignite, even though their results looked negative. (See Mark, 1974, and Ulam, 1976.) This was a problem, then, that weapons scientists felt could not be solved analytically with any certainty, at least not in a reasonable amount of time. Von Neumann once estimated that completing a hand computation of the Super problem would require 100 hand computers and four years' time.
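The scale of that estimate can be made concrete with a back-of-the-envelope calculation. The working hours per year, the seconds per hand multiplication, and the ENIAC multiplication time below are my own rough assumptions, not figures from von Neumann or from the sources cited here:

    \begin{align*}
      100 \ \text{computers} \times 4 \ \text{years} \times 2{,}000 \ \tfrac{\text{hours}}{\text{year}} &\approx 8 \times 10^{5} \ \text{person-hours of hand work} \\
      \text{speedup} \approx \frac{\sim 10 \ \text{s per hand multiplication}}{\sim 2.8 \ \text{ms per ENIAC multiplication}} &\approx 3{,}500 \\
      \frac{8 \times 10^{5} \ \text{person-hours}}{3{,}500} &\approx 230 \ \text{machine-hours}
    \end{align*}

Raw arithmetic speed, in other words, promised to compress years of desk work into days or weeks of machine time. But speed was not the binding constraint: it was the ENIAC's tiny memory, and the unavailability of larger machines, that kept the full Super problem out of reach.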

Teller wrote to von Neumann in May 1950, lamenting that the laboratory was in a "state of phenomenal ignorance" about the Super. At this time there was still no machine calculation that had unquestionably proved or disproved the feasibility of the Super, and Teller believed that this ignorance was due to the lack of fast computers. (Personal accounts of some of these difficulties can be found in Bethe, 1982, and Los Alamos Historical Society, 1996.)

Teller was disillusioned and depressed by the end of 1950 because his Super weapon had still not been shown to be either workable or unworkable. Moreover, Teller and his colleagues were under growing political pressure at least to test a hydrogen weapon now that the Soviet Union had demonstrated that it possessed an atomic weapon. At Los Alamos, Ulam produced the first breakthrough toward a workable but very different thermonuclear bomb, which he presented to Teller in January 1951. Subsequently, Teller and a young protégé, Frederic de Hoffmann, produced a second crucial part of the new thermonuclear configuration. Teller named this device the "Sausage," perhaps ironically, since it was an easier device to calculate than the Super configuration (see Hewlett and Duncan, 1972). Collectively, the ideas of Ulam, Teller, and de Hoffmann constituted a new thermonuclear system that appeared viable on paper; it is now more commonly known as the Teller-Ulam configuration. But even the Sausage would have to be calculated and tested.

To expedite calculations and to help Los Alamos overcome its shortage of theoretical help, Teller's friend and colleague John Archibald Wheeler set up his own group at Princeton University to calculate part of the new thermonuclear configuration; the laboratory needed as many available electronic machines as it could get. Wheeler's secret project was code-named "Matterhorn-B" (B for bomb). It was there that the Sausage calculations were done, and there that two-dimensional hydrodynamic problems began to indicate the feasibility of the burning of deuterium in the Sausage.

In the spring of 1952 the Los Alamos computer—the MANIAC (Mathematical Analyzer, Numerical Integrator, and Computer; "Gamow" was its alternative name)—was completed, and it too was put to work on cylindrical radiation implosion calculations for the Sausage rather than on the Super problem. These calculations hinted at success for the upcoming Ivy Mike test of November 1952, which yielded proof of Teller's fantasy of a multimegaton explosion.

By choosing to develop the radiation implosion Teller-Ulam configuration, nuclear weapons scientists bypassed the Super problem. The Teller-Ulam configuration, although itself a difficult model to calculate, was easier to compute than the Super, a full calculation of which was not done on a computer until the late 1960s, on the Control Data Corporation's 6600 computers. Furthermore, the new configuration coincided with what Los Alamos Theoretical Division leader J. Carson Mark called the "log-jam" in computing. Weapons scientists were bound by the limits of technology and thus had to adjust their research program, under political pressure, with the limits of the technological infrastructure in mind.

Did computing affect the way nuclear weapons scientists acquired knowledge about their work, or even alter the way they went about that work? In the early program, when Los Alamos's main focus was on the Super configuration, the lack of adequate computing retarded the hydrogen bomb project: the bottleneck prevented weapons scientists from acquiring any detailed knowledge about the device's feasibility or functioning.

For the nuclear weapons laboratory, computing also allowed knowledge to be acquired much faster than ever before, thus drastically affecting the pace of knowledge production. It is important to note that scientific computing, as opposed to business computing, was still a very new technology in the postwar period. Thus, part of the technological infrastructure encompassing nuclear weapons science was still being formed alongside two- and three-stage hydrogen weapons. (For other sources, see Goldstine, 1972, and Hansen, 1995.)

In this sense, the practice of this particular, and very secret, science and its technological support system grew up together. The Manhattan Project itself is often referred to as marking the beginnings of "big science," yet it is perhaps more appropriately characterized—as historian Thomas Hughes and others have suggested—as "big technology," in that a massive technological system had to be established in the form of nuclear materials production reactors, metals fabrication plants, and numerous other facilities. This massive support structure helped make it possible to develop the atomic device in a short period of about three years.

The H-bomb, on the other hand, took much longer, largely because of the lack of an adequate technological support structure, above all computing power, as I have attempted to demonstrate here. What more can we learn from the thermonuclear weapons case study? Mainly, that it is valuable for philosophers and historians to look beyond the strictly scientific part of scientific activity; focusing on the instruments provides more fertile ground for research and a fuller perspective on scientific practice and its historiography.

REFERENCES

Aspray, William. 1990. John von Neumann and the Origins of Modern Computing. Cambridge, MA: MIT Press.

Bethe, Hans. 1982. "Comments on the History of the H-Bomb." Los Alamos Science, Fall 1982: 43-53.

Goldstine, Herman H. 1972. The Computer from Pascal to von Neumann. Princeton: Princeton University Press.

Hansen, Chuck. 1995. The Swords of Armageddon: U.S. Nuclear Weapons Development since 1945. Sunnyvale, CA: Chuckelea. CD-ROM.

Harlow, Francis, and N. Metropolis. 1983. "Computing and Computers: Weapons Simulation Leads to the Computer Era." Los Alamos Science, 132-141.

Hewlett, Richard G., and Francis Duncan. 1972. Atomic Shield: A History of the United States Atomic Energy Commission, vol. 2: 1947-1952. U.S. Atomic Energy Commission.

Los Alamos Historical Society. 1996. Behind Tall Fences: Stories and Experiences about Los Alamos at Its Beginning. Los Alamos: Los Alamos Historical Society.

Mark, Carson. 1974. "A Short Account of Los Alamos Theoretical Work on Thermonuclear Weapons, 1946-1950." LA-5647-MS. Los Alamos, NM: Los Alamos Scientific Laboratory.

Metropolis, N., and E. C. Nelson. 1982. "Early Computing at Los Alamos." Annals of the History of Computing 4:4 (October): 348-357.

Pitt, Joseph C. 1995. "Discovery, Telescopes, and Progress." In J. Pitt, ed., New Directions in the Philosophy of Technology. Dordrecht: Kluwer. Pp. 1-16.

Rhodes, Richard. 1986. The Making of the Atomic Bomb. New York: Simon and Schuster.

Rhodes, Richard. 1995. Dark Sun: The Making of the Hydrogen Bomb. New York: Simon and Schuster.

Ulam, Stanislaw. 1976. Adventures of a Mathematician. New York: Scribner's.