
Volume 5, Number 1, Fall 2000

ORGANIC NECESSITY: THINKING ABOUT THINKING ABOUT TECHNOLOGY

Davis Baird
University of South Carolina

1. JOE PITT AND THE ATOMIC BOMB

J. Robert Oppenheimer described the reason he and others pursued the development of the atomic bomb as follows:

But when you come right down to it the reason that we did this job is because it was an organic necessity. If you are a scientist you cannot stop such a thing. If you are a scientist you believe that it is good to find out how the world works; that it is good to find out what the realities are; that it is good to turn over to mankind at large the greatest possible power to control the world and to deal with it according to its lights and its values (Smith and Weiner 1980, p. 317).

There are two aspects of Oppenheimer's remarks that I would like to focus attention on as a route into commenting on Joe Pitt's Thinking about Technology. The first is Oppenheimer's overriding concern with what I will call epistemic values. It is good to find out 'how the world works,' 'what the realities are.' Oppenheimer expresses an overwhelming concern to know. Once scientists know, they can 'turn over to mankind at large the greatest possible power to control the world.' Mankind at large may then 'deal with it according to its lights and its values.' In short, epistemic values (where the realities are) come ahead of moral, political or ideological values (mankind's lights and its values). The second is Oppenheimer's belief that, for scientists anyway, it was an 'organic necessity' that drove work on the bomb. It was not optional. Pursuing or not pursuing this work was not a freely taken decision. Powerful forces compelled, by an organic necessity, work on the bomb.

A concern with epistemic values and technological determinism are two of the central features of Pitt's book. Curiously, however, I would guess that Pitt would agree with Oppenheimer's assessment of the importance of cognitive values, but disagree with Oppenheimer's claim that scientists were driven by some kind of 'organic necessity.' Examining this parting of the ways between Oppenheimer and Pitt is one useful route into understanding Pitt's book.

But Oppenheimer's remarks are even more useful for me. For I disagree with Oppenheimer in exactly the opposite way from the way I take Pitt to disagree with him. I do not endorse a strong privileging of cognitive values over other values. I do not think it is up to the scientists 'to turn over to mankind at large the greatest possible power to control the world,' and then let us deal with this power as we see fit. Neither do I believe in a strong separation of social roles. Scientists are 'us.' Nor do I believe that cognitive values are all that do or should underlie decisions about research. At the same time, I am persuaded that there is a kind of 'organic necessity' to the work that went into developing the bomb. Once we saw how a given doable research program would lead, with reasonable likelihood, to a tremendous power, there was a kind of compulsion to pursue that program, to find 'where the realities are' in respect to this tremendous power. This is a strong compulsion. Even when we know that this knowledge will bring us no good, we are drawn down this path. This is not a claim I defend concerning the atomic bomb project. However, I do believe such a claim could be argued for research on aging.

In a sense I have a tragic picture of the human relationship to knowledge where Pitt has a comic picture. In my view, forces beyond our control, although instantiated in and through us, compel us to open Pandora's box. Pitt believes that we should find out what the facts are and act accordingly. We are in the driver's seat.

2. 'MT' OR 'MODEL OF TECHNOLOGY' OR 'MY THEORY'

At a general level, Pitt does three things in Thinking about Technology. He defines technology as 'humanity at work,' and constructs a model for how humanity goes about being at work. He examines the relations between science and technology with respect to knowledge, explanation, and epistemic priority. And he argues at length for placing epistemic values ahead of moral, political or ideological values. For Pitt, finding out what the realities are comes ahead of ideology, politics and ethics. This is not to say he advocates doing unethical things to ascertain the facts. Rather it is to say that, for Pitt, ascertaining the facts is a primary good that stands alongside, and independent of, moral, political and ideological goods. Pitt believes that his model for how humanity goes about being at work shows that we have control. Technology is no monster and we do not constitute a collective Dr. Frankenstein. Pitt calls on the heroic picture of scientific inquiry: finding out what the realities are and deploying these realities for humankind's betterment.

Pitt provides a model of technology that he gives the acronym 'MT,' without telling us what 'MT' stands for. Perhaps it is his Model of Technology, or perhaps it simply is his theory, 'My Theory.' The central focus of his model is human decision-making. His model concerns the decisions we make as we construct human and artifact systems for improving work, that is, for improving technology or 'humanity at work.' There are three basic components to his model. Two components can be characterized in input/output terms. He calls them transformations. First order transformations are the decisions people make when confronted with a problem. As input, first order transformations take an 'established knowledge base' and a problem or set of problems. The output of a first order transformation is a decision to do something about the problem. Second order transformations change the material conditions that confront us. Pitt's example is an oil refinery: crude oil in, gasoline out. The third part of Pitt's model is assessment. Pitt (1999, p. 22) advocates 'The Commonsense Principle of Rationality (CPR): Learn from experience.' Thus, when we do not like the outcome of one of our first order transformations (one of our decisions), or when we do not like the outcome of a second order transformation (some artifact or 'refined stuff'), this response serves as an input problem to another first order transformation. We do something about it.
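To fix ideas, here is a minimal sketch, in Python, of the feedback loop MT describes, using Pitt's refinery example as the toy problem. The rendering is mine, and every name and number in it is an illustrative assumption; Pitt offers no formal algorithm.

    # A self-contained sketch of Pitt's MT as a feedback loop. The names
    # and the toy refinery numbers are illustrative assumptions, not Pitt's.

    def first_order(knowledge_base, problem):
        # First order transformation: an established knowledge base plus a
        # problem in, a decision to do something about the problem out.
        return knowledge_base.get(problem, "investigate further")

    def second_order(decision, conditions):
        # Second order transformation: changes the material conditions that
        # confront us (crude oil in, gasoline out, in Pitt's example).
        conditions = dict(conditions)
        if decision == "refine":
            conditions["crude oil"] -= 1
            conditions["gasoline"] += 1
        return conditions

    def assess(conditions):
        # CPR, 'learn from experience': an unsatisfactory outcome is fed
        # back in as a new input problem; a satisfactory one ends the loop.
        return "fuel shortage" if conditions["gasoline"] < 3 else None

    knowledge_base = {"fuel shortage": "refine"}
    conditions = {"crude oil": 5, "gasoline": 0}
    problem = "fuel shortage"
    while problem is not None:
        decision = first_order(knowledge_base, problem)
        conditions = second_order(decision, conditions)
        problem = assess(conditions)
    print(conditions)  # {'crude oil': 2, 'gasoline': 3}

The sketch's only point is the shape of the loop: first order transformations issue decisions, second order transformations change material conditions, and assessment feeds unsatisfactory outcomes back in as new problems.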

There is a lot to like about Pitt's book. I like his pragmatic basis. The resolution of problems by human decision making is a useful way to think about technology. Discussions about technology should be connected with human decisions and actions. I like the connections Pitt focuses on between science and technology. He resists the attempt to fold one into the other, while recognizing significant connections between the two. I like the fact that he is adamant that philosophy of technology should integrate philosophical considerations about technology into the broader philosophical project and, vice versa, that any broad philosophical project should recognize the importance of understanding technology. Philosophy of science has always received more attention from philosophers, but philosophy of technology, given its broader and deeper impact, surely should receive more attention. I think Pitt is right in arguing that until philosophy of technology embraces the full range of philosophical concerns, from epistemology and metaphysics to ethics, it will continue to play second fiddle to philosophy of science, to the detriment of all.

3. ON PITT'S HIERARCHY OF VALUES

That said, I disagree with Pitt on two central points, exactly those points where, as I take it, our responses to Oppenheimer differ. I do not believe in a hierarchy of values with cognitive values on top. Pitt does. I do believe in a kind of autonomy of technology. Pitt does not. In this section I examine the first of these points of disagreement. In the next I examine the second.

Pitt likes hierarchy. He likes analytical hierarchy. Thus, in Pitt's view, one must do philosophy of technology before history of technology:

The kind of relationship we seek to expose is conceptual. That is, the understanding we seek is not to be found in the examination of the history of technology, nor in history of science, since the proper construction of such histories requires prior answers to the sorts of preliminary questions we are asking here (p. 25).

Pitt's first chapter is 'Looking for Definition: Epistemology and Technology,' and there he arrives at his iconoclastic definition of technology as 'humanity at work' (p. 11). This implies that my Atmos clock, sitting quietly ticking away in my empty office while I am on sabbatical, is not an instance of technology, while my writing this philosophy paper is.

Pitt argues that there are two stages to criticism:

Part of what I have been arguing is that before we engage in assessing blame and in social criticism, we need to know what really is or was going on. That is accomplished at Stage One of a two-stage process. Properly done, social criticism is a Stage Two activity. Stage One activities are designed to uncover the facts of the situation (pp. 53-54).

At this point Pitt adds a footnote, "Before this claim gets laughed out of court by the postmodernists, let it be noted that, cynicism about knowledge aside, there is a world 'out there,' and things do happen in it" (p. 54).

Later he writes, 'Once again we find yet another area in which epistemological factors must be settled before social criticism can make serious headway' (p. 119, emphasis added). Pitt has a clear vision that first things must come first. First we define our terms, then we find out what the realities are, and finally, if necessary, we engage in social criticism.

The first point I would like to make about Pitt's love of hierarchy is that it seems to run counter to his underlying pragmatic philosophy (pp. 4-5). Pragmatism does not say: first define your terms, then find out what the facts are, and finally engage in social criticism. Pragmatism says: attend to the problems that confront you with the tools you have at hand. If you need new tools to bring your problems to resolution, invent them. Pragmatism should not start by seeking definition; it should start by identifying a problem. If the problem exposes a need for a better definition, only then should we seek a better definition. I think this point applies generally to Pitt's love of hierarchy. We need only engage in a two-stage process if there is a problem ascertaining what's going on. And even then, arguments could be put forward that how we ascertain what is going on is influenced by our moral, political and/or ideological commitments. If this poses a problem for Pitt in a specific case, he needs to show how, in such a case, we can ascertain what is going on in an ideologically neutral way, and why we should want to. There is no a priori commitment on pragmatism's part to a two-stage process.

The second point I want to make about Pitt's love of hierarchy is that, while it sounds appealing first to ascertain the facts and then take a moral, political and/or ideological stance with respect to them, such a separation of activities is not possible. In finding out what the facts are, we are always confronted with three problems. First, we have to decide which facts to learn about first. Second, we have to decide to what lengths we are willing to go to ascertain these facts. Third, we have to determine just how certain we require ourselves to be before we accept and/or act on 'the facts.' All of these decisions involve values that are not exclusively cognitive. Which facts we seek to know depends on our interests and the problems we seek to resolve. (Pitt's pragmatism captures this point very well.) The lengths to which we go to ascertain the facts depend on our resources and on how pressing knowing these facts happens to be. Deciding how confident we have to be to act on our knowledge is a matter of trading off our resources against our perception of the risks involved. As all of these issues, in most cases, are resolved at the level of a community whose individual members would answer the questions differently, political power dynamics also play a role. Many values beyond cognitive values are involved in ascertaining the facts.

These problems are resolved differently in different cases. There is no a priori set mixture of values that sorts out our fact-finding activities. Consider some bits of the history of intelligence testing in the twentieth century. In 1915, Robert M. Yerkes, a Harvard psychologist, sought to make 'soft' psychology into a 'harder' science. He did this by pursuing pencil-and-paper mental tests. As World War I broke out, he convinced the Department of the Army to allow him to test all 1.75 million new recruits. He went to some lengths to ensure that his tests were independent of language and cultural bias. Unfortunately, a contemporary critique shows that he did not go far enough. But he went further than was common in his day. His tests 'demonstrated' that the average intellectual age of Americans was 13.08. The tests also demonstrated that there was a strong relationship between country of origin and mental age. Russians had a mental age of 11.34; Italians, 11.01; Poles, 10.74; Africans, 10.41. These results became a key part of the move to restrict immigration into the U.S. The 1924 Immigration Restriction Act restricted the number of people from each nation who could immigrate to the U.S. to 2% of the number of people from that nation in the U.S. in 1890. Why 1890? Because a large wave of southern European immigration occurred after 1890.[1]

In theory, Yerkes was doing just what Pitt advocates. He was trying to ascertain the facts first. Then, once we realized that the immigration of dumb southern Europeans, perhaps including Sicilian realists, was sending our national intelligence down the toilet, we did something about it. But at each stage of the story decisions were being made: to operationalize intelligence in terms of pencil-and-paper tests with particular questions, to test millions of army recruits, to examine intelligence according to country of origin, to act on the results given reasonable confidence in their certainty.

This result is striking because it exposes the non-cognitive values that went into the decisions that were made along the way. Testing intelligence in this particular way, focusing on racial, ethnic and national group differences: these were not the only options open to psychologists in the early decades of the twentieth century. Making an atomic bomb was not the only available option for physicists in 1939. The choices to find where these particular realities lay were driven by many values, not all of them cognitive. But even cases where we are now satisfied with the outcome call on such non-cognitive values. In Pitt's own example, Galileo's decision to investigate the heavens with a telescope was not the only investigative option open to him. He took this option for a variety of reasons, which included his need to court the Court and his pugnacious desire to show up the academics (see also Pitt 1991; Biagioli 1993; Pitt 1999, pp. 92-99).

My third and final point about Pitt's hierarchical separation of cognitive values from non-cognitive values is that there are points where these values collide or collude in a single concept. My favorite example of this is the concept of objectivity. On the one hand, objective methods are supposed to be methods that are value-free. They operate independently of human biases. They provide a straight conduit to the truth. Pitt should like objective methods. At the same time, objective methods are supposed to be fair, morally fair. They do not privilege any one person's interests over another's, and this, we believe, is a moral good. Thus, within this single concept we simultaneously encapsulate both cognitive and moral values. This allows for a kind of equivocation: calling something objective because of its cognitive status and drawing a moral conclusion about it. But it is not exactly equivocation, since objectivity is a hybrid concept; both moral and cognitive factors are invoked in the concept itself.

Intelligence testing again provides a nice example. There is a complicated story here involving the development of a meritocracy based on intelligence tests of various sorts, the machine grading of these tests, and the accuracy and the fairness of it all. Objectivity sits right in the middle of this conceptual and political tangle. Objective tests provide an important guarantee of fairness, and if we are going to use these tests to implement a meritocracy, proof of fairness is essential. But objective tests just are those tests that can be graded by machine. These are 'fill in the bubble' tests with questions that have multiple answer options, one of which can be machine graded as unambiguously correct. The alternative, 'constructed response' tests, demand essays that have to be graded by humans, and this introduces subjectivity and error.[2]

Consider the following remarks by Henry Chauncey, the first president of the Educational Testing Service (ETS) and a central figure in promoting the wide adoption of cognitive ability testing, notably the Scholastic Aptitude Test (SAT), for admission to schools, jobs, the armed services, and so on. During the late 1940s, Chauncey worked with the Harvard psychologist Henry Murray to develop a test that would yield a more general profile of human personality than the SAT. In January 1950 he abandoned this collaboration, writing to his second-in-command at ETS:

I personally am convinced that the laborious and subjective methods that he [Murray] uses are not going to result in … any effective measurement of personality traits… I personally am not so much interested in obtaining an absolutely complete understanding of each individual as I am in identifying and measuring some important factors that will be useful on an actuarial basis in the prediction of success (quoted in Lemann 1999, p. 89).

Murray's tests, which had to be graded by humans, were 'laborious and subjective.' Chauncey's machine-graded tests are objective, but this does not mean they provide an 'absolutely complete understanding of each individual.' They need only be good enough to be 'useful on an actuarial basis.' More recently, a RAND Corporation report tells us that pencil-and-paper tests are a 'cheap way to fix problems' (McDonnell 1994, p. 23).

In this context, I find it helpful to think of objectivity in terms of an equilateral triangle with 'accuracy' at one vertex, 'fairness' at a second vertex and 'cost' at the third. The multifaceted joint concept of objectivity calls on all three vertices, but trade-offs are made. Chauncey would accept somewhat less accuracy for lower cost and the fairness that machine grading insured. There is no a priori way to determine how these trade-offs should be made. In any given use of the concept of objectivity, the actual degree to which each of these pure poles (accuracy, fairness, and cost) is built into 'objectivity' is a contingent matter. At a given time, we can think of the concept of objectivity as a point somewhere inside this triangle. Over time, this point wiggles around within the triangle as different aspects of the concept get emphasized.
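Put slightly more formally (the notation is mine; nothing in the argument depends on it), the concept at a given time t can be written as a point in barycentric coordinates over the three vertices:

    O(t) = a(t)A + f(t)F + c(t)C,   with a(t) + f(t) + c(t) = 1 and a(t), f(t), c(t) ≥ 0,

where A, F and C mark the accuracy, fairness and cost vertices. Chauncey's trade-off is then a shift of weight out of a(t) and into f(t) and c(t), and the historical wiggling of the concept is just the trajectory of the point O(t) inside the triangle.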

The significant point for my current concerns is that this single concept synthesizes cognitive, moral and indeed economic values. How it does so is historically contingent and not entirely stable over time. Such concepts do not allow for the kind of cognitive hierarchy of concepts that Pitt desires.

4. ORGANIC NECESSITY

Pitt does not like the idea that our technologies constrain us. Our technologies are our tools. They are the systems we make (and unmake and remake) to help us get on with our work. It is our decisions, which Pitt models with 'MT,' that control the technosphere we live in, not vice versa. We make decisions, not our technologies. Like a true stoic, he writes, 'it is the perception, or lack of it, that people have of the usefulness of a new product that determines the extent to which they are willing to make concessions in its direction' (1999, p. 91, emphasis in the original).

Unfortunately, I must remind Pitt of his own reminder to the postmodernists: 'there is a world "out there," and things do happen in it' (Pitt 1999, p. 54n). When I learned how to type in 1970, I had to learn the 'qwerty' keyboard. Never mind that this is a keyboard designed to have the most frequently used letters typed by the weakest fingers. Never mind that it was developed to slow down typing because the early typewriters were not mechanically able to keep up with proficient typists. Qwerty became the standard, and, whether one liked it or not (and in 1970 I did not), typewriters were made with the qwerty keyboard. This was not optional. Even now, when we can change keyboards with a simple, individualizable bit of software, qwerty hangs on (Pool 1997, pp. 159-161).

Qwerty reminds us of two important features of technology. System is one central aspect of many technologies. Most humans, when they work, work in groups, and group needs frequently override individual needs. A keyboard standard was needed in the late nineteenth century when the typewriter was developed, and qwerty became that standard. With this standard there could be standardized production of typewriters and standardized training of typists. Typewriters could move around offices from one typist to another, and so on. Qwerty also reminds us of the historical momentum that technological systems frequently attain. Qwerty was introduced to cope with typewriters that could not keep up with typists. Typewriters got better, yet qwerty remained. The 'installed base' of qwerty typewriters and qwerty typists was (and remains) too large for it to be economical to switch to a 'more rational' keyboard layout.

Historical momentum and the need for system-wide standardization produce a kind of technological imperative. As individuals we are not at liberty to choose our preferred technological poison. To a considerable extent (although not completely, of course), group demands for standardization choose for us. Neither are we, as a group at a given time, at liberty to choose our preferred technological poison. Historical momentum, from choices made long ago, chooses for us. Qwerty is a well-known example. Robert Pool documents a similar case for the development of nuclear power (1997, particularly chapters 1 and 2).

Resource issues frequently drive this kind of technological imperative. Systemic standardization is cheaper, in many diverse respects, than letting every individual go his or her own way. Once a large amount of resources has been committed to a given technology, the cost of 're-tooling' simply can be too high. It can be cheaper to stay with a less than ideal standard. Put in these terms, Pitt would be right to point out that, at least in some collective sense, we control the technologies we use. In our efforts to control these technologies, several competing values pull on our decisions. Cost, almost always, is a compelling value, and various other desirable qualities frequently are traded off against cost. There is no autonomous monster out there, Technology, that forces us to use the qwerty keyboard, or to build nuclear power plants that are less than perfect. We choose our technologies, and one of the important factors we consider in making our choices is cost. True enough.

But the question of a technological imperative cannot rest at this point. Let us accept, for the sake of argument, Pitt's claim that human decisions drive the developments of our various technologies. Let us accept Pitt's input/output model for technology, 'MT.' On this model, we distinguish the inputs that go into these decisions (the current state of things, including the artifacts and human systems in place and the available resources; perceived problems; and the extant value system for making choices) from the outputs that result (decisions to organize ourselves and our things in various ways). Put in these terms, three kinds of inputs drive decisions: values, perceived problems and the current state of things.

Consider one of these inputs, the current state of things. The fact that this serves as an input into decisions about what to do means that where we go from one point depends on that point. Otherwise the current state of things would not have to be an input in the decision problem. But this means that we cannot go just anywhere from a given point. There are historical constraints on where we can go, on how we can deal with our perceived problems. These historical constraints are beyond our control, for we do not get to choose our moment in history.
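Stated as a bare recurrence (again, the notation is mine; Pitt does not formalize MT this way), the point is that decision making is path dependent:

    s(t+1) = D(s(t), v(t), p(t)),   with s(t+1) constrained to lie in R(s(t)),

where s(t) is the current state of things, v(t) the operative values, p(t) the perceived problems, D the decision process, and R(s(t)) the set of states historically reachable from s(t). That the reachable set R depends on s(t), and that we do not get to choose s(t), is the historical constraint.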

Consider another one of these inputs, values. Here again, we are not in control of the values we employ. Of course we can work very hard to increase our consciousness of our value system, and, in so doing, we can put effort into changing values because we do not like what we see. (One can ask: on what basis do we judge that we don't like our value system? Does this not presuppose a value system? But there is no reason to suppose that this kind of circularity is vicious.) But the particular values we employ in our decision making (from aesthetic preferences to a calculated desire for cost-efficiency) are largely a matter of our cultural and biological inheritances. Oppenheimer said that seeking 'the greatest possible power to control the world' was an organic necessity for a scientist. In so saying, he identifies one of the values that drove his culture during the mid-twentieth century. Someone advocating research into the Gaia hypothesis (that the total biosphere of the Earth can be understood as a kind of organism on which we depend, and of which we are only a part) would not have gotten anywhere in that culture.

Occasionally when I meet people and they find out I am a philosopher, they will recall their own experiences in philosophy; alas, frequently these are not fond memories. On one such occasion I met a man on the Theatre faculty at New York University. At one time he had been a graduate student in philosophy of science. He approached the subject historically and had little patience for formal logical analysis. Unfortunately for him, he happened on philosophy of science when logic chopping was in vogue and no one saw value in an historical approach to the subject. Reluctantly, he left the field for theatre.

These matters, like those concerning the current state of things, are largely beyond our control. We do not get to choose our historical moment.

The kind of technological imperative I am describing, both in respect to how the current state of things directs our decision-making options and in respect to how the values we inherit direct our decision-making preferences, is a consequence of the historical dimension to technology, science and culture. The fact that one's place in history matters for one's decision-making preferences and possibilities, and the fact that one cannot choose one's place in history, implies that at no point in time is anyone or any group simply free to choose. At any time, the state of humanity at work (of technology) at that time plays an essential role in directing our possibilities and our preferences.

One of the charges that this historicist picture of technology, science and culture lays on the philosophy of technology is to understand synchronically how the current state of technology bears on decision making at a given time in a given context. Doing so, perforce, grants a degree of autonomy to technology. Another of the charges that this historicist picture lays on the philosophy of technology is to understand diachronically how the state of technology has changed over time. This is a complicated issue. It is a difficult descriptive question to document such changes. (I take this to be one of the contributions of Peter Galison's Image and Logic (1997); Galison focuses on a small, but influential, corner of technology, particle physics during the twentieth century.) Such a study raises the possibility of exposing trends and/or forces directing changes in the state of technology over time. Here we would have a considerably more robust autonomy of technology. Finally, as a third charge, it is important to realize that this is not simply a descriptive matter. Philosophers of technology can play active roles in urging certain values over other values. Given their acquaintance with these matters, they are in a good position to do so. Thus, through their writing they can attempt to alter the value matrix that directs the preferences we bring to our decisions about humanity at work at our moment in history.

This leaves us with a final irony in Pitt's work. He complains at length about 'the social critics' of technology. At one point, Pitt subjects the passage that gave the title to Langdon Winner's book, The Whale and the Reactor (1986, p. 165), to extended and sharp criticism (pp. 72-75). In the passage, Winner describes returning to a California beach near his childhood home. He comes over a bluff and is confronted with a vista that sends him reeling. There

nestled on the shores of a tiny cove, was the gigantic nuclear reactor … a huge brown rectangular block and two white domes… At precisely that moment another sight caught my eye. On a line with the reactor … a California Grey whale suddenly swam to the surface, shot a tall stream of vapor from its blow hole into the air, and then disappeared beneath the waves (Winner 1986, p. 165).

Pitt decries Winner's rhetoric, 'the pitch to the emotions.' Pitt correctly points out that Winner is 'making a series of explicit value judgments.' He complains that Winner is 'pushing an ideology.' As I understand this passage, Winner is attempting to change the value matrix that was in place in the mid-1980s. If successful, this might prompt different decisions about nuclear power. Pitt is right to rail against the idea that we fall helpless before the steamroller of Autonomous Technology. But the social critics whom Pitt trashes are attempting to gain more insight into, and control over, our technologies. They are fighting against an Autonomous Technology, and attempting to realize Pitt's own vision of conscious human decisions creating technologies that offer 'new and promising avenues of human development' (p. 120).

I like Oppenheimer's phrase, 'organic necessity.' It captures two central features of the autonomy of technology. In the first place, it recognizes a kind of autonomy. There is a necessity here. But it is not a logical necessity or an a priori necessity. It is an organic necessity. I understand this to mean that it changes over time, and that it changes in response to our decisions about our technologies. We are not helpless victims of Autonomous Technology. Neither are we Masters of the Universe. The relationship is more complex and interdependent, more organic.

NOTES

1. There are many good places to look into the history of mental testing. S. J. Gould, The Mismeasure of Man (New York: Norton, 1981) is a good place to start, and my material about Yerkes is taken from Gould. A. Jensen, Bias in Mental Testing (New York: Free Press, 1980) provides an alternate view on this history. N. J. Block and G. Dworkin, eds., The IQ Controversy: Critical Readings (New York: Pantheon, 1976), and H. J. Eysenck and L. Kamin, The Intelligence Controversy (New York: Wiley, 1981), are useful dialogues on the subject. N. Lemann, The Big Test: The Secret History of the American Meritocracy (New York: Farrar, Straus and Giroux, 1999) is a more recent history focused on the Scholastic Aptitude Test and the Educational Testing Service.

2. On the connection between machine scoring and objectivity, as opposed to human scoring and subjectivity, see N. Longford, Models for Uncertainty in Educational Testing (New York: Springer-Verlag, 1995); and note that Longford's chapter on grading 'constructed response tests' is titled 'Adjusting Subjectively Rated Scores.' M. J. Allen and W. M. Yen, Introduction to Measurement Theory (Belmont, CA: Wadsworth, 1979) notes that determinations based on individual human judgment are subjective, and therefore more subject to error than those based on objective methods of discrimination.

REFERENCES

Allen, M. J., and W. M. Yen. 1979. Introduction to Measurement Theory. Belmont, CA: Wadsworth.

Biagioli, M. 1993. Galileo, Courtier: The Practice of Science in the Culture of Absolutism. Chicago: University of Chicago Press.

Block, N. J., and G. Dworkin, eds. 1976. The IQ Controversy: Critical Readings. New York: Pantheon Books.

Eysenck, H. J., and L. Kamin. 1981. The Intelligence Controversy. New York: Wiley.

Galison, P. 1997. Image and Logic: A Material Culture of Microphysics. Chicago: University of Chicago Press.

Gould, S. J. 1981. The Mismeasure of Man. New York: Norton.

Jensen, A. 1980. Bias in Mental Testing. New York: Free Press.

Lemann, N. 1999. The Big Test: The Secret History of the American Meritocracy. New York: Farrar, Straus and Giroux.

Longford, N. 1995. Models for Uncertainty in Educational Testing. New York: Springer-Verlag.

McDonnell, L. M. 1994. Policymakers' Views of Student Assessment. Santa Monica, CA: RAND Corporation.

Pitt, J. 1991. Galileo, Human Knowledge and the Book of Nature: Method Replaces Metaphysics. Dordrecht: Kluwer.

Pitt, J. 1999. Thinking about Technology. New York: Seven Bridges Press.

Pool, R. 1997. Beyond Engineering: How Society Shapes Technology. Oxford: Oxford University Press.

Smith, A. K., and C. Weiner, eds. 1980. Robert Oppenheimer: Letters and Recollections. Cambridge, MA: Harvard University Press.

Winner, L. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.