Society for Philosophy and Technology
For purposes of this paper, I will view science very roughly and intuitively as those disciplines concerned with theoretical understanding of what is found out about the world, and technology as the construction of machines or instruments or other material structures designed for whatever purpose. The aim of my analysis will be to throw light on the nature and functioning of science and technology and the relations between them, not to provide essentialist definitions of them. As will be seen, my view is that science, technology, and their relations, far from remaining the same throughout their history, evolve, and do so in intelligible ways.
Many scientific ideas are easily seen as having developed from technological innovations. A paradigmatic example is the case of the science that emerged from the invention of the electric battery. Although electricity in various forms was already known, and various theories had been advanced regarding its nature, the real start of the development of our modern understanding of it, according to this view of the history, came with Galvani's serendipitous discovery, in the 1780s, that a frog's leg would twitch when placed in contact with two dissimilar metals. From this event, or so this interpretation of the history continues, a burst of scientific activity ensued that led from the invention of the voltaic cell through two lines of development in the nineteenth century. On the one hand were applications of the voltaic cell to the understanding of chemical combination and separation, leading successively to the theories of Davy and Berzelius that chemical compounds are produced by electrical attraction between elements, and that, correspondingly, they can be broken down by the application of electrical forces, and the subsequent amendments and corrections of those views; and to the valency theories of Frankland, Kekulé, and others. On the other hand were theoretical and experimental attempts to understand the nature of this electricity produced by the battery, a line of development which led to the Faraday-Maxwell theory of electromagnetism, embracing the hitherto-distinct phenomena of electricity, magnetism, and light in one of those great syntheses which punctuate the history of science. These two lines of development then came together in the early twentieth century, when relativity and quantum mechanics unified the understanding of ordinary (chemical) matter with that of light (and electromagnetism in general), and still more deeply in quantum electrodynamics, which treats both as manifestations of one of the fundamental forces of nature.
Other examples illustrating technology as a source of scientific ideas lie easily at hand. Even more than in the case of the battery, where at least some of the central ideas that would enter the new theories were already available, the focal concepts of temperature, energy, and entropy had to be forged in the wake of the invention of the steam engine before the modern science of thermodynamics became possible. Indeed, the clarification of the concepts was necessary also for some of the technological improvements that were made in the steam engine. Still other cases exist which, to varying degrees, fit the model wherein new inventions--for example, the telescope and microscope, to name only two of the most prominent--played central roles in raising new problems of understanding and in forming theoretical ideas which are clearly scientific in nature.
Certainly it is possible to quarrel with such a simplistic interpretation of these cases. Such terms as "theory," "speculation," and "presupposition" as applied to science are vague enough that one can argue that the technological inventions presupposed some scientific ideas, so that the view that it was the technology that was the origin and science the result would lose at least some of its force. Again, a case could be made for saying that, even if the technology had been one of the causes of the subsequent burst of scientific activity, it was only one among several. However, I will not argue here against the primacy of technology in these cases. For regardless of the modifications that would be brought about by a deeper analysis, it seems to me that a more careful interpretation would have to leave a good dose of the primacy-of-technology thesis untouched. Beyond stimulating the search for practical applications, the discoveries of the battery and the steam engine clearly did inspire a great deal of activity that we recognize as scientific. Even if there were other causes of those spates of scientific activity, and even if the full story of the paths by which those activities ultimately reached relatively stable culminations in thermodynamics and quantum electrodynamics is far richer and more complex than a triumphant march from a technological first step, it nevertheless remains true that a technological event was, in these cases at least, seminal. Rather than being obliterated by more careful analysis, that seminal character would have to be explained by any account with a pretense to adequacy.
One of the concerns of this paper will, then, be with the considerable kernel of truth in the view that some technological innovations have stimulated efforts to gain scientific understanding. I shall try to show that there is an important lesson to learn from such cases, and especially from a comparison of them with ones in which, conversely, scientific understanding leads to new technology. Indeed, though it is a lesson that should be familiar, I will argue that its significance has not been grasped, that failure to grasp it has been the source of some of the major failures in our understanding of both science and technology, and that the lesson itself points the way to a new and more adequate view of the nature of the scientific enterprise and its results. Further, it throws light on the relations between science and technology, and, perhaps still more importantly, locates those two subjects within the context of a larger picture of the human career.
Besides the influences of technology on science, there have been many cases in which the influences went the other way, from science to technology. If what I said earlier about the vagueness of the terms theory and speculation was correct, then, depending on what is meant by speculation, it makes sense to argue, as does Donald Cardwell (1995, p. 491), that Von Guericke's speculations led to the first steam engine, and to give similar diagnoses of the sources of a number of other technological inventions. Among the other scientific ideas to which Cardwell points as leading to technological advances is Hertz's confirmation of the existence of radio waves, predicted by Maxwell's theory. The relative lateness of this event (1888) is suggestive; indeed, another historian of technology, Basalla (1993, p. 28), remarks that, "Only during the latter half of the nineteenth century did science begin to have a substantial influence on industry." The influence of science on technology has, in many areas, been far more extensive and rapid in the twentieth century than it was before; Cardwell (1995, p. 494) contrasts the slowness with which the principles of heat engines were grasped with "the rapid development of electronic engineering in the present century."
The explanation of this increased scope and speed with which technology follows science is obvious: it lies in the fact that we have far more and far deeper scientific understanding now than we had earlier. The capsule summary I gave earlier of the events following on Galvani's discovery and the invention of the voltaic pile, and of other technology leading to the advance of science, brings this out. Science before the latter part of the nineteenth century was piecemeal in its inquiries, often groping to develop previously unheard-of concepts. In two respects which are relevant to the present inquiry, this situation has changed radically in the twentieth century, resulting in a transformation of science, and with it, its relations to technology. First, the piecemeal approach of earlier science--that is, the study of specific subject matters, like the motion of bodies, the nature of salts, the nature of gases, each studied in isolation from other domains--increasingly was succeeded by a new form of inquiry, in terms of increasingly broad and detailed unifications of such isolated subject-matters. Although such unifications had been achieved earlier--for example, in Newton's unification of terrestrial and celestial mechanics, and in the gradual realization that "electricks" were all manifestations of the same agencies--the pace and depth now accelerated enormously. A few well-known examples typify the scope of such unification of understanding. Electricity, magnetism, and light were brought together in Maxwell's theory. That theory, together with its extensions, conceived electromagnetic phenomena as belonging to a single continuous spectrum, from high frequency gamma rays to long wavelength radio waves, thus removing any temptation to think of those various phenomena as belonging to separate domains, requiring distinct explanations. 
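The force of that unification can be made concrete with a small illustrative calculation of my own (not part of the historical account): once gamma rays, visible light, and radio waves are recognized as one phenomenon, a single relation, c = λν, spans the entire spectrum, and the "separate domains" differ only in frequency.

```python
C = 2.99792458e8  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    """Wavelength of an electromagnetic wave from its frequency,
    via the single relation c = lambda * nu that holds across
    the whole spectrum, from radio waves to gamma rays."""
    return C / frequency_hz

# One law covers what were once treated as distinct phenomena:
for name, freq_hz in [("radio", 1e6), ("visible light", 5.45e14), ("gamma ray", 1e20)]:
    print(f"{name}: {wavelength_m(freq_hz):.3e} m")
```

The frequencies above are merely representative samples from each band; the point is that nothing in the law distinguishes one band from another.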
Quantum mechanics clarified the relations between matter and light, showing how the absorption and radiation of light take place, and explaining the characteristics of the electromagnetic spectrum. It also provided explanations of the periodic table, chemical bonding, and the solid state. The process of unification continued, and even accelerated, in the second half of the twentieth century. In physics, the electroweak unification took a very long step in joining quantum electrodynamics, the quantum theory of the electromagnetic force, with the weak force, and considerable progress was made in constructing possible fusions of the electroweak and strong forces. These efforts were united with the Big Bang theory of cosmology to give deep insight into the early moments of the history of the universe, and into the possibilities of its far-distant future. Reasons could be found for seeking yet a further unification, this time of all four of the known fundamental forces of nature, the three quantum forces (electromagnetic, strong, and weak) with the gravitational force. In biology, a "synthetic theory of evolution" was developed, bringing under its scope such diverse subjects as Darwinism, genetics, and paleontology, previously seen as conflicting; and molecular biology brought chemistry into intimate contact with major areas of the life sciences. And so on for innumerable other examples, all seeming to fall into place within a larger view of the nature and history of the universe (see Shapere, 1991). The process is, of course, far from complete, and difficult problems still exist. But the contrast with mid-nineteenth century science and before stands out clearly.
The second major relevant respect in which science has been transformed has been in the vast multitude of details which are embraced within these sorts of syntheses and others. With regard to the examples I have been giving, one need only recall such things as the extraordinary precision with which the characteristics of lines in the electromagnetic spectrum are known, and the exquisite detail with which inferences from those characteristics can be made; the available knowledge of the details of the electronic structure of the chemical elements, and the variety of types of bonding that exist under various known circumstances; of the detailed steps gone through in specific chemical reactions; of the ever-increasing details concerning paths and modes of communication between the cell nucleus, cell contents, and the environment of the cell; and so on and on.
Breadth of unifying scope and richness of incorporated detail: those are the two achievements of twentieth-century science that are relevant here. The reason for their relevance is that it is these characteristics that have most enabled science to raise new questions, to see what is needed and what can be done, to a far greater extent than could be done in earlier science, fragmentary and groping as it so often was. In particular, scientific theories today, for all their still-existing problems (or rather including them), are powerful enough to raise questions that demand specific sorts of apparatus for their solution; the technology of science is, far more than before, a matter for trained scientists, knowledgeable about the theory of their subject and its needs. The designers, builders, and users of particle accelerators, the architects of gravitational wave detectors, genetic engineers, those engaged in rational drug development, must of course still be highly competent in practical ways that mere theoreticians are not, but the competence of such workers must include understanding of the theory and its needs, or must be guided at least in part by others who do have such understanding.
As Bacon noted, knowledge is power. Of primary interest here is the power that knowledge gives to find new knowledge, and new inventions. In these cases, it is clear that the situation in regard to science and the technology relevant to it has come far since the last half of the nineteenth century. It is no wonder that, so often in earlier science, technological inventions were important spurs to science. In the absence of deeper knowledge, something outside science usually had to be the spur, to lead the way. But today, it becomes less and less likely (though not impossible) that someone who is an utter outsider to science--an army officer like Sadi Carnot, a travelling salesman like Nicolaus Otto--will invent a device which will propel particle physics, say, into brand new areas and ways of thinking; and although there is still the occasional physicist who moves over to biology and provides original new insights, neither new ideas nor new inventions are likely to come without the interloper acquiring a detailed knowledge of the scientific field concerned. It is no wonder, either, that today's science leads technology far more than was the case earlier. Many of the innovative features introduced in the great revolution of our times, in electronics and computers, are in part applications of quantum-mechanical tunnelling, a process which was literally unthinkable before quantum mechanics, and which could not have been applied in invention, or even conceived, even by the cleverest of inventors before the advent of that theory.
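To give a sense of why tunnelling was "literally unthinkable" before quantum mechanics, here is a minimal sketch of my own (using standard physical constants and the textbook WKB estimate for a rectangular barrier, not anything from the original argument): the transmission probability is nonzero even when the electron's energy lies below the barrier, which classical mechanics flatly forbids.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # one electronvolt in joules

def tunnel_probability(energy_ev, barrier_ev, width_m):
    """WKB-style estimate T ~ exp(-2*kappa*L) for an electron
    tunnelling through a rectangular barrier of height barrier_ev
    and width width_m (valid for a fairly opaque barrier)."""
    if energy_ev >= barrier_ev:
        return 1.0  # classically allowed: no tunnelling needed
    # kappa: decay constant of the wavefunction inside the barrier
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 1 eV electron facing a 2 eV barrier: classically impossible to
# cross, yet at nanometre widths it gets through measurably often,
# and the probability falls off exponentially with barrier width.
for width_nm in (0.1, 0.5, 1.0):
    print(f"{width_nm} nm barrier: T ~ {tunnel_probability(1.0, 2.0, width_nm * 1e-9):.3e}")
```

The exponential sensitivity to barrier width shown here is exactly what devices exploiting tunnelling depend on; no pre-quantum picture of matter offered any such quantity to engineer with.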
This is by no means to denigrate technology, to say that it has "lost the lead," so to speak. On the contrary: science itself has, as I have indicated, changed. The flexibility conferred on scientific thinking by the interrelatedness of so many different subject matters under a smaller number of unified theories has made it possible to see the same subject from many different perspectives, and in this way and others has changed the characteristics of scientific thinking and interchange between scientists in significant ways. It has also brought science into closer unity with the various technologies that the various sciences have made their own. Science and its relevant technology play leapfrog with one another in ways that were rare in earlier science; the powerful synthetic theories lay out agendas for inquiry, which require that certain problems be solved. The problems require certain experimental apparatus, and the technology comes in; its results lead to modification of the original theories, and the cycle is repeated on a new level. We tend to think of this as the process that has always existed in science; but in the era when science was largely piecemeal and groping, it was a rarity to get a theory of a single isolated subject matter, and, when it was gotten, there the subject tended to stand, as long as it remained an isolated subject. When the problem of the motion of bodies was solved, that subject was largely settled; if there was an agenda, it was to apply the completed theory of Newtonian mechanics to other domains (such as chemistry and biology), or at most to reformulate it in more perspicuous ways or put it into mathematically more sophisticated form or make it conform better to some philosophical ideal, not to develop it more deeply on its own.
To extend the theory more deeply, to see, in an increasing variety of cases, precisely what needs to be done (say, for example, to explain the undetermined parameters of the theoretically-incomplete Standard Model of elementary forces and particles) is a characteristic of post-Maxwellian science, not of the era before, except in fairly rare instances.
Perhaps the preceding points seem obvious; perhaps they should be so. Indeed, it ought by now to be a platitude to say that the more we know, the more we can learn, and the better we can deal with the world around us. The ability to acquire new knowledge presupposes the possession of prior knowledge, and that ability increases the more relevant prior knowledge is possessed. And, too, the activities of individual lives and of societies can be performed more effectively or lived more richly the more we know; the more we know the more we can do or make, and do or make better. Yet if this appears tritely familiar to many of us, it is not a truth universally acknowledged; it has not been absorbed by everyone. Most scandalously, perhaps, it has not yet been absorbed by many thinkers who claim their primary concern to be with the nature of inquiry, its products and applications, with the interpretation of the history and philosophy of science and of technology. Their failures in this regard have, as I will now argue, left us still without adequate understanding of the processes by which we gain scientific understanding and the fruits of practical technology, and particularly of the relations between the two.
The failure to grasp the role of prior knowledge in inquiry stands out starkly in the philosophy of Karl Popper. On realizing that the falsificationist view of science he had developed in his first work, Logik der Forschung (1934), was subject to the Duhemian criticism that single propositions are not falsifiable in isolation from a context of other propositions (any one of the set being rejectable or retainable as one wills), Popper tacked on to his philosophical repertoire the idea of what he called "background knowledge." In the 1959 English translation, Popper added three footnotes to the text--pp. 78 and 124--two of which mentioned the problem. Popper claimed that Duhem (1954) was mistaken, but to my knowledge he never confronted the objection effectively. Background knowledge was, in scattered places, described as serving a number of functions, the most important of which was to make it possible to select, in any given experimental situation, the one particular hypothesis which was to be put under test and potentially falsified.
This suggestion, I believe, was basically sound; but Popper's own use of it to accomplish this selection is utterly unclear. For despite his admission that "we constantly add to our background knowledge," Popper (1962, p. 239) repeatedly insisted that we accept any piece of background knowledge "only temporarily," "for the time being, and for the discussion of this particular problem," never recognizing that many of our scientific beliefs about the universe have come to be accepted more securely than that, at least until we have found specific reason to doubt and reject them. Indeed, he could hardly say otherwise, given his view that we never accept any proposition in science because it has been confirmed and accepted on the basis of evidence in the past: "The falsificationist or fallibilist . . . does not accept this background knowledge; neither as established nor as fairly certain, nor yet as probable. He knows that even its tentative acceptance is risky, and stresses that every bit of it is open to criticism, even though only in a piecemeal way" (1962, p. 238).
In the light of these views, it is impossible to see why the same background knowledge is used over and over again in different problem situations; presumably that would be, in Popper's most consistent eyes, a foolhardy thing to do. In the final analysis, Popper's background knowledge is something suspect, not accepted, used only very temporarily and tentatively, in the context of a particular discussion, presumably for the sake of argument. (Indeed, it is hard to see why anyone would use it or rely on it--why, in fact, Popper calls it "knowledge.") And despite his recognition that "any particular part of it [our background knowledge] may be challenged at any time," he does not--consistently, he cannot--recognize that the background is accepted in a more lasting and secure sense than the immediately temporary and tentative, nor that it is used in the pursuit of further knowledge--as, in short, a "logic of discovery," in the sense of guiding reasons; nor that it is used in the development of applications both for carrying further the work of science and for more practical uses. For rejection of any appeal to prior knowledge lies at the very heart of Popper's whole falsificationist view of science.
Popper is far from alone in effectively ignoring the importance of prior knowledge in inquiry and its applications. His archrivals, the Hypothetico-Deductivists, agree with him that there is no "logic of discovery"--that there are in the final analysis no legitimate, justified background ideas which are used, built on, in new inquiry and new applications. Cohen and Nagel (1934, p. 251), for example, do speak of prior hypotheses as necessary (though, it is now clear, for very limited purposes) to make possible selection of kinds of evidence to seek as relevant, for example. But they refer to them as mere "assumptions," and never say why they, and not some other prior hypotheses, are used. There is only a psychology of inquiry; the only logic, i.e., reasoning, that takes place in science is in the justification or rejection of hypotheses already formulated. Nor do Kuhn, or any of the views roughly similar to his, do any better. For Kuhn, for example, no positive reasoning leads to the introduction of a new paradigm; it seems just to appear from nowhere (nowhere in the realm of reasoning, at any rate) in the mind of a revolutionary thinker. For all these parties, a hypothesis (however it is designated) is simply formulated out of thin air, or at best in terms of non-rational sources.
In short, none of these received philosophies of science offers us anything in the way of describing how, in science and technology, we build on what we have learned. I said earlier that it is almost a platitude to say that the more we know, the more we can learn, and the better we can deal with the world around us. Anyone who is not a philosopher knows that. No wonder that historians and philosophers of technology complain that they have little to learn from philosophers of science. After all, historians and philosophers of technology (and of course an occasional philosopher of science) are interested in what an innovator knew, why he or she conceived a problem in a certain way, and what prior knowledge he or she put to work to obtain the innovative idea and work it out; so to be told that prior knowledge is irrelevant, or that it is not knowledge at all, or that it is merely a matter for psychologists, or that it pops up out of thin air--the sorts of views I have been criticizing--is of no help at all with such topics. What lies behind this failure? I suspect that one or both of two deep-rooted mistakes are involved.
The first possibility is an outmoded concept of objectivity. Once upon a time, being objective in science was thought of as the scientist's "looking at nature without any preconceptions whatever." Do not bring your beliefs with you into the laboratory; leave them at the doorstep, and just observe the results of your experiment. The beliefs which were forbidden to be brought to inquiry included any for which the barest possibility of being wrong existed; and of course that included all prior scientific beliefs. (After all, according to Popper, "Any particular part . . . may be challenged at any time," even if there is at the present time no specific reason to doubt it.) Objectivity thus meant viewing nature without any preconceptions whatever--otherwise, to shift metaphors, you might be interpreting nature-in-itself through rose-colored glasses. Perhaps there is a lingering hangover of such a conception of objectivity in thinkers like Popper and the hypothetico-deductivists, who, even while admitting the necessity of using some background beliefs in inquiry, nevertheless hesitate or refuse to accord those beliefs the status of true knowledge, subject to potential (if not actual) criticism though such knowledge may be.
If it is indeed true that this hangover has persisted, its continued presence in Popper and in such hypothetico-deductivists as Cohen and Nagel embodies a deep irony. For it is a vestige of the older inductivist view, according to which we derive theories from uninterpreted observation; and that is precisely the view that both Popper and the hypothetico-deductivist philosophers intended to reject. A far better view must surely be based not on the principle that we should bring to inquiry and its applications no presupposed beliefs whatever, but rather on the principle that we should bring the right background beliefs, where by "right" is meant those beliefs which have satisfied the criteria relevant to the acceptance of ideas, and which are subject to no specific and compelling reasons for doubt.
What those criteria are is too big a subject to discuss here. I should remark, though, that the expression, "no specific and compelling doubt," excludes universal reasons for doubt--such as, "A demon might be deceiving me," "I may be dreaming," or "I may be a brain in a vat"--from being reasons for doubt at all. That is because the doubts they suggest apply with precisely equal force to any and all beliefs whatever, including the negations of those beliefs. And we have learned in the course of inquiry that that sort of universal doubt is not a reason for doubting any specific belief.
The second possible mistake involved in failing to acknowledge the role of prior beliefs in inquiry is the idea that creative imagination plays a role in scientific and technological invention just as it does in art. This was almost certainly an underlying motive of Kuhnian views of scientific revolutions as complete breaks with past ideas. Creative imagination was seen as spontaneous novelty, as total originality, freed from the fetters of the past. There is, then, no explanation of the work of such genuine original genius--no past which is relevant to the act of creation. Yet again we see a fundamental mistake: whatever it is in art, imagination in science and technology consists in knowing one's subject and what is relevant to it, and being able to reconceive, extend, and reshape that subject so that it appears in a new way.
If philosophers of science have contributed to the failure to grasp the nature of scientific and technological change, and the mutual influences of science and technology leading to innovation, historians and philosophers of technology are not immune to equal criticism. One example will have to suffice here. In his book, The Evolution of Technology (1993), George Basalla offers an interpretation of the history and philosophy of technology in terms of four central ideas, the most fundamental of which is continuity. By this he means that every technological artifact has antecedents. Basalla formulates this view in an unusually strong way, as a necessary truth: "Whenever we encounter an artifact, no matter what its age or provenance, we can be certain that it was modelled on one or more preexisting artifacts" (p. 209). If the strength of this statement seems questionable, Basalla's way of arguing in specific cases is even more so. His approach is to focus narrowly on the design of the new device or structure as it is first conceived and made, to show that it was anticipated, and to argue that the design of the precursor actually did influence the design of the later innovation. He is thus able to present at least a prima facie case, for example, for the dependence of the Whitney cotton gin on the much earlier Indian charka. But restricting discussion to initial design, and assuming that a few examples of precursors are evidence that, in all respects, all new inventions have precedents, ensures by fiat that there is nothing new under the technological sun. No wonder, then, that Basalla finds himself forced, at the end of his book, to admit his "inability to account fully for the emergence of novel artifacts" (p. 210).
The cases of primary interest for the present paper, those having to do with the question of whether technological change might be discontinuous with the past when it draws on new scientific ideas, bring out the severe limitations of his approach, which cause him to paint himself into this corner. It is highly unreasonable to expect (much less to demand, as a necessary truth) that cases like the scanning electron microscope, the Homestake subterranean neutrino detector, gravitational wave detectors, or gamma ray telescopes, show nothing new in their design, to say nothing of the uses to which they are put, uses which are certainly of importance in assessing novelty. It is reasonable to expect some borrowings in every new human production, whether in ideas or material things. (What else could we expect?) But in the cases I have mentioned, there are also genuinely new design features, forced by the scientific knowledge which lies behind the instrument. Gamma ray telescopes are, in the very principles of their construction, very unlike optical or radio telescopes, and the reason for building neutrino detectors deep underground (surely a feature of their design) is to shield the device from the effects of muons which physics has learned can mimic the effects of extraterrestrial neutrinos. Which precursors took such a thing into account or could have taken it into account before an understanding of muons had been reached? The view I have proposed here looks at such technological innovations from this broader perspective, and is thus able, as Basalla is not, to support the claim that the more we know, and the more detail with which we know it, the more new knowledge we are able to obtain by new instruments. In a double sense--an epistemic and a technological one--we build on what we have learned.
Despite lip service to the achievements owed to science, to the necessity of bringing background knowledge to inquiry, and to the occasional revolutionary innovation of scientific thought, the philosophies of science which have been most influential in the second half of this century have tended to view inquiry and application as starting from scratch in every new problem-situation. For Popperians and hypothetico-deductivists alike, every experimental situation is a new situation, to be looked at with pristine objectivity, freedom from bias of any kind, even scientific bias. If Popper gives us background knowledge with one hand, he takes away its status as knowledge with the other; if Cohen and Nagel call attention to the (quite scanty) need for prior beliefs, they then refer to those prior beliefs as "assumptions," and declare that there is no "logic" in discovery. For Kuhnians the new situation is something longer in duration; it is a whole tradition. Nevertheless, there is no "logic of discovery" of a new and revolutionary paradigm; it is, again, a matter of starting from scratch every time. Science is either flatly irrational or merely psychological or, perhaps, sociological. The growth of knowledge is ignored, as are the ways in which such growth can (at least ultimately) be based on reasons, can lead to something that is legitimately describable as knowledge, and can be used to find new knowledge and new applications. It may be noted that Popper (1957) wrote of "the poverty of historicism," as though concerns with the growth of knowledge are irrelevant to the reasoning that takes place in science. But his views on this matter were logical consequences of his own mistaken philosophy.
Even when prior beliefs are mentioned, their role or roles in new inquiry tend to go undiscussed; even for the hypothetico-deductivists, prior beliefs may have been "verified" or "confirmed," but their roles in a large variety of the most important functions of scientific inquiry are left out of account, indeed receive no mention. Examples of these functions are the articulation of new problems, the setting of agendas for their solution, the determination of what is to count as observational evidence, the design and development of the kinds of instruments needed to carry out the agendas and solve the problems, and the formulation of possible solutions to the problems. (In this connection it is remarkable, for example, that the Standard Model in physics, and its various possible GUT extensions, even suggest a range of possible candidates for Dark Matter.) Surely these are central features of the scientific enterprise; to ignore them is to ignore the heart of science. But to deal with them successfully it is necessary to recast the way philosophy of science is done, shifting away from the essentialist, atemporal methodology that has too often characterized it, and focusing rather on the ways in which science moves from one stage to another in its inquiries--on its evolution.
Despite some superficial similarities, the differences between the present view and those of such "historicist" writers as Hanson, Toulmin, and Kuhn run deep, not only in matters of detail but also in basic principles. (My view was erroneously classified with those of those writers in Fred Suppe's Introduction and Afterword to his The Structure of Scientific Theories, 1974.) The view I have developed differs from the others in its reasons for rejecting prior approaches, in the problems it considers and deals with, in the methods by which it approaches those problems, and in the aims it proposes to achieve. Although this is not the place to detail those differences, the view offered here must be sharply distinguished from theirs.
In this paper, I have sketched a few elements of a view which does not ignore such issues, but places them at its central focus. It is a view which emphasizes rather than submerges the point that the more we learn, the more we are able to learn--that is, the more questions we can ask, the more equipment we can design to help answer the questions, the more effectively we can seek out evidence, the more kinds of evidence we can access (e.g., not just what is visible, but the whole electromagnetic spectrum; not just electromagnetic interactions, but also neutrinos; not just the electroweak force, but even, in the foreseeable future, the gravitational; and so on). It is a view which tries to do justice to the evolving character of science and technology and their interrelations, and thus to explain the desirability and necessity of avoiding essentialism with regard to the nature of these subjects. The sort of story told here about the relations between science and technology could equally well be told (with differences, of course) about the relations between science and mathematics. For those two subjects have also often inspired chains of development, each in the other.
There are broader implications of such a shift in the perspective from which we study science and technology and their histories. In the last two or three decades, the relevance of evolutionary theory to areas other than biological evolution has at last begun to be responsibly appreciated. The view I have all too briefly outlined here is in accord with that development. Human evolution in particular must be seen as a process of becoming able to deal with an environment and its changes. For these purposes our ancestors made stone implements. Referring to the development of stone-tool manufacture, Basalla (1993, p. 27) declares: "Technology is as old as humankind. It existed long before scientists began gathering the knowledge that could be used in shaping and controlling nature." I see primitive tool-making, however, not only as a primitive technology but also as an ancestor of science, in one of the two aspects noted earlier. For the toolmaking, foraging, and perhaps hunting or scavenging on which their survival depended, our prehistoric ancestors required a great deal of very specific knowledge. (For example, attempts to duplicate the feats of stone-age toolmakers have revealed the intricacy of the knowledge required to make a precisely-chipped stone tool.) The attention to detail so characteristic of sophisticated science is as much a descendant of that early approach to a structure of beliefs as is modern technology.
Also, it must not be forgotten that at some epoch in history, our ancestors constructed myths, embodying something of a grasp of the world in which they found themselves. Though historical origin is in no way a criterion of validity, such myth-construction was a precursor, however incorrect or imperfect, of the search for understanding of the universe in which we live. Though the possibility of gaining unified accounts of vast areas of nature is a matter to be decided by investigation and not by our evolutionary psychological heritage from mythical ways of thinking, the course of the history of science has shown that such unified understanding is an ideal that is in fact achievable, at least to a very high degree. In other respects, too, today's science has gone far beyond the restricted environment of the everyday lives of our ancestors and ourselves: to realms of the very large and the very small where even the concepts that give some measure of power over the savannah and the earth are inadequate.
(Herein lies one of the deepest ways in which a study of the changing relations between science and mathematics would differ from the present story about the relations between science and technology: for mathematics has enabled science to develop concepts transcending those extracted from everyday experience in ways that differ profoundly from the "newness" of the concepts resulting from technological innovation, for all their departures from what went before. See my "Testability and Empiricism," in E. Agazzi and M. Pauri, eds., The Reality of the Unobservable; Dordrecht: Kluwer, forthcoming.)
In extending to such realms, science has transcended the dictum that knowledge is merely for the sake of control over nature; for, especially as regards our current knowledge of the larger universe, there is little promise that such knowledge will confer any control over that part of nature. But even with the recognition of all the ways in which modern science is removed from the primitive mentality, both science and technology are but descendants of our ancestral attempt to gain understanding and control of our environment, even while altering the nature of those aims and transcending them in profound ways. In this portrayal of knowledge as an evolving body, growing in its grasp of details and in the connections it provides between different areas of inquiry and activity, both science and technology have a place as part of the career of humanity. Indeed, like knowledge in general, and despite lingering differences, science and its relevant technology have grown closer in their mutual influence.
Basalla, George. 1993. The Evolution of Technology. Cambridge: Cambridge University Press.
Cardwell, D. S. 1995. The Norton History of Technology. New York: Norton.
Cohen, M. R., and Ernest Nagel. 1934. An Introduction to Logic and Scientific Method. New York: Harcourt, Brace.
Duhem, Pierre. 1954. The Aim and Structure of Physical Theory. Princeton, N.J.: Princeton University Press.
Popper, Karl R. 1959. The Logic of Scientific Discovery. New York: Basic Books.
_______. 1957. The Poverty of Historicism. London: Routledge & Kegan Paul.
_______. 1962. Conjectures and Refutations. New York: Basic Books.
Shapere, Dudley. 1991. "The Universe of Modern Science and Its Philosophical Exploration." In E. Agazzi and A. Cordero, eds., Philosophy and the Origin and Evolution of the Universe. Dordrecht: Kluwer. Pp. 87-202.