SPT v5n1 - The Author Replies
THE AUTHOR REPLIES
Joseph C. Pitt,
Virginia Polytechnic Institute and State University
I am deeply flattered by the care and attention to detail that each of my critics has put into reading and commenting on Thinking about Technology. Below I try to respond to many but not all of their concerns. I would love to engage each of the issues they raise, but it would be presumptuous to do so in this venue. Before attending to the particulars, some general comments can be made here.
At some point in each of the comments, I was criticized, chided, etc., for not doing this or that, or this and that. At one level I am truly complimented by such charges, for if my critics in fact believe that I can do everything they wish me to do, then that is high praise indeed. At another level, I am equally gratified by those charges. What seems to have happened is that the critics found the book triggering lots of ideas in their own minds. If Thinking about Technology can help bring such a rich array of issues to the fore, then I can believe I have succeeded beyond my wildest dreams.
What was I trying to do and why? The why is easy: I said it in the book and I repeat it in several places in my responses below. I want the philosophy of technology to be in the mainstream of philosophical discussion. What was I trying to do? I was proposing one way to achieve that goal: through the doors of the philosophy of science. There are several reasons for taking that approach. First, questions about the nature of technological explanations, evidence, and laws need to be raised. Second, in a backwards sort of way, it helped make the case for breaking the classic connection between science and technology and opened the way to genuinely fruitful discussions about technologies.
The big complaint is that much of the book is overdrawn, simplistic, naive, or idealistic. If the answers to the issues and questions I call attention to require more sophisticated answers than I have given, that is fine. I have done my job by identifying a problem and proposing a solution. If this approach seems promising, but incomplete, or even if the solution is not promising, but others can do it better, I cheerfully invite them to join the fray. I do not presume to claim that Thinking about Technology is the final word. I hope it is not! But if it can elicit the quality of constructive criticism and thinking found in the critiques of Allchin, Baird, Shrader-Frechette and Thompson, then I think we have made a good start.
SECTION 1. ON BREEDING REASONABLY
Paul Thompson's carefully crafted critique of my pragmatism requires an equally sensitive response. I hope I can provide it. Let me try by addressing what I take to be his major concerns: (1) the narrowness of my approach, (2) my misreading of Heidegger, and (3) my treatment of Winner's identification of the rights of individuals in the light of technological change. He frames these concerns by noting that I do not clearly acknowledge that questions of the reification of technology should not be confused with issues of ideology. He then addresses worries of reification within his World Gone Wrong Scenario #1, and questions of ideology within his World Gone Wrong Scenario #2.
Let me begin by agreeing with Thompson that I certainly appear to run reification and ideology together, and they certainly can and should be separated. Nothing I said, however, requires they be linked, and so I would meekly suggest that my failure to clarify this point does not mean that my account requires any such linkage. Nevertheless, Thompson's point is well taken.
On the question of the narrowness of my account, I think Thompson has thrown me a softball. Yes, I do emphasize engineering and breakthrough technologies. I do so in order to provide a focus — a narrow focus. But that does not mean that from such a narrow focal point we cannot move outward. I would therefore challenge his claim that my account excludes 'systematic observation and selection of natural variation used in plant breeding or drug development' as examples of design.
Let me explain why. Plant breeding and drug development differ somewhat, but the basic principles are the same and they can be explored using MT, the model of technology I propose. And while I may not have said it explicitly, I intend MT to be a generic model of design. Several factors are relevant here. In either case, drug development or plant breeding, we start with some desired outcome, e.g., resistance to disease, and with a knowledge base. We then proceed to manipulate plants and/or drugs until we get to the desired end. Why doesn't this count as design? Granted, it does not necessarily break out into all the steps of, say, Vincenti's model. But the design process in general is concerned with manipulating materials to produce an artifact that meets a specified need. It can be more or less complex. I suggest that while plant breeding historically has been less complex than drug development, that degree of difference may be disappearing. In an attempt to be a bit more convincing, let me move to an example with which I have some degree of familiarity, but which bears more than a superficial similarity to plant breeding and drug development: dog breeding.
For over twenty years my wife and I have been breeders of Irish Wolfhounds (IWs). There are many reasons for our fascination with this breed, but one of them is that as breeders of IWs we, and others of our ilk, are engaged in a process of reconstruction: we are attempting to breed, read 'design,' a dog that was almost extinct by the mid-nineteenth century. The Irish Wolfhound is properly called a giant hound. It is an ancient breed. There are reports of Julius Caesar parading a pair through Rome upon his return from the conquest of the British Isles. It was renowned for its size and gentle temperament throughout the ancient world. But for a variety of reasons, paramount among them the desire of the British to eliminate all traces of Irish culture after their takeover of Ireland, by the nineteenth century the IW was a pale shadow of its former glorious self. A certain British captain, George Graham, realizing the impending danger of extinction, set about doing what he could to resuscitate the breed.
As is well known, in the world of purebred dogs, each breed has what is called a Standard of Excellence, which is supposed to lay out the characteristics of the ideal representative of that breed. In the standard for the IW there is an interesting phrase: 'It is desired to firmly establish a race that shall average from 32 to 34 inches in dogs, showing the requisite power, activity, courage and symmetry.' This phrase remains from Captain Graham's initial draft of the standard. It recognizes that the breed as it existed in the 1870s was below standard and that the job ahead was to meet certain specifications. But the job was going to be a difficult one, for he started with a total of two dogs and four bitches, a very small gene pool. The largest of his dogs was 26 inches at the shoulder, a far distance from the desired 32-34 inches. The problem was how to regrow the ancient giant hound from this diminished gene pool and not get into serious genetic trouble. The problem remains the same today. As we breed the hounds, seeking to improve on what we currently have, using the standard as our guide, our concerns stem from the problems we have in the existing dogs: short life span, cancer, heart trouble, as well as issues over phenotype. We work over pedigrees and detailed histories of the blood lines, we mix and match as we try to move away from one problem while not moving into another. We use computer pedigree programs that provide calculations of inbreeding coefficients, old photographs, medical records, and so on. And then we wait and see what we get. It is not an exact science, and I often think that Nature enjoys the game. However, we are designing this breed.
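The inbreeding coefficients such pedigree programs report are standardly Wright's coefficient F. As a hedged illustration only (the pedigree, animal names, and function names below are invented for this sketch, not drawn from any actual breeding program), F can be computed recursively from kinship coefficients:

```python
# Sketch of Wright's inbreeding coefficient via recursive kinship.
# pedigree maps animal -> (sire, dam); founders do not appear as keys.
pedigree = {
    "A": ("G", "U1"),   # A and B are half-sibs through the shared sire G
    "B": ("G", "U2"),
    "X": ("A", "B"),    # X is the proposed breeding of A x B
}

def depth(animal):
    """Generations separating an animal from its founder ancestors."""
    parents = pedigree.get(animal)
    return 0 if parents is None else 1 + max(depth(p) for p in parents)

def kinship(a, b):
    """Probability that alleles drawn at random from a and b are
    identical by descent (the coefficient of coancestry)."""
    if a is None or b is None:
        return 0.0
    if a == b:
        sire, dam = pedigree.get(a, (None, None))
        return 0.5 * (1.0 + kinship(sire, dam))
    # Expand the individual farther from the founders, so an ancestor
    # is never rewritten in terms of its own descendant.
    if depth(a) < depth(b):
        a, b = b, a
    sire, dam = pedigree.get(a, (None, None))
    return 0.5 * (kinship(sire, b) + kinship(dam, b))

def inbreeding(animal):
    """Wright's F: the kinship between the animal's own parents."""
    sire, dam = pedigree.get(animal, (None, None))
    return kinship(sire, dam)

print(inbreeding("X"))  # half-sib mating: F = 1/8 = 0.125
```

In practice a breeder comparing candidate pairings would compute F for each hypothetical offspring and, other things being equal, favor the lower value, exactly the kind of mix-and-match over pedigrees described above.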
And I think that the process is one of starting from a database with specific objectives and values, learning from various experiments (read 'breedings'), reevaluating our assumptions, updating our database, assessing the results and consequences of our actions, and then factoring all that back into the lot. My somewhat educated guess is that plant breeding works in the same way, and so does drug development. One of the nice twists in Darwin's Origin of Species is that he acknowledges borrowing the metaphor of natural selection from the practices of animal and plant breeders, the 'unnatural' (?) selectors, perhaps designers?
In short, I do not believe that design is restricted to engineers; it is part and parcel of daily life. From designing a house to designing a lifestyle, we go through all the steps outlined above. While attention is given to engineering design in the book, nothing restricts my account to engineering design.
Moving now to Heidegger, what can I say? I am in no position to argue the fine points of Heideggerian scholarship with one as well versed as Thompson. And while he is certainly correct to remind us that virtually all the great philosophers change their point of view or emphasis over time, it is not clear to me that what I have done is terribly out of order. Let me put it this way: Heidegger is a very difficult read. Thompson has done me a wonderful service in explaining how one should proceed when approaching Heidegger's thought. But my point was not to overwhelm the reader with the fine points of Heidegger's method; it was to try to locate historically one source of a tradition of social criticism within the philosophy of technology. Many of the other caricatures of Heidegger's work stem from reading his 'The Question Concerning Technology' in some anthology, in isolation from a proper Thompsonian explication of what is going on. Just because a tradition of criticism can be traced back to someone or other does not mean that that tradition incorporates a proper interpretation of its source. Aristotelian philosophies of the fifteenth to seventeenth centuries notoriously misread Aristotle. If I was unclear, let me try to set the record straight here: I propose my reading of Heidegger as typical of the uninitiated, or uneducated. But I propose, again, that it is the kind of reading that lies at the source of a philosophical tradition of reacting to technological issues.
Finally, I must turn to Thompson's reaction to my attack on Langdon Winner's position. As I read the objection, it is claimed that I have not acknowledged Winner's key point: 'that in a democracy, this conflict of rights ought to be treated like any other conflict: it should be understood as a political dispute to be arbitrated by democratic procedures and argued in terms of competing conceptions of justice and the good life.' While I did not discuss Winner's first book, I must have absorbed it to a degree that I did not recognize. For it seems to me that in discussing problems brought about by technological innovations I used such terms as 'conflicts of values,' differing conceptions of 'the good life' and the recognition that the resolution of these issues is to be undertaken in the political arena. This occurs even when I am at my most patronizing.
What I really like about Thompson's critique here is the way in which he frames what I call conflicts of values as circumstances which force constitutional issues. However, while this works well for the United States and other locations where constitutions and the rule of law are in some evidence, I do worry about how to extend this analysis to countries where these mechanisms are not available. That is one reason why I chose to speak of conflicts of values as embodying conflicts over lifestyles.
In closing I would like to comment on Thompson's final paragraph, where he suggests that I dismiss all social critics as ideologues. If that is how it is read, it is misleading. It does not follow that all social critics are ideologues. I thought I was addressing those philosophers of technology who have managed to marginalize the field because their approach to technologies is negatively ideological. Nothing says that social criticism of a technological innovation cannot be perfectly legitimate. However, within the philosophy of technology, there is a history of social criticism that reasonably can be viewed as ideological. I suggest that Langdon Winner's The Whale and the Reactor falls into that category.
Noting that in my patronizing fashion I suggest that resolution of value conflict requires building consensus and that this is a political process, not one governed by reason, Thompson wonders why reason cannot be applied to the political process of building consensus. He suggests that we do this by critiquing the arguments of the critics, improving them or rejecting them, but in any case using reason to achieve that end. He is right. In this he echoes Douglas Allchin's proposals for a technology of rational discourse. We should subject the arguments of social critics to critique. We should analyze them for validity and soundness. And in a world where reason ruled, that would end the discussion, for in that world reason dictates that we reject bad arguments and accept good ones even where they seem to oppose our inclinations.
But I suggest that the political process is not a reasoned process. Politics is about power. And here perhaps we can draw a distinction. Over and against Thompson's two Worst Case Scenarios, let me offer the Best Case Scenario and the Real World Scenario. The Best Case Scenario is where reason rules. But, as I note in my response to Allchin, one of my heroes is Hume, and Hume can be read as suggesting that reason does not rule: 'Reason is, and ought only to be, the slave of the passions.' The Real World Scenario paints a picture of a world where good and well-intentioned people use reason and even sometimes let it guide them. But when reason fails to convince others to act as you would have them, that is when politics comes into play: the art of power, of buying and selling and brokerage, of compromise and sometimes even practicality. The distinction then is between consensus achieved in the Best Case Scenario and consensus achieved in the Real World Scenario. I am firmly committed to the analysis and evaluation of all arguments, epistemological and moral/political. But I live in the real world. It would be wonderful, I sometimes think, if reason could always carry the day. But Hume's picture is not only closer to reality, it is more interesting. Thank you, Paul Thompson, for forcing some serious issues to surface and doing so in a philosophically sophisticated manner.
SECTION 2. DEFENDING FRIENDS
As always, it is a pleasure to read anything by Kristin Shrader-Frechette. Clarity, rigor, and elegance uniformly characterize her work. In this case, I want to thank her for an illuminating and very helpful critique. She made a number of points with which I completely agree. For instance, it is clear that it would have been helpful had I considered more than the DN model of explanation in developing an account of technological explanation, were I striving for a definitive account. But that was not the objective. It was, rather, to open a discussion. I hope she and others will pursue the question of what constitutes a technological explanation, especially if mine is so flawed. Further, I appreciate her attempt to apply some of the criteria I use to evaluate how successful my efforts are; fair is fair.
Given the limitations of space and time, I cannot respond to all of Shrader-Frechette's complaints and objections. I have selected five that seem to me more important than the others: (1) her claim that 'Pitt has [thus] stipulatively defined ethical and political analyses of technology as not part of philosophy and philosophy of technology'; (2) her complaints about my treatment, or lack of treatment, of several philosophers of technology; (3) her complaint about my stipulative definition of 'technology'; (4) her rejection of my account of rationality, the Common Sense Principle of Rationality (CPR); (5) her complaints about all the things I did not do.
First, I did not stipulatively rule out ethical and political analyses as legitimate parts of the philosophy of technology. For decades now I have been concerned that philosophy of technology is not perceived by the vast majority of philosophers as mainstream. This is not news; my views have been publicly made and argued. My diagnosis of this problem has been and remains that for the most part, rightly or wrongly, philosophers of technology are viewed as Luddites, ideologically opposed to technology of any sort and whose analyses and attacks are based on privileged ethical concerns. I wish to bring topics in the philosophy of technology into the larger ongoing philosophical discussion. My suggestion in the book was that, in order to do this, we should start with epistemological issues. For example, in chapter 7, I argue that when making decisions about controversial technological issues, we need to get the facts straight, but a simple call for more information is not enough. As I point out, 'Many disagreements appear to be over "the facts," when the real issue is how those facts were created, generated, found, etc.' (p. 119). This is a call for the assessment of the methods of generating the information upon which positions are taken and decisions made. And, by the way, it was Shrader-Frechette who first put me onto this point.
The important issue, however, is not merely that we must assess the methods themselves which we use to generate facts; it is also the point I raise next. Once we have as good a grip on the 'facts of the matter' as we can get, the remaining issues in the decision-making context will concern values, and it is here that questions of ethics and social and political philosophy should make their entry. To claim that I eliminate these concerns from the philosophy of technology is not correct. I do, however, relocate them in the discussion.
To turn to the second point I wish to comment on, Shrader-Frechette's complaints about my treatment and/or lack of treatment of certain figures in the field. I have the greatest respect for Albert Borgmann and Carl Mitcham. I did not include them in the discussion for several reasons. First, I did not intend for this book to be a survey of the field. Carl Mitcham had just published a book which did purport to do just that, and it is an excellent volume. But it is not clear that Mitcham's own work readily fit into the discussion I was trying to initiate. That discussion was directed at how to bring philosophy of technology closer to the mainstream and how to do it through the doors of the philosophy of science. Mitcham's work does not do that, nor does it bear on my concerns here. Likewise for the work of Albert Borgmann. However, if I had had access to his new book, Holding onto Reality: The Nature of Information at the Turn of the Millennium, I probably would have included a discussion of that work, since we seem to converge on a number of matters.
In general I am not moved by Shrader-Frechette's complaints about having left people out of my discussion. As Shrader-Frechette herself admits, no one else has tried to approach the philosophy of technology through the philosophy of science. If others had and I had left them out, then I would be in error. To be perfectly honest, this is a small group and for the most part we are all friends. Shrader-Frechette's defense of the importance of her friends, however, is not a proper challenge to me.
Now if we turn to her complaints about my treatment of Langdon Winner, I am again not moved. In my critique I did use the phrase 'quasi-pathological.' But it was not to impute motives. Shrader-Frechette is not being completely honest in her charge here, for in my analysis of ideology, I argue that an ideology is a conceptual framework used in the employment of a privileged set of values. But since there is no valid argument for any set of values being privileged over another, the attachment to one set over another is generally not the result of rational decisions. The use of the term 'quasi-pathological' identifies that non-rational component. So I was not insulting Winner, I was employing the tools of a prior analysis, which was one of the few points in the book that Shrader-Frechette did not reject.
Third, what about my account of 'technology as humanity at work'? Clearly Shrader-Frechette did not take seriously what I said in the Preface. There I make the following point:

Central to my concerns is the disturbing tendency of the social critics and others to speak about 'Technology' as if it were one thing. Try as I may, I cannot find the one thing. I can find automobiles, power stations, even specific government offices, but nowhere can I locate Technology pure and simple. . . . And so, in this essay one of the themes, which I discuss in a number of different ways, is that there is no one thing called 'Technology.' In this respect, the definition of 'technology' I offer, humanity at work, should be seen as punctuating the need to stop talking about Technology simpliciter and to start focusing on the specific problems we encounter and the techniques, materials, etc., we employ, as well as the consequences of using these techniques and materials to solve those problems (pp. x-xi, italics added).
Two things here: (1) the breadth of my definition is deliberate. I want to talk about specifics, not about some vague, troublesome thing called Technology. (2) I introduce at the very beginning of this book the need to be concerned about the consequences of using various technologies to solve problems, which hardly looks like a dismissal of ethical concerns.
Fourth, Shrader-Frechette does not like my account of rationality, CPR, charging me, among other things, with the same failing I leveled against those who endorse the homo economicus point of view: that because it explains everything, it thereby explains nothing. I do not think that is true. CPR does not explain how we come to have the values we do, nor does it explain how we choose our goals. It does not explain artistic inspiration and technological creativity. It fails to explain a lot. Second, yes, on the position I lay out it is possible for two people to come to different conclusions with regard to what to do under the same circumstances. That is not troublesome. It would be if you thought that an account of rationality should provide you with a prescription for decision-making that would guarantee the right result.
First, let us not confuse coming to the same decision with being right. You could both be wrong. Second, so far, no attempt to define rationality so that success is always guaranteed has proved viable. Further, the idea that there is a single, right result seems to fly in the face of reality. It is often the case that a problem has several alternative solutions, each of which is successful. That a theory of rationality would permit such an outcome seems to me to be a point in its favor. What my account allows for is the fact that people make decisions based on their own experiences and that these experiences differ. Further, a significant component of the theory, which Shrader-Frechette did not discuss, is the feedback loop which requires updating of knowledge, values, and goals as one learns how well this or that approach to a problem succeeded. Yes, the account I give of this process could be elaborated, and it will be. But I do not see it as a problem in itself. The heart of CPR is the recognition that you can be rational and fail. That seems to me to be both right and reasonable.
The final criticism of CPR Shrader-Frechette raises is that it will not work for group decision-making. I did not say it would. The best efforts at an account of rational decision-making for groups are to be found in Public Choice Theory, and I am not yet convinced that is the right approach. As I said in the book, group decision-making is a matter of politics and compromise; it should come as no surprise that there is little rationality involved in such a process. But what would it take to develop such an account? I believe it would require the assumption that group decision-making always occurs in the same sorts of circumstances, and surely this is not correct. No algorithm for rational group decision-making is possible because, among other things, groups are composed of individuals with differing experiences. Further, one important reason for insisting on diversity at all levels is to increase the variety of experiences brought to the table. To then insist that that heterogeneous group adopt one way of thinking would be to defeat the purpose of diversifying the decision-making group. Now if Shrader-Frechette wants to argue that we should not seek diversity when group decision-making is at issue, but limit membership in those groups to individuals of similar background and like minds, then she might find a theory of group decision-making; but I doubt it, and I doubt she would like the consequences it would entail.
This is not to say that all is lost and that we are doomed to perennial political solutions to all our problems. Below, in response to Douglas Allchin's proposal for a technology of rational discourse, I discuss one way to develop an account of group decision-making, one which presupposes some degree of initial agreement.
Finally, Shrader-Frechette in her characteristically exhaustive manner lists a myriad of things I do not do, from detailed examples that would please her to a complete analysis of everything. Let us be clear about one thing: I wanted to open the door to a different kind of approach to the philosophy of technology. I am not sufficiently deluded to believe that I have the answers to all the questions. If I did, I would have written a much bigger and much worse book. But I believe that raising issues in novel ways and proposing solutions, even if flawed, from a different perspective has its own value.
SECTION 3: DROPPING THE BOMB IS NOT THE ISSUE
Let me now turn to the very rich set of observations and arguments from Davis Baird. First of all, I love the start: Joe Pitt and the Bomb! I am sure many of my colleagues would think that an appropriate connection.
Baird has two major objections and lots of nested ones. I will tackle the big issues first. To begin with, Baird objects to what he calls my love of hierarchy for values. He claims that separating cognitive from aesthetic and moral values violates my pragmatism, and then argues that trying to do so simply cannot be done. Even in areas that are arguably cognitive, many non-cognitive values operate, and he shows this well in the two examples of psychological testing and objectivity.
However, I do not believe I said that cognitive values have some sort of hierarchical superiority to other values. The problem here is that in work of this kind one must start in medias res. I think that Baird has conflated the work that a philosopher does in developing some account or other with the fact of living in the world. Baird wants things to be historically situated. Well, so do I — this is a point to which I am fully committed, and which I address in chapter 8 when dealing with scientific change. We are what we are in large part because of where we came from. I agree. However, philosophers, whatever else they do, have always had a profound commitment to improve the state of humanity in the world. Proposals for improving our lot are based on the assessment that the present state of affairs, whenever the present may be, i.e., 500 BCE or today, is less than optimal and may even be detrimental to human progress and the welfare of other species and the planet. That said, the work that comes forth starts with the present state of affairs, recognizes the historical contingencies, and moves on from there. Baird is quite correct that in the effort to find out what is the case many other factors are also in play. This does not mean we cannot be alerted to the advantage of drawing a few distinctions with the aim of improving the future. Baird's example of Yerkes's development of an intelligence testing instrument proves that point very well, even though he was trying to make the opposite point. Baird shows exactly how non-cognitive values influenced the outcome, and that is consistent with my point: we need to be able to identify such influences if we are going to learn from our mistakes and improve the chances for the future.
That said, let me turn directly to Baird's objection to what he casts as my know-first, value-later account. There is no hierarchy of values set in stone for all time, at least not for me. However, I would argue that there ought to be a hierarchy of action policies. I stand by my claim that first you must find out what the facts are, as best you can, acknowledging that you may never unearth all the assumptions upon which you are working. Second, only after you have the facts in hand can you determine the best thing to do. The essential point here is that you cannot do the second without the first. You cannot reasonably choose the best path of action without knowing the facts to the extent that the facts are knowable. That is the heart of the matter. Now Baird argues that you can never know the facts completely, and with that I also agree. That is the point of developing something like MT, my model of technology. It represents an iterative process whereby we feed back into our decision-making what we have learned; we may not know all the facts, but we can try to correct for whatever error is the result of ignorance and misguidedness and try again. I happen to think that is a considerable advance in our way of thinking about the growth of knowledge. This is not to deny the role of values, but to embrace it. Richard Rudner taught us that all judgments involve values. However, I would also argue that not all values operate with the same force at all times.
But this still does not bring us to the heart of Baird's objection. Baird claims that, 'For Pitt, ascertaining the facts is a primary good that stands alongside of, and independent of, moral, political and ideological goods,' and he sees that stance as problematic, first, because we cannot extract ourselves from the value-bound world into which we are born and, second, because he objects to a hierarchy of values.
I hope I have shown that I do not believe in a hierarchy of values, but rather in a dynamic interactive relationship between knowledge and values.
But perhaps it would help if I identified the enemy. I always tell my students to read a philosopher as if he is carrying on a conversation with someone who disagrees with him, and then try to identify the other protagonist. The people I am addressing are those individuals who think it is both possible and desirable to develop social policies and criticisms without regard either to the facts of the matter or to a commitment to finding out the facts. It is not that I see ascertaining the facts as a primary good independent of moral, political and ideological goods, but rather that I would like to see ascertaining the facts included as a legitimate good alongside those others, except for ideological goods, which I see as an oxymoron. The point about ideology is that I see it as quasi-pathological. Those who operate from an ideological stance not only disregard the facts they do not like but also, more importantly, do not flinch from distorting the facts to fit their own ends; hence my critique of Winner.
I guess what I am after here is this. If your objective is to bring about your specific desired state of affairs, i.e., if your values are absolutely fixed and unyielding no matter what the facts are, then you and I cannot engage in fruitful conversation. If you are, as am I, interested in seeking to improve the lot of humanity, then you need to recognize that human knowledge is frail and incomplete. Further, we need a method by which we can show the dynamic interaction between what we know about the world and what we hope will be the case.
Four basic points provide us with a pragmatic view of the relation between epistemology and value/action theory. Why is it pragmatic? Because it sees successful action as the ultimate criterion.
(1) You cannot separate, except for the most trivial of analytic reasons, knowledge and action.
(2) What we know only counts as knowledge if we can do something with it.
(3) How we act is a function of what we want to accomplish and what we know.
(4) What we want to accomplish changes with respect to what we know and how we act.
Thus, I respectfully submit, I should not be read as arguing that we seek the facts simpliciter; seeking the facts is an imperfect, self-correcting knowledge-acquisition process which intimately involves values at every stage.
The interesting thing about values is that they not only express preferences and ideal outcomes, but the system of values with which we operate is constantly changing. Not only does the order of values change, but the values themselves change. At one point in our lives we may think that the most important thing to do is party. At another point it may be to save for the kids' education, and then later it becomes saving for retirement. And then, having learned a lot about how investing works, we think we should have had saving for retirement as a top priority from the get-go. Values are there all the time, but they are also not isolated. They are informed by what we know and wish to do. Thus, I think that by looking at how I would like MT to work, Baird and I are closer than he may think. As a footnote, I would suggest that those individuals who never change their values, or at least claim they never do (you know, the ones you cannot argue with), are brain dead or ideologues; not much difference, but equally dangerous.
Before I turn to Baird's entrapment in technological determinism, let me plead for something. Baird says that he has 'a tragic picture of human knowledge, where Pitt has a comic picture.' I must confess, while sometimes a jokester, I did not think that I had a comic view of human knowledge. Two pictures came to mind. Given the iterative features of MT, I wondered if Baird saw the idea of returning to readjust our knowledge base and values in the light of what we have come to know as something like a Marx Brothers scenario. The second picture was that of Slim Pickens at the end of Dr. Strangelove or How I Learned to Stop Worrying and Love the Bomb, riding the Bomb down giving a good old rebel yell. I do not like either (well, I like both, but not as pictures of human knowledge). But later in that same paragraph Baird notes that 'Pitt calls on the heroic picture of scientific inquiry,' and my margin notes say 'better than comic.' However, if I had to choose among tragic, comic, and heroic, it would be a tie between tragic and heroic. Heroic because the struggle is against all odds (do we really have that much hubris that we actually think we can figure it all out?), and tragic because we are doomed to fail. But that does not mean we should give up; the battle is what is important.
Now, about technological determinism. Baird claims: 'Historical momentum and the need for system-wide standardization produce a kind of technological imperative. As individuals we are not at liberty to choose our preferred technological poison. To a considerable extent, although not completely of course, group demands for standardization choose for us. Neither are we, as a group at a given time, at liberty to choose our preferred technological poison. Historical momentum, from choices made long ago, chooses for us.'
For Baird we are victims of the past. I argue that we are not complete victims of current technological developments. I suggest, and even give some examples of, decisions to abandon technologies which would appear to be safe from further human meddling given their size, as well as meeting Baird's view of momentum: the Ilco Reactor, the Superconducting Super Collider, that wonderful elevated highway in downtown Toronto which was canceled, leaving a road to nowhere. I suspect there is still room in Montana if we wish to escape the current mode of social life, but at what cost? Baird says we cannot even choose our technological poison. We most certainly can, but there is a cost. What he forgets is the range of possibilities made possible by technological innovations and how readily and completely we have embraced them. The types of lifestyles and arrangements made possible by the array of technological innovations virtually define what it is to be human today. It is hard to think of living other than as we do. Today we are not merely animals seeking shelter and food. Living as I do on a farm, I know well how constrained my life is. I am bound by the rhythms of nature, the demands of my animals, and the vicissitudes of nature. This was brought home to me in a strange way recently. I had occasion to spend several days in Washington, D.C., by myself. One morning I actually sat outside at a Starbucks, drinking coffee and reading a newspaper all the way through. I cannot remember when I last experienced such luxury. On Sunday I watched people wandering around the Mall, jogging, enjoying the sunshine, doing the things city people do, things I, the born New Yorker, had long forgotten were possible because of my current, more primitive lifestyle. For those who bemoan the manner in which we are strapped into this or that lifestyle, I observe for you that you can give it up and do something else; but do you want to pay the price?
Now the other side of the story is this: my account of scientific change. At the end of Thinking about Technology I sketch out a view which in many respects is similar to Baird's despairing account, but it is restricted to the way the sciences change. I argue that as a science matures it becomes embedded in a technological infrastructure which makes it possible for the theories of that science to be developed and tested, but which also constrains what is possible. So, it is not quite true that I deny constraints; they are there, but I am not as despairing as Baird about the possibilities of throwing off our chains. If we want the science we have to go in the directions it is going, we may be forced to continue to buy into that technological infrastructure. But we can say enough is enough. And, more importantly, we do. Canceling the SSC sent a message to a certain branch of physics. You can also opt out of the New York stockbroker rat race and do subsistence farming in west Texas; but do you want to accept the consequences?
Finally, Baird claims an irony in my attack on the social critics, because they are the ones fighting the big bad Autonomous Technology for human control. Two points: first, there is no big bad Autonomous Technology (see above); second, if the social critics are the real heroes in all this, they are doomed to failure because they choose ideology over pragmatism, and it says something about someone that they choose to be pathological.
SECTION 4: THINKING ABOUT 'THINKING ABOUT'
I want first to thank Douglas Allchin for a rich and provocative analysis, but also for reminding me of my roots. I was first drawn into the philosophy of technology by a bumper sticker I saw on a car going around the Virginia Tech Drill Field in the late 1970s: 'Guns Don't Kill, People Do.' There is clearly something right there, but also something missing, something wrong, if you will. Likewise, Allchin is right and wrong about one important point. Allchin is right when he notes that, 'The very process of technology, as humanity-at-work, is politicized.' Of course it is, especially if you understand technology as a continuous social process involving the key ingredients of feedback and assessment. We are forced to reassess what we thought we knew and also what we thought was important. Most of the arguments are going to be over the latter; that is where the politics enters. Thus I think he is wrong when he says that I seem 'to peripheralize social concerns or make them secondary.' The politics is built into the assessment process. My point here is that in order to perform the kind of philosophical job Allchin wants us to perform, we need to first clearly identify what we are talking about, be it the facts or the values. My model provides the basis for developing a strategy for doing that. Clearly this needs development. In what follows I address two of Allchin's themes, directing most of my attention to his very exciting notion of a 'technology of discourse,' which addresses the issue of politics. But first the question he raises concerning the absence of an account of evidence needs to be addressed.
According to Allchin, 'We should expect an epistemology of technology, foremost, to set norms for evidence in justifying the adoption, rejection, development or revision of a given technology.' There is one sense in which I could not agree more, for at the heart of epistemology is the problem of evidence. On the other hand, there is a problem here. The concept of evidence changes over time, and it often changes in the light of new technologies. What counts as evidence is a matter for the community of investigators to determine at a given place and time. That said, it does not follow that I have ignored the issue. On the contrary, it is at the core of the feedback loop in MT. Recall the idea there that we start with some database that we use to propose solutions to problems. Once having enacted the proposed solution, it is assessed in terms of success or failure, consequences (intended or unintended), etc. These data are then fed back into the base and used as evidence in arguments for or against revising our knowledge claims, values, goals, etc. To say more is to offer absolutist solutions to a highly variable problem.
Now, given that, several questions could be raised, for instance: (1) Are there no limits to what counts as evidence? (2) What counts as success or failure? In response to (1) let me simply recall a distinction I offered between knowledge and candidate knowledge claims. In the social process of technological activity, whatever the results of a communal decision, those results are digested by individuals and then brought back to the table for further discussion by the appropriate group. It is in that social group context that what counts as legitimate evidence for revising company policy, or whatever, emerges. What an individual accepts as evidence may not impress a group. So what counts as evidence depends on the level at which you are directing your analysis. An individual may have an epiphany and explain to the group that God spoke to her last night. That does not mean the group will change its mind, unless for that group that counts as evidence. So, clearly, I am committed to the role of the social here, but the social is complex and itself interactive. If a company seeks scientific evidence for its policy making, it will do one thing; if it wants divine inspiration, it will do another. Is this a weakness? Yes and no. Yes, because it does not provide a firm demarcation for acceptable evidence. No, because to rule on that question a priori fails to allow for historical contingency and scientific and social change.
Turning now to Allchin's technology of discourse, there are two points which need to be made. First, the architectonic of community building requires two different models, inclusive and exclusive. Second, Allchin's emphasis on listening leaves the overconscientious listener open to a sucker punch. In either case, many of Allchin's positive recommendations rest on his sense of human nature. At its heart remains an Enlightenment commitment to the maxim that reason will rule. I, however, as noted above, take my cue on this from Hume (which is not to say I buy his whole account, especially his theory of ideas).
Hume's Treatise can be read in a number of different ways. I think of it as a work in political philosophy. It is also an attack on one of the key assumptions of the Enlightenment, i.e., that reason motivates the actions of men and women. As I read the Treatise, Hume wants to know what makes people do what they do. That is, he wants to know what will actually work when it comes to motivating people in the context of a functional state. Book I is, therefore, devoted to an analysis of the role of reason. On Hume's account, reason comes up wanting. Book II offers a positive account of the motivation for human action, the passions, and concludes that, 'Reason is, and ought only to be, the slave of the passions.' Book III then says, now that we know what causes people to behave the way they do, we can propose the kinds of social arrangements that will actually work, based as they are, as I would have it, on the facts.
Allchin wants us to 'listen' to people. Further, we should also abandon the 'militaristic' model of winning battles. The point here is to try to uncover what the protagonist really means and says and needs. The objective is creative problem solving, where the aim is to forge a consensus. All of this is good. But it assumes, first, that others also want to forge a consensus. The other may, however, want to win. When I watch political discussions, the most salient point is the lack of desire for consensus. It is not until defeat is imminent that opponents sometimes accept compromise, and that should not be confused with consensus.
Allchin also urges us to interpret 'the affective subtext and values that motivate the reasoning' of the other. Here I worry. Too much is open to misinterpretation. People may say one thing motivates them when it is really something else. I find it difficult, if not impossible, to build a basis for rational communal consensus building around what can be at best a guess. Allchin's overall worry here, namely the problem of consensus building, is real and needs work. But it has to be based on what can be put on the table; trying to read subtexts only opens the door to mistrust. I suggest that we not try to read subtexts. Rather, let us see if we can agree on the goal. If we are in agreement on what we would like to achieve, then we can begin to work our way towards agreement on how to achieve it. If Hume is right, and I think he is about this, that it is our passions which motivate us, and if our passions are not open to rational persuasion, then we should put that effort to one side. Uncovering motivations and values does not necessarily pave the way to rational discourse. In fact it can block the path if we find the motives of the other repulsive. But, on the other hand, if we can agree on a common goal, irrespective of our motives, then we have common ground.
Is this position in conflict with my insistence on finding out the facts of the matter when it comes to making decisions about technologies? I think not. The facts of the matter concern how the thing works and the legitimacy of the grounds for predicting various impacts and consequences. Certain values may be part of the motivation for a technological undertaking, but knowing what they are is not essential to understanding the technology itself.
A more telling possible objection would be to observe that, if group interactions are political and not always open to rational processes, then does this mean that scientific knowledge is also nothing more than the result of political battles and power plays? To see that as a logical result of my views would be to place me squarely with the Social Constructivists, and here I would have to do some serious backpedaling. Fortunately, it is not a proper conclusion. As noted above, if we concentrate on the goals we have in common, and not on understanding the other in some deep and meaningful way, then science may be our best example of how this methodological recommendation works. (I know I appear to be reifying science, but bear with me; for 'science,' read the many major areas of scientific inquiry and their many and multiplying subfields.) If the goal is to find out how the world really works and what there really is out there and in here, and we can agree on that as scientists, then coming to agreement on which methods work best in which areas, and on what criteria we should use to evaluate results, will follow. The way to those agreements may be littered with bodies as the criteria and methods are fought over, but if the process is, as I suggest it is, iterative and fluid, one in which all results and methods are constantly being reevaluated and past results updated, then I propose we have it exactly as science has historically developed. Keep your eye on the prize: seek knowledge, and the rest will work itself out; not cleanly, not even nicely, but it will come.