

Volume 7, Number 2, Winter 2003

The Hidden Side of Visualization

Agustin A. Araya
San Jose State University

In the arc of history that extends from early human communities to present-day societies, the history of seeing occupies a prominent place. Intimately related to thinking and language, and crucial in most of our comportments, seeing and its history co-determine the essential history of human beings, that is, the history of their essence. Writing, a fateful development in human history, is in a sense an attempt at seeing spoken language. As we move towards the present day, milestones in the history of seeing become increasingly visible, such as printing, perspective drawing, musical notation, tele-scopy, micro-scopy, photo-graphy, cinema, and tele-vision. Computer-supported visualization, that is, the use of computers to visualize 'things', is but the latest element in this series, a series whose end is nowhere in sight.

Although a relatively young discipline, computer visualization has already expanded in multiple directions. It can help us to visualize that which is too small and complex for us to see, as in the visualization of the structure of complex molecules, or that which is too big for us to grasp, such as the planet Earth itself. It can also support the visualization of that which only exists in our imagination, without a counterpart in the 'real' world, as in certain applications of Virtual Reality. Visualization can also be used to visualize phenomena that are not visible or sensible in themselves, as in the visualization of mathematical constructs or of abstract relationships between pieces of information. 'Visualization' has even been called the second computer revolution because it goes beyond conventional uses of computers, decidedly stepping into the 'cognitive' domain. 1

A variety of reasons account for the current emergence of computer-supported visualization. From the practitioner's perspective, as the raw power of computing and communications technologies grows—accompanied by a significant growth in the amount of data, information, and 'knowledge' available through that technology—the human 'cognitive' capabilities that allow us to deal with them are 'stretched to the limit'. It is this increasing 'mismatch', which 'threatens' human-computer interaction, that visualization addresses. As the need for innovative interactive technologies grows and, simultaneously, enabling computing technologies such as three-dimensional graphics mature, the terrain is fertile for the rapid dissemination of computer-supported visualization.

Standing, then, at the edge of a possible era in which interacting with visualizations and using them in a variety of activities would be as commonplace as it is today to write with the help of a computer, if not more so, we ask the following questions. What is a computer-supported visualization? What is it that we encounter when we interact with a visualization? If visualizations and the machines that generate them turn out to be new kinds of entities, exhibiting novel ontological traits, could this technology transform us in subtle but fundamental ways? And what kinds of transformations would they be?

We start by examining the motivations and principles underlying the technology of computer-supported visualization. After characterizing the approach we will follow to determine what visualizations are and introducing the notions of ontological operations and biases, we enter into a detailed analysis of what we encounter when interacting with visualizations. Because of the close relationships between visualizations, on the one hand, and geometry and its applications in science, on the other, we perform an in-depth exploration of the ontological operations embedded in geometry and its applications. We proceed by developing an ontological reinterpretation of Edmund Husserl's reconstruction of the origin of ancient geometry, of Galilean science, and of Cartesian geometry.

After a critical appraisal of Husserl's reconstruction and of our own reinterpretation of it, we apply the ontological approach to determine both the ontological operations embedded in visualization machines and the ontological traits of the interactive visual worlds supported by these machines. We are then in a position to determine, from an ontological perspective, what it is that we encounter when we become involved with interactive visual worlds. Finally, we explore potential transformations of fundamental ontological traits characterizing us, humans—in particular, transformations of our 'being in the world'—that could emerge from our pervasive interaction with visual worlds. For a related analysis of another kind of computing technology, namely, Ubiquitous Computing, see Araya (2000).

Computer-Supported Visualization

Before embarking on an ontological analysis of the technology of visualization, we need to gain an understanding of its potential scope and principles. We now consider both the ostensible motives behind the emergence of visualization technology and its guiding principles, as they are perceived from within the discipline itself. Preceded by various developments in computer science and other disciplines, computer-supported visualization made its formal appearance in the nineteen eighties, with a report of the National Science Foundation addressing several 'problems' arising mainly in the context of the scientific community. 2 A central problem was the so-called 'information-without-interpretation dilemma'. As the number and power of sources of data—such as satellites, medical scanners, and supercomputers—increase, the amount of data they make available far surpasses the capabilities of scientists and professionals to process it. Exacerbating the situation, the phenomena under study, such as large molecules, the human brain and body, earth climate, or the large-scale structure of the 'universe', are increasingly complex entities themselves, whose study requires the collection of large amounts of data.

Two additional problems stressed by the report were the means of communicating results among scientists and the ways of providing for close interaction between scientists and the computational analysis of data as it unfolds, to be able to 'steer' the computation in promising directions. As a solution to these problems the report proposed the development of visualization technology:

Scientists need an alternative to numbers. A technical reality today and a cognitive imperative tomorrow is the use of images. The ability of scientists to visualize complex computations and simulations is absolutely essential to insure the integrity of analyses, to provoke insights and to communicate those insights with others (Bruce McCormick, Thomas DeFanti, and Maxine Brown 1987, p. 7).

Finally, the report proposed the "implementation of a federally funded ViSC (Visualization in Scientific Computing) initiative." In addition to the relevance of the problems it addressed, a significant strength of this initiative was that it did not appear in a vacuum, but rather it attempted to build upon a number of existing scientific and technological areas that had developed to a large extent independently of each other, such as Computer Graphics, Image Processing, Computer Vision, Computer-Aided Design, Signal Processing, and Human-Computer Interaction.

In the National Science Foundation initiative the focus was on the interpretation and communication of 'physical' data, that is, data originating from physical sources such as the earth and the human body. But there are other sources of data, more properly referred to as 'information'—such as office, business, and financial information—which are increasingly significant and in which similar problems arise. An important difference between visualizing physical data and visualizing information, understood in the previous sense, is that while in the first case the characteristics of the source of data provide 'natural' visualizations of it, this is not generally the case with information. In consequence, a second area of computer-supported visualization emerged, which has become known as information visualization. 3 Visualization technology is now being utilized to support a variety of technological and scientific activities and is a very active area both in terms of research and applications.

Visualization Principles

But, what are the principles underlying computer-supported visualization? What are the fundamental tenets that sustain this discipline, make possible the use of visualizations in the analysis and interpretation of data and information, and orient its future development? Let us identify these principles and examine the way in which they have been understood from within the technical discipline itself.

Scientific and Information Visualization have been characterized as "the use of computer-supported, interactive, visual representations of data [and information] to amplify cognition." (Stuart Card, Jock Mackinlay, and Ben Shneiderman 1999, p. 6, our brackets) A visualization can be a figure (e.g., a map adequately enhanced with the use of colors and patterns, or an image obtained by combining photographic images with computer-generated images), a diagram (e.g., a three-dimensional graph represented in two-dimensional space which can be rotated and expanded), or any other kind of visual representation. A crucial characteristic of these visualizations is that it is possible to 'interact' with them with the help of specialized devices. Such interactions allow us to 'manipulate' visualizations as we manipulate things in the 'real' world, and to perform other kinds of manipulations that are not usually possible.

Visualizations contribute to 'amplify cognition'. Those human capabilities and potentialities that come into play when performing a task of analysis and interpretation of data are regarded as cognitive capabilities or 'resources', and conceived as information processing operations. Once cognition is regarded as a human resource for processing information, amplifying or augmenting cognition means increasing, extending, or improving aspects of this resource. Visualization would amplify cognition in a variety of ways:

Visualizations can expand processing capability by using the resources of the visual system directly. Or they can work indirectly by offloading work from cognition or reducing working memory requirements for a task by allowing the working memory to be external and visual...Visualizations allow some inferences to be done very easily that are not so easy otherwise (Card, Mackinlay, and Shneiderman 1999, p. 16).

Interacting with visualizations is regarded as important for furthering our understanding of complex things or systems. In the case of computer-generated visualizations of four-dimensional objects such as hypercubes, translating, rotating, or expanding them can make understandable what at first appears as a confusing collection of interconnected lines.
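
To make this concrete, the following is a minimal sketch in Python (our illustration; the article describes no particular implementation) of the kind of manipulation at issue: the sixteen vertices of a hypercube are rotated in one plane of four-dimensional space and then projected onto the two dimensions of the screen, so that successive rotations let the tangle of lines resolve into a graspable structure.

```python
# Minimal sketch (illustrative, not from the article): rotate a 4D hypercube and
# project it to 2D screen coordinates, the kind of manipulation described above.
import itertools
import numpy as np

# The 16 vertices of a tesseract: every combination of +/-1 in four coordinates.
vertices = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

def rotate_4d(points, angle, axis_a=0, axis_b=3):
    """Rotate points in the plane spanned by two of the four coordinate axes."""
    rot = np.eye(4)
    c, s = np.cos(angle), np.sin(angle)
    rot[axis_a, axis_a], rot[axis_a, axis_b] = c, -s
    rot[axis_b, axis_a], rot[axis_b, axis_b] = s, c
    return points @ rot.T

def project_to_2d(points, distance=3.0):
    """Perspective projection: scale x, y by a factor depending on the 4th coordinate."""
    factor = distance / (distance - points[:, 3])
    return points[:, :2] * factor[:, None]

# Re-running this with a slowly changing angle, and redrawing the projected edges,
# is what 'rotating the hypercube' amounts to on the screen.
flat = project_to_2d(rotate_4d(vertices, angle=np.pi / 6))
print(flat.shape)  # (16, 2): one 2D screen point per vertex
```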

To render these various notions in a more condensed way, Card et al. characterize the purpose of visualization as "using vision to think," where thinking is understood as a central element of cognition. What emerges from this complex of notions is what we will call the thinking with visualizations principle. This principle is oriented to 'enhance' a thinking that is assisted, mediated, and carried out by interacting with visualizations in a computer-supported 'space'. But there is another notion that, although related to the 'thinking with visualizations' principle, brings into sharper relief what computer-supported visualization is about. In his foreword to Richard Friedhoff and William Benzon's Visualization, The Second Computer Revolution, Richard Gregory states that:

The central point that this book makes is that the newly discovered preconscious processes of human vision can be tapped and used to powerful effect by computer images—especially by computer graphics to suggest ideas. Perhaps its most powerful form is interactive graphics, where the hand can control and change the image, much as though it is a solid object lying in the familiar space of the object world. (Friedhoff and Benzon 1989, p. 8, Gregory's italics).

In the same vein, Friedhoff and Benzon consider as crucial the notion that "images have a special ability to trigger, in a controlled way, the exceedingly refined mechanisms of human visual perception." What does this mean?

That beyond the raw power of computing machines to calculate and to store massive amounts of data there appear to lie certain possibilities by which, through an 'increasingly tight coupling of humans and machines', unknown or little known potentialities of the 'brain and mind' could be uncovered and put to use. By means of images with special properties and, possibly, by allowing forms of interaction which go beyond everyday kinds of manipulations of things, it could be possible to trigger preconscious visual mechanisms and put them to work, thus amplifying our cognitive capabilities and thinking. To a limited extent, existing visualization techniques already achieve this. As Friedhoff and Benzon indicate,

Computer graphics provides a seamless fusion between the massive processing power of the visual system and the power of the digital computer...computer graphics, because it bonds mind and machine in a unique partnership, creates an entirely new way of thinking (p. 82, our italics).

Under this characterization of computer graphics, which is one of the most advanced technologies used to generate visualizations, we find two additional and powerful principles that underlie the whole enterprise of visualization. First, what we will call the fusion principle. That is, the notion that cognitive capabilities can be amplified and augmented by integrating, merging, even more, 'fusing' humans and computers, via the utilization of interactive visualizations. 4 And, second, the 'principle of the possible transformation of thinking by the use of visualizations', which we will call the transformation of thinking principle. That is, the possibility that by using interactive visualizations having properties that could 'trigger' preconscious visual processes in new ways, new kinds of thinking could arise.

Finally, we will mention two other principles at play in the context of visualization. Because the visual system is especially adapted to perform certain kinds of tasks, computer-supported visualizations should be oriented towards those tasks. This leads to what Friedhoff and Benzon call 'objectification': "a phenomenon, whether it is inherently visual or not, should be represented as something that has form, color, texture, motion, and other qualities of objects" (p. 169). We will refer to this basic notion as the objectification principle, which is oriented to make visible that which is not. Closely related to this principle is the principle of naturalism. As stated by Friedhoff and Benzon, "the central issue in computer graphics today is naturalism. The goal is to dispatch forever the angular, harshly colored images of the past and to move towards images that are so realistic as to be indistinguishable from photographs" (p. 85). This principle can be regarded as complementary to objectification in that, once something has been objectified, it strives to give it a high degree of 'realism'.

We have identified five principles, that is, the principles of thinking with visualizations, human-computer fusion, transformation of thinking, objectification, and naturalism, which characterize the area of computer-supported visualization as perceived from within the field itself. These principles can be articulated as follows. While the thinking with visualizations principle establishes the 'fundamental purpose' of this technology, namely, the amplification of cognition, the fusion principle, advocating greatly intensified human-computer interaction, specifies the means by which to achieve such amplification. For their part, the objectification and naturalism principles establish particular ways in which human-computer fusion can be achieved. Finally, the principle of the transformation of thinking points towards the possibility that thinking not only be amplified but also transformed as a consequence of the play of the four other principles.

But, what are these principles? What gives them their authority, that they can ground a technological discipline as a whole? We will return to these principles and questions later in the work, once we have gained a deeper understanding of what visualizations themselves are.

Before we conclude this section we need to consider two interrelated questions. First, will this new technology ever be able to transcend the boundaries of research laboratories to become part of the world of work? Second, even if indeed the technology were finally transplanted to the world of science, engineering, medicine, and other disciplines, could this technology make the leap into everyday life, as computers themselves are increasingly doing? Let us hear what practitioners tell us in this regard:

Information visualization is a body of techniques that eventually will become part of the mainstream of computing applications ... At certain points, the development of technology crosses barriers of performance and cost that allow new sets of techniques to become widely used. This, in turn, has effects on the activities to which these techniques are applied. We believe this is about to happen with visualization technology and information visualization techniques. Information visualization is a new upward step in the old game of using the resources of the external world to increase our ability to think (Card, Mackinlay, Shneiderman 1999, p. 34).

Because interactive visualizations rest upon two fundamental and intrinsically related human capabilities, namely, the capabilities 'to see' and 'to manipulate' things, visualization technology has the potential for being used in any activity whatsoever, not just in those specialized activities of the world of work. In addition, other developments of a technological and 'social' character—such as global computer networks and ubiquitous computing—are powerfully contributing to the already 'overflowing river of data, information, and knowledge' that has become accessible in everyday life, creating a 'problem' that visualization technology may contribute to 'solve', thus opening the door for the penetration of everyday life with this technology.

Ontological Approach

Given our aim to understand what visualizations are and, subsequently, to identify ways in which, in the context of highly technologized communities, the pervasive use of visualization technology could invite essential transformations in the way we—humans—are, how should we approach the analysis of this technology? We could start by attempting to determine what the devices enabled by this technology are. If we were to ask a designer of a technological device 'what such a device is', we may be able to learn about the uses that can be made of it, how such uses are supported by the various components of the device, the decisions that were made in its design, and possible justifications for these decisions. If we were to put such a question to researchers engaged in the technologies that make a device possible, we could learn about the principles underlying the technologies, as we did in the previous section, as well as about their potentialities and limitations. We could also gain an understanding of what a device is by asking the users of the device, who may be able to identify the reasons why the device is useful in certain situations, and how it is best used.

But what would all of this amount to? It would give us an understanding of the technology and related devices from the perspective of 'the present', that is, from a point of view which is almost entirely subsumed within the confines of how things appear to us today. Because the essential ways in which technologies unfold in the course of their long gestation period are for the most part invisible to us, that which is immediately accessible of them constitutes the 'given', the 'historically transmitted'—the obvious—which, although familiar to us in its immediacy, becomes largely incomprehensible as we start probing beyond the immediate.

A possible way to free ourselves from this 'tyranny of the present' is to attempt to develop a 'reconstruction' of essential moments in the gestation of a technology, moments that may remain hidden in it but continue to be determinant of its power and of our encounters with the technology. We have to take seriously, and assign to it all the weight that it deserves, the notion that human communities, in the long span of history they have traversed so far, have not only created myriad artifacts of all kinds but, most important, have been able to create new kinds of beings and entities exhibiting novel ontological traits with respect to what preceded them. Similarly, human communities have developed a variety of practices of all kinds, sometimes giving rise to practices of a new kind relative to those previously known.

Because these new kinds of entities and practices typically take long to develop and to take hold of a community, it is difficult for us to perceive them as novel and to determine in what their novelty consists. We call ontological operations, or operations having ontological import, those kinds of human practices, or specific actions taking place in the context of practices, which give rise to new kinds of entities and, possibly, new kinds of practices exhibiting novel ontological traits. Similarly, we call ontological biases those tendencies in our encounters with things that make us take them to be in certain ways that are essentially different from what they have been to us in the past. Thus, an ontological bias may imply an ontological transformation in the making, but not yet completed.

How do these operations obtain their ontological transformative power? In most cases isolated operations will not give rise to ontologically novel entities. It is only when they are grouped together with related operations in larger practices that they may achieve that capability. In addition, they need to spread themselves among other practices and to acquire a weight within the larger community to the point that they begin to supersede other competing practices. Finally, what entitles us to say that certain entities created in the context of human practices have novel ontological traits? That they possess essential traits that they do not share with other known kinds of entities.

If we were able to reconstruct the ontological operations responsible for the emergence of a particular kind of technology, in particular, computer-supported visualization, this would give us a good starting point to consider the possibility that in our encounters with devices enabled by such technology certain essential traits that characterize us may be transformed in subtle ways. Our aim in the remainder of this work, then, is to examine the technology of computer-supported visualization from this historic-ontological perspective.

What Do We Encounter In A Visualization Situation?

When we encounter a visualization in the course of an activity, what is that which we become involved with? In encountering a thing, in this case a 'technological thing', what emerges in the encounter is determined, on one hand, by what we bring to the encounter—that is, a particular comportment—and, on the other, by what the thing itself brings. In this section we will concentrate primarily on the technological thing itself, and will ask the following question: What is a visualization?

A Visualization Situation

As a point of reference we will consider the visualization of large molecules, whose characteristics are typical of a large class of visualizations. In comparison with other cases, such as the visualization of four-dimensional objects or of complex mathematical functions, this is a rather conservative domain. Choosing it over the other cases has the advantage that it is relatively simple to understand, and that whatever we may learn from it will most likely also be valid for more radical cases, while the converse is not necessarily true.

Ball-and-stick models of molecules, made out of wood or plastic material, are commonly used to visualize their structure. Because it is impractical to develop these kinds of models for large molecules, and because they are static and, thus, unable to visualize changes in the molecule's structure, computer-supported versions of ball-and-stick models have been developed. These visualizations have many uses, including determining whether a large molecule such as a drug could attach itself to other molecules found in cells of organisms. Such visualizations show detailed, colored, ball-and-stick models on the computer screen, models that can be manipulated to modify their configuration.

In a visualization situation there are entities or systems under consideration, and the purpose of the activities taking place in it is to perform certain tasks involving such entities. That which is under consideration is 'represented' in the computer in a certain way, for example, in terms of a programmed 'model' of a molecule specifying the kinds of atoms that it contains, their properties, their relative positions, and the bonds between them. To facilitate the performance of tasks, the entity or system is visualized in terms of a computer-supported visualization which is under the control of a second program that produces a visual 'presentation' of the molecule, as represented by the programmed model. A variety of elements enter into play in such a presentation.
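
The following is a minimal, hypothetical sketch in Python of what such a programmed 'model' might contain; the names and fields are our own illustrative assumptions, not those of any actual molecular visualization system.

```python
# Hypothetical sketch of a programmed molecular 'model': atoms with element types
# and relative positions, plus bonds between them. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Atom:
    element: str       # e.g. "C", "N", "O", "H"
    position: tuple    # (x, y, z) relative position

@dataclass
class Molecule:
    atoms: list = field(default_factory=list)
    bonds: list = field(default_factory=list)   # pairs of indices into `atoms`

# A water molecule as a model the presentation program could read: it would paint
# a ball for each atom and a stick for each bond, while manipulations of the
# visualization change only the viewing transform, not the model itself.
water = Molecule(
    atoms=[Atom("O", (0.00, 0.00, 0.0)),
           Atom("H", (0.96, 0.00, 0.0)),
           Atom("H", (-0.24, 0.93, 0.0))],
    bonds=[(0, 1), (0, 2)],
)
print(len(water.atoms), len(water.bonds))  # 3 2
```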

Elements involved in a computer-supported visualization

A visualization is shown on a flat, two-dimensional surface, the computer screen, which is composed of point-like elements called 'pixels'. The computer screen contains 'windows', that is, rectangular entities that may overlap with each other and can be reshaped and moved within the boundaries of the screen. Windows are under the control of computational processes which result from the execution of computer programs, processes that display visualizations on the windows. A visualization is painted on a window by appropriately coloring selected pixels in the screen region occupied by the window, so as to represent lines and surfaces. The visualization may be organized as a two- or three-dimensional entity. Visualizations can be 'manipulated' by means of point-and-click devices, allowing for a variety of operations including opening, moving, and reshaping.
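
As a minimal sketch of this arrangement (our illustration; no particular windowing system is implied), one can think of the screen as an array of pixels, a window as a rectangular region of that array, and the visualization as the result of a program coloring selected pixels within that region:

```python
# Illustrative sketch: a screen as an array of pixels, a window as a rectangular
# region of it, and a visualization painted by coloring selected pixels.
import numpy as np

screen = np.zeros((480, 640, 3), dtype=np.uint8)   # the flat surface of pixels (RGB)
window = (slice(100, 300), slice(200, 500))        # a rectangular region of the screen

def draw_line(region, x0, y0, x1, y1, color=(255, 255, 255), steps=300):
    """Color the pixels along a straight segment, in window-relative coordinates."""
    for t in np.linspace(0.0, 1.0, steps):
        x = int(round(x0 + t * (x1 - x0)))
        y = int(round(y0 + t * (y1 - y0)))
        region[y, x] = color

view = screen[window]            # the window 'owns' this slice of the screen
draw_line(view, 10, 10, 280, 180)
print(int(view.any()))           # 1: the painted line is now part of the screen's pixels
```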

Visualization Machines and Dimensional Spaces

A computer-supported visualization is a presentation of something on a window under the control of a computer program. We will refer to the complex constituted by computer, screen, windows, and visualization programs as a visualization machine. A visualization, then, is a presentation generated by a visualization machine. But now, what is a visualization machine? We can approach this question by noting that visualization machines are strongly related to the so-called 'Cartesian spaces', which have become pervasive in scientific and technological activities. In a visualization machine the screen and windows function as 'spaces' in which visualization programs display shapes, typically geometric, that are characteristic of Cartesian spaces. In what follows, we explore in detail the notion of Cartesian space, reserving for later a more precise characterization of the relationships between visualization machines and Cartesian spaces.

Cartesian spaces are constituted by an organized, infinite collection of points—the pixels in the case of the computer screen—each of which can be uniquely identified with respect to a set of 'axes', typically two or three, which intersect at a single point, the 'origin'. Axes are measuring tools that, emanating from the origin, extend to infinity. Points are nothing in themselves but 'measured space'. Because no region of this space can, in principle, escape measurement, that is, escape being uniquely identified in terms of measures, Cartesian spaces turn out to be infinite measuring devices, such that whatever there is of 'space' in them is subordinated to measuring.

Underscoring the centrality of the notion of measure to Cartesian spaces is another common way in which we refer to them, namely, as 'dimensional' spaces. A dimension is a measure of space; as a verb, to dimension refers to an 'act of measuring.' Etymologically, it derives from the Latin dimetiri, dis + metiri, where the particle dis in one of its senses indicates an intensification of the action it modifies, thus signifying 'to measure carefully'. In what follows, we use the terms dimensional and Cartesian interchangeably to refer to these spaces.

What kinds of entities can inhabit such spaces? Entities composed of particular collections of measured points, usually constituting geometrical shapes and surfaces. Even if, in principle, it is possible to describe any such collection of points by individually specifying each of them in an appropriate order, this would still leave us at the mercy of an unconquered infinity. It then becomes crucial to augment the notion of measure from being the specification of a point on a scale to being a 'formula', which specifies how the points in the collection are to be obtained by means of mathematical operations. Simple incarnations of such formulas describe geometrical shapes. In a broader sense, a Cartesian space is constituted not only by an infinite collection of measured points but also by the complex kinds of measurements we have called formulas. Because of the wealth of significations and practices that obtain in these spaces, they have the character of a world, constituting what we will call 'Cartesian' or dimensional worlds. But we will also refer to them interchangeably as Cartesian spaces.
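
As a simple illustration of what is meant here by a 'formula' (our example, not one given in the text), the circle of radius r centered at the origin need not be specified point by point; a single condition, or equivalently a parametric rule, determines every one of its infinitely many measured points:

```latex
\[
  C_r \;=\; \{\, (x,\, y) \in \mathbb{R}^2 \;:\; x^2 + y^2 = r^2 \,\}
  \qquad\text{or, parametrically,}\qquad
  (x,\, y) \;=\; (r\cos\theta,\; r\sin\theta), \quad 0 \le \theta < 2\pi .
\]
```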

A dimensional or Cartesian space, then, is an infinite measuring device that can be applied as a tool to measure anything measurable—as well as to attempt, unsuccessfully, to measure that which is essentially immeasurable.

Measurement Practices

But now, what is an infinite measuring device? Let us approach this question by way of examining what takes place in practices in which something is being measured, which we will call measuring or measurement practices. Measuring, which is an extremely broad and pervasive kind of practice taking place in everyday life as well as in technological and scientific activities, is a particular way of encountering things. In measuring something, say the temperature of a person or the length of a piece of furniture, we come to the encounter with a 'purpose in mind' and with a measuring device or instrument in hand. We 'apply' the device to the thing in question, obtain a measurement, and terminate the encounter. At the heart of such an encounter, when we apply the instrument to measure the thing, we pay attention primarily to that which the instrument can measure. No matter what the thing is, while taking its measure it recedes into the background and the property or properties being measured come sharply to the fore.

For its part, the instrument plays a crucial role in the encounter, bringing with it a way of measuring as well as the particular kind of measures, that is, the 'units' in terms of which the property will be measured. Thus, the specificity of the thing is superseded by the specificity of the instrument. In applying the instrument we have to 'follow the instructions' associated with the measuring procedure and, in so doing, we have to 'adjust' ourselves to the device and its procedure; we have to 'attune' ourselves to them. In the end, what counts most in the encounter is the measurement itself, to the point that once it is obtained, the encounter comes to an end. A measurement is the 'result' of the encounter, what comes out of it, and what remains. It is the specific purpose of the encounter.

In a measurement practice, then, we have multiple encounters, including encounters with that which is being measured, with the measuring device, and with the measurement itself. But we also encounter a purpose—in that we come 'to have in mind' a measuring-related purpose—and, most important, we encounter or engage in a particular measuring procedure: We carry out an act of measuring thus becoming, for the duration of the activity, 'a being that measures'. Depending on the specific characteristics of the measurement practice and on the purposes behind it, these different kinds of encounters may be emphasized differently. In taking the temperature, say, of an animal, we may do so in the context of trying to cure the animal or in the context of an animal experiment. In the second case, as it is the measurement that counts most, we will have a different kind of encounter with the animal than in the first, in which it is the animal that matters.

Thus, a measuring comportment defines a space of possibilities in which various kinds of encounters take place, encounters which can be emphasized or accentuated differently, depending on the characteristics of the specific encounter. But it should be possible to distill certain biases and tendencies underlying such diversity, which may be characterized as follows. First, a measuring encounter unfolds in ways that are governed by the measuring procedure that, in turn, is primarily determined by characteristics of the measuring device. This is an indication of a certain primacy or preeminence of the measuring device and its associated procedure over the thing being measured. Second, during the measuring activity the property of the thing that is to be measured comes sharply to the fore, while the thing recedes into the background. This suggests the existence of 'operations' to 'put aside' the thing in order to attend to one of its properties. Third, the purpose of the encounter is to obtain a measurement of the thing, which is what we take with us, leaving the thing behind. This is an indication of a certain primacy or preeminence of the measurement over the thing being measured.

Because through these biases, tendencies, and operations what is being encountered in a measuring practice may be significantly altered, we will say that they have a potential ontological import. These are biases and operations which, by underlying measuring encounters, have the possibility of changing the character of what we encounter in things, ultimately, the possibility of giving birth to new kinds of beings, with peculiar ontological characteristics. What we have here are subtle biases and operations, themselves subject to change, which could transform the way we encounter things, thus giving rise to new kinds of encounters, that is, new kinds of practices in which we can engage. We note that with 'advances' in technology, through which measuring devices become increasingly 'powerful' and measurements increasingly 'complex and systematic', these inner trends characterizing measuring practices are most likely exacerbated.

Ontological Operations and Biases Embedded in Dimensional Spaces

Having gained a basic understanding of what measuring is and the kind of possibilities that are opened up in it, we now return to our previous question, that is, to the question of what dimensional spaces, as infinite measuring devices, are. Because a Cartesian space is a measuring device, it should reflect in itself central characteristics of measuring practices, that is, their tendencies and operations, and, given that it is an infinite measuring device, it should reflect them in an intensified way. Thus, to understand what a Cartesian space is we need to identify the specific tendencies and operations that are embedded in such a device. To this effect, we must go beyond the consideration of such a device as it appears to us in the present, and consider its 'genesis' or 'origin'. Behind the device in question hide the operations and biases that generated it in the first place.

In fact, a Cartesian space is a very elaborate historical creation, resulting from the accumulation of operation upon operation, of bias upon bias, during the course of centuries, literally, of millennia. Successive waves of schools of mathematics and natural philosophy—nested in and nurtured by successive historical ages—have contributed to its creation. A Cartesian space, carrying the name of one who made decisive contributions to it, is an extraordinary historical achievement, whose power in terms of infinity and universality continues to be expanded, and 'for which new and significant horizons have been opened up with the advent of the digital computer'. Computer-supported visualization is but the latest step in a long—very long—chain of developments.

What are the ontological operations and biases that have contributed to the genesis of this device? We are fortunate to have available a powerful and imaginative work which will be very helpful in approaching this question. In The Crisis of the European Sciences and Transcendental Phenomenology (Husserl 1970a), Edmund Husserl attempted to grasp, from what turned out to be a 'historico-intentional-praxical' perspective, essential moments that could account for the emergence of modern science and modern geometry with Galileo and Descartes, respectively. 5 Central to Husserl's analysis is a characterization of the 'origin of geometry', which is a fundamental element in the notion of Cartesian space, as we are considering it here. Because of the relevance of Husserl's analysis to what concerns us, we will now consider it in detail. 6

Husserl asks what was 'given' to Galileo, as transmitted by the tradition, that he took for granted and that served as a basis for his own contributions to modern science. Unless we are engaged in critical reflection, that which is given is so familiar to us that we barely have an awareness of its being there. Or, if we do have some awareness, it is of something that is so close to us, something with which we have such an intimate connection, that it is difficult to separate it from ourselves. It constitutes us; it is us in some sense. Because of this intimacy and the attendant difficulty in establishing a distance from which we could confront the given, this is something we don't typically talk about.

How does Husserl proceed to gain access to that which is given at a particular historical moment, say, the 'Galilean moment'? He does not rely primarily on Galileo's works, although he certainly assumes a close familiarity with them. Rather, from his general knowledge of a historical moment, in particular, from what was available in everyday life and in the practices that obtain in it, Husserl identifies significant elements which, on the one hand, it is plausible to assume were pervasive and which, on the other, can sensibly be assumed to be relevant to, in this case, Galileo's own practices. Husserl's analysis, then, focuses on practices and transformations of practices as they take place in everyday life. In addition, these transformations are not regarded as being primarily triggered from the 'outside', but are understood as emerging from the inner development of those same practices.

Among the important givens for Galileo, geometry certainly occupies a significant place. At first, Husserl focuses on the differences between modern geometry and mathematics and their Greek counterparts, in an attempt to understand what is peculiar to the modern developments. But soon the focus shifts towards the 'origin of geometry'. It is possible that, after analyzing the differences between modern and ancient mathematics, Husserl concluded that although they are significant in several respects, ancient mathematics and geometry had already taken steps so decisive that in order to gain a fundamental understanding of geometry it was necessary to go beyond the moderns toward the ancients, and even beyond the ancients themselves, in an effort to understand the origin of geometry.

In The Origin of Geometry, 7 a work closely related to the Crisis, in which a more radical understanding of the insights gained in the Crisis is attempted, Husserl indicates that:

...our interest shall be the inquiry back into the most original sense in which geometry once arose, was present as the tradition of millennia, is still present for us, and is still being worked on in a lively forward development; we inquire into that sense in which it appeared in history for the first time -- in which it had to appear, even though we know nothing of the first creators and are not even asking after them. Starting from what we know, from our geometry, or rather from the older handed-down forms (such as Euclidean geometry), there is an inquiry back into the submerged original beginnings of geometry as they necessarily must have been in their "primally establishing" function. This regressive inquiry unavoidably remains within the sphere of generalities, but, as we shall soon see, these are generalities which can be richly explicated, with prescribed possibilities of arriving at particular questions and self-evident claims as answers (Husserl 1970b, p. 354).

Husserl sees here the necessity of a bold kind of 'historical' inquiry, which attempts to go beyond historical facts and aims at a 're-construction' of the foundational notions of geometry. In Husserl's perspective such a reconstruction attempts to grasp the 'original meanings' of geometric notions. Because we lack historical sources, the attempt takes the form of a 'regressive inquiry' that takes as its point of departure, say, Euclidean geometry as it was handed down to us, and goes back towards the origins. But what could those origins be, and how could we know about them given the lack of sources? Husserl suggests that, ultimately, geometry must have emerged from the 'prescientific' world:

...even if we know almost nothing about the historical surrounding world of the first geometers, this much is certain as an invariant, essential structure: that it was a world of "things" (including the human beings themselves as subjects of this world); that all things necessarily had to have a bodily character .... What is also clear, and can be secured at least in its essential nucleus through careful a priori explication, is that these pure bodies had spatiotemporal shapes and "material" qualities (color, warmth, weight, hardness, etc.) related to them. Further, it is clear that in the life of practical needs certain particularizations of shape stood out and that a technical praxis always (aimed at) the production of particular preferred shapes and the improvement of them according to certain directions of gradualness (p. 375).

In the 'prescientific' world there are already practices oriented to the production of smooth surfaces and edges, which require estimates of sizes and, consequently, measuring techniques of varying degrees of precision. It is the gap between these origins, on one hand, and ancient and modern geometry, on the other, that it is necessary to bridge by reconstructing possible intermediate steps. This insight, namely, that geometry must have emerged from measuring practices already prevalent in everyday activities, provides the starting point for the approach Husserl will follow to explore possible ways in which it originated. In the Crisis, and in a summary way in the Origin, Husserl identifies several moments, which we will now consider. In the presentation below, although we follow the general thrust of Husserl's analysis, taking into account the question we have raised about the ontological operations and biases embedded in Cartesian spaces, we are freely reinterpreting, and at times extending, the analysis from an ontological perspective. In a subsequent section we will perform a critical appraisal of Husserl's approach and of our own reinterpretation of it.

Ancient Geometry

Let us start by examining basic geometric notions. If we consider what a geometric shape such as a line or a circle is, as opposed to the shape of a thing, we come to see that it is a 'limit' case of a shape. That is, it is a shape that, although for the most part not immediately available in our encounters with everyday things, can nonetheless be obtained by performing certain operations upon 'naturally' occurring shapes. As Husserl suggests (1970a, p. 26), in many everyday practices there are activities oriented to developing smooth surfaces and edges, such that the notion of a straight edge appears in those activities as a limit, specifically, as a 'perfect' limit towards which the activity tends. We add that the notion of something 'perfectly straight' may appear in a variety of activities, for instance, in the act of walking. To approach something we typically walk towards it maintaining a constant direction. If we 'disregard' the 'irregularities' of walking and concentrate purely on its constant direction, we are left with a straight trajectory of movement. It is at the limit, when all irregularities have been discarded, that we find a perfectly straight trajectory. Or again, if we use a string to measure the length of something, as we stretch the string it goes through a variety of shapes until it reaches one that no further stretching can change. In the limit we have a straight string.

What these scenarios suggest is that there are practices in which the notion of limit-shape arises, either as that toward which the activity explicitly tends, or as an extreme case that emerges in the course of the activity. These practices can be regarded as including operations oriented towards a limit in which irregularities in shape are eliminated, thus constituting 'smoothing' operations. But, we need to add, there is more to these operations than that, because in the limit—a limit that, although never reached, is still possible—a new kind of being or entity emerges, characterized by an absolutely non-irregular shape. What is novel in it is that it is a limit, something possible but never attainable. For this reason, because through these operations new kinds of entities arise, we will refer to them as ontological smoothing operations. This last phrase can be understood in at least two senses, both of which are intended. First, it refers to 'smoothing operations' that have an ontological character because they give rise to new kinds of entities. But, second, it refers to 'ontological smoothing', that is, to operations leading to simpler, smoother kinds of beings.

Yet the question may still arise: Are these perfectly regular shapes of things 'really' new kinds of beings? Why not just say they are shapes with the very special characteristic of being perfectly regular, in a particular sense? To us, their ontological novelty resides in the fact that they are 'limit' shapes, unsurpassable from the point of view of their regularity and unachievable in terms of concrete practices. Underlying limit shapes there is a new kind of encounter with things, one which does not remain with the thing as it emerges, but attempts to surpass it absolutely by positing a 'counter-thing' that is perfectly regular in some sense.

As Husserl noted (p. 25), these transformations of 'empirical' shapes towards a limit still leave us with empirical limit-shapes. A straight edge, in its perfected shape, remains a straight edge, something that is accessible to us empirically. But these perfected empirical shapes are not yet, strictly speaking, geometrical notions. Even if, with our imagination, we 'eliminate' the irregularities still present in the edge of a piece of furniture, that imagined perfect shape is not yet a 'straight line' in a geometric sense. To arrive at this last notion an additional and quite powerful operation is necessary, namely, to discard the edge of the piece of furniture, the trajectory of the movement, or the measuring string itself, in order to be left with the 'pure' straight line.

We have, then, a second kind of operation by which the 'body' of a perfected thing is erased out of existence. It is the body that is irregular, the body that contains the impurities affecting the not-yet-perfect empirical shape. How do we attain the limit? To put it figuratively, we reach the limit by means of ontological 'surgical' operations through which we, first, separate the body of things from their perfected, totally visible 'skin' and, second, discard the former and hold onto the latter. We will refer to this kind of operation as ontological excising and lifting operations, by which a world of geometric 'idealities' is lifted from the world of everyday practices. Again, this last phrase can be understood in at least two senses. It refers to excising and lifting operations that create new kinds of beings, hence operations that have an ontological character. In addition, these operations act by ontological excising and lifting, that is, by performing a radical incision that splits the realm of experience into a realm of 'empirical' shapes and a realm of 'ideal' shapes, giving rise to excised and lifted beings.

Although a geometric shape may appear to be just a contour or line, because it is a limit-shape that has been obtained by operations of ontological smoothing, excising, and lifting, it possesses unique, exquisite properties. It is a being that is not just 'pure skin', but is endowed with properties that distinguish it from any other geometrical shape. When elementary shapes are combined with each other in multiple ways, they give rise to constructions of considerable complexity, exhibiting complex properties. As a result, we have not just a collection of geometrical shapes but an entire 'world' of idealities populated by ideal geometric beings.

Once geometrical shapes have been attained, something else becomes possible. By getting rid of the body of things we also get rid of what is 'unknown', invisible in them, while keeping only perfected, visible shapes. In doing so, we have created entities that are absolutely regular and absolutely visible. These two distinctive characteristics of geometric shapes make simple shapes such as lines, circles, and polygons immediately understandable, thus suggesting the possibility that all the properties and relationships that can be conceived of them could become 'fully known' to us. Hence, a new kind of practice arises which, by positing initial, self-evident assertions about geometric relationships, is able to determine the kinds of relationships that should follow from them by taking a series of steps, each of them justified by prior steps. This practice emerges by lifting operations from practices that occur in everyday life (p. 26). While in the 'empirical' world we can determine properties and relationships involving things by measuring them—achieving in this way what may be called 'empirical truths'—in the ideal geometric world we gain knowledge of properties and relationships by means of logico-deductive practices, achieving a lifted form of empirical truths, namely, 'universal truths'.

Husserl can then say:

So it is understandable how, as a consequence of the awakened striving for "philosophical" knowledge, knowledge which determines the "true," the objective being of the world, the empirical art of measuring and its empirically, practically objectivizing function, through a change from the practical to the theoretical interest, was idealized and thus turned into the purely geometrical way of thinking (p. 28).

We can come to appreciate, then, that these ontological excising and lifting operations not only give birth to new kinds of beings but, also, to new practices concordant with them, thus inaugurating a new world, even more, inaugurating a world of a new kind.

As if to confirm its character of being a world, we observe that this new kingdom of idealities has its own foundational work or 'Bible'. Euclid's Elements, which because of its influence is probably the most important mathematical text ever written, establishes the foundations of a new world. 8 It is not so much the specific results it presents that are significant as its manner of proceeding: first laying the ground in terms of 'definitions' of the entities with which it deals—definitions by which these entities 'come to life'—followed by 'postulates' which identify basic geometric practices and properties of mathematical entities, and by 'common notions' identifying what today we could call logical axioms of equality. 9 All of this is crowned by the introduction and utilization of a new kind of practice appropriate to this new kind of world, namely, that of establishing the 'truth' of geometric properties and relationships by a rigorous logico-deductive method.
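
By way of illustration (a formalization of ours, not found in the text), Euclid's first common notion—that things which equal the same thing also equal one another—already has the form of a rule licensing further steps:

```latex
% Euclid's first common notion, written as a rule of inference (illustrative).
\[
  \frac{A = C \qquad\quad B = C}{A = B}
\]
% In the Elements' own manner of proceeding, Proposition I.1 uses exactly this step:
% the two sides drawn from the constructed point each equal the given segment
% (as radii of the two circles), hence they equal one another, and the constructed
% triangle is equilateral.
```

What matters for the present analysis is not the particular proposition but that each step is justified by a definition, a postulate, a common notion, or a previously established result.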

What ontological biases can be identified in these new kinds of practices? As indicated before, ontological smoothing, excising, and lifting operations have given birth to absolutely regular and absolutely visible beings. Through these practices an ontological bias is introduced, namely, a tendency towards assuming that things, be they 'ideal' or 'empirical', are fully knowable, and that they are fully knowable for us. We call this bias ontological rather than epistemological because it concerns 'essential' characteristics of things as well as our own essential traits, insofar as we deal with things, specifically, by getting to 'know' them. On the basis of this ontological tendency certain epistemological consequences should follow. We will refer to this kind of bias as the ontological bias towards full knowability of things, or ontological bias towards transparency.

We should note two related and important additional biases introduced by the previously mentioned operations. First, because through excising and lifting operations the body of things is discarded together with all its accompanying phenomena, while only smooth shapes are retained, an ontological bias towards visibility is instituted. In the richly endowed world of geometry only shape-related properties obtain, while every other kind of property has been eradicated. Second, by smoothing, excising, and lifting operations, the characteristics of entities in the 'empirical' world that make them unique and concrete, distinguishing them from all others, have been obliterated. Thus, geometrical entities are no longer 'individuals' but 'types'. Only in this way can the properties they exhibit have a 'universal' character defining that particular type of shape. We will refer to this bias as an ontological bias towards the obliteration of the concrete. Through these biases—towards transparency, visibility, and the obliteration of the concrete—in our encounters with things we may be inclined to take them to be fully knowable and primarily visible, and may tend to pass over what distinguishes them uniquely from any other. By their own characteristics and by the kinds of entities they give rise to, ontological operations favor the emergence of certain biases in our encounters with things.

So far we have presented an interpretation of the main moments that Husserl identifies in relation to the basic notions underlying ancient geometry. Husserl then proceeds to examine the moments underlying the development of modern science and modern geometry, with Galileo and Descartes.

The Galilean Moment

Although our primary interest resides in understanding ontological operations and biases inherent in Cartesian spaces, a consideration of what we are broadly referring to here as the 'Galilean moment' 10 is important, in particular, for the 'application' of geometry—and more generally, of mathematics—to the 'study of nature' that took place in it. Certain kinds of operations that originate in this application are also relevant to 'visualizations', as will be shown later.

To the question of 'What is given to Galileo that may have contributed to a new notion of natural science?' Husserl replies: Geometry and mathematics, the 'art of measuring', and 'applied' geometry and mathematics (p. 28). If, as indicated earlier, geometry emerged from the art of measuring, conversely, once geometry was established, it was itself applied to measuring practices, helping to perfect them. In spite of their ideality, insofar as ideal geometric entities are about shapes and visible forms, they retain a strong reference to the empirical world. For this reason, they have been applied in the context of practical measuring activities. In technical activities such as engineering design, the application of geometric notions is, for the same reason, very 'natural' to make. Although these measuring practices are very diverse, covering a variety of activities, they are not yet directed to the 'study' of nature.

It is at the Galilean moment that geometry 11 is decisively applied to the understanding of nature. In the context of natural philosophy, ideal shapes are made to descend into the world of bodily things and empirical phenomena. While in the movement of ascent toward ideal shapes the body of things is set aside by means of ontological excising and lifting operations, in the movement of descent or application, there is again a setting aside of their bodies, this time accompanied by an additional operation where the now bodiless things or phenomena are 'assigned' an ideal shape. Thus, in the study of the movement of planets or of balls rolling on inclined planes, the empirical trajectory of the movements is set aside and assigned an elliptical or linear shape, while the 'objects', such as the sun and the planets, are represented by geometric points, circles, or spheres. 12 We should note that the application of geometry to celestial objects is very 'natural' to make, given that the large distances involved and the limitations of human visual capabilities produce a 'smoothing' of such objects.

What is the character of this setting aside of the body and the accompanying assignment of an ideal shape? From an ontological perspective, in this movement of descent of idealities towards the empirical world we recognize the emergence of yet another kind of world and its corresponding entities—the so-called 'physical world' and 'physical entities'—in which, by means of ontological reconstitution operations , the empirical world is reconstituted on a new basis. Among these operations we find what we will call ontological shape-regularizing operations , corresponding to the setting aside and assignment operations mentioned above. Through these regularizing operations, objects and phenomena are reconstituted with respect to their 'shapes' in terms of ideal geometric shapes.

We are not suggesting at this point that through these operations the actual trajectories of the planets, for whoever applies the operations, have been regularized by assuming them to be ellipses. Rather, we indicate that a new kind of world is being created, the 'physical world', in which these trajectories are regularized. It could be argued that this operation produces only an 'interpretation' of the empirical world, but no new world is created. But what is such an interpretation if not something constituted by new kinds of entities created by such operations?

To better understand what takes place in the context of these reconstitution operations we need to attend carefully to additional considerations Husserl makes. Although the primary ingredients of geometry are the ideal geometric objects, equally important are the properties characterizing these objects and the myriad relationships that obtain in complex configurations of objects, as well as the apodictic method of determining them. In the reconstitution of the empirical world by geometrical means, how is this complex geometric machinery brought into play?

As Husserl indicates, already in the world of everyday practices the notion that different kinds of phenomena are related to each other in certain ways is familiar to us. Everyday things and phenomena not only appear in close proximity to other things and phenomena, but they affect each other in typical ways. In consequence, the world appears as permeated by a connectedness and relatedness by which things and phenomena depend on each other. In many cases even a clear understanding of the existence of causal relationships between phenomena develops (pp. 30-31).

What this suggests is that in the movement of descent or application of geometry, just as geometric shapes are assigned to empirical objects, the properties and relationships that obtain in the geometric world are assigned to or put into correspondence with—in an abstract sense—dependencies that obtain in the empirical world. But for this to be achieved, the notion of causality needs to be understood in a way that is amenable to the establishment of that correspondence, that is, it needs to be regularized. This regularization implies a clarification of the notion of causality, as distinct from vague notions of dependency. 13 In the reconstituted world that emerges in this way mathematically measurable causal relationships among phenomena obtain.

Reinterpreting these notions from an ontological perspective, we realize that among the ontological reconstitution operations introduced above we need to include what we will refer to as ontological link-regularizing operations by which phenomena are understood as being essentially linked to each other by means of regularized causal relationships. It is on the basis of these operations and the application of geometry that these linkages between phenomena are regarded as causes, and understood as essentially measurable. Are we justified in referring to these operations as ontological? In what way do they contribute to the emergence of 'entities' with unique ontological traits? By transforming the way in which the relatedness and connectedness of phenomena appear to us they play a major part in the emergence of what we have called the 'physical world'. We will return to this point below.

With respect to the logico-deductive method of geometry used to establish geometric truths, at the moment of application of geometry to the empirical world that method will be transformed into the more complex method of natural science. This last method involves not only logico-deductive practices but many others, such as the development of mathematical 'measures', models, and theories, of specialized measuring instruments, and of experimental methods, thus incorporating in a transformed way elements of measurement practices.

In the application of ontological shape-regularizing operations described above, objects and phenomena are attributed geometric shapes. But shapes are only one among a multiplicity of sensible qualities obtaining in phenomena, some of which are somewhat related to shape as in the case of colors, while others appear to be totally unrelated to it, as in the case of sound, taste, and heat. How could geometry be applied to measure these other kinds of sensible qualities? We cannot do justice here to the complexity of Husserl's analysis of what he called the 'indirect mathematization of the plena' (p. 34) and will pursue it only as it is related to the issue of visualization.

Because of what we referred to earlier as ontological link-regularizing operations, in the reconstituted 'physical world' it is taken for granted that there are causal relationships between different kinds of phenomena. In particular, this assumption leads to the question as to whether for sensible qualities other than shape, which Husserl called 'specific sense-qualities', it would be possible to establish causal dependencies with shape-related qualities. If this were the case, the mathematization of specific sense-qualities could be done, first, by measuring their causal dependency with shape-related qualities, and, second, by establishing and measuring dependencies between these shaperelated qualities and other shape-related qualities of interest. After acknowledging Galileo's affirmative answer to this question, 14 Husserl invites us to appreciate in its full force "the strangeness of [Galileo's] basic conception in the situation of his time."

In the reconstitution of the empirical world as a 'physical world' by means of the application of geometry, requiring that all sense-specific qualities be regarded as dependent on shape-related qualities, we identify additional ontological operations, namely ontological shape-reductive operations . These operations are based on prior ontological link-regularizing operations but involve an additional step through which all sense qualities are linked with shape-related qualities, in such a way that some kind of primacy of the latter over the former is, implicitly, established. Such operations may invite what we will call an ontological bias towards visibility in our understanding of the 'physical world'. In a sense, this bias extends to the physical world a previously identified, similar bias that obtains in the ideal geometric world.

Before concluding this examination of reconstitution operations, we need to consider an additional operation not explicitly examined by Husserl. Any concrete situation in the empirical world, even one that appears quite simple, involves a multiplicity of relationships. Moving 'objects' are restrained in their movement by the surrounding air or by the unevenness of the plane over which they slide; their movement is disrupted or impeded as they collide with other objects. Even after shape- and link-regularizing operations, the application of geometry to concrete situations turns out to be difficult to realize. It becomes crucial to drastically simplify the situations.

How can this be achieved? All the 'impediments' to movement are to be eradicated. Any remaining object, which is not absolutely necessary from the particular point of view from which the situation is being considered, must be discarded. Undesirable characteristics of the media surrounding the objects are to be straightened out. 15 In brief, the situation as a whole must be made smooth. We will refer to these operations as ontological situation-smoothing operations . While they complement the other reconstitution operations, in a sense these smoothing operations are more powerful than them because they transform a situation as a whole. In the limit, these operations create the 'vacuum'—the smoothest of media—that is then populated by 'physical' entities, from which all that hinders the application of geometry has been removed. Such a physical world is still different from a geometric world because—even if faintly—it refers to empirical situations from which it originated.

Universalization at the Galilean Moment

What is involved in this 'strange' step by which all sense-specific qualities are taken to be relatable to and measurable in terms of shapes? This question leads us into the final step taken in Husserl's analysis of the Galilean moment: the notions of 'universalization' and 'infinitization'. Husserl is struck by a decisive movement toward universalization he senses at the dawn of modern science and modern mathematics, a multi-pronged movement that propels natural science, geometry, and mathematics as a whole beyond the boundaries known to the ancients. In the case of modern mathematics, Husserl finds an "immense change of meaning" with respect to the geometry and mathematics inherited from the, primarily Greek, tradition. As Husserl indicates, "universal tasks were set, primarily for mathematics (as geometry and as formal-abstract theory of numbers and magnitudes)—tasks of a style which was new in principle, unknown to the ancients" (p. 21, Husserl's emphasis).

At the Galilean moment, the notion of universalization arises in at least two different but related senses. First, in the sense of a universal causal regulation obtaining among diverse phenomena. Second, in terms of "what was taken for granted by Galileo, i.e., the universal applicability of pure mathematics" (p. 38, our emphasis). With respect to the first, we already mentioned the link-regularizing operations by which phenomena are regarded as dependent on other phenomena by means of particular dependency relations understood as causal relationships. But Husserl suggests that more is required regarding causality in the quest to achieve "a scientific knowledge of the world," namely, that the world be understood, in advance, as an infinitude of causalities (p. 32). While link-regularizing operations regularize dependency relationships in terms of causality, the notion of a world conceived in advance as an infinitude of causalities has a far broader scope and announces a transformation at the level of the empirical world as a whole, as will be discussed below.

Returning now to the second sense in which the notion of universalization arises, we need to consider the universal applicability of mathematics, taken for granted by Galileo. We can approach this particular sense of universalization by considering the ontological bias toward full knowability identified in the context of ancient geometry. The ideal geometric world is populated by entities that, because of their complete regularity and visibility are, in principle, fully knowable, entirely open to us, transparent. Consistent with this characteristic of geometric objects, the logico-deductive practices employed in finding new geometric truths already had a universal character, in the sense that they could be applied to consider any geometric relationship whatsoever.

But while with the 'ancients' transparency is contained within the boundaries of the ideal geometric world, at the Galilean moment and through a series of ontological operations previously identified, this ontological bias is extended to the whole of the empirical world. It is not just particular kinds of phenomena or particular regions of beings to which geometry is applied. If at first the old distinction between sublunar—including terrestrial—and celestial phenomena may have kept the impulse toward the application of mathematics within the boundaries of the 'world' known to humans, astronomical observations of the planets by means of the telescope—most notably of the rings of Saturn and of the moons of Jupiter—brought these boundaries down. 16

Universalization of causality and universal application of mathematics are the two faces of a sweeping movement out of which the 'physical world' was born. Nothing can escape this movement. Hence the 'strange' emergence of shape-reductive operations mentioned earlier, by which non-shape related phenomena are linked to and measured in terms of shape-related phenomena. Universalization, then, can be understood as a movement by which all boundaries demarcating what is possible to know from what is beyond human possibilities are, in principle, erased, thus opening the whole of the empirical world to human perusal and understanding. This does not mean, though, that humans are understood as having in practice the actual capacity to understand everything. 17

Because of the transformative power of universalization in the two senses indicated above, we regard them as ontological operations, but with a far greater scope than those mentioned earlier, because they operate on the empirical world as a whole. Thus, we introduce the ontological, world-scope operation of causalization that refers to the universalization of causality. Through this operation, not only are causal relationships supposed to obtain between phenomena of all kinds, but the new kind of world thus constituted, the 'physical world', is regarded as nothing else but an infinite network of causal relationships. While these relationships become 'entities' of the first rank, empirical phenomena no longer occupy center stage.

Similarly, we introduce the ontological, world-scope metricizing operation, which refers to the universal application of geo-metry and mathematics to measure the empirical world as a whole. More properly, metricizing is a fundamental 'inclination' towards declaring the empirical world to be what is measurable of it, where measuring is understood as an attempt at reconstituting the empirical world as a whole in terms of the ideal, mathematical world. Because this ideal world has emerged itself as a particular distillation of the empirical world—via ontological smoothing, excising, and lifting operations—metricizing comes to be the inclination to reconstitute the empirical world on an ontologically purified basis, where full knowability obtains. As indicated, because of their scope the two operations of causalization and metricizing need to be clearly distinguished from the more focused operations previously identified.

Regarding infinitization, a notion closely related to universalization, Husserl identifies what may be called a meta-operation—in the sense that it affects all operations and practices previously mentioned—namely, the infinite perfectibility of the application of mathematics in the creation of the 'physical world' and of the corresponding scientific measuring practices. All operations and practices are understood as having in them the possibility of being carried out more exactly, more pervasively, more perfectly, thus becoming stronger. Even more, the specialized operations by which perfectibility is carried out are themselves perfectible, thus creating what may be regarded as second order, 'accelerated', enhancing effects.

We now realize that the ontological reconstitution operations and biases identified earlier, such as shape- and link-regularizing, shape-reductive, and situation-smoothing operations, owe much of their ontological character precisely to the movements of universalization and perfectibility just described. Each operation taken by itself does not seem to amount to much, but when they are applied coordinately, in a mutually-reinforcing manner, and when they are regarded as having a universal scope and thus being applicable to the whole of the empirical world, they acquire an unsuspected transformative power, the power to inaugurate a world of a new kind. Because of their universal applicability they not only engender new kinds of beings but, because all beings in the empirical world can be operated upon in this way, they contribute to transform that empirical world as a whole, thus leading to the creation of the physical world.

Let us conclude by stating in a summary way the relationships between the various ontological operations identified as characterizing the Galilean moment. Link-regularizing operations, by which dependencies between phenomena are understood in terms of measurable causal relationships, are universalized by a world-scope causalization operation positing an infinite network of causalities and attributing to them first rank of existence. Ontological reconstitution operations—among which we included ontological shape- and link-regularizing, shape-reductive, and situation-smoothing operations—which reconstitute empirical phenomena on a new basis, are made possible by a world-scope metricizing operation. Because of their world scope, causalization and metricizing operations bring the physical world into existence and grant the other operations their transformative power, which is also strengthened by their infinite perfectibility.

We will reserve for a later section a discussion of Husserl's characterization of the Galilean moment as a whole, in which he tries to understand how the different elements he identified are articulated.

The Cartesian Moment

So far we have considered Husserl's characterization of the 'origin' of ancient geometry and its application to the empirical world, as it took place at the Galilean moment. We now consider the 'Cartesian moment', which gave rise to what we referred to earlier as Cartesian or dimensional spaces, moment which Husserl characterizes as the "arithmetization of geometry" (p. 44), and which in the context of the historiography of mathematics is understood as the 'application of algebraic methods to geometry leading to analytical geometry'. With these considerations we will conclude the analysis of Cartesian spaces in terms of the ontological operations that gave rise to them, and will be ready to return to our consideration of visualization machines.

What is the arithmetization of geometry? How can this development be characterized ontologically? In ancient geometry, the logico-deductive method made it possible to acquire knowledge of geometric shapes, in particular of their properties and relationships. Regarding the circle, for instance, it was possible to know its properties, and it was even possible to establish the relationship between the radius and its corresponding circumference, but the circular shape itself could not be mathematically specified. Although that shape could be described in a constructive manner, say, as the shape generated by the radius while it rotates around a fixed point, still it could not be fully measured, if by measuring we understand putting something into correspondence with 'numbers'. Arithmetization of geometry refers precisely to the notion of measuring shapes in that sense. Because Husserl is interested in providing an 'essential' characterization of the Cartesian moment, he deliberately uses arithmetization rather than 'algebraization', which would be more appropriate from a historiographical point of view, but would obscure the essential connection with numbers.

Summarily, algebra, as an extension of arithmetic, is a particular kind of measuring practice that deals with known and unknown quantities expressed in terms of numbers. In this practice, measurements are represented in terms of equations that establish relationships between what is unknown and what is known to us. As such, an equation is a particular way of extending our knowledge on the basis of prior knowledge. Because an algebraic equation consists of quantities combined by means of certain operations, where quantities represent nothing but a measurement of whatever they are about, an equation is a very abstract expression of knowledge about something. Attending to its 'essential' character, the operation of coming up with an equation we will call measuring by 'counting'. The result of such an operation, a specific algebraic equation, is a particular kind of measurement, which we will call a 'count', and which provides an account of something.

In the application of algebra to geometry we can identify several steps that can be characterized preliminarily as follows. First, there is a positing of 'axes' emanating from an origin, axes which are the simplest possible kind of lines, that is, straight lines, each of which acts as a measuring device. Second, there is a placing of geometric shapes in the context of these axes. Third, there is the introduction of equations as measuring 'tools'. Fourth, for a particular shape, there is the establishment of a relationship between the measurements (coordinates) of points belonging to the shape along both axes, relationship that is expressed in terms of an equation. Fifth, there is an analysis of the effect of variations in the algebraic equation on the geometric shapes that correspond to it.
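To make the fourth and fifth steps concrete, here is a minimal worked example of our own (not one Husserl gives): the circle, whose shape could not be 'counted' in ancient geometry, receives a shape-count once it is placed against a pair of axes. In the notation of analytical geometry,

    \[
    x^{2} + y^{2} = r^{2},
    \qquad
    (x - a)^{2} + (y - b)^{2} = r^{2},
    \]

the first being the count of a circle of radius r centered at the origin, the second of one centered at the point (a, b). Varying the constants a, b, and r displaces and rescales the corresponding shape, which is the kind of analysis referred to in the fifth step.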

Let us try to characterize this application and its steps from an ontological perspective. In the first step what takes place is the emergence of a new kind of 'entity', namely, a dimensional space. We can distinguish the following moments in this emergence: i) An emptying of the ideal geometric world by which geometric shapes are set aside, giving rise to ideal geometric 'space'. ii) An instauration of an 'origin' in this space, an origin which stands for us, humans, as measuring beings, signaling a sub-ordination of geometric space to us. Hence, geometric space now appears as 'emanating' from this origin, us, in all directions, infinitely. iii) An introduction of dimensional axes, again emanating from the origin, and embracing geometric space as a whole, in the sense that they uniquely identify each and every point of this infinite space. These three moments conjugate the act of creation of dimensional spaces, and we will refer to this step as a whole as constituting ontological dimensional spatialization operations.

In the second step, there is a populating of dimensional spaces with ideal geometric shapes by which the latter are referred to an origin with its corresponding axes. Next, we have the introduction in geometric space of a particular kind of 'measuring tool', namely, algebraic equations which, according to our prior characterization of them as particular kinds of measurement by 'counting', make possible shape-counting operations. In the fourth step there is the actual carrying out of shape counting operations for particular shapes, giving rise to shape-counts, namely, equations representing shapes, which we had called earlier 'formulas'. With these three steps, in which dimensional spaces are populated with shapes, shape-counting operations, and shape-counts, we have the emergence of Cartesian worlds, which we mentioned earlier.

Finally, in the fifth step, equations are no longer regarded as simply measurements of geometric shapes. Rather, they appear as 'generators' of shapes, such that given a shape-count we can generate its corresponding shape at any desired level of precision. Because of their ontological smoothness, geometric shapes can be generated typically by using relatively simple shape counts. What we have here is a powerful new kind of practice that contributes to the emergence of Cartesian worlds in their most proper sense. It is not only that Cartesian space emanates from the origin and that every point and every shape can be counted. More than that, using shape counts under the guise of algebraic equations, we can generate an infinite class of geometric shapes. In consequence, because they give rise to Cartesian worlds proper, we refer to these operations as ontological shape-generating operations . As we will consider later, such operations turn out to be crucial in the context of computer-supported visualization because, going beyond knowing as measuring, they give us the power of creation.
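As a rough illustration of a shape-generating operation—a sketch of our own, assuming the circle's shape-count given above, with all function and parameter names hypothetical—the following shows how an equation, once turned into a procedure, generates its shape at any desired level of precision:

    import math

    def generate_circle(radius, center=(0.0, 0.0), precision=64):
        # Generate points of the shape whose shape-count is
        # (x - a)^2 + (y - b)^2 = r^2, at a chosen level of precision.
        a, b = center
        points = []
        for k in range(precision):
            angle = 2.0 * math.pi * k / precision  # sample the count uniformly
            points.append((a + radius * math.cos(angle),
                           b + radius * math.sin(angle)))
        return points

    # The 'same' ideal shape, enacted coarsely and finely.
    coarse = generate_circle(1.0, precision=8)
    fine = generate_circle(1.0, precision=1024)

Raising the precision argument generates the same ideal shape ever more finely, which is the sense in which the shape-count acts as a generator rather than merely as a measurement.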

Overall, we interpret Husserl's notion of the arithmetization of geometry as the creation of Cartesian, dimensional worlds by means of ontological dimensional spatialization, shape-counting, and ontological shape-generating operations. This development is related to the ontological metricizing operations identified at the Galilean moment, except that instead of being directed at the empirical world as a whole they are now directed at geometry itself, that is, at metricizing geometry. In turn, metricized geometry can be applied to the empirical world, thus intensifying its metricizing.

Critical Appraisal of Husserl's Analysis

We now examine critically both Husserl's analysis and our reinterpretation of it. In the sections of the Crisis we have examined, Husserl attempted to uncover 'essential moments' leading to modern science and modern geometry, going in this way beyond historiography, that is, beyond the way in which a 'history of science' would have typically approached the subject. In addition, contrary to what could have been expected given his phenomenological perspective, in these sections rather than starting from an analysis of intentionality as it unfolds in geometry and natural science, Husserl focuses on practices, more specifically, on what certain practices take for granted and on their transformation. A guiding assumption orienting his work is that, ultimately, these practices must arise from everyday life practices and from the world in which they take place, the 'lifeworld'.

By focusing on essential moments, Husserl remains at the appropriate level of analysis without losing himself in an overly detailed examination of practices, making it possible for him to cover ample territory and to gain a sense of the overall development. At the same time, Husserl was well aware of the risks involved in this manner of proceeding (p. 57ff), in which most of the time he did not support his analysis with explicit references to concrete practices or to relevant works. For Husserl, the main purpose of the analysis was "to bring "original intuition" to the fore" by breaking through what has become too obvious to us: "we who all think we know so well what mathematics and natural science "are" and do." (p. 58).

But, after all, it appears that Husserl could not leave entirely behind the notion of intentionality that was central in his phenomenological approach, and he conceived of at least some of the moments he identified as acts of "pure thinking." As Husserl has it in the Origin, referring to the construction of geometric space, "this new sort of construction will be a product arising out of an idealizing, spiritual act, one of "pure" thinking, which has its materials in the designated general pregivens of this factual humanity and human surrounding world and creates "ideal objects" out of them" (1970b, p. 377). This conception forces Husserl to deal at great length with the problem of "how does geometrical ideality (just like that of all sciences) proceed from its primary intrapersonal origin, where it is a structure within the conscious space of the first inventor's soul, to its ideal objectivity?" (p. 357).

It is this same general orientation that leads Husserl, for the most part, to fail to consider the role of technical instruments, tools in general, and their associated practices in the origin of geometry and natural science. Most strikingly, Husserl does not take into account the possible significance of the use of the telescope by Galileo for the birth of Galilean science. As Don Ihde has it, "Husserl's Galileo is a Galileo without the telescope." 18 In the context of our reinterpretation of Husserl, we need to consider additional practices involving the use of instruments as crucially important for the emergence of what we called the 'physical world'. In the case of the telescope, although far from allowing us to land on the surface of the moon, it allows us to examine it as if it were far closer than it indeed is, transforming in a significant way our relationship with what is beyond the confines of the Earth.

But the telescope is only the most conspicuous example of a variety of artifacts which were available at Galileo's time, and that played an important role in science. A simple but very important device used in the study of the motion of falling bodies is the 'inclined plane', which presents the phenomena of interest in 'slow motion', thus effectively slowing 'physical' time down, making the phenomena accessible to us. While some artifacts, such as the telescope, have as primary purpose the augmentation of human capabilities, others, like the inclined plane, constitute an 'intervention in nature', and are related to the important notion of experiment in science. This is another, crucial aspect of natural science that, because of his general orientation, Husserl could not see. Because in this work we are primarily concerned with visualization rather than with science, and given that experimentation does not play a crucial role in a visualization situation, we will not approach the notion of experiment from the ontological perspective, and will leave it unconsidered.

While examining the markedly different ways in which the phenomena of falling bodies and swinging pendulums appear when viewed from Galilean and Aristotelian perspectives, Thomas Kuhn, rightly or wrongly, attributes them to the different 'paradigms' on which they are based. A paradigm is a complex notion, but in one sense that Kuhn came to prefer it is a 'shared example' that gives practitioners a way of seeing situations, in particular, of seeing phenomena under study. 19 In Husserl's work there is no clear equivalent to this notion, possibly because in his suggestions regarding the Galilean moment Husserl was working at a very basic level, trying to identify the essential ingredients of the Galilean perspective. On the other hand, in his discussions of what was taken for granted by Galileo, Husserl refers to the pre-givens in Galileo's time, which could be understood as constituting a shared background for the community or communities to which Galileo belonged. In this sense, there is a possible connection with a second notion of paradigm, less preferred by Kuhn, as the shared commitments of a community of scientists. 20

Insofar as what Husserl uncovered at the Galilean and Cartesian moments constitutes a significant transformation in the notion of science and in the conception of 'nature', and considering the 'universal' character of some of these transformations, Husserl is identifying an epochal transformation of the highest significance in the history of the West. In this sense there is an important connection with the notion of the 'history of being' elaborated by Heidegger within a few years after the publication of the Crisis. Heidegger's notion suggests that there are essential transformations in the way the world appears to human communities, transformations that determine the character of historical ages. In the Crisis, Husserl not only seems to anticipate this insight, but in a sense goes beyond it because he attempts to uncover the way a particular transformation 'originated'.

Looking retrospectively at his own Galilean Studies , Alexandre Koyré characterized "the revolution of the seventeenth century" as bringing about two fundamental changes. First, the "infinitization of the universe," which in that work he had expressed in terms of "the replacement of the idea or concept of the Cosmos—a closed whole with a hierarchical order—by that of the Universe—an open ensemble interconnected by the unity of its laws." Second, the geometrization of space, that is, the replacement of the Aristotelian conception of space by that of Euclidean geometry. 21 Although Husserl's notion of the universalization of causality bears a close relationship to Koyré's infinitization, the latter goes beyond it and points to a breaking of limits in such a way that now nothing escapes human perusal. Husserl's 'mathematization of nature' is more comprehensive than Koyré's geometrization of space. 22

Working from the perspective of the philosophy of science, Patrick Heelan (1987) has assessed critically Husserl's approach to natural science. Heelan suggests that Husserl's notion of modern science, referred to as 'Galilean science', is strongly influenced by Husserl's familiarity with 'Göttingen science', that is, a view of science that understands it as 'theory making'. Husserl's Galilean science corresponds to 'the philosophical core of Göttingen science', whose most prominent figures were influential mathematicians such as Hilbert, Klein, and Minkowski. According to Heelan, from Hilbert's point of view, "physics needs the help of mathematics to construct the ideal physics, and the ideal physics has the form of theory, and all theory ideally has the form of an axiomatic system." (p. 371). The particular aims of Göttingen science may have led Husserl to the extreme view that Galilean science, ultimately, attempted the 'mathematization of nature', a notion that we will examine in a later section.

Among the many possible ways to approach Galileo's works, there is the epistemological perspective. Joseph Pitt elaborates such an approach (Pitt 1992) in the context of the philosophy of science. In contrast to Husserl, Pitt works primarily from Galileo's text, and attempts to clarify the methodological principles underlying Galilean science, which Pitt regards as Galileo's most important contribution. Although emerging from different perspectives, what from this particular epistemological approach appear as principles oriented to secure a 'pragmatic' path toward the acquisition of knowledge about nature, from the ontological reinterpretation appear as operations leading to the emergence of a new kind of world, the physical world. The principles distilled by Pitt—of quantification, abstraction, universality, and evidential homogeneity—bear complex relationships to the ontological operations identified above.

In the proposed ontological reinterpretation of Husserl's analysis we introduced the notion of ontological operations and biases, identified specific instances of them, and established certain kinds of relationships among them. Now, what is it possible to retain from this analysis, and what must be placed in question? By ontological operations we mean practices or specific actions within practices, which either by their overall orientation or because of their characteristics, invite the emergence of new kinds of entities as well as new kinds of practices with novel ontological traits. In a summary way, a practice is an unfolding of human comportments carried out with the help of other humans, things, and tools, in a particular situation. While Husserl emphasizes intentional phenomena—for instance, the 'thinking' of the original geometers—a practice involves many different kinds of comportments, among them 'thinking', which may precede, accompany, or succeed other comportments.

Practices interact in complex ways with other practices, in particular, by being part of them, and they are grounded in specific 'ways of being human' characteristic of particular communities. Going even further, and considering Heidegger's notion of the history of being, given that a practice takes place in the context of a particular historical age, as such it responds to what may be called the predominant 'mode of revealing or unconcealing' characteristic of that age. 23 Although most practices keep this kind of correspondence with the mode of revealing, there are practices that either survive from prior ages—thus keeping alive in their own ways older modes of revealing—or that go against the grain and contain seeds of future ages.

What this tells us is that practices, in addition to having a complex structure of their own, maintain complex relationships with other practices, with particular ways of being human as embodied in particular communities, and with modes of revealing. Thus, attributing an ontological import to specific kinds of practices or actions within practices, as we have done, entails an oversimplification. Rather, we should think of these practices as the most visible embodiments of transformations that are taking place in more or less pervasive ways in the context of human communities, transformations that are difficult to disentangle from each other.

Consider what we called 'ontological smoothing operations'. These are actions that take place in the context of practices by which shapes of things are made increasingly smooth, from which the notion of limit-shape arises. How do these operations themselves emerge? They could arise simply as intensifications of similar, pre-existing operations or they could be the echo, in this particular kind of practice, of trends emerging in other practices. Or consider two operations we identified in the context of the Galilean moment, namely, ontological link-regularizing operations and the ontological, world-scope causalization operation. In the first kind of operation, dependency relationships between phenomena are regularized in terms of measurable causal relations; in the second, the notion of causality is universalized. How should we understand the relationships between these two kinds of operations? Do link-regularizing operations emerge because of causalization, or vice versa? In the context of the present discussion this question doesn't make sense. We are in the presence of a very complex phenomenon of which these two operations are the most visible embodiments. At most, what makes sense is to say that because of the different scopes of these operations, of which the second has a 'world scope', the transformative power of the first owes much to the scope of the second.

Regarding the specific operations and biases identified in the previous analysis, the following needs to be said. Husserl's reconstruction of the essential moments of the 'origin' of geometry takes as its point of departure Euclidean geometry and, assuming that the basic geometric notions must have emerged from what was available in the 'lifeworld', it attempts to fill this immense gap by identifying certain practices that could have led to such notions. It is clear that, at best, what can be obtained in this way is one among many possible reconstructions. Because we also find the emergence of 'ideal' objects in language and in writing, which are far more pervasive achievements of human communities than geometry, it is conceivable that the emergence of geometric idealities could have followed an entirely different route than that suggested by Husserl in the Crisis. Some of these issues are explored in Husserl's Origin.

In his analysis of the emergence of Galilean science and modern geometry, Husserl was relying on his own mathematical background, on general knowledge of science, and on historical sources, such as Galileo's works. Although the emergence of Galilean science is far closer to us than that of ancient geometry, the intricacy and complexity of this development, and the fact that we are still caught under its spell, forces us again to be cautious. We already noted Husserl's failure to consider the possible relevance of technical artifacts in the emergence of natural science. More than that, it is possible that techniques, tools, and associated practices by the time of Galileo had already achieved a level of pervasiveness such that the universal character of causalization and metricizing operations was latent in them, insofar as certain forms of causality and measurement are intrinsic to such techniques and practices.

Because Husserl attributed the 'crisis of the European sciences' to a large extent to a 'loss of meaning', for Husserl the main purpose of the reconstruction was to recover the 'forgotten meaning' of geometry and natural science, that is, to regain the lost 'experiences' that were at the source of their emergence from the lifeworld. And he attempted this recovery by building explicit bridges between lifeworld practices and experiences, and those characterizing geometry and natural science. Any plausible reconstruction along these lines would effectively give us a 'sense' of what geometry and natural science are in an essential way. As a consequence, whether the specific transformations of practices identified correspond to what a precise historiographic account would produce, is not decisive for Husserl's analysis.

On the other hand, our reinterpretation of this analysis has a different orientation and is guided by a particular notion of practice. In this notion, in general, it is not us, particular human beings, that carry out practices. Rather, it is practices that carry us out. Although it is the case that we imprint upon them our own style, more important, practices imprint upon us their own seal. If this were the case, transformations of practices, in particular transformations with an ontological import, carry with them the possibility of transforming us in significant ways. In consequence, the felicity of our analysis depends to an extent on the appropriateness of the transformations that were identified.

To conclude, we can say—with the previously indicated caveats—that the notions of ontological operations and biases, as visible markers of more or less pervasive transformations with ontological import that are taking place in human communities, make sense. On the other hand, particular operations and biases we identified may be more problematic, and must be regarded only as plausible suggestions.

Ontological Operations Embedded in Visualization Machines

After examining the notion of Cartesian world and, previously, the application of geometry to the study of nature, we now return to consider the relationships between visualization machines and dimensional spaces. A visualization machine, as indicated earlier, is constituted by a computer, screen, windows displayed on the screen, and a variety of programs. Among these programs there are some which contain algorithmic representations of equations describing geometric shapes, while others, visualization programs, are in charge of displaying the geometric shapes on windows. Finally, these programs can be executed by a computer, thus effecting the display of Cartesian spaces populated by geometric shapes.

A visualization, on the other hand, is a presentation or exhibition of something by means of a visualization machine. This requires the application of dimensional spaces to a particular phenomenon under consideration—say, the structure of molecules—application that can be performed in a manner similar to the application of geometry we examined at the Galilean moment. Measures of the phenomenon or of causal relationships related to it are developed in terms of formulas, with a collection of formulas constituting a model of the given phenomenon. A model is specifically tailored to the case at hand by incorporating in it 'data' obtained through appropriate measurements. Then, geometric shapes are associated with elements of the model, thus giving rise to a visualization of it. 24
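A toy sketch, under our own simplifying assumptions, of the chain just described—model, data, and associated geometric shapes; the 'molecule', its coordinates, and the radius assignment are invented solely for illustration:

    # A hypothetical m-molecule: measured data (atom labels and positions,
    # in arbitrary units) together with a 'formula' assigning a display
    # radius to each kind of atom.
    m_molecule = {
        "atoms": [("C", (0.0, 0.0)), ("O", (1.2, 0.0)), ("H", (-0.9, 0.8))],
        "radius": {"C": 0.7, "O": 0.6, "H": 0.3},
    }

    def to_shapes(model):
        # Associate a geometric shape (here, a circle given by center and
        # radius) with each element of the model.
        shapes = []
        for name, (x, y) in model["atoms"]:
            shapes.append({"center": (x, y), "radius": model["radius"][name]})
        return shapes

    # These shapes are what a visualization program would then display.
    shapes = to_shapes(m_molecule)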

But, what is a visualization machine, when considered from an ontological perspective? It is a new kind of entity, in particular, a new kind of machine that emerged in the second half of the twentieth century, as a result of certain kinds of practices that we will now attempt to ascertain. As a whole, these practices can be characterized with the obvious rubric of 'computerization of geometry' which, coming after the 'arithmetization of geometry' and made possible by it, gives rise to visualization machines. Focusing on 'essential' moments and ignoring historiographical considerations, visualization machines emerged by certain kinds of practices involving Cartesian worlds. In ancient geometry, the ideal geometric world emerged from smoothing, excising, and lifting operations. At the Cartesian moment, dimensional worlds emerged from dimensional spatialization, shape-counting, and shape-generating operations. With Galilean science, geometry was applied in the context of natural philosophy by means of reconstitution practices including shape- and link-regularizing, shape-reductive, and situation-smoothing operations. We have here a movement of ascent toward idealities, followed by a counter-movement in which they are made to descend to the empirical world giving rise to the so-called physical world.

At the 'computerization moment', dimensional worlds undergo yet another movement of descent that differs from, but is related to, that which took place with geometry at the Galilean moment. To some extent, this new descent can be understood as a retracing of the steps that led to the ideal geometrical world in the first place. By an overall movement of real-ization, those smoothing, excising, and lifting operations are in some sense reversed by descending, embodying, and roughing operations to give rise to what we will call 'interactive visual worlds'. Real-ization reverts idealization but does not lead us back to the empirical world. Because realization takes place primarily by 'embodying' operations we will also refer to it as embodization.

Two different but related sets of practices need to be considered in the movement of realization or embodization of Cartesian worlds, that is, practices that led to the emergence of visualization machines, and practices that apply visualization machines to particular situations. Because a detailed consideration of these practices would require a careful examination of the ontological characteristics of computer machines, which is beyond the scope of the present work, we will proceed by focusing only on 'essential' ingredients of this movement, emphasizing its visualization-related aspects. We start with a preliminary analysis to be followed by a second, ontologically oriented consideration.

What takes place when geometric complexes come alive on a computer screen? Consider an 'animated visualization' of a large molecule, say a protein, as it folds upon itself under the effects of its own characteristics. Let us call this entity a v-molecule. Bright-colored, multiple, initially exhibiting an elongated configuration, the v-molecule slowly curls upon itself until it comes to rest, and remains in this state. Throughout this silent maneuver within the confines of the screen, the v-molecule retains its identity and structure. If we so desire we can return the v-molecule to its initial configuration and repeat the entire maneuver. Or, we can introduce a change in its composition and observe whether it has any effect on its folding and final configuration. In the course of its existence, the v-molecule takes residence in the generally flat computer screen. Now, a computer screen is a particular kind of embodiment of a dimensional space, uniquely distinguished from any other kind of screen by its being 'attached' to a computer, thus giving rise to what we called visualization machines.

When the computer machine executes the programs corresponding to a visualization, a 'computational process' comes into being. Extraordinarily fast-paced, pulsating, invisible and silent, in the hard body of the machine, the process manifests itself in the v-molecule that emerges on the screen. It is the computational process that is at the heart of the embodization of Cartesian worlds. Visualization machines are fundamentally characterized by being programmable. As suggested above, at least two kinds of programs can be found in such machines, that is, programs that 'embody' formulas and visualization programs that 'display' them. Briefly, a program is an 'algorithmic' embodization of a practice or aspect of a practice, which can be 'executed' in a computer, thus giving rise to a computational process. More specifically, a program is constituted by a sequence of 'instructions' expressed in a particular programming 'language'. Computers are programmable machines because they can carry out these instructions. To be able to embody a practice in terms of a program, the practice must be understood in its minute details so that it can be thoroughly specified by the program.

Finally, then, a computational process is a process that takes place in a computer machine, and that comes about by the enactment by such a machine of a practice constituted by operations of various kinds, practice which has been embodied in a computer program. In the case of a visualization machine, in which computational processes manifest themselves on a computer screen with 'interactive capabilities', these processes give rise to 'interactive visual worlds' constituted by a variety of practices with which we can become engaged. Thus, we can set a v-molecule in motion and see how fittingly or unfittingly it attaches itself to an element in the wall of a v-cell. And we can 'browse' the space of possible v-molecules of a certain class to determine which ones could give a better fit, if necessary.

Let us now reconsider these preliminary observations from an ontological perspective, in an attempt to identify practices and operations that have contributed to the emergence of visualization machines. We are struck by a most powerful movement of embodization by which geometrical, mathematical, and 'physical objects'—the latter understood in the sense previously discussed in the Galilean moment—as well as related practices, possessing traits that characterize them as ideal objects and practices, are made to descend into the empirical world and 'brought to life' in visualization machines. First, we identify ontological digital embodization operations by which digital bodies—known as computer machines—and digital places, usually referred to as interactive computer screens, are born. A digital body, 'imprinted' upon a 'material substrate', is a body whose capabilities are 'triggered' numerically. 25 But now, what is embodied in digital bodies? Nothing less than 'agents' that can carry out practices, which in the case of visualization machines are geometric and mathematical practices. Digital places, on the other hand, are digital embodizations of dimensional or Cartesian spaces proper, such that whatever takes place in them is also activated numerically.

Second, and complementing these operations, we identify ontological practice-proceduralizing operations, by which practices, in particular of the geometric and mathematical variety, are embodied into procedures or programs. To be executed by a computer, a program must be expressed in terms of statements in a programming 'language', where a language contains a fixed set of kinds of statements. Whatever in the practice cannot be expressed in the language must either be twisted so that it fits the language, or extirpated if twisting is not sufficient. Consequently, we identify here two particular ontological operations needed for proceduralization, namely, practice-regularizing and practice-smoothing operations. It is through these operations that, for instance, equations describing shapes, which we called above 'shape counts' and which are important elements of geometrical practices, are embodied into programs.
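A minimal sketch, again under assumptions of our own, of what proceduralizing does to a practice: a continuous 'folding', described here by a rate formula, must be recast as a fixed sequence of discrete instructions, and whatever continuity the language cannot express is smoothed into a step size. All names and the relaxation formula are hypothetical:

    def proceduralize_folding(rate, state, step=0.01, steps=1000):
        # Recast a continuous folding 'practice', given as a rate formula,
        # into a fixed sequence of discrete instructions (explicit Euler steps).
        # The continuity of the original practice is smoothed into the step size.
        trajectory = [state]
        for _ in range(steps):
            state = state + step * rate(state)
            trajectory.append(state)
        return trajectory

    # A hypothetical rate formula: a 'fold angle' relaxing toward a resting value.
    relax = lambda angle: -(angle - 0.3)
    history = proceduralize_folding(relax, state=2.0)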

Third, we have ontological practice-enactment operations by which programs embodying practices are enacted by digital bodies giving rise to computational processes and manifested in digital places. Through practice-enactment operations, digital bodies become 'agents' carrying out practices in the context of more encompassing practices in which human 'agents' can also participate. It appears that all the embodization operations we have noted are ultimately geared towards making possible these practice-enactment operations, by which regularized practices are actually carried out.

We referred to all the operations mentioned above as ontological because, as a whole, they give rise to a new kind of machine, namely, visualization machines which, in turn, enact new kinds of worlds, that is, interactive visual worlds. Visualization machines, then, emerged by ontological embodization operations of Cartesian worlds. To some extent, this movement is made possible precisely because geometric and mathematical entities emerged themselves from the empirical world by certain kinds of operations, so that, in spite of their ideality, these entities and practices are pregnant with ingredients typical of the empirical.

Let us now return to the case of visualization of molecules. Given this talk of 'embodization' operations—in particular, of geometric shapes which are used to visualize molecules—the following questions arise: Where is the 'body' of a v-molecule? If it has one, what kind of body is it, and what are its characteristics? A v-molecule is the 'manifestation' on a digital place of a computational process taking place in a digital body. While it emerges and continues to be, the computational-digital process constitutes what we will call a d-molecule. In turn, the d-molecule comes about from the enactment of a program, which itself constitutes a p-molecule. Finally, the p-molecule results from proceduralizing 'models' of molecules, the m-molecules, which have emerged from scientific practices.

From the above, it appears that the v-molecule has a fragmented body that gives to it a peculiar mode of existence. While the d-molecule, as that which comes to be in a digital process, could be regarded as the 'living body' of the v-molecule, it itself has an ephemeral mode of existence: It comes to be from the enactment of a program, the p-molecule, and ceases to be at the end of the enactment. As a program, the p-molecule enjoys a more permanent mode of existence but, by itself, is 'non-emergent', to the point that the v-molecule could not come to be directly from the program. Now, the v-molecule, although visible and possessing interactive capabilities, is nothing but an echo of the computational process. How about the 'hard' body of the computer itself, on top of which the digital body has been imprinted: Could we find the body of the molecule in this? As a machine with certain 'universal' characteristics, which are particularized by the specific program being enacted, the computer lends its body to whatever was embodied in the program, in this case, m-molecules. But the way it does so is by supporting a computational process, which takes us back to d-molecules.

What is the import of this fragmentation ? What is made possible by it? Because of its peculiar fragmentation, an important characteristic of the body of a v-molecule is its 'replicability'. Programs can be replicated at will, without any loss of properties. Also, digital bodies and places are, to a very high degree, replicas of other digital bodies and places. As a consequence, these fragmented bodies have an existence which is to a large extent 'time' and place independent, giving rise to the replicable, place-independent enactment of practices. This tells us that even after the embodization operations to which they are subjected, the ideality of geometric, mathematical, and physical entities is in some sense preserved. Although the fragmented body is a real-ization, it retains much of the ideality of ideal objects.

But there is an additional characteristic of 'digital' embodization operations, which, although implicitly included in the above considerations, needs to be addressed more forcefully. In this context, 'digital' has two connotations that play upon each other. On one hand, as already noted digital refers to the 'numerical' aspect of the embodization, thus making explicit the ideal mathematical character of that which is embodied. On the other, digital also refers to the specific character of the way in which the embodization is carried out, namely, by means of 'digital electronics'. Briefly, digital bodies and digital places are constituted by what to us, today, are some of the nimblest and lightest phenomena we can encounter, and to which we will refer as a whole simply as 'light'. In consequence, we will say that an 'essential' characteristic of digital embodizations is their lightness , in all the proximal senses in which this word can vibrate here, including the fact that digital bodies are swiftly 'propelled' by electrons, that digital places are expressions of light, and that visualizations are not subject to gravity.

In brief, visualization machines and the interactive visual worlds they bring to life are born from ontological embodization operations of Cartesian and physical worlds. Through specific operations such as digital embodization, practice proceduralization—which includes practice-regularizing and practice-smoothing operations—and practice enactment, interactive visual worlds come to life. Because much of the character of Cartesian and physical worlds is preserved through these operations, interactive visual worlds inherit and have embedded in them ontological operations that gave birth to those kinds of worlds in the first place. With interactive visual worlds a full movement of ascent and descent toward and from idealities has been completed, constituting as a whole a peculiar movement of migration towards new kinds of worlds.

Ontological Traits of Visualization Elements

After all the operations previously identified, concluding with a movement of descent towards 'visualized' idealities, what do we end up with? What are the ontological traits of that which comes about from this proliferation of practices of great transformative power, accumulating one upon the other? To address these questions let us examine the elements out of which visualizations are made.

Computer-based visualizations are 'painted' on computer windows, that is, selected pixels in the screen region in which a window is located are appropriately colored to display the visualization. Let us introduce the notion of a 'surf', which we write as a contraction of the word 'surface' to suggest how it differs from the surface of an empirical object. While pixels are individual points on a computer screen, a surf is a collection of pixels on a computer window whose purpose is to real-ize a geometrical surface representing something. We can say that surfs are the building blocks from which visualizations are constructed. In the visualization of molecules, a surf can visualize an atom or a relationship between atoms, while a collection of surfs constitutes a v-molecule, which is an embodization of an m-molecule, that is, a molecule defined by a physico-geometrical model. 26
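
As a minimal sketch of this characterization, and assuming nothing beyond it, the following Python fragment gathers the pixels of a window that together real-ize the disc by which an atom is commonly visualized; the names are illustrative and do not belong to any actual graphics library.

    def atom_surf(cx, cy, radius):
        """Collect the pixel coordinates that together constitute one surf:
        the disc by which an atom might be visualized on a window."""
        pixels = set()
        for x in range(cx - radius, cx + radius + 1):
            for y in range(cy - radius, cy + radius + 1):
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    pixels.add((x, y))
        return pixels

    # Individually the pixels are mere points; taken together they real-ize
    # a geometric surface representing something (here, an atom).
    surf = atom_surf(cx=10, cy=10, radius=4)
    print(len(surf), "pixels make up this surf")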

Let us now attempt to understand what a surf is, beyond this preliminary characterization. As a digital real-ization of a geometric or physical object, what kind of real-ity does a surf have? We could at first consider a surf to be a 'body' of some kind, but if, counting on it, we tried to grab or just to touch a surf, we would either end up empty-handed or touching the screen. Perhaps we could say that if surfs don't have bodies, at least they have 'skins', but not even this would be appropriate. How, then, can something that doesn't have some kind of body be visible?

In this regard, as well as others, surfs are ontological cousins of 'shadows', those surprising 'entities' that light engenders as it plays upon things. Like surfs, shadows 'happen' on a surface in a place, can move or be displaced, and are untouchable. In turn, like shadows, surfs are also the play of light, although this time not upon things but from screen and pixels. A shadow involves at least three elements, i.e., the projecting light, the thing whose shadow the light projects or creates, and the thing upon which the shadow is projected. In the case of surfs, the emitted light that brings pixels into existence corresponds to the projecting light, while the screen or window corresponds to the thing upon which the shadow is projected; but what is the 'thing' whose surf is projected upon the screen? As indicated earlier when examining the fragmented body of a v-molecule, there are several elements involved in it, such as a computational process and a program, but, ultimately, it is the model whose 'shadow' or surf is projected upon the screen.

With respect to their mode of persisting, surfs also share the persistence of shadows. Just as that proximally-remote power, the Sun, recreates every day the shadows of trees upon the walls, a proceduralized practice, when enacted—at our command—by a digital body, can re-create surfs upon screens. Unlike 'material things', whose mode of persisting is continuous in that they cannot cease to be and then 'come back to life again', surfs have an 'intermittent' mode of duration. Even something as delicate as a particular instantiation of a sign—say, any of the letters on a written page—has what might be called a 'material existence': it is a 'physical' mark on the page. As the page ages, the mark may become lighter and lighter in color, or the whole document to which the page belongs may be completely destroyed, and with it the instantiation of the sign. But from the moment the mark was made to the moment of destruction, there was a continuity to it that the surf, for its part, lacks.

This intermittence of surfs is intimately related to the replicability of the fragmented body of visualizations examined above, involving the replicability of programs and the character of replica of digital bodies and places, which in turn leads to the replicability of enactments. Taken together, intermittence and replicability reflect a fundamental trait of surfs: their extraordinary ontological smoothness and lightness. Both the movements of idealization and real-ization that gave rise to them are pervaded by smoothing, excising, and regularizing operations, and are complemented appropriately by the lightness of digital embodization. Such smoothness and lightness frees these beings from certain 'time' and place constraints, making them truly new kinds of beings. We will refer to them as smooth and light beings.

Let us conclude by mentioning another important characteristic of surfs, which, incidentally, is one of the traits that makes them 'essentially' different from shadows. Because of specific embodization operations which give rise to so-called point-and-click devices, we can 'interact' with surfs in terms of 'manipulating' them by pointing, selecting, moving, and so on. Such interactivity of surfs opens up another 'dimension' in our encounters with them and reveals that referring to them as 'visualizations' is too reductive a way of conceiving them.

What Do We Encounter in Visualization Situations?

We now address the central question of this long section, that is: When we participate in a visualization situation, what do we encounter in it? This question, in turn, receives its sense from the larger question of whether our interaction with visualization technology, as a special case of our commerce with technological things in general, could contribute to transforming us in essential ways. Keeping this larger context in mind, we now attempt to address the first question by attending to the elements gained in the previous analyses.

In our analysis of measurement practices presented earlier, we suggested that such practices open up a space of possibilities in which various kinds of encounters take place. A visualization situation, although having a broader scope than a measurement practice, shares with it many elements, such that the kinds of encounters identified for those practices also take place in these situations. In a visualization situation, then, we have multiple encounters, including, first, encounters with the measuring device, that is, the visualization machine; second, encounters with the measurement itself, that is, the visualization and the model to which it corresponds; third, encounters with that which is being measured, for example, certain kinds of molecules; and, fourth, encounters with particular measuring procedures, namely, visualization practices. In the visualization machines and in the visualizations they support, we do not encounter primarily the ontological operations that gave birth to them in the first place; rather, we encounter what emerged from such operations, that is, the ontological traits of those machines and their visualizations.

In the previous subsection we examined the elements of which visualizations are made, namely, surfs, and their ontological characteristics. But in our engagements with visualizations we do not encounter surfs as such; rather, we deal with configurations of surfs that are the visualizations themselves. Even more, when we encounter visualizations we do not just stare at them, but engage in certain practices involving them. Some of these practices, in particular those having to do with the application of geometry and mathematics to the 'study of natural beings', say molecules, have a direct correspondence with similar practices we identified earlier. Such practices, plus the visualizations themselves, constitute what we referred to earlier as interactive visual worlds. We can then say that in a visualization situation what we primarily encounter are interactive visual worlds.

Just as with surfs, because they emerge from the kinds of ontological operations examined above, interactive visual worlds are characterized by an extraordinary ontological smoothness and lightness. As embodizations of Cartesian worlds they inherit their ontological traits as well as those characterizing ancient geometry. From the latter, visual worlds preserve a bias towards full knowability and transparency by which all that takes place in them is, in principle, fully knowable. No residue beyond our reach has been conceded to them, no 'distance' separating them from us remains. While in practice, the complexity that these worlds can attain is considerable, in principle, we can fully comprehend them. From Cartesian worlds, they inherit complementary shape-generating operations that amount to full powers of creation both of geometric beings and of the dimensional spaces they populate.

Smoothness, then, as an ontological trait of interactive visual worlds, refers not only to the smoothness and regularity of geometric objects but is a broader notion encompassing full knowability and full powers of generation.

As digital embodizations, visual worlds partake of the lightness characterizing such operations. These are worlds whose 'material strata', as well as their visible manifestations, are deeply dominated by phenomena of light. Because of the peculiar fragmentary character of the embodization, the way in which visual worlds are anchored in place and 'time' is rather subtle, thus making possible their extreme replicability and intermittence, examined earlier for the case of surfs. In particular, for the enactment of the practices that give rise to visual worlds, corresponding programs must be 'loaded into' and 'executed by' a computer. Programs, then, are not anchored permanently in any particular digital body. In fact, a program doesn't have to reside in any such machine at all; it could as well take residence on paper and, in principle, it could simply exist 'in the mind' of its creator. 27

Because of their lightness, programs themselves can be smoothly replicated. In addition, in principle—and to a large extent in practice—a program can be executed by any computer, as long as it can be 'translated' into the particular machine language of that computer and the 'supporting' software needed by the program is available in the machine. For this reason, even if computers don't have the same level of replicability as programs, this does not impair the overall replicability of visual worlds. This means that for their emergence, worlds of this kind do not depend on particular visualization machines, and they can be 'ported' at will. This replicability, which is one of the ways in which visual worlds are not subject to certain basic 'place constraints', contributes significantly to their 'essential' lightness. Similarly, intermittence—that is, the way in which visual worlds persist such that they can be brought to life and made to disappear at will—originates in the above-mentioned fragmentation, and is a form of replicability in 'time', contributing again to the lightness of worlds of this kind.

Additionally, the ontological lightness of interactive visual worlds means that these extraordinarily smooth worlds have been real-ized by means of embodization operations so nimble and supple that no opacities have been reintroduced in them, so that they retain their transparency and full knowability for us. Because digital bodies have notable capabilities for enacting practices, our power to generate the beings and practices that populate visual worlds has become extremely effective.

Lightness, then, as an ontological trait of interactive visual worlds, refers not only to the lightness of the digital embodization but is a broader notion encompassing replicability, intermittence, full knowability, and highly effective powers of generation. Given that their smoothness and lightness characterize them ontologically, we will refer to interactive visual worlds as smooth and light worlds. Finally, because engaging in these kinds of worlds exposes us to their ontological traits, in them we encounter smoothness and lightness.

Replicability of visual worlds allows them to 'migrate' from digital place to digital place, thus announcing what appears to be a certain 'freedom' from the rigors of place, as suggested earlier. Visual worlds, no longer rigidly anchored in visualization machines—which, as bodies of some kind, continue to obey the logic of place—appear to be a freer kind of world: lighter, not subject to gravity, liberated from the singularity and uniqueness of place, smooth as light itself. It is a common observation that as we engage with worlds of this kind, their lightness and smoothness, which confer upon them a surprising grace and delicacy, exert their seductive powers on us. There is an easiness about these worlds that makes them captivating; because they enthrall us, we forgive them their occasional coarseness and falls from grace. We will refer to this particular 'mood' in which our engagement with visual worlds puts us as fascination from smoothness and lightness. Fascination, then, is another ingredient of our encounter with visual worlds.

We have already referred to the fragmented embodization peculiar to these worlds, by which the performance of actions takes place through the concurrence of two different kinds of entities, namely, programs and digital bodies. We can understand this fragmentation from two different perspectives. First, from the perspective of the 'computerization of geometry', what we call fragmentation simply comes about from the digital embodization of geometric and mathematical objects and practices. Programs are proceduralized incarnations of them; digital bodies have the capability of executing the programs, thus enacting those practices. Second, from an everyday life perspective, actions and, more broadly, 'effects' are enacted by embodied 'agents' and, more generally, by embodied entities, such as 'natural' beings. From this standpoint, what we have is a fundamental fragmentation that comes about by splitting bodies into two different kinds of 'pieces', that is, pieces that describe actions—programs—and pieces that carry out actions—digital bodies.

Perhaps it is because of 'fascination' that we are not as ready to appreciate the other side of fragmentation, the side opposite to 'freedom from place'. Fragmenting practices into proceduralized descriptions of practices and bodies for their enactment involves a significant transformation of both what practices and what bodies are for us. In disembodied descriptions of practices and actions, the interconnectedness between actions and places has been drastically severed. In turn, to be able to enact decontextualized actions, digital bodies are themselves only tenuously connected to places. Thus, the interactive visual worlds that emerge from the execution of programs are 'essentially' uprooted from place, which is what enables them to be replicated at will at any digital site. At the level of surfs, this uprootedness expresses itself in that, like shadows, they are bodiless beings. For an uprooted world, every place is the same, and for us, who engage with these worlds, they become 'portable'. Although, as indicated above, the fascination engendered by smoothness and lightness may conceal from us this essential uprootedness behind a sense of freedom that we gain in our involvements with these worlds, in the end we cannot fail to encounter it.

Let us return to the case of the visualization of molecules. What is it that we encounter when we explore a visualization on the screen? In this particular world, a visualization 'visualizes' a model, a model represents a particular kind of molecule, and the modeled molecule can be either an existing, well-known kind of molecule or a new molecule being designed. From a conventional 'referential' perspective, the graphical molecule 'refers' to the model, while the model, ultimately, 'refers' to a molecule. But when we are intensely involved with a smooth and light v-molecule on the screen, aren't we encountering primarily the colorful v-molecule rather than anything else? When with the point-and-click device we 'grab' a component chemical group, remove it from the v-molecule, replace it with another group, and 'rotate' the resultant v-molecule to observe its shape from another perspective, all of these specific actions are constitutive of practices belonging to the world of v-molecules, not to the world of molecules.
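
The kind of practice described here can be sketched, again only as an illustration and with hypothetical names, as operations on a small data structure standing in for the v-molecule; no actual molecular-modeling interface is being described.

    import math

    # A v-molecule reduced, for the sake of the sketch, to labeled planar positions.
    v_molecule = [("C", (0.0, 0.0)), ("O", (1.2, 0.0)), ("H", (-0.6, 0.9))]

    def replace_group(molecule, index, new_group):
        """'Grab' the component at index, remove it, and put new_group in its place."""
        edited = list(molecule)
        edited[index] = new_group
        return edited

    def rotate(molecule, angle_degrees):
        """'Rotate' the v-molecule in the plane of the screen to view it anew."""
        a = math.radians(angle_degrees)
        return [(sym, (x * math.cos(a) - y * math.sin(a),
                       x * math.sin(a) + y * math.cos(a)))
                for sym, (x, y) in molecule]

    edited = replace_group(v_molecule, 2, ("N", (-0.6, 0.9)))
    print(rotate(edited, 90))

Each of these operations acts on the v-molecule alone; nothing in them reaches the molecules of 'nature', which is precisely the point being made above.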

On the other hand, implicit in our encounter with the v-molecule on the screen is the model of a molecule. In a sense, the v-molecule is 'parasitic' on the model behind it: From the model it borrows 'meaning', which makes possible our encounter with it; without the model the v-molecule would decay into a meaningless collection of surfs. But once v-molecules come alive for us, that to which they refer tends to disappear while they become the focus of attention. We find these beings much more congenial to us than their referents, because we can create them, see them, manipulate them, and 'think' with them. On the referential side, we also find an ambiguity, this time between the model and the kinds of molecules it refers to. Although it is the case that the model is a model of molecules insofar as it derives from, or itself constitutes, a theory of them—which incorporates certain measurements obtained from the corresponding kinds of molecules and their components—it is also the case that the model describes a space of possible molecules, some of which may not even exist in 'nature'. We could then say that the model refers to not-yet-existent but possible molecules. If one of these possibilities were synthesized, then it seems that it is the new molecule that refers to its model, which was first in 'existence', rather than the opposite.

Thus, we encounter a multiplicity of relationships, in particular the ambiguities just mentioned, between visualizations, models, and molecules. Conventional notions of reference need to give way to more involved notions of ambiguity. In addition, the relationships between these various terms are subject to change. Fascination with smoothness and lightness intensifies our bond with visualizations, to the detriment of the other two terms. The 'infinite' perfectibility of the various kinds of practices involved will give increasing salience to the visualizations, in ways perhaps unsuspected by us today, and of which the so-called virtual worlds of 'virtual reality' are an early indication.

Finally, we need to consider another important 'phenomenon', which we also encounter in our dealings with visualizations and which is different in kind from those already mentioned. If we were able to hold in view the overall movement we have traced, starting with ancient geometry and beyond, continuing with the Galilean and Cartesian moments, and concluding with the moment of computerization, we could characterize it, in a first approximation, as an extended movement of migration towards smooth and light worlds taking place across several historical ages. We do not encounter this movement in our engagements with visual worlds or related technologies, because the moments of this movement are hidden from us. What we encounter are only the 'results' of the movement, incarnated in various mathematical, scientific, and technological contraptions.

But even so, is there a way in which we could come into contact, if not with this overall movement, at least with some form of it, be it a bare trace of it? Yes. Whenever we conclude an activity with which we are involved and turn around to reach an available visualization machine, or walk towards one located nearby, we repeat in miniature this migratory movement from 'everyday life' worlds towards smooth visual worlds. We do not pay any attention whatsoever to this exceedingly trivial dis-placement, but it is an unmistakably clear indication of our full participation in and complicity with this massive, millenary, and fateful migratory movement. We will refer to this, our migration, as local migration towards smooth and light worlds, or local migration, for short.

Visualization Principles Revisited

To conclude our examination of computer-supported visualization, let us return to our starting point, namely, to what we called visualization principles. What are these principles? Why are they taken for granted, and why are they so binding for those who work in visualization technology?

But why did we call them 'principles' in the first place? We referred to them in that way partly because they are the more or less explicit points of departure from which visualization technology takes off. We also referred to them as principles simply because we didn't know what they were. Somehow, in ways which we barely comprehend, these broad 'inclinations' or biases emerge. They are injunctions for action, either for the instauration of new practices or for the transformation of existing ones. In many cases they constitute operations for the transformation of 'what is' or for the creation of new kinds of beings. The more principled they are, the greater their originality, the farther-reaching they become. At least some of them constitute what we called earlier ontological operations.

'Thinking with visualizations', 'using vision to think': Why are they taken for granted, why do they have such a compelling force, as if they were truth itself? They have that character in part because they echo the general direction of something broader than themselves, of something to which they belong as one of its moments, that is, the migratory movement toward smooth and light worlds. If this movement were indeed a determinant phenomenon, that is, if it determines in a significant way what is called the West and has an increasing power of determination over other human formations, then its style and general direction contribute to constituting what is given to us. It is in this kind of soil that these plants we call visualization principles have grown.

What is the relation between 'thinking with visualizations' and, say, 'metricizing nature', which characterizes one of the moments of this movement we examined earlier? We suggested that metricizing was an attempt at 'reconstituting the empirical world in an ontologically purified way, where full knowability obtains'. Geometricizing nature, in turn, refers to a metricizing by means of geometry; hence it has a strong reference to the 'visual', and it has nature as a whole as its scope. For its part, 'thinking with visualizations' implies to a certain extent 'visualizing thinking', which is consonant with 'geometricizing nature'. Because the thinking that is meant in 'thinking with visualizations' has the character of a metricizing, 'visualizing thinking' becomes 'visualizing metricizing', in particular, 'visualizing geometricizing'. If we now try to think this last notion together with 'geometricizing nature', we end up with 'visualizing the geometricizing of nature'.

But in the meantime, the metricizing of nature has become the 'metricizing of everything', including not only nature but 'everyday life as a whole', leading then to 'visualizing the geometricizing of everything'. Finally, because visualizing here refers to computer-supported visualizing, that is, to interactive visual worlds characterized by smoothness and lightness, the 'thinking with visualizations' principle becomes a call to 'smoothing and lightening the geometricizing of everything'. One way to understand this convoluted expression is by noting that 'smoothing and lightening' can be regarded as a particular kind of metricizing, which leads us to hear the expression as 'geometricizing the geometricizing of everything'.

If this particular way of understanding the expression is appropriate, the principle 'using vision to think' represents a second order effect in which geometricizing, directed first at nature, then at everything, turns finally upon itself. Thus the 'thinking with visualizations' principle corresponds to the ontological operation of metricizing turned upon itself: Metricizing the metricizing, which is itself another, second order ontological operation, potentially leading to the ontological transformation of thinking itself.

Let us now turn, briefly, to the remaining visualization principles identified earlier. The principle of objectification—oriented to represent any visual or non-visual phenomenon in terms of 'visual objects'—is related to the shape-reductive operations identified at the Galilean moment, and can be understood as an ontological smoothing and lightening operation subordinate to the 'thinking with visualizations' principle. The principle of naturalism is an injunction to achieve highly realistic visualizations, in particular, images that are 'indistinguishable from photographs'. A technique that has been applied to achieve naturalistic effects is based on fractals, in which images are generated by the recursive application of a graphical transformation to the components of an image, to obtain a more detailed image. For a certain class of images this procedure does produce images which appear to us as 'realistic', at least when compared with those obtained with more conventional techniques. 28

It would seem that these operations achieve the opposite effect of smoothness, even if they start with a smooth shape and the recursive transformation that is applied also consists of smooth elements, basically, straight lines. Applying the transformation repeatedly to what become increasingly small elements leads to figures that appear to be very rich in detail and texture and which are anything but smooth. What is going on here? Do fractals constitute a counter-movement to the migration toward smooth and light worlds? 29 We believe the answer is no, because the roughness of fractals is only apparent.

Fractal images are no longer based primarily on the geometrization of shape but rather on what may be called its algorithmization. This means that shapes are no longer metricized primarily by means of mathematical formulas but rather by means of algorithms, in particular of the recursive variety. In many cases, simple or even very simple recursive algorithms can give rise to images of surprising apparent 'complexity'. Smoothness disappears at the surface but remains at bottom: If we now regard 'smoothness' as 'regularity of generation', these images are surprisingly smooth. 30
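
The point can be made concrete with the classic Koch construction, given here in Python only as an illustration of 'regularity of generation', not as the specific technique referred to in note 28: a single, very regular rule, applied recursively to straight segments, produces a figure that appears anything but smooth.

    import math

    def koch(p1, p2, depth):
        """Recursively replace the segment p1-p2 with four smaller segments."""
        if depth == 0:
            return [p1, p2]
        (x1, y1), (x2, y2) = p1, p2
        dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
        a = (x1 + dx, y1 + dy)                 # one third along the segment
        b = (x1 + 2 * dx, y1 + 2 * dy)         # two thirds along the segment
        # apex of the 'bump': the midpoint pushed out perpendicular to the segment
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        peak = (mx - dy * math.sqrt(3) / 2.0, my + dx * math.sqrt(3) / 2.0)
        points = []
        for q1, q2 in [(p1, a), (a, peak), (peak, b), (b, p2)]:
            points.extend(koch(q1, q2, depth - 1)[:-1])
        return points + [p2]

    curve = koch((0.0, 0.0), (1.0, 0.0), depth=4)
    print(len(curve), "points generated from one straight segment and one rule")

Four levels of recursion already turn a single segment into 257 points of apparent roughness, yet the whole figure is exhausted by the one rule that generates it.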

As we noted earlier, another principle at play in computer-supported visualizations is the 'fusion principle', that is, the injunction to bring ever closer together human beings, on one hand, and visualization machines and visualizations, on the other, to fuse them if possible. This principle is subordinate to the 'thinking with visualizations' principle, insofar as the injunction to fuse is justified in terms of the possibility of triggering preconscious 'visual mechanisms', thus enhancing 'thinking' understood as information processing. At the same time, the fusion principle goes beyond its 'super-ordinate' 'metricize the metricizing', because its call is not only to metricize but, thought speculatively in its far-reaching possibilities, it calls for the emergence of new kinds of beings, namely, hybrids of humans and digital bodies. In this sense, this principle is connected to the last principle we identified in a previous section, namely, the 'transformation of thinking' principle, because human/digital-body hybrids would have not only 'enhanced' but, perhaps, novel ways of 'thinking'...

Overall, we can say that the moment of visualization—made possible by ontological operations including digital embodization, practice-regularizing, practice-smoothing, and practice-enactment operations, and governed by the principles considered above—appears to be consonant with the prior movement towards smooth worlds, a movement which it complements with an orientation towards not only smooth but also light worlds.

Visualizations and World

Assuming that the analysis presented so far has identified and determined important ontological characteristics of visualizations and visualization machines, we now return to the question posed at the beginning of this work, that is: Is it possible that if we inhabit worlds increasingly penetrated by visualization machines, and if we increasingly encounter smooth and light worlds in our work activities as well as in everyday life, this could contribute to eliciting 'essential' transformations in what we are? And what kinds of transformations could they be? By such transformations we do not mean here something overtly dramatic; rather, we refer to subtle but distinct transformations of the ontological traits that characterize us.

Becoming Attuned to Visualizations

In his well-known analysis of what takes place in the context of using equipment, 31 Heidegger suggested that the specific kind of relation we maintain with tools varies depending on how well they contribute to performing the task implied in an activity. In the favorable case, when the tool or equipment in use is performing adequately, it tends to withdraw from the explicit focus of our 'concern', allowing us to focus instead on the task itself. It is as if the fit between us, the tool, and the task is so good that the actual use of the tool does not impose any burden on us and we become free for our involvement with the task, which is our primary concern. On the other hand, any infelicity on the part of the tool may distract us from our main concern and return the tool to the focus of attention.

This analysis can be extended to consider other cases that typically arise in the use of new tools or things technological. When we encounter a tool for the first time, and while we learn to use it in the context of an activity, the withdrawal of the tool does not take place, not because the tool is malfunctioning, but because we don't know how to use it properly. It is not the tool that needs adjustment; it is we who have to 'adjust' to the tool so that, eventually, we are able to use it appropriately, letting it disappear from our explicit concern. In particular, when we begin to use visualization machines and, correspondingly, begin to dwell in smooth and light worlds, depending on our prior familiarity with related technologies, we need to adjust ourselves to them to a greater or lesser extent; we need to become attuned to this new kind of machine. In particular, we need to attune ourselves to what we encounter when we become involved with them.

Additionally, in the use of visualization machines, as well as in the preparatory activities related to their use, to a certain extent we come to 'repeat' some of the ontological operations embedded in these machines. To successfully use visualization machines, say to visualize the structure of molecules, we need to reconstitute the phenomena under consideration in terms of appropriate physical objects, in whose reconstitution those operations come into play. Repeating these operations also contributes to our becoming attuned to the machine.

Finally, because visualization machines are peculiar in that they involve ontological operations and biases, thus giving birth to new kinds of beings, we suggest that the attunement to them that takes place in the course of their use opens the possibility of essential transformations in us. But the question arises: Aren't we already attuned to smooth beings and the operations involved in their creation by the mere fact that we belong to modernity or late modernity? Indeed, in an important sense we are already attuned to them, in particular by our familiarity with Cartesian worlds, which are precursors of smooth and light worlds, as noted earlier. More precisely, what we are suggesting is that in our involvement with visualization machines and interactive visual worlds, we are furthering our attunement to smooth worlds and are becoming specifically attuned to light worlds.

Visualizations and 'Being in the World'

Let us explore possible ways in which our commerce with visualizations and interactive visual worlds, in the context of increasingly technified communities, could contribute to transforming a fundamental ingredient of what we are, namely, our 'being in the world'. While attempting to characterize what had taken place at the Galilean moment as a whole, Husserl noted: "But now we must note something of the highest importance that occurred even as early as Galileo: the surreptitious substitution of the mathematically substructed world of idealities for the only real world, the one that is actually given through perception, that is ever experienced and experienceable—our every-day life-world. This substitution was promptly passed on to his successors, the physicists of all the succeeding centuries" (1970a, p. 48f). Husserl referred to this substitution as the 'mathematization of nature'. For Husserl, the centuries-long development that reached a peak with the emergence of modern science, and which continues unabated until our own time, has led to a certain 'substitution' of 'the real world', specifically our everyday lifeworld, by the world of mathematical idealities. 32 Husserl is suggesting that, at least in scientific and technological practices, complex 'real' phenomena are reconstituted in terms of smooth, ideal objects, and that this substitution takes place surreptitiously, in a way that escapes our 'reflection'.

At least two aspects are included in Husserl's notion of substitution. First, neither Galileo nor those who followed in his tracks developed any bridges between the geometry they inherited and those everyday life practices out of which geometry arose, and from which it received its 'original' meaning. Hence, in Husserl's view, geometry becomes detached from its sources and is taken as a given, thus, in a sense, substituting for those 'source practices'. Second, within certain kinds of practices and, increasingly, for the community as a whole, mathematical ideas and their application to 'nature' are regarded as 'representations of the lifeworld', as 'objectively actual and true nature' (pp. 49-51). In this second case, Husserl is referring implicitly to what we called earlier the ambiguity we encounter in visualizations, an ambiguity which itself has a dual character. First, we encounter both visualizations as well as what they refer to. And second, in the particular case of the visualization of molecules, in what they refer to we encounter both the models behind the visualizations as well as the molecules to which those models relate. Specifically, Husserl suggests that models or mathematical idealities become prominent, at least in certain kinds of practices, thus overshadowing, even more, substituting for 'the real world'.

But is this possible? Has the 'physical world' substituted for the real world? It would appear that the answer is no. In our everyday life activities and, to a large extent, in scientific and technological practices, we continue to be engaged with what Husserl refers to as the lifeworld. Perhaps what Husserl is suggesting is that as technical and scientific practices continue to encroach upon most other practices, and given that the former are based on the ontological operations that gave rise to the physical world, we increasingly dwell in the real world as if it were the physical.

In the ambiguity that characterizes interactive visual worlds there is a tension, a conflict between the visualizations and what they refer to, both of which assert themselves, both of which compete for our concernful attention. When we deal with a visualization, this conflict makes us oscillate between encountering the visualization, the model, and the phenomenon itself. Behind this oscillation there is a going in and out of different kinds of worlds. In this migratory movement—which appears to us as a migration towards smooth and light worlds, but whose subsequent destinations are unknown to us or can barely be surmised—a variety of new kinds of worlds have arisen, including the ideal geometric, physical, dimensional, and visual worlds. Because of their peculiar constitution, interactive visual worlds contain in themselves all these kinds of worlds and, depending on the specific practices we are involved in, our encounters with visualizations emphasize some of them over the others.

As the ontological operations underlying these worlds accumulate one upon the other, reinforcing themselves in multiple ways and giving rise to worlds which are themselves increasingly 'real', such that the phenomena to which those worlds refer, say, molecules, become buried under them, Husserl's notion of substitution appears more plausible, but the character of the substitution becomes more puzzling. Although the notion of substitution is important, it does not tell us much about possible transformations in what we are.

Smoothness and lightness. Full knowability. Full powers of generation. Why? What makes them so seductive, so fascinating to us, such that a significant period of history to a certain extent appears to be directed towards them as if towards a final destination?

Roots of Visualization

What are the roots that, across several historical ages, have sustained and continue to sustain this extended migratory movement? Whence the impetus for the extraordinary persistence and the extraordinary efforts by powerful minds that have contributed to it? It seems to us that to account for the emergence of these rare plants we call interactive visual worlds, to account for the taking place of this extraordinary migratory movement, we need to have recourse to something that is itself extra-ordinary and rare.

Let it be permitted to us to consider, without much ado or the needed preparation, a 'phenomenon' that, we believe, is one of the roots of visualization, a phenomenon that we will refer to as radical enigmaticity. Enigmaticity points to the character of 'what is' by which it is an enigma. 'What is' refers, on one hand, to the 'world as a whole', in the sense of an all-inclusive quasi-totality which encompasses us, and which may be called the 'cosmos'. On the other, 'what is' refers to 'what is for us' and, in this sense, it is related both to what Heidegger calls 'world' and to a corresponding notion that characterizes human beings as 'being in the world'. Briefly, human beings are 'essentially' characterized by an 'openness' in which 'what is' flourishes, comes to be for us, such that this openness has the character of 'world'. In an ontological sense, world is a constellation of practices and significations that makes possible our encounters with things, others, and ourselves. Whatever comes to be for us does so within a horizon—understood as background—constituted by world. 'What is' in the first sense, as cosmos, comes to be for us in the context of 'what is' in the second sense; that is, cosmos emerges in the context of world.

Enigmaticity, then, is enigmaticity of cosmos and enigmaticity of world. In its emerging for us within world, cosmos can suddenly strike us as 'incomprehensible', not in this or that particular sense but 'essentially', in toto. Cosmos, as all that is, ever was, and will ever be, can rarely and suddenly become incandescent for us, like 'lightning that, striking at dark, illuminates for a brief moment what a moment earlier we couldn't see'. In its incandescence, we come to appreciate how bold and daring it is for the cosmos to be, and to continue to be. Where? When? And this enigmaticity is radical in the sense that whatever can be adduced to account for cosmos becomes subject to the same, ineradicable enigmaticity. With the same suddenness with which it incandesces, a moment later cosmos returns to being what for the most part it is for us, an 'amorphous gray mass that leaves us indifferent'.

Enigmaticity of world, on the other hand, can emerge from ourselves as the practices and significations that constitute it no longer appear as solid ground on which to stand. What to do? What to say? Although if we leave them untouched they can sustain us, many significations that we can bring to consideration, if we pursue them enough, lead us into shifting grounds, sometimes becoming incomprehensible. Practices give us a landscape in which to dwell but, if examined with some attention, they may appear as ungrounded bridges. The question as to whether these two kinds of enigmaticities, of cosmos and world, are ultimately the same or spring from the same source we leave unconsidered.

Because enigmaticity is a fundamental characteristic both of cosmos and of world, it appears that there is no way for human formations not to encounter it, even if we take into account human finitude. Hence, enigmaticity must be at the heart of human formations, which must deal with it in some way or another. In the context of the West, what we have called the movement of migration towards smooth and light worlds could be understood, at least in part, as an attempt at coping with enigmaticity. 33 Many of the ontological operations we identified earlier, including smoothing, excising, lifting, shape- and link-regularizing, causalization, and metricizing, can be at least partially understood as oriented ultimately towards the eradication of enigmaticity. In this migration movement, the enigmaticity of cosmos has been neutralized by the creation of the physical world—by means of universal or cosmic causalization and metricizing operations yielding, in principle, full knowability. At another stage of this migration, but still at an incipient level, the emergence of interactive visual worlds, characterized in principle by full knowability and full powers of generation, can be understood as a further attempt to obliterate the enigmaticity of cosmos. 34

Possible Transformation of Being-in-the-World

It is possible that the way enigmaticity is dealt with in different kinds of worlds may be relevant to the particular way of 'being in' those worlds. Let us begin by briefly examining Heidegger's notion of 'being in the world'. 'Being in' refers to a fundamental way in which we are, by which we inhabit and dwell in a world, taking care of things and being with others. As Heidegger suggests, "'I am' means I dwell, I stay near ... the world as something familiar in such and such a way" (Heidegger 1996, p. 51). Thus, 'being in' denotes a most basic, multifaceted ontological trait characterizing human beings. Exploring further into this trait, Heidegger suggests that it is in 'being in' that a world is disclosed, coming to be for us, and then identifies constituents of this trait, including 'attunement' and 'understanding'. Attunement, as an ontological notion, corresponds to moods, the moods in which we constantly are and which in the most immediate way disclose 'what is' to us.

In the above quote, 'being in' is understood as dwelling in the world as something 'familiar'. Now, what is familiarity and how is it related to enigmaticity? Familiarity means that we easily find our way in the world, that we have an immediate, although not necessarily explicit, understanding of what we encounter. But, even if familiarity predominates in our dwelling, it occasionally vanishes. As Heidegger suggests, "moods bring Dasein before the that of its there [that Dasein is and has to be] which stares at it with the inexorability of an enigma" (p. 128, our gloss and emphasis). Thus, the familiarity that characterizes dwelling is punctured by enigmaticity and, we suggest, enigmaticity always underlies familiarity as its ineradicable shadow. This underlying enigmaticity makes world and cosmos—each in its own way—appear as 'different' from us; that is, it gives rise to an originary difference by which world and cosmos can appear to us, can be for us. In this difference, then, a distance can blossom in which world and cosmos appear not only as different from us but also as being something entirely other than us and pregnant with their own possibilities. 35

How shall we understand, in this context, what we have characterized as smooth and light worlds which, because they are based on certain idealities, can in principle be fully understood, consequently giving us the possibility of generating all the possible phenomena that can take place in them? Wouldn't our basic relationship with this kind of world be different from that we have with other kinds of worlds? In particular, wouldn't what we have called enigmaticity be almost completely obliterated from the fabric of these worlds? Although interactive visual worlds can reach a level of complexity that would put them, in practice, beyond our full understanding, we would still fully understand the basic, ideal principles on which they are grounded.

It is then plausible to suggest that the 'distance' between us, on one hand, and world, on the other, by which the latter is understood as being pregnant with its own possibilities, would tend to vanish, because we, humans, have fully generated such worlds, from the ground up. But, being ineradicable, enigmaticity would still underlie our dwelling in these quasi non-enigmatic worlds, such that what we called the originary difference, by which these worlds can appear to us, still gives.

We now need to ask: If the 'distance' between us and visual worlds is erased but these worlds still appear as being different from us, what then obtains? One possibility, which we now consider, is that we take these worlds as emanating from us. What could this mean? Let us recall what we indicated earlier regarding the Cartesian moment, in particular the ontological dimensional spatialization operations, which were proposed as accounting for the creation of dimensional spaces, a cornerstone of visual worlds.

First, ideal geometric space emerges by an operation upon the ideal geometric world by which the geometric shapes that populate it are removed from it. This implies that geometric shapes are no longer the primary givens but, rather, that they come to be in the context of something else which is now primary, namely, geometric space. But what is this space, what sustains it? A second operation clarifies this, that is, the institution of an origin in this space. Such an origin could simply be regarded as a point of reference with respect to which shapes are measured, but this would not do justice to what the origin stands for. Rather, the 'origin' needs to be understood as that from which geometric space originates, comes to be. And we indicated earlier that, properly understood, such an origin stands for us, humans. Geometric space is different from us but, at the same time, it 'emanates' from us. From this origin also emanate dimensional axes—as infinitely extended arms that hold this space in a tight embrace.

Next, we indicated that geometric space is populated with a variety of entities, including shapes and shape-counts, that is, equations representing shapes. Finally, we noted that by a transformation of the notion of shape-count, such equations are no longer understood as simply measuring shapes but, rather, as generating them, and not only known kinds of shapes, but entirely new classes of them, previously unknown to us. We referred to these kinds of operations as ontological shape-generating operations because, together with the operations that gave rise to Cartesian spaces, they account for the creation of Cartesian worlds. With the emergence of shape-generating operations, not only does geometric space emanate from us, but the shapes themselves, which populate such space, emerge from us.
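
A minimal illustration of this passage from measuring to generating, using an example of our own rather than one given in the text, can be put in a single line of notation:

    \[
    x^{2} + y^{2} = r^{2}
    \quad\longrightarrow\quad
    |x|^{n} + |y|^{n} = r^{n}, \qquad n > 0 .
    \]

Read as a shape-count, the left-hand equation merely measures a given circle of radius r; read generatively, with the exponent freed as a parameter, the same form brings forth the whole family of Lamé curves, from star-like figures for n < 1 to ever squarer ones as n grows, most of which correspond to no previously named geometric figure.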

Now, what takes place at the visualization moment, when Cartesian worlds are real-ized in terms of interactive visual worlds? Although it may be plausible to suggest that Cartesian worlds are taken as emanating from us, as we did above, what happens to such 'emanation' when these worlds are real-ized? At the computerization moment, we identified digital embodization operations, giving rise to digital bodies and digital places, as well as practice-proceduralization and practice-enactment operations that enable the carrying out of practices by digital bodies. In their own ways, these various operations aim at real-izing the ideal and, in so doing, they attempt to give these idealities a life of their own, that is, to make them pregnant with their own possibilities.

But the real-ization of these idealities in terms of interactive visual worlds remains tenuous. We noted earlier the fragmentation of practices into programs that specify them and digital bodies that carry them out, and suggested that such fragmentation gives rise to the 'essential' uprootedness from place of such worlds. They are 'portable', meaning that—in principle, though not necessarily in practice—they can be enacted at any time on any computer; hence, they are not rooted in any place. Where are they rooted, then? In us—who, ultimately, understand, generate, and 'turn them on and off' at will, at any time, anywhere. Consequently, the distance that flourishes between us and other, everyday kinds of worlds is very fragile in the case of visual worlds, which makes us suggest that our 'being in' those worlds may retain a strong sense of emanation.

Let us attempt to gain a more concrete sense of what 'emanation' could mean. When we 'think' of something, what is the relationship that we maintain with our thoughts? While we regard them as different from us, we have a sense that they 'emanate' from us, which is why we call them 'our thoughts' in the first place. Certainly, thoughts are not arbitrary and they say what things are. Nonetheless, they surge up in us. What we are suggesting is that it is possible that our relationship with interactive visual worlds could come to have a character similar to the relationship we have with our own thoughts, so that the 'distance' between such worlds and us would vanish. We can express this transformation in the following, formulaic way: In the case of visual worlds, our 'being in the world' could become 'being (in) the world', such that that 'in' which we dwell appears to us as emanating from us, so that 'we are those worlds'.

Two of the visualization principles examined earlier point in the direction of the suggested transformation. The fusion principle can be understood as an injunction to perform operations—ontological operations—to fuse together humans and visualizations, overcoming the 'distance' that separates them. Similarly, the 'thinking with visualizations' principle can be understood as inviting a seamless relationship between our thoughts and computer-supported visualizations.

To take one, and final step further, we ask: If visualizations were to become pervasive in everyday life, could the indicated transformation be possible not just for interactive visual worlds but for all the worlds in which we dwell, so that we would now be essentially characterized by 'being (in) the world'?

Notes

1. In Visualization, the Second Computer Revolution, Richard Friedhoff and William Benzon (1989) present a richly illustrated overview of the field of computer-supported visualization, discussing a variety of uses of this technology.

2. This is an extensive report that grew out of discussions in several panels and workshops that took place in 1986 and 1987 (Bruce McCormick, Thomas DeFanti, and Maxine Brown 1987).

3. An important source for work in this area is (Stuart Card, Jock Mackinlay, and Ben Shneiderman 1999).

4. An application of this principle can be found in the notion of the 'cognitive co-processor', a component of the 'Information Visualizer' (George Robertson, Stuart Card, and Jock Mackinlay 1993). An important purpose of this visualization architecture is to "maximize interaction rates with the human user by tuning the displays and responses to real-time human action constants" (p. 59).

5. This last work of Husserl, written towards the end of his life in the period 1934-1937, has to be regarded as a work in the making, unfinished in many of its written parts and incomplete with respect to its overall structure. Nonetheless, it is a very significant work because of its aims and because it represents an original departure from Husserl's prior works, in its strong emphasis on the 'origins' of phenomena.

6. Husserl's phenomenological analysis of geometry and modern science in the Crisis has been critically examined by Patrick Heelan (1987) and Don Ihde (1991, pp. 17-22), among others. For a recent interpretation see Noël Gray (1999, pp. 49-70).

7. Although not included in the Crisis as published by Husserl himself, because of its close relationship to the theme of the Crisis and because of the way Husserl begins that piece, in which he appears to refer implicitly to the Crisis, later editions have included it as an Appendix to the Crisis (Husserl 1970b, pp. 353-378).

8. As stated in (Victor Katz 1998, p. 58), the Elements "has appeared in more editions than any work other than the Bible. It has been translated into countless languages and has been continuously in print in one country or another nearly since the beginning of printing."

9. See, e.g., (J. F. Scott 1969, pp. 22-33).

10. Because Husserl is concerned with 'essential' steps in the unfolding of natural science, he uses Galileo's name in a broad sense to refer not only to Galileo himself but also to the historical moment in which natural science was born: "With Galileo [this idea of nature] appears for the first time, so to speak, as full-blown; thus I have linked all our considerations to his name, in a certain sense simplifying and idealizing the matter; a more exact historical analysis would have to take account of how much of his thought he owed to his 'predecessors'" (1970b, p. 57).

11. Galileo himself used Euclidean geometry, as exemplified by his study of motion in the Third Day of his Two New Sciences. In many cases, because the problem under study can be represented using parallel lines cut by other lines, Galileo reasons by means of 'proportions'. In other cases, he reasons in terms of 'infinitesimals' (Galilei 1989, p. 165). See the discussion in (Katz 1998, p. 421).

12. Geometry was certainly used before Galileo, for instance in astronomy and in the study of motion, but it is with Galileo that it acquires a more fundamental and decisive role. In a famous passage Galileo states that:

Philosophy is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering about in a dark labyrinth (Galileo 1960, pp. 183-184).

Whatever Galileo may have meant by this complex statement and whatever the justifications he may have had for it, he proposes to read this 'grand book' geometrically. And this means that, sooner or later, these operations of setting aside the bodies of objects and replacing them by ideal geometric shapes must take place.

13. On the Third Day of the Dialogue Concerning the Two Chief World Systems (p. 410), Simplicio, the Aristotelian, says:

Truly, I think that Salviati's eloquence has so clearly explained the cause of this effect that the most mediocre mind, however unscientific, would be persuaded. But we who restrict ourselves to philosophical terminology reduce the cause of this and other similar effects to sympathy, which is a certain agreement and mutual desire that arise between things which are similar in quality among themselves, just as on the other hand that hatred and enmity through which other things naturally fly apart and abhor each other is called by us antipathy.

Sagredo: And thus, by means of two words, causes are given for a large number of events and effects, which we behold with amazement when they occur in nature. Now this method of philosophizing seems to me to have great sympathy with a certain manner of painting used by a friend of mine....

On the Fourth Day, in a discussion of the tides and their possible relation to the earth's movements, Salviati proposes a highly regularized notion of causality:

Thus I say that if it is true that one effect can have only one basic cause, and if between the cause and the effect there is a fixed and constant connection, then whenever a fixed and constant alteration is seen in the effect, there must be a fixed and constant variation in the cause (Galilei 1967, p. 445).

Joseph Pitt refers to Galileo's notion of causality as the Principle of Universality (Pitt 1992, p. 97) and quotes from the Dialogue, p. 418: "... ultimately one single true and primary cause must hold good for effects which are similar in kind."

14

This issue is related to Galileo's distinction between 'conditions' such as shape and movement, which he locates in "material or corporeal substances", and 'qualities' such as color, taste, and heat, which he locates in the body of the perceiver. This distinction appears in the context of a discussion on whether "motion is the cause of heat" (The Assayer, pp. 308-313). In Galileo's discussion one can see how he attempts to explain heat in terms of 'shape-related' qualities: what we call fire "would be a multitude of minute particles having certain shapes and moving with certain velocities." He concludes,

I do not believe at all that in addition to shape, number, motion, penetration, and touch there is any other quality in fire which is 'heat'; I believe that this belongs to us, and so intimately that when the animate and sensitive body is removed, 'heat' remains nothing but a simple vocable (1960, p. 312).

Pietro Redondi examines these issues, contrasting Galileo's position with Aristotle's (Redondi 1987, pp. 55-57). Husserl regards Galileo's distinction as a 'false consequence of Galileo's mathematizing reinterpretation of nature' (1970a, p. 53).

15

Consider the beginning of the Fourth Day in Galileo's Two New Sciences (p. 217): "I mentally conceive of some moveable projected on a horizontal plane, all impediments being put aside." Later in the same Day (p. 225):

No firm science can be given of such events of heaviness, speed, and shape, which are variable in infinitely many ways. Hence to deal with such matters scientifically, it is necessary to abstract from them. We must find and demonstrate conclusions abstracted from the impediments, in order to make use of them in practice under those limitations that experience will teach us. And it will be of no little utility that materials and their shapes shall be selected which are least subject to impediments from the medium, as are things that are very heavy, and rounded.

From an epistemological perspective, Galileo's removal of impediments appears, clearly enough, as a "methodological rule of abstraction which frees him from the necessity of considering the infinite variations in the particular features of nature and allows him to look for generalities instead" (Pitt 1992, p. 72). Pitt refers to this rule as Galileo's Principle of Abstraction.

16

Alexandre Koyré has examined in detail the "infinitization of the universe" that took place in early modernity, whereby the boundaries of the Cosmos were exploded (Koyré 1957).

17

For a discussion of Galileo's "recognition of the limit of human cognition" see (Pitt 1992, p. 34).

18

The quote is from (Ihde 1997, p. 293). In (Ihde 1990), Ihde characterizes Husserl's interpretation of Galileo as implying an "emptying the scientific world of perception and praxis" (p. 38). The same work contains a discussion of the role of the telescope in Galileo's work, and of instruments in science more generally (pp. 52-57).

19

In his Postscript to The Structure of Scientific Revolutions (Kuhn 1970, p. 189), where he discusses paradigms as shared examples, Kuhn refers to paradigms as accounting for "the resultant ability to see a variety of situations as like each other."

20

In (Ihde 1991) there is an extensive discussion of Husserl's and Kuhn's views on science, especially on pp. 11-24.

21

Koyré referred to his Galilean Studies in (Koyré 1957, p. viii). The quotation on the notion of Cosmos is from (Koyré 1978, p. 131).

22

That Koyré had been a student of Husserl, and that the Crisis and Galilean Studies were published shortly after one another, in 1937 and 1939 respectively, makes it likely that there were direct influences between the two works, especially from Koyré's work on the Crisis. (See the Translator's Introduction to the Crisis, p. xix, n7.)

23

A mode of revealing is a particular way of bringing forth or opening up whatever there is, which then constitutes 'what is'. Heidegger suggests that it is the mode of revealing that gives rise to historical ages, each characterized by a predominant mode.

24

Visualizations based on images, say, a map of a continent, do not fit exactly the kind of application we describe here, but they are still an application of dimensional spaces. In this case, instead of using 'formulas', the visualization is based on 'bitmaps', that is, collections of descriptions of the points that constitute the image.
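
As a rough illustration only (a hypothetical sketch in Python, not part of the technology discussed in the article), the following contrasts the two kinds of description: a 'formula' that generates the points of an ideal shape on demand, and a 'bitmap' that simply stores, point by point, the values that constitute an image.

import math

# Formula-based description: the points of a circle of radius r are
# generated on demand from the parametric equations x = r*cos(t), y = r*sin(t).
def circle_points(radius, n=12):
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

# Bitmap-based description: the 'image' is nothing but a grid of stored
# intensity values; there is no formula behind it, only the enumerated points.
bitmap = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 0],
]

print(circle_points(1.0)[:3])                      # a few formula-generated points
print(sum(map(sum, bitmap)), "pixels stored explicitly in the bitmap")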

25

Are we justified in talking about digital bodies rather than digital machines? Because computers are programmable, they are far more 'flexible' than conventional machines. Hence, it appears that the notion of body, more complex than that of a machine, does better justice to the kinds of entities we are dealing with. But we cannot address this issue in detail here.

26

Strictly speaking, 'surf' is not a technical term, but it is a convenient notion that facilitates the analysis. Surfs can also represent elements of 'images.' An image, e.g., a photograph of something, can be digitized and then realized in terms of surfs. For simplicity, we will concentrate here only on surfaces realizing geometric objects.

27

It is possible to directly wire a program into a digital body by implementing it in terms of an integrated electronic circuit. In this case, there is a direct digital real-ization of the program, giving rise to an entity that will have somewhat different characteristics.

28

The class of images for which this holds is generated by self-similar shapes: "When each piece of a shape is geometrically similar to the whole, both the shape and the cascade that generate it are called self-similar." (Benoit Mandelbrot 1982, p. 345) 'Cascade' refers here to the recursive process that generates the shape.
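
To make the notion of a generating 'cascade' more concrete, here is a minimal sketch (a generic illustration in Python, assumed for exposition and not drawn from the sources cited) of the recursion behind the Koch curve mentioned in the next note: each segment is replaced by four segments one third as long, and the same rule is then applied to every new segment.

import math

def koch(p, q, depth):
    # Return the points of a Koch curve from p to q (excluding q itself).
    if depth == 0:
        return [p]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)            # one third of the way along the segment
    b = (x0 + 2 * dx, y0 + 2 * dy)    # two thirds of the way along the segment
    # Apex of the equilateral 'bump' erected on the middle third.
    angle = math.atan2(dy, dx) + math.pi / 3
    length = math.hypot(dx, dy)
    c = (a[0] + length * math.cos(angle), a[1] + length * math.sin(angle))
    # The cascade: the same replacement rule is applied to each new segment.
    return (koch(p, a, depth - 1) + koch(a, c, depth - 1) +
            koch(c, b, depth - 1) + koch(b, q, depth - 1))

points = koch((0.0, 0.0), (1.0, 0.0), 4) + [(1.0, 0.0)]
print(len(points), "points after 4 levels of the cascade")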

29

For an extended discussion on fractal geometry and its relation to 'traditional' geometry see (Gray 1999, pp. 113-123).

30

Consider what Mandelbrot says about Koch curves, which many mathematicians have regarded as extremely irregular:

Being fascinated with etymology, I cannot leave this discussion without confessing that I hate to call a Koch curve "irregular." This term is akin to rule, and is satisfactory as long as one keeps to the meaning of ruler as an instrument used to trace straight lines: Koch curves are far from straight. But when thinking of a rule as a king (= rex, same Latin root), that is, as one who hands down a set of detailed rules to be followed slavishly, I protest silently that nothing is more "regular" than a Koch curve (Mandelbrot 1982, p. 41, Mandelbrot's emphasis).

31

See Sections 15 and 16 in (Heidegger 1996, pp. 62-71).

32

See our previous reference—in the subsection on 'Critical Appraisal of Husserl's Analysis'—to Heelan's suggestion on the possible sources of Husserl's notion of mathematization of nature, which Heelan relates to 'Göttingen science'.

33

In the West, another fundamental way of coping with enigmaticity has been Christianity. It can be argued that two of the most extraordinary beings ever conceived by human formations, namely, 'God' and the 'world beyond' of Christianity, were to a certain extent responses to enigmaticity. Perhaps the Christian God, a conception elaborated over the course of centuries, can be understood—in an admittedly simplified way—as emerging from what might be called 'enigmaticity displacement operations', by which the enigmaticity of the cosmos is displaced towards another being characterized by omniscience, omnipotence, and being causa sui .

34

If indeed enigmaticity is radical, as suggested, all attempts to displace, neutralize, and eradicate it must in the end fail. Perhaps this could account for the fate of the Christian God as proclaimed by Nietzsche. Heisenberg's indeterminacy principle in physics, Gödel's incompleteness theorem in mathematics, and exponential complexity barriers in computer science, can be taken as signs that enigmaticity cannot be conquered.

35

In the case of world, we need to distinguish between 'our possibilities', what Heidegger calls the 'for the sake of which', and the possibilities implicit in 'significance', that is, the network of interlocking meanings and possible meanings that emerges from the entities that appear in the world for us. We are specifically referring to the 'intraworldly entities', which themselves have their own possibilities and appear as such.

References

Araya, Agustin A. "Changed Encounters With Things and Ontological Transformations. The Case of Ubiquitous Computing." Research in Philosophy and Technology, Vol. 19, 2000, pp. 3-31.

Card, Stuart K., Mackinlay, Jock D., and Shneiderman, Ben. Readings in Information Visualization. Using Vision to Think. San Francisco: Morgan Kaufmann Publishers, Inc., 1999.

Friedhoff, Richard M. and Benzon, William. Visualization, The Second Computer Revolution. New York: Harry N. Abrams, Inc., Publishers, 1989.

Galilei, Galileo. "The Assayer." Trans. by Stillman Drake and C. D. O'Malley. In The Controversy on the Comets of 1618, trans. by Stillman Drake. Philadelphia: University of Pennsylvania Press, 1960.

_______. Dialogue Concerning the Two Chief World Systems—Ptolemaic and Copernican. Trans. by Stillman Drake. Berkeley: University of California Press, 1967.

_______. Two New Sciences. Trans. by Stillman Drake. Toronto: Wall & Thompson, 1989.

Gray, Noël. "Stains on the Screen. The Geometric Imaginary and Its Contaminative Process." Research in Philosophy and Technology, Supplement 4, 1999, pp. 1-145.

Heelan, Patrick. "Husserl's Later Philosophy of Natural Science." Philosophy of Science, 54, 1987, pp. 368-390.

Heidegger, Martin. Being and Time. Albany: State University of New York Press, 1996.

Husserl, Edmund. The Crisis of the European Sciences and Transcendental Phenomenology. Trans. by David Carr. Evanston: Northwestern University Press, 1970a.

_______. The Origin of Geometry. Published as Appendix VI in The Crisis of the European Sciences and Transcendental Phenomenology. Trans. by David Carr. Evanston: Northwestern University Press, 1970b.

Ihde, Don. Technology and the Lifeworld. From Garden to Earth. Bloomington: Indiana University Press, 1990.

_______. Instrumental Realism. Bloomington: Indiana University Press, 1991.

_______. "Whole Earth Measurements." Ludus Vitalis, Journal of Philosophy of the Life Sciences, Vol. 2, Special Issue, 1997, pp. 291-299.

Katz, Victor J. A History of Mathematics. An Introduction. Reading, Massachusetts: Addison-Wesley, 1998.

Koyré, Alexandre. From the Closed World to the Infinite Universe. Baltimore: The Johns Hopkins Press, 1957.

_______. Galileo Studies. New Jersey: Humanities Press, 1978.

Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: The University of Chicago Press, 1970.

Mandelbrot, Benoit B. The Fractal Geometry of Nature. New York: W. H. Freeman and Company, 1982.

McCormick, Bruce H., DeFanti, Thomas A., and Brown, Maxine D. "Visualization in Scientific Computing." Computer Graphics, 21, 6 (November). New York: Association for Computing Machinery, SIGGRAPH, 1987.

Pitt, Joseph C. Galileo, Human Knowledge, and the Book of Nature. Method Replaces Metaphysics. Boston: Kluwer Academic Publishers, 1992.

Redondi, Pietro. Galileo Heretic. Princeton: Princeton University Press, 1987.

Robertson, George G., Card, Stuart, and Mackinlay, Jock. "Information Visualization Using 3D Interactive Animation." Communications of the ACM, 36 (4), 1993, pp. 57-71.

Scott, J.F. A History of Mathematics. From Antiquity to the Beginning of the Nineteenth Century. London: Taylor & Francis Ltd, 1969.
