

Volume 4, Number 2
Winter 1998

ON THE IMPACT OF DETERMINISTIC CHAOS ON MODERN SCIENCE AND PHILOSOPHY OF SCIENCE: IMPLICATIONS FOR THE PHILOSOPHY OF TECHNOLOGY?

Theodor Leiber, University of Augsburg


Philosophy relates everything to wisdom,
but through the methods of science!
(Immanuel Kant)


1. OVERVIEW AND INTRODUCTION: DETERMINISTIC CHAOS, CHALLENGE FOR WHOM OR WHAT?

The modern concept of deterministic chaos arises from the mathematical and physical investigation of the topological and dynamical properties of deterministic systems. The notion of deterministic chaos is frequently used in an increasing number of scientific as well as non-scientific contexts, ranging from mathematics and the physics of dynamical systems to all sorts of complicated time evolutions, e.g., in chemistry, biology, physiology, economics, sociology, and even psychology. Here, the central epistemological impact of chaos research is on matters of prediction and computability of most nonlinear deterministic systems, while the various concepts of deterministic chaos in use do not constitute a new science, or a revolutionary change of the scientific world picture. Instead, chaos research provides a sort of toolbox of topological, perturbational, and numerical methods which are certainly useful for a more detailed analysis and understanding of those dynamical systems whose deterministic trajectories are, roughly speaking, endowed with the property of exponential sensitivity to initial conditions. Such a property, then, implies merely one, but a quantitatively strong, type of effective or empirical limitation on long-time computability and predictability, respectively. Several reasons are given for why the impact of deterministic chaos research on quantitative modelling in the analysis of social and technological processes seems to be rather limited.

With respect to deterministic chaos, we distinguish between metaphysical, epistemological, and mathematical determinism. Epistemological determinism amounts to the working hypothesis of lawlikeness of processes--or, of the heuristic, theoretical, and empirical fruitfulness of lawlike scientific models. Mathematical determinism is given by the fundamental existence theorem of ordinary differential equations, i.e., the existence and uniqueness of a solution for any initial state. We take it that metaphysical determinism is a transcendent assumption neither provable nor refutable, because epistemological determinism is not strictly confirmable (because of the problem of induction); and mathematical determinism is an idealization which is not strictly confirmable empirically (because measurements are always given with finite precision and are endowed with noise). Whenever the term "chaos" appears in the following, it is meant (if not explicitly stated otherwise) to denote "deterministic chaos" in the sense of mathematical determinism of trajectories.

Deterministic chaos has been in vogue now for more than a decade. Applications to problems long assumed to be quite regular and predictable, as well as to problems long intuitively known to be "chaotic," have been proposed, and quite a number of them successfully so. Meanwhile, the explanatory ambitions and applications of chaos research, more specifically of nonlinear dynamical system theory, cover many types of dynamical evolutions in the empirical sciences such as physics, chemistry, biology, economics, sociology, physiology, and psychology (Başar, 1990; Duke and Pritchard, 1991; Haken, 1983; Küppers, 1996; Mainzer, 1997a; Mainzer, 1997b; Skarda and Freeman, 1987). Basically, the various applications of nonlinear systems theory try to utilize the results established in mathematical, and especially physical, chaos research and nonlinear dynamics, and they constitute the field of research where the basic impacts of deterministic chaos are to be located. Thereby, interesting and sometimes illuminating quantitative and qualitative models, treatable by established mathematical methods, are provided even in traditionally non-mathematized sciences.

A general lesson to be learned from the various phenomena of deterministic chaos in mathematics and physics simply is that--as may have been well known before the advent of modern chaos research--mathematical determinism, especially if it is assumed to imply long-time computability, is an idealization never achievable in the empirical world of actual modelling, measurements, and computations. More specifically, it follows from deterministic chaos research that any actual perturbation of a deterministic trajectory of a dynamical system is amplified exponentially in the course of time, and thereby long-time computability is strongly limited for practical reasons.

2. DETERMINISTIC CHAOS IN MATHEMATICS AND PHYSICS

The modern concept of deterministic chaos has its central bearings, success, and progress in the mathematics and physics of the topological and dynamical properties of deterministic systems.

Different accounts or definitions of chaos have been around in the literature of the last several years. Not surprisingly, but often ignored, there is no general definition of deterministic chaos applicable to the majority of interesting cases (Leiber, 1996a, chap. 15; Leiber, 1997; Leiber, 1998a).

It is only in the special case of the iteration of a function (e.g., the logistic function $x \mapsto a x(1-x)$, $x \in [0,1]$, $a \leq 4$) that there is agreement on the mathematical (actually topological) properties characteristic of deterministic chaos (Peitgen et al., 1994, chap. 1): (i) sensitive dependence on the initial conditions (SD); (ii) dynamical mixing in state space (MIX); (iii) periodic points lying dense in state space (DPP). Note that the mathematical definition of chaos, $(\mathit{MIX} \wedge \mathit{DPP} \wedge \mathit{SD}) =: \mathit{DC}$, presupposes the existence of a state space whose states are precisely localizable in principle, and it applies to closed systems.
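As a concrete illustration of the SD property, the following minimal sketch (the parameter $a = 4$ and the initial separation of $10^{-12}$ are assumptions chosen for this example, not values taken from the text) iterates the logistic function for two nearby initial conditions and prints their growing separation:

```python
# Minimal sketch of sensitive dependence (SD) for the logistic map
# x -> a*x*(1-x); a = 4 and the initial separation 1e-12 are illustrative.

def logistic(x, a=4.0):
    return a * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # two initial conditions differing by 1e-12
for n in range(1, 61):
    x, y = logistic(x), logistic(y)
    if n % 10 == 0:
        print(f"n = {n:2d}   |x - y| = {abs(x - y):.3e}")

# The separation grows roughly like exp(n * ln 2) = 2**n, i.e., by a
# factor of ~1000 every ten iterations, until it saturates at order 1.
```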

Moreover, while the chaos industry is still expanding, the interpretational practice, or meaning variance, of the term "chaos" varies a lot, and some scientists are even quite unhappy about the very notion of deterministic chaos itself, which was accidentally introduced by Li and Yorke (1975).

The distinguishing property of dynamical deterministic chaos is the chaotic long-time behavior of dynamical systems. Deterministic equations of bounded motion with few degrees of freedom give rise to complicated solution trajectories, (i) which do not exhibit any quasi-periodicities without any external disturbances, and (ii) which are extremely, i.e., exponentially, sensitive to small deviations in the initial conditions.
For details about the chaotic mechanisms in Hamiltonian and dissipative dynamical systems (and also some graphical illustrations) see, e.g., Leiber, 1996a, pp. 380-397; Thomas and Leiber, 1994. In passing we may also note that despite the exponential sensitivity of individual deterministic trajectories, in many cases some structural predictability is possible, e.g., when Hamiltonian chaotic trajectories are enclosed by invariant tori in "almost" non-integrable Hamiltonian systems, or when the dissipative dynamics is contracted to relatively low-dimensional strange attractors.

The case of regular, i.e., long-time effectively computable, dynamics is given if the distance $d(t)$ of neighboring trajectories is constant or grows algebraically in the course of time,

$$d(t) \approx \mathrm{const} \qquad \text{or} \qquad d(t) \propto t^{\alpha},$$

with a system-dependent constant $\alpha$. This implies that the length of the computable time interval of the trajectories' evolution also grows algebraically with the precision of initial data. The reason is that the $N$ binary digits of the initial data are increasingly lost in the course of the trajectories' evolution because of the algebraic amplification of initial value and/or computational errors; i.e., in the regular case, on the order of $N/2^N$ bits per computational step are lost.

In the case of chaotic, i.e., not effectively long-time computable, motion, neighboring trajectories diverge exponentially (exponential sensitivity),

$$d(t) \propto \exp(\lambda_{+} t), \qquad \lambda_{+} > 0,$$

where $\lambda_{+}$ denotes the largest characteristic Lyapunov exponent. Accordingly, for the chaotic case the computable time interval, $t_c \propto 1/\lambda_{+}$, merely grows logarithmically with increasing precision of initial data; per iteration, then, approximately one bit of initial data is lost.

Therefore, in principle the empirical distinction of regular and chaotic dynamics in a numerical experiment is achievable by quantitative estimates: for the case of $N$-bit computing precision, in regular systems any computable correlation with initial data is lost after approximately $2^N$ iterations, while in chaotic systems the same is true already after $N$ iterations. At the same time it is to be noted, however, that dynamical chaos in the sense of limited long-time computability is a matter of degree.
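This bit-counting estimate can be made concrete with a short numerical sketch; the logistic map again serves as an illustrative stand-in for a generic chaotic system, and the identification of "one bit per iteration" with $\lambda_{+} = \ln 2$ is an assumption of the example:

```python
# Sketch: estimate the largest Lyapunov exponent of the logistic map
# (a = 4) and the computable horizon for N-bit initial data.
import math

a, x = 4.0, 0.3
steps = 10_000
lyap_sum = 0.0
for _ in range(steps):
    lyap_sum += math.log(abs(a * (1.0 - 2.0 * x)))  # ln|f'(x)|
    x = a * x * (1.0 - x)
lam = lyap_sum / steps
print(f"lambda_+ ~ {lam:.4f}   (exact value: ln 2 = {math.log(2):.4f})")

# Chaotic case: about lam/ln(2) bits of initial data are lost per step,
# so N bits of precision support only ~N correlated iterations, while a
# regular system would lose the same information only after ~2**N steps.
N = 52  # mantissa bits of an IEEE double
print(f"horizon: ~{int(N * math.log(2) / lam)} iterations (chaotic) "
      f"vs ~2**{N} (regular)")
```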

Besides the mathematical definition of chaos, which is applicable only to relatively simple mathematical systems (e.g., logistic function, tent map, and Bernoulli shift), and besides the characteristics of Hamiltonian and dissipative chaos, a number of methods have been invoked, especially in the physics of non-dissipative nonlinear dynamical systems, to characterize the degree of complexity; and, according to the increasing degree of dynamical instability or non-predictability, a hierarchy of abstract dynamical systems has been established, roughly (i.e., neglecting intermediate degrees) ranging from (i) ergodicity, to (ii) mixing, and to (iii) K- and Bernoulli systems. (For definitions and technical details, see Batterman, 1991; Lichtenberg and Liebermann, 1983, chap. 5; Ornstein and Weiss, 1991.) Note that, unfortunately, it is a widespread abuse to denote all of these types of dynamical instability by the same word, chaos.

Whereas rigorous existence proofs for the properties of dense periodic points (DPP) and mixing (MIX) of mathematical chaos can only be given for very simple nonlinear systems, the overwhelming majority of dynamical systems, which are of interest in physics, and which are assumed to exhibit chaotic behavior, do not allow for comparable proofs. Therefore, such systems are investigated by means of a number of conceptually and empirically nonequivalent procedures (e.g., canonical perturbation theory, linear stability analysis, Lyapunov exponents, dynamical entropies, strange attractors, diffusion-like models), where in most cases numerical computer calculations play a decisive role. Almost everything known about strange attractors relies on computer numerics. (For detailed accounts, see, e.g., Buzug, 1994; Lichtenberg and Liebermann, 1983; Peitgen et al., 1994, chap. 3; Tabor, 1989.)

3. DEGREES OF PREDICTABILITY: REGULAR VERSUS CHAOTIC MOTION

Obviously, the question arises whether the mathematical and physical concepts of deterministic chaos can be reduced to a lowest common denominator which is not only meaningful theoretically, but first of all empirically. A positive answer can be straightforwardly given in the framework of a correlation function concept of the predictability of dynamical systems.

In the problem of predictability of dynamical evolutions of deterministic trajectories, actually three processes are involved, namely the observed one $x(t)$, the model one $y(t)$, and the hypothetically underlying real process $z(t)$. Then, the mean-square error $\langle \sigma^2 \rangle = \langle (x-y)^2 \rangle$ as taken from finite empirical averaging provides a universally adopted measure for prediction accuracy,

$$\langle \sigma^2 \rangle \;=\; \frac{1}{N} \sum_{j=1}^{N} \left[ x(t_j^0 + \tau) \,-\, y(t_j^0 + \tau) \right]^2 \qquad\qquad (1)$$

where $j$ counts the different observations, and $t_j^0$ and $t_j^0 + \tau$ denote the starting instant and the time instant of measurements, respectively. It is assumed that the greater the number of observations performed, the more faithful is the error estimate in Eq. (1).

The degree of predictability can now be measured by a coefficient of correlation between the observed process and the model process at the time moment $\tau$ after observation has started:

$$D(\tau) \;=\; \frac{\langle x\,y \rangle}{\sqrt{\langle x^2 \rangle\,\langle y^2 \rangle}}.$$

Since the initial value of prediction $y_0$ is taken to be equal to $x_0$, we have $D(\tau=0) = 1$; empirically, with increasing $\tau$ the degree of predictability $D$ reaches zero. Generally, the closer $D(\tau)$ to unity (from below) the more satisfactory the forecast, and the closer $D(\tau)$ to zero (from above) the larger the discrepancy between observation and prediction. The time span of predictable behaviour, $\tau_{\mathrm{pred}}$, is defined by $D(\tau_{\mathrm{pred}}) = 0.5$, which corresponds to the situation that the absolute error $\langle \sigma^2 \rangle$ is of the same order as the observed process's variance $\langle x^2 \rangle$: $\langle \sigma^2 \rangle \approx (\langle x^2 \rangle + \langle y^2 \rangle)/2 \approx \langle x^2 \rangle$.
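To see how $D(\tau)$ behaves in practice, the following hedged sketch treats an ensemble of logistic-map orbits as the "observed" process and the same orbits started with a tiny error as the "model"; the mean-centered form of the correlation coefficient is used here so that $D$ actually decays toward zero, and all numerical choices are illustrative assumptions:

```python
# Sketch: degree of predictability D(tau) for chaotic "observations"
# x(t) versus a "model" y(t) started with a small initial error.
import math

def evolve(x0, tau, a=4.0):
    x = x0
    for _ in range(tau):
        x = a * x * (1.0 - x)
    return x

N = 400
starts = [0.1 + 0.002 * j for j in range(N)]   # ensemble of observations
for tau in (0, 10, 20, 30, 40):
    xs = [evolve(s, tau) for s in starts]          # observed process
    ys = [evolve(s + 1e-10, tau) for s in starts]  # model with 1e-10 error
    mx, my = sum(xs) / N, sum(ys) / N
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    print(f"tau = {tau:2d}   D = {num / den:+.3f}")
# D stays near +1 while the amplified error is still small and collapses
# toward 0 around tau ~ ln(1e10)/ln 2 ~ 33; tau_pred is where D = 0.5.
```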

While in the case of regular motion the mean-square prediction error grows algebraically,

$$\langle \sigma^2 \rangle \;=\; \tau^{\alpha}\, \sigma_{\Sigma}^2, \qquad \text{with} \qquad \sigma_{\Sigma}^2 = \sigma_f^2 + \sigma_{\nu}^2 + \sigma_{\Delta M}^2,$$

in the case of deterministic chaos it grows exponentially,

$$\langle \sigma^2 \rangle \;=\; \exp(2 \lambda_{+} \tau)\, \sigma_{\Sigma}^2,$$

with $\sigma_{\nu}^2$, $\sigma_f^2$, and $\sigma_{\Delta M}^2$ representing the contributions of "measurement noise" (i.e., finite errors in measurement and numerical precision of initial data), other physical noises (e.g., stochastic forces), and the impact of model inaccuracy $\Delta M = M_x - M_z$, respectively. Empirically, none of these noises is negligible, while for the case of pure deterministic chaos we may omit noises other than perturbations of initial data due to finite precision in measurement and numerical computation.

Equating the mean-square error $\langle \sigma^2 \rangle$ with the observation variance $\langle x^2 \rangle$, we can estimate the time of predictable behavior in terms of the specified signal-to-noise ratio $\mathrm{SNR} = \langle x^2 \rangle / \sigma_{\Sigma}^2$ for the regular case,

$$\tau_{\mathrm{pred}}^{\mathrm{reg}} \;\approx\; \left( \frac{\langle x^2 \rangle}{\sigma_{\Sigma}^2} \right)^{1/\alpha}, \qquad\qquad (2)$$

and for the chaotic case:

$$\tau_{\mathrm{pred}}^{\mathrm{chaos}} \;\approx\; \frac{1}{2\lambda_{+}} \,\ln \left( \frac{\langle x^2 \rangle}{\sigma_{\Sigma}^2} \right). \qquad\qquad (3)$$

The feature of local exponential instability, or sensitivity to initial conditions, as given in deterministic chaos means, e.g., that to increase $\tau_{\mathrm{pred}}^{\mathrm{chaos}}$ by an order of magnitude, the signal-to-noise ratio must increase by the factor $e^{10} \approx 20{,}000$. Also, the positive Lyapunov exponents considerably lower the predictable time span, especially when $\lambda_{+} \gg 1$, $\mathrm{SNR} \gg 1$ (i.e., small errors), and $\lambda_{+} \gg \ln \mathrm{SNR}$.
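The contrast between the algebraic scaling of Eq. (2) and the logarithmic scaling of Eq. (3) can be checked with a few lines of arithmetic; the values $\alpha = 2$ and $\lambda_{+} = \ln 2$ below are illustrative assumptions, not values from the text:

```python
# Worked check of Eqs. (2) and (3): algebraic vs logarithmic growth of
# the predictable time span with the signal-to-noise ratio SNR.
import math

alpha = 2.0            # illustrative algebraic growth exponent
lam = math.log(2)      # illustrative largest Lyapunov exponent

for snr in (1e3, 1e6, 1e9):
    tau_reg = snr ** (1.0 / alpha)                 # Eq. (2)
    tau_chaos = math.log(snr) / (2.0 * lam)        # Eq. (3)
    print(f"SNR = {snr:.0e}   tau_reg ~ {tau_reg:8.0f}   "
          f"tau_chaos ~ {tau_chaos:4.1f}")

# A millionfold improvement of SNR multiplies tau_reg by 1000 here but
# adds only ln(1e6)/(2*lam) ~ 10 units to tau_chaos; e**10 ~ 2.2e4 is
# the SNR factor behind the text's order-of-magnitude estimate.
print(f"e**10 = {math.exp(10):,.0f}")
```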

In deterministic dynamical systems the relationship $\tau_{\mathrm{corr}} < \tau_{\mathrm{pred}}$ is fairly typical (where $\tau_{\mathrm{corr}}$ denotes the correlation time). The lower limit $\tau_{\mathrm{pred}} = \tau_{\mathrm{corr}}$ is realized when no dynamic equation is available and the prediction is based on the principle that tomorrow will be like today. (Incidentally, all attempts ever made to obtain a value of $\tau_{\mathrm{pred}}$ markedly exceeding $\tau_{\mathrm{corr}}$ by statistical forecasting methods based on autoregression-type linear algorithms have been in vain. The comparability of $\tau_{\mathrm{pred}}$ and $\tau_{\mathrm{corr}}$ can only be presumed from general considerations.)

On the basis of Eq. (3) the predictability horizon $\tau_{\mathrm{hor}}$ can be defined as a finite timespan of predictable behavior that cannot be surpassed by either improved measuring instruments or a refined prediction model:

$$\tau_{\mathrm{hor}} \;=\; \lim_{\sigma_{\nu} \to 0,\ \Delta M \to 0} \tau_{\mathrm{pred}}. \qquad\qquad (4)$$

Beyond the predictability horizon $\tau_{\mathrm{hor}}$ no model can provide an adequate forecast. In those cases where for all tested model processes $y(t)$ it is impossible to achieve a high degree of predictability $D(\tau)$ for times exceeding the domain of the correlation coefficient, i.e., where $\tau_{\mathrm{pred}} \approx \tau_{\mathrm{corr}}$, it should be admitted that a dynamic model for process $x(t)$ is not available and that $x(t)$ should be categorized as noise, i.e., as a stochastic process (in the general sense of the theory of probability) to which no dynamical prediction model can be fitted. At the same time, processes that are predictable for times exceeding the correlation time are to be classed with partially determinate ones.

In summary, any deterministic or stochastic process has a limited predictability range stemming from inaccuracy of the model equation, perturbing action of instrumentation or strength of measurement noise, fluctuations in the system concerned, bifurcations in the course of evolution, etc., and constraints due to costs. Deterministic chaos in the sense of exponential instability of dynamical systems is a quantitatively severe limitation of the long-time predictability of deterministic systems, because any sort of error, deviation, or perturbation is amplified exponentially, see Eq. (3).

Here, some remarks seem to be in order. Some physicists (possibly because of the lack of a generally applicable definition of deterministic chaos) seem to suggest that no specific distinction should be made between chaos and noise; i.e., deterministic chaos is just denoted as "low-dimensional noise." Since Boltzmann's microscopic chaos, hypothetically underlying statistical mechanics (or the stochastic forces representing the heat bath), is not necessarily deterministically chaotic, however, such an extension of the concept of noise implies that, at least, we unnecessarily lose the conceptual differentiation between two different mechanisms of stochasticity.

Besides deterministic chaos there are, however, other severe limitations of computability: (i) physical limitations on the realizability of computing machines (e.g., the quantum uncertainty relation, heat dissipation); (ii) numerical untreatability and uncomputability (e.g., too high computational problem complexity), which are much more important; and (iii) computer errors (e.g., hardware errors, software errors, algorithm errors).

Untreatable problems are those which, depending on some system parameter, have exponentially growing algorithmic complexity; if the computational complexity is infinite for any problem formulation conceivable to date, we call the system uncomputable. For a discussion of points (i) and (ii), and also a short survey on information-based complexity and computability problems in linear analysis, see Leiber (1996b, pp. 26-40).

Note also that from the theory of recursive computability in classical linear analysis, to date, almost nothing is available about the computational complexity of nonlinear (i.e., potentially chaotic) problems. It is clear, however, that the class of numerically untreatable systems (which are not effectively algorithmically computable because of exponentially growing computational complexity) and the class of chaotic systems are not identical: every chaotic system is untreatable (i.e., not long-time computable), whereas untreatable problems are not necessarily chaotic (e.g., there exist a number of linear, non-chaotic problems which are untreatable; Leiber, 1996b, pp. 36-40). The deterministically chaotic systems constitute a proper subset of the set of untreatable problems. Moreover, unique connections between dynamical system properties--e.g., nonlinearity, non-integrability, and dynamical instability on the one hand, and algorithmic complexity and limited long-time computability on the other hand--cannot be established. There are effectively treatable (i.e., algorithmically computable) systems which do not admit of a closed-form solution as a function of time (e.g., transcendental equations); there are nonlinear systems which are integrable (e.g., solitons); there are linear systems which are uncomputable, or untreatable (Leiber, 1996b, pp. 38-40); in the framework of mathematical ergodic theory it has been shown that algorithmic Kolmogorov complexity is not synonymous with dynamical instability (or "deterministic randomness"; Batterman, 1996); also, exponential instability is compatible with well-posedness, i.e., with the existence of a closed-form solution. In summary, for the instability hierarchy of dynamical systems there is no comprehensive characterization available in terms of computability concepts.

4. ON THE EPISTEMOLOGICAL IMPLICATIONS OF DETERMINISTIC CHAOS

It is obvious that the modern concept(s) of deterministic chaos significantly change the traditional conceptual content of the concept "chaos." (For a historical overview of some traditional concepts of chaos, see Leiber, 1996a.)

In my opinion, however, the novelty and fundamental character of the epistemological implications of deterministic chaos are quite restricted. Nevertheless, the features of deterministic chaos, from a classical mechanics point of view, question a number of implicit assumptions of classical mechanical physics, the "mechanical world picture" of the 19th century, and some assumptions of a positivistic philosophy of science.

(i) Deterministic chaos can be conceived as a property of low-dimensional, nonlinear, deterministic systems with more than two state space dimensions, which are not subjected to any external stochastic perturbations and which are not effectively treatable by means of linear perturbation theory. Nonlinearity (i.e., non-validity of the superposition principle) is a necessary but not sufficient condition for deterministic chaos in the sense of exponential sensitivity to appear. (Perturbation-theoretic methods have, however, proved very useful in the immediate neighborhood of Hamiltonian chaos, especially for establishing the KAM theorem; Leiber, 1996a, pp. 380-385, 390-395.)

(ii) In contradistinction to purely statistical models, chaotic dynamics can be partially subjected to quantitatively precise analyses other than probabilistic methods. At the same time, deterministic chaos research can be successfully applied only to systems with relatively few dimensions. For higher dimensional systems we usually have to invoke probabilistic modelling irrespective of whether the underlying deterministic dynamics is chaotic or not; in systems with attractors of large dimension, naive arguments indicate that recurrence times are astronomical: $\sum_{i=1}^{n} \lambda_i < 0$, $\lambda_n = 0$, and many $\lambda_i > 0$, i.e., $-1 \ll \lim_{t \to \infty,\, V(0) \to 0^{+}} (1/t) \ln[V(t)/V(0)] < 0$. Here there is only weak contraction of the occupied phase space volume, and the mean recurrence time $\tau \approx \sum_i \lambda_i / \sum_i \mu_i \gg 1$ ($i \leq n$), where $\lambda_i$ and $\mu_i$ denote the Lyapunov exponents of the deterministic flux and of the corresponding Poincaré map, respectively. Or, as John Guckenheimer has put it: "A likely bet is that most natural systems display either simple dynamics that are not chaotic or dynamics that are beyond the realm of low-dimensional chaotic attractors" (Guckenheimer, 1991, p. 7). Moreover, a quantitative argument given by Jean-Pierre Eckmann and David Ruelle (Ruelle, 1990, pp. 244 ff.) demonstrates that the (re-)construction of strange attractors from a time series by means of the Grassberger-Procaccia algorithm is to be interpreted very cautiously: from purely theoretical considerations it follows that for such attractors the correlation dimension can be estimated reliably only up to $2 \log_{10} N$, where $N$ is the number of utilized time series data. This implies that dimension estimates are only informative if they are well below $2 \log_{10} N$. In many cases presented in the literature, however, this is not the case (because usually $N \approx 1000$ and the measured "dimensions" are of the order of 6): "The 'proof' that one has low dimensional dynamics is therefore inconclusive, and the suspicion is that the time evolutions under discussion do not correspond to low-dimensional [deterministic] dynamics. It is possible that interesting information can nevertheless be extracted from the time series examined, but this would probably require new ideas. In the meantime prudence is in order, and claims that one can predict the stock market--for instance--using the ideas of dynamical systems appear somewhat unrealistic" (Ruelle, 1990, p. 247).
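Ruelle's bound is easily tabulated; the sample sizes in the following sketch are illustrative:

```python
# Ruelle's rule of thumb: a correlation dimension estimated from N data
# points of a time series is credible only if it is well below 2*log10(N).
import math

for n_points in (1_000, 10_000, 1_000_000):
    print(f"N = {n_points:>9,}   credible only well below "
          f"{2 * math.log10(n_points):.1f}")
# With N ~ 1000 the bound is 6 -- the very order of the "dimensions"
# reported in the criticized studies, hence the estimates are inconclusive.
```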

(iii) From a foundational point of view in physical theorizing there is some vague hope that in the framework of mathematical ergodic theory deterministic chaos might provide a micro-dynamical foundation of macro-properties as found in nonlinear non-equilibrium thermodynamics. The results established by the KAM theorem may be taken as providing a possible route from integrable, reversible behavior to probabilistic, irreversible behavior in statistical mechanics; however, these are still purely qualitative arguments, and no one is able to explicitly deduce from chaotic dynamics the basic equation(s) of one of the thermodynamic approaches.

(iv) Mathematical determinism is empirically rather meaningless, and the assumption that mathematical determinism should imply numerical long-time computability is simply misguided; a specifically illuminating case is the solution of the deterministic Hamiltonian $N$-body problem from celestial mechanics (Leiber, 1996a, pp. 390-395), where it can be shown explicitly that even a constructive solution, namely a globally convergent power series (Wang, 1991), can be practically worthless because very slow convergence and round-off errors make the series useless in numerical work. Deterministic chaos is, however, just one additional, quantitatively specific aspect strengthening such arguments, which state that mathematical determinism (theoretical or formal determinateness; traditionally sometimes called "absolute predictability") and predictability (determinability; effective computability) of individual trajectories are clearly to be distinguished, even if there were no deterministic chaos at all.

Note that Pierre Simon de Laplace equated mathematical determinism with absolute predictability, but only for the case of a super-intelligent being (later called the Laplacean demon) which would be able to know the initial conditions and interacting forces of a mechanical system with absolute precision, and which would in no way be limited physically. In modern terms: for an idealized infinite Turing machine, deterministic chaos would not even appear.

For a discussion of the relations between determinism, deterministic chaos, and freedom, see Leiber (1998a). It is an immediate consequence that a correct deterministic nomological (DN) explanation (based on a deterministic law) does not necessarily constitute a potentially correct long-time prediction; the corresponding equivalence is only theoretically valid, namely if the initial data are known with arbitrary precision; especially in the case of deterministic chaos the initial data have to be known with precision increasing at least exponentially with the time interval to be predicted. Therefore, long-time effective numerical computability or empirical predictability can no longer count--in fact they never could count--as decisive criteria for the operationalization (or verification) of deterministic lawlikeness.

From the SD property in chaotic systems, and the unavoidable (initial value) measurement errors and computational deviations inherent in any computational process even of non-chaotic systems, it has been concluded that (neglecting Duhem-Quine holism for the case of an Allsatz, i.e., a universal statement) deterministically chaotic laws of motion (e.g., difference or differential equations) are not falsifiable (from computer experiments) in the strict sense (i.e., deterministically), but are only treatable by means of the usual statistical methods (Düsberg, 1995). This claim is, however, not strictly valid, for at least two reasons: (i) There are cases where the Shadowing Lemma (Coven et al., 1988; Peitgen et al., 1994, pp. 122 ff.) can be proved explicitly. (ii) Moreover, such a degree of non-falsifiability is only larger for chaotic systems because in the long term they allow only for probabilistic predictions, i.e., the predictions are of the type that the dynamical system after some time will be found in some infinitesimal interval $dx$ of the state space with probability $p(x)\,dx$, where the probability density $p$ depends on the equation of motion considered (for an explicit example for the logistic equation, see Düsberg, 1995, p. 17), while non-falsifiability is nevertheless not completely negligible for non-chaotic systems, simply because mathematical determinism (including the reversibility and reproducibility of formal states) is an idealization which is not attainable by empirical science. Clearly, the difference between regular and chaotic systems is that for chaotic systems the reliability of predicted values decreases exponentially with increasing prediction time span, while in regular systems it decreases merely algebraically. In this sense, on the observational level all dynamical systems exhibit the property of effective irreversibility, but to different degrees, which may be of considerable importance, however, for practical purposes.
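To make the probabilistic mode of prediction concrete: for the fully developed logistic map ($a = 4$) the invariant density is known in closed form, $p(x) = 1/(\pi \sqrt{x(1-x)})$, so the long-run statistics of an orbit are precisely predictable even though its individual values are not. A minimal sketch follows (the orbit length and binning are illustrative choices, not taken from Düsberg's example):

```python
# Sketch: long-term predictions for the chaotic logistic map (a = 4) are
# probabilistic; the orbit visits dx near x with probability p(x) dx,
# with invariant density p(x) = 1/(pi * sqrt(x * (1 - x))).
import math

x = 0.123
bins = [0] * 10                       # histogram over ten bins of [0, 1]
steps = 200_000
for _ in range(steps):
    x = 4.0 * x * (1.0 - x)
    bins[min(int(x * 10), 9)] += 1

for i, count in enumerate(bins):
    lo, hi = i / 10, (i + 1) / 10
    # exact bin mass: the integral of p over [lo, hi] is (2/pi)*asin(sqrt(x))
    exact = (2 / math.pi) * (math.asin(math.sqrt(hi))
                             - math.asin(math.sqrt(lo)))
    print(f"[{lo:.1f}, {hi:.1f})   empirical {count / steps:.3f}   "
          f"exact {exact:.3f}")
```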

Also, deterministic chaos puts limitations on the feasibility of any theoretically intended reductive explanation which should be carried out via precise numerical computations of the system's state; e.g., in the case of micro-reductions, effective numerical untreatability may prevent the effective execution of an intended partial reduction because the reducing problem formulation may be numerically untreatable while the reduced problem formulation may not be (Leiber, 1998b).

Research on deterministic chaos provides mathematical and numerical refinements with regard to the solution structure of nonlinear differential (and difference) equations with respect to their dynamical stability, with some loose connection to questions of their algorithmic computability; and thus, deterministic chaos in physics constitutes distinct methodological progress. Also, "The mathematical theory of dynamical systems forms a substrate for the construction of computational tools that allow us to explore complex dynamical models much more efficiently than we have done so far, whether or not the systems are chaotic" (Guckenheimer, 1991, p. 8). Physical chaos research, however, does not constitute a new research programme, or a novel theory of physics. The theoretical core or negative heuristic is still constituted by the axioms and theorems of classical mechanics.

This is not to say that physical chaos research does not have its novelties, namely, its positive heuristic (investigate nonlinear systems with more complicated solution behavior); its novel predictions (e.g., homo- and heteroclinic points and related complicated orbits in Hamiltonian chaos; strange attractors of different types in dissipative chaos); and its novel applications (i.e., successful predictions). In this context, e.g., research in Hamiltonian chaos is merely one argument for maintaining that dynamical systems are richer in solution structure than the integrable part of mechanics, the importance of which has long been overemphasized.

Epistemologically, therefore, drawing a direct comparison between the findings of deterministic chaos and the fundamental changes of physical theorizing in the 20th century is an exaggeration. In contradistinction to relativistic and quantum mechanics, physical chaos research in the framework of classical mechanics does not develop novel fundamental structures of the micro- or macro-cosmos, though it has led to a certain renaissance of classical mechanics by emphasizing the general and possibly unifying question of algorithmic, effective computability of dynamical systems. Surely, with the advent of deterministic chaos in the natural sciences, the "dream of physicalism," in the sense of a belief in the feasibility of "perfect predictability" based on mathematical determinism, has met an additional and publicly very effective counterargument. Remember the famous, but rather metaphorical, notion of the "butterfly effect." But it should also be remembered that the thesis of "perfect predictability" was never tenable.

Moreover, some dynamic phenomena of deterministic chaos demonstrate that there is obviously no fundamental dichotomy between determinism and randomness in mathematical modelling, because unstable deterministic, chaotic systems may model specific random phenomena. For a variety of physical examples of ergodic systems it can be shown (Ornstein and Weiss, 1991) that a deterministic process is indistinguishable from a non-deterministic Markov process up to deviations due to a finite partition of the state space. If this partition is chosen as the finite limit of measurement accuracy, a deterministic and a Markov process model of a Sinai billiard are observationally indistinguishable. Thus, results from the investigation of the instability hierarchy of dynamical systems show that it is not always unambiguously possible to decide on the basis of empirical success whether the model adopted should be mathematically deterministic or indeterministic (i.e., stochastic). This again shows that mathematical determinism is not very meaningful empirically, and that the determinism or indeterminism of our dynamical models should be conceived as a matter of degree; deciding between a deterministic and an indeterministic model of description can be a convention depending on the choice of the precision of analysis. The connection between statistical and deterministic description is quite intricate indeed. For many, and probably for most, types of predictions, statistical description is operationally more meaningful, since it reflects the finite precision of measurement and numerical processing, and it bypasses the fundamental limitations associated with the instability of the hypothetically underlying deterministic motion.
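The observational indistinguishability claim can be made vivid with a coarse-grained deterministic orbit. The sketch below uses the logistic map at $a = 4$ with the partition of $[0,1]$ at $1/2$ as a textbook stand-in (not the Sinai billiard actually discussed by Ornstein and Weiss): the resulting symbol sequence passes simple frequency tests for a fair-coin Bernoulli process.

```python
# Sketch: a finite partition turns a deterministic chaotic orbit into a
# symbol sequence statistically indistinguishable from a Bernoulli process.
x = 0.2345
symbols = []
for _ in range(100_000):
    x = 4.0 * x * (1.0 - x)
    symbols.append(0 if x < 0.5 else 1)   # partition [0, 1] at 1/2

p1 = sum(symbols) / len(symbols)
p11 = (sum(1 for a, b in zip(symbols, symbols[1:]) if a == b == 1)
       / (len(symbols) - 1))
print(f"P(1)    ~ {p1:.3f}   (fair coin: 0.500)")
print(f"P(1, 1) ~ {p11:.3f}   (fair coin: 0.250)")
# On this partition, frequency tests cannot decide between the
# deterministic model and a genuinely stochastic (Markov) one.
```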

5. IS THERE ANY RELEVANCE OF CHAOS RESEARCH TO THE ANALYSIS OF SOCIAL AND TECHNOLOGICAL PROCESSES?

First of all, let us hear some statements from a 1989 paper published in the American Journal of Physics (the copyright of which is with the American Association of Physics Teachers), under the title "Chaos versus Predictability in Formulating National Strategic Security Policy" (Saperstein and Mayer-Kress, 1989). There it is maintained that:

A generally recognized relevance of current physics methods to important nonphysics problems should make it much easier to attract and keep physics students. It thus seems reasonable for physicists to discuss and develop such nonphysical problems both in and out of their classrooms. Aside from attracting students, such activities by physicists may make important contributions to the public debate and resolution of major national and international issues (Saperstein and Mayer-Kress, 1989, p. 217; my emphasis).

After the announcement that "the well-known transition from laminar to turbulent flow is a heuristic analogy to the transition from cold to hot war," Saperstein and Mayer-Kress present a "simplified procurement model for the Strategic Defense Initiative (SDI) . . . which can be used to determine the outcome of various deployment modes" (Saperstein and Mayer-Kress, 1989, p. 217). As a result of their numerical investigation they conclude:
Because of uncertainty as to which, if any , parameter sets are characteristic of the "real world", we look at many sets . Within this variety, it is possible to find the desired "yes" answers to both of these questions [posed there]. . . . These results from a very simple model , which suggest caution toward a policy of deploying SDI , indicate the usefulness of applying physical ideas to the nontechnological world of strategy and public policy making .

We have just introduced a large number of model parameters, many of which cannot be adequately pinned down from the open literature. There will be more such parameters as the model is developed. There will also be several model functional relationships that cannot be directly determined via observations of the present world scene. And yet we wish to learn something useful--applicable to the world scene--from our model, incomplete and uncertain as it obviously is (Saperstein and Mayer-Kress, 1989, p. 219; my emphasis).

Despite their admission of the rather approximative character of their modelling approach (via some nonlinear rate equations), the authors hold to their general claims for its fruitfulness:

Not only is physics useful for the discussion of the technological subunits of policy (will they work, separately and together as a functioning system?), it is also useful for analyzing the entire policy structure itself. It can throw significant light on the fundamental policy questions: even if it all "works", will it really do for us that which we want done? The ability to deal with such questions should certainly add to the pride of physics students and their faculty (Saperstein and Mayer-Kress, 1989, p. 222).

Now there can be no doubt that there is an important place for the widespread application of the great amount that has been learned about nonlinearity in chaos research in recent years. But some concerns may already be formulated as to whether research as it is now will succeed in the broad sense hoped for. For the case of potential applications in the biological sciences it has been stated:

More and more nonlinear research is becoming either marginal or irrelevant, aided and abetted by the wide availability of larger and larger computers, and the ease of formulating variations on a basic mathematical theme and doing one more case. Indeed, many of these incremental explorations yield fascinating special features. However, the important questions are: first, do they extend our general understanding; or, alternatively, do the special features really provide new, quantitative insight to some particular experimental observation? It's not clear in many instances of published research today whether the answer to either [question] is in the affirmative (Krumhansl, 1993, p. 97; my emphasis).

In this sense, in quantitative biochemical and biomolecular modelling there seems to be a certain tendency toward "marginal modelling," or "science by advertisement": i.e., extremely sophisticated nonlinear simulations have been and are carried out "that show interesting behavior, but little effort has been expended to seek out substantively their presence or absence in situations they allegedly represented." Similarly, many of the current nonlinear dynamical models of the conformations of biomolecules are "biology by advertisement" (Krumhansl, 1993, p. 98).

Moreover, there is a certain amount of plausibility to the estimate that "a similar situation is developing in certain areas of nonlinear science today, particularly as new supercomputers allow the exploration of more and more complex model problems. . . . There are notable exceptions, as in hydrodynamics where exploration of singular properties, local structures, and turbulence has maintained close and faithful contact with the physics" (Krumhansl, 1993, p. 98).

Obviously, deterministic chaos and the predictability problems associated with it only apply to mathematized problem formulations in the form of deterministic dynamical equations. This severely restricts the actual impact of deterministic chaos in disciplines other than mathematics, physics, and physico-chemical dynamics (and even there). In this sense, an evident argument for why deterministic chaos is not so important in social process modelling is that deterministic dynamical models are rarely of successful use there; instead, stochastic modelling (e.g., Markovian systems) and probabilistic ("top-down") approaches (e.g., synergetics; probabilistic diffusion-like processes; phase transitions in non-equilibrium systems) are frequently utilized (Gsänger and Klawitter, 1995; Anonymous, 1996b; Weidlich, 1994). An immediate sub-argument in the same vein is that, even if deterministic models were used on some basic level of description, in any realistic model system probabilistic modelling prevails if the number of degrees of freedom exceeds a certain limit (say, $10^2$-$10^3$); then, the micro-information about deterministic trajectories is smoothed out, e.g., through coarse-graining, or probability densities.

It seems clear (or, it is almost trivial to say) that the limited successful applicability of dynamical (and also stochastic) mathematical models in, e.g., the social and behavioral sciences is basically the sheer result of the enormous, truly tremendous complexity of the relevant systems there. This implies irreversibility, limited predictability, and limited reproducibility, irrespective of whether the systems are dynamically chaotic or not. Here, complexity is not meant as a technical term but comprises, among other things, the following problems: (i) the identification, construction, and interpretation of relevant observables with stable properties is most often too hard to achieve; (ii) a corresponding measure space and its empirical basis are not easily, or unambiguously, or at all definable (and empirical data are often quite unsharp; for the explicit mathematical discussion of an interesting simple example from social politics, see Krause, 1996); (iii) most often it is an open question what the relevant interaction mechanisms are, or how they should be modelled. In other words, at least from a physicist's point of view the rather limited success of quantitative-mathematical methods in the social and behavioral sciences is a result of the fact that in the generic case we would have to investigate many many-particle systems which are nonlinearly coupled, which are subjected to stochastic disturbances, whose particle properties are changing in the course of dynamic evolution, and whose dynamical "laws" are also changing (or are not uniquely identifiable with appropriate reliability). The lack of detailed and law-like dynamical models--which are also well-confirmed in the sense of being an integral part of a successful theory-net--in the social sciences leads to a distinct preference for employing statistical approaches in the sense of static models (i.e., of statistical analyses of empirical data), while dynamical models are at most used in the sense of quantitative simulations--which can, in most cases, merely be given a qualitative, or rather vague, interpretation.

Nevertheless, there are dynamical models in use (e.g., cellular automata, generalized rate equations, statistical mechanics, game theory, etc.) which give partial insights into selected complex social and behavioral dynamics (Gsänger and Klawitter, 1995; Hegselmann and Peitgen, 1996; Hegselmann et al., 1996; Mainzer, 1997a; Mainzer, 1997b; Troitzsch, 1996; Weidlich, 1994). Therefore, it seems that mathematical modelling and numerical simulations in the social sciences constitute interesting approaches supplementary to the core of the application of mathematical statistics and non-quantitative investigation. And mathematical modelling can be fruitful theoretically, heuristically, and sometimes even empirically; but it will always be very restricted in the social sciences.

Among the possible successes to be gained from mathematical (quantitative) modelling of social processes, we find the following (see Hegselmann and Peitgen, 1996, pp. 13-15): (i) Theoretical model reductions can improve the theoretical understanding of the relations between micro- and macro-levels of description (e.g., micro-explanation of the appearance of unexpected properties on the macro level; unexpected reduction of known macro-phenomena to micro-processes). (In this sense social scientists may learn from chaos research "that a well founded substantial theory on the micro level is indispensable for understanding even the least complex social processes and for the analysis of process produced data: Curve fitting procedures on the macro level will never do, and fitting a standard linear model to data produced by a nonlinear process will do neither"; Troitzsch, 1996, p. 184. Theoretical results (Gaines, 1976; Gaines, 1977; Pearl, 1978) about the relation between the amount of data (e.g., the number of observations), the complexity of models, and their predictive properties seem to imply that indeterministic, stochastic models which have been derived from empirical data do not exhibit valuable predictive abilities, independent of the amount of data available.) (ii) Quantitative abstract models may allow for qualitative explanations, and they may further the "heuristic understanding" of the dynamics of complex processes, where nonlinear dynamical modelling emphasizes the importance of the formation of organizational structures without central processing units ("self-organization," or non-equilibrium phase transitions). (Social scientists may learn from chaos research "that equilibrium states are seldom found in complex systems, and hence that linear models are not very well suited to the analysis of data in the social sciences"; Troitzsch, 1996, p. 184.) (iii) Mathematical modelling, and numerical and analog simulation, may provide non-negligible contributions to the process of theory formation in the empirical social sciences. (E.g., "An analysis of a noisy chaotic time series will yield the attractor dimension and thus give hints at the number of variables--e.g., subpopulations, or types of individuals, or attributes of groups--involved in the process under observation"; Troitzsch, 1996, p. 185.) Besides hinting at such general model-theoretic, explanatory, and pedagogical aspects (or hopes, or regulative ideas of research), I cannot do better than emphasize a statement recently given in the literature:

Whereas it is relatively easy to design and to simulate a complex model of a process in a complex social system in such a way that the model displays complex behaviour, we shall see that it is extremely difficult to find or to gather data supporting such a model. Thus, in the realm of social science it might be in fact impossible to make reasonable use of the methods used in physics to detect chaotic behaviour (Troitzsch, 1996, p. 162).
Quite obviously, what has been said about the impact of deterministic chaos on quantitative sociology also applies to the field of investigations in philosophy of technology, including ethics of technology. (Some recent works in ethics of technology are Hastedt, 1991; Lenk, 1992; Lenk, 1994; Ropohl, 1996. For an overview of the state of the art in ethics of technology, see Grunwald, 1996a; Grunwald, 1996b. Note, however, that Grunwald's criticism is sometimes exaggerated, and his constructivist arguments--Grunwald, 1994 and 1995--against the feasibility of a quantitative approach to a social or technological system's dynamics are, at least, open to critique. See also Anonymous, 1996a.) It also applies to action theory of decision and planning; technology assessment, etc.; and especially to quantitative models in technology assessment (e.g., trend extrapolation; formation of historical analogies; Delphi reports; analysis of relevance trees; risk analysis; model simulation; cost-profit analysis). (For an overview of the state of the art in technology assessment, see Bullinger, 1994; Mohr, 1995. For the repertoire of methods employed in technology assessment, see VDI-Report, 1991.)

In contradistinction to the dynamical models in the natural sciences (physics, chemistry, and biology), technological developments take place in much more complex scenarios comprising scientific, technological, economic, ecological, sociological, political aspects and the like. Therefore, it is tremendously more complicated to unambiguously fix procedures and rules for quantification in quantitative technology assessment which would guarantee the intersubjective and situation-invariant reproducibility of interpretation of quantitative models.

Nevertheless, model simulations or numerical experiments (executed on the basis of different mathematical problem formulations ranging from simple optimization computations to the ambitious models of operations research) are a powerful tool for studying the behavior of complexly interacting system networks. This is especially true if real-world experiments are impossible for theoretical, practical, or ethical reasons, or if the total effect of many interdependent causes can no longer be estimated intuitively. The basic methodological problems of quantitative dynamical technology assessment are these: while computer-assisted model simulations and predictions are often desirable and indispensable, the empirical adequacy of the modelling quantities is often insecure (because of the lack of dynamical models, insecure knowledge, insecure measurement units, subjective preferences, and high real-world complexity). The resulting consequences are that the dynamics and results of the model systems are hard to survey and interpret, and their empirical adequacy is not unambiguously decidable.

In this sense, the results of an opinion research poll among 208 mostly industrial research laboratories in Japan in 1990 seem to be typical. The researchers were asked about the efficacy as well as the degree of application of different technology assessment methods. This poll shows that the efficacy of model simulation is highly estimated (as effective, or most effective) by more than 40% of those interviewed, but its degree of application is almost zero. While trend extrapolation gained almost the same efficacy estimation, it is applied by more than 30% of those interviewed (Grupp, 1994, pp. 79-82). To be sure, the different methods of technology assessment provide us with possibilities to analyze technological developments to some extent in advance, and to avoid some of their undesirable effects (Bullinger, 1994). For realistic situations, however, quantitatively precise predictions are not effectively achievable, neither in the sense of deterministic nor of reliable probabilistic predictions. This is because--intuitively speaking--"the problems are much more difficult than the $N$-body problem, or weather forecasting."

It is in order to cite some recent opinions here:
--"Predictions, in the sense of absolute statements about the future, or firm forecasts, are never attainable in technology assessments. The complexity of the subjects of inquiry, as well as of the methods available, always intervene" (Bonnet, 1994 , p. 37).

--"To sum up, it can be said that there is no single method of technology assessment--only various methodologies for particular technology assessments. These methods all come out of the particular disciplines brought into play. Moreover, whether or not the methods are appropriate for a particular technology assessment depends upon the competence of the team and the availability of data relative to the problem" (Bonnet, 1994 , p. 49).

--"Within the scope of research and development, or innovation, social and political processes play so great a role, relative to technical aspects, that it would be unrealistic to expect determinate supporting statements. Predictions, in the sense of absolute statements about the future, are not available for technology assessments. The same holds for forecasts in the narrow sense, i.e., statements making truth claims with a high degree of reliability. "Foreshadowing" might be the best concept to use to characterize an open-future type of technology assessment; indeed, it seems to be the only possibility" (Grump, 1994 , p. 57).

--"Technology assessment can no longer be viewed as a tool for precise forecasting--or as providing, for what is happening now but is viewed as an early sage of what is to come, either a short-term or a long-term framework or perspective. Rather, technology assessment is closer to being a tool for the discussion of possibilities or alternative futures. Today, it seems, it is much more like the preliminary discussion of choices among possibilities--which politicians can then proclaim to be true or well grounded if they want to seem to have information about the future. This does not mean that we should no longer make predictions about future possibilities, especially if they can be made in the form of models or calculations--for here we do have better abilities than before. However, this should not be the only place we look for knowledge about responsible behavior; that is better sought in discussion or open arguments about what is or is not desired" (Petermann, 1994 , p. 110).

--"On the whole, technology assessment has come to be thought of--figuratively speaking--as a more open, softer, less science-like concept. The dominance of experts, or basing claims on hard evidence, have either disappeared or come to be treated as no more than background" (Petermann, 1994 , p. 111).

Therefore, in the sense of a methodological or practical (but not necessarily epistemological or ontological) anti-reductionism, we have to accept the thesis of non-separability of quantifications in social systems from the overarching systems of normative aims and values (and risk assessment). Accepting this should, however, neither lead to an overall negation of the possibility of quantitative modelling in specific cases, nor to an underestimation of the importance of non-quantitative analyses in the framework of a generalized systems theory (Kornwachs, 1991). In technology assessment, the need for interdisciplinary research efforts is obvious (and also rather well known, but not always realized); while chaos research is surely of rather limited relevance.

6. CONCLUDING REMARKS

To conclude, I would like to summarize the most general conclusions that can be drawn from contemporary chaos research: theoretically, deterministic chaos is conceived as a property of systems which are strictly deterministic in the sense of mathematical determinism; empirically, the most prominent feature of such systems, namely, the exponential sensitivity to initial conditions, leads to an amplification of any perturbation, noise, or error, which grows exponentially in the course of time; deterministic chaos always implies effective uncomputability (but not necessarily untreatability)--i.e., the precision of the initial data required for gaining a given precision of final data (and thus the computational complexity) increases exponentially fast; unambiguous definitions of deterministic chaos exist only for relatively simple mathematical maps and not for interesting cases of dynamical systems.

There is a hierarchy of degrees of computability of formal systems, and only the "edges" are known to date. Within this hierarchy we may call those systems chaotic which, basically because of their nonlinearity, exhibit at least some close analogues of the SD property (as a consequence of the MIX and DPP properties), and which are therefore algorithmically uncomputable (or have at least algebraically growing computational complexity) in the long term. Thus, deterministic chaos constitutes merely one problem type in the wide, and partially still unexplored, range of problems of effective computability. In a more general sense, chaos research provides us with intuitive examples and arguments which should be used to further our often underdeveloped abilities to do "nonlinear thinking" (Mainzer, 1997a); namely, conceptualization in terms of nonlinear causal nets instead of mono-causal chains. Such abilities are well trained by studying the dynamics of nonlinear systems, with their emphasis on the role of instabilities leading to exponential error amplification.

Another lesson to be learned from chaos research is that the mathematical and physical models of dynamical systems theory, which stress the importance of generic properties and structural stability (e.g., strange attractors, bifurcation scenarios), provide invaluable guidance in the study of specific problems (e.g., many numerical results previously qualified as "anomalous" are now used for identifying chaotic behaviour). Different methods are provided for analyzing solutions with interesting global properties in specific nonlinear models. Most of these methods rely heavily on numerical experiment and have led to a number of new methods of data analysis (e.g., dimension computations, Lyapunov exponents, Kolmogorov-Sinai entropy, phase space reconstruction, spectra of dimensions). It seems, however, that the range of applicability and validity of such methods has not yet been investigated comprehensively. None of the famous scenarios of the "bifurcation to chaos" (like period doubling, or the quasiperiodic route) seems to be sufficiently general that we can entirely dispense with analyses aimed at determining what actually occurs in specific dynamical models.

It should also be clear that chaos research does not constitute a new science or novel theory. Physical chaos research does not even exist as a coherent field (comparable, e.g., to quantum and relativity theories); and there is no comprehensive methodology available for, e.g., mathematical, Hamiltonian, and dissipative chaos. In these three general cases the properties of chaos are (or have to be) investigated by different methods. Thus, chaos research constitutes a rather loose collection of ideas and methods which can be added to the scientist's toolbox, and many are inherited from classical applied mathematics.

In summary, the different approaches to deterministic chaos tell us that there are severe quantitative limitations to long-time computability, and thus to controllability, even in deterministic systems with few degrees of freedom. Deterministic chaos thus constitutes one argument among many against the general positivistic belief in the complete computability of nature--a belief advocated at least since Galileo, Hobbes, and Descartes, but also by the 19th-century mechanists, 20th-century positivist philosophers of science, and many others.
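
A drastic illustration of this limitation, under assumptions of my own choosing (the doubling map, the starting value 1/3, and the deviation threshold), compares ordinary double-precision arithmetic with exact rational arithmetic for the doubling map x -> 2x (mod 1), which loses one bit of initial-condition information per step:

    from fractions import Fraction

    x_float = 1.0 / 3.0        # 53-bit floating-point approximation of 1/3
    x_exact = Fraction(1, 3)   # exact orbit: 1/3, 2/3, 1/3, 2/3, ...

    for n in range(60):
        if abs(x_float - float(x_exact)) > 0.25:
            print(f"floating-point orbit grossly wrong after {n} steps")
            break
        x_float = (2.0 * x_float) % 1.0   # round-off: dyadic initial value
        x_exact = (2 * x_exact) % 1       # exact rational arithmetic

The exact orbit is periodic with period two, but the floating-point orbit, being the exact orbit of a slightly different (dyadic) initial value, collapses onto the spurious fixed point 0 after about fifty iterations. No fixed-precision arithmetic, however fast, escapes this kind of long-time failure; only ever-increasing precision of the initial data does.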

For example, as Moritz Schlick put it: "In other words, arriving at a correct prediction based on causality is the true mark of lawlikeness" (Schlick, 1931, p. 150). Note that deterministic chaos also undermines Einstein's famous and somewhat superficial 1935 criterion of "physical reality," which reads: "If, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity" (Einstein et al., 1935, p. 777). For chaotic quantities, prediction with certainty on the basis of finite-precision data is precisely what is excluded.

The mathematized natural sciences are still rather close to the technocratic ideal of progress; but today they also draw attention to its limits. We now clearly see the epistemological limits of the insight we can gain into the dynamics of the material world, limits which have become accessible to precise methodological analysis only in recent times. The limitations of computability are limitations of controllability and feasibility. What is announced thereby is not the developmental endpoint of the mathematized natural sciences, but rather an insight into the indispensable value of more qualitative arguments within the quantitative sciences, a value which deserves to be promoted.

REFERENCES

Anonymous. 1996a. "Kritik und Replik zu A. Grunwald: Ethik der Technik; Systematisierung und Kritik vorliegender Entwürfe." Ethik und Sozialwissenschaften 7:205-281.

_______. 1996b. "Kritik und Replik zu H. Haken: Synergetik und Sozialwissenschaften." Ethik und Sozialwissenschaften 7:595-675.

Başar, E., ed. 1990. Chaos in Brain Function. Berlin: Springer.

Batterman, R. W. 1991. "Randomness and Probability in Dynamical Theories: On the Proposals of the Prigogine School." Philosophy of Science 58:241-263.

_______. 1996. "Chaos: Algorithmic Complexity vs. Dynamical Instability." In P. Weingartner and G. Schurz, eds., Law and Prediction in the Light of Chaos Research: Lecture Notes in Physics, Volume 273. Berlin: Springer, pp. 211-235.

Bonnet, P. 1994. "Methoden und Verfahren der Technikfolgenabschätzung: Exotische Hausmannskost?" In Bullinger, 1994, pp. 33-54.

Bullinger, H.-J., ed. 1994. Technikfolgenabschätzung. Stuttgart: Teubner.

Buzug, T. 1994. Analyse chaotischer Systeme. Mannheim: B.I.-Wissenschaftsverlag.

Coven, E., I. Kan, and J. A. Yorke. 1988. "Pseudo-Orbit Shadowing in the Family of Tent Maps." Transactions of the American Mathematical Society 308:227-241.

Düsberg, K. J. 1995. "Deterministisches Chaos: Einige wissenschaftstheoretisch interessante Aspekte." Journal for General Philosophy of Science 26:11-24.

Duke, D., and W. Pritchard, eds. 1991. Measuring Chaos in the Human Brain. Singapore: World Scientific.

Einstein, A., B. Podolsky, and N. Rosen. 1935. "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?" Physical Review 47:777-780.

Gaines, B. R. 1976. "On the Complexity of Causal Models." IEEE Transactions on Systems, Man and Cybernetics 6:56-59.

_______. 1977. "System Identification, Approximation and Complexity." International Journal of General Systems 3:145-174.

Grunwald, A. 1994. "Wissenschaftstheoretische Anmerkungen zur Technikfolgenabschätzung: Die Prognose- und Quantifizierungsproblematik." Journal for General Philosophy of Science 25:51-70.

_______. 1995. "Erkenntnistheoretischer Status und kognitive Grenzen der Technikfolgenabschätzung." In H.-P. Böhm, H. Gebauer, and B. Irrgang, eds., Nachhaltigkeit als Leitbild der Technikgestaltung: Forum für interdisziplinäre Forschung, Band 14. Dettelbach: Verlag Röll, pp. 29-42.

_______. 1996a. "Ethik der Technik: Systematisierung und Kritik vorliegender Entwürfe." Ethik und Sozialwissenschaften 7:191-204.

_______. 1996b. "Ethik der Technik: Entwürfe, Kritik und Kontroversen." Information Philosophie 4:16-27.

Grupp, H. 1994. "Einordnung der Methoden der Technikfolgenabschätzung in das Gefüge der Wissenschaften." In Bullinger, 1994, pp. 55-86.

Gsänger, M., and J. Klawitter, eds. 1995. Modellbildung und Simulation in den Sozialwissenschaften: Forum für interdisziplinäre Forschung, Band 13. Dettelbach: Verlag Röll.

Guckenheimer, J. 1991. "Chaos: Science or Non-science?" Nonlinear Science Today 1:6-8.

Haken, H. 1983. Synergetics: An Introduction. Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology. Berlin: Springer.

Hastedt, H. 1991. Aufklärung und Technik: Grundprobleme einer Ethik der Technik. Frankfurt am Main: Suhrkamp.

Hegselmann, R., and H.-O. Peitgen, eds. 1996. Modelle sozialer Dynamiken: Ordnung, Chaos und Komplexität. Wien: Verlag Hölder-Pichler-Tempsky.

Hegselmann, R., U. Müller, and K.G. Troitzsch, eds. 1996. Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View . Dordrecht: Kluwer.

Kornwachs, K. 1991. "Glanz und Elend der Technikfolgenabschätzung." In K. Kornwachs, ed., Reichweite und Potential der Technikfolgenabschätzung. Stuttgart: Poeschel Verlag, pp. 1-22.

Krause, U. 1996. "Impossible Models." In Hegselmann et al., 1996, pp. 65-75.

Krumhansl, J. A. 1993. "Nonlinear Science: Toward the Next Frontiers." Physica 68:97-103.

Küppers, G., ed. 1996. Chaos und Ordnung: Formen der Selbstorganisation in Natur und Gesellschaft. Stuttgart: Reclam.

Leiber, T. 1996a. Kosmos, Kausalität und Chaos: Naturphilosophische, erkenntnistheoretische und wissenschaftstheoretische Perspektiven. Würzburg: Ergon Verlag.

_______. 1996b. "Chaos, Berechnungskomplexität und Physik: Neue Grenzen wissenschaftlicher Erkenntnis?" Philosophia Naturalis 33:23-54.

_______. 1997. "On the Impact of Deterministic Chaos on Modern Science and Philosophy of Science." Poster presented at the Tagung der Deutschen Gesellschaft für Komplexe Systeme und Nichtlineare Dynamik e.V., Internationales Institut für wissenschaftliche Zusammenarbeit Schloß Reisensburg e.V., 16.-18. Oktober 1997.

_______. 1998a. "On the Actual Impact of Deterministic Chaos." Synthese 113 (in press).

_______. 1998b. "Deterministic Chaos and Computational Complexity: The Case of Methodological Complexity Reductions." Journal for General Philosophy of Science (in press).

Lenk, H. 1992. Zwischen Wissenschaft und Ethik . Frankfurt am Main: Suhrkamp.

_______. 1994. Macht und Machbarkeit der Technik . Stuttgart: Reclam.

Li, T.-Y., and J. Yorke. 1975. "Period Three Implies Chaos." American Mathematical Monthly 82:985-992.

Lichtenberg, A. J., and M. A. Lieberman. 1983. Regular and Stochastic Motion. New York: Springer.

Mainzer, K. 1997a. Thinking in Complexity: The Complex Dynamics of Matter, Mind and Mankind , 3rd ed. Berlin: Springer.

_______. 1997b. "Komplexe Systeme in Natur und Gesellschaft." Physik in unserer Zeit 28:74-81.

Mainzer, K., and W. Schirmacher, eds. 1994. Quanten, Chaos und Dämonen: Erkenntnistheoretische Aspekte der modernen Physik . Mannheim: B.I.-Wissenschaftsverlag.

Mohr, H. 1995. "Technikfolgenabschätzung in Theorie und Praxis." Nova Acta Leopoldina (Neue Folge) 71(293):65-71.

Ornstein, D. S., and B. Weiss. 1991. "Statistical Properties of Chaotic Systems." Bulletin of the American Mathematical Society 24:11-116.

Pearl, J. 1978. "On the Connection Between the Complexity and Credibility of Inferred Models." International Journal of General Systems 4:244-264.

Peitgen, H.-O., H. Jürgens, and D. Saupe. 1994. Chaos - Bausteine der Ordnung. Berlin: Springer.

Petermann, T. 1994. "Historie und Institutionalisierung der Technikfolgenabschätzung." In Bullinger, 1994, pp. 89-112.

Ropohl, G. 1996. Ethik und Technikbewertung. Frankfurt am Main: Suhrkamp.

Ruelle, D. 1990. "Deterministic Chaos: The Science and the Fiction." Proceedings of the Royal Society (London) A427:241-248.

Saperstein, A. M., and G. Mayer-Kress. 1989. "Chaos versus Predictability in Formulating National Strategic Security Policy." American Journal of Physics 57:217-223.

Schlick, M. 1931. "Die Kausalität in der gegenwärtigen Physik." Die Naturwissenschaften 19:145-162.

Skarda, C., and W. Freeman. 1987. "How the Brain Makes Chaos in Order to Make Sense of the World." Behavioral and Brain Sciences 10:161-195.

Tabor, M. 1989. Chaos and Integrability in Nonlinear Dynamics: An Introduction. New York: Wiley.

Thomas, H., and T. Leiber. 1994. "Determinismus und Chaos in der Physik." In Mainzer and Schirmacher, 1994, pp. 147-207.

Troitzsch, K. G. 1996. "Chaotic Behaviour in Social Systems." In Hegselmann and Peitgen, 1996, pp. 162-186.

VDI-Report. 1991. VDI-Report 15: Technikbewertung - Begriffe und Grundlagen, Erläuterungen und Hinweise zur VDI-Richtlinie 3780. Düsseldorf: Verein Deutscher Ingenieure.

Wang, Q. 1991. "The Global Solution of the N-Body Problem." Celestial Mechanics and Dynamical Astronomy 50:73-88.

Weidlich, W. 1994. "Das Modellierungskonzept der Synergetik für dynamische sozio-ökonomische Prozesse." In Mainzer and Schirmacher, 1994, pp. 255-279.