

Volume 10, Number 1
Fall 2006


Function and Probability: The Making of Artefacts


Françoise Longy
Institut d’Histoire et de Philosophie
des Sciences et des Techniques, Paris



Abstract

The existence of dysfunctions precludes the possibility of identifying the function to do F with the capacity to do F. Nevertheless, we continuously infer capacities from functions. For this and other reasons stated in the first part of this article, I propose a new theory of functions (of the etiological sort), applying to organisms as well as to artefacts, in which having some determinate probability P to do F (i.e. a probabilistic capacity to do F) is a necessary condition for having the function to do F. The main objective of this paper is to justify the legitimacy of this condition when considering artefacts. I begin by distinguishing “perspectival probabilities”, which reflect a pragmatic interest or an arbitrary state of knowledge, from “objective probabilities”, which depend on some objective feature of the envisaged items. I show that objective probabilities are not necessarily based on physical constitution. I then explain why we should distinguish between considering an object as a physical body and considering it as an artefact, and why the probability of dysfunction to be taken into account is the one relative to the object as a member of an artefact category. After clarifying how an artefact category can be defined if it is not defined in physical terms, I establish the objectivity of the probability of dysfunction under consideration by showing how it is causally determined by objective factors regulating the production of items of a definite artefact type. I focus on the case of industrially produced artefacts, where the objective factors determining the probability of dysfunction can best be seen.

Key words: artefact, probability, etiological theory of function, biological function, artefact function

Function and capacity

One usually associates function with capacity. Coffee machines, which have the function of making coffee, usually have the capacity to make coffee, and hearts, which have the function of pumping blood, are usually able to pump blood. This relationship is of practical importance: often, it is by learning the function of an object that one learns what to do with it and what to expect from it.

One of the two major contemporary theories of function, which goes back to an article by Cummins in 1975, relies on this close relationship: it identifies the function to do F with the capacity to do F or, to use a more technical term, with the disposition to do F. But by doing so, it offers no account of the normative aspect of functional discourse. An entity that has a function is supposed to do something under particular circumstances, but it may not actually be able to do it, as dysfunctioning items (a non-working coffee machine or a diseased heart) show. We will set aside here the question of whether there are two types of functions, the first having normative import and allowing us to speak of dysfunctions while the second does not. We will concern ourselves only with the first type.

The other major theory, the so-called etiological theory, whose basic tenets go back to an article by Larry Wright in 1973, offers a straightforward account of dysfunction, which is a reason for its wide acceptance by philosophers. According to this theory, functions indicate a particular sort of history, and that explains their normative import as well as their etiological sense. Indeed, functions often serve to explain why something exists or is the way it is: the function of attracting peahens is usually supposed to explain why peacocks have such a large and vivid tail. For the etiologists, in fact, a function is an effect that explains a “being there.” For instance, present-day hearts have the function of pumping blood because previous hearts were selected for having had the effect of pumping blood. In view of their dependence on past facts, functions can be dissociated from present capacities. The possibility of dysfunctions rests on this temporal discrepancy: something can have a function because of its history, even if it fails at present to have the corresponding capacity.

Some etiologists have remarked, furthermore, that the relationship between function and present capacity cannot simply be of the form that having a function implies a high probability of having the corresponding capacity. Often the two go together, but this is not necessarily the case. To show this, Millikan considered the case of spermatozoa: they have the function of fertilising ova, but that does not imply that they have a high probability of doing so. Neander put forward, with the same intent, the case of a pandemic disease: if a pandemic disease were to make 75% of the population blind, the function of the eyes would still be to allow vision. This discrepancy between function and capacity has been a further argument for excluding present capacities from the notion of function, and so far that is what etiologists have done.

The drawbacks of the current dualist theory of function

Until quite recently, etiologists were mostly interested in biological functions. So it was natural that they were ready to define functions by referring to natural selection. Some extended this definition to the functions of artefacts by admitting not only natural selection but also cultural selection. But this can be only a partial answer to the question, since most artefacts are attributed a function when they are created, i.e. before any cultural selection could have acted on them. The etiologists who tackled this point answered that, in this case, the function names what the person(s) responsible for creating or producing the artefact thought it was for.

But this thesis, here called the intentionalist theory of function, has a severe drawback. Such “intention-based” functions are, in a specific sense, subjective: they will vary according to the intentions attributed to their designers, all other things being equal. For example, consider a component that has been made and put in some specific place by Boris: it will have the function of cooling the liquid passing through it if that is what Boris thought it was for, or it will have the function of reflecting the incoming light if that was Boris’ idea. No objectively ascertainable factor need vary from one situation to the other for the function to vary: the change of intentional content in Boris’ mind is sufficient.

Such a direct dependence on intentions must be clearly distinguished from the dependence on intentions through socio-cultural pressure which may appear in a cultural selection theory of function. It is a general truth that the use of an artefact and its diffusion depend essentially on what people think about it: what it may do, how it should be used, what it could be useful for, etc. But such intentions will be taken into account in a cultural selection theory of function only when they manifest themselves and thus contribute to the general socio-cultural context determining the “evolutionary life” of the artefact. Now, a cultural context is as objective as a natural one. One looks at what people do and did: how they use(d) the artefacts, the preferences they show(ed) in buying them, etc. This is something that can be studied by empirical means, the ones history and sociology currently draw on, with no need for introspection. Therefore, the intentionalist theory of function must be clearly distinguished from the thesis that artefact functions depend on intentions through socio-cultural pressure and selection. Only the first endows functions with a particular subjective nature.

I have argued elsewhere that the intentionalist theory has to be rejected (1) because no artefact function manifests the subjectivity this theory implies and (2) because it would imply a highly problematic ontological heterogeneity: two very different sorts of properties, subjective ones and objective historical ones, would be indistinguishably mixed. The conclusion I have drawn from these considerations is that the classical dual etiological theory of function (selectionist for biological items and possibly a large part of artefacts, intentionalist for the remaining artefacts) is mistaken, and that one must look for a more general and more abstract characterisation of etiological functions.

Theory of function: a new perspective

The challenge, then, has been to see whether and how one could accommodate the two following facts while maintaining an etiological perspective:

  1. There are artefact functions which are not due to a selection mechanism.
  2. All artefact functions are objective.

In other words, which objective property could explain why artefacts are there, whether they have just been created or have long been maintained by cultural selection?

A designer conceives an artefact, i.e. a type of artefact, for a determinate function: it should have the capacity to do F in a determined type of circumstances. So the objective property we are searching for, let us call it O, should be related to this specific capacity. But going along this line implies overcoming two serious difficulties.

First, how could there be any objective property of the Xs, prior to the existence of any X, that could explain the existence of Xs? What is required is an O such that “X is there because of O”. It has usually been thought that O should be a past event or a past state of affairs. This is not necessary. O could be a timeless property (as is, for example, the relation between the type X, the capacity to do F and a series of circumstances C) and play a part as a reason. For instance, someone finding out about O decided to make Xs because of O. We reserve for another article a proper defence of this way of understanding the etiological condition attached to functions.

Second, how can one justify the probability that then has to be introduced into the definition of function? The need to introduce probability comes to light when one tries to answer the following question: what objective link can exist between the Xs and the contemporaneous capacity to do F when the Xs have function F? As we have seen before, an item can have a function and lack the corresponding capacity. So the function to do F can at most imply some probability to do F. Consequently, if a definite relation exists between the function F and the capacity to do F, it can only be probabilistic.

The aim of this paper is to show (a) that there is in fact, for each function F, a specific probability of having the capacity to do F (the same generic high probability for all cases will not do, as spermatozoa show) and (b) that this introduction of probability is not a mere trick but rests on solid grounds. It is not good enough to have the valid negative reason that functions cannot be equated in a straightforward manner with their corresponding capacities. Introducing probability merely to loosen a tie that would otherwise be too tight would be ad hoc. Other reasons must justify it positively.

The line of thought summarized above led me to the following characterisation of function. “X has function F” means:

As an item of type X, X has an (objective) property O such that:
1) O implies that X has probability P to do F in circumstances C;
2) the present X or Xs are there because of O.

Let us outline two aspects of this characterization to show how it can cover artefact functions as well as biological ones. First, O can correspond to properties of very different sorts. In the case of a biological function, O will be roughly: belonging to a species some of whose previous members have been selected because they had an X that did F. But for the first generation of an artefact, O may be something like: an object constructed in such a manner (so as to meet, within some margin of error, this series of specifications) will have, because of the laws of physics, chemistry, etc., probability P to do F.

Secondly, the mechanism or process which causally explains the existence of the Xs does not enter into this characterization (in this respect it differs from most etiological definitions of function). It can rely on selective forces or on intentional contents and actions, as long as the two conditions above are satisfied.

Finally, to avoid any confusion, let us make clear that the characterization of function we propose here is quite different from the one Bigelow and Pargetter proposed in 1987, although they also introduced probability. For them, a function is a capacity enhancing the fitness (i.e. the chance to survive) of the entity possessing it. There are two major differences between their proposal and ours. First, according to them, functions inform us about the future, not about the past. Second, the probability they introduce concerns not the possession of the capacity itself but the survival value resulting from possessing the capacity. After this general presentation, let us turn our attention to the more specific questions that are the target of this paper: “What does the probability appearing in the first condition consist in?” and “Does this probability have real grounds that justify it positively?” To answer them we need first to make clear, in general, when probability corresponds to something real and substantial and when it does not.

When do probabilities reflect arbitrary conditions?

If you say of a 39-year-old Parisian woman that her probability of being pregnant today is P, using the information that women between 35 and 45 who reside in Paris have probability P of being pregnant at this time of year, you point to an objective feature relative to the category (the ratio of pregnant women among them) but not relative to the woman. Relative to her, it just reflects your level of information and the perspective you adopt in looking at her. It is because you consider her as a member of this particular group that you attribute to her this probability of being pregnant today. With new information, for example that she suffers from some gynaecological problem, or that the ratio of pregnant Parisians between 38 and 40 is P’, the probability would change. New information means a new reference class, and this generally implies a new probability value. Better knowledge may even allow us to dispense with probabilities. This would be the case if you came to know, through a sonogram, that a fertilised ovum had implanted in her uterus. We will call such probabilities perspectival: they depend on an arbitrary perspective, a perspective determined simply by a pragmatic interest or a particular state of knowledge. The criterion for recognizing these probabilities is that they would change, and may even disappear, if the arbitrary limits imposed by our current knowledge changed. On the contrary, we will call “objective” the probabilities which can be shown to be independent of an arbitrary perspective due to a provisional state of knowledge or to some pragmatic interest.
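To make the reference-class dependence concrete, here is a minimal sketch with entirely made-up survey figures (the data structure, the numbers and the two age brackets are illustrative assumptions, not data from the text): the same woman is assigned different probability values depending on which class she is taken to be a member of.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Woman:
    age: int
    lives_in_paris: bool
    pregnant: bool

# A tiny synthetic "survey" (purely illustrative figures).
survey: List[Woman] = (
    [Woman(36, True, False)] * 90 + [Woman(36, True, True)] * 10 +   # 36-year-old Parisians
    [Woman(39, True, False)] * 96 + [Woman(39, True, True)] * 4      # 39-year-old Parisians
)

def frequency(pop: List[Woman], in_class: Callable[[Woman], bool]) -> float:
    """Frequency of pregnancy among the members picked out by `in_class`."""
    members = [w for w in pop if in_class(w)]
    return sum(w.pregnant for w in members) / len(members)

# Same individual, two reference classes, two probability values:
print(frequency(survey, lambda w: w.lives_in_paris and 35 <= w.age <= 45))  # 0.07
print(frequency(survey, lambda w: w.lives_in_paris and 38 <= w.age <= 40))  # 0.04
```

The value attributed to the woman changes with the class she is referred to, which is exactly what marks the probability as perspectival rather than objective.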

A simple case of objective probability is the one associated with chaotic non-linear dynamical systems. These systems are highly sensitive to initial conditions: any small variation in the initial conditions can give rise to enormous differences, the so-called “butterfly effect”. So, whatever precision you may obtain in fixing a range of initial conditions, completely different outcomes will remain possible after a long enough dynamical evolution. No increase in knowledge would make it possible to get rid of probability in predicting the outcome of such an evolution. The probability is tied to a physical property of these dynamical systems: their high sensitivity to initial conditions. But objective probability can also be grounded in something not purely physical. To demonstrate this, let us take the example of throwing a die.

Here there are two phenomena susceptible of giving rise to an irreducible probability. The first is the physical phenomenon we have just seen. The high sensitivity to initial conditions of rolling dice implies the equiprobability of halting on any face after a determinate number of rotations, let us say after 10 rotations. The second is the social phenomenon of using dice in games of chance. It is clear that under less demanding conditions the outcome of a rolling die may be quite predictable. For example, if someone puts the die in her hand with the 4 on top and makes the die slide onto the table without rolling, the probability that the die will halt on 4 will be 1. Maybe the probability of halting on 4 will also be 1 if one makes the die roll only once from a determinate initial position in the hand. It is because one easily sees that such ways of “throwing” dice make the outcome quite predictable that no player is allowed to make dice roll only once.

However, there may be ways of throwing dice that are allowed, even if their result (the halting face) could be predicted in a categorical way were one to know the value of some parameters (initial position, force of throw, etc.) with a determinate precision. For example, let us suppose our physical theory makes it possible to predict on which face a die will halt if it rolls only three times on a plane from a determinate range of initial positions. It may well have no importance for the fairness of a dice game played by human beings, even if the players are physicists in possession of a table giving them the different outcomes for a series of ranges of initial conditions. Why? The limited capacity of bodily control humans have may imply that they necessarily pick at random from a large range of initial conditions when throwing, and this, in turn, may imply the same chance for every face to come out after only three rotations of the die. If so, the probability of 1/6 of halting on 4 can also be objective when the die has rolled only between three and nine times and was thrown by a human being: it is an objective probability related to the real use of dice, their use in games of chance. In fact, the dice, the table and the person who throws may be seen as a new sort of dynamical system. The point then is that the initial conditions take into account the fact that it is a very complex machine, a human, who is throwing: the human hand or arm cannot be isolated and treated as a simple mechanical device whose physical parameters could be set up at will.
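The point can be illustrated with a minimal toy simulation (the outcome function, the speed parameter and the ranges below are arbitrary assumptions, not a model of real dice physics): the halting face is computed deterministically from one initial parameter, yet it comes out equiprobable over the six faces when that parameter is picked at random from a wide range, as with a human throw, and fully predictable when the parameter is confined to a narrow band, as with a precise throwing device.

```python
import random
from collections import Counter

# Toy model: the halting face is a *deterministic* function of one initial
# parameter, here the launch speed v, cycling through the six faces every 0.05 units.
def halting_face(v: float) -> int:
    return int(v / 0.05) % 6 + 1

def outcome_distribution(low: float, high: float, n: int = 60000) -> Counter:
    """Distribution of faces when v is drawn at random from [low, high]."""
    return Counter(halting_face(random.uniform(low, high)) for _ in range(n))

# A human thrower cannot control v finely: it ranges widely, so each face
# comes out with a frequency close to 1/6 even though each throw is deterministic.
print(outcome_distribution(1.0, 4.0))

# A precise throwing device keeps v inside one narrow band: the outcome is then
# fully predictable, and the probability of that face is 1.
print(outcome_distribution(2.000, 2.004))
```

The 1/6 probability here is not dispelled by knowing the deterministic law; it reflects the range of initial conditions that human throwing, as actually practised in games, makes available.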

From our perspective, which is to associate probabilities with artefact functions, the more interesting case of objective probability is the last one, where social factors (dice as used in human games of chance) are taken into account. Why do social factors matter when considering objective properties of artefacts? We will once again use dice as a paradigmatic example to answer this question.

Artefacts and causal explanations

The probability of 1/6 of halting on 4 after only three rotations is not a physical property of rolling dice in the way the probability of 1/6 of halting on 4 after ten rotations is. The latter is a result of physical theory when dice are considered purely as physical objects: it is the probability obtained by considering situations in which one could imagine fixing, as precisely as one wished, the pertinent physical characteristics of a system comprising a die (i.e. a well-equilibrated cubic object), a horizontal plane and a simple mechanical throwing device. This probability tells of the sensitivity to initial conditions that well-equilibrated cubic objects have when rolling. Conversely, the former probability tells us something about dice inasmuch as they are part of a specific social situation: games of chance played by human beings.

What needs to be highlighted is the following: this type of situation (human dice-throwing in the context of games of chance) is not an arbitrarily defined category; it has a substantial reality in our world, a world which is not simply a physical world of material bodies but is also a world of living beings with social activities. A lot of facts concerning present and future dice depend on the fact that dice throwing is a human game. For objects considered as physical bodies (like dice as well-equilibrated cubic objects) all causal explanations are the exclusive concern of physics, but for objects considered as artefacts, many causal explanations hinge on socio-cultural factors. The size of dice depends essentially on the size of human hands. The precision with which they are made depends also on their use in games: they should be sufficiently well equilibrated and of a sufficiently regular cubic form so as to raise no worry about the equiprobability of the different faces turning up. To sum up, the production of dice is not guided by the concerns of physicists interested in the behaviour of well-equilibrated cubic objects, but by the concerns of players who throw dice on common dining tables at home or on the velvet of gaming tables in clubs.

What do we need to know about dice to make sensible predictions about what may happen to them? Not so much what characterizes them as physical bodies as what their typical use, their function, is. It is their use that will explain why some are found in children’s pockets or why some ended up in some dump or other when their colour faded and they looked old or dirty. Thought experiments show this still more vividly.

Let us suppose, just for a second, that a die rolling on a horizontal plane is a linear system and that our present physical means allow us to predict categorically, most of the time, on which face a rolling die will stop after 10 or more rotations. Would that change anything? Nothing much, if dice throwing remained an activity performed as it is performed today, with the players having only limited control over initial conditions, and if this limited control implied an equiprobability of halting on any one of the six faces after three rotations. The physical theory concerning the dynamics of well-equilibrated rolling cubic objects would be completely different, but the story of dice could be the same. Conversely, what would induce changes would be the creation of devices, some bionic arm for example, allowing better control over initial conditions and making it possible for a well-equipped and trained gambler to make the die halt quite often on the face she wanted. Most certainly then, the rules for throwing dice in gambling houses would be changed, or new sorts of dice would be made, for example dice with an internal rotating sphere introducing a higher sensitivity to initial conditions.

“Dice” names a functional category, not a physical one, and that is no minor detail. A physical category is a category defined by a series of physical properties, for example being a well-equilibrated cubic object whose edges are between 0.3 cm and 20 cm, while a functional category is defined by a specific use. As we have seen above, one has to take this specific use into consideration if one wants to explain and predict many properties of present and future dice. The function of being a die implies, of course, the possession of determinate physical properties, but the hierarchy is clear: function comes first. This shows still more vividly with complex artefacts like cars or engines. It is not by extracting the common physical properties of present cars that one will obtain a good definition of “car”, a definition able to encompass future cars; this can be accomplished only by considering what they are made for. For instance, knowing that cars are for personal transportation (between 1 and 10 people), one can deduce some properties future cars are almost certain to have, for instance seats with sufficient front room for human legs. What is uncertain, on the contrary, is the presence of wheels, even if it is presently a feature all cars possess and have possessed: maybe future cars will use air cushions or fast caterpillar tracks.

Considering artefacts as physical entities or as functional ones

We do not attribute mysterious properties to dice by saying that the probability a die has of stopping on one face after rolling only three times may differ depending on whether we consider it as a physical object or as an artefact. As we have seen, considering a die as a physical object or as an artefact simply means that we are envisaging different sorts of situation. Problems arise only if one fails to notice the ambiguity that phrases of the form “the probability of X to do F in circumstances C” may sometimes have.

Supposing that probabilities correspond to frequencies in some reference class (or population), it is a truism that different reference classes will generally mean different probabilities. But sometimes the reference class is left implicit while different ones could be meant. Speaking just of the probability of X to do F may be ambiguous when X is an artefact, because X can be envisaged, as we have seen, either as a member of a physical category or as an artefact, i.e. as a member of a functional category.

Let us consider simpler examples than dice: artefacts whose capacities imply no probability, “surefire” capacities as Mackie called them (conversely, to be a fair die implies the possession of a probabilistic capacity). A surefire capacity to do F in circumstances C will manifest itself by producing outcome F every time circumstances C are present. For instance, being water-soluble implies dissolving whenever put in water in a definite set of circumstances (being on earth, ...). No exception is allowed. If the expected outcome does not appear, this proves that the capacity was in fact missing. Electric switches, bulbs or hairdryers are endowed with such capacities: they are supposed to turn the current on and off, to produce light, to blow hot air whenever a definite set of circumstances is present.

That is no doubt a simplification. Often, the capacity an artefact is supposed to have is related to graduated effects, and that means that considerations of level and borderline effects step in. A hairdryer that blows very little air will be judged not to have the capacity hairdryers should have and will be counted as a dysfunctioning hairdryer. Some, the borderline cases, will appear as not working perfectly, blowing not quite enough air or a bit too much. The notion of well-functioning often goes with the existence of standards: the capacity should result in effects that exceed some limit or lie between definite values. But by supposing a standard precise enough and very rare borderline cases, we can ignore these complications here so as to focus on our major question: what is the probability of doing F about when we consider an artefact relative to the function of doing F?

Let us consider the case of the bulb. The capacity at issue is the capacity to produce light when connected in the right way to the right electrical settings with the right amount of current passing through. Which probability should enter into our characterisation of the function of the bulb? Not the probability the bulb has when envisaged as a physical object. As a physical object, that is, as an object a physicist will analyze by looking at its physical structure and characteristics, no probability will in general be implied: whether or not it has the desired capacity will be a straightforward matter. There may be some irreducible borderline cases, for example when the filament is weak at some point in such a way that it is indeterminate whether or not it will break down when heated by the current passing through. But in the general case, it will be a quite definite matter whether it will produce light or not in the right circumstances, and our present physical means of analysis are already sufficient to give, in most cases, a straightforward yes or no answer with minimal risk of error.

If what one was considering was a physical category (all objects whose physical characteristics are very close to those of this particular X), then it would generally be a straightforward matter, with no need to appeal to probability, whether the Xs of this physical type would or would not have the capacity to produce light when placed in the right conditions. But this is not what we are interested in when considering artefacts. What interests us is whether the object produced or bought as a determinate artefact will have the desired capacity. The question then is not whether a determinate physical structure, which is instantiated by this particular bulb, has a determinate capacity or not, but whether or not an item belonging to the category bulb has a physical structure allowing it to have the desired capacity.

Functions do not necessarily imply multiple realization, as is sometimes supposed, but they go happily with it. Being a bulb, a can opener or an engine supposes some specific relation to a particular capacity (a relation more complicated than simply possessing the capacity, as we already know), but it does not imply a determinate physical structure. Several physical structures can be found in the same functional category. (I will consider below the intriguing question of how functional categories can be defined.) Thus, two quite different things can be hidden in the too general phrase “the probability X has to do F in circumstances C”: on the one hand, the probability the physical structure X has to do F and, on the other hand, the probability the functional item X has to possess a physical structure doing F. But in order to perceive the ambiguity and to be willing to eliminate it, one has to be convinced first that it makes sense to distinguish the functional object from the physical one, and that this distinction is required for explaining artefacts: what may happen to them, how they may change, etc.

The difference we are stressing here is quite similar to one that can be found when speaking of organisms, where the same sort of ambiguity can be encountered as well. “What is the probability that a particular baby becomes an overweight child if she has this diet and performs these physical activities?” may mean “what is the probability that she becomes overweight given that she has this particular genetic make-up?” or “what is the probability that she possesses a genetic make-up driving her to become overweight given that she belongs to this particular population or has had these ancestors?”. In biology, such ambiguity appears to be quite common and goes together with the existence of two different sorts of causal explanations for the same phenomenon, one pointing to physiology or development and the other to heredity or evolution. It is with the intent to account for such a duality that Mayr introduced the distinction between “proximate causes” and “ultimate causes”. The very nature of evolutionary explanations, as well as the relation they bear to physiological or developmental ones, are still debated issues in the philosophy of biology. Without tackling any of these questions, one can simply note that, for organisms, there is a legitimate level of causal explanation other than that of proximate physical causes. What we defend here is simply that a similar duality has to be admitted in the case of artefacts too.

Design, industrial production and the probability of being a well-functioning item

How are functional categories defined if they are not defined in physical terms? By historical factors, as species are. In general, items of the same artefact type have been produced in the same industry or in similar ones, following identical procedures or procedures that have been seen or demonstrated to give rise to identical or similar outcomes; they have been subjected to identical or similar controls relative to the same properties; and they have been distributed, offered for sale and advertised as objects of the same functional type. Furthermore, the manufacturing processes will often result from common engineering and design processes or from ones that are largely related to one another. This is what links together items of the same artefact kind when one looks at them from the production side. On the other side, that of consumers, the functional identity is commonly perceived and maintained: objects bought or transmitted as items of the same artefact kind will be used in the same ways and will be expected to behave identically in circumstances related to their typical use. As soon as there are different trademarks, buyers act as a selective force making the trademark with more dysfunctioning items disappear, cost less or improve its products. The members of the same artefact kind are not linked together as strongly as the members of the same species are (heredity through transmission of genetic material), but their linkage is sufficient to determine a real kind of a historical nature.

The probability artefact X has of being a well-functioning item (the probability that X has of possessing a physical structure doing F when doing F is the function of its artefact type) can be evaluated by statistical means: the proportion of well-functioning or dysfunctioning items in representative samples of the relevant population. But that it can be so evaluated does not say anything about the nature of this probability. It does not tell us whether it reflects only epistemic or pragmatic factors, or has a causal ground. In other words, it does not tell us whether the probability is perspectival or objective. We will try to show that the probability is objective when it is calculated relative to a real-kind population and fulfils determinate conditions.

The probability of possessing the required capacity is an explicit, pivotal element in the industrial production of artefacts. It already plays a role at the stage of engineering and design. Engineers envisage artefacts in such probabilistic terms when they work out their specifications and how to realize them: which materials to use, what the production line should be, which controls to perform at which stage, etc. The choices engineers or managers have to make usually take into account how doing one thing or another will increase or decrease the probability of dysfunctioning items. For instance, in order to minimize production costs while remaining under the threshold of 0.5% of dysfunctioning items, will it be better to buy cheap materials and install, at some stage of the production line, a device eliminating 98% of the defective elements, or to buy expensive high-quality materials?
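To make the trade-off concrete, here is a minimal sketch with invented figures (the defect rates and unit costs below are assumptions for illustration, not values from the text): each option’s resulting proportion of dysfunctioning items is checked against the 0.5% threshold, and the cheaper acceptable option is then chosen.

```python
# Invented figures, for illustration only: two ways of staying under a 0.5%
# threshold of dysfunctioning items, compared on resulting defect rate and cost.

THRESHOLD = 0.005  # maximum tolerated proportion of dysfunctioning items

# Option A: cheap materials plus a control device removing 98% of defectives.
cheap_defect_rate = 0.15   # assumed defect rate of cheap materials
device_miss_rate = 0.02    # the device lets 2% of defectives through
option_a = ("cheap materials + control device", cheap_defect_rate * device_miss_rate, 1.40)

# Option B: expensive high-quality materials, no extra control stage.
option_b = ("high-quality materials", 0.004, 1.80)

for name, rate, unit_cost in (option_a, option_b):
    verdict = "meets" if rate <= THRESHOLD else "violates"
    print(f"{name}: defect rate {rate:.2%}, unit cost {unit_cost:.2f}, {verdict} threshold")

# Among the options meeting the threshold, pick the cheapest one.
acceptable = [o for o in (option_a, option_b) if o[1] <= THRESHOLD]
print("chosen:", min(acceptable, key=lambda o: o[2])[0])
```

With these assumed numbers both options stay under the threshold, so the decision reduces to cost; with different assumptions the threshold itself would rule one option out.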

Usually the value of the threshold that the proportion of dysfunctioning items should not exceed is explicit. What sets this value? Mostly the competition on the market and the consequences dysfunctioning items may have. So, through the feedback of the market on production, this threshold (and hence the probability of being a well-functioning item if manufactured in this country or under this trademark) will depend on social factors, such as which reliability-to-price ratio will help ensure good sales.

To sum up, the probability an artefact item has of being a well-functioning one depends on the conditions and processes of its manufacture, and these, in turn, depend for their maintenance or improvement on a great many factors: technical discoveries, costs of possible improvements, expectations about quality, expectations about costs, level of commercial competition, etc. In other words, when X is considered as a member of the population of artefacts a specific factory produced within a period of stable manufacturing conditions, its probability of being a well-functioning item reflects some objective features of the complex causal process responsible for producing the entire population, and not some arbitrary perspective under which X would have been considered.

There are different reference classes or populations that satisfy the requirement of resulting from a common origin, or from causally interdependent origins, in such a way that they match a causal process capable of explaining many features of their members. Trying to specify more formally how to characterize these populations would raise very difficult questions, like that of defining real kinds. An intuitive grasp, which can be tested on examples, is sufficient here. Hairdryers coming out of one factory, hairdryers of a particular trademark (the same manufacturing standards will be imposed on all production units), or hairdryers of well-established Western trademarks distributed in Western countries are all examples of such real-kind populations. Conversely, the population of yellow hairdryers produced in May 1999 in Singapore has no reason to match a specific causal process having explanatory power relative to this specific population (of course, having been painted yellow will explain their being yellow, but this causal relation is almost a tautology; it cannot be said to have any significant explanatory power).

Real-kind populations like the ones we considered above can be part of one another or can even sometimes overlap. Is that not a problem for the position we defend here? It is no more a problem in this case than it is a problem in biology to explain some phenomena by considering levels below or above the species level. For instance, in order to explain the proportion of sickle-cell anaemia in some part of the world, and hence the probability of having a specific gene, it may be pertinent to consider a specific subgroup of the human species that has lived in relative isolation in sub-Saharan Africa.

The expected proportion of dysfunctioning hairdryers of well-established Western trademarks distributed in Western countries within the same price range, let us call it P_OC, results from the expected proportions of dysfunctioning hairdryers in each factory concerned, let us say P_1, P_2, etc. The different P_k will normally be very close to one another. The P_k corresponding to factories of the same trademark will be more or less identical because of the standards set by the trademark, and the P_k of different trademarks will be very close to one another because of the forces exerted by the market. So, in the end, P_OC will be very close to the value of each P_k. If P_OC were obtained as a simple mean of the different P_k and the P_k depended on unrelated factors, P_OC would reflect something arbitrary. But the P_k depend on the market, as we have seen, and that is what P_OC reflects: the forces the market exerts on hairdryers’ quality in a society with such an economy and such technological resources.
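A short numerical sketch, with invented production volumes and dysfunction rates (none of these figures come from the text), shows how P_OC aggregates the per-factory rates P_k and stays close to each of them when market pressure keeps the P_k close together.

```python
# Hypothetical figures, purely illustrative: P_OC as the production-weighted
# aggregate of the per-factory dysfunction rates P_k.
factories = [
    # (units produced, dysfunction rate P_k)
    (120_000, 0.004),
    (80_000,  0.005),
    (200_000, 0.0045),
]

total_units = sum(units for units, _ in factories)
p_oc = sum(units * p_k for units, p_k in factories) / total_units
print(f"P_OC = {p_oc:.4f}")   # close to every P_k, here about 0.0045
```

Because the P_k themselves are held together by the same market forces, P_OC is not an arbitrary average over heterogeneous sources but a summary of those forces.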

We are now in a position to offer a general answer to the question raised earlier about whether the probability X has, as an artefact, of being a well-functioning item is objective or perspectival. It will be objective if the artefact population relative to which the probability is evaluated is a real kind and if the expected ratio of well-functioning items in this population is a consequence of what grounds it as a real kind; otherwise it will be perspectival. So, for example, whoever supposes that the population of hairdryers considered for evaluating P_OC is defined only by a pragmatic interest (like wanting to know what the chances are of buying a well-functioning hairdryer in this price range) should conclude that P_OC is perspectival. But anyone who accepts that such a population is a real kind and that P_OC results from factors determining it substantially should conversely reach the conclusion that P_OC is objective.

To conclude, let us sum up our principal result: there is a specific probability, or probability bracket, that can be attributed to an artefact item of having the capacity for which it is made, and this can be explained by and grounded on objective factors. In this way, we have tried to show that linking a function to the probability of having a corresponding capacity is, in the case of artefacts, not only possible but also much more than a mere technicality, since this probability is rooted in the causal processes underlying artefact categories. A further justification of our characterisation, which will be left for future investigations, will be to show why functions, so understood, are at the same time epistemically and causally important: (1) why we find it useful in so many cases (for organisms, for artefacts, etc.) to refer to such functional categories and (2) why this is a way to carve the world at some of its joints so as to obtain valid causal explanations.

Bibliography

Bigelow, John and Pargetter, Robert. 1987. Functions. The Journal of Philosophy 84: 181-196.
Cummins, Robert. 1975. Functional Analysis. The Journal of Philosophy 72: 741-765.

Diaconis, Persi, et al. 2004. Dynamical Bias in the Coin Toss. http://wwwstat.stanford.edu/~cgates/PERSI/papers/headswithJ.pdf (alternative link: http://math.stanford.edu/research/comptop/preprints/heads.pdf)

Houkes, Wybo and Vermaas, Pieter E. (eds.). 2006. Artefacts in Philosophy (forthcoming).
Kroes, Peter. 2006. Coherence of Structural and Functional Descriptions of Technical Artefacts. Studies in History and Philosophy of Science Part A 37(1) (The Dual Nature of Technical Artefacts): 137-151. http://dx.doi.org/10.1016/j.shpsa.2005.12.015

Longy, Françoise. 2006a. A Case for a Unified Realist Theory of Functions. In Houkes and Vermaas (eds.) 2006.
—. 2006b. Unité des Fonctions et Décomposition Fonctionnelle. In Le tout et les parties, edited by Jean Gayon and Thierry Martin. Paris: CNRS éditions (forthcoming).

Lorne, M-C. 2004. Explications fonctionnelles et normativité. PhD Thesis, Paris: EHESS.

McLaughlin, P. 2001. What Functions Explain. Cambridge: Cambridge University Press.

Millikan, R. 1993. White Queen Psychology and Other Essays for Alice. Cambridge: The MIT Press.
—. 2000. On Clear and Confused Ideas. Cambridge: Cambridge University Press.

Mayr, Ernst. 1961. Cause and Effect in Biology. Science 134: 1501-1506.
Neander, Karen. 1991. Function as Selected Effects: The Conceptual Analyst's Defense. Philosophy of Science 58: 168-184.

Wright, Larry. 1973. Functions. Philosophical Review 82: 139-168.