Why Ask Why: Patterns and Themes of Causal Attribution in the Workplace
Dan C. Brown
Murray State University
The gap between technical educators and industry-based training professionals has narrowed. Faculty from many community colleges and technical schools have begun to create customized training packages tailored to meet both the broader needs of regional industries and the narrowly defined needs of individual companies. Suppliers of technical instruction based in publicly funded institutions are often viewed by community leaders as economic development agents with important links to employers. Expanding further into these roles brings access to opportunities and resources, but it also exposes technical educators to expectations and challenges that were previously confined to industry-based training. One such challenge is the expectation that training providers be able to demonstrate the effects of training on the bottom line of the customer organization. As levels of investment in training have increased, so have demands by management that training demonstrate its effectiveness by charting its contribution to the bottom line (Bushnell, 1990; Robinson & Robinson, 1989) or by otherwise demonstrating its utility (Goldstein, 1989).
As educators move into expanding areas of responsibility in industry, there is a need to better understand the causal interpretations and expectations of management to enhance both marketing and program evaluation efforts. People engage in causal attribution because they have a need to manage the environment in which they live. Attribution theory takes people's subjective understanding of their environment into account as it attempts to explain their perceptions of what causes the events that occur in the workplace (Crittenden, 1989). Managers and training professionals may not have the same priorities and focus, but individuals from both groups need to understand events that occur around them and to make judgments about the causes of intended and unintended results. Crittenden (1983, 1989) noted that observed differences in attribution style have been linked by researchers to group differences. This suggests the question: Are the differences in experience, focus, and goals of managers and training professionals of sufficient magnitude to lead to different information collection, causal attributions, or methods of making causal judgments?
If training professionals are to be valued and receive continued support from managers, they need to understand how key stakeholders attribute causality for results within organizations. Attribution research may hold promise for increased understanding of how professionals make causal attributions in a complex and imperfect business world where complete information is seldom available. Better understanding of the attribution process could have implications for (a) what knowledge and skills trainers need to increase the likelihood that training will meet the expectations of management; (b) what information evaluators collect and how it is analyzed and presented; and (c) how evaluators are trained.
The following research questions were designed to examine these issues.
- What types of information do managers and training professionals seek when they strive to attribute causality for behaviors and/or events?
- How do the causal attribution processes utilized by managers and training professionals differ when attributing causality for results within their organizations?
- How do the causal attribution processes described by managers and training professionals relate to causal attribution theory?
Causal attribution processes are not only means of providing the individual with perceptions of reality about the world, but also of maintaining effective control in that world (Kelley, 1972; Stryker & Gottlieb, 1981). Attribution theory rests on three assumptions: (a) that individuals attempt to determine the causes of both their own and others' behavior, (b) that individuals do not assign causes of behavior randomly but rather employ rules, and (c) that the causes attributed to behavior will influence subsequent behavior (Jones, 1979).
Both heuristic and systematic analysis processes have been proposed as representations of the attribution processes individuals use. Heuristic processing relies on fit with existing mental frameworks or schemata and has been described as a process not unlike accepting an overall conclusion (Chaiken, 1980). Individuals approach most attribution problems with beliefs about cause and effect relationships that provide frameworks within which relevant information can be fitted in order to draw reasonably good causal inferences (Kelley, 1972a). These frameworks (causal schemata or scripts) represent sets of expectations about how events will occur or others will act in normal circumstances (Hilton & Slugoski, 1986). Schemata can make stimuli appear more salient and provide rules for making judgments (Phillips & Lord, 1982), allowing the individual to proceed in a simple manner assuming that the most salient potential cause is the one behind another person's behavior (Kelley & Michela, 1980; Lord & Smith, 1983). Persons may begin by using limited information to develop naive hypotheses about the causes of behavior, then seek confirmation with a preference for information that allows simple rather than sophisticated confirmatory inferences (Hansen, 1980). When the experiences of individuals appear to conform to the beliefs and expectations contained within causal scripts or schemata, there is no need to search for additional information or explanations (Wong & Weiner, 1981).
Systematic analysis of information has been proposed as the process used to arrive at causal attributions when important problems are under consideration and adequate time for deliberation exists, or when information conflicts with preconceived schemata or scripts (Kelley & Michela, 1980). Kelley (1967) embraced the analysis of variance as a metaphor for lay attribution, wherein an effect was believed to be attributed to the possible cause with which it covaried over time. Such covariation could be based on the assumption that one cause affects the other or that some third factor induces the two to covary (Kelley, 1972). Numerous attribution theorists and researchers have proposed variations of Kelley's (1967) systematic analysis metaphor. Hewstone and Jaspars (1987) suggested that attributors consider all possible causes and analyze whether the causes/effects are present or absent, then interpret this covariation through logical rules based largely on those suggested by John Stuart Mill. Hilton and Slugoski (1986) assumed that individuals collect consensus, distinctiveness, and consistency information that is analyzed through a process based on both the principle of covariation and counterfactual and contrastive criteria. Lipe (1991) proposed that because counterfactual information is often difficult to obtain, two proxies take on special importance: covariation and information regarding alternative explanations. While counterfactual data are not explicitly sought in most cases, Lipe (1991) argued that counterfactual reasoning is implicitly used when covariation and information regarding alternative explanations are explored.
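The covariation logic that these theories elaborate can be sketched in code. The rule table below is a simplified, illustrative rendering of the familiar consensus/distinctiveness/consistency patterns; the study itself involved no such computation, and real attributors work with graded rather than binary information.

```python
# Illustrative sketch (not from the study): a simplified version of
# Kelley's (1967) covariation logic, using the classic pattern-matching
# rules for consensus, distinctiveness, and consistency information.
def covariation_attribution(consensus: bool, distinctiveness: bool,
                            consistency: bool) -> str:
    """Return the locus of causality suggested by an information pattern.

    consensus       -- do other people respond the same way to this stimulus?
    distinctiveness -- does the actor respond this way only to this stimulus?
    consistency     -- does the actor respond this way across occasions?
    """
    if not consistency:
        # Low consistency points to transient circumstances.
        return "circumstances"
    if consensus and distinctiveness:
        # High/high/high: the stimulus (environment) is the likely cause.
        return "environment"
    if not consensus and not distinctiveness:
        # Low/low/high: something about the person is the likely cause.
        return "person"
    # Mixed patterns implicate both person and environment.
    return "person and environment"
```

Under this sketch, a result that many observers report, that appears only in the presence of one candidate cause, and that recurs over time would be attributed to the environment rather than to any individual.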
This study was designed to explore causal attribution in the context of how managers and training professionals decide what causes events to occur in the workplace. Purposefully selected employees from two primary health care organizations were identified for inclusion. These two organizations were selected because they were similar in size, located in communities with similar demographics, and engaged in the same type of industry; each had several training professionals in its educational services department; and each had implemented recent change initiatives (related to Total Quality Management implementation efforts) that were of sufficient magnitude to have involved several training professionals and managers and that produced results for which training was one of several possible causes. An informant from each organization was contacted who could identify (a) recent outcomes or results that had several possible causes, one of which was training; and (b) managers and training professionals who were familiar with each outcome or result and its potential causes. Discussions with the informants led to the creation of a list of outcomes or results that could serve as the context for subsequent interviews with employees of the respective companies.
The researcher selected two results from the list for each site (see Table 1). Five managers at the director level from the first organization and four from the second organization met the selection criteria. Four trainers from the first organization and five from the second organization met the criteria to participate in the study.
Table 1

Results Selected for Interview Context by Site

| Site | Results |
| --- | --- |
| Site A | Change in pre-employment interview procedures and a change in leadership behaviors to more closely reflect official organizational values |
| Site B | Change in the level of employee acknowledgment and a change in the level of employee empowerment |
A customized interview schedule was devised for each site by placing the questions in the context of the identified events and results. The interview schedules were designed to get participating managers and trainers to recount their perceptions of the two previously identified specific results, the attributed causes, and their best recollection of how they arrived at those attributions. These interview schedules were divided into four sections, with each section focusing on a different type of information. The first section consisted primarily of demographic questions about educational background and work history. The second section first asked the participants to verify that they had knowledge of the results in question and had attributed causality for those results and then asked the participants to answer a series of questions about factors that causal attribution theorists had identified as potential influences on the process a participant uses when attributing causality. Examples of these factors included the participant's perception of (a) himself/herself as actively involved or as an observer, (b) the result as expected or unexpected, (c) the success or failure of the initiative, and (d) the importance of making a causal judgment. The third section asked the participants to attribute causality for each of the two specific results. The fourth section asked the participants to recall aspects of the mental processes they used to determine their causal attributions for the specific results. This section included attention to specific process-related questions about aspects of making causal judgments through use of either heuristic analysis or systematic analysis. 
These aspects included references to (a) the participant's prior experience and/or knowledge, (b) the temporal order in which information was received or observed, (c) reliance on salient information, (d) the consideration of multiple possible causes, (e) missing or unobtainable information, (f) the consideration of information that either confirms or refutes the participant's hypothesis, (g) distinctiveness, (h) consistency, and (i) consensus as sources when attributing causality for results. All interviews were tape recorded for later analysis.
Data analysis was conducted in two stages. Stage 1 involved use of a coding scheme derived from factors identified in a diverse group of causal attribution theories. This coding scheme was developed to aid in analyzing the types of attributions that were identified and the relationships between the processes reported and those that were described in attribution theory. Stage 2 involved an inductive analysis procedure that helped further explore the attribution processes reported by the participants.
Before analysis could begin, literal transcriptions of the recorded responses were created. Once transcribed, the data were reviewed for accuracy by listening to the tapes while reading the transcriptions. Discrepancies were corrected, each line of the transcripts was numbered, and the files were printed with a wide right margin to aid in subsequent coding and analysis processes.
The Ethnograph (Seidel, Kjolseth, & Seymour, 1988) is a computer software tool designed to assist with the analysis of text data. Once the researcher has initially coded the data while reading and rereading the text, the tool can search and retrieve the coded material in any combination the researcher desires. This flexible search-and-retrieval feature aids in locating and identifying patterns, categories, and themes within the data. It was used in stages 1 and 2 of the analysis, enabling the researcher to code and recode data efficiently, then search and sort coded data as desired, allowing themes and patterns to emerge from the text.
The first stage of this analysis was designed to explore which, if any, of the attribution theories explained the accounts given in these interviews with managers and training professionals. This analysis was undertaken with the goal of gaining insight into the processes that managers and training professionals used when faced with the need to identify causes for important events within a complex business environment.
The interview data were coded and analyzed using techniques adapted from content analysis (Weber, 1990) to search for patterns of attributional factors that have been identified in the causal attribution theories previously discussed. The following coding system was devised to aid in identifying patterns in the presence of characteristics of various attribution theories within the text of interviews conducted with managers and training professionals:
- Distinctiveness - Reference to events that occur only when a cause is present and not in its absence.
- Consensus - Reference to like attributions or experiences by others.
- Consistency - Reference to events that always occur when a given cause is present.
- Confirmation - Reference to attempts to confirm a hypothesis without considering possible alternative causes.
- Missing Data - Reference to missing information deemed helpful in determining the cause of an event or result.
- Salience - Reference to possible cause(s) that stand out from the background of events and possible cause(s).
- Schemata - Preconceived expectations, descriptions of causal relationships previously observed, experienced, or learned.
- Temporal Order - References to time or order of events.
- Person as Cause - The source of the cause rests in a person(s).
- Environment as Cause - The source of the cause rests in the environment.
- Person and Environment - The source of the cause rests in both a person(s) and the environment.
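The code-and-retrieve workflow that such a scheme supports can be illustrated with a short sketch. The segment texts below are paraphrased from quotations that appear later in this article, but the code assignments are invented for illustration, and The Ethnograph's actual file formats and commands are not reproduced here.

```python
# Hypothetical sketch of a code-and-retrieve workflow like the one The
# Ethnograph supported. Code names abbreviate the study's coding scheme;
# the code assignments themselves are invented for illustration.
CODES = {"DISTINCT", "CONSENSUS", "CONSIST", "CONFIRM", "MISSING",
         "SALIENT", "SCHEMATA", "TEMPORAL", "PERSON", "ENVIRON", "BOTH"}

# Each coded segment: (speaker, excerpt, set of applied codes).
segments = [
    ("Manager #1",
     "any past experience ... apply that in the future",
     {"SCHEMATA"}),
    ("Training Professional #7",
     "I've thought about what other people have thought",
     {"CONSENSUS"}),
    ("Manager #3",
     "with them jiving I probably wouldn't investigate",
     {"CONSENSUS", "CONFIRM"}),
]

def retrieve(segments, wanted):
    """Return the segments tagged with all of the requested codes."""
    assert wanted <= CODES, "unknown code requested"
    return [s for s in segments if wanted <= s[2]]

# Retrieve every segment coded with both CONSENSUS and CONFIRM.
hits = retrieve(segments, {"CONSENSUS", "CONFIRM"})
```

Retrieving every segment tagged with a given combination of codes is the operation that allows coded material to be compared across participants so that patterns and themes can emerge.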
After the initial coding was conducted by the researcher, a graduate student at the University of Illinois was trained in the coding process and coded 20% of the interview transcriptions. A comparison of coding between the researcher and the trained graduate student yielded an intercoder reliability of 93%.
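The reported figure is consistent with a simple percent-agreement measure, the most common basic reliability check in content analysis, though the study does not state its exact formula. The sketch below uses invented code assignments; note that percent agreement does not correct for chance agreement, as a measure such as Cohen's kappa would.

```python
# Sketch of a simple percent-agreement intercoder reliability check.
# The code assignments below are invented for illustration only.
def percent_agreement(coder_a, coder_b):
    """Share of segments to which both coders assigned the same code."""
    assert len(coder_a) == len(coder_b) and coder_a
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

researcher = ["SCHEMATA", "CONSENSUS", "SALIENT", "CONFIRM", "TEMPORAL"]
student    = ["SCHEMATA", "CONSENSUS", "SALIENT", "MISSING", "TEMPORAL"]
percent_agreement(researcher, student)  # 4 of 5 segments agree -> 80.0
```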
Once there was evidence that intercoder reliability was within acceptable limits, the coded data were analyzed and compared against patterns suggested by causal attribution theories based upon heuristic processes and systematic information processes. Heuristic process theories suggest that when persons possess preconceived notions of causal relationships, they often make causal attributions based upon salient, limited, and/or simplified information. These individuals may produce naive hypotheses and seek confirmation of those hypotheses, often without considering the plausibility of possible alternative causes. Thus, references to schemata, salient causes, confirmation of hypotheses without consideration of alternative possibilities, and missing information were taken as evidence that heuristic analysis was occurring.
Systematic attributional analysis theories suggest that attributors seek to identify how factors of distinctiveness, consistency, and consensus covary over time to determine whether the cause should be attributed to persons, the environment, or a combination of the two. Thus, references to consideration of these three factors were taken as evidence that systematic analysis may have occurred.
The process used in stage 2 looked for patterns and themes that might emerge from the text but that may not have been anticipated in the theories that were the basis of the codes used in stage 1. This stage of the analysis was useful because it supplemented the findings obtained through the analysis component in stage 1. It provided the opportunity to examine the text from a different perspective and with different constraints.
The inductive component focused on how the participants determined why specific events occurred. An inductive analysis process has been described as one that allows patterns, categories, and themes to emerge from within the interview data itself (Seidman, 1991). Seidman (1991) suggested that the researcher approach interview transcripts "with an open attitude, seeking what emerges as important and of interest from the text" (p. 89). The coding and analysis in this stage were not linear but rather overlapping activities that often occurred simultaneously.
The coding of the data required the researcher to exercise judgment about what was important in the transcript (Seidman, 1991). The inductive analysis process (Lincoln & Guba, 1985; Patton, 1980) required careful reading and rereading of the interview data to allow patterns, categories, and themes to emerge from within the data itself. Through reduction of the material the interviewer began to interpret it (Seidman, 1991). Use of inductive analysis procedures allowed exploration of the attribution processes used by managers and training professionals without the limitations and constraints that would be imposed if only more restricted analysis procedures were employed.
Initially, utilizing the format suggested by The Ethnograph (Seidel et al., 1988) software, the data were coded by marking all segments that described aspects of the causal attribution process. The researcher then organized these coded segments into categories that emerged upon reading and re-reading the text. As this analysis continued, additional relationships between the categories began to emerge. These categories were labeled by the researcher as he continued to look for patterns and themes that emerged from within the categories (Lincoln & Guba, 1985; Patton, 1980; Seidman, 1991).
The Ethnograph (Seidel et al., 1988) search and sort features facilitated the researcher's shift from comparing coded segment with coded segment to comparing categories with properties of the categories that resulted from earlier comparisons (Glaser & Strauss, 1967; Lincoln & Guba, 1985). Once all the transcripts had been coded, a file was created for each identified thematic category. The coded segments, with reference markers, were then copied into the appropriate files and printed for further review. Following Patton's (1980) suggestion, the researcher looked not only for relationships suggested by the data but also for any exceptions and alternative explanations that might appear. The researcher then considered the weight of evidence and looked for the "best fit between data and analysis" (Patton, 1980, p. 327).
Information Used for Causal Attribution
Analyses of the interview data identified differences in the causal attributions of managers and training professionals but not in the processes they used to make those attributions. Several common sources of information that appear key to causal attribution emerged from this analysis: conversation, personal experience, observations of personnel and results, and quantitative data. All managers and training professionals reported the use of various forms of informal conversation and observation to identify why changes had or had not occurred. The most apparent difference between the two groups lay in two sources frequently reported by trainers but never mentioned by managers: conversations with other training professionals and responses to surveys and focus groups. Training professionals made frequent reference to the perceptions of other training professionals, both inside and outside the organization, as sources of information used in making causal judgments. One possible explanation is that managers do not value the perceptions of training professionals; however, it is also possible that because training departments are somewhat isolated from other operational units, managers simply have few opportunities for informal conversation with training professionals.
Participants did not object to making causal judgments with incomplete data or data collected for purposes other than making causal judgments. Even though data from surveys and focus groups were frequently cited by training professionals as important in making causal judgments, only one training professional reported collection of these types of data with the intent of looking at historical data to make causal judgments. There were no references by managers to the use of this information. One manager did suggest that data collected as a normal part of the Total Quality Management process might provide insight into why results occurred but did not say he/she had used that data. In all other cases data collection references were process- rather than results-oriented. Data collection was typically intended for formative improvement of future training components, comparison of programs offered against those offered by other organizations, or for use in developing future programs.
Managers were more results-oriented than training professionals. When asked how they determined the causes of the changes (results) they had just described, almost half the managers stated that the results they observed were the most important information that helped them make causal judgments. As one manager stated, "The proof was in the pudding." The managers typically had direct knowledge of the results obtained in their departments but incomplete knowledge of results across the organization. In contrast, most trainers reported little direct knowledge of the results that had been achieved but were often aware of training-related processes that had occurred within the organization and attitudes of employees about those processes, as well as the desired results those processes were intended to facilitate.
Each group of participants had different responsibilities and access to different data. Gottlieb and Ickes (1978) theorized that differing causal inferences may be a function of the information communicated to subjects. The findings of this study support this hypothesis. It appears that the differences in how managers and training professionals attributed causality may not have been in the mental processes they used but rather in the underlying assumptions that preceded and set parameters for those processes. One example concerns assumptions about the role of training in causing the results that provided the context for this study. There was an often implicit and sometimes explicit message in the comments of the training professionals that if training is "carried through," the desired results could be expected to occur. Trainers often identified training as the sole or primary cause of the positive results that had been achieved, even though training was never specifically mentioned in any of the interview questions. In contrast, few of the managers interviewed mentioned training at all, much less singled it out as a cause of the results.
Causal attribution theory states that when the causes of events are considered very important, individuals consider consistency, distinctiveness, and consensus data as the basis for a systematic analysis that allows them to attribute causality (Hilton & Slugoski, 1986; Kelley, 1967, 1972a; Orvis, Cunningham, & Kelley, 1975). Even though most participants classified each of the results discussed as important or very important, many gave present- and future-tense answers when asked about the past. Only one participant described having collected information for the sole purpose of determining why an event had occurred, and that training professional had been responsible for the design, implementation, and evaluation of an acknowledgment program that had failed. When describing her efforts to make causal judgments about the failure of the program, this trainer reported collection and consideration of consensus, consistency, and distinctiveness information, but she also reported reliance on prior knowledge against which she compared those data:
I thought of a lot of different reasons why and was exploring why maybe before we knew why the program for acknowledgment didn't work, maybe it wasn't designed correctly but I went through the steps of the program with a number of different people to verify would this work and I actually saw it work so I know that the program was designed ok ... and it worked, there was nothing wrong with mechanism ... it was publicized ok, I don't think there were any other major factors because I pretty much hit all the bases ... there were a couple of places in the organization where the program was working and people liked it, those were anomalies, those were atypical, I looked at the individual areas where the program was working and looked at the sub-culture in those areas and in those areas where the program worked there seemed to be better morale and employee-management relations were much healthier, lines of communication were much more open and perhaps the management acknowledgment and empowerment of employees were greater ... and they [the employees] were very quick to identify that their places were different than others. (Training Professional #8, personal communication, November, 1993)
Causal Attribution Processes
Every participant in this study relied heavily on causal schemata derived from individual experience, observation of cause-effect relationships, and both implicit and explicit teachings about the nature of the world (Kelley, 1972a). The heuristic processes could take the form of reliance on causal schemata that described cause-effect relationships linked to contexts or events, or reliance on scripts that described normal behavior for "types" of people in the presence of a specific stimulus. The participants often described the use of schemata to make causal judgments even when they described the result and/or the act of making a causal judgment as very important. As illustrated in the excerpts that follow, when participants described scripts, those scripts were often linked to early observations of normal behavior, sometimes dating back to childhood, adolescent, or early adult experiences.
any past experience, you are either going to take something from it or you should I believe and learn from it and be able to apply that in the future. (Manager #1, personal communication, November, 1993)
if I saw what looked to be a parallel situation here I would go back first and say, well, there was a cause and effect before, people act in certain ways with different kinds of stimuli ...is this a comparable situation, does the person I'm dealing with here who has these similar behaviors to someone else from ten years ago in a similar situation, how similar are these people really ... I have to find out how different and in what respects it is different. (Manager #5, personal communication, November, 1993)
if you treat people in a certain way, you're likely to get a certain reaction from those people and I think that's apparent in the literature and I think that's apparent in practice and I've learned both ways, if you want to get this reaction from people treat them in this way. (Training Professional #8, personal communication, November, 1993)
Participants sought sufficient cause, not proof. If the expected result was observed, participants seldom sought or considered alternative explanations. This is consistent with the finding that when experiences conform to expected results, there is no need to search for alternative explanations (Wong & Weiner, 1981). Similarly, as Hansen (1980) theorized, when participants believed strongly in a hypothesis or when expected results were observed, they showed a preference for simple confirming information that would not contradict their expectations. Both managers and trainers talked about recognizing that other individuals sometimes hold alternative causal beliefs. When alternative causal schemata were acknowledged, attributors could either discount the alternatives or, as described in the following comments, consider them, often through group discussion and analysis.
when I have two conflicting stories its like I've got to find the truth, sometimes there's not a truth to it, both stories are true in the minds of the beholder. (Manager #3, personal communication, November, 1993)
I've thought about what other people have thought might be and I've had a few discussions within this department, we've talked about this, oh yes. (Training Professional #7, personal communication, November, 1993)
with this job there's a lot of discussion about that [the consensus of experiences among the staff] particularly with the CEOs and [name] and myself, we discuss it a lot, why did we have success with this group, what do we need to do different with this group. (Training Professional #1, personal communication, November, 1993)
Causal attribution theory has consistently explored the process of making causal judgments as an individual cognitive process. The findings of this study indicated that when multiple causal hypotheses were being considered, making causal judgments could be a group process. When it was necessary to identify alternative possible causes, alternatives were identified and assessed, then the relative strength or fit was considered through interaction with other individuals.
One manager, discussing attempts to change leadership behavior to more closely match organizational values, said that she had listened to other people's ideas of why events occurred, explained why she thought as she did, then looked at all the ideas for "best fit with the data she had observed and discussed." A training professional described similar discussions among training professionals looking for plausible explanations but with an eye toward identifying possible interventions.
we spend lots of time going, I think, maybe, because, I don't know, but its a guess, and we do a lot of that, sometimes its things that require action, sometimes its fun to talk about, so on those things that require action, you kind of say, well this sounds just a little better than this so let's try it. (Training Professional #6, personal communication, December, 1993)
Most managers and training professionals who reported use of knowledge- or experience-based schemata to make causal judgments denied that they had looked for non-confirming information about the conclusions they had reached. The following remarks were representative of these responses.
I only look for things that support my conclusions and when I talk to people that have similar situations to find out, I network with those people that hold the same type of mind sets that I do and I'm not sure I pay attention to those things that refute, they're there but I'm not sure that I pay attention to those, I discount them. (Training Professional #2, personal communication, December, 1993)
I don't really think I expect to see anything different. (Training Professional #9, personal communication, December, 1993)
Conversation was the most frequently reported source of data when participants made causal judgments. When this source of information was used, the perceived bias of the communicator was identified as an important factor in assessing the strength of information or of an alternative hypothesis. This perception was based on such factors as whether the information was solicited or unsolicited and on the communicator's reputation. As theorized, when persons collected and used consensus data they often weighed their perception of bias in the communicator; people change their opinions toward an advocated position as a result of their inferences about why the communicator has taken that position (Kelley, 1967). Further, when participants perceived that a communicator had represented information accurately and without bias, they could make causal judgments quickly, with less detailed analysis. One manager described her mental processes when confronted by questions of communicator bias:
mentally I'm thinking about who they are, do I generally trust what they say and are they pretty accurate in what they say, are they living in a fairy world or are they a person who pretty much knows what's going on, how well do I know them, if it's somebody that's a department head or somebody I don't know at all I would probably just register it as a piece of information, a fact that they've told me, no values judgment on it at all but if I really know that person and I really trust what they say ... I'm going to believe them. (Manager #9, personal communication, December, 1993)
A manager discussing an attempt to change pre-employment interview practice noted that if she had encountered conflicting consensus data she would have needed to collect further information before making a causal judgment.
the stories have been similar, that's come from a variety of ranks ... with them jiving I probably wouldn't investigate as much as if I had two conflicting stories. (Manager #3, personal communication, November, 1993)
Causality for events within complex contexts is extremely difficult to measure. Trainers and educators believe intuitively that under some circumstances training can increase efficiency and profits, but realistically, training alone can almost never determine profit or loss. Training is just one of many interrelated factors (Hassett, 1992), and precisely separating the effects of training from those of other internal and external factors can be very difficult. In spite of this frequent lack of proof of causality, managers often find themselves having to make decisions that require them to attribute causality for effects without the benefit of proof. It was stated early in this discussion that attribution research may hold promise for increased understanding of how managers and training professionals make decisions about cause-result relationships (causal attribution) in a complex environment where complete information is seldom available. The results of this study, generally consistent with heuristic causal attribution theories, support the belief that for many persons "perception is reality," particularly when we examine individuals' perceptions of what causes the events that happen to them and the processes they use when developing perceptions of why behavior or events occurred. Heuristic attribution processes may allow judgments to be made with very limited data, although the price of this efficiency may be more frequent incorrect causal judgments.
This observation, coupled with the apparent lack of communication between trainers and managers when making causal decisions, presents both risk and opportunity. The risk is that, in an environment where the most common source of causal data is conversation, this lack of communication may decrease the probability that training will achieve the desired results, which in turn may increase the probability that investment in training will be reduced or discontinued in times of economic downturn. The opportunity, in this age of management-by-fact, lies in using evaluation and other data collection techniques focused not on what was learned but on the causal relationships between training and the results achieved, thereby assisting managers in making decisions based on data rather than solely on heuristically interpreted schemata.
The literature on causal attribution has often described causal judgment making as an individual process, but the findings of this study suggest that it may often be a group process. This finding presents one more reason to ensure that personal interaction and group decision-making skills are an integral part of technical teacher and trainer education experiences. Skills in these areas may not only provide important instructional opportunities for training professionals but may also help secure their inclusion in decision-making discussions.
Trainers and managers as causal attributors need to understand the effects of organizational history, experiences, and prior knowledge on perceptions of causality. Consistency and accuracy in understanding managers' causal judgments may be enhanced by expanding the information to which trainers have access. Training could be more effectively designed and presented if adequate feedback data are made accessible when important causal judgments must be made. The apparent lack of communication between trainers and managers when making causal attributions about training related results is a serious concern. If training professionals seek to increase credibility and impact, they must not only learn to apply the skills and techniques necessary to evaluate both process and results in complex organizations but also seek opportunities to practice these skills.
Brown is Assistant Professor, Department of Industrial and Engineering Technology, Murray State University, Murray, Kentucky.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.
Goldstein, I. L. (1989). Critical training issues: Past, present, and future. In I. L. Goldstein (Ed.), Training and development in organizations (pp. 1-21). San Francisco: Jossey-Bass.
Gottlieb, A., & Ickes, W. (1978). Attributional strategies of social influence. In J. H. Harvey, W. Ickes, & R. F. Kidd (Eds.), New directions in attribution research (Vol 2, pp. 261-296). Hillsdale, NJ: Erlbaum.
Hassett, J. (1992). Simplifying ROI. Training, 29(9), 53-57.
Kelley, H. H. (1967). Attribution theory in social psychology. In D. Levine (Ed.), Nebraska Symposium on Motivation 1967 (pp. 192-238). Lincoln: University of Nebraska Press.
Kelley, H. H. (1972a). Attribution in social interaction. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 1-26). Morristown, NJ: General Learning Press.
Kelley, H. H. (1972b). Causal schemata and the attribution process. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 151-174). Morristown, NJ: General Learning Press.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.
Lord, R. G., & Smith, J. E. (1983). Theoretical, information processing, and situational factors affecting attributional theory models of organizational behavior. Academy of Management Review, 8(1), 50-60.
Orvis, B. R., Cunningham, J. D., & Kelley, H. H. (1975). A closer examination of causal inference: The roles of consensus, distinctiveness, and consistency information. Journal of Personality and Social Psychology, 32(4), 605-616.
Patton, M. Q. (1980). Qualitative evaluation methods. Beverly Hills, CA: Sage.
Seidel, J. V., Kjolseth, R., & Seymore, E. (1988). The Ethnograph [Computer program]. Littleton, CO: Qualis Research Associates.
Seidman, I. E. (1991). Interviewing as qualitative research. New York: Teachers College Press.
Stryker, S., & Gottlieb, A. (1981). Attribution theory and symbolic interactionism: A comparison. In J. H. Harvey, W. Ickes, & R. F. Kidd (Eds.), New directions in attribution research (Vol. 3, pp. 425-458). Hillsdale, NJ: Erlbaum.
Weber, R. P. (1990). Basic content analysis. Newbury Park, CA: Sage.
Reference Citation: Brown, D. C. (1996). Why ask why: Patterns and themes of causal attribution in the workplace. Journal of Industrial Teacher Education, 33(4), 47-65.