Academic Program Evaluation: Lessons from Business and Industry
Mike A. Boyle
University of Louisville

Richard Crosby
University of Louisville

Perhaps no topic is more relevant to an organization's success than effectiveness (Copeland, Koller, & Murrin, 1995; Watson, 1993). In business and industry, a positive financial bottom line is the most desirable outcome (Phillips, 1994; O'Hara, 1995). Higher education, in contrast, has traditionally defined success in a number of ways, including student numbers, recruitment data, and retention percentages (Bryson, 1995; Lucas, 1994). While these measures are important, the survival of programs may depend on presenting more program-related data from a variety of sources. This may require an evaluation system that considers (a) student satisfaction, (b) learning proficiency, (c) application skills, and (d) overall program effectiveness.
Some years ago, Donald Kirkpatrick developed a systematic approach to evaluation that includes four levels of measures (Kirkpatrick, 1959a, 1959b, 1960a, 1960b): (1) the feelings students have about the program (Reaction), (2) the degree to which they learned the required material (Learning), (3) their ability to transfer training to the work site (Application), and (4) the impact of training on the organization's bottom line (Results). A recent study indicates that 94% of the companies surveyed use some form of the Kirkpatrick system to evaluate their training and development programs (Bassi, Benson, & Cheney, 1996). While numerous attempts have been made to develop alternatives (e.g., Holton, 1996a, 1996b; Kaufman & Keller, 1994), the Kirkpatrick model remains a standard for business and industry (e.g., Kirkpatrick, 1996a, 1996b; Alliger & Janak, 1989; Cascio, 1987). Perhaps the major reason for the model's resilience is that it works well for evaluating the effectiveness of both technical and soft-skills training, and it is particularly well suited to evaluating the various quality initiatives. Kirkpatrick's work, in principle, seems equally appropriate for evaluating programs of study at universities and, if adopted, yields the serendipitous benefit of using criteria and terminology parallel to those accepted by business and industry.
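For readers who find structure easier to see in code, the four levels can be pictured as a small hierarchy of measures. The Python sketch below is our own minimal illustration, not part of Kirkpatrick's system; the names Level, Measure, and summarize, and the normalized 0-1 score scale, are assumptions made for the example.

```python
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    """Kirkpatrick's four evaluation levels, in ascending order."""
    REACTION = 1     # how students feel about the course or program
    LEARNING = 2     # how well they mastered the required material
    APPLICATION = 3  # how well they transfer training to the work site
    RESULTS = 4      # impact on the organization's bottom line


@dataclass
class Measure:
    """One evaluation measure tied to a specific level."""
    level: Level
    instrument: str  # e.g., "end-of-course survey", "final exam"
    score: float     # normalized to 0.0-1.0 for comparability


def summarize(measures: list[Measure]) -> dict[Level, float]:
    """Average the normalized scores collected at each level."""
    by_level: dict[Level, list[float]] = {}
    for m in measures:
        by_level.setdefault(m.level, []).append(m.score)
    return {lvl: sum(s) / len(s) for lvl, s in by_level.items()}
```

The point of the sketch is simply that the four levels are distinct measures of the same program, collected with different instruments, and that no single number stands in for all four.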
Level One-Reaction
Level one measures students' like or dislike of a class or program by asking, listening, or administering evaluation forms at the conclusion of a course. These evaluations provide administrators and instructors with valuable insights for course improvement in areas where student input is the best available data. The results could include increased course popularity and enrollment, and might help educational programs achieve higher ratings among students and faculty alike.
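In practice, level one data reduce to little more than item averages across returned forms. The sketch below assumes a hypothetical five-point scale and invented item names; it reports the weakest-rated items first so they can be targeted for course improvement.

```python
from statistics import mean

# Each returned form maps a survey item to a rating of 1 (poor) to 5 (excellent).
# The items and ratings here are invented for the example.
responses = [
    {"content relevance": 5, "instructor clarity": 4, "pace": 3},
    {"content relevance": 4, "instructor clarity": 5, "pace": 4},
    {"content relevance": 3, "instructor clarity": 4, "pace": 2},
]

# Average each item across all forms, then list lowest-rated items first.
item_means = {item: mean(r[item] for r in responses) for item in responses[0]}
for item, score in sorted(item_means.items(), key=lambda kv: kv[1]):
    print(f"{item}: {score:.2f} / 5")
```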
Level Two-Learning
Midterm and final examinations, quizzes, and project or portfolio assessments are forms of level two evaluation that are common in academic settings. These are used to determine the knowledge, attitudes, and skills the learner has attained in a specific course. In business and industry, however, level two evaluations are less common. This difference is logical and expected because industry is more focused on on-the-job performance and often faces constraints that prevent evaluation in the classroom (Erickson, 1990).
Because business and industry tend to use level one instruments to evaluate classroom training while universities conduct level two measurements, very different perceptions of a successful program can arise. How well a student likes a course (level one) does not necessarily imply that he or she has learned what was intended (level two). Great care must be exercised before assuming that success at one level will equate to success at any other level of evaluation.
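One simple safeguard is to pair the level one and level two scores for the same students and check whether they move together. The sketch below uses invented scores and Python's statistics.correlation (available in version 3.10 and later).

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Paired scores for the same seven students; the numbers are invented
# to illustrate that the two levels can diverge.
satisfaction = [4.8, 4.5, 4.9, 3.2, 4.7, 2.9, 4.1]  # level one, 1-5 scale
exam_scores = [62, 71, 58, 88, 66, 91, 74]          # level two, 0-100 scale

r = correlation(satisfaction, exam_scores)
print(f"reaction vs. learning: r = {r:.2f}")
# A weak or negative correlation is a warning not to treat a popular
# course as a well-learned one, or vice versa.
```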
Level Three-Application
Level three evaluations determine how well students transfer the knowledge and skills they have learned into actual workplace performance. A key to success at this level is a clear determination of exactly what is to be evaluated and where, how, and when the evaluation takes place. In business and industry, students can apply what is learned and be measured for competency in actual job settings. At universities, most courses do not lend themselves to this type of evaluation because the focus is on providing a strong knowledge foundation that will later be used for skills development. Practicums, co-op and work experience programs, and internships, however, provide a powerful medium for evaluating students within the context of real work settings.
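Within such settings, a level three evaluation often amounts to a weighted competency checklist completed by a site supervisor. The sketch below shows one hypothetical form such a checklist might take; the competencies, weights, and observations are all invented for the example.

```python
# Competency items and weights a program might define in advance.
rubric = {
    "sets up equipment safely": 0.25,
    "follows the documented procedure": 0.25,
    "troubleshoots without assistance": 0.30,
    "documents the work accurately": 0.20,
}

# A site supervisor's observations for one intern.
observed = {
    "sets up equipment safely": True,
    "follows the documented procedure": True,
    "troubleshoots without assistance": False,
    "documents the work accurately": True,
}

# Weighted share of competencies actually demonstrated on the job.
score = sum(weight for item, weight in rubric.items() if observed[item])
print(f"application score: {score:.0%}")  # -> application score: 70%
```

Defining the items and weights before the placement begins answers the what, where, how, and when questions in advance, which is the key to this level.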
Level Four-Results
Level four evaluations emphasize the contributions of training to the organizational mission and objectives. Higher education faces different challenges than business and industry because its primary mission is to meet the perceived and real needs of many stakeholders (e.g., faculty, students, employers, community groups, parents). Increasingly, administrators must prove that programs are meeting these needs and making an impact commensurate with the monetary expenditure. When issues such as time, effort, resources, and the availability of data are considered, this approach to evaluation can be very challenging.
Many programs in higher education have been eliminated because of low enrollment, outdated equipment, or a lack of faculty skills. In other cases, good programs have been eliminated because there were no data to prove what the program had accomplished. A well-conceived and implemented level-four evaluation plan can reveal program weaknesses before they become problematic, as well as provide a strong rationale for continuance.
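As an illustration of the kind of evidence a level-four plan might assemble, the sketch below computes a few program-impact indicators from invented figures. The indicators chosen (cost per graduate, placement rate, employer rating) are our own assumptions, not measures prescribed by the model.

```python
# Illustrative level-four indicators for one program year; every figure
# here is invented for the example.
annual_cost = 240_000.00   # total program expenditure, dollars
graduates = 30
employed_in_field = 26     # graduates placed in related jobs within a year
employer_rating = 4.3      # mean of a hypothetical 1-5 employer survey

cost_per_graduate = annual_cost / graduates
placement_rate = employed_in_field / graduates

print(f"cost per graduate: ${cost_per_graduate:,.0f}")  # $8,000
print(f"placement rate:    {placement_rate:.0%}")       # 87%
print(f"employer rating:   {employer_rating:.1f} / 5")
```

Even a handful of such figures, tracked year over year, gives administrators something concrete to weigh against the monetary expenditure when a program's continuance is questioned.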
Conclusions and Recommendations
In the not-so-distant past, educational programs were somewhat isolated from inspection. This is no longer the case: declining student populations and shrinking budgets increasingly require programs to justify their existence. The only way to keep higher education programs current and viable is to use appropriate evaluations and make the necessary corrections. Kirkpatrick's four-level evaluation system has become a standard for business and industry because it provides comprehensive data to support training programs. If adapted for use in academic programs, this system will provide data to ensure that our students (1) like the program, (2) are learning the material, (3) are able to apply the material in work settings, and (4) have the correct competencies to compete in the job market. In short, implementing the Kirkpatrick four-level evaluation system could go a long way toward ensuring the success and reputation of industrial and technical teacher education programs.
References
Alliger, G., & Janak, E. (1989). Kirkpatrick's levels of training criteria: Thirty years later. Personnel Psychology, 42(3), 331-342.
Bassi, L. J., Benson, G., & Cheney, S. (1996). The top ten trends. Training and Development, 50(11), 28-42.
Bryson, J. (1995). Strategic planning for public and nonprofit organizations. San Francisco: Jossey-Bass.
Cascio, W. (1987). Applied psychology in personnel management (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Copeland, T., Koller, T., & Murrin, J. (1995). Valuation: Measuring and managing the value of companies (2nd ed.). New York: John Wiley.
Erickson, P. (1990). Evaluating training results. Training and Development Journal, 44(1), 57-59.
Holton, E., III. (1996a). The flawed four-level evaluation model. Human Resource Development Quarterly, 7(1), 5-21.
Holton, E., III. (1996b). Final word: Response to reaction to Holton article. Human Resource Development Quarterly, 7(1), 27-29.
Kaufman, R., & Keller, J. (1994). Levels of evaluation: Beyond Kirkpatrick. Human Resource Development Quarterly, 5(4), 371-380.
Kirkpatrick, D. L. (1959a). Techniques for evaluating training programs. Journal of ASTD, 13(11), 3-9.
Kirkpatrick, D. L. (1959b). Techniques for evaluating training programs: Part 2-Learning. Journal of ASTD, 13(12), 21-26.
Kirkpatrick, D. L. (1960a). Techniques for evaluating training programs: Part 3-Behavior. Journal of ASTD, 14(1), 13-18.
Kirkpatrick, D. L. (1960b). Techniques for evaluating training programs: Part 4-Results. Journal of ASTD, 14(2), 28-32.
Kirkpatrick, D. L. (1996a). Great ideas revisited. Training and Development Journal, 50(1), 54-59.
Kirkpatrick, D. L. (1996b). Invited reaction: Reaction to Holton article. Human Resource Development Quarterly, 7(1), 23-25.
Lucas, A. (1994). Strengthening departmental leadership. San Francisco: Jossey-Bass.
O'Hara, P. (1995). The total business plan. New York: John Wiley.
Phillips, J. (Ed.). (1994). Measuring return on investment. Alexandria, VA: American Society for Training and Development.
Watson, G. (1993). Strategic benchmarking. New York: John Wiley.
Reference Citation: Boyle, M. A., & Crosby, R. (1997). Academic program evaluation: Lessons from business and industry. Journal of Industrial Teacher Education, 34(3), 81-85.