Journal of Industrial Teacher Education

Volume 32, Number 1
Fall 1994



An Evaluation of Seventeen Leadership Development Programs for Vocational Educators

Jerome Moss, Jr.
University of Minnesota
Gary W. Leske
University of Minnesota
Qetler Jensrud
University of Minnesota
Thomas H. Berkas
Evangelical Lutheran Church of America

During the first five years of its operation, the National Center for Research in Vocational Education at Berkeley (NCRVE) supported a program of research, development, and service designed to improve the leadership capabilities of vocational educators. The work resulted in a number of products and services. A conceptualization of leadership and leadership development was formulated (Moss & Liang, 1990) that envisions leadership development as improving those attributes (characteristics, knowledge, skills, and values) that predispose individuals to perceive opportunities to behave as leaders, to grasp those opportunities, and to succeed in influencing group behaviors in a wide variety of situations and professional roles. Success as a leader in vocational education is conceived primarily as facilitating the group process and empowering group members. Two instruments were created to define the conceptualization operationally. The Leader Effectiveness Index (LEI) assesses the extent to which individuals are performing successfully as leaders (i.e., achieving six generic tasks that reflect the desired facilitative, empowering role). The Leader Attributes Inventory (LAI) measures the degree to which individuals possess each of 37 attributes that predispose successful leadership performance. Both instruments have been shown by several studies to have high reliability and validity (Moss, Finch, Lambrecht, & Jensrud, in press; 1994). While the instruments were being created and their psychometric characteristics determined, available instructional materials for leadership development were examined (Finch, Gregson, & Reneau, 1992), and new case studies (Finch et al., 1992) and simulation materials (Finch, 1993) were developed. With this research and development work well underway, NCRVE initiated efforts to stimulate and evaluate leadership development programs around the country.

A telephone survey early in 1990 of members of the University Council for Vocational Education had revealed two critical facts. First, there were very few identifiable activities being offered (or even planned) by member institutions that were specifically intended for leadership development (as opposed to administrator preparation). Second, while many faculty recognized the need for more and better leadership development activities, they lacked the time to create them (Moss, Johansen, & Preskill, 1991). Given the need for special assistance in order to stimulate leadership programs, as well as a relative lack of knowledge about the best way to provide leadership development experiences, NCRVE decided to offer special incentives to institutions of higher education and state agencies to initiate a wide variety of innovative leadership development programs for graduate students and in-service personnel in vocational education. The intent of the evaluations of these programs, therefore, was (a) to determine the extent to which a variety of formal educational interventions could impact leader attributes and other related behaviors, (b) to identify the particular program approaches and activities that appeared most effective, and (c) to begin to assess the effects of NCRVE stimulation efforts on the institutions and agencies involved.

Stimulating Leadership Development Programs

In May, 1990, NCRVE sent a Request for Proposal (RFP) to approximately 500 department heads in institutions of higher education with graduate programs in vocational education. The RFP offered a subsidy of up to $4500 to departments that would provide a new or extensively revised leadership program for their graduate students majoring in vocational education. Seven institutions were funded to offer nine programs (two programs were offered twice): Colorado State University, Iowa State University, Indiana State University, Mississippi State University, North Carolina State University, University of Georgia, and University of Maryland. A tenth program, at the University of Minnesota, was fully supported by the state.

Because the need to provide opportunities for leadership development experiences to in-service vocational educators was also viewed as important, an RFP was sent in February, 1991, to all 51 State Directors of Vocational Education and approximately 500 department heads in institutions of higher education with graduate programs in vocational education. The RFP offered a subsidy of up to $5000 to state agencies or universities that would provide a new or extensively revised leadership development program for in-service educators. Awards were made to seven applicants to help support their proposed programs: Colorado State University, Iowa State University, Louisiana State University, Maryland State Department of Education, Mississippi State University, North Carolina State University, and the University of Missouri-Columbia.

Both sets of RFPs, soliciting programs for graduate students and for in-service personnel, permitted the leadership development programs to take any form or length, and innovation and variety were encouraged. However, both RFPs required that programs use as instructional objectives one or more of the 37 attributes assessed by the Leader Attributes Inventory (LAI). The attributes are listed in Table 1. They have been shown empirically to be highly related to effective performance as a leader in vocational education (Finch, Gregson, & Faulkner, 1991; Migler, 1991; Moss, Jensrud, & Johansen, 1992; Moss, Lambrecht, Finch, & Jensrud, in press; Moss & Liang, 1990; Wardlow, Swanson, & Migler, 1992; White, Asche, & Fortune, 1992). A second requirement was that institutions cooperate with NCRVE in the evaluation of their programs.

Table 1
Attributes Assessed by the 'Leader Attributes Inventory'

Energetic with stamina
Insightful
Adaptable, open to change
Visionary
Tolerant of ambiguity & complexity
Achievement-oriented
Accountable
Initiating
Confident, accepting of self
Willing to accept responsibility
Persistent
Enthusiastic, optimistic
Tolerant of frustration
Dependable, reliable
Courageous, risk-taker
Even disposition
Committed to the common good
Personal integrity
Intelligent with practical judgment
Ethical
Communication (oral, listening, written)
Sensitivity, respect
Motivating others
Networking
Planning
Delegating
Organizing
Team building
Coaching
Conflict management
Time management
Stress management
Appropriate use of leadership styles
Ideological beliefs appropriate to the group
Decision-making
Problem-solving
Information management

Evaluation Design

Evaluation Questions

Nine evaluation questions were posed in order to guide (a) the evaluation of each leadership development program, (b) the summary of the evaluation results of the set of ten programs for graduate students and the summary of evaluation results of the seven programs for in-service personnel, and (c) the summary of the evaluation results for all 17 programs. The first three evaluation questions required a description of the programs, their participants, and their costs. Questions 4 through 7 addressed program outcomes: participant satisfaction, participants' perceived change in the 37 leader attributes, participants' ability to behave and perform as leaders, and the institutional impacts of stimulating leadership development programs. Question 8 required an exploration of some of the relationships between program activities and program outcomes to identify effective practices. The ninth question sought recommendations for further practice. The nine questions were:

  1. What types of leadership activities have been developed and implemented by cooperating institutions and agencies?
  2. How many, and what types of people participated in the leadership development programs?
  3. What were the costs of different leadership programs?
  4. How satisfied were participants with the various leadership development programs?
  5. To what degree did participants perceive a change in their leader attributes as a result of participating in the leadership development programs?
  6. To what extent did the leadership development activities affect how participants perceive their ability to behave and perform as leaders?
  7. What kinds of impact did the leadership development programs have on institutions?
  8. What activities were considered particularly effective and what leader attributes did they impact?
  9. What recommendations can be made for improving leadership development activities?

Data Collection

Table 2 shows the nine evaluation questions and the methods (instrumentation and techniques) used to gather the qualitative and quantitative data relevant to each question. With the exception of questions 6 and 7, multiple methods of data collection were used for each evaluation question to help ensure data accuracy and completeness. Quantitative data provided information about program participants, program costs, participant satisfaction with the program, participants' pre- and post-program leader attributes, and participants' leadership and behavior six months after the completion of the program. Qualitative data provided the program description, and the faculty and participants' perceptions (through interviews and focus groups) about the effectiveness of various program activities. A brief description of the methods of the data collection follows.

Program description. Information about program activities was obtained in three ways. First, the program proposal provided a general description of the intended activities. Second, after the conclusion of the program, the director completed a detailed "Program Description." Third, program directors were interviewed in person and/or by telephone.

Table 2
Evaluation Questions and Means of Data Collection

Means of Data Collection Evaluation Questions
1 2 3 4 5 6 7 8 9

Program Description x   x x
Participant Description   x  
Program Cost   x  
Participant Satisfaction Survey   x   x x
Leader Attributes Inventory   x   x   x x
Behavior & Performance Survey   x   x  
Faculty Interviews x x x   x   x
Participant Focus Groups x   x x   x x

Participant description. Program directors completed a "Participant Description" form, and each participant provided information about her/himself as a part of the Leader Attributes Inventory (LAI).

Program cost. Detailed budgets were received as a part of the program proposal. After instruction, during faculty interviews, adjustments were made on budgeted accounts to reflect actual expenditures.

"Participant Satisfaction Survey." Participants' satisfaction with the way the program was organized and delivered, as well as with its value, was assessed through the "Participant Satisfaction Survey." This instrument measured satisfaction using a 5-point Likert scale and was administered immediately after program completion. The instrument also invited qualitative information from participants about the most effective major activities in each program and the specific outcomes (leader attributes) they affected.

"Leader Attributes Inventory" (LAI). The LAI is a 37-item instrument, with each item consisting of the name and a brief description of an attribute. The self-report form used in the evaluations utilized a seven-point scale (from 40% to 100% in 10-point intervals) to measure the frequency with which the attribute was displayed. Each item was scored separately. Test-retest reliabilities of individual items ranged from r = .53 to .89; the internal consistency (alpha) of the items was r = .97. Correlation coefficients between the individual items and the total score on the Leader Effectiveness Index (the extent to which individuals are actually performing successfully as leaders) ranged from .40 to .88; the coefficient between the total score on the LAI and total score on the LEI was .84.

"Behavior and Performance Survey." This instrument was sent to participants by, and returned to, program directors six months after program completion. Completed instruments were then forwarded to the authors. This instrument collected information about participants' employment status, leadership activities, use of leader attributes, perception of the contribution of the program to the accomplishment of leadership tasks, and additional leadership training activities.

Faculty interviews. Three types of interviews were held. First, a number of telephone contacts were made with program directors during the conduct of the programs to monitor progress. Second, visits to ten of the institutions (during which the focus groups were also held) gave evaluators an opportunity for discussions with program directors about program activities and participants. Third, after the completion of most programs, a meeting of program directors was convened during an AVA Convention. There, directors reported on the impact of the program on their institutions and exchanged ideas about possible ways to improve future programs.

Participant focus groups. Within a week after each program was completed, ten institutions were visited by a pair of evaluators. A focus group, which included most of the program participants at the institution, was conducted. The key questions used to elicit group input during the hour to hour-and-a-half discussions were: What program activities were most effective? Why? What impact(s) did they have on you? What could you do to improve the program? In addition, the level of participant satisfaction with the programs as a whole became evident by their comments, and the nature of specific activities was clarified.

The timetable for collecting data, using each of the above methods of data collection, is shown in Table 3.

Data Analysis

Data about each of the 17 leadership development programs were collected and analyzed separately to answer the nine evaluation questions. The results for each program were shared with its director, who reviewed them for accuracy and completeness. Then, the results for the ten programs for graduate students and the results for the seven programs for in-service personnel were grouped separately. Each group was summarized and examined to derive insights useful to others who might be planning to conduct their own leadership programs for graduate students or for in-service personnel (Leske, Berkas, & Jensrud, in press; Moss, Jensrud, & Johansen, 1992). In this stage of the analysis, the units of analysis were most frequently programs (n = 10 or 7), but attributes (n = 37) and individuals (n = 180 or 85) were also used. Statistical techniques for combining, comparing, and relating data were employed.

Table 3
Means of Data Collection and Collection Timetable

Means of Data Collection            Immediately After Instruction    6 Months After Instruction

Program Description X  
Participant Description X  
Program Cost X  
Participant Satisfaction Survey X  
Leader Attributes Inventory(LAI) X* X*
Behavior & Performance Survey   X
Faculty Interviews X X
Participant Focus Groups X  

Note: * The graduate student programs collected LAI scores immediately after instruction. The programs for in-service personnel collected LAI scores 6 months after instruction.

The results of the separate analyses of the two groups (programs for graduate students and programs for in-service personnel) revealed important differences between them in terms of programs and participants: (a) the amount of "administration" content included, (b) average program length, (c) the award of credit versus some financial assistance, (d) the academic degree status of participants, (e) the age distribution of participants, and (f) participants' previous experience as non-school administrators. Also, as noted in Table 3, the two sets of programs collected Leader Attributes Inventory (LAI) scores at different lengths of time after program completion. Consequently, it was not considered appropriate to pool the 17 programs and conduct one overall analysis to determine the combined results for all programs. Instead, the results of the analyses of the ten programs for graduate students and the seven programs for in-service personnel were compared directly. The outcomes that were found to be most representative of both sets of programs comprise the information most useful to those who are preparing to conduct leadership development programs for either graduate students or in-service personnel and are reported in this article.

Limitations

Like all studies, these evaluations have limitations that should be made explicit before describing their results. First, no experimental controls were exercised. Given the present lack of knowledge about how leadership development programs can best be developed, it was decided to encourage programs that used a wide variety of approaches. This necessitated structuring the evaluations as exploratory rather than as confirmatory studies. The evaluations focused on searching for relationships that might later be tested by more rigorous (experimental) designs. Second, all of the data on outcome variables were based upon participant self-perceptions and self-reports of activities. They are appropriate measures, but the credibility of the results would have been enhanced had it also been possible to secure the views of observers about participant behaviors. Third, the LAI gain scores are based upon an 84 percent return from participants in the graduate student group, but on a 54 percent return from the in-service group (after two mailed follow-ups plus telephone calls). A major reason for the low rate of return from the in-service group was that the six-month follow-up had to be scheduled during the summer months when many of the participants were not at work. However, tests of the independence of groups did not produce significant Chi-square values, suggesting that the in-service personnel who responded were not different from the total number of in-service program participants on the demographic variables considered. Fourth, the follow-up period of six months is relatively short to assess some of the effects of the programs, albeit as long as circumstances permitted.
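The nonresponse check described above, a test of whether in-service respondents differed from the full participant group on demographic variables, can be sketched with a chi-square test of independence. The contingency counts and the choice of variable below are hypothetical, not the study's data.

```python
# Hypothetical sketch of the nonresponse check: a chi-square test of
# independence on one demographic variable (here, gender), comparing
# respondents with nonrespondents. All counts are invented.

def chi_square_independence(table):
    """Compute the chi-square statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: respondents, nonrespondents; columns: male, female (hypothetical).
table = [[21, 25], [18, 21]]
stat = chi_square_independence(table)
# With 1 degree of freedom, the .05 critical value is 3.841; a statistic
# below it gives no evidence that respondents differ from nonrespondents.
print(round(stat, 3), stat < 3.841)
```

A non-significant statistic, as reported in the study, supports treating the respondents as representative of all in-service participants on the variables checked.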

Results

The following results, organized by the nine evaluation questions, were obtained by comparing the outcomes of the analysis of the ten leadership development programs for graduate students with the outcomes of the analysis of the seven leadership development programs for in-service personnel. The emphasis is placed on results that are representative of both sets of programs, and therefore, will be most useful to others who are planning their own leadership development programs.

What Types of Leadership Activities Have Been Developed and Implemented?

The content and methods of the 17 individual programs varied considerably. The key features of the ten programs for graduate students included (a) seminars with a semester-long internship; (b) seminars coupled with field trips (one to five days each); (c) seminars plus teams of participants instructing teachers in the field; (d) one-day workshops focused on health-related attributes; (e) seminars with a focus on self-assessment and planning for self-improvement; (f) three 2.5- to 5-day retreats with several months between sessions; and (g) team-taught seminars with applications to contemporary problems in vocational education.

The programs for in-service personnel were equally varied and included (a) developing individualized leadership training plans; (b) seminars coupled with shadowing and workshops; (c) on-site workshops that included tele-learning and a multiple-site learning session; (d) a specialized undergraduate/graduate credit course; (e) a six-hour transportable model workshop combined with individual plans of action; (f) a series of planning meetings plus a two-day and a three-day training workshop; and (g) a 2.5-day workshop followed by a series of four seminars.

One fairly consistent difference between the programs for graduate students and those for in-service personnel was that the former did not include content dealing with administration (e.g., law, finance, personnel). Almost all of the in-service programs, because of the pressing needs of participants, were compelled to deal to some degree with immediate administrative concerns.

Table 4 describes additional characteristics of the two sets of programs. Note that the programs for graduate students were, on the average, longer and were offered for graduate credit. Several of the programs for in-service personnel provided participants with some kind of financial assistance to encourage participation.

How Many and What Types of People Participated?

Table 5 presents comparable characteristics of the two groups of program participants. As might be expected, the graduate students had a higher proportion of full-time students and those studying for the doctorate. They also had a higher proportion of students with experience as non-school administrators and appeared to have a greater percentage of participants under 35 years of age.

Table 4
Some Program Characteristics

Characteristics For Graduate Students For In-service Personnel

Length & Intensity Ranged from a total of 6 hours in one day to 90 hours of class instruction plus 180 hours of outside assignments spread over 9 months. Mean length was 39 hours. Ranged from a combination of individualized leadership training together with cluster meetings (10 hours) to a semester-long class which met for 40 hours over a 3-month period. Mean length was 24 hours.
Credit & Stipends All but the 6-hour program were for graduate-level credits. Participants paid tuition. One program was credit based. Two programs provided an option for credit. Three programs provided some financial support.
Class Size Four to 25 with a mean of 16. Three to 26 with a mean of 13.
Number of Attributes Selected as Instructional Objectives Ranged from 4 to 22. Ranged from 4 to 37. (Two programs let participants choose their objectives.)



Table 5
Participant Characteristics

Characteristic                         Graduate Students (N=180)    In-Service Personnel (N=85)

Doctoral student 61% 9%
Other 39% 91%
Male 49% 45%
Female 51% 55%
Caucasian 86% 81%
Other 14% 19%
School administrator experience 47% 45%
No experience 53% 55%
Non-school administrator experience 68% 39%
No experience 32% 61%
35 years of age or less 27% 18%
36 years of age and over 73% 82%
Full-time student 39% 0%
Part-time student 61% 100%

What Were the Costs of Different Leadership Programs?

Three measures of institutional costs for graduate student programs and in-service personnel programs are presented in Table 6. What appears noteworthy is not the differences between the two groups of programs, but rather the large variation in cost among individual programs within each group. An inspection of program characteristics revealed that differences in cost might well be attributed to such variables as program length, the nature of special activities provided (e.g., out-of-state travel), and the extent to which institutions rather than participants bore the cost of the special activities.

How Satisfied Were Participants With the Various Leadership Development Programs?

Based upon both quantitative and qualitative data, participants were very satisfied with the programs provided for both graduate students and in-service personnel. Table 7 presents the results of administering the "Participant Satisfaction Survey" after program completion. Five of the items assessed satisfaction with program organization and delivery; three items measured satisfaction with value of the experience. On a five point scale, mean ratings ranged from 4.3 for organization and delivery to 4.6 for value of the experience.

Table 6
Program Direct Costs*

Item                      Graduate Student Programs (n=10)    In-Service Personnel Programs (n=7)

Cost of Conducting the Program
Range $7,135 - $45,386 $6,889 - $16,780
Mean $16,419 $10,692
Standard Deviation $11,998 $3,323
Cost Per Student
Range $174 - $4,378 $265 - $3,533
Mean $1,409 $1,228
Standard Deviation $1,417 $1,060
Cost Per Student Hour of Instruction
Range $9 - $68 $27 - $118
Mean $24 $60
Standard Deviation $18 $39

Note: *Excludes the cost of planning and developing the program and indirect costs.


Table 7
Participant Satisfaction With Programs

Programs                  Organization & Delivery*    Value of the Experience*

Graduate Student
Range 3.0 - 4.8 3.8 - 5.0
Mean 4.3 4.6
Standard Deviation .56 .37
In-service Personnel
Range 2.5 - 4.9 2.9 - 5.0
Mean 4.4 4.6
Standard Deviation .89 .75

Note: *Calculations based on 5 point scale.

Focus groups revealed that participants in both groups felt that similar experiences should be made available to other vocational educators. They also felt that beginning and advanced level programs should be provided or that the single program in which they participated should be lengthened. This was true regardless of the program's actual length.

To What Degree Did Participants Perceive a Change in Their Leader Attributes as a Result of Participating in the Leadership Development Program?

Data to answer this question were secured by administering the Leader Attributes Inventory (LAI) twice to participants: once to obtain self-perceptions of attributes as they were before the program (a retrospective measure), and once to secure self-perceptions of their attributes as they were after the program. In order to account for a possible tendency on the part of the participants to inflate their after-program scores, the average gain score on four leader attributes that should not have been affected by the programs was subtracted from the remaining 33 attributes before tests of significance were calculated.
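The adjustment just described can be sketched in a few lines: the mean gain on a "control" set of attributes that instruction should not affect is treated as inflation and subtracted from every other attribute's gain. The attribute names, scores, and control set below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Sketch of the gain-score adjustment: mean gain on attributes not targeted
# by instruction (a control set) is subtracted from all other gains.
# All names and scores here are hypothetical.

before = {"Visionary": 70, "Team building": 60, "Ethical": 80, "Dependable": 80}
after  = {"Visionary": 90, "Team building": 80, "Ethical": 84, "Dependable": 82}
control = ["Ethical", "Dependable"]  # attributes instruction should not affect

gains = {a: after[a] - before[a] for a in before}
inflation = sum(gains[a] for a in control) / len(control)  # (4 + 2) / 2 = 3.0
adjusted = {a: gains[a] - inflation for a in gains if a not in control}
print(adjusted)  # {'Visionary': 17.0, 'Team building': 17.0}
```

Only the adjusted gains on the 33 non-control attributes would then enter the significance tests.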

Three hundred thirty t-tests comparing before with after LAI scores (adjusted) were carried out for the ten graduate student programs (33 times 10), and 231 tests were conducted for the seven in-service programs (33 times 7). Thirty-six percent of all the tests for the graduate student programs and 31 percent of all the tests for the in-service programs showed statistically significant gains (p <= .05). Moreover, in both sets of programs, the distributions of the after instruction LAI scores were much more compressed around their means than were the before instruction scores. Apparently, both sets of programs had significant impacts on the participants' perceptions of their leader attributes.
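Each of those tests compares one attribute's adjusted before and after scores within one program. A minimal paired t-test using only the Python standard library, with hypothetical scores, looks like this:

```python
import math
import statistics

# Minimal paired t-test on one attribute's before/after scores for one
# program's participants (hypothetical data).

def paired_t(before, after):
    """Return the paired t statistic for two equal-length score lists."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))

before = [60, 70, 50, 80, 70, 60, 50, 70]
after  = [70, 80, 60, 80, 80, 70, 60, 80]
t = paired_t(before, after)
# Compare |t| with the two-tailed .05 critical value for n-1 = 7 df (2.365).
print(round(t, 2), abs(t) > 2.365)
```

Repeating such a test for each of the 33 adjusted attributes in each program yields the 330 and 231 test counts reported above.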

Neither of the two sets of programs was very successful in improving the attributes they pre-specified as instructional objectives. Just 41 percent of the attributes specified as objectives by the graduate student programs, and 31 percent of those specified by the in-service programs, made significant (p <= .05) gains. Of all the significant attribute gain scores, only 34 percent were pre-specified as instructional objectives by the graduate student programs, and only 33 percent were pre-specified by the in-service programs. There are several possible explanations for this result: (a) Individuals were given choices in readings and in other assignments, and thus were exposed to different experiences; (b) individuals inevitably learned different things from the same instruction as they interpreted the experience in terms of their own values and cognitive structures; (c) the attributes may not be completely independent; and (d) program designers did not yet know the best way to develop specific attributes.

Two key attributes, "adaptable, open to change" and "visionary," appeared to be readily improved by both sets of programs. Six or more of the ten programs for graduate students and four or more of seven programs for in-service personnel succeeded in improving significantly (p <= .05) the participants' perceptions of these two attributes. On the other hand, both sets of programs found the attribute of "communication" (i.e., listening, oral, written) most resistant to perceived improvement. Although at least six programs for graduate students and four programs for in-service personnel tried to improve communication, they succeeded in only two cases. Program directors hypothesized that much more time would be needed to make a meaningful impact.

Finally, some relationships were explored between the demographic characteristics of participants and gains in leader attribute scores. In both sets of programs, significant improvement in leader attributes was not meaningfully related to the degree objective, gender, ethnicity, experience as a school administrator, experience as a non-school administrator, age, or full- versus part-time student status of the participants. For participants in the in-service programs there was a meaningful negative relationship between years of teaching experience and gain scores on the LAI.

To What Extent Did the Leadership Development Activities Affect How Participants Perceive Their Ability to Behave and Perform as Leaders?

Data about behavior and performance were collected by administering the "Behavior & Performance Survey" six months after program completion. Information was obtained about (a) the extent to which the leader attributes were used and those considered most useful, (b) the relationship between perceived successful performance as a leader and the number of attributes improved significantly by the program, (c) the number of leadership activities in which program participants had been engaged, and (d) the amount of additional leadership training participants had acquired.

Participants reported that they had used all 37 leader attributes during the six month period following instruction. Between 15 and 56 percent of the respondents to the "Behavior & Performance Survey" from the graduate programs considered each of the 37 attributes most useful; between 6 and 46 percent of the respondents from the in-service programs considered each of the attributes most useful. There was considerable agreement between the participants of the two sets of programs about which particular attributes were most useful to them, and considerable overlap between those attributes and the six attributes that prior empirical research (Moss & Liang, 1990) had shown to collectively explain 81 percent of the variation in effective leadership performance among vocational administrators. The data in Table 8 show the ten attributes considered most useful by most participants in the two sets of programs, as well as the six attributes that best explained the variation in the leadership performance of vocational administrators. Note that participants in the two sets of programs agreed on eight of the ten attributes considered most useful to them, and that five of those eight most useful attributes are among the six attributes that best explain the variation in leader performance.

The "Behavior & Performance Survey" secured ratings (five point scale) from each participant on the extent to which (s)he had accomplished the six tasks of leaders in vocational education in the six months since completing the program. The tasks were (a) inspire a shared vision, (b) foster collaboration and ownership, (c) exercise power effectively and enable others to act, (d) set the right (external) context for the organization, (e) establish an environment conducive to learning, and (f) satisfy the job-related needs of group members. Self-ratings by the participants in each program were averaged and ranks assigned to programs. A significant (p <= .05) relationship was found in both sets of programs between the program rankings of participants' perceptions of their successful performance as leaders and the rankings of programs based on the number of attributes the programs had improved significantly. The rank order correlations were .53 for the graduate student programs and .75 for the in-service programs.
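The rank order correlations reported above are Spearman coefficients computed on program ranks. A minimal sketch of the computation, using the standard difference-of-ranks formula and hypothetical ranks for seven programs (no tied ranks assumed):

```python
# Spearman rank correlation between two rankings of the same programs,
# using rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are
# no tied ranks. The two rank lists below are hypothetical.

def spearman_rho(ranks_x, ranks_y):
    n = len(ranks_x)
    d_squared = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical ranks of seven programs by participants' perceived leader
# performance and by number of attributes significantly improved.
performance_rank = [1, 2, 3, 4, 5, 6, 7]
attributes_rank  = [2, 1, 3, 5, 4, 7, 6]
print(round(spearman_rho(performance_rank, attributes_rank), 3))  # → 0.893
```

Applied to the actual program rankings, this procedure produced the coefficients of .53 and .75 reported above.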

Table 8
Attributes Considered Most Useful by Participants and Their Ability to Explain Performance

Leader Attribute                     Ten Most Useful Attributes*     Attributes That
                                     Graduate        In-service      Best Explain Leader
                                     Programs        Programs        Performance**
----------------------------------------------------------------------------------------
Adaptable, open to change               56%             46%                 x
Communication (listening,               55%             44%
  oral, written)
Insightful                              51%             35%                 x
Visionary                               51%             38%
Team building                           50%             39%                 x
Willing to accept responsibility        50%             41%                 x
Confident, accepting of self            48%             --
Motivating others                       47%             35%                 x
Planning                                47%             --
Networking                              45%             36%
Decision-making                         --              39%
Delegating                              --              38%
Information management                  --              --                  x

Note: *Percent of participants who judged each attribute to be "most useful" to them.
**From Moss & Liang, 1990.

The "Behavior & Performance Survey" also secured information about the number of additional leadership activities and the further leadership training participants had engaged in since program completion. An average of 57 percent of the participants in the programs for graduate students and 69 percent of the participants in the programs for in-service personnel reported engaging in a greater number of leadership activities during the six-month period following instruction than they had before instruction. Forty-five percent of the additional leadership activities in both sets of programs were job-related; 24 percent (graduate student programs) and 52 percent (in-service programs) of the additional activities took place in professional roles that were new to the participants.

An average of 18 percent of the participants in graduate programs and 60 percent of the in-service program participants reported undertaking additional leadership training activities during the six-month period following instruction. The further training reported consisted primarily of focused readings, courses, and workshops.

What Kinds of Impact Did the Leadership Development Programs Have on Institutions' Involvement in Developing and Maintaining Leadership Learning Activities for Their Students?

As reported by their directors, the programs appeared to have had some important and desirable institutional impacts. First, of the eight institutions that offered programs for graduate students, four reported adding new leadership courses to their regular graduate curricula, three others were revising existing graduate or undergraduate courses to place greater emphasis upon leadership development, and one institution had secured state funding to repeat its program.

Second, the most common impact on the institutions that provided programs for in-service personnel was the improvement of course offerings based upon what was learned from the programs. Networks, both internal and external, were reported to have been enhanced. The two programs that had received financial support from their state agencies were being considered for continued funding, and one program not originally receiving state support was being encouraged by the state agency to apply for funding so that the program could be continued.

What Activities Were Considered Particularly Effective and What Leader Attributes Did They Impact?

One aspect of the evaluation was an attempt to link specific program activities considered to be effective with the specific attributes they impacted. To the extent that these linkages could be determined, knowledge of them could be used to make future program designs more efficient. In order to establish the linkages, all of the following criteria needed to be satisfied: (a) The activity was tried in at least five programs; (b) it was nominated during focus groups as effective by participants in at least four programs; (c) the nominated activity was linked to the same outcome (leader attribute) in at least two of the four programs; and (d) the LAI score of the linked attribute was also significantly increased (p <= .05) in the same two programs. Linkages were not studied in the in-service programs.
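The four criteria above amount to a conjunctive filter over activity-attribute linkage records. A minimal sketch, with hypothetical records and field names (none of these names come from the study, and criterion (d) is simplified to a count of linked programs showing a significant LAI gain):

```python
# Sketch of the four linkage criteria as a conjunctive filter.
# Field names and records are hypothetical, for illustration only.

def qualifies(activity):
    """Return True if an activity-attribute linkage meets all four criteria."""
    return (
        activity["programs_tried"] >= 5        # (a) tried in at least 5 programs
        and activity["programs_nominated"] >= 4  # (b) nominated as effective in >= 4
        and activity["programs_linked"] >= 2   # (c) linked to the same attribute in >= 2
        and activity["programs_lai_gain"] >= 2  # (d) significant LAI gain in those programs
    )

# Hypothetical records:
activities = [
    {"name": "self-assessment + planning", "programs_tried": 8,
     "programs_nominated": 6, "programs_linked": 3, "programs_lai_gain": 3},
    {"name": "guest lectures", "programs_tried": 7,
     "programs_nominated": 3, "programs_linked": 1, "programs_lai_gain": 0},
]

effective = [a["name"] for a in activities if qualifies(a)]
print(effective)  # -> ['self-assessment + planning']
```

Requiring all four conditions at once explains why the text calls the criteria "rather stringent": an activity that was popular in focus groups but not consistently linked to a measured attribute gain is excluded.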

Given these rather stringent criteria, two kinds of program activities appeared to be effective in improving specific leader attributes. First, self-assessment activities, combined with planning for self-development, were reported to be very effective by participants in both sets of programs. In the programs for graduate students, these activities were found to improve three specific attributes: (a) confident, accepting of self; (b) adaptable, open to change; and (c) appropriate use of leadership styles.

Second, observations of and interviews with leaders at work were also reported to be very effective by participants in both sets of programs. In the programs for graduate students, these activities were found to improve two specific attributes: visionary and courageous risk-taker.

Another technique for identifying particularly effective program characteristics (without linking them to specific attributes) was to examine the empirical relationship between selected program characteristics common to both sets of programs and the number of attributes each program improved significantly. In this exploratory analysis, the evaluators ranked the programs on each characteristic based upon their knowledge of the programs, and these rankings were correlated with program rankings based upon the number of LAI attributes significantly improved. The following interpretations were drawn from the analysis:

  1. The more structured the program, and the more active the involvement of the participants, the more likely it was for attributes to improve. (The rank order correlation coefficient for the graduate student programs was .82. A coefficient was not computed for the in-service programs).
  2. Readiness for change was prerequisite to attribute improvement. Readiness was developed by providing experiences for (a) team building and (b) assessing participants' attributes, with time for reflection and goal setting. (The coefficient for graduate programs was .56. A coefficient was not computed for in-service programs).
  3. While there was a significant correlation (p <= .05) between number of hours of directly supervised instruction and the number of leader attributes that increased significantly for graduate student programs (rs = .56), no such relationship was found in the programs for in-service personnel. This difference may be explained by the special characteristics of two in-service programs that required considerable self-study time rather than instructor contact time and produced relatively high numbers of attributes with significant gains.
  4. There was no significant relationship in either set of programs between the number of attributes increased significantly and the cost per student.

What Recommendations Can be Made for Improving Leadership Development Activities?

Recommendations for improving the design and conduct of future leadership development programs were secured mainly from participants during the focus groups. Participants were asked to nominate activities or processes that they believed were particularly effective. When there was consensus among participants in seven of the ten groups with whom the evaluators met, and when program directors concurred, the activity or process nominated was included in the following list.

  1. Careful course structure and direction by the instructor are necessary to keep participants focused on the program objectives.
  2. Participants should be helped to construct a cognitive model of leadership that can guide their future leadership development. Readings, presentations by role models, interviews and observations are helpful in this regard.
  3. Team building experiences should be provided early in the program to build a safe, supportive environment in which attribute changes are encouraged and facilitated.
  4. A number of self-assessment instruments (inventories and tests) of leader qualities and styles should be administered to (a) sensitize participants to their weaknesses as a basis for improvement and their strengths as a foundation for building upon, and (b) help participants understand, respect, and appreciate behavioral differences among individuals.
  5. Opportunities to plan for self-improvement, based upon self-assessment, are useful mechanisms to encourage reflection and goal setting.
  6. Sufficient time must be allowed for guided practice in applying the attributes to be changed and for reflecting on the experience. Simulations, exercises, games, and field assignments are useful tools. The time allocated to practice seems to distinguish between programs that teach about leadership and those that bring about behavioral modifications.
  7. While program participants will ordinarily engage in the same types of activities (e.g., shadowing, making presentations, etc.), instruction should be personalized by leaving the choice of specific experiences (e.g., whom to shadow, presentation topics, etc.) up to individuals.
  8. When selecting the attributes that will constitute the instructional objectives of a program, consideration must be given to how difficult each is to improve. Balance must be maintained between likely difficulty to improve, time available for the program, and the number of attributes to affect.

Implications

The results of these evaluations call for three kinds of action by the National Center for Research in Vocational Education. First, the success of the programs, as measured by (a) the satisfaction of participants, (b) the degree to which participants perceived improvement in their leader attributes, (c) the effect the programs appeared to have on participants' subsequent behavior and performance, and (d) their impact on the institutions providing them, is sufficiently encouraging to warrant NCRVE's continued efforts to stimulate, facilitate, and improve leadership development programs throughout the country.

Second, the evaluations have also yielded important insights into the characteristics that lead to successful leadership development programs. While further research and development are needed to improve our knowledge about how to maximize program efficiency and effectiveness, an important next step in stimulating, facilitating, and improving leadership development programs will be to incorporate what has been learned thus far into an instructional program. That program should be made readily available to vocational educators who wish to implement new programs or to supplement existing ones. Such a program has now been created by NCRVE with sufficient materials to cover an estimated 90 clock hours of instruction, plus outside assignments. Program length and content may be adapted to suit the needs of particular groups and situations. The published program, entitled "Preparing Leaders for the Future," should be available to the field through NCRVE by November, 1994.

Third, the Leader Attributes Inventory (LAI) has been shown to be a useful assessment tool. The attributes it measures were used by participants in their leadership activities, and it has been shown to be sufficiently sensitive to reflect changes in the self-perception of attributes as a result of planned educational interventions. To further realize its potential, however, the LAI needs to be refined and norms and standards established so that it can be used for individual diagnostic purposes as well as a criterion for measuring program impact. In fact, NCRVE has recently undertaken such a project that is expected to be completed in November, 1994. At that time, an LAI Manual should be available to prospective users of the instrument. The manual will describe (a) the LAI's conceptual foundation, (b) how the instrument may be used, (c) its psychometric characteristics, (d) the two groups of vocational educators for whom norms have been established, and (e) the kinds of information provided to users in the form of individualized feedback reports.

References

Finch, C. R. (1993). Breakers: An organizational simulation for vocational education professionals. Berkeley, CA: National Center for Research in Vocational Education.

Finch, C. R., Gregson, J. A., & Faulkner, S. L. (1991). Leadership behaviors of successful vocational education administrators. Berkeley, CA: National Center for Research in Vocational Education.

Finch, C. R., Gregson, J. A., & Reneau, C. E. (1992, September). Vocational education leadership development resources: Selection and application. Berkeley, CA: National Center for Research in Vocational Education.

Finch, C. R., Reneau, C. E., Faulkner, S. L., Gregson, J. A., Hernandez-Gantis, V., & Linkous, G. A. (1992, October). Case studies in vocational educational administration: Leadership in action. Berkeley, CA: National Center for Research in Vocational Education.

Leske, G. W., Berkas, T. H., & Jensrud, Q. (in press). An evaluation of seven leadership development programs for in-service vocational educators. Berkeley, CA: National Center for Research in Vocational Education.

Migler, J. R. (1991, October). Selected leadership attributes and styles of administrators in exemplary vocational education institutions and administrators in Minnesota technical colleges. Unpublished doctoral dissertation, University of Minnesota, St. Paul.

Moss, J., Jr., Finch, C., & Johansen, B-C. (1991). What makes a vocational administrator an effective leader? Journal of Industrial Teacher Education, 29 (1), 1-15.

Moss, J., Jr., Finch, C., Lambrecht, J. J., & Jensrud, Q. (in press). Leader Attributes Inventory Manual. Berkeley, CA: National Center for Research in Vocational Education.

Moss, J., Jr., Finch, C., Lambrecht, J. J., & Jensrud, Q. (1994). Leader Effectiveness Index Manual. Berkeley, CA: National Center for Research in Vocational Education.

Moss, J., Jr., Jensrud, Q., & Johansen, B-C. (1992). An evaluation of ten leadership development programs for graduate students in vocational education. Berkeley, CA: National Center for Research in Vocational Education.

Moss, J., Jr., Johansen, B-C., & Preskill, H. (1991). Developing the Leader Attributes Inventory: An odyssey. Journal of Industrial Teacher Education, 28 (2), 7-22.

Moss, J., Jr., & Liang, T. (1990). Leadership, leadership development, and the National Center for Research in Vocational Education. Berkeley, CA: National Center for Research in Vocational Education.

Wardlow, G., Swanson, G., & Migler, J. (1992). Institutional excellence in vocational education: Assessing its nature and operation. Berkeley, CA: National Center for Research in Vocational Education.

White, D. J., Asche, F. M., & Fortune, J. C. (1992, April). Minority leadership training: Evaluation and analysis of a five-state program. Paper presented at the American Educational Research Association meeting, San Francisco, CA.

