Journal of Vocational and Technical Education


Volume 13, Number 2
Spring 1997

ASSESSING TECH PREP IMPLEMENTATION

Rodney L. Custer
Dept. of Practical Arts and Vocational-Technical Education
University of Missouri-Columbia
Columbia, MO

Sheila K. Ruhland
Business Education Division
Western Wisconsin Technical College
La Crosse, Wisconsin

Bob R. Stewart
Dept. of Practical Arts and Vocational-Technical Education
University of Missouri-Columbia
Columbia, MO

It is becoming increasingly apparent that the time is right to initiate the process of assessing Tech Prep program effects and to determine the extent to which local Tech Prep implementation efforts are consistent with the Carl D. Perkins Vocational and Applied Technology Act. Due to the comprehensive scope and systematic nature of the Tech Prep concept, Stufflebeam's CIPP model was selected and adapted. Tech Prep components for each program assessment dimension (Context, Input, Process, Outcomes) were identified. Results from this study indicate that the four assessment dimensions should be included throughout the planning, implementation, and assessment phases of Tech Prep.

Tech Prep's mission and vision are taking root in concrete form through various initiatives (e.g., articulation agreements, applied academic courses, industry agreements, and teacher and counselor workshops). It is becoming increasingly apparent that the time is right to initiate the process of assessing Tech Prep efforts. At a minimum, these assessment efforts should seek to: (a) determine specific outcomes for Tech Prep evaluation, and (b) assess the extent to which local Tech Prep implementation efforts are consistent with the mandates of the Carl D. Perkins Vocational and Applied Technology Act of 1990.

Review of Literature

Today's educational structures are facing profound and exciting new challenges. Concern about the United States' ability to compete in world markets has directed attention to the link between education and employment. Increasingly complex technologies are rapidly extending into virtually every aspect of modern life. Students will need to acquire both an understanding of the subjects studied and the capacity to apply what they have learned in the increasingly complex real-world settings they will face as they enter the workforce. Educational systems are searching for ways to promote the integration, application, and transfer of learning through mechanisms that are meaningful and motivational to students and that are capable of preparing them for living in an increasingly complex world. Government and educational agencies are responding to this call for reform. Federal and state initiatives have been developed to restructure education, including Goals 2000, the School-to-Work Opportunities Act, and Title III-Part E of the Carl D. Perkins Vocational and Applied Technology Act.

Tech Prep Legislation

Tech Prep is an educational plan that offers an alternative to the traditional college-prep program; provides a solid academic foundation; coordinates efforts between secondary and postsecondary schools; addresses student needs, backgrounds, and learning styles; and prepares students for lifelong learning (Hull & Parnell, 1991). Tech Prep/Associate Degree is a carefully designed curriculum for students in a four-year (2+2) or six-year (4+2) plan. Students gain the competencies (knowledge, skills, and values) essential for technical careers. Since 1985 (Hull & Parnell, 1991), Tech Prep has become a viable educational initiative in our schools.

Federal dollars are authorized for Tech Prep under Title III-Part E of the Carl D. Perkins Vocational and Applied Technology Act of 1990. Funds were allocated to each state board, which subsequently awarded Tech Prep grants to individual consortia. A Tech Prep consortium consists of at least one secondary school and one postsecondary institution.

Consortia submitted three-year plans outlining their Tech Prep initiatives. Under Title III, at a minimum, Tech Prep must lead to the completion of an associate degree or two-year certificate; provide technical preparation in a specified field; build student competence in mathematics, science, and communications through a sequential course of study; and lead to employment placement. In order to be eligible for Title III funding, a Tech Prep program must contain the following seven elements: articulation agreements, appropriate curriculum design, curriculum development, in-service teacher training, counselor training, equal access for special populations, and preparatory services (Brustein, 1993).

Brustein (1993), in a guide on the implementation of the legislation, summarized the two purposes of Tech Prep: (1) to provide planning and demonstration grants to consortia of local educational agencies for the development and operation of 4-year programs designed to provide Tech Prep education leading to a 2-year associate degree or a 2-year certificate; and (2) to provide, in a systematic manner, strong comprehensive links between secondary schools and postsecondary educational institutions.

Thus it is clear that Tech Prep is comprehensive and ambitious in terms of both goals and scope. There is tremendous potential for spanning the boundaries between vocational and academic education and for making education more efficient and meaningful for students.

Tech Prep Evaluation

The basic purposes of Tech Prep evaluation have been identified as collecting data, providing valuable information to stakeholders, and fulfilling the legislative requirements of the state's accountability system (Dornsife, 1992). The Department of Education's Office of Vocational and Adult Education has provided guidelines for evaluating Tech Prep initiatives. These guidelines include identifying program characteristics and expected outcomes, anticipating possible outcomes and decisions that may result from the evaluation, identifying information sources, and summarizing and presenting the information (Brustein, 1993).

Increased emphasis was placed on evaluation, planning, and accountability as part of the amended 1976 Vocational Education Act. At the same time, vocational education program evaluation has been a neglected area of research, even though deficiencies in the process have been noted (Strickland & Asche, 1987; Wadsdyke, 1978). These deficiencies have included: (a) failure to identify indicators of program effectiveness, (b) lack of follow-up on the impact of vocational education programs, and (c) failure to use evaluation as part of program planning, policy setting, and review.

In October 1992, the U.S. Department of Education contracted with Mathematica Policy Research, Inc. (MPR) to conduct a national evaluation of Tech-Prep planning and implementation efforts. The five-year plan consisted of three major data collection components: (a) a survey of state-level Tech-Prep coordinators, (b) a survey of local Tech-Prep consortia, and (c) in-depth studies of ten selected programs. Data were collected during the fall of 1993 from local consortium coordinators on nine broad topics (Silverberg & Hershey, 1995). The topics included composition and governance structure, funding and resources, core Tech-Prep programs, students served, workplace experience, secondary and postsecondary curriculum development and articulation, career and staff development, student outcomes, and monitoring and evaluating Tech-Prep progress.

Findings indicated that most consortia in FY 1993 were still in too early a pilot phase to report on student participation. Consortia encompassed schools serving more than 60% of secondary students in the country; however, only 5% of all secondary students nationwide were identified as Tech-Prep participants. Not surprisingly, the longer a consortium had been involved with Tech-Prep, the more likely it was that several Tech-Prep program components were being implemented. Components implemented included career clusters, a core program, newly developed curricula, and workplace experiences made available to Tech-Prep students. Reporting the actual number of Tech-Prep students remained a major obstacle for consortia; only approximately one-third of the consortia could report the number of Tech-Prep students in the fall of 1993.

Tech Prep should be evaluated to determine whether the goals of the program have been attained. Recent studies have been conducted to assess Tech Prep initiatives (Bragg, Layton, & Hammons, 1994a; Brawer & Hammons, 1993; Delaware Statewide, 1990; Roegge, Wentling, Leach, & Brown, 1993; Rubin, 1993). While these studies vary in their overall purpose, each was aimed at describing the planning and implementation of local Tech Prep initiatives.

According to Bragg, Kirby, Puckett, Trinkle, and Watkins (1994), Tech Prep "evaluation should, at a minimum, be used to: (1) examine the effectiveness of process, (2) assess the extent to which the learner outcomes are being met, and (3) ensure that continuous improvement is a constant goal" (pp. 104-105). These authors further stated that if these three goals are followed, "the quality, effectiveness, and impact of Tech Prep systems can be determined and enhanced" (p. 105).

Tech Prep Outcomes Assessment

Evaluation efforts related to Tech Prep outcomes have been limited. This is due partly to the number of Tech Prep initiatives that have described their efforts as being "in its infancy" or "in the informational stage." Findings from the 1992 Omnibus Survey (Office of Educational Research and Improvement, 1994) further reported:

Most of the tech-prep initiatives had no students in the spring of 1992: 72 percent of the postsecondary institutions reporting tech-prep said their programs had no secondary students, and 87 percent said they had no postsecondary students. Most tech-prep initiatives were still in the planning and development stages. (p. 113)

Research has been conducted to identify specific outcomes resulting from Tech Prep. An outcome is measured against some clearly stated goal. These goals are related to the quality, effectiveness, and goal attainment of Tech Prep. Outcomes assessment provides the information essential to determine whether Tech Prep is functioning and to assess whether students are moving through and completing the degree plan. The student becomes the ultimate focus of Tech Prep outcomes assessment.

The focus on outcomes provides the consortium with the information essential to evaluate the goals established as part of the Tech Prep initiative. Thus, in order to identify the outcomes, it becomes imperative that goals be developed each year. The clearer the goals, the more likely the consortium will be able to identify appropriate outcomes. A summary of research conducted to identify Tech Prep outcomes follows.

A recent study by Bragg, Layton, and Hammons (1994b) examined local Tech Prep implementation in the United States. One of the five major research questions was "what are the goals, elements, and outcomes of local Tech Prep initiatives?" (p. 14). Respondents were given seventeen student outcomes to rate on a scale of very low, low, moderate, high, and very high. Fifteen of the outcomes were rated at a high level of priority. Of the seventeen outcomes, the top three were: "improved knowledge and skills in math; increased problem-solving, thinking, and reasoning skills; and improved employability skills and work readiness" (p. 50).

Dutton, Hammons, Hudis, and Owens (1994) provide a table titled "Tech Prep Initiative Outcome Measures" (p. 55) containing a series of questions that enable a consortium to identify desired results from a Tech Prep program. Specific questions are asked that measure the effects of education initiatives. The outcomes are rated using a scale of none, minor, moderate, major, and extreme.

Interviews were conducted by Ruhland, Custer, and Stewart (1994) to explore the processes used in Missouri to implement Tech Prep. The process-oriented categories included: (a) articulation and collaboration, (b) student program planning and implementation, (c) staff development, (d) curriculum development, and (e) marketing efforts. Findings indicated that articulation structures were in the initial stages of development. Program planning was also identified as being in the initial stages, with the most frequently used approach being career clusters. Staff development activities included workshops, informational meetings, Tech Prep conference attendance, and on-site visits. Most coordinators indicated they were promoting team teaching as a mechanism for delivering integrated academic content and activities. A variety of techniques were employed to inform the general public and recruit students.

A study by Roegge, Wentling, Leach, and Brown (1993) found that the concept mapping process assisted in displaying the major components of Tech Prep. They identified the relationships between the components and the priorities placed on each component and cluster of related components. The concept mapping process provided a pictorial representation of Tech Prep stakeholders' perceptions. Clusters identified included benefits, populations served, outcomes, program components, enrollment incentives, external involvement, planning and support, staff development, and articulation/integration.

Dornsife (1992) identified the outcomes that postsecondary institutions should collect for program accountability. Those outcomes include percentage of course enrollment, program completion, job placement, number of articulated classes and agreements, marketing activities, staff development, advising, and student tracking.

Hammons (1992) identified six focus areas or components into which outcomes could be grouped. The "student" focus component indicators include student retention, grade point average, and demonstration of job competency. The "facilitator" focus component includes faculty professional development, guidance programs, and access to special populations. The "professional development" focus component relates to obtaining information related to academic and vocational skills attainment and advanced courses taken. The "attitudes/perceptions" focus component includes recognition and level of satisfaction with the program. The "careers" focus component evaluates job placement, employment levels, and earning levels. The final focus component, "resources", identifies the quality and quantity of resources utilized.

A study conducted by Bragg and Layton (1992) collected data related to Tech Prep philosophies and policies, staffing, administrative structure, evaluation, marketing, and staff development. A list of outcomes was presented to the respondents to ascertain whether the outcomes had been included in their state's list of outcomes. These outcomes included improved: (a) academic skills, (b) secondary program completion rate, (c) job placement rate, (d) technical skills, (e) postsecondary program completion rate, (f) career awareness, (g) employer satisfaction, (h) problem-solving and critical thinking skills, (i) attitudes toward or perceptions of technical careers, and (j) student self-esteem. Bragg and Layton further asserted that "since fewer than forty percent of the states have established outcomes at the state level, a major concern for all leaders at all levels should be the identification of expected outcomes and evaluation procedures" (p. 4-17).

The Partnership for Academic and Career Education (1992) designed a series of categories with objectives (outcomes) to assess Tech Prep implementation at the middle/high school and postsecondary levels. The categories included: organizational/planning structure, curriculum development/enhancement, academic and vocational integration, teaching/learning process, special populations, guidance/counseling, staff development, school climate, evaluations, and middle school involvement. Specific outcomes are listed under each category and rated for level of implementation on a scale of low, moderate, high, and not applicable.

As indicated earlier, the student becomes the ultimate focus of Tech Prep outcomes assessment. McCaslin (1992) developed a five-step process for student outcomes assessment: "select outcomes to be assessed; identify indicators for each outcome; identify information collection arrangements; determine the analysis procedures; and report the information" (p. 103). Three years into the development of Tech Prep, there is a need to describe the current evaluation efforts of Tech Prep and the overall progress of the initiatives.

Purpose of the Study

The primary purpose of this study was to obtain state Tech Prep coordinators' perceptions of the procedures used to assess Tech Prep implementation. To accomplish this, the following objectives were formulated:

  1. To identify the extent and structure of Tech Prep evaluation efforts.
  2. To determine the components being used to evaluate Tech Prep.
  3. To determine the extent to which states are able to identify Tech Prep students.
  4. To examine the criteria for and the extent to which student outcomes are being used in Tech Prep evaluation.
  5. To identify the frequency with which various processes are used to collect data in Tech Prep evaluation.

Methodology

The procedure used for the study consisted of developing a survey to collect data from those responsible for statewide Tech Prep leadership and reporting. The survey, titled "Tech Prep Evaluation Efforts," was designed to gather data across a range of issues related to Tech Prep implementation and assessment. The survey began by requesting information regarding the general structure of each state's Tech Prep initiative assessment procedures. This concern for assessment structures is critical since one of the requirements of the 1990 Perkins Act was that states "implement systems of core measures and standards for assessing the performance of secondary and postsecondary vocational programs by fall 1992" (Office of Educational Research and Improvement, 1993, p. 27). The 1993 NAVE report concluded that "in general, states are doing a good job of developing performance standards and measures" and that "all states are using more than the two standards/measures required by the Perkins Act" (p. 46). More specifically, information was necessary to (a) assess the availability of data and (b) further explore the dimensions appropriate for assessing the implementation of Tech Prep initiatives.

Data Collection

State Tech Prep coordinators nationwide (including Washington, DC, and Puerto Rico) were contacted by mail and asked to complete a survey. The 52 coordinators' names were obtained from the United States Department of Education, Office of Vocational and Adult Education. All states were represented in the mailing. The coordinators were also asked for information concentrating on the criteria and procedures used to conduct their statewide Tech Prep assessments as well as for information on how Tech Prep students are identified. The survey content was based on assessment issues and Tech Prep outcomes described in the literature. Content validity was further established as part of a series of interviews conducted by Ruhland, Custer, and Stewart (1994) with Missouri's Tech Prep coordinators. The initial mailing yielded 15 (28.8%) returns. A follow-up survey including a second copy of the instrument was mailed to non-respondents approximately one month after the initial return deadline. This yielded an additional 20 (38.5%) returns for a total of 35 (67.3%).
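
The response rates reported above follow directly from the mailing list of 52 coordinators; as a simple check:

$$
\frac{15}{52} \approx 28.8\%, \qquad \frac{20}{52} \approx 38.5\%, \qquad \frac{15+20}{52} = \frac{35}{52} \approx 67.3\%.
$$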

Program Outcome Assessment - National Survey Findings

Our results (see Table 1) were somewhat less conclusive than the NAVE report's assessment. Respondents were asked whether their state "currently has an evaluation plan in place to evaluate [their] Tech Prep efforts." Over three-fourths of those responding indicated that such a plan is in place. The discrepancy between the NAVE report and our data could reflect some difference between local perceptions and statewide reporting mechanisms. It is clear, as the 1993 NAVE study indicates, that assessment procedures were "clarified late in the regulatory process" (Office of Educational Research and Improvement, 1993, p. 30) and that "local implementation has just begun" (p. 45).

Table 1
State Plan to Evaluate Tech Prep

Response                               Frequency    Percentage
Have an evaluation program in place        27          77.1%
Do not have a program in place              7          20.0%
The program is in draft form                0           0.0%
No response                                 1           2.9%

N = 35

To probe the nature of the state evaluation plans, respondents were also asked to indicate which components of the Tech Prep initiative are included in their state's assessment procedures. Six components were listed on the survey (see Table 2). The responses indicate a pattern of assessment across multiple components rather than a focus on any single dimension. Respondents were also encouraged to list any additional components included in their state's evaluation plan. The few additional responses were restricted to two components: business and industry involvement/apprenticeship opportunities, and administration and organization.

Table 2
Components of Tech Prep Evaluation

Component                    Frequency    Percentage
Articulation                     22           63%
Student Program Planning         15           43%
Staff Development                21           60%
Curriculum                       18           51%
Marketing                        16           46%
Evaluation                       15           43%
Other                             5           14.3%

N = 35

The focus then shifted to one of the key input elements identified in the literature: the ability to determine the degree to which Tech Prep initiatives affect students along several dimensions, including achievement, dropout, and placement. To assess this aspect of Tech Prep implementation, it would be extremely useful to be able to distinguish students who are in Tech Prep from those who are not. Thus, the next survey question asked whether the state's Tech Prep plan provides a mechanism for identifying who is and who is not a Tech Prep student. Approximately two-thirds of the state coordinators who responded to the survey indicated that their state's consortia were able to identify a Tech Prep student (see Table 3).

Table 3
Ability to Identify Tech Prep Students

Response                             Frequency    Percentage
Can identify Tech Prep students          24          68.6%
Cannot identify Tech Prep students       10          28.6%
No response                               1           2.8%

N = 35

In order to explore these responses in more depth, those who indicated an ability to identify Tech Prep students were then asked to indicate specifically how those students were identified in their state. The most frequent and most formalized responses indicated that Tech Prep students were those enrolled in courses:

  1. generally oriented toward an occupational path or career direction, or
  2. sequenced and articulated toward post-secondary technical training and/or certification. (These types of responses typically identified the articulation of math, science, communications, and technologically oriented courses.)

These rather formalized and consistent responses were provided by fewer than one-half (10-11) of those who had indicated that they are able to identify Tech Prep students. Additionally, these responses were typically accompanied by specific, statewide documentation that usually contained a definition of Tech Prep students.

Input from the remaining 50% of the sample who had responded favorably to the Tech Prep student identification question was more diverse and much less specific. In some cases, supporting documentation defined Tech Prep "programs" rather than Tech Prep "students." Others indicated that identification was either on a student self-selected basis or that individual schools were free to designate based on their own criteria.

In the aggregate, the survey indicates considerable disparity across the nation in the degree of formal identification of students participating in Tech Prep initiatives. Fewer than one-half of those reporting an ability to identify Tech Prep students (or approximately one state in five overall) appear to have formalized and clearly documented a mechanism for identifying Tech Prep students. The results also indicate that many states either (a) do not have a mechanism for identifying these students or (b) are not using specific definitions or criteria to identify Tech Prep students.

As a follow-up, those who had indicated that they were not able to identify specific Tech Prep students were asked whether they "intend to include student progress as part of evaluation" and, if so, how they plan to accomplish the assessment. One-half of these respondents indicated that they plan to do so (see Table 4).

Table 4
Tech Prep Student Non-identifiers Intending to Use Student Progress in Program Evaluation

Response                                                                   Frequency    Percentage
Intend to use student progress as part of the program evaluation              5           50.0%
Do not intend to use student progress as part of the program evaluation       2           20.0%
No response                                                                    3           30.0%

N = 10

The responses of those indicating that they plan to use student progress data and information despite the lack of identification of Tech Prep students were examined for substance and patterns. The responses were quite nonspecific, referring generally to such strategies as "visiting with students," "performance based instruction," "performance reports," and "placement exams." None of the respondents indicated the mechanism by which students would be selected to participate.

Next, the focus shifted from general assessment planning to the criteria used to assess Tech Prep student progress. From a program assessment point of view, this poses a dilemma. Criterion-based assessment of student progress presumes an ability to identify and track changes in students along the identified criteria. As indicated above, many states lack a mechanism for identifying Tech Prep students. Thus, the application of criterion-referenced assessment procedures to individual students is not possible in these states. In the absence of procedures for identifying program participants, the same criteria may be applied generally across a selected sample of students (an entire school, a school system, certain grade levels, etc.). In this instance, however, the ability of assessment procedures to attribute changes in students (along the selected criteria) to the intervention of the program (in this case Tech Prep) is extremely limited.

In light of this important caveat, the results of those who indicated a lack of procedures for identifying specific Tech Prep students were examined (N = 10). Six of the "non-identification" respondents indicated that they are using some of the criteria for evaluating student progress. The criteria being applied in these states were distributed across the range of assessment criteria (see Table 5). A similar pattern was observed for states indicating an ability to identify Tech Prep students. However, in addition to using multiple assessment criteria, respondents from these "identifier" states also reported employing the specific criteria at higher rates. In addition to the criteria provided on the survey, some additional criteria were suggested, including graduation from college and adherence to a career plan.

Table 5
Evaluation Criteria Used to Assess Tech Prep Student Progress

                                     Specific Tech Prep          Specific Tech Prep
                                     Students Not Identified     Students Identified
Criterion                            Frequency    Percentage     Frequency    Percentage
Drop out rate                            3           30%             16           64%
Graduation rate                          3           30%             18           72%
Test scores                              1           10%             15           60%
Pursue post-secondary education          4           40%             19           76%
Job placement                            2           20%             19           76%
Grades                                   0            0%             12           48%
Other                                    1           10%              5           25%

N for states indicating a lack of procedures for identifying specific Tech Prep students = 10

N for states indicating procedures for identifying specific Tech Prep students = 25

Total N = 35

It is important to note that a pattern of using multiple criteria to assess students was clearly present across the entire sample (see Table 6). Nearly 70% of respondents indicated that they are using more than one criterion, and almost one-half are using five or more criteria. This use of multiple criteria provides for a more comprehensive and complete assessment of Tech Prep than would be possible if the process were restricted to a single criterion (or a smaller number of criteria).

Table 6
Multiple Criteria Used to Assess Tech Prep Student Progress

Number of Criteria Used    Frequency    Percentage
7                              7           20.0%
6                              6           17.1%
5                              4           11.5%
4                              2            5.7%
3                              3            8.6%
2                              2            5.7%
1                              6           17.1%
0                              5           14.3%

N = 35
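
The summary figures cited above can be derived directly from the Table 6 frequencies:

$$
\frac{7+6+4+2+3+2}{35} = \frac{24}{35} \approx 68.6\% \ \text{(more than one criterion)}, \qquad
\frac{7+6+4}{35} = \frac{17}{35} \approx 48.6\% \ \text{(five or more criteria)}.
$$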

One of the important aspects of any assessment model or procedure has to do with the methods and instruments employed to gather data. This study found that on-site interviews are the method of choice for collecting Tech Prep initiative assessment data. Substantial use was also made of surveys, telephone interviews, and third-party evaluators (see Table 7). On closer examination, these findings are encouraging because they indicate a rather strong preference for the types of mechanisms that provide for more substantive interaction, probing, and in-depth conversation between evaluators and those engaged in Tech Prep implementation. This pattern of assessment mechanisms also indicates that most states are injecting some form of external assessment into the process. More specifically, the four primary mechanisms (listed on the survey) are typically conducted by individuals or agencies other than those engaged in program delivery.

Table 7
Data Gathering Mechanisms

Mechanism                  Frequency    Percentage
On-site interviews             22           63%
Surveys                        13           37%
Telephone interviews           10           29%
Third-party evaluation         12           34%
Other                          20           57%

N = 35

As was the case with student assessment criteria, there is a clear preference for using multiple procedures for gathering program assessment data. Over one-half of those responding indicated that they are using multiple data collection procedures, and one-third are using three or more. Again, this indicates a clear preference for triangulation rather than dependence on a single mechanism (see Table 8).

Table 8
Use of Multiple Methods for Program Data Collection

Number of Methods Used    Frequency    Percentage
5                             2            5.7%
4                             7           20.0%
3                             3            8.6%
2                             8           22.8%
1                            14           40.0%
0                             1            2.9%

N = 35

Discussion/Implications

Given the complex and multi-dimensional nature of the Tech Prep initiative, it is critical that program assessment be multi-dimensional and capable of assessing systemic change. Within vocational education, there has been an emphasis on including measures of outcomes. Certainly this emphasis must be retained within any viable assessment plan. Some of the outcomes are based on the conceptualization of the role of Tech Prep, such as the reduction of dropout rates and the successful articulation of students into postsecondary technical programs. Still others are more specific to individual state and local settings. These program outcomes must be specified in advance if program effects are to be assessed most effectively. It should also be clear that successful Tech Prep assessment must involve more than the assessment of outcomes. The most concrete example of this point from this study is that the absence of specific Tech Prep student identification leads to ambiguity in the assessment of many process and outcome variables.

This study also illustrates and emphasizes the value and importance of assessing the processes involved in Tech Prep implementation. An exclusive focus on outcomes fails to capture the essence of the interventions that produced those outcomes. It is obvious that a major impetus driving the conceptualization of Tech Prep had to do with deficiencies in outcome indicators (school dropout rates, lack of knowledge transfer and application, poor program articulation, etc.). It should also be clear that a major thrust of Tech Prep has to do with developing and implementing processes for restructuring schools and instituting fundamental changes in the way that education is delivered in America. This relates to national as well as local initiatives and to the broader range of educational reform occurring across the nation (Goals 2000, SCANS, etc.). Simply put, Tech Prep is a component in the larger context of educational reform. A second perspective related to systemic change is more local; namely, that Tech Prep can lead to some fundamental rethinking and restructuring of local educational systems. Thus, any valid assessment plan must necessarily include a mechanism for examining process. As a result of this study, it is recommended that the assessment of the process component include such factors as the quality of integration team interaction; curriculum development and alignment; career path development; advisory council selection, input, and interaction; cooperating school agreements; and budgeting.

A related observation deals specifically with curriculum development. The clear tendency has most often been to latch on to existing materials rather than to develop and customize materials at the local level. While many existing materials are of excellent quality, they often fail to address the unique directions and configurations that are evolving out of the initiatives related to contextual learning and changing methodology. Some may be better received by vocational teachers than by traditional academic teachers. Additional documentation is needed on how to assess whether partners are collaborating successfully with one another in the curriculum development and integration processes.

An area of major difference across states concerns the identification of Tech Prep students. This thread relates to many of the process categories identified for analysis. For example, marketing efforts would be greatly enhanced if the target audience were clearly identified. If, on the other hand, Tech Prep is perceived as a kind of general program philosophy that somehow affects the entire school system (as opposed to specific students in specific ways within the system), then marketing is much more difficult. In short, it is one thing to market a general philosophy or structural change. It is quite another thing to recruit specific, individual students into classes that have been designated as part of Tech Prep. The same concern relates to program and student assessment as well as, to some degree, curriculum development. It is difficult to see how procedures could be developed to align curriculum and then assess outcomes in the absence of specifically identified Tech Prep students. Therefore, Tech Prep student identification may be a critical key in forcing more general systemic change.

It is also important to comment on the nature of the structure of Tech Prep. Tech Prep is designed to provide broad guidelines within which state and local educational agencies can develop mechanisms that work for them. Tech Prep provides the general structure and the broad guidelines; local educational agencies are then free to configure and develop their initiatives in light of their unique circumstances, needs, goals, and resources. Thus, Tech Prep implementation inherently tends to be flexible, creative, and localized. This poses unique challenges for evaluation. Certainly, it is much easier to assess the effects of programs that are homogeneous and for which the boundaries are clearly structured, administered, and maintained. Such is not the case for Tech Prep, nor should it be. Thus, Tech Prep assessment must consist of a process that uses multiple data collection mechanisms, that includes a diversity of criteria, and that is sensitive to both outcomes and processes as they function within the larger cultural and educational contexts, given certain system inputs.

Thus, while Tech Prep is indeed a concept being implemented in many diverse and unique ways, it nevertheless tends to take on specific programmatic forms at the local level. As this occurs, it is critical to subsequent planning and assessment that the goals specific to local initiatives be clearly identified and defined. Specific assessment components must be selected for implementation based on the unique characteristics of local consortia. It is perhaps even more important throughout the developmental stages that the question be raised and addressed: Can this aspect of the initiative be assessed, and if so, how? This approach to Tech Prep evaluation will ensure that the assessment program is comprehensive and robust enough to detect systemic change, yet specific enough to demonstrate accountability. Such an effort must be directed at establishing baseline data against which subsequent movement can be measured. Given the complexity and variety of Tech Prep initiatives, it is critical that multiple means be used to conduct program assessment. This is necessary in order to triangulate and interpret data as well as to capture the uniqueness of various local configurations.

References

Bragg, D. D., Layton, J. D., & Hammons, F. T. (1994a). Tech Prep implementation in the United States, 5(2). Champaign: University of Illinois, Office of Community College Research and Leadership.

Bragg, D. D., Layton, J. D., & Hammons, F. T. (1994b). Tech Prep implementation in the United States: Promising trends and lingering challenges. Berkeley, CA: National Center for Research in Vocational Education, University of California at Berkeley. (ERIC Document Reproduction Service No. ED 374 336)

Bragg, D. D., Kirby, C. L., Puckett, P. A., Trinkle, K. A., & Watkins, L. (1994). Building a preferred future with Tech Prep systems. Berkeley, CA. (ERIC Document Reproduction Service No. ED 375 297)

Bragg, D. D., & Layton, J. D. (1992, December). A comparison of implementation in four states. Paper presented at the meeting of the American Vocational Education Research Association, St. Louis, MO.

Brawer, M., & Hammons, F. T. (1993, September). Florida's Tech-Prep evaluation: Implementing the model. Paper presented at the 1993 National Tech Prep Network Fall Conference, Atlanta, GA.

Brustein, M. (1993). AVA guide to federal funding for Tech Prep. Alexandria, VA: American Vocational Association.

Delaware Statewide Vocational-Technical High Schools. (1990). Tech Prep compendium of models. Dover, DE. (ERIC Document Reproduction Service No. ED 332 016)

Dornsife, C. J. (1992). Beyond articulation: The development of Tech Prep programs. Berkeley, CA: National Center for Research in Vocational Education, University of California at Berkeley.

Dutton, M., Hammons, F., Hudis, P., & Owens, T. (1994). Evaluating your Tech Prep program. Waco, TX: Center for Occupational Research and Development.

Hammons, F. T. (1994). Tech Prep initiative outcome measures. In M. Dutton, Evaluating your Tech Prep program (pp. 55-57). Waco, TX: Center for Occupational Research and Development.

Hammons, F. T. (1992). The first step in Tech-Prep program evaluation: The identification of program performance indicators. Unpublished doctoral dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA.

Hull, D., & Parnell, D. (Eds.). (1991). Tech Prep associate degree: A win/win experience. Waco, TX: Center for Occupational Research and Development.

McCaslin, N. L. (1992, December). Outcomes assessment for vocational education. In D. D. Bragg (Ed.), Alternative outcomes for postsecondary vocational education (pp. 95-108) (MDS-239). Berkeley, CA: National Center for Research in Vocational Education, University of California at Berkeley.

Office of Educational Research and Improvement. (1994). National assessment of vocational education: Final report to Congress. Program Improvement: Education Reform (Volume III). Washington, DC: U.S. Department of Education.

Partnership for Academic and Career Education. (1992). Tech Prep implementation. Self-assessment inventory. Pendleton, SC. (ERIC Document Reproduction Service No. ED 362 636)

Roegge, C. A., Wentling, T. L., Leach, J. A., & Brown, D. C. (1993, April). Using concept mapping techniques to compare stakeholder groups' perceptions of Tech Prep. Paper presented at the annual meeting of the American Educational Research Association, Atlanta, GA.

Rubin, M. (1993, September). Evaluation of California's Tech-Prep education program. Paper presented at the 1993 National Tech Prep Network Fall Conference, Atlanta, GA.

Ruhland, S. K., Custer, R. L., & Stewart, B. R. (1994). Final report: Status of Tech Prep in Missouri. Jefferson City, MO: Division of Vocational and Adult Education, Department of Elementary and Secondary Education.

Silverberg, M. K., & Hershey, A. M. (1995). The emergence of Tech-Prep at the state and local levels. Princeton, NJ: Mathematica Policy Research, Inc.

Strickland, D. C., & Asche, F. M. (1987). Enhancing utilization: A proposal for a modified utilization focused model for vocational education evaluation. The Journal of Vocational Education Research, 12(4), 13-34.

Wadsdyke, R. G. (1978). Vocational education evaluation: Implementation problems. In C. C. Rentz & R. R. Rentz (Eds.), Evaluating federally sponsored programs (pp. 69-78). San Francisco: Jossey-Bass.