
Volume 2, Number 2
Spring 1991


Assessing the Effectiveness of the Change to Technology Teacher Education
 
            Daniel L. Householder & Richard A. Boser
 
               Many institutions which formerly pre-
          pared teachers of industrial arts are cur-
          rently implementing technology teacher
          education programs.  As these institutions
          change to implement technology teacher educa-
          tion, it is important to obtain an accurate
          assessment of the effectiveness of the inno-
          vation.  Change in the teacher education cur-
          riculum may be assessed in a number of
          possible ways, each with several potential
          advantages.  However, there is no generally
          accepted model for assessing the overall ef-
          fectiveness of such a major change in tech-
          nology teacher education.
               To address this problem, a study was
          undertaken to develop and verify a set of
          measures that could be used to assess the ef-
          fectiveness of the move to technology teacher
          education.  Specifically, the study sought
          answers to two research questions:  "What
          measurements should be used to determine the
          effectiveness of the change?" and "How should
           these measurements be validated?"
 
                           BACKGROUND
               The literature relevant to the assess-
          ment of change and program implementation may
          be categorized into three areas: (a) educa-
          tional program evaluation; (b) program evalu-
          ation in higher education, specifically in
          teacher education; and (c) change and program
          implementation in teacher education programs.
          Studies in each of these areas were reviewed
          to establish the research base for the devel-
          opment of the formative evaluation system for
          technology teacher education programs.
 
          EDUCATIONAL PROGRAM EVALUATION
               In a literature search for an applicable
          model for the evaluation of teacher education
          programs, Ayers, Gephart, and Clark (1989)
          reported "approximately 40 references to
          evaluation models" (p.  14).  Stufflebeam and
          Webster (1980) identified and assessed 13 al-
          ternative evaluation approaches in terms of
          their adherence to the definition: "an educa-
          tional evaluation study is one that is de-
          signed and conducted to assist some audience
          to judge and improve the worth of some educa-
          tional object" (p. 6).  Their analysis re-
          sulted in three categories of evaluation
           studies: (a) politically oriented, or
           pseudo-evaluations; (b) question oriented, or
           quasi-evaluations; and (c) values oriented, or
           true evaluations.  Stufflebeam and Webster ad-
          dressed the strengths and weaknesses inherent
          in each evaluation approach in order to pro-
          vide evaluators with a variety of frameworks
          for conducting evaluation studies.
               However, as Popham (1975) noted, compar-
          ing evaluation approaches in order to select
          the best model is usually a fruitless en-
          deavor.  Popham stated:
 
             Instead of engaging in a game of "sames
             and differents," the educational evalu-
             ator should become sufficiently
             conversant with the available models of
              evaluation to decide which, if any, to
             employ.  Often, a more eclectic ap-
             proach will be adopted whereby one se-
             lectively draws from the several
             available models those procedures or
             constructs that appear most helpful.
             (p.  21)
 
               Cronbach (1982) echoed this need for
          eclecticism by noting that "the [evaluation]
          design must be chosen afresh in each new
          undertaking, and the choices to be made are
          almost innumerable" (p.  1).  Indeed, an
          eclectic approach seemed most appropriate for
          the formative evaluation of the change to
          technology teacher education.
               The review of the evaluation literature
          identified two approaches that could be com-
          bined to develop appropriate instrumentation
          and procedures.  These were the Context, In-
          put, Process, and Product (CIPP) Model origi-
          nated by Stufflebeam et al. (1971), and the
          Discrepancy Model proposed by Provus (1971).
          These models have many commonalities.  Both
          models:
 
          1.  Were conceptualized and developed in the
              late 1960s in response to the need to
              evaluate projects funded through the Ele-
              mentary and Secondary Education Act
              (ESEA) of 1965.
          2.  Represented efforts to broaden the view
              of educational evaluation to include more
              than an assessment of the terminal objec-
              tives.
           3.  Emphasized the systems view of education
               by stressing the relationship between
               context, inputs, processes, and products.
          4.  Emphasized the importance of collecting
              information on key developmental factors
              to aid decision-makers in assessing pro-
              gram progress at a given point
              (Brinkerhoff, Brethower, Hluchyj, and
              Nowakowski, 1983).
          5.  Were concerned with the developmental as-
              pects of program design and implementa-
              tion, and recommended close collaboration
              with program developers.
           6.  Have been used in a variety of evaluation
               environments (Provus, 1971; Roth, 1978;
               Stufflebeam et al., 1971), though they
               are not specifically designed for the
               evaluation of teacher education programs.
 
               THE CIPP MODEL.  Bjorkquist and
          Householder (1990) noted that "programs in
          which goals are accomplished are usually con-
          sidered to be effective" (p. 69).  In an
          overview and assessment of evaluation
          studies, Stufflebeam and Webster (1980)
          stated that the objectives-based view of pro-
          gram evaluation "has been the most prevalent
          type used in the name of educational evalu-
          ation" (p. 8).  Indeed, prior to the ESEA,
          educational evaluation had focused upon "the
          determination of the degree to which an in-
          structional program's goals were achieved"
           (Popham, 1975, p. 22).  However, a group led
          by Stufflebeam proposed an evaluation process
          that focused upon program improvement by
          evaluating virtually all aspects of the edu-
          cational program.  Stufflebeam (1983) stated:
 
             Fundamentally, the use of the CIPP
             Model is intended to promote growth and
             to help the responsible leadership and
             staff of an institution systematically
             to obtain and use feedback so as to ex-
             cel in meeting important needs, or at
             least, to do the best they can with the
              available resources. (p. 118)
 
          In short, the CIPP Model placed a premium on
          information that can be used proactively to
          improve a program.
               DISCREPANCY MODEL.  This model was de-
          veloped to be put in place as the new pro-
          grams were designed and implemented in the
          Pittsburgh public schools.  A systems ap-
          proach was used to determine whether program
          performance met accepted program standards.
          Provus (1971) conceptualized a three-step
          process of program evaluation: (a) defining
          program standards, (b) determining whether a
          discrepancy exists between some aspect of
          program performance and the standards govern-
          ing that aspect of the program, and (c) using
          discrepancy information either to change per-
          formance or to change program standards (p.
          183).  According to Provus, this operational
          definition of program evaluation leads to
          four possible alternatives: (a) the program
          can be terminated, (b) the program can pro-
          ceed unaltered, (c) the performance of the
          program can be altered, or (d) the standards
          governing the program can be altered (Popham,
          1975).
               The Discrepancy Model has five stages:
          (a) design; (b) installation; (c) process;
          (d) product; and (e) program comparison.
           Provus (1971) noted that "at each of these
          stages a comparison is made between reality
          and some standard or standards" (p. 46).  The
          first four stages are developmental in nature
          and designed to evaluate a single program.
          The fifth stage, which Provus designated as
          optional, provides information for making
          comparisons with alternative programs.
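                Expressed in code, the discrepancy com-
           parison is a simple loop over aspects of the
           program.  The sketch below (in Python, with
           invented standards and performance values,
           not figures from the study) illustrates how
           discrepancy information might flag aspects of
           a program for one of the four decision alter-
           natives.

              # Minimal sketch of Provus's discrepancy
              # comparison.  Standards and observed values
              # are hypothetical illustrations.
              STANDARDS = {
                  "faculty_updated_in_technology": 0.80,
                  "courses_with_tech_ed_content": 0.75,
              }
              observed = {
                  "faculty_updated_in_technology": 0.60,
                  "courses_with_tech_ed_content": 0.78,
              }

              for aspect, standard in STANDARDS.items():
                  gap = standard - observed[aspect]
                  if gap > 0:
                      # Discrepancy: alter performance, alter
                      # the standard, or terminate the program.
                      print(f"{aspect}: discrepancy {gap:.2f}")
                  else:
                      # No discrepancy: proceed unaltered.
                      print(f"{aspect}: meets standard")
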
                MERGING THE EVALUATION MODELS.  Given
           the commonalities of the two models noted
           above and the thoroughness of the CIPP Model,
           one might well ask why the two models should
           be merged.  The answer lies in the
           complementary strengths of the two models.
          CIPP, with its use of both quantitative and
          qualitative procedures and its emphasis on
          proactive evaluation, provides an overarching
          evaluation model. Because of its
          thoroughness, it is also extremely expensive
           and time-consuming.  As Stufflebeam and
          Webster (1980) noted, values-oriented
          studies, such as CIPP, aimed at assessing the
          overall merit or worth of a program are
          overly ambitious "for it is virtually impos-
          sible to assess the true worth of any object"
          (p. 18).  However, the CIPP model provides an
          excellent framework for approaching the mul-
          titude of possible variables in program eval-
          uation.
               What does the discrepancy evaluation
          model add to this customized assessment ap-
          proach?  Stufflebeam and Webster (1980)
          stated that question-oriented studies that
          focus on program objectives or standards "are
          frequently superior to true evaluation
          studies in the efficiency of methodology and
          technical adequacy of information employed"
          (p. 18).  In particular, the discrepancy
          model championed by Provus adds three useful
          constructs to the evaluation process:
 
          1.  The broadening of the evaluation proce-
              dure to include the possibility of alter-
              ing the standards to conform with
              reality.  In light of the current empha-
              sis on standards external to the program,
              such as National Council for Accredi-
              tation of Teacher Education (NCATE) cri-
              teria, this approach seemed particularly
              appropriate.
          2.  The emphasis upon high-fidelity implemen-
              tation addressed major concerns in the
              change process.
           3.  The emphasis upon problem-solving ap-
               proaches to altering program performance
               appeared to be consistent with the es-
               poused philosophy of technology educa-
               tion.
 
               Since technology teacher education pro-
          grams are still largely in the implementation
          stage, assessments of their effectiveness
          could most profitably focus on discrepancies
          between the performances and standards that
          are concerned with the inputs and the proc-
          esses of the technology teacher education
           programs.  Taken together, these consider-
           ations suggest an evaluation approach that
           focuses on the input and process evaluation
           components, as Stufflebeam uses the terms, by
           comparing actual performance with defined
           standards.
 
          PROGRAM EVALUATION IN TEACHER EDUCATION
               Few studies have related specific pro-
          gram evaluation approaches to the assessment
          of teacher education programs.  Perhaps the
          dearth of references in the literature to
          specific evaluation approaches used in
          teacher education programs is the result of
          the emphasis placed on the accreditation of
          those programs.  Accreditation procedures re-
          quire that teacher education institutions pe-
          riodically undertake systematic formative and
          summative evaluations.  Taking this reality
          into consideration, Ayers, Gephart, and Clark
          (1989) proposed the Accreditation Plus Model
          that integrates the accreditation process and
          existing evaluation approaches.  While focus-
          ing on the National Council for Accreditation
          of Teacher Education (1987) standards and
          criteria for compliance, the model suggests a
          process that is "active, continual, and form-
          ative" (p.  16).
               The Accreditation Plus Model seems to be
          a logical extension of an already required
          practice.  While this model was designed to
          be used for the evaluation of professional
          educational units, the process seems adapt-
          able to the more specific evaluation concerns
          of technology teacher education programs.
 
          CHANGE AND PROGRAM IMPLEMENTATION
               Gee and Tyler (1976) suggested that
          "reasonable people will assume moderate risk
          for great benefits, small risks for moderate
          benefits, and no risk for no benefit" (p. 2).
          While this statement makes explicit the per-
          sonal nature of the change process, organiza-
          tional characteristics are also important
          factors in facilitating change.  Hopkins
          (1984) argued that the nature of the educa-
          tional organization itself is a major imped-
          iment to change.  He noted that in spite of
          considerable external pressure for change in
          teacher education, there were few observable
          differences in the routines of professors and
          students.  Hopkins made the provocative sug-
          gestion that "teacher training institutions
          as organizations appear unable effectively to
          manage self-initiated change" (p. 37).
          Giacquinta (1980), even less charitable, sug-
          gested that schools of education find that
          "change is a necessary, often bitter pill
          taken for the sake of survival" (Hopkins,
          1984, p. 43).  These opinions seem to be
          shared by several state legislatures which
          have recently mandated changes in teacher ed-
          ucation requirements and practices.
 
          A MODEL FOR ORGANIZATIONAL CHANGE
                A model of the innovation-decision proc-
           ess in an organization, developed by Rogers
           (1983), focuses on the adoption, implementa-
           tion, and incorporation of the innovation
           into the organization.  The five
          steps in the model are divided into two
          stages: initiation and implementation.
               INITIATION STAGE.  During this stage,
          organizational activities center around the
          information-gathering, conceptualizing, and
          planning that is required to make the deci-
          sion to change.  The two steps included at
          this stage are: (a) agenda setting, where the
          initial idea search occurs and the motivation
          to change is generated; and (b) matching,
          where organizational problems and possible
          solutions are analyzed for compatibility.
               The initiation stage is essentially a
          problem solving exercise.  As the organiza-
          tion becomes cognizant of a performance
          shortfall, it initiates a search of the envi-
          ronment for possible solutions to the prob-
          lem.  For example, industrial arts programs
          were generally faced with declining enroll-
          ments.  At the same time, many studies cited
          the need for students to possess increased
          scientific and technological literacy.  In
          response, the field started to focus on tech-
          nology education as an emergent solution to
          both problems.
               IMPLEMENTATION STAGE.  The second stage,
          implementation, begins after the decision to
          make the change has been made by the organ-
          ization.  This stage includes the decisions,
          actions, and procedures involved in putting
          an innovation into regular use.  The imple-
          mentation stage includes three steps: (a)
          redefining/restructuring the innovation and
          the organization to accommodate the change;
          (b) clarifying the innovation as it is put
          into regular use; and ultimately (c)
          routinizing or institutionalizing the change
          as an integral part of the ongoing activities
          of the organization.
                According to Rogers (1983), each step is
          "characterized by a particular range of
          events, actions, and decisions" (p. 362).
          Further, the latter steps cannot occur until
          the issues in the earlier steps have been re-
          solved.  Citing the work of Pelz (1981) as a
          source of support for the model, Rogers noted
          that innovations imported into an organiza-
          tion "usually occur in the time-order se-
          quence" (p. 366).  However, innovations that
          originated within an organization are not
          characterized by a similarly clear pattern of
           adoption.  Since technology teacher education
           programs are currently changing in response
           to largely external innovations (NCATE
           accreditation standards and state certif-
           ication requirements), the time-order
           sequence may be expected to apply.
          The linear nature of the innovation-decision
          model highlights the need to nurture the
          change to technology teacher education
          throughout the stages of the entire change
          process.
 
          SUMMARY
               In light of the review of literature and
          the specific goals of this research effort,
          the decision was made to develop an evalu-
          ation design incorporating an eclectic mix of
          program evaluation approaches, the NCATE ac-
          creditation process, and descriptions of the
          process of change as that process may be ex-
          pected to occur in teacher education organ-
          izations.  Stufflebeam's CIPP Model provided
          an overall framework from which to assess the
          effectiveness of change to technology teacher
          education.  Provus's Discrepancy Model added
          the possibility of adjusting the measurement
          standards to conform to program performance
          reality.  And, because accreditation is an
          overarching evaluation concern for teacher
          education, the Accreditation Plus Model sug-
          gested a way of integrating program evalu-
          ation and accreditation.  Further, because
          technology teacher education programs are
          presently in the early implementation stage,
          measures that reflect the process of change
          seemed to be appropriate for inclusion.
 
                           PROCEDURES
               A modified Delphi design was used in
          this study.  Nominations of leading practi-
          tioners and advocates in technology education
          who might serve as Delphi panelists were so-
          licited from officers of the Council on Tech-
          nology Teacher Education and the
          International Technology Education Associ-
          ation.  This process resulted in the se-
           lection of a panel composed of the 22
          individuals who were recommended by at least
          two of the CTTE or ITEA officers.
               On an open-ended questionnaire, panel-
          ists were asked to suggest criteria and pro-
          cedures for evaluating the effectiveness of
          the change from industrial arts teacher edu-
          cation to technology teacher education pro-
          grams.  Fourteen panelists returned the first
           round questionnaire.  The responses were tabu-
          lated, duplications were eliminated, and sim-
          ilar suggestions were combined.  This process
          resulted in a list of 58 criteria and 33 pro-
          cedures for evaluating the effectiveness of
          the change to technology teacher education.
          The criteria were sorted into four catego-
          ries: (a) the technology teacher education
          program, (b) faculty members, (c) student
          skills, and (d) capabilities of graduates.
               The second round questionnaire asked the
          22 panelists to rate the importance of the 58
          criteria and 33 procedures on a scale which
          ranged from 0 to 10.  The instructions de-
          fined a rating of 0 as a recommendation that
          the criterion or procedure be dropped.  A
          rating of 10 meant that the criterion or pro-
          cedure was considered to be absolutely vital
          to the assessment of the effectiveness of the
          change to technology teacher education.  Pan-
          elists were asked to offer editorial sug-
          gestions on the statements of criteria and
          procedures and also to suggest additional
          criteria and procedures (and to rate any ad-
          ditional statements).
               Eighteen of the 22 second round ques-
          tionnaires were returned promptly.  The re-
          sponses were tabulated and the mean rating of
          importance for each item was calculated.  The
          statements of criteria and procedures were
          then listed in order of their mean rating of
           importance.  The criteria and procedures with
           mean ratings greater than 9.0 on the 10-point
           scale are listed by category in Table 1.
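                The tabulation itself is straightforward
           arithmetic.  The sketch below (in Python,
           with invented ratings rather than the panel's
           data) shows the computation behind the
           rankings reported in Table 1: average each
           item's 0 to 10 ratings and retain the items
           whose mean exceeds 9.0.

              # Sketch of the round-two tabulation.
              # Ratings below are invented for illustration.
              ratings = {
                  "Laboratory instruction reinforces "
                  "abstract concepts": [10, 9, 10, 9],
                  "Field experiences are technology "
                  "centered": [9, 10, 9, 9],
                  "Curricula based on recent research":
                      [8, 9, 9, 8],
              }

              means = {item: sum(r) / len(r)
                       for item, r in ratings.items()}

              # Rank the items that exceed the 9.0 cutoff.
              retained = sorted(
                  ((mean, item) for item, mean in means.items()
                   if mean > 9.0),
                  reverse=True,
              )
              for mean, item in retained:
                  print(f"{mean:.2f}  {item}")
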
 
          TABLE 1
          HIGHLY RANKED CRITERIA AND PROCEDURES SORTED
          BY CATEGORY
          ---------------------------------------------
           Mean    Criteria and Procedures
          ---------------------------------------------
          TECHNOLOGY TEACHER EDUCATION PROGRAM ...
 
             9.55  Laboratory instruction provides op-
                    portunities for students to reinforce
                    abstract concepts with concrete
                    experiences.
             9.50  Instructional strategies emphasize
                    conceptual understanding and problem
                    solving.
             9.23  Professional studies component empha-
                    sizes the study of technology, includ-
                    ing social-cultural effects.
             9.22  Laboratories facilitate the learning
                    of broad-based technological concepts.
             9.22  Instruction incorporates current
                    technological activities.
             9.17  Philosophy, mission statement, goals,
                    and curriculum emphasize technological
                    skills rather than technical skills.
             9.17  Social-cultural impacts of technology
                    are emphasized.
             9.12  Field experiences are technology cen-
                    tered.
             9.05  Problem solving and decision making
                    abilities are emphasized.
             9.00  Curricula are based on recent re-
                    search findings.
 
          FACULTY MEMBERS ...
            9.50  Display a positive attitude toward
                   the technology teacher education
                   curriculum.
            9.22  Participate in planned professional
                   development activities to
                   update their knowledge and skills.
            9.05  Communicate their understanding of
                   the meaning and implications of
                   technology education both within and
                   outside the classroom.
 
          STUDENTS ARE EXPECTED TO ...
            9.78  Be people oriented.
            9.44  Be future oriented.
            9.39  Demonstrate the ability to teach
                   problem solving techniques.
            9.33  Effectively plan and implement tech-
                   nology education in grades 5-12.
             9.28  Develop and implement curriculum ma-
                    terials that reflect a broad
                    technological system area.
            9.28  Demonstrate an awareness of society's
                   reliance on technological systems.
            9.22  Plan and implement teaching-learning
                   activities.
            9.17  Use a vocabulary that reflects the
                   concepts of technology education.
            9.11  Apply current instructional theory.
            9.06  Formulate appropriate objectives.
            9.05  Be open to change and willing to ini-
                   tiate change.
            9.05  Consider global perspectives in tech-
                   nology education.
             9.00  Demonstrate a basic understanding of
                    tools, machines, and processes and
                    their applications in manufacturing,
                    construction, communication, and
                    transportation.
 
          GRADUATES OF THE TECHNOLOGY TEACHER EDUCATION
          PROGRAM ...
            9.78  Employ a philosophy which reflects a
                   technological base.
            9.61  Teach concepts and use teaching tech-
                   niques that are technology based.
 
          PROCEDURE STATEMENTS ...
            9.50  Examine the curriculum to determine
                   if the philosophy, definition, mission
                   statement, goals and objectives, course
                    content, and learning experiences
                   reflect technology education.
            9.22  Analyze the courses required in the
                   program, the content contained in each
                   of the courses, teaching strategies and
                   methods, assignments, tests, and
                   student field experience to determine if
                   they reflect technology education.
          ---------------------------------------------
 
          DEVELOPING THE TECHNOLOGY TEACHER EDUCATION
          CHECKLIST (TTEC)
               An initial review of the listing of cri-
          teria and procedures identified by the panel-
          ists in this research suggested many
          parallels to the NCATE approved curriculum
          guidelines as specified in the BASIC PROGRAM
          IN TECHNOLOGY EDUCATION (1987).  The intent
          of this investigation was not to duplicate
          the NCATE assessment process, but to identify
          essential elements in the implementation of
          technology teacher education that would serve
          as key indicators of the effectiveness of the
          change from industrial arts teacher educa-
          tion.  In order to concentrate the assessment
          effort, therefore, criteria were selected for
          inclusion in the measurement instrument if
          they were:
 
          1.  Highly ranked within their criteria cate-
              gory but not addressed by NCATE curric-
              ulum guidelines;
          2.  Correlated to NCATE curriculum guidelines
              for technology teacher education and dis-
              tinctly different from usual practices in
              industrial arts teacher education; or
          3.  Considered to be essential to support the
              process of organizational change.
 
               Other suggested items were not included
          in the TTEC because they were measurements of
          program outcome, such as performance of pro-
          gram graduates.  These items were excluded
          from the measurement instrument since tech-
          nology teacher education is in the implemen-
           tation phase, a stage at which, as Hall and
           Hord (1987) noted, "interpreting any outcome
           data is extremely risky" (p. 343).
               Further, the procedures proposed for
          this formative evaluation design were pur-
          posely limited by the following criteria:
 
          1.  The time required for on-site data col-
              lection by the external evaluator(s)
              should not exceed two observer-days.
          2.  With the exception of interviews and
              classroom and laboratory observation ses-
              sions, the data gathering should not re-
              quire additional faculty time.
          3.  Existing data should be used whenever
              possible.
          4.  Data gathering should not seriously dis-
               rupt ongoing instructional activities.
 
          In this way, the evaluation may be conducted
          in a reasonable time with a minimum of dis-
          ruption to departmental activities.
 
          VERIFICATION OF THE TTEC
               In order to verify the measures selected
          for inclusion in the checklist, a draft of
          the TTEC was sent to the panel for editorial
           suggestions and additional comments.  Sixteen
           of the 22 panelists responded.  Most
          respondents suggested editorial revisions or
          made other comments.  Careful consideration
          was given to these suggestions as revisions
          were made in the TTEC.  The TTEC, revised to
          incorporate suggestions from panelists, is
          reproduced below.
 
          TECHNOLOGY TEACHER EDUCATION CHECKLIST
 
          1.  Examine the catalog, a sample of curric-
              ulum documents, and a sample of course
              syllabi to verify the degree to which:
              a.  The philosophy, mission statement,
                  and goals and objectives of the pro-
                  gram reflect the definition(s) of
                  technology education suggested by
                  ITEA, CTTE, and relevant groups in
                  the state/province.
              b.  Study is required in technological
                  systems such as communication, pro-
                  duction (construction and manufactur-
                  ing), transportation, and
                  biotechnology.
              c.  Courses in mathematics, science, and
                  computing science are required.
              d.  Required full-time student teaching
                  and early field experiences are con-
                  ducted in an exemplary technology ed-
                  ucation setting.
              e.  Required reading lists provide com-
                  prehensive coverage of technology and
                  technology education.
              f.  Learning activities and experiences
                  are representative of technology edu-
                  cation.
          2.  Interview the department head with regard
              to the change to technology teacher edu-
              cation to discern the degree to which:
              a.  Funding is adequate to support the
                  current technology teacher education
                  program and plans are in place for
                  periodic replacement and upgrading of
                  facilities and equipment.
              b.  Faculty and staff allocations are ad-
                  equate to serve student enrollments
                  in technology teacher education.
              c.  The written departmental plan for
                  faculty professional development and
                  technological updating is adequate to
                  prepare faculty members for contempo-
                  rary technology teacher education.
              d.  Enrollments in the major are ade-
                  quate, stable, or increasing.
              e.  The written departmental implementa-
                  tion plan for technology teacher edu-
                  cation addresses the process of
                  organizational change.
              f.  Faculty are committed to the philoso-
                  phy and objectives of technology edu-
                  cation.
          3.  Interview faculty members and review re-
              cent annual reports, biodata information,
              faculty publications, copies of presenta-
              tions, and manuscripts being considered
              for publication to verify whether:
              a.  Faculty are writing scholarly papers,
                  developing instructional materials,
                  and giving presentations about tech-
                  nology education.
              b.  Current faculty research and service
                  activities are directed toward topics
                  and issues in technology education.
              c.  Faculty are actively involved in pro-
                  fessional organizations in technology
                  education.
          4.  Observe professional and technical
              classes to discern the degree to which:
              a.  Instructional methods emphasize tech-
                  nological problem solving and
                  decision-making.
              b.  Instructional materials reflect con-
                  temporary technology.
              c.  Major elements of technology educa-
                  tion (e.g., systems, environmental
                  and social impacts, and the applica-
                  tions of technological devices) are
                  emphasized in the course activities.
          5.  Inspect laboratory facilities to ascer-
              tain the degree to which:
              a.  Laboratories are adequate for effec-
                  tive instruction.
              b.  Equipment and space provide students
                  adequate opportunities for experi-
                  ences in state-of-the-art applica-
                  tions of technology (e.g., CAD/CAM,
                  CIM, robotics, desk-top publishing,
                  lasers, table-top technology,
                  hydroponics).
          6.  Interview students, and examine student
              logs and required student work to discern
              whether:
              a.  The elements of technology education
                  are understood and integrated into
                  their total philosophy of education.
              b.  They are active in a TECA chapter.
              c.  The problem solving process and
                  decision-making rationale are incor-
                  porated into grading.
              d.  Environmental consequences and
                  social-cultural effects of technology
                  are reflected in student activities.
          7.  Interview chairs of related departments
              and administrators (dean, provost, or
              president) to ascertain the degree of
              philosophical support that is provided
              for technology education.
          8.  Listen to conversations and discussions
              and observe student activity to discern
              the degree to which:
              a.  The terminology used by faculty and
                  students reflects technology and
                  technology education.
              b.  Faculty and students appear to be en-
                  thusiastic about technology educa-
                  tion.
          9.  Interview principals who have experience
              with student teachers and graduates of
              the technology education program to dis-
              cern whether the program prepares profes-
              sionals to:
              a.  Plan and implement technology educa-
                  tion.
              b.  Use problem solving strategies.
              c.  Apply current instructional theory.
 
                      USING THE INSTRUMENT
               Jordan (1989) began a discussion of
          evaluation and change by reminding practi-
          tioners that:
 
             One of the axioms of measurement is
             that assessment is not an end in it-
             self.  We evaluate because we wish to
             know the current state of affairs, but
             we wish to do that in order to make im-
             provements.  Exactly how we wish to im-
             prove depends on what we discover.  In
             theory, the process is circular and un-
             ending.  That is, we should assess and
             make improvements and then assess the
             improvements. (p. 147)
 
          With this interaction between evaluation and
          change in mind, there are several possible
          ways of using the instrument developed
          through this research.  Perhaps the simplest
          use would be for an internal or external
          evaluator to use the instrument as a check-
          list of what has been accomplished and what
          is in progress (or still to be initiated).
          Two more complex uses may include determining
          if the innovation is in place and using force
          field analysis to determine sources of re-
          sistance.
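                This simplest use amounts to keeping a
           record of item status.  The sketch below (in
           Python, with hypothetical TTEC items and
           statuses) summarizes what has been accom-
           plished, what is in progress, and what is
           still to be initiated.

              from collections import Counter

              # Status of each TTEC item; the items and
              # statuses here are hypothetical.
              checklist = {
                  "Mission statement reflects technology "
                  "education": "accomplished",
                  "Faculty development plan written":
                      "in progress",
                  "Laboratories support technological "
                  "systems": "to be initiated",
              }

              counts = Counter(checklist.values())
              for status in ("accomplished", "in progress",
                             "to be initiated"):
                  print(f"{status}: {counts[status]}")
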
 
           DETERMINING IF THE INNOVATION IS IN PLACE
               Hord, Rutherford, Huling-Austin, and
          Hall (1987) proposed that before assessing
          program outcomes it is first necessary to de-
          termine that the innovation is in fact in
          place.  They indicated two ways of making
           that determination: (a) the fidelity of the
           actual implementation of the innovation can
           be compared with the intended innovation, and
           (b) the actual levels of use can be
           determined.  Hord et al. pro-
          posed that each innovation has essential and
          related components.  The essential components
          cannot be changed without undermining the na-
          ture of the innovation itself.  The related
           components allow for local flexibility and,
          while varied, are still faithful to the inno-
          vation design.  Hord et al. suggested that
          assessment of fidelity can be made by devel-
          oping a checklist that outlines ideal, ac-
          ceptable, and unacceptable variations of the
          innovation.  In technology teacher education
          programs, many of the criteria identified
          through this research may serve as the "es-
          sential" components.
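                A fidelity checklist of the kind Hord et
           al. describe can be sketched as a mapping
           from each essential component's observed
           variation to a rating of ideal, acceptable,
           or unacceptable.  The component and vari-
           ations below are hypothetical, not items
           drawn from the TTEC.

              # Sketch of a fidelity check: classify the
              # observed variation of each essential
              # component.  All entries are hypothetical.
              VARIATIONS = {
                  "problem-solving instruction": {
                      "integrated in all technical courses":
                          "ideal",
                      "integrated in some courses":
                          "acceptable",
                      "absent from technical courses":
                          "unacceptable",
                  },
              }

              observed = {
                  "problem-solving instruction":
                      "integrated in some courses",
              }

              for component, variations in VARIATIONS.items():
                  rating = variations[observed[component]]
                  print(f"{component}: {rating}")
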
               The second measure proposed by Hord et
          al. (1987) to determine whether or not the
          innovation is actually in place is an assess-
           ment of the levels of use.  These levels
           range from Level of Use 0 (nonuse) to Level
           of Use VI (renewal), where the "user reevalu-
           ates the quality of use of the innovation,
           seeks major modifications of or alternatives
           to present innovation to achieve increased
          impact on clients, examines new developments
          in the field, and explores new goals for self
           and the organization" (p. 55).  If the TTEC
           is used to identify the essential components
           of the change to technology teacher educa-
           tion, an assessment of levels of use from
           the perspective of the faculty may be an
           important step in measuring the effective-
           ness of the change and in planning further
           intervention strategies.
 
          FORCE FIELD ANALYSIS
                Lewin (1951), the originator of field
           theory in psychology, proposed that change is
           the result of competition between driving and
           resisting forces.  Lewin's conceptualization
          has been adapted to describe the dynamics of
          a number of management situations in organiza-
          tional change.  Daft (1988) stated that:
 
             To implement a change, management
             should analyze the change forces.  By
             selectively removing forces that re-
             strain change the driving forces will
             be strong enough to enable implementa-
             tion. . . .  As restraining forces are
             reduced or removed, behavior will shift
             to incorporate the desired changes. (p.
             313)
 
               Miller (1987) suggested that force field
          analysis could be used to nurture a climate
          receptive to innovation and creativity.
          Miller stated:
 
             The primary function of the force field
             in idea generation is to present three
             different stimuli for thinking of new
             options or solutions.  Because the
             field represents a kind of tug-of-war,
             there are three ways to move the center
             line in the direction of the more de-
             sirable future:
 
             1.  Strengthen an already present posi-
                 tive force.
             2.  Weaken an already present negative
                 force.
             3.  Add a new positive force. (p. 73)
 
               If these two ideas are taken together, a
          picture emerges of how force field analysis
          and the instrument designed through this re-
          search could be applied to the transition
          from industrial arts teacher education to
          technology teacher education.  First, each
          criterion could be assessed to determine its
          relative strength as a driving force for
          change.  Additionally, forces unique to the
          particular implementation may be identified
          and dealt with.  Second, the information gen-
          erated through the assessment could be used
          to strengthen the implementation procedures.
          In this way, the instrument may serve as a
          game plan for implementation and continued
          assessment of the change.
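                A force field tally can likewise be
           sketched in a few lines.  In the sketch below
           (in Python, with hypothetical forces and
           strength ratings), the net score rises when a
           restraining force is weakened, mirroring the
           three moves Miller describes.

              # Sketch of a force field tally; the forces
              # and strength ratings are hypothetical.
              driving = {
                  "state certification mandate": 8,
                  "faculty commitment to change": 6,
              }
              restraining = {
                  "cost of laboratory conversion": 7,
                  "attachment to industrial arts": 5,
              }

              def net_force():
                  return (sum(driving.values())
                          - sum(restraining.values()))

              print(f"net force toward change: {net_force():+d}")

              # Weakening a restraining force moves the
              # center line toward the desired future.
              restraining["cost of laboratory conversion"] -= 3
              print(f"after intervention: {net_force():+d}")
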
 
                          IMPLICATIONS
               The Technology Teacher Education Check-
          list, which was the primary outcome of this
          research, should be useful to the faculty of
          a technology teacher education program or to
          an external evaluator in conducting formative
          or summative assessments of the change to
           technology education.  While its use entails
           minimal duplication of the NCATE approval
           procedures, the items in the TTEC focus upon
           key indicators of effective change to tech-
           nology teacher education.  The TTEC might be
           especially useful in a review of a technology
           teacher education program a year or two in
           advance of the preparation of a curriculum
           folio to be submitted for consideration for
           NCATE approval.
 
          ----------------
          Daniel L. Householder is Professor and
          Richard A. Boser is a Graduate Assistant in
          the Department of Industrial, Vocational and
          Technical Education, Texas A&M University,
          College Station, Texas.
 
 
                           REFERENCES
          Ayers, J. B., Gephart, W. J., & Clark, P. A.
             (1989).  The accreditation plus model. In
             J. B. Ayers & M. F. Berney (Eds.), A PRAC-
             TICAL GUIDE TO TEACHER EDUCATION EVALU-
             ATION (pp.  13-22). Boston:
             Kluwer-Nijhoff.
          Bjorkquist, D. C., & Householder, D. L.
             (1990). Reaction to reform: Research im-
             plications for industrial teacher educa-
             tion.  JOURNAL OF INDUSTRIAL TEACHER
             EDUCATION, 27(2), 61-74.
          Brinkerhoff, R. O., Brethower, D. M.,
             Hluchyj, T., & Nowakowski, J. R. (1983).
             PROGRAM EVALUATION: A PRACTITIONER'S GUIDE
             FOR TRAINERS AND EDUCATORS. Boston:
             Kluwer-Nijhoff.
          Cronbach, L. J. (1982). DESIGNING EVALUATIONS
             OF EDUCATIONAL AND SOCIAL PROGRAMS. San
             Francisco: Jossey-Bass.
          Daft, R. L. (1988). MANAGEMENT.  New York:
             Dryden Press.
          Gee, E. A., & Tyler, C.  (1976).  MANAGING
             INNOVATION.  New York: John Wiley & Sons.
          Giacquinta, J. B.  (1980).  Organizational
             change in schools of education:  A review
             of several models and an agenda of re-
             search.  In D. E. Griffiths & D. J.
             McCarthy (Eds.), THE DILEMMA OF THE
             DEANSHIP.  Danville, IL: Interstate.
          Hall, G. E., & Hord, S. M. (1987). CHANGE IN
             SCHOOLS:  FACILITATING THE PROCESS.
             Albany: State University of New York
             Press.
          Hopkins, D. (1984). Change and the
             organisational character of teacher educa-
             tion. STUDIES IN HIGHER EDUCATION, 9(1),
             37-45.
          Hord, S. M., Rutherford, W. L., Huling-
             Austin, L., & Hall, G.  E. (1987). TAKING
             CHARGE OF CHANGE.  Alexandria, VA:  Asso-
             ciation for Supervision and Curriculum De-
             velopment.
          International Technology Education
             Association/Council on Technology Teacher
             Education. (1987). NCATE-APPROVED CURRIC-
             ULUM GUIDELINES: BASIC PROGRAM IN TECHNOL-
             OGY EDUCATION.  Reston, VA: Author.
          Jordan, T. E.  (1989).  MEASUREMENT AND EVAL-
             UATION IN HIGHER EDUCATION: ISSUES AND IL-
             LUSTRATIONS.  Philadelphia:  Falmer Press.
          Lewin, K. (1951). FIELD THEORY IN SOCIAL SCI-
             ENCE.  New York: Harper.
          Miller, W. C.  (1987).  THE CREATIVE EDGE:
             FOSTERING INNOVATION WHERE YOU WORK.
             Reading, MA: Addison-Wesley.
          National Council for Accreditation of Teacher
             Education.  (1987).  STANDARDS, PROCE-
             DURES, AND POLICIES FOR THE ACCREDITATION
             OF PROFESSIONAL EDUCATION UNITS.
             Washington, DC: Author.
          Pelz, D. C.  (1981).  'STAGING' EFFECTS IN
             ADOPTION OF URBAN INNOVATIONS.  Paper pre-
             sented at the Evaluation Research Society,
             Austin, TX.
          Popham, W. J.  (1975).  EDUCATIONAL EVALU-
             ATION.  Englewood Cliffs, NJ: Prentice-
             Hall.
          Provus, M. (1971).  DISCREPANCY EVALUATION
             FOR EDUCATION PROGRAM IMPROVEMENT AND AS-
             SESSMENT.  Berkeley, CA: McCutchan.
           Rogers, E. M.  (1983).  DIFFUSION OF INNO-
              VATIONS.  New York: Free Press.
          Roth, R. A.  (1978).  HANDBOOK FOR EVALUATION
             OF ACADEMIC PROGRAMS: TEACHER EDUCATION AS
             A MODEL.  Washington, DC: University Press
             of America.
          Stufflebeam, D. L.  (1983).  The CIPP Model
             for program evaluation.  In G. F. Madaus,
             M. S. Scriven & D. L. Stufflebeam (Eds.),
             EVALUATION MODELS: VIEWPOINTS ON EDUCA-
             TIONAL AND HUMAN SERVICE EVALUATION
              (pp. 117-142).  Boston: Kluwer-Nijhoff.
          Stufflebeam, D. L., & Webster, W. J.  (1980).
              An analysis of alternative approaches to
             evaluation.  EDUCATIONAL EVALUATION AND
             POLICY ANALYSIS, 2(3), 5-19.
           Stufflebeam, D. L., Foley, W. J., Gephart, W.
             J., Guba, E. G., Hammond, R. L., Merriman,
             H. O., & Provus, M. M. (1971).  EDUCA-
             TIONAL EVALUATION AND DECISION MAKING.
             Itasca, IL:  F. E. Peacock.


          Permission is given to copy any
          article or graphic provided credit is given and
          the copies are not intended for sale.
 
Journal of Technology Education   Volume 2, Number 2       Spring 1991