Journal of Industrial Teacher Education

Volume 35, Number 3
Spring 1998



Improving Instructional Development of Industrial Technology Graduate Teaching Assistants

Paul E. Brauchle
Illinois State University
Kenneth F. Jerich
Illinois State University

The teaching ability of graduate teaching assistants (GTAs) and the need for programs that lead to significant improvements in instruction methods by GTAs has been an ongoing concern in higher education. For example, Robert Berdahl, President of The University of Texas at Austin, in his keynote address at the Fourth National Conference for the Training and Employment of Graduate Teaching Assistants, indicated that faculty must refocus their efforts on the importance of teaching if the academy wants to extend the public confidence higher education now enjoys (Berdahl, 1995). He further stated that expectations concerning teaching in higher education settings are increasing and that the education of graduate teaching assistants "for and by their professions–must be shaped with this reality in mind…", and, "I would challenge you all to consider how these changes can alter how we teach, for we must translate those changes into considerations of how best to train graduate students" (p. 4). Ernest Boyer (1987) established the focus when he indicated that American colleges were ready for renewal and that the nation’s colleges had been successful in responding to student diversity. However, he also indicated that higher education might have been less effective in helping students apply subject matter knowledge to authentic settings.

Much of the vitality of higher education institutions resides in the faculty and how they connect with their students. Faculty, especially young faculty and future professors (usually graduate teaching assistants), must have opportunities to engage in and improve their teaching skills in college classrooms. Hackney (1993), during his testimony at his Senate confirmation hearings, stated that "Every human experience is enhanced by higher levels of knowledge" (p. B4). Halpern (1994) suggested that students are preparing for careers that will be typified by change. Given this reality, reliance on rote content knowledge is outdated and must be replaced with an emphasis on developing critical thinking and learning skills. She also noted that, in 1992, the American Psychological Association indicated that the active nature of learning is not optimized in traditional classrooms where students sit quietly and passively receive information from a teacher positioned at the front of the classroom. The shift in emphasis from factual content knowledge to process-based problem-solving is already required in today’s world of work. College teachers must be able to adjust to these new dimensions and dynamics in the college classroom in order for students to be successful learners.

New teaching and learning strategies should emphasize active questioning, cooperative learning activities, and reflective discussions. These types of activities promote significant and meaningful learning at higher levels of cognition. Critical analysis and evaluative judgments should be paramount goals for college classrooms. Newell-Decyk (1994) noted the power of illustrations and examples during the teaching process, and Hansen (1994) noted that a variety of questioning techniques can be employed to promote critical and creative thinking in college classrooms.

In addition to changes in the nature of work, the types of students who are entering undergraduate studies are changing rapidly. Reports from blue ribbon panels, research on college teaching, and efforts to improve the teaching effectiveness of GTAs indicate that in order for undergraduate students to achieve higher levels of knowledge, they must be engaged in higher cognitive learning processes. Moreover, graduate teaching assistants should understand how the concept of information processing provides a theoretical foundation as well as how contextual factors can balance teaching of thinking and the thinking processes of their students (Doyle, 1987; Jerich, 1996; Joyce & Weil, 1996; Lambert & Tice, 1996). Numerous teaching and learning theories and strategies for effective instruction exist across academic disciplines. Similarly, an array of theoretical positions informs programs designed to train GTAs to become tomorrow’s effective professors (Jerich & Brauchle, 1993).

The need for professional faculty development programs to prepare GTAs as future professors is becoming more pronounced in higher education (Border, 1993; Brookfield, 1995; Bruce, 1997; Jerich, Leinicke, & Pitstick, 1995; Lambert & Tice, 1993; Levinson-Rose & Menges, 1981; Ronkowski, Conway, & Wilde, 1995; Svinicki, 1995). Various approaches for training GTAs are being employed at higher education institutions. These vary in depth and formality depending on the curricular emphasis and institutional policies.

Purpose of the Study

The purpose of this study was to examine the effect of pedagogical training on the performance of industrial technology graduate teaching assistants when teaching performance was measured by their students’ evaluations of lessons. The teaching performance of one group of industrial technology GTAs (who were classified as not having the necessary prerequisite knowledge base to teach at the college level) was compared with the teaching performance of a second group of GTAs (classified as already having the necessary prerequisite knowledge base to teach at the college level). The GTAs without the necessary prerequisite knowledge base were required to complete a formal training course in college teaching for graduate teaching assistants as part of their degree programs.

Perspectives on Effective Pre-service Teacher Preparation

The instructional approach used in this study to train graduate teaching assistants combined microteaching with actual classroom teaching. A major premise of this approach was that effective instruction consists of the systematic study of the principles of teaching and learning in concert with each other (incorporating teaching strategies for both content and methods). Instructional approaches that use only one or the other of the teaching strategies in isolation do not constitute a unified whole. For example, the concept of microteaching, developed at Stanford University in 1963, was defined as the laboratory practice of specific teaching skills through a teach-reteach cycle covering 14 technical teaching skills. However, subsequent studies on various technical skills (Gage, 1976; Peterson, Marx, & Clark, 1978) revealed that excessive attention was given to the original teaching skills, indicating a need to explore other models of microteaching.

The University of Illinois at Urbana-Champaign incorporated the use of microteaching based on a general methods approach. Subsequent studies on the impact of the Illinois model suggested that this approach was extremely effective for the training of preservice teachers (Jerich & Johnson, 1989). Furthermore, various teaching methods of higher education have been studied and have resulted in different approaches to training faculty in higher education (Brown, 1987; Connell, 1987; Cranton, 1987; Dunkin, 1987; McKeachie, 1963). Hence, the approach used for training graduate teaching assistants for the improvement of teaching practice drew upon appropriate aspects of many instructional approaches and was rooted in the belief that learning is driven by information processing (Joyce & Weil, 1996).

Teaching, as information processing, was actualized in the course by planning units of instruction designed to engage students systematically in cognitively engaged, student-initiated learning without diminishing the extent of textbook content coverage. The pre-conference consultation, teaching practice, and post-conference consultation components featured experiences in which the GTAs planned for and practiced the various stages of the conception of teaching learned during the classroom instruction component.

This approach to training graduate teaching assistants creates a dual role for participants (i.e., student and teacher). During the classroom sessions, which used instructional models based on the cognitive theory, the GTAs assumed the role of students; the educational goal was to achieve mastery of the theoretical constructs and principles of teaching. Here, the GTAs learned to develop and refine their teaching styles. In the clinical-based pre-conference consultations, teaching practice, and clinical-based post-conference consultations, the graduate teaching assistants became practicing teachers. This provided opportunities for the graduate teaching assistants to experiment with cultivating pedagogical intelligence, a position advanced for all teachers by Rubin (1989).

Thus, training programs designed to enhance university-level teaching should possess two key components. First, the curriculum for a training program should focus on general pedagogical knowledge (Shulman, 1987). Such knowledge would include, at a minimum, how to logically construct subject matter (i.e., content pedagogy) as well as how to deliver that subject matter (i.e., pedagogy methodologies). Second, the training program should include clinical practice (i.e., classroom field experience) (Raths, 1987).

Methodology

Research Design

A causal comparative post-test design was used to measure the instructional impact of the comparison and treatment groups of GTAs (Borg & Gall, 1989; Campbell & Stanley, 1963; Kerlinger, 1973). Instructional impact was assessed by using student ratings of GTAs who taught first- and second-year undergraduate lecture/discussion industrial technology courses. This method has been used when "…comparing subjects in whom [a] pattern or characteristic is present with similar subjects in which it is absent or present to a lesser degree" (Borg & Gall, 1989, p. 537).

In this study, the comparison group was composed of GTAs who had prior teaching experience while the treatment group consisted of GTAs who had received a formal training course in instruction. Cook and Campbell (1979) and Schumacher and McMillan (1989) have indicated that there will be situations in which a treatment is implemented before a researcher can fully prepare for it. Such was the case in this study. From an industrial technology department’s administrative policy position, the researchers could not control for the assignment of different industrial technology GTAs who were to complete the Curriculum & Instruction (C&I) training course; nor could they control for the industrial technology department’s decision to have other graduate teaching assistants not complete the C&I training course. The researchers did recognize, however, that different approaches to GTA training in Industrial Technology may have transpired and that the teaching effectiveness of those different GTAs could be assessed.

This study reports the results of that assessment. Although the number of GTAs in each group differed, each group did represent an entire "intact" population of teaching assistants who received a particular type of teacher training. According to Krathwohl (1993) and Glass and Hopkins (1984), intact groups can be used for empirical research. There was no attempt to control for specific background differences in the GTAs, such as age or learning ability. The prior teaching experience of the GTAs was controlled for by the selection criteria established by faculty in the Industrial Technology Department.

Instrumentation

The instrument used to measure the effectiveness of the two groups of GTAs was the Teacher Performance Appraisal Scale (TPAS) (Johnson, 1968). The TPAS is "lesson specific"; that is, it is designed to measure the extent to which content and pedagogy strategies have been successfully incorporated into a lesson. By having the undergraduate students complete the TPAS instrument, the authors were able to closely measure the GTAs’ use of content and pedagogy during instruction (Jerich, 1993). The TPAS instrument consists of 10 structured items measuring five areas thought to be important for teacher performance: Learning Aims, Content, Method, Evaluation, and Accomplishment.

Learning aims. Learning aims refer to the extent to which explicitly or implicitly stated purposes of the learning are developed by the instructor and are understood by the learner. An understanding of aims is valuable to the learner and to the teacher. Learning, at least cognitive learning, can be thought of as a process of acquiring information, analytical constructs, and understandings of relationships and purposes as well as how to relate these new learnings, in some organized fashion, to previous knowledge. Learning aims that are not connected are quickly discarded in favor of learnings that can be organized in meaningful ways by learners. Effective teachers attempt to clearly define their purposes through explicit statements or pursue lines of reasoning that result in meaningful learning. An understanding of aims is also of value to teachers. Learning aims serve as a selection criterion for content, methods, and procedures used during instruction. Teachers make numerous professional decisions on a daily basis. If they have not clearly formulated their purposes, many of these decisions risk being disconnected or inappropriate. Without a clear understanding of purpose, teachers have no rational basis for evaluating learner progress or their own degree of success.

Content. Content refers to an array of information included in the lesson and includes the extent to which the materials are meaningful and organized. Conceptual or process aims may be achieved in various contexts. For example, an essential requisite for increasing problem-solving skill is a sufficient base of relevant content knowledge. Specific problems should be relevant to the subject of the course, the educational goals of the community and the interests of the learners. Within these rather broad boundaries, teachers have considerable latitude in selecting specific content that will be of interest to students and that can be adjusted to various levels of abstraction and challenge.

Methods. Methods refer to the activities that are selected for instructional purposes (e.g., lecture or learner participation) and include considerations such as the extent to which methods are stimulating to learners and successful in terms of the lesson’s objective. Similarly, the selection of methods should be related to the aims of instruction and variables such as students’ achievement, abilities, and interests. A process orientation to aims assumes differences in levels of behavior in the cognitive, affective, and psychomotor domains. Thus, classroom activity appropriate for addressing a problem-solving goal might not be useful or appropriate for a goal focused on teaching creative expression. Various instructional goals elicit different human behaviors, and classroom experiences should be selected and constructed with these differences in mind. Similar arguments can be made for student variables such as differences in achievement, ability levels, and interests.

Evaluation. Evaluation involves ways in which the learning process is assessed by the instructor including verbal and non-verbal cues obtained by contact with the learners. Even when lessons have clearly defined goals, relevant content, and utilize appropriate methodology, learning is not assured. Teachers must continuously evaluate the learning process, not in terms of coverage or clock time, but in terms of the extent to which students have met lesson goals and objectives. When students are not learning, teachers must adjust or reconstruct the learning environments to rekindle learning. These adjustments cannot be made unless teachers have some way of assessing the success of lessons on a continual basis. This involves numerous approaches including establishing eye contact with the learners, asking questions, and encouraging learners to ask questions. Through these channels of communication, teachers obtain vital student evaluation information. With such information teachers can adjust or reconstruct learning environments to increase the probability of learning.

Accomplishment. Accomplishment refers to the extent to which students believe something worthwhile was accomplished in the lesson. While this represents a subjective student judgment, it is nevertheless a component of teaching that is of interest to instructors.

Johnson (1968) concluded that the TPAS has a sufficient degree of validity when administered by both trained evaluators and comparatively untrained student evaluators. Internal consistency analysis indicated that some subsections of the instrument should be considered together. For example, Learning Aims and Content merged into one factor, which makes sense given that content drives the aim of many secondary lessons. Evaluation emerged as a second factor while the Methods subscales were shared between two factors (See Johnson, 1973, for a detailed discussion of the psychometric properties of the TPAS).

Procedures

The TPAS was used to test for differences in student perceptions of GTAs in the comparison and treatment groups. The instrument was administered to the undergraduate students enrolled in the various GTAs’ classes during the 12th week of each of the three semesters, two weeks before the university’s standard end-of-semester evaluation. The internal consistency reliability of the TPAS was estimated to be α = .94 (Cronbach’s coefficient alpha). Using the TPAS, undergraduate students were asked to rate one lesson. The students did not know whether their GTA belonged to the comparison or the treatment group. Measures of central tendency and analysis of variance yielded a quantitative analysis of the data. Exit interviews of the treatment group of GTAs provided qualitative data.
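To make the reliability estimate concrete, the following is a minimal sketch of how an internal-consistency coefficient such as the reported α = .94 is computed from an item-response matrix. The rating values below are illustrative stand-ins, not the study's data.

```python
def cronbach_alpha(responses):
    """Cronbach's coefficient alpha for internal consistency.

    responses: list of respondents, each a list of k item ratings.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(responses[0])

    def variance(xs):
        # Sample variance (n - 1 in the denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Illustrative 7-point ratings from four students on three items
ratings = [
    [5, 5, 6],
    [4, 4, 4],
    [6, 5, 6],
    [3, 3, 4],
]
print(round(cronbach_alpha(ratings), 2))  # 0.96 for this toy data
```

A high alpha, as here, indicates that students who rate one item highly tend to rate the others highly as well, which is what the reported .94 conveys about the 10 TPAS items.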

Sample

The total sample consisted of 13 Industrial Technology GTAs who had taught over a three-semester period. The comparison group of GTAs (n = 6) did not participate in the formal training in teaching strategies while the treatment group (n = 7) did complete the formal training. One hundred forty undergraduate students rated the comparison group and 106 undergraduate students rated the treatment group (both groups used the TPAS). This could be viewed as a form of cluster sampling (Cook & Campbell, 1979) since the undergraduate students were free to register for any available course section. The lessons taught by the GTAs were not specially selected for this study. The lessons happened to be scheduled on the course calendars for the 13th week in the semester.

The GTAs in the experimental and control groups taught various sections of IT 130, Introduction to Manufacturing Processes and ACS 155.02, Introduction to Microcomputers. IT 130 concentrates on secondary material processes, including the use of industrial machinery and the study of forming, casting, separating, joining, and conditioning. ACS 155.02 is an introduction to microcomputers and programming, with an emphasis on scientific and technical applications. The course includes BASIC and machine language programming including I/O, elementary files, application software, and hardware and software evaluation. Both courses were taught in multiple sections and GTAs from the treatment and comparison groups were assigned by the departmental administration based upon departmental needs. Both courses were required for all students seeking Industrial Technology degrees.

Comparison group. The comparison group for the study was evaluated (e.g., transcript review, vita, interview) by faculty of the Department of Industrial Technology. These individuals were judged to have an acceptable standard of teaching knowledge to warrant their bypassing enrollment in the Curriculum and Instruction training program for graduate teaching assistants. This comparison group had been trained previously or possessed what were judged to be sufficient teaching experiences in the content areas they were assigned to teach. Typically, the "previously trained" group had completed a teacher preparation program in industrial arts, industrial technology, or technology education, where a technical skills approach to teaching had been emphasized. Furthermore, they had attended seminars and workshops focused on teaching technical skills. Previous teaching experiences of the comparison group typically included teaching industrial technology courses at the university or secondary-school level, as well as having been mentored by industrial technology professors.

Treatment group. Graduate teaching assistants with neither formal training in teaching nor teaching experience were assigned to the treatment group. Through screening conducted by the industrial technology faculty, it was determined that these individuals had not yet met the prerequisite standard of teaching knowledge. These GTAs were thus required to complete the training program for graduate teaching assistants offered by the Department of Curriculum and Instruction.

Treatment

The treatment group completed a formal training program in college teaching, which is delivered by the Department of Curriculum and Instruction (C&I) at Illinois State University. The formal training consisted of a graduate level course, C&I 490, Improvement of Teaching Practice (see Jerich, 1993). The course was composed of four curricular components that were tightly coupled together. These four components were (a) classroom sessions on instructional models (based on the cognitive theory that teaching is seen as information processing), (b) clinical-based pre-conference consultations, (c) teaching practice (on-site lessons), and (d) clinical-based post-conference consultation (see Figure 1).

The initial 24-hour instructional phase was conducted over a period of eight three-hour sessions, one week prior to the start of the semester-long course. This instruction consisted of formal presentations on topics such as constructing knowledge structures, developing teaching rationale, and using methods and pedagogy to deliver various kinds of content. The GTAs were also exposed to model teaching protocols that they were to incorporate into their practice teaching. The training culminated with a 20-minute microteaching lesson, delivered by each of the GTAs. Throughout the 16-week course, the GTAs met on a weekly basis (three-hour sessions each) for a total of 24 hours of additional instruction. These sessions focused on specific instructional models and teaching strategies, and used an information processing approach to teaching. The clinical experiences associated with the class sessions consisted of individualized 30-minute pre-conferences with each GTA, analysis of videotaped lesson observations (ranging from 50 minutes to over 100 minutes), and individual 30-minute post-conferences with the GTAs following the lesson observations.

Classroom instruction. Each of the GTAs was responsible for identifying styles of teaching that were comfortable for them and that they believed would satisfy the basic expectations of the students they would teach. In addition to extensive instruction about structuring knowledge, developing rationales, using learning outcomes in teaching, and selecting content pedagogy, a series of classroom sessions was prepared and delivered to GTAs using two basic categories of teaching strategies. These were: (a) teacher-centered strategies (e.g., lecture-recitation, demonstration, and concept teaching); and (b) student-centered strategies (e.g., relationships, value analysis, and reflective discussion). The lessons were composed of interactive methods and content strategies so that the systematic identification of teaching strategies became a central focus of instruction.

Pre-conference consultation. Prior to teaching a lesson, each graduate teaching assistant met with his/her supervisor to discuss the use of lesson planning and teaching strategies. During this pre-conference, the supervisor assessed the extent to which the graduate teaching assistant incorporated various content (e.g., defining the concept being taught and developing an explanation of the topic) and teaching strategies (e.g., recitation, demonstration, and discussion) learned in the class sessions into the lesson. Within a cooperative clinical supervision framework, close attention was given to the details of lesson planning and teaching strategies as well as to estimating the extent to which the lesson would be effective.

Applied teaching practice. In the teaching practice component of the course, each GTA was videotaped while delivering two lessons. The dynamics of the teaching-learning act were then assessed. Instructional time periods ranged from 45 to 75 minutes and the size of the group instructed averaged between 25 to 30 students. Graduate teaching assistants refined and developed their teaching style by practicing and experimenting with combinations of teaching strategies. The two on-site teaching lessons were sequentially and carefully guided. Only the second lesson delivered by the GTAs generated data for this study.

Post-conference consultation. Immediately after completing each lesson, graduate teaching assistants were given 10 minutes to analyze their lessons using a self-analysis form. During the 30-minute post-conference, which followed as soon as possible after their lesson analysis, the faculty supervisor and graduate teaching assistant discussed and compared their perceptions of the lesson. Graduate teaching assistants also received a written assessment of their lesson from their students. Thus, undergraduate students participated in the evaluation of the GTAs (see Jerich, 1993).

At this point, an additional comment should be made about the study’s design. The causal-comparative design is generally viewed as providing little or no direct control of variables that might affect a study’s results. While this design does not provide for direct control (compared with experimental designs), it is the only practical way of conducting research in many educational and business settings. In this case, graduate teaching assistants in the treatment and comparison groups were similar in terms of grade point averages (all had in excess of 3.0 GPA in their last 60 hours of undergraduate work) and technical backgrounds (all had taken classes at the undergraduate level in the technologies they were teaching). Also, since the classes were all composed of multiple sections, graduate teaching assistants had the opportunity to observe a lesson presented by a professor prior to delivering the same lesson to students.

As noted previously, criteria for assigning GTAs to treatment or comparison groups were that those with no previous instructional training or experience were assigned to the treatment group (they were enrolled in the C&I 490 course), while those with some previous instructional training or experience were assigned to the comparison group. Clinical classroom supervision (observation) was a requirement of the C&I 490 course and was part of the treatment. Both treatment and comparison groups participated in the instructor evaluation by their undergraduate students at the end of the class period. The students were assured that their responses were confidential and would in no way affect their grades for the course. The GTAs were not present when the students completed the instrument.

Use of Student Ratings to Judge Teacher Performance

The use of student-rating instruments to judge teaching effectiveness remains controversial. While student ratings capture some aspects of teaching effectiveness, they may also capture information that is unrelated to effective instruction. Nevertheless, student ratings are considered important in assessing the teaching effectiveness of faculty nationwide. Recently, the American Psychologist devoted an entire issue to the use of student ratings (see the November 1997 issue). The authors converged on two issues: (a) student ratings are valid, and (b) contextual variables can affect the level of ratings (d’Apollonia & Abrami, 1997; Greenwald, 1997; Greenwald & Gilmore, 1997; Marsh & Roche, 1997; McKeachie, 1997). McKeachie (1997) stated:

…There is little disagreement about the usefulness of student ratings for improvement of teaching (at least when student ratings are used with consultation or when ratings are given on specific behavioral characteristics). There are, however, two problems that detract from the usefulness of ratings for improvement. The first problem involves students’ conceptions of effective teaching. The second problem is the negative effect of low ratings on teacher motivation. A solution for both these problems is better feedback. (p. 1219)

Others have conducted extensive studies in the use of student evaluations in teaching and have found them to be a valid and reliable means of reporting general levels of effective teaching in college and university courses (Aleamoni, 1978; Feldman, 1979; Marsh, 1986; Tiberius, Sackin, Slingerland, Jubas, Bell, & Matlow, 1989; Wilson, 1988).

Findings

The study focused on the extent to which the two groups of graduate teaching assistants in industrial technology were rated as effective instructors. Mean TPAS ratings displayed in Figure 2 illustrate the extent to which GTAs were able to incorporate the appropriate content and teaching strategies into their classroom instruction in order to promote effective student outcome measures. Of the 10 items on TPAS, two dealt with the Learning Aims of the lesson, two related to the Content, three with Methods used, two with Evaluating instructor sensitivity to student needs, and one with the level of Accomplishment felt by the students. The last three items were key student outcome measures.

Five separate significance tests were used to test the hypothesis that there were no significant differences in student ratings of instructor effectiveness in terms of aims, content, method, evaluation, and accomplishment using the mean ratings of instructors on the TPAS. All hypotheses were non-directional and a significance level of .05 was selected.
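For readers unfamiliar with the procedure, the following is a minimal sketch of the one-way ANOVA underlying each of these significance tests: the F ratio compares between-group variation in mean ratings to within-group variation. The two rating lists are illustrative stand-ins, not the study's data, where the groups had n = 140 and n = 106 (hence df = 1, 244).

```python
def one_way_anova_f(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA.

    Each argument is a list of scores for one group.
    F = (SS_between / df_between) / (SS_within / df_within)
    """
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k = len(groups)           # number of groups (2 in the study)
    n = len(all_scores)       # total number of raters

    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: scores vs. their own group mean
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g
    )

    df_between = k - 1        # 1 for two groups, as reported
    df_within = n - k         # 244 for the study's 246 student raters
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Illustrative 7-point ratings for two small groups
comparison = [4, 4, 5, 3, 4]
treatment = [5, 6, 5, 6, 5]
f, df1, df2 = one_way_anova_f(comparison, treatment)
print(round(f, 2), df1, df2)  # 12.25 1 8
```

The computed F is then compared against the critical value for the chosen .05 significance level at the given degrees of freedom; in practice a library routine such as scipy.stats.f_oneway performs the same computation and also returns the p value.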

Learning Aims

For the comparison group, the mean rating for item one (Were the learning aims of the lesson understood?) was M = 4.24 (SD = .79) on the 7-point scale, and the mean rating for item two (Were the learning aims of the lesson developed?) was M = 4.20 (SD = .86). The treatment group’s ratings for the learning aims component of the TPAS were comparatively higher. The mean for item one was 5.29 (SD = .45), 1.05 points higher than the comparison group, and the mean for item two was 5.26 (SD = .162), 1.06 points higher. This indicated that the learning aims of the treatment group GTAs’ lessons were clearly understood and developed and that the lessons helped the students achieve the goals and objectives of the lessons taught.

Analysis of variance (ANOVA) was used to compare the mean scores for treatment groups on the Learning Aims components of the TPAS (see Table 1). A significant difference (df = 1,244, F = 82.60, p = .01) was detected between the means of the groups. This significant difference suggests that the Illinois State University GTA training program may have contributed to the significantly higher ratings on learning aims.

Content

The Content component of the TPAS assessed the content strategies used for the lessons. For the comparison group, the students indicated that Content for the GTAs’ lessons was meaningful and well organized. The data for the two items indicated that the students believed that the content of the lessons had some degree of relevance (M = 4.32; SD = .98) and that it was well organized (M = 4.51; SD = .63). The ratings were slightly above the 4.0 mean on the 7-point scales used in the study. By comparison, the treatment group’s student ratings for the content component indicated that the students believed that the content of the lessons had a high degree of relevance (M = 5.16; SD = .78) and was well organized (M = 5.65; SD = .63). Both items were rated higher (i.e., +.84 points and +1.14 points respectively) for the treatment group compared to the comparison group.

An ANOVA was used to compare the mean scores of the treatment and comparison groups on the Content component of the TPAS (see Table 2). The two-tailed test yielded a significant difference between the two groups (df = 1,244, F = 66.75, p = .01): GTAs who had completed the training were rated significantly higher than those who had not, which supports the benefits of a graduate teaching assistant training program.

Methods

The three items for the Method component of the TPAS were designed to measure the extent to which the methodologies used in the lesson had an impact on the lesson and on student learning. For the comparison group of GTAs, the data indicated that the students perceived the methodologies of the lessons as positive components, as measured by the means of 4.25 (SD = 1.01), 4.09 (SD = .93), and 4.18 (SD = .74), respectively. Students in the treatment group of GTAs also saw the methodologies of the lessons in a positive light, as indicated by the means of 5.42 (SD = .39), 4.97 (SD = .54), and 5.25 (SD = .58), respectively. The differences between the two groups' ratings were +1.17, +.88, and +1.07 points, respectively, all favoring the treatment group.

The third ANOVA compared treatment and comparison groups on the Method component of the TPAS (see Table 3). There was a significant difference (df = 1,244, F = 71.82, p = .01) between the groups on the Method component.

Table 3
Averaged Means and Standard Deviations for Experimental and Control Groups on the METHOD Component of the TPAS

Group                 Number    Mean    Standard Deviation

Control Group            140    4.17    0.88
Experimental Group       106    5.21    1.03
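With two groups, the one-way ANOVA F statistic can be recovered from the summary statistics alone (it equals the square of the pooled-variance independent-samples t statistic), which lets a reader check the published values. The sketch below is illustrative rather than a reconstruction of the authors' computation: the function name is ours, and the inputs are the rounded means and standard deviations from Table 3, so the result only approximates the reported F = 71.82.

```python
# Recompute the two-group ANOVA F for the Method component from the
# summary data in Table 3 (rounded inputs, so an approximation only).

def f_from_summary(n1, m1, sd1, n2, m2, sd2):
    """One-way ANOVA F for two groups from (n, mean, SD) summaries.

    For two groups, F = t**2, where t is the pooled-variance
    independent-samples t statistic with n1 + n2 - 2 df.
    """
    df_within = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df_within
    grand_mean = (n1 * m1 + n2 * m2) / (n1 + n2)
    ss_between = n1 * (m1 - grand_mean)**2 + n2 * (m2 - grand_mean)**2
    return ss_between / pooled_var  # df_between = 1, so MS_between = SS_between

# Table 3: control n = 140, M = 4.17, SD = 0.88;
#          experimental n = 106, M = 5.21, SD = 1.03.
f = f_from_summary(140, 4.17, 0.88, 106, 5.21, 1.03)
print(round(f, 2))
```

The small discrepancy from the published F = 71.82 is consistent with the two-decimal rounding of the means and standard deviations in the table.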

Evaluation

Two items of the TPAS addressed the Evaluation component. For the comparison group of GTAs, both items were rated on the positive side of the scale: the mean for item 8 was 4.50 (SD = 1.30) and the mean for item 9 was 4.16 (SD = .94). The treatment group's ratings for these two items were also positive; item 8 had a mean of 5.57 (SD = .71) and item 9 had a mean of 5.27 (SD = .58). Both items were rated higher (i.e., +1.07 and +1.11 points, respectively) for the treatment group of GTAs than for the comparison group.

When student ratings for treatment and comparison groups on the Evaluation component of the TPAS were analyzed using ANOVA, there was a significant difference (df = 1,244, F = 71.40, p = .01) between the groups (see Table 4). Again, the GTAs who had completed the training program received higher student ratings than those who had not.

Table 4
Averaged Means and Standard Deviations for Experimental and Control Groups on the EVALUATION Component of the TPAS

Group                 Number    Mean    Standard Deviation

Control Group            140    4.33    0.95
Experimental Group       106    5.40    1.01

Accomplishment

In the final component of the TPAS (Accomplishment), the students taught by the comparison group indicated that they felt positively about their sense of accomplishment in the lessons; the mean rating of 4.24 (SD = 1.02) for item 10 is a positive rating and served as an indicator of successful instruction. The students taught by the treatment group of GTAs indicated an even higher sense of accomplishment; their mean rating for item 10 (M = 5.39; SD = .39) is a high positive instructional rating.

Table 5 shows the averaged means and standard deviations for the treatment and control groups on the Accomplishment component of the TPAS. There was a significant difference between the two groups (df = 1,243, F = 53.55, p = .01); inspection of the data revealed that GTAs who had completed the training program received higher student ratings than those who had not.

Table 5
Averaged Means and Standard Deviations for Experimental and Control Groups on the ACCOMPLISHMENT Component of the TPAS

Group                 Number    Mean    Standard Deviation

Control Group            139    4.24    1.17
Experimental Group       106    5.37    1.21

Overall Analyses

Table 6 shows the comparison of the five TPAS factors (Learning Aims, Content, Method, Evaluation, and Accomplishment) for the comparison group. No significant differences (df = 4,694, F = 1.43, p = .22) were found across the five components, leading to the conclusion that the comparison group was reasonably homogeneous with respect to the five TPAS factors.

Table 6
Means and Standard Deviations for the Control Group on the Aims, Content, Method, Evaluation, and Accomplishment Factors of the TPAS

Factor            Number    Mean    Standard Deviation

Aims                 140    4.22     .79
Content              140    4.41     .93
Method               140    4.17     .88
Evaluation           140    4.33     .94
Accomplishment       139    4.24    1.17

Likewise, Table 7 shows the comparison of performance on the five TPAS factors for the treatment group. No significant differences (df = 4,694, F = .75, p = .56) were found across the five components, and it was concluded that the treatment group was homogeneous with respect to the five TPAS factors.

Table 7
Averaged Means and Standard Deviations for the Experimental Group on the Aims, Content, Method, Evaluation, and Accomplishment Factors of the TPAS

Factor            Number    Mean    Standard Deviation

Aims                 106    5.26    1.00
Content              106    5.42     .97
Method               106    5.21    1.03
Evaluation           106    5.40    1.01
Accomplishment       106    5.37    1.21
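The homogeneity checks reported in Tables 6 and 7 can likewise be approximated from the published summary statistics alone. The sketch below is ours, not the authors' procedure: the function name is hypothetical, and the inputs are the rounded values from Table 7, so the result only approximates the reported F = .75.

```python
# Approximate the one-way ANOVA across the five TPAS factors for the
# treatment group using the (n, mean, SD) summaries in Table 7.

def f_oneway_from_summary(groups):
    """One-way ANOVA F from a list of (n, mean, sd) summary tuples."""
    k = len(groups)
    n_total = sum(n for n, _, _ in groups)
    grand_mean = sum(n * m for n, m, _ in groups) / n_total
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
    ss_within = sum((n - 1) * sd**2 for n, _, sd in groups)
    ms_between = ss_between / (k - 1)          # between-groups mean square
    ms_within = ss_within / (n_total - k)      # within-groups mean square
    return ms_between / ms_within

# Aims, Content, Method, Evaluation, Accomplishment (Table 7).
factors = [(106, 5.26, 1.00), (106, 5.42, 0.97), (106, 5.21, 1.03),
           (106, 5.40, 1.01), (106, 5.37, 1.21)]
f = f_oneway_from_summary(factors)
print(round(f, 2))  # a small F: the five factor means sit close together
```

An F this close to 1 is what homogeneity across the five factors predicts, since the between-factor variation is no larger than the within-factor variation.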

Findings from Exit Interviews

A longer-term perspective on the teaching effectiveness of GTAs was obtained by interviewing the GTAs after they had completed their teaching assignments. The interviews were conducted by a faculty member from the Department of Industrial Technology who had no connection with the training program. Of the seven industrial technology GTAs who had received training, three were later available for interviews; of those three, one had taught previously and was taking the training while teaching a second assigned course. The interviews focused on the extent to which the GTAs perceived that the clinical-based instructional approach (a) achieved the above objectives, (b) improved or enhanced their teaching effectiveness, (c) expanded their self-confidence as teaching assistants, and (d) broadened the comfort level associated with the performance of their teaching duties.

The GTAs reported that the instruction during the entire training program was, in general, of high quality and gave them information, skills, tips, and techniques they could use in their instructional duties. All of their comments were favorable and positive in nature. They indicated that the instruction had been well organized and administered and had proved to be highly relevant to the teaching needs of GTAs responsible for undergraduate classes in basic technology areas. The pre-conference and post-conference experiences in the training program were seen as valuable methods of assessing their performance in developing and delivering lesson plans as well as achieving success in the classroom. The GTAs indicated that videotaping and reviewing lessons had provided valuable learning experience and that this had enhanced their classroom presentation skills.

Among the most powerful comments made by the GTAs were statements that the training had equipped them with specific skills necessary to improve their student evaluations and that their evaluations had actually improved when they implemented the skills gained from the training. The quantitative analysis supports this conclusion: the student ratings were in fact higher for GTAs who had received the training. These results suggest that the training improved the performance of more than half of the GTAs who taught undergraduate courses.

Perhaps the most compelling interview evidence came from comments made by GTAs while they were teaching their assigned classes. As one GTA, who was teaching an assigned course while simultaneously participating in the training program, remarked, "I am currently enrolled in the course (the training program) and during the first three meetings I have improved my teaching abilities to an incredible extent. This class should be required of all GTAs so as to improve the quality of their class work presentations."

After participating in the training program and subsequently teaching an assigned course, another GTA noted, "The training [program] needs to be recognized as an essential requirement for any GTA so the quality and practicality of the teaching done by the GTA is of exceptional levels for the satisfaction of the students and [the] GTA." Another GTA commented that "[the training] was a very helpful course. The instructor was helpful and concerned in making me a better teacher. He was available for outside consultation if needed, and overall, it was a very worthwhile course."

Based on these results, the Department of Industrial Technology has extended its study of the impact of the training program. In the interim, the Department has decided to require all GTAs who do not have an education background or prior teaching experience to complete the clinical-based training program. The training is also highly recommended for all other GTAs, even those with teaching experience, and especially for those assigned to teach self-contained classes or classes with shared responsibilities and instructors.

Discussion and Recommendations

Based on this study's findings, the curricular components integrated into the overall training program enhanced the teaching performance of novice graduate teaching assistants. The training program was apparently successful in enhancing the teaching performance of GTAs, based on the undergraduate student ratings, and the GTAs' self-reported impressions of the value of the treatment confirmed this conclusion. Novice GTAs can learn instructional strategies and integrate them successfully into their own styles, both as learners and as teachers.

Caution should be exercised when attempting to generalize the results of this study. The undergraduate students were asked to rate the teaching of the graduate teaching assistants at the same point in each semester in order to gain an understanding of how the students enrolled in the various courses perceived the overall success and effectiveness of the classroom instruction. By doing so, we were checking for program congruence involving the curricular, instructional, and learning components of the undergraduate courses being offered. Additional evaluations of lessons throughout each course might have strengthened our findings; however, constraints within the contextual setting prevented this.

In spite of these limitations, the results of this research suggest that this type of training may in fact improve teaching effectiveness. These findings extend the work of Angelo and Cross (1993) and Brookfield (1995), who advocated establishing intensive, high-quality instructional programs in higher education. This study provides statistical evidence supporting the effectiveness of a training program based on conceptually grounded knowledge about teaching and learning. Thus, GTAs completing a conceptually based instructional program, with an applied clinical experience component in the curriculum, significantly honed their teaching craft compared to their counterparts.

Additionally, this study suggests that the way in which GTAs are trained to be effective teachers extends beyond learning about specific individual teaching skills. Simply gaining teaching experience, in the absence of having had the benefit of completing a training program that includes a supervised teaching experience, does not necessarily make for better teaching. Rather, training programs should be constructed to reflect conceptually-grounded, research-based practices, which is a position advocated by Gage (1989). He suggested that research paradigms should be used to test the differences between teaching practices that are designed to support lower-level, short-term processes for instruction/learning and teaching practices that are designed to support conceptual, theoretical, higher-level, long-term processes for instruction/learning. This research also suggests that graduate teaching assistants could benefit from completing a teacher-training program designed to facilitate their development as reflective professionals capable of making informed instructional decisions in their classrooms.

While the results of this study are encouraging, some additional limitations must be noted. The sample size was relatively small, and the GTAs were from a single institution. Although the GTAs were representative of the population of master's-level industrial technology GTAs at Illinois State University, they may not be representative of industrial technology GTAs at other institutions. A suggestion for future research would be to replicate this study using industrial technology graduate teaching assistants at a Research I doctoral-granting institution. In spite of the study's limitations, we believe that the findings add to an emerging line of systematic inquiry about the effective teaching of industrial technology subjects by GTAs in the classroom. The results indicate that a teacher-training program can provide a forum in which GTAs learn to become effective instructors and that the type of training completed can make a substantial difference in the ways their students view their teaching effectiveness. Using research-based techniques (within an extended, structured graduate course) to train GTAs, as compared to other training programs (e.g., teaching-tips paradigms), can make a difference.

Recommendations

  • Graduate teaching assistants who have not majored in education or had significant instructional experience should pursue a planned program of instruction similar to the one used at Illinois State University in order to help them do the best possible job of instruction.
  • Special training in instruction should be made mandatory for new GTAs who have not had the appropriate educational background.
  • Whenever possible, training should be provided prior to GTAs assuming instructional duties.
  • When prior training is not feasible, training should occur concurrently with the GTAs’ first instructional assignment.
  • Universities and colleges should consider a centralized instructional program to meet the needs of GTAs for all departments in order to provide their best possible instructional performance.
  • Additional research studies should be conducted under various conditions to identify and examine the elements that improve the teaching performance of graduate teaching assistants.

References

Aleamoni, L. M. (1978). The usefulness of student evaluations in improving college teaching. Instructional Science, 7(1), 95-105.

Angelo, T., & Cross, K. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.

Berdahl, R. (1995). Preparing teacher scholars: Is a split personality inevitable? In T. A. Heenan, & K. F. Jerich (Eds.), The teaching assistantship: Engaging the disciplines (pp. 1-2). Champaign, IL: The University of Illinois Conferences and Institute Press.

Border, L. L. B. (1993). The graduate teacher certification program: Description and assessment after two years. In K. Lewis (Ed.), The TA experience: Preparing for multiple roles. Stillwater, OK: New Forums Press.

Borg, W., & Gall, M. (1989). Educational research: An introduction. New York: Longman.

Boyer, E. (1987). College: The undergraduate experience in America. New York: Harper & Row.

Brookfield, S. (1995). Becoming a critically reflective teacher. San Francisco, CA: Jossey-Bass.

Brown, G. (1987). Lectures and lecturing. In M. Dunkin (Ed.), The international encyclopedia of teaching and teacher education (pp. 284-287). Oxford: Pergamon Press.

Bruce, A. M. (1997). Encouraging TA development at a research institution: A case study of TA mentoring in the University of Georgia’s English department. The Journal of Graduate Teaching Assistant Development, 4(1), 5-14.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.

Connell, W. (1987). History of teaching methods. In M. Dunkin (Ed.), The international encyclopedia of teaching and teacher education (pp. 201-213). Oxford: Pergamon Press.

Cook, T., & Campbell, D. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

Cranton, P. (1987). Clinical teaching. In M. Dunkin (Ed.), The international encyclopedia of teaching and teacher education (pp. 304-305). Oxford: Pergamon Press.

d’Apollonia, S., & Abrami, P. C. (1997). Navigating student ratings of instruction. American Psychologist, 52(11), 1198-1208.

Doyle, W. (1987). Paradigms for research. In M. Dunkin (Ed.), The international encyclopedia of teaching and teacher education (pp. 113-118). Oxford: Pergamon Press.

Dunkin, M. (1987). Technical skills of teaching. In M. Dunkin (Ed.), The international encyclopedia of teaching and teacher education (pp. 703-705). Oxford: Pergamon Press.

Feldman, K. A. (1979). The significance of circumstances for college students’ ratings of their teachers and courses. Research in Higher Education, 10(2), 149-172.

Feldman, K. A. (1983). The seniority and instructional experience of college teachers as related to the evaluation they receive from their students. Research in Higher Education, 18(1), 3-124.

Gage, N. L. (1989). The paradigm wars and their aftermath: A "historical" sketch of research on teaching since 1989. Educational Researcher, 18(7), 4-10.

Gage, N. L. (1976). A factorially designed experiment on teacher structuring, soliciting, and reacting. Journal of Teacher Education, 27(1), 35-38.

Glass, G. V., & Hopkins, K. D. (1984). Statistical methods in education and psychology (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Greenwald, A. G. (1997). Validity concerns and usefulness of student ratings of instruction. American Psychologist, 52(11), 1182-1186.

Greenwald, A. G., & Gilmore, G. M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52, 1209-1217.

Hackney, S. (1993, July 21). Senate confirmation hearing: National Endowment for the Humanities. Chronicle of Higher Education, B4.

Halpern, D. F. (1994). Rethinking college instruction for a changing world. In Halpern, D. (Ed.), Changing college classrooms (pp. 1-10). San Francisco: Jossey-Bass.

Hansen, C. B. (1994). Questioning techniques for the active classroom. In Halpern, D. (Ed.), Changing college classrooms (pp. 1-10). San Francisco: Jossey-Bass.

Jerich, K. (1996, April 7). Learning at the head of the class. In Debbie Goldberg (Ed.), When students teach students: Focus on the university. The Washington Post Education Review, 12, 28.

Jerich, K., Leinicke, L., & Pitstick, T. (1995). A comparative study of the instructional development of two groups of graduate teaching assistants in accounting. In T. A. Heenan, & K. F. Jerich (Eds.), The teaching assistantship: Engaging the disciplines (pp. 175-178). Champaign, IL: The University of Illinois at Urbana-Champaign, Office of Conferences and Institutes.

Jerich, K., & Brauchle, P. (1993). Enhancing teaching effectiveness of industrial technology graduate assistants. The Journal of Technology Studies, 19(2), 20-28.

Jerich, K. (1993). The knowledge base for educating graduate assistants to be effective college teachers at Illinois State University. In K. G. Lewis (Ed.), The teaching assistantship: A preparation for multiple roles (pp. 122-133). Stillwater, OK: The New Forums Press.

Jerich, K. (1993). The relationship of clinical supervision and the training of graduate teaching assistants. In K. G. Lewis (Ed.), The teaching assistantship: A preparation for multiple roles (pp. 326-338). Stillwater, OK: The New Forums Press.

Jerich, K., & Johnson, W. (1989). An evaluation report of the components of a teacher education program (Unpublished technical report). Champaign: University of Illinois at Urbana-Champaign, Department of Curriculum and Instruction.

Johnson, W. D. (1973). An analysis of the Teacher Performance Appraisal Scale (Unpublished manuscript). Urbana-Champaign: University of Illinois, Teaching Techniques Laboratory.

Johnson, W. D. (1968). An evaluative report of the laboratory portion of the professional semester, Fall 1967 (Unpublished manuscript). Urbana-Champaign: University of Illinois, Teaching Techniques Laboratory.

Joyce, B., & Weil, M. (1996). Models of teaching (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.

Kerlinger, F. (1973). Foundations of behavioral research (2nd ed.). New York: Holt, Rinehart & Winston.

Krathwohl, D. R. (1993). Methods of educational social science research: An integrated approach. New York: Longman.

Lambert, L., & Tice, S. (1996). University teaching: A guide for graduate students. Syracuse, NY: Syracuse University Press.

Lambert, L. M., & Tice, S. L. (1993). Preparing graduate students to teach: A guide to programs that improve undergraduate education and develop tomorrow’s faculty. Washington, DC: American Association for Higher Education.

Levinson-Rose, J., & Menges, R. J. (1981). Improving college teaching: A critical review of research. Review of Educational Research, 51(3), 403-434.

Marsh, H. & Roche, L. (1997). Making students’ evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist, 52(11), 1187-1197.

Marsh, H. W. (1986). Applicability paradigm: Students’ evaluation of teaching effectiveness in different countries. Journal of Educational Psychology, 78(6), 465-473.

McKeachie, W. J. (1997). Student ratings: The validity of use. American Psychologist, 52(11), 1218-1225.

McKeachie, W. (1963). Analysis and investigation of teaching methods. In N.L. Gage (Ed.), Handbook of research on teaching (pp. 448-505). Chicago: Rand McNally.

Newell-Decyk, B. (1994). Using examples to teach concepts. In D. Halpern (Ed.), Changing college classrooms (pp.39-63). San Francisco, CA: Jossey-Bass.

Peterson, P., Marx, R., & Clark, C. (1978). Teacher planning, teaching behavior and student achievement. American Educational Research Journal, 15(3), 417-432.

Ronkowski, S., Conway, C., & Wilde, P. (1995). Longevity issues for developmental TA training programs. In T. A. Heenan, & K. F. Jerich (Eds.), The teaching assistantship: Engaging the disciplines. Champaign, IL: The University of Illinois Conferences and Institute Press.

Raths, J. D. (1987). An alternative view of the evaluation of teacher education programs. Advances in Teacher Education, 3. Norwood, NJ: Ablex Publishing.

Rubin, L. (1989). The thinking teacher: Cultivating pedagogical intelligence. Journal of Teacher Education, 40(6), 31-34.

Schumacher, S., & McMillan, J. H. (1989). Research in education: A conceptual introduction (2nd ed.). Glenview, IL: Scott, Foresman & Co.

Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1-22.

Svinicki, M. D. (1995). The seven deadly comments that get in the way of learning about teaching. In Heenan, T. A., & Jerich, K. F. (Eds.), The teaching assistantship: Engaging the disciplines. Champaign, IL: The University of Illinois Conferences and Institute Press.

Tiberius, R. G., Sackin, H. D., Slingerland, J., Jubas, M. K., Bell, M., & Matlow, A. (1989). The influence of student evaluative feedback on the improvement of clinical teaching. Journal of Higher Education, 60(6), 665-681.

Wilson, T. C. (1988). Student evaluation of teaching forms: A critical perspective. The Review of Higher Education, 12(1), 79-95.

