Volume 20, Number 3, 1993

Computer-based Instruction versus Instructor-based Instruction of Interpretive Clinical Pathology Case Analysis

H. Tvedten, G. Walter, J. Stickle, K. Henkel and C. Anderson
From the College of Veterinary Medicine, Michigan State University, East Lansing, MI 48824-1316.


Introduction

Veterinary clinical pathology has been taught at Michigan State University mainly through two courses. One was a 2-week clerkship for senior veterinary students that included about 70 hours of didactic instruction and involved no patient care. This clerkship was repeated 9 to 24 times each year to groups of 6-12 senior veterinary students, for a time commitment of 6-7 hours a day for 18 to 48 weeks a year. About half of each clerkship consisted of laboratories emphasizing microscopic diagnostic skills, i.e., hematology, cytology and urine sediments. The other half consisted of recitation sessions emphasizing interpretation of laboratory data. Interpretation had been taught by having students analyze laboratory data from previous cases in small group discussions. Teaching by analysis of data from real cases has been a popular method of instruction for both instructors and students because it simulates the situations students will handle as veterinarians. Repeating the same lessons to small groups, however, has the disadvantage of a large faculty time commitment.

Computer-aided instruction (CAI) appeared a logical method of teaching much of the material, which was often repeated using the same or similar cases. Case simulations are a common application of CAI and were the format in which one-half of the clerkships in this study were taught. A great deal of faculty time during these senior clerkships could be saved if a significant amount of the analysis of laboratory data could be taught by computer. Computer-aided instruction could also add flexibility by allowing students to study at their own speed and to repeat difficult topics. Interactive case simulations (CAI) were written in sets of 10 cases to allow students to practice interpreting laboratory data. The first lessons were well received by students, so more were written and computer-aided instruction became an increasing portion of the clerkships. A study was devised to determine whether replacing a large portion of the teaching in this course with CAI was statistically or subjectively different from teaching only by instructors.

Materials and Methods

The 2-week clerkships in clinical pathology provided a unique, repetitive structure for comparing different teaching techniques. The design of the 4 credit hour (quarter term) clerkship had been refined into a formal and uniform course format. Each clerkship taught essentially the same sequence of topics for a complete review of clinical pathology during 70 hours of scheduled time. There was no patient care, so the topics taught were not determined by what clinical cases became available daily, as was typical for most other senior clinical clerkships. Typically, anemia and RBC alterations were taught on the first day with prepared cases and glass slides. The second day covered mainly leukocyte alterations. Other topics were tests of liver, pancreas, kidney, acid base, electrolytes, cytology, etc. A similar amount of time was scheduled for each topic in each clerkship. Students in each clerkship were graded on their performance on 3 written essay examinations and on subjective grades from faculty leading discussions or laboratory sessions. A moderator was assigned to each clerkship and did the largest share of the teaching, wrote the exams and determined grades. Other faculty members taught variable portions of each clerkship course and submitted subjective grades on students.

Of the 9 clerkships taught that year, 8 were used in the study. Four clerkships were taught only by an instructor leading case discussions; in the other four, over half of the case analysis sessions were taught by CAI. The 8 clerkships were divided into 4 pairs, with one of four instructors teaching each pair. Each pair included one computer-assisted clerkship and one clerkship without scheduled computer time. All students agreed to participate in the study.

Sixteen lessons were written for use on IBM-compatible personal computers. The first 9-12 lessons were designed specifically to cover the topics of the clerkships, such as anemia, leukocyte testing, renal tests, acid base, etc. Lessons written later went into more depth on specific topics, like Heinz body anemias, and were designed for clinical pathology resident seminars or a graduate course in hematology. Each lesson had about 10 cases to match the variety of cases covered under each topic discussed in the clerkships. Each case had 4 questions for the students to answer correctly before moving to the next case. Positive feedback was given for correct answers, and clues or other explanations were given when incorrect answers were chosen. The lessons were written by a faculty member who had taught the clerkship more than a hundred times over 15 years, so the questions and the correct and incorrect answer choices were those which had typically occurred in past discussions of the cases. The lessons emphasized laboratory test data and what types of laboratory conclusions should be drawn from various test results. Lessons usually had one theme, such as hepatic testing, and took about 1 hour to complete. The 16 lessons provided up to 16 hours of autotutorial instruction and over 140 cases for students to study. Almost all 16 lessons were used in each of the 4 clerkships using computer-aided instruction. The topics were anemia, leukocytes, Heinz body anemia, blood parasites, nonregenerative anemia, iron profile, hemostasis, hepatic, pancreatic, acid-base, electrolyte, renal, calcium, fluid cytology and review lessons 1 and 2. The students were encouraged to work in groups of 2-3 to stimulate discussion.
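Purely as an illustration of the interaction pattern just described, and not of the actual software, the following minimal Python sketch shows one hypothetical case with the answer-until-correct loop and feedback. The real lessons were text-only programs built with a commercial authoring system (4); all names and case data below are invented.

```python
# Sketch of the described lesson structure: each case presents laboratory
# data and multiple-choice questions that must be answered correctly before
# the student moves on, with positive feedback for correct answers and a
# clue for incorrect ones. All data here are hypothetical.

CASES = [
    {
        "data": "PCV 12% (reference 37-55%), reticulocytes 0.1%",
        "questions": [
            {
                "prompt": "How would you classify this anemia?",
                "choices": {"a": "Regenerative", "b": "Nonregenerative"},
                "answer": "b",
                "clue": "A low reticulocyte count indicates an inadequate marrow response.",
            },
            # ...the real lessons had 4 questions per case and ~10 cases per lesson
        ],
    },
]

def run_lesson(cases):
    for case in cases:
        print("Laboratory data:", case["data"])
        for q in case["questions"]:
            while True:  # the student must answer correctly before moving on
                print(q["prompt"])
                for key, text in q["choices"].items():
                    print(f"  {key}) {text}")
                if input("> ").strip().lower() == q["answer"]:
                    print("Correct!")           # positive feedback
                    break
                print("Not quite.", q["clue"])  # clue for an incorrect answer

if __name__ == "__main__":
    run_lesson(CASES)
```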

The skills to be taught by an instructor or computer lesson were to identify the most significant abnormal laboratory test results in a case and to interpret what those results meant in that particular situation and animal. The criterion considered to best test these skills was an essay-type question in which the students analyzed laboratory data from at least 1 case at the end of the clerkships. Each instructor prepared a final examination for their pair of clerkships. The final exams had a common portion, which was the same for the approximately 20-24 students in the 2 clerkships of each instructor. The common portion was graded by the instructor, after both clerkships were finished, without knowledge of the student's name or type of clerkship. The same test was not given in all rotations because students tend to save and exchange test questions as learning aids.

Other criteria of performance included the following. The total objective score for each student was the sum of the final examination and the 2 earlier examinations each student had taken in the clerkship. Over half of these essay examinations were based on analyzing laboratory data in cases. The total subjective score was the average score given by the instructors in each student's clerkship for the student's performance in discussions and laboratories. Each student's grade point average in veterinary school before the clinical year and the grade earned in a clinical pathology course during the second year of veterinary school were used to assess the association of previous academic performance with performance in the senior clerkships, separate from methods of teaching.

All students were given the same pretest at the start of each clerkship to judge their knowledge of clinical pathology before the clerkship. Half of the students (those from the first 2 of the 4 pairs of clerkships) repeated the exam used for the pretest at the end of the clerkship as a posttest to judge their increase in understanding during the 2 weeks.

Statistical analysis of numerical criteria was by analysis of covariance. The students' grade point averages and their grades in the first clinical pathology course two years earlier were used to factor out the better test performance expected of students who had previously performed better (1). These 2 variables and the student's pretest score were used as covariables.
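The model itself is straightforward to express with present-day tooling. The following is a minimal sketch, assuming modern Python (pandas/statsmodels) rather than the statistical software actually available in 1993; the column names are invented and the data are synthetic placeholders, not the study's data.

```python
# ANCOVA sketch: teaching method as the factor of interest, with previous
# GPA, sophomore clinical pathology grade and pretest score as covariables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic placeholder data (roughly the 45 + 41 students in the study).
rng = np.random.default_rng(0)
n = 86
df = pd.DataFrame({
    "method": rng.choice(["CAI", "instructor"], size=n),
    "gpa": rng.normal(3.2, 0.4, n),          # previous grade point average
    "prior_grade": rng.normal(3.0, 0.5, n),  # sophomore clin. path. grade
    "pretest": rng.integers(3, 10, n),       # pretest score out of 10
    "final_score": rng.normal(80, 8, n),     # total objective score
})

# Fit a linear model and test the method effect adjusted for the covariables.
model = smf.ols("final_score ~ C(method) + gpa + prior_grade + pretest",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II analysis-of-covariance table
```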

Questionnaires were given to all students for their subjective opinions. There were 5 questions as follows:

  1. How much did case analysis help your ability to interpret laboratory data (little, moderately, greatly)?
  2. How much of your rotation's case analysis was taught by the following methods? (enter a percentage number) instructor, computer, Parker-Bros. games, self-taught, other.
  3. How much of your rotation's case analysis should have been taught by the following methods? (enter a percentage number) instructor, computer, Parker-Bros. games, self-taught, other.
  4. How did you tolerate the pretest, posttest and questionnaire?
  5. What do you believe is the best way to learn interpretation of laboratory data? Why?

A different questionnaire had been used the previous year, when 9 computer lessons designed specifically for the clerkship were available and used. In that previous year, students had almost half as much time scheduled with computer lessons as in the year of this study. One item common to both questionnaires asked students to state what percentage of case analysis should, in their opinion, be taught by computer lessons. This was intended to obtain a numerical measure of the degree of their preference for the computer lessons, so that the general attitude of the students toward computer-based learning could be approximately compared between the two years. Subjective observations by instructors were made daily during the 8 clerkships and recorded in a log. Typical observations were "Students worked consistently and appeared to discuss the material well" or "Students were quiet."

Results

No statistical difference in performance could be detected between computer- and noncomputer-taught students as measured by criteria such as scores on the common portion of the final examinations, total objective score for the clerkship, total subjective score for the clerkship or posttest scores.

Statistical differences were noted among some factors that were unrelated to whether or not computer lessons were used. The students' previous grade point average (p<.01), pretest score (p<.05) and score on the common portion of the final examination (p<.01) significantly affected the student's total objective score for the clerkship. The pretest score was negatively related to the final objective score. The total subjective score was significantly associated with the student's previous grade point average (p<.01). The score on the common portion of the final examination was statistically associated with which instructor taught the clerkship (p<.01). No statistically significant relationships were found among the other factors evaluated.

The pretest and posttest scores showed great variability among the different groups of 10-12 students in the rotations. In the first pair of clerkships, for example, the first rotation averaged 6.5 correct out of the 10 multiple-choice questions on the pretest and improved to 8.17 correct on the same exam at the end of the rotation. The second group answered only 4.5 questions correctly on the pretest and improved to 7.8 on the posttest. The second group used the computer lessons. If one considers only performance at the end of a clerkship, one does not detect the degree of improvement during the clerkship, as the gains below make concrete.
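Computed from the means just given, the final scores of the two groups differ by only 0.37 points, while the gains over the 2 weeks differ by nearly a factor of two:

```latex
\text{gain}_{\text{group 1}} = 8.17 - 6.5 = 1.67 \qquad
\text{gain}_{\text{group 2, CAI}} = 7.8 - 4.5 = 3.3
```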

Subjective observations by instructors were too irregular to evaluate quantitatively but suggested some differences. Most of the small groups of 2-4 students appeared more open and vocal in discussing the cases in a small computer group than in recitation sessions. Conversation seemed more relaxed with 1-3 peers who had a similar knowledge base. Often a student would show an outward demonstration of satisfaction, disappointment or frustration in response to the computer's response to an answer, whereas students tended to be quieter and more formal with an instructor. How quiet and formal students appeared also varied with which instructor led the discussion.

During the 8 clerkships in this study there were usually 2 computer lessons per day, and the students were not as enthusiastic about the lessons as in the previous year, when only 1 lesson per day was used. (In the previous year, the 66 students answering a questionnaire stated that 88.9 ± 19% of the analysis of cases should be taught by computer lessons. Thirty-three of these students felt they learned more by computer lessons, while 27 believed they learned less.) In the current study, the 45 students who were taught by computer believed that 49% of their case analysis was taught by computer and thought that 39% of the case analysis should be taught by computer. Because they recommended less computer-aided instruction than they thought they had received, it is apparent they were less receptive to CAI than students from the previous year.

It appeared to some faculty, based on observing students' enthusiasm and amount of discussion at computer stations, that the students became fatigued with the computer lessons after about 1 to 1 1/2 hours without a break. Certain more advanced computer lessons, such as one on the serum iron profile, appeared less well liked than the 9-12 lessons designed to reproduce the case discussions of the typical clerkship. The more advanced lessons were designed for post-DVM trainees and often asked very specific questions that senior veterinary students could not answer.

The 41 students in the 4 clerkships without computer lessons believed that 0.8% of their case analysis was taught by computer and that 20% of case analysis should be taught by computer. All students had used some of the computerized lessons 2 years earlier in their sophomore course, and apparently wanted at least a portion of the interpretive skills to be taught or retaught by computer.

The free-text responses to the questionnaire question "What do you believe is the best way to learn interpretation of laboratory data? Why?" are summarized as follows. Of those taught in the 4 clerkships using computer-aided instruction, 23 liked working with real-life cases on computers, 12 liked oral case discussion and interpretation of laboratory results, 2 liked case presentations in lecture because the computer lessons did not adequately explain everything, and 3 liked one-half class discussion and one-half case discussion without computer lessons.

Discussion

This study did not demonstrate any statistical difference in student performance between the two methods of teaching interpretation of clinical pathology data in eight senior clerkship courses. The objective and subjective evaluations by the faculty of the students' performance were not statistically different between groups of students taught by the two methods. This finding agrees with the previous consensus of educators that students taught with CAI perform no worse than students taught by traditional methods (2).

There have been criticisms of recent studies comparing CAI to other methods of instruction (3). One criticism is that CAI users devote more time to the common learning goal than their peers, so that any improved performance detected is due to more effort rather than to something inherently better in using computers to teach. In our study both types of rotations had the same total hours of scheduled instruction (70 hours), so this criticism does not seem to apply to the students in this study. The students using a computer lesson would typically evaluate 10 cases in an hour, while in a discussion with an instructor, 1-4 more complex cases were discussed per hour. Though students using CAI evaluated more cases per hour, this did not mean they learned more.

Another criticism of previous studies on CAI is that they did not compare equal teaching methods if the resources did not have the same pedagogical strategies or knowledge content (3). Most of the materials and goals of the 2 methods in our study were very similar, since the topics, cases and questions used in 9-12 of the computer lessons were the same as one instructor had used in oral discussions in the clerkships for 15 years. Both groups received similar information and concepts. In retrospect, we believe a few of the more advanced CAI lessons used in the year of this study did not fit well with the basic objectives of the clerkship. The 4-5 computer lessons with more advanced, complex or detailed topics had been designed for residents and graduate students. Student enthusiasm for computer lessons the previous year had stimulated inclusion of all available computer lessons in the year of this study, which was apparently a pedagogical mistake. Inability to answer highly specific questions frustrated some students. CAI programs should be perceived by users as challenging but not frustrating in order to be well received and frequently used (2).

Another criticism of CAI research concerns cost (3). Start-up costs were not a negative consideration in this study because Michigan State University's College of Veterinary Medicine had already invested heavily in developing a laboratory and lecture hall well equipped with computers and had a computer support staff. The time input to write the 16 lessons averaged about 40 hours per lesson, spread over 2 1/2 years. The lessons were written in a faculty member's office as time was available. If the lessons are used 12 hours per rotation, then during a currently typical year with 9 rotations there should be 108 hours of CAI per year against 640 hours of initial input in development, as the arithmetic below shows. The lessons have been used for other courses and continuing education, and also at other universities, so the time input in development and revisions was paid off by the time used in instruction. The 16 lessons contained only text and no visual images, so they required minimal development time after an authoring system (4) was learned and a model lesson designed and repeated. CAI productions with motion, personally developed images and/or sound may take weeks to months to prepare for just 1 hour of instruction, and may not be cost effective.
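Stated explicitly, and counting clerkship use alone (the uses in other courses and at other universities mentioned above shorten the effective payback):

```latex
16 \text{ lessons} \times 40~\text{h/lesson} = 640~\text{h of development} \\
9 \text{ rotations/yr} \times 12~\text{h/rotation} = 108~\text{h of CAI per year} \\
640~\text{h} \div 108~\text{h/yr} \approx 5.9~\text{yr to recoup development from clerkship use alone}
```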

Preparation time during a sequence of 18 weeks of consecutive days of teaching 7 hours a day must be minimized. Preparation for CAI usually consists only of checking that the computers are working, rather than an instructor selecting, organizing and reviewing cases prior to oral presentation.

A final criticism is that the CAI literature has failed to identify learners or learning tasks for which CAI may be most appropriate (3). Many authors have seemed more interested in computer technology than in educational principles. The method is not as important as whether the lesson is of value and well designed. A major skill for veterinary students to develop is to identify problems with a patient and define each problem well enough that a solution can be initiated. Any skill is improved by practice, and computer simulations can help develop this skill. Case simulations must be developed by someone experienced in teaching that skill. Well-written case simulations (CAI) are, in our opinion, an appropriate way to teach this skill.

Some perceived advantages of using computer-aided instruction in the current study were suggested by student responses to questionnaires and by subjective faculty observations. Advantages included strong student receptiveness to computer lessons, more active student discussion of the analysis of data, and the ability of small groups of students to work at their own pace, on their own time schedule and even individually in a separate room. A few students preferred working on computer lessons by themselves in the library. Different students study best in different ways. Students usually seemed to enjoy the lively discussions with classmates. The more active student discussion with 1-2 fellow students at a computer station was attributed to the absence of the fear of saying "something stupid" in front of 11 classmates and an instructor formulating subjective grades. Student verbal participation obviously varied with each group of students and with different instructors. Increased verbal participation was considered a positive goal since it made the students more involved in their learning (2).

Other observations were made which did not relate to computer-aided instruction. As expected, previous grade point average was a strong positive predictor of performance in the senior clerkship in clinical pathology (1). The strongest effect on the grades on the common portion of the final exam was related to the instructor for the clerkship. It was not unexpected that faculty members would vary in how they graded essay answers or in the level of performance expected of students. A somewhat surprising finding was that performance on the pretest was slightly negatively related to grades in the clerkship. Perhaps some students who did well initially did not study as hard as those who perceived they did poorly on the pretest. The results of the pretest and posttest illustrated great variation among the groups. It is not surprising that, with such inherent variation among senior veterinary students, we could not detect a statistical difference between two good methods of teaching similar material and skills.

Summary

In summary, extensive supplementation of instructor-led instruction with computer-aided instruction in the interpretation of clinical pathology data was shown to be as effective in teaching this skill as teaching only by instructors. In answer to the original question stimulating this study, there was no statistical evidence that one method was better than the other, or that computer-aided instruction should not be used in the clinical pathology clerkships. The computer-aided instruction provided a pleasant variation in teaching techniques, saved faculty time during the busy clerkships and often appeared to stimulate active student discussion of clinical pathology topics. It was decided to continue using the computer lessons which met course objectives in future clerkships and in other clinical pathology courses where they fit the instructor's teaching style.

References and Endnotes

1. Confer AW: Preadmission GRE and GPA as predictors of academic performance in a college of veterinary medicine. J Vet Med Educ 17:56-62, 1990.

2. Xakellis GC and Gjerde C: Evaluation by second-year medical students of their computer-aided instruction. Acad Med 65:23-26, 1990.

3. Keane DR, Norman GR and Vickers J: The inadequacy of recent research on computer-assisted instruction. Acad Med 66:444-448, 1991.

4. Vagus Authoring System, O. F. Roesel, Purdue University, West Lafayette, IN 47907.