Grad program review considered
By Susan Trulove
Spectrum Volume 18 Issue 26 - April 4, 1996
The Commission on Graduate Studies and Policies (CGSP) has taken up the issue of graduate program review.
John Eaton, associate provost for graduate studies, said the aim is not to burden departments with an additional review process, but to explore conducting graduate program reviews in parallel with ongoing department reviews. He described what is presently being done at Virginia Tech, what is done at Iowa and Duke, and the impetus for change.
Three colleges at Virginia Tech (Agriculture, Forestry, and Human Resources) participate in Cooperative State Research, Education and Extension Service (CSREES) reviews at five-year intervals. The colleges do a self-study, then an external team does a review. The aim is to determine if CSREES resources are wisely used. The final report goes to the dean.
In addition, Forestry has on-site reviews by two professional groups every 10 years. These reviews include accreditation of undergraduate programs.
The College of Education has done program-improvement and resource-allocation reviews using an external team. Emphasis has been on teaching and faculty-student relations. The final report is made to the dean.
The College of Engineering has several undergraduate accreditation reviews. It also has a graduate program review using external teams. The reviewers look at program goals, size, support, leadership, faculty, students, courses, research, facilities, and standards. The final report goes to the dean.
Business also has accreditation reviews by professional groups.
In Arts and Sciences, internal committees review department profiles, interview the department heads, staff, and students, and prepare evaluations of the heads and of the departments. The committees select two-person external teams that interview the faculty and staff, students, and external administrators. The external teams prepare their reports for the dean.
Eaton made the following points regarding Virginia Tech's review systems:
* In general, graduate program quality does not receive major emphasis in the present system.
* Benchmarking, within or outside the university, plays a minimal role or no role in the present review process.
* Generally, there is no required involvement of the provost or graduate dean in the present process.
"The importance of quality in graduate programs is of university-wide importance," he said. "As we move toward selective and differential resource allocation it is important that we have better criteria for evaluating graduate program quality."
He provided case studies from Iowa and Duke as examples of two systems of graduate program review.
Les Sims, graduate dean at Iowa, spoke to the Council of Graduate Studies (CGS) last summer and at Virginia Tech's Research and Graduate Studies' retreat in October.
The review process at Iowa is new and was undertaken to support restructuring decisions: "to determine what departments to downsize and which departments to strengthen," Eaton explained.
Criteria were program quality and "centrality" or importance to the mission of the university. Secondary criteria were student demand, potential for excellence, external impact, and cost. The review measured "income assessment," program performance, outcome assessment, and cost financing, Eaton said. He explained that "income assessment" is data on applications, admissions, enrolled students, GPA, GRE, undergraduate institutions, and awards to determine the quality of incoming students.
Program performance is based on numbers of students who complete, faculty and student awards, productivity, degrees awarded, diversity, and time to degree.
Outcomes are based on placement of students, career paths of students, evaluations by graduates' employers, and national rankings.
Within the university, similar programs were compared; that is, art and music would be compared, or chemistry and physics. "They found that good departments were high across the variables," Eaton reported. Iowa also benchmarked programs against similar programs using the National Research Council rankings.
The final report went to the provost.
Bruce Chaloux observed that Iowa's review process "is more mathematical in terms of created numbers than most evaluations we see."
Lewis Siegel, graduate dean at Duke, also spoke at the CGS meeting last summer, providing information on using external reviewers and quality markers.
At Duke the external review team was funded by the provost but the process was implemented through the dean of the graduate school. The charge to the team was from the graduate dean and the provost.
Teams of three to five members were formed from department nominations and other sources. The departments and graduate school provided information. Each team made a two-day campus visit, then had an exit interview with the provost, the graduate dean, and the college dean. The teams' reports and departments' responses were shared with the executive committee of the graduate faculty. An executive summary was shared with the president, appropriate deans, and the university's board of trustees.
CGSP Chair Rebecca Lovingood asked if the commission should consider whether there will be graduate program review, or determine the shape it might take.
"I think we have graduate program review now," said Eaton. "Given the structure of what is being done now, what can we do to assess quality and give departments access to information they don't have?
"The issue of where the reports go is of broader interest."
"Who makes that decision?" asked Lovingood.
"I don't think it's a decision; it's a recommendation," said Eaton.
Len Peters suggested the commission could forward recommendations to the University Council.
Jim Burger asked, "What are the indicators that there is a problem?"
"If there is a problem, we can learn about it by testing ourselves against our peers," Eaton said. "Fisheries and EDVT have been ranked number one; that's based on benchmarking data. Maybe such data could be used to persuade a dean to put resources in a program."
Burger said, "The success of graduate students has to do in part with the amount of research sponsorship. Promotion and tenure is based in part on the success and placement of graduate students. If a program is not successful, then research is not funded and faculty members are not promoted."
Peters said, "It's not that there's a problem. We can use a review process for diagnostics to make a program better.... How do we know whether our students are placed well relative to other programs across the country? We may find we're doing well. What's wrong is not the issue. The issue is how we can use a process like this to make us better."
Chaloux said, "We need to be sensitive to what sponsors are looking at. I suspect they are looking at NRC rankings." He added, "I would hope that program review would lead to decisions about resource allocation."
After additional discussion, Eaton said much of the data departments already collect could be used. "If the concept of benchmarking becomes a reality, Institutional Research might be collecting data so that when a department comes up for review there would be data available."
Gene Brown said, "The process for review might be as important as the product. Deciding what you value is important for us as faculty."
Peters said some reviews at Tech are already shared with the graduate dean and the provost, such as landscape architecture and some departments in engineering.
Lovingood said, "It would be helpful to have similar information from all departments."
The issue of graduate program review was referred to the Degree Requirements, Standards, Criteria, and Academic Policies Committee. Eaton suggested the committee could point out areas where additional attention is desirable, recommend that the university set up a benchmarking process, and recommend that reports could go to the graduate dean and provost.