
Volume 11, Number 1
Fall 1999


https://doi.org/10.21061/jte.v11i1.a.2

Identification of Quality Characteristics for Technology Education Programs: A North Carolina Case Study

Aaron C. Clark and Robert E. Wenig

Since its beginning, technology education has consistently pursued quality outcomes in its course offerings. During the past 25 years especially, establishing standards, or outcomes, has been a major focus at both the national and state levels (Dugger, 1988). Interviews with officials of the North Carolina State Department of Public Instruction revealed that the state had not identified indicators of quality that could be used to assess whether technology education programs throughout the state were meeting statewide curriculum goals and objectives. The purpose of this study was to identify such quality indicators and to develop a correlated check sheet.

Program quality has been a concern for practitioners within technology education. However, what constitutes the elements of quality has not been adequately investigated. For example, Henak (1992) declared that quality learning in a technology education program comes from the content, learning process, experiences, and growth opportunities offered to students.

The problem of educational quality and its assessment extends to the whole of education. According to the Education Commission of the States (1992), even though there have been many attempts to develop educational standards, new information on assessing the quality of education provided in schools, districts, and states is lacking. The Federal Coordinating Council for Science, Engineering and Technology (1993) added that an evaluation process is needed in each state to analyze programs so that questions about the quality of a program can be answered. Further, if responsible change efforts are to be made to establish quality in a technology curriculum, they must include a structure for an objective and critical assessment of each program in order to establish benchmarks for the process (Dyrenfurth, Custer, Loepp, Barnes, Iley, & Boyt, 1993). Many states in addition to North Carolina are working toward setting criteria for assessing quality within technology programs. If a state is to grow and develop better course offerings within its technology education programs, more research is needed. Professionals in technology education throughout North Carolina felt that a benchmarking process was needed and that it should be directly linked to an assessment strategy. The results of this study would also be used in establishing criteria for North Carolina's "Governor's Quality Leadership Award" in education, a goal for all educational programs within the state.

Research Methodology

The Delphi technique for achieving consensus among experts was determined to be the best research method for the stated purpose of this study. Volk (1993) used the Delphi method to acquire consensus on technology education curriculum development, and Dalkey (1972) suggested the Delphi technique as a means for decision making through the use of expert judgement. Procedures used for conducting this particular Delphi study were developed from experts on the methodology (e.g., Delbecq, Van de Ven, & Gustafson, 1986; Linstone & Turoff, 1975; Meyer & Booker, 1990). From the literature, it was determined that a four-round Delphi process would be used.

The members of the panel of experts were selected by soliciting recommendations from administrators responsible for technology education, technology teacher educators, and personnel from the North Carolina State Department of Public Instruction. Individuals with the highest number of recommendations were selected to serve on the panel of experts. The resultant panel totaled 19 and consisted of 15 technology teachers, three vocational directors, and one technology teacher educator. This number was proportional to the total number of individuals within the state who serve in these respective positions.

Next, a review panel of three members was randomly selected from a list of those not selected to be on the panel of experts. The purpose of the review panel was to review and approve each instrument used in the study. This was done to reduce bias that might occur as a result of modifications made between rounds (Linstone & Turoff, 1975; Meyer & Booker, 1990). The format for the initial instrument was developed by reviewing examples from other Delphi studies (Meyer & Booker, 1990; Volk, 1993). The categories and quality indicators were identified principally from a list of similar items developed by the Maryland State Department of Education (1995).

Findings

Table 1 is a descriptive summary of the panel members and the geographic regions they represented. The population of the state is nearly equally distributed among three telephone area codes, and the respondents nearly equally represented these regions. Nearly two-thirds taught at the high school level. For eight of the panel members the baccalaureate was the highest degree held, ten held a master's degree or higher, and one respondent had not earned a degree. Table 2 reports the teaching and administrative experience of the panel members. The average teaching experience was 11 years, with a range of zero to 28 years.

Table 1

Selected Demographic Characteristics of Expert Panel Members
Category
n
%
Professional Position


Technology Teacher
15
78.9
Technology Teacher Educator
1
5.3
Administrator
3
15.8
Total
19
100
Principal Grade Level of Position


Middle School Grades
6
31.6
High School Grades
12
63.1
College Level
1
5.3
Total
19
100
Level of Education


Less than a BS/BA
1
5.3
BS/BA
8
42.1
MS/MEd
9
47.3
EdD/PhD
1
5.3
Total
19
100
Geographic Region


704 Area Code
7
36.8
910 Area Code
6
31.6
919 Area Code
6
31.6
Total
19
100

Table 2

Years of Teaching Experience of Expert Panel Members (n=19)
Category                               M      SD     Minimum   Maximum
Teaching experience in years           11     9.13   0         28
Administration experience in years     .57    1.53   0         6

The initial instrument, once approved by the three-member review panel, was sent to the expert panel members. This represented Round One of the study. Panel members were allowed to edit the indicators and categories and add new ones. Those that were accepted by the majority of the panel members were retained. Similar items were combined and redundant items were eliminated. Once approved by the review panel, the resulting instrument consisted of 47 indicators of quality for technology education programs across eight categories.

Round Two of the Delphi process involved having the panel of experts rate the quality indicators and categories identified in Round One, following the process described by Meyer and Booker (1990). A Likert-type scale ranging from one to five was used. A value of one represented a very poor indicator of quality, not considered appropriate for any technology education program. A rating of two represented a poor indicator of quality, one that 49 percent or fewer of the programs should meet. A value of three represented a fair indicator of quality, one appropriate for 51 percent or more of the technology education programs within the state. A rating of four represented a good indicator of quality, one that 75 percent or more of the programs should meet. A value of five represented an excellent indicator of quality, one that all technology education programs in the state should meet.

Table 3

Examples of Modifications Made to Indicators from Round Two of the Delphi Study
Indicator from Round Two: The philosophy and program objectives address the need to teach the application of technology for the present and future needs of society.
Modification for Round Three: The program objectives address the need to teach the application of technology for the present and future needs of society.

Indicator from Round Two: The philosophy and mission statements address the relationship among humans, society and technology.
Modification for Round Three: The philosophy, program objectives and mission statement address the relationships among humans, society and technology.

Indicator from Round Two: The philosophy addresses the need to continually update and revise the curriculum.
Modification for Round Three: The philosophy and program objectives address the need to continually update and revise the curriculum.

Using standard Delphi procedures, a mean cutoff value of 3.01 was used on the Likert-type scale responses. Quality indicators with a mean value of 3.01 or above were retained for Round Three and the others were discarded. It was thereby determined that the remaining indicators were appropriate for 51 percent or more of technology education programs within the state. All the categories were retained.
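As an illustration of this cutoff rule only, the short Python sketch below applies the 3.01 criterion to hypothetical mean ratings; the indicator names and rating values are invented for the example and are not the study's Round Two data.

    # Sketch of the Round Two retention rule: keep any indicator whose mean
    # Likert rating (1 to 5) across panel members is 3.01 or higher. The
    # indicator names and ratings are hypothetical.
    from statistics import mean

    round_two_ratings = {
        "Teachers help write the program philosophy": [5, 4, 4, 5, 3],
        "The laboratory reflects the program mission": [4, 4, 5, 3, 4],
        "An intentionally weak example indicator": [2, 3, 2, 3, 3],
    }

    CUTOFF = 3.01  # mean cutoff value used in the study

    retained = {name: mean(r) for name, r in round_two_ratings.items() if mean(r) >= CUTOFF}
    for name, m in retained.items():
        print(f"retained (M = {m:.2f}): {name}")
    for name in round_two_ratings:
        if name not in retained:
            print(f"discarded: {name}")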

A one-factor repeated measures analysis of variance (ANOVA), following a procedure suggested by Agresti and Finlay (1986), was used to determine whether any one category was dominant over the others. No significant differences were found among the categories. These data are shown in Table 4.
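The sketch below shows, in Python, how such a one-factor repeated measures ANOVA could be run with the statsmodels package, treating panel members as the repeated subjects and category as the within-subject factor. The ratings are randomly generated stand-ins; the study's actual data and the software it used are not reported here.

    # Minimal sketch of a one-factor repeated measures ANOVA on category
    # ratings, using statsmodels' AnovaRM. Panel members are the repeated
    # "subjects" and category is the within-subject factor. Ratings here are
    # placeholders, not the study's data.
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    categories = ["Philosophy", "Instruction", "Safety", "Facilities"]
    records = [
        {"panelist": p, "category": c, "rating": int(rng.integers(3, 6))}
        for p in range(1, 20)          # 19 panel members
        for c in categories
    ]
    data = pd.DataFrame(records)

    # Fit the repeated measures model; a p value above .05 for the category
    # factor would indicate that no single category dominates the others.
    result = AnovaRM(data, depvar="rating", subject="panelist", within=["category"]).fit()
    print(result.anova_table)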

In Round Three of the study, the panel of experts was asked to rank order the quality indicators within each of the eight categories (Meyer & Booker, 1990). Sixteen of the original 19 members of the expert panel responded to this round within the established time period. No new quality indicators were suggested in this round, but panel members did suggest six modifications to existing indicators. As with previous rounds, these suggestions were approved by the review panel.

Table 4

One-Factor Repeated Measures ANOVA Test on Category Names from Round Two
Category Name
M
SD
F-Value
P
Philosophy and Mission
4.26
.57
--
--
Instructional Program
4.21
.56
--
--
Student Populations
4.29
.63
--
--
Program Requirements
4.09
.69
--
--
Safety and Health
4.11
.65
--
--
Professional Development
4.42
.61
--
--
Facilities/Equipment/Materials
4.35
.70
--
--
Public Relations
4.25
.76
--
--
OVERALL
--
--
1.42
.20

A series of Spearman correlation coefficients was calculated between the ratings of the quality indicators from Round Two and the rankings determined in Round Three (Gibbons, 1976). This statistical process was designed to reveal the relationship between each category and its corresponding indicators. The ratings in Round Two ranged from 1 (low) to 5 (high), while the rankings within each category ran in the opposite direction (1 being the highest rank). Thus, a high negative correlation was an indicator of consensus between the two rounds.

The Facilities/Equipment/Materials category had a low negative correlation coefficient of -.18, and the Public Relations category had a positive correlation coefficient of +.44. These two categories, with their indicators, did not indicate consensus. However, the overall scores combined had a moderate negative correlation coefficient of -.40, suggesting that consensus was being achieved between Rounds Two and Three overall. Suggested modifications from both panels were made to indicators from this round and incorporated into the fourth and final round. These data are shown in column two of Table 5.

Spearman's correlation coefficient was also calculated between the ranks and medians for Round Three. A high positive correlation would show that no outliers (effects of one or more extreme scores) were influencing the consensus-reaching process for the indicators in this round. Such high positive correlations were found for all of the categories. These data are shown in column three of Table 5.

In the end, indicators that ranked in the upper 50 percent for a category were retained and the others were discarded. This reduced the indicators to a usable number and retained only those most likely to reach consensus in the final round.
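The consensus checks described in the preceding paragraphs can be summarized in a short sketch using scipy's Spearman correlation. The rating, rank, and median values below are hypothetical, and the upper-50-percent retention rule is applied to them for illustration only.

    # Sketch of the consensus checks described above, using scipy's spearmanr.
    # The values are hypothetical mean ratings (Round Two) and mean and median
    # ranks (Round Three) for six indicators in one category.
    from scipy.stats import spearmanr

    mean_ratings_r2 = [4.6, 4.3, 4.1, 3.8, 3.5, 3.2]   # Round Two means (5 = high)
    mean_ranks_r3   = [1.4, 2.1, 3.0, 4.2, 4.8, 5.5]   # Round Three mean ranks (1 = best)
    median_ranks_r3 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # Round Three median ranks

    # A strong negative coefficient signals agreement between rounds, because
    # high ratings should pair with low (better) rank numbers.
    rho_rate_rank, _ = spearmanr(mean_ratings_r2, mean_ranks_r3)

    # A strong positive coefficient suggests no outliers are distorting the
    # Round Three rankings.
    rho_rank_median, _ = spearmanr(mean_ranks_r3, median_ranks_r3)

    print(f"rating vs. rank:  rho = {rho_rate_rank:.2f}")
    print(f"rank vs. median:  rho = {rho_rank_median:.2f}")

    # Retention rule: keep only indicators ranked in the upper half of the
    # category (the lowest mean ranks).
    n_keep = len(mean_ranks_r3) // 2
    retained = sorted(range(len(mean_ranks_r3)), key=lambda i: mean_ranks_r3[i])[:n_keep]
    print("positions of retained indicators:", retained)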

Delphi Round Four, the final round, was intended to gain the final approval of the quality indicators from the panel of experts. All but two of the panel members responded by the specified date. Each panel member was asked to indicate whether they accepted or rejected each of the quality indicators that resulted from Round Three. No suggestions for changes to the items were permitted. Once these data were collected, the indicators were placed in a contingency table and a Chi-Square test (p < .05) was conducted to determine which indicators reached consensus. Only one indicator did not reach consensus: that a comprehensive written safety and health plan be required for the program. Members of the panel of experts felt that, in practice, technology teachers do not prepare such a plan; for legal reasons it must be prepared by the vocational director, and therefore it was not an indicator of quality for technology education programs. The remaining indicators, shown in Table 6, constituted the final list.
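Because the article does not reproduce the contingency table, the sketch below shows one plausible reading of the Round Four analysis: a chi-square goodness-of-fit test of each indicator's accept/reject split against a 50/50 chance split, judged at p < .05. The counts are hypothetical and sum to the 17 panel members who responded in this round.

    # One plausible reading of the Round Four consensus test: a chi-square
    # goodness-of-fit test of each indicator's accept/reject counts against a
    # 50/50 chance split, judged at p < .05. Counts are hypothetical.
    from scipy.stats import chisquare

    round_four_votes = {
        "Teachers prepare and teach safety lessons": (16, 1),        # (accept, reject)
        "A comprehensive written safety plan is required": (10, 7),
    }

    ALPHA = 0.05
    for indicator, (accept, reject) in round_four_votes.items():
        stat, p = chisquare([accept, reject])
        verdict = "consensus" if p < ALPHA else "no consensus"
        print(f"{indicator}: chi2 = {stat:.2f}, p = {p:.3f} -> {verdict}")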

Table 5

Spearman's Rho Correlation Coefficients between Round Two Ratings and Round Three Ranks, and between Round Three Ranks and Medians
Category Name
r (M rate/M rank)
r (M rank/Mdn)
Philosophy and Mission of Program
-.90
.91
Instructional Program
-.88
.98
Student Populations
-.97
1.00
Program Requirements
-.44
.93
Safety and Health
-.82
.97
Professional Development
-.63
.95
Facilities/Equipment/Materials
-.18
.99
Public Relations
.44
.97
OVERALL
-.40
.95

Table 6

Final Listing of Quality Indicators for Technology Education Programs

Philosophy and Mission of Program Category:
  • The program objectives address the need to teach the application of technology for the present and future needs of society.
  • The philosophy and program objectives include teaching students the importance of using knowledge, materials, tools, and machines to solve problems by producing products.
  • Technology teachers are actively involved in developing the philosophical and/or mission statement for the program.
  • The philosophy and program objectives address the need to continually update and revise the curriculum.
Instructional Program Category:
  • Course content is developed from course competencies/enabling objectives and utilizes approved curriculum guides, courses of study and professional resources.
  • Course content is allowed to develop and to experiment with new technologies and areas.
  • Course content is affected by the perpetual evolution of technology and society's interaction with that technology.
Student Populations Category:
  • Technology education activities are provided for all students without bias toward gender, ethnic background, achievement, handicap, or disadvantagement.
  • All students are provided guidance about technology education course offerings at their school.
  • All population types are represented in the technology education program.
Program Requirements Category:
  • Sufficient funds are budgeted for equipment and facility improvements to accomplish course objectives.
  • Administration presents the attitude necessary for growth and development of technology education programs.
  • The maximum number of students per period is appropriate for class population (special populations, etc.) and appropriate for the type and kind of instructional activity(ies) conducted.
  • Administration is knowledgeable of the need to continually update the technology curriculum.
Safety and Health Category:
  • Technology teachers prepare and teach appropriate lessons on safety.
  • Students participating in technology education classes are required to complete a written safety test on applicable equipment with 100% success.
Professional Development Category:
  • The technology teacher is provided adequate time and finances to attend at least one state sponsored workshop or function.
  • Adequate funding is provided for technology teachers to participate in local, state, and national professional development according to local policy and procedures.
  • The technology teacher participates in staff development activities that lead to the correlation of technology education with other related academic and vocational disciplines.
Facilities/Equipment/Materials Category:
  • The technology presented is applicable to the present and future workplace.
  • The appearance and arrangement of the laboratory reflect the mission and philosophy of the program.
  • The technology offered in the program is up-to-date with current technological needs.
Public Relations Category:
  • Teachers and students maintain a high state of visibility through the promotion of class and student activities as a public relations strategy.
  • Students promote and support technology education programs through involvement in activities, including North Carolina Technology Student Association or Career Exploration Clubs of North Carolina.
  • Business and industry actively communicate with the local schools.

Conclusion

Using the Delphi technique, a panel of experts within the state of North Carolina reached consensus on 25 quality indicators for technology education. Three major conclusions were drawn from the information collected in this study. First, the quality indicators listed in the findings were developed for technology education programs in North Carolina, but most could be used for other programs that utilize laboratory instruction. Because the expert panel members were asked to write indicators general to all technology programs within the state, the researchers believe each indicator could readily be applied to other program areas related to industrial or vocational education.

Second, the final listing of quality indicators is similar to the Maryland State Department of Education's (1995) list of quality indicators. The 25 indicators found in this study cover the same topic areas as the Maryland listing. The major differences between the two lists are as follows. Maryland has developed hundreds of indicators, most of which are directly related to specific content in technology education, and Maryland used a team of technology education professionals to assess program quality for schools within the state. The indicators found in this study are fewer in number and less specific to curriculum content areas. They were also written so that school administrators with little or no background in technology education could use them for program assessment.

Finally, the major categories found in this study directly mirror eight of the ten categories for standards used in previous standards projects (Dugger, Bame, & Pinder, 1985). The researchers did not pursue why the expert panel members did not include categories for evaluation processes and support systems. In addition, the categories and indicators solicited from the experts directly reflect the major areas and findings associated with the Technology for All Americans Project (1995). This correspondence suggests that the establishment of standards and the development of quality indicators can coincide, and that quality characteristics for a program can therefore be identified through the establishment of standards for that same program. Combining the two will allow technology education professionals to establish needed benchmarks for programs as we teach our students to learn to live in a technical world.

References

Agresti, A., & Finlay, B. (1986). Statistical methods for the social sciences (2nd ed.). San Francisco: Dellen Publishing.

Dalkey, N. C. (1972). Studies in the quality of life: Delphi and decision making. Lexington, MA: Lexington Books.

Delbecq, A. L., Van de Ven, A. H., & Gustafson, D. H. (1986). Group techniques for program planning: A guide to nominal group and Delphi processes. Middleton, WI: Green Briar Press.

Dugger, W. E., Bame, E. A., & Pinder, C. A. (1985). Standards for technology education programs. South Holland, IL: Goodheart-Willcox.

Dugger, W. E. (1988). Evaluation of our profession over the past twenty-five years: Quantitative and qualitative benchmarks. In D. L. Householder (Ed.), Mississippi Valley Industrial Teacher Education Conference, USA, 75, (pp. 91-109). Lansing, IL: Technical Foundation of America. (ERIC Document Reproduction Service No. ED 323 332)

Dyrenfurth, M. J., Custer, R. L., Loepp, F. L., Barnes, J. L., Iley, J. L., & Boyt, D. (1993). A model for assessing the extent of transition to technology education. Journal of Industrial Teacher Education, 31(1), 57-83.

Education Commission of the States. (1992). Creating visions and standards to support them: Restructuring the education system. Denver, CO: Education Commission of the States. (ERIC Document Reproduction Service No. ED 350 676)

Federal Coordinating Council for Science, Engineering and Technology. (1993). The federal investment in science, mathematics, engineering, and technology education? Where now? What next? Report of the expert panel for the review of federal education programs in science, mathematics, engineering, and technology. Washington, DC: Author. (ERIC Document Reproduction Service No. ED 366 500)

Gibbons, J. D. (1976). Nonparametric methods for quantitative analysis. New York: Holt, Rinehart and Winston.

Linstone, H. A., & Turoff, M. (1975). The Delphi method: Techniques and applications. Reading, MA: Addison-Wesley Publishing.

Maryland State Department of Education. (1995). Quality indicators for technology education programs in Maryland. Baltimore, MD: Division of Career Technology and Adult Learning.

Meyer, M. A., & Booker, J. M. (1990). Eliciting and analyzing expert judgement: A practical guide. Washington, DC: U.S. Nuclear Regulatory Commission.

Technology for All Americans. (1995). A project to develop national standards for K-12 technology education [Brochure]. Blacksburg, VA: Dugger.

Volk, K. S. (1993). Curriculum development uses the Delphi technique. The Technology Teacher, 52(4), 35-36.

WorldWide Education and Research Institute. (1982). Quality indicators for vocational education. Salt Lake City, UT: Utah State Office of Education, Division of Vocational Education. (ERIC Document Reproduction Service No. ED 219 572)


Aaron C. Clark (aaron@poe.coe.ncsu.edu) is an Assistant Professor and Robert E. Wenig (wenig@poe.coe.ncsu.edu) is an Associate Professor in the Department of Mathematics, Science and Technology Education, College of Education and Psychology at North Carolina State University, Raleigh.