Development and Analysis of a National Certification Exam for Industrial Technology
Dennis W. Field
Iowa State University

Sheila E. Rowe
Iowa State University

Certification can be defined as recognition of achievements within a profession based on requirements voluntarily adopted by its representative association (Jaffeson, 2001). This article offers some insight into the development process for a certification program as it relates to the field of Industrial Technology. A number of colleges and universities accredited by the National Association of Industrial Technology (NAIT) have indicated interest in using a certification exam for purposes of program assessment. For these colleges and universities, development and continuing analysis of an examination are important elements of the NAIT certification program. In light of this expressed interest, a significant segment of this article focuses on research associated with the examination.
NAIT is dedicated to the establishment and maintenance of professional standards for industrial technologists. The association's certification program was established to provide recognition of an individual's knowledge, skills, and continuing professional development in the field of industrial technology. The certification process allows individuals to document their skills and knowledge within a given profession, while organizations receive some assurance that certified individuals are continually involved in professional development. Pare (1996) suggests that certification may be the best single indicator of how qualified a potential or current employee is. Barnhardt (1994) also suggests that professional certification helps both the individual and the organization, a viewpoint apparently shared by others, given that over one thousand certification programs are available for different professions and occupations. For example, Hamm (1996) lists 28 occupational categories for certification that include approximately 1,600 certificate-granting programs and over 200 accrediting organizations. However, despite the growing number of certification programs, and of groups such as the National Commission for Certifying Agencies (NCCA) and the National Organization for Competency Assurance (NOCA) that provide standards and guidelines for certification, Tillman (1995) reports that there is generally a lack of organization, accessibility, and consensus of information on certification programs. These organizations do help give focus to the certification process, and other groups, such as the American Psychological Association (APA), the American Educational Research Association (AERA), and the National Council on Measurement in Education (NCME), provide standards for test development that associations can use in developing and updating certification programs.
Certification Programs
Each profession defines its certification program in its own unique terms. After reviewing 450 certification programs, Barnhardt (1994) found that no single definition can apply to every program. Many certification programs use education and experience as certification requirements, while others may use only one or neither of these criteria. The designators used for certification differ in meaning, as do the criteria established to define them. For example, Barnhardt notes that the designator "Certified Public Accountant" is actually a state-issued license. However, the terms credentials, certification, licensure, standards, and accreditation should not be used interchangeably. Certification is voluntary and provides assurances about an individual, while accreditation provides assurances about institutions. Licensure and standards programs are managed by state or government agencies, and Barnhardt reports that licensure serves to restrict a profession to individuals who meet minimum state requirements. Credentials and competency exams imply that individuals are "guaranteed" to perform at certain prescribed levels (Jaffeson, 2001).
According to Barnhardt (1994), over 1,500 certification programs exist in the United States. They represent a wide range of industries and professions, including business, management, accounting, finance, human resources, law, logistics, planning, insurance, marketing, communications, security, real estate, hospitality and travel, computers, and engineering. He suggests that companies can use various certification programs as assessment tools to determine job knowledge and the level of an individual's dedication to his or her profession. They may also be used to gauge experience and as resources for training. Certification programs serve an accountability function as well, holding workers responsible for their level of competence in their occupations.
During the past ten years, certification programs have become increasingly popular. For example, four pharmacy associations collaborated to form a single, consolidated voluntary national certification organization in 1994, the Pharmacy Technician Certification Board (PTCB). Since 1995, PTCB has certified more than 54,000 pharmacy technicians through the Pharmacy Technician Certification Examination (Murer, 2000). Moreover, the Pharmacy Technician Certification program is far from alone in its explosive level of growth. According to a report from Drake Training & Technologies, the number of certification programs in information technology increased ten-fold between 1993 and 1994 (Merrill, 1994). Software certification examinations such as Microsoft's Certified Professional program, the Certified Oracle Engineer, and the Certified Novell Engineer are representative of popular computer application certification programs made available in recent years.
Business and industry observers cite the relevance of and need for certification programs. Peluso (2000), corporate counsel for the Professional Examination Service, stated that certification programs enable employees in various fields to advance their value and appeal. Such programs also give the public more confidence in the quality of work performed. Peluso added that professional certification programs sponsored by associations serve a multitude of purposes for many stakeholders, including the general public, employers, and certificants. Schrage (2000) argued in Fortune magazine that a degree alone does not tell an employer what a job applicant can actually do. Schrage suggested that a cum laude baccalaureate in computer science does not describe the digital abilities of its recipient, but that certification will give academic programs and degrees meaning in the marketplace.
NAIT Certification
For its part, NAIT regards the implementation and validation of professional standards as beneficial to its stakeholders. NAIT (1997) offers the following reasons, among others, as grounds for its position: the certified individual can rightfully feel a sense of satisfaction from demonstrating mastery of a level of expertise in the field of industrial technology; employers of industrial technology graduates are provided some assurance regarding their employees' educational background and continued professional development; and the discipline of industrial technology gains by focusing on a continuing and rigorous examination of the knowledge, skills, and attributes that are essential for working as a member of the industrial technology community.
It should be noted that students and faculty of industrial technology programs are a significant part of the aforementioned stakeholders within the industrial technology community. The authors believe, as do others ( Barnhardt, 1994 ; NAIT, 1997 ; Pare, 1996 ), that the benefits of a well-run certification program are worth the effort; however, the key expression here is "well-run." A great deal of planning and hard work must go into the program startup and subsequent operation to ensure that the outcomes of the certification program are reliable and valid.
NAIT Certification Program
The NAIT certification program had its start with the formation of an ad-hoc certification committee in 1991. The impetus for setting up a program of certification by examination came primarily from two considerations. The first was that there might be a pool of professionals with an interest in certification who did not meet the original certification guidelines; and the second, that aggregated examination results of graduating students might be of some value to industrial technology baccalaureate programs wishing to assess the technical management portions of their programs (Everett Israel, personal communication, June 2000).
The technical management portion of the programs is defined by means of the NAIT foundation course requirements listed in the NAIT Industrial Technology Accreditation Handbook ( NAIT, 2000 ). All industrial technology graduates must complete a minimum of 36 semester hours in management courses and technical courses. Management courses cover topics such as quality control, production planning and control, industrial supervision, industrial finance and accounting, industrial safety management, facilities layout and materials handling, time and motion study, industrial communications, business law, and marketing; while technical courses encompass subject matter such as computer integrated manufacturing, computer aided design, electronics, materials testing, computer technology, packaging, construction, and manufacturing processes.
Two levels of certification are currently available, Certified Industrial Technologist (CIT) and Certified Senior Industrial Technologist (CSIT). The CIT is the initial certification status awarded to graduates and faculty of NAIT-accredited associate and baccalaureate industrial technology degree programs. The CIT is expected to be upgraded to the CSIT upon recertification, and a maximum of eight years is allowed for individuals to meet CSIT requirements. The CIT is not renewable. The CSIT is awarded to graduates and faculty of NAIT-accredited associate and baccalaureate degree industrial technology programs with five years of professional experience and 75 hours of professional development units (PDUs). The required professional experience may be earned by teaching in a NAIT-accredited industrial technology program or working as a practicing industrial technologist. The PDUs must be related to the discipline of industrial technology and must have been completed in the previous five years. Once obtained, CSIT certification must be renewed every five years ( NAIT, 1997 ). A written examination is available for members of the profession who do not meet CIT or CSIT academic criteria for initial certification, but who demonstrate continued professional growth in the field of industrial technology.
Since the inception of the program in late 1991, 1,572 individuals have been NAIT certified. A total of 484 of these individuals currently maintain an active certification, with 301 designated as CITs, and the remaining 183 achieving CSIT rank.
Examination Development Methodology
ACT, Inc. (1997) suggests a framework for the assessment development process consisting of four phases: (1) developing the exam specifications, (2) prototyping, (3) pre-testing, and (4) constructing operational forms. During Phase 1, a "blueprint" for the examination is assembled, yielding an outline detailing what skills the assessment will measure. In Phase 2, sample items are written and assembled in a prototype form, which is then administered to a small number of examinees. Test specifications may be adjusted based on an analysis of the results from the prototype. After prototyping, items are pre-tested on a sample of examinees large enough (e.g., 1,500 to 2,000) to permit the evaluation of each test item's psychometric properties. The item statistics are examined for clues to possible problems with item content or form. During the pre-test phase, items are also analyzed for differential item functioning with respect to gender and ethnicity. Phase 4 involves the construction of the operational forms of the examination. Alternate and equivalent forms of the examination are developed from the pool of items meeting all content, statistical, and fairness criteria (ACT, 1997).
As a frame of reference, it should be noted that the NAIT certification exam would be considered to be in the early stages of Phase 3, and the remainder of this article is primarily concerned with the authors' work in this "pre-testing" phase; however, brief summaries of efforts undertaken in Phases 1 and 2 are also presented. Leadership in Phase 1 was provided by Dr. Everett Israel, Eastern Michigan University, and by Dr. Larry Helsel, Eastern Illinois University, as Chairs of the Certification Committee. The blueprint development process in Phase 1, which was needed to initiate a certification process by examination, drew upon content area research done by Dr. Clois Kicklighter (Everett Israel, personal communication, June 2000).
Phase 1: Blueprint
The subject areas around which the exam content was developed were identified during a Delphi study conducted by Dr. Clois Kicklighter. This study identified the technical management core for baccalaureate degree programs accredited by NAIT (C. Kicklighter, personal communication, October 10, 1991).
The Delphi Technique
The Delphi technique is a data collection strategy that uses a panel of experts to gain group consensus while limiting some of the typical disadvantages of face-to-face group interaction (Isaac & Michael, 1981). This technique is distinguished by three features: (a) anonymity, (b) iteration with controlled feedback, and (c) statistical group response. Anonymity is maintained through the use of a questionnaire; respondents are not able to identify other panel members or their responses, allowing individuals to change their opinions without publicly announcing that they have done so. Feedback is controlled through the moderator, who draws out only those pieces of information that are relevant to the topic being examined. This eliminates arguments and continual restatement of problems and issues among panel members. The statistical group response incorporates the opinions of the entire group (Martino, 1975).
Three iterations of the Delphi process were conducted under the direction of Dr. Kicklighter. The results of the study were reported internally to NAIT in October 1991. Eight major exam content areas were identified from that study: (1) Quality Control, (2) Production Planning and Control, (3) Industrial Supervision, (4) Industrial Finance and Accounting, (5) Industrial Safety Management, (6) Plant Layout and Materials Handling, (7) Time and Motion Study, and (8) Industrial Communications (C. Kicklighter, personal communication, October 10, 1991). These areas are consistent with foundation course subject matter requirements for graduates of NAIT accredited Industrial Technology programs.
Phase 2: Prototype
Dr. Kicklighter also played a major role in the development of the exam itself. In a letter to Dr. Andrew Baron (C. Kicklighter, personal communication, February 28, 1992), he proposed that copies of desired course listings and technical management core content areas be sent to every program accredited by NAIT. The faculty, students, and advisory committees at these programs were to be asked to compare their technical management content core to the proposed listing. It was hoped that this process would clarify terminology and fill in any gaps overlooked in the content reservoir. A request would be made to faculty in these programs to send Dr. Kicklighter their final exams, tests, and other questions that could be used in a certification test. It was believed that this analysis, plus the test questions collected, would give the NAIT Certification Board enough information to develop an examination that would be compatible with accreditation guidelines.
Discussions concerning the certification exam had progressed to questions of format by October 1993. It was proposed and approved that the exam be no more than three hours in length; that it be open-book; that it cover the major concepts, theories, and problems related to each of the eight technical management areas; and that panels of experts in each technical management area be convened to review content to be included on the exam (NAIT Board of Certification minutes, October 15, 1993). Not all of the 1993 recommendations survived the development process intact as both the Board and the exam underwent a number of changes over the next year-and-a-half. The NAIT Executive Board approved the appointment of Dr. Matthew Stephens as Chair of the Board of Certification in 1995, replacing Dr. Helsel, who resigned to return to full-time teaching. Dr. Stephens directed the completion of the prototyping phase.
The form of the certification exam that was prepared for field trials consisted of 200 multiple-choice questions from only six of the eight original areas: (1) Quality Control, (2) Production Planning and Control, (3) Safety, (4) Industrial Supervision, (5) Time and Motion Study, and (6) Industrial Communications. One other noteworthy change was the shift from an open-book to a closed-book exam. A draft exam was field-tested with approximately 60 examinees in the spring of 1995. Based on this group of examinees, Dr. Stephens and a consultant, Dr. John T. Mouw, completed a detailed item and test analysis, the results of which were summarized for other Board of Certification members in a memo. According to Dr. Stephens, the overall KR-20 reliability (see Footnote 1) of this form of the exam was .88. The KR-20 reliabilities of the six subsets were: Safety = .72, Production Planning and Control = .69, Industrial Supervision = .67, Quality Control = .59, Industrial Communications = .33, and Time and Motion Studies = .20 (M. Stephens, personal communication, January 9, 1996).
In a subsequent memo to members of the Board of Certification, Dr. Stephens indicated that the following actions were taken to correct problem areas in the original exam indicated by an analysis of the Spring 1995 data: The exam was collapsed into four sections (from the original six); questions were edited for added clarification; approximately 40 new questions were added; and 80 irreparable or inappropriate questions were deleted to increase test validity (M. Stephens, personal communication, April 16, 1996).
These actions resulted in a version of the exam suitable for a cross-validation administration that consisted of 40 questions each from the following four technical management content areas: Quality Control, Production Planning and Control, Safety, and Management/Supervision.
Content validation.
Crocker and Algina ( 1986 ) state: "In content validation, a typical procedure is to have a panel of independent experts (other than the item writers) judge whether the items adequately sample the domain of interest" (p. 218). The exam content review by panels of experts in each technical management area, directed by the Board of Certification (NAIT Board of Certification minutes, October 15, 1993), along with subsequent work under the direction of Dr. Stephens, indicated a concerted effort to ensure content validity of the items included on the examination.
Examination cross-validation study.
Data from 153 students representing nine different institutions were used in the analysis. Table 1 indicates student participation in the cross-validation (norming) study by institution. Table 2 provides descriptive statistics for the four subscales and the overall exam. Table 3 tabulates selected percentiles against test scores. Tables 1, 2, and 3 have been adapted from Tables I, II, IVa-d, and VI in the "Analysis of the results from the norming administration of the National Association of Industrial Technology certification exam" report (NAIT, 1996).
Table 1
Number of usable student samples by institution (sample size = 153)
Institution                     Sample size
Central Connecticut             6
Colorado                        18
Illinois State                  8
Indiana State                   29
North Carolina                  24
Purdue                          29
Texas Southern                  22
San Jose State                  7
University of North Dakota      10

The observations and conclusions in the "Analysis of the results from the norming administration of the National Association of Industrial Technology certification exam" report were in keeping with the exam development practices suggested by ACT (1997) during the prototype phase and include recognition of several areas requiring additional work. The report to the Board of Certification cautioned that a sample size of 153 is a less than optimum number upon which to base decisions regarding cut scores. However, it is well within the 100 to 200 examinee sample sizes mentioned by test construction experts (Crocker & Algina, 1986) as common for preliminary tryouts of test items developed for commercial use. Items that produced low discrimination indices in the norming administration of the certification exam were revised for use in the current exam.
Table 2
Descriptive statistics collected from exam field test (sample size = 153)
Content Area                           Mean Test Score    SD       Reliability (KR-20)
Production, Planning, and Control      21.39              5.18     .71
Quality                                18.36              5.28     .71
Safety                                 22.01              5.39     .76
Management/Supervision                 23.93              6.27     .82
Overall Examination                    85.69              17.66    .90

Table 3
Exam field test score percentiles (sample size = 153)
Content Area                           99th    95th    75th    50th    25th    1st
Production, Planning and Control       32      29      25      22      18      9
Quality                                31      29      22      18      14      8
Safety                                 31      29      26      23      19      6
Management/Supervision                 34      33      28      25      21      7
Overall Examination                    115     110     99      89      74      37

Note: The maximum score for each of the four individual content areas is 40. The maximum score for the overall examination is 160. As an example of table interpretation, an overall examination score of 110 out of a possible 160 would place the individual in the 95th percentile of examinees.
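As an illustration of how such percentile ranks can be derived from raw scores, the following short Python sketch computes the percentile rank of an overall score within a set of examinee scores. The scores generated here are hypothetical stand-ins, not NAIT data, and the function name is invented for illustration only.

import numpy as np

def percentile_rank(score, all_scores):
    # Percentage of examinees scoring at or below the given score.
    all_scores = np.asarray(all_scores)
    return 100.0 * np.mean(all_scores <= score)

# Hypothetical overall-exam scores (maximum possible = 160); for illustration only.
rng = np.random.default_rng(0)
scores = rng.normal(loc=86, scale=18, size=153).clip(0, 160).round()

print(percentile_rank(110, scores))  # a score of 110 lands near the top of this simulated group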
Phase 3: Pre-test
To date, 311 examinees have completed the current version of the examination, which was released in December 1998. Given the relatively small sample size available, a classical item analysis was deemed most appropriate as a research methodology. Both gender and ethnicity data are being collected for future analysis. The authors' research results generated during this phase of the examination development effort follow.
The pass rate for the exam for this group of examinees is 53%. Figure 1 provides histograms of examinee performance for each of the four content areas; Tables 4 and 5 provide descriptive statistics and test score percentile ranks by content area, respectively, for the pre-test administration of the examination. Figure 2 is typical of the type of scatter plot, sorted by technical management core content area, used for a review of exam item difficulty versus discrimination. The item difficulty is simply the percentage of examinees answering the specific question correctly. For example, the markers in the lower left-hand corner of the Production Planning and Control panel in Figure 2 indicate the most difficult questions in that section of the exam; only 8 to 14% of the examinees answered these questions correctly. The point biserial correlation (see Footnote 2), or discrimination index, is an indication of "how closely performance on a test item scored 0 or 1 [wrong or right] is related to the total test score" (Crocker & Algina, 1986, p. 317). It is an indication of how well a question discriminates between examinees of differing ability levels.
The range of the point biserial correlation is -1 to 1, with extremely low positive and any negative correlation coefficients being undesirable. Test developers generally will keep items (questions) with point biserial values that are at least two standard errors above zero. A convenient approximation for the point biserial standard error, according to Crocker and Algina (1986), is (N - 1)^(-1/2), where N is the sample size. This means that any question on the NAIT Certification exam with a point biserial correlation above 2 × (311 - 1)^(-1/2) ≅ 0.11 would meet the retention criterion, given the current sample size of 311. The discrimination index and the difficulty level as presently calculated are classical test statistics and, as such, vary according to the performance of the individuals that make up the sample.
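For readers who wish to reproduce this type of classical item analysis on their own data, the following Python sketch computes item difficulty, the point biserial discrimination index, the KR-20 reliability (see Footnotes 1 and 2 for the formulas), and the two-standard-error retention threshold described above. The 0/1 response matrix and all variable names are illustrative assumptions rather than NAIT data or NAIT software.

import numpy as np

def classical_item_analysis(responses):
    # responses: array of shape (n_examinees, n_items); 1 = correct, 0 = wrong.
    responses = np.asarray(responses, dtype=float)
    n, k = responses.shape
    total = responses.sum(axis=1)          # each examinee's total test score

    # Item difficulty: proportion of examinees answering each item correctly.
    p = responses.mean(axis=0)
    q = 1.0 - p

    # Point biserial discrimination (Footnote 2): correlation between the
    # 0/1 item score and the total test score.
    mu_plus = np.array([total[responses[:, i] == 1].mean() if 0 < p[i] < 1 else np.nan
                        for i in range(k)])
    mu_x = total.mean()
    sigma_x = total.std()
    with np.errstate(divide="ignore", invalid="ignore"):
        point_biserial = (mu_plus - mu_x) / sigma_x * np.sqrt(p / q)

    # KR-20 reliability (Footnote 1).
    kr20 = (k / (k - 1)) * (1.0 - (p * q).sum() / total.var())

    # Retention criterion: discrimination at least two standard errors above zero,
    # with the standard error approximated as (n - 1) ** -0.5.
    min_discrimination = 2.0 / np.sqrt(n - 1)

    return p, point_biserial, kr20, min_discrimination

With 311 examinees, the retention threshold computed this way is 2/√310, or approximately 0.11, the value cited above.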
Figure 1. Content area histograms of examinee scores.

Table 4
Descriptive statistics from current NAIT certification exam (sample size = 311)
Content Area                           Mean Test Score    SD       Reliability (KR-20)
Production, Planning and Control       19.76              4.91     .68
Quality                                16.37              4.14     .53
Safety                                 23.61              4.58     .65
Management/Supervision                 24.49              5.28     .73
Overall Examination                    84.22              14.71    .86

Table 5
Current NAIT certification exam field test score percentiles (sample size = 311)
Content Area                           99th    95th    75th    50th    25th    1st
Production, Planning, and Control      30      28      24      21      17      9
Quality                                27      24      19      17      14      8
Safety                                 33      31      27      24      22      12
Management/Supervision                 35      32      28      26      23      9
Overall Examination                    113     104     95      88      78      43

As can be seen in Figure 2, a number of the questions in the current exam are prime candidates for review. Again using the four markers in the lower left-hand corner of the Production Planning and Control panel in Figure 2 as an example, all four of these questions are negative discriminators as well as difficult questions, indicating that the few examinees who did answer them correctly tended to be individuals who scored below average on the overall exam. While there are several possible explanations for this, all would indicate the need for item review.
Discussion
Two issues are critical to the successful completion of Phase 3 activities. The first is to expand the item bank for the current exam. Additional qualified items should be readily available if, during a review, current exam questions are judged unacceptable or there is a desire to expand the examination. There is, however, a domino effect associated with this need to rework, replace, or add test items, or even to develop alternative equivalent test forms: a significant number of additional examinees is required to allow re-estimation of item parameters with relative stability. According to Nunnally (1967), a longstanding rule of thumb is to have five to ten times as many examinees as items. The second critical issue is therefore one of sample size. For the Board of Certification to follow Nunnally's suggestion with the current 160-item exam, between 800 and 1,600 examinees would be needed as test development continues, with the upper figure being consistent with ACT's (1997) recommendation of a sample size between 1,500 and 2,000 for Phase 3 development activities. Given the range of recommended sample sizes, readers can perhaps understand the authors' call for increased participation in the program. From a test development perspective, it may make sense to request that, at least for a period of time, a NAIT certification exam be given to all students sometime during the two semesters immediately preceding their graduation from a NAIT-accredited industrial technology program. This initiative would allow the Board of Certification to strengthen the examination development process, improve reliability, expand the number of sub-tests, and ultimately strengthen the position of NAIT certification. It also would provide the Board the sample size flexibility to investigate an alternative method of item analysis using the concepts of item response theory.
Figure 2. Item difficulty and point biserial correlation (discrimination) for the four subsections on the NAIT Certification exam. Vertical reference lines indicate the preferred difficulty range (0.4 to 0.6), while the horizontal reference line is located at the minimum discrimination value for item retention (i.e., 0.11).

Although one can always seek to improve item and test performance based on the metrics provided by classical test statistics, such an effort does not address the broader policy issue of whether it might be desirable to replace the current norm-referenced test with a criterion-referenced examination, or indeed with an assessment instrument that provides a more authentic assessment of technological problem-solving capability. Nor does it settle the questions of whether classical item analysis should be discarded in favor of a method of analysis based on item response theory, or whether cut scores, and therefore pass rates, are set appropriately. Further, the issue of how performance on this exam should affect, if at all, the way courses are taught at NAIT-accredited institutions is not addressed. To offer advice in these areas without input from the full NAIT Board of Certification would be premature; however, these are long-term concerns that should be on the table for discussion, given the ultimate objective of a viable, meaningful certification exam.
Classical Item Analysis
If one were to approach this examination strictly from the standpoint of classical item and test analysis, a number of researchers (Crocker & Algina, 1986; Cronbach & Warrington, 1952; Henryssen, 1971; Lord, 1952) suggest that the relationship between item difficulty and item discrimination (here, the point biserial correlation) should be of interest. Henryssen, in particular, suggests that when the average biserial correlation between item and total test score is in the range of .30 to .40, the ideal item difficulty should be between .40 and .60 to permit more reliable discriminations among examinees of different ability levels. As the average biserial correlation increases above .60, Henryssen notes, a wider range of item difficulties may be acceptable. There are exceptions to this (for example, when one is trying to identify only the best of the best), but the above guidelines serve as a good rule of thumb. As can be seen from the plots in Figure 2, a number of items fall outside the guidelines and likely require adjustment or replacement. This version of the NAIT certification exam yields an average biserial correlation of .20 between item and total test score, which suggests, according to Henryssen, that item difficulties should be targeted toward .50, or that an attempt should be made to rework test items to significantly increase the average biserial correlation, or both.
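To illustrate how the review criteria reflected in Figure 2 might be applied systematically, the short Python sketch below flags items whose difficulty falls outside the preferred band (0.40 to 0.60) or whose discrimination falls below the minimum retention value (0.11). The thresholds come from the guidelines and retention criterion discussed above; the function and argument names are hypothetical.

def flag_items_for_review(difficulty, discrimination,
                          difficulty_band=(0.40, 0.60), min_discrimination=0.11):
    # Return the indices of items falling outside the preferred difficulty band
    # or below the minimum discrimination value (the Figure 2 reference lines).
    flagged = []
    for i, (p, r) in enumerate(zip(difficulty, discrimination)):
        outside_band = not (difficulty_band[0] <= p <= difficulty_band[1])
        weak_discriminator = r < min_discrimination
        if outside_band or weak_discriminator:
            flagged.append(i)
    return flagged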
Method of Item Analysis
The selection of questions used in the current examination was based on the classical item analysis statistics of difficulty and discrimination. The tables and graphs in this manuscript specify difficulty and discrimination indices associated with current examinee performance on the test items. However, classical item analysis is not the only methodology available to test developers. Crocker and Algina (1986) discuss an analytical method based on item response theory that addresses several shortcomings in classical item analysis, including, most importantly, the lack of information about how examinees at different ability levels have performed on an item, and the variation in item analysis statistics associated with examinee selection (group-dependent statistics). As Crocker and Algina state: "This [item response theory] knowledge allows one to compare the performance of examinees who have taken different tests. It also permits one to apply the results of an item analysis to groups with different ability levels than the group used for the item analysis" (pp. 339-340). If item parameters are invariant to changes in the makeup of the examinee group on which the analysis is based, one can plausibly extrapolate the results to any other group of examinees. Crocker and Algina also state: "The invariance of item parameters has the important practical consequence that the parameters of large numbers of items can be estimated even though each item is not answered by every examinee" (p. 363). If the Board of Certification expects to develop multiple equivalent forms of the certification examination and apply item parameters with confidence to the whole population of examinees, then a shift toward the use of item response theory in item analysis is advocated.
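To give a concrete sense of what a move toward item response theory would involve, the sketch below evaluates a two-parameter logistic item characteristic curve, a standard IRT formulation rather than a model the Board has adopted; the discrimination (a) and difficulty (b) values shown are invented purely for illustration, and actual parameters would have to be estimated from a sufficiently large examinee sample.

import math

def two_parameter_logistic(theta, a, b):
    # Probability of a correct response for an examinee of ability theta,
    # given item discrimination a and item difficulty b. In principle, a and b
    # are invariant to the particular group of examinees tested.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item with moderate discrimination and average difficulty.
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(two_parameter_logistic(theta, a=1.2, b=0.0), 2))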
Test Type and Cut Scores
Current pass-fail criteria, or cut scores, for each of the four subsections of the current exam were arbitrarily set at the 25th percentile of the sample from the norming administration of the examination (see Table 3). This is a classic example of a norm-referenced test, in which cut scores are set relative to group examinee performance. The question of whether the cut scores are set at the appropriate level is certainly open to discussion. For example, a cut score of 14 for the Quality Control subsection is arguably low; one might expect random selection on 40 four-choice items to yield a score of about 10 even without prior subject knowledge. If one wishes to evaluate whether an examinee has attained the goals of instruction, then a criterion-referenced examination would be more appropriate.
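The observation that the Quality Control cut score of 14 sits close to the chance level can be checked with a simple binomial calculation. The sketch below assumes an examinee guesses independently on all 40 four-option items; the assumption and the function name are purely illustrative.

from math import comb

def prob_score_at_least(k, n_items=40, p_guess=0.25):
    # Binomial tail probability of scoring at least k by guessing alone.
    return sum(comb(n_items, i) * p_guess ** i * (1 - p_guess) ** (n_items - i)
               for i in range(k, n_items + 1))

print(round(prob_score_at_least(14), 3))  # approximately 0.10

Under this assumption, roughly one examinee in ten could reach the Quality Control cut score with no subject knowledge at all, which lends some support to the concern that the cut score may be set too low.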
Instructional Impact
Although there are no guidelines as to how performance on this exam should affect, if at all, the way courses are taught at NAIT-accredited institutions, the opportunity certainly exists for educational institutions to use the NAIT certification exam as a method of assessing and comparing undergraduate Industrial Technology programs. For current program assessment purposes, student data are aggregated at the institutional level. This information is compiled, and group means and standard deviations of student performance for each of the current exam's four subsections are returned to the institution, along with aggregate means and standard deviations from a cross-sectional composite of other institutions. Individual student performance data are not provided to the school, nor are the individual averages of other schools. However, given a large enough sample size, schools can request a breakdown of data along other demographic variables of interest (gender, academic option, etc.) if those data are supplied to the Board of Certification. The composite data can provide educational institutions a measure of comparative information regarding how their students perform relative to students in other Industrial Technology programs in the four main subject categories of Production, Planning and Control; Quality Control; Safety; and Management and Supervision. In addition, students may request information about whether they met or did not meet the minimum passing score requirement for each of the examination subsections.
Summary of Recommendations
The NAIT certification program is still in its infancy and the Board must address many issues in the coming years. The Board of Certification should initiate work to (a) develop a much larger item bank and alternate exam forms; (b) analyze exam items using item response theory; (c) investigate certification options for graduates of two-year versus four-year institutions; (d) prepare exam study guides; (e) develop specialty exams based on program options, such as graphics, aviation, and construction, that complement the core competency exam currently in place; (f) establish a formal mechanism to obtain feedback on policies and procedures under consideration by the Board of Certification; (g) encourage widespread adoption of the certification exam by undergraduate Industrial Technology programs for students in the last two semesters of their core curriculum; and (h) gauge the potential for authentic assessment in the Industrial Technology certification process.
Implications
As previously mentioned, the NAIT certification program was established to provide recognition of an individual's knowledge, skills and continuing professional development in the field of industrial technology. The program also provides Industrial Technologists with an alternative (non-academic) route to becoming certified and, with evidence of professional development, maintaining that certification. The validation of professional standards requires a significant commitment in terms of development time and effort. Although the NAIT Board of Certification has made considerable progress, much development work remains to be done. This effort will be successful only if the Board of Certification is able to obtain commitments from NAIT institutions to participate in this development process.
Footnotes
1. The KR-20 formula is $KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_x^2}\right)$, where $k$ is the number of items, $\sigma_x^2$ is the total test variance, and $p_i q_i$ is the variance of item $i$ (Crocker & Algina, 1986, p. 139).
2. When a test developer is interested in how closely performance on a test item scored 0 or 1 is related to performance on the total test score, the point biserial correlation can be useful. The formula is $\rho_{pbis} = \frac{\mu_{+} - \mu_{x}}{\sigma_{x}} \sqrt{\frac{p}{q}}$, where $\mu_{+}$ is the mean total test score for those who answered the item correctly, $\mu_{x}$ is the mean total test score for the entire group, $\sigma_{x}$ is the standard deviation of the total test scores, $p$ is the difficulty of item $i$, and $q$ is $(1 - p)$ (Crocker & Algina, 1986, p. 317).
References
ACT, Inc. (1997). Work Keys preliminary technical handbook . Iowa City, IA: Author.
Barnhardt, P. A. (1994). The guide to national professional certification programs . Amherst, MA: HRD Press.
Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory . Orlando, FL: Harcourt Brace Jovanovich, Inc.
Cronbach, L. J., & Warrington, W. G. (1952). Efficacy of multiple choice tests as a function of spread of item difficulties. Psychometrika , 17, 127-147.
Hamm, M. S. (1996). What are the building blocks of good certification and accreditation programs? In M. A. Pare (Ed.), Certification and accreditation programs directory (pp. xi-xiv). Detroit, MI: Gale Research.
Henryssen, S. (1971). Gathering, analyzing and using data on test items. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 130-159). Washington, DC: American Council on Education.
Isaac, S., & Michael, W. B. (1981). Handbook in research and evaluation (2nd ed.). San Diego, CA: EDITS.
Jaffeson, R. C. (2001, January). Certification purposes. Certification Communications Newsletter , IX(1), 1-2.
Lord, F. M. (1952). The relationship of the reliability of multiple choice items to the distribution of item difficulties. Psychometrika , 17, 181-194.
Martino, J. P. (1975). Technological forecasting for decision making . New York: North-Holland.
Merrill, K. (1994, November 7). Certification-exam stats: Info-tech tests spiraling up. Computer Reseller News , 179.
Murer, M. M. (2000). Certification by collaboration. Association Management , 52(5), 59-64.
National Association of Industrial Technology. (1996). Analysis of the results from the norming administration of the National Association of Industrial Technology certification exam . Unpublished manuscript.
National Association of Industrial Technology. (1997). Industrial technology certification handbook . Ann Arbor, MI: Author.
National Association of Industrial Technology. (2000). Industrial technology accreditation handbook . Ann Arbor, MI: Author.
Nunnally, J. C. (1967). Psychometric theory . New York: McGraw-Hill.
Pare, M. A. (Ed.). (1996). Certification and accreditation programs directory . Detroit, MI: Gale Research.
Peluso, S. T. (2000). Planning professional certification programs. Association Management , 52(5), 65-67.
Schrage, M. (2000, June 26). You're nuts if you're not certifiable. Fortune , 338.
Tillman, T. S. (1995). A benchmarking study to identify and analyze professional certification programs in industry and engineering . Unpublished master's thesis, Purdue University, West Lafayette, IN.
Field is Assistant Professor and Rowe is a doctoral candidate in the Department of Industrial Education and Technology at Iowa State University in Ames, Iowa. Field also serves as Chair of the National Association of Industrial Technology (NAIT) Board of Certification. Field can be reached at dwfield@iastate.edu . The authors wish to thank all those individuals who have provided information regarding the NAIT Certification program, particularly Dr. Everett Israel, Dr. Clois Kicklighter, Dr. Alvin Rudisill, and Mr. David Monforton.