JVER v29n3 - Technical Education Curriculum Assessment

Volume 29, Number 3
2004



Technical Education Curriculum Assessment

Jonathan C. Keiser
Dunwoody College of Technology
&
University of Minnesota


Frances Lawrenz
University of Minnesota

James J. Appleton
University of Minnesota

Abstract

The purpose of this paper is to describe and determine the efficacy of a Technical Education Curriculum Assessment (TECA). The TECA was designed to guide the judgment of the quality of technical education curricular materials. Three research strands were combined into a theoretical framework which underlies the education of effective technicians. The TECA consists of sets of rubrics which focus on workplace competencies, technical accuracy, and pedagogical soundness. The rubrics were constructed using a deductive-inductive approach. This was an iterative process that ensured validity by moving back and forth from the theoretical framework uncovered in the literature review (deductive) to the application of the rubrics to actual curricular materials (inductive). We describe the process of rubrics development and provide data which support their validity and reliability. This psychometrically sound instrument should assist industry and education professionals to make more informed decisions when designing, implementing, and evaluating technical education curriculum.

Introduction

Many publications in the last decade have outlined how advanced technology, the global economy, and changing demographics have intensified the need for new educational programs to supply industry with qualified technicians. The U.S. national need for more technicians was anticipated in the early 1990s in reports such as Gaining the competitive edge: Critical issues in science and engineering technician education (National Science Foundation, 1993) and Technology for all Americans: A rationale and structure for the study of technology (International Technology Education Association, 1996). Occupational and technical programs are especially important today in our rapidly changing job market. When asked about the economy, community college administrators mentioned several of their programs as particularly relevant to national economic recovery: digital systems, facilities technology, manufacturing process technology, and telecommunications (Coley, 2000). As discussed by Grubb (1999), the occupations with the highest growth rates require less than a bachelor's degree, typically one to two years of postsecondary technical education. In no area other than vocational and technical education has greater emphasis been placed upon the development of curricula that are relevant in terms of substantive outcomes for students and the industrial community (Finch & Crunkilton, 1999).

Standards are needed to help ensure the quality of education and the development of employees for technical level jobs. The study by Benn and Stewart (1998) with committee members for technical programs showed that the use of standards increases communication between industry and education, because the standards provide a basis for curriculum and assessment. Finch and Crunkilton (1999) distinguished between in-school and out-of-school technical education success standards. In-school success standards must be closely aligned with the performance expected within the given occupation. For instance, the criteria used by instructors should be the industry standards. Out-of-school success standards are determined by the employment-related success of a program's graduates. For example, out-of-school success standards can be occupational placement rates, graduates' incomes, workplace competencies, technical skills, and entrepreneurial skills. In the last few years, the National Skill Standards Board has been encouraging business and industry to communicate their requirements to educators (West, 2001). Technical education curricula should reflect these requirements so that graduates possess the competencies and skills that are critical to employer needs.

The purpose of this paper is to describe and determine the efficacy of a Technical Education Curriculum Assessment (TECA). The TECA was designed to guide the judgment of the quality of technical education curricular materials. It consists of sets of rubrics which assess workplace competencies, technical accuracy, and the pedagogical soundness of technical education curricula. The process of rubrics development and data supporting the rubrics' validity and reliability are described. The TECA was developed and implemented to assess the quality of 30 sets of curricular materials which were part of the National Science Foundation's Advanced Technological Education (ATE) Program.

Theoretical Framework

In order to develop an effective curriculum evaluation tool, the theory underlying the development of effective technicians needs to be explicated. Technical and vocational education research literature and curriculum and assessment research literature provide multiple perspectives on the production of effective technicians. We have merged these differing strands of research into a coherent theoretical framework. The major theoretical foci discussed here include the Secretary's Commission on Achieving Necessary Skills (SCANS, 1991), Finch and Crunkilton's (1999) curriculum development theory for technical and vocational education, and Wiggins' (1993, 1998) model of assessment and curriculum development.

The SCANS (1991) commission identified the competencies and skills needed to succeed in the world of work. The report identifies the following competencies that effective workers can productively use: resources, information, interpersonal skills, systems, and technology. Resources refer to allocating time, money, materials, space, and staff. Information means that a worker can acquire, organize, maintain, and evaluate data and use computers to process information. Interpersonal skills are human relation skills such as the ability to work on teams and lead, negotiate, and work well with people from culturally diverse backgrounds. Systems refer to an understanding of social, organizational, and technological systems. Workers should be able to monitor performance and design and improve systems. Technology implies that workers can effectively apply technology to specific tasks and maintain and troubleshoot equipment. The SCANS (1991) report also identifies three foundational skill sets that competent workers in a high-performance workplace need. These are basic skills (e.g., reading, writing, speaking, arithmetic), thinking skills (e.g., problem solving, decision making, reasoning, creativity), and personal qualities (e.g., self-esteem and self-management, sociability, integrity). Although the commission completed its work in 1992, its findings and recommendations continue to be a valuable source of information and continue to be cited in career and technical education literature ( http://www.scans.jhu.edu/NS/HTML/Articles.htm ). Similarly, Benn and Stewart (1998) suggest standards linking industry and vocational education programs, and Dyrenfurth (2000) suggests that employability and basic skills should be considered more heavily than company specific needs.

Finch and Crunkilton (1999) propose that the success of technical education curricula is not only measured by students' achievement in school, but also through the results of that achievement in the world of work. Therefore, curricula must be oriented and justified by both the process (learning experiences within the school setting) and the product (employment opportunities derived from in-school experiences). They suggest that curricula must simultaneously be justified by industry, yet remain pedagogically focused. Under this model, technical education curricula must directly help students develop a broad range of knowledge, skills, attitudes, and values that clearly contribute to the graduate's employability. In order to accomplish these tasks successfully, technical education curricula must be responsive to the technological changes in society. Finch and Crunkilton list the following factors that must be considered to keep curricula highly relevant to assist students in entering and succeeding in the world of work.

  1. Data-Based: decisions regarding content need to be grounded in school and community data.

  2. Dynamic: curriculum is responsive to changes in the workplace and modifications should be tangible improvements.

  3. Explicit Outcomes: curricular goals should be measurable; the more explicit the outcomes, the easier it is to determine if students achieve them.

  4. Fully Articulated: the scope and sequence of curricular concepts should be logical and efficient. Linkages between grades and across courses should be thoughtful.

  5. Realistic: student experiences should be practical and fully contextualized.

  6. Student-Oriented: instructional approach should assist students to prepare for the world of work.

  7. Evaluation Conscious: continuous effort should be made to evaluate the effectiveness of the curriculum.

  8. Future-Oriented: extent to which curriculum will be effective in the future should be determined.

  9. World Class Focused: formal effort to benchmark world-class standards and focus on total quality.

Wiggins (1993, 1998) and Wiggins and McTighe (1998) offer an underlying theory which emphasizes careful selection of what should be studied, close ties to real-world use of the knowledge, and authentic assessment of understanding. To show real competency, students should be able to demonstrate each of the following six facets of understanding: explanation, interpretation, application, perspective, empathy, and self-knowledge. Explanation is understanding revealed through performances and products that clearly, thoroughly, and instructively explain how things work, what they imply, where they connect, and why they happen. Interpretation is meaning-making (e.g., rendering a concept personalized, accessible, and/or translated) rather than explanation. Application is the ability to use knowledge effectively in new situations and diverse contexts. Perspective implies that the student can consider concepts from different vantage points. Empathy is similar to perspective but implies the ability to understand another person's feelings and worldview without necessarily agreeing with them. Finally, self-knowledge implies that students recognize their own patterns of thought and how these might affect understanding. More recently, Vars and Beane (2000) suggest that technicians should solve authentic problems with multidisciplinary knowledge.

We have combined these different research strands into a theoretical framework which underlies the education of effective technicians. As can be seen in Figure 1, the research strands are integrated to support three themes: responsive educational experiences, deep understanding, and relationship to work. Responsive educational experiences describe curricula that place the student in the center of the pedagogical universe. These are dynamic curricula in which the content and instructional strategies are responsive to the needs of the learners. Deep understanding refers to curricula which promote thorough and in-depth comprehension of content and meaning. These are curricula that would score high on Bloom's taxonomy (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956). Relationship to work refers to curricula which are oriented and justified by workplace demands. Each of these themes is informed by SCANS (1991), Finch and Crunkilton (1999), and Wiggins (1993, 1998). These three themes, supported by the core instruction, materials, and assessments, are shown as combining to produce distinctive classroom environments which in turn lead to the production of effective technicians. This theoretical model guided the development of the TECA.

Figure 1 - Theoretical Strands

TECA Construction

The TECA was constructed with a deductive-inductive approach by moving back-and-forth from the theoretical framework uncovered in the literature review to actual curricular materials. This iterative process of theoretical critique and application of the rubrics to the curricular materials allowed us to constantly check their validity from the standpoint of the accepted knowledge in the literature review (deductive) while remaining compatible with the actual material (inductive). This process of using specific exemplars of work is discussed by Wiggins and McTighe (1998). They contend that effective rubrics should be based on specific exemplars using the widest range of quality possible so that all potential performances fit within the rubric. The authors of this paper used this back-and-forth process three times to construct a draft of the TECA. The draft was given to a technical and science education assessment expert for review and feedback. Improvements were made in wording, in rating scales, and in the number of items. It was decided that a series of "yes"/"no" questions should precede the rubric questions to ensure that raters would attend to specific elements of the curricular materials and better understand the intent of the rubric questions. Another round of this iterative process of critique and application resulted in a refined draft of the rubrics which could be used to assess curricula designed for technician education in a wide range of vocational fields.

This refined draft was sent for review to the ATE Evaluation Project's Advisory Committee. This committee is a nine-member team, composed of technical education and evaluation experts. The committee was given the opportunity to actually use the TECA instrument to rate a piece of curriculum similar to the ATE materials included in the sample. Although the Advisory Committee viewed TECA as comprehensive and as asking the right questions about quality, they also made suggestions for improvements. Suggestions included better alignment with Science, Technology, Engineering, and Mathematics (STEM) standards; improved structure and more careful wording of items; more consideration of workplace diversity; and more emphasis on how well curricular materials included or considered student assessments. Each suggestion and comment was considered in light of the theoretical framework from which the rubrics emerged and the practical issue of actually using the rubrics to assess a large number of materials. These considerations resulted in yet another revision. The revised draft was then used by three science and technology education experts to independently rate three different materials. These curriculum materials were specifically chosen, because they reflected a wide range of quality with respect to pedagogical soundness and technical accuracy. The science and technology education experts then met to discuss their ratings, interpretations, ease of use, and the clarity of the rubrics. This resulted in further minor modifications in wording, definitions, instructions, and clarity. The process of development and refinement stretched over nine months and resulted in a significant evolution of the original rubrics. The TECA is available at http://www.wmich.edu/evalctr/ate/evalproducts.htm .

TECA is composed of three sets of rubric questions. As depicted in Figure 2, the first set of rubrics has two parts, A and B, the second set is part C, and the third set is part D. The first set of rubrics (A and B) is designed to separately assess the technical value and pedagogical soundness of the materials being rated. Three types of experts are identified to use the TECA: industry experts who respond to part A, curriculum design experts who respond to part B, and expert teachers who respond to part A or B depending on their personal expertise. In addition to rating the rubric items within parts A and B, the experts are asked to respond to simple "yes-no" questions about what is included in the materials as well as to describe the evidence that supports the ratings of each item. Part A is composed of five items that are answered by industry experts specific to the curricular material being reviewed. These items consider issues of alignment of materials with the workplace, application of knowledge, use of technology, rigorous content, and quality performance. Part B is composed of six items that are answered by experts in curriculum, instruction, and assessment. These items consider issues of instructional strategies, problem solving, general education, assessment, personal qualities, and diversity.

The second set of rubrics, C, is called holistic ratings. This set of four items is designed to assess the materials in a more holistic manner by simultaneously considering the technical and pedagogical aspects of the materials. These questions are broad and are meant to capture the general quality of the materials. These items are answered by all reviewers, regardless of the area of their expertise. The items in this section are explicitly linked to the more specific ratings in parts A and B in order to help the reviewer understand the underpinnings of the question. In order to explicitly connect the theoretical framework to the TECA, the three themes (i.e., relationship to work, responsive educational experiences, deep understanding) from Figure 1 are mapped to the four items which compose the holistic ratings depicted in Figure 2.

The third set of rubrics, D, is one question which serves as an overall rating (Figure 2). This is designed to be a summary assessment of the effectiveness of the materials in helping students learn the knowledge and practices needed to be successful in a technical workplace. This rating is not intended to be an average of all the previous ratings, but an overall judgment of quality and likely impact of the materials. This item is answered by all reviewers, and they are asked to describe the evidence that supports their ratings.

After each individual reviewer completes all the sets of rubrics (Part A or B, Part C, and Part D), the team of reviewers meets to discuss the individual ratings and develop a group consensus. The group of three reviewers then provides a consensus rating for the four items in Part C and the one item in Part D. Therefore, there are four ratings for each of the items in Parts C and D: one from each of the three expert raters and one from the group of raters as a whole.
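To make this scoring structure concrete, the ratings collected for a single material could be organized as in the sketch below. This is purely illustrative Python: only the part structure (A/B, C, D) and the 0-4 scale come from the paper; all field names and scores are invented.

```python
# Hypothetical record of TECA ratings for one curricular material.
# Only the part structure (A/B, C, D) and the 0-4 scale come from the
# paper; the names and scores here are invented for illustration.

material_ratings = {
    # Parts A and B: rated only by the matching type of expert.
    "industry_expert":   {"part": "A", "items": [3, 4, 3, 2, 3]},     # 5 items
    "curriculum_expert": {"part": "B", "items": [2, 3, 3, 4, 3, 2]},  # 6 items
    "expert_teacher":    {"part": "A", "items": [3, 3, 4, 2, 3]},
    # Parts C and D: rated by every reviewer and, after discussion,
    # by the team as a whole -- four ratings per item.
    "part_C": {
        "industry_expert":   [3, 2, 3, 3],
        "curriculum_expert": [3, 3, 3, 2],
        "expert_teacher":    [2, 3, 3, 3],
        "team_consensus":    [3, 3, 3, 3],
    },
    "part_D": {
        "industry_expert":   3,
        "curriculum_expert": 3,
        "expert_teacher":    2,
        "team_consensus":    3,
    },
}

# The four ratings for the first holistic (Part C) item:
first_item = [scores[0] for scores in material_ratings["part_C"].values()]
print(first_item)  # -> [3, 3, 2, 3]
```

A structure like this makes it easy to verify that every Part C and Part D item carries exactly four ratings: three individual and one consensus.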


Figure 2 - Conceptual Diagram of TECA

The process of selecting expert reviewers stretched over several months. A database of 60 potential expert reviewers was constructed from recommendations of ATE Principal Investigators (PIs), a request to provide nominations for expert reviewers at the 2002 annual ATE PI meeting, a textbook author and literature search, and an industry search for technical experts. These 60 potential reviewers were contacted and asked about their willingness to serve and to provide a short curriculum vitae. Based on the reviewers' expertise, they were classified as industry experts, curriculum experts, or instructional experts. Expert reviewers were then matched to our sample of curricular materials. Based on this analysis, 18 reviewers were invited to attend a meeting to be trained on using the rubrics and to rate the materials.

Expert reviewers traveled to a large upper Midwest university to be trained on the sets of rubrics and evaluate the materials. As discussed above, teams were constructed so that each material was rated by a technical expert, a curriculum expert, and an expert teacher. Reviewers received four hours of training on the rubrics. During the training, they had the opportunity to use the rubrics to evaluate three different pieces of curricular materials. For each training material, reviewers rated the material independently and then worked in small groups. Finally, a large group discussion was held to share and discuss ratings. This provided an opportunity for reviewers to ask questions, make suggestions, and eventually reach consensus regarding interpretation and use of the rubrics. The training appeared to be effective: by the end of the training, the small groups were generally in agreement about the quality of the materials, with all ratings within 1 point of one another.

Reliability and Validity

Each material in our sample was rated four times, once by each type of expert (i.e., industry, curriculum, and instructional) and once by the team of experts assigned to each set of materials. The team ratings were done after each team member completed his or her individual rating and after the team had the opportunity to meet and discuss the material. The ratings were successfully completed and revealed a wide range of quality among the ATE developed materials. The specific ratings for the materials ranged from 0 to 4 and the overall ratings ranged from 1 to 4. This spread in scores allowed the materials to be categorized and compared on different aspects such as format, type of technology, and setting.

Inter-rater and intra-rater reliabilities were calculated for each rubric item. The inter-rater reliabilities for the holistic and overall rubrics suggest that all reviewers, despite their varying expertise, interpreted the rubrics in the same way. On average, reviewers were in perfect agreement over 50% of the time and within 1 point of agreement 90% of the time. Similarly, the intra-rater reliabilities suggest a high degree of internal consistency across the three sets of rubric questions. Qualitative evidence, such as feedback from the expert reviewers, also suggests these rubrics are trustworthy and valuable to use when evaluating technical education curricula. In other words, the rubrics appear to be internally consistent from both industrial and pedagogical perspectives.
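Agreement percentages of this kind can be computed by simple pairwise tallying. The sketch below illustrates the calculation with hypothetical ratings on the 0-4 scale (not the study's data):

```python
from itertools import combinations

def agreement_rates(ratings_by_rater):
    """Percent of rater pairs in exact agreement and within 1 point,
    tallied over all items. Maps rater name -> list of item ratings."""
    exact = close = total = 0
    raters = list(ratings_by_rater)
    n_items = len(ratings_by_rater[raters[0]])
    for i in range(n_items):
        for a, b in combinations(raters, 2):  # every pair of raters
            diff = abs(ratings_by_rater[a][i] - ratings_by_rater[b][i])
            exact += diff == 0
            close += diff <= 1
            total += 1
    return 100 * exact / total, 100 * close / total

# Hypothetical ratings for the four holistic (Part C) items:
ratings = {
    "industry":   [3, 2, 3, 4],
    "curriculum": [3, 3, 3, 3],
    "teacher":    [2, 3, 3, 4],
}
pct_exact, pct_within1 = agreement_rates(ratings)
print(f"exact: {pct_exact:.0f}%, within 1 point: {pct_within1:.0f}%")
# -> exact: 50%, within 1 point: 100%
```

With these invented scores the raters agree exactly on half of the pairwise comparisons and are within one point on all of them, mirroring the pattern reported for the TECA reviewers.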

Within-group reliabilities were calculated for the four holistic rubric items and the overall rubric item. These scores were calculated by tallying each instance in which individual reviewers were in perfect agreement, within one point of agreement, within two points of agreement, and so on, with the team rating for a particular material. These within-group reliabilities suggest a high degree of internal consistency between the three individual ratings and the group rating, with over 90% within one point of agreement. Correlations between the within-group ratings of the holistic and overall items averaged 0.77.
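The within-group tally described above compares each individual reviewer's rating with the team consensus rating for the same item. A minimal illustration, again with hypothetical scores rather than the study's data:

```python
def within_group_agreement(individual, team):
    """Tally how often individual ratings fall 0, 1, 2, ... points
    away from the team consensus rating for the same item."""
    tally = {}
    for rater_scores in individual.values():
        for score, consensus in zip(rater_scores, team):
            d = abs(score - consensus)
            tally[d] = tally.get(d, 0) + 1
    return tally

# Hypothetical ratings for the four holistic items plus a team consensus:
individual = {
    "industry":   [3, 2, 3, 4],
    "curriculum": [3, 3, 3, 3],
    "teacher":    [2, 3, 3, 4],
}
team = [3, 3, 3, 4]

tally = within_group_agreement(individual, team)
n = sum(tally.values())
print(f"perfect: {tally.get(0, 0) / n:.0%}, "
      f"within 1: {(tally.get(0, 0) + tally.get(1, 0)) / n:.0%}")
# -> perfect: 75%, within 1: 100%
```

Each of the twelve individual ratings here lands within one point of the consensus, the kind of distribution the study reports for its reviewer teams.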

TECA provides a valid and reliable way to determine the quality of technical education curricula. The review process demonstrates how agreement among raters can be achieved, which ultimately increases the validity of the curriculum evaluation. The careful training of raters adheres to the substantial literature demonstrating that familiarizing judges with measures, ensuring their understanding of the order of operations, and providing guidance on the interpretation of normative data can reduce rater effects (Rudner, 1992). By applying the rubrics to actual curricular materials and examining the scoring criteria, technical education and curriculum and instruction experts were able to improve the structural validity of TECA (Cohen, Manion, & Morrison, 2000; Messick, 1995). Developing rubrics that evaluate the extent to which curricular materials meet industry and occupational needs provides evidence of criterion-related validity, while attention to both inter-rater and intra-rater consistency attends to the two forms of reliability typically considered in rubric development (Moskal & Leydens, 2000). Using technical education experts to rate the content of curricular materials addressed content relevance and criterion-related validity (Cohen et al., 2000; Sax, 1997). These experts are able to determine how accurately and to what extent the TECA items measure the domain in question. Finally, the method of training utilized, the short duration between ratings, the limited contact between raters during individual rating sessions, and the reasonable expectations for the number of ratings completed provide significant evidence in favor of strong internal validity (Harwell, 1999).

Discussion

There is a critical need for professional technicians who possess state-of-the-art technical skills and workplace competencies (Secretary's Commission on Achieving Necessary Skills, 1991; National Science Foundation, 1993; International Technology Education Association, 1996; Clagett, 1997). To meet this growing need, educational programs must shift to prepare knowledgeable workers who are both flexible and high performing (Harkins, 2002). Employer needs have changed, and while foundational technological skills are still considered important, employability and basic skills have surpassed those that are machine or company specific (Dyrenfurth, 2000). Curriculum integration is viewed as one method of organizing the life skills necessary for all citizens of a democracy and centers on solving real-world problems that require content and skill application from numerous disciplines (Vars & Beane, 2000). Curricula will have to remain aligned with the changing skill sets required of workers to produce the outcomes vital to employers. Careful assessment of curricula is crucial to ensuring this alignment.

TECA is an effective evaluative instrument to judge the efficacy of technical education curricula. It provides insight by teasing apart the technical value and pedagogical soundness inherent in a curriculum. These sets of rubrics are able to successfully evaluate curricular materials based on the characteristics that Finch and Crunkilton (1999) suggest distinguish technical education curriculum. TECA evaluates the extent to which a curriculum is oriented towards, and justified by, industry and occupational needs, while at the same time evaluating how well the curriculum focuses on the pedagogical needs of the student. The varying sets of rubrics help ensure validity and attention to the different aspects of technical education. They are tied to the research literature (Finch & Crunkilton, 1999; Pucel, 1995, 2000; Wiggins, 1993, 1998; Wiggins & McTighe, 1998) and national standards for technological education (NBPTS Standards Committee, 2001; SCANS, 1991). TECA helps illuminate the features that Clark and Wenig (1999) identified as quality characteristics of a technical education program. As such, TECA not only provides a basis for the evaluation of existing materials but also a guideline for the development of new curricula. These rubrics could be used by a wide range of industry and technical education professionals.

This psychometrically sound instrument should allow industry and education professionals to make more informed decisions when designing, implementing, and evaluating technical education curricula. By providing an integrated examination of both technical value and pedagogical soundness, TECA attends to curricular components necessary for ensuring that an increasing number of technicians will enter industry positions possessing both technical skills and workplace competencies. TECA could be instrumental in the vital endeavor to continue to foster high quality and pertinent education in economy-driving, high-technology fields.


References

Benn, P. C., & Stewart, D. L. (1998). Perceptions of technical committee members regarding the adoption of skill standards in vocational education programs. Journal of Vocational and Technical Education, 14(2). Retrieved July 2, 2003, from http://scholar.lib.vt.edu/ejournals/JVTE/

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, Toronto: Longmans, Green.

Clagett, C. A. (1997). Workforce skills needed by today's employers (Market AnalysisMA98-5). Largo, MD: Prince George's Community College, Office of Institutional Research and Analysis. (ERIC Document Reproduction Service No. ED 413 949)

Clark, A. C., & Wenig, R. E. (1999). Identification of quality characteristics for technology education programs: North Carolina case study. Journal of Technology Education, 11(1), 18-26.

Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.). New York, NY: RoutledgeFalmer.

Coley, R. (2000). The American community college turns 100: A look at its students, programs and prospects . Washington, DC: Educational Testing Service.

Dyrenfurth, M. J. (2000, September). Trends in industrial skill competency demands as evidenced by business and industry. Paper presented at the International Conference of Scholars on Technology Education, Braunschweig, Germany. (ERIC Document Reproduction Service No. ED463403)

Finch, C. R., & Crunkilton, J. R. (1999). Curriculum development in vocational and technical education: Planning, content, and implementation (5th ed.). Needham Heights, MA: Allyn and Bacon.

Grubb, W.N. (1999). Learning and earning in the middle: The economic benefits of sub-baccalaureate education . New York, NY: Community College Research Center, Teachers College, Columbia University.

Harkins, A. M. (2002). The future of career and technical education in a continuous innovation society. Journal of Vocational Education Research, 27, 35-64.

Harwell, M. (1999). Evaluating the validity of educational rating data. Educational and Psychological Measurement, 59, 25-37.

International Technology Education Association. (1996). Technology for all Americans: A rationale and structure for the study of technology . Reston, VA: Author.

Messick, S. (1995). Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10). Retrieved November 20, 2003, from http://PAREonline.net/getvn.asp?v=7&n=10

National Science Foundation. (1993). Gaining the competitive edge: Critical issues in science and engineering technician education (NSF 94-32). Washington, DC: Author.

NBPTS Standards Committee. (2001). Career and technical education standards. National Board for Professional Teaching Standards.

Pucel, D. J. (1995). Developing technological literacy: A goal for technology education. The Technology Teacher, 55 (3), 35-43.

Pucel, D. J. (2000). Developing and Evaluating Performance-Based Instruction . New Brighton, MN: Performance Training Systems, Inc.

Rudner, L. M. (1992). Reducing errors due to the use of judges. Practical Assessment, Research & Evaluation, 3(3). Retrieved November 20, 2003, from http://PAREonline.net/getvn.asp?v=3&n=3

Sax, G. (1997). Principles of educational and psychological measurement and evaluation (4th ed.). Albany, NY: Wadsworth.

Secretary's Commission on Achieving Necessary Skills. (1991). What work requires of schools . Washington, DC: Secretary's Commission on Achieving Necessary Skills, U.S. Department of Labor.

Vars, G. F., & Beane, J. A. (2000). Integrative curriculum in a standards-based world (Report No. EDO-PS-00-6). Champaign, IL: ERIC Clearinghouse on Elementary and Early Childhood Education. (ERIC Document Reproduction Service No. ED441618)

West, E. (2001). The national skill standards board: Working with community colleges. Workplace, 12(1), 11, 23-24.

Wiggins, G. (1993). Assessing student performance: Exploring the purpose and limits of testing . San Francisco: Jossey-Bass.

Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance . San Francisco: Jossey-Bass.

Wiggins, G., & McTighe, J. (1998). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.


Acknowledgement

This research project is based upon work supported by the National Science Foundation grant REC 0135385.


Authors

Jonathan Keiser is a Principal Science Instructor at Dunwoody College of Technology and a Ph.D. candidate in the Department of Curriculum and Instruction at the University of Minnesota. His research interests include the evaluation of science- and technology-oriented curricula and students' conceptions of the nature of science.

Frances Lawrenz is the Wallace Professor of Teaching and Learning in the Department of Educational Psychology at the University of Minnesota. Her research interests include program evaluation, research methods, and assessment in science, mathematics, technology, and engineering education.

Jim Appleton is a Research Assistant at the Center for Applied Research in Educational Improvement (CAREI) at the University of Minnesota. His research interests include the role of cognitive development in science literacy, student engagement with school, and research methodology.