Journal of Technology Education

Volume 12, Number 2
Spring 2001


https://doi.org/10.21061/jte.v12i2.a.1

An Assessment Model for a Design Approach to Technological Problem Solving

Rodney L. Custer, Brigitte G. Valesey,
and Barry N. Burke

Education reform has focused increasingly on critical thinking processes, including problem solving and student assessment. Correspondingly, curriculum and professional development efforts are directed toward developing problem solving abilities through authentic learning and problem-based teaching methodologies.

The development of problem solving abilities is pivotal to technological literacy. Problem solving is a critical thinking skill necessary for addressing issues related to technology and for developing effective solutions to practical problems. According to the Rationale and Structure for the Study of Technology ( ITEA, 1996 ), technologically literate persons "are capable problem solvers who consider technological issues from different points of view and in relationship to a variety of contexts" ( p. 11 ). Waetjen ( 1989 ) cited problem solving as an important skill necessary for optimizing technological innovation and for developing technological literacy. Whether for economic competitiveness ( National Commission on Excellence in Education, 1983 ), for technical means of survival ( Savage & Sterry, 1990 ), or for developing common sense knowledge of technology and how it evolves to meet human needs ( DeLuca, 1992 ), problem solving is deemed an essential skill for a productive life.

With problem solving a major theme in technology education, there is a need for detailed assessments to determine how students solve problems and at what levels of expertise. This study sought to develop a model for assessing problem solving using a design approach to the study of technology.

Background

Problem solving is a complex set of thinking skills and human activities. Waetjen ( 1989 ), for example, proposed a problem solving model based on the work of Polya ( 1957 , 1971 ) and Philpott & Sellwood ( 1987 ), involving defining the problem, reforming the problem, isolating the solution, implementing the plan, restructuring the plan, and synthesizing the solution. Pucel ( 1992 ) espoused problem solving as a technological method, where technology evolves to serve useful purposes of humans, based on processes of innovation.

Savage & Sterry ( 1990 ) proposed a problem-solving model with the premise that humans depend on technical means for survival. They indicated that the problem solving process parallels the scientific method. In Standards for Technological Literacy: Content for the Study of Technology ( ITEA, 2000 ), problem solving is defined as "the process of understanding a problem, devising a plan, carrying out the plan, and evaluating the plan in order to solve a problem to meet a human need or want" ( p. 255 ).

Problem solving occurs in various ways, depending on the task and the context. DeLuca ( 1992 ) identified several problem-solving processes applied to technology. These processes are troubleshooting/debugging, scientific process, design process, research and development, and project management. Custer ( 1995 ) classified problem-solving activities by complexity and goal clarity where design is a major subset of technological problem solving. Design, involving ideation, identifying possible solutions, prototyping, and finalizing the design, has become a predominant problem solving process in the technology education laboratory-classroom. The assessment model developed for this study focused on problem solving as a design-based process, guided by criteria and constraints. The model was not intended to be used with a singular approach nor incorporate a specific number of steps, but to be applied to many different methods, models, and practices.

Problem solving has been investigated in terms of thinking skills and critical activities. Halfin ( 1973 ) identified key mental processes used by technological professionals. They include defining the problem or opportunity, interpreting data, constructing models and prototypes, designing, testing, modeling, creating, and managing. Hill ( 1997 ) used definitions and examples developed from Halfin's mental processes to develop and field-test a tool for assessing students during technology education activities. The assessment tool was used to capture qualitative data concerning which mental processes were evidenced, and with what duration and frequency, during a modular instructional activity.

MacPherson ( 1998 ) explored factors affecting another form of technological problem solving, near transfer troubleshooting. He developed a rubric to assess critical incidents in various stages of problem solving activities associated with maintenance activities performed by technicians. This rubric contained critical incidents on a continuum from novice to expert levels. Findings indicated that years of experience, cognitive technical knowledge, and critical thinking were effective predictors of near transfer problem solving skills while cognitive style and problem solving style were least likely to predict problem solving abilities. Results indicated that novices and experts exhibited different patterns of behavior. The assessment rubric used in this study was based on the MacPherson study model.

Problem and Purpose

Standards for Technological Literacy: Content for the Study of Technology ( ITEA, 2000 ) regards design as the primary problem-solving approach in technology education ( p. 5 ). In design activities, students frequently collaborate to create design solutions through problem solving behaviors that require detailed and consistent evaluation. A need exists for assessment models to examine problem solving during and as a result of student activities. Evaluating the technological literacy of students depends upon assessment tools that measure levels of student performance and achievement individually and within groups. The goal of this study was to develop an assessment model that could be used to evaluate student problem solving performance in design activities.

Research Objectives

An assessment model was developed and field-tested to measure student problem solving performance in technological design activities. A rubric incorporating critical incidents in problem solving and expertise levels was central to the model. The model was intended to provide a framework for assessing technological problem solving in group and individual activities.

The research questions for this study were:

  1. What are the key components of a model to assess individuals and groups in problem solving activities? This study focused on creating and field-testing a model to provide guidance for developing comprehensive problem solving assessments.
  2. What knowledge and skills do students gain from design-based problem solving activities? Since problem solving is a complex set of thinking skills, the assessment must be able to capture observable student behaviors that indicate critical incidents in design activities.
  3. What factors (i.e., GPA, grade level, technology courses, mathematics and science grades, gender, personality preferences, and problem solving styles) affect problem-solving abilities of high school students? Since many technology education classes are made up of students of different backgrounds, preferences, and ability levels, this study sought to investigate the possible effects of various factors.

The methodologies and research instruments used in this study were designed to address these questions and to yield a model for assessing student problem solving in design activities. The next section details the methodologies used to develop and field test the model.

Methodology

Sample and Procedures

A combined quasi-experimental/descriptive design was used to explore factors affecting problem solving in a design activity. Groups of students were issued a design problem (i.e., to design a "school locker of the future") and a set of design constraints. Constraints consisted of a time frame, limited funding, and use of physical, informational, and bio-chemical systems. The activity was conducted over eight hours, distributed equally across two days.

The study sample consisted of two groups of high school students enrolled in technology education classes in two states. One group of students ( n= 12) was from a large, suburban, east-coast school district. A second sample ( n= 15) was from a small, rural, mid-western school district. This purposive sampling procedure was used to compare students from programs with contrasting philosophies and delivery systems. The east-coast students were accustomed to a design brief approach to technology education, whereas the mid-western students' program used a more traditional lab and project-based approach. The two programs were selected to explore the effects of contrasting methods of delivering technology education (i.e., a process-based, design brief approach vs. a more content- and project-based approach). Students within each location were randomly assigned to groups of three individuals, which remained intact throughout the activity.

After an orientation to the activity (consisting of a brief discussion of design and problem solving, a verbal description of the design brief, and a period of clarification discussion), students engaged in a process of design clarification, design development, physical modeling, and evaluation (see Figure 1 ). Each group was issued an actual school locker unit and materials (i.e., markers, foam board, tape, scissors, cardboard, and hot glue guns) to use to construct a full-size mock-up of their design. All groups had access to a computer with an Internet connection and a telephone to use for research purposes or to contact suppliers.

At the conclusion of the activity, each group made a formal presentation, in which they described their mock-up and how effectively their design met the assigned constraints. Students were asked to explain their interpretation and refinement of the design constraints and describe the process that they used to research possible solutions to the problem.

Instrumentation and Data Collection

The researchers for the study designed two different rubrics: the Student Individualized Performance (SIP) rubric and the Group Process (GP) rubric. Both instruments were developed and validated by the research team in consultation with established experts in technological design and problem solving.

The Student Individualized Performance (SIP) rubric was developed to assess individual student performance in technological problem-solving situations. Based on a synthesis of the design literature, the researchers identified four major dimensions that were consistently represented across various design and problem-solving models. These dimensions were Problem & Design Clarification, Develop a Design, Model/Prototype, and Evaluate the Design Solution. Each dimension was subdivided into three strands (see Figure 1 ), replicating the process used to identify the major dimensions. These dimension categories were reviewed for conceptual accuracy by an expert panel with extensive knowledge of problem solving and design. This process yielded substantial agreement with some minor revisions in terminology.

Each strand was rated on a five-point scale, from expert (5) to novice (1). To facilitate and refine the rating process, critical incident identifiers were developed for each performance level ( Dyrenfurth et al., 1993 ; MacPherson, 1998 ). Figure 2 illustrates the critical incidents for Dimension #1. To optimize content validity, an expert panel familiar with technological design and authentic assessment reviewed drafts of the SIP. Based on their input, significant modifications were made to the conceptual framework for the rubric.1

A pilot test was conducted to refine the instrument and to conduct rater training. In the pilot test, one group of three students completed the design activity in a manner identical to the larger study. Following the pilot study, raters and students debriefed the experience. Based on the results, refinements were made to the directions given to students and to the design constraints. Critical incident statements were revised based on feedback from the two lead raters. During the actual study, the Cronbach Coefficient Alpha reliability was .78.
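For readers unfamiliar with the internal-consistency figure reported above, the following minimal sketch shows one common way to compute Cronbach's coefficient alpha from a students-by-strands matrix of rubric ratings. The variable names and the small illustrative matrix are hypothetical stand-ins; they are not the study's data.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's coefficient alpha for a 2-D array of shape (students, items).

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                            # number of items (rubric strands)
    item_variances = ratings.var(axis=0, ddof=1)    # sample variance of each strand
    total_variance = ratings.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 students rated on 4 strands (5 = expert ... 1 = novice)
example = [[3, 4, 3, 2],
           [2, 2, 3, 2],
           [4, 5, 4, 4],
           [3, 3, 2, 3],
           [5, 4, 4, 5]]
print(round(cronbach_alpha(example), 2))
```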

Dimension #1: Problem & Design Clarification
  • Examine context and define problem
  • Develop, clarify, and negotiate constraints and criteria
  • Conduct research/gather pertinent information
Dimension #2: Develop a Design
  • Generate and visualize possible solutions
  • Select a design solution
  • Plan and communicate design
Dimension #3: Model/Prototype
  • Select resources
  • Develop procedure
  • Produce model/prototype
Dimension #4: Evaluate the Design Solution
  • Test and critique solution against constraints
  • Refine model
  • Documentation/technical reporting
Figure 1. Dimensions and strands of the Student Individualized Performance rubric.

During the two-day field test, two raters rated each student independently. These independent ratings were conducted in order to assess interrater reliability. Prior to actual data collection, raters were trained by the research team and by the lead rater who had conducted the ratings throughout the entire pilot-testing phase. The training consisted of an orientation to the design activity, a comprehensive analysis of the SIP rubric, and a briefing by the lead rater. The briefing included information about problems encountered and lessons learned during the pilot test.

One primary rater and one secondary rater were assigned to each three student design group. Each rater was responsible for rating one group of three students as a primary rater and a second group of three students as a secondary rater, thus rating a total of six students using the SIP rubric rating sheets. Immediately following the field test, each two-member rater team met to discuss their observations of individual students and to reconcile differences in ratings by consensus on a strand-by-strand basis. The final ratings for each student included two graded SIP rubrics (one per rater) and the combined SIP rubric (based on consensus between the two raters). In addition to analyzing the perceptual differences between raters, this process also enabled the researchers to examine the usability and effectiveness of the SIP instrument.

Problem and Design Clarification

Examine context & define problem
  • Expert: Poses pertinent questions for clarification; identifies and prioritizes sub-problems (within the larger problem); explores context.
  • Proficient: Poses questions; identifies sub-problems but does not prioritize. Ignores context.
  • Competent: Identifies key content; defines problem adequately. Asks some pertinent questions. Ignores context.
  • Beginner: Expresses limited knowledge of context or problem area; problem is defined but needs clarification. Asks questions but not pertinent and too few. Ignores context. Exhibits some indifference or frustration.
  • Novice: Tends to hone in on wrong problem, isolated subset, or easiest part to solve. Begins to solve without clarification questions. Doesn't see context. Exhibits considerable indifference or frustration.

Develop, clarify, & negotiate constraints and criteria
  • Expert: Explains key constraints in detail; tried to negotiate or circumvent constraints; gains clarification of criteria prior to solving problem or posing solutions.
  • Proficient: Clarifies constraints in detail; expresses their relationship to the problem solution. Engages in some limited negotiation of the constraints.
  • Competent: Clarifies constraints and accepts them as presented and understood.
  • Beginner: Recognizes constraints but seeks minimal clarification. Accepts as is. Clarifies constraints late in design process as failures occur.
  • Novice: Did not identify constraints or criteria; did not grasp the significance of constraints. Minimal grasp of (or concern about) constraints. Sees constraints as insignificant.

Conduct research/gather pertinent information
  • Expert: Consults several key sources; evaluates information; relates information back to problem and constraints. Exhibits refined search strategies. Researches sub-problems.
  • Proficient: Consults several key sources; uses observational techniques; cites references. Ignores sub-problems.
  • Competent: Uses search guides and locates at least 2 sources. Consults sources with some direction and/or organization.
  • Beginner: Conducts very limited research. Restricted to easy to find and readily available resources.
  • Novice: Does not conduct research nor consult sources. Starts solving problem without information.

Figure 2. Critical incidents for Dimension #1 of the SIP.

The interrater reliability was examined by correlating the total score ratings for both raters on each of the four dimensions (recorded prior to scoring difference negotiations between raters). Interrater reliability scores were low, ranging from .070 to .501. Based on an analysis of the rating process, two factors were believed to have contributed to these low reliability scores. First, while raters were briefed on the procedures and on the use of the rubric (including discussions of pilot testing feedback), some raters did not use the rubric in advance of the study. In retrospect, additional training of raters, including post-rating discussion of rating differences, should have been conducted in order to improve interrater reliability.

A second factor affecting the use of the SIP rubric as well as the overall assessment model for this study dealt with extracting individual performance and achievement from group process. Individual problem solving performance is a function of a complex set of factors, including content knowledge, problem solving style, and critical thinking ability. When these factors are embedded in group situations, the complexity is further elevated and assessment challenges are exacerbated. More research is needed to better understand how and in what ways individual performance is affected by group process.

For the purposes of this study, the negative effect of relatively low interrater reliability on validity was corrected by having the raters reconcile differences between ratings. These reconciled scores were used to statistically analyze the data. While this process enhanced the validity of ratings for this study, subsequent use of the model and SIP rubric should address the challenges associated with rating reliability.
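The interrater check described above amounts to correlating the two raters' pre-reconciliation dimension totals across students. A minimal sketch of that computation for one dimension appears below; the two arrays are hypothetical stand-ins for the raters' scores, not the study's data.

```python
import numpy as np

# Hypothetical pre-reconciliation dimension totals for the same six students,
# one array per rater; real data would have one such pair per dimension.
primary   = np.array([2.7, 3.0, 2.3, 3.3, 2.0, 2.7])
secondary = np.array([3.0, 2.7, 2.7, 3.0, 2.3, 3.3])

# The Pearson correlation between the two raters' totals serves as the
# interrater reliability estimate for this dimension.
r = np.corrcoef(primary, secondary)[0, 1]
print(f"interrater r = {r:.3f}")
```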

SIP scoring consisted of assigning numerical values on a five-point scale (5=expert to 1=novice) to each of the twelve strands of the SIP. A single score was then computed for each dimension by averaging the scores for each three-strand set. An overall mean score was computed for each student by averaging the four dimension scores. Throughout this process, the combined (rater reconciled) SIP rubric scores were used.
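The aggregation just described (strand ratings averaged into dimension scores, and dimension scores averaged into an overall SIP score) can be summarized in a short sketch. The twelve ratings below are a hypothetical example for one student, not data from the study.

```python
import numpy as np

# Hypothetical reconciled ratings for one student: 4 dimensions x 3 strands,
# each on the 5-point scale (5 = expert ... 1 = novice).
strand_ratings = np.array([
    [3, 2, 3],   # Dimension 1: Problem & Design Clarification
    [3, 3, 2],   # Dimension 2: Develop a Design
    [4, 3, 3],   # Dimension 3: Model/Prototype
    [2, 3, 2],   # Dimension 4: Evaluate the Design Solution
])

dimension_scores = strand_ratings.mean(axis=1)   # one score per dimension
overall_score = dimension_scores.mean()          # overall SIP score for the student

print(dimension_scores, round(overall_score, 2))
```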

Variables

The computed SIP values served as the dependent variable for the study. Based on the literature review and the perceptions of the researchers, a set of independent variables was also identified. These consisted of program type (east coast vs. mid-west), technology education experience (number of courses taken), grade level, mathematics and science achievement scores (course grades), personality type (measured by the Myers-Briggs Personality Inventory), problem solving style (measured by the Problem-Solving Inventory, PSI-TECH), and gender.

The Myers-Briggs scores were grouped into four categories, consistent with established scoring and interpretation procedures. These categories consisted of action-oriented innovators (extravert-intuitive), action-oriented realists (extravert-sensing), thoughtful innovators (introvert-intuitive), and thoughtful realists (introvert-sensing).

Problem-solving style was measured using an adapted version of the standardized Problem-Solving Inventory (PSI-TECH) ( Wu, Custer, & Dyrenfurth, 1996 ). This paper and pencil, self-reporting instrument is designed to measure factors including problem-solving self-confidence, approach-avoidance, and personal control. Statistical analysis consisted of descriptive statistics, analysis of variance and correlation.

Findings and Discussion

Due to the exploratory nature of the study, descriptive data analysis procedures were used. These procedures were also judged to be appropriate due to the relatively small sample size and the purposive sample selection. While these limitations disallow the use of statistical inference, a descriptive analysis nevertheless provides a useful preliminary basis for more extensive research.

As stated previously, design involves a complex set of cognitive processes. The four rubric dimensions embody this complexity and represent different activities. When the Dimension Total scores are compared, it is not surprising that modeling/prototyping scores were the highest (see Table 1 ) since, historically, technology education programs and curricula have concentrated on making products and implementing designs. Given the design focus in the Standards for Technological Literacy , there is a need to emphasize the preliminary and preparatory aspects of the design process (Dimensions #1 and #2) as well as the more analytical, evaluative component (Dimension #4) in technology education curriculum and instruction. One independent variable was program type; over the past decade, programs in the east coast district concentrated on design more than programs in the rural mid-west district. The results of this study are inconclusive since mean score differences between the two samples are minimal and the differences could be a function of rater differences between the two locations. Note that in order for statistically meaningful comparisons of programs to occur, the treatments would need to be more controlled and the samples would need to be much larger.

One purpose of this study was to conduct a preliminary analysis of design data according to achievement, as measured by overall GPA and mathematics and science achievement. Table 2 shows correlational values that emerged from the data analysis. Note that correlational effect sizes of .30 and .50 are considered "medium" and "large," respectively, in behavioral science research ( Cohen, 1988 ). Thus, the pattern of results suggests some interesting relationships between technological design performance, GPA, and science achievement. Note the relatively low scores for mathematics as well as the low associations across the variables for Dimensions #1 and #2. The association between science achievement and Dimension #4 hints at a possible focus on analytical skills; specifically, a predisposition for interpreting experimental results (science) rather than solving well-structured and prescribed problems (mathematics).

Table 1
Program Type by Problem Solving Dimension

Dimension East-coast Sample Mid-west Sample Dimension Total
n m SD n m SD n m SD

#1 Prob. & Design Clarification 12 2.7 .64 14 3.0 .44 26 2.8 .55
#2 Develop a Design 12 2.5 .58 14 3.0 .49 26 2.8 .57
#3 Model/Prototype 12 2.9 .79 14 3.3 .31 26 3.1 .60
#4 Evaluate the Solution 12 2.4 .36 14 2.3 .57 26 2.4 .48
Sample Total 12 2.6 .50 14 2.9 .29

Even though these are preliminary findings, the results suggest that the relationship between mathematics and science achievement, on the one hand, and performance in technological design, on the other, may be differential and complex. Also, some aspects of design may be more useful than others in implementing "inquiry-based" learning in mathematics and science. The complexities of these factors provide rich opportunities for additional research.

Student performance was also analyzed by gender (see Table 3 ). While the total scores were nearly identical, there were differences in Dimensions #3 and #4. The comparatively higher Model/Prototype score for males corresponds somewhat with gender stereotypes, where males are often considered more comfortable with constructing/making activities. The elevated solution evaluation scores for females represent an interesting contrast, with females demonstrating a comparatively stronger analytical ability related to the quality of the design and prototype. While these results are far from conclusive, they warrant further study since gender differences related to interests in and participation with technology are not well understood.

Table 2
Correlational Analysis of Design Dimensions with GPA, Mathematics Achievement, and Science Achievement

Dimension GPA Mathematics Achievement Science Achievement

#1 Prob. & Design Clarification .162 -.184 .133
#2 Develop a Design .244 -.055 .186
#3 Model/Prototype .260 .194 .335
#4 Evaluate the Solution .287 .200 .428
Total Score .342 .118 .398

Table 3
Student Design Performance by Gender

Dimension Male Female
n m SD n m SD

#1 Prob. & Design Clarification 20 2.8 .53 6 2.9 .67
#2 Develop a Design 20 2.8 .58 6 2.7 .56
#3 Model/Prototype 20 3.2 .52 6 2.9 .82
#4 Evaluate the Solution 20 2.3 .50 6 3.4 .42
Sample Total 20 2.8 .39 6 2.7 .49

Several patterns emerged when problem-solving performance was analyzed according to personality type. As shown in Table 4, the largest share of the sample (11 of 26 students) was in the action-oriented innovator category. While overall performance scores were slightly higher for this group, scores across the four personality types were essentially identical. These results make intuitive sense, since innovative, action-oriented individuals could be expected to enroll in courses dealing with technological design. Perhaps more hopeful is the indication that while creative problem solving activities may appeal to certain personalities, actual performance was very similar across all four personality types.

When the data are examined on a dimension-by-dimension basis, the most striking difference in personality types is with the thoughtful realists, who rated substantially lower on the first two dimensions. While factors other than personality type could certainly have contributed to these results, it is possible that individuals with this personality type may perform less well than others during the planning stages of design activities.

The potential implications for teaching and learning in technology education classrooms are important. These findings suggest that problem-solving performance in design activities may not be a function of personality type. What is encouraging from this study is that students of different personality types can participate and achieve in design activities on a relatively equal basis. Conversely, what is discouraging is what could happen to group and individual performance when personality types are deliberately homogeneous. Given the emphasis on teams and collaborative activity in education and industry, this represents a valuable area for additional research.

Table 4
Myers Briggs Personality Type by Problem Solving Dimension

Dimension EN ES IN IS
m SD m SD m SD m SD

#1 Prob. & Design Clarification 3.0 .55 2.9 .40 2.8 .68 2.4 .55
#2 Develop a Design 2.9 .66 2.8 .30 2.7 .77 2.5 .51
#3 Model/Prototype 3.2 .70 3.1 .60 3.0 .34 3.1 .69
#4 Evaluate the Solution 2.5 .46 2.3 .73 2.1 .35 2.4 .07
Sample Total 2.9 .46 2.8 .42 2.6 .31 2.7 .40

EN: Action-oriented innovators ( n= 11) IN: Thoughtful innovators ( n= 4)
ES: Action-oriented realists ( n= 6) IS: Thoughtful realists ( n= 5)

Another trait that was examined in this study was the relationship between problem-solving design performance and problem-solving style (as measured by an adaptation of Heppner's Problem-Solving Inventory, the PSI-TECH). The PSI is designed to measure three components of efficacy in problem solving situations: self-confidence (extent to which individuals believe they can successfully solve problems), approach-avoidance (tendency to actively pursue problem solutions in a timely manner), and personal control (extent to which individuals feel like they are in control of problem situations). The validity and reliability of the technological version of the instrument (PSI-TECH) were established in two previous studies ( Wu, Custer, & Dyrenfurth, 1996 ; MacPherson, 1998 ) and were found to be nearly identical to those of the original standardized instrument (e.g., Cronbach's Alpha ranging from 0.71 to 0.88). The primary difference between the original PSI and the PSI-TECH is that the PSI-TECH focuses specifically on technological problem solving situations.

Table 5 contains descriptive statistics for this study's sample. Note that the possible point values are different for each efficacy component, thus a major part of the differences in mean score values across the three components is a function of differences in the metric employed. Also, PSI scores are inversely related to the trait, with high scores representing a reduced presence of a given trait. For example, a high numerical self-confidence score would indicate low levels of self-confidence.

In order to meaningfully interpret PSI-TECH scores, this study's data were compared with those obtained in the two previous studies, using the identical instrument (see Table 6 ). The Wu et al. ( 1996 ) study focused on a sample of 300 students from five different mid-western universities. The sample was evenly distributed across the humanities, technology education, and engineering. The technician sample ( MacPherson, 1998 ) consisted of 15 professional maintenance technicians in light manufacturing and service industries.

Table 5
Problem-solving Style (PSI-TECH)

Efficacy Component Pts. Poss. m SD Min. Score Max. Score

Self confidence (SC) 66 24.23 6.80 13 37
Approach Avoidance (AA) 96 50.12 12.00 28 81
Personal Control (PC) 30 14.92 4.47 5 23
Total 192 89.08 20.20 53 129

n= 26

The results of this study indicate that overall problem-solving style scores for this study's high school student sample compare favorably to those of the university level technology education group, with both being considerably higher than the university level humanities majors (note that lower scores represent higher levels of the trait). Predictably, efficacy levels for the professional-level adult technicians were noticeably higher. When the results of the three studies are compared on a trait-by-trait basis, a similar contrast can be observed for self-confidence. There was somewhat less contrast with personal control, where the high school students actually felt a stronger sense of control in technological problem solving situations than did university level technology education students (and considerably more than humanities majors). Approach-avoidance scores ranged from technicians (highest) to high school students (lowest), with university technology education majors approximately halfway between.

In addition to providing normative data for this study, the prior studies also yielded useful calibration reference points, with technicians representing the "expert" end of the continuum and humanities students representing the "novice" end. While additional sampling and research are needed to calibrate the instrument more accurately, this process provides a preliminary and reasonable approach for understanding where this study's sample fits within a larger context. Using this approach to calibration, the high school sample tends to resemble the novice end of the spectrum for efficacy with technology. These findings have important implications for learning and teaching related to technological design and problem solving. Educational research has repeatedly shown that motivation, performance, and achievement are closely interrelated. The technology education profession could benefit from additional study of how efficacy factors influence (and are influenced by) student performance in design activities.

The PSI-TECH efficacy scores were then correlated with the problem solving dimension data in order to explore the relationship between efficacy and problem solving performance. Based on the data in Table 7 , students were generally most efficacious on Dimension #1. These findings are somewhat surprising given the performance results in Table 1 above, where student performance was highest on Dimension #3. It could have been expected that the higher PSI-TECH scores would be most closely associated with areas of strong performance. While correlation values are moderate, the strongest associations clustered along the problem clarification dimension. This could indicate that students tend to feel more comfortable with problem clarification as a more structured aspect of the design process than they do with more abstract and creative aspects of design.

Table 6
Mean Scores for Comparative Studies (PSI-TECH)

Efficacy Component Technology Education Students (Wu study) Humanities Students (Wu study) Technicians (MacPherson study) Sample for this study

Self confidence 24.34 27.79 16.64 24.23
Approach Avoidance 43.49 50.59 34.14 50.12
Personal Control 15.36 17.54 11.43 14.92
Total 83.19 95.92 63.71 89.08

Table 7
Correlational Analysis of PSI-TECH vs. Problem-solving Dimension

Dimension Self Confidence Approach Avoidance Personal Control Total

#1 Prob. & Design Clarification .365 .455 .394 .486
#2 Develop a Design .297 .150 .338 .262
#3 Model/Prototype .087 .086 .314 .048
#4 Evaluate the Solution .295 .142 .418 .265
Total Score .306 .122 .478 .277

To further refine the analysis of student characteristics, the data were also analyzed by grade level (see Table 8 ). Note that 12th grade student performance was highest, particularly on Dimensions #2 and #3. This makes sense given these students' greater maturity and, in some cases, additional experience with technology classes. Further research is needed to better understand the interesting and complex relationship between students' involvement in the design process, their experience and maturity, and the extent to which they feel confident and in control of the process.

Overall group performance was also assessed in order to evaluate the quality of group dynamics. As shown in Table 9 , the rubric included items specific to technological design as well as other items that dealt with more general process skills. The lowest group average score was on item #10 (generating many new ideas rather than prematurely selecting a single solution), the item that is most specific to technological design. This tendency to prematurely select design solutions also occurs with individuals. More research is needed to explore the extent to which group involvement either exacerbates or reduces this "rush to judgment" tendency in design situations.

Table 8
Student Design Performance by Grade Level

Dimension Grade 9 Grade 10 Grade 11 Grade 12
m SD m SD m SD m SD

#1 Prob. & Design Clarification 2.6 .69 2.7 .93 2.9 .49 2.9 .40
#2 Develop a Design 2.5 .73 2.6 .83 2.8 .37 3.1 .76
#3 Model/Prototype 3.1 .86 3.0 .89 3.1 .56 3.3 .42
#4 Evaluate the Solution 2.6 .25 2.4 .55 2.4 .51 2.1 .43
Sample Total 2.7 .61 2.7 .70 2.8 .30 2.9 .40

The findings of the study indicate that, while some areas of performance are strong, other areas could benefit from additional intervention and focus. While the generalizability of these results is limited, the findings suggest that the profession could benefit from more instruction and assessment of teamwork and group processes. This is especially important given the emphasis on group process in the Standards for Technological Literacy.

Table 9
Group Evaluation Rubric

m SD
1. As a whole, the group was flexible and adaptable 4.42 0.70
2. All members of the group contributed actively to the process 4.23 0.78
3. The group was able to incorporate diverse personalities and ideas 4.19 0.97
4. The group had the ability to resolve adversity (ideas that didn't work, frustration, etc.) 4.06 0.79
5. There was a good balance between group and individual work 3.92 0.97
6. All members contributed creative ideas to the process 3.79 1.03
7. The group was able to re-energize when the energy level dropped off 3.38 0.64
8. The group was able to critique its own work 3.19 0.47
9. The members achieved an appropriate balance between leadership and followership 3.01 0.65
10. The group generated many new ideas rather than prematurely selecting a single solution 2.87 0.81

5 - Absolutely true of this group
4 - Described the group for the most part
3 - Description fit the group about half of the time
2 - Only marginally describes the group
1 - Does not describe the group at all

Conclusions and Recommendations

Problem solving in technological design activities can be identified as a set of observable behaviors on a performance level continuum. These behaviors can be captured on an assessment instrument and can provide valuable clues to a student's critical and creative thinking abilities. What is more difficult to discern are the effects of factors such as GPA, math and science achievement, gender, and personality type, on student performance in design activities. While the results revealed some effects, they are far from conclusive.

The rubric instrument designed for this study identified key indicators of problem solving. This study revealed the complexity of observing and rating several students at the same time and the challenges associated with untangling individual from group performance. While the rubric was useful as an assessment tool, additional refinement will be necessary for laboratory-classroom application, particularly in probing the actual thought processes of students during the design activity. The experience in this study, however, suggests that the SIP is useful as a research tool.

This study was designed to provide a model for assessing students as they engage in problem solving in design activities. The research methodology presented many challenges from identifying key student behaviors to examining individual as well as group effects. Translating the model into practice poses additional challenges for researchers and practitioners. The researchers offer the following recommendations for further research:

  • Further validate and refine critical incidents.
  • Control for selected variables in future studies to establish possible effects and interactions.
  • Explore ways to capture understanding of technological content as part of the problem solving process.
  • Examine the role of group process in assessing individual performance.
  • Develop assessment instruments from the model that can be readily used in the laboratory-classroom.
  • Develop mechanisms for assessing selected students over an extended time period to determine to what extent their problem solving performance changes as a result of doing design activities.
  • Examine how teachers currently assess students and what critical incidents they identify in their assessments.

This study presented an avenue for research that can provide valuable information concerning student problem solving performance in design activities. Appropriate assessment measures will provide in-depth information concerning student performance and levels of problem solving expertise. Such assessments will contribute to better monitoring of student progress and possible identification of future innovators, industrial designers, engineers, and technologists.

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Custer, R. (1995). Examining the dimensions of technology. International Journal of Technology and Design Education, 5, 219-244.

DeLuca, V. W. (1992). Survey of technology education problem solving activities. The Technology Teacher, 51(5), 26-30.

Dyrenfurth, M. J., Custer, R. L., Loepp, F. L., Barnes, J. L., Iley, J. L., & Boyt, D. (1993). A model for assessing the extent of transition to technology education. Journal of Industrial Teacher Education, 31(1), 57-83.

Halfin, H. H. (1973). Technology: A process approach. (Doctoral dissertation, West Virginia University, 1973). Dissertation Abstracts International, 11(1), 1111A.

Hatch, L. (1988). Problem-solving approach. In W. H. Kemp & A. E. Schwaller (Eds.), Instructional strategies for technology education: 37th yearbook (pp. 87-98). Council on Technology Teacher Education.

Heppner, P. P. (1988). Personal Problem Solving Inventory, Form B. Palo Alto, CA: Consulting Psychologists Press.

Heppner, P. P. (1988). The problem-solving inventory (PSI): Research manual. Palo Alto, CA: Consulting Psychologists Press.

Hill, R. (1997). The design of an instrument to assess problem-solving activities in technology education. Journal of Technology Education, 9(1), 31-46.

International Technology Education Association. (1996). Rationale and structure for the study of technology. Reston, VA: Author.

International Technology Education Association. (2000). Standards for technological literacy: Content for the study of technology. Reston, VA: Author.

MacPherson, R. T. (1998). Factors affecting technological troubleshooting skills. Journal of Industrial Teacher Education, 35(4), 5-26.

National Commission on Excellence in Education. (1983). A nation at risk: The imperative for educational reform. Washington, DC: U.S. Department of Education.

Philpott, A., & Sellwood, P. (1983). An introduction to problem solving activities: Some suggestions for design and make. The School Scene Review, 65(230), 26-32.

Polya, G. (1957). How to solve it: A new aspect of mathematical method (2nd ed.). Princeton, NJ: Princeton University Press.

Pucel, D. (1992, December). Technology education: Its changing role within general education. Paper presented at the American Vocational Association Convention, St. Louis, MO. (ERIC Document Reproduction Service No. ED 353 400)

Savage, E., & Sterry, L. (1990). A conceptual framework for technology education. Reston, VA: International Technology Education Association.

Waetjen, W. B. (1989). Technological problem solving: A proposal. Reston, VA: International Technology Education Association.

Wu, T., Custer, R. L., & Dyrenfurth, M. (1996). Technological and personal problem solving styles: Is there a difference? Journal of Technology Education, 7(2), 55-71.

Rodney L. Custer is Associate Professor and Chair, Department of Industrial Technology and Science, Illinois State University. Brigitte G. Valesey is Director, Center to Advance the Teaching of Technology and Science, International Technology Education Association. Barry N. Burke is Coordinator of Industry and Technology Education, Montgomery County Public Schools, Maryland. This research was supported by a Research Incentive Grant funded by CTTE, ITEA, and The Technical Foundation of America.

1. Due to space limitations, only one dimension is presented in this article. The complete assessment rubric can be obtained by contacting Dr. Rodney L. Custer at custer@indtech.it.ilstu.edu.