
https://doi.org/10.21061/jots.v24i1.a.4
Tailoring Assessment of Technological Literacy Learning

Glenda Prime

This is based on a presentation to the Second Jerusalem International Science & Technology Education Conference in Israel, January 8-11, 1996.

Dr. Prime is a faculty member in the School of Education at the University of The West Indies, St. Augustine, Trinidad.


Widespread acceptance of technological literacy as a desirable outcome of education has led to the development and implementation of a variety of curriculum innovations in the field of technology education. Learning activities may mask some degree of conceptual fuzziness about the construct of technological literacy (TL); assessment strategies, however, demand of curriculum developers the utmost clarity and precision about the psychological states implied by the construct and about its overt behavioral manifestations. The questions to be answered in the process of developing strategies to evaluate TL are "What constitutes TL?" and "What knowledge, attitudes, and behaviors should a technologically literate person display?" These questions must be answered clearly and precisely if valid and appropriate assessment strategies are to be designed.

This article draws on some recent commentaries on these questions to suggest some ways these issues might influence the design of assessment strategies for TL.

TECHNOLOGICAL LITERACY

Dyrenfurth (1991) suggested that TL is "a concept used to characterize the extent to which an individual understands, and is capable of using technology" (p. 139). Although there are almost as many definitions of this construct as there are authors defining it, the definitions all embody the notions of knowledge and understanding of technology and of capability in its use. However, when attempts are made to specify the knowledge and to identify the competencies implied in capability, it becomes apparent that these definitions are quite diverse.

Yff and Butler (1983) suggested the purpose of TL is to enable citizens "to weigh alternatives and make informed decisions. It should enable [people to] manage their lives and cope with change to their best advantage" (p. 14). In a discussion of the technology component of the British national curriculum, Farrell (1992) described its outcome as the education of young people "to be capable in a society where they are constantly interacting with the made world" (p. 40). This latter statement brings into focus the social, cultural, and historical dimensions of TL since the "made world" could be expected to vary across cultures and, over time, within cultures.

This context-specificity makes it difficult to spell out the manifestations of TL in anything but general terms. Nevertheless, if appropriate assessment modes and strategies are to be devised, it is necessary that in each context the desired manifestations of TL be specifically determined. It is useful to categorize these as knowledge, skill, and affective manifestations. This broadly corresponds to the three dimensions used by Dyrenfurth (1991), who suggested that TL includes a civic dimension, the ability to understand the issues raised by technology; a practical dimension, the ability to use technology; and a cultural dimension, the appreciation of the significance of technology. Lewis and Gagel (1992) argued that the conceptions of TL vary with the philosophical orientations of the constituencies attempting to define it. It is in the relative importance of the affective, cognitive, and skill components of TL that this diversity is most apparent. It should be recognized that these domains are not watertight and that any practical manifestation of capability is in fact an amalgam of all three.

Knowledge Component

Lewis and Gagel (1992) argued that "at its core literacy implies knowledge" and further, that "levels of literacy seem to correlate with levels of knowledge" (p. 21). The concepts and understandings related to technology must be the knowledge content of curricula in TL.

The broad knowledge areas identified in the literature are summarized as follows:

  1. A knowledge of problems that might have technological solutions. Yff and Butler (1983) suggested that a curriculum for TL should include a study of "the major social, economic, and geophysical problems" (p. 13). They include among these such problems as hunger, transportation, and waste disposal.
  2. A knowledge of important technologies such as computer applications, systems dynamics, industrial processes (Yff & Butler, 1983), biotechnology, materials, and energy technologies (American Association for the Advancement of Science, 1989).
  3. Understanding of the social and cultural impact of technology such as the effect of technology on societies, its value-ladenness, and its irreversibility (Heinsohn, 1977; Mears, 1986).
  4. The range of concepts that are prerequisites for an understanding of technology drawn from such other disciplines as science, mathematics, history, and language (Lewis & Gagel, 1992).
  5. An understanding of the form or structure of technological knowledge. This implies an understanding that technological knowledge is knowledge of what works and therefore has a practical dimension. It also implies an appreciation of how technological knowledge is related to other forms of knowledge, particularly science.

Capability, which lies at the heart of TL, is essentially the ability to think and do effectively in the context of the real world. This implies a range of both cognitive and psychomotor skills that Layton (1987) has characterized as functional competencies. In addition to those cognitive skills that relate to ways of processing information, the technologically literate person should display the ability to think critically about technology itself.

Skills Component

One concept of TL emphasizes the ability to evaluate technology as one of the core characteristics of the technologically literate person. Donelly (1992) described this view as a "small but important radical strand of thought about technology education" (p. 133). Lewis and Gagel (1992) suggested that the technologically literate person should "be able to fashion informed opinion regarding the social, political, environmental, or economic consequences" (p. 131) of technological activity. In the same vein, Yff and Butler (1983) postulated that the most important aspect of TL is that "it should enable citizens to recognize when others, to whom they have entrusted the management of their social institutions, are not acting in their interests" (p. 14).

Engaging in technological activity is an important aspect of capability, and one that involves a complex interaction of cognitive and manipulative skills. Schwaller (1989) identified some of the cognitive ones as analytical thinking, creativity, problem solving, research, and analysis. The manipulative skills are those involved in the design process and in the making of technological products. Design skills are central to technological activity. These skills must be broadly conceptualized to include the abilities to recognize those problems that might yield to technological solutions, generate ideas, and formulate strategies for implementing ideas.

The Affective Component

Layton (1991) argued for the "crucial conative" component of technological capability. This refers to a willingness that must precede action in a technological or any other context. Kozolanka and Olson (1994) extended this component to the realm of virtue when they suggested there also needs to be the capacity to act for the right reasons. The profound and pervasive impact of technology on society makes this a critical issue. Both considerations relate to the question of social responsibility. The technologically literate person could be expected to exhibit not a mere awareness of, but a concern for, the "moral and ethical implications of technological choice" (Lewis & Gagel, 1992, p. 130).

While education in technology should not be constrained by narrow vocational concerns, TL cannot be divorced from preparation for employment. There are at least two reasons for this. The first is that the concept of literacy is at its heart concerned with the competencies needed for functional adult life. The second is that the world of work is the major arena of technological activity. TL, then, implies the possession also of such affective workplace skills as flexibility and team spiritedness, among others.

It seems helpful to consider the discussion of the preceding three components in the context of the following summary of the construct of TL, which indicates that it:

  • is multidimensional;
  • implies a range of functional capabilities that include the designing and making of technological solutions to problems, the monitoring of the societal impact of technologies, the evaluation of technology against a variety of value criteria, and the ability to use those technologies that are appropriate to one's own context;
  • implies a knowledge of some basic concepts related to technology and an understanding of how certain technologies work;
  • is dependent on a range of cognitive skills such as analytical skills, creativity, brainstorming, problem solving, and data collection; and
  • has a major affective component including such attitudes as independence and interdependence, caring, environmental concern, social responsibility, and positive work habits.

Thus we have a set of functional competencies that characterize TL. They are the competencies that a technologically literate person could be expected to display in appropriate conditions. These would be used to guide the choice of approaches and the design of tasks for the evaluation of TL.

The assessment of learning in technology should therefore be guided by a consideration of these functional competencies as well as by the following three important characteristics of technology itself: (a) it is practical and grounded in the real world, (b) it is as much process as it is product, and (c) it is very much a way of thinking and acting in relation to the material world.

SOME ASSESSMENT ISSUES

Increasingly, the term assessment is being used to signify the change from an almost singular reliance on tests that gave quantifiable results to methods of evaluation that recognize the complexity of human functioning and that more closely reflect the real-world context of human performance. Indeed, terms such as authentic (Wiggins, 1989), illuminative (Hodson & Reid, 1988), and expressive (Eisner, 1993) are being used to describe assessment procedures that elicit a display of student learning in its uniqueness and complexity.

The use of student portfolios, group as opposed to individual tasks, performance as well as product evaluation, ongoing as opposed to single-event assessment, and open-ended rather than closed tasks are strategies that make learning visible as it progresses and unfolds in its uniqueness for each learner. Such approaches serve well the purposes of engaging pupils in their own learning, providing diagnostic information to teachers, and affirming the individuality of each learner. It seems evident that these approaches to assessment are the ones that will provide the best evidence of technological literacy.

Assessment strategies that value the idiosyncratic nature of student learning raise the issue of standards. Use of a common yardstick by which to measure individual outcomes has been one of the hallmarks of traditional forms of evaluation. This is in essence a validity issue. In a discussion of the nature of performance assessment, Messick (1994) suggested that "authenticity" and "directness," which are qualities claimed to be characteristic of performance assessment, are related to the issue of validity. Frederikson and Collins (1989) and Linn, Baker, and Dunbar (1991) proposed that specialized validity criteria need to be invoked for performance assessment. The issue of validity as it relates to the assessment of technological literacy will be addressed again later in the article.

Factors in Designing Assessment

Valid assessment should be construct-driven. It is the nature of the construct of technological literacy that should determine the mode and conditions of its assessment. Assessment tasks should elicit from students the knowledge, behaviors, and attitudes that are believed to be characteristic of a technologically literate person. The multidimensional nature of TL makes the design of such tasks a difficult proposition.

Dyrenfurth (1988) reported on the use of a paper-and-pencil test of TL that attempted to assess some domain knowledge and some of the attitudes thought to be indicative of TL. Clearly such measures are of limited usefulness given the strong element of practical capability that TL involves. Snow (1993) suggested that multiple-choice items and student portfolios represent opposite ends of a continuum of response structure, and Messick (1994) implied that a mix of assessment strategies that includes structured exercises and open-ended performance tasks might be useful for achieving breadth of coverage within a domain. The implication of these views is that multiple strategies, rather than a single mode, are more likely to tap the range of competencies included in a domain. For assessment to be considered authentic, however, there are other criteria besides breadth of coverage that must be met. Linn et al. (1991) reminded us of the demand for meaningfulness. All forms of structured items, particularly multiple-choice, and to a lesser extent semi-structured ones, run the risk of being trivial and lacking in meaningfulness to students since they portray items of knowledge as disconnected from the configuration of which they are a part. This seems to run counter to the notion of authentic assessment. It is critical that such items be seen as useful only as part of a wider range of assessment procedures and applicable principally to the knowledge domain of TL.

Perhaps the most important educational function of assessment is its formative or diagnostic one. Authentic assessment should be designed to allow identification of students' needs. This requires that tasks be sufficiently open to allow students to display their unique understandings and capabilities so that teachers can fashion or modify learning experiences to meet revealed deficiencies. This is particularly important in the assessment of TL. If, as Dyrenfurth (1991) noted, the development of TL proceeds along a continuum from "non-discernible to exceptionally proficient" (p. 140), then students will be situated at varying points along that continuum. Assessment tasks for TL should allow students to function at their most advanced points along the continuum.

Consider also that the level of TL students achieve results from both planned curricular experiences in school and their out-of-school experiences with technology. Prime (1992) found this to be true for a sample of secondary school students in Trinidad, whose experiences in the home environment were the largest determinant of their attitudes toward technology. Thus, assessment tasks should allow students to display understandings drawn from both in-school and out-of-school experiences.

Another implication of the progressive nature of the development of TL is that assessment must be ongoing and continuous over time, so as to depict changes in levels of capability and understanding as they emerge. Uses of portfolios, documentation, and graphic presentations seem to be approaches that meet the need for uniqueness of expression and continuity of assessment.

The growing interest in performance assessment is particularly important in the assessment of TL since technology is both process and product. The identification of needs, the generation of designs to meet those needs, the weighing of alternatives in the light of specifications, and the making and evaluation of products are some of the processes of technology. Performance assessment, more accurately called performance-and-product assessment (Fitzpatrick & Morrison, 1971), is suited to assess both the skill component and the affective component of TL. The focus of such assessment should be the processes and the final product generated by students. Observational techniques, teacher-student interviews, and the documentation of students' decision making while engaged in technological activity serve to make visible the technological thinking that would finally be embodied in the finished product. Assessment of the product by predetermined and jointly agreed-upon criteria is an approach that draws both teacher and student into assessment procedures. Cataloging students' reasons for design decisions and the criteria they generate for evaluating products reveals the values held by students. These become important assessment data.

The fact that TL is essentially about functional competencies in the real world may be the source of the greatest assessment challenge, that is, to design assessment tasks that incorporate the salient elements of the real world in which TL is actually displayed. School assessment, even performance assessment, runs the risk of being too formalized and decontextualized to provide evidence about real-world functional competencies, in which case it may tell little of what a student is likely to be able to do in a real-life context. In the real world, technological activity is always comprehensive and purposeful, and implies the ability to see opportunities to improve some aspect of the world either by creating something new or by making an existing thing better. It involves moving from the initial idea through successive refinements to a solution. This aspect of competence is easy enough to assess through performance/product assessment. Since values and intentions are inescapable inputs into these activities, competence also involves the continuous resolution of conflicting values. But values and intentions are not observable in the finished product, and they are often neglected in assessment. Assessment techniques need to be developed to overcome this difficulty. The use of teacher-student interviews, peer interactions, and the development of student profiles during the course of the student's engagement with a technological task hold some promise in this direction. The "comprehensiveness" aspect of technological activity implies a degree of commitment and ownership of the task that is difficult to capture in school assessment, where the tasks set are often but snippets of the full range of activities in a technological context. Given the iterative nature of the phases of technological activity, it might well be that performance at any phase carried out in isolation would be different from what it would be if done in the context of the whole process.

Often people other than the creators determine the success of a technology. This is certainly true in a commercial setting, where the consumer often determines success. Functional competence thus implies a sensitivity to the humanness of technology and, more specifically, to consumer issues. If students' capabilities in evaluating technology are assessed outside the context of, and without contact with, real clients, a vital aspect of real-world functional capability may be neither measured nor realized.

In a sense, a technological activity is never completed. The "final" product is really nothing more than the most recent prototype. A student's ability to visualize new possibilities for refinement of a product is an aspect of capability that is difficult to assess within the time constraints usually imposed on school assessment procedures. Consider also that real-world technological performance is rarely done solo. The ability to function as part of a team and the ability to communicate ideas in a variety of modes such as in discussion, graphic presentations, and 3-dimensional models are critical aspects of functional capability. In such instances, group assessment tasks would provide more accurate reflections of real technological activity and would promote the social interaction from which students derive emotional support.

The series of appropriate TL assessment approaches presented here embodies many of the principles of illuminative assessment. Multiple strategies that employ concrete activities that are relevant to the lives of students and are grounded in the real world are advocated here. The balance in such assessment is clearly on the side of the processes rather than the products, and the activities rather than the outcomes of students' technological work. While product assessment is important, products alone fail to exhibit the complex ongoing interaction of idea and action that lies at the heart of technological capability and infuses all stages of technological activity. Even performance-oriented assessment will fall short of its goal unless the design of such assessment strategies is informed by a careful analysis of the elements of real-world functional competency, some of which have been suggested. These approaches produce a high level of student engagement with tasks and blur the lines between learning activities and evaluation. In such situations, assessment becomes an integral part of the instructional process and exerts its most positive influence on teaching and learning.

The Place of Validity

The essential validity question is, How appropriate are the data produced by assessment for the purpose for which the assessment is intended? Messick (1994) proposed a range of validity criteria that includes the "content, substantive, structural, external, and generalizability aspects of construct-validity" (p. 13), which he suggested are applicable to all forms of assessment. Additionally, Frederikson and Collins (1989) and Linn, Baker, and Dunbar (1991) proposed that specialized validity criteria are needed for performance assessment. These include content quality, content coverage, cognitive complexity, meaningfulness, cost and efficiency, transfer and generalizability, fairness and consequences (Linn et al., 1991), and scope, reliability, and transparency (Frederikson & Collins, 1989).

I suggest that assessment of TL, of which capability is an element, might best be done through procedures that are open-ended enough to allow for individual expressions of competence, that may have individualized criteria of worth, and that allow for the recognition of knowledge and skills that are not necessarily part of the planned curricular experiences of students. The notion of common tasks and standards for assessment is an integral part of traditional methods of assessment and is critical to traditional concepts of validity. Furthermore, the idea of rewarding knowledge and skills outside of the planned curriculum runs counter to the traditional notion of content-validity criteria.

Another specialized criterion that might be applicable is context validity. The closer an assessment activity is to the real situation to which its results are to be generalized, the more valid the assessment is likely to be. In the assessment of TL, context validity may be achieved by performance tasks that place children in real situations or by school-based tasks that tap a large number of real-world competencies. This aspect of assessment becomes more important when one bears in mind the fact that it is not the performance itself which is the real object of assessment, but the competence which it implies, a competence that is only meaningful in the real world.

Assessment of TL is a challenging task made so by its multidimensional nature. Appropriate strategies should involve multiple approaches yielding multiple types of data. Assessment should be open-ended to allow for the unique expression of individual achievement. It should be ongoing and continuous.

The use of portfolios, student documentation of activity, graphic displays, and product demonstrations are useful techniques. They are, however, limited in that they are passive manifestations of capability. The more active manifestations that are indicative of the complex interaction of values, intentions, knowledge, and skills are perhaps better tapped by means of teacher-student conversations, peer group discussions, and observational techniques conducted during student project work. Together these approaches seem likely to allow students to make public their own unique capabilities. The consequence of such strategies, which according to Messick (1994) is itself a validity issue, will be the enhancement of the learners' engagement with technology, thus increasing their capability and competence.


References

American Association for the Advancement of Science. (1989). Project 2061 (Report of Technology Panel; AAAS Publication 89-01S). Washington, DC: Author.

Donelly, J. F. (1992). Technology in the school curriculum: A critical bibliography. Studies in Science Education, 20, 123-156.

Dyrenfurth, M. (1988). Technological literacy in industry. Columbia, MO: Applied Expertise Associates.

Dyrenfurth, M. (1991). Technological literacy synthesized. In M. J. Dyrenfurth & M. R. Kozak (Eds.), Technological literacy (40th Yearbook of the Council on Technology Teacher Education, pp. 138-183). Peoria, IL: Macmillan, McGraw-Hill.

Eisner, E. W. (1993). Reshaping assessment in education: Some criteria in search of practice. Journal of Curriculum Studies, 24(3), 219-233.

Farrell, A. (1992). Reviewing capability in national curriculum assessment. Design and Technology Teaching, 25(1), 39-43.

Fitzpatrick, F., & Morrison, E. J. (1971). Performance and product evaluation. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 237-270). Washington, DC: American Council on Education.

Frederikson, J. R., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18(9), 27-32.

Heinsohn, R. J. (1977). General education in technology: An approach using case studies. Journal of General Education, 29(1), 37-53.

Hodson, D., & Reid, D. J. (1988). Changing priorities in science education. School Science Review, 70(250), 101-108.

Kozolanka, K., & Olson, J. (1994). Life after school: How science and technology teachers construe capability. International Journal of Technology and Design Education, 4(3), 209-226.

Layton, D. (1987). Some curriculum implications of technological literacy. In M. Harrison, D. Layton, & N. Bolton (Eds.), Technology education project: Paper 1 (Papers submitted to the Consultation held on November 15 & 16, 1985; pp. 4-8). York, United Kingdom.

Layton, D. (1991). Science education and praxis: The relationship of school science to practical action. Studies in Science Education, 19, 43-79.

Lewis, T., & Gagel, C. (1992). Technological literacy: A critical analysis. Journal of Curriculum Studies, 24(2), 117-138.

Linn, R. L., Baker, E. L., & Dunbar, S. B. (1991). Complex, performance-based assessment: Expectations and validation criteria. Educational Researcher, 20(8), 15-21.

Mears, J. A. (1986). Evolutionary process: An organizing principle for general education. The Journal of General Education, 37(4), 313-325.

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13-23.

Prime, G. (1992). Technology education in the Caribbean: Needs and directions. International Journal of Technology and Design Education, 2(3), 48-57.

Schwaller, A. E. (1989). Transportation, energy, and power technology. New York: Delmar.

Snow, R. E. (1993). Construct validity and constructed response tests. In R. E. Bennett & W. C. Ward, Jr. (Eds.), Construction versus choice in cognitive measurement: Issues in constructed response, performance testing, and portfolio assessment (pp. 45-60). Hillsdale, NJ: Erlbaum.

Wiggins, G. (1989). A true test: Toward authentic and equitable assessment. Phi Delta Kappan, 71(9), 703-713.

Yff, J., & Butler, M. J. (1983). Technological literacy: Challenge for teacher education. Washington, DC: Current Issues. (ERIC Document Reproduction Service No. ED 227 060)
