
Virginia Libraries


July/August/September, 2010
Volume 56, Number 3


Assessing the One-Shot Instruction Session: Leveraging Technology for Optimum Results

by Jennifer Nardine and Carolyn Meier

As budgets and belts have tightened in recent years, academic libraries have placed more emphasis on the assessment of various library services.1 While it is statistically straightforward to collect information about webpage use, circulation, and gate counts, assessing the value of bibliographic instruction has historically proven more challenging.


Instruction sessions must be evaluated not only on a “body count” basis—the percentage of the student population that library instructors reach in a given semester—but, more important, on learning trends. In the case of semester-long librarian/class interactions, it is relatively easy to set up rubrics and grading scales on which the students are evaluated. However, most librarians do not currently teach semester-long sessions, either independently or in cooperation with another faculty member. Instead, librarians most frequently teach “the one-shot.”

Most instruction librarians are familiar with the one-shot concept: a professor or TA gives one class period to the library, and in that time the instruction librarians impart everything the students need to know about research for that particular class and, ideally, for classes to come. The one-shot is a balancing act between delivering the information literacy skills students will need throughout their lives and addressing the specific demands of a course or professor.

Beyond the actual delivery, there is also the issue of assessing one-shot instruction sessions. Assessment serves multiple purposes. It evaluates the effectiveness of the instruction—how much of the usually condensed information the students absorbed and whether the library session is an effective use of time, both for the specific class in question and in the broader context of library function and student learning. The education literature also indicates that assessment should give students a chance to review what they have learned. The end-of-class evaluation should prompt students to review and rephrase what they have learned, thereby embedding the information in their long-term memories.2 In other words, the evaluation must 1) provide feedback to the instructor regarding the effectiveness of the lesson and 2) give the students the chance to “review, rephrase, and retain” the material.

Evaluation of library instruction at the Virginia Tech Libraries (VTL) has evolved over the last decade, both in terms of the assessment instrument used and the delivery medium chosen. One-shot session evaluations have transitioned from a paper-based, Likert scale format to an electronically delivered, mixed qualitative and quantitative version with considerable success.

[Image: the pre-2006 University Libraries Session Evaluation Form, with four items (usefulness of session content, timing of session within the semester, level of interactivity, and overall quality of session) rated on a five-point Likert scale from Excellent to Poor, a fifth item (“I would be comfortable asking the library instructor for help in the future”), and a comments field. Also shown is the 2006 3-2-1 evaluation card, which asks students to list three things they learned in the class, two things they still don’t understand about the library, and one thing they would change about the class, with an optional request for more information, name, and email.]

Prior to 2006, evaluations at VTL’s Newman Library used five-by-seven-inch carbon sheets with five Likert scale questions and room for comments. The sheets were distributed at the end of a library session for students to complete and return before leaving the library. Because of the Likert scale, the assessment results were easy to tabulate. Classroom aides would give the librarians the carbon copies of the evaluations, providing instant feedback to the instructors. The librarians could then make changes to their instruction if warranted. The classroom aides would then tabulate the session’s statistics from the originals and file them for use in the library’s annual report.

While using the Likert scale saved time and provided quantitative results, it had some shortcomings. First, it was more of a presentation evaluation tool; the questions on the card made the evaluation an issue of personality or delivery style rather than content. Second, the Likert scale as arranged on the form made it very easy for a student to simply circle one column of descriptors and hand in the evaluation without reading the questions. Additionally, the Likert scale did not provide students with the opportunity to review the class material, rephrase it, and reinforce it in their minds. While the Likert scale is appropriate for assessing a semester-long class, which has tests, quizzes, and homework to solidify concepts for students and to provide retention feedback to instructors, it did not serve the needs of students or instructors of one-shot sessions.

In 2006, VTL switched from a Likert scale assessment to the 3-2-1 card model. This had some advantages over the Likert scale. Students listed three things they learned, two things they still did not understand, and one thing about the class they would change. There was also a section for comments. This provided an end-of-class review to reinforce student learning. The 3-2-1 cards changed the evaluation from an issue of personality and presentation style to a focus on content.

The main drawback of the 3-2-1 card was the significant amount of time needed to collate and analyze the results. Classroom aides did not have the training to organize the free-text responses, much less analyze them, so review, compilation, and analysis fell to the librarians, taking them away from other tasks. The librarians were able to sort the answers into seven main categories and further into subcategories that provided useful feedback about the strengths and weaknesses of the instruction sessions. While this assessment provided valuable information on what students were taking away from the session, the lack of quantitative results made it difficult to compare the effectiveness of one year’s instruction with that of previous years.

Because analyzing the cards took so long, librarians did not get instant feedback and were not prompted to adjust their instruction midsemester when needed. Also, those doing the analysis found the process tedious, both because the paper-based system did not allow for easy tabulation and because of the sheer number of cards per semester.

In fall 2008, VTL began investigating an electronic survey instrument, following the general trend of converting tabulation and record-keeping activities to an interactive electronic format whenever possible. Survey.vt.edu, a homegrown survey instrument, was available free of charge to any Virginia Tech student, faculty, or staff member. It allowed for both quantitative and qualitative data capture and could export records to a variety of packages, facilitating statistical analysis.

[Image: the Library Instruction Survey, Spring 2010, consisting of ten questions: (1) course name/number; (2) course instructor/professor; (3) librarian; (4) “I have taken a tour of Newman Library” (true/false); (5) “This was my first library instruction session, not including a tour” (true/false); (6) the course name of any previous library session; (7) whether the content of the session was too simple, too advanced, or just right; (8) three things learned in the class; (9) two things still not understood about research or using the library; and (10) one thing the student will do differently when conducting library research as a result of what was learned.]

At the same time that the survey transitioned from paper-based to electronic, the survey format itself was updated again. The new version of the survey included a combination of free-text answers loosely based on the 3-2-1 concept, a Likert scale entry, and radio-button selections. New questions included information on previous library sessions, physical tours, and what students would do differently in their research process post-instruction.

Moving to an electronic format made it much easier to distribute surveys to, and collect them from, all classes held in the VTL buildings. The classroom manager added a quick link to the survey on the desktop of all classroom computers, allowing students to easily access the survey at the end of a session. One challenge this presented, however, was that accessing the survey required more effort whenever library instruction took place outside the library building. Some classes took place in buildings or rooms without computers at all, and none of the remote labs had the shortcut to the library instruction survey on their computer desktops. Librarians handled this issue by creating a short URL using www.tinyurl.com that could be given directly to students in remote labs or to professors to post as a link in the class content management system. Despite the portability challenges of the computer-based survey, the overall response rate to post-class surveys increased dramatically, especially as librarians reserved a few minutes at the end of each instruction session specifically for survey completion.

Along with the higher response rate, the electronic survey allowed VT librarians access to survey results from any location with an Internet connection. The centralized data sets made collation, analysis, and dissemination of survey results faster and easier than any of the paper-based formats. The instruction coordinator standardized the data and distributed evaluations to individual librarians throughout the semester. The faster turnaround led to more fluidity of lesson structure throughout the semester. It was easy to spot problem instruction areas and change them midsemester, rather than having to wait until semester’s end to gather, collate, and analyze survey results. The instruction coordinator also created an internal survey for librarians to track what was taught in each instruction session. Librarians complete this survey while students are completing the session evaluations. This survey has allowed VTL to track instruction alignment with ACRL information literacy standards and has also led to the development of screencasts that professors can use for review in their courses.
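As a rough illustration of what such alignment tracking can look like, the sketch below (in Python) maps hypothetical session topics onto paraphrased ACRL standards; the topic labels and the mapping itself are assumptions made for illustration, not VTL’s actual coding scheme.

    # Illustrative mapping of session topics to paraphrased ACRL information
    # literacy standards; not VTL's actual coding scheme.
    ACRL_MAP = {
        "keyword searching": "Standard 2: accesses needed information effectively",
        "evaluating sources": "Standard 3: evaluates information and its sources critically",
        "citing sources": "Standard 5: uses information ethically and legally",
    }

    def standards_covered(topics_taught):
        """Return the ACRL standards touched by the topics reported for a session."""
        return {ACRL_MAP[topic] for topic in topics_taught if topic in ACRL_MAP}

    # Example: one entry from the librarians' internal tracking survey.
    print(standards_covered(["keyword searching", "citing sources"]))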

The shift in survey design allowed VTL librarians to get a more rounded view of student learning than previous evaluation formats did, while still keeping compilation and analysis efforts at a reasonable level. For example, all freshman English sessions could be extracted and analyzed separately.

Electronic survey data was automatically gathered in a database, eliminating the need for transcription or initial collation. The instruction coordinator could easily export database contents to Excel for further compilation, sorting, and analysis without significant programming on the librarians’ part. For instance, results of the student survey were compared with the librarians’ survey to see whether the students absorbed what the librarians taught. Excel’s sorting and filtering functions enabled librarians to see cause-and-effect relationships that were previously impractical to distinguish using paper methods. For example, many of the students who said they still needed help finding books and materials had not taken the physical tour.
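The filter-and-compare step described above can be sketched in a few lines of Python with pandas; the file name and column labels below (took_tour, still_dont_understand) are hypothetical stand-ins for fields in the exported spreadsheet, not the actual survey export.

    import pandas as pd

    # Hypothetical export of the student survey; file and column names are illustrative.
    responses = pd.read_excel("instruction_survey_export.xlsx")

    # Flag respondents whose free-text answer mentions trouble finding something.
    needs_help = responses["still_dont_understand"].str.contains(
        "find", case=False, na=False
    )

    # Cross-tabulate against the true/false "I have taken a tour" question.
    print(pd.crosstab(responses["took_tour"], needs_help, normalize="index"))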

[Image: responses to question 7 of the Spring 2010 Library Instruction Survey, “The content of this library instruction session was”: too simple, 3% (85); too advanced, 1% (36); just right, 93% (2,583); no answer, 2% (63).]

Analysis of both quantitative and qualitative data was easier when all the survey results were electronically available. The VT Survey instrument itself allowed for quantitative analysis on the fly, retabulating percentages as more data entered the system. Language analysis of the qualitative answers was simplified by the sorting and counting functions in Excel. In the future, more sophisticated analysis, especially of the qualitative survey results, will be simplified as electronic language analysis tools become more refined and available.
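As a rough analogue of that sort-and-count step, the short sketch below tallies the most frequent words in one free-text field; the column name three_things_learned and the stop-word list are assumptions made for illustration.

    from collections import Counter

    import pandas as pd

    responses = pd.read_excel("instruction_survey_export.xlsx")

    # Words too common to be informative; extend the list as needed.
    stop_words = {"the", "a", "an", "to", "of", "and", "how", "i", "in", "for"}

    # Tally the remaining words across every answer to the "three things learned" question.
    counts = Counter(
        word
        for answer in responses["three_things_learned"].dropna()
        for word in answer.lower().replace(",", " ").split()
        if word not in stop_words
    )
    print(counts.most_common(15))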

VTL also took advantage of the online survey instrument to get feedback from instructors. A few weeks after each bibliographic instruction session, librarians would send a brief survey to the professor or TA to learn which parts of the information literacy session had had an effect on long-term student performance. This strengthened ties between the instruction librarians and the other faculty, who were able to assess the value of library sessions, determine which points students retained and which needed more reinforcement, and decide whether more information literacy instruction was needed before the end of the semester.

The current VTL instruction survey is a living document, easily modified to reflect the changing needs of the professors, students, and librarians involved in the VTL library instruction programs. With continued development and communication between all involved parties, VTL will be able to continuously improve not only the statistical relevance of one-shot library instruction but also the actual instruction process.


Jennifer Nardine is the instruction and outreach librarian at the Virginia Tech University Libraries. A graduate of DePaul University (bachelor of music performance, 1992) and University of Michigan (MSI-LIS, 2004), Nardine joined the faculty at Virginia Tech in 2009. She has had diverse work experiences, including time as an operational sales manager, a lounge singer, and a database developer before starting a career as an academic librarian. Her research focuses on the intersection of people, both patrons and employees, and libraries. You can reach her by email at jnardine@vt.edu.

Carolyn Meier is an instructional services librarian and coordinates information literacy instruction in Newman Library at Virginia Tech. She received her MLS from the University of Michigan and has an EdS in instructional technology from Virginia Tech. She worked for thirty years as a school librarian before embarking on her second career as an academic librarian. She presently serves on the LIRT Transition to College Committee. Her research interests focus on information literacy instruction and assessment and outreach. Her work interests include new methods for improving instruction and finding new technologies to reach students. She can be contacted by email at cmeier@vt.edu.

Notes

1. Richard M. Dougherty, “Guest Editorial: Assessment + Analysis = Accountability,” College and Research Libraries 70, no. 5 (2009): 417; Xi Shi and Sarah Levy, “A Theory-Guided Approach to Library Services Assessment,” College and Research Libraries 66, no. 3 (2005): 267.

2. Franklin M. Zaromb, Jeffrey D. Karpicke, and Henry L. Roediger III, “Comprehension as a Basis for Metacognitive Judgments: Effects of Effort after Meaning on Recall and Metacognition,” Journal of Experimental Psychology: Learning, Memory, and Cognition 36, no. 2 (2010): 557.

