Evaluating the trustworthiness of Internet-based and other digital information has become an essential 21st-century skill. The iSkills™ assessment, from Educational Testing Service (ETS), purports to measure such digital evaluation skills, along with other digital literacy skills. In this work, we use an argument-based approach to assessment validation to investigate the extent to which iSkills test scores can support inferences about college students' ability to evaluate information in a digital environment. Eighty-eight undergraduates responded to iSkills assessment tasks and to more open-ended "naturalistic" tasks, designed to resemble homework assignments that incorporate the critical evaluation of digital information. We observed weak-to-moderate correlations between scores, suggesting overlap in the skills assessed by the iSkills and the naturalistic tasks. Analyses of concurrent cognitive interviews (n = 11 of 88) revealed distinctions between students' response processes on the two task types. Although the iSkills assessment tasks appear to elicit skills consistent with real-world evaluation of digital information, students' responses to the naturalistic tasks demonstrated broader evaluation skills and less attention to the testing context. This study provides empirical validity evidence regarding ETS's iSkills assessment, as well as valuable insights into how undergraduates evaluate information in a digital environment.
Snow, E., & Katz, I. (2010). Using cognitive interviews and student response processes to validate an interpretive argument for the ETS iSkills™ assessment. Communications in Information Literacy, 3(2), 99-127. https://doi.org/10.15760/comminfolit.2010.3.2.75