Subjects

ICT Literacy; Assessment; Validation

Document Type

Research Article

Abstract

Evaluating the trustworthiness of Internet-based and other digital information has become an essential 21st-century skill. The iSkills™ assessment from Educational Testing Service (ETS) purports to measure such digital evaluation skills, along with other digital literacy skills. In this work, we use an argument-based approach to assessment validation to investigate the extent to which iSkills test scores can support inferences about college students' ability to evaluate information in a digital environment. Eighty-eight undergraduates responded to iSkills assessment tasks and to more open-ended "naturalistic" tasks designed to resemble homework assignments that require the critical evaluation of digital information. We observed weak-to-moderate correlations between scores on the two task types, suggesting overlap in the skills they assess. Analyses of concurrent cognitive interviews (n = 11 of 88) suggested distinctions between students' response processes on the assessment and naturalistic tasks. Although the iSkills assessment tasks appear to elicit skills consistent with real-world evaluation of digital information, students' responses to the naturalistic tasks demonstrated broader evaluation skills and less attention to the testing context. This study provides empirical validity evidence for ETS's iSkills assessment, as well as valuable insights into how undergraduates evaluate information in a digital environment.

DOI

10.15760/comminfolit.2010.3.2.75

Downloads prior to this publication

1894

Persistent Identifier

http://archives.pdx.edu/ds/psu/22508

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License.
