Sponsor
Support was provided to G. A. R. by the National Science Foundation's Florida–Georgia Louis Stokes Alliance for Minority Participation Bridge to the Doctorate award 1612347. This material is also based upon work supported by the National Science Foundation under award 1849473 to J. E. L.
Published In
Chemistry Education Research and Practice
Document Type
Post-Print
Publication Date
5-1-2020
Subjects
Science -- Study and teaching (Higher) -- Social aspects, Science education -- Research, Minorities -- Education (Higher) -- United States
Abstract
As the field of chemistry education moves toward greater inclusion and increased participation by underrepresented minorities, standards for investigating the differential impacts and outcomes of learning environments must be considered. While quantitative methods may not capture the in-depth nuance that qualitative methods offer, they can provide meaningful insights when applied at the group level. Thus, when we conduct quantitative studies in which we aim to learn about the similarities or differences of groups within the same learning environment, we must raise our standards of measurement and safeguard against threats to the validity of inferences that might favor one group over another. One way to provide evidence that group comparisons are supported in a quantitative study is to conduct measurement invariance testing. In this manuscript, we explain the basic concepts of measurement invariance testing within a confirmatory factor analysis framework, with examples and a step-by-step tutorial. Each step is an opportunity to safeguard against interpreting group differences that may be artifacts of how the assessment instrument functions rather than true differences between groups. Reflecting on and safeguarding against threats to the validity of the inferences we draw from group comparisons will provide more accurate information that can be used to transform our chemistry classrooms into more socially inclusive environments. To catalyze this effort, we provide code in the ESI for two software packages (R and Mplus) so that interested readers can learn these methods with the simulated data provided and then apply them to their own data. Finally, we present implications and a summary table for researchers, practitioners, journal editors, and reviewers as a reference when conducting, reading, or reviewing quantitative studies in which group comparisons are performed.
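The measurement invariance testing described in the abstract conventionally proceeds through nested confirmatory factor analysis models (configural, then metric, then scalar). The sketch below illustrates that sequence in R with the lavaan package; it is not the authors' ESI code, and it substitutes lavaan's built-in HolzingerSwineford1939 dataset and its "school" grouping variable for real assessment data.

```r
# Illustrative measurement invariance sequence in lavaan (not the authors' ESI code).
# HolzingerSwineford1939 ships with lavaan; "school" is the grouping variable.
library(lavaan)

model <- 'visual =~ x1 + x2 + x3'  # one-factor CFA for three item scores

# Step 1: configural model -- same factor structure in both groups, parameters free
fit_configural <- cfa(model, data = HolzingerSwineford1939, group = "school")

# Step 2: metric (weak) invariance -- factor loadings constrained equal across groups
fit_metric <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = "loadings")

# Step 3: scalar (strong) invariance -- loadings and intercepts constrained equal
fit_scalar <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = c("loadings", "intercepts"))

# Compare nested models; a non-significant chi-square difference (or a negligible
# change in CFI) supports the added constraints, licensing group comparisons at
# that level of invariance
anova(fit_configural, fit_metric, fit_scalar)
```

Each step adds equality constraints across groups; if fit does not degrade meaningfully at the scalar level, observed mean differences can more defensibly be attributed to the groups rather than to the instrument.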
Rights
© The Royal Society of Chemistry 2020
This is the author’s version of a work that was subsequently published in Chemistry Education Research and Practice. Changes resulting from the publishing process, such as editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Chemistry Education Research and Practice, 2020, 21, 969-988, and can be found online at: https://doi.org/10.1039/D0RP00025F.
DOI
10.1039/d0rp00025f
Persistent Identifier
https://archives.pdx.edu/ds/psu/33975
Citation Details
Published as Rocabado, G. A., Komperda, R., Lewis, J. E., & Barbera, J. (2020). Addressing diversity and inclusion through group comparisons: a primer on measurement invariance testing. Chemistry Education Research and Practice, 21(3), 969–988. https://doi.org/10.1039/D0RP00025F
Electronic Supplementary Information