Human Versus Machine: Investigating L2 Learner Output in Face-To-Face Versus Fully Automated Role-Plays

Published In

Computer Assisted Language Learning

Document Type

Citation

Publication Date

1-22-2022

Abstract

To examine the utility of spoken dialog systems (SDSs) for learning and low-stakes assessment, we administered the same role-play task in two different modalities to a group of 47 tertiary-level learners of English. Each participant completed the task in an SDS setting with a fully automated agent and engaged in the same task with a human interlocutor in a face-to-face format. Additionally, we gauged students’ perceptions of the two delivery formats. Elicited oral performances were examined for linguistic complexity (syntactic complexity, lexical variety, fluency) and pragmatic functions (number and type of requests). Learner performance data across the two delivery modes were comparable, although learners spoke slightly longer in the SDS task and used significantly more turns in the face-to-face setting, a finding that may be due to participants deploying more social rapport-building moves, clarification requests, and backchanneling. The attitudinal data indicate that, while many learners liked both delivery formats, there was a slight preference for the face-to-face format, mainly due to the presence of body language. Overall, results show that fully automated SDS tasks may constitute a feasible alternative to face-to-face role-plays. Nevertheless, when possible, learners should be given a choice in task format for both learning and assessment.

Rights

Copyright 2022 Taylor & Francis

DOI

10.1080/09588221.2022.2032184

Persistent Identifier

https://archives.pdx.edu/ds/psu/39012
