Published In

Issues in Accounting Education

Document Type

Post-Print

Publication Date

11-2023

Subjects

Artificial intelligence, Machine learning

Abstract

ChatGPT, a language-learning model chatbot, has garnered considerable attention for its ability to respond to users’ questions. Using data from 14 countries and 186 institutions, we compare ChatGPT and student performance for 28,085 questions from accounting assessments and textbook test banks. As of January 2023, ChatGPT provides correct answers for 56.5 percent of questions and partially correct answers for an additional 9.4 percent of questions. When considering point values for questions, students significantly outperform ChatGPT with a 76.7 percent average on assessments compared to 47.5 percent for ChatGPT if no partial credit is awarded and 56.5 percent if partial credit is awarded. Still, ChatGPT performs better than the student average for 15.8 percent of assessments when we include partial credit. We provide evidence of how ChatGPT performs on different question types, accounting topics, class levels, open/closed assessments, and test bank questions. We also discuss implications for accounting education and research.

Rights

© Copyright the author(s) 2023

Description

This is the author’s version of a work that was accepted for publication. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Issues in Accounting Education.

DOI

10.2308/ISSUES-2023-013

Persistent Identifier

https://archives.pdx.edu/ds/psu/41096

Included in

Business Commons
