First Advisor

Gerasimos Fergadiotis

Date of Award

Spring 5-12-2024

Document Type

Thesis

Degree Name

Bachelor of Science (B.S.) in Speech and Hearing Sciences and University Honors

Department

Speech and Hearing Sciences

Language

English

Subjects

stroke, apraxia of speech, pairwise variability index, reliability, generalizability theory

DOI

10.15760/honors.1482

Abstract

Differential diagnosis of apraxia of speech (AOS) from other speech and language disorders is a prevailing challenge in the field of speech-language pathology. Existing measures for assessing AOS consist primarily of perceptual rating scales, such as the Apraxia of Speech Rating Scale developed by Duffy, Strand, Clark, and Josephs. While perceptual rating scales are the current gold standard for assessment and diagnosis of AOS, they require a high degree of clinical expertise and are susceptible to rater bias and rater drift. Thus, there is a need for quantitative, objective measures for differential diagnosis of AOS. The purpose of this study was to determine the reliability of the pairwise variability index (PVI), an acoustically derived metric, for differential diagnosis of AOS. This study examined speech samples from 17 Australian-English speakers with post-stroke aphasia with or without AOS. After PVI was calculated for all participants' productions, PVI measurement variability was assessed using a linear mixed-effects model estimated in R. The fixed effect was the intercept, and the random effects included the individual raters, the participants, the interaction between participant and item, and the interaction among participant, item, and iteration. A G-theory coefficient was then calculated to determine the generalizability of the results, with a larger G-coefficient corresponding to a smaller degree of error. G-theory analysis produced a G-coefficient of .725, which suggests that the PVI measurements were moderately reliable but showed room for improvement. Notably, our analysis found that a negligible amount (4.79e-11%) of variance was attributable to raters, a result that corroborates existing research regarding the possible utility and applicability of PVI in clinical settings.
Due to the relatively small data sample for this study, we suggest that future researchers compile a more comprehensive dataset and conduct D-studies to improve overall research design. We also encourage future studies to investigate the automation of PVI calculation, which could lessen the reliance on clinical expertise and reduce rater bias and rater drift. Overall, PVI was found to have diagnostic potential alongside perceptual rating scales in clinical settings.
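To illustrate the G-theory calculation described above, the following is a minimal sketch of how a relative generalizability (G) coefficient is computed from estimated variance components: the object-of-measurement variance divided by itself plus the relative error variance, where each error component is divided by the number of conditions it is averaged over. The variance values and level counts below are hypothetical placeholders for demonstration, not the study's actual estimates, and the sketch is in Python rather than the R used in the study.

```python
# Sketch of a relative G-coefficient calculation from variance components.
# All numeric values below are hypothetical, for illustration only.

def g_coefficient(var_object, error_components):
    """Relative G coefficient.

    var_object: variance for the object of measurement (e.g., participants).
    error_components: list of (variance, n_levels) pairs; each error
        variance is divided by the number of conditions averaged over.
    """
    relative_error = sum(v / n for v, n in error_components)
    return var_object / (var_object + relative_error)

# Hypothetical variance components (not the study's estimates):
g = g_coefficient(
    var_object=2.0,
    error_components=[
        (0.001, 3),  # raters (negligible, echoing the study's finding)
        (1.5, 10),   # participant x item interaction
        (0.9, 20),   # participant x item x iteration (residual)
    ],
)
print(round(g, 3))  # prints 0.911
```

A D-study, as suggested above, would rerun this calculation with different hypothetical numbers of raters, items, and iterations to find a design that raises the coefficient.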

Persistent Identifier

https://archives.pdx.edu/ds/psu/41838
