Attacking Transformer-Based Models for Arabic as a Low-Resource Language (LRL) Using Word-Substitution Methods

Published In

2023 Fifth International Conference on Transdisciplinary AI (TransAI)

Document Type

Citation

Publication Date

2023

Abstract

Transformer-based models achieve high performance on a variety of natural language processing (NLP) tasks, such as sentiment analysis, information extraction, and text classification, and they provide state-of-the-art solutions for most problems NLP faces today. It is therefore important to examine their effectiveness and robustness. To do so, researchers have attacked these models with adversarial examples: small perturbations of the actual inputs designed to mislead the models and expose their vulnerabilities. However, most of this work has targeted Transformer-based models trained on high-resource corpora such as English. In this paper, we examine Transformer-based models for low-resource languages (LRLs) using word-substitution attack algorithms on text classification tasks in Arabic, as a representative LRL. Our models were successfully fooled by two word-substitution methods, called Random and Prioritized. In our experiments, the accuracy of the two attacked Transformer-based models dropped by 7% and 10% under the Random attack, and by 44% and 40% under the Prioritized attack.
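To illustrate the difference between the two attack strategies the abstract names, the Python sketch below contrasts a random word-substitution attack with a prioritized one. This is a rough illustration under stated assumptions, not the authors' implementation: `predict_label`, the `synonyms` dictionary, and the `importance` scores are all hypothetical stand-ins.

```python
# Minimal sketch of Random vs. Prioritized word-substitution attacks.
# Assumptions (not from the paper): `model` is a callable mapping a text
# string to a label; `synonyms` maps a word to candidate replacements;
# `importance` gives a precomputed score per word position (e.g., the
# confidence drop when that word is removed).
import random

def predict_label(model, words):
    """Hypothetical wrapper: classify the joined text and return its label."""
    return model(" ".join(words))

def random_attack(model, words, synonyms, max_swaps=5):
    """Substitute randomly chosen words until the predicted label flips."""
    original = predict_label(model, words)
    words = list(words)
    for _ in range(max_swaps):
        i = random.randrange(len(words))
        candidates = synonyms.get(words[i], [])
        if candidates:
            words[i] = random.choice(candidates)
            if predict_label(model, words) != original:
                return words  # attack succeeded: label changed
    return None  # attack failed within the swap budget

def prioritized_attack(model, words, synonyms, importance, max_swaps=5):
    """Substitute the highest-importance words first."""
    original = predict_label(model, words)
    words = list(words)
    # Visit positions in descending order of importance.
    order = sorted(range(len(words)), key=lambda i: -importance[i])
    for i in order[:max_swaps]:
        candidates = synonyms.get(words[i], [])
        if candidates:
            words[i] = candidates[0]
            if predict_label(model, words) != original:
                return words
    return None
```

Prioritizing influential words typically flips the label with fewer substitutions than random selection, which is consistent with the larger accuracy drops the abstract reports for the Prioritized attack.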

DOI

10.1109/TransAI60598.2023.00025

Persistent Identifier

https://archives.pdx.edu/ds/psu/41276

Publisher

IEEE
