First Advisor

Nirupama Bulusu

Date of Award

Spring 6-2024

Document Type

Thesis

Degree Name

Bachelor of Science (B.S.) in Computer Science and University Honors

Department

Computer Science

Language

English

Subjects

Adversarial Attacks, Machine Learning Robustness, Adversarial Training, Model Resilience

DOI

10.15760/honors.1581

Abstract

Machine learning models are integral to numerous applications, yet they are increasingly vulnerable to adversarial attacks. These attacks subtly manipulate input data to deceive models, posing a critical threat to their dependability and security. This thesis addresses the need to strengthen models against such adversarial attacks. Prior research has primarily focused on identifying specific types of adversarial attacks against a limited range of ML algorithms, leaving a gap in the evaluation of model resilience across algorithms and in the development of effective defense mechanisms. To bridge this gap, this work adopts a two-phase approach. First, it simulates attacks such as the Basic Iterative Method (BIM), DeepFool, and the Fast Gradient Sign Method (FGSM) on common ML models trained on the MNIST and CIFAR-10 image datasets. The thesis then discusses defensive strategies that reduce a model's sensitivity to input changes, improving its resilience against attacks. The findings are intended to help ML researchers develop more secure and robust ML systems.
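
To illustrate one of the attacks named above, the following is a minimal sketch of the Fast Gradient Sign Method in PyTorch. The function name, model, and epsilon value are illustrative assumptions, not the thesis's actual experimental configuration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.1):
    # Hypothetical FGSM sketch: perturb each input along the sign of the
    # loss gradient with step size epsilon (an assumed value, not from the thesis).
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step by epsilon in the gradient-sign direction and clamp to the valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()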

Persistent Identifier

https://archives.pdx.edu/ds/psu/42187
