Neural networks (Computer science) -- Security measures, Genetic algorithms, Neural networks (Computer science) -- Image processing
Neural networks provide state-of-the-art accuracy for image classification tasks. However, traditional networks are highly susceptible to imperceptible input perturbations, known as adversarial attacks, that drastically change the resulting output. The magnitude of these perturbations can be measured as mean squared error (MSE). We use genetic algorithms to produce black-box adversarial attacks and examine their MSE on state-of-the-art networks. This method generates an attack that converts 90% confidence in the correct class to 50% confidence in a targeted, incorrect class after 2000 epochs. We will generate attacks against several sparse neural networks and examine their MSE. We theorize that there exists a sparse architecture for image classification that reduces the input image space and will therefore increase the MSE required for a classification change. Our work is relevant to security-dependent applications of neural networks, low-power high-performance architectures, and systems architectures.
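The approach described above can be illustrated with a minimal sketch: a genetic algorithm evolves small perturbations, queries only the model's output probabilities (black-box, no gradients), and the attack's magnitude is reported as MSE. The model here (`toy_model`), population size, mutation scale, and selection scheme are all assumptions for illustration, not the authors' actual setup.

```python
import numpy as np

def mse(original, perturbed):
    """Mean squared error between the clean and perturbed images."""
    return float(np.mean((original - perturbed) ** 2))

def toy_model(image):
    """Stand-in black-box classifier (assumption: any callable returning
    class probabilities works). Returns [p(correct), p(target)]."""
    score = image.mean() - image.std()
    p = 1.0 / (1.0 + np.exp(-score))
    return np.array([p, 1.0 - p])

def genetic_attack(image, target_class, pop_size=20, generations=200,
                   sigma=0.05, rng=None):
    """Black-box GA: evolve perturbations that raise the target-class
    probability, using only model outputs as the fitness signal."""
    rng = rng or np.random.default_rng(0)
    pop = [rng.normal(0, sigma, image.shape) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [toy_model(np.clip(image + d, 0, 1))[target_class]
                   for d in pop]
        order = np.argsort(fitness)[::-1]
        elites = [pop[i] for i in order[: pop_size // 4]]
        # Elitism plus Gaussian mutation refills the population.
        children = [e + rng.normal(0, sigma / 4, image.shape)
                    for e in (elites * pop_size)][: pop_size - len(elites)]
        pop = elites + children
    best = max(pop,
               key=lambda d: toy_model(np.clip(image + d, 0, 1))[target_class])
    return np.clip(image + best, 0, 1)
```

In this sketch, each generation's best perturbations survive unchanged while mutated copies explore nearby perturbations; `mse(image, adversarial)` then measures how large a change the attack needed, which is the quantity the abstract proposes to compare across dense and sparse architectures.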
Chen, Jack H. and Woods, Walt, "Generating Adversarial Attacks for Sparse Neural Networks" (2018). Undergraduate Research & Mentoring Program. 31.