Undergraduate Research & Mentoring Program


Neural networks (Computer science) -- Security measures, Genetic algorithms, Neural networks (Computer science) -- Image processing


Neural networks provide state-of-the-art accuracy for image classification tasks. However, traditional networks are highly susceptible to imperceptible perturbations to their inputs, known as adversarial attacks, that drastically change the resulting output. The magnitude of these perturbations can be measured as mean squared error (MSE). We use genetic algorithms to produce black-box adversarial attacks and examine their MSE on state-of-the-art networks. This method generates an attack that converts 90% confidence in the correct class to 50% confidence in a targeted, incorrect class after 2000 epochs. We will generate attacks against several sparse neural networks and examine their MSE. We theorize that there exists a sparse architecture for image classification that reduces the input image space, and that this architecture will therefore increase the MSE required to change a classification. Our work is relevant to security-dependent applications of neural networks, low-power high-performance architectures, and systems architectures.
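A minimal sketch of the approach described above: a genetic algorithm evolves additive perturbations to an input, querying only the model's output probabilities (black-box), with fitness rewarding target-class confidence and penalizing perturbation MSE. The classifier here is a toy softmax-over-linear stand-in, and all hyperparameters (population size, elite fraction, mutation scale) are illustrative assumptions, not the values used in the actual experiments.

```python
import numpy as np

def mse(a, b):
    # Mean squared error between original and perturbed image,
    # used as the measure of perturbation magnitude.
    return float(np.mean((a - b) ** 2))

# Hypothetical stand-in for a black-box classifier: the attack only
# queries predicted class probabilities, never gradients.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))  # toy "network": softmax over a linear map

def predict(x):
    logits = W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def genetic_attack(x0, target, pop_size=20, epochs=200, sigma=0.05):
    # Population of additive perturbations; fitness trades off
    # target-class confidence against perturbation MSE.
    pop = [sigma * rng.normal(size=x0.shape) for _ in range(pop_size)]
    for _ in range(epochs):
        fitness = [predict(x0 + p)[target] - 0.1 * mse(x0, x0 + p)
                   for p in pop]
        order = np.argsort(fitness)[::-1]
        elite = [pop[i] for i in order[:pop_size // 4]]  # selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.choice(len(elite), size=2, replace=True)
            mask = rng.random(x0.shape) < 0.5             # uniform crossover
            child = np.where(mask, elite[a], elite[b])
            child = child + sigma * 0.1 * rng.normal(size=x0.shape)  # mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=lambda p: predict(x0 + p)[target])
    return x0 + best, mse(x0, x0 + best)

x0 = rng.random((8, 8))          # toy 8x8 "image"
adv, err = genetic_attack(x0, target=3)
```

Against a real network, `predict` would be replaced by queries to the deployed model, and the returned MSE is the quantity compared across dense and sparse architectures.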


The authors acknowledge the support of the Semiconductor Research Corporation (SRC) Education Alliance (award # 2009-UR-2032G) and of the Maseeh College of Engineering and Computer Science (MCECS) through the Undergraduate Research and Mentoring Program (URMP).
