Title

Reliable Explanations via Adversarial Examples on Robust Networks

Location

Portland State University

Start Date

May 7, 2019 11:00 AM

End Date

May 7, 2019 1:00 PM

Abstract

Neural Networks (NNs) are increasingly used as the basis of advanced machine learning techniques in sensitive fields such as autonomous vehicles and medical imaging. However, NNs have been found to be vulnerable to a class of imperceptible attacks, called adversarial examples, which can arbitrarily alter the output of the network. To close the gap between the reliability demanded by real-world applications and the fragility of NNs, we propose a new method for stabilizing networks, and show that, as an added benefit, our technique yields reliable, high-fidelity explanations for the NN's decisions. Compared to the state of the art, this technique increased the area under the curve of accuracy versus the root-mean-squared error (RMSE) of allowed attacks by a factor of 1.8, and we demonstrate that it enables new Human-in-the-Loop (HITL) training techniques for NNs. On medical imaging, we show that our technique produces explanations that are significantly more sensible to a human operator than those from previously proposed algorithms. The combination of increased network robustness and the ability to demonstrate decision boundaries to a human observer should pave the way for greatly improved HITL decision processes in future work.
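For context, the robustness metric quoted above can be read as the area under an accuracy-versus-perturbation curve: attack the network at a range of perturbation budgets, record the accuracy and the root-mean-squared error of each perturbation, and integrate. The sketch below is only an illustration of that evaluation, not the authors' method; it uses a generic one-step FGSM attack on a toy PyTorch model, and the model, data, and budgets are placeholders.

# Hypothetical sketch: accuracy vs. RMSE of allowed attacks, summarized by AUC.
# The attack (FGSM) and the toy model are stand-ins for illustration only.
import numpy as np
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    # One-step FGSM perturbation with step size eps (illustrative attack).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + float(eps) * x_adv.grad.sign()).detach()

def accuracy_vs_rmse(model, x, y, eps_grid):
    # Accuracy under attack at each budget, paired with the measured RMSE.
    rmses, accs = [], []
    for eps in eps_grid:
        x_adv = fgsm_attack(model, x, y, eps) if eps > 0 else x
        rmses.append(torch.sqrt(torch.mean((x_adv - x) ** 2)).item())
        accs.append((model(x_adv).argmax(dim=1) == y).float().mean().item())
    return np.array(rmses), np.array(accs)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))
    rmses, accs = accuracy_vs_rmse(model, x, y, eps_grid=np.linspace(0.0, 0.5, 11))
    # Trapezoidal area under the accuracy-vs-RMSE curve.
    auc = float(np.sum(0.5 * (accs[1:] + accs[:-1]) * np.diff(rmses)))
    print("AUC of accuracy vs. RMSE:", auc)

Under this reading, a more robust network keeps its accuracy high as the attack's RMSE grows, which shows up as a larger area under the curve.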
