Abstract:
I propose and evaluate the first theory and computational model that explicitly describes the cognitive constraints, in the form of relational architectural constraints, that produce adversarial robustness in machine vision. The theory introduces a new form of Graph Neural Network (GNN), called an asymptotic GNN, that uses a non-linearity with a vertical asymptote to constrain where the network is sensitive and insensitive to variations in its substructure. This approach yields a new invariance for artificial neural networks, analogous to the translational invariance of convolutional layers. I call this type of invariance ``substructure invariance'' and show how it produces adversarial robustness in machine vision. To support these claims, I first provide a detailed analysis of the two core components of the model: the asymptotic barrier function and the learning branch. I then compare the asymptotic GNN to four state-of-the-art models on the MNIST dataset: one baseline and three adversarially robust models. The asymptotic GNN outperforms all four models on gradient-based adversarial attacks, namely the DeepFool attack and the Momentum Iterative Method, and performs comparably to the state of the art on the decision-based PointWise attack. This establishes the asymptotic GNN as a viable candidate for adversarially robust modeling. In subsequent analyses, I show that ablations of the core asymptotic barrier function and learning branch distinctly and differentially impair the adversarial robustness of the asymptotic GNN. Finally, I vary the basic structure of the model to demonstrate that the PointWise attack is symptomatic of a fundamentally different type of problem than substructure invariance, namely a feedback-based denoising operation.
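The central mechanism, a non-linearity with a vertical asymptote, can be illustrated with a minimal sketch. The abstract does not specify the functional form, so the sketch below assumes a negative-log barrier, -log(a - x), which diverges as its input approaches the asymptote a from below; the function name, the choice of barrier, and the clamping constant are illustrative assumptions, not the author's implementation.

```python
import torch

def asymptotic_barrier(x: torch.Tensor, asymptote: float = 1.0,
                       eps: float = 1e-6) -> torch.Tensor:
    """Hypothetical barrier non-linearity with a vertical asymptote.

    The output diverges as x approaches `asymptote` from below, so the
    unit is highly sensitive to inputs near the barrier and relatively
    insensitive to inputs far from it: one way a network could be made
    selectively sensitive to variations in its substructure.
    """
    # Clamp just below the asymptote so the output and gradient stay finite.
    x = torch.clamp(x, max=asymptote - eps)
    return -torch.log(asymptote - x)

# Outputs grow slowly far from the barrier and blow up near it.
h = torch.tensor([0.0, 0.5, 0.9, 0.99])
print(asymptotic_barrier(h))  # tensor([0.0000, 0.6931, 2.3026, 4.6052])
```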