Defense: {Bandlimiting Neural Networks Against Adversarial Attacks}
Write-up: {https://arxiv.org/abs/1912.00049}
Authors: {Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein}
Code: {https://github.com/max-andr/square-attack/}
Does the code implement the robust-ml API and include pre-trained models: {yes}
Claims: {Under the Linf threat model with eps = 8/255, these models have 15.8% adversarial accuracy on CIFAR-10 (evaluated on 1,000 points) and 0.4% adversarial accuracy on ImageNet (evaluated on 1,000 points). The results were obtained with the Square Attack, which is based on random search. Details can be found in Section 5.2 of our paper (see "Breaking the post-averaging defense").}
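
For illustration, below is a minimal sketch of an Linf random-search attack in the spirit of the Square Attack: each iteration perturbs one randomly placed square of the image by +/- eps per channel and keeps the change only if a margin loss decreases. This is an assumption-laden simplification (fixed square size, no square-size schedule, illustrative helper names such as `predict` and `margin_loss`), not the authors' implementation; see the linked repository for the actual attack.

```python
import numpy as np

def margin_loss(logits, y):
    # Margin of the true class over the best other class; < 0 means misclassified.
    correct = logits[np.arange(len(y)), y]
    logits = logits.copy()
    logits[np.arange(len(y)), y] = -np.inf
    return correct - logits.max(axis=1)

def square_attack_linf(predict, x, y, eps=8/255, n_iters=1000, p_init=0.05):
    """Sketch of a random-search Linf attack.

    predict: function mapping an [n, h, w, c] batch in [0, 1] to logits.
    """
    n, h, w, c = x.shape
    # Initialization with vertical +/- eps stripes, as described in the paper.
    x_adv = np.clip(x + eps * np.random.choice([-1, 1], size=(n, 1, w, c)), 0, 1)
    loss = margin_loss(predict(x_adv), y)
    s = max(1, int(round(np.sqrt(p_init * h * w))))  # fixed square size in this sketch
    for _ in range(n_iters):
        x_new = x_adv.copy()
        for idx in np.where(loss >= 0)[0]:  # only touch points not yet misclassified
            r = np.random.randint(0, h - s + 1)
            cl = np.random.randint(0, w - s + 1)
            delta = eps * np.random.choice([-1, 1], size=(1, 1, c))
            # Replacing the window with x +/- eps keeps the Linf constraint satisfied.
            x_new[idx, r:r+s, cl:cl+s] = np.clip(x[idx, r:r+s, cl:cl+s] + delta, 0, 1)
        loss_new = margin_loss(predict(x_new), y)
        improved = loss_new < loss
        x_adv[improved] = x_new[improved]
        loss[improved] = loss_new[improved]
    return x_adv  # adversarial accuracy = mean(margin_loss(predict(x_adv), y) >= 0)
```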