Vulnerability of classifiers to evolutionary generated adversarial examples
Language: English; Country: United States; Medium: print-electronic
Document type: journal article
PubMed: 32361547
DOI: 10.1016/j.neunet.2020.04.015
PII: S0893-6080(20)30135-0
- Keywords
- Adversarial examples, Genetic algorithms, Kernel methods, Neural networks, Supervised learning
- MeSH
- algorithms MeSH
- humans MeSH
- neural networks * MeSH
- supervised machine learning * trends MeSH
- pattern recognition, automated methods trends MeSH
- machine learning trends MeSH
- Check Tag
- humans MeSH
- Publication type
- journal article MeSH
This paper deals with the vulnerability of machine learning models to adversarial examples and its implications for their robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any machine learning model in the black-box attack scenario: adversarial examples are found without access to the model's parameters, only by querying the model at hand. We have tested a range of machine learning models, including deep and shallow neural networks. Our experiments show that vulnerability to adversarial examples is not a problem of deep networks alone but spreads across various machine learning architectures; rather, it depends on the type of computational units. Local units, such as Gaussian kernels, are less vulnerable to adversarial examples.
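The abstract describes the attack only at a high level. For orientation, the sketch below shows what a black-box evolutionary attack of this kind can look like; the `model_predict` wrapper, the fitness function, the genetic operators, and all hyperparameters are illustrative assumptions, not the authors' implementation from the paper.

```python
import numpy as np

# Hypothetical black-box interface: returns class probabilities for a batch
# of inputs. Any real classifier (e.g. a scikit-learn model's predict_proba)
# could be wrapped here; the attack never reads the model's parameters.
def model_predict(x_batch):
    raise NotImplementedError("wrap your classifier's probability output here")

def evolve_adversarial(x, true_label, pop_size=50, generations=200,
                       eps=0.1, mutation_rate=0.05, rng=None):
    """Evolve a small perturbation of `x` that lowers the model's confidence
    in `true_label`, using only black-box queries (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    # Population of perturbations, each component bounded to [-eps, eps].
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)

    for _ in range(generations):
        candidates = np.clip(x + pop, 0.0, 1.0)
        probs = model_predict(candidates)        # black-box queries
        fitness = probs[:, true_label]           # minimize true-class probability

        if fitness.min() < 0.5:                  # simple stopping heuristic
            break

        # Selection: keep the better half of the population (lower is better).
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]

        # Uniform crossover between randomly paired parents.
        idx_a = rng.integers(0, len(parents), size=pop_size)
        idx_b = rng.integers(0, len(parents), size=pop_size)
        mask = rng.random((pop_size,) + x.shape) < 0.5
        pop = np.where(mask, parents[idx_a], parents[idx_b])

        # Gaussian mutation on a random subset of genes, re-clipped to the bound.
        mutate = rng.random(pop.shape) < mutation_rate
        pop = np.clip(pop + mutate * rng.normal(0.0, eps / 4, pop.shape),
                      -eps, eps)

    # Return the candidate that most reduces the true-class probability.
    final = np.clip(x + pop, 0.0, 1.0)
    best = np.argmin(model_predict(final)[:, true_label])
    return final[best]
```

In use, one would wrap a trained classifier's probability output in `model_predict` and call `evolve_adversarial(x, y)` on a correctly classified input `x` with label `y`; because only queries are needed, the same loop applies equally to deep networks, shallow networks, and kernel models, which is what makes the black-box setting suitable for the cross-architecture comparison the abstract reports.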
Citation provided by Crossref.org