This paper deals with the vulnerability of machine learning models to adversarial examples and its implications for the models' robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any machine learning model in the black-box attack scenario, i.e., it finds adversarial examples without access to the model's parameters, only by querying the model at hand. We have tested a range of machine learning models, including deep and shallow neural networks. Our experiments show that vulnerability to adversarial examples is not only a problem of deep networks but spreads across various machine learning architectures; rather, it depends on the type of computational units. Local units, such as Gaussian kernels, are less vulnerable to adversarial examples.
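The abstract does not spell out the evolutionary procedure, so the following is only a minimal sketch of one plausible black-box attack of this kind: a simple truncation-selection evolution strategy that perturbs an input and queries a hypothetical `predict_proba` callable (the victim model) to drive down the true-class probability. All names, hyperparameters, and the toy victim classifier are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def evolve_adversarial(predict_proba, x_orig, true_label,
                       pop_size=20, n_gens=200, sigma=0.05, seed=None):
    """Evolve a perturbation that lowers the queried model's confidence
    in `true_label`, using only black-box queries (no gradients)."""
    rng = np.random.default_rng(seed)
    # Population of small random perturbations around the clean input.
    pop = rng.normal(0.0, sigma, size=(pop_size,) + x_orig.shape)
    for _ in range(n_gens):
        candidates = np.clip(x_orig + pop, 0.0, 1.0)
        # Fitness = true-class probability; lower is fitter.
        fitness = np.array([predict_proba(c)[true_label] for c in candidates])
        best = candidates[fitness.argmin()]
        if predict_proba(best).argmax() != true_label:
            return best                      # misclassification achieved
        # Truncation selection: keep the better half of the population ...
        elite = pop[np.argsort(fitness)[:pop_size // 2]]
        # ... and refill it by mutating the survivors with Gaussian noise.
        pop = np.concatenate([elite, elite + rng.normal(0.0, sigma, elite.shape)])
    return best                              # best attempt found

# Toy victim: a fixed linear softmax classifier standing in for any
# queried model -- the attack never touches its weights W.
W = np.random.default_rng(0).normal(size=(3, 4))
def predict_proba(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.full(4, 0.5)
x_adv = evolve_adversarial(predict_proba, x, int(predict_proba(x).argmax()), seed=1)
```

Because fitness is computed purely from query outputs, the same loop applies unchanged to any classifier exposing class probabilities, which is what makes such an attack model-agnostic.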
- MeSH
- Algorithms MeSH
- Humans MeSH
- Neural Networks, Computer * MeSH
- Supervised Machine Learning * trends MeSH
- Pattern Recognition, Automated methods trends MeSH
- Machine Learning trends MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH