
Vulnerability of classifiers to evolutionary generated adversarial examples

P. Vidnerová, R. Neruda

Neural Networks. 2020 ; 127 (-) : 168-181. [pub] 20200420

Language English Country United States

Document type Journal Article

This paper deals with the vulnerability of machine learning models to adversarial examples and its implications for robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any machine learning model in the black-box attack scenario. This way, adversarial examples can be found without access to the model's parameters, only by querying the model at hand. We have tested a range of machine learning models, including deep and shallow neural networks. Our experiments show that vulnerability to adversarial examples is not a problem of deep networks alone; it spreads across various machine learning architectures and depends rather on the type of computational units. Local units, such as Gaussian kernels, are less vulnerable to adversarial examples.
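The black-box procedure described in the abstract can be illustrated with a minimal sketch: a simple evolutionary loop that perturbs an input and selects candidates by a fitness combining the queried prediction for a target class with the distance to the original input. This is not the authors' exact genetic algorithm; the predict function, the targeted fitness, the mutation scheme, and all parameter values below are illustrative assumptions.

    import numpy as np

    def evolve_adversarial(predict, x_orig, target_class, pop_size=50,
                           n_gens=200, sigma=0.05, mutation_rate=0.1):
        """Black-box evolutionary search for an adversarial example.

        `predict` is assumed to return class probabilities for a batch of
        inputs; only queries to it are used (no gradients, no parameters).
        """
        rng = np.random.default_rng(0)
        # Start the population as small random perturbations of the original.
        pop = x_orig + sigma * rng.standard_normal((pop_size,) + x_orig.shape)
        pop = np.clip(pop, 0.0, 1.0)

        for _ in range(n_gens):
            probs = predict(pop)  # query the black-box model
            dist = np.linalg.norm((pop - x_orig).reshape(pop_size, -1), axis=1)
            # Fitness: high target-class probability, low distance to the original.
            fitness = probs[:, target_class] - 0.01 * dist

            order = np.argsort(fitness)[::-1]
            parents = pop[order[: pop_size // 2]]  # truncation selection

            # Offspring: copy parents and apply Gaussian mutation to a random subset of entries.
            children = parents.copy()
            mask = rng.random(children.shape) < mutation_rate
            children += mask * sigma * rng.standard_normal(children.shape)
            pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)

        # Return the candidate the model is most confident about for the target class.
        return pop[np.argmax(predict(pop)[:, target_class])]

In use, `predict` would simply wrap a trained classifier's probability output (e.g. a scikit-learn `predict_proba` or a Keras `predict` call); only such queries are needed, which matches the black-box setting the paper studies.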


000      
00000naa a2200000 a 4500
001      
bmc20028066
003      
CZ-PrNML
005      
20210114152915.0
007      
ta
008      
210105s2020 xxu f 000 0|eng||
009      
AR
024    7_
$a 10.1016/j.neunet.2020.04.015 $2 doi
035    __
$a (PubMed)32361547
040    __
$a ABA008 $b cze $d ABA008 $e AACR2
041    0_
$a eng
044    __
$a xxu
100    1_
$a Vidnerová, Petra $u The Czech Academy of Sciences, Institute of Computer Science, Pod Vodárenskou věží 271/2, 182 07 Prague 8, Czechia. Electronic address: petra@cs.cas.cz.
245    10
$a Vulnerability of classifiers to evolutionary generated adversarial examples / $c P. Vidnerová, R. Neruda,
520    9_
$a This paper deals with the vulnerability of machine learning models to adversarial examples and its implications for robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any machine learning model in the black-box attack scenario. This way, adversarial examples can be found without access to the model's parameters, only by querying the model at hand. We have tested a range of machine learning models, including deep and shallow neural networks. Our experiments show that vulnerability to adversarial examples is not a problem of deep networks alone; it spreads across various machine learning architectures and depends rather on the type of computational units. Local units, such as Gaussian kernels, are less vulnerable to adversarial examples.
650    _2
$a Algorithms $7 D000465
650    _2
$a Humans $7 D006801
650    _2
$a Machine Learning $x trends $7 D000069550
650    12
$a Neural Networks, Computer $7 D016571
650    _2
$a Pattern Recognition, Automated $x methods $x trends $7 D010363
650    12
$a Supervised Machine Learning $x trends $7 D000069553
655    _2
$a Journal Article $7 D016428
700    1_
$a Neruda, Roman $u The Czech Academy of Sciences, Institute of Computer Science, Pod Vodárenskou věží 271/2, 182 07 Prague 8, Czechia. Electronic address: roman@cs.cas.cz.
773    0_
$w MED00011811 $t Neural networks : the official journal of the International Neural Network Society $x 1879-2782 $g Vol. 127, No. - (2020), p. 168-181
856    41
$u https://pubmed.ncbi.nlm.nih.gov/32361547 $y Pubmed
910    __
$a ABA008 $b sig $c sign $y a $z 0
990    __
$a 20210105 $b ABA008
991    __
$a 20210114152913 $b ABA008
999    __
$a ok $b bmc $g 1608401 $s 1119246
BAS    __
$a 3
BAS    __
$a PreBMC
BMC    __
$a 2020 $b 127 $c - $d 168-181 $e 20200420 $i 1879-2782 $m Neural networks $n Neural Netw $x MED00011811
LZP    __
$a Pubmed-20210105
