
Efficient learning of Scale-Adaptive Nearly Affine Invariant Networks

Z. Shen, Y. Qiu, J. Liu, L. He, Z. Lin

. 2024 ; 174 (-) : 106229. [pub] 20240311

Language: English   Country: United States

Document type: journal articles

Persistent link   https://www.medvik.cz/link/bmc24013751

Recent research has demonstrated the significance of incorporating invariance into neural networks. However, existing methods require direct sampling over the entire transformation set, which is notably computationally taxing for large groups like the affine group. In this study, we propose a more efficient approach by addressing the invariances of the subgroups within a larger group. For tackling affine invariance, we split it into the Euclidean group E(n) and the uni-axial scaling group US(n), handling each subgroup's invariance individually. We employ an E(n)-invariant model for E(n)-invariance and average model outputs over data augmented from a US(n) distribution for US(n)-invariance. Our method maintains a favorable computational complexity of O(N²) in 2D and O(N⁴) in 3D scenarios, in contrast to the O(N⁶) (2D) and O(N¹²) (3D) complexities of averaged models. Crucially, the scale range for augmentation adapts during training to avoid excessive scale invariance. This is the first time nearly exact affine invariance is incorporated into neural networks without directly sampling the entire group. Extensive experiments unequivocally confirm its superiority, achieving new state-of-the-art results in affNIST and SIM2MNIST classifications while consuming less than 15% of the inference time and fewer computational resources and model parameters compared to averaged models.
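The US(n)-averaging step described in the abstract can be sketched roughly as follows. This is not the authors' code: `uniaxial_scale`, `scale_averaged_predict`, the nearest-neighbour resampler, and the default scale range are all hypothetical illustrations of "average model outputs over data augmented from a US(n) distribution" in 2D.

```python
import numpy as np

def uniaxial_scale(image, sx):
    """Rescale a 2D image along one axis by factor sx (minimal
    nearest-neighbour resampler onto the original grid; a real
    implementation would interpolate and handle borders properly)."""
    h, w = image.shape
    cols = np.clip((np.arange(w) / sx).astype(int), 0, w - 1)
    return image[:, cols]

def scale_averaged_predict(model, image, scale_range=(0.8, 1.25),
                           n_samples=8, rng=None):
    """Approximate US(2)-invariance by averaging the model's outputs
    over images augmented with uni-axial scalings drawn uniformly
    from `scale_range` (the adaptive range in the paper is learned
    during training; here it is just a fixed interval)."""
    rng = np.random.default_rng(rng)
    outputs = []
    for _ in range(n_samples):
        sx = rng.uniform(*scale_range)
        outputs.append(model(uniaxial_scale(image, sx)))
    return np.mean(outputs, axis=0)
```

The E(n)-invariant backbone itself is not sketched here; any E(n)-invariant `model` callable could be plugged in, with averaging handling only the residual uni-axial scaling.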

Citation data provided by Crossref.org

000      
00000naa a2200000 a 4500
001      
bmc24013751
003      
CZ-PrNML
005      
20240905134042.0
007      
ta
008      
240725e20240311xxu f 000 0|eng||
009      
AR
024    7_
$a 10.1016/j.neunet.2024.106229 $2 doi
035    __
$a (PubMed)38490114
040    __
$a ABA008 $b cze $d ABA008 $e AACR2
041    0_
$a eng
044    __
$a xxu
100    1_
$a Shen, Zhengyang $u Baidu Inc, Beijing, 100871, China
245    10
$a Efficient learning of Scale-Adaptive Nearly Affine Invariant Networks / $c Z. Shen, Y. Qiu, J. Liu, L. He, Z. Lin
520    9_
$a Recent research has demonstrated the significance of incorporating invariance into neural networks. However, existing methods require direct sampling over the entire transformation set, which is notably computationally taxing for large groups like the affine group. In this study, we propose a more efficient approach by addressing the invariances of the subgroups within a larger group. For tackling affine invariance, we split it into the Euclidean group E(n) and the uni-axial scaling group US(n), handling each subgroup's invariance individually. We employ an E(n)-invariant model for E(n)-invariance and average model outputs over data augmented from a US(n) distribution for US(n)-invariance. Our method maintains a favorable computational complexity of O(N²) in 2D and O(N⁴) in 3D scenarios, in contrast to the O(N⁶) (2D) and O(N¹²) (3D) complexities of averaged models. Crucially, the scale range for augmentation adapts during training to avoid excessive scale invariance. This is the first time nearly exact affine invariance is incorporated into neural networks without directly sampling the entire group. Extensive experiments unequivocally confirm its superiority, achieving new state-of-the-art results in affNIST and SIM2MNIST classifications while consuming less than 15% of the inference time and fewer computational resources and model parameters compared to averaged models.
650    12
$a neuronové sítě $7 D016571
650    12
$a učení $7 D007858
655    _2
$a časopisecké články $7 D016428
700    1_
$a Qiu, Yeqing $u Shenzhen Research Institute of Big Data, Shenzhen, 518172, China; School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 518172, China
700    1_
$a Liu, Jialun $u Baidu Inc, Beijing, 100871, China
700    1_
$a He, Lingshen $u National Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, Beijing, 100871, China
700    1_
$a Lin, Zhouchen $u National Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University, Beijing, 100871, China; Institute for Artificial Intelligence, Peking University, Peking, 100871, China; Peng Cheng Laboratory, Shenzhen, 518000, China. Electronic address: zlin@pku.edu.cn
773    0_
$w MED00011811 $t Neural networks $x 1879-2782 $g Roč. 174 (20240311), s. 106229
856    41
$u https://pubmed.ncbi.nlm.nih.gov/38490114 $y Pubmed
910    __
$a ABA008 $b sig $c sign $y - $z 0
990    __
$a 20240725 $b ABA008
991    __
$a 20240905134036 $b ABA008
999    __
$a ok $b bmc $g 2143513 $s 1225617
BAS    __
$a 3
BAS    __
$a PreBMC-MEDLINE
BMC    __
$a 2024 $b 174 $c - $d 106229 $e 20240311 $i 1879-2782 $m Neural networks $n Neural Netw $x MED00011811
LZP    __
$a Pubmed-20240725
