
Training much deeper spiking neural networks with a small number of time-steps

Q. Meng, S. Yan, M. Xiao, Y. Wang, Z. Lin, ZQ. Luo

Neural networks. 2022 ; 153 (-) : 254-268. [pub] 20220615

Language English Country United States of America

Document type journal articles

Persistent link   https://www.medvik.cz/link/bmc22024653

The Spiking Neural Network (SNN) is a promising energy-efficient neural architecture when implemented on neuromorphic hardware. The Artificial Neural Network (ANN) to SNN conversion method, which is the most effective SNN training method, has successfully converted moderately deep ANNs to SNNs with satisfactory performance. However, this method requires a large number of time-steps, which hurts the energy efficiency of SNNs. How to effectively convert a very deep ANN (e.g., more than 100 layers) to an SNN with a small number of time-steps remains a difficult task. To tackle this challenge, this paper makes the first attempt to propose a novel error analysis framework that takes both the "quantization error" and the "deviation error" into account, which come from the discretization of SNN dynamics (the neuron's coding scheme) and the inconstant input currents at intermediate layers, respectively. In particular, our theory reveals that the "deviation error" depends on both the spike threshold and the input variance. Based on our theoretical analysis, we further propose the Threshold Tuning and Residual Block Restructuring (TTRBR) method, which can convert very deep ANNs (>100 layers) to SNNs with negligible accuracy degradation while requiring only a small number of time-steps. With very deep networks, our TTRBR method achieves state-of-the-art (SOTA) performance on the CIFAR-10, CIFAR-100, and ImageNet classification tasks.
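For illustration only, here is a minimal Python sketch (not the authors' code) of the rate-coding view behind the "quantization error" mentioned in the abstract: an integrate-and-fire (IF) neuron with reset-by-subtraction, driven by a constant input current, approximates ReLU by its firing rate over T time-steps, so the output is quantized to multiples of v_th / T. The function name simulate_if and the parameter values are assumptions made for this sketch.

def simulate_if(current, v_th=1.0, T=32):
    """Average spike output of an IF neuron over T time-steps.

    With a constant input `current`, the firing rate (spikes * v_th / T)
    quantizes ReLU(current) to multiples of v_th / T, so the
    "quantization error" shrinks as T grows.
    """
    v, spikes = 0.0, 0
    for _ in range(T):
        v += current              # integrate the input current
        if v >= v_th:             # fire, then reset by subtraction
            spikes += 1
            v -= v_th
    return spikes * v_th / T      # rate-coded approximation of ReLU(current)

x = 0.37                          # hypothetical target ReLU activation
for T in (4, 16, 64, 256):
    approx = simulate_if(x, T=T)
    print(f"T={T:3d}  rate={approx:.4f}  |error|={abs(approx - x):.4f}")

In a real conversion the input currents at intermediate layers are not constant across time-steps; that is the separate "deviation error" the abstract describes, which per the paper depends on the spike threshold and the input variance and motivates the threshold tuning in TTRBR.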

Citations provided by Crossref.org

000      
00000naa a2200000 a 4500
001      
bmc22024653
003      
CZ-PrNML
005      
20221031100755.0
007      
ta
008      
221017s2022 xxu f 000 0|eng||
009      
AR
024    7_
$a 10.1016/j.neunet.2022.06.001 $2 doi
035    __
$a (PubMed)35759953
040    __
$a ABA008 $b cze $d ABA008 $e AACR2
041    0_
$a eng
044    __
$a xxu
100    1_
$a Meng, Qingyan $u The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China. Electronic address: qingyanmeng@link.cuhk.edu.cn
245    10
$a Training much deeper spiking neural networks with a small number of time-steps / $c Q. Meng, S. Yan, M. Xiao, Y. Wang, Z. Lin, ZQ. Luo
520    9_
$a The Spiking Neural Network (SNN) is a promising energy-efficient neural architecture when implemented on neuromorphic hardware. The Artificial Neural Network (ANN) to SNN conversion method, which is the most effective SNN training method, has successfully converted moderately deep ANNs to SNNs with satisfactory performance. However, this method requires a large number of time-steps, which hurts the energy efficiency of SNNs. How to effectively convert a very deep ANN (e.g., more than 100 layers) to an SNN with a small number of time-steps remains a difficult task. To tackle this challenge, this paper makes the first attempt to propose a novel error analysis framework that takes both the "quantization error" and the "deviation error" into account, which come from the discretization of SNN dynamics (the neuron's coding scheme) and the inconstant input currents at intermediate layers, respectively. In particular, our theory reveals that the "deviation error" depends on both the spike threshold and the input variance. Based on our theoretical analysis, we further propose the Threshold Tuning and Residual Block Restructuring (TTRBR) method, which can convert very deep ANNs (>100 layers) to SNNs with negligible accuracy degradation while requiring only a small number of time-steps. With very deep networks, our TTRBR method achieves state-of-the-art (SOTA) performance on the CIFAR-10, CIFAR-100, and ImageNet classification tasks.
650    12
$a Computers $7 D003201
650    12
$a Neural Networks, Computer $7 D016571
655    _2
$a Journal Article $7 D016428
700    1_
$a Yan, Shen $u Center for Data Science, Peking University, China. Electronic address: yanshen@pku.edu.cn
700    1_
$a Xiao, Mingqing $u Key Laboratory of Machine Perception (MOE), School of Artificial Intelligence, Peking University, China. Electronic address: mingqing_xiao@pku.edu.cn
700    1_
$a Wang, Yisen $u Key Laboratory of Machine Perception (MOE), School of Artificial Intelligence, Peking University, China; Institute for Artificial Intelligence, Peking University, China. Electronic address: yisen.wang@pku.edu.cn
700    1_
$a Lin, Zhouchen $u Key Laboratory of Machine Perception (MOE), School of Artificial Intelligence, Peking University, China; Institute for Artificial Intelligence, Peking University, China; Peng Cheng Laboratory, China. Electronic address: zlin@pku.edu.cn
700    1_
$a Luo, Zhi-Quan $u The Chinese University of Hong Kong, Shenzhen, China; Shenzhen Research Institute of Big Data, Shenzhen 518115, China. Electronic address: luozq@cuhk.edu.cn
773    0_
$w MED00011811 $t Neural networks $x 1879-2782 $g Roč. 153, č. - (2022), s. 254-268
856    41
$u https://pubmed.ncbi.nlm.nih.gov/35759953 $y Pubmed
910    __
$a ABA008 $b sig $c sign $y p $z 0
990    __
$a 20221017 $b ABA008
991    __
$a 20221031100753 $b ABA008
999    __
$a ok $b bmc $g 1854406 $s 1175943
BAS    __
$a 3
BAS    __
$a PreBMC
BMC    __
$a 2022 $b 153 $c - $d 254-268 $e 20220615 $i 1879-2782 $m Neural networks $n Neural Netw $x MED00011811
LZP    __
$a Pubmed-20221017
