Energy complexity of recurrent neural networks
Language: English
Country: United States
Media: print-electronic
Document type: Journal Article, Research Support, Non-U.S. Gov't
PubMed: 24555455
DOI: 10.1162/neco_a_00579
- MeSH terms
  - Action Potentials / physiology
  - Algorithms
  - Brain / physiology
  - Neural Networks, Computer *
  - Neurons / physiology
- Publication types
  - Journal Article
  - Research Support, Non-U.S. Gov't
Recently, a new measure called energy complexity has been introduced and studied for feedforward perceptron networks. This measure is inspired by the fact that a biological neuron requires more energy to transmit a spike than not to fire, and that neural activity in the brain is quite sparse, with only about 1% of neurons firing at any time. In this letter, we investigate the energy complexity of recurrent networks, which counts the number of active neurons at any time instant of a computation. We prove that any deterministic finite automaton with m states can be simulated by a neural network of optimal size s = Θ(√m) with a time overhead of τ = O(s/e) per input bit, using energy O(e), for any e such that e = Ω(log s) and e = O(s), which shows a time-energy trade-off in recurrent networks. In addition, for a time overhead τ satisfying τ^τ = o(s), we obtain a lower bound of s^(c/τ) on the energy of such a simulation, for some constant c > 0 and for infinitely many s.