General-purpose computation with neural networks: a survey of complexity theoretic results
Language: English
Country: United States
Media: print
Document type: Journal Article; Research Support, Non-U.S. Gov't; Review
MeSH terms
- Algorithms
- Time Factors
- Classification
- Mathematical Computing
- Models, Neurological
- Models, Statistical
- Neural Networks, Computer*
- Neurons / physiology
We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity-theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. We focus mainly on digital computation, whose inputs and outputs are binary, although these values are often encoded as analog neuron states. We omit the important issues of learning.
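To make the binary-state feedforward model concrete, the following is a minimal sketch of a two-layer threshold (perceptron) network computing the Boolean XOR function. This is our own illustration of the model class the survey classifies, not code from the survey itself; the weights and thresholds are hand-chosen assumptions.

```python
# A binary-state feedforward threshold network (McCulloch-Pitts units)
# computing XOR. Weights/thresholds are illustrative assumptions.

def threshold_unit(inputs, weights, threshold):
    """Output 1 iff the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor_network(x1, x2):
    # Hidden layer: h1 fires for "x1 AND NOT x2", h2 for "NOT x1 AND x2".
    h1 = threshold_unit((x1, x2), (1, -1), 1)
    h2 = threshold_unit((x1, x2), (-1, 1), 1)
    # Output layer: OR of the two hidden units.
    return threshold_unit((h1, h2), (1, 1), 1)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"XOR({x1}, {x2}) = {xor_network(x1, x2)}")
```

XOR is the standard example of a function that no single threshold unit can compute but a depth-two network can, which is one reason the survey's taxonomy treats network architecture and depth as primary classification criteria.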