Training a single sigmoidal neuron is hard

Neural Computation. 2002 Nov;14(11):2709-28.

Language: English. Country: United States. Medium: Print

Document type: Journal Article, Research Support

Persistent link: https://www.medvik.cz/link/pmid12433296

We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by the proof that the simplest architecture, containing only a single neuron that applies a sigmoidal activation function sigma: R --> [alpha, beta] satisfying certain natural axioms (e.g., the standard (logistic) sigmoid or the saturated-linear function) to the weighted sum of n inputs, is hard to train. In particular, the problem of finding weights for such a unit that bring the quadratic training error within (beta - alpha)^2, or its average over a training set within 5(beta - alpha)^2/(12n), of its infimum proves to be NP-hard. Hence, the well-known backpropagation learning algorithm appears not to be efficient even for a single neuron, which has negative consequences in constructive learning.
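To make the training objective concrete, here is a minimal sketch (not the paper's construction) of the quadratic training error for a single neuron with the standard logistic sigmoid, minimized by plain gradient descent. The function names and hyperparameters (`lr`, `epochs`) are illustrative choices; the NP-hardness result concerns finding near-optimal weights, so a local method like this carries no guarantee of approaching the infimum.

```python
import math
import random

def sigmoid(x):
    # standard logistic sigmoid: R -> (0, 1), i.e. alpha = 0, beta = 1
    return 1.0 / (1.0 + math.exp(-x))

def avg_quadratic_error(w, data):
    # average quadratic training error over the training set {(x, y)}
    return sum(
        (sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y) ** 2
        for x, y in data
    ) / len(data)

def train(data, n, lr=0.5, epochs=2000, seed=0):
    # plain gradient descent on the average quadratic error;
    # it can stall in poor local minima, consistent with the
    # NP-hardness of training within the stated error bounds
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(epochs):
        grad = [0.0] * n
        for x, y in data:
            s = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            d = 2.0 * (s - y) * s * (1.0 - s)  # chain rule through sigmoid
            for i in range(n):
                grad[i] += d * x[i] / len(data)
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w
```

On a trivially separable two-point set such as `[([1, 1], 1), ([-1, -1], 0)]`, gradient descent drives the error toward 0; the hardness result says nothing of the sort can be guaranteed efficiently in general.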
