Feedforward Sigmoidal Representation Theorems
◼ Feedforward sigmoidal architectures can in principle represent any Borel-measurable function to any desired accuracy, provided the network contains enough "hidden" neurons between the input and output neuronal fields.
◼ Consequently, the MLP can solve nonlinearly separable classification problems and can approximate arbitrary functions.
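The classic nonlinearly separable problem is XOR, which no single-layer perceptron can solve. The sketch below, a minimal illustration with assumed hyperparameters (8 hidden units, learning rate 0.5, 10000 full-batch gradient steps), trains a one-hidden-layer sigmoidal MLP on XOR with plain NumPy to show the representation claim in action:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: not separable by any single hyperplane
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden field of sigmoid units between input and output
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5                                           # illustrative choice

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    # backpropagate squared error through both sigmoid layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())
```

The hidden sigmoid units carve the input space into regions whose combination the output unit can separate, which is exactly what a network with no hidden field cannot do.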