Neural Networks

Volume 6, Issue 8, 1993, Pages 1069-1072

Some new results on neural network approximation

https://doi.org/10.1016/S0893-6080(09)80018-X

Abstract

We show that standard feedforward networks with as few as a single hidden layer can uniformly approximate continuous functions on compacta provided that the activation function ψ is locally Riemann integrable and nonpolynomial, and have universal Lp(μ) approximation capabilities for finite and compactly supported input environment measures μ provided that ψ is locally bounded and nonpolynomial. In both cases, the input-to-hidden weights and hidden-layer biases can be constrained to arbitrarily small sets; if in addition ψ is locally analytic, a single universal bias will do.


