
Approximation Capability to Compact Sets of Functions and Operators by Feedforward Neural Networks


Indexed by: Conference Paper

Date of Publication:2007-09-14

Included Journals: EI, CPCI-S, Scopus

Page Number:82-86

Abstract: This paper is concerned with the approximation capability of feedforward neural networks to a compact set of functions. We follow a general approach that covers all the existing results and gives some new results in this respect. To elaborate, we prove the following: if a family of feedforward neural networks is dense in H, a complete linear metric space of functions, then given a compact set V ⊂ H and an error bound ε, one can fix the number of hidden neurons and the weights between the input and hidden layers such that, in order to approximate any function f ∈ V with accuracy ε, one only has to further choose suitable weights between the hidden and output layers. We also apply our theorem to the problem of system identification, that is, the approximation of an operator by neural networks.
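
The theorem has a practical reading: over a compact family of target functions, a single hidden layer with fixed size and fixed input-to-hidden weights can serve every member of the family, and only the hidden-to-output weights need to be chosen per target. The sketch below is a minimal numerical illustration of that reading, not the paper's construction: the target family sin(a·x), the tanh hidden units, the random choice of the fixed input-to-hidden weights, and the least-squares fit of the output weights are all assumptions introduced here for demonstration.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): approximate several members
# of a "compact family" f_a(x) = sin(a * x), a in [1, 2], with ONE shared hidden
# layer. The hidden weights and biases are fixed once; only the hidden-to-output
# weights are refit per target, mirroring the structure of the theorem.

rng = np.random.default_rng(0)

n_hidden = 200                                  # fixed number of hidden neurons
W = rng.normal(scale=3.0, size=n_hidden)        # fixed input-to-hidden weights
b = rng.uniform(-np.pi, np.pi, size=n_hidden)   # fixed hidden biases

x = np.linspace(-np.pi, np.pi, 400)
H = np.tanh(np.outer(x, W) + b)                 # hidden outputs, shared by all targets

def fit_output_weights(f_values):
    """Least-squares fit of the hidden-to-output weights only."""
    beta, *_ = np.linalg.lstsq(H, f_values, rcond=None)
    return beta

for a in [1.0, 1.5, 2.0]:                       # a few members of the family
    f = np.sin(a * x)
    beta = fit_output_weights(f)
    err = np.max(np.abs(H @ beta - f))
    print(f"a = {a:.1f}: max approximation error = {err:.2e}")
```

In this sketch the uniform error over the sampled family stays small even though the hidden layer is never retrained, which is the behaviour the theorem guarantees for any compact subset of a space in which the network family is dense.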

Previous: L-p approximation capabilities of sum-of-product and sigma-pi-sigma neural networks

Next: Convergence of Gradient Descent Algorithm for Diagonal Recurrent Neural Networks