Indexed by: Conference Paper
Date of Publication: 2007-08-12
Included Journals: EI, CPCI-S, Scopus
Page Number: 1222-1225
Abstract: This paper is concerned with the capability of feedforward neural networks to approximate a compact set of functions. We follow a general approach that covers all the existing results and yields some new ones. Specifically, we prove the following: if a family of feedforward neural networks is dense in H, a complete linear metric space of functions, then given a compact set V ⊂ H and an error bound ε, one can fix the number of hidden neurons and the weights between the input and hidden layers such that, in order to approximate any function f ∈ V with accuracy ε, one only has to choose suitable weights between the hidden and output layers.
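The constructive flavor of the theorem (a frozen hidden layer shared by all targets, with only the output weights adapted per function) can be illustrated numerically. The sketch below is not the paper's construction; it simply fixes random input-to-hidden weights once and fits only the hidden-to-output weights by least squares for each target in a small family of smooth functions (all names and parameter choices here are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed hidden layer: frozen input-to-hidden weights and biases,
# shared by every target function we approximate below.
n_hidden = 200
W = rng.normal(size=n_hidden)          # input-to-hidden weights (1-D input)
b = rng.uniform(-3, 3, size=n_hidden)  # hidden biases

x = np.linspace(-1, 1, 400)
H = np.tanh(np.outer(x, W) + b)        # hidden activations, shape (400, n_hidden)

# A stand-in for a compact set V: a small family of smooth targets on [-1, 1].
targets = [np.sin(np.pi * x), np.cos(2 * np.pi * x), x ** 3]

for f in targets:
    # Only the hidden-to-output weights c are fit; W and b stay fixed.
    c, *_ = np.linalg.lstsq(H, f, rcond=None)
    err = np.max(np.abs(H @ c - f))
    print(f"sup-norm error: {err:.2e}")
```

Each target is recovered to small uniform error even though the hidden layer was chosen once, independently of the target, mirroring the statement that only the output-layer weights need further tuning.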