Neural Networks

Volume 9, Issue 5, July 1996, Pages 765-772

CONTRIBUTED ARTICLE
A Numerical Implementation of Kolmogorov's Superpositions

Abstract

Hecht-Nielsen proposed a feedforward neural network based on Kolmogorov's superpositions

$$f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q(y_q),$$

which apply to all real-valued continuous functions f(x1, …, xn) defined on a Euclidean unit cube of dimension n ≥ 2. This network has a hidden layer that is independent of f and that transforms the n-tuples (x1, …, xn) into the 2n + 1 variables yq, and an output layer in which f is computed. Kůrková has shown that such a network has an approximate implementation with arbitrary activation functions of sigmoidal type. Actual implementation is, however, impeded by the lack of numerical algorithms for the hidden layer, which contains continuous functions of the form

$$y_q = \sum_{p=1}^{n} \alpha_p \psi(x_p + qa)$$

with constants a and αp. This paper gives an explicit numerical implementation of the hidden layer that also enables the implementation of the output layer. Copyright © 1996 Elsevier Science Ltd
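As a rough illustration of the two-layer structure the abstract describes, the following Python sketch evaluates f(x1, …, xn) = Σq Φq(yq) with yq = Σp αp ψ(xp + qa). The specific functions psi and Phi and the constants a and alpha below are arbitrary stand-ins chosen only to make the example runnable; they are not the paper's actual constructions.

```python
import numpy as np

def kolmogorov_network(x, psi, Phi, a, alpha):
    """Evaluate f(x_1, ..., x_n) = sum_{q=0}^{2n} Phi_q(y_q),
    where y_q = sum_{p=1}^{n} alpha_p * psi(x_p + q*a)."""
    n = len(x)
    total = 0.0
    for q in range(2 * n + 1):
        # Hidden layer: independent of f, maps (x_1, ..., x_n) to y_q.
        y_q = sum(alpha[p] * psi(x[p] + q * a) for p in range(n))
        # Output layer: f is computed from the 2n + 1 values y_q.
        total += Phi(q, y_q)
    return total

if __name__ == "__main__":
    n = 2
    x = np.array([0.3, 0.7])
    psi = np.tanh                     # stand-in for the paper's psi
    Phi = lambda q, y: np.sin(y + q)  # stand-in for the Phi_q
    a = 1.0 / (2 * n * (n + 1))       # hypothetical constant a
    alpha = np.ones(n) / n            # hypothetical constants alpha_p
    print(kolmogorov_network(x, psi, Phi, a, alpha))
```

The point of the sketch is the data flow: the hidden layer depends only on the inputs and the fixed constants, so it can be built once and reused for any target f, while all f-dependence is confined to the choice of the Φq in the output layer.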

Keywords

Superpositions
Kolmogorov
Representation of functions of several variables
Feedforward neural networks
Continuity