Perceptron Architecture Ensuring Pattern Description Compactness
2009
Sergejs Jakovļevs

This paper examines the conditions a neural network must meet in order to form a space of features satisfying the compactness hypothesis. The compactness hypothesis is formulated in more detail as applied to neural networks. It is shown that, even though the first layer of connections is formed randomly, the presence of more than 30 elements in the middle network layer guarantees a 100% probability that the G-matrix of the perceptron will not be singular. This means that, under the additional mathematical conditions derived by Rosenblatt, the perceptron is guaranteed to form a feature space that can subsequently be separated linearly. Indeed, Cover's theorem only states that the probability of linear separation increases when the initial space is non-linearly transformed into a space of higher dimension; it does not, however, indicate when this probability reaches 100%. In Rosenblatt's perceptron, the non-linear transformation is carried out by the first layer, which is generated randomly. The paper provides practical conditions under which the probability is very close to 100%. For comparison, no such analysis has been performed for Rumelhart's multilayer perceptron.
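A minimal numerical sketch of this claim is given below. It is an interpretation rather than the paper's own procedure: it assumes that the G-matrix can be read as the Gram matrix of the A-layer (middle layer) activation vectors over the stimulus set, wires the first layer with randomly chosen excitatory and inhibitory connections in the spirit of Rosenblatt's randomly connected A-units, and estimates how often that matrix is non-singular as the middle layer grows. All function names, parameters, and connection statistics are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def a_layer_responses(stimuli, n_a_units, n_inputs, n_connections=10, theta=0):
    # Random sparse +/-1 connections from the S-units (inputs) to the A-units
    weights = np.zeros((n_a_units, n_inputs))
    for a in range(n_a_units):
        idx = rng.choice(n_inputs, size=n_connections, replace=False)
        weights[a, idx] = rng.choice([-1, 1], size=n_connections)
    # Binary A-unit activations: a unit fires when its weighted input exceeds theta
    return (stimuli @ weights.T > theta).astype(int)

def g_is_nonsingular(n_stimuli=20, n_inputs=100, n_a_units=30):
    # Random binary stimuli; G is taken here as the Gram matrix of the
    # A-layer activation vectors (an assumed reading of the G-matrix)
    stimuli = rng.integers(0, 2, size=(n_stimuli, n_inputs))
    activations = a_layer_responses(stimuli, n_a_units, n_inputs)
    g = activations @ activations.T
    return np.linalg.matrix_rank(g) == n_stimuli

# Monte-Carlo estimate of P(G non-singular) as the middle layer grows
for n_a in (10, 20, 30, 50):
    hits = sum(g_is_nonsingular(n_a_units=n_a) for _ in range(200))
    print(f"A-units = {n_a:3d}: P(G non-singular) ~ {hits / 200:.2f}")

Under this reading, the rank condition cannot hold until the A-layer is at least as large as the stimulus set, and in runs of this sketch it becomes nearly certain once the layer exceeds a few tens of units, which mirrors the threshold of roughly 30 elements quoted in the abstract.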


Keywords
perceptron, pattern recognition

Jakovļevs, S. Perceptron Architecture Ensuring Pattern Description Compactness. IT and Management Science. Vol. 40, 2009, pp. 87-93. ISSN 1407-7493.

Publication language: English (en)