We study the state space of a popular network of asynchronous multi-connected linear threshold elements. The properties of the state space are analyzed during a learning process: the network learns a set of patterns that appear in its environment in a random sequence. The patterns influence the network's weights and thresholds through an adaptive algorithm based on the Hebbian hypothesis. The algorithm tries to install the patterns as fixed points in the network's state space and to guarantee that a large region of attraction surrounds each fixed point. We obtain the stabilization probability of each pattern in the learned set, as well as the stabilization rate, as a function of the training time. In addition, we obtain a lower bound on the probability of convergence to any stored pattern from an initial state at a given Hamming distance from it. A special case of our training algorithm is the widely used nonadaptive sum-of-outer-products parameter assignment. Properties of networks with this assignment can therefore be evaluated and compared with those obtained under adaptive training, which is better suited to the pattern environment. Our derivation allows the evaluation of the quality of information storage for a given set of patterns, and the comparison of different information-coding schemes for items that must be stored in and retrieved from the network. We also evaluate the steady-state values of the network's parameters after long training with a stationary set of patterns. Finally, we study the differences between networks trained with “hard” and “soft” limiter learning curves.
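To make the setting concrete, the following sketch illustrates the nonadaptive sum-of-outer-products assignment mentioned in the abstract, together with asynchronous retrieval from an initial state at a given Hamming distance from a stored pattern. All sizes (64 units, 4 patterns, initial distance 5) and names are illustrative assumptions, not parameters from the paper; thresholds are taken to be zero here for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): N bipolar units, M stored patterns.
N, M = 64, 4
patterns = rng.choice([-1, 1], size=(M, N))

# Sum-of-outer-products weight assignment; zero diagonal (no self-connections).
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def retrieve(state, sweeps=20):
    """Asynchronous dynamics: units update one at a time in random order."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Start at Hamming distance 5 from a stored pattern and run the dynamics.
probe = patterns[0].copy()
flip = rng.choice(N, size=5, replace=False)
probe[flip] *= -1
out = retrieve(probe)
print(int(np.sum(out != patterns[0])))  # remaining Hamming distance to pattern 0
```

With asynchronous updates and a symmetric zero-diagonal weight matrix, each flip can only lower the network energy, so the dynamics settle in a fixed point; whether that fixed point is the stored pattern depends on the load M/N and the initial distance, which is exactly the kind of convergence probability the paper bounds.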