Published In

Proceedings of World Congress on Neural Networks '93

Document Type

Conference Proceeding

Publication Date

1993

Subjects

Neural networks (Computer science), Neural networks -- Structure, System theory


Abstract

To achieve reduced training time and improved generalization with artificial neural networks (ANN, or NN), it is important to use a reduced-complexity NN structure. A "problem" is defined by the constraints among the variables that describe it. If knowledge of these constraints can be obtained a priori, it can be used to reduce the complexity of the ANN before training. The systems theory literature contains methods for determining and representing structural aspects of constrained data (herein called GSM, the general systems method). The suggestion here is to use the GSM model of the given data as a pattern for modularizing a NN prior to training it. The present work assumes that the GSM model for the given problem context has already been determined (represented here in the form of Boolean functions with known decompositions). Certain information about the constraints among the system variables is therefore available and is used to develop a modularized NN. The modularized NN and an equivalent general NN (a fully interconnected, feed-forward NN) are both trained on the same data. Three predictions are offered: 1) both the general NN and the modularized NN will learn the task, but the modularized NN will learn it faster; 2) if trained on an (appropriate) subset of the possible inputs, the modularized NN will generalize better than the general NN; 3) if trained on a non-decomposable function of the same variables, the general NN will learn the task, but the modularized NN will not. All three predictions are verified experimentally. Future work will explore more decomposition types and more general data types.
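The idea in the abstract can be illustrated with a small sketch (a hypothetical example, not taken from the paper): a four-variable Boolean function with a known decomposition of the kind the GSM model would supply, together with a rough weight count comparing a modularized network built on that decomposition against a fully interconnected feed-forward network. The particular functions `h1`, `h2`, `g` and the layer widths are illustrative assumptions.

```python
# Hypothetical sketch: a decomposable Boolean function and the parameter
# counts of a modularized vs. fully interconnected feed-forward NN.
from itertools import product

# A decomposable function of four variables:
#   f(a, b, c, d) = g(h1(a, b), h2(c, d))
# with h1 = AND, h2 = OR, g = XOR (all choices are illustrative).
def h1(a, b): return a & b
def h2(c, d): return c | d
def g(x, y):  return x ^ y

def f_modular(a, b, c, d):
    # Evaluate through the decomposition, as a modularized NN would.
    return g(h1(a, b), h2(c, d))

def f_flat(a, b, c, d):
    # The same function written without exploiting the decomposition.
    return (a & b) ^ (c | d)

def full_weights(n_in, hidden):
    # Weights plus biases for an n_in -> hidden -> 1 feed-forward net.
    return (n_in + 1) * hidden + (hidden + 1)

def modular_weights(h):
    # Two 2-input modules plus a 2-input combiner, each of width h.
    return 2 * full_weights(2, h) + full_weights(2, h)

if __name__ == "__main__":
    # The decomposition computes the same truth table as the flat form.
    assert all(f_modular(*bits) == f_flat(*bits)
               for bits in product((0, 1), repeat=4))
    # A full 4-input net of width 6 vs. three small modules of width 2.
    print(full_weights(4, 6), modular_weights(2))  # prints "37 27"
```

The point of the count is only that fixing the module boundaries before training removes cross-module connections, which is the complexity reduction the abstract argues for.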


Paper presented at the World Congress on Neural Networks '93 (WCNN-93), Portland, OR, July 1993.

Persistent Identifier