An Approach to Learning in Autocatalytic Sets in Analogy to Neural Networks

Harald Hüning
Neural Systems, Electrical Engineering,
Imperial College, London SW7 2BT, UK
This work is financed by the European Commission (TMR).


E-mail: h.huening@ic.ac.uk
Phone: +44 171 594 6221
Fax: +44 171 823 8125

Extended Abstract

A mathematical model of learning in autocatalytic sets is developed with the aim of achieving some of the adaptability of evolving systems in a learning system as well. For this study a catalytic reaction network model is considered as an alternative to neural networks. The couplings in the model correspond to the catalysis of reactions among molecular species, where a catalyst enhances a reaction without being consumed itself. Catalytic networks have been studied in the area of molecular evolution to explain the cooperation of several types of molecules that may have been the precursors of the first cells (Eigen & Schuster, 1977; May, 1992). Evolution experiments in a bioreactor have shown that the principle of an autocatalytic cycle can find a new structure of molecular species and their interactions (Eigen, 1993). This motivates theoretical studies of the formation of autocatalytic sets, which are characterised by several population-dynamic variables that maintain a high level through cooperation in a competitive environment (Bagley & Farmer, 1992). The dynamic model consists of growth terms corresponding to catalysed reactions, and of competition corresponding to the dilution in a biochemical flow reactor. Usually this competition is modelled by keeping the sum of all state variables constant (Eigen & Schuster, 1978; Hofbauer & Sigmund, 1988). This is comparable to a winner-take-all mechanism, except that an autocatalytic set has several winners. The growth terms can formally be regarded as sigma-pi units as in neural networks (Williams, 1986). Only a few analytical results are known about catalytic networks in general (Stadler, Fontana & Miller, 1993).
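
For concreteness, the constant-sum competition can be written in a standard catalytic network form (cf. Stadler, Fontana & Miller, 1993); this generic formulation is an illustration, not necessarily the exact system studied here:

    \dot{x}_i = x_i \sum_j k_{ij} x_j - x_i \, \Phi(x), \qquad
    \Phi(x) = \sum_l x_l \sum_j k_{lj} x_j ,

where x_i is the population variable of species i, k_{ij} >= 0 is the rate at which species j catalyses the replication of species i, and the dilution flux \Phi(x) keeps \sum_i x_i = 1 constant.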

Farmer (1990) has considered autocatalytic sets in a more general framework of connectionist systems that includes neural networks, immune systems and classifier systems. Farmer proposed identifying the connection strengths in autocatalytic sets with the population variables of the catalysts, following a partial-derivative definition that yields the weights in the case of the weighted sum-and-threshold neuron model. However, the models of autocatalytic sets use products of state variables, so in Farmer's view there is no distinction between state variables and connection parameters. Alternatively, connection parameters can be introduced into autocatalytic sets by using a weighting exponent for each factor in the products (Hüning, 1998).
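
As a sketch of the weighting-exponent idea (the exact formulation appears in Hüning, 1998), a catalysed growth term for module i may be written with exponents w_{ij} >= 0 as

    g_i(x) = \prod_j x_j^{w_{ij}} ,

where a zero exponent contributes a factor of one and thus acts as an absent connection, so the exponents play the role of continuous connection parameters while the x_j remain state variables.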

The weighting exponents facilitate a transition between different systems without adding and removing equations, in contrast to a study of the evolution of a metabolism (Bagley, Farmer & Fontana, 1992). That evolution model adds equations and coupling terms to a system of differential equations; as the dynamics unfold, other species can become extinct, and their equations may then be abandoned.

This paper presents first results on the application of a learning rule for a continuous adjustment of the weighting exponents, similar to competitive learning in neural networks (Hertz, Krogh & Palmer, 1991). In contrast to an earlier learning rule for sigma-pi units (Rumelhart, Hinton & Williams, 1986), it is an unsupervised type of learning, and the competitive dynamics are used in place of the semilinear activation function. Each equation of the dynamical system is considered a module whose connections are given by its non-zero weighting exponents for all input and feedback states. Minor modifications are applied to the catalytic network model. The growth terms are changed to a max-pi function instead of the sum, and a minimum value of 0.001 is applied to all states. Mixed orders of the product terms are admitted in the system, so that the autocatalytic sets depend on the overall level of the states. A defined region of the dynamics is chosen by keeping the level of the states below 1, which gives dominance to low-order terms.
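
As an illustration, the following Python sketch implements these modifications under stated assumptions: every state is treated as a module (external input states are omitted), the dilution flux holds the total level at a value below 1, and the array shapes and parameter values are chosen for illustration rather than taken from the paper.

    import numpy as np

    def max_pi_growth(x, W):
        # Max-pi growth terms: each module takes the maximum over its
        # product terms instead of their sum. W[i, k, j] is the weighting
        # exponent of state j in the k-th product term of module i; a zero
        # exponent contributes a factor of 1 (an absent connection).
        terms = np.prod(x[None, None, :] ** W, axis=2)  # shape: (modules, terms)
        return terms.max(axis=1)

    def step(x, W, dt=0.1, floor=1e-3, total=0.5):
        g = max_pi_growth(x, W)
        phi = g.sum() / total         # dilution flux holding sum(x) at total < 1
        x = x + dt * (g - phi * x)    # catalysed growth minus flow-reactor dilution
        return np.maximum(x, floor)   # minimum value applied to all states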

Input is supplied as external states, and through learning the stable states of some variables can be made to respond to logic combinations of the input states. The learning rule first decides which modules are allowed to learn (i.e. the autocatalytic set) by applying a `learning' threshold to the stable states, similar to the choice of the maximum response in competitive learning. Second, the parameters are adjusted towards the temporal mean of the input using the standard competitive rule (Hertz, Krogh & Palmer, 1991) under the above condition. With a decaying learning rate, the resulting exponents yield the relative frequency of high input states, conditional on a module being active, multiplied by the maximum level of each state.
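
A minimal sketch of this two-step rule, continuing the assumptions of the code above (the threshold and learning-rate values are illustrative):

    def competitive_update(W, x, inp, theta=0.1, eta=0.05):
        # Step 1: modules whose stable state exceeds the learning threshold
        # are allowed to learn, i.e. the currently active autocatalytic set.
        active = np.where(x > theta)[0]
        # Step 2: the standard competitive rule dW = eta * (input - W) moves
        # the exponents of the active modules towards the input pattern and,
        # over many presentations, towards the temporal mean of the input.
        W[active] += eta * (inp - W[active])   # inp broadcasts over product terms
        return W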

A data example demonstrates how different patterns can be trained to elicit a response in the same module. Each letter of a word is given in two different representations in a simple 1-of-M code. The task is to learn stable weighting exponents for the logical OR of the two symbols for each letter, as with a lower- and an upper-case representation. Thus two representations are given for one word: the same pattern is supplied on input channel 1, while two different binary patterns are supplied on input channel 2. Although the learning rule is unsupervised, input 1 acts as a teaching input that associates both patterns with the same word module. To show the stability of the learning rule, the initial architecture is designed manually with a minimal number of connections, but through learning more exponents become non-zero. Learning from scratch has been studied separately, without continuous adjustment of the parameters (Hüning, 1998).
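
A hypothetical encoding along these lines (the actual data set is only described, not listed, in the abstract; this continues the numpy import above) could look as follows:

    letters = "abc"                      # small alphabet for illustration
    m = 2 * len(letters)                 # separate code positions for each case

    def one_of_m(index):
        v = np.zeros(m)
        v[index] = 1.0
        return v

    lower = {c: one_of_m(i) for i, c in enumerate(letters)}
    upper = {c: one_of_m(i + len(letters)) for i, c in enumerate(letters)}

    # Channel 1 always carries the same word pattern; channel 2 alternates
    # between the lower- and upper-case encodings of the same word, so the
    # word module learns the OR of both representations.
    word = "ab"
    channel2 = [np.concatenate([lower[c] for c in word]),
                np.concatenate([upper[c] for c in word])]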

In simulations a dynamic equilibrium between the increasing exponents and the decreasing states is found, and the module corresponding to the word is kept from dropping below the learning threshold by an appropriate choice of some fixed parameters. Although this learning method minimises an error or distance function between the exponents and the states (see Hertz, Krogh & Palmer, 1991), it does not maximise the response of the module. The product terms are not well related to a distance measure, because here the maximum score is always achieved for all-zero exponents and not for equality of the exponents and a state vector. Nevertheless, the rule supports a group of modules being active cooperatively.
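
The reason is elementary: with all states kept below 1, each factor satisfies 0 < x_j < 1, so for exponents w_j >= 0

    \prod_j x_j^{w_j} \le 1 ,

with equality exactly when every w_j = 0; the product term is therefore maximised by the all-zero exponent vector rather than by exponents that match the state vector.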

On the one hand, this method may serve as a system-level theory for learning in a hierarchical and recurrent architecture comparable to neural networks; on the other hand, it can be used to build chemical computers, because the non-zero exponents can be interpreted as a set of chemical rules for the recognition of the input states. In both cases, the OR functions in the architecture give rise to generalisation in the recognition of new data.

References
