Scientific Understanding of Consciousness
Dynamic Link Architecture — a theory by Christoph von der Malsburg
Two central hypotheses of dynamic link architecture: (1) binding by signal correlations, and (2) short-term synaptic modification. (von der Malsburg; Binding Problem, 141)
Rapid synaptic modification: synapses are characterized by two quantities, T and J, where T is the conventional permanent synaptic weight and J is the momentarily effective strength of the synapse. The parameter J can vary on a rapid time scale between zero and a maximum determined by T. (von der Malsburg; Binding Problem, 140)
Forceful activity events can modify synapses in a small fraction of a second, perhaps a few milliseconds. When there is no further activity in the two cells connected by a synapse, J slowly returns to its resting value, with the time constant of short-term memory. (von der Malsburg; Binding Problem, 140)
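The two-variable synapse described above can be sketched as a toy simulation. This is an illustration, not von der Malsburg's own formulation: the class name, the relaxation rule, and the time constants (milliseconds for rapid modification, tens of seconds for the return to rest) are assumptions chosen only to exhibit the two time scales.

```python
# Toy sketch (assumed dynamics, not the source's equations): a synapse
# with permanent weight T and momentary effective strength J.
class Synapse:
    def __init__(self, T, J_rest, tau_fast=0.005, tau_slow=20.0):
        self.T = T                # permanent weight, upper bound for J
        self.J = J_rest           # momentarily effective strength
        self.J_rest = J_rest      # resting value J relaxes back toward
        self.tau_fast = tau_fast  # ~ milliseconds: forceful activity events
        self.tau_slow = tau_slow  # ~ tens of seconds: short-term memory

    def step(self, correlation, dt):
        """Advance the synapse by dt seconds.
        correlation: signal correlation of the two connected cells."""
        if correlation != 0.0:
            # forceful activity modifies J in a small fraction of a second
            target = self.T if correlation > 0 else 0.0
            self.J += (target - self.J) * dt / self.tau_fast
        else:
            # no further activity: J slowly returns to its resting value
            self.J += (self.J_rest - self.J) * dt / self.tau_slow
        self.J = min(max(self.J, 0.0), self.T)  # keep 0 <= J <= T

syn = Synapse(T=1.0, J_rest=0.5)
for _ in range(200):            # 200 ms of strongly correlated activity
    syn.step(1.0, 0.001)
print(syn.J)                    # close to the maximum T = 1.0
```

A few hundred milliseconds of correlated activity drive J toward T; with no activity, the same object relaxes back to J_rest on the slow time scale.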
Signal correlations come to express the patterns of connections in the circuit, i.e., the underlying causal structure of the nervous system. (von der Malsburg; Binding Problem, 141)
(paraphrase of, Christoph von der Malsburg, DYNAMIC LINK ARCHITECTURE,
Handbook of Brain Theory and Neural Networks, MIT Press, 2002)
According to the DLA, the brain's data structure has the form of graphs, composed of nodes (called "units") connected by links. The graphs of the DLA are dynamic: both units and links comprise activity variables changing on the rapid functional time scale of fractions of a second. Graphs form a very versatile data format which probably is able to render the structure of any mental object. A particularly important feature is the ability of graphs to compose more complex data structures from simpler ones.
Units are endowed with structured signals changing in time. These signals can be evaluated under two aspects, intensity and correlation. Intensity measures the degree to which a unit is active in a given time interval. Correlations quantify the degree to which the signal of one unit is related to that of others. The general idea is that identical signal patterns are strongly correlated, whereas statistically independent signal patterns have zero correlation.
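The two aspects of a unit's signal can be made concrete with standard formulas. These are assumed definitions for illustration (mean activity for intensity, Pearson correlation for the correlation aspect); the source does not fix the exact measures here.

```python
# Sketch: evaluating a unit's signal under the two aspects described above.
def intensity(signal):
    """Mean activity of one unit over the sampled interval."""
    return sum(signal) / len(signal)

def correlation(a, b):
    """Pearson correlation of two signals: identical signals give 1.0,
    statistically independent signals give approximately 0."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    if va == 0 or vb == 0:
        return 0.0
    return cov / (va * vb) ** 0.5

s = [0, 1, 1, 0, 1, 0, 1, 1]
print(intensity(s))        # 0.625
print(correlation(s, s))   # 1.0 (identical signal patterns)
```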
The strength of links can change on two time scales, represented by two variables: temporary weight and permanent weight. The permanent weight corresponds to the usual synaptic weight, which can change on the slow time scale of learning, and represents permanent memory. The temporary weight is constrained to the range between zero and the permanent weight and can change on the same time scale as the unit activity (hence the name dynamic links).
Dynamic links constitute the glue by which higher data structures are built up from more elementary ones. Conversely, the absence of links (temporary or permanent) keeps mental objects separate from each other and prevents their direct interaction. In the simplest case, a link binds a descriptor to an object. For example, a link may bind a unit representing a color to another unit that stands for a specific object. More generally, mental objects are formed by binding together units representing constituent parts. The infinite richness and flexibility of the mind is thus made possible as a combinatorial operation (although the combinatorial expanse of 10^15 synapses is dramatically reduced by representational constraints).

The mental activity representing familiar objects may be formed by the dynamical binding of appropriately structured arrays of component units. Units can quickly realign their synaptic activity to form a part of different functional contexts. They are integrated into a specific context by the activation of appropriate links. Dynamic links are the means by which the brain specializes its neural network to the momentary needs of an organism.
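The graph data format and the binding of a descriptor to an object can be illustrated with a toy data structure. The class, the dictionary layout, and the example labels ("red", "ball") are hypothetical; only the idea of a temporary weight confined between zero and the permanent weight comes from the text above.

```python
# Toy data structure: a graph of units and dynamic links.  Binding a
# descriptor unit to an object unit means raising the link's temporary
# weight to the maximum set by its permanent weight.
class Graph:
    def __init__(self):
        self.units = set()
        self.links = {}  # (src, dst) -> {"permanent": w, "temporary": j}

    def add_link(self, src, dst, permanent):
        self.units.update((src, dst))
        self.links[(src, dst)] = {"permanent": permanent, "temporary": 0.0}

    def bind(self, src, dst):
        """Activate a link: temporary weight jumps to its maximum."""
        link = self.links[(src, dst)]
        link["temporary"] = link["permanent"]

    def unbind(self, src, dst):
        """Deactivate a link, keeping the mental objects separate."""
        self.links[(src, dst)]["temporary"] = 0.0

g = Graph()
g.add_link("red", "ball", permanent=1.0)
g.bind("red", "ball")   # the color descriptor is now bound to the object
```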
Under the influence of signal exchange, graphs and their units and links are subject to dynamic change, constituting an operation of network self-organization. The dynamic links have a resting strength near the value of the permanent weight. When the units connected by a permanent link become active, there is rapid feed-back between the units' signal correlations and the link's strength: a strong link tends to increase signal correlation, and a strong correlation drives the link to grow in strength toward the maximum set by the permanent weight. This feed-back can also lead to a downward spiral: weak correlation reduces a link's strength, and a weak link loses its grip on signals which, under the influence of other links, drift apart toward lower correlation. Thus, links between active units tend to be driven toward one of their extreme values, zero or the maximum set by the permanent weight.
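The bistability produced by this feed-back loop can be demonstrated with a minimal toy iteration. The update rule and the gain parameter are our own assumptions, not the source's equations; the point is only that mutual reinforcement between correlation and link strength drives the link to one of its two extremes.

```python
# Toy feed-back loop: link strength j and signal correlation c reinforce
# each other, so j is driven either to 0 or to the maximum T depending on
# where the pair starts relative to the unstable midpoint.
def run_feedback(j, c, T=1.0, gain=0.3, steps=200):
    for _ in range(steps):
        # a strong link increases correlation, a weak one lets it drift down
        c = min(1.0, max(0.0, c + gain * (j / T - 0.5)))
        # a strong correlation grows the link toward T, a weak one shrinks it
        j = min(T, max(0.0, j + gain * (c - 0.5)))
    return j

print(run_feedback(j=0.6, c=0.6))  # upward spiral: 1.0 (maximum)
print(run_feedback(j=0.4, c=0.4))  # downward spiral: 0.0
```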
Links are subject to divergent and convergent competition: links converging on one unit compete with each other for strength, as do links diverging from one unit. This competition drives graphs to sparsity. Links are also subject to co-operation. Several links carrying correlated signal structure cooperate in imposing that signal structure on a common target unit, helping them all to grow. As the ultimate source of all signal structure is random, correlations can be generated only on the basis of a common origin of pathways. Thus, cooperation runs between pathways that start at one point and converge to another point. The common origin of converging pathways may, of course, be an event or a pattern in the environment.
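One simple way to model convergent competition is a normalization rule over the links arriving at a unit. The budget-and-exponent scheme below is an assumed illustration, not the source's mechanism; it shows how competition among converging links makes strong links suppress weak ones, driving the graph toward sparsity.

```python
# Toy competition rule: links converging on one unit share a fixed
# strength budget.  Raising weights to a power > 1 before normalizing
# sharpens the competition, so the strongest links take most of the
# budget and the weak ones are driven toward zero (sparsity).
def compete(incoming, budget=1.0, sharpness=3.0):
    powered = [w ** sharpness for w in incoming]
    total = sum(powered)
    if total == 0:
        return [0.0] * len(incoming)
    return [budget * p / total for p in powered]

print(compete([0.9, 0.5, 0.1]))  # the 0.9 link dominates, 0.1 nearly dies
```

The same rule applied to the links diverging from a unit would model the divergent half of the competition.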
Co-operation and competition conspire to favor certain graph structures. These are distinguished by being sparse (that is, activating relatively few of the permanent links in or out of units) and by having a large number of co-operative meshes — arrangements of alternate pathways from one source to one target unit. Beyond these statements, a general characterization of graph attractor states is an open issue. However, there are certain known graph structures that have been shown in simulations to be attractor states and that prove to be very useful (see the applications section of the original article). All of these graph structures may be characterized as "topological graphs": If their units are mapped appropriately into a low-dimensional "display space" (one- or two-dimensional in the known examples), the links of those graphs all run between units that are neighbors in the display space.
In classical neural architectures, learning is modeled by synaptic plasticity: the change of permanent synaptic weights under the control of neural signals. This general idea is also part of the dynamic link architecture. However, the DLA imposes a further refinement in that a permanent weight grows only when the corresponding dynamic link has converged to its maximum strength, which happens only in the context of an organized graph structure. For a permanent link to grow it is thus not sufficient for the two connected units to have high intensity in the same brain state, but their signals must be correlated and their link must be active. This puts the extra condition on the growth of permanent connection weights that they be validated by indirect evidence, in the form of active indirect pathways between the units connected, and in the form of relative freedom from competition, the two conditions characterizing a well-structured dynamic graph. Thus, only the very few connections that are significant in this sense can grow.
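The gating condition on permanent plasticity can be sketched as a single update function. The function name, rate, and tolerance are hypothetical; what the sketch preserves from the text is the rule itself: high intensity alone is not sufficient, the dynamic link must have converged to its maximum and the signals must be correlated before the permanent weight grows.

```python
# Toy learning rule: one slow update step for a single link.
def update_permanent(permanent, temporary, correlation,
                     rate=0.01, tol=1e-6):
    """Grow the permanent weight only when the dynamic (temporary)
    link has converged to its maximum, i.e. the binding has been
    validated by an organized graph state, and the signals correlate."""
    link_at_max = abs(temporary - permanent) < tol
    if link_at_max and correlation > 0:
        permanent += rate * correlation  # validated: consolidate
    return permanent

# With the temporary weight below its maximum, even strong correlation
# produces no growth; only a fully active link is consolidated.
print(update_permanent(1.0, 0.4, correlation=0.9))  # 1.0 (no growth)
print(update_permanent(1.0, 1.0, correlation=0.9))  # 1.009
```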
Neural Implementation of Dynamic Links
How can the units, links and dynamical rules of the DLA be identified with known neural structures? On the most fundamental level, units are to be identified with neurons, links with axons and synapses, signals with neural spike trains, and permanent weights with conventional synaptic strengths. Signal intensity is evaluated as firing rate, averaged over an interval A, whereas the stochastic signal fine structure within that interval is evaluated in terms of correlations with a resolution time T (distinct from the permanent-weight T of the earlier sections), two spikes arriving within T of each other being counted as simultaneous. The smallest reasonable choice for A is probably 100 msec or a little less; the smallest choice for T may be 3 msec. Neural signals in the cerebral cortex have a very rich stochastic structure on all time scales, much of which is not correlated strongly with external stimuli in neurophysiological experiments (and is usually suppressed by averaging in a post-stimulus time histogram).
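The two evaluations described above translate naturally into spike-train arithmetic. The counting scheme below is our own illustration of the description, with A = 100 ms for the rate interval and tau = 3 ms for the coincidence resolution; the spike times are made-up data.

```python
# Sketch: intensity as firing rate over an interval of length A, and
# correlation structure as coincidence counting with resolution tau,
# following the identifications described above (assumed scheme).
def firing_rate(spike_times, interval):
    """Spikes per second over an interval of `interval` seconds."""
    return len(spike_times) / interval

def coincidences(spikes_a, spikes_b, tau=0.003):
    """Count spike pairs arriving within tau seconds of each other;
    such pairs are treated as simultaneous."""
    return sum(1 for ta in spikes_a
                 for tb in spikes_b
                 if abs(ta - tb) <= tau)

a = [0.010, 0.042, 0.055, 0.090]  # made-up spike times (seconds)
b = [0.011, 0.044, 0.070, 0.089]
print(firing_rate(a, 0.100))      # 40 spikes/s over A = 100 msec
print(coincidences(a, b))         # 3 pairs within tau = 3 msec
```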
Dynamic links are realized on the single neuron level as rapid reversible synaptic plasticity (RRP). Starting from a resting value, the temporary weight of a synapse is increased by correlations between the pre- and postsynaptic signals, and is decreased if both signals are active in a given time period but are not correlated. The resting weight of a synapse is probably not too far from the maximum set by the permanent weight (so that RRP will manifest itself mainly in the form of rapid weight reduction). The interaction between temporary synaptic strength and signals is such as to constitute a positive feed-back loop. Changes in temporary synaptic weights must take place on a fast time scale to be of functional significance, possibly as quickly as within 10 msec. In the prolonged absence of presynaptic or postsynaptic activity, the temporary weight rises or falls back toward its resting value, with a time scale that corresponds to short-term memory (perhaps a few dozen seconds), or it is reset by an active mechanism (for example, in the visual cortex during saccades).
(end of paraphrase)
Return to — Neural Network