Please note that you must be a
member by then, or you will no longer be able to access the material. Members
who submit an essay for the assessment at the end of the course will be given a
grade that will correspond to where they would have finished in the
corresponding Stanford class. While not a formal Stanford qualification, it
will be a realistic appraisal of their level of knowledge, and I will be happy
to write a reference based on it.

The first lecture was self-explanatory.

We now begin the first technical lecture.

Password as before.

We will stay with this material until everyone is comfortable with it.

This is why we're doing this work:

**Current theme: Single neurons - classical and quantum**

Ours (2004) was the first work to show how single neurons could realistically process sensory data expressed simply as spectra. This work has since been corroborated by, for example, Branco et al. (2010). Essentially, we argued that subthreshold oscillations of the neuron allowed groups of neurons to “own” part of the spectrum. That can be conceived of using only classical physics.

Since our original work, quantum coherence at physiological temperatures has been demonstrated in biological systems, in photosynthesis at the 3 nm scale characteristic of gap junctions in neurons (Hoyer et al., 2011). This finding converges with a controversy about quantum effects in neurons related to consciousness. While, in related work, we question the assumption in the latter that “phase coherence” has in fact been demonstrated in the brain, there is a long-attested corpus of observations suggestive of entropically minimal states occurring there several times a second.

We therefore speculate that gap junctions might allow a quantum superposition of states of the membrane potential of each neuron to be communicated to thousands of others. This would lead to entanglement at a scale that would allow the Fourier decomposition we envisage for the classical case to be extended to a quantum description. This is the only currently physiologically plausible story about quantum effects in the brain.

In fact, we have data indicating that many of the statistical inferences in classical EEG/ECoG evince premature closure, and that this approach is certainly not ready – pace the Orch OR proponents – for the non-classical world.

The existence of phase coherence in gamma waves in
the brain, and the relation of this phenomenon to consciousness, is a point of
much consensus, with only the recent work of Pockett and her colleagues
contradicting it. It has been further argued that the entropically minimal
state resulting from this phase coherence might yield an environment conducive
to quantum coherence.

While we believe that the work of Walter Freeman is indeed indicative of entropically minimal states in the brain occurring several times a second, we believe that the EEG/ECoG signal is too coarse to indicate synchrony. Indeed, we have findings from PCA, among other methods, indicating that a 64-electrode grid produces at most two signals. As for phase coherence, the stated electronic specifications of the equipment used expressly prohibit any such inference, as the error range of the equipment is too large. So this study of single neurons over the next few classes is REALLY important.

O Nuallain, S. (CSLI, Stanford) and Doris, T. (2004) http://bcats.stanford.edu/previous_bcats/bcats04/html/nuallain.html

Hoyer et al. (2011) http://arxiv.org/pdf/1106.2911.pdf

See also Branco et al. (2010) http://www.sciencemag.org/content/329/5999/1671

The critical paper on neural resonance is appended below

**WHAT IS NEURAL RESONANCE FOR?**

SEAN O NUALLAIN, University of Ireland USA, AND TOM DORIS

ABSTRACT.
Vast amounts of research, both theoretical and experimental, are being carried
out about neural resonance, subthreshold oscillations, and stochastic resonance
in the nervous system. In this paper, we first offer a radically different
computational model of neural functioning, in which integrate and fire
behaviour is seen as a special case of the more encompassing resonate and fire
processes. After commenting on the explanatory vista opened up by this model,
we speculate on its utility for signal processing.

KEYWORDS: Subthreshold oscillations; neural
resonance; signal processing; resonate and fire.

**1. Introduction**

While neural
resonance can exist without subthreshold oscillations, a vast literature
connects the two. For Wu et al (2001), the oscillations emerge from membrane
resonance. The resonant current is steady-state potassium current, amplified by
a sodium current. Izhikevich (2002) most explicitly drew consequences from the
fact that the Hodgkin-Huxley model is a resonator. His point that a neuron's
firing may depend on the timing of its afferent impulses is one that we believe
to be well-taken. We have been careful to ensure that our model caters to all
the possible scenarios (in-phase doublets, and so on) that he envisages. Like
Wu et al (op. cit.) he interrelates subthreshold oscillations and bursts,
coming to the conclusion that the intervals in bursts may be significant for
communication. This is one line of reasoning that emerges, transformed and
extended, in our work.

System level
phenomena are also increasingly beginning to attract attention. Wu et al
(ibid.) comment that a single excitatory stimulus to a mesencephalic V neuron
can result in high-frequency spiking in a whole network under certain
circumstances. Even more interestingly, the phenomenon of stochastic resonance
(SR) has come into focus in neuroscience. SR is essentially a non-linear
systems phenomenon through which, apparently paradoxically, a noisy environment
can be exploited to amplify a weak signal. Reinker et al (2004) integrate the
two resonance phenomena by asserting that subthreshold neural resonance
manifests itself when thalamocortical neurons are stimulated with sine waves of
varying frequency, and stochastic resonance emerges when noise is added to
these stimuli.

The
possibility that these phenomena have computational utility has not been lost
on these and other researchers. However, we believe that ours is the first work
credibly to interrelate the signal-processing task faced millisecond to
millisecond by the brain with the phenomena in question. In their review
article, Hutcheon et al (2000) comment that resonance and oscillation may have
a role in such phenomena as gamma waves. Rudolph et al (2001) venture a more
specific conjecture: responsiveness of neo-cortical pyramidal neurons to
subthreshold stimuli can indeed be enhanced by SR, and under certain conditions
the statistics of this background activity, as distinct from its intensity,
could become salient. Obviously, such forms could have computational
consequences.

For Freeman
et al (2003), the conversion of sensory data into meaning is mediated by those
gamma wave processes. The distinction between our approach and Freeman's, of which we are admirers, is that we are looking for the resonant frequencies at the microscopic level in single neurons, using novel solutions to the 4th order Hodgkin-Huxley equation, whereas Freeman finds them at the mesoscopic level in the characteristic frequencies of populations. Nevertheless, the thrust of the two approaches, and the critique of the integrate-and-fire model, is similar.

Yet the integrate-and-fire (INF) neuron has emerged largely intact, even if supplemented with resonant abilities (Reinker et al, op cit.). In this paper, our first goal is to call the integrity of the INF paradigm into question in a novel way. In particular, we wish to show that INF behaviour can be viewed as a specific phase in the cycle of a different neural model, the resonate-and-fire (RNF) model. Our model caters to all the bursting situations - doublets, triplets, etc. - identified by Izhikevich (2002). However, our
background as computer scientists impels us on another previously unexplored
path at this stage. What actually are the sensory data that the brain is
operating on? Intriguingly, a decomposition of such stimuli into their
constituent power spectra affords a vista in which each resonating neuron may
accomplish a part of a Fourier transform. These digital analog signal
processing (DASP) concerns form the next part of the paper. We recognise that,
since the frequencies involved are changing, a more complex function
approximation method like the Hilbert transform may be closer to
neuroscientific reality; however, the ethos whereby individual neurons or
groups thereof have the roles proposed remains the same.

Yet the way
ahead may be more fascinating still. While quantum computing, as distinct from
quantum cryptography, may still be a generation away, computational tasks such
as data base search have already been achieved by exploiting the phenomenon of
classical wave interference. In the most speculative part of the paper, we
propose that dendro-dendritic connections may be complicit in this. Particularly in neocortex, dendro-dendritic connections have only recently been recognized, since they are comparatively uncommon in contrast to the axosynaptic connections among pyramidal cells, which account for perhaps 85% of connections. Finally, we allude to further work that we have done in which the RNF paradigm is applied to some classical problems with artificial neural nets (ANNs).

**2. The Resonate and Fire Model**

The
Hodgkin-Huxley system exhibits a stable low-amplitude oscillation which can be
considered in isolation from the production of action potentials. Izhikevich has
done preliminary work on the possibility that neurons may exhibit either
integrative or resonance properties. He posits that the neuron experiences a
bifurcation of the rest state and depending on the outcome subsequently behaves
as either an integrator or a resonator.

If the rest
state disappears via fold or saddle-node on invariant circle bifurcations, then
the neuron acts as an integrator; the higher the frequency of the input, the
sooner it fires. If the rest state disappears via an Andronov-Hopf bifurcation,
then the neuron acts as a resonator; it prefers a certain (resonant) frequency
of the input spike train that is equal to a low-order multiple of its
eigenfrequency. Increasing the frequency of the input may delay or even
terminate its response.

Integrators
have a well-defined threshold manifold, while resonators usually do not.
Integrators distinguish between weak excitatory and inhibitory inputs, while
resonators do not, since an inhibitory pulse can make a resonator fire.

Izhikevich points out that the Hodgkin-Huxley model
exhibits behaviors which are a superset of the standard IFN model. The low
amplitude oscillation of the membrane potential can be sustained for long
periods without the need for an action potential to result. Only when the
amplitude of oscillation reaches a threshold value do depolarisation and
action potential generation ensue. The resonance phase of the process is
non-trivial. Complex waveforms are permissible, and would suggest that this
phase of neuronal behaviour is of some importance to the behaviour of the
cognitive apparatus. The oscillations are directly related to the action
potential, since the same parameter, membrane potential, is central to both
phases. Since the action potential is of undoubted importance to the activity
of the brain, it would appear that an intimately related phenomenon should be
given thorough consideration. The IFN model is the result of a view of the
neuron which only considers a brief period prior to the generation of the
action potential. As such, we will show that the resonate and fire model is a
superset of the IFN, that it is capable of capturing all of the properties of
the IFN in addition to new and interesting capabilities with strong evidence
supporting the idea that such properties are critical to the transduction of
sensory data.

The physical
basis for the resonate and fire model lies in the fact that every object has a
frequency or a set of frequencies at which it naturally vibrates when struck,
strummed or somehow distorted. Each of the natural frequencies at which an
object vibrates is associated with a standing wave pattern. Standing waves are
formed when oscillations are confined to a volume, and the incident waveform
from the source interferes with the reflected waveform in such a way that
certain points along the medium appear to be standing still. Such patterns of
interference are produced in a medium only at specific frequencies referred to
as harmonics. At frequencies other than the set of harmonic frequencies, the
pattern of oscillation is irregular and non-repeating. While there are an
infinite number of ways in which an object can oscillate, objects prefer only a
specific set of modes of vibration. These preferred modes are those which
result in the highest amplitude of vibration with the least input energy.
Objects are most easily forced into these modes of vibration when disturbed at
frequencies associated with their natural frequencies.

The model
described here seeks to compromise between plausibility in the biological
domain, and efficiency in the computational domain. The level of granularity of
the model is an essential factor in this compromise. In order to model systems
with many interacting neurons, it was necessary to avoid the computational
overhead of compartmental models. The current model provides no spatial extent
for its neurons. The mathematical physics governing the harmonic oscillator is
used as a basis for the development of the resonate and fire model. The entity
that actually oscillates is the membrane potential. The driving forces are the
input spikes received on the neuron's dendritic field. The neuron's
oscillations are lightly damped under normal conditions. For a brief period
after firing, the oscillation is heavily damped, reflecting the quiescence
period found in biological neurons, typically referred to as the absolute
refractory period. The fundamental frequency of the neuron is a tunable
parameter, in our consideration; the details which would determine this
quantity in the biological instance are omitted. We treat it simply as a single
parameter that may be set arbitrarily.

The
oscillation of the membrane potential can alternatively be viewed as the
oscillation of the threshold at which the action potential is generated. The
arrival of an excitatory pulse to a dendrite will result in the summation of
the current membrane potential with the new input. If the current membrane
potential is high, smaller input will result in the threshold being reached and
an action potential being generated. Similarly, if the current potential is
low, a larger input will be required to force the resultant potential across
the threshold. From this viewpoint, the resonate and fire model can be seen to
be a superset of the IFN model. The behaviour of the IFN model can be simulated
with a resonate and fire neuron with a low resonant frequency (long period).
Input spikes are then summed in the usual manner with negligible influence from
the oscillation of the membrane potential.
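Since the authors note later that the model was implemented in C, the oscillating-threshold reading above can be sketched in that language. This is a minimal sketch under our own assumptions (the names, the reset-to-rest on firing, and the per-slice damping are ours); it is not the authors' code.

```c
#include <math.h>

/* Minimal resonate-and-fire (RNF) sketch: the membrane potential p
 * behaves as a damped harmonic oscillator driven by weighted input
 * spikes, and an action potential fires when p crosses threshold.
 * Names and the reset-to-rest behaviour are assumptions. */
typedef struct {
    double p;       /* membrane potential (displacement)  */
    double v;       /* rate of change of p (velocity)     */
    double omega;   /* angular eigenfrequency (2*pi*f)    */
    double damping; /* per-slice damping constant, 0..1   */
} rnf_neuron;

/* One global-clock tick of length dt; `input` is the summed weighted
 * spike input for this tick.  Returns 1 if the neuron fires. */
int rnf_step(rnf_neuron *n, double input, double dt, double threshold)
{
    /* spike drive, spring return force, damping */
    n->v += input - n->omega * n->omega * n->p * dt - n->damping * n->v;
    n->p += n->v * dt;
    if (n->p >= threshold) {
        n->p = 0.0;   /* return to rest after the action potential */
        n->v = 0.0;
        return 1;
    }
    return 0;
}
```

With `omega` near zero the return force is negligible and successive inputs simply sum until threshold, which is exactly the IFN limiting case described above.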

The IFN
model, in which two neurons that innervate a third node with excitatory
connection are always considered to cooperate, does not apply here. Such an
event sequence also illustrates the other side of selective innervation, when
the post-synaptic neuron is not selected by the pre-synaptic neuron, by virtue
of the fact that its resonant frequency means that the interspike delay is not
an integral multiple of the period of oscillation.

Such properties have obvious applications: one can envision an array of neurons forming a ``spectrographic map'' in which each neuron is attuned to a
different resonant frequency. Two input neurons innervate every neuron in the
map, so that when the two input neurons fire, the time between their firing
(inter-spike delay) will cause a single neuron in the map to react most
positively. The neuron that reacts with an action potential is the neuron whose
resonant period (the inverse of the frequency) most closely matches the
inter-spike delay. Such an arrangement can be generalized to implement a
pseudo-Fourier transform of an input channel. Each neuron in the spectrographic
map will ``own'' a particular narrow frequency band. The input channel is a
signal containing multiple frequencies superimposed upon one another. The input
innervates all neurons in the map, which produce action potentials if their
particular resonant frequency is present in the original signal.

The
implementation details of the resonate and fire model are straightforward. We
consider an idealized harmonic oscillator, similar to a mass on a spring. There
is a single point of equilibrium in such a system, where the position of the
mass is at the point where the spring is neither compressed nor stretched. The
mass is assumed to be floating in free space outside the influence of the
gravitational force, while the other end of the spring is bound to an idealized
fixed point. The mass is displaced from the equilibrium point by the arrival of
an impulse (push) of negligible duration. The displacement of the mass then
oscillates back and forth past the equilibrium position. The spring exerts a
``return force'' proportional to the magnitude of the displacement. The
frequency of oscillation is determined by both the size of the mass and the
magnitude of the return force exerted by the spring. In the real world, all
such oscillations gradually die off (though remain at the same frequency), due
to the damping effects of friction.

A more
familiar analogy would be that of a playground swing. Here the equilibrium position
of the swing seat is directly below the supporting bar, i.e. hanging straight
down. When we push the swing, it begins to swing to and fro (oscillate) past
the equilibrium point. If we want to make the swings ``higher'' (increase the
amplitude of oscillation) we must push the swing ``in phase'' with the basic
oscillation. This simply means that we must push it as it is at the top of the
back swing, or heading away from us. If we push it as it is coming toward us,
we are pushing ``out of phase'' with the basic oscillation, and the amplitude
thereby is decreased.

The
mathematical details of the model follow directly from the math used to
describe harmonic oscillation in bodies such as the mass on a spring, pendulums
and playground swings. The task here is to translate the basic ideas into a
form applicable to the resonate and fire neuron. Additionally we must formulate
this in a manner that is amenable to computational implementation.

The starting point for analysis is to consider the mass-on-a-spring arrangement. Here we have a mass that is displaced from the equilibrium point by some amount $x$ at any given moment; this displacement may be positive or negative. Due to the physical form of the spring, the mass always experiences a return force in the opposite direction to the current displacement:

$$F = -kx$$

where $k$ is a positive constant referred to as the spring constant. This equation captures the fact that the return force is proportional to the current displacement. This is the key fact by which such systems are characterized as *Harmonic Oscillators*. The basic behaviour of Harmonic Oscillators is captured by a differential equation, derived as follows.
By Newton's second law, we can relate the mass, return force and acceleration thus:

$$F = ma$$

Substituting, we arrive at

$$ma = -kx \quad\Longrightarrow\quad a = -\frac{k}{m}x$$

The above equation is simply shorthand for that which we know intuitively. It states that the current acceleration is proportional to the current displacement, and in the opposite direction. For the purposes of simulation, we rewrite the equation in its more common form, replacing $k$ and $m$ with the term $\omega^2$, defined below.

The term $\omega$ is defined as

$$\omega = \sqrt{\frac{k}{m}}$$

This result allows us to re-express the acceleration term in terms of $\omega$:

$$a = -\omega^2 x$$

A particular example of an equation which represents a solution to the general differential relation described above is written

$$x(t) = A\cos(\omega t + \phi)$$

where $A$ is any constant length and $\phi$ is any constant angle. The parameters which give an oscillator its unique properties are $A$, $\omega$ and $\phi$. The value of $A$ determines the amplitude of oscillation, that is, how far the maximum displacement from equilibrium will be. The term $\omega$ reflects the strength of the returning force. This in turn determines how quickly the mass returns to the equilibrium point (and indeed the velocity at which the equilibrium is passed). This equates to the more familiar concept of the frequency of oscillation. The frequency of oscillation is the number of complete cycles performed per second, and is the inverse of the period, the length of time required to complete a single cycle.

The period of oscillation of such a system is denoted $T$ and related to the other terms as follows:

$$T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{m}{k}}$$
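As a sanity check on the relation between the spring parameters and the period, a short time-sliced simulation (in C, the paper's implementation language) can recover the period numerically; the integration scheme and names are our choices.

```c
#include <math.h>

/* Numerical check of the period relation T = 2*pi*sqrt(m/k): integrate
 * x'' = -(k/m) x in small time slices (semi-implicit Euler) and measure
 * the time between successive upward zero crossings.  The scheme and
 * names are illustrative choices. */
double measured_period(double m, double k, double dt)
{
    double x = 1.0, v = 0.0, t = 0.0, first_cross = -1.0;
    while (t < 100.0) {
        double x_prev = x;
        v += -(k / m) * x * dt;  /* acceleration times the time slice */
        x += v * dt;             /* velocity times the time slice     */
        t += dt;
        if (x_prev < 0.0 && x >= 0.0) {  /* upward zero crossing      */
            if (first_cross < 0.0) first_cross = t;
            else return t - first_cross; /* one full period           */
        }
    }
    return -1.0;  /* no full period observed */
}
```

For m = 1 and k = 4, the angular frequency is 2, so the measured period should come out close to pi.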

In a fashion similar to the delta functions used to describe the IFN, we now demonstrate the operation of the resonate and fire model in mathematical terms. First, we must define some variables unique to the model:

$$\omega_i = 2\pi f_i, \qquad k_i = \frac{\omega_i^2}{f_g}$$

where $f_i$ is the resonant frequency of node $i$, and $f_g$ is the frequency of the global clock. The global clock frequency determines the granularity of simulation and may be set to any value; the default used to produce the graphs discussed previously is 1000. The term $k_i$ is referred to as the counter multiplier for node $i$. This term is introduced since it may be calculated once the resonant frequency is specified, and thus does not need to be recalculated subsequently.

The rate of change of the membrane potential of neuron $i$, or its velocity, is denoted by $v_i$. The change in the velocity for the current time step is calculated first:

$$\Delta v_i = \sum_j w_{ij}\, o_j - k_i p_i - \delta v_i$$

The contribution from input pulses from all pre-synaptic neurons is calculated by the sum-of-products term $\sum_j w_{ij} o_j$, where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, and $o_j$ is the current (axonal) output of neuron $j$. The current axonal output is always either a $1$ or a $0$, since action potentials are all-or-none events. The return force's contribution to the velocity calculation is expressed as $-k_i p_i$, where $k_i$ is the counter multiplier: the expression we arrived at for the acceleration previously, divided by the global clock frequency $f_g$. We divide by $f_g$ because we are performing a *time slice* calculation; in each step of the calculation we are simulating a period of time that is the inverse of the global clock frequency. The final term is the damping factor. The damping constant $\delta$ ranges from $0$ to $1$, and is typically assigned a small value. The effect of this parameter is to cause the oscillation to gradually die off, slowly reducing the amplitude, as seen previously in the graphs.
The calculation of the new membrane potential, $p_i$, is straightforward once we have calculated the new velocity. In a single period of the global clock, $p_i$ will change by the product of the current velocity and the time that we are simulating. Since the period is the inverse of the frequency, this can be expressed as

$$p_i \leftarrow p_i + \frac{v_i}{f_g}$$

where $f_g$ is the global clock frequency. At this point we have calculated the new membrane potential. All that remains is to handle the production of action potentials.

$$o_i = \begin{cases} 1 & \text{if } p_i > \theta \\ 0 & \text{otherwise} \end{cases}$$

The above equation is the mathematical characterization of the model's method for deciding the output of neuron $i$, denoted $o_i$. The result is simply that if $p_i$ is greater than $\theta$, which denotes the threshold, then $o_i$ is set to $1$; otherwise it is set to $0$. There are a number of actual mathematical functions that provide suitable implementations of this decision; however, in the computational implementation a single ``if'' statement suffices.

The
mathematical structures described thus far handle axonal inputs from
pre-synaptic neurons. Another major feature of the model is direct
dendro-dendritic connections. This aspect is accommodated through a simple
extension to the delta rule.

The new sum-of-products term

$$\sum_j d_{ij}\,(p_j - p_i)$$

is the sum, across all neurons $j$ providing dendritic inputs to neuron $i$, of the products of the current membrane potential of neuron $j$, $p_j$, minus the current membrane potential of neuron $i$, $p_i$, and the weight of the dendritic connection from neuron $j$ to neuron $i$, denoted $d_{ij}$. This factor is the key element in the creation of the dendritic field, through which waveforms may propagate. The difference between the axonal inputs and the dendritic connections in this model is that axonal inputs permit the transmission of single impulses only. The term $o_j$ is non-zero only when neuron $j$ has generated an action potential, while the term $p_j - p_i$ is almost always non-zero; hence the difference between the two sum-of-products terms. The dendritic connections transmit electrical ``pressures'' which cause recipient neurons' membrane potentials to become closer to their own.
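The dendritic ``pressure'' term can be sketched as follows, assuming the difference-times-weight form described above; the resonance and damping terms are omitted for clarity, and all names are illustrative. With symmetric weights, the pressures pull the two potentials toward each other while conserving their sum.

```c
/* Sketch of the dendro-dendritic ``pressure'' term: each dendritic
 * connection nudges the recipient's velocity by (p_j - p_i) times the
 * connection weight.  The resonance and damping terms of the full model
 * are omitted for clarity; names are illustrative. */
void dendritic_step(double p[], double v[], int n, const double w[], double dt)
{
    /* w is an n*n matrix of dendritic weights; w[i*n+j] is from j to i */
    for (int i = 0; i < n; i++) {
        double pressure = 0.0;
        for (int j = 0; j < n; j++)
            pressure += (p[j] - p[i]) * w[i * n + j];
        v[i] += pressure * dt;
    }
    for (int i = 0; i < n; i++)
        p[i] += v[i] * dt;
}
```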

It is easy to extend this model to provide for propagation delays. Each neuron is modeled as a set of parameters, including the current values of $p_i$ and $o_i$. We extend this to provide a history of the values of these parameters. As each time step of the simulation passes, the newly calculated values of $p_i$ and $o_i$ become the ``current'' values, while the old current values are stored in the history record. Axonal and dendritic connections are then augmented to specify which element of the history array they refer to, so that instead of using the current value of $p_j$ or $o_j$ in the delta rule, we may use the value as it was $s$ time steps ago. For convenience of implementation, the current value is stored in the history array as element $0$, element $1$ is the value as it was during the last time slice, and so on. The terms $w_{ij}$ and $d_{ij}$, which represent the parameters of the connection, are augmented to account for this with a superscript $s$ indicating the element of the history array that they refer to. This additional parameter is a fundamental property of the network topology of a resonate and fire network. So the final delta rule, which encapsulates resonance, axonal inputs, the dendritic field, and propagation delays, becomes

$$\Delta v_i = \sum_j w_{ij}^{s}\, o_j^{(s)} + \sum_j d_{ij}^{s}\,\bigl(p_j^{(s)} - p_i\bigr) - k_i p_i - \delta v_i$$

where $o_j^{(s)}$ and $p_j^{(s)}$ denote the values of neuron $j$'s output and membrane potential $s$ time slices ago, $k_i$ is the counter multiplier, and $\delta$ is the damping constant.
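The history mechanism can be sketched as a per-neuron array shifted once per time slice; the depth, names and types below are our assumptions.

```c
/* Sketch of per-neuron history for propagation delays: slot 0 holds the
 * current value, slot s the value s time slices ago.  The depth and all
 * names are assumptions for illustration. */
#define HIST 8

typedef struct {
    double p_hist[HIST];  /* membrane potential history     */
    int    o_hist[HIST];  /* axonal output history (0 or 1) */
} delayed_neuron;

/* Membrane potential of the neuron as it was `delay` slices ago. */
double delayed_p(const delayed_neuron *n, int delay)
{
    return n->p_hist[delay];
}

/* After computing the new current values, shift the history by one. */
void push_history(delayed_neuron *n, double p_new, int o_new)
{
    for (int s = HIST - 1; s > 0; s--) {
        n->p_hist[s] = n->p_hist[s - 1];
        n->o_hist[s] = n->o_hist[s - 1];
    }
    n->p_hist[0] = p_new;
    n->o_hist[0] = o_new;
}
```

A connection that names history slot s then simply reads `delayed_p` with that slot in its delta-rule contribution.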

The model
described above has been implemented using the C programming language.

**3. Signal processing and RNF.**

Sherrington
(1906) first suggested the concept of the integrate-and-fire neuron. Under this
scheme, the higher the frequency of the input spike trains, the larger the
input activity is considered to be. The neuron is then assumed to respond with
a firing rate that is a function of the input firing rates. McCulloch and Pitts
(1943) formalised the model and showed how to encode any logical proposition in
a network of their neurons. Similarly, any network could be shown to encode a
logical proposition.

Eccles
(1957) used spinal cord recordings to correlate the spike frequency with the
intensity of the applied stimulus as well as the intensity of the perceived
sensation. Under the frequency-coding scheme, neurons encode information by
frequency modulation of action potentials output on the axon. Increased firing
rates in the presence of certain stimuli were taken to indicate that the neuron
under observation was reacting directly to the presence of the feature which it
was tuned to react to.

An alternate
view of neuronal signaling which uses frequency coding as its basic component
is that of ``population coding'' (Georgopoulos et al., 1982). Under this
scheme, the intensity or salience of the content is conveyed using frequency
modulation, but the content itself is represented by a distributed combination
of spike trains across a population of neurons.

In terms of
visual processing, the assumption of feature detection follows an Euclidean
geometry hierarchy. First, there are point and line detectors. These feed into
edge and boundary detectors, and so on up the scale. Barlow (1972) suggested
the possibility of such hierarchies when he made the claim that aspects of
perceptual awareness are related to the activity of specific neurons. The
``grandmother'' cell hypothesis follows logically from this sequence. It holds
that there can be a single cell in the brain which is best attuned
for a single recognition task, such as the recognition of a single human face
(Grandma's). There has been some experimental evidence for such ``fine tuning''
of individual neurons, such as the demonstration of Tanaka (1993) of
``grandmother'' style cells in monkeys which respond to moderately complex
figures.

There are
numerous problems with such specific specialization of function at the cellular
level. From a redundancy viewpoint, it is simply bad design to have a single
point of failure of the recognition process as would be the case were a single
cell assigned to a single pattern. A key feature distinguishing neural networks
from other computational devices is the property of graceful degradation -
meaning that a large part of the system can be destroyed without completely
annihilating the behaviour of the system.

Hubel and
Wiesel's work on the receptive fields of individual neurons in the cat's
striate cortex was taken by many as proof positive that visual perception
followed the Euclidean hierarchy of points, lines and contours, shapes and
forms (1959). Each stage was seen to be built on the previous. The basic assumption
underlying this scheme is that the visual processing operation begins with a
two dimensional retinal image formed by the eye. As observes, the situation is much more complex
than that. The optical image is a flow in at least three dimensions, the retinal
image is curved, not flat, and the perceptual system has evolved to operate
under conditions where the subject is moving. As experiments (Rock, 1983) show,
the primitives of perception are ``relations between changes in oculocentric
and egocentric direction. Lines and edges are not the primitives that configure
the perceptual process; lines and edges result from the perceptual process,
they do not determine it.''

This is not
to say that the whole paradigm of viewing neural perceptual stages as feature
extraction exercises is wrong. Rather that it is time to examine carefully the
assumptions underlying the choice of features that we think are being
extracted. Ultimately, sensory data come in the form of a power spectrum, a
continuous stream of intensity values. The fact that hair cells in the ear are
tuned to specific frequencies, and the existence of neurons in the inferior
colliculus specifically oriented to pitch extraction are now commonplace in the
literature (see, for example, Braun 2000). We make the following
suggestions:

- The stimulus for processing of sensory data is ultimately a power spectrum
- Conventional neural net systems have great difficulty in handling phenomena like rotational invariance, scaling, and so on
- These problems can be avoided by considering the action of RNF neurons that own a part of the frequency spectrum

We now wish
to open out the discussion to talk about the specific role of dendro-dendritic
connections, and the possibility that the well-examined phenomenon of stochastic
resonance may point to a general process, ubiquitous in the brain, of computing
with wave interference.

**4. Computing with wave interference; the role of dendro-dendritic connections.**

While
quantum computing has become bogged down by the decoherence phenomenon, Ian
Walmsley and his associates, inter alia, have demonstrated the possibility of
computing by wave interference alone. Their celebrated demonstration is
effectively an interference-based optical computer. Our work described above
exemplifies the possibility of neurons implementing Fourier transforms by
stealth, as it were, by a single neuron "owning" a particular
bandwidth. In this brief section, we wish to suggest the possibility that the
structure of dendro-dendritic connections affords a more flexible and
potentially computationally powerful means of signal processing. In general, we
are suggesting, such mechanisms perform the work of computation in the brain,
while INF machinery effectively handles communication.

Pribram sees
the dendritic microprocess as a central locus of computational activity in
the brain. Spike trains and action potentials are seen more as communicative
devices than as the essence of the computational process. Izhikevich's resonate-and-fire
neuron, and the neural model described later, place greater emphasis on
the dendritic microprocess than conventional neural network models do.

An important
departure in Pribram's (1991) work is the emphasis on the role of
dendro-dendritic connections. Such connections are similar to normal
axonal-dendritic synaptic connections; however, the entity being transmitted is
not an action potential but the current internal state of the
neuron. In this way, Pribram proposes, computations can occur which involve
multiple neurons but which do not utilise axonal action potentials. This is
not to say that action potentials are relegated to insignificance in the model;
rather, dendritic processes have been promoted to a level on a par with action
potentials and conventional axonal transmission.

Recent evidence
from experimental studies has confirmed that subthreshold dendritic dynamics
are complex and appear to have an important role to play in the
computational activity of the brain. In particular, calcium channels (De Schutter
and Bower, 1993) react strongly to subthreshold inputs. Callewaert, Eilers and Konnerth
(1996) express the case for the dendritic process thus:

> Recent
results obtained by using high resolution imaging techniques provide clear
evidence for new forms of neuronal signal integration. In contrast to the
quickly spreading electrical potentials, slower intracellular signals were
found that are restricted to defined dendritic compartments. Of special
significance seem to be highly-localized, short-lasting changes in calcium
concentration within fine branches of the neuronal dendritic tree. These
calcium signals are evoked by synaptic excitation and provide the basis for a
dendritic form of signal integration that is independent of the conventional
electrical summation in the soma. There is experimental evidence that dendritic
integration is critically involved in synaptic plasticity.

The general
feature whereby neurons can "tune in" to a particular frequency component of
the aggregate oscillation in the dendritic field provides an important computational
asset to the model as a whole. It is also a phenomenon predicted to exist in
biological neurons by Llinas (1988). The fact that the dendritic field supports
such interference effects has deep ramifications; the modes by which the brain
performs computation may be very different from the current action-potential-centric
paradigm.

Readers
familiar with Young's double-slit experiment may find the analogy useful. In this
case, the light source is the input neuron, while the slits correspond to the
two output neurons. The screen on which the interference pattern appears is the
entire set of possible values of the delay constants; for a particular pair of
values we are measuring the interference at a single point on the screen. So,
for each experimental simulation of the network, we select a value of the delay
constants. As in Young's experiment, if the distance from each slit to
the point on the screen is exactly the same, then the waves from the two slits arrive
in phase and interfere constructively. If, however, the distance differs by
exactly half a wavelength, then destructive interference occurs and the waves
cancel each other out.

In addition to standard inputs coming from the axons of
presynaptic neurons, the RFN model implements inputs from the dendrites of
other neurons, transmitting the current activation of the pre-synaptic node.
This feature is directly inspired by Pribram (1991), who emphasizes the role of
such channels in the computational process in the brain.

Here we have
modeled the feature in a manner similar to the standard axonal input: the sum
of the products of connection weights and pre-synaptic outputs is augmented with
the sum of the products of dendritic connection weights and the current activations of
the pre-synaptic neurons. The only difference, therefore, is that the current
activation is used instead of the current output.
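
A minimal sketch of this augmented net input (the function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def rfn_net_input(axonal_w, presyn_out, dend_w, presyn_act):
    """Net input to an RFN unit: the usual sum of axonal weight times
    pre-synaptic *output*, augmented by dendro-dendritic terms that use
    each pre-synaptic neuron's current *activation* (internal state)."""
    return float(np.dot(axonal_w, presyn_out) + np.dot(dend_w, presyn_act))

# Two axonal inputs plus one dendro-dendritic input (illustrative values):
net = rfn_net_input([1.0, 2.0], [0.5, 0.5], [0.1], [1.0])  # 1.5 + 0.1 = 1.6
```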

On its own,
this mechanism would not be very useful: the contribution from dendro-dendritic
connections to a post-synaptic neuron's activation would simply be the linear
sum of the current activations of its pre-synaptic neurons. This situation is
corrected by the addition of the delay mechanism discussed previously. Each
dendro-dendritic connection has an associated weight and delay; the delay
corresponds to a propagation delay in the biological case. As the diagrams
illustrate, this mechanism permits an innervated neuron to position itself
anywhere in the interference field of a set of neurons by tuning the delay
parameters of its dendritic connections.
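
The delay mechanism can be sketched as follows (the 10 Hz frequency and the specific delays are illustrative assumptions): two dendro-dendritic inputs carrying the same oscillation reinforce each other when their delays match, and cancel when the delays differ by half a period, exactly as in the double-slit analogy above.

```python
import numpy as np

t = np.arange(0.0, 1.0, 1e-3)   # 1 s of "simulation" at 1 ms resolution
freq = 10.0                      # Hz; shared subthreshold oscillation
period = 1.0 / freq

def summed_input(delay_a, delay_b):
    """Aggregate dendritic drive: sum of two delayed copies of the wave,
    one per dendro-dendritic connection."""
    wave_a = np.sin(2 * np.pi * freq * (t - delay_a))
    wave_b = np.sin(2 * np.pi * freq * (t - delay_b))
    return wave_a + wave_b

constructive = summed_input(0.0, 0.0)        # equal delays: in phase
destructive = summed_input(0.0, period / 2)  # half-period offset: cancel

# peak |constructive| = 2, peak |destructive| = 0 (up to rounding)
```

Tuning the delay pair thus moves the post-synaptic neuron continuously between the bright and dark fringes of the interference field.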

We have also
implemented a neural net architecture using this basic idea, in which neurons
learn which frequencies to respond to.

**5. Conclusions**

This paper
makes a set of claims ranging in strength from categorical to extremely
tentative. The fact that, after a century of modern neuroscience, we have yet to
establish the neural basis for a single symbolic cognitive act must surely give
pause. Elsewhere (O Nualláin, 2003) we speculate that entirely different
formalisms, like Lie groups, may be appropriate for the analysis of brain function, in
addition to the Hilbert and other transforms hinted at here. It is
uncontroversial at this stage to contend that old-fashioned INF greatly needs
to be augmented. We contend that RFN may offer a superset formalism. We go on
to posit that dendro-dendritic connections may yield a fundamental set of new
insights, which we look forward to pursuing.

**References**

Barlow, H.B. (1972) Single
neurons and sensation: a neuron doctrine for perceptual psychology.
Perception 1, 371-394.

Biebel, U.W., Langner, G., 1997.
Evidence for "pitch neurons" in the auditory midbrain of chinchillas.
In: Syka, J. (Ed.), Acoustic Signal Processing in the Central Auditory System.
Plenum Press, New York

Braun, M., 2000. Inferior
colliculus as candidate for pitch extraction: multiple support from statistics
of bilateral spontaneous otoacoustic emissions. Hear. Res. 145, 130-140.

Braun, M., 1999. Auditory
midbrain laminar structure appears adapted to f 0 extraction: further evidence
and implications of the double critical bandwidth. Hear. Res. 129, 71-82.

Callewaert, G., Eilers, J. and
Konnerth, A. (1996) Axonal calcium entry during fast 'sodium' action potentials in rat
cerebellar Purkinje neurones. J. Physiol. (Lond.) 495, 641-647.

Georgopoulos, A., Kalaska, J.,
Caminiti, R. and Massey, J. (1982). On the
relations between the direction of two-dimensional arm movements and cell
discharge in primate motor cortex. Journal of Neuroscience 2(11), 1527-1537.

Hutcheon, B. and Yarom, Y.
(2000) Resonance, oscillation, and the intrinsic frequency preferences of
neurons. Trends Neurosci. 23(5), 216-222.

Izhikevich, E.M. (2002) Resonance
and selective communication via bursts in neurons having subthreshold
oscillations. BioSystems 67, 95-102.

Llinas, R. (1988) The intrinsic
electrophysiological properties of mammalian neurons: insights into
central nervous system function. Science 242, 1654-1664.

Langner, G., Schreiner, C.E.,
Biebel, U.W., 1998. Functional implications of frequency and periodicity coding
in auditory midbrain. In: Palmer, A.R., Rees, A., Summerfield, A.Q., Meddis, R.
(Eds.), Psychophysical and Physiological Advances in Hearing. Whurr, London,
pp. 277-285.

Langner, G., Schreiner, C.E. and
Merzenich, M.M. (1987) Covariation of latency and temporal resolution in the
inferior colliculus of the cat. Hear. Res. 31, 197-201

McCulloch, W. and Pitts, W.
(1943). A logical calculus of the ideas immanent in nervous activity. Bulletin
of Mathematical Biophysics 5, 115-133.

Rees, A. and Sarbaz, A. (1997)
The influence of intrinsic oscillations on the encoding of amplitude modulation
by neurons in the inferior colliculus. In: J. Syka (Ed.), Acoustic Signal
Processing in the Central Auditory System, Plenum Press, New York, pp. 239-252

Pribram, K. (1991) Brain and
Perception: holonomy and structure in figural processing. N.J. : Lawrence
Erlbaum

Reinker, S., Puil, E. and
Miura, R.M. (2004) Membrane resonance and stochastic resonance modulate firing
patterns of thalamocortical neurons. Journal of Computational Neuroscience 16(1), 15-25.

Rudolph, M. and Destexhe, A.
(2001) Do neocortical pyramidal neurons display stochastic resonance?
Journal of Computational Neuroscience 11, 19-42.

De Schutter, E. and Bower, J.M. (1993) Parallel fiber inputs gate the Purkinje cell response
to ascending branch synaptic inputs. Soc. Neurosci. Abstr. 19, 1588.

Sherrington, C.S. (1906) The Integrative
Action of the Nervous System. Cambridge University Press: Cambridge, UK.

Wu, M., Hsiao, C.-F. and
Chandler, S.C. (2001) Membrane resonance and subthreshold membrane oscillations
in mesencephalic V neurons: participants in burst generation. The Journal of
Neuroscience 21(11), 3729-3739.
