
Trying to develop a Computer model of a biological Neural Network


R. Storm

10 Sept 1998

I’m very much fascinated by biological Neural Networks. But because I’m
a computer scientist, and not a biologist or medical scientist, I don’t
know a great deal about the biological aspects of Neural Networks. I’ve
made a very simplified requirement specification of a computer model for
Neural Networks.
I would like to ask someone to take a look at it (from a biological point
of view), and tell me what else I need to know, or what’s wrong with it.

Any other comments are welcome…

Thanks, Ray


.
.
.
.

N N EEEE U U RRR AA L N N EEEE TTT W W OO RRR K K SSSS
NN N E U U R R A A L NN N E T W W O O R R K K S
N N N EEEE U U RRR AAAA L N N N EEEE T W W O O RRR KK SSS
N NN E U U R R A A L N NN E T W W W O O R R K K S
N N EEEE UUUU R R A A LLL N N EEEE T W W OO R R K K SSSS
Computational Neural Network Model
Requirements Specification
(C) 1998 By R. Storm


1. Neural Networks Overview
---------------------------


1.1 Biological Neural Networks
------------------------------

1.1.1 Information about biological neurons
------------------------------------------

- Each neuron acts on its own.
- A neuron collects signals at its synapses by summing all excitatory
and inhibitory influences upon it.
If the excitatory influences are dominant, the neuron sends a message
to other neurons. This is decided by the neuron threshold function
(e.g. step, ramp, sigmoid or Gaussian).
- The neuron value degrades over time, otherwise the threshold would
always be exceeded.
- An axon carries information through a series of action potentials.
- A synapse represents the junction between an axon and a dendrite.
- Synapses are made during the early life of an organism. Probably,
when a neuron is stimulated enough, connections are made with nearby
neurons. These connections are not reversible.

1.1.2 Information about biological neural networks
--------------------------------------------------

- Over 100 billion (10^11) neurons in a human brain.
- A maximum of about 1000 synapses on the inputs and output of a human neuron.
- Parallel processing.
- Has a way of preventing information overloading or loss.
- All neurons are positioned in a 3d space.
- Sensors are simply attached to the dendrites of a group of neurons.
- Actuators are attached to the axon of a group of neurons.


1.2 Simulation of Biological Type Neural Networks
-------------------------------------------------

To successfully design a simulated model of a biological neural network,
it is necessary to discuss how each of the properties mentioned in
paragraph 1.1 can be implemented:

1.2.1 Computational Model of a biological neuron
------------------------------------------------

- Each neuron acts on its own.

This is a big problem if the model is designed for a computer with a
single CPU. So there must be a timed method of the Neural Network class
which can perform the actions for each neuron. This can be done in several
ways; the best way will probably be:
Each neuron that has been stimulated sets a flag, which indicates it
needs to be updated. Then the method that updates the network processes
all flagged neurons in a fixed order.
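
For illustration, a minimal C++ sketch of this flag-based scheme (the
class and member names below are assumptions, not part of the
specification):

#include <vector>

class Neuron {
public:
    bool needsUpdate = false;   // set when the neuron is stimulated
    double value = 0.0;         // summed input since the last update

    void stimulate(double signal) {
        value += signal;
        needsUpdate = true;
    }

    void update() {
        // threshold test, decay and firing would go here
        needsUpdate = false;
    }
};

class NeuralNetwork {
public:
    std::vector<Neuron> neurons;

    // Timed method: walk all neurons in a fixed order and update
    // only those that were flagged since the last cycle.
    void updateCycle() {
        for (Neuron& n : neurons) {
            if (n.needsUpdate)
                n.update();
        }
    }
};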

- A neuron collects signals at its synapses by summing all excitatory
and inhibitory influences upon it.
If the excitatory influences are dominant, the neuron sends a message
to other neurons. This is decided by the neuron threshold function
(e.g. step, ramp, sigmoid or Gaussian).

When a neuron's threshold value is exceeded, it uses its numbered list of
synapses containing pointers to the next neurons. When updating, a
triggered neuron sends a signal to all other neurons to which it is
connected.
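
A possible sketch of the threshold test and the propagation over the
synapse list, using a sigmoid as one example of the mentioned threshold
functions (all names are assumptions):

#include <vector>
#include <cmath>

struct Neuron {
    double value = 0.0;
    double threshold = 1.0;
    std::vector<Neuron*> synapses;   // pointers to the next neurons

    // One of the mentioned threshold functions: a sigmoid centred
    // on the threshold value.
    double activation() const {
        return 1.0 / (1.0 + std::exp(-(value - threshold)));
    }

    void receive(double signal) { value += signal; }

    // If the threshold is exceeded, send a signal to every neuron
    // this neuron is connected to.
    void updateTriggered() {
        if (value > threshold) {
            double out = activation();
            for (Neuron* next : synapses)
                next->receive(out);
        }
    }
};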

- The neuron value degrades over time, otherwise the threshold would
always be exceeded.

Two data members of the Neuron class will keep track of the reduction of
the neuron value. The write-off delay indicates the number of update
cycles before reduction begins. The other keeps track of the number of
cycles that have passed since the last update. The amount of reduction is
determined by the reduction function, which can look very much like the
neuron threshold function.
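
A minimal sketch of these two data members, using a simple multiplicative
reduction purely as an example (all names and constants are assumptions):

struct Neuron {
    double value = 0.0;
    int writeOffDelay = 5;       // cycles before reduction starts
    int cyclesSinceUpdate = 0;   // cycles since the last stimulation

    void reduce() {
        ++cyclesSinceUpdate;
        if (cyclesSinceUpdate > writeOffDelay)
            value *= 0.9;        // example reduction function
    }

    void stimulate(double signal) {
        value += signal;
        cyclesSinceUpdate = 0;   // stimulation postpones the write-off
    }
};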

- An axon carries information through a series of action potentials.

Such a series can, and probably will, span several update cycles when
the neuron is triggered multiple times.

- A synapse represents the junction between an axon and a dendrite.

A synapse is represented either by a class of its own or simply by a list
or array of dendrites with pointers to the connected neurons.
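
Both representations could look roughly like this (names are assumptions):

#include <vector>

struct Neuron;                        // forward declaration

// (a) Synapse as its own class: carries an excitatory (+) or
//     inhibitory (-) weight besides the pointer to the target neuron.
struct Synapse {
    Neuron* target = nullptr;
    double  weight = 1.0;             // > 0 excitatory, < 0 inhibitory
};

// (b) Or simply a list of pointers inside the neuron itself.
struct Neuron {
    std::vector<Synapse> synapses;    // variant (a)
    // std::vector<Neuron*> targets;  // variant (b): plain pointer list
};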

- Synapses are made during the early life of an organism. Probably,
when a neuron is stimulated enough, connections are made with nearby
neurons. These connections are not reversible.

When a neuron is stimulated enough, it may create a synapse with the
nearest neuron (possibly the one with the least activity).
This could be made reversible: a neuron that wasn't stimulated enough
removes all of its synapses, using a data member that keeps track of its
activity.
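
A sketch of this reversible growth and pruning, assuming an activity
counter and arbitrary example thresholds:

#include <vector>

struct Neuron {
    double activity = 0.0;                 // tracks recent stimulation
    std::vector<Neuron*> synapses;

    // A strongly stimulated neuron may connect to a nearby neuron
    // (here simply the candidate handed in by the network).
    void maybeConnect(Neuron* nearby, double growThreshold = 10.0) {
        if (activity > growThreshold)
            synapses.push_back(nearby);
    }

    // A neuron that was not stimulated enough removes its synapses.
    void maybePrune(double pruneThreshold = 1.0) {
        if (activity < pruneThreshold)
            synapses.clear();
    }
};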

1.2.2 Computational Model of biological neural networks
-------------------------------------------------------

- Over 100 billion (10^11) neurons in a human brain.

This is possible, but it will use a lot of memory and CPU time. Even
though a computer is faster per operation, it cannot exploit the brain's
advantage of massively parallel processing.

- A maximum of about 1000 synapses on the inputs and output of a human neuron.

This can be limited by a Neuron data member, which represents the maximum
number of synapses.
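
For example (the limit of 1000 is taken from the figure quoted above; the
rest of the names are assumptions):

#include <cstddef>
#include <vector>

struct Neuron {
    std::size_t maxSynapses = 1000;       // limit from the figure above
    std::vector<Neuron*> synapses;

    bool addSynapse(Neuron* target) {
        if (synapses.size() >= maxSynapses)
            return false;                 // refuse: limit reached
        synapses.push_back(target);
        return true;
    }
};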

- Parallel processing.

See paragraph 1.2.1, section: "Each neuron acts on its own".

- Has a way of preventing information overloading or loss.

How information overloading is handled is described in paragraph 1.2.1,
section "The neuron value degrades over time, otherwise the threshold
would always be exceeded".
As for information loss (or forgetting): humans have short- and long-term
memory. It probably has something to do with the degeneration of the
synapse connections (caused by aging of the organism) and with the neuron
thresholds (and their write-off delay). Long- and short-term memory can be
simulated by giving groups of neurons different neuron-value write-off
delays.
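
A small sketch of this idea, with arbitrary example delays:

#include <vector>

struct Neuron {
    int writeOffDelay = 5;   // cycles before the value starts to decay
};

// Give one group a short delay ("short-term memory", forgets quickly)
// and another group a long delay ("long-term memory", persists).
inline void assignMemoryGroups(std::vector<Neuron>& shortTerm,
                               std::vector<Neuron>& longTerm) {
    for (Neuron& n : shortTerm) n.writeOffDelay = 5;
    for (Neuron& n : longTerm)  n.writeOffDelay = 5000;
}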

- All neurons are positioned in a 3D space.

The neurons will be ordered in memory (through arrays or lists) in a
box-like representation, of which the length, height and depth are
variable.
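
A sketch of such a box, stored as one flat array with variable width,
height and depth (names are assumptions):

#include <cstddef>
#include <vector>

struct Neuron { double value = 0.0; };

class NeuronBox {
public:
    NeuronBox(std::size_t w, std::size_t h, std::size_t d)
        : width(w), height(h), depth(d), neurons(w * h * d) {}

    // Map a 3D position onto the flat array.
    Neuron& at(std::size_t x, std::size_t y, std::size_t z) {
        return neurons[(z * height + y) * width + x];
    }

private:
    std::size_t width, height, depth;
    std::vector<Neuron> neurons;
};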

- Sensors are simply attached to the dendrites of a group of neurons.

Add a physical attribute to the main application (organism) to connect
input organs (keyboard, bitmap) to the neural network. The neural network
must have means (read: a method) to connect the sensor to the neural
network.

- Actuators are attached to the axon of a group of neurons.

Add a physical attribute to the main application (organism) to connect
output organs (screen, sound, printer) to the neural network. The neural
network must have means (read: a method) to connect the actuator to the
neural network.
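
A combined sketch of the sensor and actuator connection methods (all
names, and the idea of addressing neuron groups by index, are
assumptions):

#include <cstddef>
#include <vector>

struct Neuron {
    double value = 0.0;
    void stimulate(double s) { value += s; }
};

class NeuralNetwork {
public:
    std::vector<Neuron> neurons;
    std::vector<std::size_t> sensorGroup;    // indices fed by the sensor
    std::vector<std::size_t> actuatorGroup;  // indices read by the actuator

    void connectSensor(const std::vector<std::size_t>& group)   { sensorGroup = group; }
    void connectActuator(const std::vector<std::size_t>& group) { actuatorGroup = group; }

    // Called by the organism: push sensor input (e.g. a bitmap pixel).
    void senseInput(double signal) {
        for (std::size_t i : sensorGroup) neurons[i].stimulate(signal);
    }

    // Called by the organism: read the actuator output (e.g. for a screen).
    double actuatorOutput() const {
        double sum = 0.0;
        for (std::size_t i : actuatorGroup) sum += neurons[i].value;
        return sum;
    }
};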


---=== End ===---

Matt Jones

10 Sept 1998

In article <6t84i1$2...@pmsnnews.best.ms.philips.com> R. Storm,

rst...@best.ms.philips.com writes:
>I would like to ask someone to take a look at it (from a biological point
>of view), and tell me what else I need to know, or what’s wrong in it.
>

Hi Ray,

One comment I could make about your proposed model concerns the "realism"
of treating a neuron as a summing unit with a threshold. This would
really be a very, very rough approximation. Neuronal membrane responses
are highly nonlinear. Also, because of a property called "inactivation"
that the voltage-gated ion channels have, and because the voltage change
is a function of the total membrane conductance, the firing threshold for
a neuron is not exactly a fixed value. Rather, the threshold can move
around depending on what synaptic conductances (for example) are active,
and also depending on the steepness of the voltage trajectory. These are
somewhat complicated effects to simulate on a large scale, because they
usually require solving a bunch of differential equations for _each_
neuron. But there may be some simplifying assumptions that can be made,
while still preserving the important nonlinearities.
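
One possible simplification along these lines, purely as an illustration
(a leaky integrate-and-fire unit with an adaptive threshold, advanced by
a crude Euler step rather than the full channel equations; all constants
are arbitrary):

struct LIFNeuron {
    double v = 0.0;            // membrane potential (arbitrary units)
    double threshold = 1.0;    // moving firing threshold
    double tauV = 20.0;        // membrane time constant (ms)
    double tauTh = 100.0;      // threshold recovery time constant (ms)
    double baseTh = 1.0;       // resting threshold

    // Advance the neuron by dt milliseconds with synaptic input I.
    // Returns true if the neuron fires during this step.
    bool step(double I, double dt) {
        v += dt * (-v / tauV + I);                       // leaky integration
        threshold += dt * (baseTh - threshold) / tauTh;  // threshold relaxes back
        if (v >= threshold) {
            v = 0.0;            // reset after a spike
            threshold += 0.5;   // a spike raises the threshold for a while
            return true;
        }
        return false;
    }
};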

Another, and perhaps more important, comment has to do with modelling
synapses. In general, synaptic strength is also not a fixed quantity.
Aside from really gross long-term changes in synaptic efficacy (i.e.,
Long Term Potentiation and Depression), there are changes that act
constantly and at millisecond time scales. For example, after an initial
stimulus, most central synapses will experience either a facilitation or
a depression that lasts for tens of milliseconds. Thus, _each_ synapse
may adjust its strength somewhat depending on the recent history of
activity at that synapse. For more info on this point, and methods for
modelling it, you might want to look up papers by Tsodyks and Markram.
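
A simplified dynamic-synapse sketch in the spirit of such short-term
plasticity models (a per-time-step approximation, not the exact published
formulation; all constants are arbitrary):

struct DynamicSynapse {
    double x = 1.0;        // fraction of available resources (depression)
    double u = 0.2;        // utilisation / release probability (facilitation)
    double U = 0.2;        // baseline utilisation
    double tauRec = 800.0; // recovery time constant (ms)
    double tauFac = 50.0;  // facilitation time constant (ms)

    // Advance the synapse by dt milliseconds; if the presynaptic
    // neuron spiked, return the effective strength of this release.
    double step(bool presynapticSpike, double dt) {
        x += dt * (1.0 - x) / tauRec;   // resources recover towards 1
        u += dt * (U - u) / tauFac;     // utilisation decays towards baseline
        if (!presynapticSpike) return 0.0;
        u += U * (1.0 - u);             // facilitation: a spike raises utilisation
        double release = u * x;         // effective synaptic strength
        x -= release;                   // depression: resources are consumed
        return release;
    }
};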

Cheers,

Matt Jones

Jeff Best

12 Sept 1998

Ray,

Some other thoughts.

It occurs to me that one of the things we don't simulate with ANNs is
the delay in recovering fluency in the practice of a skill. Assuming
that this is encoded via some route through a network and we imagine the
route as involving the transport of a resource (call it action
potentials, if you will), then, as the resource is pushed through each
axon, some must get left behind. This acts as a repository of resource
which effectively lowers the threshold for that axon. If the axon
doesn't fire for a while, this resource may get transformed into a more
stable substance or absorbed into some nearby intra-cellular structure.
Subsequent resumption of the skill triggers the release of this reserve
of resource, but it takes a little while, hence, we are not immediately
fluent upon the resumption of a skill abandoned for some little while.

Another thought. Does the neuron, as fed by the resource arriving via
dendrites, provide all the resource for punching through its axons? Does
the arriving resource just act as a switch, allowing resource from the
neurocellular fluid to flow into the cell and through the axon? Is this
a basis for mood-inhibition of neurocellular activity (i.e. lack of
resource in the neurocellular fluid inhibits resource flow through axons
to whatever has arrived via dendrites or is locked up in the cell)? If
one neuron has many axons, and has to supply all the resource to fire
them from a smaller number of feeding dendrites, the intensity of signal
through the axons would be reduced in comparison to the arriving signal.

--
Jeff Best
je...@jtbest.demon.co.uk

Jeff Best

12 Sept 1998

I've just done some further reading[1]...

<READING>

Apparently the Purkinje cell of the cerebellum may have 100,000
dendritic synapses.

The ribosomes (protein synthesis sites) are many in the dendrites and
few (if any) in the axon.

The axon usually emerges from an "axon hillock".

Some soma and dendrites transmit information directly to the dendrites
and cell bodies of another neuron.

Neurons "feed" on glucose, although they have the enzymes for
metabolising other sugars, ketone fats, acetoacetate, lactate and
3-hydroxybutyrate. The latter chemicals are less likely to pass through
the blood-brain barrier, although infants may use them.

Vitamin B1 (thiamine) is necessary to the use of the glucose and neurons
will eventually die in its absence.

Glia cells occupy as much space as neurons. Some construct the myelin
sheaths. They also keep unmyelinated axons apart. They form a sewage
disposal function, occupy the space left by a deceased neuron and
sometimes form scar tissue. They may provide structural support which
holds "connections" in place against external shocks.

Neuron migration is guided by long-fibred radial glia. (Possibly by
secreting neurotrophin, semaphorin or the controlling cyclic
nucleotides?)

Glia retain the ability to divide but most neurons lose it. Dendrite
growth and shrinkage is always possible. Larger animals and those kept
in more extensive environments develop more dendritic branches and glia
cells. These changes have limited persistence. (I wonder what putting a
human in a prison cell for X years does to dendritic branching?)
Dendrites appear to be constantly changing.

Potassium, sodium, chloride, bicarbonate (and other?) ions can pass
through the polarised neuron cell membrane via pores in embedded
proteins. At rest, sodium pores are closed. Potassium and chloride pores
allow a constant, small flow of ions (and the occasional sodium ion).

A sodium-potassium pump pulls two potassium ions into the cell for every
three sodium ions ejected. The potassium ions can diffuse out again,
faster than the sodium ions can return. The result is a concentration
gradient, an electrical gradient and the neuron's resting potential. The
electrical gradient encourages potassium ions to bunch together in the
centre of the neuron.

The sodium ion concentration gradient allows the neuron to be ready for
a rapid response. On excitation, the neuron opens the sodium pores and a
faster inrush of sodium ions than would otherwise be possible, is
achieved.

Stimulation of any amount beyond a threshold (generally 15 mV above the
resting potential), causes the sodium gates to be opened.
</READING>

Presumably, the spike amplitude is dependent upon the sodium
concentration outside the cell. If this is depressed by any factor, the
intensity of the spike may be diminished. Again, if anything slows the
opening of the sodium gate, or alters the resting potential or
threshold, then the spike may be delayed. If anything ties up the
potassium in the neuron, the reestablishment of the resting potential
will be delayed as will any subsequent spikes.

<READING>
The depolarisation of the membrane due to the spike, opens the sodium
and potassium gates. Sodium ions flood in faster and potassium ions
depart. Eventually a reversed polarity is achieved although, at the
spike's (action potential's) peak, the sodium ion concentration in the
neuron is still lower than that outside. Despite the concentration
gradient, the electrical gradient is reversed to the point where further
sodium entry is virtually halted. The resting state is reached as the
sodium-potassium pump restores the previous balance.
</READING>

<SPECULATION>
If slightly more sodium ions remain, it is conceivable that the
excitation of the neuron will be marginally faster (fewer ions need to
enter the cell), hence leading to a "more expert" response. (I'm still
looking for the mechanism that explains skill-fluency and its temporary
loss and recovery).
</SPECULATION>

<READING>
The refractory period (during which the neuron recovers from excited
state to resting), determines the possible firing frequency of the cell.

Action potentials arise in the axon hillock. They are transmitted along
the axon as sodium ions, but some diffuse through the membrane.
Polarisation results, leading to ion exchange, regenerating the signal.
The result can be viewed as a wave along the axon. Different axons
propagate this wave at different rates. Thin axons may achieve 1 m/s.
Thicker, unmyelinated axons may propagate action potentials at 10 m/s.
The fastest myelinated axons may achieve 120 m/s. Axons in some larger
animals may exceed a metre in length.

Action potentials may not be generated in small neurons. Instead a
graded potential is generated which is conducted to areas adjacent to
the cell. Decay over distance limits this to local cellular
intercommunication, but, the communication can be in any direction.

At the presynaptic ending of an axon, neurotransmitters are released
across a synapse. Each such release contributes to the excitation or
inhibition of the "connected" dendrite. But, there are some "electrical"
synapses used for synchronisation. Repeated stimulation of an axon may
be sufficient to generate an action potential in the postsynaptic
neuron. This has to overcome the steady decay of the graded potential
invoked in the post-synaptic dendrite.

Neurotransmitters include Dopamine, Epinephrine, Acetylcholine,
Norepinephrine, Serotonin, aspartate, glutamate, glycine, some amino
acide metabolites, the enkephalins and many hormone peptides.

Calcium pores in the presynaptic membrane are opened by the arrival of
the action potential. The resulting increase in calcium concentration
triggers the release of an amount of neurotransmitter. Sometimes this
release occurs periodically without an action potential.

The neurotransmitter diffuses across to the postsynaptic membrane (less
than 0.5 microns in under 10 microseconds) and attaches to a receptor.
The inhibitory or excitatory effect of the neurotransmitter depends upon
the type of receptor it finds.

Some neurotransmitters attaching to a receptor open sodium, chloride or
potassium gates for a fixed-period ionic effect. The others cause a
metabolic effect by stimulating an enzyme. This will convert an existing
resource to some other chemical. This may cause chemical changes in
proteins, affecting gate operation or even neuron structure. Some
resulting molecules themselves act as secondary transmitters.

Ionic effects may last a few milliseconds, but peptide transmitters may
have an effect lasting hours.

Some neurotransmitters need to be dismantled after use and the products
can diffuse back to the presynaptic neuron. Others are detached from
their receptors and reabsorbed directly, or broken down into products
that diffuse into the blood stream.
</READING>

[1] James W. Kalat, Biological Psychology, 3rd Edition, 1988, Wadsworth
Inc.

I hope this compensates for the earlier, uninformed speculation <g>

--
Jeff Best
je...@jtbest.demon.co.uk

Windows 95 [win'doze], n. a Linux configuration utility.
