Problems With the Izhikevich Model

toms...@riseup.net

Jun 18, 2009, 6:45:31 AM
to Brian
Hello,

Brian is a great resource; I've been able to put together in days what
I thought would take me months. I've come up against a problem using
the Izhikevich model, though, both with the brian.library.IF import and
with the raw equations shown in the documentation. I've had a good
search around for a solution but haven't found anything.

The following very simple code shows me trying to make a single
Izhikevich neuron and plot its resting state (note there is no attempt
to `drive' the neuron):


from brian import *
from brian.library.IF import *

eqsRS = Izhikevich(a=0.02/ms, b=0.2/ms)
resetRS = AdaptiveReset(Vr=-65*mV, b=2.0*nA)
group = NeuronGroup(1, model=eqsRS, threshold=-30*mV, reset=resetRS)

M = StateMonitor(group, 'vm', record=True)
N = StateMonitor(group, 'w', record=True)

run(250*ms)
plot(M.times / ms, M[0] / mV)
plot(N.times / ms, N[0] / mV)
show()


I have the problem that the recovery variable w falls off extremely
quickly to around -14000mV. I get the same problem when using the raw
equations for the Izhikevich model:


eqs = Equations('''dvm/dt=(0.04/ms/mV)*vm**2+(5/ms)*vm+140*mV/ms-w : volt
                   dw/dt=a*(b*vm-w) : volt/second''')


Can anyone reproduce this behaviour? I'm using Python 2.6 and Brian
1.1.2 (I think). Ideally, I would like to use the raw equations
because I'd like to add another equation which doesn't affect the
neuron potential but is driven by afferent firings and falls off
slowly with time. A solution using the raw equations would be most
helpful.

One last question: Izhikevich's original equations have a term `+ i'
at the end of the `dvm/dt...' line, which is for the input to the
neuron. Where is this accounted for in the Brian version?

Thanks in advance. Again, it's a brilliant library.

Tom

Romain Brette

Jun 18, 2009, 6:54:06 AM
to brians...@googlegroups.com
Hi Tom,

This is not a bug! It is because w is in volt/second, and on this line:
plot(N.times / ms, N[0] / mV)
you divide it by mV.
If you look at the equations, the resting value of w is b*vm =
(0.2/ms) * (-70*mV) = -14*mV/ms.
Dividing that by mV gives -14/ms = -14000/second.

If you want an extra input current, do it like this:
eqs = '''
dvm/dt=(0.04/ms/mV)*vm**2+(5/ms)*vm+140*mV/ms-w+I : volt
dw/dt=a*(b*vm-w) : volt/second
I : volt/second'''
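The units explanation above can be checked with plain scalar stand-ins (not Brian's unit system; the variable names here are just illustrative), which reproduce the -14000 on the plot:

```python
# Scalar stand-ins for Brian's units, expressed in SI base units.
ms = 1e-3   # seconds
mV = 1e-3   # volts

b = 0.2 / ms           # 1/second
vm_rest = -70 * mV     # volts
w_rest = b * vm_rest   # volt/second, the resting value of w

print(w_rest / mV)          # dividing a volt/second by mV gives 1/second: -14000.0
print(w_rest / (mV / ms))   # dividing by mV/ms gives the expected -14.0
```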

Romain

toms...@riseup.net

Jun 18, 2009, 7:57:00 AM
to Brian
Romain,

thank you for the incredibly quick reply. Perhaps I shouldn't have
used the word `reproduce'; I didn't mean to make it sound like a bug.
Thanks for spotting that silly mistake of mine. I'm not used to
working with differential equations and dimensions in programming, so
some of this is new to me.

I have one more question. Using the parameters in Izhikevich's paper
"Simple Model of Spiking Neurons", I'm unable to replicate the
habituation that the regular spiking neuron shows. I've uploaded a
plot of my neuron activation here http://yfrog.com/3wizhipngp . I
assume I have something simple wrong with my choice of decay rate or
reset value for the recovery variable but I'm not able to figure out
what. Any ideas would be very welcome.

Thanks again for your help.
Tom

Romain Brette

Jun 18, 2009, 8:11:38 AM
to brians...@googlegroups.com
Hi Tom,
I would suggest that you choose a slower decay rate for the second
variable (smaller a), and possibly a larger reset value (for w).
I have a recent paper that you might find useful:
http://audition.ens.fr/brette/papers/TouboulBrette2008BC.html
It's on the adaptive exponential IF, but it's qualitatively very
similar. There is also a Scholarpedia article:
http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model
Romain

Romain Brette

Jun 18, 2009, 8:23:00 AM
to brians...@googlegroups.com
Ah, and maybe one other thing: your initial conditions are probably
wrong. You should let the model go to equilibrium before you start the
stimulation, or calculate the equilibrium conditions and set them at the
beginning.
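The second option can be done by hand. In the paper's usual mV/ms form with no input current, setting both derivatives to zero gives w = b*v and a quadratic in v (a plain-Python sketch, not Brian code):

```python
import math

# At equilibrium (I = 0): dw/dt = 0 gives w = b*v, and substituting into
# dv/dt = 0.04*v**2 + 5*v + 140 - w = 0 gives
#   0.04*v**2 + (5 - b)*v + 140 = 0.
b = 0.2
A, B, C = 0.04, 5.0 - b, 140.0
disc = math.sqrt(B * B - 4 * A * C)
v_rest = (-B - disc) / (2 * A)      # stable fixed point: v ~ -70 mV
v_thresh = (-B + disc) / (2 * A)    # unstable fixed point: v ~ -50 mV
print(v_rest, b * v_rest)           # resting v and w = b*v ~ -14 mV/ms
print(v_thresh)
```

The stable root matches the resting w of -14 mV/ms discussed earlier in the thread.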
Romain

toms...@riseup.net

Jun 18, 2009, 8:51:40 AM
to Brian
Romain,

thank you again for your help. I was convinced I was doing
something wrong, because I had managed to show habituation in another
simulation I wrote in C using the same parameters. I've solved the
(admittedly simple) problem and I'm posting it here in the hope that
it helps someone else.

If the activation and recovery variables of an Izhikevich neuron are
not set manually at the beginning of a simulation, they take some time
(depending on your parameters) to stabilise. Driving the neuron before
stabilisation will not produce the correct behaviour (in my case,
habituation did not occur). This is easily solved by setting the two
variables at the start of the run:


@network_operation(when='start')
def init_vars():
    if float(defaultclock.t) == 0.0:
        neur.vm = c      # set the membrane potential to the reset value
        neur.w = c * b   # set the recovery variable to its equilibrium b*vm
                         # (same b as in the Izhikevich constructor, 0.2/ms here)
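For anyone without Brian to hand, the same point can be checked with a rough plain-Python Euler sketch (not Brian code; the parameters are the regular-spiking set from Izhikevich's 2003 paper, while dt and I are illustrative choices). Starting at the equilibrium, the inter-spike intervals lengthen over the run, which is exactly the habituation (spike-frequency adaptation) that was missing with bad initial conditions:

```python
# Regular-spiking Izhikevich neuron, forward Euler (plain Python, no Brian).
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # RS parameters from Izhikevich (2003)
I = 10.0                             # constant drive (illustrative value)
dt = 0.25                            # time step in ms
v, w = -70.0, -14.0                  # equilibrium initial conditions
spike_times, t = [], 0.0
while t < 400.0:
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - w + I)
    w += dt * a * (b * v - w)
    if v >= 30.0:                    # spike peak: reset v, bump w
        spike_times.append(t)
        v, w = c, w + d
    t += dt

isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
print(isis[0] < isis[-1])            # intervals lengthen: adaptation
```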


Thanks again!
Tom


Romain Brette

Jun 19, 2009, 1:16:30 AM
to brians...@googlegroups.com
Hi Tom,

Is there a particular reason why you need to use a network operation?
Otherwise you could simply do:
neur.vm = c
neur.w = c * b
just before run().

Romain

toms...@riseup.net

Jun 19, 2009, 7:52:26 AM
to Brian
Oops, yes, you are right. I have no idea why I put that in a network
operation. All solved!
Tom
