Brian2 vs Brian 1.4.1 speed?


aditya...@gmail.com

Apr 11, 2015, 7:12:57 PM
to brians...@googlegroups.com
Hi,

I'm running an exc-inh network in Brian 1.4.1 and Brian 2 (from the master git repo, pulled about a month ago) for comparison. I found Brian 2 taking at least twice as long as Brian 1.4.1. Is this expected or not? Can someone point me to benchmarks of Brian 2 compared to Brian 1? Furthermore, if I use two NeuronGroups for the exc and inh populations (though the neurons are exactly the same) instead of one, with four different Synapses objects instead of one, then Brian 2 takes four times longer. Any pointers?

Thanks,
Aditya.

Dan Goodman

Apr 11, 2015, 9:04:18 PM
to brians...@googlegroups.com
Hi,

Thanks for getting in touch about this, it's very useful to get this sort of feedback.

There are a lot of optimisations still to be done for Brian 2: we're still focusing more on correctness than speed. That said, a factor of 2 is surprising; in my experience B2 mostly runs faster than B1. Could you post the code that shows the difference?

It may be that what is slow is the construction time for the synapses rather than the run time, since this is the part of B2 that has had the least optimisation effort (because we're still deciding on the best tradeoff between generality and efficiency). In that case, a longer run of your simulation might prove faster in B2 than B1.

The other possibility is that you're comparing the Connection object in B1 to the Synapses object in B2. Synapses is much more general and in some cases if you are running in pure Python mode it will be slower. We're actually working on this at the moment and it should be much improved in the next release.

In any case, please do let us know what you find and post your code if possible.

Dan

aditya...@gmail.com

Apr 12, 2015, 4:57:47 AM
to brians...@googlegroups.com
Hi Dan, thanks for your quick reply.

I've attached the code. It's my implementation of:
Ostojic, Srdjan. 2014. “Two Types of Asynchronous Activity in Networks of Excitatory and Inhibitory Spiking Neurons.” Nature Neuroscience 17 (4): 594–600. doi:10.1038/nn.3658
which is based on Brunel 2000.
Last I checked, this wasn't implemented in Brian. Currently I'm using it to compare across the simulators MOOSE, Brian 1.4.1, and Brian 2 dev [I wrote it for a tutorial at last year's CAMP summer school; yes, we used MOOSE and Brian!]. To compare replication, I've set dt really small, to 1e-6 s. I've kept simtime to a short 0.2 s to compare speed (it already takes ~30+ s to 'run' post construction; would you recommend longer?). [Perhaps later you could include it as an example in Brian (after I verify parameters and put in some more user comments based on http://www.biorxiv.org/content/biorxiv/early/2015/04/09/017798).]

Ok, now for the details:
There are 3 scripts: <...>_brian.py, <...>_brian2.py, and <...>_brian2_2pops_4syns.py [also another script for MOOSE]. These have a population of 800 exc and 200 inh neurons (same dynamics) with simple 'delta-fn' voltage-jump synapses from PE to PI and PI to PE. All these scripts create exactly the same network (or at least try to). I seed Python's random module and use it to generate my own connectivity, rather than relying on MOOSE or B1/B2. [I should have abstracted the common part out to an external file rather than duplicating it across scripts...]

In _brian.py for B1, I used one NeuronGroup and created PE and PI subgroups to set up two Synapses objects between the exc and inh neurons [no, I don't use Connections]. In _brian2.py for B2, I have one NeuronGroup and set up one Synapses object with different weights for the exc and inh neurons [by indexing]. In _brian2_2pops_4syns.py for B2, I have two NeuronGroups for the exc and inh neurons (though with identical dynamics), with four Synapses objects to set up the same network and same connections. This helps with using a PopulationRateMonitor on the individual NeuronGroups [subgroups were a nifty feature, as I could use various Synapses and Monitors on them; why have they been removed in B2?].

Regarding replication, I remember reading somewhere that different simulators diverge really fast in spike timings [does anyone remember this ref?]. I see the same here. I've just started playing with clocks and their order in _brian.py for this, and with the integration method in _brian2.py, so watch out for these in the scripts. Perhaps you can already tell me which method and clock order in B1 vs B2 give the same results, or do they?

Now for the timing: I'm using Python's time module to time Brian's run(), and also Brian's internal reporting. I guess the difference between the two should be the initialization time.

[I'm not sure I understand what you mean by construction time for synapses. Yes, setting individual connections to True using Synapses.connect(i,j) between specific neurons i and j (in a Python loop, i.e. outside Brian) was really a lot faster in B1 compared to B2 (maybe ~100x). But I got around this by constructing two lists of i and j values and calling .connect(ilist,jlist) just once in B2. Then the construction time for synapses was perhaps faster in B2 than the loop in B1. I haven't timed this. I'm only looking at simulation times, below.]
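For reference, the list-based pattern described above might look roughly like this (a minimal sketch; the names ilist/jlist, the seed, and the sizes are illustrative, not the actual script):

```python
import random

random.seed(100)  # fixed seed so every simulator gets identical connectivity

N_E, N_I = 800, 200   # excitatory / inhibitory population sizes
P_CONN = 0.1          # illustrative connection probability

# Build flat pre/post index lists in plain Python first...
ilist, jlist = [], []
for i in range(N_E):                 # presynaptic excitatory neurons
    for j in range(N_E, N_E + N_I):  # postsynaptic inhibitory neurons
        if random.random() < P_CONN:
            ilist.append(i)
            jlist.append(j)

# ...then hand them to Brian 2 in a single vectorized call, instead of
# one connect(i, j) call per pair inside the loop:
#     syn.connect(ilist, jlist)
```

Calling connect once with index arrays pays the per-call overhead once instead of once per synapse, which is why the vectorized form is so much faster.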

Below are my observations [do you get the same on running these scripts?]. In summary, Brian 1.4.1 takes ~15 s, Brian 2 with weave takes ~36 s, and Brian 2 with split NeuronGroups with weave takes ~77 s [in fact, I seem to recall that just splitting a NeuronGroup into two, even without Synapses, makes Brian 2 take longer; I don't remember now...]. With numpy in Brian 2, it takes slightly longer. Perhaps you can look at the scripts and tell me how to optimize / what I am doing wrong... [Aside: don't look at the smoothed rate plots in the figures that pop up; they're not really accurate at the ends, as simtime=0.2 s is too small.]

The 'inittime + runtime' line uses time.time() around Brian's run().
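That measurement is just a wall-clock bracket around run(); as a sketch (timed_run and the sleep stand-in are illustrative helpers, not Brian code):

```python
import time

def timed_run(run_fn, duration):
    """Measure the wall-clock time of a simulator's run() call.
    run_fn stands in for Brian's run(); any callable works."""
    t0 = time.time()
    run_fn(duration)
    return time.time() - t0

# Exercise the wrapper with a stand-in that just sleeps:
elapsed = timed_run(time.sleep, 0.01)
```

The difference between this wall-clock figure and Brian's own reported run time then approximates the initialization/construction cost.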

<...>_brian.py for Brian1.4.1

Setup complete, running for 0.2 s at dt = 1e-06 s.

65% complete, 10s elapsed, approximately 5s remaining.

100% complete, 15s elapsed, approximately 0s remaining.

inittime + runtime, t =  15.4371991158


<...>_brian2.py for Brian2, 1 NeuronGroup, 1 Synapses object:

prefs.codegen.target='weave'

Setup complete, running for 200. ms at dt = 1e-06 s.

Starting simulation for duration 200. ms

56.352 ms (28%) simulated in 10s, estimated 25s remaining.

111.338 ms (55%) simulated in 20s, estimated 16s remaining.

165.014 ms (82%) simulated in 30s, estimated 6s remaining.

200. ms (100%) simulated in 36s

inittime + runtime, t =  36.9479730129


<...>_brian2_2pops_4syns.py for Brian2, 2 NeuronGroups, 4 Synapses objects, but the same network (do confirm this):

prefs.codegen.target='weave'

Setup complete, running for 200. ms at dt = 1e-06 s.

Starting simulation for duration 200. ms

24.911 ms (12%) simulated in 10s, estimated 1m 10s remaining.

50.247 ms (25%) simulated in 20s, estimated 1m 0s remaining.

75.961 ms (37%) simulated in 30s, estimated 49s remaining.

102.003 ms (51%) simulated in 40s, estimated 38s remaining.

128.25 ms (64%) simulated in 50s, estimated 28s remaining.

154.507 ms (77%) simulated in 1m 0s, estimated 18s remaining.

180.256 ms (90%) simulated in 1m 10s, estimated 8s remaining.

200. ms (100%) simulated in 1m 17s

inittime + runtime, t =  81.9536440372


Well, this was a detailed mail! I look forward to your / others' comments.

Best,
Aditya.
ExcInhNet_Ostojic2014_Brunel2000_brian.py
ExcInhNet_Ostojic2014_Brunel2000_brian2.py
ExcInhNet_Ostojic2014_Brunel2000_brian2_slow_2pops_4syns.py

Dan Goodman

Apr 12, 2015, 7:58:37 AM
to brians...@googlegroups.com
Hi,

I'm looking into this, but you seem to be right that there's a real problem here.

Part of the problem was the monitors, which were not set up exactly the same and may well be slower in Brian 2 because we haven't spent much time optimising them. Removing those, I get times of 24.7s on B1 and 30.9s on B2, which is closer. But that conceals the fact that B2 is using compiled code in cases where B1 is not, so B2 ought to be faster.

B2 is producing slightly more spikes (2015 instead of 1933) which might
account for some difference but not enough to explain it.

The big difference in implementation is that your B1 version uses constants in the Synapses pre expressions, whereas in B2 you create a synaptic variable, which means that for each spike a whole bunch of memory has to be read. This might explain the difference. I'll check that later or tomorrow if you don't report back.

Subgroups do exist in B2, by the way; it's just the syntax that has changed. You now write subG = G[start:end] for a subgroup.

Will reply more later/tomorrow, got to go out now. One last thing: yes, in complex examples it is expected that different simulators, or even tiny changes in parameters, give very divergent spike times. There's not much you can do about this. Will say more later.

Dan


Dan Goodman

Apr 12, 2015, 5:03:24 PM
to brians...@googlegroups.com
Just another short update (no solution yet). I modified your B1 script to run on B2; switching different things on and off, and using the new profiling option in B2, I get these numbers:

B1 neurons: 9.5 s
B1 both: 18.6 s
=> synapses add 9.1 s

B2 neurons: 12.6 s
B2 both: 29.7 s
=> synapses add 17.1 s

Profiling summary (for B2)
=================
synapses_1_pre 7.58 s 29.86 %
synapses_pre 7.51 s 29.58 %
neurongroup_stateupdater 4.60 s 18.12 %
neurongroup_thresholder 2.97 s 11.68 %
neurongroup_resetter 2.06 s 8.11 %

=> neurons = 9.6 s
=> synapses = 15.1 s

So this indeed suggests that neurons are taking roughly similar times in
B1 and B2, with the caveat that maybe B2 has slightly more Python
overhead than B1. This is quite a small network (1k neurons) so this
overhead will be relatively large.
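A toy cost model makes the scaling argument concrete (the numbers below are made up for illustration, not measurements from Brian):

```python
def step_time(n_neurons, overhead=50e-6, per_neuron=5e-9):
    """Hypothetical wall-clock cost of one timestep: a fixed
    per-timestep Python overhead plus compiled per-neuron work."""
    return overhead + n_neurons * per_neuron

# Fraction of each timestep spent on the fixed overhead:
small = 50e-6 / step_time(1000)     # ~0.91: overhead dominates at 1k neurons
large = 50e-6 / step_time(100000)   # ~0.09: compiled work dominates at 100k
```

With these (assumed) magnitudes, the fixed Python cost swamps a 1k-neuron simulation but becomes negligible as the network grows, which is the pattern reported later in the thread.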

However synapses are taking twice as long in B2 as in B1, as you
originally reported.

We'll look into this and let you know what we find. Marcel - I'll take a
look at this, but do you want to have a think about it too?

Dan

Dan Goodman

Apr 12, 2015, 5:10:05 PM
to brians...@googlegroups.com
Just one other thing to add. If you run in the new C++ standalone mode
in Brian 2 you get much better results. If you include the time to build
and compile the code, it takes around 15s (less than Brian 1 already),
but almost all of that is the time for building and compiling the code.
The actual run time of the compiled code is 1.9s. So for longer
simulations, Brian 2 in C++ standalone mode will do much, much better
than either B1 or B2 in runtime mode.

Dan

Aditya Gilra

Apr 12, 2015, 5:24:08 PM
to brians...@googlegroups.com
Thanks Dan!

I didn't expect such a huge speed-up in going to standalone mode; I was expecting more like 2x.

Though when I put in the two statements as per:

I get the error below:
brianlib/network.cpp: In member function ‘void Network::run(double)’:
brianlib/network.cpp:33:21: error: ‘class Clock’ has no member named ‘t’
brianlib/network.cpp: In member function ‘Clock* Network::next_clocks()’:
brianlib/network.cpp:66:13: error: ‘class Clock’ has no member named ‘t’
brianlib/network.cpp:66:27: error: ‘class Clock’ has no member named ‘t’
brianlib/network.cpp:71:23: error: ‘class Clock’ has no member named ‘t’
brianlib/network.cpp:75:21: error: ‘class Clock’ has no member named ‘t’
make: *** [brianlib/network.o] Error 1

I'm attaching the script; it's my B2 script.

Thanks,
Aditya.


ExcInhNet_Ostojic2014_Brunel2000_brian2.py

Dan Goodman

Apr 13, 2015, 3:12:55 PM
to brians...@googlegroups.com
Hi Aditya,

For standalone mode, it should work, I think. I didn't check your code, but with mine it works. Maybe update to the latest version of Brian 2?

I'm still investigating the speed issue, but it's proving curiously
difficult to track down. My current feeling is that it is a combination
of some sort of inefficiency in the spike queue with a generally higher
level of Python overhead in Brian 2. I'm not entirely happy with this
conclusion though. It's not the actual code that gets run for each
synapse because this takes close to zero time to execute for both Brian
1 and Brian 2 (which explains why the standalone version is so much
faster). It can't be the C code for the spikequeue though, because the
same C code is used in standalone.

Marcel, do you think it's possible that the difference is entirely in
the amount of Python overhead that Brian 2 has compared to Brian 1?

Dan

Dan Goodman

Apr 13, 2015, 3:40:06 PM
to brians...@googlegroups.com
OK, with a bit more investigation, I think it's what I suggested in the previous email: Brian 2 simply has a lot more Python overhead than Brian 1. If you scale the simulation up from 1k neurons to 10k or 100k, you see that Brian 2 starts to perform much better than Brian 1. Here are the numbers (note that I reduced the simtime by the same factor that I increased the number of neurons):

10k neurons (1M synapses)

B1: 6.4s
B2: 7.5s (6.5 with weave, 5.6 with weave+compiled spikequeue)

100k neurons (10M synapses)

B1: 21.1s
B2: 16.7s (15.0s with weave, 4.0s with weave+compiled spikequeue, 1.8s
with standalone)

So as you can see, once you increase the network size, Brian 2 starts to
do better than Brian 1, and if you throw in the compiled code options
then Brian 2 in runtime mode with weave performs almost as well as
standalone for the largest run.

So I think the conclusion from this is that we could probably do some
work reducing the Python overhead in Brian 2, but that the problem isn't
as serious as I thought because it goes away for larger simulations.

Dan

Dan Goodman

Apr 13, 2015, 8:06:45 PM
to brians...@googlegroups.com
Probably final update on this:

I just had a go at reducing the Python overhead of Brian 2 and I managed
to get the Brian 2 time down from 33s to 19s for 1000 neurons (compared
to 17s in Brian 1). I had to completely break the generality of Brian 2
to get this speed up, so it won't immediately go into the main code
base, but it suggests that there are things we can do to alleviate these
problems.

If you want to follow progress on this you can subscribe to this issue:

https://github.com/brian-team/brian2/issues/447

Dan

Rainer Engelken

Apr 14, 2015, 1:46:03 PM
to brians...@googlegroups.com, aditya...@gmail.com
Hi Aditya,
by chance I just saw your mail on the Brian list. We implemented the Ostojic 2014 paper in Brian 1 last summer; for the spiking networks in Figures 1a and 1d of the comment, I used numerically exact event-based simulations (pulse-coupled LIF networks can be solved analytically from spike time to spike time). Figures 1b and 1c were done by Farzad Farkhooi (shared first author) using NEST. I don't know how fast my Brian code is compared to yours, but I'm happy to share it if you're interested; have a look at the attachment. You can run it, e.g., by typing
python spikingNetwork_Ostojic_2014.py 0.8 -2 0.5 0.55 1 10000
We will also publish the event-based spiking-network code and the code for the rate network soon. I have not looked at your code in detail, but I have two (small) corrections. First, the indegree of the network in Ostojic 2014 is fixed, so every neuron receives the same number of excitatory and inhibitory synapses. You can achieve this in Brian 1 with the keyword fixed=True,
e.g. C.connect_random(Pe, P, weight=J*mV, sparseness=epsilon, seed=Seed, fixed=True).
The other issue is that your connectivity contains autapses, so the diagonal is not empty. Maybe it's because I did something wrong.
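Both corrections can be handled at once when generating connectivity by hand in plain Python, e.g. with random.sample (a sketch with illustrative sizes, not the actual scripts from this thread):

```python
import random

random.seed(100)

N = 1000    # total neurons; indices 0..N_E-1 are excitatory
N_E = 800
C_E = 80    # fixed excitatory indegree per neuron (illustrative)

exc_inputs = []
for post in range(N):
    # sample exactly C_E distinct excitatory presynaptic partners,
    # excluding the neuron itself so the diagonal stays empty
    candidates = [pre for pre in range(N_E) if pre != post]
    exc_inputs.append(random.sample(candidates, C_E))
```

Since random.sample draws without replacement, every neuron gets exactly C_E distinct excitatory inputs and no autapses.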
Generally, I'd recommend event-based simulations in this case, because a spiking excitatory neuron can kick several postsynaptic LIF neurons over threshold at exactly the same time, so only ~10% of spike times are unique at machine precision.
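The exactness rests on the LIF equation having a closed-form solution between input events; a minimal sketch (generic parameters, not the attached script):

```python
import math

def lif_evolve(v0, dt, tau=0.02, v_rest=0.0):
    """Exact solution of dv/dt = -(v - v_rest)/tau over an interval
    with no input spikes: v decays exponentially toward v_rest."""
    return v_rest + (v0 - v_rest) * math.exp(-dt / tau)

# Because the solution is exact, evolving in two half-steps agrees
# with one full step to machine precision -- this is what lets an
# event-based scheme jump directly from one spike time to the next
# without accumulating timestep discretization error.
v_direct = lif_evolve(1.0, 0.01)
v_split = lif_evolve(lif_evolve(1.0, 0.005), 0.005)
```

At each incoming spike the membrane potential then simply jumps by the synaptic weight, so no fixed dt is needed at all.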
I'd be glad of any feedback about the comment, both on the scientific content and on the wording. There are quite a lot of results that didn't fit into the comment, which I'm happy to share on request.

Also, if you find bugs in the script, please tell me.

Looking forward to hearing back from you

Best

Rainer

--
*************************************************************
Rainer J. Engelken
Max Planck Institute for Dynamics and Self-Organization
Fassberg 17, 37077 Goettingen (Germany)
+49 551 5176 479
rai...@nld.ds.mpg.de
http://www.nld.ds.mpg.de/
*************************************************************
spikingNetwork_Ostojic_2014.py

Aditya Gilra

Apr 21, 2015, 4:41:35 AM
to brians...@googlegroups.com
Thanks a lot Dan. I've subscribed to the issue.

For me, the biggest takeaway is to use the standalone mode for now, which gives a huge speed-up (neglecting compilation time) for my other scripts too. Somehow I had expected only a 2x speed-up, but I get factors of 10x and more compared to weave/numpy... I guess this must be because Brian 2 is not yet fully optimized.

Thanks again,
Aditya.


Aditya Gilra

Apr 21, 2015, 4:55:28 AM
to Rainer Engelken, brians...@googlegroups.com
Hi Rainer,

Great to hear from you! I guess last summer was busy with people implementing Ostojic 2014; I know of two others too! I've glanced through your recent comment on bioRxiv as well, but have yet to give it a full read.

Thanks also for the corrections to my code and for sharing yours. As for speed, our Brian 1 implementations seem comparable with N=10000 and the same simtime. I'll get back to you about the scientific details directly, independently of this thread, since this one is essentially about Brian 1 vs Brian 2 speed.

Best,
Aditya.