--
You received this message because you are subscribed to the Google Groups "ahh-discuss" group.
MATLAB is not faster if the spike rate is low,
but it is hard/impossible to exactly duplicate the Izh model in SS/AHH
without writing a new algorithm for spike transfer.
The CMU web VPN is down, so I can't access the Brette et al. paper. Last
time Cyrus ran that benchmark it was faster. I probably don't need to
actually see the paper; I can just grab the old benchmark and tweak it
for any API changes.
However, there is no way to speed up multiple realizations, and Brian
was 70x slower at 1000 realizations. That should be equivalent to a
4-million-neuron network, controlled for rate. So I don't think we are in
any danger as long as you are working on sufficiently large problems.
Cyrus
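[Editor's note: a minimal Python/NumPy sketch of the equivalence Cyrus invokes, written for this thread rather than taken from the benchmark code. R independent realizations of an N-neuron network are structurally one R*N-neuron network with block-diagonal connectivity, so 1000 realizations behave like one very large net.]

```python
import numpy as np

# R independent copies of an N-neuron network, expressed as one big
# network whose weight matrix is block-diagonal: no coupling between
# replicas, so the dynamics are identical to R separate runs.
N, R = 4, 3
rng = np.random.default_rng(1)
W = rng.uniform(0.0, 1.0, (N, N))   # one realization's weights
W_big = np.kron(np.eye(R), W)       # R decoupled copies on the diagonal

print(W_big.shape)                  # (12, 12)
print((W_big[:N, N:2 * N] == 0).all())  # cross-replica blocks are zero
```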
On Wednesday, July 7, 2010, Rick Gerkin <rge...@gmail.com> wrote:
> Looking at the slopes of time vs. size, the asymptotic speedup is about 6.57x (8.53x GPU only), although it looks like maybe MATLAB failed for the larger sims? Now, what exactly was the comparison? On what machine was each running, and was MATLAB using Jacket, .mex files, or anything like that?
>
> I think with similar numbers of spikes, the comparison is fair, but it might be good to check in one of the smaller sims how these spikes are distributed. For example, the more they are concentrated in a few neurons, the greater the bottleneck, right?
>
> Whatever the speedup for medium-size single instances, if MATLAB chokes on large networks and doesn't parallelize replicas, then the two main selling points are still there. Comparing one instance of a small/medium network is really the least generous comparison from the SS perspective, and you still get ~7x, although I'm not sure which MATLAB optimizations could have been applied or how easy they would be to implement.
>
> Rick
>
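[Editor's note: Rick's slope-based estimate can be illustrated with made-up numbers. The timings below are hypothetical, chosen only to show the method, not the real benchmark data: fit time vs. size for each simulator and take the ratio of the fitted slopes.]

```python
import numpy as np

# Hypothetical, linear-looking timings (NOT the real benchmark data),
# just to illustrate estimating asymptotic speedup from fitted slopes.
sizes = np.array([1000.0, 2000.0, 4000.0, 8000.0])
t_matlab = 0.0657 * sizes + 0.5   # seconds (made up)
t_ss = 0.0100 * sizes + 0.2       # seconds (made up)

slope = lambda y: np.polyfit(sizes, y, 1)[0]  # least-squares slope
speedup = slope(t_matlab) / slope(t_ss)
print(round(speedup, 2))          # 6.57
```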
> On Wed, Jul 7, 2010 at 2:12 AM, Michael Rule <mrul...@gmail.com> wrote:
>
> http://dl.dropbox.com/u/4345112/bdata.ods
> benchmark data comparing against Izhikevich's demo model.
> -- SS uses 50% connectivity, but Izh uses all-to-all connectivity with weights uniform random in [0,1] -- I didn't scale total synaptic input; I just realized this is a problem. Scaling up either model quickly makes them spike way too much (way, way too much... 1000 Hz?). BUT... at least the SS and MATLAB behaviors are similar.
>
>
> I guess, TODO: redo this with scaling to preserve realistic rates; redo the Brette et al. benchmark; benchmark by scaling copies/replicas rather than net size.
>
> --mrule
> On Tue, Jul 6, 2010 at 3:32 PM, Rick Gerkin <rge...@gmail.com> wrote:
>
> update?
>
> On Fri, Jul 2, 2010 at 1:08 PM, Michael Rule <mrul...@gmail.com> wrote:
>
> See my spike condition... it's v >= 3 (mV), so I'm getting too many spikes.
If you look at the number of spikes, both the MATLAB and SS networks are behaving pathologically, with essentially a spike every other timestep for all neurons. So the spikes are similarly distributed.
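[Editor's note: to make the spike-condition point concrete, a single-neuron toy sketch written for this thread, using standard regular-spiking Izhikevich parameters and forward Euler. Cutting the detection threshold down from the usual 30 mV to 3 mV ends each spike earlier, so the same neuron registers at least as many spikes in the same window.]

```python
def izh_spike_count(threshold_mv, t_ms=1000.0, i_ext=10.0, dt=0.5):
    """Forward-Euler Izhikevich neuron; counts threshold crossings."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
    v, u = -65.0, b * -65.0
    spikes = 0
    for _ in range(int(t_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= threshold_mv:            # spike detected: reset
            v, u = c, u + d
            spikes += 1
    return spikes

low, high = izh_spike_count(3.0), izh_spike_count(30.0)
print(low, high)   # the lower threshold fires at least as often
```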
I should note that achieving a speedup only in terms of being able to run several networks at once isn't really an achievement, since this can be done on a cluster with a simple bash script, and frankly, for MATLAB users that is probably the easier way to go.
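[Editor's note: the "just parallelize the replicas" alternative can be sketched in a few lines. Python here rather than bash, and `run_realization` is a stand-in for whatever actually launches one simulator run, e.g. a subprocess invoking MATLAB in batch mode; threads suffice because the real work would happen in separate OS processes.]

```python
from concurrent.futures import ThreadPoolExecutor

def run_realization(seed):
    # Stand-in for launching one external simulator run with this
    # seed; replace with a subprocess call to the real simulator.
    return f"realization seed={seed} done"

# Launch four independent realizations concurrently, one per seed.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_realization, range(4)))
for line in results:
    print(line)
```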
Plus, price of NVIDIA card(s) << price of cluster, and admin hassles of cards << admin hassles of cluster (getting CUDA to work on any given machine notwithstanding).

Last night I had a dream that Nathan had a computer lying around where the GPU was fused to the CPU (not like it is in real projects, but like it would be in a stupid dream). I was telling him we could use this machine to increase the speed and volume of our probes written back to main memory. Given that you guys think about this stuff 10x as much as I do, I can only imagine what kind of dreams you've been having.
--
The Izh model only behaves like that if you scale it up without normalizing total synaptic input (divide by sqrt(network size) or something)? The default model just oscillates.
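[Editor's note: a hedged Python/NumPy illustration of the normalization being suggested, written for this thread. With 1/sqrt(N) scaling as mooted above, total drive per neuron grows like sqrt(N) instead of N; dividing by N instead would keep the mean input per neuron constant.]

```python
import numpy as np

def demo_weights(n, normalize=True, seed=0):
    """All-to-all uniform [0,1] weights, as in the Izhikevich demo,
    optionally scaled by 1/sqrt(n) so total synaptic drive per neuron
    grows like sqrt(n) instead of n as the network is scaled up."""
    w = np.random.default_rng(seed).uniform(0.0, 1.0, (n, n))
    return w / np.sqrt(n) if normalize else w

# Mean total input per neuron: ~n/2 raw, ~sqrt(n)/2 normalized.
for n in (100, 400):
    raw = demo_weights(n, normalize=False).sum(axis=1).mean()
    scaled = demo_weights(n).sum(axis=1).mean()
    print(n, round(raw, 1), round(scaled, 1))
```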