PriorityQueue for fast large network simulation


Rainer J. Engelken

Jul 1, 2016, 3:43:48 AM
to julia-users
Hi,
I am trying to implement a fast, event-based, numerically exact simulation of a large sparse spiking neural network using a priority queue. It is fast, but not fast enough. Profiling indicates that the bottleneck seems to be the dictionary operations keyindex and setindex! when changing the priority of an existing key (3rd line of function ptc! in the code below). Is there a way to avoid this? Is it possible to set up a priority queue with integer keys instead? More generally, is there a smarter data structure than a priority queue if I want to find the smallest element of a large array and change a small fraction of the array entries (say 10^2 out of 10^8) on each iteration?
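One way to set up the integer-keyed priority queue asked about here: since the keys are just neuron indices 1:n, the Dict inside PriorityQueue can be replaced by a plain position vector, so key lookup becomes O(1) array indexing. Below is a minimal sketch in current Julia syntax; IndexedMinHeap and all function names here are made up for illustration, not an existing package API:

```julia
# Hypothetical indexed binary min-heap for integer keys 1:n.
# A position vector replaces the Dict of PriorityQueue.
struct IndexedMinHeap
    prio::Vector{Float64}   # prio[i] = priority stored at heap slot i
    key::Vector{Int}        # key[i]  = integer key stored at heap slot i
    pos::Vector{Int}        # pos[k]  = heap slot currently holding key k
end

function IndexedMinHeap(prios::Vector{Float64})
    n = length(prios)
    h = IndexedMinHeap(copy(prios), collect(1:n), collect(1:n))
    for i = div(n, 2):-1:1   # O(n) heapify
        siftdown!(h, i)
    end
    h
end

function swapslots!(h, i, j)
    h.prio[i], h.prio[j] = h.prio[j], h.prio[i]
    h.key[i], h.key[j] = h.key[j], h.key[i]
    h.pos[h.key[i]], h.pos[h.key[j]] = i, j
end

function siftdown!(h, i)
    n = length(h.prio)
    while true
        l, r, small = 2i, 2i + 1, i
        l <= n && h.prio[l] < h.prio[small] && (small = l)
        r <= n && h.prio[r] < h.prio[small] && (small = r)
        small == i && return
        swapslots!(h, i, small)
        i = small
    end
end

function siftup!(h, i)
    while i > 1 && h.prio[i] < h.prio[div(i, 2)]
        swapslots!(h, i, div(i, 2))
        i = div(i, 2)
    end
end

# change the priority of key k: O(1) lookup + O(log n) restore
function setprio!(h::IndexedMinHeap, k::Int, p::Float64)
    i = h.pos[k]
    old = h.prio[i]
    h.prio[i] = p
    p < old ? siftup!(h, i) : siftdown!(h, i)
end

peekmin(h::IndexedMinHeap) = (h.key[1], h.prio[1])   # (key, priority) of min
```

With this layout, the per-spike update loop would call setprio! for each postsynaptic neuron, with no hashing or dictionary bookkeeping involved.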

Best regards
Rainer

## For those interested in the implementation: We map the membrane
potential to a phase variable phi \in [-1, inf) and solve the network
evolution of a purely inhibitory, homogeneous, pulse-coupled leaky integrate-and-fire
network analytically between network spikes. This is just a toy example; some important steps are missing.

##### code ###

function ptc!(phi, postid, phishift, w, c) # phase transition curve
    @inbounds for i in postid
        phi[i] = w*log(exp((phi[i]-phishift)/w)+c) + phishift
    end
end
 
function lifnet(n,nstep,k,j0,ratewnt,tau,seedic,seedtopo)
    iext = tau*sqrt(k)*j0*ratewnt # iext given by balance equation
    w = 1/log(1. + 1/iext) # phase velocity LIF
    c = j0/sqrt(k)/(1. + iext) # effective coupling in PTC for LIF
    phith, phireset = 1., 0. # reset and threshold for LIF
     
    srand(seedic) # initialize random number generator (Mersenne Twister)
    phi = Collections.PriorityQueue(UInt32,Float64)
    @inbounds for i = 1:n #set initial phases for all neurons  
        phi[i] = -rand()
    end
     
    spikeidx, spiketimes = Int64[], Float64[] # initialize time & spike raster
    postidx = Array{UInt32,1}(k) # indices of postsynaptic neurons
    t = phishift = 0. # initial time and initial global phase shift
    for s = 1:nstep
        j, phimax = Collections.peek(phi) # next spiking neuron
        dphi = phith+phimax-phishift   # calculate next spike time
        phishift += dphi # global backshift instead of shifting all phases
        t += dphi  # update time
        state = UInt(j+seedtopo)  # spiking neuron index is seed of rng  
        @inbounds for i = 1:k     # idea by R.Rosenbaum to reduce memory
            postidx[i], state = randfast(n,state) # get postsyn. neurons
        end
         
        ptc!(phi,postidx,phishift,w,c) # evaluate PTC
        phi[j] = phireset + phishift    # reset spiking neuron to 0
        push!(spiketimes,t) # store spiketimes
        push!(spikeidx,j) # store spiking neuron
    end
    nstep/t/n/tau*w, spikeidx, spiketimes*tau/w
end
 
function randfast(m, state::UInt) # linear congruential generator
    state = lcg(state)               # faster than Mersenne-Twister
    1 + rem(state, m) % Int, state   # but can be problematic
end
 
function lcg(oldstate) # tinyurl.com/fastrand by M. Schauer
    a = unsigned(2862933555777941757)
    b = unsigned(3037000493)
    a*oldstate + b
end
 
#n: # of neurons, k: synapses/neuron, j0: syn. strength, tau: membr. time const.
n,nstep,k,j0,ratewnt,tau,seedic,seedtopo = 10^6,10^4,10^2,1,1,.01,1,1
@time lifnet(100, 1, 10, j0, ratewnt, tau, seedic, seedtopo); #warmup
gc()
@time rate,spidx,sptimes = lifnet(n,nstep,k,j0,ratewnt,tau,seedic,seedtopo);
#using PyPlot;plot(sptimes,spidx,".k");ylabel("Neuron Index");xlabel("Time (s)")

Tim Holy

Jul 1, 2016, 5:41:48 AM
to julia...@googlegroups.com
My guess is that you could do better by doing a "batch update" of the queue,
so that you don't rebalance the heap each time.
Best,
--Tim

Rainer J. Engelken

Jul 1, 2016, 11:27:13 AM
to julia...@googlegroups.com
2016-07-01 11:41 GMT+02:00 Tim Holy <tim....@gmail.com>:
My guess is that you could do better by doing a "batch update" of the queue,
so that you don't rebalance the heap each time.

@Tim, thanks for responding; maybe I didn't get your idea. How does changing the priority of x*k keys every xth step make it faster than changing the priority of k keys every step? Both have an average cost of k*log(n) per iteration, where n is the total number of keys, right?

I tried something in this spirit, using an array for phi instead of a PriorityQueue. Instead of calling findmax(phi) on the whole array in every step, I ran TimSort every xth step on the whole (still partially sorted) array and findmax() on the first x+s sorted elements, where s is a safety margin, since some elements of phi might be pushed outside 1:x.
The PriorityQueue turned out to be faster for large sparse networks (n=10^8, k=10^2), although quite some time is spent on dictionary bookkeeping (e.g. getindex and ht_keyindex in dict.jl).
Is there a way to avoid this bookkeeping for a priority queue with integer keys?
Best
Rainer

Tim Holy

Jul 1, 2016, 12:26:50 PM
to julia...@googlegroups.com
I meant they should be updated on every step, but rather than update the
priority of each neuron one-by-one, perhaps you could build a heap out of the
to-be-updated neurons and then perform a "heap-merge."

http://link.springer.com/article/10.1007%2FBF00264229
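To give the flavor of the merge primitive (this is only an illustration, not the structure from the paper): a skew heap supports meld in O(log n) amortized time, so a small heap built from the k updated entries can be merged into the main heap in one operation. All names below are made up for the sketch:

```julia
# Hypothetical skew heap: a mergeable min-heap with O(log n) amortized meld.
mutable struct SkewNode
    val::Float64
    left::Union{SkewNode,Nothing}
    right::Union{SkewNode,Nothing}
end
SkewNode(v) = SkewNode(v, nothing, nothing)

meld(a::Nothing, b) = b
meld(a::SkewNode, b::Nothing) = a
function meld(a::SkewNode, b::SkewNode)
    a.val > b.val && ((a, b) = (b, a))        # keep the smaller root on top
    a.left, a.right = meld(a.right, b), a.left # skew step: swap children
    a
end

# remove the minimum: meld the root's two subtrees
popmin(root::SkewNode) = (root.val, meld(root.left, root.right))
```

The batch idea would then be: build a skew heap out of the k to-be-updated neurons, and meld it into the main heap instead of performing k separate decrease-key operations.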

Best,
--Tim

Rainer J. Engelken

Jul 2, 2016, 11:24:17 AM
to julia...@googlegroups.com
perhaps you could build a heap out of the
to-be-updated neurons and then perform a "heap-merge."

http://link.springer.com/article/10.1007%2FBF00264229

Thanks @Tim, a complexity of O(k + log(n)*log(k)) instead of O(k*log(n)) would indeed speed things up! I will try it when I have time.

Has anybody implemented a Fibonacci heap in Julia? A Fibonacci heap might be an efficient solution (at least in the case of a purely excitatory sparse spiking network), as its decrease-key operation takes constant amortized time O(1).
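As a related option short of a Fibonacci heap: DataStructures.jl provides a mutable binary heap addressed by integer handles, which avoids the Dict bookkeeping of PriorityQueue (decrease-key is still O(log n), though). A small usage sketch, assuming a current DataStructures.jl; handles appear to be assigned consecutively by push!, but they are captured explicitly here rather than relying on that:

```julia
# Sketch: mutable binary min-heap with integer handles (DataStructures.jl).
# Not a Fibonacci heap -- update! is O(log n) -- but no Dict is involved.
using DataStructures

h = MutableBinaryMinHeap{Float64}()
handles = [push!(h, -i/10) for i in 1:10]    # phases -0.1, -0.2, ..., -1.0

DataStructures.update!(h, handles[3], -2.0)  # change a priority via its handle
v, hd = top_with_handle(h)                   # smallest phase and its handle
```

In the network loop, each neuron's handle would play the role of its index, so updating the k postsynaptic phases needs no hashing at all.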

Best,
Rainer