Wigner Clenshaw for high densities


verstrae...@gmail.com

Oct 26, 2017, 12:46:46 PM
to QuTiP: Quantum Toolbox in Python
Hi guys, many thanks for the useful work.

I've been using your Wigner_Clenshaw algorithm for a while now with success.

However, I'm now reaching a regime where the particle numbers are on the order of thousands. The density matrices become very large, and it takes ages to calculate their Wigner function.

Since I know that the probability amplitudes for the lower particle numbers are very small, I'm looking for a way to chop them off the density matrix so that they can be neglected in the Wigner function.
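
To put a number on "very small", the kind of cutoff I have in mind is chosen so that the total population below it is negligible, e.g. (the Poissonian populations and the 1e-12 tolerance are just placeholders for my actual state and accuracy target):

import numpy as np
from scipy.stats import poisson

# Stand-in for the Fock populations of my actual state: a Poissonian
# distribution with mean photon number nbar (a coherent state's photon
# statistics). In practice this would be np.real(np.diag(rho.full())).
M = 3000
nbar = 2000.0
pops = poisson.pmf(np.arange(M), nbar)

# Largest cutoff Nminconsidered such that the total population of the
# Fock states 0 .. Nminconsidered-1 stays below a tolerance of 1e-12.
cum = np.cumsum(pops)
Nminconsidered = int(np.searchsorted(cum, 1e-12))
print(Nminconsidered)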



One thing that does seem to work is replacing the initial index for the diagonal, L = M - 1, by L = M - Nminconsidered, which throws away the diagonals that do not contain high-particle-number elements.
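
Concretely, what I did amounts to something like the following standalone sketch (rewritten from memory rather than copied from qutip/wigner.py, so the details may differ from the actual library code; in particular the name and (L, x, c) signature of the private helper _wig_laguerre_val are an assumption based on the version I have been reading):

import numpy as np
from qutip.wigner import _wig_laguerre_val   # private helper; name may differ between versions

def wigner_clenshaw_highn(rho, xvec, yvec, Nminconsidered, g=np.sqrt(2)):
    """Clenshaw-style Wigner function that starts the downward recursion
    over the diagonals at L = M - Nminconsidered instead of L = M - 1.
    Schematic reconstruction of qutip's _wigner_clenshaw with my cutoff
    inserted, not the exact library code. rho is a dense M x M ndarray
    (e.g. rho_qobj.full())."""
    M = rho.shape[0]
    X, Y = np.meshgrid(xvec, yvec)
    A2 = g * (X + 1j * Y)              # 2*alpha on the phase-space grid
    B = np.abs(A2)**2
    rho = rho * (2 - np.eye(M))        # double the off-diagonal part
    W = np.zeros_like(A2)
    L = M - Nminconsidered             # <-- instead of L = M - 1
    while L > 0:
        L -= 1
        W = _wig_laguerre_val(L, B, np.diag(rho, L)) + W * A2 * (L + 1)**-0.5
    return W.real * np.exp(-0.5 * B) * (0.5 * g * g / np.pi)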

But even within the diagonals that remain, only the high-particle-number elements are really important, so I believe there should be a way to take that into account as well.
Within wig_laguerre_val I tried changing 
for i in range(3, len(c) + 1):
 into 
for i in range(3, len(c)-Nminconsidered + 2):

but that one doesn't seem to work.

Any ideas on how to do this properly?

Thanks,
Wouter

Wouter Verstraelen

Oct 27, 2017, 6:48:42 AM
to QuTiP: Quantum Toolbox in Python
I managed to make it work formally, by reverse-engineering the algorithm and explicitly writing the output as a sum of two weighted LaguerreL(n, L, x) functions times the y-outputs for n0 = Nminconsidered - 1 and n1 = Nminconsidered.
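
In case it is useful to anyone else: here is a small self-contained check of that identity, written with ordinary generalized Laguerre polynomials instead of the normalized Laguerre functions used inside wig_laguerre_val, so the recurrence coefficients below are not QuTiP's; only the structure of the argument is the same. Stopping the downward Clenshaw recursion at n = Nminconsidered and closing it with the two polynomials of orders Nminconsidered and Nminconsidered - 1, weighted by the last two accumulators, reproduces the tail of the series:

import numpy as np
from scipy.special import eval_genlaguerre

rng = np.random.default_rng(0)
N = 60            # length of the toy coefficient series
m = 25            # plays the role of Nminconsidered
a = 3             # plays the role of the diagonal offset L
x = 1.7
c = rng.standard_normal(N + 1)

# Three-term recurrence L_{k+1} = alpha_k(x) L_k + beta_k L_{k-1}
# for the generalized Laguerre polynomials L_k^(a)(x)
def alpha(k, x):
    return (2 * k + a + 1 - x) / (k + 1)

def beta(k):
    return -(k + a) / (k + 1)

# Downward Clenshaw recursion, stopped at n = m instead of running to n = 0
b = np.zeros(N + 3)
for k in range(N, m - 1, -1):
    b[k] = c[k] + alpha(k, x) * b[k + 1] + beta(k + 1) * b[k + 2]

# Close the recursion with the polynomials of order m and m - 1,
# weighted by the last two Clenshaw accumulators (the "y-outputs")
tail_clenshaw = (b[m] * eval_genlaguerre(m, a, x)
                 + beta(m) * b[m + 1] * eval_genlaguerre(m - 1, a, x))

# Direct evaluation of the same tail of the series, for comparison
tail_direct = sum(c[k] * eval_genlaguerre(k, a, x) for k in range(m, N + 1))

print(tail_clenshaw, tail_direct)   # agree to rounding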

However, when Nminconsidered becomes larger than, say, 20, evaluating the Laguerre functions themselves, as well as their prefactors containing factorials, runs into problems with both performance and numerical stability.
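
A possible way around the factorial overflow (untested on my actual problem, and assuming the prefactors are square roots of factorial ratios) would be to compute them in log space with scipy's gammaln:

import numpy as np
from scipy.special import gammaln, eval_genlaguerre

def laguerre_weight(n, L):
    # sqrt(n! L! / (n + L)!) evaluated in log space, so it does not
    # overflow even for n, L in the thousands. The exact form of this
    # ratio is an assumption about what the prefactor looks like.
    return np.exp(0.5 * (gammaln(n + 1) + gammaln(L + 1) - gammaln(n + L + 1)))

# Example: a weighted Laguerre function at an index where a naive
# factorial-based prefactor would have overflowed long ago.
n, L, x = 2000, 5, 3.0
print(laguerre_weight(n, L) * eval_genlaguerre(n, L, x))

Since eval_genlaguerre broadcasts over arrays, the two closing Laguerre functions could then be evaluated on the whole phase-space grid in a single call.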