Re-Use of info, revisited


Eric Carlson

Feb 26, 2011, 10:25:20 AM
to pyamg-user
Hello,
I just wanted to share a couple of things. I have played around a bit
with reusing information between repeated solves, where only small
changes take place between iterations. In particular, I update only the
finest-level (levels[0]) matrix data, using:

"""
# imports done once, outside the loop
from scipy import sparse
import pyamg

### Doing this over many iterations
A = sparse.spdiags(DiagVecs, DiagIndx, N, N, format='csr')
if kount == max_reuse_ml:
    # rebuild the full smoothed_aggregation_solver hierarchy
    ml = pyamg.smoothed_aggregation_solver(A, mat_flag='symmetric',
                                           strength='symmetric')
    kount = 0
else:
    # keep the existing hierarchy; only swap in the updated matrix entries
    ml.levels[0].A.data = A.data
    kount += 1
b = q + Qw_cell/rhow + Qo_cell/rhoo
dp = ml.solve(b, x0=dp, tol=1e-08, accel='cg')
### Do rest of stuff in loop
"""


I profiled this for 31 sequential solves. For the two cases
(max_reuse_ml==8, max_reuse_ml==0), the total times in the solver were
(91 sec, 98 sec) for virtually identical solutions. For this example the
improvement is modest (every little bit helps though!), but I have cases
where I get a 45% reduction using this strategy. There is an optimum for
max_reuse_ml, since the time saved by not rebuilding ml at every step is
offset by additional CG iterations. A closer look at the profiles shows
the Gauss-Seidel and matrix-vector times:

_amg_core.gauss_seidel: (26.4 sec, 24.5 sec)
scipy.sparse.sparsetools._csr.csr_matvec: (20.0 sec, 19.7 sec)
_amg_core.bsr_gauss_seidel: (18.9 sec, 17.1 sec)
scipy.sparse.sparsetools._bsr.bsr_matvec: (9.8 sec, 9.2 sec)

Subtotals in GS or mat-vec: (75.1 sec, 70.5 sec), i.e., (83%, 72%) of the
respective total solve times.

My question: what are the chances that the GS or matrix-vector routines
could be parallelized?

Cheers,
Eric


Jacob Schroder

Feb 28, 2011, 1:46:59 PM
to pyamg...@googlegroups.com
Hi Eric,

Thanks for the info. We're glad that PyAMG is useful for your
problems. Out of curiosity, what sort of problems are you solving
with PyAMG? With respect to parallelism, we have had some discussions
on PyAMG with multicore and/or GPUs, but there are no concrete plans
as of now. In particular, sparse matrix-vector products don't get
much speedup on today's multicore architectures.

Keep us posted,

Jacob


Beth Carlson

Feb 28, 2011, 8:00:08 PM
to pyamg...@googlegroups.com
I am using PyAMG for petroleum reservoir simulation (or simulation of geologic sequestration of CO2). For now, it is easiest to say I am using PyAMG to solve the (unsteady-state) anisotropic diffusion equation for pressures, with spatially and time-varying diffusion coefficients. Locally, the time variation can be drastic, but globally the changes between time steps are not huge, which explains why I can get away with reusing the AMG structures. I am solving this, for now, on a 3D structured grid, but using an unstructured grid poses no problems.
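Schematically, the equation I solve each time step is of the generic anisotropic diffusion form (this is only a sketch, not the exact discretized system in my simulator):

"""
\frac{\partial p}{\partial t}
  = \nabla \cdot \bigl( K(\mathbf{x}, t)\, \nabla p \bigr) + q(\mathbf{x}, t)
"""

where K is a spatially and temporally varying diffusion tensor and q collects the well source/sink terms.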

So far, the serial solution times with PyAMG scale linearly with problem size, from tens of thousands up to 10 million grid cells; at 10 million cells my average solve time is a very respectable 50 seconds. Overall, the linear solve for pressures poses the greatest hurdle to scaling on many cores.

While scouring the internet for anything relevant, I ran across this reference from AMD


which gives me some hope that improvement might be possible, though clearly with some difficulty.

Many thanks to the PyAMG crew for a wonderful tool...

BTW, are there any release notes for 2.0?

Cheers,
Eric

Jacob Schroder

Mar 2, 2011, 12:45:56 PM
to pyamg...@googlegroups.com
Hi Eric,

Thanks for the problem info and the reference. Also, there will be
some release notes coming.

If the problem is anisotropic, you may find some benefit by playing
with the strength-of-connection measure, e.g.,
strength=('symmetric', {'theta' : 0.1})
where the optimal theta is usually in [0.1, 0.5] for anisotropic
problems. Alternatively, the more advanced strength measure
strength='evolution'
will increase your setup cost, but can lead to lower iteration counts.
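For example, passing the strength option looks something like this (just an untested sketch, with a toy Poisson matrix from pyamg.gallery standing in for your pressure matrix):

"""
import pyamg

# stand-in matrix; use your own assembled operator here
A = pyamg.gallery.poisson((100, 100), format='csr')

# classical symmetric strength measure with a smaller drop tolerance
ml = pyamg.smoothed_aggregation_solver(A, strength=('symmetric', {'theta': 0.1}))

# or the more expensive evolution measure
ml = pyamg.smoothed_aggregation_solver(A, strength='evolution')
"""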

The prolongation smoother can also be tuned for anisotropic problems.
I find that
smooth = ('jacobi', {'filter' : True, 'degree' : 2, 'weighting' : 'local'})
usually works well.
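Putting the two suggestions together would look roughly like the following (again only a sketch with stand-in data; the keyword values are the ones above, everything else would be your existing code):

"""
import numpy as np
import pyamg

A = pyamg.gallery.poisson((100, 100), format='csr')  # stand-in matrix
b = np.random.rand(A.shape[0])                       # stand-in right-hand side

ml = pyamg.smoothed_aggregation_solver(
         A,
         strength=('symmetric', {'theta': 0.1}),
         smooth=('jacobi', {'filter': True, 'degree': 2,
                            'weighting': 'local'}))
x = ml.solve(b, tol=1e-08, accel='cg')
"""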

As you can tell, multigrid solvers can be tweaked in many, many ways,
but I thought I'd mention a few of the more common options.

Regards,

Jacob

Jacob Schroder

Mar 2, 2011, 11:17:49 PM
to pyamg...@googlegroups.com
Hi Eric,

Luke just posted the release notes at
http://code.google.com/p/pyamg/wiki/ReleaseNotesv20.

Jacob

