Hi Denis,

Thank you so much for taking the time to review this. You are absolutely right in your comments about the code sequence for a real transient / matrix-changing implementation. My intention with this code snippet was just to factorize the matrix once, and to see whether the associated preconditioner was good enough to get solutions with slightly different matrices afterwards.

I think the first matrix is important because it carries the main physics of conservation: for internal nodes the sum of the entries on a row/col is zero. In your demonstration it is clear that the preconditioner from this matrix is not good enough, because it takes 33 iters to converge when the first matrix perturbation is used. When the second preconditioner is created from the new perturbed matrix, convergence is good, but this new preconditioner is based on the "more diagonally dominant" matrix, which in this case represents the subsequent perturbations. In general we won't know the nature of the perturbations, so the starting matrix is all that we know.

In my real application the size of the system increases, so the perturbation is the appearance of new rows/cols associated with the additional dofs (~0.02 percent of the total dofs per iteration). In this case, to create a preconditioner that could be reused, I extend the matrices with a "unit" diagonal representing the "extra" dofs that could be used later on, hoping the preconditioner would still be good enough to converge (a sketch of this padding is below). But I am finding that this preconditioner does not lead to quick convergence, similarly to what we observed in your example with the original matrix and a perturbed diagonal. I guess I may be asking too much when expecting the AMG preconditioner to also be good for "extended" matrices.

Using shared memory, the AMGCL code is much faster than the PETSc approach I was using for large systems: AMGCL scales almost linearly with the number of dofs, whereas PETSc grows much faster than linearly. But because PETSc has a non-linear solver that offers a good speedup when the system only changes a little, I thought that perhaps AMGCL could be used in a similar fashion.
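Just to be precise about the padding I mean, here is a minimal sketch (pad_with_identity is only an illustrative name; ptr/col/val are the usual CRS arrays with n+1 row pointers):

    #include <cstddef>
    #include <vector>

    // Pad a CRS matrix with n rows up to N rows by appending identity
    // rows (a single unit diagonal entry each) and zeros to the rhs,
    // so the padded dofs simply stay at zero.
    void pad_with_identity(std::size_t n, std::size_t N,
            std::vector<std::ptrdiff_t> &ptr,
            std::vector<std::ptrdiff_t> &col,
            std::vector<double> &val,
            std::vector<double> &rhs)
    {
        for (std::size_t i = n; i < N; ++i) {
            col.push_back(static_cast<std::ptrdiff_t>(i)); // diagonal entry
            val.push_back(1.0);                            // unit coefficient
            ptr.push_back(static_cast<std::ptrdiff_t>(col.size()));
            rhs.push_back(0.0);                            // => x[i] = 0
        }
    }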
NOTE: also, because in my case an L2 relative error of ~0.01 is enough, just 1 or 2 iters of AMGCL suffice most of the time, so I thought that reusing the preconditioner could lead to a 4-5 fold gain.
I have sequences of ~20 matrices/rhs that I saved from PETSc (some sequences are small, ~10k dofs, others large, ~5 million dofs), and a small code to read them and get results with AMGCL. I can send them if you would like to review them, in case you think there is an algorithm that could work for transient non-linear cases.
Please let me know if you can think of any other approach to handle matrix changes that only affect a very small fraction of the matrix fill/coefficients. I thought that perhaps moving the delta matrix to the RHS would work: (A + dA) x = f => A x(i) = f - dA x(i-1), and iterate, but it is unstable.
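For reference, this is the iteration I tried, in sketch form (solve wraps the solver/preconditioner built once for the unperturbed A; dptr/dcol/dval are illustrative names for dA in CRS form). As far as I understand, it can only converge when the iteration matrix inv(A)*dA has spectral radius below one, which would explain the instability:

    #include <cstddef>
    #include <vector>

    // (A + dA) x = f  =>  solve A x(i) = f - dA x(i-1) repeatedly.
    template <class SolverPtr>
    void defect_iteration(const SolverPtr &solve,
            const std::vector<std::ptrdiff_t> &dptr,
            const std::vector<std::ptrdiff_t> &dcol,
            const std::vector<double> &dval,
            const std::vector<double> &f,
            std::vector<double> &x, int kmax)
    {
        std::vector<double> r(f.size());
        for (int k = 0; k < kmax; ++k) {
            for (std::size_t i = 0; i < f.size(); ++i) {
                r[i] = f[i];
                for (auto j = dptr[i]; j < dptr[i+1]; ++j)
                    r[i] -= dval[j] * x[dcol[j]]; // r = f - dA * x
            }
            (*solve)(r, x); // x is both the initial guess and the result
        }
    }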
Hi Denis,
I created a binary matrix following your code mm2bin.cpp, and for the rhs I just guessed it would be something like:
precondition(io::write(fr, nrows), "File I/O error.");
precondition(io::write(fr, rhs), "File I/O error.");
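In full, my guess looks like this (write_rhs_bin is just an illustrative wrapper; I am not sure whether the dense binary format also expects a column count before the data):

    #include <fstream>
    #include <vector>
    #include <amgcl/io/binary.hpp>
    #include <amgcl/util.hpp>

    // Write the rhs as: row count, then the raw vector data.
    void write_rhs_bin(const char *fname, const std::vector<double> &rhs) {
        std::ofstream fr(fname, std::ios::binary);
        std::size_t nrows = rhs.size();
        amgcl::precondition(amgcl::io::write(fr, nrows), "File I/O error.");
        amgcl::precondition(amgcl::io::write(fr, rhs),   "File I/O error.");
    }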
The attached 7zip file has one mat/rhs set to verify if the format is correct. Please, would you test:
./solver --binary --matrix mat.bin --rhs rhs.bin -p solver.tol=1e-2
The solution should be approximately like this:
x[0]=55.324976
x[1]=55.203986
x[2]=52.842242
x[3]=46.848367
.................................
x[2875]=-0.300335
x[2876]=-0.639552
x[2877]=-0.639473
I wanted to test it myself, but my compiled version of "solver" throws an exception whenever I try to use external matrices, even with the example from the tutorial:
./solver -A poisson3Db.mtx -f poisson3Db_b.mtx precond.relax.type=chebyshev
The same happens with mm2bin, etc. The funny part is that the other AMGCL code that I also compiled with VS works like a charm! I am attaching the code, in case you want to see the PETSc binary format, but it has many Windows dependencies.
I did not create MTX files because on Windows VS refused to compile this line (the template could not be resolved):
amgcl::io::mm_write("b.mtx", rhs.data(), n, 1);
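As a workaround, I suppose the vector could be written in the MatrixMarket array format by hand, along these lines (write_mtx_vector is just an illustrative name):

    #include <cstdio>
    #include <vector>

    // Minimal MatrixMarket writer for a dense column vector,
    // equivalent to mm_write(fname, rhs.data(), n, 1).
    void write_mtx_vector(const char *fname, const std::vector<double> &rhs) {
        std::FILE *f = std::fopen(fname, "w");
        std::fprintf(f, "%%%%MatrixMarket matrix array real general\n");
        std::fprintf(f, "%zu 1\n", rhs.size());
        for (double v : rhs) std::fprintf(f, "%.16e\n", v);
        std::fclose(f);
    }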
Denis, yes, increasing the matrix size from iteration to iteration happens most of the time; actually, staying constant for a few iterations like in the 634k case is rare.

When I tried to reuse the preconditioner, the matrix for the preconditioner was extended with a diagonal of "1" and rhs of "0" for approximately 0.1% of the actual size. This extended size was saved, say as NEXT, and then each subsequent matrix with a size <= NEXT was extended to NEXT in the same way. If the new matrix size was > NEXT, that was another criterion to trigger a preconditioner reset. However, in my implementation with regular bicgstab / spai0 I did not succeed in reducing the wall time.

Now I am going back to implement some details, for example computing and resetting maxiter so that we never end up doing too many iters while reusing the preconditioner. If time_setup and time_solve are the timings from the last preconditioner rebuild, we can limit the number of iters as:
iters_max_local = (time_setup + time_solve) / time_solve_per_iter + 0.5;
prm.solver.maxiter = iters_max_local; // in case the next iterations are done without a precond rebuild
But now, how can we update the values of prm in "solve", which had been built with a call like this:
solve = std::make_shared<Solver>(std::tie(nsize, ptr, col, val), prm);
Or otherwise, how can I pass prm in the call to solve:
(*solve)(std::tie(nsolve, ptr, col, val), rhs, sol);
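One way I could imagine handling this (a sketch only, assuming the preconditioner and the iterative solver can be used as separate objects, as in the AMGCL examples) is to keep the AMG hierarchy and recreate just the cheap iterative solver whenever maxiter changes:

    #include <memory>
    #include <tuple>
    #include <vector>

    #include <amgcl/backend/builtin.hpp>
    #include <amgcl/adapter/crs_tuple.hpp>
    #include <amgcl/amg.hpp>
    #include <amgcl/coarsening/smoothed_aggregation.hpp>
    #include <amgcl/relaxation/spai0.hpp>
    #include <amgcl/solver/bicgstab.hpp>

    typedef amgcl::backend::builtin<double> Backend;
    typedef amgcl::amg<Backend,
            amgcl::coarsening::smoothed_aggregation,
            amgcl::relaxation::spai0> Precond;
    typedef amgcl::solver::bicgstab<Backend> ISolver;

    // P is built once (expensive); S is rebuilt with the new maxiter
    // (cheap), so the solver part of prm effectively becomes adjustable.
    void solve_step(std::ptrdiff_t nsize,
            const std::vector<std::ptrdiff_t> &ptr,
            const std::vector<std::ptrdiff_t> &col,
            const std::vector<double> &val,
            const std::vector<double> &rhs,
            std::vector<double> &sol,
            std::shared_ptr<Precond> &P, std::size_t iters_max_local)
    {
        auto A = std::tie(nsize, ptr, col, val);
        if (!P) P = std::make_shared<Precond>(A); // reused across calls

        ISolver::params sprm;
        sprm.maxiter = iters_max_local;
        ISolver S(nsize, sprm);

        std::size_t iters; double error;
        std::tie(iters, error) = S(A, *P, rhs, sol);
    }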
Also, I have another question: how can I call the destructor of *solve? When convergence is not reached within iters_max_local, I want to delete *solve.
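(Since solve is a std::shared_ptr, I suppose something like this would destroy the current instance and build a fresh one:)

    solve.reset(); // runs ~Solver() if this was the last owner
    solve = std::make_shared<Solver>(std::tie(nsize, ptr, col, val), prm);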
Thanks in advance. Cheers