--
You received this message because you are subscribed to the Google Groups "Ceres Solver" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ceres-solver...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ceres-solver/04be9e8e-b646-4b59-a105-e822f14d40a8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hi Sameer

I'm facing exactly this problem. Storing residuals and Jacobians requires too much memory, whereas storing the gradient and Hessian is feasible (basically, num_residuals_per_block >> num_parameters_per_block). You talk about a way to build the normal equations. But am I right in thinking that, in the way that Ceres is architected, you must store the residuals and Jacobians? Or is there a way to avoid this?

Thanks,
Olly
On Friday, 3 August 2018 11:40:51 UTC-7, Sameer Agarwal wrote:

We use different Jacobian storage depending on the linear solver type used by the user. For the cases where we actually construct the normal equations, it is possible to save memory by building the normal equations directly. But then there is another tradeoff, and that is threading: we can quite simply thread the evaluation of the Jacobian without contention, because rows do not interact, but with normal equations we would need to deal with per-parameter-block-pair mutexes to implement threading.

Sameer

On Fri, Aug 3, 2018 at 9:40 AM puzzlepaint <tom.s...@gmail.com> wrote:

Hi,

when using Ceres to solve a problem, I noticed that (unless I am mistaken, in which case I am sorry for asking a dumb question :) ) Ceres always seems to store the residual Jacobians as a (sparse) matrix of size [total_num_residuals x total_num_effective_parameters] (at least internal/ceres/compressed_row_jacobian_writer.cc allocates such a matrix), and this presumably can consume a lot of memory if the number of residuals is very high. Out of interest, in case that observation is correct, I am wondering why the Jacobians are always stored this way. With a normal-equations-based solver, wouldn't it often save large amounts of memory to store only the coefficients of the least-squares update equation, which would be a (potentially sparse) matrix of size [total_num_effective_parameters x total_num_effective_parameters] and a vector of size [total_num_effective_parameters x 1], and thus be independent of the number of residuals? I.e., do what e.g. DenseNormalCholeskySolver does for accumulating these coefficients, but do it already during the residual Jacobian computation, so that one never has to store all Jacobians at the same time. Or is there any benefit to storing the residual Jacobians individually?

Thanks,
Thomas
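[Editor's sketch, not Ceres code: the accumulation Thomas describes can be illustrated with NumPy. Only one block's Jacobian is alive at a time, and the accumulated H = J^T J and g = J^T r match what stacking the full Jacobian would give.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 3 residual blocks of 4 residuals each, over 5 effective parameters.
num_params = 5
blocks = [(rng.standard_normal((4, num_params)), rng.standard_normal(4))
          for _ in range(3)]

# Streaming accumulation during evaluation: never store all Jacobians at once.
H = np.zeros((num_params, num_params))   # J^T J, [num_params x num_params]
g = np.zeros(num_params)                 # J^T r, [num_params x 1]
for J_i, r_i in blocks:
    H += J_i.T @ J_i
    g += J_i.T @ r_i

# Reference: stack the full Jacobian (what storing all block Jacobians amounts to).
J = np.vstack([J_i for J_i, _ in blocks])
r = np.concatenate([r_i for _, r_i in blocks])
assert np.allclose(H, J.T @ J)
assert np.allclose(g, J.T @ r)
```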
The reason I ask is that, for my full problem size, the size of the Jacobian actually causes the member int num_nonzeros_ of BlockSparseMatrix to wrap around, so even if I wanted to run it on a machine with enough memory, I can't.
If you want to support larger problems, the SparseMatrix container needs to use larger, unsigned types for the size and index values, and explicitly check for saturation to detect overflow. Note that checking for overflow by asserting on a negative value (block_sparse_matrix.cc:79) won't work all the time; you can wrap all the way round to a positive number again.
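[Editor's sketch of the wraparound point above; to_int32 and checked_mul are illustrative helpers, not Ceres code. Simulating C int32 overflow shows why an assert-on-negative check is insufficient, and why checking before multiplying is safer.]

```python
INT32_MAX = 2**31 - 1

def to_int32(x):
    """Interpret x modulo 2^32 as a signed 32-bit integer (C overflow behavior)."""
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

# A nonzero count that overflows int32 once goes negative:
# the assert-on-negative check catches this case.
print(to_int32(3_000_000_000))   # negative

# But a count that wraps past 2^32 lands positive again:
# the assert-on-negative check misses it entirely.
print(to_int32(5_000_000_000))   # positive despite overflow

# A saturation-style pattern: check the bound BEFORE multiplying.
def checked_mul(a, b, limit=INT32_MAX):
    if a > 0 and b > 0 and a > limit // b:
        raise OverflowError("num_nonzeros would overflow")
    return a * b
```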
Sorry, not bipartite! The camera blocks are linked through residuals. But the landmarks can easily be complemented out still.
On Tuesday, 16 October 2018 16:56:24 UTC-7, Oliver Woodford wrote:

Hi Sameer (sorry for earlier typo!)

Yes, my residual blocks are 192 long, with parameter blocks of 6, 6, 6, 6 and 3. It is a bipartite problem, similar in structure to a bundle adjustment problem: there are many of the last parameter block (like landmarks) and far fewer of the first 4 blocks (like cameras), and no residual block links two landmarks, so they should be automatically complemented out (I hope).
The problem I'm struggling with (which is one of the smaller ones I want to solve) has 294 "camera" blocks, 91319 "landmark" blocks and 535007 residual blocks. This leads to a memory requirement of 192*27*535007*8 bytes = 20.7 GiB for the Jacobian alone. However, the dense Schur complement matrix should be only (294*6) x (294*6) (a mere 23.7 MiB). So as you can see, it's a large scale problem. But then, Ceres is a large scale solver :).
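[Editor's note: the arithmetic above checks out; here is the back-of-the-envelope computation spelled out.]

```python
# Memory estimate for the problem described above.
residual_block_size = 192              # residuals per residual block
params_per_block = 6 + 6 + 6 + 6 + 3   # = 27 effective parameters per residual block
num_residual_blocks = 535007
bytes_per_double = 8

jacobian_bytes = (residual_block_size * params_per_block
                  * num_residual_blocks * bytes_per_double)
print(f"Jacobian: {jacobian_bytes / 2**30:.1f} GiB")        # 20.7 GiB

# The dense Schur complement only spans the 294 camera blocks of size 6.
num_camera_params = 294 * 6
schur_bytes = num_camera_params**2 * bytes_per_double
print(f"Schur complement: {schur_bytes / 2**20:.1f} MiB")   # 23.7 MiB
```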
Hi
Oliver Woodford <oliver....@gmail.com> wrote on 17 October 2018 at 01:58:
Just a silly idea...
Would it be prohibitively expensive to compute your Hessian (H = J^T J) and gradient (g = J^T r) at every step, and to use a dense Cholesky factorization of H (H = L^T L) to create a fake Jacobian (L) and fake residual (L^-T g)? That should at least work, shouldn't it? (It would be horribly wasteful computationally, but...)
Markus
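[Editor's note: the algebra of the fake-Jacobian idea does hold; a quick NumPy check (NumPy's cholesky returns lower-triangular C with H = C C^T, so L above corresponds to C^T):]

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((50, 4))   # tall Jacobian: many residuals, few parameters
r = rng.standard_normal(50)

H = J.T @ J    # Gauss-Newton Hessian
g = J.T @ r    # gradient

# H = C C^T; use C^T as a tiny 4x4 "fake" Jacobian and C^{-1} g as fake residual.
C = np.linalg.cholesky(H)
J_fake = C.T
r_fake = np.linalg.solve(C, g)

# The fake problem reproduces the same normal equations as the full one.
assert np.allclose(J_fake.T @ J_fake, H)
assert np.allclose(J_fake.T @ r_fake, g)
```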
Sorry for the noise, I hadn't thought this through, I guess you can safely ignore that "idea".
Markus Moll <moll....@arcor.de> wrote on 17 October 2018 at 08:18:
I am looking into changing the indexing inside Ceres; it is a non-trivial change and will take some time.

Sameer
Hi Sameer,

thanks for your quick reply! I understand that changing something that is barely needed/requested is not a good investment of time.

In our case, we observed that we get better results when using very many (up to tens of millions of) data points, probably because not all of the unknowns are equally observable and it is difficult to automatically "pre-select" a good batch of data.

For now, I will probably try some "trivial block Kaczmarz" approach, that is, using different subsets of the observations for different iterations, and see whether that converges to an accurate solution (or at all).
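[Editor's sketch of the block Kaczmarz idea on its linear-algebra analogue; none of this is Ceres API. Each iteration projects the iterate onto the solution set of one subset (block) of the observation equations; for a consistent system this cycles toward the exact solution.]

```python
import numpy as np

rng = np.random.default_rng(2)

# Consistent toy system: 200 observations, 5 unknowns, known exact solution.
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

x = np.zeros(5)
block_size = 2   # underdetermined blocks: each update is a projection
for sweep in range(100):
    for start in range(0, len(b), block_size):
        A_i = A[start:start + block_size]
        b_i = b[start:start + block_size]
        # Minimum-norm correction satisfying this block's equations.
        x += np.linalg.lstsq(A_i, b_i - A_i @ x, rcond=None)[0]

assert np.allclose(x, x_true, atol=1e-6)
```

In the nonlinear setting the analogue would be running solver iterations against different subsets of residual blocks, which is only a heuristic; whether it converges at all is exactly the open question above.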