Hi,
Recently I’ve been working on a project in cosmology which entails solving the Boltzmann equation for two interacting species. The actual equations include a bunch of nasty-looking integrals, so I won’t bother typing them out, but the problem boils down to a system of 4 coupled ODEs (the evolution of the temperatures and chemical potentials of both species), parametrized by the temperature of the universe. I am using CVODE, which solves my system and gives sensible results for temperatures above ~25 MeV. Below this it slows to a crawl, and the number of steps/evaluations skyrockets for no apparent reason. The only notable thing that happens around that point is that one of the chemical potentials reaches a maximum; otherwise everything is perfectly well behaved. I’ve tried adjusting the error tolerances, but nothing seems to help much. There is some spiking when I zoom in (see attachment), but if I tighten the tolerances enough it vanishes.
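For concreteness, here is a stripped-down sketch of the kind of driver I’m running (SUNDIALS 6.x-style calls; the right-hand side body, tolerances, and temperature values are placeholders rather than my actual code):

#include <cvode/cvode.h>
#include <nvector/nvector_serial.h>
#include <sunmatrix/sunmatrix_dense.h>
#include <sunlinsol/sunlinsol_dense.h>

#define NEQ 4  /* T1, T2, mu1, mu2 */

/* d(state)/dT_universe; the real version evaluates the collision integrals */
static int rhs(realtype T, N_Vector y, N_Vector ydot, void *user_data)
{
  realtype *yd = N_VGetArrayPointer(ydot);
  yd[0] = yd[1] = yd[2] = yd[3] = 0.0;  /* placeholder */
  return 0;
}

int main(void)
{
  SUNContext sunctx;
  SUNContext_Create(NULL, &sunctx);

  N_Vector y = N_VNew_Serial(NEQ, sunctx);
  /* ... fill y with the initial temperatures and chemical potentials ... */

  realtype T0 = 30.0;                          /* starting temperature [MeV], placeholder */
  void *cvode_mem = CVodeCreate(CV_BDF, sunctx);
  CVodeInit(cvode_mem, rhs, T0, y);
  CVodeSStolerances(cvode_mem, 1e-8, 1e-10);   /* scalar tolerances, placeholder values */

  SUNMatrix A = SUNDenseMatrix(NEQ, NEQ, sunctx);
  SUNLinearSolver LS = SUNLinSol_Dense(y, A, sunctx);
  CVodeSetLinearSolver(cvode_mem, LS, A);      /* no user Jacobian: internal difference quotients */

  /* integrate toward lower temperatures, hence the negative step sizes in the stats */
  realtype T;
  for (realtype Tout = 29.0; Tout >= 1.0; Tout -= 1.0)
    CVode(cvode_mem, Tout, y, &T, CV_NORMAL);

  N_VDestroy(y);
  SUNMatDestroy(A);
  SUNLinSolFree(LS);
  CVodeFree(&cvode_mem);
  SUNContext_Free(&sunctx);
  return 0;
}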
I will note that I do not supply my own Jacobian, due to the complexity of the system. I suspect this is the culprit, but I am hoping there may be a way to work around it without providing one explicitly.
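If it does come to that, my understanding is that I would register a dense Jacobian routine via CVodeSetJacFn, along these lines (sketch only; the entries shown are placeholders, not my actual derivatives):

#include <cvode/cvode.h>
#include <sunmatrix/sunmatrix_dense.h>   /* SM_ELEMENT_D */

/* Dense Jacobian J(i,j) = d f_i / d y_j for the 4x4 system */
static int jac(realtype T, N_Vector y, N_Vector fy, SUNMatrix J,
               void *user_data, N_Vector tmp1, N_Vector tmp2, N_Vector tmp3)
{
  SM_ELEMENT_D(J, 0, 0) = 0.0;   /* placeholder entry */
  /* ... fill in the remaining 15 entries ... */
  return 0;
}

/* registered after CVodeSetLinearSolver() in the driver:
 *   CVodeSetJacFn(cvode_mem, jac);
 */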
I am also not sure whether this will help pinpoint the cause of my issue, but here are the statistics I get from the computation (in case it stands out: the step size is supposed to be negative, since I integrate toward lower temperatures):
Before slowing down:
Time,29.81676446869958,Steps,445,Error test fails,27,NLS step fails,0,Initial step size,-6.388153926949412e-06,Last step size,-0.3377185547218121,Current step size,-0.3377185547218121,Last method order,2,Current method order,2,Stab. lim. order reductions,0,RHS fn evals,697,NLS iters,694,NLS fails,12,NLS iters per step,1.559550561797753,LS setups,76,Jac fn evals,17,LS RHS fn evals,68,Prec setup evals,0,Prec solves,0,LS iters,0,LS fails,0,Jac-times setups,0,Jac-times evals,0,LS iters per NLS iter,0,Jac evals per NLS iter,0.02449567723342939,Prec evals per NLS iter,0,Root fn evals,0
Time,28.80360880453415,Steps,448,Error test fails,27,NLS step fails,0,Initial step size,-6.388153926949412e-06,Last step size,-0.3377185547218121,Current step size,-0.3377185547218121,Last method order,2,Current method order,2,Stab. lim. order reductions,0,RHS fn evals,700,NLS iters,697,NLS fails,12,NLS iters per step,1.555803571428571,LS setups,76,Jac fn evals,17,LS RHS fn evals,68,Prec setup evals,0,Prec solves,0,LS iters,0,LS fails,0,Jac-times setups,0,Jac-times evals,0,LS iters per NLS iter,0,Jac evals per NLS iter,0.02439024390243903,Prec evals per NLS iter,0,Root fn evals,0
Time,27.95893844960629,Steps,452,Error test fails,28,NLS step fails,0,Initial step size,-6.388153926949412e-06,Last step size,-0.1689839334020173,Current step size,-0.1689839334020173,Last method order,2,Current method order,2,Stab. lim. order reductions,0,RHS fn evals,710,NLS iters,707,NLS fails,13,NLS iters per step,1.564159292035398,LS setups,78,Jac fn evals,18,LS RHS fn evals,72,Prec setup evals,0,Prec solves,0,LS iters,0,LS fails,0,Jac-times setups,0,Jac-times evals,0,LS iters per NLS iter,0,Jac evals per NLS iter,0.02545968882602546,Prec evals per NLS iter,0,Root fn evals,0
Time,26.77279710073503,Steps,457,Error test fails,28,NLS step fails,0,Initial step size,-6.388153926949412e-06,Last step size,-0.282724494022406,Current step size,-0.282724494022406,Last method order,3,Current method order,3,Stab. lim. order reductions,0,RHS fn evals,716,NLS iters,713,NLS fails,13,NLS iters per step,1.560175054704595,LS setups,79,Jac fn evals,18,LS RHS fn evals,72,Prec setup evals,0,Prec solves,0,LS iters,0,LS fails,0,Jac-times setups,0,Jac-times evals,0,LS iters per NLS iter,0,Jac evals per NLS iter,0.02524544179523142,Prec evals per NLS iter,0,Root fn evals,0
After slowing down:
Time,14.18906639590881,Steps,21795,Error test fails,1232,NLS step fails,1706,Initial step size,-6.388153926949412e-06,Last step size,-0.003183356203066493,Current step size,-0.003183356203066493,Last method order,3,Current method order,3,Stab. lim. order reductions,0,RHS fn evals,42192,NLS iters,42189,NLS fails,3477,NLS iters per step,1.935719201651755,LS setups,9090,Jac fn evals,3552,LS RHS fn evals,14208,Prec setup evals,0,Prec solves,0,LS iters,0,LS fails,0,Jac-times setups,0,Jac-times evals,0,LS iters per NLS iter,0,Jac evals per NLS iter,0.0841925620422385,Prec evals per NLS iter,0,Root fn evals,0
Time,14.08999491262614,Steps,25468,Error test fails,1555,NLS step fails,1857,Initial step size,-6.388153926949412e-06,Last step size,-1.361802565455957e-05,Current step size,-1.361802565455957e-05,Last method order,4,Current method order,4,Stab. lim. order reductions,0,RHS fn evals,48693,NLS iters,48690,NLS fails,3771,NLS iters per step,1.911810899952882,LS setups,10335,Jac fn evals,3874,LS RHS fn evals,15496,Prec setup evals,0,Prec solves,0,LS iters,0,LS fails,0,Jac-times setups,0,Jac-times evals,0,LS iters per NLS iter,0,Jac evals per NLS iter,0.07956459231875128,Prec evals per NLS iter,0,Root fn evals,0
Time,13.98997805273008,Steps,29569,Error test fails,1929,NLS step fails,1997,Initial step size,-6.388153926949412e-06,Last step size,-2.78664537136988e-05,Current step size,-2.78664537136988e-05,Last method order,4,Current method order,4,Stab. lim. order reductions,0,RHS fn evals,55764,NLS iters,55761,NLS fails,4051,NLS iters per step,1.8857925530116,LS setups,11660,Jac fn evals,4185,LS RHS fn evals,16740,Prec setup evals,0,Prec solves,0,LS iters,0,LS fails,0,Jac-times setups,0,Jac-times evals,0,LS iters per NLS iter,0,Jac evals per NLS iter,0.07505245601764675,Prec evals per NLS iter,0,Root fn evals,0
Is it likely that the Jacobian approximation is failing? Would supplying my own help avoid this slowdown? I would like to be reasonably sure before investing the time it will take to derive one. I hope I've provided enough information, but I'd be happy to supply anything else. Any help is appreciated!
Hi Robert,
Are you setting your absolute tolerances using the vector option, i.e., by calling CVodeSVtolerances with a vector whose entries give the absolute tolerance for each state variable?
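For example, something along these lines (the magnitudes are only illustrative, and cvode_mem/sunctx come from your existing setup):

/* one absolute tolerance per state variable (illustrative values) */
N_Vector abstol = N_VNew_Serial(4, sunctx);
realtype *atol = N_VGetArrayPointer(abstol);
atol[0] = 1e-10;   /* temperature, species 1 */
atol[1] = 1e-10;   /* temperature, species 2 */
atol[2] = 1e-12;   /* chemical potential, species 1 */
atol[3] = 1e-12;   /* chemical potential, species 2 */
CVodeSVtolerances(cvode_mem, 1e-8, abstol);   /* reltol, per-component abstol */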
Best,
Cody