Polaris - "Belos Block GMRES did not converge"


Proux

Mar 18, 2026, 11:37:19 AM
to SCALE Users Group
Hello SCALE Team,

I'm having an issue with Polaris v6.3.1. 
It's a BWR fuel assembly. 
When my branches/histories are set for HFP, the run goes fine. 

When I add CZP branches to my matrix, the run errors out. This does not happen for every pin layout: with pin layout A, the case runs to completion (HFP and CZP branches), but when I modify pin layout A into pin layout B to account for short-length partial rods, the run errors out.

For example, the configuration below runs fine for pin layout A but fails as follows for pin layout B:

History:
  Bank out, TF = 800, rhoCool = 0.90  (this works for all BU steps)

Branch:
  Bank out, TF = 300, rhoCool: dens = 0.99  (errors out here at the first BU step)

Here is the tail of the output:
*** Belos Block GMRES did not converge after 100 iterations.
       Iteration 34 k-eff = -.00032 fission error = 8.057E+01 k error = *********
*** Belos Block GMRES did not converge after 100 iterations.
    [previous message repeated 13 more times]
terminate called after throwing an instance of 'std::invalid_argument'
  what():  /home/gitlab-runner/builds/tSVMuxiy/0/rnsd/scale/Trilinos/packages/anasazi/epetra/src/AnasaziEpetraAdapter.hpp:1197:

Throw number = 1

Throw test that evaluated to true: true

Belos::MultiVecTraits<double,Epetra_MultiVector>::SetBlock(A, mv, index = {0}): A has only 2 columns, but there are 1 indices in the index vector.

Program received signal SIGABRT: Process abort signal.

Backtrace for this error:
#0  0x1485940382E7
#1  0x1485940388EE
#2  0x148593854DEF
#3  0x1485938A154C
#4  0x148593854D45
#5  0x1485938287F2
#6  0x148593556A64
#7  0x148593554C05
#8  0x148593554C32
#9  0x148593554E51
#10  0x1485A39DB4E9
#11  0x1485A3A0C62B
#12  0x1485A563EDAF
#13  0x1485A71B53F9
#14  0x1485A719F480
#15  0x1485A71BCDDE
#16  0x1485C11B0C1F
#17  0x1485C1426A10
#18  0x1485C13FC802
#19  0x1485C1421C4F
#20  0x1485C1428E68
#21  0x1485C14316A2
#22  0x1485C1441C6D
#23  0x1485C143F9D2
#24  0x404F15 in MAIN__ at Polaris.f90:?



Do you know the source of this error -- and a way around it?

Thank you!

Matt Jessee

Mar 18, 2026, 1:26:51 PM
to SCALE Users Group
Hello,

This is most likely an error in the CMFD acceleration.

I recommend disabling it, i.e., use opt KEFF EigSolver='POWER'.
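In context, that card sits alongside the other cards in the Polaris input. A minimal sketch below -- only the opt KEFF EigSolver='POWER' line comes from this thread; the title and the elided cards are placeholders for your actual model, not verified input:

```
=polaris
title "BWR assembly, CMFD acceleration disabled"

% ... geometry, material, history, and branch cards unchanged ...

% Fall back to plain power iteration instead of the CMFD-accelerated
% eigenvalue solve (the Belos/Anasazi path that aborts above):
opt KEFF EigSolver='POWER'

end
```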

If the run-time is prohibitive, let me know and I can perhaps help identify a few faster options.

Thanks, Matt

Pascal Rouxelin

Mar 18, 2026, 1:42:25 PM
to Matt Jessee, SCALE Users Group
Hi Matt, thanks -- let me try that and give you feedback. 

Time is not prohibitive (big matrix but I can afford a day-ish to produce these)


--
Pascal Rouxelin 
Senior Research Scholar
Nuclear Engineering Department
North Carolina State University 
2500 Stinson Drive - 1110B
Burlington Laboratory
Raleigh, NC 27607, USA

Pascal Rouxelin

Mar 19, 2026, 1:24:55 PM
to Matt Jessee, SCALE Users Group
Hello Matt and SCALE users, 

Disabling the CMFD acceleration did the trick. Thank you!