Minimizing run time and avoiding the .out file


yrob

Mar 26, 2020, 1:48:11 PM
to SCALE Users Group
Hello everybody,

First, I am new to SCALE and have learned almost exclusively from the User Manual and the COUPLE tutorial available on this forum.

I am currently working on a project where I use COUPLE to feed one-group cross sections into ORIGEN and then run a depletion calculation.
This is done for a huge number of burnable materials (hundreds of thousands), with one input file for each of them.
Indeed, each of them has a specific composition and flux spectrum, so I cannot group them.
Additionally, since all cases are independent, I put them in separate files so that I can use all my cores in parallel.

Thus, my goal for now is to minimize the calculation time, the writing time, and the storage needed.
However, I have 120 isotopes and 2 to 3 reactions for my cross sections.
Therefore, when running SCALE, an output file with around 13,000 lines is created (~800 kB per burnable material).
I think that avoiding this file and just asking OPUS to write the new composition to a text file would save me a lot of (writing) time and storage.
Yet I have not found any way to do it.

Beyond that, I am wondering whether my approach to the problem is the best one or whether there is a better way to do it.
For example, the fact that I have to write (and store) a .f33 file (~300 kB per burnable material) instead of passing it directly from COUPLE to ORIGEN does not seem optimal.
I would gladly follow your advice on this challenging problem.

Please find attached one .inp file and one .out file showing how I currently proceed.

Thank you for your help,
Yrob

scale1.out
scale1.inp

Will Wieselquist

Mar 26, 2020, 3:39:05 PM
to scale-us...@googlegroups.com
EDIT: changed typo: a3 1 -> a2 1

Hello Yrob!

There is a PRT option in COUPLE's 1$$ array that suppresses a lot of the output; set its second entry to 1, i.e. 1$$ a2 1 a15 0 a18 238 e t. However, I noticed some other things about your model:
  • Nuclide IDs should be in ZAI format (p. 5-26 of the manual).
  • There is no elemental carbon in COUPLE; use 60120 instead.
  • It is not possible to set the total cross section (MT 1) in COUPLE. The total includes scattering, and scattering is meaningless for transmutation. Typically MT 18 and MT 102 are sufficient for this type of usage.
Also, it's not possible to avoid writing the output file in SCALE 6.2. This will be possible in the next release as we modernize COUPLE.

Best Regards,

Will Wieselquist
SCALE Team


yrob

Mar 26, 2020, 5:45:35 PM
to SCALE Users Group
Dear Mr. Wieselquist,

Thank you for your prompt answer.

I was a bit confused about it, but looking at the manual I think that you meant a2 1 instead of a3 1.

Doing that, it is much better: the output file now contains only about 1,000 lines.
I am wondering whether we can also avoid copying the input into the output file, since in many situations it is unnecessary because we already have access to it.

I corrected the input based on your comments.

Finally, do you think that SCALE is well suited to what I am trying to do?

yrob

Mar 26, 2020, 6:28:13 PM
to SCALE Users Group
I am also trying to find the best way to run COUPLE and ORIGEN in parallel. What is the usual way to do it?

Will Wieselquist

Mar 26, 2020, 7:04:19 PM
to SCALE Users Group
I'll attempt all answers here.

Easy ones :)
1. I corrected my post to 1$$ a2 1--thanks for noticing that.
2. There is no way to suppress the input copy, so that's probably the minimum output size.

> Finally, do you think that SCALE is well suited to what I am trying to do?

It's not super clear what you're trying to do. For ease of discussion, let's say you are doing HTGR modeling and have ~300k pebbles you want to track explicitly. Doing this all in Serpent (which I saw referenced in your COUPLE file), I'd ballpark you need about 300000*(50000+3000*3)*8/(1024*1024*1024) ≈ 130 GB of memory (50000 is the size of the transition matrix, 3000 the size of the nuclide vector). That isn't an ungodly amount, so part of me wonders why you don't do the calculation in Serpent.
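
(Just to spell out that arithmetic, here is the same back-of-the-envelope estimate as a quick Python sketch; the per-zone sizes are the rough numbers quoted above, not measured values.)

    # Back-of-the-envelope memory estimate for tracking ~300k zones in memory.
    # The per-zone sizes (50,000 transition-matrix coefficients, a 3,000-nuclide
    # vector with ~3 copies) are rough assumptions; 8 bytes per double.
    n_zones    = 300_000
    matrix     = 50_000        # nonzero transition-matrix coefficients per zone
    nuclides   = 3_000 * 3     # nuclide vector, a few copies per zone
    bytes_each = 8             # double precision

    total_gib = n_zones * (matrix + nuclides) * bytes_each / 1024**3
    print(f"~{total_gib:.0f} GiB")   # ~132 GiB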

But okay, you want to use Serpent only for the transport and handle the depletion externally through a file-based coupling. That's going to offset some of the RAM with disk space. But in terms of runtime, it's 1-2 seconds for each COUPLE+ORIGEN run--and if you are using Monte Carlo, you have 300k flux and cross-section tallies that you need to make and converge to reasonable values. That almost certainly takes more than 1-2 seconds per zone, right?

So in terms of whether this is the right tool for the job, given my perceived constraints, I'd say there's not much else out there. There have been a million couplings like this (MONTEBURNS, VESTA, etc.), and it's great to see you're using modern ORIGEN instead of ORIGEN 2. It's a little strange to hook it up to Serpent, though, considering Serpent has integrated depletion.

You seem to be interested in high performance, and that takes you to domain-decoupled depletion within your Monte Carlo solve, which is going to require some coding. Here's a sample from our ORIGEN API that solves a decay problem.

    // library contains definition of nuclides/transitions
    auto decay_lib = std::make_shared<Origen::Library>();
    {
        std::string path(std::getenv("DATA"));
        path += "/origen.rev03.end7dec";
        ScaleUtils::IO::DB db;
        Origen::LibraryIO  io;
        io.load(*decay_lib, path, db);
        std::cout << db << std::endl;
    }

    // initialize material
    double           volume       = 1.0;     // important for numden
    std::string      name         = "x";     // arbitrary
    int              id           = 1234;    // arbitrary
    std::string      library_type = "decay"; // arbitrary
    Origen::Material mat(library_type, decay_lib, name, id, volume);

    // set solver
    {
        ScaleUtils::IO::DB db;
        db.set<std::string>("solver", "cram"); // or "matrex"
        auto slv = Origen::SolverSelector::get_solver(db);
        mat.set_solver(slv);
        std::cout << db << std::endl;
    }

    // set initial numden
    //                  atoms/barn-cm  IZZZAAA ids
    mat.set_numden_bos({2.e-4, 1.e-2}, {54135, 92238});

    // add a 30-day decay step (seconds/step)
    //  - extract a transition matrix from our decay library
    auto trx = decay_lib->newsp_transition_matrix_at(0);
    //  - transition coefficients, flux=0, time
    mat.add_step(86400. * 30);
    mat.set_flux(0.0);
    mat.set_transition_matrix(trx);

    // solve requires relative time substeps
    Origen::Vec_Dbl dtrel{0.2, 0.3, 0.3, 0.2};
    Origen::Vec_Dbl flux(4);  // output size of substeps
    Origen::Vec_Dbl power(4); // output size of substeps
    mat.solve(dtrel, &flux, &power);

    std::cout << mat.to_string() << std::endl;

Our high-performance computing team works on these kinds of things: https://www.ornl.gov/directorate/exascale-computing-project. Like everything, it really depends on what exactly you are trying to do, who your sponsors are, etc.

In terms of running ORIGEN in parallel through input files, it's completely up to you. Most clusters have job schedulers where you would just say "here are my 300k jobs, run them how you see fit". You could also launch them yourself; on Linux that would be `scalerte origen300000.inp &`, followed by the next one, and so on. This is enough scripting that I'd recommend something powerful like Python, which has threading and even MPI methods available. I haven't used them myself, but I have heard they are pretty robust and easy to use. A rough sketch of such a driver is below.
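
Here is a purely illustrative sketch of such a driver using only the Python standard library. The input-file naming, the worker count, and the assumption that scalerte is on your PATH are placeholders, not SCALE conventions--adjust for your own machine or scheduler.

    # Hypothetical driver: run many independent SCALE inputs in parallel.
    # File names, worker count, and "scalerte" being on PATH are assumptions.
    import glob
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def run_case(inp):
        # Each case is an independent external process, so threads are sufficient here.
        return inp, subprocess.run(["scalerte", inp]).returncode

    inputs = sorted(glob.glob("origen*.inp"))  # e.g. origen000001.inp ... origen300000.inp

    with ThreadPoolExecutor(max_workers=8) as pool:  # one worker per core you want to keep busy
        for inp, rc in pool.map(run_case, inputs):
            if rc != 0:
                print(f"{inp} exited with return code {rc}")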

Good Luck!

Best,

Will Wieselquist
SCALE Team
