Running CASL chain segmentation fault


Augusto Hernandez Solis

Dec 27, 2019, 4:07:06 PM
to OpenMC Users Group
Hello!

I am testing the depletion capabilities of the newest version of OpenMC. I successfully ran the pincell_depletion example that uses the chain_simple.xml file. However, after expanding the fuel composition a bit and trying either chain_casl.xml or the long endfb71 chain, it crashes in the middle of the second or third transport calculation on a cluster. I attached both the input and the slurm output, for mpi=1 and omp=72 on a node that should have plenty of available memory (around 196 GB of RAM, I think).

Any idea what it could be?

Thanks, and a happy 2020!

Augusto
run_depletion.py
slurm-17914.out

Augusto Hernandez Solis

Dec 30, 2019, 5:27:16 PM
to OpenMC Users Group
Update: Problem solved

The problem was not the memory (as I originally thought). The problem was the HDF5 neutron reaction data I was using: a library I had created in HDF5 myself from ACE files we have at work. Instead, I downloaded the JEFF32 HDF5 library from the OpenMC website and now it works. So the segmentation fault was (I think) most probably in the angular data, as the exit signal at the beginning of the error suggested.

Augusto Hernandez Solis

Jan 2, 2020, 7:18:04 AM
to OpenMC Users Group
Second update on the same issue:

Hi again,

I made additional tests with some different nuclear data libraries and depletion chains. It turns out that I am not always able to run the pin depletion calculation to completion.

Take, for example, the following material composition:
************************************************************
uo2 = openmc.Material(name='Fuel Batch 1')
uo2.set_density('g/cc' ,10.499)
uo2.temperature = 600.0
uo2.add_nuclide('O16'  ,1.16723E-01,'wo')
uo2.add_nuclide('U235' ,2.80E-01,'wo')
uo2.add_nuclide('U238' ,6.13744E-01,'wo')
#uo2.add_nuclide('Mn55',0.0,'wo')
#uo2.add_nuclide('Cu63',0.0,'wo')
#uo2.add_nuclide('Cu65',0.0,'wo')
#uo2.add_nuclide('Cf252',0.0,'wo')

****************************************************

For the above composition, I ran test cases at different temperatures, using either the ENDFB71 or the JEFF32 HDF5 library provided by the OpenMC team, and either the casl or the endfb71 chain XML file. The tables below show whether each case completed successfully. One note: in some cases I had to initialize certain isotopes to zero concentration in the material just so the code could read the HDF5 data and reach the transport calculation; this does not mean the run then avoided a segmentation fault at a later transport step.

Test cases with chain_casl.xml:

Fuel temp. (K) | JEFF32 data: successful? / init. needed? | ENDFB71 data: successful? / init. needed?
293            | YES / NO                                  | YES / NO
600            | NO / NO                                   | YES / NO
900            | NO / NO                                   | YES / NO
1200           | NO / NO                                   | YES / NO

Test cases with chain_endfb71.xml:

Fuel temp. (K) | JEFF32 data: successful? / init. needed? | ENDFB71 data: successful? / init. needed?
293            | NO / YES                                  | YES / NO
600            | NO / YES                                  | YES / NO
900            | NO / YES                                  | YES / NO
1200           | NO / YES                                  | YES / NO


As you can see from the first table, with the CASL chain and the JEFF32 data I am only able to complete a pin depletion successfully at 293 K (at the other temperatures I get a seg. fault). With the ENDFB71 data, however, it runs fine at every temperature and with either depletion chain. This makes me think there may be some issue with the processed H5 data of the JEFF32 library. Any comments or ideas will be greatly appreciated, and also any advice on how to properly fix this problem for home-made H5 libraries.
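(For reference, my home-made library was converted from ACE files roughly along the following lines; this is only a sketch, and the ACE path is a placeholder.)

```python
# Rough sketch of converting an ACE file to the OpenMC HDF5 format
# (placeholder path; in practice this is repeated for every nuclide in the library,
#  and extra temperatures can be appended with add_temperature_from_ace).
import openmc.data

mn55 = openmc.data.IncidentNeutron.from_ace('/path/to/ace/Mn55.ace')
mn55.export_to_hdf5('Mn55.h5')
```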

Thanks!

Augusto


Javier Gonzalez

Jan 2, 2020, 1:00:29 PM
to OpenMC Users Group
Hi Augusto,

I see that you have experience with the depletion capabilities of OpenMC 0.11.0, so maybe you or someone else can help me. I am new to simulating depletion and I am having some issues.

I strictly followed the steps of the Pincell Depletion example (https://docs.openmc.org/en/v0.11.0/examples/pincell-depletion.html) but I'm getting an error that I don't understand. Attached are my input file and a figure where the error can be seen (bottom right). To create the XML files I am using Spyder 4.0.0, but I also tried Jupyter Notebook and got the same error. If I remove the part where I define the depletion analysis, the simulation runs perfectly. For the depletion I am using chain_casl.xml.

Any suggestion on how to fix this?

Thanks in advance,
Javier
PincellDepletion.py
Figure-1.png

Augusto Hernandez Solis

Jan 2, 2020, 4:58:00 PM
to OpenMC Users Group
Dear Javier,

I did manage to run your depletion case. It looks like the problem is that the code can't find the cross_sections.xml file. Although in your input you seem to have set the path to the cross_sections file correctly, the code apparently cannot find it (strange, because as I said, the definition looks correct). So the only small change I made to your input was to comment out the following line:

## materials.cross_sections = '/home/javier/Documents/OpenMC/DataENDF-7.1/cross_sections.xml'

Instead, I set the environment variable for the cross_sections file, like this:

export OPENMC_CROSS_SECTIONS=/home/ahsolis/cross_sections.xml

so in your case you can do:

export OPENMC_CROSS_SECTIONS=/home/javier/Documents/OpenMC/DataENDF-7.1/cross_sections.xml

That way, I am pretty sure it will run for you as it did for me.

As a last note, it is good that you are using the ENDFB71 H5 library, since, as I tested and commented before, it is the library that seems to work for depletion calculations.

Augusto

Javier Gonzalez

Jan 3, 2020, 11:11:59 AM
to OpenMC Users Group

Hi Augusto,


Thanks for your reply.


In fact, the way I defined the path to the cross sections is okay, because the simulation runs before setting up the depletion. Still, I did what you suggested and now it runs. So it seems it is necessary to set the environment variable pointing to cross_sections.xml if you want to simulate depletion. I have three more questions:


1- If I want to try depletion with another nuclear data, I should set again the environment variable, right?

2- I read in the Pincell Depletion example that it is possible to create your own depletion chain but I did not find an example, do you have any idea?

3- You mentioned that you are using a cluster for your runs, I am also using a shared cluster and submit the job using a .sh file. Do you have any idea how to do this now with depletion? Without depletion, I only include “openmc” in that file and, when the resources that I request are available, the simulation starts. Now, with depletion, I am a little bit confused on how to submit the job.


Thanks,

Javier  

Augusto Hernandez Solis

Jan 4, 2020, 5:49:59 AM
to OpenMC Users Group
Hi Javier,

The way I understand the depletion module of the new OpenMC works (this is my opinion, which I hope is correct, but it should be confirmed by the OpenMC developers ;-)) is that the transport calculation is carried out by the OpenMC executable built with a C++ compiler, called via the shared openmc library. After the neutron spectrum has been computed, the depletion solver then runs via the Python API. I think this is why, if you only run the transport calculation, defining the materials.cross_sections attribute in the Python API is enough, because it gets exported to the materials.xml file. However, if you also run the depletion module, that module no longer looks into materials.xml but instead into the general environment variable OPENMC_CROSS_SECTIONS when updating the material composition at the different burnup steps. (I do not know whether such a cross_sections.xml file could be passed as an argument to the depletion operator, but I think the best approach, for regular MC transport as well as depletion, is to define the OPENMC_CROSS_SECTIONS environment variable from the beginning. That is what I personally prefer to do.)

Now, answering to your questions, I can personally say the following:

1- If I want to try depletion with another nuclear data, I should set again the environment variable, right?


Yes. If you want to load data from different nuclear data libraries in different runs, you should set the environment variable in every run to the path where the code can find the required data.


2- I read in the Pincell Depletion example that it is possible to create your own depletion chain but I did not find an example, do you have any idea?


There is the openmc.deplete.Chain class in the Python API, which is used to create the depletion chain in XML format. For an example, you can take a look at the openmc-make-depletion-chain script: if you have the ENDF-formatted neutron reaction, decay and fission yield data, the class can build such a chain via the "from_endf" method (see this page for more info: https://docs.openmc.org/en/latest/pythonapi/generated/openmc.deplete.Chain.html#openmc.deplete.Chain). A rough sketch is below; if you need more examples and clarification, we could wait for more help from the OpenMC developers.
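(This is only a sketch of how I understand from_endf is used; the directories are placeholders, and the argument order should be checked against the documentation linked above.)

```python
# Sketch: build a depletion chain from ENDF-format decay, fission yield, and
# neutron reaction files, then export it to XML for use with the depletion operator.
from pathlib import Path
import openmc.deplete

decay_files   = list(Path('endf/decay').glob('*.endf'))      # placeholder directories
fpy_files     = list(Path('endf/nfy').glob('*.endf'))
neutron_files = list(Path('endf/neutrons').glob('*.endf'))

chain = openmc.deplete.Chain.from_endf(decay_files, fpy_files, neutron_files)
chain.export_to_xml('chain_custom.xml')
```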


3- You mentioned that you are using a cluster for your runs, I am also using a shared cluster and submit the job using a .sh file. Do you have any idea how to do this now with depletion? Without depletion, I only include “openmc” in that file and, when the resources that I request are available, the simulation starts. Now, with depletion, I am a little bit confused on how to submit the job.


If you only want to run a transport calculation, you only need the materials, geometry, settings (and, if you like, tallies) XML files, which can be created with the Python API. To execute just the transport calculation (as you mentioned in your question), you only need to call the openmc executable, which will look for those XML files. If instead you want to run a depletion calculation, you need a Python input file in which you call the depletion modules, so on a shared cluster you have to submit it as a Python script. This is also why, if you want to run a depletion calculation in parallel, you need the mpi4py Python module. Below you can find two examples of how I run the depletion calculation on my cluster, using either the SLURM or the Torque launcher, followed by a minimal sketch of what such a run_depletion.py driver might contain. This is for a single node with 72 cores, and I use 4 MPI tasks and 18 cores per task. That is only my personal way of executing in parallel: running this way on a single node makes it faster at the cost of available memory. You could instead go across nodes with one task per node, or use purely shared-memory computation per node (in my cluster that would mean 1 MPI task and 72 cores per task). All in all, this is up to the user.

********* For SLURM *****************

#SBATCH -t 1:0:0
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=18
# SBATCH -N 1

module load /home/ahsolis/modopenmc

time srun python run_depletion.py

module unload /home/ahsolis/modopenmc
**********************************************

****** For Torque ***********************
#PBS -l nodes=1:ppn=72
export OMP_NUM_THREADS=18
module load /home/ahsolis/modopenmc

cd $PBS_O_WORKDIR

time mpirun -np 4 python run_depletion.py

module unload /home/ahsolis/modopenmc
**********************************************
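And to make question 3 concrete, here is a rough sketch of what the run_depletion.py driver called above might look like. This reflects my understanding of the 0.11.0 depletion API; the geometry, power, time steps and chain file are placeholder values, so please compare it against the pincell depletion example before using it.

```python
# run_depletion.py -- rough sketch of a depletion driver (all values are placeholders)
import openmc
import openmc.deplete

# Minimal model: one depletable fuel region with a reflective boundary
fuel = openmc.Material(name='fuel')
fuel.set_density('g/cc', 10.4)
fuel.add_nuclide('U235', 0.03)
fuel.add_nuclide('U238', 0.97)
fuel.add_nuclide('O16', 2.0)
fuel.depletable = True
fuel.volume = 0.5          # cm^3; the depletion operator needs volumes of depletable materials

sphere = openmc.Sphere(r=0.5, boundary_type='reflective')
cell = openmc.Cell(fill=fuel, region=-sphere)
geometry = openmc.Geometry(openmc.Universe(cells=[cell]))

settings = openmc.Settings()
settings.particles = 1000
settings.batches = 50
settings.inactive = 10

# Transport operator + time integrator (the chain file path is an assumption)
operator = openmc.deplete.Operator(geometry, settings, 'chain_casl.xml')
power = 200.0                            # W, placeholder
time_steps = [30 * 24 * 60 * 60] * 3     # three 30-day steps, given in seconds
openmc.deplete.PredictorIntegrator(operator, time_steps, power).integrate()
```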
I hope this was clear and helpful!

Augusto

Andrew Johnson

Jan 4, 2020, 12:14:56 PM
to OpenMC Users Group
Augusto,

Thank you for reporting the outcome of your debugging adventure! One possible candidate for the strange behavior with the JEFF data is that the depletion chains involve some isotopes that may not exist at certain temperatures in the JEFF data. I haven't looked into this, but you could use the Python API to investigate. Isotopes can be searched for in the depletion Chain using
```python
>>> "U235" in chain
True
```

I am confused as to why it was necessary for you to create isotopes at zero concentration; the depletion sequence should be able to handle that. Can you expand on what happened using the JEFF 293 K cross sections and the full ENDF chain, with and without adding these trace isotopes? With the membership check above, you could also iterate over all the nuclides present in your JEFF cross section data and check whether they exist in the depletion chain; a rough sketch follows.
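This is only a sketch; the chain file and cross_sections.xml paths are placeholders to be replaced with your own.

```python
# Sketch: list nuclides that appear in a cross section library but not in a depletion chain.
import openmc.data
import openmc.deplete

chain = openmc.deplete.Chain.from_xml('chain_casl.xml')
library = openmc.data.DataLibrary.from_xml('/path/to/jeff32_hdf5/cross_sections.xml')

for lib in library.libraries:
    if lib['type'] != 'neutron':
        continue
    for nuclide in lib['materials']:
        if nuclide not in chain:
            print(nuclide, 'is in the cross section library but not in the chain')
```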

Also thank you for the detailed response to Javier's remarks. I was just getting to those. You are correct in how the python API interfaces with the openmc library. Excellent!

Javier,

Augusto's response for running depletion is great, please follow that. It is recommended that you have the OPENMC_CROSS_SECTIONS environment variable set, specifically for running depletion, but it can be helpful for transport as well. This eliminates the need to pass the cross section path to your Materials definition. To permanently set this environment variable, place the following in any shell script (personal bashrc file or cluster submission script):
```bash
export OPENMC_CROSS_SECTIONS=/path/to/cross_sections.xml
```

To create your own depletion chain, use the Chain.from_endf method. It requires three types of files: neutron reaction data, neutron-induced fission yield data, and isotopic decay data.

I apologize for the delay in getting to this issue, but I hope this has been helpful.

Andrew


Javier Gonzalez

Jan 6, 2020, 11:02:51 AM
to OpenMC Users Group
Thanks Augusto and Andrew for your responses!



Javier Gonzalez

Jan 6, 2020, 11:58:37 AM
to OpenMC Users Group
Hello everyone,

Now I am having a new problem. I am trying to run my depletion analysis and I'm getting this error:

ERROR: Failed to open HDF5 file with mode 'r':
        /home/gonzaj10/libraries/DataENDF-7.1/Am241.h5

I'm running this analysis on a shared cluster. On my PC, with the same nuclear data library, this error does not appear and the simulation completes successfully.
Any idea?

Thanks,
Javier
Image.png

Augusto Hernandez Solis

Jan 7, 2020, 4:52:22 AM
to OpenMC Users Group
Dear Andrew,

Thanks for your reply. You asked me to expand on the problem of initializing concentrations to zero in the material when depleting with a certain chain XML file and a certain data library. For instance, if I try to use the casl or the endfb71 depletion chain file together with the JEFF32 data set and I do not initialize some concentrations to zero in the depletable material, then I get the following error:

********************************************************************************
Reading Mn54 from /scratch/s1/ahsolis/jeff32_hdf5/Mn54.h5
 Reading Mn55 from /scratch/s1/ahsolis/jeff32_hdf5/Mn55.h5
 ERROR: Object "800K" does not exist in object /Mn55/reactions/reaction_005
 ERROR: Object "800K" does not exist in object /Mn55/reactions/reaction_005
*********************************************************************************
However, if I add the nuclide like this:

uo2.add_nuclide('Mn55',0.0,'wo')

Then the code no longer reports that error but, in the end, it still crashes during the second or third transport calculation (with a seg. fault, as previously described). This is why, in the tables I provided earlier, I noted whether it was necessary to initialize such concentrations to zero in order to even start the depletion calculation (even if it ended up crashing). As you can see from the tables, if I run, for instance, the casl or endfb71 chain with the ENDFB71 H5 data, there is no need to initialize concentrations to zero, and the run also finishes successfully. Therefore, I think there is a clear connection between the seg. fault caused by missing data at certain temperatures and the need to initialize concentrations in the material definition to start the depletion calculation.
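As a side note, a quick way to see which temperatures a given reaction actually provides is to open the HDF5 file directly with h5py; this is only a sketch, with the file path taken from the error message above and the group layout following the error text.

```python
# Inspect which temperature groups exist for MT=5 of Mn55 in the JEFF32 HDF5 file
import h5py

with h5py.File('/scratch/s1/ahsolis/jeff32_hdf5/Mn55.h5', 'r') as f:
    rx = f['Mn55/reactions/reaction_005']
    print(list(rx.keys()))   # e.g. ['294K', '600K', ...]; an '800K' group would be missing here
```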

Augusto

Paul Romano

Jan 10, 2020, 8:22:53 AM
to Augusto Hernandez Solis, OpenMC Users Group
Hi Augusto,

Thanks for reporting the problem you're running into, and also thanks for your detailed (and correct) response to Javier. It does look like there's something wrong with the JEFF 3.2 data (the 800 K cross section for MT=5 in Mn55 really is missing from the file). To me, it's surprising that it works at all when you set the number density to zero; I would have expected it to fail for either case. How are you specifying temperatures for your problem? i.e., what are you specifying for settings.temperature? I'll see if I can come up with an explanation for this all.

Best regards,
Paul


Augusto Hernandez Solis

Jan 10, 2020, 9:05:30 AM
to OpenMC Users Group
Hi Paul,

Thanks for your reply. The only place I specified a temperature was in the fuel material (an old habit from version 0.10.0). Below is how I had to define the fuel material to get it to execute with JEFF32 (even though it would still crash in a later transport calculation). As you can see, I also had to initialize Cu63, Cu65 and Cf252 to zero, in addition to Mn55:

uo2 = openmc.Material(name='Fuel Batch 1')
uo2.set_density('g/cc' ,10.499)
uo2.temperature = 600.0
uo2.add_nuclide('O16'  ,1.06723E-01,'wo')

uo2.add_nuclide('U235' ,2.80E-01,'wo')
uo2.add_nuclide('U238' ,6.13744E-01,'wo')
uo2.add_nuclide('Mn55',0.0,'wo')
uo2.add_nuclide('Cu63',0.0,'wo')
uo2.add_nuclide('Cu65',0.0,'wo')
uo2.add_nuclide('Cf252',0.0,'wo')
uo2.depletable = True

Other materials such as the cladding, coolant and helium gap were not set to any specific temperature, nor did I specify a temperature in any other way (e.g., in the settings file, cells, etc.). I attached the input file I used, in case you want to take a look.

Best regards,

Augusto

run_depletion.py

Paul Romano

Jan 15, 2020, 8:15:06 AM
to Augusto Hernandez Solis, OpenMC Users Group
Hi Augusto,

I have an update for you. When I ran your depletion script with the JEFF 3.2 data, I also ran into a segfault. I was able to track the issue down to some nuclides not having an angular distribution specified for elastic scattering (the one that caused problems for me was Te120). What I've done is put together a fix in the code so that in this case an isotropic distribution is used. In addition, I've also fixed our data processing so that when converting an ACE file to our HDF5 format, if this situation is detected, an isotropic angular distribution is explicitly added, which allows the current version of the code to work with the updated data files. I've gone ahead and uploaded a new version of the JEFF 3.2 HDF5 files at openmc.org, so please re-download those and try your simulation again.

There is still an issue with Mn55 missing some reaction cross sections at 800 K, but this is a problem with the JEFF 3.2 ACE files themselves, so there is not much we can do about it; the only way around it would be to reprocess the data through NJOY again. One thing you may want to try in your simulation is temperature interpolation, specifying a temperature range over which data is loaded:

settings = openmc.Settings()
settings.temperature = {
    'method': 'interpolation',
    'range': (300.0, 1000.0)
}

Best regards,
Paul
