Merging in of research


Adam Nelson

Jun 9, 2014, 10:41:53 AM
to openm...@googlegroups.com
Hey all,

I finally reached a point where I can call my research "done*": working and useful to others.  For those who don't know, I was working on improving the efficiency of tallying multi-group cross sections so that Monte Carlo can be used to get around the assumptions present in the variants of slowing-down theory typically used to generate MGXS libraries.

To do that I wrote an ACE pre-processor code called NDPP (Nuclear Data Pre-Processor).  This code has functionality similar to NJOY's GROUPR module, though it stops short of performing the flux weighting so that the Monte Carlo code can do that itself.  NDPP writes a file, similar to ACE, which the Monte Carlo code loads into memory for use during tallies.  I then have a branch of OpenMC sitting in my repo which knows how to load these NDPP libraries and use them for tallying.
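
For concreteness, the flux weighting being deferred to the Monte Carlo code is just the standard group collapse.  A rough sketch of that collapse on a pointwise grid (illustrative only; this is not code from NDPP or OpenMC):

    import numpy as np

    def collapse_to_group(sigma, flux, energy, e_lo, e_hi):
        # sigma_g = integral(sigma*flux dE over group) / integral(flux dE over group),
        # approximated with bin-averaged values on the pointwise energy grid.
        in_group = (energy[:-1] >= e_lo) & (energy[1:] <= e_hi)
        dE = np.diff(energy)[in_group]
        sig = 0.5 * (sigma[:-1] + sigma[1:])[in_group]
        phi = 0.5 * (flux[:-1] + flux[1:])[in_group]
        return np.sum(sig * phi * dE) / np.sum(phi * dE)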

The method is only useful for data which has outgoing distribution information: quantities like fission energy spectra, the scattering moment matrices, and photon production from neutron reactions.  Right now NDPP/OpenMC only handle the fission spectra and scattering moment matrices.
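
To give a feel for how the tallying itself changes, here is a heavily simplified sketch (hypothetical function and array names, not the actual OpenMC tally code).  With the pre-processed data, each collision scores its expected contribution into every outgoing group and moment, instead of only into the sampled outgoing group and angle:

    from scipy.special import eval_legendre

    def score_analog(tally, wgt, g_in, g_out, mu, n_moments):
        # Traditional estimator: score only the sampled outgoing group, using
        # Legendre polynomials of the sampled scattering cosine.
        for l in range(n_moments):
            tally[g_in, g_out, l] += wgt * eval_legendre(l, mu)

    def score_preprocessed(tally, wgt, g_in, p_lg):
        # Pre-processed estimator: p_lg[l, g_out] holds the Legendre moments of
        # the outgoing distribution, pre-integrated over each outgoing group at
        # the incident energy, so one event contributes to all outgoing groups.
        tally[g_in, :, :] += wgt * p_lg.T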

Anyway, the method works, and it works quite well.  See the attached figure.  For that figure, I ran a UO2 pin cell in OpenMC, instrumented it with tallies using both NDPP data and the traditional tallying scheme, created multi-group libraries at the given active batch counts (the dots), and then ran those libraries with a deterministic transport solver (MPACT).  The lines show the pcm error between MPACT (with P2 scattering) and the OpenMC eigenvalue (at that statepoint) for both the traditional means of tallying data (Direct-Pn) and the NDPP-based data for chi and scatter-Pn.  As you can see, the NDPP-Pn + NDPP-Chi line converges almost instantly, NDPP-Pn with analog Chi is in second place, and Direct-Pn takes significantly longer to converge.  I don't have figure-of-merit plots made yet, but it's roughly a 1-3 order of magnitude improvement depending on the particular group and scattering order.
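
(For anyone not used to the units: 1 pcm = 1e-5, so the plotted error is just an eigenvalue/reactivity difference times 1e5; one common convention, for example, is (1/k_OpenMC - 1/k_MPACT) * 1e5.  I'm not spelling out the exact sign convention used in the figure here.)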

OK, so with that being said, NDPP is available at www.github.com/ndpp/ndpp, and my OpenMC branch is forked from the harmonics branch that a PR is based on (I needed the nu-scatter-Pn functionality that PR brings).  My questions for you guys are: 1) are you interested in me submitting this upstream, and 2) if so, when is a good time to do so?  I would imagine after 0.6 comes out is a good time, since we are coming up on that milestone anyway.

Thanks!
Adam

* It's never actually done, of course  
pcm_P2_small.png

Adam Nelson

Jun 11, 2014, 12:20:32 PM
to openm...@googlegroups.com
I realize I owe you all more of a demonstration of worthiness before it should be included; I can do that after the Reno conference next week.

Paul Romano

Jun 12, 2014, 11:43:33 AM
to Adam Nelson, openm...@googlegroups.com, Andrew Siegel
Hi Adam,

It is an impressive piece of work that you've put together. I think the question of how and when to merge your work is very much interrelated with the larger question of what the future of cross sections for OpenMC looks like. My personal opinion is that the ACE format certainly cannot be the end state. As more capabilities are added over time (heavy-nuclide resonance scattering, the short-time collision approximation, energy deposition, depletion), a lot of the data that is needed may not be present in the ACE data, so it would need to come from elsewhere. Continuing with the ACE format for neutron cross sections and then relying on other data sources as needed will result in a messy system, both for users and developers. The other difficulty with relying on the ACE format is that we then implicitly have a dependence on NJOY, which is under export control. We really should (in my opinion) be moving to an entirely open-source ecosystem, both for the simulation and for the data generation.

Now, that being said, moving away from ACE will likely be a monumental task requiring quite a bit of manpower, so if, hypothetically, we were to wait until we have an entirely new processing system (not to mention a version of OpenMC which could utilize the new cross section format) to merge in your work, you would be waiting a long time. Depending on how intrusive your current changes are (I'll try to have a look this week or next), perhaps we could merge it in for a v0.7 release with a goal of eventually moving the capability into a new cross section format.

Other thoughts?

Best,
Paul

Jon Walsh

Jun 12, 2014, 4:11:40 PM
to openm...@googlegroups.com, nels...@umich.edu, sieg...@me.com, Carl Christopher Haugen, Colin Josey
Hey all,

The idea of an integrated, open-source nuclear data processing/preparation code (à la NJOY) is one that's been nagging me for a little while.  I would fully support going that route.  I think a lot of the work currently going on within CRPG - and of course the excellent work that you've put together, Adam - would fit nicely within the framework of such a project.  Along with Adam's work, it doesn't seem too far off that we could have (expanded) analogs to LEAPR, BROADR, and PURR based on what Carl, Colin, and I, respectively, are doing.  And, in addition to the basic NJOY-like functionality of these modules, we'll (hopefully) have the improved methods (speed, physical fidelity, etc.) that are developed in the course of our research.

While a couple of disjoint modules here and there does not an NJOY make, it would be a step, and I think the larger goal is worth pursuing.  I think it warrants serious consideration and discussion in the not-too-distant future.

Jon

Ben Forget

Jun 12, 2014, 8:57:09 PM
to Jon Walsh, openm...@googlegroups.com, nels...@umich.edu, sieg...@me.com, Carl Christopher Haugen, Colin Josey
I agree with Jon that we already have multiple modules in the works that could provide capabilities similar to NJOY.  My vision, however, is not to create a separate data processing code; I would rather make OpenMC that data processing code.  The modularity of OpenMC could make it more than just a reactor analysis tool; we could also output processed cross sections.  With Adam's, Colin's, Carl's, and Jon's current work we would have most of the features necessary to generate data for traditional stochastic codes, and with a few additions (fixed source), we could run cases to generate equivalence-in-dilution or subgroup cross sections for deterministic codes (or direct self-shielded cross sections).  The main roadblock I envision would be making the pole representation part of the new GND format; otherwise we would need to implement WHOPPER as part of OpenMC to make a complete package.  There has been some push by ORNL on this for years, but GND might make this easier.  Obviously there is still lots of work left, but the closer we make OpenMC to using the fundamental data instead of pre-processed data, the closer we get to never needing a code like NJOY.

As for integrating Adam's research, I tend to agree with Paul that we shouldn't wait until we remove ACE, since this could take lots of time and we would certainly want to preserve that capability regardless.  Hopefully the merge won't be too complex and will be easily extensible to whatever new format we adopt.

Ben

Adam Nelson

Jun 13, 2014, 3:16:33 PM
to Ben Forget, Jon Walsh, openm...@googlegroups.com, sieg...@me.com, Carl Christopher Haugen, Colin Josey
Hi all,

Wow, this conversation has certainly taken a turn for the better.  I'll start with the easier and more immediate question: does incorporating NDPP now hurt the goal of replacing NJOY & ACE later?  The answer is: not really.  It does incur technical debt, but probably a negligible amount compared to the rest of the changes needed across our ecosystem.  NDPP operates on ACE data and then outputs its own library with only scattering distribution and fission outgoing energy distribution data.  It doesn't necessarily have to operate on ACE; it just does right now because I haven't had a need to do anything else.

OK, on to the now bigger topic: replacing NJOY.  Before I go further, let me make sure I have this straight: the end goal would be a complete open-source system which starts with an evaluated data set (in whatever format, ENDF or GND) and produces the set of libraries that OpenMC uses in its simulations.  Further, OpenMC could be used within this system as a sort of replacement for GROUPR, producing multi-group cross-section sets for other codes.  Do I have this right?

If so, then I'm definitely on board.  This is great for many reasons, my favorite being that this project alone could turn 5-10 PhD students into the next generation of Skip Kahlers and Bob MacFarlanes - something which seems to be very rare in our current generation.

That being said, I do have to admit I am nervous about such a huge task.  The reason is that, a few times while working on NDPP (which you can think of as replicating some functionality of GROUPR), I wasted a lot of time chasing down issues that NJOY had solved in the 90s, but whose solutions were contained in references listed as 'personal communication with R. MacFarlane.'  This isn't to say we can't figure out how to do what NJOY does; we can, but it will be difficult (not to mention that reading the source to figure out what is being done requires someone much more competent than I).

Anyway, I have rambled on long enough about this.  Count me in, and please let me know if there is any way I can help.  If anyone on this email chain will be at Reno next week, we should meet up to talk more about this.
Adam

Paul Romano

Jun 13, 2014, 5:04:58 PM
to Adam Nelson, Ben Forget, Jon Walsh, openm...@googlegroups.com, Andrew Siegel, Carl Christopher Haugen, Colin Josey
It's great to see that I'm not crazy for thinking this needs to happen... either that, or we're all crazy! Some of the points that have been made so far that I strongly agree with:
  • System should be open source
  • System goes from source data (ENDF/GND) to library usable in OpenMC
  • The processed data should be as close as possible to the fundamental data (thanks for suggesting this Ben -- this really resonates with me)
  • Perhaps it hasn't been said, but at this point in time we can all probably agree that the library should be HDF5

With these points, I think we have some agreement on the "what", but there are still many questions about the "how". Some of you may be aware that LLNL already has an open-source ENDF/GND processing code, Fudge, which has some of the important functionality of NJOY (namely resonance reconstruction). This raises two questions in my mind:

  • The GND format is intended to store both evaluated and processed data, so that transport codes can access data directly from GND files. Should OpenMC be able to read GND data directly?
  • If we need any extra processing (e.g. multipole representation, Adam's work), should that be done as an extension of Fudge rather than in a separate processing code?

I'm curious to hear others' thoughts. I think a reasonable first step that we, as a community, can take is to outline all the types of data that we envision OpenMC needing in the future, where the source data comes from, and what processing needs to be done on that data (if any). This will help us come to some consensus on the "how".

Paul

Adam Nelson

Jun 19, 2014, 10:05:49 PM
to openm...@googlegroups.com, nels...@umich.edu, ben.f...@gmail.com, jonathan...@gmail.com, sieg...@me.com, ccha...@mit.edu, cjo...@mit.edu
Hi Paul,

I actually kind of like the idea of wrapping FUDGE, at least for the GND/ENDF access and writing routines.  We can use their other current functionality as well, and replace it with our own as ours matures.  This is further enabled because they said at Reno this week that they will be switching to a BSD-like license soon.  I only prefer wrapping to a straight-up fork because GND won't be finalized until 12/2015, and thus keeping up to date with a fork would require active effort on our part until then.

Finally, I gauged Jaakko Leppanen's interest in leaving NJOY behind.  He is actually quite interested in this work, though he doesn't have the resources to contribute now (or probably in the future).  I personally think it would be good at least to include him (or a member of the Serpent team) in some of the larger design discussions (though maybe not decisions).  I say this because of their large user base and production-level interests.

Jon Walsh

Jun 19, 2014, 11:27:17 PM
to openm...@googlegroups.com, nels...@umich.edu, ben.f...@gmail.com, jonathan...@gmail.com, sieg...@me.com, ccha...@mit.edu, cjo...@mit.edu
Others have a better perspective on where nuclear data is headed, and I hope this doesn't come off as foot-dragging, but I'll throw in a few more of my own cents while this project is still very much in the formative stage:

It seems like everyone agrees that going from evaluated data to a library format of our choosing is the way to go.  Everyone also seems to support being able to read ENDF or GND, again through a means of our choosing.

While FUDGE would allow us to do this, I think caution is warranted in choosing to go with a system that could very well end up being supported predominantly at Livermore.  Their independent streak does not always seem to work in their favor.  LANL seems tied to going the ENDF7 route when the format is announced.  And in the attached paper, Red Cullen makes a compelling argument (in my opinion) for the continued use of text files as a key option for data exchange between organizations, coupled with in-house processing tools that can be as forward-looking as the organization wants.  It would be easier to write off that viewpoint if he weren't right so damn often, even in his later reports.  I think forcing the development of our own processing tools also serves the goal of remaining a great research code.

I very much agree with Adam that we should be careful how we let other organizations influence our decisions about how to move forward.  Hearing perspectives is always good, but, from my point of view, we don't want to give their production-level interests much room in the conversation.  From what I see out of FRENSIE (a neutral-particle MC code from Wisconsin and collaborators, written in C++, with the aim of going open source) - which does not include results, I admit - we will have our hands full keeping our place as a leading testbed for Monte Carlo methods development, if that is the goal.  I certainly hope that it is.

Jon
endfx-cullen-2012.pdf

Paul Romano

Jun 23, 2014, 6:24:23 PM
to Jon Walsh, openm...@googlegroups.com, Adam Nelson, Ben Forget, Andrew Siegel, ccha...@mit.edu, cjo...@mit.edu
Thanks, Adam and Jon, for pitching in. I get the sense that the nuclear data community is moving towards GND as the future format, so, not to discount Red Cullen's opinion, I would say that the view that ENDF6 or an ENDF6-like format will remain the de facto data format into the indefinite future is a minority one; perhaps I could be wrong, though. The development of the GND format did start at LLNL and is very LLNL-centric, but at this point it is under the purview of a WPEC subgroup.

I'm hoping that the library format we choose can be as close to the GND format as possible, if not the GND format itself. A lot of the motivation for creating a separate data format (like ACE) is obviated when you consider what is being proposed for GND. It's supposed to be able to hold all the necessary data for transport/depletion -- cross sections, secondary distributions, probability tables, thermal scattering data, decay data, fission yields. All the data is collected into a single file rather than being split out over several sublibraries (part of the reason handling data for depletion is currently complicated). So when I start thinking about what data we need and the processing required to get that data -- here's how I see it:
  • Pointwise cross sections -- requires resonance reconstruction
  • Secondary distributions -- may require transforming into linear PDFs (my preference would be to do much less processing than is currently done in ACER)
  • Probability tables -- requires generating
  • Thermal scattering -- transforming into conditional PDFs
  • Energy deposition (kermas) -- derived from cross sections, Q values, secondary distributions
All of this data is supposed to be storable in the GND format. What I particularly like is that the file format and the in-memory format (both in Fudge and ultimately OpenMC) are both hierarchical, so reading the data is very logical.
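
Just to illustrate what I mean about the reading being logical when both the file and the in-memory representations are hierarchical, it could look something like this (the group/dataset names are made up for the sake of the sketch, not the actual GND layout):

    import h5py

    # Hypothetical layout: /U235/energy, /U235/reactions/<name>/cross_section, ...
    with h5py.File("library.h5", "r") as f:
        nuc = f["U235"]
        energy = nuc["energy"][...]
        fission_xs = nuc["reactions/fission/cross_section"][...]
        chi = nuc["reactions/fission/chi"][...]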

Anyway, I think the discussion we've begun here could be very fruitful, so perhaps we ought to think about how to formalize the brainstorming we've started.

Paul