origami UO2 burnup and decay - missing neutrons from alpha-n


Margarita Tzivaki

Apr 10, 2024, 7:10:22 AM
to SCALE Users Group
Hi all,

I am calculating fuel burnup with ORIGAMI and subsequent decay with ORIGEN. Looking at the ORIGEN outputs, I noticed that some reactions I would expect to contribute to the (alpha,n) neutron intensity seem to be missing. In the results by nuclide (over the first couple of years), my main (alpha,n) neutron contributions come from Pu-240, Pu-239, and Am-243. However, I would also expect major contributions from Pu-238, Cm-244, and Am-241. This is confirmed both by results from another code (in-house and older, so not necessarily comparable) and by a cursory literature search.

I am wondering whether I have made an input mistake (forgotten a parameter somewhere for the (alpha,n) calculation?) or whether I am missing something conceptually in the physics. I have tried setting the (alpha,n) medium explicitly to 'case' and to 'uo2', and the cutoff to 0.0, in ORIGEN, but that made no difference.
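(For context, the expectation that Pu-238, Cm-244, and Am-241 should be strong alpha sources follows directly from their short alpha half-lives. A minimal Python sketch, using standard half-life data only; the actual (alpha,n) contributions also depend on the problem-specific inventory and on the alpha-particle yields in the target medium, which are not modeled here.)

```python
import math

AVOGADRO = 6.022e23        # atoms/mol
SECONDS_PER_YEAR = 3.156e7

# Approximate half-lives in years (standard nuclear data).
HALF_LIFE_Y = {
    "pu238": 87.7,
    "pu239": 24110.0,
    "pu240": 6561.0,
    "am241": 432.6,
    "cm244": 18.1,
}

MASS_NUMBER = {"pu238": 238, "pu239": 239, "pu240": 240,
               "am241": 241, "cm244": 244}

def specific_alpha_activity_ci_per_g(nuclide):
    """Alpha decay rate per gram of the pure nuclide, in curies
    (1 Ci = 3.7e10 Bq): A = lambda * N_A / M."""
    lam = math.log(2.0) / (HALF_LIFE_Y[nuclide] * SECONDS_PER_YEAR)  # 1/s
    atoms_per_gram = AVOGADRO / MASS_NUMBER[nuclide]
    return lam * atoms_per_gram / 3.7e10

for nuc in sorted(HALF_LIFE_Y, key=specific_alpha_activity_ci_per_g,
                  reverse=True):
    print(f"{nuc}: {specific_alpha_activity_ci_per_g(nuc):.3g} Ci/g")
```

Cm-244 (~81 Ci/g) and Pu-238 (~17 Ci/g) come out orders of magnitude more alpha-active per gram than Pu-239 or Pu-240, which is why even small inventories of them are expected to contribute strongly in the first years of decay.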

Any help would be great! Many thanks,
Margarita

I am using the following:

------------------------------------------------------------------------------------------------------------------------------
=origami

  title="dwr_uo2_16x16-20_40GWd"

  options{ mtu=1.0              
           ft71=all
           mcnp=yes
           output=all
           stdcomp=yes
           decayheat=yes
           temper=573
  }

  libs=[ "uo2_16x16" ]                
  fuelcomp{
    mix(1){
      dens=10.412
      stdcomp(fuel){
        base=uo2
        iso[ 92234=0.04
             92235=3.4
             92236=0.0
             92238=96.56 ]
      }
    }
  }
 
  modz=[ 0.7274 ]                      
  pz=[ 1.0 ]

  hist[
    cycle{ down=365 }                                
    cycle{ power=32.78 burn=305 nlib=4 down=60.00 }
    cycle{ power=32.79 burn=305 nlib=4 down=60.00 }
    cycle{ power=32.79 burn=305 nlib=4 down=60.00 }
    cycle{ power=32.79 burn=305 nlib=4 down=0 }
  ]                                  

    srcopt{
       print=yes
       sublib=all
       alphan_medium=uo2
       brem_medium=uo2
    }
    ggrp=[ ... ]  
                                                     
    ngrp=[ ... ]  
           
    print{
     nuc{
        total=yes
        units=[ MOLES GRAM-ATOMS GRAMS WATTS G-WATTS WEIGHT_PPM ATOMS_PPM ATOMS-PER-BARN-CM BECQUERELS ]
     }
     ele{
        total=yes
        units=[ MOLES GRAM-ATOMS GRAMS WATTS G-WATTS WEIGHT_PPM ATOMS_PPM ATOMS-PER-BARN-CM BECQUERELS ]

     }
    }
end

=origen
  bounds{
    gamma=[ ... ]
    neutron = [ ... ]
  }

case{
   title="dwr_uo2_16x16-20_40GWd"
   lib{ file="end7dec" pos=1 }
   time{
     units=YEARS
       t=[ 1, 2, 3, 4, 5, 10, 15, 20, 30, 40, 50, 75, 100]
   }
   mat{
     load{
       file="dwr_uo2_16x16-20_40GWd.f71" pos=21
     }
   }
   neutron=yes
   gamma=yes
   save=yes    
 
   print{
     neutron{
       summary=yes
       spectra=yes
       detailed=yes
     }
     
     gamma{
       summary=yes
       spectra=yes
       unbinned_warning=yes
     }
     
     nuc{units=[ MOLES GRAM-ATOMS GRAMS CURIES WATTS G-WATTS M3_AIR M3_WATER WEIGHT_PPM ATOMS_PPM ATOMS-PER-BARN-CM BECQUERELS ] sublibs=all total=yes}
     ele{units=[ MOLES GRAM-ATOMS GRAMS CURIES WATTS G-WATTS M3_AIR M3_WATER WEIGHT_PPM ATOMS_PPM ATOMS-PER-BARN-CM BECQUERELS ] sublibs=all total=yes}
     kinf=yes
   }
}
end



Steve Skutnik

Apr 10, 2024, 10:27:07 AM
to SCALE Users Group
Hi Margarita,

First, what version of SCALE are you using? There was a known issue in the past with (alpha,n) calculations not quite respecting the cutoff correctly in long-term calculations. I don't know whether this affects you here, but it's worth checking.

Next, one thing to note is that by default the (alpha,n) contributors are calculated at the last time step, unless you specify otherwise. Right now in your ORIGEN input, that is going to be at 100 years with the default cutoff.

I ran a modified version of your input using the default w17x17 library in SCALE 6.3.1 and modified your neutron block in ORIGEN as follows:

neutron{
  alphan_medium=CASE
  alphan_cutoff=0.0 %cutoff for alpha,n calculation
  alphan_step=LAST %step index in this case
}


Doing it this way, I observe all of the alpha decays you should expect. Note that I also modified the bounds block in ORIGEN (which uses "neutron" and "gamma" rather than "ngrp" and "ggrp" as ORIGAMI does) and filled in placeholder values.


-Steve

Margarita Tzivaki

Apr 22, 2024, 1:58:25 PM
to SCALE Users Group
Hi Steve,

It took me a while to wrap my head around this, test what is going on, and formulate a clear follow-up question. Thank you for explaining the alphan_step parameter (I have definitely not been using it correctly); I think this really is the cause of my confusion. As far as I can tell, the problem was caused by the last entry of my time input being 1E+6 years, at which point everything is 0. However, this has opened up more questions than I had before.

I want to look at the relative contributions of the inventory nuclides to the neutron intensity from different sources. For (alpha,n) reactions I looked at this table: "=   (alpha,n) neutron intensity by nuclide (neutrons/sec) for case '1' (#1/1)", which (I thought) would give me the contributions over the times I had specified. Experimenting with the alphan_step parameter, I have now noticed that this table changes based on the setting of alphan_step.

As an example, for Am-241 after 5 years of decay I get 1018100 n/s if I set alphan_step to 5 years, but 1017300 n/s if I set alphan_step to 1E+5 years. Not a huge difference, I know, but if I set alphan_step to 1E+6 years I get 0 for all time steps (screenshot attached, since this is really hard to explain).

I am not entirely sure I understand why this is happening. I am inclined to think that the "correct" results are the numbers where alphan_step and the time match (colored in the screenshot), but then I am not sure what the point of this output table is. Since the values at each time do not differ much when I change alphan_step, I may just have hit an edge case where, once a nuclide's contribution is 0, it is set to 0 at all other times as well. This now makes me worry about the interpretation of the results: an entry equal to 0 would not necessarily mean that there is no contribution at a time before alphan_step, just that the specific nuclide I am looking at is gone at the time to which I have set alphan_step.

I hope this makes some sense. Any advice would be much appreciated! At this point I am quite curious to understand why this effect happens.
Margarita
am241.png

Steve Skutnik

Apr 22, 2024, 2:57:12 PM
to SCALE Users Group
Hi Margarita,

To make a long story short: the (alpha,n) time step controls "which" alphas are assumed to be contributing to your problem. So, for example, if you have a decay chain with multiple alpha emitters (e.g., U-238), you will see different alpha emitters / alpha energies as a function of time as the daughter products grow in. For your case, you have Pu-239 => U-235 (with very little decay into Th-231), Cm-244 => Pu-240, and Am-241 => Np-237. Since all of those daughters are very long-lived, their activities will be very small, as will the intrinsic (alpha,n) source term. This, combined with the (alpha,n) cutoff, may explain why you're seeing zero or close to it at very long times. (Again, there is a known issue in SCALE 6.2.3 with the cutoff not quite working right on long time intervals, which has been fixed in 6.2.4.)

So, to answer your question a bit more clearly: this is a bit of an edge case and a known limitation of the (alpha,n) source term calculation: it only looks at the alphas being emitted at alphan_step and then applies that retrospectively backward in time. So, if your initial alpha emitters have disappeared by the time you reach your alphan_step, you will get zero; what's likely happening here is that (a) the original nuclides have all decayed away, and (b) the new alpha sources potentially weren't present above the threshold at previous time steps.

My basic advice here is to be careful when using the (alpha,n) calculation over very long time intervals; consider breaking your problem up into smaller sub-intervals if you're going to be doing something like this. The reason you get (as you observe) the "best" answer when the time step lines up with alphan_step is that you faithfully get all of the alpha emitters contributing at that step. But again, for a typical series of alpha emitters, you'll notice the list doesn't change much over periods of a few years, though it can change profoundly over orders-of-magnitude differences in decay time.
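(The retrospective behavior described above can be illustrated with a toy pure-decay model in Python. This ignores in-growth from parent chains, which ORIGEN does track, and `alphan_table` is a hypothetical stand-in for the by-nuclide output table, not real ORIGEN code: the emitter set is frozen at the reference step, so a nuclide that has fully decayed away by alphan_step reports zero at every printed time.)

```python
import math

def decay_factor(half_life_years, t_years):
    """exp(-lambda*t); clamped to 0.0 for very large t to avoid underflow."""
    x = math.log(2.0) * t_years / half_life_years
    return math.exp(-x) if x < 700 else 0.0

AM241_HALF_LIFE = 432.6  # years

def alphan_table(print_times, alphan_step):
    """Toy model of the (alpha,n)-by-nuclide table: the emitter is kept
    only if it is still present at alphan_step; otherwise every entry
    is reported as zero, even at times when it was actually present."""
    present_at_step = decay_factor(AM241_HALF_LIFE, alphan_step) > 0.0
    return {t: decay_factor(AM241_HALF_LIFE, t) if present_at_step else 0.0
            for t in print_times}

times = [1, 2, 5, 10, 100]
print(alphan_table(times, alphan_step=5))    # nonzero, decaying entries
print(alphan_table(times, alphan_step=1e6))  # all zeros: Am-241 gone by 1e6 y
```

With alphan_step=5 the table follows the real decay curve; with alphan_step=1e6 the whole Am-241 row zeroes out, mirroring the screenshot behavior.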

-Steve

Margarita Tzivaki

Apr 23, 2024, 6:35:05 AM
to SCALE Users Group
Hi Steve,
Thank you for your thorough explanation; this really is quite interesting! I noticed that I forgot to mention in my previous email that I am running SCALE 6.2.4, just in case this is relevant for the record.

I do have one last question: how does the forward calculation work in ORIGEN? I have noticed that with a small alphan_step (like alphan_step=1), the calculated neutron intensities at later times are a slight overestimate of what I get when using the appropriate alphan_step for the time I am looking at. I would be interested in understanding where this comes from. (This is purely curiosity at this point, so I am happy to go read a paper on it if you have something to point me to!)

Again thank you so much for your time.
Margarita

Steve Skutnik

Apr 23, 2024, 10:34:17 AM
to SCALE Users Group
Out of curiosity, are you using:

alphan_cutoff=0.0 %cutoff for alpha,n calculation

I suspect any differences you're seeing in the (alpha,n) source term from later steps are probably due to the contributions of very low-activity alpha emitters, which get included if you base your set of alpha emitters on an earlier time rather than a later one. If you were using the default value of alphan_cutoff, these low-activity emitters would be excluded when alphan_step points at later steps.
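(One way to picture how a relative cutoff interacts with the reference step; this is a sketch of the idea only, not the actual ORIGEN implementation, and the activity values are made up for illustration.)

```python
def select_emitters(activities_at_step, cutoff_fraction):
    """Keep nuclides whose activity at the reference (alphan_step-like)
    time is at least cutoff_fraction of the total alpha activity there.
    cutoff_fraction = 0.0 keeps every nuclide with nonzero activity."""
    total = sum(activities_at_step.values())
    if total == 0.0:
        return set()
    return {nuc for nuc, a in activities_at_step.items()
            if a > 0.0 and a / total >= cutoff_fraction}

# Hypothetical alpha activities (arbitrary units) at two reference times:
early = {"cm244": 80.0, "pu238": 17.0, "am241": 3.4, "pu239": 0.06}
late  = {"cm244": 0.0,  "pu238": 0.0,  "am241": 0.0, "pu239": 0.05}

print(select_emitters(early, 0.0))   # all four emitters kept
print(select_emitters(early, 0.01))  # pu239 dropped by the relative cutoff
print(select_emitters(late, 0.0))    # only pu239 survives at late times
```

With a zero cutoff and an early reference time, the minor emitters stay in the set and carry forward to every printed step; with a nonzero cutoff or a late reference time, they drop out of the whole table.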

-Steve

Margarita Tzivaki

Apr 25, 2024, 11:28:28 AM
to SCALE Users Group
Yes, I am using alphan_cutoff=0.0. I chose this because I was initially under the impression that the cutoffs were the issue; I assume it would be a good idea to use the default here. Is there a resource you could suggest that goes into the details of the actual mechanics of the (alpha,n) calculation?
Thank you!
Margarita

Steve Skutnik

Apr 26, 2024, 3:20:07 PM
to SCALE Users Group
The actual mechanics of the (alpha,n) calculation are described in Section 5.1.3 of the SCALE manual.

Section 5.1.5 may also be of interest, as it discusses the specifics of how the cutoffs get applied.