The results become wrong, as shown in the left figure below: the keff calculated by OpenMC decreases faster than that of the deterministic code. However, if I divide both Fuel.volume and Integrator.power_density by 8, the results are more reasonable, as shown on the right below (although still wrong). How can this happen when the total power is then only 1/64 of what it was before? Does cutting some of the fuel pins affect the evolution of the nuclide densities in those pins?
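For context, here is a stripped-down sketch of how these two quantities enter my depletion setup (a single pin cell here, not my actual assembly; the chain file name and all numbers are placeholders):

```python
import openmc
import openmc.deplete

# Fuel material: its .volume is the quantity I divide (or not) by 8.
fuel = openmc.Material(name='Fuel')
fuel.add_element('U', 1.0, enrichment=3.1)
fuel.add_element('O', 2.0)
fuel.set_density('g/cm3', 10.3)
fuel.depletable = True
fuel.volume = 3.14159 * 0.4**2  # total fuel volume in this toy model [cm^3 per cm height]

water = openmc.Material(name='Water')
water.add_element('H', 2.0)
water.add_element('O', 1.0)
water.set_density('g/cm3', 0.74)

# Minimal 2D pin-cell geometry, just so the sketch is complete.
pin = openmc.ZCylinder(r=0.4)
box = openmc.model.RectangularPrism(1.26, 1.26, boundary_type='reflective')
fuel_cell = openmc.Cell(fill=fuel, region=-pin)
water_cell = openmc.Cell(fill=water, region=+pin & -box)
geometry = openmc.Geometry([fuel_cell, water_cell])

settings = openmc.Settings()
settings.batches = 50
settings.inactive = 10
settings.particles = 1000

model = openmc.Model(geometry=geometry, settings=settings)

# diff_burnable_mats=True splits the fuel into one material per instance.
op = openmc.deplete.CoupledOperator(model, chain_file='chain.xml',
                                    diff_burnable_mats=True)
integrator = openmc.deplete.PredictorIntegrator(
    op, timesteps=[30.0, 30.0], timestep_units='d',
    power_density=40.0)  # W per gram of heavy metal; the value I divide by 8
integrator.integrate()
```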
Regards,
Yue
Thank you, Andrew. If I understand you correctly, with “diff_burnable_mats” on, I can’t assign a single “total volume” to the fuel material in this case. If there are two cells with different volumes in a geometry, then even though the material filling the two cells is the same, I still need to define two materials, for example material_1.volume = 10 and material_2.volume = 40?
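In code, I guess that would look roughly like this (composition and density below are just placeholders; only the two volumes come from the example above):

```python
import openmc

def make_fuel(name, volume):
    """Identical fuel composition, but each copy carries its own volume."""
    m = openmc.Material(name=name)
    m.add_element('U', 1.0, enrichment=3.1)
    m.add_element('O', 2.0)
    m.set_density('g/cm3', 10.3)
    m.depletable = True
    m.volume = volume
    return m

material_1 = make_fuel('fuel_small_cell', volume=10.0)  # cm^3
material_2 = make_fuel('fuel_large_cell', volume=40.0)  # cm^3

# Each cell is then filled with its own copy of the fuel:
# cell_1 = openmc.Cell(fill=material_1, region=...)
# cell_2 = openmc.Cell(fill=material_2, region=...)
```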
Sorry, the two bottom pictures have swapped positions. I posted several times, but the layout was never correct. I think Google should add a “preview” feature.
I’m also surprised that dividing both the volume and the power density by 8 gave better agreement. Maybe I should not divide the total volume by 8? I built the second model by directly cutting a 17*17 lattice with a triangular prism, not by defining a smaller lattice, so whether I cut it or not, the volume of each pin should still be V/(17*17), right?
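One way I could check this is a stochastic volume calculation on the cut model and then compare the result against V/(17*17) times the number of pins kept. A sketch is below, assuming `fuel` and `settings` are the material and settings objects of that model; the bounding box and sample count are placeholders:

```python
import openmc

# Stochastic estimate of the fuel volume actually present in the cut model.
vol_calc = openmc.VolumeCalculation(
    [fuel],                         # fuel material from the cut model
    samples=10_000_000,
    lower_left=(0.0, 0.0, -1.0),    # placeholder bounding box [cm]
    upper_right=(10.7, 10.7, 1.0))

settings.volume_calculations = [vol_calc]
settings.export_to_xml()
openmc.calculate_volumes()

vol_calc.load_results('volume_1.h5')
print(vol_calc.volumes[fuel.id])    # compare with V/(17*17) * number_of_pins_kept
```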
My last question is why dividing the model by 8 doesn’t reduce the standard deviation. I ran a simulation of a single state point (thus no depletion) for both models. The number of particles and batches is the same for the two models (200*20000), but there is no obvious difference between the standard deviations of the eigenvalue.
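To compare them, one can simply read the combined keff and its standard deviation from each statepoint file, roughly like this (the paths below are placeholders for my two runs):

```python
import openmc

# Combined k-effective (nominal value +/- one standard deviation) for each run.
for label, path in [('full assembly', 'full/statepoint.200.h5'),
                    ('1/8 cut',       'cut/statepoint.200.h5')]:
    sp = openmc.StatePoint(path)
    print(label, sp.keff)   # ufloat; called k_combined in older OpenMC versions
```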
I will retry with a manually constructed 1/8 model instead of directly cutting a complete assembly.
Regards,
Yue.