Reduce Processing Time


Arielle

Mar 8, 2014, 2:25:37 PM
to scale-us...@googlegroups.com
I'm running this SCALE input over a range of H/U ratios and a range of water densities for mixture 6.  It's taking around 715 minutes per run for a single cylinder sitting on a slab of concrete with a small sphere inside the cylinder.  I'm wondering whether I could do something differently to reduce the time these runs take; I don't think they should take this long for a fairly simple model.  The input file is as follows:

'Input generated by GeeWiz SCALE 6.1 Compiled on Tue Sep  6 15:23:32 2011
=csas6
config2.2.a.5b - HU4 - .9 water density
v7-238
read composition
 uf6         1 den=3.093685 1 293
                                 92235 15
                                 92238 85   end
 h2o         2 den=0.9982063 1 293   end
 carbonsteel 3 1 293   end
 o           4 0 0.03336 293   end
 f           4 0 0.01668 293   end
 u-235       4 0 0.00101 293   end
 u-238       4 0 0.00733 293   end
 h           4 0 0.03336 293   end
 reg-concrete 5 1 300   end
 h2o         6 den=0.9982063 0.9 293   end
end composition
read celldata
  latticecell squarepitch fuelr=13.37859 4 gapr=61.2775 1 cladr=62.8396 3 hpitch=69.1896 6 end
end celldata
read parameter
 tme=5000
 gen=61600
 npg=2500
 nsk=1600
 htm=yes
 sig=0.0001
end parameter
read geometry
global unit 1
com="48y cylinder+water"
 xcylinder 1  61.2775   311.15        0
 xcylinder 2  62.8396   311.15        0
 ellipsoid 3     29.2  61.2775  61.2775  chord +x=0   origin  x=311.15 y=0 z=0
 ellipsoid 4     29.2  61.2775  61.2775  chord -x=0
 ellipsoid 5  30.7975  62.8396  62.8396  chord +x=0   origin  x=311.15 y=0 z=0
 ellipsoid 6  30.7975  62.8396  62.8396  chord -x=0
 cuboid 9      342      -31       72      -72  99.1986  -169.1986
 cuboid 10      342      -31       72      -72        0     -100   origin  x=0 y=0 z=-69.1896
 xcylinder 12  69.1896  341.9475  -30.7975
 hole 2   origin  x=323.5294 y=0 z=0
 media 1 1 1
 media 3 1 2 -1
 media 1 1 3
 media 3 1 5 -3
 media 2 1 9 -10 -12
 media 1 1 4
 media 3 1 6 -4
 media 5 1 10
 media 6 1 -2 -5 -6 12
 boundary 9
unit 2
com="uo2f2 sphere"
 sphere 1  13.37859
 media 4 1 1
 boundary 1
end geometry
read bnds
  body=9
    surface(1)=mirror
    surface(2)=mirror
    surface(3)=mirror
    surface(4)=mirror
    surface(5)=vacuum
    surface(6)=vacuum
  end bnds
end data
end

Arielle

Mar 10, 2014, 10:08:10 AM
to scale-us...@googlegroups.com
I tried using KENO3D to calculate the volumes for the runs, but that didn't reduce the run time much.  I then decided to shorten the run to 30000 generations and a standard deviation of only 0.001.  Comparing the keff plot for a standard deviation of 0.0001 (definitely converged) with the keff plot for a standard deviation of 0.001 indicates that keff is converging.  With the standard deviation set to 0.001, convergence occurs at around 4800 generations with the first 1600 skipped.  That is compared to running all generations (30000 or 61600) with the same number of skipped generations when the standard deviation is set to 0.0001.  So instead of taking 715 minutes, a run takes about an hour.
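For reference, here is the revised parameter block (everything else in the input is unchanged):

read parameter
 tme=5000
 gen=30000
 npg=2500
 nsk=1600
 htm=yes
 sig=0.001
end parameter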

I thought about switching from CENTRM to 2REGION, but I don't think I can, given my model.  Does anyone have any other suggestions?

William BJ J. Marshall

Apr 9, 2014, 11:07:31 AM
to scale-us...@googlegroups.com
Arielle - I'm sorry I didn't see this over the last month, but I hadn't been checking the "general discussions" area.

First, I'd increase NPG to something more like 10,000.  This should allow you to discard fewer generations, as each generation will be a better estimate of the actual reactivity.  Also, if the UO2F2 sphere is driving reactivity, you could use a starting source guess (read start ... end start) to get to source convergence faster: start type 6 allows you to specify points at which to start histories in the first generation, so you could specify a few points in and around that sphere (in global coordinates) to accelerate convergence; see the sketch below.  Since you're skipping about a third of the total generations you're running (1600 of ~4800), you should look at ways of reducing this.  Once the source converges faster, you can reduce NSK, and the total number of generations is reduced by the same amount.  It seems unlikely that you'd need to skip 1600 generations for a fairly simple model like this, so cutting that down to 50-100 would save about 1500 generations, or about a third of the problem (20 minutes, if my math is right).
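Here is a minimal sketch of the combined changes.  The start points are illustrative only: one is at the sphere's hole origin from your geometry, the other is an arbitrary point inside the UF6 fill of the cylinder.  Check the start data section of the KENO manual before trusting my syntax; the last LNU entry must equal NPG.

read parameter
 tme=5000
 gen=30000
 npg=10000
 nsk=100
 htm=yes
 sig=0.001
end parameter
read start
 nst=6
' first half of the histories start at the sphere center, the rest in the uf6 fill
 tfx=323.5294  tfy=0.0  tfz=0.0  lnu=5000
 tfx=150.0     tfy=0.0  tfz=0.0  lnu=10000
end start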

The change in sigma is a good one.  Since uncertainty is proportional to 1/sqrt(N) and run-time scales linearly with N (the number of particles run), loosening sig from 0.0001 to 0.001 cuts the number of histories needed by a factor of 10^2 = 100 in theory.  In your case the sig=0.0001 run never actually reached that uncertainty; it was terminated by GEN after all 61600 generations, so the observed speedup is set by the generation counts instead, and ~an hour compared to ~715 minutes is just what those counts predict.
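Working the numbers explicitly (in LaTeX notation; the generation counts are the ones from your posts):

\[
\frac{N_{0.001}}{N_{0.0001}} = \left(\frac{0.0001}{0.001}\right)^{2} = \frac{1}{100}
\quad \text{(theoretical limit, if both runs stopped on sigma)}
\]
\[
\frac{61600\ \text{generations}}{4800\ \text{generations}} \approx 12.8,
\qquad
\frac{715\ \text{min}}{12.8} \approx 56\ \text{min} \approx 1\ \text{hour}.
\]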

Switching to 2REGION would likely accelerate the cross-section processing, but that step probably accounts for only about five minutes of the 60 to 715 minute run times.
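If you do want to try it at some point, my understanding (worth checking against the list of parm= options in the CSAS6 manual chapter) is that the change is a single keyword on the sequence line, with the rest of the input unchanged:

=csas6 parm=2region
config2.2.a.5b - HU4 - .9 water density
v7-238
...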

An option that would increase speed dramatically would be to switch to KENO V.a.  You'd have to change the ellipsoidal heads to hemispherical ones, or just use a right circular cylinder, but the run-time reduction is probably on the order of a factor of 4; a rough sketch of the geometry is below.
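As an untested illustration only: in KENO V.a each region is shape, mixture, bias ID, then dimensions, each region must fully enclose the previous one, and the sphere becomes a unit placed with a positional hole.  The dimensions below are carried over from your KENO-VI deck with the cylinder extended to cover the heads.  I'm writing the syntax from memory, so check it against the KENO V.a manual.

=csas5
' compositions and cell data carry over from the csas6 input
...
read geometry
unit 1
com='uo2f2 sphere'
 sphere    4 1 13.37859
global unit 2
com='48y cylinder approximated as a right circular cylinder'
 xcylinder 1 1 61.2775   340.35    -29.2
 hole 1  323.5294 0.0 0.0
 xcylinder 3 1 62.8396   341.9475  -30.7975
' water, the concrete slab, and the outer boundary would follow as cuboids
end geometry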

If you're still struggling with this a month after the original post and would like me to look at something, I can probably do that.  If, on the other hand, you've settled on something workable and are well along in your study, even better.

B.J.