Fwd: FW: equilibrium parameter


Chris Hayward

Feb 17, 2013, 8:53:46 AM
to sunri...@googlegroups.com
Dear Taysun,

Sorry for the delayed reply. In the future, please send such questions to the mailing list because 1. you may get a faster response and 2. others can benefit from the discussion.

> I found in equilibrium.h that

> 130  /** The tolerance for determining when the iteration has
> 131      converged. Since the calculation works by emitting the delta of
> 132      the cell luminosities in each iteration, the luminosity in the
> 133      grid will decrease over time as luminosity escapes the
> 134      volume. When the dust luminosity in the grid has decreased to
> 135      tolerance*L_initial, the iteration stops. */

> I guess the dust luminosity decreased to some small number doesn't mean that
> the temperature of the dust goes to zero, right?

Correct -- the dust temperature does not go to zero. In each iteration, some luminosity is emitted by the dust, and some fraction of that is reabsorbed and re-emitted. (The difference of the SEDs is reemitted; see below.) Eventually, when the dust temperature converges, the luminosity left in the grid is a small fraction of the initial luminosity, and the iteration stops.
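The behavior described in the equilibrium.h comment can be caricatured in a few lines. This is purely a toy illustration of the decay of the grid luminosity, not the Sunrise source; in particular, the reabsorbed fraction is assumed constant here, whereas in reality it varies per cell and per iteration:

```cpp
#include <cassert>

// Toy sketch of the iteration scheme described in the equilibrium.h
// comment: each pass, part of the dust luminosity escapes the volume and
// the rest is reabsorbed and re-emitted. The loop stops once the
// luminosity remaining in the grid falls below tolerance * L_initial.
int iterations_to_converge(double l_initial, double reabsorbed_fraction,
                           double tolerance) {
    double l_in_grid = l_initial;
    int n = 0;
    while (l_in_grid > tolerance * l_initial) {
        l_in_grid *= reabsorbed_fraction;  // the remainder escapes the grid
        ++n;
    }
    return n;
}
```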

> Could you possibly explain a little in more detail what 'emitting the delta of the cell luminosities' means?

In each step of the iterative calculation of the dust temperature, the full SED of the dust is not emitted. Rather, the difference between the current SED and the previous SED (they change because of the difference in the dust T) is emitted. This is discussed in detail in Section 2.4 of Jonsson, Groves, and Cox (2010).


> Also, I believe these lines are determining the convergence

> 443       // if the change is greater than the tolerance, the cell is
> 444       // not converged. However, because we are
> 445       if(abs(abs_ratio-1.0)>tolerance)
> 446         ++nc_notconv;

> and it seems that when the absorption rate by dust doesn't evolve much in the last two iterations,
> the program will stop computing further convergence.
> For instance, if I choose tolerance = 0.5, would it mean that the accuracy of the absorption rate is 50%?
> When comparing the resultant spectra with tolerance=0.2 and 0.5,
> the difference was negligible, so I guess I am probably missing something.
> I appreciate if you can give some more explanations on the convergence parameter.

The specific meaning of this criterion is described on the wiki (https://code.google.com/p/sunrise/wiki/McrxConfigAndOutputFileFormat). Put succinctly, it is the maximum amount by which the luminosity in each (sufficiently bright) cell can change in an iteration. Whether this parameter affects the resulting SED depends on the simulation. First, only wavelengths of a few microns and longer are (potentially) affected. Furthermore, if dust self-absorption is negligible, the dust temperatures and SED will already be quite accurate once the primary dust heating from stars is calculated. However, if you have very high optical depths in some regions, I would be surprised if the SED did not change appreciably when you go from a tolerance of 0.5 to 0.2. If you want to discuss this further, please send your full sfrhist and mcrx output and parameter files.
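The quoted check can be read as: a cell counts as converged when the ratio of its absorption rate in this iteration to the previous one is within `tolerance` of unity. A minimal standalone paraphrase of those quoted lines (my own sketch, not the actual equilibrium.h code) might look like:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Count the cells that fail the convergence test. abs_ratio is the ratio
// of a cell's absorption rate between successive iterations; a cell is
// converged when that ratio is within `tolerance` of 1.
int count_unconverged(const std::vector<double>& abs_ratios,
                      double tolerance) {
    int nc_notconv = 0;
    for (double abs_ratio : abs_ratios)
        if (std::abs(abs_ratio - 1.0) > tolerance)
            ++nc_notconv;
    return nc_notconv;
}
```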

Cheers,

Chris



--
Chris Hayward
Heidelberger Institut für Theoretische Studien
Schloss-Wolfsbrunnenweg 35
69118 Heidelberg, Germany
Google Voice: +1 (617) 744-9416
Office: +49 6221 533 284
Fax: +49 6221 533 298
http://www.cfa.harvard.edu/~chayward

Patrik Jonsson

Feb 17, 2013, 11:47:00 AM
to sunri...@googlegroups.com
On Sun, Feb 17, 2013 at 5:53 AM, Chris Hayward <ccha...@gmail.com> wrote:
> Dear Taysun,
>
> Sorry for the delayed reply. In the future, please send such questions to
> the mailing list because 1. you may get a faster response and 2. others can
> benefit from the discussion.
>
>> I found in equilibrium.h that
>
>> 130 /** The tolerance for determining when the iteration has
>> 131 converged. Since the calculation works by emitting the delta of
>> 132 the cell luminosities in each iteration, the luminosity in the
>> 133 grid will decrease over time as luminosity escapes the
>> 134 volume. When the dust luminosity in the grid has decreased to
>> 135 tolerance*L_initial, the iteration stops. */

Hmm. That comment refers to the *old* way of calculating the
equilibrium, which is also what Chris is describing. In Sunrise v4,
the luminosity in the grid never changes during the calculation. (In
fact, one of its chief advantages is that it conserves luminosity
exactly.) Please see the announcement in the discussion group:
https://groups.google.com/forum/?fromgroups=#!topic/sunrisemcrx/gFJgui5Wkvs

>> and it seems that when the absorption rate by dust doesn't evolve much in
>> the last two iterations,
>> the program will stop computing further convergence.
>> For instance, if I choose tolerance = 0.5, would it mean that the accuracy
>> of the absorption rate is 50%?
>> When comparing the resultant spectra with tolerance=0.2 and 0.5,
>> the difference was negligible, so I guess I am probably missing something.
>> I appreciate if you can give some more explanations on the convergence
>> parameter.

No, you are correct. The key thing to realize is that *every cell* above
the threshold set by "ir_luminosity_percentile" is required to converge
to this point. Because the noise in a cell depends on how many rays pass
through it, convergence will generally be held up by cells that are
either small or sit in a region of low intensity. These are also the
cells that contribute little to the output. The details depend on the
particulars of your geometry and grid refinement. It is thus perfectly
possible for the integrated SED to change very little at a tolerance of
50% even though individual cells change by more; the sum over all cells
will of course always have much lower variance than individual cells.

Then, as Chris mentioned, if your problem is optically thin, no
iteration is necessary to get the right result regardless of your
settings. (It will still iterate a bit, though, to get the number of
rays up and to determine convergence.)

cheers,

/Patrik

mohammad safarzadeh

Feb 17, 2013, 4:41:40 PM
to sunri...@googlegroups.com
So I have this question then:
If the ir_tol parameter (0.2 vs. 0.5) determines the number of iterations, and, as Patrik said, the cells that are hardest to converge are the small ones or those in low-intensity regions, because few rays pass through them (am I right?),
then it seems that increasing the ir_tol parameter will make the simulation run longer, because those cells are hard to converge?
But we don't care much about those cells. Can we change the code so that it focuses on the regions that have high intensity and contribute more to the overall SED?




--
You received this message because you are subscribed to the Google Groups "Sunrise" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sunrisemcrx...@googlegroups.com.
To post to this group, send email to sunri...@googlegroups.com.
Visit this group at http://groups.google.com/group/sunrisemcrx?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.



Chris Hayward

Feb 17, 2013, 4:51:29 PM
to sunri...@googlegroups.com
Decreasing, not increasing, the ir_tol parameter makes the simulation run longer because you tolerate less Monte Carlo error (i.e., you require more rays to have 10% error than you do to have 20% error). Note also that 0.1 is the recommended value for this parameter, but you should always check whether the parameter choices ensure convergence for your specific situation.

We care about the cells that are emitting most of the luminosity. Thus, if you want to check the effects of allowing increasingly more cells to not satisfy the convergence criterion, you can increase the ir_luminosity_percentile keyword that Patrik mentioned. The default value is 0.01, so you ignore the least luminous 1%. You can see whether increasing this to, e.g., 5% affects your results significantly.
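As a hedged sketch of how such a percentile cutoff could work (I am not certain whether Sunrise ranks cells by count or by cumulative luminosity; this illustration simply excludes the dimmest fraction of cells from the convergence test):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only, not the Sunrise implementation: sort the per-cell
// luminosities and return the luminosity below which the dimmest
// `percentile` fraction of cells falls. Cells below this cutoff would be
// ignored when deciding convergence.
double luminosity_cutoff(std::vector<double> cell_l, double percentile) {
    std::sort(cell_l.begin(), cell_l.end());
    std::size_t n_skip =
        static_cast<std::size_t>(percentile * cell_l.size());
    return cell_l[std::min(n_skip, cell_l.size() - 1)];
}
```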

Cheers,

Chris

mohammad safarzadeh

Feb 17, 2013, 5:04:08 PM
to sunri...@googlegroups.com
Right,
Can I get a sense of the range of luminosities in the cells?
For example, how luminous is a cell at an ir_luminosity_percentile of 0.01?
Or can I get a histogram of the cell luminosities?
I think this IR luminosity is a SUNRISE output and not part of the hydro simulation snapshot, right?
And is the number of cells given by this line in the sfrhist output?

Task 0 created 38508 leaf cells, should be 38508
Allocating a memory block for 38508 cell data objects, 6.75723 MB.

Or, when I look at the yt exporter for Enzo, I see this:

refinement tree # of cells 591601, # of leaves 517651?

Patrik Jonsson

Feb 17, 2013, 6:01:55 PM
to sunri...@googlegroups.com
Well, you are always free to change the code as you see fit.

That said, it seems the luminosity percentile setting should do
exactly what you want. My point was that at *any* percentile setting,
you'll be limited by the lowest luminosity cells that are included,
which means the change in the integrated SED will always be less than
the tolerance setting.

The radiation intensity in the cells is saved in the INTENSITY HDU.

cheers,

/Patrik

mohammad safarzadeh

Feb 17, 2013, 8:46:28 PM
to sunri...@googlegroups.com
Is the radiation intensity in the cells an output of SUNRISE, or can we find out from yt, for example, how much radiation intensity there is in a cell?
I think the radiation intensity in a cell is solely determined by its distance from the star particles, right?
So we should be able to get it before the mcrx run?
Do two adjacent cells, one optically thin and the other optically thick, have the same intensity?

Matthew Turk

Feb 17, 2013, 9:06:19 PM
to sunri...@googlegroups.com
On Sun, Feb 17, 2013 at 8:46 PM, mohammad safarzadeh
<mtsafa...@gmail.com> wrote:
> radiation intensity in the cells are outputs of SUNRISE or can we know out
> of YT for example that how much is the radiation intensity in a cell?

Right now there's no way to get that information back in. Conceivably
if it were in a FITS file in the same order as the export, this could
be done. Recent (last week or so, but also in the upcoming 2.5
release this week) versions of yt include the ability to save data
that is generated in memory to a 'sidecar' file that preempts data in
the original output.

What this means is that if you could perform the octree export
operation in reverse, you could place data values back into the grids
in yt. This could then be saved out and examined later. I don't
think the reversal operation is implemented, but ChrisM might have an
idea how to do that.

Patrik Jonsson

Feb 17, 2013, 9:38:59 PM
to sunri...@googlegroups.com
Seeing as the *purpose* of Sunrise is to calculate the radiation
intensity in the cells (which is what determines dust temperature),
you can't know it without running it. The only situation where you can
find out the intensity just from knowing the sources is if there's no
dust, but then there's no point in running it anyway...

mohammad safarzadeh

Feb 18, 2013, 1:45:23 AM
to sunri...@googlegroups.com
If we don't know a priori which cells are the less luminous ones, since that depends on the dust etc., then how does the program determine which cells are, for example, the least luminous 1%?
To know which cells are luminous, it has to run the simulation and see what happens to the cells.
Then what is the point of setting that parameter? No matter what percentile we choose, we first have to run the simulation for all of the cells, so it does not speed up the run. Or am I missing some points here.

Patrik Jonsson

Feb 18, 2013, 1:56:27 AM
to sunri...@googlegroups.com
This is why the solution is iterative. Every iteration, you get an
estimate of the luminosity in all cells. The percentile only decides
which cells are used to determine if another iteration is necessary
based on the current estimate. I encourage you to read the Fleck &
Cummings paper that the method is based on to get an idea of how it
works.
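The iteration structure Patrik describes might be sketched roughly like this (an illustration under my own assumptions, not the mcrx implementation): every pass refines the per-cell luminosity estimates, and only the cells above the percentile cutoff enter the decision of whether another iteration is needed.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Decide whether another iteration is needed: cells below `cutoff` (a
// luminosity threshold, e.g. set from ir_luminosity_percentile) are
// skipped; every remaining cell must have its luminosity ratio between
// successive iterations within `tolerance` of unity.
bool converged(const std::vector<double>& current,
               const std::vector<double>& previous,
               double cutoff, double tolerance) {
    for (std::size_t i = 0; i < current.size(); ++i) {
        if (current[i] < cutoff)
            continue;  // dim cells do not enter the convergence decision
        double ratio = current[i] / previous[i];
        if (std::abs(ratio - 1.0) > tolerance)
            return false;
    }
    return true;
}
```

Note how a dim, noisy cell (below the cutoff) cannot hold up the iteration, which is exactly the role of the percentile parameter in the discussion above.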




Taysun Kimm

Feb 19, 2013, 9:18:42 AM
to sunri...@googlegroups.com
I see. I understand where I was confused.
Thanks for your detailed explanations, Chris and Patrik.

Best,
Taysun