Dear Prof. Fang,
When simulating a (simplified) single fiber reflectance spectroscopy situation, I noticed the detector also collected a bunch of unexpected photons with high pathlengths and scatter counts (see figure).
The same holds true when I use the z-boundary as a detector. I've included an example code snippet below (without considering launch/collection angles for simplicity). This effect was not observed in mmclab. I started to observe strange photon statistics in a more complicated script, but I think it has to do with this effect. If you could provide an explanation or correction, it would be greatly appreciated.
Kind regards,
Robin
Script:
clear all; close all; clc
cfg.nphoton = 1e7;
cfg.vol = uint8(ones(60, 60, 60));
cfg.gpuid = 1;
cfg.autopilot = 1;
cfg.tstart = 0;
cfg.tend = 5e-9;
cfg.tstep = 5e-9;
cfg.unitinmm = 1;
% Source
cfg.issrcfrom0 = 1;
cfg.srctype = 'disk';
cfg.srcparam1(1) = 1;
cfg.srcpos=[30 30 0];
cfg.srcdir=[0 0 1];
cfg.isreflect = 0;
% Medium
cfg.prop = [0 0 1 1; 0.1 25 0.9 1.37];
% Detector
cfg.savedetflag = 'dspvx';
cfg.maxdetphoton = 1e7;
cfg.detpos = [30 30 0 1];
% cfg.bc = 'aaaaaa001000';
cfg.bc = 'aaaaaa000000';
[fluence,det,vol,seeds,traj] = mcxlab(cfg);
figure;
subplot(211)
histogram(det.nscat)
subplot(212)
histogram(det.nscat)
xlim([2.6e4 3e4])
ylim([0 2.5e4])
% Removing any photons outside defined radius
% r_max = cfg.srcparam1(1); % Define max radius
% r_tmp = sqrt(sum((det.p(:,1:2)-cfg.srcpos(1:2)).^2,2)); % Make array with distances from detector
% det.data(:,r_tmp>r_max) = []; % Drop outside radius
% det.nscat(r_tmp>r_max,:) = [];
%
% figure;
% histogram(det.nscat)

hi Robin,
we took a look at this issue today and were able to reproduce the hump.
we created a github issue at https://github.com/fangq/mcx/issues/222 and were able to fix it with the following commit
https://github.com/fangq/mcx/commit/b41c9154375aa8edab226d6ceb0be9cb104199de
as you can see in my reply, this issue was caused by a previous commit made on Aug. 3, 2023, specifically, a single-line change
- if (p->w >= 0.f) {
+ if (fabsf(p->w) > 0.f) {
the rationale behind this change was that when a photon packet propagates so long that its float32 weight drops to exactly 0.0 (very rare), it should skip the diffuse-reflectance summation, as it can no longer contribute. however, the above commit also excluded such packets from being stored.
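To see how a packet's float32 weight can underflow to exactly 0.0, here is a toy Python sketch (not MCX code; the per-step halving is an artificial attenuation factor chosen so the example terminates quickly) that emulates float32 rounding with the stdlib struct module:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE-754 float32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# attenuate a unit-weight packet by an (artificial) factor of 0.5 per
# scattering event, rounding to float32 after every step, and count the
# steps until the weight underflows to exactly 0.0
w = f32(1.0)
steps = 0
while w > 0.0 and steps < 100000:
    w = f32(w * 0.5)
    steps += 1

# 1.0 = 2^0 halves down through the subnormals to 2^-149, then the next
# halving rounds (ties-to-even) to exactly 0.0 at step 150
print(steps, w)  # 150 0.0
```

In double precision the weight would merely become denormal much later; in float32, once it hits exact 0.0 the packet can no longer contribute to any weighted quantity, which is the premise of the original commit.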
from my test, about 82 photon packets out of the 3811706 packets arriving at the detector were excluded - most of these have extremely long paths and extremely high scattering counts, but again, they are very few in number (0.002%). however, missing such photon packets appears to skew the longer tail of the nscat distribution.
by reverting > 0.f back to >= 0.f, the hump disappeared, along with a few other smaller humps that are only visible on a log-y scale. the corrected nscat distribution now looks like the attached figure.
on the other hand, I would argue that this hump has a marginal impact on your data analysis even if it were not fixed. when you process detected-photon data, you must take each photon's weight into account: you cannot directly average or sum these quantities, because the detected photons carry very different weights. that is why our provided utility functions compute a weighted average or weighted sum
https://github.com/fangq/mcx/blob/master/utils/mcxmeanscat.m
https://github.com/fangq/mcx/blob/master/utils/mcxmeanpath.m
when you account for the long paths of these high-nscat photons, their impact is numerically close to 0 (because their weights are nearly 0 - or exactly zero at limited precision).
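As a toy illustration of this point (plain Python with made-up numbers, not the mcxlab utilities, which derive each weight from the partial pathlengths), a weighted mean makes the contribution of a near-zero-weight, high-nscat packet negligible:

```python
# toy detected-photon data: four "normal" packets plus one pathological
# packet with a huge scattering count but a vanishing weight
nscat  = [12.0, 15.0, 11.0, 14.0, 28000.0]
weight = [0.20, 0.25, 0.30, 0.25, 1e-20]

# naive unweighted mean: badly skewed by the pathological packet
unweighted = sum(nscat) / len(nscat)

# weighted mean: the 1e-20 weight suppresses it almost entirely
weighted = sum(n * w for n, w in zip(nscat, weight)) / sum(weight)

print(unweighted)  # 5610.4
print(weighted)    # ~12.95
```

The same weighting applies to ppath and any other per-photon quantity; unweighted histograms are fine for spotting anomalies like the hump, but quantitative summaries should always be weight-corrected.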
anyhow, please wait for the automated CI builds to complete, then try the updated binaries at
https://mcx.space/nightly/github/
Qianqian
--
You received this message because you are subscribed to the Google Groups "mcx-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mcx-users+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/mcx-users/cf696750-a790-4d18-9970-cd8b84773889n%40googlegroups.com.
I had a further discussion on this issue with one of my students today. it seems that my previous statement that "only a small fraction of photons are affected" was not entirely accurate.
when a 0-weight photon was skipped previously, not only did we fail to record that photon's partial-path/nscat data, but we also failed to reset the shared-memory buffer used to accumulate these metrics. As a result, the photon launched immediately after such a packet had wrong (overestimated) ppath/nscat values.
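The bug pattern can be mimicked in a few lines of Python (a schematic stand-in for the CUDA kernel, not MCX code): if the "save detected photon" branch is the only place the per-thread accumulation buffer is cleared, skipping the save leaks the previous photon's totals into the next one:

```python
def run_photons(photons, reset_even_if_skipped):
    """Each photon is (true_path, weight); returns the recorded paths.

    Mimics a per-thread accumulation buffer that should be cleared
    between photons; the buggy variant only clears it when the photon
    is actually saved (weight > 0).
    """
    buffer = 0.0          # per-thread path accumulator
    recorded = []
    for true_path, weight in photons:
        buffer += true_path              # accumulate while propagating
        if weight > 0.0:
            recorded.append(buffer)      # save detected photon
            buffer = 0.0                 # reset on save
        elif reset_even_if_skipped:
            buffer = 0.0                 # the fix: always reset
    return recorded

photons = [(10.0, 0.5), (500.0, 0.0), (12.0, 0.4)]
print(run_photons(photons, False))  # buggy: [10.0, 512.0]
print(run_photons(photons, True))   # fixed: [10.0, 12.0]
```

The third photon's true path is 12, but the buggy variant records 512 because the skipped 0-weight photon's 500-unit path was never cleared, which is exactly the kind of overestimated ppath/nscat described above.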
Although only 82 out of 3811706 detected packets had 0 weights, there could be more such photons that never land on the detector; therefore the affected photon count may not be as small as I previously suggested - as indicated by the rather noticeable hump in your shared histogram.
anyhow, I hope this fix also solved the skewed statistics that you had previously observed. To ensure the buffer is reset regardless of weight, I restructured the code in a follow-up commit
https://github.com/fangq/mcx/commit/026eebfa55fc64ecd255331f2bb9a69a6912c920
on a related note - mcx supports Russian roulette, but by default it is disabled, as I was unsure whether this setting could introduce statistical bias into some of the metrics (such as the TPSF, ppath and nscat). In your sample code, if I enable Russian roulette using the setting below (triggering Russian roulette when the weight drops below 1e-5)
cfg.minenergy=1e-5;
the simulation runs about 4x faster than without Russian roulette, but when I plot the histogram of nscat on a log-y scale, I do see that this setting, often enabled in other MC simulators, produces a rather noticeable truncation effect
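For readers unfamiliar with the technique, here is a minimal Russian-roulette sketch in Python (illustrative only; the threshold and survival probability are arbitrary, and this is not MCX's implementation). Once a packet's weight falls below a threshold, it is either terminated or survives with its weight boosted by 1/p, which keeps the mean weight unbiased while truncating the long low-weight tail of nscat:

```python
import random

def roulette(weight, threshold=1e-5, survive_prob=0.1, rng=random):
    """Apply Russian roulette to one packet weight.

    Returns the new weight: 0.0 if the packet is terminated, otherwise
    weight/survive_prob, so the expected weight is unchanged:
    survive_prob*(weight/survive_prob) + (1-survive_prob)*0 == weight.
    """
    if weight >= threshold:
        return weight                  # roulette not triggered
    if rng.random() < survive_prob:
        return weight / survive_prob   # survivor carries the lost weight
    return 0.0                         # terminated: no further nscat growth

rng = random.Random(42)
weights = [2e-6] * 10000               # packets below the threshold
after = [roulette(w, rng=rng) for w in weights]
survivors = [w for w in after if w > 0.0]

print(survivors[0])             # == 2e-06 / 0.1, boosted survivor weight
print(sum(after) / len(after))  # ~2e-06: total weight preserved on average
```

This also suggests why the truncation appears: roughly 90% of the low-weight packets, precisely the ones that would have accumulated the largest nscat values, are terminated outright, so the tail of the histogram is cut off even though weighted quantities remain unbiased.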

Qianqian