I wanted to ask if there are ways to increase the distribution of light without increasing the photon count, since it is computationally expensive to run a simulation where we see real energy values on the order of 10s or 100s of mJ.
Hi Alex,
MCX outputs normalized fluence rate (assuming the input is 1 unit of energy or particle count). No matter how many photons you simulate, the mean values at each voxel should stay roughly the same, matching that of the Green's function.
Therefore, if you want to simulate a source with a higher power (a 100 W light bulb with 1 hour of illumination delivers 360,000 J of total energy), you simply multiply the fluence or energy deposition by the total energy delivered in J, i.e. 3.6x10^5. Because this scaling appears in the source term, the solution is fully linearly scalable.
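For example, a minimal mcxlab sketch of this rescaling might look like the following (the cfg values below are placeholders for illustration, not taken from your setup):

    % minimal sketch (placeholder settings): rescale MCX's normalized output
    cfg.nphoton  = 1e7;
    cfg.vol      = uint8(ones(60,60,60));        % homogeneous 60x60x60 grid
    cfg.unitinmm = 1;                            % 1 mm voxels
    cfg.prop     = [0 0 1 1; 0.005 1 0.9 1.37];  % [mua mus g n] per tissue label
    cfg.srcpos   = [30 30 1];
    cfg.srcdir   = [0 0 1];
    cfg.tstart   = 0; cfg.tend = 5e-9; cfg.tstep = 5e-9;

    flux = mcxlab(cfg);                          % flux.data: normalized fluence rate

    Etotal = 100 * 3600;                         % 100 W x 1 hour = 3.6e5 J delivered
    fluence_physical = flux.data * Etotal;       % linear in the source, so just rescale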
However, I don't think this is what you are trying to fix/improve.
So if I have a phantom that is 20 cm x 20 cm x 20 cm with a dx of 1 mm, this is a 200x200x200 matrix, and using 1e8 photons I am still seeing 0 fluence/energy at the center of my phantom.
From what I can see, your real issue is the need for a large photon count in order to produce statistically meaningful solutions in deep tissue regions. This is always a problem for Monte Carlo because of its stochastic nature. A meaningful MC solution requires a significant number of photons to reach a voxel so that you can calculate a stable mean. But for an optically thick scattering domain like the one you described, this is very difficult - you just don't have enough photons reaching those regions, because they have an extremely low probability of getting there.
The question you should really be asking is whether such simulation is desirable.
MCX (and most other MC simulators) is not designed to handle such a scenario. If you need to simulate an optically thick medium with a depth over 6-10 cm, you should really consider the diffusion model, or a hybrid model that uses MC in the near-source area/void space and diffusion for the rest (see a series of papers from Dr. Simon Arridge's group between 2000 and 2005). There is no stochastic noise in the diffusion solution; however, if the absorption is too high, the limited numerical precision of the linear solver may also produce random noise due to round-off errors. Still, it is much faster than MC.
The rationale for using a 60x60x60 domain with 1 mm^3 voxels in most of my MCX demos is that 6 cm seems to be a good domain size for which MC is still relevant. From the DOT/NIRS device perspective, most systems that need MC have a depth/field-of-view typically smaller than 6 cm. Simulating "sufficient" photons in such a domain is now quite feasible (seconds to tens of seconds) using MCX/MCX-CL if you have a good GPU. Beyond 6-10 cm, however, I think you should seriously consider using diffusion instead. You can do it brute force using MC, but it is not the most efficient method, and the difference it makes is also quite small.
Moreover, the domain's voxel dimensions play a big role in the statistics of the MC solution. As I showed in this simple example,
https://github.com/fangq/mcx/blob/master/mcxlab/examples/demo_qtest_subpixel.m
if you refine your mesh by 2x in the x/y/z dimensions, you should expect 2^6 = 64x more computation in order to produce a solution with comparable stochastic noise in each voxel - an 8-fold increase comes from the higher voxel density, and another 8-fold increase comes from launching more photons to compensate for the reduced voxel volume (and thus fewer photons traversing each voxel).
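As a quick back-of-the-envelope check of that estimate (the baseline photon count below is just an illustration):

    % back-of-the-envelope sketch of the 2x refinement cost described above
    refine        = 2;             % refine the grid 2x along x, y and z
    voxel_factor  = refine^3;      % 8x more voxels in the domain
    photon_factor = refine^3;      % 8x more photons to compensate for the 8x
                                   % smaller voxel volume (fewer photons per voxel)
    total_factor  = voxel_factor * photon_factor;  % 2^6 = 64x more computation

    nphoton_old = 1e8;                             % hypothetical baseline
    nphoton_new = nphoton_old * photon_factor;     % 8e8 photons after refinement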
If you really want to go ahead and run a 20x20x20 cm domain using MC, the only way to get a good solution is to use a larger voxel size; e.g. 5 mm voxels could produce a much nicer solution than 1 mm voxels.
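In mcxlab terms, the coarsening might look like the sketch below (the grid and property values are hypothetical, and the remaining cfg fields are assumed to be set up as in the earlier sketch):

    % sketch (hypothetical values): same 20x20x20 cm physical domain, coarser grid
    cfg.unitinmm = 5;                       % 5 mm voxel edge length
    cfg.vol      = uint8(ones(40,40,40));   % 40 voxels x 5 mm = 20 cm per side
    % mua/mus in cfg.prop stay in 1/mm; MCX converts them to grid units internally
    flux = mcxlab(cfg);                     % each 5 mm voxel now collects ~125x more
                                            % photon paths than a 1 mm voxel would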
But again, if your domain is truly that size and filled with tissue-like diffusive media (mus' ~ 1/mm), MC is not the best choice, unless you can tolerate lengthy run-times like MC in the old days.
Qianqian
I was looking at setting cfg.unitinmm=0.005 and then processing the data later assuming it was 1 instead. However, this leads to incorrect fluence values. So maybe the only thing to do is to get a better GPU and pump the photon count up?
No, you can't do that. Setting cfg.unitinmm=0.005 is equivalent to reducing mua/mus by a factor of 200 (= 1/0.005); you would have changed the domain's optical properties, and the solutions won't be the same.
Hi Fang,
Would it be fine then to set cfg.unitinmm=0.005 and then take my mua and mus and scale them by (1/cfg.unitinmm)? What is this doing on the backend? Are the optical values technically in units of cfg.unitinmm^-1, and is that why we need to scale them?
There are two places where unitinmm is involved. First, the optical properties are scaled before the simulation:
https://github.com/fangq/mcx/blob/etherdome/src/mcx_utils.c#L882-L887
Then, after the computation, the normalization of the output quantity is scaled back using unitinmm:
https://github.com/fangq/mcx/blob/etherdome/src/mcx_core.cu#L2136
That's pretty much all that happens behind the scenes for unitinmm. Essentially, any quantity with a length unit (the MCX default length unit is mm, or 1/mm for the coefficients) needs to be converted to the grid unit before the calculation and converted back afterwards.
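A conceptual sketch of those two steps (a simplified illustration, not the actual MCX source code):

    % simplified illustration of the two unitinmm steps referenced above
    unitinmm = 0.005;             % voxel edge length in mm
    mua_mm   = 0.01;              % user-supplied absorption coefficient, 1/mm
    mus_mm   = 1.0;               % user-supplied scattering coefficient, 1/mm

    % step 1: before the simulation, coefficients given in 1/mm are converted
    % to "per grid step" values by multiplying with the voxel edge length;
    % with unitinmm=0.005 they become 200x smaller than with unitinmm=1, which
    % is why the earlier trick changes the optical properties
    mua_grid = mua_mm * unitinmm;
    mus_grid = mus_mm * unitinmm;

    % step 2: after the simulation, the output normalization is scaled back by
    % unitinmm so the result is reported in mm-based units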
Qianqian
I'll also look into the work from Dr. Arridge's group. My medium is technically only 10 cm^3 for this simulation (I did cut out the unnecessary water background), but the imaging system I am modeling has a 20 cm diameter imaging plane, so in the future I may need the extra space in the simulation.
I am not saying you cannot use MCX for simulating large domains. I just personally prefer not to run lengthy simulations. If you have a good GPU and are willing to wait for a couple of hours, I bet MCX can give a decent solution.
For brain simulations, like in these newly added examples,
https://github.com/fangq/mcx/blob/master/mcxlab/examples/demo_fullhead_atlas.m
https://github.com/fangq/mcx/tree/master/example/colin27
https://github.com/fangq/mcx/blob/master/mcxlab/examples/demo_digimouse_sfdi.m
the domain size can easily be more than 10x10x10 cm, but we generally do not care about the deep brain regions because we know our optical detectors won't be able to see those photons. As a result, we just simulate enough photons to get good SNR in the cortical region and ignore the noisy subcortical regions.
So what matters is not the size of the domain, but the SNR of the ROI that you are interested in studying. If you want good SNR in every part of a large diffusive domain, you should expect to run MCX for minutes to hours, but again, it is totally doable.
Qianqian
I believe I know where my misunderstanding is coming from. I asked before if I could set cfg.unitinmm to 0.005 and assume it is 1 later. This is incorrect, and if I were to do that I would need to scale mua and mus accordingly. However, mua and mus don't have to be scaled if you are just changing cfg.unitinmm, as can be seen in the blood vessels example. Sorry for the misunderstanding.
Glad that this is now clarified. Let me know if you have any further questions.
On Monday, March 25, 2019 at 5:35:19 PM UTC-4, Alex wrote: Since we discussed scaling the optical properties in this thread, I wanted to ask about the skin vessel demo. While it is hard to say exactly without knowing the wavelength, the demo sets cfg.unitinmm to 0.005, yet there is no scaling applied to mua and mus, and the values given for blood seem reasonable for lambda = 532 nm. Using what was said above, would these values need to be reduced by 200 (1/0.005)?