hi Kris,
when you said "shape mismatch when reading the mcx output", were you referring to the shape of the detected photon data array when loading from the output file?
if so, what format did you ask mcx to save, and what function call did you use for loading the data?
if you use mcx in the command line, I suggest you use "-F jnii" because the output is a JSON file with explicit subfields for detected photons, making it easier to process.
if you use .mch, the loaded data must be properly parsed in order to understand which column of the data corresponds to what information you requested in the -w flag.
for a simple test, I tried both mcxlab and the mcx command line with -w xv and -X 1, and the output looks fine to me:
./mcx --bench cube60 -w xv -X 1 -F jnii
cat cube60_detp.jdat | perl -pe 's/"[^"]{1000,}"/"..."/g'
{
"MCXData": {
"Info": {
"Version": 1,
"MediaNum": 2,
"DetNum": 4,
"ColumnNum": 6,
"TotalPhoton": 1000000,
"DetectedPhoton": 2984,
"SavedPhoton": 2984,
"LengthUnit": 1,
"SeedByte": 0,
"Normalizer": 200,
"Repeat": 1,
"SrcNum": 1,
"SaveDetFlag": 48,
"TotalSource": 1,
"Media": [{
"mua": 0,
"mus": 1.19209289550781e-07,
"g": 1,
"n": 1
}, {
"mua": 0.00499999988824129,
"mus": 1,
"g": 0.00999999977648258,
"n": 1.3700000047683716
}, {
"mua": 0.0020000000949949,
"mus": 5,
"g": 0.89999997615814209,
"n": 1
}]
},
"PhotonData": {
"p": {
"_ArrayType_": "single",
"_ArraySize_": [2984, 3],
"_ArrayZipType_": "zlib",
"_ArrayZipSize_": 8952,
"_ArrayZipData_": "..."
},
"v": {
"_ArrayType_": "single",
"_ArraySize_": [2984, 3],
"_ArrayZipType_": "zlib",
"_ArrayZipSize_": 8952,
"_ArrayZipData_": "..."
}
}
}
}
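in case it helps, here is roughly how those fields can be read back in python (just a minimal sketch, assuming the python jdata and numpy modules are installed; jd.load() should decode the annotated arrays into numpy arrays automatically):

import jdata as jd

data = jd.load('cube60_detp.jdat')
detp = data['MCXData']['PhotonData']

print(detp['p'].shape)   # should be (2984, 3): exit positions requested by the "x" flag
print(detp['v'].shape)   # should be (2984, 3): exit directions requested by the "v" flag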
Qianqian
can you update your jdata and bjdata python modules using the below command
python3 -m pip install --upgrade --force jdata bjdata
and rerun jd.load()? if it still gives you an error, please upload your .jdat or .bnii file to a shared drive such as Dropbox or Google Drive and let me debug the issue.
hi Kris,
I was able to reproduce the issue on my side.
it appears that you had added -Z 2 in the command line. this flag tells mcx not to apply any compression; because the output is JSON, a base64 encoding is required by default to convert binary data to text, so this results in double base64 encoding, which confuses the decoder.
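for context, this is roughly what the decoder has to do for each annotated array in the JSON above (just an illustration of the single-base64 path, not the actual jdata implementation; the helper name and the row-major assumption are mine): one base64 decode, then the declared decompression, then a cast to the declared type. if the payload has been base64-encoded twice, the zlib step receives base64 text instead of a zlib stream and fails.

import base64, zlib
import numpy as np

def decode_jdata_array(meta):
    # meta is a dict like the "p" or "v" record in the JSON above
    raw = base64.b64decode(meta['_ArrayZipData_'])      # text JSON stores the bytes as base64
    if meta.get('_ArrayZipType_') == 'zlib':
        raw = zlib.decompress(raw)                      # undo the declared compression
    dtype = {'single': np.float32, 'double': np.float64}[meta['_ArrayType_']]
    # assuming the default row-major serialization
    return np.frombuffer(raw, dtype=dtype).reshape(meta['_ArraySize_'])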
I created a ticket for this bug
https://github.com/fangq/mcx/issues/219
and was able to fix it with the below patch
https://github.com/fangq/mcx/commit/ea67ea901c8be1e8bce5ce333bf133c9f84af31c
my recommendation is to use a -Z compression method that is not base64. if compression time is a concern, you can try lz4 (-Z 5), which is expected to be 20x faster than zlib.
the nightly build packages are being recompiled at this moment, please check the nightly build folder later and test (or recompile mcx on your side). unfortunately, the github CI builds are failing right now because of the ubuntu launchpad outage:
https://github.com/fangq/mcx/actions/runs/8836179945/job/24262240824
Qianqian
hi Kris,
saving dref (-X) requires the volume to have a layer of 0 padded right outside of the boundary where you want to receive dref readings. the built-in benchmark cube60 does not have this padded 0 layer, thus no dref output. you can manually add this 0 layer by using --json:
mcx --bench cube60 -w xv -X 1 -F jnii --json '{"Shapes":[{"ZLayers":[1,2,0]}]}'
although you will also need to move the detector z positions up by 1 voxel to avoid warnings.
Qianqian
hi Kris,
as I mentioned in a recent thread posted by Seonyeong,
https://groups.google.com/g/mcx-users/c/GBDXxezzKEw/m/1v2gFsgBAwAJ
in non-label-based media types, you can't set a voxel to 0 to indicate background; instead, you must set either mua or mus (or both) to NaN to indicate the background medium.
because dref is only saved in the background voxels right next to non-zero voxels, you must pad a layer of NaNs in the mua/mus float arrays in order for mcx to save dref data.
I loaded your vol.bin file, and I could not see 0 or NaN in the volume, so I assume that was the cause.
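as a rough illustration (not taken from your script; the 60x60x60 size, the mua value and the padded face are placeholders), padding such a NaN background layer onto a float32 mua volume with numpy could look like this:

import numpy as np

# hypothetical 60x60x60 continuous mua volume (non-label based, float32)
mua = np.full((60, 60, 60), 0.005, dtype=np.float32)

# pad one voxel of NaN on the z=0 face so that layer becomes background;
# dref can then be recorded in these NaN voxels next to the tissue voxels
mua = np.pad(mua, ((0, 0), (0, 0), (1, 0)),
             mode='constant', constant_values=np.nan)
print(mua.shape)   # (60, 60, 61)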
Qianqian
hi Liam and Kris,
I was able to load the volume again and can confirm that the NaN layer does exist.
To debug this issue, I recompiled the mcx binary in debug mode (make debug), and ran a single photon packet (-n 1) with either -b 0 or -b 1 using your command; here are the full logs of the first photon running in the two conditions (b1.log and b0.log).
I am not yet sure what was wrong, but I want to mention a few things I noticed while looking into this.
the first thing is that when you set -b 0 to disable reflection, it disables the reflection not only at the outer boundary, but throughout the entire domain, including between interior voxels. I see you have defined n per voxel; optically, modeling light reflection between spatially heterogeneous voxels versus ignoring it can be a very different problem. but I am not certain this is the case here: when I set the refractive index to a constant 1.3 across the domain, I also saw very different reflection behavior.
if you diff the two log files, the first 186 lines are the same; they start to differ when the photon moves from voxel [76 29 19] to [76 29 20], where the per-voxel refractive index n is different in the two voxels. Therefore, in the -b 1 case, reflection/refraction calculations are invoked while in the -b 0 case they are not. From what I can tell, in the -b 1 case most photons exit from other surfaces, but with -b 0, most of those are reflected back.
again, I do not fully understand what has caused this difference - whether it is expected when considering vs. ignoring the n-mismatch, or whether it is a bug in the code. I just want to share this so perhaps you could also read the logs, or use the debug mode to investigate this in parallel. you could also plot the trajectories and see why photons are mostly forward-scattered with -b 1 but not with -b 0. I suspect that there might be something related to the boundaries of the low-mua/low-mus voxels (mua=1e-5, mus=1e-5).
Qianqian