Fixes #1369
https://github.com/SyneRBI/SIRF/pull/1370
@evgueni-ovtchinnikov pushed 1 commit.
I think the best strategy is to just use GibbsRelativeDifferencePenalty everywhere. It should be backwards compatible with RelativeDifferencePrior. @ColomboMatte0 is that correct? (we don't need the surrogate stuff here AFAIK)
It is backwards compatible with all the methods implemented in RelativeDifferencePrior; we do not have SPS stuff for the RDP.
@evgueni-ovtchinnikov pushed 1 commit.
@evgueni-ovtchinnikov pushed 1 commit.
@evgueni-ovtchinnikov pushed 1 commit.
@ColomboMatte0 can you suggest a test for this functionality?
@evgueni-ovtchinnikov pushed 1 commit.
@ColomboMatte0 could you please suggest a Python test for this functionality? Thank you, E.
Sorry, I lost the previous notification. Do you mean a test for the Hessian diagonal? I am not really sure what you mean by "Hessian gradient".
Yes. For the diagonal (not the gradient). Not sure if we have one in STIR that could be replicated?
I remember I tested it numerically using finite differences of the gradient, but those tests were probably not added in my PR. I can add them tomorrow; it shouldn't be a big deal.
$$ H_{ii} \approx \frac{\nabla_i f(x + \epsilon e_i) - \nabla_i f(x - \epsilon e_i)}{2\epsilon} $$
You can test the correctness by computing this; let me know if you want me to implement it somewhere.
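For illustration, the central-difference check above can be sketched as follows. This is a minimal NumPy sketch using a toy smooth function (with gradient $\nabla_i f = x_i^3$, so the exact Hessian diagonal is $3x_i^2$) standing in for the actual prior; the function names and tolerances are my own choices, not part of the SIRF or STIR API.

```python
import numpy as np

# Toy gradient standing in for the prior's gradient:
# f(x) = sum(x_i^4)/4, so grad_i f = x_i^3 and H_ii = 3*x_i^2 exactly.
def grad(x):
    return x ** 3

def hessian_diag_central(grad_fn, x, eps=1e-3):
    """Central finite difference of the gradient:
    H_ii ~ (grad_i f(x + eps*e_i) - grad_i f(x - eps*e_i)) / (2*eps)."""
    H = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xm = x.copy()
        xp[i] += eps
        xm[i] -= eps
        H[i] = (grad_fn(xp)[i] - grad_fn(xm)[i]) / (2 * eps)
    return H

x = np.array([0.5, 1.0, 1.5])
numerical = hessian_diag_central(grad, x)
analytical = 3.0 * x ** 2
# Central difference has O(eps^2) truncation error, so the agreement
# should be very tight for a smooth function.
```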
Good idea. I suggest using
$$ H_{ii} \approx \frac{\nabla_i f(x + \epsilon e_i) - \nabla_i f(x)}{\epsilon} $$
which is a bit easier to implement and also avoids potential issues with the RDP, which is only defined on non-negative images.
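A minimal sketch of this forward-difference variant, again on a toy gradient (here $\nabla_i f = x_i^2$, exact Hessian diagonal $2x_i$) rather than the real RDP; names and the tolerance are illustrative assumptions, not SIRF API.

```python
import numpy as np

# Toy gradient standing in for the prior's gradient:
# f(x) = sum(x_i^3)/3, so grad_i f = x_i^2 and H_ii = 2*x_i exactly.
def grad(x):
    return x ** 2

def hessian_diag_forward(grad_fn, x, eps=1e-4):
    """One-sided (forward) finite difference of the gradient:
    H_ii ~ (grad_i f(x + eps*e_i) - grad_i f(x)) / eps.
    Only ever evaluates at x and x + eps*e_i, so it stays in the
    non-negative domain when x >= 0 and eps > 0."""
    g0 = grad_fn(x)
    H = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        H[i] = (grad_fn(xp)[i] - g0[i]) / eps
    return H

x = np.array([0.5, 1.0, 2.0])
numerical = hessian_diag_forward(grad, x)
analytical = 2.0 * x
# Forward difference has O(eps) truncation error, so the tolerance in a
# real test needs to be looser than for the central difference.
```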
I doubt @ColomboMatte0 will have the time for this, unfortunately. Shouldn't be too hard to build on our current "gradient of a function" tests. Just one more loop...
I can do it. But shall I open a new quick PR for STIR? Or where exactly do you want me to add the test?
Here: src/xSTIR/pSTIR/tests/tests_qp_lc_rdp.py?
@ColomboMatte0 yes, that's a good place, please go ahead with a quick PR. Thanks a lot!
@KrisThielemans pushed 1 commit.
Test failures:

```
src/xSTIR/pSTIR/tests/tests_qp_lc_rdp.py:97: in Hessian_diagonal_test
    test.check_if_less(abs(numerical - analytical) / max_H, 0.01)
...
E   ValueError: +++ test 29 failed: np.float32(0.011938471) >= 0.01
```

Bounds like this are always very hard to set. @ColomboMatte0 what do you think? (Did you run it locally?)
Yes, I tested it locally on a dataset and it worked on my end. I suspect the CI runs the test on a different dataset. A quick workaround could be to slightly increase the tolerance for this specific test, given that we are using a one-sided derivative.
@KrisThielemans I ran the test locally with the CI dataset and reproduced the failure: the one-sided first-order finite difference has O(ε) truncation error, which exceeds 1% on some voxels.
Two options:
Option A: relax the tolerance to 2% (already in #1386).
Option B: use a second-order one-sided finite difference, keeping the 1% tolerance:
$$ H_{ii} \approx \frac{-3\,\nabla_i f(x) + 4\,\nabla_i f(x + \epsilon e_i) - \nabla_i f(x + 2\epsilon e_i)}{2\epsilon} $$
This is O(ε²) instead of O(ε), still one-sided (safe for RDP with non-negative images), but requires one extra gradient evaluation per test voxel.
Let me know which you prefer and I'll update the PR.
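To make Option B concrete, here is a small NumPy sketch of the second-order one-sided formula on a toy gradient ($\nabla_i f = e^{x_i}$, so the exact Hessian diagonal is also $e^{x_i}$); function names, the test function, and the step size are illustrative assumptions, not the actual PR code.

```python
import numpy as np

# Toy gradient with a non-constant Hessian, so the truncation order
# actually matters: grad_i f = exp(x_i), exact H_ii = exp(x_i).
def grad(x):
    return np.exp(x)

def hessian_diag_2nd_order(grad_fn, x, eps=1e-3):
    """Second-order one-sided difference of the gradient:
    H_ii ~ (-3*g_i(x) + 4*g_i(x + eps*e_i) - g_i(x + 2*eps*e_i)) / (2*eps).
    Still only evaluates at x and points shifted in the positive
    direction, so it is safe for priors defined on non-negative images."""
    g0 = grad_fn(x)
    H = np.empty_like(x)
    for i in range(x.size):
        x1 = x.copy()
        x2 = x.copy()
        x1[i] += eps
        x2[i] += 2 * eps
        H[i] = (-3 * g0[i] + 4 * grad_fn(x1)[i] - grad_fn(x2)[i]) / (2 * eps)
    return H

x = np.array([0.0, 0.5, 1.0])
numerical = hessian_diag_2nd_order(grad, x)
analytical = np.exp(x)
# Truncation error is O(eps^2), i.e. roughly eps**2 * |f'''| / 3 here,
# versus O(eps) for the plain forward difference.
```

Note the cost: two extra gradient evaluations per perturbed voxel instead of one, which is the trade-off mentioned above.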
Thanks. Although I like the second-order formulation, I think it is probably not so easy to get right with fast-changing priors such as the RDP around zero. The main aim here is to have a consistency check, and indeed a code check (as indeed you found some issues!).
I'll therefore accept your PR.
Obviously, if we don't have this in STIR yet, it would be useful to add it there as well (sorry for the duplication!). Feel free to try the 2nd-order one there :-)
Thanks again.
@KrisThielemans pushed 1 commit.