[SyneRBI/SIRF] Poisson Log Likelihood ObjectiveFunction `gradient` with and without `out` parameter has larger difference than expected (Issue #1349)


Edoardo Pasca

Oct 28, 2025, 9:44:00 AM
to SyneRBI/SIRF, Subscribed
paskino created an issue (SyneRBI/SIRF#1349)

With a Poisson log-likelihood objective function one can calculate the gradient at a given image x by calling gradient. If a pre-initialised buffer is passed via the out parameter, it is filled with the result of the gradient calculation; if no out parameter is passed, the function allocates a new buffer and returns the result in it. However, comparing these two gives a larger difference than expected:

    import numpy
    import sirf.STIR as pet

    # acquired_data and x (an image) are assumed to be defined earlier
    obj_fun = pet.make_Poisson_loglikelihood(acquired_data)
    g1 = obj_fun.gradient(x)
    g2 = g1 * 0
    obj_fun.gradient(x, out=g2)
    numpy.testing.assert_allclose(g2.asarray(), g1.asarray(), atol=1e-5)
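For float32 results from a multithreaded backend, a fixed absolute tolerance of 1e-5 may simply be too strict. A comparison scaled to the gradient's magnitude is a sketch of an alternative; `close_rel`, `g1` and `g2` below are hypothetical stand-ins (plain numpy arrays, not the SIRF objects):

```python
import numpy as np

def close_rel(a, b, rel=1e-4):
    """Compare arrays relative to their overall magnitude rather than
    with a fixed absolute tolerance."""
    return np.linalg.norm(a - b) <= rel * np.linalg.norm(b)

# Stand-ins for g1.asarray() and g2.asarray(): equal up to a perturbation
# proportional to each entry, as float32 rounding differences would be.
rng = np.random.default_rng(0)
g1 = (rng.random(1000) * 1e4).astype(np.float32)
g2 = g1 + (np.abs(g1) * 1e-6).astype(np.float32)

print(close_rel(g2, g1))                       # True: differences scale with magnitude
print(np.allclose(g2, g1, rtol=0, atol=1e-5))  # False: absolute 1e-5 is too strict here
```

The point of the sketch is only that an atol-based check penalises large-magnitude voxels, whereas an rtol-style check does not.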
[attachment: image.png]


Reply to this email directly, view it on GitHub, or unsubscribe.
You are receiving this because you are subscribed to this thread. Message ID: <SyneRBI/SIRF/issues/1349@github.com>

Kris Thielemans

Oct 28, 2025, 9:55:06 AM
KrisThielemans left a comment (SyneRBI/SIRF#1349)

Could there be a float<->double difference? STIR works with floats. I don't think we're converting, though, but @evgueni-ovtchinnikov can confirm.



Evgueni Ovtchinnikov

Oct 28, 2025, 10:06:21 AM
evgueni-ovtchinnikov left a comment (SyneRBI/SIRF#1349)

@paskino have you checked the dtype of g1.asarray() and g2.asarray()?
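That check is a one-liner on the returned arrays. A minimal sketch with stand-in arrays (the real ones would come from g1.asarray() and g2.asarray(); with STIR working in single precision, float32 would be expected on both sides):

```python
import numpy as np

# Hypothetical stand-ins for g1.asarray() and g2.asarray(); a mismatch
# like the one below would by itself explain precision-level differences.
g1_arr = np.zeros((4, 4), dtype=np.float32)
g2_arr = np.zeros((4, 4), dtype=np.float64)

print(g1_arr.dtype, g2_arr.dtype)    # float32 float64
print(g1_arr.dtype == g2_arr.dtype)  # False
```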



MaxTousss

Feb 14, 2026, 4:31:15 PM
MaxTousss left a comment (SyneRBI/SIRF#1349)

Greetings,

While running some tests for the PETRIC2 challenge, I may have stumbled upon a similar problem. After some investigation, it seems that the backward operator is slightly unstable. One can observe this by using the BSREM2 algorithm provided in SIRF contrib (here self is the BSREM2 instance for a given PETRIC2 dataset) and computing

    np.sum((self.subset_gradient(self.x, 0) - self.subset_gradient(self.x, 0)).asarray().astype('float')**2)
    > 0.0013965924207424846

and

    np.sum((self.acq_models[0].backward(self.data[0]) - self.acq_models[0].backward(self.data[0])).asarray().astype('float')**2)
    > 2.0470940317509713e-06

(The values were obtained from one of the datasets provided in PETRIC2 and mainly serve to show the amplitude, i.e. the slight difference, of the problem.)

Note that my attempt to use the out feature of the backward method produced erratic results. For example,

    b1 = self.acq_models[0].backward(self.data[0])
    b2 = self.acq_models[0].backward(self.data[0])
    self.acq_models[0].backward(self.data[0], out=b2)
    self.acq_models[0].backward(self.data[0], out=b1)
    np.sum((b2 - b1).asarray().astype('float')**2)
    > 1.0141001191048331e-05
    self.acq_models[0].backward(self.data[0], out=b1)
    np.sum((b2 - b1).asarray().astype('float')**2)
    > 2187614105.957749

This was tested in the docker image provided for the PETRIC2 challenge.
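The blow-up after the second out=b1 call is what one would see if the backend accumulated into out instead of overwriting it. A hypothetical sketch of the two semantics in plain numpy (not the SIRF API; whether SIRF actually accumulates is exactly what this thread is trying to establish):

```python
import numpy as np

def backward_overwrite(data, out):
    out[...] = data  # numpy-style convention: out is replaced
    return out

def backward_accumulate(data, out):
    out += data      # suspected behaviour: out keeps its old content
    return out

data = np.ones(4, dtype=np.float32)

b = np.zeros(4, dtype=np.float32)
backward_overwrite(data, b)
backward_overwrite(data, b)
print(b)  # [1. 1. 1. 1.]: repeated calls are idempotent

b = np.zeros(4, dtype=np.float32)
backward_accumulate(data, b)
backward_accumulate(data, b)
print(b)  # [2. 2. 2. 2.]: each call adds on top, so reusing the buffer inflates the result
```

Under the accumulate semantics, a buffer that already holds a back-projection and receives a second one grows roughly by a factor of two, which matches the huge squared difference reported above.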



Evgueni Ovtchinnikov

Feb 18, 2026, 10:58:23 AM
evgueni-ovtchinnikov left a comment (SyneRBI/SIRF#1349)

@paskino @casperdcl Just as I suspected, this is a typical problem with parallel computation.

I added these lines to osem_reconstruction.py after line 112:

    import numpy
    x = image
    obj_fun.set_up(x)
    g1 = obj_fun.gradient(x)
    g2 = g1 * 0
    obj_fun.gradient(x, out=g2)
    d = g2 - g1
    print(numpy.linalg.norm(d.asarray()))
    exit()

Each time I ran it I got a slightly different norm of d.asarray(), e.g.:

0.017808886
0.015998868

After exporting OMP_NUM_THREADS=1, I got 0.0 every time.



Kris Thielemans

Feb 18, 2026, 2:13:06 PM
KrisThielemans left a comment (SyneRBI/SIRF#1349)

I agree we'll get variations due to multithreading (though it's hard to know how large these are from the numbers above, as we don't know the norm of g1, of course).

However, wasn't the main problem observed by @MargaretDuff that we have to first fill g2 with 0? I have no idea what the usual numpy convention is, but I suspect numpy would overwrite.



Casper da Costa-Luis

Feb 19, 2026, 5:13:08 PM
casperdcl left a comment (SyneRBI/SIRF#1349)

Wait, what do you mean by multithreading? Are we not using atomic operations where appropriate to ensure consistent results?



Kris Thielemans

Feb 19, 2026, 5:23:35 PM
KrisThielemans left a comment (SyneRBI/SIRF#1349)

Calling the parallelproj backprojection via STIR will likely (i.e., I didn't check) do some adding of backprojected images on the CPU via OpenMP. It will, of course, also do things in parallel on the GPU. Both avoid race conditions via atomic operations. That doesn't mean the results are consistent, though, once you work in floats: floating-point addition is not associative, so a+b+c, c+b+a and a+c+b can all give (slightly) different results.

For example, if you need to add a list of numbers of varying magnitude, you get the best numerical precision by adding the smallest ones first.
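This can be demonstrated in a few lines of numpy: with float32, merely changing the order in which three numbers are added changes the result, which is exactly what different thread schedules do to a parallel reduction.

```python
import numpy as np

a = np.float32(1e8)
b = np.float32(1.0)
c = np.float32(-1e8)

# float32 carries ~7 decimal digits, so b vanishes when rounded into a.
print((a + b) + c)  # 0.0: b is lost in the first addition
print((a + c) + b)  # 1.0: the large terms cancel first, so b survives
```

A multithreaded sum effectively picks one of these orderings at random on every run, hence the run-to-run variation in the gradient norms above.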



Edoardo Pasca

Feb 20, 2026, 6:27:49 AM

Closed #1349 as completed.



Kris Thielemans

Feb 20, 2026, 6:45:54 AM
KrisThielemans left a comment (SyneRBI/SIRF#1349)

Are you sure this can be closed? What about:

However, wasn't the main problem observed by @MargaretDuff that we have to first fill g2 with 0? I have no idea what the usual numpy convention is, but I suspect numpy would overwrite.



Kris Thielemans

Feb 20, 2026, 6:48:20 AM

Reopened #1349.



Kris Thielemans

Feb 20, 2026, 6:48:21 AM
KrisThielemans left a comment (SyneRBI/SIRF#1349)

see https://discord.com/channels/1242028164105109574/1252651306779152535/1458465128117833728



Evgueni Ovtchinnikov

Feb 20, 2026, 8:22:23 AM
evgueni-ovtchinnikov left a comment (SyneRBI/SIRF#1349)

@KrisThielemans see my understanding of Margaret's problem, suggested on 08/01/2026 at 17:44.



Kris Thielemans

Feb 20, 2026, 8:31:29 AM
KrisThielemans left a comment (SyneRBI/SIRF#1349)

Do you mean #1349 (comment)? That is about numerical precision; Margaret's problem doesn't look like that.



Evgueni Ovtchinnikov

Feb 20, 2026, 8:54:21 AM
evgueni-ovtchinnikov left a comment (SyneRBI/SIRF#1349)

@KrisThielemans no, I refer specifically to my answer to @MargaretDuff of 08/01/2026 at 17:44. She never replied, so I take it she found my suggestions plausible.



Kris Thielemans

Feb 20, 2026, 9:01:04 AM
KrisThielemans left a comment (SyneRBI/SIRF#1349)

Please quote it here.



Evgueni Ovtchinnikov

Feb 20, 2026, 9:39:33 AM
evgueni-ovtchinnikov left a comment (SyneRBI/SIRF#1349)

For each i, x_out is overwritten, showing us only what subset i of detector pairs "sees". My understanding is that the images of the data subsets should be summed up, accumulating in x_tmp. We would get a better picture if you printed the norms of x_out and x_tmp for each i.
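A plain-numpy sketch of the accumulation described above; subset_images, x_out and x_tmp are hypothetical stand-ins for the per-subset back-projected images, not the SIRF objects:

```python
import numpy as np

num_subsets = 4
rng = np.random.default_rng(0)
# Stand-ins for what each subset i of detector pairs "sees".
subset_images = [rng.random((8, 8)) for _ in range(num_subsets)]

x_tmp = np.zeros((8, 8))
for x_out in subset_images:
    # x_out alone is only one subset's view; the full image is the sum.
    x_tmp += x_out

# The accumulated image dominates every individual subset image.
print(np.linalg.norm(x_tmp) > max(np.linalg.norm(s) for s in subset_images))  # True
```

If x_out were simply overwritten on each iteration, the final result would be the last subset's view only, which is the failure mode suspected here.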


