Follow up from CCPPETMR/Hackathon-SIRF#13
addresses requests from @evgueni-ovtchinnikov
https://github.com/CCPPETMR/SIRF/pull/237
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub, or mute the thread.
@evgueni-ovtchinnikov approved this pull request.
@evgueni-ovtchinnikov commented on this pull request.
In examples/Python/PET/acquisition_model.py:
```diff
@@ -150,6 +150,16 @@ def main():
     back_projected_image_as_array = back_projected_image.as_array()
     show_2D_array('Backprojection', back_projected_image_as_array[z,:,:])
+
```
remove trailing blanks here
@evgueni-ovtchinnikov commented on this pull request.
```diff
@@ -1193,6 +1201,18 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
+    def get_linear_acquisition_model(self):
+        am = type(self)()
+        am.set_up( self.acq_templ, self.img_templ )
+        return am
+    def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
+        return self.get_linear_acquisition_model().forward(image, \
+            subset_num=subset_num, \
+            num_subsets = num_subsets, \
+            ad = ad)
+    def adjoint(self, ad, subset_num = 0, num_subsets = 1):
+        return self.backward(ad, subset_num = subset_num,
```
remove trailing blanks here
@paskino commented on this pull request.
```diff
@@ -1193,6 +1201,18 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
+    def get_linear_acquisition_model(self):
+        am = type(self)()
+        am.set_up( self.acq_templ, self.img_templ )
+        return am
+    def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
+        return self.get_linear_acquisition_model().forward(image, \
+            subset_num=subset_num, \
+            num_subsets = num_subsets, \
+            ad = ad)
+    def adjoint(self, ad, subset_num = 0, num_subsets = 1):
+        return self.backward(ad, subset_num = subset_num,
```
I should have
@KrisThielemans requested changes on this pull request.
Some minor things.
You need to also add some notes to CHANGES.md and to the UserGuide.md. In the latter, it might be best to add a CIL compatibility section.
@mehrhardt, I thought `direct` is not the same as `forward`, i.e. it should be just the linear part? (not use the background term)
In examples/Python/PET/acquisition_model.py:
```diff
@@ -151,6 +151,16 @@ def main():
     back_projected_image_as_array = back_projected_image.as_array()
     show_2D_array('Backprojection', back_projected_image_as_array[z,:,:])
+    acq_model.direct(image, 0, 4, simulated_data)
```
Please add a comment saying what `direct` does. Non-obvious terminology for most of us.
In examples/Python/PET/acquisition_model.py:
```diff
@@ -151,6 +151,16 @@ def main():
     back_projected_image_as_array = back_projected_image.as_array()
     show_2D_array('Backprojection', back_projected_image_as_array[z,:,:])
+    acq_model.direct(image, 0, 4, simulated_data)
+
+    # show simulated acquisition data
+    simulated_data_as_array_direct = simulated_data.as_array()
+    show_2D_array('Direct projection', simulated_data_as_array_direct[0,:,:])
+    back_projected_image_adj = acq_model.adjoint(simulated_data, 0, 4)
```
Empty line above, and add a comment about `adjoint`.
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -388,6 +388,16 @@ def axpby(a, x, b, y):
             (a.real, a.imag, x.handle, b.real, b.imag, y.handle)
         return z;
+    def copy(self):
+        '''alias of clone'''
+        return self.clone()
+    def conjugate(self):
+        '''Returns the complex conjugate of the data '''
```
seems to be modifying the object itself as well. Is this intentional? If so, needs to be in the docstring
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -1202,6 +1212,10 @@ def backward(self, ad):
             (self.handle, ad.handle)
         check_status(image.handle)
         return image
+    def direct(self, image):
```
add docstring
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -1202,6 +1212,10 @@ def backward(self, ad):
             (self.handle, ad.handle)
         check_status(image.handle)
         return image
+    def direct(self, image):
+        return self.forward(image)
+    def adjoint(self, ad):
```
add docstring
```diff
@@ -1141,6 +1144,11 @@ def set_up(self, acq_templ, img_templ):
         '''
         assert_validity(acq_templ, AcquisitionData)
         assert_validity(img_templ, ImageData)
+
+        # temporary save the templates in the class
```
Grammar: "temporarily". However, the saving doesn't seem to be temporary but permanent, so delete "temporary".
This potentially has an impact on memory, though. Do we really need to do this? They don't seem to be used anywhere.
```diff
@@ -1193,7 +1201,19 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
-
+    def get_linear_acquisition_model(self):
```
docstring. Is this a CIL function?
```diff
@@ -1193,7 +1201,19 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
-
+    def get_linear_acquisition_model(self):
+        am = type(self)()
+        am.set_up( self.acq_templ, self.img_templ )
+        return am
+    def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
```
Docstring, please. What's `ad` anyway?
@mehrhardt commented on this pull request.
In examples/Python/PET/acquisition_model.py:
```diff
@@ -151,6 +151,16 @@ def main():
     back_projected_image_as_array = back_projected_image.as_array()
     show_2D_array('Backprojection', back_projected_image_as_array[z,:,:])
+    acq_model.direct(image, 0, 4, simulated_data)
```
Also for me the terminology is not obvious. Especially with the background term being sometimes added and sometimes not. Perhaps this needs to be changed so that at least for some people this is intuitive?
> @mehrhardt, I thought `direct` is not the same as `forward`, i.e. it should be just the linear part? (not use the background term)
Yes, this is what we agreed on during the hackathon. @KrisThielemans , has this been misused?
I guess the `direct` terminology is imposed on us by CIL. @paskino, is that the case? If so, we cannot change it I guess. It will need careful documenting.
As `direct` is (or should be) different from `forward`, and we need it to implement certain algorithms as far as I recall (correct?), it needs to be implemented in C++ and Matlab. That'll need @evgueni-ovtchinnikov's help.
I think one problem we have here is that the naming comes in part from the underlying mathematical objects and in part from the PET imaging application.
To me, if we have an inverse problem, defined by some data `y ∈ Y` and a mapping `A : X -> Y`, then computing `A(x)` is called the *direct* or *forward* problem. This problem can be linear or non-linear. In ccpi, `A` is modelled as a linear operator, so it makes sense to call `A.direct(x) := A(x)` and `A.adjoint(y) := A*(y)`. For the MRI part of SIRF, this remains the same. However, for the PET part, the model `A` has been chosen to be non-linear (affine linear) as it includes the background term, too. Now, a non-linear `A` can have both methods `direct` and `forward`, but both should compute `A(x)`, and it should not have an `adjoint` (as it is not linear).
I see two ways to resolve this conflict.
First, the PET model could be changed to be linear and all of these problems disappear. The PET model `A` could still have a method like `simulate_data` or `forward_affine` to perform the affine model and `get_background` to get the vector of background coefficients.
Second, we could leave the PET operator in SIRF non-linear, `A(x) = Bx + c` (it should not have an `adjoint` method! only `backward`, as the adjoint is mathematically not well-defined), and instead give it methods like `get_linear_model` = B and `get_offset` = c; these are then used in algorithms which explicitly use the linearity of the operator, such as SPDHG and all of ccpi.
Either way is fine but I suppose we need to be very clear what we are doing.
@mehrhardt @KrisThielemans I guess at the hackathon we chose option 2, as we developed `get_linear_acquisition_model` as a base for `direct`.
Yes, that makes sense. The memory slowly comes back to me. However, we were not very consistent, as we probably want to make a clear distinction between the non-linear and the linear operator. Currently these are mixed.
What about the `adjoint` or `backward`?
I haven't followed this conversation, but would it be helpful to distinguish between nonlinear and linear operators?
One could let `Operator` be the more general class (to represent operators for which linearity cannot be assumed, such as the affine PET model you are discussing), having relevant functions/methods, perhaps including `forward` or `direct` but definitely not `adjoint`. Then, perhaps as a subclass, there could be `LinearOperator`, having both `direct` and `adjoint` functions/methods. I don't know if it would be preferable to use the same name, such as `direct`, in both `Operator` and `LinearOperator` or not.
It seems one would then be able to construct the affine SIRF `Operator` from a `LinearOperator` and perhaps a `DataContainer` holding the offset.
Rather than a `DataContainer`, shouldn't it be a `ConstantOperator`? But I think it's a good idea to develop a general `Operator` and a subclass `LinearOperator`. Please foster discussion in UoM.
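The `Operator`/`LinearOperator` split discussed above could be sketched roughly as follows. This is an illustrative toy, not the CIL or SIRF API; all class and method names here follow the discussion but are hypothetical:

```python
import numpy as np

class Operator:
    """General (possibly non-linear) operator; no adjoint is assumed."""
    def direct(self, x):
        raise NotImplementedError

class LinearOperator(Operator):
    """Operator assumed linear, so an adjoint exists."""
    def adjoint(self, y):
        raise NotImplementedError

class MatrixOperator(LinearOperator):
    """Toy linear operator B wrapping a dense matrix."""
    def __init__(self, B):
        self.B = np.asarray(B, dtype=float)
    def direct(self, x):
        return self.B @ x
    def adjoint(self, y):
        return self.B.T @ y

class AffineOperator(Operator):
    """A(x) = B x + c, built from a LinearOperator and an offset,
    as suggested above; exposes direct but deliberately no adjoint."""
    def __init__(self, linear_op, offset):
        self.linear_op = linear_op
        self.offset = offset
    def direct(self, x):
        return self.linear_op.direct(x) + self.offset
    def get_linear_model(self):
        return self.linear_op
    def get_offset(self):
        return self.offset
```

An algorithm that needs the adjoint would then call `a.get_linear_model().adjoint(y)` rather than expecting `adjoint` on the affine operator itself.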
@paskino commented on this pull request.
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -388,6 +388,16 @@ def axpby(a, x, b, y):
             (a.real, a.imag, x.handle, b.real, b.imag, y.handle)
         return z;
+    def copy(self):
+        '''alias of clone'''
+        return self.clone()
+    def conjugate(self):
+        '''Returns the complex conjugate of the data '''
```
I guess that's a mistake. We should return a new object like this:

```python
out = self.clone()
out.fill(self.as_array().conjugate())
return out
```
I don't have a good understanding of the nonlinear SIRF operator and what the "offset" refers to. But reading above, it appears to be (chosen to be) affine, `A(x) = Bx + c`. Here I understand `B` as a `LinearOperator` and `x` as an `ImageData`. Then the result `Bx` I imagine will be an `AcquisitionData`. In order to add the offset `c` to that, I guess it should be an `AcquisitionData` as well? I don't think it should be an operator, but it might make sense to have something like a `ConstantData` type holding essentially a constant which is then (by `+`) broadcast onto whatever size/datatype `Bx` has. Not sure that is required though, as we might create `c` as an `AcquisitionData` using a method like `get_offset`, maybe called on construction of the `Operator` `A`.
@jakobsj you're correct that in PET `A(x) = Bx + c`, `x` is an `ImageData` and `c` an `AcquisitionData`.
If we'd want to be able to add `Operator`s, then it'd make sense to have a `ConstantOperator` (I think that terminology is more appropriate than `ConstantData`), but I thought CIL didn't go for adding of (objective) functions and all that. I don't see an urgent need for it at present.
(caveat: github markdown doesn't support math notation.)
We've chosen in SIRF to use terminology for non-math people, as that's most of our target audience. Hence the `forward` and `backward` terminology.
I'd like to stress that we really don't want to be restricted to linear and affine operators. As soon as we introduce dynamics etc (or indeed transmission tomography), all forward models will be non-affine. In that case, many algorithms need an operation corresponding to the gradient, i.e. something like
```python
g = obj_func.gradient(x)
```
Supposing the objective function is something like `distance(acq_data, acq_model.forward(x))` (plus penalties), obviously this would lead to

```python
est_data = acq_model.forward(x)
g = acq_model.backward(distance.gradient2(acq_data, est_data), x)
```
where the `backward(y, x)` is really the multiplication of the gradient at `x` with `y`. Obviously, in the affine case, `acq_model.backward(y, x)` doesn't depend on `x` (as it's just `A^t y`).
I (somewhat belatedly) realise that we can resolve the affine question by shifting the constant into the "distance" function:
```
f = Ax
distance(acq_data, f) = KL(acq_data, f + c)
```
This means you can stick to using a linear model. A similar trick will apply in the transmission problem, where a general model would be
```
f = Ax
distance(acq_data, f) = noise_model(acq_data, b exp(-f) + c)
```
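This constant-shifting trick can be checked numerically with a toy example. The KL divergence below is a simplified stand-in, and the matrices/values are made up for illustration only:

```python
import numpy as np

def kl_div(data, est):
    """Toy Kullback-Leibler divergence for positive data and estimates:
    sum(data*log(data/est) - data + est)."""
    return float(np.sum(data * np.log(data / est) - data + est))

# Affine model evaluated directly: A(x) = B x + c
B = np.array([[2.0, 0.0], [0.0, 3.0]])
c = np.array([0.5, 0.5])
x = np.array([1.0, 1.0])
acq_data = np.array([2.0, 4.0])

affine_fit = kl_div(acq_data, B @ x + c)

# Same objective with a purely linear model f = B x and the
# constant c absorbed into the distance function instead:
f = B @ x
shifted_fit = kl_div(acq_data, f + c)

# Both formulations give the same objective value
assert np.isclose(affine_fit, shifted_fit)
```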
That's of course what @mehrhardt is doing in the algorithm derivation. I think this is quite neat, but on the other hand, I am convinced that physicists/engineers will expect an `AcquisitionModel` to actually model what the (noise-free) measured data is.
So, going back to @mehrhardt 's suggestion
> I see two ways to resolve this conflict.
>
> First, the PET model could be changed to be linear and all of these problems disappear. The PET model A could still have a method like simulate_data or forward_affine to perform the affine model and get_background to get the vector of background coefficients.
>
> Second, we could leave the PET operator in SIRF non-linear A(x) = Bx + c (should not have an adjoint method! only "backward" as this is mathematically not well-defined) and instead has methods like get_linear_model=B, get_offset=c and these are used in algorithms which explicitly use the linearity of the operator such as SPDHG and all of ccpi.
The first option seems non-intuitive to me, and is going to fall over once we get to kinetics etc. I strongly suggest keeping `PETAcquisitionModel` corresponding to `forward(x) = Bx + c` and `backward(y) = B^t y` (with an ignored second argument, I guess). The question is then how to accommodate SPDHG/CIL. `get_linear_model` could work, but in effect it would just return a `PETAcquisitionModel` where the offset is 0. It seems a bit overkill (for SIRF) to create an extra class for that. Given that in SIRF the terminology for the offset is `additive_term`, I think I'd therefore do something like:

- add `get_linear_model` to `AcquisitionModel`, which returns a copy, but with additive term 0
- add `get_additive_term` to `AcquisitionModel`
- add `is_affine()` and `is_linear()` methods
- add `adjoint`, which throws if not linear, but calls `backward` otherwise.

`linear_model = acq_m`
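That proposed interface could look roughly like the toy stand-in below. This is purely illustrative (a dense matrix instead of the real projector, and no subsets), not the actual SIRF `PETAcquisitionModel`:

```python
import numpy as np

class ToyAcquisitionModel:
    """Illustrative sketch of the proposed interface, NOT the real SIRF
    class: forward(x) = B x + additive_term, backward(y) = B^t y."""
    def __init__(self, B, additive_term=None):
        self.B = np.asarray(B, dtype=float)
        self.additive_term = (np.zeros(self.B.shape[0]) if additive_term is None
                              else np.asarray(additive_term, dtype=float))
    def forward(self, x):
        return self.B @ x + self.additive_term
    def backward(self, y):
        # multiplication with B^t; the additive term plays no role here
        return self.B.T @ y
    def is_affine(self):
        return True
    def is_linear(self):
        return not self.additive_term.any()
    def get_additive_term(self):
        return self.additive_term
    def get_linear_model(self):
        # a copy of the model with additive term 0, as suggested
        return ToyAcquisitionModel(self.B)
    def adjoint(self, y):
        # throws if not linear, calls backward otherwise
        if not self.is_linear():
            raise TypeError('AcquisitionModel is not linear; '
                            'use get_linear_model() or backward()')
        return self.backward(y)
```

With this shape, CIL-style algorithms can call `adjoint` safely, and anything affine goes through `get_linear_model()` first.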
@KrisThielemans Yes, this sounds all pretty good to me. Then one has a "mathy" version which is sometimes needed but can always fall back to the "forward" "backward" notation.
If we do this, I think ideally most of these methods are in the C++ code, although we could/should implement them in Python only first to see if it works...
Not 100% sure what you'd want to do in CIL. `Operator` -> `AffineOperator` -> `LinearOperator`? And what about the adjoint and gradient stuff? @jakobsj, any ideas? What does ODL do?
@paskino commented on this pull request.
```diff
@@ -1141,6 +1144,11 @@ def set_up(self, acq_templ, img_templ):
         '''
         assert_validity(acq_templ, AcquisitionData)
         assert_validity(img_templ, ImageData)
+
+        # temporary save the templates in the class
```
They are used in `get_linear_acquisition_model` and `get_background_term`.
```diff
@@ -1193,7 +1201,19 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
-
+    def get_linear_acquisition_model(self):
+        am = type(self)()
+        am.set_up( self.acq_templ, self.img_templ )
+        return am
+    def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
```
`ad` seems to be `AcquisitionData`, and it is needed by the `forward` method.
@evgueni-ovtchinnikov commented on this pull request.
```diff
@@ -1193,7 +1201,19 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
-
+    def get_linear_acquisition_model(self):
+        am = type(self)()
+        am.set_up( self.acq_templ, self.img_templ )
+        return am
+    def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
```
If `ad` is present, the forward projection is put there, otherwise it is returned (so it is like numpy's `out`).
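The `out`-style convention being described can be illustrated with a small standalone helper (hypothetical function, shown only to pin down the semantics):

```python
import numpy as np

def scaled(x, factor, out=None):
    """Returns factor * x. If out is given, the result is written into
    out and out itself is returned, following numpy's out= convention."""
    result = np.asarray(x) * factor
    if out is None:
        return result
    out[...] = result
    return out

buf = np.zeros(3)
ret = scaled(np.array([1.0, 2.0, 3.0]), 2.0, out=buf)
assert ret is buf                          # same object comes back
assert np.allclose(buf, [2.0, 4.0, 6.0])   # buffer holds the result
```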
@paskino pushed 2 commits.
@paskino commented on this pull request.
```diff
@@ -1193,7 +1201,19 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
-
+    def get_linear_acquisition_model(self):
+        am = type(self)()
+        am.set_up( self.acq_templ, self.img_templ )
+        return am
+    def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
```
Can we name it `out`, then? It'll make it much clearer.
@KrisThielemans requested changes on this pull request.
some more changes in comments please.
In examples/Python/PET/acquisition_data.py:
```diff
@@ -130,6 +130,10 @@ def main():
     print('norm of image*10: %f' % image.norm())
     diff = image.clone() - image
     print('norm of image.clone() - image: %f' % diff.norm())
+
+    # test clone vs copy
```
Say here that `copy` is an alias for `clone` to conform to `numpy` naming.
In examples/Python/PET/acquisition_model.py:
```diff
  back_projected_image_as_array = back_projected_image.as_array()
  show_2D_array('Backprojection', back_projected_image_as_array[z,:,:])
+ # direct is applying the forward method for linear AcquisitionModel
```

⬇️ Suggested change

```diff
- # direct is applying the forward method for linear AcquisitionModel
+ # direct is alias for the forward method for a linear AcquisitionModel
```
In examples/Python/PET/acquisition_model.py:
```diff
  back_projected_image_as_array = back_projected_image.as_array()
  show_2D_array('Backprojection', back_projected_image_as_array[z,:,:])
+ # direct is applying the forward method for linear AcquisitionModel
+ # raises error if the AcquisitionModel is not linear.
+ acq_model.direct(image, 0, 4, simulated_data)
+
+ # show simulated acquisition data
+ simulated_data_as_array_direct = simulated_data.as_array()
+ show_2D_array('Direct projection', simulated_data_as_array_direct[0,:,:])
+
+ # adjoint is applying the backward method for linear AcquisitionModel
```

⬇️ Suggested change

```diff
- # adjoint is applying the backward method for linear AcquisitionModel
+ # adjoint is an alias for the backward method for a linear AcquisitionModel
```
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
+        if the AcquisitionModel is linear.
+        Added for CCPi CIL compatibility
+
+        https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
+        '''
+        return self.forward(image)
+    def adjoint(self, ad):
+        '''Back-projects acquisition data into image space, if the
+        AcquisitionModel is linear
+        Added for CCPi CIL compatibility
+
+        https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
+        '''
+        return self.backward(ad)
+    def is_affine(self):
+        '''Returns whether the background term is non zero'''
```

⬇️ Suggested change

```diff
- '''Returns whether the background term is non zero'''
+ '''Returns if the acquisition model is affine (i.e. corresponding to A*x+b)'''
```
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
+        https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
+        '''
+        return self.forward(image)
+    def adjoint(self, ad):
+        '''Back-projects acquisition data into image space, if the
+        AcquisitionModel is linear
+        Added for CCPi CIL compatibility
+
+        https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
+        '''
+        return self.backward(ad)
+    def is_affine(self):
+        '''Returns whether the background term is non zero'''
+        return True
+    def is_linear(self):
+        '''Returns whether the background term is zero'''
```

⬇️ Suggested change

```diff
- '''Returns whether the background term is zero'''
+ '''Returns whether the acquisition model is linear (i.e. corresponding to A*x, with zero background term)'''
```
```diff
@@ -964,7 +964,7 @@ def __del__(self):
         if self.handle is not None:
             pyiutil.deleteDataHandle(self.handle)
-class AcquisitionModel:
+class AcquisitionModel(object):
```
what's this?
```diff
+    (F)    y = S (G x + [a]) + [b]
+    where:
+    G is the geometric (ray tracing) projector from the image voxels
+    to the scanner's pairs of detectors (bins);
+    a and b are otional additive and background terms representing
+    the effects of noise and scattering;
+    S is the Acquisition Sensitivity Map
+
+    Returns [a]
+    '''
+        if self.at is None:
+            self.at = AcquisitionData(self.acq_templ)
+            self.at.fill(0)
+        return self.at
+    def get_constant_term(self):
+        '''Returns the sum of the additive and background terms of the AcquisitionModel
```

⬇️ Suggested change

```diff
- '''Returns the sum of the additive and background terms of the AcquisitionModel
+ '''Returns the combination of the additive and background terms of the AcquisitionModel
```
sum seems confusing as the S matrix is applied
```diff
+    G is the geometric (ray tracing) projector from the image voxels
+    to the scanner's pairs of detectors (bins);
+    a and b are otional additive and background terms representing
+    the effects of noise and scattering;
+    S is the Acquisition Sensitivity Map
+
+    Returns [a]
+    '''
+        if self.at is None:
+            self.at = AcquisitionData(self.acq_templ)
+            self.at.fill(0)
+        return self.at
+    def get_constant_term(self):
+        '''Returns the sum of the additive and background terms of the AcquisitionModel
+
+        PET acquisition model that relates an image x to the
```

⬇️ Suggested change

```diff
- PET acquisition model that relates an image x to the
+ A PET acquisition model relates an image x to the
```
```diff
+    Returns [a]
+    '''
+        if self.at is None:
+            self.at = AcquisitionData(self.acq_templ)
+            self.at.fill(0)
+        return self.at
+    def get_constant_term(self):
+        '''Returns the sum of the additive and background terms of the AcquisitionModel
+
+        PET acquisition model that relates an image x to the
+        acquisition data y as
+        (F)    y = S (G x + [a]) + [b]
+        where:
+        G is the geometric (ray tracing) projector from the image voxels
+        to the scanner's pairs of detectors (bins);
+        a and b are otional additive and background terms representing
```

⬇️ Suggested change

```diff
- a and b are otional additive and background terms representing
+ a and b are optional additive and background terms representing
```
```diff
+    '''
+        if self.at is None:
+            self.at = AcquisitionData(self.acq_templ)
+            self.at.fill(0)
+        return self.at
+    def get_constant_term(self):
+        '''Returns the sum of the additive and background terms of the AcquisitionModel
+
+        PET acquisition model that relates an image x to the
+        acquisition data y as
+        (F)    y = S (G x + [a]) + [b]
+        where:
+        G is the geometric (ray tracing) projector from the image voxels
+        to the scanner's pairs of detectors (bins);
+        a and b are otional additive and background terms representing
+        the effects of noise and scattering;
```

⬇️ Suggested change

```diff
- the effects of noise and scattering;
+ the effects of accidental coincidences and scattering;
```
```diff
@@ -1049,7 +1124,56 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
+    def get_linear_acquisition_model(self):
+        '''
+        Returns the linear part of the AcquisitionModel
```

⬇️ Suggested change

```diff
- Returns the linear part of the AcquisitionModel
+ Returns a new AcquisitionModel corresponding to the linear part of the current one.
```
> @@ -1193,7 +1201,19 @@ def backward(self, ad, subset_num = 0, num_subsets = 1): (self.handle, ad.handle, subset_num, num_subsets) check_status(image.handle) return image - + def get_linear_acquisition_model(self): + am = type(self)() + am.set_up( self.acq_templ, self.img_templ ) + return am + def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
Yes please. However, this will need changing `forward` as well. It possibly has consequences for the examples etc., and it would be a backwards-compatibility break, so please create a separate issue for it, with a separate PR, so that it has some visibility.
```diff
+        else:
+            raise error('AcquisitionModel is not linear\nYou can get the linear part of the AcquisitionModel with get_linear_acquisition_model')
+    def adjoint(self, ad, subset_num = 0, num_subsets = 1):
+        '''Back-projects acquisition data into image space, if the
+        AcquisitionModel is linear
+
+        Added for CCPi CIL compatibility
+        https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
+        '''
+        if self.is_linear():
+            return self.backward(ad, subset_num = subset_num,
+                num_subsets = num_subsets)
+        else:
+            raise error('AcquisitionModel is not linear')
+    def is_affine(self):
+        '''Returns whether the AcquisitionModel is affine'''
```
See suggested change for MR. Possibly we should add in both that it is currently always `True`.
```diff
+        '''Back-projects acquisition data into image space, if the
+        AcquisitionModel is linear
+
+        Added for CCPi CIL compatibility
+        https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
+        '''
+        if self.is_linear():
+            return self.backward(ad, subset_num = subset_num,
+                num_subsets = num_subsets)
+        else:
+            raise error('AcquisitionModel is not linear')
+    def is_affine(self):
+        '''Returns whether the AcquisitionModel is affine'''
+        return True
+    def is_linear(self):
+        '''Returns whether the constant term is zero'''
```
see suggested change in MR
@paskino, please add something to the UsersGuide as well (I guess a special section on CIL compatibility for Python).
Codacy found a few small issues as well.
At present, `direct` throws an error if the model is not linear. I don't think this is necessary. `direct` can just be an alias for `forward`. Quoting @mehrhardt:

> if we have an inverse problem, defined by some data y \in Y and a mapping A : X -> Y, then computing A(x) is called the direct or forward problem
@paskino commented on this pull request.
In examples/Python/PET/acquisition_data.py:
```diff
@@ -130,6 +130,10 @@ def main():
     print('norm of image*10: %f' % image.norm())
     diff = image.clone() - image
     print('norm of image.clone() - image: %f' % diff.norm())
+
+    # test clone vs copy
```
It's in the docstring of the method. Do I have to put it here as well?
That's right. It is the `adjoint` which doesn't exist if the `AcquisitionModel` isn't linear.
As I tried to document here, the `DataContainer` seems to be missing a number of methods. I thought I put them in, but they may have disappeared when `sirf.DataContainer` was introduced.
I will implement them with calls to `as_array`, followed by the required operation in numpy and `fill`. It's not too efficient, but it'll work.
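The `as_array`/`fill` pattern being described could look roughly like this. `ToyContainer` is a minimal stand-in, not the real `sirf.DataContainer`, and `maximum` here mirrors the CIL-style signature only as an illustration:

```python
import numpy as np

class ToyContainer:
    """Stand-in for sirf.DataContainer, just enough to show the pattern."""
    def __init__(self, arr):
        self._arr = np.asarray(arr, dtype=float)
    def as_array(self):
        return self._arr.copy()
    def fill(self, arr):
        self._arr[...] = arr
    def clone(self):
        return ToyContainer(self._arr)

def maximum(x, other, out=None):
    """Pixelwise maximum via as_array -> numpy -> fill: not the most
    efficient route (data is copied out and back), but it works."""
    res = np.maximum(x.as_array(), other.as_array())
    if out is None:
        out = x.clone()
    out.fill(res)
    return out

a = ToyContainer([1.0, 5.0, 2.0])
b = ToyContainer([3.0, 1.0, 4.0])
assert np.allclose(maximum(a, b).as_array(), [3.0, 5.0, 4.0])
```

`add`, `subtract`, `divide`, `power` etc. would follow the same shape, only swapping the numpy operation.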
@paskino commented on this pull request.
```diff
@@ -1193,7 +1201,19 @@ def backward(self, ad, subset_num = 0, num_subsets = 1):
             (self.handle, ad.handle, subset_num, num_subsets)
         check_status(image.handle)
         return image
-
+    def get_linear_acquisition_model(self):
+        am = type(self)()
+        am.set_up( self.acq_templ, self.img_templ )
+        return am
+    def direct(self, image, subset_num = 0, num_subsets = 1, ad = None):
```
I used `out` in the `direct` method, not in `forward`, so there won't be backward incompatibility.
> As I tried to document here the DataContainer is missing a number of methods.
> I will implement them in sirf.DataContainer with calls to as_array, followed by the required operation in numpy and fill. It's not too efficient, but it'll work.
OK. Please create an issue for it though. Some of these would be easy to do in C++ I guess, but we can do that later. But we'll forget without an issue :-)
@KrisThielemans commented on this pull request.
In examples/Python/PET/acquisition_data.py:
> @@ -130,6 +130,10 @@ def main(): print('norm of image*10: %f' % image.norm()) diff = image.clone() - image print('norm of image.clone() - image: %f' % diff.norm()) + + # test clone vs copy
You're assuming that people read the docstring... Yes, I'd put it here: it's an example. It'd confuse me if I saw this code and wondered "what's the difference between clone and copy now?"
I believe that those missing methods should be implemented in this PR. First in Python and then in C++, for which I'll open an issue.
@KrisThielemans pushed 1 commit.
@KrisThielemans requested changes on this pull request.
a few more.
I think you still need to add `get_linear_model` to the UsersGuide.
In doc/UserGuide.md:
```diff
+1. `get_background_term()` Returns the background term of the `AcquisitionModel`
+1. `get_additive_term()` Returns the additive term of the `AcquisitionModel`
+1. `get_constant_term()` Returns the sum of the additive and background terms of the `AcquisitionModel`
+
+### `DataContainer`
+
+`sirf.DataContainer` has the method `copy` as an alias to `clone`.
+A number of methods are currently implemented on CCPi and not on SIRF `DataContainers`:
+1. (Pixelwise) binary operations:
+    1. `add(self, other, out=None, *args, **kwargs)`
+    1. `subtract(self, other, out=None, *args, **kwargs)`
+    1. `multiply(self, other, out=None, *args, **kwargs)` present with different signature `multiply(self, other)`
+    1. `divide(self, other, out=None, *args, **kwargs)` present with different signature `divide(self, other)`
+    1. `power(self, other, out=None, *args, **kwargs)`
+    1. `maximum(self, other, out=None, *args, **kwargs)`
+1. all inline algebra is missing, achievable by using the above methods.
```
what do you mean with this? we can add them etc.
In examples/Python/MR/grappa_and_cil.py:
```diff
@@ -0,0 +1,175 @@
+'''
+GRAPPA reconstruction with the steepest descent step: illustrates
```
incorrect
In examples/Python/MR/grappa_and_cil.py:
```diff
+GRAPPA reconstruction with the steepest descent step: illustrates
+the use of Acquisition Model projections
+
+Usage:
+  grappa_and_steepest_descent.py [--help | options]
+
+Options:
+  -f <file>, --file=<file>    raw data file
+                              [default: simulated_MR_2D_cartesian_Grappa2.h5]
+  -p <path>, --path=<path>    path to data files, defaults to data/examples/MR
+                              subfolder of SIRF root folder
+  -e <engn>, --engine=<engn>  reconstruction engine [default: Gadgetron]
+'''
+
+## CCP PETMR Synergistic Image Reconstruction Framework (SIRF)
+## Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC.
```

⬇️ Suggested change

```diff
-## Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC.
+## Copyright 2015 - 2018 Rutherford Appleton Laboratory STFC.
```
In examples/Python/MR/grappa_and_cil.py:
```diff
+from sirf.Gadgetron import petmr_data_path
+from sirf.Gadgetron import AcquisitionData
+from sirf.Gadgetron import AcquisitionModel
+from sirf.Gadgetron import AcquisitionDataProcessor
+from sirf.Gadgetron import preprocess_acquisition_data
+from sirf.Gadgetron import CartesianGRAPPAReconstructor
+from sirf.Gadgetron import CoilSensitivityData
+
+from ccpi.optimisation.funcs import Norm2sq
+from ccpi.optimisation.funcs import ZeroFun
+from ccpi.optimisation.algs import FISTA
+# from ccpi.optimisation.ops import PowerMethodNonsquare
+
+import numpy
+
+def PowerMethodNonsquare(op, numiters, x0=None):
```
Please add some doc.
In examples/Python/MR/grappa_and_cil.py:
```diff
@@ -0,0 +1,175 @@
+'''
+GRAPPA reconstruction with the steepest descent step: illustrates
+the use of Acquisition Model projections
+
+Usage:
+  grappa_and_steepest_descent.py [--help | options]
+
+Options:
+  -f <file>, --file=<file>    raw data file
+                              [default: simulated_MR_2D_cartesian_Grappa2.h5]
+  -p <path>, --path=<path>    path to data files, defaults to data/examples/MR
+                              subfolder of SIRF root folder
+  -e <engn>, --engine=<engn>  reconstruction engine [default: Gadgetron]
```
delete as not used
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -937,6 +948,32 @@ def backward(self, ad):
             (self.handle, ad.handle)
         check_status(image.handle)
         return image
+    def direct(self, image, out = None):
+        '''Projects an image into the (simulated) acquisition space,
+        if the AcquisitionModel is linear.
```
delete
```diff
@@ -973,7 +973,7 @@ class AcquisitionModel:
     G is the geometric (ray tracing) projector from the image voxels
     to the scanner's pairs of detectors (bins);
     a and b are otional additive and background terms representing
-    the effects of noise and scattering; assumed to be 0 if not present;
+    the effects of accidental coincidendes and scattering; assumed to be 0 if not present;
```

⬇️ Suggested change

```diff
- the effects of accidental coincidendes and scattering; assumed to be 0 if not present;
+ the effects of accidental coincidences and scattering; assumed to be 0 if not present;
```
sorry if I misspelled it. Occurs multiple times below.
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub, or mute the thread.
Codacy fails
Codacy fails mostly because of the grappa_and_cil.py example which is not finished. I'll move it to #238 when this PR is ready.
I need to implement the missing methods for DataContainer
before merging this. I also discussed some changes to the CIL that should be done.
> Codacy fails mostly because of the grappa_and_cil.py example which is not finished. I'll move it to #238 when this PR is ready.
Let's not start moving files between PRs. My head is hurting already with so many PRs (all good things have a bad side...). You'd have to `git rm` it here, and `git add` it there. Nobody will understand (as we're not squashing).
@paskino commented on this pull request.
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -937,6 +948,32 @@ def backward(self, ad):
             (self.handle, ad.handle)
         check_status(image.handle)
         return image
+    def direct(self, image, out = None):
+        '''Projects an image into the (simulated) acquisition space,
+        if the AcquisitionModel is linear.
```
What? "if the AcquisitionModel is linear"?
That's not wrong, is it?
@KrisThielemans commented on this pull request.
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -937,6 +948,32 @@ def backward(self, ad):
             (self.handle, ad.handle)
         check_status(image.handle)
         return image
+    def direct(self, image, out = None):
+        '''Projects an image into the (simulated) acquisition space,
+        if the AcquisitionModel is linear.
```
`direct` = `forward`, so it will also do it if the model is not linear (best to say it's an alias, like for `copy` etc.)
@paskino pushed 44 commits.
@paskino commented on this pull request.
I think I tackled all the points and updated the UserGuide accordingly.
```diff
@@ -964,7 +964,7 @@ def __del__(self):
         if self.handle is not None:
             pyiutil.deleteDataHandle(self.handle)
-class AcquisitionModel:
+class AcquisitionModel(object):
```
That's the new style of class definition in Python (new since 2.2).
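For readers unfamiliar with the distinction, a minimal illustration (Python 2 semantics; under Python 3 every class is new-style regardless of syntax):

```python
class OldStyle:            # old-style under Python 2 (no explicit base)
    pass

class NewStyle(object):    # new-style: explicitly inherits from object
    pass

# New-style classes (Python >= 2.2) unify types and classes: they support
# descriptors, properties, super(), __slots__ and a consistent MRO.
# Under Python 2, type(OldStyle()) is the generic <type 'instance'>,
# whereas type(NewStyle()) is NewStyle itself.
```

This is why adding `(object)` changes behaviour on Python 2 even though the class body is untouched.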
In examples/Python/PET/acquisition_data.py:
```diff
@@ -130,6 +130,10 @@ def main():
     print('norm of image*10: %f' % image.norm())
     diff = image.clone() - image
     print('norm of image.clone() - image: %f' % diff.norm())
+
+    # test clone vs copy
```
added doc that copy is an alias of clone
```diff
@@ -964,7 +964,7 @@ def __del__(self):
         if self.handle is not None:
             pyiutil.deleteDataHandle(self.handle)
-class AcquisitionModel:
+class AcquisitionModel(object):
```
I guess I will remove it because the code is full of old-style class definitions, but it's worth thinking about updating them.
> I guess I will remove it because the code is full of old style classes definition, but it's worth thinking to update.
please create an issue for it then (for SIRF in general) to use the new style
@paskino pushed 2 commits.
@KrisThielemans requested changes on this pull request.
a few clean-up comments and 1 bug?
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -937,6 +948,32 @@ def backward(self, ad):
             (self.handle, ad.handle)
         check_status(image.handle)
         return image
+    def direct(self, image, out = None):
+        '''Projects an image into the (simulated) acquisition space,
+        if the AcquisitionModel is linear.
```
these 2 lines need to go?
In any case, it is more confusing than anything else, as it is linear in this case. So why not say "alias of..."?
In src/xGadgetron/pGadgetron/Gadgetron.py:
```diff
@@ -937,6 +948,32 @@ def backward(self, ad):
             (self.handle, ad.handle)
         check_status(image.handle)
         return image
+    def direct(self, image, out = None):
+        '''Projects an image into the (simulated) acquisition space,
+        if the AcquisitionModel is linear.
+        Added for CCPi CIL compatibility
+
+        https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
+        '''
+        if not out is None:
+            raise error('out is not supported')
+        return self.forward(image)
+    def adjoint(self, ad , out = None):
+        '''Back-projects acquisition data into image space, if the
```
as above. Text is confusing as it is linear in this case. So why not say "alias of..."
In src/xSTIR/pSTIR/tests/tests_five.py:
```diff
@@ -0,0 +1,102 @@
+# -*- coding: utf-8 -*-
+"""STIR DataContainer algebra tests
+v{version}
+
+Usage:
+  tests_five [--help | options]
+
+Options:
+  -r, --record     record the measurements rather than check them
+  -v, --verbose    report each test status
+
+{author}
+
+{licence}
```
where would it get it from?
In src/xSTIR/pSTIR/tests/tests_five.py:
```diff
+Usage:
+  tests_five [--help | options]
+
+Options:
+  -r, --record     record the measurements rather than check them
+  -v, --verbose    report each test status
+
+{author}
+
+{licence}
+"""
+from sirf.Utilities import petmr_data_path, existing_filepath, \
+    pTest, RE_PYEXT , runner
+from sirf.STIR import MessageRedirector, AcquisitionData
+__version__ = "0.2.2"
+__author__ = "Casper da Costa-Luis, Edoardo Pasca"
```
Casper involved?
In examples/Python/PET/interactive/spdhg_reconstruction.py:
```diff
+    asm_attn = pet.AcquisitionSensitivityModel(attn_image, am)
+    # converting attenuation into attenuation factors (see previous exercise)
+    asm_attn.set_up(noisy_data)
+    attn_factors = pet.AcquisitionData(noisy_data)
+    attn_factors.fill(1.0)
+    print('applying attenuation (please wait, may take a while)...')
+    asm_attn.unnormalise(attn_factors)
+    print('done')
+    # use these in the final attenuation model
+    asm_attn = pet.AcquisitionSensitivityModel(attn_factors)
+    # chain attenuation and normalisation
+    asm = pet.AcquisitionSensitivityModel(asm_norm, asm_attn)
+    # update the acquisition model etc
+    am.set_acquisition_sensitivity(asm)
+
+    am.set_background_term(randoms)
```
I think this is incorrect for SPDHG. You will add it to the KullbackLeibler later.
Or, you add it here, and then use the `get_linear_model` below (which would be nice and clear in my opinion).
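The split Kris is suggesting can be seen on a toy numpy model (illustrative only, not SIRF code): an affine acquisition model y = Gx + b is handed to the optimiser as its linear part G only, while the background b goes to the KullbackLeibler data term instead.

```python
import numpy

# Toy affine acquisition model: y = G x + b
# (G: linear projector, b: additive background, e.g. randoms/scatter).
G = numpy.array([[1.0, 2.0],
                 [0.0, 1.0],
                 [3.0, 1.0]])
b = numpy.array([0.5, 0.5, 0.5])

x = numpy.array([2.0, 1.0])
full_forward = G @ x + b      # what the fully configured affine model computes

# Option discussed above: give the optimiser only the LINEAR part
# (cf. get_linear_acquisition_model in the thread) and pass the
# background b separately to the KullbackLeibler data term.
linear_forward = G @ x
assert numpy.allclose(full_forward, linear_forward + b)
```

Nothing is lost in the split: the model's output is recovered exactly, but the operator the algorithm sees is now genuinely linear.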
@casperdcl commented on this pull request.
In src/xSTIR/pSTIR/tests/tests_five.py:
```diff
+Usage:
+  tests_five [--help | options]
+
+Options:
+  -r, --record     record the measurements rather than check them
+  -v, --verbose    report each test status
+
+{author}
+
+{licence}
+"""
+from sirf.Utilities import petmr_data_path, existing_filepath, \
+    pTest, RE_PYEXT , runner
+from sirf.STIR import MessageRedirector, AcquisitionData
+__version__ = "0.2.2"
+__author__ = "Casper da Costa-Luis, Edoardo Pasca"
```
hah, saw my name mentioned in the massive comments list. Sorry if you've mentioned me in the past and I haven't noticed - need to find a way to not get spammed by irrelevant notifications so the important ones stand out. Anyway, er, about this file - a lot of it looks like a test framework I'd implemented :P
@paskino commented on this pull request.
In src/xSTIR/pSTIR/tests/tests_five.py:
```diff
+Usage:
+  tests_five [--help | options]
+
+Options:
+  -r, --record     record the measurements rather than check them
+  -v, --verbose    report each test status
+
+{author}
+
+{licence}
+"""
+from sirf.Utilities import petmr_data_path, existing_filepath, \
+    pTest, RE_PYEXT , runner
+from sirf.STIR import MessageRedirector, AcquisitionData
+__version__ = "0.2.2"
+__author__ = "Casper da Costa-Luis, Edoardo Pasca"
```
This file is largely a plagiarism of another test in the same directory. For this reason I kept Casper's name. I just resolved to change the actual tests being run, not the way the script is run. I assumed these tests would be run in the ctest phase.
@paskino commented on this pull request.
In examples/Python/PET/interactive/spdhg_reconstruction.py:
```diff
+    asm_attn = pet.AcquisitionSensitivityModel(attn_image, am)
+    # converting attenuation into attenuation factors (see previous exercise)
+    asm_attn.set_up(noisy_data)
+    attn_factors = pet.AcquisitionData(noisy_data)
+    attn_factors.fill(1.0)
+    print('applying attenuation (please wait, may take a while)...')
+    asm_attn.unnormalise(attn_factors)
+    print('done')
+    # use these in the final attenuation model
+    asm_attn = pet.AcquisitionSensitivityModel(attn_factors)
+    # chain attenuation and normalisation
+    asm = pet.AcquisitionSensitivityModel(asm_norm, asm_attn)
+    # update the acquisition model etc
+    am.set_acquisition_sensitivity(asm)
+
+    am.set_background_term(randoms)
```
@KrisThielemans I'm not the most entitled to comment on this (@mehrhardt help), however the KullbackLeibler fidelity gets passed the fully configured `AcquisitionModel`. Then the `direct` method in the `AcquisitionModel` calls `self.get_linear_acquisition_model().forward()`.
I had originally added this example to #238 so that I'd avoid this discussion on this particular PR, which deals with the additions of methods to SIRF classes in order to be able to use the CIL optimisation framework. The reason this example has appeared back here is the abstract @dkazanc is preparing. I'm happy to strip this example out of the PR so that we can merge the PR asap and then correct it.
@KrisThielemans commented on this pull request.
In examples/Python/PET/interactive/spdhg_reconstruction.py:
```diff
+    asm_attn = pet.AcquisitionSensitivityModel(attn_image, am)
+    # converting attenuation into attenuation factors (see previous exercise)
+    asm_attn.set_up(noisy_data)
+    attn_factors = pet.AcquisitionData(noisy_data)
+    attn_factors.fill(1.0)
+    print('applying attenuation (please wait, may take a while)...')
+    asm_attn.unnormalise(attn_factors)
+    print('done')
+    # use these in the final attenuation model
+    asm_attn = pet.AcquisitionSensitivityModel(attn_factors)
+    # chain attenuation and normalisation
+    asm = pet.AcquisitionSensitivityModel(asm_norm, asm_attn)
+    # update the acquisition model etc
+    am.set_acquisition_sensitivity(asm)
+
+    am.set_background_term(randoms)
```
well, let's not do crazy merges before the results are ok...
In your link, `AcquisitionModel.direct` just calls `forward`. I know we originally said that `direct` would do this, but we changed our mind...
For the future, I don't think it makes sense to let the user add the background to KL, and "hope" that `spdhg` does the appropriate thing. But I wouldn't worry about it now.
I still don't know if the current code is doing the wrong thing of course...
@paskino pushed 2 commits.
@mehrhardt commented on this pull request.
In examples/Python/PET/interactive/spdhg_reconstruction.py:
```diff
+A = SubsetOperator(am.get_linear_acquisition_model(), 14)
+A_norms = [PowerMethodNonsquare(Ai, 10, x0=image.copy()) for Ai in A]
+#%%
+# increase the norms to allow for inaccuracies in their computation
+Ls = [1.05 * L for L in A_norms]
+
+f = [KullbackLeibler(op.sirf2sub(noisy_data), op.sirf2sub(background))
+     for op in A]
+
+#%%
+recon_noreg = spdhg(f, g_noreg, A, A_norms=Ls)
+
+#%%
+
+# set the number of iterations
+niter = 5
```
As I commented via email, 5 "iterations" are not enough. Perhaps it is best to change this to:

```python
nepochs = 5
nsubsets = 14  # see line 236
niter = nsubsets * nepochs
```
@mehrhardt commented on this pull request.
In examples/Python/PET/interactive/spdhgutils.py:
```diff
+        self.__num_subsets__ = num_subsets
+
+        # FIXME: this may be known to STIR
+        x = op.img_templ.copy()
+        x.fill(1)
+        y = self.forward_sirf(x).as_array()
+        self.ind = numpy.nonzero(y.flatten())[0]
+
+    def forward_sirf(self, x):
+        return self.__op__.forward(x, subset_num=self.__subset_num__,
+                                   num_subsets=self.__num_subsets__)
+
+    def __call__(self, x):
+        y = self.__op__.forward(x, subset_num=self.__subset_num__,
+                                num_subsets=self.__num_subsets__)
+        return self.sirf2sub(y)
```
Here is a mistake. This class assumes that "forward", "direct" and "call" are all the same. I think we agreed that this is not the case. Thus,

```python
y = self.__op__.forward(x, subset_num=self.__subset_num__,
                        num_subsets=self.__num_subsets__)
```

should be changed to

```python
y = self.__op__.direct(x, subset_num=self.__subset_num__,
                       num_subsets=self.__num_subsets__)
```
@mehrhardt commented on this pull request.
In examples/Python/PET/interactive/spdhgutils.py:
```diff
+        self.__num_subsets__ = num_subsets
+
+        # FIXME: this may be known to STIR
+        x = op.img_templ.copy()
+        x.fill(1)
+        y = self.forward_sirf(x).as_array()
+        self.ind = numpy.nonzero(y.flatten())[0]
+
+    def forward_sirf(self, x):
+        return self.__op__.forward(x, subset_num=self.__subset_num__,
+                                   num_subsets=self.__num_subsets__)
+
+    def __call__(self, x):
+        y = self.__op__.forward(x, subset_num=self.__subset_num__,
+                                num_subsets=self.__num_subsets__)
+        return self.sirf2sub(y)
```
Or maybe we can just remove the `__call__` altogether, as neither CCPi nor SIRF supports the syntax `A(x)` anyway.
@KrisThielemans commented on this pull request.
In examples/Python/PET/interactive/spdhgutils.py:
```diff
+        self.__num_subsets__ = num_subsets
+
+        # FIXME: this may be known to STIR
+        x = op.img_templ.copy()
+        x.fill(1)
+        y = self.forward_sirf(x).as_array()
+        self.ind = numpy.nonzero(y.flatten())[0]
+
+    def forward_sirf(self, x):
+        return self.__op__.forward(x, subset_num=self.__subset_num__,
+                                   num_subsets=self.__num_subsets__)
+
+    def __call__(self, x):
+        y = self.__op__.forward(x, subset_num=self.__subset_num__,
+                                num_subsets=self.__num_subsets__)
+        return self.sirf2sub(y)
```
> This class assumes that "forward", "direct" and "call" are all the same. I think we agreed that this is not the case.

I believe we agreed that `forward` and `direct` were the same (after @jakobsj's terminology post). I have directed @paskino accordingly. We introduced `get_linear_model` for this.
I have no strong feelings about `__call__`. If we have it, it would be `forward` (but then what do we do for `adjoint` etc.). Maybe it's an ODL thing?
@mehrhardt commented on this pull request.
In examples/Python/PET/interactive/spdhgutils.py:
```diff
+        self.__num_subsets__ = num_subsets
+
+        # FIXME: this may be known to STIR
+        x = op.img_templ.copy()
+        x.fill(1)
+        y = self.forward_sirf(x).as_array()
+        self.ind = numpy.nonzero(y.flatten())[0]
+
+    def forward_sirf(self, x):
+        return self.__op__.forward(x, subset_num=self.__subset_num__,
+                                   num_subsets=self.__num_subsets__)
+
+    def __call__(self, x):
+        y = self.__op__.forward(x, subset_num=self.__subset_num__,
+                                num_subsets=self.__num_subsets__)
+        return self.sirf2sub(y)
```
> I believe we agreed that forward and direct were the same (after @jakobsj's terminology post). I have directed @paskino accordingly. We introduced the get_linear_model for this.

Ah, ok. I kind of lost track of our discussion.

> I have no strong feelings about call. If we have it, it would be forward (but then what do we do for adjoint etc). Maybe it's an ODL thing?

In ODL, we write `A(x)` and `A.adjoint(y)`, but it might actually be better not to use it in order to avoid misunderstandings.
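If `__call__` were kept at all, the least surprising definition is a strict alias of `direct`, so the shorthand can never diverge from the named method. A hypothetical sketch on a toy matrix operator (not SIRF or ODL code):

```python
import numpy

class Operator(object):
    '''Toy linear operator with ODL-style call syntax: A(x) == A.direct(x).'''
    def __init__(self, matrix):
        self.matrix = matrix

    def direct(self, x):
        # forward application of the operator
        return self.matrix @ x

    def adjoint(self, y):
        # adjoint application (transpose, for a real matrix)
        return self.matrix.T @ y

    def __call__(self, x):
        # strict alias of direct: A(x) cannot diverge from A.direct(x)
        return self.direct(x)
```

With this definition `A(x)` is pure syntactic sugar, so dropping `__call__`, as suggested above, loses nothing but the shorthand.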
@paskino commented on this pull request.
In examples/Python/PET/interactive/spdhgutils.py:
```diff
+        self.__num_subsets__ = num_subsets
+
+        # FIXME: this may be known to STIR
+        x = op.img_templ.copy()
+        x.fill(1)
+        y = self.forward_sirf(x).as_array()
+        self.ind = numpy.nonzero(y.flatten())[0]
+
+    def forward_sirf(self, x):
+        return self.__op__.forward(x, subset_num=self.__subset_num__,
+                                   num_subsets=self.__num_subsets__)
+
+    def __call__(self, x):
+        y = self.__op__.forward(x, subset_num=self.__subset_num__,
+                                num_subsets=self.__num_subsets__)
+        return self.sirf2sub(y)
```
@KrisThielemans I think that `direct` should be almost an alias of `forward`, one which raises an exception if the `AcquisitionModel` is not linear:

```python
def direct(self, image, subset_num = 0, num_subsets = 1, out = None):
    '''Projects an image into the (simulated) acquisition space,
    alias of forward.
    Added for CCPi CIL compatibility

    https://github.com/CCPPETMR/SIRF/pull/237#issuecomment-439894266
    '''
    if not self.is_linear():
        raise error('AcquisitionModel is not linear\nYou can get the linear part of the AcquisitionModel with get_linear_acquisition_model')
    return self.forward(image, subset_num=subset_num,
                        num_subsets=num_subsets, ad=out)
```

Currently the code will run even with an affine `AcquisitionModel`, but it'll return garbage.
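The failure mode is easy to demonstrate: an affine map A(x) = Gx + b violates the adjoint identity ⟨A(x), y⟩ = ⟨x, Gᵀy⟩ by exactly ⟨b, y⟩, and that identity is what gradient-based optimisers rely on. A toy numpy check (illustrative only, not SIRF code):

```python
import numpy

G = numpy.array([[1.0, 2.0],
                 [3.0, 4.0]])
b = numpy.array([1.0, 1.0])   # nonzero background makes the model affine

x = numpy.array([1.0, 2.0])
y = numpy.array([0.5, -1.0])

lhs = (G @ x + b) @ y         # <A(x), y> using the affine "forward"
rhs = x @ (G.T @ y)           # <x, G^T y> using the linear adjoint

# The pair (affine forward, linear adjoint) fails the adjoint test,
# off by exactly <b, y>:
assert not numpy.isclose(lhs, rhs)
assert numpy.isclose(lhs - rhs, b @ y)
```

This is the quantitative reason an `is_linear()` guard (or using `get_linear_acquisition_model`) is needed before treating the model as a linear operator.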