I have an odd problem: how do I get the transpose of an affine transformation?
For example:
I have an image I and 6 input parameters:
affineKernelMatrix = [ 1 0 0; 0 1 0; 0 0 1];
affineKernelMatrix(1:3, 1:2) = reshape(inputsParameters(1:6), 3, 2);
affineTransform = maketform('affine', affineKernelMatrix);
affineTransformedImage = imtransform(I, affineTransform, ...
'UData', udata, 'VData', vdata, ...
'XData', udata, 'YData', vdata);
We can get the 'inverse transformation' very easily because MATLAB calculates both the forward and inverse transformations using built-in functions.
However, I need an operator (or operation) to get the transpose of the affine transformation. I am not sure whether this is called the adjoint operator of the affine transform, but any ideas will be appreciated.
Thanks very much,
Aaronne.
Just to be clear, this is not an ideal inverse. If you transform an image and then apply the reverse transformation using imtransform, you will not get back the original image exactly. There will be some interpolation/resampling errors...
> However, I need an operator (or operation) to get the transpose of the affine transformation. I am not sure whether this is called the adjoint operator of the affine transform, but any ideas will be appreciated.
===============
No, it isn't provided by MATLAB. It's a major problem for people who want to minimize a cost function for jointly estimating both images and image transformation parameters.
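As a rough illustration of the resampling error mentioned above, here is a minimal sketch (assuming the Image Processing Toolbox; the rotation angle and coordinate ranges are made up for the example):

```matlab
% Sketch: forward transform followed by the inverse is not an exact identity.
I = phantom(64);
theta = 10;  % hypothetical rotation angle in degrees
A = [cosd(theta) sind(theta) 0; -sind(theta) cosd(theta) 0; 0 0 1];
T = maketform('affine', A);
udata = [-1 1]; vdata = [-1 1];
J = imtransform(I, T, 'UData',udata, 'VData',vdata, ...
                      'XData',udata, 'YData',vdata, 'Size',size(I));
K = imtransform(J, fliptform(T), 'UData',udata, 'VData',vdata, ...
                      'XData',udata, 'YData',vdata, 'Size',size(I));
relerr = norm(K(:) - I(:)) / norm(I(:))  % nonzero: interpolation/resampling error
```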
A-ha, you point out the interesting part. May I ask if you have any ideas or references for implementing this in MATLAB? I really need an efficient method to implement this in MATLAB.
Thanks so much!
Aaronne.
Hi Matt,
By the way, I am not asking for any existing MATLAB code. If you know of any references on this, such as papers or websites, please tell me. I am really looking forward to your reply.
Thanks a lot!!!
Aaronne
It seems that only the 'translation' part of the affine transformation cannot have the transpose applied. There is no hitch for the rotation, scaling, and shearing parts of the affine transformation. Am I right?
Thanks.
Aaronne
I have no references to implementations, but a lot of the concepts we've been talking about in your various posted threads are covered here:
http://www.eecs.umich.edu/~fessler/student/diss/06,jacobson.pdf
in particular in Sec 2.3 and Chapt 3.
As a starting point for implementations, you can look at any of the fast interpolation code advertised on the FEX, e.g.,
http://www.mathworks.com/matlabcentral/fileexchange/24183-2d-interpolation
The transpose of an image transformation is essentially the transpose of an interpolation operation. It's not such a huge project to modify existing interpolation code appropriately.
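To make that concrete, here is a minimal 1D sketch (the variable names are mine, not from any particular package): linear interpolation is a sparse matrix-vector product, so its transpose comes for free.

```matlab
% 1D linear interpolation of samples f (at integer positions 1..N) onto
% points x in [1,N), written as a sparse matrix W so that W.' is the
% transpose operator.
N = 5;  f = (1:N).';
x = [1.25; 2.5; 3.75];           % query points
i0 = floor(x);  w = x - i0;      % left neighbor index and fractional part
n = numel(x);
W = sparse([(1:n)'; (1:n)'], [i0; i0+1], [1-w; w], n, N);
g  = W*f;     % forward interpolation (same values as interp1(f, x))
bt = W.'*g;   % transpose (adjoint): spreads each sample back to its neighbors
```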
If you could do affine transformations consisting of rotation/shear/translation in continuous space, then the transpose (adjoint) and the inverse transformations would be equivalent. This is because rotation/shear/translation all preserve the L2-norm of the image, i.e., the transformation is orthonormal.
When you work with discrete image transformation, though, there is the added effect of interpolation errors, so the equivalence between the transpose and the inverse is only approximate. I don't know how exact you need to be...
Thanks for your fast reply.
1. Of course, I want to take the interpolation error into account. Is there any way to include the interpolation in the analysis?
2. Also, regarding what you said about 'the transformation is orthonormal' implying 'the transpose (adjoint) and the inverse transformations are equivalent', do you have any reference or mathematical proof of this?
Thanks very much,
Aaronne.
Yes, as we've discussed, since IMTRANSFORM is a linear and discrete transformation, it can be represented as a MATLAB sparse matrix, which you can transpose directly.
Or, you could write your own C coded version of imtransform based on existing interpolation code on the File Exchange (see post #9 above) and modify it to have the ability to do the transpose as well.
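For instance, the matrix can be built one column at a time from impulse responses (a sketch on a tiny image; the shear parameters here are made up):

```matlab
% Build the sparse matrix of imtransform by probing it with unit impulses.
I = phantom(8);  [sx, sy] = size(I);
T = maketform('affine', [1 0; .1 1; 0 0]);   % hypothetical small shear
cols = cell(1, sx*sy);
for k = 1:sx*sy
    e = zeros(sx, sy);  e(k) = 1;            % impulse at pixel k
    Jk = imtransform(e, T, 'XData', [1 sy], 'YData', [1 sx], ...
                     'Size', [sx sy], 'FillValues', 0);
    cols{k} = sparse(Jk(:));
end
W  = [cols{:}];   % W*I(:) is (approximately) the transformed image, vectorized
Wt = W.';         % the transpose operator, available directly
```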
> 2. Also, regarding what you said about 'the transformation is orthonormal' implying 'the transpose (adjoint) and the inverse transformations are equivalent', do you have any reference or mathematical proof of this?
====================
Here's what I can prove. I'm not sure exactly how to generalize it to continuous space transformations, but it seems fairly intuitive that it should generalize:
Let T be any NxN invertible matrix with the property that
dot(T*x,T*y)=dot(x,y) for all x,y.
In particular, for x=y, this reduces to norm(T*x)=norm(x) meaning that T is a norm-preserving transformation
Then I claim that inv(T)=T.'
PROOF:
Fix arbitrary indices 1<=i<=N and 1<=j<=N and define
invT=inv(T),
I=eye(N),
x=I(:,i)
y=invT*I(:,j)
Then substituting into the relation dot(T*x,T*y) leads to
dot( T*x, T*y)
=dot( T*I(:,i), T*invT*I(:,j))
=dot( T*I(:,i), I(:,j) )
=T(j,i)
and substituting into dot(x,y) leads to
dot(x,y)
=dot( I(:,i), invT*I(:,j) )
=invT(i,j)
But since dot(T*x,T*y)=dot(x,y) and since i,j were arbitrary it follows that
T(j,i)=invT(i,j)
for all i,j or in other words T.'=inv(T)
Q.E.D.
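A quick numeric sanity check of the claim, using a rotation matrix (which preserves inner products):

```matlab
% For an orthonormal T, inv(T) and T.' agree to machine precision.
theta = 0.3;                                   % arbitrary rotation angle
T = [cos(theta) -sin(theta); sin(theta) cos(theta)];
max(max(abs(inv(T) - T.')))                    % on the order of eps
```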
Thanks for your references and math proof. You mainly proposed two methods to get the transpose of the affine transformation:
1. Use sparse matrix to store the affine transformation matrix, and then transpose it directly.
Actually, I am not sure if I got it correctly. As I understand it, we should do this using impulse images, something like:
[sx, sy] = size(I);
affineSparseMatrix = zeros(sx^2, sy^2);
for i = 1:sx
    for j = 1:sy
        impulseImage = zeros(sx, sy);
        impulseImage(i, j) = 1;
        % (XData/YData/Size should be fixed so the output matches the input size)
        affineSparseMatrix((j-1)*sx + i, :) = ...
            reshape(imtransform(impulseImage, affineTransform), 1, sx*sy);
    end
end
transposeAffineSparseMatrix = affineSparseMatrix.';
Is this simple MATLAB implementation right?
2. You mean we can reimplement imtransform in C, using the interpolation method (from the link you provided); however, we would still need to run loops like those in 1 above, right?
Then I have two problems now:
a. Is there a more efficient way than running the loops above? Every time the transformation is updated, we need to repeat the loops to rebuild the sparse matrix. Although the matrix is very sparse, so memory may not be a problem, speed is what I am mainly worried about.
b. My problem will go to 3D, let's say 3D affine transformations. I know we can use 'tformarray' to apply a 3D affine transform, but I am not sure whether the same idea is feasible for creating the transpose of the affine transformation.
I really appreciate all your help! Thanks very much and have a good day.
Cheers,
Aaronne.
Hi Matt,
Thanks for the fast reply. I get what you mean now. However,
1. Is there an efficient way to implement it instead of running the loops above?
Because I think in 3D it will cost a lot of time to run the loops even once. Even in 2D, if the image is large, it is not computationally efficient.
2. Do you think 'tformarray' in 3D is the same as 'imtransform' in 2D?
Best Regards,
Aaronne.
Great. What I have done below always outputs all zeros. Any ideas? Thanks very much.
Aaronne.
clear all; close all;
I = phantom(16);
[sx, sy] = size(I);
affineSparseMatrix = zeros(sx^2, sy^2);
inputsParameters = [.9 0 1 .1 0.9 2];
affineKernelMatrix = [ 1 0 0; 0 1 0; 0 0 1];
affineKernelMatrix(1:3, 1:2) = reshape(inputsParameters(1:6), 3, 2);
affineTransform = maketform('affine', affineKernelMatrix);
udata = [0 1]; vdata = [0 1];
for i = 1:sx
    for j = 1:sy
        impulseImage = zeros(sx, sy);
        impulseImage(i, j) = 1;
        affineSparseMatrix(i*j, :) = sparse(reshape(imtransform(impulseImage, affineTransform, ...
            'UData', udata, 'VData', vdata, ...
            'XData', udata, 'YData', vdata), 1, sx*sy));
    end
end
spy(affineSparseMatrix);
I have also had a read of your proof that in continuous space the transpose (adjoint) and the inverse transformations are equivalent; however, you seem to use a discrete NxN matrix to prove it.
Does that mean the transpose (adjoint) and the inverse transformations are equivalent even in the discrete case? Digital images are discrete, so do we still have the transpose equal to the inverse, apart from the interpolation error? Am I right?
Thanks and have a nice weekend,
Aaronne
function [out_vals,ignores]=linterp(mode_params,in_vals,xx,yy)
%Performs a 2D linear interpolation operation or its transpose.
%
%The general syntax is:
%
% [A,ignores]=linterp(mode_params,B,xx,yy)
%
%
%(1) FORWARD INTERPOLATION
%
%To do a forward interpolation operation, the syntax is
%
% [A,ignores]=linterp([],B,xx,yy)
%
%where B is an MxN matrix and xx, yy are vectors of interpolation
%coordinates. The output A is pretty much the same output as
%MATLAB's native interp2(B,xx,yy).
%
%The values of xx must be in the half-open interval [1,M) and
%the values of yy must be in [1,N). Coordinate pairs [xx(j),yy(j)]
%that violate these bounds will be ignored and A(j) will be set to zero.
%The optional output vector 'ignores' will form a list of such j.
%
%
%(2) TRANSPOSE INTERPOLATION
%
%The forward interpolation operation described in part (1) above is equivalent
%to A=W*B(:) for some matrix W that depends on xx and yy. Sometimes, we will
%wish to do the transpose operation B=reshape(W.'*A, M,N).
%
%The syntax for this is
%
% [B,ignores]=linterp([M,N],A,xx,yy)
%
%
if isempty(mode_params) %forward interpolation mode
istranspose=false;
[M,N]=size(in_vals);
else %transpose interpolation mode
istranspose=true;
M=mode_params(1);
N=mode_params(2);
end
%Type conversion - C code could probably just assume single input
if ~isa(xx,'single'), xx=single(xx); yy=single(yy);end
if ~isa(in_vals,'single'), in_vals=single(in_vals);end
XY=[xx(:),yy(:)];
nn=size(XY,1);
coords=floor(XY); %temporary assignment
%Boundary checking
good=(coords(:,1)<=M-1);
good=good & (coords(:,1)>=1);
good=good & (coords(:,2)<=N-1);
good=good & (coords(:,2)>=1);
ngood=nnz(good);
if ngood<nn
coords=coords(good,:);
XY=XY(good,:);
end
weights=nan(ngood,4,'single');
weights(:,[3,4])=XY-coords; %here, coords=floor(XY);
weights(:,[1,2])=1-weights(:,[3,4]);
coords=uint32(coords);
coords=sub2ind([M,N], coords(:,1), coords(:,2));
if ~istranspose %Do normal, forward interpolation operation
vals1=in_vals(coords).*weights(:,1) + in_vals(coords+1).*weights(:,3);
vals2=in_vals(coords+M).*weights(:,1) + in_vals(coords+M+1).*weights(:,3);
vals=vals1.*weights(:,2) + vals2.*weights(:,4);
out_vals=zeros(nn,1,'single');
out_vals(good)=vals;
else %Do the transpose operation
dd=[M*N 1];
out_vals=zeros(dd,'single');
in_vals=in_vals(good);
out_vals=accumarray(coords,weights(:,1).*weights(:,2).*in_vals,dd);
coords=coords+1;
out_vals=out_vals+accumarray(coords,weights(:,2).*weights(:,3).*in_vals,dd);
coords=coords+M-1;
out_vals=out_vals+accumarray(coords,weights(:,1).*weights(:,4).*in_vals,dd);
coords=coords+1;
out_vals=out_vals+accumarray(coords,weights(:,3).*weights(:,4).*in_vals,dd);
out_vals=reshape(out_vals,M,N);
end
%If desired, report rejected out-of-bounds data that was ignored
if nargout>1,
ignores=1:nn;
ignores=ignores(~good);
end
No, my proof was for the discrete case. I leave it to your imagination how it might be extended to the continuous case. Basically, instead of discrete dot-products, you would use L2-integral inner products. Also, instead of x and y being Kronecker deltas, you would let them asymptotically approach Dirac deltas.
> Does that mean that the transpose (adjoint) and the inverse transformations are equivalent even in discrete case? If consider about digital image, they are discrete and we still have the transpose equals to inverse except the interpolation error, am I right?
==========================
If you happen to have a transformation T with the inner-product preserving property, then yes, my proof shows that the inverse and transpose are equivalent in the discrete case. However IMTRANSFORM will obviously satisfy this property only approximately for rigid affine transforms (for non-rigid transforms, the dot-product will have a scale factor), because of interpolation error.
I'm wondering of course whether for your purposes the approximation might be good enough. You haven't described your application in any detail, but I have a strong suspicion that you want to be able to do this transpose in order to compute gradients of registration cost functions. But most cost function minimization algorithms are robust to errors in the gradients, so I'm wondering if it's better to settle for this approximation in the interest of speed and coding effort...
In any case, I've coded some experiments down below that compare the transpose to the inverse (where all transformations are implemented using the LINTERP routine in my last post). If you display images I1,I2,I3, and I4, you can see that the inverse transformed image I4, is basically a smoothed-out version of the transpose-transformed image I3.
N=256; %length of square image grid
cx=(N+1)/2; cy=cx; %coordinates of image center
%creates image data for square object
I1=zeros(N);
I1((1:N/4)+N/2, (1:N/4)+N/2)=1;
%set up data for affine transform - 45 degree rotation plus translation
M=[cosd(45), sind(45);-sind(45), cosd(45)]; %rotation part
M(end+1,:)=[20,20]; %add translation row (maketform expects a 3x2 matrix)
T=maketform('affine',M);
[U,V]=ndgrid(1:N,1:N); %spatial coordinates to be transformed
U=U(:)-cx; %put origin at center of image
V=V(:)-cy;
[X,Y]=tformfwd(T,U,V); %forward transformed coordinates
X=X+cx;
Y=Y+cy;
%% Perform a forward transform
I2=reshape(linterp([],I1,X,Y),N,N);
%% Transpose transform applied to I2
I3=linterp([N,N],I2,X,Y);
%% Inverse transform applied to I2
[X,Y]=tforminv(T,U,V);
X=X+cx;
Y=Y+cy;
I4=reshape(linterp([],I2,X,Y),N,N);
Thanks very much for your test .m files.
1. Yes, I haven't described my problem here because of some research funding restrictions. If you are interested, perhaps you could send me a personal message with your email address via this discussion forum, and I could then describe my research and send it to you privately.
2. However, as you mentioned before, I try to estimate the image intensities together with the transformations. That is the basic idea, and it has actually been used in PET imaging already, so it is not a secret (estimating PET image intensity and motion correction).
3. Regarding your previous post, you said that for a rigid transformation the inverse is equivalent to the transpose (or adjoint) if we ignore the interpolation error; however, for a non-rigid transformation this is not true.
In the registration context, an affine transformation is not rigid (as I understand it, a rigid transformation only contains rotation and translation). Then this assumption (inverse = transpose) does not hold for affine transformations, right?
4. About the interpolation error you mentioned: I tested my program using the inverse transform instead of the transpose (I use affine transformations), and the optimisation goes quite well.
But when I turn the 'DerivativeCheck' option 'on', I find that my analytical derivative has a large discrepancy from the finite-difference derivative MATLAB computes. The largest relative error is: Maximum relative discrepancy between derivatives = 6.13728
I am wondering if it just checks the gradient at the first iteration. And I remember that in another post you mentioned the MATLAB optimiser will refine the scaling of the problem using the Hessian.
This is the main reason that I am not very confident using the inverse instead of the transpose now, because of this large discrepancy in the gradient.
5. You gave me some code (and also a link) for 2D interpolation, which I assume does a better job than MATLAB's built-in 'interp' function. May I ask whether you know of any MATLAB package that performs 3D interpolation properly? I may use it as a reference for developing my own transpose-of-interpolation code in 3D later.
Anyway, thanks very much for all your efforts.
Aaronne.
Right, but the non-rigid case is only a minor modification of what we've already talked about. Let A be a general, invertible affine coordinate transform and let T be the corresponding image transformation operator applied to a continuous-space image f(x,y,z):
Tf(x,y,z) = f(A(x,y,z))
Then the L2 inner products of f(x,y,z) and g(x,y,z) are related to Tf and Tg as follows
Integral( Tf*Tg )=Integral( f(x,y,z)*g(x,y,z) ) /Jac(A)
where Jac(A) is the Jacobian determinant of A. You should derive this yourself to make sure you're convinced. It just comes from a change of variables in the integral.
If you discretize the above integrals to get discrete dot products, you get
dot(Tf,Tg)=dot(f,g)/Jac(A)
and applying the same line of reasoning as in the proof from before, you get
T.'=inv(T)/Jac(A)
So, it's basically the same thing we had previously, but now there's an additional scale factor of Jac(A). Incidentally, notice that for rigid transforms Jac(A)=1 and it reduces to what we had before.
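You can convince yourself of the change-of-variables step with a quick 1D quadrature check (the test functions below are arbitrary, just something smooth and decaying):

```matlab
% Check Integral(Tf*Tg) = Integral(f*g)/Jac(A) for A(x) = a*x, so Jac(A) = a.
a  = 1.5;
x  = linspace(-20, 20, 400001);  dx = x(2) - x(1);
f  = @(t) exp(-t.^2);            % hypothetical test function
g  = @(t) exp(-(t-1).^2/2);      % hypothetical test function
lhs = sum(f(a*x).*g(a*x))*dx;    % Integral of (Tf)*(Tg)
rhs = sum(f(x).*g(x))*dx / a;    % Integral(f*g)/Jac(A)
% lhs and rhs agree to quadrature accuracy
```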
> 4. About the interpolation error you mentioned: I tested my program using the inverse transform instead of the transpose (I use affine transformations), and the optimisation goes quite well.
>
> But when I turn the 'DerivativeCheck' option 'on', I find that my analytical derivative has a large discrepancy from the finite-difference derivative MATLAB computes. The largest relative error is: Maximum relative discrepancy between derivatives = 6.13728
===================
Possibly because of the missing factor of Jac(A).
> This is the main reason that I am not very confident using the inverse instead of the transpose now, because of this large discrepancy in the gradient.
===================
But yet you say you're getting a good quality solution...
In any case, using my LINTERP you have the tools necessary to make a 2D comparison of both approaches, one using the actual transpose and one using the inverse as an ad hoc substitute.
If you really care about robust convergence, one thing you could try is to use the inverse for several iterations to get reasonably close to the solution and then refine the solution by using the true transpose for several more iterations. This way, even if the transpose is computationally expensive, you will only need to use it for a limited number of iterations.
> 5. You gave me some code (and also a link) for 2D interpolation, which I assume does a better job than MATLAB's built-in 'interp' function. May I ask whether you know of any MATLAB package that performs 3D interpolation properly? I may use it as a reference for developing my own transpose-of-interpolation code in 3D later.
====================
These might be worth looking at. I've only ever tried the last one, though
http://www.mathworks.com/matlabcentral/fileexchange/24177-3d-interpolation
http://www.mathworks.com/matlabcentral/fileexchange/19687-matlab-trilinear-interpolation
I know it has been a while now. But when I look back at this post, I realize that I need to add this Jac(A), i.e., the Jacobian determinant of A, to the inverse affine transformation.
Without this scaling factor, there is still a problem with approximating the 'transpose of the affine transformation' by just using the inverse directly.
I am a little bit confused about the 'Jacobian determinant'. Should we get the Jacobian matrix of the transformation A, like
J = jacobian(A, directionVectors);
and then calculate its determinant?
Jdet = det(J);
Thanks very much for all your help and have a nice day!
Aaronne.
If the affine transformation is
A(x) = M*x + b
then Jdet(A) = det(M)
I am still not clear about how to calculate this 'Jacobian determinant'.
Suppose I have defined an affine matrix in Matlab
affineKernelMatrix = [ 1 0 0; 0 1 0; 0 0 1];
affineKernelMatrix(1:3, 1:2) = reshape(inputsParameters(1:6), 3, 2);
affineTransform = maketform('affine', affineKernelMatrix);
affineTransformedImage = imtransform(I, affineTransform, ...
'UData', udata, 'VData', vdata, ...
'XData', udata, 'YData', vdata);
Then how to get Jdet(A) or Jdet(affineKernelMatrix) here?
The MATLAB 'jacobian' syntax is
jacobian(f, v)
but that is in the Symbolic Math Toolbox and seems not to be the proper tool here. Sorry to bother you so much, and thanks a lot.
Aaronne.
"Matt J" wrote in message <j32ub7$mrt$1...@newscl01ah.mathworks.com>...
M=affineKernelMatrix(1:2,1:2);
Jdet=det(M);
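Spelled out with the parameter vector used earlier in this thread, the Jacobian determinant depends only on the linear part; the translation column drops out:

```matlab
% Jacobian determinant of the affine transform: det of the linear part only.
inputsParameters = [.9 0 1 .1 .9 2];
affineKernelMatrix = eye(3);
affineKernelMatrix(1:3, 1:2) = reshape(inputsParameters(1:6), 3, 2);
M    = affineKernelMatrix(1:2, 1:2);   % linear part; translation is excluded
Jdet = det(M)                          % = .9*.9 - .1*0 = 0.81
```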