Unfolding:
To unfold an input array, apply the hStep function to the input data.
Place the result in the first column of a matrix. Apply the hStep
function to the first column and put the result in the second column.
Continue until you have applied the hStep function 2k times. The
final column contains the original input data. This gives you your
unfolded matrix.
Folding:
To fold the matrix, apply the hStep function to the first column, then
ADD in the second column. Perform the hStep function again and ADD in
the third column. Continue until you have ADDed in the final column
and stop.
If you unfold a data array and then fold the resulting matrix, the end
result is an array containing your original data scaled up by c, where
c is the number of columns in the matrix. All the data has gone
through 2k hSteps.
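The unfold and fold procedures above might be sketched as follows. This assumes hStep is the scaled pairwise step from the fact sheet below, mapping each pair (a,b) to ((a+b)/sqrt(2),(a-b)/sqrt(2)) with sums in the lower half and differences in the upper half; the class and method names here are mine, not the author's.

```java
public class UnfoldFold {
    static final float R = (float) (1.0 / Math.sqrt(2.0));

    // One hStep: pairwise sums into the lower half, differences into the
    // upper half, each scaled by 1/sqrt(2) (an assumption - with this
    // scaling 2k steps return the data exactly).
    static float[] hStep(float[] in) {
        int n = in.length;
        float[] out = new float[n];
        int lo = 0, hi = n / 2;
        for (int i = 0; i < n; i += 2) {
            out[lo++] = (in[i] + in[i + 1]) * R;
            out[hi++] = (in[i] - in[i + 1]) * R;
        }
        return out;
    }

    // Unfold: column j holds hStep applied j+1 times to the input.
    static float[][] unfold(float[] d, int k) {
        float[][] m = new float[2 * k][];
        float[] cur = d;
        for (int j = 0; j < m.length; j++) {
            cur = hStep(cur);
            m[j] = cur;
        }
        return m;
    }

    // Fold: hStep the running result, then add in the next column.
    static float[] fold(float[][] m) {
        float[] r = m[0].clone();
        for (int j = 1; j < m.length; j++) {
            r = hStep(r);
            for (int i = 0; i < r.length; i++) r[i] += m[j][i];
        }
        return r;
    }
}
```

For k=2 and input (1,2,3,4) the final column of the unfolded matrix is (1,2,3,4) again, and folding gives (4,8,12,16): the input scaled by c=4, the number of columns.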
Interference Patterns:
If you unfold an array containing a single non-zero value and look at
the columns of the matrix, you see that the data at first spreads out
and then focuses back to a point in the final column. If you have two
or more non-zero values in the input array you get more complex
patterns of focusing and spreading out.
Data Compression Idea (not tested):
Unfold the input data. Look for the highest magnitude value in the
matrix. Note its value and location. Set the value in the matrix
to zero. Now you need to adjust the rest of the matrix. Apply hStep
to the altered column and replace the following column with the result.
If you are applying hStep to the final column, put the result in the
first column. Continue until the entire matrix has been readjusted.
Repeat the above process a number of times getting a list of values and
locations.
Decompression:
Zero all the elements in the matrix. Then add to the matrix each noted
value at its noted location. Fold the matrix, scale the result and
you are done.
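The (untested) compression and decompression ideas could be sketched like this, under the same assumption that hStep is the scaled pairwise sum/difference step. The class and method names are mine, and the final scale factor is left open since the text does not pin it down.

```java
import java.util.List;

public class MatrixCompress {
    static final float R = (float) (1.0 / Math.sqrt(2.0));

    // One hStep: scaled pairwise sums to the lower half, differences to the
    // upper half (assumption: the 1/sqrt(2) scaling makes unfold/fold exact).
    static float[] hStep(float[] in) {
        int n = in.length;
        float[] out = new float[n];
        int lo = 0, hi = n / 2;
        for (int i = 0; i < n; i += 2) {
            out[lo++] = (in[i] + in[i + 1]) * R;
            out[hi++] = (in[i] - in[i + 1]) * R;
        }
        return out;
    }

    static float[][] unfold(float[] d, int k) {
        float[][] m = new float[2 * k][];
        float[] cur = d;
        for (int j = 0; j < m.length; j++) {
            cur = hStep(cur);
            m[j] = cur;
        }
        return m;
    }

    static float[] fold(float[][] m) {
        float[] r = m[0].clone();
        for (int j = 1; j < m.length; j++) {
            r = hStep(r);
            for (int i = 0; i < r.length; i++) r[i] += m[j][i];
        }
        return r;
    }

    // A noted value and its location in the matrix.
    static class Note {
        final int col, row;
        final float value;
        Note(int col, int row, float value) { this.col = col; this.row = row; this.value = value; }
    }

    // One compression pass: note and zero the largest-magnitude entry, then
    // readjust by replacing each following column (wrapping around from the
    // final column to the first) with hStep of its predecessor.
    static Note extractLargest(float[][] m) {
        int bc = 0, br = 0;
        for (int c = 0; c < m.length; c++)
            for (int r = 0; r < m[c].length; r++)
                if (Math.abs(m[c][r]) > Math.abs(m[bc][br])) { bc = c; br = r; }
        Note note = new Note(bc, br, m[bc][br]);
        m[bc][br] = 0f;
        for (int s = 1; s < m.length; s++) {
            int c = (bc + s) % m.length;
            m[c] = hStep(m[(c - 1 + m.length) % m.length]);
        }
        return note;
    }

    // Decompression: add each noted value into a zeroed matrix and fold.
    // (The text says to scale the result; the right factor is left open here.)
    static float[] decompress(List<Note> notes, int k, int n) {
        float[][] m = new float[2 * k][n];
        for (Note t : notes) m[t.col][t.row] += t.value;
        return fold(m);
    }
}
```

For an input with a single non-zero value, one pass captures everything: the largest entry sits in the final column (which equals the input), and zeroing it leaves a matrix that readjusts to all zeros.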
Neural Networks (tested):
Feed forward:
Each element of the matrix corresponds to the output of a neuron. For
a given input place the response of each neuron in the matrix and fold
to get your network output.
Back propagation:
Given an input and target pair, feed forward and get the output of the
net for the given input. Calculate the error. Unfold the error and
update each neuron with its corresponding error value for the input.
New Idea for Back Propagation (not tested):
Update the neurons in a similar way to the compression algorithm. In
the matrix find the error with maximum magnitude. Update the
corresponding neuron. Recalculate the error the neuron is making.
Put this new error back in its corresponding place in the matrix.
Readjust the rest of the matrix as per the compression algorithm.
Repeat the above process a few times.
Sean O'Connor 12 October 2006.
The Hadamard Transform Fact Sheet
Out of Place Algorithm:
S=an operational step in the transform
Unscaled 2-point transform
Step 1 of 1: (a+b,a-b)=S((a,b))
Unscaled 4-point transform
Step 1 of 2: (s1,s2,d1,d2)=(a+b,c+d,a-b,c-d)=S((a,b,c,d))
Step 2 of 2: (s1+s2,d1+d2,s1-s2,d1-d2)=S((s1,s2,d1,d2))
Unscaled 8-point transform
Step 1 of 3:
(s1,s2,s3,s4,d1,d2,d3,d4)=(a+b,c+d,e+f,g+h,a-b,c-d,e-f,g-h)=S((a,b,c,d,e,f,g,h))
Step 2 of 3:
(s5,s6,s7,s8,d5,d6,d7,d8)=(s1+s2,s3+s4,d1+d2,d3+d4,s1-s2,s3-s4,d1-d2,d3-d4)=S((s1,s2,s3,s4,d1,d2,d3,d4))
Step 3 of 3:
(s5+s6,s7+s8,d5+d6,d7+d8,s5-s6,s7-s8,d5-d6,d7-d8)=S((s5,s6,s7,s8,d5,d6,d7,d8))
The idea is to go through the input array pairwise and put the sum
values in the lower half of a new array and the difference values into
the upper half of the new array. Do this operation k times, where the
length of the input array is 2^k (k is an integer>0). The scaling
factor is 1 divided by the square root of the array length.
In Place Algorithm
Unscaled 4-point transform
Step 1a of 2: (a+b,a-b,c,d)=S((a,b,c,d))
Step 1b of 2: (s1,d1,s2,d2)=(a+b,a-b,c+d,c-d)=S((a+b,a-b,c,d))
Step 2a of 2: (s1+s2,d1,s1-s2,d2)=S((s1,d1,s2,d2))
Step 2b of 2: (s1+s2,d1+d2,s1-s2,d1-d2)=S((s1+s2,d1,s1-s2,d2))
The idea is to break the input array into blocks of 2^n (n=0 to k-1)
then work sequentially through the values in pairs of blocks putting
the sum in the lower block and the difference in the upper block.
Sequency Algorithm
Let h(a) be a function that returns 1 if the bit wise parity of the
integer a is even and -1 if the parity of a is odd. Let d be the
array of data of length n (n=2^k, k an integer, k>0) that you want to
transform and let r be the result array.
r[j] = Σ d[i]*h(j AND i)   (sum over i=0 to n-1, for each j=0 to n-1)
Discussion
The Hadamard transform is its own inverse. It transforms a single
nonzero point in the input to a 'sequency' pattern in the output
and vice versa. Each different point in the input maps to a different
sequency pattern in the output (where the actual sequency is the number
of transitions between positive and negative values (either way) in the
output). The Hadamard transform is very different from the Fourier
Transform, be warned.
Properties
H= Scaled Hadamard transform
Pn= Permutation selected at random from the set of permutations
Ln= Permutation selected at random from the set of permutations that
leave the sum term of the Hadamard transform where it is.
Mn= Permutation selected at random from the set of permutations that
move the sum term of the Hadamard transform from where it is.
Z= Zero the sum term of the Hadamard transform.
d= an array of general data (eg image or sound data)
r= an array of random numbers (with a uniform distribution for example)
g=an array of random numbers with the Gaussian distribution
· s=H(H(s))
· It preserves vector length, the vector length of H(s) equals
the vector length of s.
· It is orthogonal and linear. The transform of two different
sequency patterns added together is just two points.
· By the central limit theorem g=H(r).
· Data to Gaussian excluding the mean. g=H(Pj(Z(H(Pi(d))))).
· Data to Gaussian including mean. g=H(Pk(H(Mj(H(Pi(d)))))).
· Data to Gaussian keeping the mean. g=H(Lj(H(Pi(d))))
· If z=H(s)+s and u=H(s)-s then z=H(z) and u= -H(u)
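A couple of these properties are easy to check numerically with a small in-place transform (a standalone sketch; the class and method names are mine):

```java
public class HadamardProps {
    // Scaled in-place Walsh-Hadamard transform; length must be a power of 2.
    static void wht(float[] d) {
        int n = d.length;
        for (int block = 1; block < n; block <<= 1)
            for (int p = 0; p < n; p += block << 1)
                for (int i = 0; i < block; i++) {
                    float a = d[p + i];
                    float b = d[p + block + i];
                    d[p + i] = a + b;
                    d[p + block + i] = a - b;
                }
        float s = (float) (1.0 / Math.sqrt(n));
        for (int i = 0; i < n; i++) d[i] *= s;
    }
}
```

Applying wht twice returns the input (s=H(H(s))), and z=H(s)+s is a fixed point of the transform while u=H(s)-s is negated by it.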
Discussion
Suppose the array d contains pixel image data. A random permutation
essentially converts it into an array of random numbers that the
Hadamard transform will turn into Gaussian Noise. If d contains just a
single non-zero value then the first permutation will just shift it
around and when it is transformed it will give a sequency pattern.
Hence you need a second permutation which will break up the sequency
pattern into an array of random numbers (with binary values, but good
enough except for very small arrays).
Uses
A Replacement for the Box-Muller algorithm?
If u is an array of uniform random numbers with zero mean then g=H(u)
is an array of random numbers with the Gaussian distribution and zero
mean. The cost per Gaussian deviate is one call to a uniform random
generator and a few addition and subtraction operations. Obviously
u=H(H(u)), so the Gaussian deviates can be inverted back to the
uniforms. In some cases this may not matter. Where it does matter you
have various options for preventing the inversion. The best solution
is a random permutation of g combined with random sign flipping.
Short of that just random sign flipping will do.
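A minimal sketch of the idea, assuming zero-mean uniform inputs, the scaled transform, and random sign flipping to hinder inversion (the class and method names are mine):

```java
import java.util.Random;

public class FastGaussian {
    // Scaled in-place Walsh-Hadamard transform; length must be a power of 2.
    static void wht(float[] d) {
        int n = d.length;
        for (int block = 1; block < n; block <<= 1)
            for (int p = 0; p < n; p += block << 1)
                for (int i = 0; i < block; i++) {
                    float a = d[p + i];
                    float b = d[p + block + i];
                    d[p + i] = a + b;
                    d[p + block + i] = a - b;
                }
        float s = (float) (1.0 / Math.sqrt(n));
        for (int i = 0; i < n; i++) d[i] *= s;
    }

    // Fill with zero-mean uniforms, transform (the central limit theorem
    // makes the outputs approximately Gaussian), then randomly flip signs
    // so the uniforms cannot be recovered by simply transforming back.
    static float[] gaussians(int n, Random rnd) {
        float[] g = new float[n];
        for (int i = 0; i < n; i++) g[i] = rnd.nextFloat() - 0.5f;
        wht(g);
        for (int i = 0; i < n; i++) if (rnd.nextBoolean()) g[i] = -g[i];
        return g;
    }
}
```

The cost per deviate is a uniform draw plus a few additions and subtractions; the output variance matches the input variance (1/12 for uniforms on -0.5 to 0.5) because the transform preserves vector length.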
Genetic Algorithms
Genetic algorithms often require a new normally distributed population
based on the old one. If the population size is an integer power of 2
then using the HLHP function (see above) on each dimension in the
population will achieve just that.
Image Segmentation
Imagine you put the 3 colour values for a pixel through HPZHP (which
would only partially fill the input array of course) and looked at one
of the output values of the function. If the value was positive then
it might be telling you that the pixel has a purplish colour. In any
event you can segment an image by colour by looking at the sign of one
of the outputs of the function. Going one step further you can put the
values of a number of neighbouring pixels through HPZHP. Now each output
will be positive or negative depending on some pattern it recognises in
the input pixels.
Clustering Algorithms
The HPZHP function is expected to have a Gaussian distributed output.
Putting each output value through the Gaussian cumulative distribution
function should get you back to a uniform deviate over the interval 0
to 1. Selecting 3 outputs will give you a 1*1*1 cube which you can
divide up into 1000 small equally sized cubes. If you evaluate 10,000
vectors that have completely uncorrelated data in them then you can
expect each of the small cubes to be visited 10 times. More than that
would indicate some form of clustering in your data. Each different
permutation you use in HPZHP actually gives you a different window on
your data. The direction the window is pointing in is potluck but hey!
Appendix
Java Source code
package factsheet;
public class Hadamard {
    final int steps;
    final int size;
    final float scale;
    final float[] opArray;
    // Length of vector to transform n=2^k
    public Hadamard(int k) {
        if(k<1 || k>29) throw new IllegalArgumentException();
        steps=k;
        size=1<<k;
        scale=1f/(float)Math.sqrt(size);
        opArray=new float[size];
    }
    // out of place transform
    public void transformOP(float[] data){
        // opArray is final (bug control) so get a variable reference to it.
        float[] next=opArray;
        if(data.length!=size) throw new IllegalArgumentException();
        for(int i=0;i<steps;i++){
            int pairwise=0;
            int lowerHalf=0;
            int upperHalf=size>>1;
            while(pairwise<size){
                float a=data[pairwise++];
                float b=data[pairwise++];
                next[lowerHalf++]=a+b;
                next[upperHalf++]=a-b;
            }
            float[] temp=data; // swap arrays around.
            data=next;
            next=temp;
        }
        for(int i=0;i<size;i++) data[i]*=scale;
        // If an odd number of transform steps have been done then you need to
        // copy back the result to the original data array reference (in next).
        if((steps&1)==1) System.arraycopy(data,0,next,0,size);
    }
    // in place transform
    public void transformIP(float[] data){
        if(data.length!=size) throw new IllegalArgumentException();
        int blockSize=1;
        while(blockSize<size){
            for(int blockPair=0;blockPair<size;blockPair+=blockSize<<1){
                for(int i=0;i<blockSize;i++){
                    float a=data[blockPair+i];
                    float b=data[blockPair+blockSize+i];
                    data[blockPair+i]=a+b;
                    data[blockPair+blockSize+i]=a-b;
                }
            }
            blockSize<<=1;
        }
        for(int i=0;i<size;i++) data[i]*=scale;
    }
    // sequency transform
    public void transformSeq(float[] data){
        if(data.length!=size) throw new IllegalArgumentException();
        for(int i=0;i<size;i++){
            float sum=0;
            for(int j=0;j<size;j++){
                sum+=data[j]*parity(j & i);
            }
            opArray[i]=sum;
        }
        for(int i=0;i<size;i++) opArray[i]*=scale;
        System.arraycopy(opArray,0,data,0,size);
    }
    // Convert a natural-order index to its sequency.
    public int indexToSequency(int index){
        if(index<0 || index>=size) throw new IllegalArgumentException();
        int result=0;
        int j=bitReverse(index,steps);
        for(;j>0;j>>>=1)
            result^=j;
        return result;
    }
    // Convert a sequency to its natural-order index.
    public int sequencyToIndex(int seq){
        if(seq<0 || seq>=size) throw new IllegalArgumentException();
        seq^=seq>>1;
        return bitReverse(seq,steps);
    }
    // Returns 1 if a is parity even and -1 if a is parity odd
    public static int parity(int a){
        a^=a>>16;
        a^=a>>8;
        a^=a>>4;
        a^=a>>2;
        a^=a>>1;
        return 1-((a&1)<<1);
    }
    // Table of bit-reversed bytes.
    static final int[] mirrorAr=new int[256];
    static{
        for(int i=0;i<256;i++){
            int res=0;
            for(int j=0,k=i;j<8;j++){
                res<<=1;
                res|=k&1;
                k>>=1;
            }
            mirrorAr[i]=res;
        }
    }
    private static int bitReverse(int i,int nBits){
        int result;
        result=mirrorAr[i&255];
        if(nBits<=8) return result>>>(8-nBits);
        result<<=8;
        i>>>=8;
        result|=mirrorAr[i&255];
        if(nBits<=16) return result>>>(16-nBits);
        result<<=8;
        i>>>=8;
        result|=mirrorAr[i&255];
        if(nBits<=24) return result>>>(24-nBits);
        result<<=8;
        i>>>=8;
        result|=mirrorAr[i];
        return result>>>(32-nBits);
    }
}
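As a standalone cross-check (mirroring transformIP and transformSeq above; the class and method names here are mine), the fast in-place algorithm and the direct sequency formula should agree:

```java
public class CrossCheck {
    // Fast scaled in-place transform (mirrors transformIP above).
    static void fast(float[] d) {
        int n = d.length;
        for (int block = 1; block < n; block <<= 1)
            for (int p = 0; p < n; p += block << 1)
                for (int i = 0; i < block; i++) {
                    float a = d[p + i];
                    float b = d[p + block + i];
                    d[p + i] = a + b;
                    d[p + block + i] = a - b;
                }
        float s = (float) (1.0 / Math.sqrt(n));
        for (int i = 0; i < n; i++) d[i] *= s;
    }

    // Direct formula r[j] = sum over i of d[i]*h(j AND i), scaled
    // (mirrors transformSeq above).
    static float[] direct(float[] d) {
        int n = d.length;
        float[] r = new float[n];
        float s = (float) (1.0 / Math.sqrt(n));
        for (int i = 0; i < n; i++) {
            float sum = 0;
            for (int j = 0; j < n; j++) sum += d[j] * parity(i & j);
            r[i] = sum * s;
        }
        return r;
    }

    // Returns 1 if a has even bit parity and -1 if odd.
    static int parity(int a) {
        a ^= a >> 16;
        a ^= a >> 8;
        a ^= a >> 4;
        a ^= a >> 2;
        a ^= a >> 1;
        return 1 - ((a & 1) << 1);
    }
}
```

Both routines produce the transform in the same (natural) output order, so their results match element by element.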
Sean O'Connor 12 July 2006