solutions to convolution/fourier transform exercises?


ayvo...@googlemail.com

unread,
Aug 13, 2014, 1:26:35 PM8/13/14
to analyzingneura...@googlegroups.com
Hi,

I was wondering if anyone is willing to share their (working) Matlab code for the exercises in chapters 10 and 11. I have tried to work through them but get very odd results. The methods give similar but not identical answers: for example, in chapter 10, when comparing manual against Matlab convolution, I get a very strong drift for one of the methods (rather than both giving identical results), and in chapter 11 I get similar but smoother results for the DTFT and something closer to a flat line (but not quite) for the Matlab fft. In addition, I only get the DTFT right(ish) when I normalise the Fourier coefficients for the signal but not for the kernel (I just discovered this accidentally); otherwise their scales are wildly out of proportion. That seems a little arbitrary, right?

Clearly I am not ready for the next chapter, but for the love of Matlab, I cannot figure out precisely what my mistakes are. Maybe you went through the same issues and had a brighter moment?

Any help appreciated!

Thanks,

Alex

Mike X Cohen

unread,
Aug 13, 2014, 4:55:16 PM8/13/14
to analyzingneura...@googlegroups.com
Hi Alex. If you can describe your difficulty in more detail, and perhaps send some Matlab code or a screenshot of a figure, then I can try to give you some pointers in the right direction.

Mike

--
You received this message because you are subscribed to the Google Groups "AnalyzingNeuralTimeSeriesData" group.
To unsubscribe from this group and stop receiving emails from it, send an email to analyzingneuraltimes...@googlegroups.com.
Visit this group at http://groups.google.com/group/analyzingneuraltimeseriesdata.
For more options, visit https://groups.google.com/d/optout.



--
Mike X Cohen, PhD
mikexcohen.com

ayvo...@googlemail.com

unread,
Aug 14, 2014, 2:03:04 PM8/14/14
to analyzingneura...@googlegroups.com
Hi,

that is very helpful of you, thanks! Not one to give up easily, though, I have done some more work on this in the meantime:

I realised that for the exercise in chapter 10 I simply made a mistake when rescaling the Fourier coefficients, and after lots of blood, sweat, and tears I managed to get the same results for all three methods. It turns out that for the DTFT I needed to add trailing zeros, instead of padding at both the beginning and the end as I did for time-domain convolution in chapter 10. I only understood this after finding the code for the Matlab fft function, which apparently does the same. This was not made explicit for me in the Tips and Tricks section.

For this, however (and here I am still not sure whether I took liberties), I needed to scale the multiplied Fourier coefficients, rather than the individual coefficients of the Gaussian and the signal; otherwise my signal disappears or becomes unreasonably large. I am still struggling with the whole scaling issue beyond its obvious function: what is the theoretical motivation behind it? In other words, why do I normalise the combined coefficients and not each set of coefficients individually?


Thanks!

Mike X Cohen

unread,
Aug 15, 2014, 1:52:52 AM8/15/14
to analyzingneura...@googlegroups.com
Technically, the exercises in Chapter 10 do not require the Fourier transform. If you are computing convolution in the time-domain, then the zeros need to be padded on both the beginning and end of the signal (see figure 10.3). If you are computing convolution via frequency domain multiplication, then yes, the zeros are padded only at the end, although you can have the fft function do the padding for you by specifying a second input, N, which defines the number of total data points to use for the FFT, regardless of the length of the input. 
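To make the padding explicit, here is a minimal sketch (the signal and kernel here are made up for illustration, not taken from the book code):

```matlab
% hypothetical signal and Gaussian kernel, just for illustration
signal = randn(1,100);
kernel = exp( -(-10:10).^2 / 8 );

% length of the full linear convolution result
n_conv = length(signal) + length(kernel) - 1;

% the second input to fft pads each input with trailing zeros up to n_conv,
% so no manual padding is needed
signalX = fft(signal, n_conv);
kernelX = fft(kernel, n_conv);
```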

As for scaling, yes, this is always a tricky issue. In a Fourier decomposition, the coefficients can be scaled by N. In other words, fft(x)/length(x). If you are performing convolution with a non-zero-mean kernel -- which is the case in the exercises in Chapter 10 -- you can divide the result of convolution by the sum of the kernel. In other words, conv_result = conv_result/sum(kernel); 
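Putting the padding and the scaling together, here is a self-contained sketch of frequency-domain convolution with the kernel scaling applied to the result (variable names are illustrative, and this assumes an odd-length kernel so the "wings" trim symmetrically):

```matlab
signal = randn(1,100);                 % hypothetical signal
kernel = exp( -(-10:10).^2 / 8 );      % hypothetical non-zero-mean Gaussian kernel
n_conv = length(signal) + length(kernel) - 1;

% multiply zero-padded spectra and inverse-transform
conv_result = ifft( fft(signal,n_conv) .* fft(kernel,n_conv) );

% divide by the sum of the kernel so the signal keeps its original scale
conv_result = conv_result / sum(kernel);

% trim the wings so the result aligns with the original signal
half_k = floor(length(kernel)/2);
conv_result = conv_result(half_k+1 : end-half_k);

% sanity check against time-domain convolution:
% max(abs( conv_result - conv(signal,kernel,'same')/sum(kernel) ))  % should be ~0
```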

I hope that helps,
Mike