signal.deconvolve is essentially signal.lfilter, but I don't quite
understand what it does.

I changed two lines, partly by trial and error and partly by analogy
to ARMA models. I'm not quite sure the following changes are correct,
but at least they produce a nice graph.
Instead of deconvolve, use lfilter directly:

y_deconv = signal.lfilter(window, [1.], ys[::-1])[::-1]

and center the lfiltered window:

ax.plot(xs[window.size//2-1:-window.size//2], y_deconv[:-window.size+1], 'k-')
If your signal is periodic, then I would go for the FFT versions of
convolution and IIR filtering.
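(For reference, scipy.signal.fftconvolve is the drop-in FFT-based
version of convolution; a tiny self-contained illustration, where the
stand-in signal and window are mine, not from the thread:

import numpy as np
from scipy import signal

ys = np.random.randn(500)       # stand-in for the measured signal
window = np.ones(11) / 11.      # stand-in smoothing window
y_smooth = signal.fftconvolve(ys, window, mode='same')
)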
My initial guesses were that there is either something wrong (a hidden
assumption) about the starting values of signal.convolve, or that
there are problems because of the non-stationarity.
But maybe a signal expert knows better.
Josef
I mixed up numerator and denominator for lfilter
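(lfilter's signature is lfilter(b, a, x), with b the numerator and a
the denominator, so the corrected call would presumably swap the two
arguments -- my reconstruction; the fixed line itself isn't shown in
the thread:

y_deconv = signal.lfilter([1.], window, ys[::-1])[::-1]
)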
>
>> instead of deconvolve use lfilter directly
>>
>> y_deconv = signal.lfilter(window, [1.], ys[::-1])[::-1]
>>
>> and
>>
>> center lfiltered window:
>>
>> ax.plot(xs[window.size//2-1:-window.size//2], y_deconv[:-window.size+1], 'k-')
>>
>> If your signal is periodic, then I would go for the fft versions of
>> convolution, and iir filtering.
>
> The signal is not periodic.
>
> Ralf
>
>> My initial guesses were that there is either something wrong (hidden
>> assumption) about the starting values of signal convolve, or there are
>> problems because of the non-stationarity.
In terms of IIR filtering, as I understand it from the ARMA analogy,
your window is not invertible:
>>> r = np.roots(window)
>>> r*r.conj()
array([ 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j,
1.+0.j, 1.+0.j, 1.+0.j])
I'm not sure if this can work, but in IIR filters as I know them, the
coefficient on the most recent observation is normalized to 1. Since
all the roots lie on the unit circle (the r*r.conj() values above are
all 1), the window cannot be inverted.
I think lfilter uses the same assumptions. lfilter also has a funny
way of determining initial conditions (zi), which I'm never quite
sure how to use. It doesn't matter much with invertible and
stationary processes, but in this case I guess it does.
If I change the window to an invertible window

window_len = 11
window = np.ones(window_len) / window_len
window[0] = 1.
then my current version, which should be close to your original
version, works: the deconvolved series looks similar to the original
series.
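(To make that concrete, here is a minimal end-to-end sketch of the
invertibility check and the modified window; the synthetic signal is
my own stand-in, since the thread's example code isn't shown here:

import numpy as np
from scipy import signal

# stand-in for the original smooth signal
xs = np.linspace(0, 10, 500)
ys_true = np.sin(xs) + 0.2 * xs

window_len = 11
window = np.ones(window_len) / window_len
window[0] = 1.   # makes the window polynomial invertible

# all roots now lie strictly inside the unit circle
print(np.abs(np.roots(window)).max())   # < 1, so the inverse filter is stable

# convolving with the window is lfilter(window, [1.], ...);
# deconvolving is the inverse filter lfilter([1.], window, ...)
ys = signal.lfilter(window, [1.], ys_true)
y_deconv = signal.lfilter([1.], window, ys)
print(np.allclose(y_deconv, ys_true))   # True: exact recovery without noise
)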
In either case, I think Joe's answer is more useful, since you can
directly manipulate the frequency response. I only used a simple IIR
frequency domain filter with an adaptation of fftconvolve.
Josef
On Sun, Jul 31, 2011 at 9:22 PM, Joe Kington <jkin...@wisc.edu> wrote:
> Gah, I hit send too soon! The default eps parameter in that function
> should be more like 1.0e-6 instead of 0.1. You'll generally need to
> adjust the eps parameter to match the signal-to-noise ratio of the
> two signals you're deconvolving.
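(Joe's function itself isn't included in this excerpt; as a guess at
the technique, eps-thresholded Fourier-domain division might look
something like this, where all the names are mine:

import numpy as np

def fourier_deconvolve(ys, kernel, eps=1.0e-6):
    # divide in the frequency domain, but only where the kernel's
    # spectrum is large enough to divide by safely
    H = np.fft.fft(kernel, len(ys))
    S = np.fft.fft(ys)
    out = np.zeros_like(S)
    ok = np.abs(H) > eps
    out[ok] = S[ok] / H[ok]
    return np.fft.ifft(out).real
)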
Thanks Joe. I had tried something similar, but now I have a name for the method and a confirmation that doing something like that makes sense. I'll play with this idea some more tomorrow.

Cheers,
Ralf
I realize this discussion has gone rather far afield from efficient 1D
deconvolution, but we do a funny thing in radio interferometry, and
I'm curious whether this is normal for other kinds of deconvolution as
well.

In radio interferometry we obtain our images convolved with the
so-called "dirty beam", a convolution kernel that has a nice narrow
peak but usually a chaos of monstrous sidelobes often only marginally
smaller than the main lobe. We use a different regularization
condition to do our deconvolution: we treat the underlying image as a
modest collection of point sources. (One can see why this appeals to
astronomers.) Through an iterative process (the "CLEAN" algorithm and
its many descendants) we obtain an estimate of this underlying image.
But we very rarely actually work with this image directly. We normally
convolve it with a sort of idealized version of our kernel without all
the sidelobes. This then gives an image one might have obtained from a
normal telescope the size of the interferometer array. (Apart from all
the CLEAN artifacts.)
What I'm wondering is, is this final step of convolving with an
idealized version of the kernel standard practice elsewhere?
From one point of view it could just be parochiality, astronomers
being so accustomed to smudgy images that we have to convert anything
else to this format. But I think that at the least it softens the
effect of the rather strict regularization assumption behind CLEAN -
which amounts to "no extended sources". It probably makes us less
sensitive to shortcuts in CLEAN implementations. I think, though, that
this trick may be useful for many applications of deconvolution.
Rather than try to translate the image from the observed kernel to
some ideal Dirac-delta kernel, this tries to convert it from the
observed kernel to a similar but simpler kernel; one would expect the
impact of a deconvolution artifact to be related to the magnitude of
the difference between kernels.
In terms of 1D Fourier deconvolution, this amounts to saying that we
don't really need all those high frequencies amplified so much anyway,
and smoothing them back down, after deconvolution, with a nice clean,
easy-to-understand kernel. In these terms, in fact, it makes perfect
sense to use a kernel wider than necessary for this smoothing if one
is interested in larger-scale features.
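(In code, this final step is just a convolution with a simple,
sidelobe-free kernel after whatever deconvolution was used; a
hypothetical sketch, using a Gaussian as the "idealized" kernel:

import numpy as np

def restore(deconvolved, width=5.0):
    # idealized "clean" kernel: a Gaussian with no sidelobes
    t = np.arange(-4 * width, 4 * width + 1)
    kernel = np.exp(-0.5 * (t / width) ** 2)
    kernel /= kernel.sum()
    return np.convolve(deconvolved, kernel, mode='same')
)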
Anne
Hi Ralf,

I do a reasonable amount of (2D and 3D) deconvolution of microscopy images, and the method I use depends quite a lot on the exact properties of the signal. You can usually get away with FFT-based convolutions even if your signal is not periodic, as long as your kernel is significantly smaller than the signal extent.

As Joe mentioned, for a noisy signal, convolving with the inverse or performing Fourier-domain division doesn't work, as you end up amplifying high-frequency noise components. You thus need some form of regularisation. The thresholding of Fourier components that Joe suggests does this, but you might also want to explore more sophisticated options, the simplest of which is probably Wiener filtering (http://en.wikipedia.org/wiki/Wiener_deconvolution).

If you've got a signal which is constrained to be positive, it's often useful to introduce a positivity constraint on the deconvolution result, which generally means you need an iterative algorithm. The choice of algorithm should also depend on the type of noise present in your signal - my image data is constrained to be +ve and typically has either Poisson or a mixture of Poisson and Gaussian noise, and I use either the Richardson-Lucy algorithm or a weighted version of ICTM (Iterative Constrained Tikhonov-Miller). I can provide more details of these if required.
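(For reference, a minimal 1D sketch of the Richardson-Lucy iteration
mentioned above -- this is the textbook multiplicative update, not
David's actual code, and the small constant guarding the division is
my addition; the psf is assumed to sum to 1:

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=50):
    # multiplicative updates keep the estimate positive,
    # matching the Poisson noise model
    psf_mirror = psf[::-1]
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (reblurred + 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
)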
Hi Ralf,
5-15 times smaller would probably be fine, although you might want to watch the edges in the reconstruction - if they're at different DC levels you'll get edge artifacts (within ~1 kernel width of the edges). I'd tend to avoid spline filtering (or any form of noise reduction) before deconvolution, as this will also transform the data in a way not explained by the model you're using to deconvolve with.

Wiener filtering is a two-liner:

H = fft(kernel)
deconvolved = ifftshift(ifft(fft(signal)*np.conj(H)/(H*np.conj(H) + lam**2)))

where lam is your regularisation parameter (renamed from lambda, which is a reserved word in Python) and white noise is assumed. There are various methods for choosing lam optimally, but most people tend to use trial and error.

Iterative methods are typically ~1-2 orders of magnitude slower than a Wiener filter, but with fast FFT libraries and modern computers they are still quite reasonable for modest data sizes (a 3D image stack of ~512x512x50 pixels will tend to be done in a bit under a minute; I can't really comment on 1D data, but unless your signal is very long I'd expect it to be significantly quicker). FFTs scale as O(n log n), so they will generally dramatically outperform approaches based on simple convolution or filtering (O(n**2)) for large n. This might make an iterative approach using FFTs faster than something like scipy.signal.deconvolve if your kernel is large.
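(Wrapped into a self-contained sketch -- the function name, test
signal, and kernel zero-padding are my own; the trailing ifftshift
follows the snippet above, and the exact output alignment depends on
how the kernel is centred:

import numpy as np
from numpy.fft import fft, ifft, ifftshift

def wiener_deconvolve(ys, kernel, lam=0.1):
    # zero-pad the kernel to the signal length before transforming
    H = fft(kernel, len(ys))
    S = fft(ys)
    return ifftshift(ifft(S * np.conj(H) / (H * np.conj(H) + lam**2))).real

# toy usage: boxcar-blurred noisy signal
xs = np.linspace(0, 10, 512)
truth = np.sin(xs) + 0.2 * xs
kernel = np.ones(11) / 11.
blurred = np.convolve(truth, kernel, mode='same')
blurred += 0.01 * np.random.randn(blurred.size)
recovered = wiener_deconvolve(blurred, kernel, lam=0.05)
)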
2011/7/31 Ralf Gommers <ralf.g...@googlemail.com>:
> For a measured signal that is the convolution of a real signal with a
> response function, plus measurement noise on top, I want to recover the real
> signal. Since I know what the response function is and the noise is
> high-frequency compared to the real signal, a straightforward approach is to
> smooth the measured signal (or fit a spline to it), then remove the response
> function by deconvolution. See example code below.
I ran across this (see below) a while ago, since I've been dealing
with information theory recently. It has a deconvolution example in
1D, and it compares some different general methods in a kind of
"unified framework", as far as such a thing exists. I found it quite
informative and helpful. If you can't get access, I can get it from
the library in two weeks. The citation is:
Robert L. Fry (ed.), Bayesian Inference and Maximum Entropy Methods in
Science and Engineering: 21st International Workshop, Baltimore,
Maryland, AIP Conf. Proc. 617 (2002)
ISBN 0-7354-0063-6; ISSN 0094-243X
Tutorial "Bayesian Inference for Inverse Problems" (A.
Mohammad-Djafari) on page 477ff.
It includes different noise models, AFAIR, or at least the structure
for how to deal with them. If I'm not mistaken, the problem discussed
there was a mass-spectrometry spectrum, so it should have been mainly
shot noise, plus of course the PSF.
The tutorial covers (in short) maximum entropy as well as maximum
likelihood, and a combination of both (hence the "unification"). I
cannot help much with this since I'm new to it myself. But I did a
reasonable literature search, and this was one of the best outcomes.
As I said, though, my angle was information theory.
Hope this is a useful pointer,
Friedrich
> Can anyone point me towards code that does the deconvolution efficiently?
> Perhaps signal.deconvolve would do the trick, but I can't seem to make it
> work (except for directly on the output of np.convolve(y, window,
> mode='valid')).
No. In fact, I don't think there is an automagical solution anywhere. :-)
Good luck!
It's very like fitting a parametric model, yes, except that we don't
care much about the model parameters. In fact we often end up with
models that have clusters of "point sources" with positive and
negative emissions trying to match up with what is in reality a single
point source. This can be due to inadequacies of the dirty beam model
(though usually we have a decent estimate) or simply noise. In any
case smoothing with an idealized main lobe makes us much less
sensitive to this kind of junk. Plus if you're going to do this
anyway, it can make life much easier to constrain your point sources
to a grid.
(As an aside, this trick - of fitting a parametric model but then
extracting "observational" parameters for comparison to reduce
model-sensitivity - came up with some X-ray spectral data I was
looking at: you need to use a model to pull out the instrumental
effects, but if you report (say) the model luminosity in a band your
instrument can detect, then it doesn't much matter whether your model
thinks the photons are thermal or power-law. In principle you can even
do this trick with published model parameters, but you run into the
problem that people don't give full covariance matrices for the fitted
parameters so you get spurious uncertainties.)
As far as frequency aliasing goes, not much of it comes from the
deconvolution, since our beam is so irregular. The actual observation
samples image spatial frequencies rather badly; it's the price we pay
for not having a filled aperture. So we're often simply missing
information on spatial frequencies, most often the lowest ones
(because there's a limit on how close you can put tracking dishes
together without shadowing). But I don't think this is a deconvolution
issue; in fact in situations where people are really pushing the
limits of interferometry, like the millimeter-wave interferometric
observations of the black hole at the center of our galaxy, you often
give up on producing an image at all and fit (say) an emission model
including the event horizon to the observed spatial frequencies
directly.
Anne