[SciPy-User] deconvolution of 1-D signals

Ralf Gommers

Jul 31, 2011, 1:56:49 PM
to SciPy Users List
Hi,

For a measured signal that is the convolution of a real signal with a response function, plus measurement noise on top, I want to recover the real signal. Since I know what the response function is and the noise is high-frequency compared to the real signal, a straightforward approach is to smooth the measured signal (or fit a spline to it), then remove the response function by deconvolution. See example code below.

Can anyone point me towards code that does the deconvolution efficiently? Perhaps signal.deconvolve would do the trick, but I can't seem to make it work (except for directly on the output of np.convolve(y, window, mode='valid')).

Thanks,
Ralf


import numpy as np
from scipy import interpolate, signal
import matplotlib.pyplot as plt

# Real signal
x = np.linspace(0, 10, num=201)
y = np.sin(x + np.pi/5)

# Noisy signal
mode = 'valid'
window_len = 11  # integer: used below for constructing and slicing arrays
window = np.ones(window_len) / window_len
y_meas = np.convolve(y, window, mode=mode)
y_meas += 0.2 * np.random.rand(y_meas.size) - 0.1
if mode == 'full':
    xstep = x[1] - x[0]
    x_meas = np.concatenate([
        np.linspace(x[0] - window_len//2 * xstep, x[0] - xstep, num=window_len//2),
        x,
        np.linspace(x[-1] + xstep, x[-1] + window_len//2 * xstep, num=window_len//2)])
elif mode == 'valid':
    x_meas = x[window_len//2:-window_len//2+1]
elif mode == 'same':
    x_meas = x

# Approximating spline
xs = np.linspace(0, 10, num=500)
knots = np.array([1, 3, 5, 7, 9])
tck = interpolate.splrep(x_meas, y_meas, s=0, k=3, t=knots, task=-1)
ys = interpolate.splev(xs, tck, der=0)

# Find (low-frequency part of) original signal by deconvolution of smoothed
# measured signal and known window.
y_deconv = signal.deconvolve(ys, window)[0]  #FIXME

# Plot all signals
fig = plt.figure()
ax = fig.add_subplot(111)

ax.plot(x, y, 'b-', label="Original signal")
ax.plot(x_meas, y_meas, 'r-', label="Measured, noisy signal")
ax.plot(xs, ys, 'g-', label="Approximating spline")
ax.plot(xs[window.size//2-1:-window.size//2], y_deconv, 'k-',
        label="signal.deconvolve result")
ax.set_ylim([-1.2, 2])
ax.legend()

plt.show()

josef...@gmail.com

Jul 31, 2011, 3:10:55 PM
to SciPy Users List

signal.deconvolve is essentially signal.lfilter, but I don't quite
understand what it does.

I changed 2 lines, partly by trial and error and partly by analogy to ARMA
models. I'm not quite sure the following changes are correct, but at
least they produce a nice graph.

instead of deconvolve use lfilter directly

y_deconv = signal.lfilter(window, [1.], ys[::-1])[::-1]

and

center lfiltered window:

ax.plot(xs[window.size//2-1:-window.size//2], y_deconv[:-window.size+1], 'k-',

If your signal is periodic, then I would go for the fft versions of
convolution, and iir filtering.
My initial guesses were that there is either something wrong (hidden
assumption) about the starting values of signal convolve, or there are
problems because of the non-stationarity.
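
For the periodic case, the fft route is just division in the frequency
domain. A minimal sketch (no noise handling at all, and it assumes the
convolution wraps around, i.e. a truly periodic signal):

import numpy as np

def fft_deconvolve(y_meas, window):
    # circular deconvolution: divide spectra, kernel zero-padded to length
    wf = np.fft.fft(window, n=y_meas.size)
    yf = np.fft.fft(y_meas)
    return np.real(np.fft.ifft(yf / wf))

With any noise present this blows up wherever the kernel's spectrum is
near zero, which is exactly the problem discussed below.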

But maybe a signal expert knows better.

Josef


Joe Kington

Jul 31, 2011, 3:21:09 PM
to SciPy Users List
I'm not a signal processing expert by any means, but this is a standard problem in seismology.

The problem is that your "window" has near-zero amplitude at high frequencies, so you're blowing up the high-frequency content of the noisy signal when you divide in the frequency domain.

A "water-level" deconvolution is a very simple way around this, and often works well.  It also allows you to skip the spline fitting, as it's effectively doing a low-pass filter.  

Basically, you just:
1) convert to the frequency domain
2) replace any amplitudes below some threshold with that threshold in the signal you're dividing by (the window, in your case)
3) pad the lengths to match
4) divide (the actual deconvolution)
5) convert back to the time domain

As a simple implementation (I've left out the various modes of padding here; this is effectively just mode='same'):

def water_level_decon(ys, window, eps=0.1):
    yfreq = np.fft.fft(ys)
    max_amp = yfreq.max()

    winfreq = np.fft.fft(window)
    winfreq[winfreq < eps] = eps

    padded = eps * np.ones_like(yfreq)
    padded[:winfreq.size] = winfreq

    newfreq = yfreq / padded
    newfreq *= max_amp / newfreq.max()

    return np.fft.ifft(newfreq)


Hope that helps a bit.
-Joe


In most cases, you'll need to adjust the eps parameter to match the level of noise you want to remove. In your particular case

Ralf Gommers

Jul 31, 2011, 3:41:51 PM
to SciPy Users List
It doesn't have artifacts, but the result is almost identical to the spline, not to the real signal. Not sure what's going wrong there, but it didn't perform a deconvolution.

instead of deconvolve use lfilter directly

y_deconv = signal.lfilter(window, [1.],  ys[::-1])[::-1]

and

center lfiltered window:

ax.plot(xs[window.size//2-1:-window.size//2], y_deconv[:-window.size+1], 'k-',

If your signal is periodic, then I would go for the fft versions of
convolution, and iir filtering.

The signal is not periodic.
 
Ralf

Joe Kington

Jul 31, 2011, 3:22:53 PM
to SciPy Users List
Gah, I hit send too soon!

The default eps parameter in that function should be more like 1.0e-6 instead of 0.1.

You'll generally need to adjust the eps parameter to match the signal-to-noise ratio of the two signals you're deconvolving. 

Hope it's useful, at any rate.
-Joe

josef...@gmail.com

Jul 31, 2011, 5:22:24 PM
to SciPy Users List
On Sun, Jul 31, 2011 at 3:41 PM, Ralf Gommers

I mixed up the numerator and denominator for lfilter.

>
>> instead of deconvolve use lfilter directly
>>
>> y_deconv = signal.lfilter(window, [1.],  ys[::-1])[::-1]
>>
>> and
>>
>> center lfiltered window:
>>
>> ax.plot(xs[window.size//2-1:-window.size//2], y_deconv[:-window.size+1],
>> 'k-',
>>
>> If your signal is periodic, then I would go for the fft versions of
>> convolution, and iir filtering.
>
> The signal is not periodic.
>
> Ralf
>
>> My initial guesses were that there is either something wrong (hidden
>> assumption) about the starting values of signal convolve, or there are
>> problems because of the non-stationarity.

In terms of IIR filters, the way I understand it from the ARMA analogy,
your window is not invertible:

>>> r = np.roots(window)
>>> r*r.conj()
array([ 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j, 1.+0.j,
1.+0.j, 1.+0.j, 1.+0.j])

I'm not sure whether this can work, but in IIR filters as I know them,
the coefficient for the last observation is normalized to 1. Since all
the roots lie on the unit circle, the window cannot be inverted.
I think lfilter uses the same assumptions. lfilter also has a funny way
of determining initial conditions (zi) which I'm never quite sure how
to use. It doesn't matter much with invertible and stationary
processes, but in this case, I guess it does.

If I change the window to an invertible window

window_len = 11
window = np.ones(window_len) / window_len

window[0] = 1.

then my current version, which should be close to your original
version, works: the deconvolved series looks similar to the original
series.

In either case, I think Joe's answer is more useful, since you can
directly manipulate the frequency response. I have only used a simple
IIR frequency-domain filter with an adaptation of fftconvolve.

Josef

Ralf Gommers

Jul 31, 2011, 6:05:24 PM
to SciPy Users List
On Sun, Jul 31, 2011 at 9:22 PM, Joe Kington <jkin...@wisc.edu> wrote:
Gah, I hit send too soon!

The default eps parameter in that function should be more like 1.0e-6 instead of 0.1.

You'll generally need to adjust the eps parameter to match the signal-to-noise ratio of the two signals you're deconvolving. 

Thanks Joe. I had tried something similar, but now I have a name for the method and a confirmation that doing something like that makes sense. I'll play with this idea some more tomorrow.

Cheers,
Ralf

 

Lou Pecora

Jul 31, 2011, 6:09:33 PM
to SciPy Users List
I'm surprised that no one has brought it up yet, but deconvolution has to be done *very carefully*. Look up "ill-posed problems". Yes, it can be a filter process, but just dividing by the filter FFT coefficients is dangerous, since they (usually) approach zero as the frequency increases. That's the ill-posed part, and it has to be controlled.

"Regularization" is what people call the controlling process. It's an extra assumption that one invokes on the deconvolution that limits the problems... if it's done right. There are surely lots of books and articles on regularization and deconvolution. Start with Google, Wikipedia, and Google Scholar. It's not hard, but it is not a one-step problem; it's more like an optimization problem.
 
-- Lou Pecora, my views are my own.

Joe Kington

Jul 31, 2011, 6:10:24 PM
to SciPy Users List
On Sun, Jul 31, 2011 at 2:05 PM, Ralf Gommers <ralf.g...@googlemail.com> wrote:


On Sun, Jul 31, 2011 at 9:22 PM, Joe Kington <jkin...@wisc.edu> wrote:
Gah, I hit send too soon!

The default eps parameter in that function should be more like 1.0e-6 instead of 0.1.

You'll generally need to adjust the eps parameter to match the signal-to-noise ratio of the two signals you're deconvolving. 

Thanks Joe. I had tried something similar, but now I have a name for the method and a confirmation that doing something like that makes sense. I'll play with this idea some more tomorrow.

For what it's worth, the implementation I showed there is completely wrong. I wasn't thinking clearly when I wrote that out.  The same concept should still work, though.

The most glaring mistake is that things should be padded before converting to the frequency domain.  More like this:

def water_level_decon(y_meas, window, eps=0.1):
    padded = np.zeros_like(y_meas)
    padded[:window.size] = window

    yfreq = np.fft.fft(y_meas)
    winfreq = np.fft.fft(padded)

    winfreq[np.abs(winfreq) < eps] = eps  # water level: threshold on amplitude
    newfreq = yfreq / winfreq 

    return np.fft.ifft(newfreq)

Hope it's useful, anyway.
-Joe 

David Baddeley

Jul 31, 2011, 8:51:28 PM
to SciPy Users List
Hi Ralf,

I do a reasonable amount of (2- & 3-D) deconvolution of microscopy images, and the method I use depends quite a lot on the exact properties of the signal. You can usually get away with FFT-based convolutions even if your signal is not periodic, as long as your kernel is significantly smaller than the signal extent.

As Joe mentioned, for a noisy signal, convolving with the inverse or performing Fourier-domain division doesn't work, as you end up amplifying high-frequency noise components. You thus need some form of regularisation. The thresholding of Fourier components that Joe suggests does this, but you might also want to explore more sophisticated options, the simplest of which is probably Wiener filtering (http://en.wikipedia.org/wiki/Wiener_deconvolution).

If you've got a signal which is constrained to be positive, it's often useful to introduce a positivity constraint on the deconvolution result, which generally means you need an iterative algorithm. The choice of algorithm should also depend on the type of noise present in your signal - my image data is constrained to be +ve and typically has either Poisson or a mixture of Poisson and Gaussian noise, and I use either the Richardson-Lucy algorithm or a weighted version of ICTM (Iterative Constrained Tikhonov-Miller). I can provide more details of these if required.
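
For reference, a bare-bones 1-D Richardson-Lucy iteration looks
something like the sketch below, assuming nonnegative data and a
nonnegative kernel, with the iteration count n_iter left as a knob to
tune:

import numpy as np

def richardson_lucy_1d(y_meas, window, n_iter=50):
    # classic multiplicative update for Poisson noise; keeps estimate positive
    window = window / window.sum()
    window_flipped = window[::-1]           # adjoint of the blur
    estimate = np.full(y_meas.shape, y_meas.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, window, mode='same')
        ratio = y_meas / np.maximum(blurred, 1e-12)   # guard against zeros
        estimate = estimate * np.convolve(ratio, window_flipped, mode='same')
    return estimate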

cheers,
David





From: Ralf Gommers <ralf.g...@googlemail.com>
To: SciPy Users List <scipy...@scipy.org>
Sent: Mon, 1 August, 2011 5:56:49 AM
Subject: [SciPy-User] deconvolution of 1-D signals

Anne Archibald

Aug 1, 2011, 1:20:16 AM
to David Baddeley, SciPy Users List
I realize this discussion has gone rather far afield from efficient 1D
deconvolution, but we do a funny thing in radio interferometry, and
I'm curious whether this is normal for other kinds of deconvolution as
well.

In radio interferometry we obtain our images convolved with the
so-called "dirty beam", a convolution kernel that has a nice narrow
peak but usually a chaos of monstrous sidelobes often only marginally
smaller than the main lobe. We use a different regularization
condition to do our deconvolution: we treat the underlying image as a
modest collection of point sources. (One can see why this appeals to
astronomers.) Through an iterative process (the "CLEAN" algorithm and
its many descendants) we obtain an estimate of this underlying image.
But we very rarely actually work with this image directly. We normally
convolve it with a sort of idealized version of our kernel without all
the sidelobes. This then gives an image one might have obtained from a
normal telescope the size of the interferometer array. (Apart from all
the CLEAN artifacts.)

What I'm wondering is, is this final step of convolving with an
idealized version of the kernel standard practice elsewhere?

From one point of view it could just be parochiality, astronomers
being so accustomed to smudgy images that we have to convert anything
else to this format. But I think that at the least it softens the
effect of the rather strict regularization assumption behind CLEAN -
which amounts to "no extended sources". It probably makes us less
sensitive to shortcuts in CLEAN implementations. I think, though, that
this trick may be useful for many applications of deconvolution.
Rather than try to translate the image from the observed kernel to
some ideal Dirac-delta kernel, this tries to convert it from the
observed kernel to a similar but simpler kernel; one would expect the
impact of a deconvolution artifact to be related to the magnitude of
the difference between kernels.

In terms of 1D Fourier deconvolution, this is saying, after
deconvolution, that we don't really need all those high frequencies
amplified so much anyway, and smoothing them back down with a nice
clean easy-to-understand kernel. In these terms, in fact, it makes
perfect sense to use a wider kernel than necessary for this smoothing
if one is interested in larger-scale features.
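
In code, that restoring step is just a reconvolution with an idealized
kernel. A rough 1-D sketch (the Gaussian shape and its width are purely
illustrative choices):

import numpy as np

def restore_with_clean_kernel(deconvolved, width=3.0):
    # smooth the deconvolved signal with a clean, sidelobe-free kernel
    t = np.arange(-4 * int(width), 4 * int(width) + 1)
    clean = np.exp(-0.5 * (t / width) ** 2)
    clean /= clean.sum()
    return np.convolve(deconvolved, clean, mode='same')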

Anne

Ralf Gommers

Aug 1, 2011, 2:03:10 AM
to David Baddeley, SciPy Users List
On Mon, Aug 1, 2011 at 2:51 AM, David Baddeley <david_b...@yahoo.com.au> wrote:
Hi Ralf,

I do a reasonable amount of (2- & 3-D) deconvolution of microscopy images, and the method I use depends quite a lot on the exact properties of the signal. You can usually get away with FFT-based convolutions even if your signal is not periodic, as long as your kernel is significantly smaller than the signal extent.

The kernel is typically about 5 to 15 times smaller than the signal extent, so I guess that may be problematic.

As Joe mentioned, for a noisy signal, convolving with the inverse or performing Fourier-domain division doesn't work, as you end up amplifying high-frequency noise components. You thus need some form of regularisation. The thresholding of Fourier components that Joe suggests does this, but you might also want to explore more sophisticated options, the simplest of which is probably Wiener filtering (http://en.wikipedia.org/wiki/Wiener_deconvolution).

I'm aware of the problems with high frequency noise. This is why I tried the spline fitting - I figured that on a spline the deconvolution would be okay because the spline is very smooth. This should be fine for my data because the noise is much higher-frequency than the underlying signal, and the SNR is high to start with. But maybe there are better ways. I looked for a Python implementation of Wiener deconvolution but couldn't find one so quickly. Is there a package out there that has it?

If you've got a signal which is constrained to be positive, it's often useful to introduce a positivity constraint on the deconvolution result, which generally means you need an iterative algorithm. The choice of algorithm should also depend on the type of noise present in your signal - my image data is constrained to be +ve and typically has either Poisson or a mixture of Poisson and Gaussian noise, and I use either the Richardson-Lucy algorithm or a weighted version of ICTM (Iterative Constrained Tikhonov-Miller). I can provide more details of these if required.

By constrained to be positive I'm guessing you mean monotonic? Otherwise I could just add a constant offset, but that's probably not what you mean. What's typically the speed penalty for an iterative method?

Ralf
 
 

David Baddeley

Aug 1, 2011, 3:03:18 AM
to Ralf Gommers, scipy...@scipy.org
Hi Ralf,

5-15 times smaller would probably be fine, although you might want to watch the edges in the reconstruction - if they're at different dc levels you'll get edge artifacts (within ~ 1 kernel width of the edges). I'd tend to avoid spline filtering (or any form of noise reduction) before deconvolution, as this will also transform the data in a way not explained by the model you're using to deconvolve with. 

Wiener filtering is a two-liner (using lam for the regularisation weight, since lambda is a reserved word in Python) ->

H = fft(kernel)
deconvolved = ifftshift(ifft(fft(signal)*np.conj(H)/(H*np.conj(H) + lam**2)))

where lam is your regularisation parameter, and white noise is assumed. There are various methods for choosing lam optimally, but most people tend to use trial and error.
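
Spelled out as a self-contained function - a sketch assuming white
noise, with the kernel zero-padded to the signal length:

import numpy as np
from numpy.fft import fft, ifft, ifftshift

def wiener_deconvolve(signal, kernel, lam=0.1):
    padded = np.zeros(signal.size)
    padded[:kernel.size] = kernel
    H = fft(padded)
    # Wiener filter: conj(H) / (|H|^2 + lam^2), applied in the frequency domain
    deconvolved = ifft(fft(signal) * np.conj(H) / (H * np.conj(H) + lam ** 2))
    return np.real(ifftshift(deconvolved))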

Iterative methods are typically ~1-2 orders of magnitude slower than a Wiener filter, but with fast FFT libraries and modern computers still quite reasonable for modest data sizes (a 3D image stack of ~512x512x50 pixels will tend to be done in a bit under a minute; I can't really comment on 1D data, but unless your signal is very long I'd expect it to be significantly quicker). FFTs scale as O(n log n), so they will generally dramatically outperform approaches based on simple convolution or filtering (O(n^2)) for large n. This might make an iterative approach using FFTs faster than something like scipy.signal.deconvolve if your kernel is large.

cheers,
David


From: Ralf Gommers <ralf.g...@googlemail.com>
To: David Baddeley <david_b...@yahoo.com.au>; SciPy Users List <scipy...@scipy.org>
Sent: Mon, 1 August, 2011 6:03:10 PM
Subject: Re: [SciPy-User] deconvolution of 1-D signals

Charles R Harris

Aug 1, 2011, 10:14:13 AM
to SciPy Users List
On Sun, Jul 31, 2011 at 11:20 PM, Anne Archibald <aarc...@physics.mcgill.ca> wrote:
I realize this discussion has gone rather far afield from efficient 1D
deconvolution, but we do a funny thing in radio interferometry, and
I'm curious whether this is normal for other kinds of deconvolution as
well.

In radio interferometry we obtain our images convolved with the
so-called "dirty beam", a convolution kernel that has a nice narrow
peak but usually a chaos of monstrous sidelobes often only marginally
smaller than the main lobe. We use a different regularization
condition to do our deconvolution: we treat the underlying image as a
modest collection of point sources. (One can see why this appeals to
astronomers.) Through an iterative process (the "CLEAN" algorithm and
its many descendants) we obtain an estimate of this underlying image.
But we very rarely actually work with this image directly. We normally
convolve it with a sort of idealized version of our kernel without all
the sidelobes. This then gives an image one might have obtained from a
normal telescope the size of the interferometer array. (Apart from all
the CLEAN artifacts.)

What I'm wondering is, is this final step of convolving with an
idealized version of the kernel standard practice elsewhere?


That's interesting. It sounds like fitting a parametric model, which yields points, followed by a smoothing that in some sense represents the error. Are there frequency aliasing problems associated with the deconvolution?

<snip>

Chuck

Charles R Harris

Aug 1, 2011, 10:19:34 AM
to David Baddeley, SciPy Users List
On Mon, Aug 1, 2011 at 1:03 AM, David Baddeley <david_b...@yahoo.com.au> wrote:
Hi Ralf,

5-15 times smaller would probably be fine, although you might want to watch the edges in the reconstruction - if they're at different dc levels you'll get edge artifacts (within ~ 1 kernel width of the edges). I'd tend to avoid spline filtering (or any form of noise reduction) before deconvolution, as this will also transform the data in a way not explained by the model you're using to deconvolve with. 

Wiener filtering is a two-liner (using lam for the regularisation weight, since lambda is a reserved word in Python) ->

H = fft(kernel)
deconvolved = ifftshift(ifft(fft(signal)*np.conj(H)/(H*np.conj(H) + lam**2)))

where lam is your regularisation parameter, and white noise is assumed. There are various methods for choosing lam optimally, but most people tend to use trial and error.

Iterative methods are typically ~1-2 orders of magnitude slower than a Wiener filter, but with fast FFT libraries and modern computers still quite reasonable for modest data sizes (a 3D image stack of ~512x512x50 pixels will tend to be done in a bit under a minute; I can't really comment on 1D data, but unless your signal is very long I'd expect it to be significantly quicker). FFTs scale as O(n log n), so they will generally dramatically outperform approaches based on simple convolution or filtering (O(n^2)) for large n. This might make an iterative approach using FFTs faster than something like scipy.signal.deconvolve if your kernel is large.


The main problem with Wiener filtering is that it assumes that both the signal and noise are Gaussian. For instance, if you are looking for spikes in noise, the amplitudes of the spikes would have a Gaussian distribution. The Wiener filter is then the Bayesian estimate that follows from those assumptions, but those might not be the best assumptions for the data.

<snip>

Chuck

Friedrich Romstedt

Aug 1, 2011, 4:22:18 PM
to SciPy Users List
Hi Ralf,

2011/7/31 Ralf Gommers <ralf.g...@googlemail.com>:


> For a measured signal that is the convolution of a real signal with a
> response function, plus measurement noise on top, I want to recover the real
> signal. Since I know what the response function is and the noise is
> high-frequency compared to the real signal, a straightforward approach is to
> smooth the measured signal (or fit a spline to it), then remove the response
> function by deconvolution. See example code below.

I ran across this (see below) a while ago since I've been dealing with
information theory recently. It has a 1-D deconvolution example
included, and it compares some different general methods in a kind of
"unified framework", as far as such a thing exists. I found it quite
informative and helpful. If you can't get access I can get it from
the library in 2 weeks. The citation is:

Robert L. Fry (ed.), Bayesian Inference and Maximum Entropy Methods in
Science and Engineering: 21st International Workshop, Baltimore,
Maryland, AIP Conf. Proc. 617 (2002)
ISBN 0-7354-0063-6; ISSN 0094-243X
Tutorial "Bayesian Inference for Inverse Problems" (A.
Mohammad-Djafari) on page 477ff.

It includes different noise models, afair, or at least the structure
for how to deal with them. If I'm not mistaken the problem discussed
there was a mass-spectrometry spectrum, so it should have been mainly
shot noise, plus of course the psf.

The tutorial covers (in short) maximum entropy as well as maximum
likelihood, and a combination of both (hence the "unification"). I
cannot help much with this since I'm new to it myself. But I did a
reasonable literature search, and this was one of the best outcomes.
As said, though, my angle was information theory.

Hope this is a useful pointer,
Friedrich

> Can anyone point me towards code that does the deconvolution efficiently?
> Perhaps signal.deconvolve would do the trick, but I can't seem to make it
> work (except for directly on the output of np.convolve(y, window,
> mode='valid')).

No. In fact, I don't think there is an automagical solution anywhere. :-)

Good luck!

Anne Archibald

Aug 1, 2011, 5:07:48 PM
to SciPy Users List

It's very like fitting a parametric model, yes, except that we don't
care much about the model parameters. In fact we often end up with
models that have clusters of "point sources" with positive and
negative emissions trying to match up with what is in reality a single
point source. This can be due to inadequacies of the dirty beam model
(though usually we have a decent estimate) or simply noise. In any
case smoothing with an idealized main lobe makes us much less
sensitive to this kind of junk. Plus if you're going to do this
anyway, it can make life much easier to constrain your point sources
to a grid.

(As an aside, this trick - of fitting a parametric model but then
extracting "observational" parameters for comparison to reduce
model-sensitivity - came up with some X-ray spectral data I was
looking at: you need to use a model to pull out the instrumental
effects, but if you report (say) the model luminosity in a band your
instrument can detect, then it doesn't much matter whether your model
thinks the photons are thermal or power-law. In principle you can even
do this trick with published model parameters, but you run into the
problem that people don't give full covariance matrices for the fitted
parameters so you get spurious uncertainties.)

As far as frequency aliasing goes, there's not much coming from the
deconvolution, since our beam is so irregular. The actual observation
samples image spatial frequencies rather badly; it's the price we pay
for not having a filled aperture. So we're often simply missing
information on spatial frequencies, most often the lowest ones
(because there's a limit on how close you can put tracking dishes
together without shadowing). But I don't think this is a deconvolution
issue; in fact in situations where people are really pushing the
limits of interferometry, like the millimeter-wave interferometric
observations of the black hole at the center of our galaxy, you often
give up on producing an image at all and fit (say) an emission model
including the event horizon to the observed spatial frequencies
directly.

Anne

Charles R Harris

Aug 1, 2011, 7:48:06 PM
to SciPy Users List

Thanks Anne, it's a good trick to know about.

Chuck
