Yes, our view is that you're just never going to get enough accuracy
and speed to make a decent user experience on a device without good
optics, so we've punted on this as not a high priority. The current
process doesn't even look at most of the image; a 2D deconvolution
would by itself make it several times slower. The new native code API
makes that less of a problem in theory, but it still seems like a
problem far better solved by a camera.
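To give a rough sense of the cost: a naive 2D convolution (the core of a spatial-domain deconvolution pass) touches every pixel k*k times for a k-by-k kernel, where a plain thresholding scan reads each pixel roughly once. This is an illustrative sketch, not the decoder's actual code:

```python
import numpy as np

def conv2d_naive(img, kernel):
    """Naive 2D convolution: each output pixel costs k*k multiply-adds,
    so a k-by-k kernel makes the pass roughly k*k times the work of a
    one-read-per-pixel scan. Illustrative only."""
    h, w = img.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + k, x:x + k] * kernel)
    return out

# A 5x5 kernel over a VGA frame: ~25x the per-pixel work of one scan.
h, w, k = 480, 640, 5
print(h * w * k * k)  # multiply-adds for the convolution pass
print(h * w)          # reads for a plain thresholding scan
```

(An FFT-based implementation would be cheaper for large kernels, but it is still a whole extra pass over the frame.)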
Deblurring would have to happen before finding the finder patterns, so
you have a chicken-and-egg problem there: without it, it's hard to find
the finder patterns in the first place. Yes, in principle you could
estimate a point spread function by photographing a thin black line or
something, but automatically figuring out the kernel seems very hard.
I tried many variations on this, applying several Gaussian kernels,
large and small, and some based on photographs from the camera, and
could never get a result that was meaningfully better. I could have
been missing something.
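For what it's worth, this kind of experiment can be sketched as a frequency-domain Wiener deconvolution against an assumed Gaussian point spread function. All names and parameters below are illustrative, not the code I actually ran:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2D Gaussian PSF, a stand-in for the unknown blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Wiener filter in the frequency domain: F = G * H* / (|H|^2 + NSR),
    where H is the PSF's transform and NSR is a noise-to-signal guess.
    NSR keeps the division from blowing up noise where |H| is tiny."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

The catch, as above, is that the real PSF is unknown, so any fixed Gaussian (and any fixed NSR) is only a guess, which matches my experience that no single kernel gave a meaningfully better result.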