The basic idea is that if you have, say, a 1000x1000 sensor and take
pictures HALF a pixel off, by taking 4 pictures you can make a
2000x2000 image.
The "move the sensor" image stabilization cameras have the hardware to
move the sensor half a pixel in a controlled way.
Others may need to take "enough" pictures so that you "probably" have a
picture taken with the right offset.
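If the sensor really does move in controlled half-pixel steps, combining
the four frames is just interleaving. A minimal sketch, assuming four
equally sized frames captured at offsets (0,0), (1/2,0), (0,1/2) and
(1/2,1/2); the function name and frame layout are mine, not anything the
camera or hugin provides:

```python
# Interleave four low-res frames, shot at half-pixel offsets, into one
# 2W x 2H grid. Frames are lists of rows of pixel values.
# Sketch only: frame names and offset convention are assumptions.

def interleave4(f00, f10, f01, f11):
    """f00 at (0,0), f10 shifted half a pixel right, f01 half a
    pixel down, f11 shifted in both directions."""
    h, w = len(f00), len(f00[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x]         = f00[y][x]
            out[2 * y][2 * x + 1]     = f10[y][x]
            out[2 * y + 1][2 * x]     = f01[y][x]
            out[2 * y + 1][2 * x + 1] = f11[y][x]
    return out
```

Each output pixel still integrates light over a full input-pixel square,
which is where the blur discussed next comes from.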
Then, the next problem arises: If your lens is perfect, each pixel
takes the average of all the image projected on its "square". So if
you take the four images at half pixel intervals, you'd still have a
2x2 blur in the resulting 2000x2000 image. A sharpening algorithm is
then needed to undo that blur.
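Such a box blur can be partly undone with a simple unsharp mask. A
sketch; the 3x3 box kernel and the "amount" parameter are illustrative
choices on my part, not a recommendation:

```python
# Unsharp masking: sharpened = image + amount * (image - blurred).
# Uses a 3x3 box blur as the "blurred" estimate; edges are handled by
# averaging only the pixels that exist.

def box_blur3(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def unsharp(img, amount=1.0):
    blur = box_blur3(img)
    return [[img[y][x] + amount * (img[y][x] - blur[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

On a hard edge this overshoots on both sides, which is exactly what
makes the result look sharper.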
This is where it gets complicated: modern cameras don't have an RGB
sensor for each pixel, but either a red, a green OR a blue one (the
Bayer pattern).
Then... when, say, the light hits each green sensor pixel but misses
the intervening red and blue pixels, the "recover-the-color" algorithm
would say "GREEN!" when someone with a black-and-white shirt stands at
exactly the right (or wrong :-) distance. (It's similar to the "wrong
tie on the news" moire phenomenon.) Anyway, to prevent this, they make
the lenses or sensors in such a way that it is impossible to focus
light on exactly one pixel: a deliberate defocus (the optical
anti-alias filter).
So that makes things even more complicated.
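To see why a black-and-white pattern can come out green, sample a
checkerboard at exactly the sensor pitch through an RGGB Bayer mosaic.
The layout and the naive per-channel averaging here are deliberate
simplifications, not how a real demosaicer works:

```python
# A black/white checkerboard at exactly the sensor pitch, sampled
# through an RGGB Bayer mosaic. The white squares land only on the
# green sites, so per-channel statistics "see" a bright green scene.

BAYER = [['R', 'G'],
         ['G', 'B']]   # 2x2 RGGB tile, repeated over the sensor

def channel_means(scene):
    sums = {'R': 0.0, 'G': 0.0, 'B': 0.0}
    counts = {'R': 0, 'G': 0, 'B': 0}
    for y, row in enumerate(scene):
        for x, v in enumerate(row):
            c = BAYER[y % 2][x % 2]
            sums[c] += v
            counts[c] += 1
    return {c: sums[c] / counts[c] for c in sums}

# checkerboard: white exactly where the green sites happen to be
scene = [[255 if (x + y) % 2 == 1 else 0 for x in range(8)]
         for y in range(8)]
```

Here a colorless scene yields green at full brightness and zero red
and blue, which is the aliasing the anti-alias filter is there to
prevent.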
Anyway... hugin should be able to position two images over each other
at subpixel accuracy. This is essential if you cannot control the
sensor at subpixel accuracy.
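As a toy illustration of sub-pixel offset estimation (not what hugin
actually does internally): find the best integer shift by
cross-correlation, then refine it by fitting a parabola through the
three neighbouring scores. Shown here in 1-D to keep it short:

```python
# Estimate the shift s such that b[j] ~ a[j + s], to sub-sample
# accuracy. Integer search by cross-correlation, then a parabolic fit
# over the peak and its two neighbours. Purely illustrative.

def subpixel_shift(a, b, max_shift=3):
    def score(s):  # correlation of a against b shifted by s
        return sum(a[i] * b[i - s]
                   for i in range(max(0, s), min(len(a), len(b) + s))
                   if 0 <= i - s < len(b))
    scores = {s: score(s) for s in range(-max_shift, max_shift + 1)}
    best = max(scores, key=scores.get)
    if best in (-max_shift, max_shift):
        return float(best)   # peak at the search edge: no refinement
    ym, y0, yp = scores[best - 1], scores[best], scores[best + 1]
    denom = ym - 2 * y0 + yp
    return best + (0.5 * (ym - yp) / denom if denom else 0.0)
```

Real 2-D alignment also has to cope with rotation, lens distortion and
exposure differences, which is exactly the machinery hugin brings.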
I would give hugin the ORIGINAL images. Then tell the remapper that
your output has a high resolution. This will do proper upscaling of
the images.
Enfuse is completely the wrong tool to then combine the results. So
you'll have to find something else to use here. Maybe just averaging
all the images works. Then a sharpening step is necessary.
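Averaging the remapped frames is the trivial part. A sketch, assuming
same-sized frames stored as lists of rows (not any enfuse or hugin
API):

```python
# Pixel-wise mean of the remapped high-resolution frames, as a
# stand-in for the fusion step. Averaging N frames also cuts random
# sensor noise by roughly sqrt(N).

def average_frames(frames):
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames)
             for x in range(w)] for y in range(h)]
```

The averaged result would then go through the sharpening step
mentioned above.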
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
The plan was simple, like my brother-in-law Phil. But unlike
Phil, this plan just might work.