How to best use Hugin/enfuse for super-resolution imaging


Jeremy Henderson

Mar 3, 2015, 2:34:40 PM
to hugi...@googlegroups.com
Hi,

I am trying to figure out how to best use Hugin and enfuse for creating super-resolution images. I'd appreciate any help as the results aren't quite satisfactory yet.

Here is my current procedure: 
1. Take several photos of a scene; handheld (in burst mode) is great as it provides the image-to-image variability required for the task
2. Use align_image_stack to align all images. Since all three translations and rotations can be affected between images, I am using all the options for aligning (-m -d -i -x -y -z).
3. Use enfuse, and here I have no clear idea for how to get the best results.
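
To make the procedure concrete, here is a minimal sketch of the three steps as shell commands (filenames are hypothetical; the align_image_stack options are the ones listed above, and -a sets the output prefix):

```shell
# 1. Upsample the burst frames to 200% in place (ImageMagick)
mogrify -resize 200% burst_*.jpg

# 2. Align the upsampled frames; -m -d -i -x -y -z optimise FoV,
#    distortion, centre shift and x/y/z translation as described above
align_image_stack -m -d -i -x -y -z -a aligned_ burst_*.jpg

# 3. Fuse the aligned frames (the step still in question)
enfuse -o result.tif aligned_*.tif
```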

Any help greatly appreciated.

Thanks!

Jim Watters

Mar 3, 2015, 4:26:45 PM
to hugi...@googlegroups.com
I tried to do something like this many years ago with a full moon, but never got the results I was after.

You are expecting an image with higher detail, meaning higher resolution, so I would output at a higher resolution, upsampling each image.
I don't think enfuse is the right choice. You could use the contrast weighting to pick the sharpest portion of each image, but I imagine every image has the same sharpness, and if one is soft it is soft all over.

I tried to use a median filter with Photoshop to take the most popular color at each pixel.
Other options are Image Stack by TawbaWare
http://www.tawbaware.com/imgstack.htm
ImageMagick might be able to do something similar.

Jim
-- 
Jim Watters
http://photocreations.ca

Jeremy Henderson

Mar 3, 2015, 5:27:01 PM
to hugi...@googlegroups.com
Thanks for the quick reply.

Sorry, I forgot one step. Indeed, I do upsample with ImageMagick (mogrify -resize 200%) before aligning the images.

I was hoping there was an enfuse mode where aligned images are smartly averaged. Blending layers in Photoshop works, but it's a tedious process. I was hoping to be able to automate the whole thing.

I'll keep on experimenting.

Terry Duell

Mar 3, 2015, 8:53:07 PM
to hugi...@googlegroups.com
Hello Jeremy,
On Wed, 04 Mar 2015 04:47:41 +1100, Jeremy Henderson <jeh...@gmail.com>
wrote:

> Hi,
>
> I am trying to figure out how to best use Hugin and enfuse for creating
> super-resolution images. I'd appreciate any help as the results aren't
> quite satisfactory yet.

>
> Any help greatly appreciated.

I don't think enfuse can help with this.

There is some GPL Matlab software here
<http://lcav.epfl.ch/software/superresolution> which may help.
It is a Matlab GUI tool, but the functions that actually do the real work
may run in Octave...I have not tried it.
There is also some Matlab software on the Matlab file exchange (it may be
the same?) which provides some links to some papers on the subject, if
that is any help.

Cheers,
--
Regards,
Terry Duell

Marius Loots

Mar 4, 2015, 12:58:40 AM
to hugi...@googlegroups.com
Hello All,

Wednesday, March 4, 2015, 3:53:03 AM, you wrote:
>> I am trying to figure out how to best use Hugin and enfuse for creating
>> super-resolution images. I'd appreciate any help as the results aren't
>> quite satisfactory yet.

>>
>> Any help greatly appreciated.

Terry> I don't think enfuse can help with this.

I suspect the reference to super-resolution imaging refers to this:
http://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-with-photoshop/

If yes, then hugin can partially help.

But I agree with Terry that enfuse might not be able to help. You
would use Hugin to re-align the images. You can then use any other
photosoftware to complete the process from step 4 onwards (4. Average
the Layers).

The resize step ("Resize image to 4x resolution (200% width/height)")
may have to be done before or after alignment; I don't have a specific
answer for that.

I took a set of photos just last week after reading the article,
with the aim of using Hugin as well, but have not yet had time to
process the images. I do think there is potential for this, and it
would be a good addition to Hugin's arsenal of applications if
combined with GIMP. Also, from what I have read, I have a feeling that
GIMP on its own would not be a good candidate to do alignments with.


Greetings
Marius
mailto:mlo...@medic.up.ac.za
--
add some chaos to your life and put the world in order
http://www.mapungubwe.co.za/
http://www.chaos.co.za/
skype: marius_loots


David W. Jones

Mar 4, 2015, 1:03:27 AM
to hugi...@googlegroups.com
Sorry, just curious, but what is a "super-resolution" image?
--
David W. Jones
gnome...@gmail.com
wandering the landscape of god
http://dancingtreefrog.com

Marius Loots

Mar 4, 2015, 1:18:03 AM
to hugi...@googlegroups.com
Hello David,

Wednesday, March 4, 2015, 8:03:18 AM, you wrote:
David> Sorry, just curious, but what is a "super-resolution" image?
I assume it refers to this tutorial:
http://petapixel.com/2015/02/21/a-practical-guide-to-creating-superresolution-photos-with-photoshop/
This is similar to the process used by the recently released Olympus
OM-D E-M5 Mark II. The sensor shifts one pixel distance between images
and then combines the results from a 16 MP sensor into a 40 MP photograph.

The method described uses the slight shift of handheld images as
substitute for the controlled sensor shift.

bugbear

Mar 4, 2015, 4:16:30 AM
to hugi...@googlegroups.com
Jeremy Henderson wrote:
> Hi,
>
> I am trying to figure out how to best use Hugin and enfuse for creating super-resolution images. I'd appreciate any help as the results aren't quite satisfactory yet.

I looked into super-res a while back. Nothing came of it, but here's an edited
dump of my bookmark file from that time.

http://en.wikipedia.org/wiki/Superresolution
http://www.photoacute.com/studio/index.html
http://www.astrosurf.com/cidadao/super.htm

Look! You can get as much as 16 Meg Pix!!

http://www.ephotozine.com/article/imacon-launch-16-million-pixel-flexframe-4040-digital-camera-back-481

http://www.cs.technion.ac.il/cis/Project/Projects_done/superResolution/SuperResolution.htm

BugBear

Carl von Einem

Mar 4, 2015, 4:49:30 AM
to hugi...@googlegroups.com
I think microscanning was introduced with the Kontron ProgRes 3012. That
was about 1997 or even earlier, the company was owned by BMW then.

Nice camera system but it needed to be directly connected to a special
PCI-card. There were no NuBus cards available from Kontron so we needed
one of those newer Macs with a PCI slot. I think at the time I used one
of these UMAX Macintosh clones.

See also http://projects.oucs.ox.ac.uk/jtap/reports/digit/ for some
information about that camera.

Carl

bugbear wrote on 04.03.15 10:16:

bugbear

Mar 4, 2015, 5:01:13 AM
to hugi...@googlegroups.com
Carl von Einem wrote:
> I think microscanning was introduced with the Kontron ProgRes 3012. That was about 1997 or even earlier, the company was owned by BMW then.
>
> Nice camera system but it needed to be directly connected to a special PCI-card. There were no NuBus cards available from Kontron so we needed one of those newer Macs with a PCI slot. I think at the time I used one of these UMAX Macintosh clones.
>
> See also http://projects.oucs.ox.ac.uk/jtap/reports/digit/ for some information about that camera.

Doesn't that camera tessellate (as opposed to the super-resolution that the thread is about)?

BugBear

Carl von Einem

Mar 4, 2015, 5:18:10 AM
to hugi...@googlegroups.com
bugbear wrote on 04.03.15 11:01:
I compared it to the FlexFrame 4040 you mentioned. And I thought both
systems claimed to use some sort of a microscanning technique. No,
scanning is not the right word: it was a grabber.

Right now I can't see the fundamental difference between such an in
camera system and a handheld technique that uses natural shaking. And
that's purely based on what I read about that "super resolution"
technique in the petapixel blog post mentioned earlier.

If the main purpose of "super resolution" were to step beyond the
diffraction limit, this might be helpful for gigapixel photography, right?

Carl

bugbear

Mar 4, 2015, 6:21:18 AM
to hugi...@googlegroups.com
Carl von Einem wrote:
> bugbear wrote on 04.03.15 11:01:
>> Carl von Einem wrote:
>>> I think microscanning was introduced with the Kontron ProgRes 3012.
>>> That was about 1997 or even earlier, the company was owned by BMW then.
>>>
>>> Nice camera system but it needed to be directly connected to a special
>>> PCI-card. There were no NuBus cards available from Kontron so we
>>> needed one of those newer Macs with a PCI slot. I think at the time I
>>> used one of these UMAX Macintosh clones.
>>>
>>> See also http://projects.oucs.ox.ac.uk/jtap/reports/digit/ for some
>>> information about that camera.
>>
>> Doesn't that camera tessellate (as opposed to the super-resolution that
>> the thread is about)?
>>
>> BugBear
>>
>
> I compared it to the FlexFrame 4040 you mentioned. And I thought both systems claimed to use some sort of a microscanning technique. No, scanning is not the right word: it was a grabber.
>
> Right now I can't see the fundamental difference between such an in camera system and a handheld technique that uses natural shaking. And that's purely based on what I read about that "super resolution" technique in the petapixel blog post mentioned earlier.
>

I don't think the super-res technique of this thread can make a 3000x2300 from a 500x300 CCD.

But tessellation can.

BugBear

Rogier Wolff

Mar 4, 2015, 11:52:44 AM
to hugi...@googlegroups.com
The basic idea is that if you have say a 1000x1000 sensor, and take
pictures HALF a pixel off, by taking 4 pictures you can make a
2000x2000 image.

The "move the sensor" image stabilization cameras have the hardware to
move the sensor half a pixel in a controlled way.

Others may need to take "enough" pictures so that you "probably" have a
picture taken with the right offset.

Then, the next problem arises: If your lens is perfect, each pixel
takes the average of all the image projected on its "square". So if
you take the four images at half pixel intervals, you'd still have a
2x2 blur in the resulting 2000x2000 image. A sharpening algorithm is
required.
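
The half-pixel idea can be sketched with a toy 1-D example (all numbers made up): two coarse samplings of the same signal, offset by half a coarse pixel, interleave into a sequence with twice the sample density. As noted, each value is still an average over a full coarse pixel, which is why a sharpening step is needed afterwards.

```shell
# two exposures of the same 1-D scene, the second shifted half a coarse pixel
shot_a=(10 30 50)   # coarse samples at positions 0, 1, 2
shot_b=(20 40 60)   # same scene, sampled half a coarse pixel later
fine=()
for i in 0 1 2; do
  fine+=("${shot_a[$i]}" "${shot_b[$i]}")   # interleave the two grids
done
echo "${fine[@]}"   # prints: 10 20 30 40 50 60 (double the sampling density)
```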

There is a further complication: modern cameras don't have full RGB
sensors for each pixel, but either a red, green, OR blue filter.

Then... when, say, light hits each green sensor pixel but misses the
intervening red or blue pixels, the "recover-the-color" algorithm
would say "GREEN!" when someone with a black-and-white shirt is at
exactly the right (or wrong :-) distance. (It's similar to the
wrong-tie-on-the-news phenomenon.) Anyway, to prevent this, they make
the lenses or sensors in such a way that it is impossible to focus
light on exactly one pixel: a deliberate defocus (the anti-aliasing
filter).

So that makes things even more complicated.

Anyway... hugin should be able to position two images over each other
at subpixel accuracy. This is essential if you cannot control the
sensor at subpixel accuracy.

I would give hugin the ORIGINAL images. Then tell the remapper that
your output has a high resolution. This will do proper upscaling of
the pixels.

Enfuse is completely the wrong tool to then combine the results. So
you'll have to find something else to use here. Maybe just averaging
all the images works. Then a sharpening step is necessary.

Roger.

--
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
The plan was simple, like my brother-in-law Phil. But unlike
Phil, this plan just might work.

Jeremy Henderson

Mar 4, 2015, 4:44:59 PM
to hugi...@googlegroups.com
I was under the impression that, in enfuse, setting all weights to equal values would result in arithmetic averaging. At least that's how I read the manual. But perhaps it's wrong.
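
For what it's worth, the closest enfuse reportedly gets to a plain mean is to zero out all four weighting criteria, so no pixel is preferred over another. I have not verified that this yields a true arithmetic average; treat the behaviour as an assumption and check the enfuse manual for your version:

```shell
# all selection criteria disabled -> every input pixel weighted equally
# (hypothetical filenames; verify this behaviour in your enfuse manual)
enfuse --exposure-weight=0 --saturation-weight=0 \
       --contrast-weight=0 --entropy-weight=0 \
       -o average.tif aligned_*.tif
```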

bugbear

Mar 5, 2015, 4:15:42 AM
to hugi...@googlegroups.com
Jeremy Henderson wrote:

>
> I was under the impression that, in enfuse, setting all weights to equal values would result in arithmetic averaging. At least that's how I read the manual. But perhaps it's wrong.

If you just want to average, a simple script using something like
netpbm could do it simply and efficiently.

BugBear

Bruno Postle

Mar 5, 2015, 5:38:47 AM
to hugi...@googlegroups.com
On 5 March 2015 at 09:15, bugbear <bug...@papermule.co.uk> wrote:
>
> If you just want to average, a simple script using something like
> netpbm could do it simply and efficiently.

ImageMagick works well for averaging images, see this post from Pat
David: http://blog.patdavid.net/2012/08/imagemagick-average-blending-files.html
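
As a concrete sketch of the approach in that post (filenames hypothetical; `-evaluate-sequence mean` is the current spelling of ImageMagick's older `-average` operator):

```shell
# average an aligned stack into a single frame
convert aligned_*.tif -evaluate-sequence mean average.tif
```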

--
Bruno

bugbear

Mar 5, 2015, 5:58:49 AM
to hugi...@googlegroups.com
Yes, probably OK for super-resolution work,
which has a small number (say 5-15) of source images.

But in a world of HDR panoramas, even well-equipped
computers *sometimes* need to worry about memory
efficiency. ImageMagick, like enblend and enfuse, holds all the images
in memory, and it uses quite a lot of memory per image.

BugBear

Jeremy Henderson

Mar 5, 2015, 10:49:52 AM
to hugi...@googlegroups.com
Memory is an issue, particularly after having upsampled 16-MP images to 200%. ImageJ is another program that can do different types of averaging of a stack of images, but indeed, memory usage is the limiting factor.

Regarding upsampling: as a first step, I'd be happy to end up with an image with the same pixel number, i.e., no upsampling. The idea is to record images such that an image element that first fell on a red pixel will also fall on a green and a blue pixel in subsequent images. That is, all three colors will be recorded for every image element, as in mimicking a Foveon sensor. That should result in the same image dimensions but significantly higher effective resolution and color accuracy. I believe technologies that shift the sensor by one pixel at a time attempt exactly that.

Shifts by 1/2 of a pixel could then increase the resolution even further, but that is computationally much more expensive than shifting by one pixel. In any case, it seems to me that super-resolution imaging (as well as other multi-shot stacking techniques) has a lot of untapped potential, and I hope software manufacturers will provide tools, given that probably every new camera will at some point acquire the requisite hardware capability (sensor shift).
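
The one-colour-per-pixel point can be illustrated with a toy 1-D mosaic (labels are hypothetical): because the filter pattern is fixed to the sensor, shifting the scene-to-sensor registration by one pixel between exposures records each scene position through a different colour filter. With a real RGGB pattern, four one-pixel shifts cover all three colours, which is what the sensor-shift cameras do.

```shell
scene=(s0 s1 s2 s3)   # toy scene samples
filters=(R G R G)     # 1-D stand-in for a Bayer colour filter array
record=""
for shift in 0 1; do  # two exposures, sensor shifted one pixel between them
  for i in 0 1 2 3; do
    # scene sample (i+shift) lands on sensor pixel i, behind filter i
    record+="${scene[$(( (i + shift) % 4 ))]}:${filters[$i]} "
  done
done
echo "$record"   # scene point s0 now appears behind both an R and a G filter
```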

Terry Duell

Sep 3, 2015, 3:03:23 AM
to hugi...@googlegroups.com
On Fri, 06 Mar 2015 02:49:52 +1100, Jeremy Henderson <jeh...@gmail.com>
wrote:

[snip]

> Shifts by 1/2 of a pixel could then increase the resolution even further,
> but it's computationally much more expensive compared to shifting by one
> pixel. In any case, it seems to me that super-resolution imaging (as well
> as other multi-shot stacking techniques) has a lot of untapped potential,
> and I hope software manufacturers will provide tools now that probably
> every new camera at some point in the future will get the requisite
> hardware capabilities (sensor shift).
>

Further to this discussion, for information of any who are interested, I
have been playing about with super-resolution (SR) imaging on and off for
a while now, not with any Hugin tools but using bespoke software in Octave.
I have used 4 shots, handheld, using continuous shooting mode (nominally 8
shots/sec), which minimises camera movement. I have read that others shoot
from a tripod and give the camera/tripod a slight nudge between shots.
I resample the images to twice the pixel resolution, then align these to a
specified accuracy using a DFT method (see Manuel Guizar-Sicairos, Samuel
T. Thurman, and James R. Fienup, "Efficient subpixel image registration
algorithms," Opt. Lett. 33, 156-158 (2008).), I am currently using 1/2
pixel accuracy. I then average the 4 images.
The results are noticeable, and do show improved fidelity, and also appear
'cleaner', which may be partly due to reduced noise resulting from the
averaging of 4 images.
I have also looked at results from the Pentax pixel-shifting (PS) on the
K-3 II, which also uses 4 images, each shifted by 1 pixel, to gather the
exact R, G and B at each pixel, hence avoiding demosaicing.
My results thus far suggest that the Pentax PS approach gives a better
result than my SR approach.
It's all a work in progress.

bugbear

Sep 3, 2015, 3:54:08 AM
to hugi...@googlegroups.com
Terry Duell wrote:

>
> Further to this discussion, for information of any who are interested, I have been playing about with super-resolution (SR) imaging on and off for a while now, not with any Hugin tools but using bespoke software in Octave.

Presumably there is no benefit to SR, until the limit of camera resolution
and lens magnification is reached.

Below this limit, simple tessellation (which hugin is rather good at) will
give a reliable increase in subject resolution.

bugBear

Terry Duell

Sep 3, 2015, 6:21:14 PM
to hugi...@googlegroups.com
Hello BugBear,

On Thu, 03 Sep 2015 17:54:00 +1000, bugbear <bug...@papermule.co.uk>
wrote:
The real benefit of SR is that only one setup/shot position is required,
versus many for Hugin.
In addition, there are the straight-out-of-camera (Olympus and Hasselblad)
SR solutions, which avoid the post-processing work needed with the Hugin
or other software approaches.
Other than those considerations, I would agree that Hugin could be a good
or better method of achieving any required increase in resolution.

David Haberthür

Sep 15, 2015, 4:16:23 PM
to hugi...@googlegroups.com
Just a side-note:

> On 03 Sep 2015, at 09:03, Terry Duell <tdu...@iinet.net.au> wrote:
>
> I resample the images to twice the pixel resolution, then align these to a specified accuracy using a DFT method (see Manuel Guizar-Sicairos, Samuel T. Thurman, and James R. Fienup, "Efficient subpixel image registration algorithms," Opt. Lett. 33, 156-158 (2008).), I am currently using 1/2 pixel accuracy. I then average the 4 images.

I’ll tell Manuel that his algorithm is used “in the wild”, he works at the same institute as I do: http://www.psi.ch/media/from-inside-an-eggshell

Greetings,
Habi

Terry Duell

Sep 15, 2015, 6:28:43 PM
to hugi...@googlegroups.com
I had a yarn with Manuel some time back, regarding the use of his code and
some minor changes to allow it to work OK in Octave.