Re: [hugin-ptx] Hugin: How to build an HDR Panorama from bracketed images?


Carlos Eduardo G. Carvalho (Cartola)

Oct 26, 2012, 12:44:44 PM
to hugi...@googlegroups.com
There are a few different approaches you can use. Hugin itself can do the exposure fusion (it uses enfuse for that), but I usually prefer to combine the different exposures before building the panorama, so I deal with them as if there were no panorama at all.

I usually use enfuse (which comes with hugin) and just let it do its job with no special configuration, then I stitch the resulting images in hugin. Since you are on Linux you could script this: I usually list the directory and redirect the output to a script, which I then edit with vi, but this hint makes no sense if you are not familiar with those tools.

A command-line example to combine 4 exposures:

$ enfuse -o output01.jpg input01.jpg input02.jpg input03.jpg input04.jpg

This creates the file output01.jpg, combining the 4 different exposures. If you didn't shoot the exposures on a tripod and need to realign the images, you can use the command "align_image_stack", which also comes with hugin. Run it with no arguments for a short help text.
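For many stacks, the list-and-edit step can be folded into a small script. This is only a sketch under an assumed naming scheme (every 4 consecutively named JPGs form one bracket); it prints the enfuse commands instead of running them, so you can review them or pipe them to sh:

```shell
#!/bin/sh
# Print one enfuse command per bracket of 4 consecutively named JPGs.
# Dry run by design: review the output, then pipe it to sh to execute.
fuse_brackets() {
    n=1
    while [ "$#" -ge 4 ]; do
        # Build the same command shown above, numbering the outputs.
        echo "enfuse -o $(printf 'output%02d.jpg' "$n") $1 $2 $3 $4"
        shift 4
        n=$((n + 1))
    done
}

fuse_brackets input*.jpg
```

The file list and the group size of 4 are assumptions; adjust both to match how your camera names its brackets.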

Cheers,

Carlos E G Carvalho (Cartola)
http://cartola.org/360
http://www.panoforum.com.br/



2012/10/26 Ramiro Téllez Sanz <urci...@gmail.com>
Hi everyone! I'm new to this group.
I use Hugin 2011.4.0 on Linux.

I've been googling for a Hugin tutorial on how to build a Panorama from a series of bracketed images, to no avail.
In my case I take 5 hand-held bracketed images for every "tile" in the final panorama, and usually I don't take more than 3-4 "tiles" in a single row.

Could anyone teach me how to get a pseudo-HDR panorama from my picture series? I have tried several ways of doing it, but I always end up with a very diffuse image. A step-by-step guide would be really appreciated.

Thanks in advance.

--
You received this message because you are subscribed to the Google Groups "Hugin and other free panoramic software" group.
A list of frequently asked questions is available at: http://wiki.panotools.org/Hugin_FAQ
To post to this group, send email to hugi...@googlegroups.com
To unsubscribe from this group, send email to hugin-ptx+...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/hugin-ptx

TvE

Nov 17, 2012, 12:55:30 PM
to hugi...@googlegroups.com
Ramiro, I'm not an expert, but here is what I've been happy with. I shoot 3 bracketed images per tile and tend to have 10-12 tiles total (360° spherical pano). I load the images into hugin, then create the stacks in the Images tab. For each stack I run align_image_stack to find control points (lower-left pane of the Images tab), using the following arguments: -f %v -e -v -p %o -g 1 -i -c 1 -g 4 %i. I then select the best image of each stack and run cpfind to create control points across the tiles.

In one of the preview windows I make sure the EV is set to that of the brightest image so I don't get clipping. To stitch, I select "exposure fused from any arrangement" and JPEG (or TIFF). This results in a 24-bit image that is automatically HDR-blended. It works well for me for sunset-type shots where the dynamic range needs to be compressed but isn't huge.

tbransco

Nov 17, 2012, 3:53:15 PM
to hugi...@googlegroups.com
Thanks for your helpful post, TvE.

I have a question which I hope does not come across as criticism: I notice there are two uses of the "-g" parameter in your command line, -g 1 and -g 4. Was this intentional?

TvE

Nov 17, 2012, 9:46:47 PM
to hugi...@googlegroups.com
I meant to use -g 4, but you can adjust it. -g 4 reduces the image by 2^4, i.e. 16x, to do the alignment. Perhaps -g 2 would be more accurate and not too slow.

However, something I realized when trying to process a handheld pano is that by default all the images in a stack are linked, so the optimizer doesn't actually align them. You have to select all images in the images tab and then uncheck the "Link" box in the lower-left. This allows the images in a stack to be adjusted relative to one another, which you need unless they have a 100% identical field of view.

Good luck and let me know if this works for you! Hugin has so many options that it's difficult to figure out how to do something efficiently... (Not a complaint, just a statement of fact...)

TvE

Nov 17, 2012, 10:00:42 PM
to hugi...@googlegroups.com
Here's a recap of my workflow. I need to test it some more, but so far it seems to work:
  • shoot ~10 tiles for a spherical pano, total of ~30 images with 3x bracketing (10.5mm, full-frame fisheye, 6 tiles around, 2x-3x up, 2x down)
  • pre-process to increase sharpness and local contrast (wide unsharp-mask)
  • load all images into hugin
  • select each bracket group in the Images tab, make a new stack, run align_image_stack -f %v -e -v -p %o -i -c 1 -g 2 %i
  • select all images in images tab, unlink positions
  • select "best" image in each stack and run cpfind+celeste to get the tiles to align
  • run optimizer in incremental from anchor mode
  • check control points table for worst fit and fiddle with control points, re-run optimizer with "everything without translation", repeat...
  • I often take a nadir shot from a slightly moved position so I don't have shadows in the same spot; to align those images I set them to use a different lens, which offers a few more degrees of freedom, and I make sure control points are only set in the small area I need from those shots
  • mask tripod/feet out of shots in the mask tab, use "exclude region from stack"
  • run exposure optimization using HDR, var white bal, fixed exposure
  • fiddle with view in the fast preview
  • calculate optimal FOV in stitching tab
  • stitch using exposure fused from any arrangement to jpeg Q-90
  • adjust final histogram and colors in regular photo editor
  • phew!
If you have suggestions or corrections, I'd love to hear them. Getting this far has taken many hours of fiddling with hugin. One thing I don't know is whether setting the EV in hugin makes a difference when using exposure fusing; I believe not.

Carlos Eduardo G. Carvalho (Cartola)

Nov 18, 2012, 7:22:24 AM
to hugi...@googlegroups.com
Hi, I am also not an expert, but I take a different approach. I usually shoot 3 to 5 auto-bracketed exposures with the camera on a tripod or a pole. When I use the tripod I don't need to align the images. With the pole, since the rig is sometimes a bit unstable, especially when it is windy, I align the images.

The different approach I meant is that I usually don't do this in hugin. I align and fuse the images before making the pano, and it gives me good results. My align command is really simple:

align_image_stack -a file1- IMG01.JPG IMG02.JPG IMG03.JPG

and it gives me the aligned files file1-0000.tif, file1-0001.tif and file1-0002.tif, which I then fuse with enfuse:

enfuse -o file1.jpg --compression=95 file1-*

I usually make little scripts for the whole job. I write these scripts each time in vi (a Unix text editor), which makes the job relatively easy, so I haven't taken the time to make a generic script yet.
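A generic version of those little scripts might look like this: just a sketch, assuming each bracket is 3 consecutively named JPGs, and printing the commands so they can be reviewed (or piped to sh) before running:

```shell
#!/bin/sh
# Emit the align + fuse command pair from the post, one pair per
# bracket of 3 consecutively named JPGs. Dry run: pipe to sh to execute.
make_stack_jobs() {
    n=1
    while [ "$#" -ge 3 ]; do
        echo "align_image_stack -a file$n- $1 $2 $3"
        echo "enfuse -o file$n.jpg --compression=95 file$n-*"
        shift 3
        n=$((n + 1))
    done
}

make_stack_jobs IMG*.JPG
```

The glob and the bracket size of 3 are assumptions; change them to match your camera's file naming and your bracketing setup.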

Then I use the fused images in hugin to make the panos. I usually use 10 to 20 images, depending on the lens I have used.

Cheers,
2012/11/18 TvE <tvone...@gmail.com>

--

TvE

Nov 18, 2012, 1:56:43 PM
to hugi...@googlegroups.com
Carlos, thanks for posting your steps. Running align_image_stack from the command line sounds like a nice idea. You could also output the result into the .pto file, I believe. I don't know about pre-fusing each stack into a 24-bit JPG; I would be concerned that you could end up with images that have very different dynamic ranges and don't blend well into the final pano, e.g. if one stack is sunset-to-dark and another is all dark. But maybe enfuse deals with that properly, or maybe those scenarios don't fuse well either way... If I have the time I may give the two approaches a spin. Another advantage of not pre-fusing is that align_image_stack sometimes has difficulties, especially when large areas are very dark or blown out in one of the images of the bracket, and it's nice to be able to adjust the control points in hugin. Ah, maybe we can collect some more wisdom and then turn this thread into a wiki page.

Carlos Eduardo G. Carvalho (Cartola)

Nov 19, 2012, 5:35:41 AM
to hugi...@googlegroups.com
Hi,

just to clarify, I don't work with anything beyond 24-bit JPG: I usually shoot in JPG and work with it at 8 bits per channel. But align_image_stack can deal with TIF, and you can work with TIF all the way through. I just put JPG in my example because that's what I use :)

I sometimes have some ghosts in the final result of the fusion, but I doubt a larger image would solve that. I think they happen because of parallax differences: I can clearly see that part of the image is good and another part is not, so I think align_image_stack has done its job. The solution (maybe) would be to distort the images before fusing them, and maybe hugin does that, I don't know. In fact I have never tried to make the exposure stacks directly in hugin. Maybe I should give that a try too :)

And surely the best option of all is to stabilize the camera when shooting, but sometimes that is a little hard to do: http://cartola.org/fotos/_cache/Diversas/Engenhocas/_screen/20111004-mastro.jpg


Cheers,

Carlos E G Carvalho (Cartola)
http://cartola.org/360
http://www.panoforum.com.br/



2012/11/18 TvE <tvone...@gmail.com>

Rogier Wolff

Nov 19, 2012, 6:05:57 AM
to hugi...@googlegroups.com

Guys,

when you say "24-bit jpeg", almost everybody thinks of this as "full
color (RGB), 8 bits per channel".

What is needed for HDR work is either a floating-point format (about
12 bits per channel would be sufficient, but 32 bits per channel are
normally used in hugin) or a high-bit-count fixed-point format.
Something like 16 might get close, but 24 bits per channel may be
more appropriate.

JPEG is not appropriate for non-8-bit-per-channel data. So, Carlos,
in the workflow that you describe, you're truncating a lot of
information when you output your intermediate files to the 8-bit
JPEG format.

So... Carlos, I would suggest that you try saving your intermediate
files as TIFF, and then find out what options you have to create a
32-bit-per-channel floating-point intermediate file. Then hugin can,
while stitching, apply proper exposure corrections and the like. (Or
are you fixing the exposure bracket for the whole panorama? In that
case hugin's exposure correction should be unnecessary.)
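The higher-bit-depth route could be sketched like this, assuming an enfuse build that supports the --depth option (present in enfuse 4.x). The commands are only printed here as a dry run, since the stack name and file list are placeholders:

```shell
#!/bin/sh
# Dry-run sketch: build the align + fuse commands for one stack,
# keeping 16 bits per channel (TIFF intermediates) instead of
# squeezing through 8-bit JPEG. Pipe the output to sh to run it.
stack_commands() {
    prefix=$1; shift
    echo "align_image_stack -a $prefix- $*"
    echo "enfuse --depth=16 -o $prefix.tif $prefix-*.tif"
}

stack_commands stack1 IMG01.JPG IMG02.JPG IMG03.JPG
```

align_image_stack writes TIFF by default, so only the enfuse output depth needs to be raised; whether 16-bit fixed point or a float format fits better depends on the rest of the pipeline.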


Roger.
--
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
The plan was simple, like my brother-in-law Phil. But unlike
Phil, this plan just might work.

Carlos Eduardo G. Carvalho (Cartola)

Nov 19, 2012, 4:46:40 PM
to hugi...@googlegroups.com
Thanks Rogier, I think I need to study those details of image formats some more. The information you shared has clarified things for me a bit. Can you recommend a good general reference on the subject?

And you are right, I fix the exposure bracketing for the whole panorama, and since I use a manual workflow in hugin I simply don't do exposure optimization.

To give more detail on my usual exposure workflow: I use the Magic Lantern firmware on my Canons, so I have more flexible control of auto bracketing. I usually shoot exposures 2EV apart and choose, by experimenting on the scene, whether I need 3, 4 or 5 different exposures. Magic Lantern allows me to take up to 9 auto-bracketed pictures.

I also define the order. I usually set it to shoot the darkest one first, then increase the exposure time, giving progressively brighter pictures. Because of that, I first meter the light on the brightest area of the scene; that setting produces the first, darkest picture. Then I meter the darkest area and take a quick look at the pictures to decide how many brackets are enough. I have never needed more than 5 so far, using 2EV between them. I think using less than 2EV is usually pointless: I have tried, for example, 3 pics with 1EV between them, and it really didn't expand the dynamic range. Maybe I should try something like 9 pics at 1EV apart and compare with 5 pics at 2EV apart.

Anyway, with this I follow the workflow I mentioned before, in which, again, I don't make any exposure correction in hugin, and I think I get good results, which can be seen on my blog.

Here is one example where I used 4 auto-bracketed pics with 3EV between them - a 9EV span from the first to the last: http://wp.me/p1AGa0-hp

Cheers,

Carlos E G Carvalho (Cartola)
http://cartola.org/360
http://www.panoforum.com.br/



2012/11/19 Rogier Wolff <rew-goog...@bitwizard.nl>

TvE

Nov 20, 2012, 1:29:42 AM
to hugi...@googlegroups.com
Roger, I'm not sure things are as drastic as you make them sound when you say "What is needed for HDR work is either a floating point format (about 12 bits per channel would be sufficient, but 32 bits per channel are normally used in hugin) or a high-bit-count fixed-point format." Yes, to get the best possible results you are correct. However, as far as I can tell, when Carlos runs enfuse on a stack, enfuse compresses the dynamic range into the available 24 RGB bits. So if the stack has a large dynamic range, he's losing some color subtlety and some sensor noise, but he's not losing highlights or shadows per se. If he's then putting everything together into one 24-bit RGB image at the end without manual tone mapping, I'm not sure there is much quality to be gained by going through 48-bit or 96-bit intermediate images. Maybe you or someone else has examples where it does make a difference?

Gnome Nomad

Nov 20, 2012, 1:50:24 AM
to hugi...@googlegroups.com
I do mine in 48-bit TIF, which is 16 bits per channel RGB. 48-bit color
is my DSLR's native depth.

Real HDR is done using multiple frames shot at various exposures, so
the darkest exposure gives you no clipping at the bright end, and the
brightest exposure gives you no clipping at the dark end, with enough
intermediate frames to give you good exposures through the midranges.
Enfusing with the full 48-bit color range gives you better results.
When Hugin finishes an image for me, it's a 48-bit color TIFF, not a
24-bit one.

I don't think enfuse reduces the dynamic range to 24-bit color.


--
Gnome Nomad
gnome...@gmail.com
wandering the landscape of god
http://www.clanjones.org/david/
http://dancing-treefrog.deviantart.com/
http://www.cafepress.com/otherend/

Carlos Eduardo G. Carvalho (Cartola)

Nov 20, 2012, 4:22:32 AM
to hugi...@googlegroups.com

2012/11/20 Gnome Nomad <gnome...@gmail.com>

I don't think enfuse reduces the dynamic range to 24-bit color.
 
Well, at least I make enfuse do this, since I have it output a 24-bit RGB JPG.

Surely more information can give more quality, and in some situations I might see better results, mainly if I used it in the intermediate steps. In my experience so far I don't even see much need to shoot raw. I've tested it a few times, and when I'm shooting a hard scene I do shoot raw, but in 90% of cases I'm quite satisfied shooting and working with 24-bit RGB JPGs.

I usually only publish on the web. I have printed some pictures at fine quality and thought it might make more of a visual difference there, but on a computer screen my poor eyes are satisfied so far :)

Gnome Nomad

Nov 21, 2012, 5:25:51 AM
to hugi...@googlegroups.com
On 11/19/2012 11:22 PM, Carlos Eduardo G. Carvalho (Cartola) wrote:
>
> 2012/11/20 Gnome Nomad <gnome...@gmail.com <mailto:gnome...@gmail.com>>
>
> I don't think enfuse reduces the dynamic range to 24-bit color.
>
> Well, at least I make enfuse do this as I make the output from it in a
> 24-bit RGB JPG.

Yes, you can do that. I prefer to end with 16-bit TIF.

> Surely more information is capable of giving more quality and maybe in
> some situations I would see some better results, mainly if I use it in
> the intermediate steps. As far as I've experienced until now I don't
> even see much need to use raw shooting. I've already tested it some
> times and when I am shooting a hard scene I shoot raw, but in 90% of the
> cases I am pretty much satisfied with shooting and working with 24-bit
> RGB JPGs.

On some images now, I finish by pulling the 16-bit TIF into Luminance
HDR and tonemapping the image.

> I usually only publish on the web. I have already printed some pictures
> in fine quality and thought that there it could make more visual
> difference for me, but at the computer screen my poor eyes are satisfied
> till now :)

You can't really see HDR on display screens or prints, because high
dynamic range means a wider dynamic range than display or print
technologies can reproduce. HDR is for all of the processing up to the
final output. I prefer TIF for that too, or PNG, just because even at
100% quality JPG loses color information. But I'm weird about that ...