Patch for bug #2033756: call for testing


Lukáš Jirkovský

Jan 29, 2009, 1:25:46 PM
to hugi...@googlegroups.com
Hello,
I've finally fixed bug #2033756 (hugin_hdrmerge: incorrectly
generated weight mask), but one thing really confuses me: it seems
that I'm the only one who has been bothered by this bug. If it
existed on someone else's system, I'm sure somebody would have
noticed it.

So I'm asking anyone who can test the unpatched (and optionally the
patched) hugin_hdrmerge, especially on Windows and Mac OS X (I only
have Linux installed), to reply with the result they get after
running:
hugin_hdrmerge -s w test_hdr_000*.exr -o test.exr
on the test files[1].

If the resulting *.tiff files created with the current hugin_hdrmerge
(it doesn't need to be the latest version, because hugin_hdrmerge
hasn't changed for a long time) look similar to the results in
http://blender6xx.ic.cz/pub/khan_testfiles/results_bad/ then this bug
really exists on other systems and the patch can be applied to the
trunk. But if the results look like those in
http://blender6xx.ic.cz/pub/khan_testfiles/results_good/ then there is
something rotten in my computer ;-) and I'm the only one experiencing
this bug.

The patched version should give results similar to
http://blender6xx.ic.cz/pub/khan_testfiles/results_good/

The patch is attached.

Thanks in advance,
Lukáš "stativ" Jirkovský

[1] http://blender6xx.ic.cz/pub/khan_testfiles/

hugin_hdrmerge.diff

Pablo d'Angelo

Jan 30, 2009, 5:27:29 PM
to hugi...@googlegroups.com
Hi Lukáš,

thanks a lot for your hard work in tracking down the bugs. Actually, I
think this really is a copy and paste error, and the xxx_gray.pgm image
(infog) should be loaded here, instead of the xxx.exr image (info).

I think the following patch should solve this specific problem correctly (I
haven't tested it):

Index: src/deghosting/khan.cpp
===================================================================
--- src/deghosting/khan.cpp (revision 3602)
+++ src/deghosting/khan.cpp (working copy)
@@ -141,7 +141,7 @@
(ui_mode & UI_EXPORT_INIT_WEIGHTS)) {
//load image
BImagePtr origGray(new BImage(info.size()));
- vigra::importImage(info, destImage(*origGray));
+ vigra::importImage(infog, destImage(*origGray));
FImagePtr weight(new FImage(info.size()));

// calculate initial weights, using mexican hat function

I wonder why vigra::importImage(info, destImage(*origGray)); executes
without throwing an exception, as it tries to load a 4-channel image
into a 1-channel one, as you noticed.
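
For what it's worth, this is roughly the guard I would expect before the import (my own untested sketch, not part of the patch or of khan.cpp; the helper name loadGray is made up):

#include <iostream>
#include <vigra/stdimage.hxx>
#include <vigra/impex.hxx>

// Refuse to copy a multi-band file (e.g. an RGBA .exr) into a
// single-band vigra::BImage instead of silently mangling the data.
void loadGray(const char* fileName)
{
    vigra::ImageImportInfo info(fileName);
    if (info.numBands() != 1) {
        std::cerr << fileName << ": expected 1 band, got "
                  << info.numBands() << std::endl;
        return;
    }
    vigra::BImage gray(info.size());
    vigra::importImage(info, vigra::destImage(gray));
}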

Lukáš Jirkovský wrote:

Harry van der Wolf

Jan 30, 2009, 6:04:57 PM
to hugi...@googlegroups.com
Hi Lukas and Pablo,

I tried an unmodified hugin_hdrmerge from svn3590 on Mac OS X 10.5.5 and did not even get an image, only the following error:

caught exception: Error reading pixel data from image file "test_hdr_0000.exr". Error in Huffman-encoded data (decoded data are shorter than expected).
Abort trap

Something is wrong either with the image or with hugin_hdrmerge.
Your server reports the files as being 7.5MB, but when downloading I only get 680MB without an error.
Also, the server is dead slow, so maybe it is a bad download after all.

Harry



2009/1/29 Lukáš Jirkovský <l.jir...@gmail.com>

Lukáš Jirkovský

Jan 31, 2009, 3:24:34 AM
to hugi...@googlegroups.com
2009/1/30 Pablo d'Angelo <pablo....@web.de>:

>
> Hi Lukáš,
>
> thanks a lot for your hard work in tracking down the bugs. Actually, I
> think this really is a copy and paste error, and the xxx_gray.pgm image
> (infog) should be loaded here, instead of the xxx.exr image (info).

I thought this might be the problem, but my version needed fewer[1]
changes, so I assumed it was intended this way, and it gives pretty
similar results. Now that I think about it, though, your explanation
seems reasonable, because otherwise the luminance images would be
unused. I'll run more tests to see which one proves better. On the
other hand, using only the EXR would remove the need for extra images
to be able to use deghosting.
I'm thinking about making a patch that allows choosing between the
EXR input and the luminance images as the grayscale source. It would
give hugin_hdrmerge more flexibility, because it could also be used
for EXR images not produced by hugin's tools.


[1] your suggested patch also needs this change:
- vigra::ImageImportInfo infog(inputFiles[i].c_str());
+ vigra::ImageImportInfo infog(grayFile.c_str());

>
> I think following patch should solve this specific problem correctly (I
> haven't tested it):
>
> Index: src/deghosting/khan.cpp
> ===================================================================
> --- src/deghosting/khan.cpp (revision 3602)
> +++ src/deghosting/khan.cpp (working copy)
> @@ -141,7 +141,7 @@
> (ui_mode & UI_EXPORT_INIT_WEIGHTS)) {
> //load image
> BImagePtr origGray(new BImage(info.size()));
> - vigra::importImage(info, destImage(*origGray));
> + vigra::importImage(infog, destImage(*origGray));
> FImagePtr weight(new FImage(info.size()));
>
> // calculate initial weights, using mexican hat function
>
> I wonder why vigra::importImage(info, destImage(*origGray)); executed
> without throwing an exception, as it will try to load a 4 channel image
> into a 1 channel one, as you noticed.
>

I think the EXR implementation should notice this, because vigra only
loads a vector of pixel values regardless of what they mean (whether
it's four values per pixel for RGBA or one for grayscale), but maybe
I'm wrong, because I haven't checked my theory against the code.

2009/1/31 Harry van der Wolf <hvd...@gmail.com>:


> Something wrong with the image or with hugin_hdrmerge.
> Your server reports the files as being 7.5MB but when downloading I only get
> 680MB without an error.
> Also: the server is dead slow so maybe it is a bad download after all.

That is possible. It's free hosting, and it seems they throttle the
speed for users outside the Czech Republic. But I don't have anything
better.

Lukáš Jirkovský

Jan 31, 2009, 3:39:53 AM
to hugi...@googlegroups.com
2009/1/31 Lukáš Jirkovský <l.jir...@gmail.com>:

Strange, it seems that using the EXR image instead of the luminance
gives me slightly better results.

Lukáš Jirkovský

Jan 31, 2009, 5:51:23 AM
to hugi...@googlegroups.com
2009/1/31 Lukáš Jirkovský <l.jir...@gmail.com>:

I've made a patch which adds a -w option to hugin_hdrmerge. With it,
it's possible to select either the EXR or the luminance image as the
base for generating weights, with luminance as the default. If nobody
complains, I'll commit it to the trunk.

I've also uploaded test results of using the different bases for
generating weights. It seems that using the EXR image as the base for
the initial weights results in more aggressive deghosting.

weights based on the luminance images:
http://blender6xx.ic.cz/view_bigimg.php5?id_image=25
weights based on the EXR image:
http://blender6xx.ic.cz/view_bigimg.php5?id_image=24

hugin_hdrmerge.diff

Lukáš Jirkovský

Jan 31, 2009, 6:21:00 AM
to hugi...@googlegroups.com

One more thing: in the linked images I've written the command I used
for merging into Popis (the description field, in Czech), but I forgot
to mention that both runs also used the "-a d" parameter.

Yuval Levy

Feb 1, 2009, 2:14:10 AM
to hugi...@googlegroups.com
Hi Lukáš,

Lukáš Jirkovský wrote:
> I've made patch which adds option -w to the hugin_hdrmerge. With it
> it's possible to select to use EXR or luminance as a base image for
> generating weights with luminance set as default. If nobody complains,
> I'll commit it to the trunk.

I've quickly tested hugin_hdrmerge on Windows. I applied your two
patches (30 Jan and 31 Jan).

The good news is that it builds and it seems to get further than it used to.

The bad news is that it still crashes. I am trying to merge three 16-bit
TIFF images of 8 megapixels with the following command:

hugin_hdrmerge.exe -wl -ad -v -o p.exr p1.tif p2.tif p3.tif

It takes a few seconds to crunch, and the following text appears:

Applying: choosing pixel with the largest weight when weights are similar

Running Khan algorithm
Loading and preparing p1.tif
Loading and preparing p2.tif
Loading and preparing p3.tif
deghosting (Khan algorithm)
deghosting...


iteration 0
processing...
caught exception in khanIteration: bad allocation

This application has requested the Runtime to terminate it in an unusual
way.
Please contact the application's support team for more information.

It is not worse than it used to be (as far as I recall, I was never
able to run hugin_hdrmerge on Windows; it always crashed on me).

https://sourceforge.net/tracker2/index.php?func=detail&aid=1880300&group_id=77506&atid=550441

sorry
Yuv

Yuval Levy

Feb 1, 2009, 2:52:54 AM
to hugi...@googlegroups.com
Yuval Levy wrote:
> It is not worse than what it used to be (as far as I recall I was never
> able to run hugin_hdrmerge in Windows, it always crashed on me).

I tested a bit more. With images resized to 1500x1000 and 8-bit it works.

Yuv

Lukáš Jirkovský

Feb 1, 2009, 3:13:07 AM
to hugi...@googlegroups.com
2009/2/1 Yuval Levy <goo...@levy.ch>:

Hmm… Does it crash only with TIFF input? I think hugin_hdrmerge was
supposed to work only with EXR input, even though vigra should take
care of any other supported format.

Lukáš Jirkovský

Feb 1, 2009, 3:23:22 AM
to hugi...@googlegroups.com
Anyway, I've just committed it to the trunk, because it doesn't
break anything that wasn't already broken.

Yuval Levy

Feb 1, 2009, 11:25:54 AM
to hugi...@googlegroups.com
Lukáš Jirkovský wrote:
> Hmm… Does it crash only with tiff input? I think that hugin_hdrmerge
> was supposed to work only with exr input even though vigra should take
> care of any other supported format.

At 8 Mpx and three exposures it crashes with JPG just like with TIFF. At
smaller sizes (I tried 3 Mpx) it works with JPG, PNG, TIFF, and even
with a mix of JPG and PNG (which does not really make sense, but since
we are testing...).

I did not test with EXR. My RAW converter does not output EXR, and it
would be very limiting for me if the tool only worked with EXR files.

I found three little problems though:

1 - There is residual transparency. It becomes very visible in Photoshop
when I move the exposure slider to the left (underexposed). This happens
also with JPG, and becomes particularly visible at the edges of ghosts.
I suspect it has to do with the weights?

2 - When feeding the exact same pictures in JPEG and PNG format and
using -wl -ad, the results differ significantly, more than JPEG
artefacts could account for. With default arguments, I see no difference.

3 - When I merge a standard -2/0/+2 stack to an EXR, it appears
overexposed in Photoshop compared to the same stack produced with
Photomatix or Qtpfsgui.

I hope this helps
Yuv

Lukáš Jirkovský

Feb 1, 2009, 12:23:56 PM
to hugi...@googlegroups.com
2009/2/1 Yuval Levy <goo...@levy.ch>:

Thanks for the extensive testing. I'll try to figure out what could
cause these issues and hopefully fix them.

Pablo d'Angelo

Feb 1, 2009, 4:31:02 PM
to hugi...@googlegroups.com
Hi Lukáš,

I'm not sure how familiar you are with the HDR merging process, so let
me give you a quick primer before we start trial-and-error code changes
that might lead to sub-optimal behavior of the code.

hugin_hdrmerge was designed for merging high dynamic range images (it
was written before enfuse was available). Assume we have 3 images that
we want to fuse into a single HDR image; also assume that they have
been captured on a tripod and nothing in the scene moved (i.e. the
images are perfectly registered).

Let's assume the "physical light intensity" at one point in the image
is given by i = 100. Further, for simplicity, we assume that the camera
has a linear response curve, that is, the gray value g stored in the
image is proportional to the true intensity:

g = e * i   (1)

where e is a factor that comes from the exposure settings of the camera.
Also, let's assume that the image noise is constant at 2 gray values.
This is all extremely simplified compared to what a real camera does,
but it still provides some insight (physicists on this list, please
forgive me ;-)).

The photographer takes 3 exposures, with -2, 0, and +2 exposure
compensation (e_1 = 1/4, e_2 = 1 and e_3 = 4 are then the exposure
factors for images 1, 2 and 3, respectively). This results in the
following gray values, as stored in the images:
g_1 = 100 / 4 = 25 (+- 2 noise) = 23 .. 27
g_2 = 100 * 1 = 100 (+- 2 noise) = 98 .. 102
g_3 = 100 * 4 = 400, clipped to 255 (assuming the camera outputs 8-bit images)

This means that this pixel is very dark and has visible noise in image
1, is nicely exposed in image 2, and is blown out in image 3.

For the HDR image, we get the gray values g_1, g_2 and g_3 and try to
recover the original intensity i. Formula (1), inverted to i = g / e,
gives these values for images 1, 2 and 3:

i_1 = 92 .. 108
i_2 = 98 .. 102
i_3 = 64

So images 1 and 2 give plausible values, but image 3 gives
nonsense, as the pixel was overexposed. The noise has a larger effect
in image 1, so one could argue that it is best to use only image 2.
It turns out to be better to compute a weighted average:

i = w_1 * i_1 + w_2 * i_2 + w_3 * i_3

The only question is: how do we compute the weights? Clearly, pixels
with a gray value of g = 255 should get zero weight, as that value is
only a lower bound for the true intensity. In practice, most programs
seem to use some kind of Gaussian or similar weight curve that starts
with w = 0 for g = 0, increases until it reaches a maximum for some
well-exposed gray values, and then decreases again to w = 0 for g = 255.

For example, in the Khan paper
http://graphics.cs.ucf.edu/ekhan/data/ghost/icip06.pdf , this function
is used (the plot is somewhere in the paper):

w = 1 - (g/127.5 - 1)^16

This is defined in khanSupport.cpp (weightMexicanHat()).
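
A small stand-alone sketch of this weighting and of the weighted average above (my own code, untested, not the actual weightMexicanHat() from khanSupport.cpp; the weights are normalized here so they sum to one):

#include <cmath>
#include <cstddef>
#include <vector>

// The weight curve above for an 8-bit gray value g: close to 0 at
// g = 0 and g = 255, largest around g = 127.5.
double hatWeight(double g)
{
    return 1.0 - std::pow(g / 127.5 - 1.0, 16);
}

// Merge one pixel: gray[k] is the original 8-bit gray value of exposure k,
// intensity[k] the exposure-normalized value i_k.
double mergePixel(const std::vector<double>& gray,
                  const std::vector<double>& intensity)
{
    double sum = 0.0, weightSum = 0.0;
    for (std::size_t k = 0; k < gray.size(); ++k) {
        const double w = hatWeight(gray[k]);
        sum += w * intensity[k];
        weightSum += w;
    }
    return weightSum > 0.0 ? sum / weightSum : 0.0; // all inputs clipped
}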

This weight should be computed from the xxx_gray.pgm images, because
they contain the original gray values (which can be used to judge how
good the exposure actually is), and not from the HDR values i_1, i_2,
i_3, where we cannot see that i_3 was actually an overexposed pixel
(as we don't know the exposure e_3 at this point).

What your patch actually did was load an HDR image into an 8-bit
image. Except for extremely dark images, all pixels in the origGray
image will be clipped to 255. This leads to the same weight for all
images (I haven't tested it, but if you turn on the option to save the
initial weights and view them, for example in Cinepaint or another
HDR-capable program, you can check that). These weights are then
refined by the Khan algorithm before they are used for HDR merging.

If using identical weights before starting the deghosting indeed turns
out to give a better deghosting result, then there should be an option
to explicitly initialize all images to the same weight.

Oh, quite a long mail, and I have run out of time before writing down
some more points...

Lukáš Jirkovský wrote:


> 2009/1/30 Pablo d'Angelo <pablo....@web.de>:
>> Hi Lukáš,

>> I wonder why vigra::importImage(info, destImage(*origGray)); executed
>> without throwing an exception, as it will try to load a 4 channel image
>> into a 1 channel one, as you noticed.
>>
>
> I think that exr implementation should notice this, because vigra only
> loads vector with pixel values not regarding what it means (if it's
> four values per pixel for RGBA values or one for grayscale), but maybe
> I'm wrong, because I haven't consulted my theory with the code.

Yes, vigra::importImage should throw an exception in that case; I'm
not sure why it doesn't.

ciao
Pablo

Pablo d'Angelo

Feb 1, 2009, 4:40:43 PM
to hugi...@googlegroups.com
Hi Yuv,

Yuval Levy wrote:


> I've tested quickly hugin_hdrmerge in Windows. I applied your two
> patches (30/Jan and 31/Jan).
>
> The good news is that it builds and it seems to get further than it used to.
>
> The bad news is that it still crashes. I am trying to merge three 16bit
> TIFF images of 8megapixel with the following command:
>
> hugin_hdrmerge.exe -wl -ad -v -o p.exr p1.tif p2.tif p3.tif

Actually, hugin_hdrmerge assumes that p1.tif, p2.tif and p3.tif are
already images where the gray values correspond to intensity values (i.e.
if corresponding pixels are well exposed, they should have the same
value). Suitable images are the .exr files created by nona, but not the
original 8- or 16-bit images.

The deghosting will not function properly if it is fed with raw,
non-exposure-normalized input images (.jpg or "normally" converted RAW
images). The deghosting computes differences between the images and
chooses the color values where most images agree. This means that the
gray values in the images must be comparable (for example,
proportional to the physical intensity).
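
To make "exposure normalized" concrete, a minimal sketch assuming the simplified linear model g = e * i from my earlier mail (ignoring the real response curve; the .exr files written by nona are already normalized like this):

#include <cmath>

// Map a gray value g from an image shot with exposure compensation ev
// (e.g. -2, 0, +2) back to a comparable intensity: e = 2^ev, i = g / e.
double normalizeGray(double g, double ev)
{
    const double e = std::pow(2.0, ev);
    return g / e;
}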

ciao
Pablo

Yuval Levy

Feb 1, 2009, 7:28:18 PM
to hugi...@googlegroups.com
Hi Pablo,

Pablo d'Angelo wrote:
> Actually, hugin_hdrmerge assumes that images where the gray values
> correspond to intensity values

Aha.


(ie.
> if corresponding pixels are exposed well, they should have the same
> value). Suitable images are the .exr files created by nona, but not the
> original 8 or 16 bit images.
>
> The deghosting will not function properly if it is feed with raw,
> non-exposure normalized input images

Unlike enfuse? So if we want deghosting for enfuse, we need to
normalize the input images first? Wouldn't that defeat the purpose and
the beauty of enfuse?

So from a processing perspective we would need
- one function / CLI tool to normalize the images
- one function / CLI tool to deghost them
- one function / CLI tool to merge them to an HDR
- and we already have enfuse to merge them into a fused image

Yuv

Pablo d'Angelo

Feb 2, 2009, 2:37:03 AM
to hugi...@googlegroups.com
Yuval Levy wrote:

> Hi Pablo,
>
> Pablo d'Angelo wrote:
>> Actually, hugin_hdrmerge assumes that images where the gray values
> > correspond to intensity values
>
> Aha.
> (ie.
>> if corresponding pixels are exposed well, they should have the same
>> value). Suitable images are the .exr files created by nona, but not the
>> original 8 or 16 bit images.
>>
>> The deghosting will not function properly if it is feed with raw,
>> non-exposure normalized input images
>
> unlike enfuse? so if we want deghosting for enfuse we need to first
> normalize the input images? wouldn't that defeat the purpose and the
> beauty of enfuse?

It is possible to take a shortcut, as we only need a function that can
tell us how well two gray values in an image pair fit together.
I think such a similarity function can be computed using mutual
information, or just using the joint histogram (sometimes called the
co-occurrence matrix) as a distance metric. It would be possible to add
a special mode for that to hugin_hdrmerge. The output would then be the
normal images, with the ghosts masked out in the alpha channel.
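
As a rough illustration of the joint histogram idea (my own sketch, not existing hugin code; it assumes two equally sized 8-bit gray images stored as flat arrays):

#include <cstddef>
#include <vector>

// Count how often gray value a in image A coincides with gray value b
// in image B at the same pixel. For well-registered, ghost-free content
// the counts concentrate along a curve; ghosted pixels fall far from it,
// so hist[a][b] can serve as an (un-normalized) similarity score.
std::vector<std::vector<unsigned> >
jointHistogram(const std::vector<unsigned char>& imgA,
               const std::vector<unsigned char>& imgB)
{
    std::vector<std::vector<unsigned> > hist(256, std::vector<unsigned>(256, 0));
    const std::size_t n = imgA.size() < imgB.size() ? imgA.size() : imgB.size();
    for (std::size_t p = 0; p < n; ++p) {
        ++hist[imgA[p]][imgB[p]];
    }
    return hist;
}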

I haven't really played a lot with enfuse and the alpha channel, but if
I remember correctly, there are some problems with visible seams when
fusing images with masked-out areas. It would be good to try a manual
deghosting first and see what happens when the ghost is painted black
and masked out using the alpha channel.

ciao
Pablo

Lukáš Jirkovský

Feb 2, 2009, 3:50:11 AM
to hugi...@googlegroups.com
2009/2/1 Pablo d'Angelo <pablo....@web.de>:
>
> Hi Lukáš,
>

> I'm not sure how familar you are with the hdr merging process, so let me
> give you a quick primer, before starting trial and error code changes
> that might lead to sub-optimal behavior of the code.

In fact I'm not familiar with the merging process at all, but I made a
patch which worked for me, so I thought it was the right way. I
apologize for committing it to the trunk so early; that was my mistake.
But I was so excited that it finally worked after a looong time.

I think I've got it. The image whose recovered i is nearest to the
ideal 100 should have the best-exposed pixel; here that's i_2.

> i = w_1 * i_1 + w_2 * i_2 + w_3 * i_3
>
> The only question is: How do we compute the weights? Clearly, the weight
> for pixels with a gray value g=255 should get zero weight, as it is only
> a lower bound for the true intensity. In practice, most programs seem to
> use some kind of Gaussian or similar weight curve, that starts with w=0
> for g=0, increased until it reaches a maxima for some well exposed gray
> values, and then decreases again to have w=0 for g=255.
>
> For example in the Khan paper
> http://graphics.cs.ucf.edu/ekhan/data/ghost/icip06.pdf , this function
> is used (the plot is somewhere in the paper):
> w = 1 - (g/127.5 -1)^16
>
> This is defined in khanSupport.cpp, weightMexicanHat()).
>
> This weight should be computed with the xxx_gray.pgm images, because
> they contain the original gray value (that can be used to judge how good
> the exposure actually is), and not the HDR images i_1, i_2, i_3, where
> we don't see that i_3 was actually an overexposed pixel (as we don't
> know the exposure e_3 at this point).

So it would work properly without luminance images only if the
luminance were computed inside hugin_hdrmerge, which would be possible
only for 8-bit images in a linear colorspace (I'm not sure if that's
the right term)?
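
For what it's worth, computing the gray value from the RGB pixel itself would look something like this (my own sketch using the usual Rec. 709 weights; this helper does not exist in hugin_hdrmerge):

// Luminance of a linear RGB pixel. It only helps for judging exposure
// if r, g, b are the original (not exposure-normalized) values, which
// is what the xxx_gray.pgm files provide today.
double luminance(double r, double g, double b)
{
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}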

> What your patch actually did was loading an HDR image into an 8 bit
> image. Except for extremely dark image, all pixels in the origGray
> image will be clipped to 255. This will lead to the same weight for all
> images (I haven't tested, but if you turn on the option to save the
> initial weights (for example in cinepaint, or other HDR capable
> programs), you can check that). These weights were then refined by the
> Khan algorithm before they are used for HDR merging.

Actually no, the weights are pretty similar whether the luminance or
the EXR image is used as input. But your explanation sheds light on why
the luminance-based weights have detail in the well-exposed parts.
I've uploaded what they look like:
http://blender6xx.ic.cz/view_bigimg.php5?id_image=26

> If that indeed turns out to result in a better deghosting result when
> using identical weights before starting the deghosting, then there
> should be an option to actually initialize the images to the same weight.

I'm not sure if I understand this correctly. You mean that the weights
from all inputs are identical? They seem almost the same.

> Oh, quite a long mail, and I have run out of time before writing down
> some more points...
>

But very instructive. Thank you.

Lukáš Jirkovský

Feb 2, 2009, 7:46:35 AM
to hugi...@googlegroups.com
First, I'd like to apologize to all the mathematicians and physicists
here for my simplified thoughts.

I'm reading through the Khan paper, and it makes me think that
luminance is not necessary for the deghosting itself. Luminance is in
fact used to determine which pixels are best exposed when merging the
photos (as Pablo perfectly explained in this thread), but for the
deghosting alone it should be enough to use the original input image
(even though luminance is used in the paper), because the most
important part of the weight generation is matching the processed pixel
against the pixels around it, to determine whether it is likely to be
background or a moving object.

In fact, I think that without using luminance it would be possible to
use this algorithm for removing ghosts caused by parallax error in an
ordinary panorama. I'm assuming the panorama would be shot with a
single exposure, so there's no need to choose the best-exposed pixels.
