enfuse for tonemapping of a single RAW


Alexander Rabtchevich

Mar 21, 2011, 12:27:16 PM
to hugin and other free panoramic software
Hello,

I have used enfuse several times to compress the dynamic range of a
single RAW image. The procedure was to produce at least two tiffs with
different exposures, including one with correct highlights and one with
good shadows (+EV in the RAW processing software), and to use enfuse to
blend the images into one picture. The result looks natural, and most
of the time better than the output of "official" HDR tonemapping
algorithms.

So the feature request is to implement an internal tonemapping
algorithm in enfuse that works from one file, without the need to
produce several input files with different exposures. The adjustment
parameter could be an exposure compensation value.

With respect,
Alexander Rabtchevich

kfj

Mar 21, 2011, 2:43:25 PM
to hugin and other free panoramic software


On 21 Mrz., 17:27, Alexander Rabtchevich
I think that's a good idea. Maybe as an option to enfuse? There would
be no need to calculate intermediate images if all that's desired is
using the same input image for an artificial bracket. If enfuse were
called with only one image, it could accept a set of EV deltas to
emulate the artificial bracket, but load the image only once, saving
I/O and memory, like:

enfuse --fake_bracket -2 0 +2 --output product.tif input.tif
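For illustration, a sketch of how such an option could be emulated today with a wrapper script. Everything here is hypothetical: enfuse has no --fake_bracket flag, and the function name, file names, and the ImageMagick step are assumptions. An EV delta of e corresponds to a linear multiplier of 2^e; the script only prints the commands it would run, so remove the echo to execute them:

```shell
#!/bin/sh
# Hypothetical sketch: emulate the proposed --fake_bracket with
# ImageMagick + enfuse. Each EV delta becomes a linear multiplier
# 2^EV (only an approximation of a real EV shift). Commands are
# echoed rather than executed.
fake_bracket() {
    out=$1; src=$2; shift 2
    frames=""
    for ev in "$@"; do
        mul=$(awk -v e="$ev" 'BEGIN { printf "%g", 2 ^ e }')
        tmp="bracket${ev}.tif"
        # -colorspace RGB: multiply on (roughly) linear data, then back
        echo "convert $src -colorspace RGB -evaluate Multiply $mul -colorspace sRGB $tmp"
        frames="$frames $tmp"
    done
    echo "enfuse --output $out$frames"
}

fake_bracket product.tif input.tif -2 0 +2
```

Multiplying pixel values is only a rough stand-in for re-developing the RAW at a different EV, which is exactly the limitation discussed further down the thread.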

Kay

Erik Krause

Mar 21, 2011, 3:24:45 PM
to hugi...@googlegroups.com
On 21.03.2011 19:43, kfj wrote:

> I think that's a good idea.

I don't know whether it's a good idea. It is pretty tedious compared
to simply extracting the whole dynamic range to a 16-bit TIFF in one go
using a decent raw converter and later using local contrast enhancement
to improve the image.

> Maybe as an option to enfuse? No need to
> calculate intermediate images, if all that's desired is using the same
> input image for an artificial bracket. If enfuse were called with only
> one image, it could accept a set of EV deltas to emulate the
> artificial bracket, but load the image only once, saving i/o and
> memory, like
>
> enfuse --fake_bracket -2 0 +2 --output product.tif input.tif

This would probably be possible for TIFF files, but it would require
extracting the complete dynamic range to that TIFF (see above), which
might or might not be easy, depending on your raw converter. Better
would be to create the exposure steps directly from the raw file, which
would require using dcraw code inside enfuse.

But you can have this already: use a shell script that calls dcraw
three times, each time with a different -b parameter (e.g. -b 0.25,
-b 1, -b 4), passes the result files to enfuse and deletes the
intermediate files. This way you can also use dcraw's other features
like (basic) CA correction or white balance adjustment.
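A minimal sketch of such a script (the function name and file names are placeholders; -b is dcraw's linear brightness multiplier, so 0.25/1/4 is roughly -2/0/+2 EV). The run helper only prints the commands, so the sketch runs even where dcraw and enfuse are absent:

```shell
#!/bin/sh
# Sketch: develop one RAW three times at different brightnesses,
# fuse the results, delete the intermediates.
run() { echo "$*"; }    # dry run; use run() { "$@"; } to really execute

bracket_from_raw() {
    raw=$1
    for b in 0.25 1 4; do
        # -w: camera white balance, -6: 16-bit output, -T: write TIFF
        run dcraw -w -6 -T -b "$b" "$raw"     # writes photo.tiff
        run mv photo.tiff "dev_$b.tiff"
    done
    run enfuse -o fused.tif dev_0.25.tiff dev_1.tiff dev_4.tiff
    run rm -f dev_0.25.tiff dev_1.tiff dev_4.tiff
}

bracket_from_raw photo.raw
```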

BTW: TuFuse Pro offers single-image autobracketing, but it's
Windows-only and commercial.

--
Erik Krause
http://www.erik-krause.de

kfj

Mar 21, 2011, 4:08:25 PM
to hugin and other free panoramic software
On 21 Mrz., 20:24, Erik Krause <erik.kra...@gmx.de> wrote:
> On 21.03.2011 19:43, kfj wrote:
>
> > I think that's a good idea.
>
> I don't know whether it's a good idea. It is pretty tedious compared to
> simply extract all the dynamic range to a 16 bit TIFF in one go using a
> decent raw converter and later use local contrast enhancement to improve
> the image.

I disagree. I think it's not at all about local contrast (as in focus
stacks), but about exposure fusion. (The original post talks about
dynamic range compression, which has nothing to do with contrast.)
And I don't think Alexander wants to feed the RAW into the process
anyway, but the 16-bit TIFF equivalent -- enfuse can't read RAW.

> > enfuse --fake_bracket -2 0 +2 --output product.tif input.tif
>
> This would probably be possible for TIFF files but it would require to
> extract the complete dynamic range to that TIFF (s.a.), which might or
> might not be easy, depending on your raw converter. Better would be to
> create the exposure steps directly from the raw file which would require
> to use dcraw code inside enfuse.

I think the whole point is to avoid intermediate files, or to use just
a single intermediate (16-bit) TIFF file, since enfuse can't read RAW.

> But you can have this already: use a shell script that calls dcraw three
> times each with different -b parameters (f.e. -b 0,25, -b 1, -b 4), pass
> the result files to enfuse and delete any intermediate files. This way
> you can use dcraw's other features like (basic) CA correction or white
> balance adjustment.

Of course you can do it that way. You can also start with a TIFF file
and call nona several times with different EV values. In either case
you create intermediate files, when all you want is what your BTW can
seemingly already do:

> BTW.: TuFuse Pro offers single image autobracketing, but it's windows
> only and commercial.

The proposition is to do precisely that by adding an option to enfuse.

Kay

Erik Krause

Mar 21, 2011, 4:51:30 PM
to hugi...@googlegroups.com
On 21.03.2011 21:08, kfj wrote:

> I disagree. I think it's not at all about local contrast (as in focus
> stacks), but about exposure fusion

Exposure fusion is all about local contrast :-)

> (The original post talks about
> dynamic range compression which has nothing to do with contrast).

If you extract the whole dynamic range of a raw shot into a tiff, you
get pretty flat contrast. A large-radius local contrast boost -- e.g. a
large-radius unsharp mask -- brings the vividness back into the image
without blowing the highlights or darkening the shadows too much, as
simply increasing the global contrast would. All in all this has a
similar effect to what enfuse does with an image stack.

The crux is extracting the whole dynamic range from a raw, which isn't
easy. But this would be required for the enfuse technique anyway
(since, as you wrote, enfuse reads no raw).

One way (using an outdated version of ACR) is described on
http://wiki.panotools.org/RAW_dynamic_range_extraction

kfj

Mar 22, 2011, 3:47:32 AM
to hugin and other free panoramic software


On 21 Mrz., 21:51, Erik Krause <erik.kra...@gmx.de> wrote:
> On 21.03.2011 21:08, kfj wrote:
>
> > I disagree. I think it's not at all about local contrast (as in focus
> > stacks), but about exposure fusion
>
> Exposure fusion is all about local contrast :-)

Maybe my terminology is wrong. The way I understand it, enfuse does two
different things: it looks at the intensities and colour of single
corresponding pixels (--exposure-weight and --saturation-weight),
and/or it looks at contrast and entropy in a definable neighbourhood
(--contrast-weight and --entropy-weight).

I was using the term 'exposure fusion' to refer to the first group of
criteria and 'focus stacking' for the second. To me, the first group
lends itself to the technique described in the enfuse manual under
7.3.2 Common Misconceptions:

"A single image cannot be the source of an exposure series.
Raw-files in particular lend themselves to be converted multiple times
and the results being fused together. The technique is simpler,
faster, and usually even looks better than digital blending (as
opposed to using a graduated neutral density filter) or blending
exposures in an image manipulation program. Moreover, perfect
alignment comes free of charge!"

It seems to me that this technique would not produce images with
different local contrast that would be useful input for a stack of the
second type, using contrast or entropy weighting. But I think the
original post is referring to this technique.

Kay

Alexander Rabtchevich

Mar 22, 2011, 5:15:15 AM
to hugin and other free panoramic software
Kay is right. It's all about dynamic range compression, which is done
in tonemapping -- not about flat tiffs.

Our vision changes sensitivity within a scene, so shadows and
midtones are perceived as bright enough even in a scene with high
dynamic range. That's not true for a linear camera sensor, even after
application of the camera's global non-linear tonal curve.
If there is a highlight area within the frame, like the sky or sun
reflections, the difference in luminance between highlights and shadows
becomes 2 stops (4 times) or more. Applying a global non-linear tonal
curve to increase the luminance of shadows and midtones leads to visual
compression of the highlights and often ruins skin tones. The only
good solution is a masked approach.

When enfuse takes two tiffs (or even jpgs) made from one RAW -- one
with normally exposed highlights (and dark shadows and midtones), the
other with normally exposed shadows and midtones but overexposed
highlights -- it produces a natural-looking image with good highlights,
shadows and midtones. Of course, it can require some adjustment of the
weight settings or of the exposure compensation of the initial images,
but the result can look very much like the picture one saw when taking
the snapshot. I can provide the original RAW and the intermediate and
resulting tiffs to prove the concept.

With respect,
Alexander Rabtchevich

JohnG

Mar 22, 2011, 7:42:39 AM
to hugin and other free panoramic software
May I suggest an experiment?

1. Take a RAW file and convert it -- so that the whole DR is included
-- into a single TIF. (In order to get the whole DR into the TIF,
you will have to flatten the global contrast.)

2. From this TIF, manually create a +2EV TIF and a -2EV TIF, so you
now have a set of 3 TIFs. (This step -- as I understand it -- is what
you want Hugin/Enfuse to do internally.)

3. Run the 3 TIFs through Enfuse and eyeball the results.

My guess is you will be disappointed by the results, due to the
unavoidable loss of contrast and saturation in step 1.

:-J

On Mar 22, 9:15 am, Alexander Rabtchevich

Alexander Rabtchevich

Mar 22, 2011, 9:48:20 AM
to hugin and other free panoramic software
Why should one try a method which _does_not_work_, as you say, when
there is a method which _works_?

There is no need for negative exposure if the histogram does not
contain overexposed pixels.

I've just done the experiment:
1. Converted a RAW in darktable (git) to a 16-bit tiff with the usual
tonal curve and correct exposure for the highlights (ETTR).
2. As there is a bug in darktable's tiff export, both graphicsmagick
and imagemagick refuse to take the image to increase its luminance, so
I opened the image in 8-bit GIMP 2.7.2 and saved it as png. Then I
added 2 EV in GIMP with levels and saved the image under a new name (an
8-bit png too).
3. enfuse --GrayProjector=l-star -o 1.png DSC07748.png DSC07748_01.png

Here is a snapshot of the original image

http://bigserpent.users.photofile.ru/photo/bigserpent/150141165/167350384.jpg

here is the +2EV version

http://bigserpent.users.photofile.ru/photo/bigserpent/150141165/167350381.jpg

and here is the result

http://bigserpent.users.photofile.ru/photo/bigserpent/150141165/167350379.jpg

As one can see, it works even with 8-bit input. Note that the
histogram shows real dynamic range compression. I can provide the
original image.



On 22 Mar, 13:42, JohnG <vat...@yahoo.co.uk> wrote:
> May I suggest an experiment ?
>
> 1. Take a RAW file and convert it -- so that the whole DR is included
> -- into a single TIF . ( In order to get the whole DR into the TIF,
> you will have to flatten the global contrast. )
>
> 2. From this TIF, manually create a +2EV TIF and a -2EV TIF, so you
> now have a set of 3 TIFs. ( This step - as I understand it - is what
> you want Hugin/Enfuse to do internally ).
>
> 3. Run the 3 TIFs through Enfuse and eyeball the results.
>
> My guess is you will be disappointed by the results, due to the
> unavoidable loss of contrast and saturation in step 1.
>
> :-J
>


With respect,
Alexander Rabtchevich

Erik Krause

Mar 22, 2011, 12:53:53 PM
to hugi...@googlegroups.com
On 22.03.2011 08:47, kfj wrote:
>> On 21.03.2011 21:08, kfj wrote:
>>
>>> I disagree. I think it's not at all about local contrast (as in focus
>>> stacks), but about exposure fusion
>>
>> Exposure fusion is all about local contrast :-)
> Maybe my terminology is wrong. The way I understand it, enfuse does
> two different things: look at intensities and colour of single
> corresponding pixels (--exposure-weight and --saturation-weight) and/
> or look at contrast and entropy in a definable neighbourhood (--
> contrast-weight and --entropy-weight).

Well, this is the technical view. From a photographer's view, enfuse
allows preserving local contrast while lowering global contrast. It
does this by the technical means you describe (which I understand
pretty well) and additionally -- and that's the trick -- by
multi-resolution blending. It's the latter which preserves local
contrast. If you simply take the best-exposed pixels from each shot,
you get a very strange looking image. See:
http://research.edm.uhasselt.be/~tmertens/papers/exposure_fusion_reduced.pdf
(see the "Naive" result in Figure 4)

I did tonemapping from single 16-bit TIFFs with considerably high
dynamic range (12 to 14 f-stops from scanned negative film) years
before enfuse was first published and before HDR tonemappers were
publicly available. If you press 12 f-stops into a 16-bit TIFF you
*get* pretty flat contrast, and you must look for a way to enhance it
again. (See http://www.erik-krause.de/bilder/crueize/crueize1.htm for
an example.)

At that time I wrote some Photoshop actions which are still freely
available from http://www.erik-krause.de/contrast/index.htm
Later Photoshop introduced "Shadows and Highlights", which does
essentially the same but is more prone to halos.

I tried the same technique on exposure-bracketed series as well:
http://www.erik-krause.de/blending/index.htm -- but a lot of tweaking
was necessary to achieve good results. enfuse was a big step forward,
probably the biggest since the invention of the digital camera, because
it preserves the local contrast of each exposure bracket and puts it
into a result image which looks believable -- unlike many HDR
tonemapping approaches. However, to use the medium dynamic range (8-10
f-stops) of a single raw shot it's not essential. Good results can be
achieved with easier techniques.

paul womack

Mar 23, 2011, 7:36:41 AM
to hugi...@googlegroups.com

This is nice, but perhaps a little outside enfuse's domain.

BTW, I would suggest ImageMagick's sigmoidal-contrast option
as perhaps being suitable for preparing the light/mid/dark
frames for enfuse:

http://www.imagemagick.org/script/command-line-options.php#sigmoidal-contrast

I haven't tried it, though.
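As a sketch of how that might look (equally untried; the prep function, the file names and the 3x50% strength/midpoint values are guesses to tune by eye). In ImageMagick, +sigmoidal-contrast is the inverse form that flattens contrast, and -evaluate Multiply shifts the frames apart in exposure; the commands are printed, not executed:

```shell
# Untried sketch: flatten contrast with the inverse sigmoidal curve,
# shift exposure per frame, then fuse. prep() prints the command.
prep() { echo "convert $1 +sigmoidal-contrast 3x50% -evaluate Multiply $2 $3"; }

prep base.tif 0.5 dark.tif
prep base.tif 1   mid.tif
prep base.tif 2   light.tif
echo "enfuse -o fused.tif dark.tif mid.tif light.tif"
```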

BugBear

kfj

Mar 23, 2011, 2:11:17 PM
to hugin and other free panoramic software
On 22 Mrz., 17:53, Erik Krause <erik.kra...@gmx.de> wrote:

> Well, this is the technical view. From a photographers view enfuse
> allows to preserve local contrast while it lowers global contrast. It
> does this by the technical means you describe (which I understand pretty
> well) and additionally - and thats the trick - by multi resolution
> blending. It's the latter which preserves local contrast. If you simply
> take the best exposed pixels from each shot, you get a very strange
> looking image. See:
> http://research.edm.uhasselt.be/~tmertens/papers/exposure_fusion_redu...
>
> (See the "Naive" in Figure 4)

Thank you for the link to this article. I simply wasn't aware that
enfuse also uses multi-resolution blending -- I thought only enblend
did this. No wonder the results are as good as they are. I have played
with half-manual exposure fusion in the gimp (I think there was a
plugin, and you could tweak the masks manually) -- the technique was
quite similar to what you describe with photoshop, but I was never
quite convinced by it when comparing it to enfuse's result, which was
already available at the time. So I fully agree when you say that

> enfuse was a big step
> forward, probably the biggest since invention of the digital camera
> because it preserves the local contrast from each exposure bracket and
> puts it into a result image which looks believable - other than many HDR
> tonemapping approaches.

But you state that

> However, to use the medium dynamic range (8-10
> f-stops) from a single raw shot it's not essential. Good results can be
> achieved with easier techniques.

I think that the technique of making two versions of a RAW (usually
that's enough for the effect) and enfusing them is a simple technique,
and the results are really nice. If you could make that into a
one-step process eliminating the intermediate images, I think this
would be very helpful. But maybe you can also enlighten me as to what
you'd propose as an easier technique! My sensor produces 14-bit images
(I think it does), which is certainly well above anything a monitor can
display, so some dynamic range compression is needed. I'm convinced
that enfuse-like techniques produce superior results to HDR tonemapping
with less effort, but if you know of an even simpler method that does
the trick, please let me know.

My eternal problem is the sky and particularly clouds (I do
landscapes). I aim to expose so that nothing at all is overexposed if
I can avoid it -- sometimes allowing some leeway for clouds, and of
course for the sun itself if it's in the picture. If I do that at ISO
100 (the lowest my camera can do), the dark areas will usually be okay
noise-wise and have enough dynamic range to produce detail. If not, I
take a second shot, but it's nice to get away with one. Of course,
exposing like this will often leave the image too dark all over. But
if I use the single RAW to make one version where the sky is fine and
another where the rest looks good and enfuse them, I often come up with
a result that's just fine, and I just don't get the same convincing
look otherwise.

Kay

Erik Krause

Mar 23, 2011, 4:38:50 PM
to hugi...@googlegroups.com
On 23.03.2011 19:11, kfj wrote:
> But maybe you can enlighten me also as to what you'd
> propose as an easier technique! My sensor produces 14bit images (I
> think it does) which is certainly well above anything a monitor can
> display, so some dynamic range compression is needed. I'm convinced
> that enfuse-like techniques produce superior results to HDR
> tonemapping with less effort, but if you know of an even simpler
> method that does the trick, please let me know.

The problem is the raw conversion. Most raw converters let you choose
the "exposure", but it's very hard to extract the full DR. Most likely
this is because such an image would look very flat and the raw
converter would get bad reviews.

To find the right settings in your raw converter you can use any image
with enough DR; however, I suggest using an exposure-bracketed series
of a simple target consisting of large white and black areas. For the
wiki article I used white paper on black cotton with an additional grey
card for reference: http://wiki.panotools.org/File:Camera-RAW-04.jpg

You need to find the settings that clip the fewest highlights and the
fewest shadows. Usually this requires pulling down the contrast
slider. But be aware: the camera has clipped the highlights already.
If you use the clipping display and lower the brightness, some programs
don't show a region actually clipped by the camera as clipped, because
it's darker now.

If you shoot for panoramas you need to process all images the same way,
with fixed settings, of course. Further processing can be done after
stitching, which keeps the stitching fast and easy.

To get back the original vividness without losing the highlight and
shadow details, try e.g. an unsharp mask filter with a radius between
50 and 200 pixels, amount 20% and threshold 0. This might blow some
highlights; if you don't want that, use a mask to protect the
highlights. The same applies to the shadows. Another technique is to
create a mask from the highlights only, blur it, and use it to
selectively increase their contrast by adjusting the black point. Do
the same for the shadows, but use the white point. However, I don't
know whether the Gimp allows for all these techniques...
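For command-line users, the large-radius unsharp mask could also be approximated with ImageMagick (an assumption on my part -- the recipe above describes Photoshop/GIMP filters, and the usm helper and file names are made up). ImageMagick's -unsharp takes radiusxsigma+amount+threshold, so sigma 100 with amount 0.2 roughly matches "radius ~100 px, amount 20%, threshold 0". Printed as a dry run:

```shell
# Dry-run sketch of a large-radius "local contrast" unsharp mask.
usm() { echo "convert $1 -unsharp 0x$2+0.2+0 $3"; }

usm flat.tif 100 vivid.tif   # sigma ~100 px, amount 20%, threshold 0
```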

best regards

Bob Bright

Mar 23, 2011, 7:12:46 PM
to hugi...@googlegroups.com
On 11-03-22 06:48 AM, Alexander Rabtchevich wrote:

> I've just made the experiment:
> 1. converted a RAW in darktable (git) to 16-bits tiff with usual tonal
> curve and correct exposure for highlights (ETTR).
> 2. as there is a bug in darktable tiff export both graphicsmagick and
> imagemagick refuse to take the image to increase its luminance. So I
> opened the image in 8-bits GIMP 2.7.2 and saved it as png. Then I
> added 2EV in GIMP with levels and saved the image with a new name (8-
> bits png too.)
> 3. enfuse --GrayProjector=l-star -o 1.png DSC07748.png DSC07748_01.png
>

> [...]


>
> As one can see it works even for 8-bits input. Note the histogram
> shows real dynamic range compression. I can provide original image.

Hi Alexander,

It's very easy to emulate this workflow in hugin. Just load 2 copies
of your tiff into hugin, set the exposure value of one of them to 0 and
the other to +2, and then output your images. You can automate the
process to some extent by saving this project as a .pto and applying it
as a template to any other tiff you want to work on. There are a
couple of catches, however:

1. Both of hugin's exposure fusion output options ("Exposure fused from
stacks" and "Exposure fused from any arrangement") ignore exposure
corrections, so you can't use them for fusing pseudo-bracketed stacks.
You have to output "Exposure corrected, low dynamic range" images
instead, and fuse them from the command line. (BTW, does anyone know
why hugin's fusion outputs ignore exposure correction? I can't think of
any good reason why they should.)
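That command-line step is just a plain enfuse call; a sketch, assuming the "Exposure corrected, low dynamic range" output landed in prefix0000.tif and prefix0001.tif (the actual names depend on your project prefix, and fuse_remapped is my placeholder). Echoed rather than executed:

```shell
# Fuse hugin's exposure-corrected remapped images from the shell.
fuse_remapped() { out=$1; shift; echo "enfuse -o $out $*"; }

fuse_remapped fused.tif prefix0000.tif prefix0001.tif
```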

2. Apart from the fact that given hugin's current state you still have
to resort to running enfuse from the command line with the "remapped"
images, you'll almost certainly get better results if you save multiple
versions of your tiff with different EVs using darktable or some other
raw converter. The reason is simple: Unless you've taken the trouble to
capture as much dynamic range as possible by reducing the global
contrast, your base tiff (even if it's 16 bit) is going to have
considerably less dynamic range than the raw file. So producing +2EV or
-2EV copies from this tiff isn't going to yield as much highlight and
shadow detail as producing +2EV/-2EV copies from the raw file. On the
other hand, if you do reduce the contrast of the base tiff to preserve
dynamic range, it's probably not going to yield satisfactory results
when you fuse it with the +2EV/-2EV copies.
The only way around this limitation would be to build a raw
converter into hugin or enfuse, which IMO would be quite inappropriate.
(Of course, you may not be interested in squeezing every last bit of
detail from your raw files -- in which case, relatively automatic
pseudo-bracketing in hugin could be achieved simply by modifying the
fusion output options so they don't ignore exposure correction.)

Cheers,
BBB
--

Bob Bright
Vancouver Island Digital Imaging
(250) 857-9887
BBBr...@VictoriaVR.ca
http://VictoriaVR.ca


Bob Bright

Mar 23, 2011, 11:25:59 PM
to hugi...@googlegroups.com

Erik, I don't understand why you think this technique is easier. As
you note, it can be difficult to capture the full dynamic range of a
raw file in a single 16-bit tiff. But it's trivial to capture the full
DR in a stack of tiffs with different EVs. And (although I'm sure this
depends on the workflow you're accustomed to) feeding the exposure
stack to enfuse strikes me as a lot easier than fiddling with
large-radius unsharp masks, etc. in post-processing. Granted, if
you're being very fussy about the final result you may have to fiddle
with enfuse's settings. But most of the time the default enfuse
settings produce results which are as good as or better than any I've
seen using other techniques. And as a bonus, because we're spreading
the dynamic range of the raw file across multiple tiffs, there's little
or no benefit to using 16-bit images, so we can stick with an 8-bit
workflow throughout, which saves time, memory and disk space and, most
importantly, allows us to do all of our post-processing in the gimp.

So what's not to like about pseudo-bracketing and exposure fusion?

Erik Krause

Mar 24, 2011, 5:07:41 AM
to hugi...@googlegroups.com
On 24.03.2011 04:25, Bob Bright wrote:
> Erik, I don't understand why you think this technique is easier. As you
> note, it can be difficult to capture the full dynamic range of a raw
> file in a single 16-bit tiff.

It's difficult to find good settings. Once you have them, you only
need to apply them. And currently, since enfuse doesn't read raw, you
would need to do this anyway even if enfuse did the autobracketing on
its own.

> But it's trivial to capture the full DR
> in a stack of tiffs with different EVs. And (although I'm sure this
> depends on the workflow you're accustomed to) feeding the exposure stack
> to enfuse strikes me as a lot easier than fiddling with large-radius
> unsharp masks, etc. in post-processing.

Unsharp masking is done in a few seconds. If you edit your final
images anyway, it's almost no additional effort. And you can do it on
the finished panorama, which gives you better control. And last but
not least, editing seams in a panorama is far easier on single shots
than on brackets.

> And as a bonus, because we're spreading the dynamic range
> of the raw file across multiple tiffs, there's little or no benefit to
> using 16-bit images, so we can stick with an 8-bit workflow throughout
> which saves time, memory and disk space, and most importantly allows us
> to do all of our post-processing in the gimp.

Well, of course what's easy or not depends on the tools you use.
However, using three 8 bit TIFFs doesn't save anything compared to one
16 bit TIFF.

> So what's not to like about pseudo-bracketing and exposure fusion?

I don't deny that artificial brackets might be useful in some cases.
In a previous post I even gave hints on how to automate the process
using dcraw. And of course it opens new possibilities: e.g. selective
shadow de-noising, or a different white balance for each bracket...
For panorama stitching, if you don't need those specialties, it's a
clumsy and complicated workflow.

Bob Bright

Mar 24, 2011, 1:00:55 PM
to hugi...@googlegroups.com
On 11-03-24 02:07 AM, Erik Krause wrote:
>> But it's trivial to capture the full DR
>> in a stack of tiffs with different EVs. And (although I'm sure this
>> depends on the workflow you're accustomed to) feeding the exposure stack
>> to enfuse strikes me as a lot easier than fiddling with large-radius
>> unsharp masks, etc. in post-processing.
>
> Unsharp masking is done in a few seconds. If you edit your final
> images anyway it's almost no additional effort. And you can do it on
> the finished panorama, which gives you better control. And last but
> not least editing seams in a panorama is far easier on single shots
> than on brackets.

Hmm, you must have much better hardware than I do. I just tried
applying a radius-100 unsharp mask to a couple of full-sized 8-bit
panoramas in the gimp. The 24.5-megapixel one took approx. 5.5
minutes, and the 50-megapixel one took 8.5 minutes. This might go some
way towards explaining why you regard your technique as easier. If
your machine is so fast that you can unsharp-mask a full-sized 16-bit
panorama in a few seconds, then experimenting with different mask
radii, adding separate highlight and shadow masks to preserve detail,
etc. is going to be much less painful for you than it is for me.

(BTW, what kind of hardware do you have? The 5.5/8.5-minute results
were on a 4-year-old Toshiba laptop with an Intel Core2 Duo @ 1.73GHz
and only 2 GB of RAM. It wasn't hitting swap during the unsharp
masking, though, so I don't think lack of memory was an issue.)

Re: your point about editing seams, I've been doing virtually all of my
manual seam placement in hugin since Thomas added the mask editor. And
since he added the stack variants of the masks (thanks again, Thomas!),
manual seam placement for bracketed shots is exactly as easy as it is
for single shots. But in any case, wherever you do your manual seam
placement and whatever tools you use, it's a simple matter to fuse the
brackets prior to that stage so you only have to work on single shots.

>> And as a bonus, because we're spreading the dynamic range
>> of the raw file across multiple tiffs, there's little or no benefit to
>> using 16-bit images, so we can stick with an 8-bit workflow throughout
>> which saves time, memory and disk space, and most importantly allows us
>> to do all of our post-processing in the gimp.
>
> Well, of course what's easy or not depends on the tools you use.
> However, using three 8 bit TIFFs doesn't save anything compared to one
> 16 bit TIFF.

This depends on where the exposure fusion takes place in the workflow.
Using three 8-bit tiffs which get fused into one 8-bit tiff immediately
after they're saved by the raw converter will most certainly save on
time, memory and disk space, compared with a 16-bit workflow all the way
from raw conversion to final edits, since everything from optimizing,
stitching and blending to correcting stitching errors to post-processing
operates on an equal number of images which are half the size. If you
have enough memory and disk space and sufficiently fast hardware,
perhaps the savings are inconsequential to you. But for those of us
with lesser hardware, it means we can do more with less.

> I don't deny artificial brackets might be useful in some cases. In
> some previous post I even gave hints how to automate the process using
> dcraw. And of course it opens new possibilities: F.e. selective shadow
> de-noising or a different white balance for each bracket... For
> panorama stitching if you don't need those specialties it's a clumsy
> and complicated workflow.

Clumsy and complicated? Are you sure you didn't mean "more efficient
workflow which yields excellent results, but is different than the one I
currently use"? :-)

Erik Krause

Mar 24, 2011, 3:50:24 PM
to hugi...@googlegroups.com
On 24.03.2011 18:00, Bob Bright wrote:
> (BTW, what kind of hardware do you have? The 5.5/8.5 minute results
> were on an 4 year old Toshiba laptop with an Intel Core2 Duo @ 1.73GHz
> and only 2 GB of ram. It wasn't hitting swap during the unsharp
> masking, though, so I don't think lack of memory was an issue.)

I'm on Windows 7 x64, 8 GB RAM, with an AMD Phenom II X4 at 3.4 GHz,
and I use Photoshop CS2 (pretty old, not 64-bit) for post-processing.

I tried GIMP for Windows several times and found it pretty slow, but I
thought that was the Windows version only...

Gnome Nomad

Mar 25, 2011, 6:13:56 AM
to hugi...@googlegroups.com

I use the native Linux version of GIMP; it's not slow at all. My
problem with GIMP is that it only does 8-bit color. I use other
software (Bibble) for processing that can make full use of 16-bit
color.

For comparison with you speedburners, I do most of my processing work
on a 7-year-old 1.5GHz Celeron M Toshiba laptop with 2GB RAM, and some
on a 64-bit 2GHz Sempron desktop, also with 2GB RAM. But Linux handles
large files better than Windows does; I don't know whether that makes a
difference or not.

--
Gnome Nomad
gnome...@gmail.com
wandering the landscape of god
http://www.cafepress.com/otherend/

Jeffrey Martin

Mar 31, 2011, 3:59:37 AM
to hugi...@googlegroups.com
It is a GOOD IDEA :-)

I can't help thinking that during all this debating, the feature could
have been built by now ;-))) (not by me, though).
(Not an excuse to simply build features instead of having a good
debate, I fully realize.)

Jeffrey

Lukáš Jirkovský

Mar 31, 2011, 5:35:03 AM
to hugi...@googlegroups.com


Yeah, but it would have been of mediocre quality at best.

RAW processing is out of scope for hugin, and if it were implemented it
would be something as simple as using dcraw to load an image. Nobody
(hyperbole) uses dcraw alone to process their RAWs, because it's too
simple to be useful on its own. We couldn't compete with applications
like Lightroom, Aperture or Rawtherapee. It would probably take a few
man-years to get there.

Lukas

Jeffrey Martin

unread,
Mar 31, 2011, 2:57:33 PM3/31/11
to hugi...@googlegroups.com
Well, there are tons of RAW developing programs that use dcraw. So what do you mean? No one uses the dcraw command line?

Lukáš Jirkovský

unread,
Mar 31, 2011, 3:01:02 PM3/31/11
to hugi...@googlegroups.com
On 31 March 2011 20:57, Jeffrey Martin <360c...@gmail.com> wrote:
> no one uses dcraw command line?

That's what I meant.

Erik Krause

unread,
Mar 31, 2011, 5:36:14 PM3/31/11
to hugi...@googlegroups.com
Am 31.03.2011 20:57, schrieb Jeffrey Martin:
> well, there are tons of raw developing programs that use dcraw.

Most raw developing programs use dcraw code to read and decode raw
files, but very few actually use the dcraw post processing. There are
better highlight recovery algorithms, there are far better
interpolators, better white balance techniques, denoising and so on.
dcraw even treats the camera orientation sensor differently on some
cameras (it uses maker notes instead of EXIF) and it delivers a
different image size.

As I wrote before: A shell script to create three temporary files by
varying the dcraw -b parameter and a subsequent enfuse will provide you
with the desired functionality.
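For example, the script could be sketched like this (a sketch only: dcraw's -T, -c and -b flags are real, but the 0.5/1.0/2.0 brightness factors, roughly -1/0/+1 EV, and the file names are placeholders; the commands are assembled and printed here rather than executed):

```python
# Sketch of the shell-script idea: build a fake bracket from one RAW by
# varying dcraw's -b (linear brightness) factor, then fuse the results.
# dcraw -T writes TIFF, -c writes to stdout. Commands are printed, not run.
def bracket_commands(raw, factors=(0.5, 1.0, 2.0)):
    tiffs = [f"bracket_{b}.tiff" for b in factors]
    cmds = [f"dcraw -T -b {b} -c {raw} > {t}"
            for b, t in zip(factors, tiffs)]
    cmds.append("enfuse -o fused.tiff " + " ".join(tiffs))
    return cmds

for cmd in bracket_commands("photo.cr2"):
    print(cmd)
```

Pipe the printed lines to a shell (and adjust the -b factors to taste) once you have checked them.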

kfj

unread,
Apr 1, 2011, 3:13:24 AM4/1/11
to hugin and other free panoramic software


On 21 Mar., 18:27, Alexander Rabtchevich
<alexander.v.rabtchev...@gmail.com> wrote:

> So the feature request is to implement internal tonemapping algorithm
> from one file for enfuse without the need to produce several input
> files with different exposures. The adjustment parameter can be a
> value of exposure compensation.

I'd like to point out that if I look at the initial posting, I think
this is not directly about raw processing. It is a feature request for
processing a single image with enfuse rather than having to produce
several different versions of the image and feed them to enfuse. The
conversion of the initial raw image into a format enfuse can read is,
I'd say, not a problem and can be achieved easily using any of a
number of raw processing tools. The requested feature should work just
the same with a single 16 bit TIFF and might even do some good to 8
bit and even JPEG material. Being able to feed in a raw image might be
nice-to-have, but I think this isn't the point of the request, really.

While musing about the topic and playing with the technique some more
recently, I discovered a nice method for some of my landscapes. The
source images were automatically exposed raws which I converted to 16
bit TIFF. When combined, the total dynamic range was beyond a single
LDR image, so I either had blown highlights or too-dark shadows - even
though the information was there in the 16 bits of the output, it had
to be squashed into displayability. So I let hugin calculate the whole
panorama twice, once -1 EV from the calculated optimum, and once +1.
Then I enfused the resulting two panoramic images. Since the two
panoramas are from the same pto, the alignment is perfect, and enfuse
picks the nice sky from the darker one and the defined shadows from
the lighter one - just the type of dynamic range compression enfuse is
so good at. If enfuse had the behaviour Alexander requested, I suppose
I could just use it on the middle-exposed panorama to the same effect,
again saving the creation of two (or more) input images. Looks to me
like another good application for the requested feature, so I'm even
more for it.

Kay

paul womack

unread,
Apr 1, 2011, 4:26:29 AM4/1/11
to hugi...@googlegroups.com
kfj wrote:

>
> While musing about the topic and playing with the technique some more
> recently, I discovered a nice method for some of my landscapes. The
> source images were automatically exposed raws which I converted to 16
> bit TIFF. When combined, the total dynamic range was beyond a single
> LDR image, so I either had blown highlights or too-dark shadows - even
> though the information was there in the 16 bits of the output, it had
> to be squashed into displayability. So I let hugin calculate the whole
> panorama twice, once -1 EV from the calculated optimum, and once +1.
> Then I enfused the resulting two panoramic images. Since the two
> panoramas are from the same pto, the alignment is perfect, and enfuse
> picks the nice sky from the darker one and the defined shadows from
> the lighter one - just the type of dynamic range compression enfuse is
> so good at.

Alternatively, you could have made a single 16 bit output panorama,
and generated the two (-1 +1) exposures from it for enfusing.

In effect an HDR workflow with tonemapping at the end.

Enfuse's USP is that one can go straight from the multiple
exposures of a stack to a tonemapped output, without an HDR
version ever existing.

What we appear to be discussing is a way of leveraging enfuse's
undoubted strength in situations where we do, in fact, have an HDR.

BugBear

Alexander Rabtchevich

unread,
Apr 1, 2011, 6:39:32 AM4/1/11
to hugin and other free panoramic software
Yep, my posting and example weren't about RAW processing. The
bracketed files were obtained from a 16-bit tiff, not from RAW. There
was no processing of the images between bracketing and enfuse at all.
And the result was pretty satisfying.

So, let me return to the original idea. It is beyond RAW conversion.

If we have a RAW from a modern camera, it is naturally an HDR image in
itself, because modern cameras capture close to 12 bits of dynamic
range at the lowest ISOs. When we convert it to some format (16-bit
tiff is the best choice) in our favorite converter (I prefer
darktable), _the_resulting_ image contains enough data to restore shadows.

So, if we take a properly exposed tiff (or even jpg), make the required
exposure correction (+EV) in some software and feed those images to
enfuse as a set of bracketed images, it produces a nice result with
compressed global contrast and preserved local contrast. One can say it
would be better to use real bracketing, but on the one hand that is not
always possible, and on the other the result can be satisfactory anyway.

If enfuse could internally make an in-memory copy of a single picture
with a given (via an option) exposure compensation and treat those
images as an exposure stack, that would make it a nice tonemapping
program. I guess that is the simplest solution requiring minimum
effort - if the option is used, instead of loading a second image just
create it in memory from the first one and apply the given exposure
compensation.
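A minimal sketch of that in-memory fake bracket, assuming linear pixel values in [0, 1] (the function name and the hard clip at 1.0 are illustrative, not enfuse's actual code):

```python
# Illustrative only: fake an exposure bracket in memory from one image.
# Pixel values are assumed linear in [0, 1]; +1 EV doubles linear exposure.
def fake_bracket(pixels, ev_deltas):
    """Return one exposure-shifted, clipped copy of `pixels` per EV delta."""
    stack = []
    for ev in ev_deltas:
        gain = 2.0 ** ev
        stack.append([min(p * gain, 1.0) for p in pixels])
    return stack

stack = fake_bracket([0.1, 0.4, 0.9], [-1, 0, +1])
# stack now holds a dark, a normal and a bright version of the same image,
# ready to be fused as if they were a real bracket.
```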


On 1 Apr, 10:13, kfj <_...@yahoo.com> wrote:
> On 21 Mar., 18:27, Alexander Rabtchevich
>
> <alexander.v.rabtchev...@gmail.com> wrote:
> > So the feature request is to implement internal tonemapping algorithm
> > from one file for enfuse without the need to produce several input
> > files with different exposures. The adjustment parameter can be a
> > value of exposure compensation.
>
> I'd like to point out that if I look at the initial posting, I think
> this is not directly about raw processing. It is a feature request for
> processing a single image with enfuse rather than having to produce
> several different versions of the image and feed them to enfuse. The
> conversion of the initial raw image into a format enfuse can read is,
> I'd say, not a problem and can be achieved easily using any of a
> number of raw processing tools. The requested feature should work just
> the same with a single 16 bit TIFF and might even do some good to 8
> bit and even JPEG material. Being able to feed in a raw image might be
> nice-to-have, but I think this isn't the point of the request, really.
> Kay

With respect,
Alexander Rabtchevich

Jeffrey Martin

unread,
Apr 4, 2011, 3:53:56 AM4/4/11
to hugi...@googlegroups.com
On a related note, I'm curious - what is the equivalent of making a +1 exposure adjustment on a raw file - what actual settings would you change in a tiff? Simply move the white/black point?


Bruno Postle

unread,
Apr 4, 2011, 4:12:53 PM4/4/11
to Hugin ptx

Adjusting EV isn't straightforward in an image editor since you
don't know the response curve. This is why you often get better
results just adjusting the gamma (or you can do it properly in
Hugin).
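A sketch of the difference, assuming an sRGB-like gamma of 2.2 (which, as noted, is not any real camera's response):

```python
# Why a raw EV push differs from a simple multiply: +1 EV doubles *linear*
# values, but TIFF/JPEG pixels are usually gamma-encoded, so the encoding
# has to be undone first. The 2.2 exponent is an assumed sRGB-like curve.
GAMMA = 2.2

def push_ev(encoded, ev):
    """Apply an EV shift to a gamma-encoded value in [0, 1]."""
    linear = encoded ** GAMMA                 # undo the display encoding
    linear = min(linear * 2.0 ** ev, 1.0)     # exposure works in linear light
    return linear ** (1.0 / GAMMA)            # re-encode

# Naively doubling the encoded value instead would brighten midtones far
# more than a true +1 EV does.
```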

--
Bruno

Jeffrey Martin

unread,
Apr 7, 2011, 2:54:55 AM4/7/11
to hugi...@googlegroups.com
if one assumes a "standard response curve" then how would one go about it?

Jeff

Alexander Rabtchevich

unread,
Apr 7, 2011, 4:12:01 AM4/7/11
to hugin and other free panoramic software
I can say the result can be good regardless of the global tone curve
used to produce the input image. Since I added the exposure in an
external program on a finished image, with the camera tone curve already
applied in the RAW converter, the curve should not be an obstacle.

Bruno Postle

unread,
Apr 7, 2011, 2:20:32 PM4/7/11
to Hugin ptx
On Wed 06-Apr-2011 at 23:54 -0700, Jeffrey Martin wrote:

> if one assumes a "standard response curve" then how would one go
> about it?

If I knew what a standard response curve looked like as EMoR
parameters then I would suggest that Hugin used it as a default.

Currently we use 0,0,0,0,0 which doesn't correspond to any real
camera.

--
Bruno

Pablo d'Angelo

unread,
Apr 8, 2011, 2:21:55 PM4/8/11
to hugi...@googlegroups.com
Hi Bruno,

That's something like the "mean" response curve of all cameras/films
that were used to derive the EMoR model, so it should be a good start.

ciao
Pablo

Bruno Postle

unread,
Apr 8, 2011, 4:51:53 PM4/8/11
to Hugin ptx

Ok, I must have got the wrong idea about this. I remember Joost
saying that ptgui uses a different set of baseline EMoR parameters,
but I could have misheard this completely.

--
Bruno

Yclept Nemo

unread,
Apr 8, 2011, 11:30:24 PM4/8/11
to hugi...@googlegroups.com
I don't know if this is related, but the luxrender project recently
added film response curves intended to emulate various cameras during
tonemapping.
Here are some details (read down the topic):
http://www.luxrender.net/forum/viewtopic.php?f=12&t=5456
And a link to the CRF files:
http://www.luxrender.net/release/crf/

Alexander Rabtchevich

unread,
Apr 11, 2011, 6:52:43 AM4/11/11
to hugin and other free panoramic software
The darktable project ( darktable.sf.net ) has a lot of base curves for
different manufacturers. But I would like to note that the curve used at
RAW conversion can differ from the vendor's.