> I think that's a good idea.
I don't know whether it's a good idea. It is pretty tedious compared to
simply extracting all the dynamic range to a 16-bit TIFF in one go using
a decent raw converter and later using local contrast enhancement to
improve the image.
> Maybe as an option to enfuse? No need to
> calculate intermediate images, if all that's desired is using the same
> input image for an artificial bracket. If enfuse were called with only
> one image, it could accept a set of EV deltas to emulate the
> artificial bracket, but load the image only once, saving i/o and
> memory, like
>
> enfuse --fake_bracket -2 0 +2 --output product.tif input.tif
This would probably be possible for TIFF files, but it would require
extracting the complete dynamic range to that TIFF (see above), which
might or might not be easy, depending on your raw converter. Better
would be to create the exposure steps directly from the raw file, which
would require using dcraw code inside enfuse.
But you can have this already: use a shell script that calls dcraw
three times, each time with a different -b parameter (e.g. -b 0.25,
-b 1, -b 4), passes the result files to enfuse and deletes any
intermediate files. This way you can also use dcraw's other features
like (basic) CA correction or white balance adjustment.
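A minimal sketch of such a script (assuming dcraw and enfuse are on the
PATH; "input.raw", the output names and the -2/0/+2 EV offsets are
placeholders):

```shell
#!/bin/sh
# Pseudo-bracket from a single raw via dcraw brightness scaling.
# dcraw's -b takes a linear multiplier, so an EV offset translates
# as 2^EV (e.g. -2 EV -> 0.25, +2 EV -> 4).
RAW=input.raw
for ev in -2 0 2; do
    mult=$(awk -v e="$ev" 'BEGIN { printf "%g", 2^e }')
    # -w: camera white balance, -T: TIFF output, -c: write to stdout
    dcraw -w -T -b "$mult" -c "$RAW" > "bracket_${ev}.tiff"
done
enfuse -o fused.tiff bracket_-2.tiff bracket_0.tiff bracket_2.tiff
rm -f bracket_*.tiff    # delete the intermediate files
```

Any of dcraw's other switches could be varied per iteration here, e.g.
a different white balance for each bracket.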
BTW: TuFuse Pro offers single-image autobracketing, but it's
Windows-only and commercial.
--
Erik Krause
http://www.erik-krause.de
> I disagree. I think it's not at all about local contrast (as in focus
> stacks), but about exposure fusion
Exposure fusion is all about local contrast :-)
> (The original post talks about
> dynamic range compression which has nothing to do with contrast).
If you extract the whole dynamic range of a raw shot into a TIFF, you
get pretty flat contrast. A large-radius local contrast boost, e.g. a
large-radius unsharp mask, brings the vividness back into the image
without blowing the highlights or darkening the shadows as much as
simply increasing the global contrast would. All in all this has a
similar effect to what enfuse has on an image stack.
The crux is extracting the whole dynamic range from a raw, which isn't
easy. But this would be required for the enfuse technique anyway
(since, as you wrote, enfuse reads no raw).
One way (using an outdated version of ACR) is described on
http://wiki.panotools.org/RAW_dynamic_range_extraction
Well, this is the technical view. From a photographer's view, enfuse
preserves local contrast while it lowers global contrast. It does this
by the technical means you describe (which I understand pretty well)
and additionally, and that's the trick, by multi-resolution blending.
It's the latter which preserves local contrast. If you simply take the
best-exposed pixels from each shot, you get a very strange-looking
image. See:
> http://research.edm.uhasselt.be/~tmertens/papers/exposure_fusion_reduced.pdf
(See the "Naive" in Figure 4)
I did tonemapping from a single 16-bit TIFF with considerably high
dynamic range (12 to 14 f-stops from scanned negative film) years
before enfuse was first published and before HDR tonemappers were
publicly available. If you press 12 f-stops into a 16-bit TIFF you
*get* pretty flat contrast and you must look for a way to enhance it
again. (See http://www.erik-krause.de/bilder/crueize/crueize1.htm for
an example.)
At that time I wrote some Photoshop actions which are still freely
available from http://www.erik-krause.de/contrast/index.htm
Later Photoshop introduced "Shadows and Highlights", which does
essentially the same but is more prone to halos.
I tried the same technique on exposure-bracketed series as well:
http://www.erik-krause.de/blending/index.htm but a lot of tweaking was
necessary to achieve good results. enfuse was a big step forward,
probably the biggest since the invention of the digital camera, because
it preserves the local contrast from each exposure bracket and puts it
into a result image which looks believable, unlike many HDR tonemapping
approaches. However, it's not essential for using the medium dynamic
range (8-10 f-stops) of a single raw shot. Good results can be achieved
with easier techniques.
This is nice, but perhaps a little outside enfuse's domain.
BTW, I would suggest ImageMagick's sigmoidal-contrast option as perhaps
being suitable for preparing the light/mid/dark frames for enfuse:
http://www.imagemagick.org/script/command-line-options.php#sigmoidal-contrast
I haven't tried it though.
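For what it's worth, a sketch of how that might look (untested, as
noted above; the file names, the strength of 4 and the midpoint choices
are arbitrary placeholders):

```shell
# -sigmoidal-contrast strength,midpoint% applies an S-curve that is
# steepest at the given midpoint, so shifting the midpoint both
# re-exposes the frame and biases where the contrast goes.
convert in.tif -sigmoidal-contrast 4,75% dark.tif   # darker, contrast near highlights
convert in.tif -sigmoidal-contrast 4,50% mid.tif
convert in.tif -sigmoidal-contrast 4,25% light.tif  # lighter, contrast near shadows
enfuse -o fused.tif dark.tif mid.tif light.tif
```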
BugBear
The problem is the raw conversion. Most raw converters let you choose
the "exposure", but it's very hard to extract the full DR. Most likely
this is because such an image would look very flat and the raw
converter would get bad reviews.
To find the settings in your raw converter you can use any image with
enough DR; however, I suggest using an exposure-bracketed series of a
simple target consisting of large white and black areas. For the wiki
article I used white paper on black cotton and an additional grey card
for reference: http://wiki.panotools.org/File:Camera-RAW-04.jpg
You need to find settings that clip the fewest highlights and the
fewest shadows. Usually this requires pulling down the contrast slider.
But be aware: the camera has already clipped the highlights. If you use
the clipping display and lower the brightness, some programs won't show
a region actually clipped by the camera as clipped, because it's darker
now.
If you shoot for panoramas you of course need to process all images
identically, with fixed settings. Later processing can be done after
stitching, which makes stitching fast and easy.
To get the original vividness back without losing the highlight and
shadow details, try e.g. an unsharp mask filter with a radius between
50 and 200 pixels, amount 20% and threshold 0. This might blow some
highlights; if you don't want that, use a mask to protect the
highlights. The same applies to shadows. Another technique is to create
a mask from the highlights only, blur it, and use it to selectively
increase their contrast by adjusting the black point. Do the same for
the shadows, but use the white point. However, I don't know whether the
GIMP allows for all these techniques...
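In ImageMagick terms the recipe might look like this (a hypothetical
sketch: the file names and blur sigmas are placeholders, and -unsharp
expresses the 20% amount as a gain of 0.2):

```shell
# Large-radius unsharp mask (ImageMagick syntax is
# radiusxsigma+gain+threshold): sigma ~100 px, amount 20%, threshold 0.
convert flat.tif -unsharp 0x100+0.2+0 vivid.tif

# Highlight protection: a blurred, inverted luminance mask lets the
# boosted version through everywhere except the brightest areas.
convert flat.tif -colorspace Gray -negate -blur 0x50 mask.tif
convert flat.tif vivid.tif mask.tif -composite protected.tif
```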
best regards
> I've just made the experiment:
> 1. converted a RAW in darktable (git) to 16-bits tiff with usual tonal
> curve and correct exposure for highlights (ETTR).
> 2. as there is a bug in darktable tiff export both graphicsmagick and
> imagemagick refuse to take the image to increase its luminance. So I
> opened the image in 8-bits GIMP 2.7.2 and saved it as png. Then I
> added 2EV in GIMP with levels and saved the image with a new name (8-
> bits png too.)
> 3. enfuse --GrayProjector=l-star -o 1.png DSC07748.png DSC07748_01.png
>
> [...]
>
> As one can see it works even for 8-bits input. Note the histogram
> shows real dynamic range compression. I can provide original image.
Hi Alexander,
It's very easy to emulate this workflow in hugin. Just load 2 copies
of your tiff into hugin, set the exposure value of one of them to 0 and
the other to +2, and then output your images. You can automate the
process to some extent by saving this project as a .pto and applying it
as a template to any other tiff you want to work on. There are a couple
of catches, however:
1. Both of hugin's exposure fusion output options ("Exposure fused from
stacks" and "Exposure fused from any arrangement") ignore exposure
corrections, so you can't use them for fusing pseudo-bracketed stacks.
You have to output "Exposure corrected, low dynamic range" images
instead, and fuse them from the command line. (BTW, does anyone know
why hugin's fusion outputs ignore exposure correction? I can't think of
any good reason why they should.)
2. Apart from the fact that given hugin's current state you still have
to resort to running enfuse from the command line with the "remapped"
images, you'll almost certainly get better results if you save multiple
versions of your tiff with different EVs using darktable or some other
raw converter. The reason is simple: Unless you've taken the trouble to
capture as much dynamic range as possible by reducing the global
contrast, your base tiff (even if it's 16 bit) is going to have
considerably less dynamic range than the raw file. So producing +2EV or
-2EV copies from this tiff isn't going to yield as much highlight and
shadow detail as producing +2EV/-2EV copies from the raw file. On the
other hand, if you do reduce the contrast of the base tiff to preserve
dynamic range, it's probably not going to yield satisfactory results
when you fuse it with the +2EV/-2EV copies.
The only way around this limitation would be to build a raw
converter into hugin or enfuse, which IMO would be quite inappropriate.
(Of course, you may not be interested in squeezing every last bit of
detail from your raw files -- in which case, relatively automatic
pseudo-bracketing in hugin could be achieved simply by modifying the
fusion output options so they don't ignore exposure correction.)
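The command-line half of the workaround in point 1 might look like this
(a sketch only: "pseudo.pto" and the output prefix are hypothetical
names, and nona's exact output naming can differ between hugin
versions):

```shell
# Remap the EV-shifted copies; with -m TIFF_m nona writes one
# exposure-corrected layer per input image.
nona -m TIFF_m -o remapped_ pseudo.pto
# Then fuse the layers from the command line, as described above.
enfuse -o fused.tif remapped_*.tif
```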
Cheers,
BBB
--
Bob Bright
Vancouver Island Digital Imaging
(250) 857-9887
BBBr...@VictoriaVR.ca
http://VictoriaVR.ca
Erik, I don't understand why you think this technique is easier. As you
note, it can be difficult to capture the full dynamic range of a raw
file in a single 16-bit tiff. But it's trivial to capture the full DR
in a stack of tiffs with different EVs. And (although I'm sure this
depends on the workflow you're accustomed to) feeding the exposure stack
to enfuse strikes me as a lot easier than fiddling with large-radius
unsharp masks, etc. in post-processing. Granted, if you're being very
fussy about the final result you may have to fiddle with enfuse's
settings. But most of the time the default enfuse settings produce
results which are as good as or better than any I've seen using other
techniques. And as a bonus, because we're spreading the dynamic range
of the raw file across multiple tiffs, there's little or no benefit to
using 16-bit images, so we can stick with an 8-bit workflow
throughout, which saves time, memory and disk space and, most
importantly, allows us
to do all of our post-processing in the gimp.
So what's not to like about pseudo-bracketing and exposure fusion?
It's difficult to find good settings. Once you have them you only need
to apply them. And currently, since enfuse doesn't read raw, you would
need to do this anyway even if enfuse did the autobracketing on its
own.
> But it's trivial to capture the full DR
> in a stack of tiffs with different EVs. And (although I'm sure this
> depends on the workflow you're accustomed to) feeding the exposure stack
> to enfuse strikes me as a lot easier than fiddling with large-radius
> unsharp masks, etc. in post-processing.
Unsharp masking is done in a few seconds. If you edit your final
images anyway, it's almost no additional effort. And you can do it on
the finished panorama, which gives you better control. And last but not
least, editing seams in a panorama is far easier on single shots than
on brackets.
> And as a bonus, because we're spreading the dynamic range
> of the raw file across multiple tiffs, there's little or no benefit to
> using 16-bit images, so we can stick with an 8-bit workflow throughout
> which saves time, memory and disk space, and most importantly allows us
> to do all of our post-processing in the gimp.
Well, of course what's easy or not depends on the tools you use.
However, using three 8 bit TIFFs doesn't save anything compared to one
16 bit TIFF.
> So what's not to like about pseudo-bracketing and exposure fusion?
I don't deny that artificial brackets might be useful in some cases.
In a previous post I even gave hints on how to automate the process
using dcraw. And of course it opens new possibilities: e.g. selective
shadow de-noising or a different white balance for each bracket... But
for panorama stitching, if you don't need those specialties, it's a
clumsy and complicated workflow.
Hmm, you must have much better hardware than I do. I just tried
applying a radius 100 unsharp mask to a couple of full sized 8-bit
panoramas in the gimp. The 24.5 megapixel one took approx. 5.5 minutes,
and the 50 megapixel one took 8.5 minutes. This might go some way to
explaining why you regard your technique as easier. If your machine is
so fast that you can unsharp mask a full sized 16-bit panorama in a few
seconds, then experimenting with different mask radii, adding separate
highlight and shadow masks to preserve detail, etc. is going to be much
less painful for you than it is for me.
(BTW, what kind of hardware do you have? The 5.5/8.5 minute results
were on a 4-year-old Toshiba laptop with an Intel Core2 Duo @ 1.73GHz
and only 2 GB of RAM. It wasn't hitting swap during the unsharp
masking, though, so I don't think lack of memory was an issue.)
Re: your point about editing seams, I've been doing virtually all of my
manual seam placement in hugin since Thomas added the mask editor. And
since he added the stack variants of the masks (thanks again, Thomas!),
manual seam placement for bracketed shots is exactly as easy as it is
for single shots. But in any case, wherever you do your manual seam
placement and whatever tools you use, it's a simple matter to fuse the
brackets prior to that stage so you only have to work on single shots.
>> And as a bonus, because we're spreading the dynamic range
>> of the raw file across multiple tiffs, there's little or no benefit to
>> using 16-bit images, so we can stick with an 8-bit workflow throughout
>> which saves time, memory and disk space, and most importantly allows us
>> to do all of our post-processing in the gimp.
>
> Well, of course what's easy or not depends on the tools you use.
> However, using three 8 bit TIFFs doesn't save anything compared to one
> 16 bit TIFF.
This depends on where the exposure fusion takes place in the workflow.
Using three 8-bit tiffs which get fused into one 8-bit tiff immediately
after they're saved by the raw converter will most certainly save on
time, memory and disk space, compared with a 16-bit workflow all the way
from raw conversion to final edits, since everything from optimizing,
stitching and blending to correcting stitching errors to post-processing
operates on an equal number of images which are half the size. If you
have enough memory and disk space and sufficiently fast hardware,
perhaps the savings are inconsequential to you. But for those of us
with lesser hardware, it means we can do more with less.
> I don't deny artificial brackets might be useful in some cases. In
> some previous post I even gave hints how to automate the process using
> dcraw. And of course it opens new possibilities: F.e. selective shadow
> de-noising or a different white balance for each bracket... For
> panorama stitching if you don't need those specialties it's a clumsy
> and complicated workflow.
Clumsy and complicated? Are you sure you didn't mean "more efficient
workflow which yields excellent results, but is different than the one I
currently use"? :-)
I'm on Windows 7 x64, 8 GB RAM, with an AMD Phenom II X4 at 3.4 GHz.
And I use Photoshop CS2 (pretty old, not 64-bit) for postprocessing.
I tried the GIMP for Windows several times and found it pretty slow,
but I thought that was the Windows version only...
I use the native Linux version of GIMP, not slow at all. My problem with
GIMP is it only does 8-bit color. I use other software (Bibble) for
processing that can make full use of 16-bit color.
For comparison to you speedburners, I do most of my processing work on a
7-year-old 1.5GHz Celeron M Toshiba laptop with 2GB RAM, and some on a
64-bit Sempron 2GHz desktop also with 2GB RAM. But Linux handles large
files better than Windows does, don't know if that makes a difference or
not.
--
Gnome Nomad
gnome...@gmail.com
wandering the landscape of god
http://www.cafepress.com/otherend/
Yeah, but it would have been of mediocre quality at best.
RAW processing is out of scope for hugin, and if it were implemented
it would be something as simple as using dcraw to load an image. Nobody
(hyperbole) uses dcraw to process their RAWs, because it's too simple
to be useful alone. We couldn't compete with applications like
Lightroom, Aperture or RawTherapee. It would probably take a few
man-years to get there.
Lukas
That's what I meant.
Most raw developing programs use dcraw code to read and decode raw
files, but very few actually use dcraw's post-processing. There are
better highlight recovery algorithms, far better interpolators, better
white balance techniques, denoising and so on. dcraw even treats the
camera orientation sensor differently on some cameras (it uses maker
notes instead of EXIF) and it delivers a different image size.
As I wrote before: a shell script that creates three temporary files
by varying the dcraw -b parameter, followed by enfuse, will provide you
with the desired functionality.
>
> While musing about the topic and playing with the technique some more
> recently, I discovered a nice method for some of my landscapes. The
> source images were automatically exposed raws which I converted to 16
> bit TIFF. When combined, the total dynamic range was beyond a single
> LDR image, so I either had blown highlights or too-dark shadows - even
> though the information was there in the 16 bits of the output, it had
> to be squashed into displayability. So I let hugin calculate the whole
> panorama twice, once -1 EV from the calculated optimum, and once +1.
> Then I enfused the resulting two panoramic images. Since the two
> panoramas are from the same pto, the alignment is perfect, and enfuse
> picks the nice sky from the darker one and the defined shadows from
> the lighter one - just the type of dynamic range compression enfuse is
> so good at.
Alternatively, you could have made a single 16 bit output panorama,
and generated the two (-1 +1) exposures from it for enfusing.
In effect an HDR workflow with tonemapping at the end.
Enfuse's USP is that one can go straight from the multiple
exposures of a stack to a tonemapped output, without an HDR
version ever existing.
What we appear to be discussing is a way of leveraging enfuse's
undoubted strength in situations where we do, in fact, have an HDR.
BugBear
Adjusting EV isn't straightforward in an image editor since you
don't know the response curve. This is why you often get better
results just adjusting the gamma (or you can do it properly in
Hugin).
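To put numbers to Bruno's point (assuming a pure power-law response
with gamma 2.2, which real cameras only approximate):

```shell
# With encoded = linear^(1/gamma), a +1 EV change (doubling the linear
# light) multiplies the encoded value by 2^(1/gamma), not by 2.
awk 'BEGIN {
    gamma = 2.2
    lin   = 0.18                    # mid-grey in linear light
    enc   = lin^(1/gamma)           # its encoded value
    enc1  = (2 * lin)^(1/gamma)     # encoded value after +1 EV
    printf "encoded ratio for +1 EV: %.3f\n", enc1 / enc   # ~1.370
}'
```

So doubling encoded pixel values (as a naive levels adjustment does)
overshoots a real +1 EV, which may be why a curve- or gamma-based
adjustment often looks more natural, as suggested above.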
--
Bruno
> if one assumes a "standard response curve" then how would one go
> about it?
If I knew what a standard response curve looked like as EMoR
parameters then I would suggest that Hugin used it as a default.
Currently we use 0,0,0,0,0 which doesn't correspond to any real
camera.
--
Bruno
That's something like the "mean" response curve of all cameras/films
that were used to derive the EMoR model, so it should be a good start.
ciao
Pablo
Ok, I must have got the wrong idea about this. I remember Joost
saying that ptgui uses a different set of baseline EMoR parameters,
but I could have misheard this completely.
--
Bruno