this topic is something i have been pondering for several months now
(working with raw files for use in a panorama). i have found it takes
a lot of processing time and a lot of manual reviewing/clicking on
the individual photos in each panorama. after running a few tests
earlier this year i wrote a perl script that does much of the photo
processing for me. here are the steps i take; i am sure there is room
for improvement:
1) take the photos (hand-held, lots of overlap between photos, manual
mode on the camera - i set exposure for slightly underexposed photos
based on the best-lit area of the panorama when possible)
2) transfer files to the computer
3) run my perl script which does the following:
a) using ufraw-batch, analyze each photo to detect its 'auto' exposure
and black point (using the camera wb setting, or another appropriate
wb setting)
b) write each photo's 'auto' settings to a text file for review
later if needed; i also write many other settings from ufraw to the
same text file
c) calculate statistics (std dev, mean, median, etc.) for the
photos in the group; these go in the text file as well
d) process the group of photos for each set of 'auto' exposure and
black point found in step a above (this is very time-consuming and
resource-heavy - for instance, if there are just 6 photos in the
group, the script generates 6 sets of 6 photos each, for a total of 36
images)
* for the next update of my script, i am going to have it process
only a handful of these exposure settings (the min, max, mean, etc.) -
when there are 20 or 30 or 57 photos to deal with, it very quickly
becomes unmanageable to process every exposure setting
4) i run hugin to generate a panorama and blend it with enfuse
(currently i generate a panorama for each exposure and then blend
manually with enfuse; i'll be trying the hdr capabilities of hugin
soon, though)