Major Major Major Problem:
Even though there are 275 images, they are split across 24 stacks, so
not that many control points are needed. Finding the control points
takes about 45 minutes using multirow CPFind + align_image_stack and
generates approximately 30,000 points. The points match the adjacent
bracketed images, and one optimally exposed image from each stack
matches the adjacent stacks. The matches are generally very good and I
am able to reduce my error distance for all points to less than 2.5px
relatively quickly.
However, when I save and restart, while the control points table shows
about the same number of control points - around 30,000, I don't
remember the exact figure - they now match different
images. So obviously the matches are now very bad. For example,
adjacent bracketed images are generally no longer matched (missing all
control points). Adjacent stacks are also generally no longer matched
(missing all control points), and if there are matches, they are weak.
My maximum CP distance jumped from 2.5 to 30, and average distance
from 0.09 to 8. After re-optimization, maximum distance jumped up to
50, standard deviation back down to 2.125... but the image alignment
was completely screwed up. Points I'm pretty sure I deleted by
hand have re-appeared. It is an enormous mess. (note: and I haven't
even used the fast preview window -> see my last question)
Luckily, Hugin saves the individual image positions correctly so if I
don't apply optimizations I don't modify the panorama. However, I just
found an area that needs more control points to get rid of some
glitches, and the only way to modify the control points and
re-optimize is to start from scratch.
Other Problems:
This means I can't close/reopen hugin, which is annoying not only
because I've had hugin crash twice, but also because "clean control
points" freezes hugin and I have to manually kill it. Ten hours after
clicking "clean control points", the hugin interface remained
inactive/dead. The terminal outputs a lot of:
" ...
" Number of images 2
" No Parameters to optimize
" Bad params
" ...
Really Minor Problems:
. Hugin segfaults if I press enter on the keyboard while the splash
window has focus
. If there are too many images in the horizontal list of the preview
tab of the Fast Preview Window, then a scrollbar appears and partially
covers the image list.
. Stacks only work out of the box with a maximum of 12 stacks. Images
from any extra stacks remain unticked in the optimizer/exposure tab
lists, and so aren't aligned/optimized.
Stuff I just don't understand:
Two images from adjacent stacks have about 40 control points with a
maximum distance of 2px. To the eye, the points looked perfectly
aligned. However, Hugin decided to rotate the right image 180 degrees.
Basically, the first four stacks were right-side up, the next four
were upside-down, the next two were 90 degrees clockwise, etc.,
despite perfect control point matches. I had to go to the image tab
and reset the roll and pitch manually.
Feature Requests:
. The control point table - 30,000 points - is just too slow. Several
minutes are required to regenerate the list after deleting a point.
Can it be made faster?
. Similarly, the layout tab of the fast preview window is even slower
- several tens of minutes to update.
I find that the layout mode of the fast preview window is
indispensable. If it can't be used and the user knows the approximate
angle of rotation between bracket sets, there should be a method in
the images tab to programmatically set the yaw, or pitch/roll: for
example progressively rotate each stack X degrees counterclockwise
(from above)
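In the meantime, the feature requested above can be approximated by editing the .pto project file directly. Below is a minimal sketch in Python, under two assumptions not stated in the thread: that image lines in the project start with "i " and carry a y<value> yaw parameter, and that the images appear in the file in stack order. The function name is hypothetical.

```python
import re

def set_stack_yaws(pto_text, images_per_stack, step_deg):
    """Assign yaw = stack_index * step_deg to every image line.

    Assumes Hugin 'i' image lines carry a y<value> yaw parameter and
    that images are ordered stack by stack in the project file.
    """
    out, img_idx = [], 0
    for line in pto_text.splitlines():
        if line.startswith("i "):
            yaw = (img_idx // images_per_stack) * step_deg
            # Replace only the first y<number> token on the image line.
            line = re.sub(r"(?<= )y[-0-9.]+", f"y{yaw}", line, count=1)
            img_idx += 1
        out.append(line)
    return "\n".join(out)
```

Running this over a project with 24 stacks and step_deg=15 would pre-position each stack 15 degrees apart before optimization, so the layout mode would not be needed just to rough in the yaw.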
Questions:
. Is there any tool that can merge stacks into HDR without any
alignment or other pre-processing? That way, I can work with only 24
HDR images instead of the full 275? And can exposure fusion work with
HDR images?
. Is it possible to configure control point creations on:
* all images
* only overlapping images
. Does changing the EV of "displayed images" in the fast preview
window have any effect on the output panorama, especially when using
HDR or Exposure Fusion modes?
. Why does hugin's stitching/enblend use practically no memory (25% of
RAM) but all swap space (100% of partition)?
. Is it possible to run the *same* vertical line across several
images? == four points across two images that also help precisely
align the images.
. I assume that the fast preview window tools (center/fit/straighten)
actually affect the image parameters (yaw,pitch,roll,etc) and do not
modify control points. Therefore, after applying center/fit/straighten
I would think if I subsequently optimize the project it would revert
to the exact state before applying center/fit/straighten. That is, the
image positions/parameters are identical to before applying
center/fit/straighten, and the control points and control-point
distances are also identical to the state before applying
center/fit/straighten. Why is this not the case?
Anyway, aside from these several questions, the documentation is very
good. And I really like the interface.
thanks
Please specify the exact version of Hugin and the operating
system you are using. Some idea about your hardware configuration
(cpu/RAM) may also help.
> Stuff I just don't understand:
> Two images from adjacent stacks have about 40 control points
> with a maximum distance of 2px. To the eye, the points looked
> perfectly aligned. However, Hugin decided to rotate the right
> image 180 degrees. Basically, the first four stacks were
> right-side up, the next four were upside-down, the next two
> were 90 degrees clockwise, etc despite perfect control point
> matches. I had to go to the image tab and reset the roll and
> pitch manually.
Are you optimizing for the field of view? Hugin will flip images if
it thinks they cover more than 360 degrees and you don't allow it
to change the FOV.
> Questions:
> . Is there any tool that can merge stacks into HDR without any
> alignment or other pre-processing?
Qtpfsgui/LuminanceHDR and various proprietary HDR programs.
> And can exposure fusion work with HDR images?
No, but you can manually enfuse your stacks first, then use them
as input to Hugin.
> . Is it possible to configure control point creations on:
> * all images
> * only overlapping images
Select the images you want in the Images tab before clicking the
"Create control points" button.
--
Markku Kolkka
markku...@iki.fi
hugin 2010.4.0
Linux 2.6.37-ARCH
Intel T2400 @ 1.83GHz
RAM, 4GB, 3.25 available
In any case I restarted the panorama with fewer brackets. The
interface is much snappier so I was able to play with a bunch of
configurations.
I've figured out what is happening. When control points are added,
their default distance is 0.0, and so all the images present a perfect
(green) fit. Typically, I'd expect control-point distances to be
calculated only after optimizing the control points. However, it seems
that upon start-up hugin also calculates the distances: if the
images have yet to be optimized, then the control point distances
become terrible and hugin presents a red (awful) image fit.
I've noticed that hugin also calculates control-point distances in
some other circumstances; off the top of my head I can't recall, but I
believe it involves interacting with the fast preview window. In any
case, this seems to be incredibly counter-intuitive behavior serving
no useful purpose. It definitely confused me. Any other symptoms I
mentioned, such as disappearing/moved control points, can probably be
regarded as my error: most likely I had removed the points from
another bracket set.
>> Other Problems:
>> This means I can't close/reopen hugin, which is annoying not only
>> because I've had hugin crash twice, but also the "clean control
>> points" freezes hugin and I have to manually kill it. Ten hours after
>> clicking "clean control points", the hugin interface remained
>> inactive/dead. The terminal outputs a lot of:
>> " ...
>> " Number of images 2
>> " No Parameters to optimize
>> " Bad params
>> " ...
I've also figured this one out. If no images are selected, then "clean
control points" works as expected. If all the images are selected,
then hugin will freeze. Perhaps when all the images are selected hugin
tries to run through every possible combination? I don't know.
However, hugin completely locks up: the GUI no longer updates, and the
terminal is unresponsive with no output.
In any case, I've found clean control points very unhelpful: it tends
to only remove points that I've manually added. There should be some
preference to prevent this.
>> Really Minor Problems:
>> . Hugin segfaults if I press enter on the keyboard while the splash
>> window has focus
More information. It segfaults on pretty much any mouse or keyboard input.
>> . If there are too many images in the horizontal list of the preview
>> tab of the Fast Preview Window, then a scrollbar appears and partially
>> covers the image list.
>> . Stacks only work out of the box with a maximum of 12 stacks. Images
>> from any extra stacks remain unticked in the optimizer/exposure tab
>> lists, and so aren't aligned/optimized.
I wasn't able to duplicate this while creating my reduced-image
panorama with an equal number of stacks.
>> Stuff I just don't understand:
>> Two images from adjacent stacks have about 40 control points with a
>> maximum distance of 2px. To the eye, the points looked perfectly
>> aligned. However, Hugin decided to rotate the right image 180 degrees.
>> Basically, the first four stacks were right-side up, the next four
>> were upside-down, the next two were 90 degrees clockwise, etc
>> despite perfect control point matches. I had to go to the image tab
>> and reset the roll and pitch manually.
>
> Are you optimizing for the field of view? Hugin will flip images if
> it thinks they cover more than 360 degrees and you don't allow it
> to change the FOV.
No; since I've already calibrated the lens, I am only optimizing yaw,
pitch, and roll. While the panorama itself covers 375 degrees in
total, each individual image covers 29.7 degrees. However, I've
noticed that if the yaw is already approximately correct, then hugin
won't flip images.
>> Feature Requests:
>> . The control point table - 30.000 points - just too slow. Several
>> minutes required to regenerate the list after deleting a point. Can it
>> be made faster?
My new panorama has only 9,000 points, and the control point table is
still slower than a tortoise.
>> . Similarly, the layout tab of the fast preview window is even slower
>> - several tens of minutes to update.
>> I find that the layout mode of the fast preview window is
>> indispensable. If it can't be used and the user knows the approximate
>> angle of rotation between bracket sets, there should be a method in
>> the images tab to programmatically set the yaw, or pitch/roll: for
>> example progressively rotate each stack X degrees counterclockwise
>> (from above)
>>
>> Questions:
>> . Is there any tool that can merge stacks into HDR without any
>> alignment or other pre-processing? That way, I can work with only 24
>> hdr images instead of the full 275? And can exposure fusion work with
>> HDR images?
>
> Qtpfsgui/LuminanceHDR and various proprietary HDR programs.
> ...
> No, but you can manually enfuse your stacks first, then use them
> as input to Hugin.
Thanks, PFSCalibrate seems to be what I'll use. Note that I'm not
looking to enfuse or tonemap individual stacks, since those operations
are location-based and would generate seams and incomplete results
unless applied to the complete panorama.
When I say HDR, I mean a full-dynamic-range, linear-color-space,
non-tonemapped image. One reason I ask is that Hugin cannot read
camera RAW images. RAW images are in a linear color space, so hugin
would not have to reverse-calculate the response curve applied by
ufraw/dcraw. In this respect HDR images have the same benefit as RAW
images: linear color space, and smaller and faster than TIFF.
I'm curious if anyone has taken this approach and can confirm these
theories. I would especially like to know if working with OpenEXR
images is significantly faster than TIFF.
>> . Is it possible to configure control point creations on:
>> * all images
>> * only overlapping images
>
> Select the images you want in the Images tab before clicking the
> "Create control points" button.
Thanks, that was invaluable advice.
>> . Does changing the EV of "displayed images" in the fast preview
>> window have any effect on the output panorama, especially when using
>> HDR or Exposure Fusion modes.
>> . Why does hugin's stitching/enblend use practically no memory (25% of
>> RAM) but all swap space (100% of partition)?
>> . Is it possible to run the *same* vertical line across several
>> images? == four points across two images that also help precisely
>> align the images.
>> . I assume that the fast preview window tools (center/fit/straighten)
>> actually affect the image parameters (yaw,pitch,roll,etc) and do not
>> modify control points. Therefore, after applying center/fit/straighten
>> I would think if I subsequently optimize the project it would revert
>> to the exact state before applying center/fit/straighten. That is, the
>> image positions/parameters are identical to before applying
>> center/fit/straighten, and the control points and control-point
>> distances are also identical to the state before applying
>> center/fit/straighten. Why is this not the case?
After playing around with the straighten tool, I've found that it does
not seem to affect the relative positions of images, nor does it
affect control point distance/correlation. How is it possible to move
images without affecting these parameters?
And I have an additional question:
I have a lightpost running vertically straight through the center
section of an overlapping area between two images. There are 40
control points scattered across these two images, and I am sure all
control points are accurate. Furthermore each control point can be
optimized to an error of less than 1px. So I am stumped as to why, in
the stitched output, the lightpost is misaligned by at least 20px.
Does anyone have any suggestions?
In any case, by specifically placing more control points on the
top-area of the lightpost I was able to reduce the seam (by the way,
the location of enblend's seam could not have been any worse). Since
the vertical pole itself has no identifying features and furthermore
is not vertical (haha) I thought I would try to add a "straight line"
series of control points.
Have I done this correctly? On the left image I place four points: "A
B B A" to form two "line3" entries. In the right image I place
another four points: "C D D C" across the same image-line to form
another two "line3" entries. Then I optimize, and while the images in
question are perfectly aligned (no seam), it throws off the panorama
so badly that after 10 minutes of reoptimizing, the closest possible
fit between the other images is 80px on average.
>> Anyway, aside from these several questions, the documentation is very
>> good. And I really like the interface.
>>
>> thanks
and thanks again
On Apr 7, 2011 6:15 PM, "Yclept Nemo" <orbis...@gmail.com> wrote:
>
> RAW images are in linear color space so hugin would
> not have to reverse-calculate the response curve applied by
> ufraw/dcraw.
Is that generally true? I don't know much about raw processing (or indeed about image sensor electrical behaviour), but I wouldn't have guessed that light input -> raw pixel values would be necessarily or even typically linear.
> And I have an additional question:
>
> I have a lightpost running vertically straight through the center
> section of an overlapping area between two images. There are 40
> control points scattered across these two images, and I am sure all
> control points are accurate. Furthermore each control point can be
> optimized to an error of less than 1px. So I am stumped as to why, in
> the stitched output, the lightpost is misaligned by at least 20px.
> Does anyone have any suggestions?
Well, if it were my panorama it would be because of parallax from hand-holding the camera, but I'm assuming that you're using a calibrated panorama head.
(And a much better lens: my 18-200mm Nikkor has enough variation in FoV due to aperture and focus changes there I can't get a good fit when processing image-stacked panoramas without optimizing (unlinked) v too.)
Christopher
I'm pretty sure it's accurate:
http://www.luminous-landscape.com/tutorials/expose-right.shtml
The gist:
2x light = 2x voltage = 2x pixel intensity. Since human vision is
logarithmic-ish, perceiving a greater range of luminance across darker
values, a centred exposure fails to utilize the upper-half of the
camera's sensor.
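That linear-vs-logarithmic mismatch is easy to put numbers on. Here is a quick sketch, assuming an idealized 12-bit linear sensor (real sensors add noise and other non-idealities):

```python
def levels_per_stop(bit_depth=12, stops=6):
    """Distinct linear levels available in each stop of exposure,
    counting down from the brightest stop. Because the response is
    linear, each stop down halves the number of available levels."""
    total = 2 ** bit_depth
    return [total // 2 ** (s + 1) for s in range(stops)]

# The brightest stop alone gets half of all 4096 levels:
# levels_per_stop() -> [2048, 1024, 512, 256, 128, 64]
```

Which is the "expose to the right" argument: a centred exposure leaves the half of the sensor's tonal resolution that lives in the top stop unused.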
Anyway my point is that since RAW images represent actually the image
recorded by the sensor, reading directly from these files makes moot
the need to apply a camera response curve or optimize for
white-balance.
>> And I have an additional question:
>>
>> I have a lightpost running vertically straight through the center
>> section of an overlapping area between two images. There are 40
>> control points scattered across these two images, and I am sure all
>> control points are accurate. Furthermore each control point can be
>> optimized to an error of less than 1px. So I am stumped as to why, in
>> the stitched output, the lightpost is misaligned by at least 20px.
>> Does anyone have any suggestions?
>
> Well, if it were my panorama it would be because of parallax from
> hand-holding the camera, but I'm assuming that you're using a calibrated
> panorama head.
>
> (And a much better lens: my 18-200mm Nikkor has enough variation in FoV due
> to aperture and focus changes there I can't get a good fit when processing
> image-stacked panoramas without optimizing (unlinked) v too.)
Yes, I did manage to borrow a panoramic head; even without it, since
the lightpost is about 50-75 feet away, I doubt parallax would cause
any problems. Interestingly enough, I found that since the panoramic
head is so large vertically, and lacking a clamp, at certain angles it
must have acted as a sail and subtly rotated - a few of my bracket
stacks are misaligned.
I use a cheap Canon EFS 18-55mm lens which I calibrated specifically
for 28mm.
Anyway, is it possible to produce a perfectly aligned 360 degree
panorama? I've been trying really hard - many different strategies -
and am unable to get rid of artifacts. I find it interesting what a
drastic and under-appreciated job the blender does - I wish hugin
would allow a visualization of the seams (graph paths) used by
enblend.
Different strategies:
1] Let hugin/CPFind pick points. Many many points. Many don't actually
correspond to identical features despite high correlation. Images are
drastically unaligned, yet enblend does a very good job. Very few
artifacts (2-3) with high error (10-50px perceived)
2] Let hugin/CPFind pick points, then manually subtract. Never
finished. Too many points to filter, plus optimization strategy wants
to get rid of my points (all my points -> 100+px error distance)
3] Pick points myself. 20-40 per stack. Precise + high correlations.
Images are very well aligned, yet many minor artifacts produced: 10-20
artifacts with 2-10px error.
4] Pick points + use straight lines. Perhaps I don't know how to use
straight lines. In any case, whereas normal points average an error of
<1px, line CPs average an error of 10-20px. 20-30 artifacts with
2-10px error; visually the errors are less noticeable than in the
previous strategy.
5] Pick points + use vertical lines. Work in progress.
Anyone have tips? This is really frustrating.
Also, the documentation says straight control point lines require more
than two points. Does that mean more than two points per image: ["A B
B A","C D D C"], or more than two points overall: ["A A","B B"]?
It should be straight-forward to calculate the maximum parallax error
(in pixels) based on an estimated upper bound on the change of
position of the no-parallax point and the angular distance between
pixels at 28mm. But yes: I would have thought that that would be
sufficient, even if the control points were quite a lot further away
and the pano head wasn't exactly calibrated (i.e., not rotating
precisely around the NPP).
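As a rough illustration of that calculation (a small-angle upper bound, not an exact model): the 60mm NPP offset and ~35ft distance come from earlier messages, but the 4272px image width below is an assumption for illustration, not a figure from the thread.

```python
from math import degrees

def parallax_px(npp_offset_m, distance_m, hfov_deg, width_px):
    """Crude upper bound on parallax misalignment in pixels.
    Worst-case angular error ~ NPP offset / subject distance
    (radians, small-angle), converted to pixels via the
    horizontal pixels-per-degree scale of one frame."""
    angle_deg = degrees(npp_offset_m / distance_m)
    return angle_deg * width_px / hfov_deg

# 60mm offset, 35ft (~10.7m), 29.7 deg per frame, 4272px wide:
# roughly 46px of worst-case shift
```

Under those assumptions a 60mm misplacement of the NPP is more than enough to explain a 20px misalignment on the lightpost.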
> I use a cheap Canon EFS 18-55mm lens which I calibrated specifically for 28mm
I think I would be worried about not being able to re-set the lens to
exactly the focal length it was calibrated at. On my (relatively
cheap) lenses, even if I were to tape the zoom ring to the body so it
couldn't rotate between calibration and shooting (or equivalently make
sure it is hard against the 18mm stop), there is enough play in the
mechanism that I would not expect to get good results without
optimizing (unlinked) v. I don't generally find it necessary to
unlink a, b, and c, though, although I do include these in the
optimisation.
Aside: I admit that I'm not really sure I see the point of being able
to save/load lens parameters when I can just include these in the
optimisation. Is there any advantage, other than perhaps saving some
CPU time? I would think that, except for prime lenses, any
reduction in optimisation time would be at the risk of the lens not
actually being at the same focal length that was used for the
calibration.
> Anyway, is it possible to produce a perfectly aligned 360 degree
> panorama? I've been trying really hard - many different strategies -
> and am unable to get rid of artifacts.
I have shot and aligned only one full 360-degree panorama, and it was
from handheld, and despite this I did not have much trouble getting
things to align sufficiently for there to be no visible artifacts.
Here is the strategy I used, which I've used before on several other
less-than-360 panoramas with good success:
- Shoot three bracketed exposures of each of 21 stacks (9 stacks
around the equator) (from the middle of a large, cobbled public
square).
- Load 63 images into Hugin.
- In the image list and in the preview, select the middle-exposed
image from each stack (#1, 4, 7, ... 61) and use cpfind to create
control points.
- Optimize (y, p, r) to start with, with "Only use control points
between image selected in preview window" ticked.
- Manual control point editing: fine-tune all; delete all control
points in sky and most on the cobblestones in the foreground (large
parallax errors due to hand-holding); delete a few more down an alley
(again, parallax errors); add quite a few manually between overlapping
images where cpfind did not do a good job (probably about 25cps on
each of a dozen different overlap pairs).
- Optimize (y, p, r, v (unlinked), a, b, c (linked), d, e (unlinked))
and continue to examine and adjust or delete poorly-aligned control
points until errors are relatively small (average error = 0.85,
maximum error < 3.8).
At this point almost all the control points are on the fronts of
buildings around the square.
- Do test stitch to check alignment. No problems - even with cobbles
in foreground enblend has done a good job hiding misalignments.
- Now, select first stack (images 0, 1 & 2) and use Align_image_stack
linear to create control points. Repeat for each additional image
stack.
- Create control points manually for images where Align_image_stack
did not do a good job (usually due to my having allowed the camera to
roll slightly between exposures, which usually results in control
points being clustered around the centre of the image with few at the
edges).
- Select all images in preview window, and optimize the same
parameters as before (y, p, r, v (unlinked), a, b, c (linked), d, e
(unlinked)) to bring everything into alignment.
- Stitch. Add masks to deal with moving people, birds, etc. Repeat.
There are probably several ways I could improve this process (e.g.:
I'm not sure it's necessary or particularly useful to select images in
groups of three when using Align_image_stack).
In general, I find the strategy of having only the middle-range
exposure from each stack linked to adjacent images, with lots of
auto-generated control points between the images in each stack
provides good results while keeping the total number of control points
to a reasonable value (~8k, for the panorama described above).
Christopher
-> http://wiki.panotools.org/Horizontal_control_points
-> http://wiki.panotools.org/Vertical_control_points
-> http://wiki.panotools.org/Panotools_internals#Line_control_points
> And is it useful to have these lines across different images
Yes, it is useful. If you want to level your panorama using the horizon
it is best to have horizontal CPs approximately 45° apart.
--
Erik Krause
http://www.erik-krause.de
With regards to "line" control points.
If I have 5 images A B C D E that I wish to line up,
am I better making
A-B, B-C, C-D, D-E
or
A-B, A-C, A-D, A-E
or
A-C, B-C, C-D, D-E
pairs?
BugBear
Just to be clear, the problem is not in my control points. I have 24
stacks with 50% overlap, per overlap I've manually placed 20-40
high-correlation well-distributed accurate control points. After
optimizing my average error is 0.4, rms error 0.6, max error 1.7.
The problem is that despite such a high accuracy, hugin produces a
panorama with so many and so noticeable artifacts. Furthermore, I
doubt the problem can be attributed to accuracy: I aligned the
camera's sensor plane with the panoramic head's center of rotation,
and pretty much all objects are more than 35 feet distant.
In any case, your suggestion was really very good, and I'm running a
test stitch now.
My zoom lens has slots around the barrel demarcating focal lengths, so
it was relatively easy to set a focal length of 28mm. The barrel is
stiff enough to prevent the focal length from changing.
Furthermore, the camera provides the focal length in the exif image
data and it turns out this number is consistent across all images;
this is what hugin uses to calculate FOV.
Nonetheless I adopted the philosophy of "distort each individual image
as non-physically as needed to minimize control-point distances". Even
though I was pretty sure all lens parameters were correct (v) and
consistent (a,b,c,d,e) across all images, I nonetheless unlinked and
optimized v,a,b,c,d,e for each image:
After optimizing my average error is 0.1, rms error 0.2 and maximum
error distance 0.93.
Like I said, I'm still stitching but hopefully this gets rid of the
glitches once-and-for-all. I'm also going to attempt another test
stitch with a similar strategy on the version with the straight lines.
"Line" control points are evaluated (optimized) by fitting a line
through the first two points and measuring the distance of all other
points from that line. So I always place the first two points at the
ends of the line. For the rest of the points on that line, it makes no
difference in what order they are added.
So if I had multiple straight lines going though multiple images I would
probably do.
A-E, B-D, C-C
It does not matter how the "line" points are added to B, C, D.
Important: Horizontal, vertical, and straight lines are evaluated on their
output projection.
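Geometrically, that evaluation can be sketched as a point-to-line distance: the first two points define the line, and every other point is scored by its perpendicular offset from it. The function below is a planar simplification for illustration (as noted above, Hugin actually evaluates lines in the output projection):

```python
from math import hypot

def line_point_errors(p1, p2, pts):
    """Perpendicular distance of each point in pts from the line
    through p1 and p2 (the first two points of a straight-line CP)."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    length = hypot(dx, dy)
    # Cross-product form of the point-to-line distance.
    return [abs(dy * (x - x1) - dx * (y - y1)) / length for x, y in pts]

# Points on the line score 0; off-line points score their offset:
# line_point_errors((0, 0), (10, 0), [(5, 3), (7, 0)]) -> [3.0, 0.0]
```

This also shows why the endpoints matter most: a poorly placed endpoint tilts the reference line and inflates the error of every intermediate point.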
--
Jim Watters
http://photocreations.ca
Hm, so that's why my mercator projection + straight line @ ~25° was
throwing off the alignment...
so this means that:
equirectangular: vertical lines only, plus horizon line
Does this also apply to cylindrical, mercator, miller, architectural ?
(i'm using mercator)
That explains the GSoC idea of moving "line" control points to the
preview windows.
> Yes, it is useful. If you want to level your panorama using the horizon it is best to have horizontal CPs approximately 45° apart.
The only problem I see with horizon control lines is that, unlike
straight lines, they only specify two points. Horizontal CPs are more
effective further apart; however, all the intermediate images are not
straightened: they can be wavy/squiggly.
It is very unlikely the entrance pupil of your lens coincides with the
sensor plane. The no-parallax-point (NPP) is located at the center of
the entrance pupil. If you used the sensor plane to rotate around you
probably have misplaced the NPP by 60mm or more. This could produce
parallax errors, even at larger distances. You can use the formulas on
http://wiki.panotools.org/Parallax to estimate how large they would be
(calculate for the shortest distance).
It marks the sensor plane location in the body. The no-parallax
point is in the _lens_, not in the body. See the following
documents:
http://www.johnhpanos.com/epcalib.htm
http://toothwalker.org/optics/cop.html#stitching
http://www.janrik.net/PanoPostings/NoParallaxPoint/TheoryOfTheNoParallaxPoint.pdf
--
Markku Kolkka
markku...@iki.fi
> The no-parallax point is in the _lens_, not in the body.
While this is generally the case it's not necessarily so. Long telephoto
lenses can have it behind the lens or even behind the camera. There is a
special kind of lens design called telecentric (in object space) where
the entrance pupil and hence the NPP is at infinity behind the lens.
These lenses don't show any perspective distortion - the magnification
stays the same independently from the object distance. That's why they
are used in machine vision. Another interesting use are focus bracketed
macro panoramas where you would get severe parallax errors if you used a
normal lens. Rik Littlefield showed this very impressively:
http://www.photomacrography.net/forum/viewtopic.php?t=1032
I've noticed that enblend is 99% of the stitching process; furthermore
it seems severely limited by disk I/O. If enblend is swapping to disk
due to a memory limit, then the enblend manual describes how to
increase the limit; is this a good idea? I run a 4GB system of which
3.25GB are available to user processes; enblend uses 28%, 932MB, very
near the 1GB limit. If I increase the limit to 2.5GB or 2.25GB, will
it create a noticeable speed-up? On the other hand, if enblend is
simply writing to disk, then I'd like to know how much on-disk space
it uses/requires - where is it writing to? Then maybe I could create a
1.5GB ramdisk... which raises the question: if the ramdisk becomes
full, will enblend fall back to another partition?
If I understand you right, a line control can have more than two points
associated with it.
I'm using Version: 2010.5.0.b5a907b23b85 and do not see any
obvious way to create anything other than a two point control (pair).
BugBear
My panorama is outputting to "exposure fused from stacks as TIFF" and
"HDR as EXR", in both cases the blend process has taken about two
hours. However the fused TIFF file was written to in under 15 minutes,
whereas enblend has been "writing final output" of the EXR file for
the past eight hours. The file size is 2.6MB, six hours ago it was
2.2MB (compare to 870MB TIFF). I'm not familiar enough with the size,
compression, and processing power required by the OpenEXR format to
know whether this is a bug in 2010.4.0. Should I back up the
individual stacks, cancel the current process, and attempt manually
stitching to TIFF instead?
> If I understand you right, a line control can have more than two points
> associated with it.
Only straight lines can have more than two points. They are designated
by the same t-number (>2). Hugin allows for this, just select the same
line number under "mode".
Horizontal lines and vertical lines can have only one pair each.
Ah - hah. So for a multi-point horizon I should make a straight line
which happens to be horizontal?
BugBear
I found vertical lines work better than straight lines which happen to
be vertical. The images between the horizontal/vertical lines should
be oriented by overlapping normal control points, imo.
Enblend, whether writing the final output as EXR or TIFF, will
continue writing forever. With TIFF, it outputs a base 600MB image
within the first minute, then adds 100MB every hour nonstop... 1.6GB
to go and no end in sight - the fused version is 800MB. With EXR, it
outputs a base 2.2MB image within the first minute, then adds 500K per
hour.
No, just use multiple pairs of horizontal control points. You can't
force a straight line to be horizontal.
I'm only able to create a semi-decent projection from equirectangular,
and even then the top flares outwards, but perhaps this is because the
image was shot quite close (100ft) to the base of the buildings and
there is a substantial amount of perspective distortion.
How about a combination:
Put a horizontal pair on A-E and a straight line across
A-B-C-D-E ?
BugBear
Ok, I have an HDR EXR image that was successfully and completely
written to by Hugin but which is still broken... it's 10MB if anyone
wants it. I also posted a smaller PNG version at
http://imagebin.org/147952 Notice the black areas, as well as the
unnaturally noisy white area in the bottom left.
I am having a problem with the corresponding tif fused output version.
There are two perfectly horizontal 15px-wide dark (darker EV) or
burned bands running directly across the image, not near any seams.
Within these bands are three additional dodged or lightened bands.
While the top band is running across the bottom quarter of the image
where there is no overlap, it is still too high to crop - I would lose
1/8 of my pano.
That is what I did ... I guess because the picture was literally taken
leaning backwards and looking upwards and the horizon line really runs
through the bottom portion of the image, it is not possible to flatten
the rectilinear version. I'm satisfied with mercator however.
You can only retro-correct perspective if the subject is flat (2D);
the façades of many buildings are (close to) 2D, so this is feasible.
If you alter the perspective (implicitly altering the point from which
the image was taken), and there is genuine perspective in the image
(lots of 3D objects at various distances), it all goes wrong.
I used perspective correction to "square up"
this shot of some gothic carving, and the strongly 3D carving
looks a bit odd.
http://galootcentral.com/components/cpgalbums/userpics/10152/gothic.JPG
BugBear
Sorry, this isn't true. Panotools and hugin can be used to perfectly
simulate a shift lens (perfect in terms of geometry, not DoF). If you
shoot e.g. above the horizon with a shift lens, you keep the sensor plane
vertical (not tilted). That's why horizontals stay horizontal. The
perspective isn't altered. As long as you shoot the source images for a
mosaic from a common viewpoint (the no-parallax-point of the lens), you
can get exactly the same. Only the focus plane (and hence the DoF) is
tilted.
See http://wiki.panotools.org/Perspective_correction for details.
After trying many times to level a handheld beach pano using horizontal
lines, here's what I did that finally succeeded:
1. Set up horizontal lines from the first frame to each of the other
photos, connecting the left edge point on the horizon to the left edge
point in each case.
2. Set up similar horizontal lines for the right edge points.
3. Optimized everything INCLUDING translation.
Like magic, the horizon straightened itself out.
I think it was the translation that made it work. Optimizing without
translation didn't straighten out the horizon.
Now I need to go back and re-do some of my older panos that had similar
issues.
--
Gnome Nomad
gnome...@gmail.com
wandering the landscape of god
http://www.cafepress.com/otherend/
On 19 Nov., 08:44, Gnome Nomad <gnomeno...@gmail.com> wrote:
> After trying many times to level a handheld beach pano using horizontal
> lines, here's what I did that finally succeeded:
>
> 1. Set up horizontal lines from the first frame to each of the other
> photos, connecting the left edge point on the horizon to the left edge
> point in each case.
>
> 2. Set up similar horizontal lines for the right edge points.
>
> 3. Optimized everything INCLUDING translation.
>
> Like magic, the horizon straightened itself out.
>
> I think it was the translation that made it work. Optimizing without
> translation didn't straighten out the horizon.
I think you probably performed some magic, rather ;-)
Your method sounds odd to me, even though coercing the translation
parameter into use for a strip panorama might sometimes work, it'll
certainly fail in a 360X180. Here's what I'd do:
1. pick out an image which is near the center of your pano and shows a
good length of horizon. Set a horizontal line control point on it
picking two horizon points as far apart as possible and only optimize
roll for this single image - you may have to adapt pitch manually to
have the horizon at the right height.
2. start with the leftmost image showing the horizon. Pick two points
on the horizon and create a new line (not horizontal or vertical, just
a line control point.) Carry on by adding two horizon points from each
other image showing the horizon to that same line.
3. Now, with the image chosen in 1. as your position anchor, optimize
for position. The horizon should be level because of 1., and 2. should
bring all the other images in line.
If your horizon isn't level enough, you can add more horizontal line
control points - now try and put these with one point on the leftmost
horizon image and one on the rightmost.
Finally, keep in mind that your other CPs will likely be from points
on the beach, and since the pano is handheld, there will be
parallactic errors. Using these CPs will result in your images being
aligned by features on the beach, while your horizon goes awry. Try
and delete as many of these CPs as possible - with the horizon defined
by 1-2-3, you might even get away with one CP per pair (providing your
lens is well-calibrated)
While I'm on the topic I'd like to hint at a technique I sometimes use
when I fix horizons: I've made an image in 2:1 format with a degree
pattern (30X30 degree checkerboard, translates to, like, 30X30 pixel
checkerboard on a 360X180 pixel image) and include this image into the
panorama as being equirectangular with 360 degree hfov. The grid has
a clearly defined horizon and I can now 'glue' line CPs to this line.
The grid image makes a good anchor, then - and for the stitching I
just switch it off in the preview.
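The grid image Kay describes is easy to generate programmatically. Here's a minimal sketch using only the Python standard library; the resolution (0.5 degree per pixel), the PGM output format, and the filename are my own choices, not part of Kay's recipe:

```python
# A 2:1 equirectangular test grid: 30x30 degree checkerboard,
# which at 0.5 deg/pixel means 60x60 pixel cells on a 720x360 canvas.
WIDTH, HEIGHT = 720, 360
CELL = 60

rows = []
for y in range(HEIGHT):
    row = bytearray()
    for x in range(WIDTH):
        # alternate white/black every CELL pixels in both directions
        row.append(255 if ((x // CELL) + (y // CELL)) % 2 == 0 else 0)
    rows.append(bytes(row))

# binary PGM: text header, then one grey byte per pixel, row by row
with open("grid.pgm", "wb") as f:
    f.write(b"P5\n%d %d\n255\n" % (WIDTH, HEIGHT))
    f.writelines(rows)
```

The resulting file can then be added to the project as an equirectangular image with 360 degree hfov, as described above.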
Kay
On Nov 19, 3:05 am, kfj <_...@yahoo.com> wrote:
>1.) Only the actual horizon should be assigned as a "horizontal
>line" (unless you just want some line, or the average of some lines,
>to be straight and at the horizontal center (equator) of the panorama)
>because the horizon line is the only latitudinal line that lies upon a
>great circle line (the equator.)
Yes, for spherical panoramas. Horizontal lines can also be useful
for removing perspective from façades of buildings, but only when
you are using rectilinear projection for the output.
>2.) But any vertical line can be designated as vertical since all
>vertical lines lie upon longitudinal great circle lines. (This is how
>I usually straighten my handheld panoramas.)
Yes, true for most output projections, but not the case for fisheye
output projections where only the vertical down the middle can be
straight.
>3.) Any other physically straight lines can be used, for helping out
>calculations, but only as 'lines" (not vertical or horizontal) since
>it is not expected that they will be great circle lines. These are
>useful but cannot help with straightening.
Yes, though all straight lines in the scene map to 'great circles'
on the 'sphere'.
--
Bruno
What about equirectangular or cylindrical (or Mercator)?
--
Robert Krawitz <r...@alum.mit.edu>
Tall Clubs International -- http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- http://ProgFree.org
Project lead for Gutenprint -- http://gimp-print.sourceforge.net
"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton
In these projections the only features in the scene that will be
horizontal in the output image are: the horizon at sea, or features
of circular buildings (so long as you are standing in the exact
centre of the building).
--
Bruno
On Nov 21, 5:43 pm, Bruno Postle <br...@postle.net> wrote:
. . . > >What about equirectangular or cylindrical (or Mercator)?
On Mon, Nov 21, 2011 at 10:59:46PM +0000, Bruno Postle wrote:
> On Mon 21-Nov-2011 at 13:56 -0800, JohnPW wrote:
> >Please clarify this for me as I want to make sure I understand (and it
> >may be helpful to other newer Panorama makers like myself.)
> >These are my assumptions:
>
> >1.) Only the actual horizon should be assigned as a "horizontal
> >line" (unless you just want some line, or the average of some lines,
> >to be straight and at the horizontal center (equator) of the panorama)
> >because the horizon line is the only latitudinal line that lies upon a
> >great circle line (the equator.)
>
> Yes, for spherical panoramas. Horizontal lines can also be useful
> for removing perspective from façades of buildings, but only when
> you are using rectilinear projection for the output.
In that case, they should be called horizon-lines instead of horizontal
lines.
I thought that horizontal and vertical control points matter to
the optimization step.
Normally, I thought the control points are all transformed into
spherical coordinates, and for each pair both the longitude and
latitude are compared. In fact the distance is calculated and optimized.
I thought that for a horizontal control point pair, the latitude
simply doesn't count. So all that the optimization step cares about
is that they line up horizontally.
Similarly for the vertical control lines. There the horizontal position,
or longitude, is not taken into account.
I thought that all this was independent of the projection
being used for the final result.
Roger.
--
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
The plan was simple, like my brother-in-law Phil. But unlike
Phil, this plan just might work.
This is how I understand it:
Horizontal line control points (HLCPs) in hugin only correlate two
image points - unlike line control points, they cannot contain more
than two points. Therefore, any two points in the scene which are
equidistant from the viewer and at the same height should make valid
HLCPs in projections which preserve horizontal lines, like equirect
and cylindrical.
Kay
On 22 Nov 2011 08:15, "Rogier Wolff" <rew-goog...@bitwizard.nl> wrote:
>
> I thought that for a horizontal control point pair, the latitude
> simply doesn't count. So all that the optimization step cares about
> is that they line up horizontally.
>
> Similarly for the vertical control lines. There the horizontal position,
> or longitude is not taken into account.
This is effectively true with equirectangular, or any of the other cylindrical output projections.
> I thought that all this was independent of the projection
> being used for the final result.
Nope, the horizontal and vertical points are evaluated in the output canvas, so the output projection is critical.
This is much simpler conceptually; as far as the optimiser is concerned they are the same thing.
--
Bruno
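Bruno's description can be sketched in a few lines. This is a toy model of my reading of it, not the actual pano13 code: the 1-pixel-per-degree equirectangular projection and the function names are invented for illustration.

```python
import math

def to_equirect(yaw_deg, pitch_deg, width=360, height=180):
    """Toy equirectangular projection: 1 pixel per degree,
    yaw/pitch (0, 0) at the canvas centre."""
    return width / 2 + yaw_deg, height / 2 - pitch_deg

def cp_error(kind, p1, p2, project=to_equirect):
    """Control-point error evaluated on the output canvas.
    p1 and p2 are (yaw, pitch) positions on the pano-sphere."""
    x1, y1 = project(*p1)
    x2, y2 = project(*p2)
    if kind == "horizontal":        # only the vertical offset counts
        return abs(y1 - y2)
    if kind == "vertical":          # only the horizontal offset counts
        return abs(x1 - x2)
    return math.hypot(x1 - x2, y1 - y2)   # normal pair: full distance
```

With equirectangular output the horizontal error reduces to a plain latitude difference, which is why it can look projection-independent; swap in a Mercator projection and the same angular error near the poles maps to many more canvas pixels, so those points get weighted more heavily, as Rogier suspects below.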
Do you mean that the control point matching happens in output
projection space?
i.e. a controlpoint in Image1 at X1, Y1, and in Image 2 at X2,Y2 is
transformed using the parameters for Image1 (i.e. roll1, pitch1) to a
roll/pitch coordinate pair in the pano-sphere and then onto the output
canvas using the output transformation?
The same is then done for the X2, Y2, and the difference is optimized.
This would mean that, for instance, a mercator projection that has
distortion near the poles will favor the latitude of control points
near the pole being "perfect", sacrificing all other overlaps.
It would also mean that after changing the output projection, you need
to optimize again.
A friend shoots "all around" panoramas. As output projection she needs
a projection onto a cube around the pano-sphere. So if I understand
things correctly, she will set the output projection to
equirectangular, stitch an output image, rotate the viewpoint by 90
degrees and stitch another face of the cube until all 6 faces are
done.
Now optimization is probably done with the output projection set to
one of the faces. Now all control points that lie outside the face of
the cube are distorted and optimized in weird ways that do not reflect
their role in the final output.
Of course something can be said for doing it this way: if there is a
minute difference in the projection of the layout of two images near
the center of the output image, and the same minute difference in
degrees on the panosphere expands to several tens of pixels near the
edge of the output image, it might be good to "fix" that controlpoint
near the edge, and tolerate a slightly larger error on the one in the
middle.
But I would prefer to optimize in panosphere coordinates. Doing it the
other way introduces errors based on the assumption that all
controlpoints are perfect. They are not. And the lens parameters are
not perfect.
I think we'd get a much better fit (in a mathematical sense, on the
panosphere) if we'd just use the panosphere coordinates.... Once we
have that, we're ready to optimize lens parameters etc etc, to get the
final errors out. And then a "list the controlpoints starting with the
largest error" allows you to find the controlpoints that really have
errors in placement.....
But again... It's entirely possible that I'm misunderstanding how
hugin actually works. (I'm reading "Cooking for Geeks" and the book
explained one simple thing to me, and something I had repeatedly
failed at before worked first time, simply because now I
understand the underlying chemistry. Similarly, I want to know how
hugin works to be able to better control it.)
It is for horizontal and vertical control points. I can't remember
if it still is for 'normal' points, I seem to remember this might
have changed at some point - You need to look in the pano13 code.
>A friend shoots "all around" panoramas. As output projection she needs
>a projection onto a cube around the pano-sphere. So if I understand
>things correctly, she will set the output projection to
>equirectangular, stitch an output image, rotate the viewpoint by 90
>degrees and stitch another face of the cube until all 6 faces are
>done.
This would be a bad idea since there is no guarantee that the
enblend seams would continue over the edges between tiles.
Definitely better to stitch an equirectangular and then split it to
cubefaces as a subsequent step.
--
Bruno
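Splitting a stitched equirectangular image into cube faces is then a plain resampling step: for each cube-face pixel, find the equirectangular pixel it corresponds to. Here's a sketch of that mapping for one face; the axis conventions and names are my own assumptions, not hugin's:

```python
import math

def cubeface_to_equirect(u, v, face_size, equi_w, equi_h):
    """Map pixel (u, v) on the front cube face to equirectangular
    pixel coordinates. The face spans a 90-degree field of view;
    a resampler would read the source image at the returned (x, y)."""
    # normalised coordinates in [-1, 1] on the face plane at distance 1
    nx = 2 * (u + 0.5) / face_size - 1
    ny = 2 * (v + 0.5) / face_size - 1
    # front face: the ray through the pixel is (nx, -ny, 1)
    yaw = math.atan2(nx, 1.0)
    pitch = math.atan2(-ny, math.hypot(nx, 1.0))
    # equirectangular canvas: yaw maps linearly to x, pitch to y
    x = (yaw / (2 * math.pi) + 0.5) * equi_w
    y = (0.5 - pitch / math.pi) * equi_h
    return x, y
```

The other five faces are the same mapping with the ray direction rotated by 90-degree steps. Doing the split this way, after enblend has run once on the full equirectangular image, avoids the seam-mismatch problem Bruno describes.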
Could be, maybe the phase of the moon.
> Your method sounds odd to me, even though coercing the translation
> parameter into use for a strip panorama might sometimes work, it'll
> certainly fail in a 360X180.
I'm not that ambitious. I did a 4x4 interior panorama of a local
cathedral, also handheld, that one aligned quite nicely without any
effort on my part.
> Here's what I'd do:
>
> 1. pick out an image which is near the center of your pano and shows a
> good length of horizon. Set a horizontal line control point on it
> picking two horizon points as far apart as possible and only optimize
> roll for this single image - you may have to adapt pitch manually to
> have the horizon at the right height.
>
> 2. start with the leftmost image showing the horizon. Pick two points
> on the horizon and create a new line (not horizontal or vertical, just
> a line control point.) Carry on by adding two horizon points from each
> other image showing the horizon to that same line.
>
> 3. Now, with the image chosen in 1. as your position anchor, optimize
> for position. The horizon should be level because of 1., and 2. should
> bring all the other images in line.
Thanks, I'll have to try that.
> If your horizon isn't level enough, you can add more horizontal line
> control points - now try and put these with one point on the leftmost
> horizon image and one on the rightmost.
>
> Finally, keep in mind that your other CPs will likely be from points
> on the beach, and since the pano is handheld, there will be
> parallactic errors. Using these CPs will result in your images being
> aligned by features on the beach, while your horizon goes awry. Try
> and delete as many of these CPs as possible - with the horizon defined
> by 1-2-3, you might even get away with one CP per pair (providing your
> lens is well-calibrated)
Have never calibrated any of my lenses.
> While I'm on the topic I'd like to hint at a technique I sometimes use
> when I fix horizons: I've made an image in 2:1 format with a degree
> pattern (30X30 degree checkerboard, translates to, like, 30X30 pixel
> checkerboard on a 360X180 pixel image) and include this image into the
> panorama as being equirectangular with 360 degree hfov. The grid has
> a clearly defined horizon and I can now 'glue' line CPs to this line.
> The grid image makes a good anchor, then - and for the stitching I
> just switch it off in the preview.
Now that's an interesting idea! Will have to try that out!
On 1 Dez., 22:54, JohnPW <johnpwatk...@gmail.com> wrote:
> Even though nobody asked :-) I'm posting a link to the target I made
> following Kay's description. It probably isn't that useful, folks, since
> Flickr turned it into a smallish jpeg, but it's good enough to
> experiment with and see if you find it useful enough to make your own.
> --John
> http://www.flickr.com/photos/johnpwatkins/sets/72157628238521079/
nice one :)
and it is really easy to make with gimp. It's fun mixing in artificial
images!
you may want to correct your headline, which goes
'Usefu Panorama Image'
Kay
That's an interesting idea. One could turn a painting or an artificial
landscape (CAD drawing, child's drawing, etc.) into an interactive
panorama.
What have you tried?
So far I've only done stuff along the lines I've mentioned here - a
few geometric images to mix in to see how transformations function (or
check their correctness) or to use as guidelines. But I have a more
ambitious idea, and your posting has made me publish it:
http://groups.google.com/group/hugin-ptx/t/977d8bdc87dfbfac
Kay