jordan
There is a cpfind --linearmatch option to only attempt alignment
between consecutive photos. I'm not sure if this was available in
2010.4.0 or if you need to use a 2011.0.0 snapshot.
My experience of doing this with other tools is that you only need
one blurred frame to break the sequence; there is an additional
cpfind --linearmatchlen option that you can use to match further
than just the one adjacent photo.
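As an illustration of which image pairs a linear matching strategy would attempt (my own sketch, not cpfind's actual code): with a match length of k, each image is paired with the next k images in the sequence.

```python
def linear_match_pairs(n_images, matchlen=1):
    """Pairs of image indices that a linear matching strategy would
    attempt: each image against the next `matchlen` images in sequence.
    (Illustration only -- not cpfind's actual implementation.)"""
    return [(i, j)
            for i in range(n_images)
            for j in range(i + 1, min(i + matchlen + 1, n_images))]

# 5 images, default window of 1: only consecutive pairs
print(linear_match_pairs(5))             # → [(0, 1), (1, 2), (2, 3), (3, 4)]
# widening the window to 2 tolerates one unmatchable (blurred) frame
print(linear_match_pairs(5, matchlen=2))
```

With a window of 2, a single blurred frame no longer breaks the chain, because its two neighbours are still matched against each other.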
--
Bruno
> On Sun 15-May-2011 at 10:36 -0700, Thomas Robitaille wrote:
>>
>> I'm trying to make a panorama with thousands of images from a video
>> sequence. Finding the control points takes a long time, so I was
>> wondering if there is a way (in a script or Hugin) to only look for
>> control points in neighboring images, so e.g. image 2 would only have
>> control points with images 1 and 3, and so on?
>
> There is a cpfind --linearmatch option to only attempt alignment between consecutive photos. I'm not sure if this was available in 2010.4.0 or if you need to use a 2011.0.0 snapshot.
I didn't know about the --linearmatch option; it could have saved me quite a bit of hassle with this pano: http://flic.kr/p/9GvwJ6
I wasn't able to coerce cpfind into finding only 3-4 control points per image; it insisted on finding 10. In the sequence I ended up with over 1000 control points per image, since the overlap was rather big. This made running the optimizer rather slow :)
Definitely need to investigate this further now.
Habi
one could argue that it is more user friendly to try to join the first and
last image of the sequence (assuming a full circle) or that it is more user
friendly not to try to join them (assuming only a partial tour of the
horizon). the most popular use-case should be the default.
there is no way for cpfind to know or guess if the sequence covers a full
circle without trying to match them.
things could be more user-friendly with a switch in cpfind, e.g. --360, that
would do for the user what currently requires multiple steps:
* generate a pto file containing all images (all.pto)
* generate a pto file containing only the first and last images (seam.pto)
* cpfind --linearmatch -o sequence.pto all.pto
* cpfind -o link.pto seam.pto
* ptomerge -o 360project.pto sequence.pto link.pto
cpfind --360 --linearmatch -o 360project.pto all.pto would:
1. do the linear matching
2. try to find cps between the first and last image
3. if found, add them to the project.
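The multi-step workaround above could itself be scripted. A rough sketch follows: the command lines are the ones listed above, the filenames are the same example names, and the presence of cpfind and ptomerge on $PATH is assumed; nothing here is an existing Hugin feature.

```python
def build_360_commands(all_pto="all.pto", seam_pto="seam.pto"):
    """Command lines for the manual --360 workaround described above.
    Assumes all.pto contains every image and seam.pto only the first
    and last image (both prepared beforehand)."""
    return [
        ["cpfind", "--linearmatch", "-o", "sequence.pto", all_pto],
        ["cpfind", "-o", "link.pto", seam_pto],
        ["ptomerge", "-o", "360project.pto", "sequence.pto", "link.pto"],
    ]

# To actually run the steps (needs Hugin's CLI tools installed):
#   import subprocess
#   for cmd in build_360_commands():
#       subprocess.run(cmd, check=True)
```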
> --multirow OTOH is very useful, but i've found it to be slightly less
> reliable than the "just search every damn image pair" mode :)
I had mixed results with --multirow as well, especially with the "sky-
bordering" row. individual images on that row link better to the image
underneath them than to those left and right, but what I see from --multirow
are just rows that are linked at the beginning and at the end. "just search
every damn image pair" gets even worse results (plenty of false positives and
bad links that drive cpclean crazy). My solution is to manually select two
images from the same column and manually trigger cpfind on them. It would be
easier for me if I could trigger cpfind from the CP tab (where I get visual
confirmation that I am indeed matching two tiles of the same column); or even
better if cpfind would have an option to add robustness to the multirow
solution and match columns after the rows have been determined, i.e. once it
is determined that images 1-25 are the first row and 26-50 the second row,
pair-match them 1-26, 2-27, 3-28, ... adding those CPs to the project.
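The column-matching idea above, sketched in Python (my own illustration of the suggestion, not an existing cpfind option): once the rows are known, pair each image with the one directly below it.

```python
def column_pairs(rows):
    """Given rows of image indices (as produced by some hypothetical
    row-detection step), pair each image with the image directly
    below it in the next row."""
    pairs = []
    for upper, lower in zip(rows, rows[1:]):
        pairs.extend(zip(upper, lower))
    return pairs

# rows 1-25 and 26-50 from the example above (1-based image numbers)
rows = [list(range(1, 26)), list(range(26, 51))]
print(column_pairs(rows)[:3])   # → [(1, 26), (2, 27), (3, 28)]
```

These extra vertical links are exactly the CPs that would have to be added manually today.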
> So, it seems that the best would be to use a high sensitivity in cpfind,
> and then use cpclean to prune the images to a reasonable number of CP's
> before optimizing?
if there are too many bad cps, cpclean can actually prune the good ones
instead.
> another question (hopefully not digressing too much) is keypoint data
> useful only after an image FOV (and lens type - rect or fish) is
> specified? i.e. you couldn't include some kind of SIFT data in the exif of
> the image (unless you first specified lens type and fov)?
AFAIK the keypoint data is independent of FOV and lens type. When I first
start a project that might get difficult, I run cpfind with the --kall option.
this stores the keys and speeds up subsequent interactions.
One last thing, Jeffrey. May I ask you to quote a few lines from what you are
referring to in your posting? I am reading this mailing list sporadically and
on a mail client. Without context, most of your messages are difficult to
make sense of.
Thank you for being considerate of mailing list users in the future.
Yuv
On May 18, 2011 07:55:59 AM Jeffrey Martin wrote:
> the problem with --linearmatch is that (unless i'm mistaken) it never tries
> to join the first and last image of the sequence, so this is really not so
> useful in the real world (unless you really want the user to specify what
> type of pano they're shooting, which to me seems really unfriendly - this
> stuff should just work)
I've never even tried to shoot one, let alone stitch one. So I wouldn't
make it the default. But what's being said about adding this convenience
feature sounds good and workable to me.
--
Gnome Nomad
gnome...@gmail.com
wandering the landscape of god
http://www.cafepress.com/otherend/
Jeffrey Martin wrote:
> one could argue that it is more user friendly to try to join the
> first and
> last image of the sequence (assuming a full circle) or that it is
> more user
> friendly not to try to join them (assuming only a partial tour of the
> horizon). the most popular use-case should be the default.
>
> there is no way for cpfind to know or guess if the sequence covers a
> full
> circle without trying to match them.
>
>
> I think it should try by default. in my experience (but i'm biased
> towards 360's admittedly) most panos people try to stitch are 360 images.
I don't have a strong preference either way but I do have a slight preference
for no automatic matching attempt by default.
If there is a switch to activate the extra feature, we can also add a
preference to toggle it on by default for those who do not do 360 and do not
need this feature activated.
Even one false positive is one too many. The cost of computation I don't
mind. The cost of user time to undo the false positive I do mind.
Defaults are meant to be set for the majority of use cases and preferences are
the way to satisfy alternatives, not blind brute force (extra computation).
Yuv
I'd prefer it not try unless I ask it to. Then I wouldn't have to deal
with any spurious connections. I'd know whether or not I'd shot a 360. I
suspect there are a lot fewer people shooting 360s than shooting smaller
panoramas.
> I think it should try by default. in my experience (but i'm biased
> towards 360's admittedly) most panos people try to stitch are 360 images.
I think you have a self-selecting sample.
Most of my Hugin uses are NOT 360, either horizontal or vertical.
Just one (my) vote, obviously.
BugBear
Another guess could be made to join image chains together: if several chains of
images are created, see if the start of one would join the start of the next.
Once it has been established that there are multiple rows of images, then if an
image currently only connects to 1 or 2 other images, it could probably use more
connections.
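A sketch of that chain-joining guess (my own illustration, not existing cpfind behaviour): with several separate chains of linked images, the cheap candidates to try are the endpoints of each chain against the endpoints of every other chain.

```python
def chain_end_candidates(chains):
    """Candidate image pairs to try when several separate chains of
    linked images exist: match the endpoints of each chain against
    the endpoints of every other chain.  (Sketch of the guess
    described above, not an existing cpfind feature.)"""
    cands = []
    for a in range(len(chains)):
        for b in range(a + 1, len(chains)):
            for i in (chains[a][0], chains[a][-1]):
                for j in (chains[b][0], chains[b][-1]):
                    cands.append((i, j))
    return cands

# two chains left by linear matching, e.g. broken by a blurred frame 3
print(chain_end_candidates([[0, 1, 2], [4, 5, 6]]))
# → [(0, 4), (0, 6), (2, 4), (2, 6)]
```

Only four match attempts per pair of chains, instead of re-matching every image against every other.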
>> --multirow OTOH is very useful, but i've found it to be slightly less
>> reliable than the "just search every damn image pair" mode :)
> I had mixed results with --multirow as well, especially with the "sky-
> bordering" row. individual images on that row link better to the image
> underneath them than to those left and right, but what I see from --multirow
> are just rows that are linked at the beginning and at the end. "just search
> every damn image pair" gets even worse result (plenty of false positives and
> bad links that drive cpclean crazy). My solution is to manually select two
> images from the same column and manually trigger cpfind on them. It would be
> easier for me if I could trigger cpfind from the CP tab (where I get visual
> confirmation that I am indeed matching two tiles of the same column); or even
> better if cpfind would have an option to add robustness to the multirow
> solution and match columns after the rows have been determined, i.e. once it
> is determined that images 1-25 are the first row and 26-50 the second row,
> pair-match them 1-26, 2-27, 3-28, ... adding those CPs to the project.
More intelligence is welcome. It would also help if the optimizer could
determine that some images are not connected to the main chain, so that these
images could be evenly spaced between connected images.
--
Jim Watters
http://photocreations.ca
I think there is some confusion here, the cpfind --multirow option
given a single row panorama will both try and match photos
sequentially and try and join the ends into a full circle.
The main advantage of the --linearmatch option is for video
sequences where every frame overlaps a large number of other frames.
Here it would be very inefficient to try and generate control points
for all these overlaps, it makes sense to only match photos that are
adjacent in the sequence.
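The efficiency gap Bruno describes is easy to quantify: exhaustive matching grows quadratically with the number of frames, linear matching only linearly.

```python
def n_pairs_exhaustive(n):
    """Image pairs tried when every image is matched against every other."""
    return n * (n - 1) // 2

def n_pairs_linear(n):
    """Image pairs tried when only consecutive images are matched."""
    return n - 1

# a modest video sequence of 2000 frames
print(n_pairs_exhaustive(2000))  # → 1999000
print(n_pairs_linear(2000))      # → 1999
```

For a video sequence of a few thousand frames, that is a factor of about a thousand fewer matching runs.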
>> AFAIK the keypoint data is independent of FOV and lens type.
>> When I first start a project that might get difficult, I run
>> cpfind with the --kall option. this stores the keys and speeds
>> up subsequent interactions.
>ah interesting. can anyone confirm if this really is true? (that
>keypoints are independent of FOV)
I'm pretty sure that the features are identified in conformal space
(as with autopano-sift-c), in which case they are very dependent on
all the lens parameters given in the input project.
--
Bruno
These are some of the steps of the cpfind --multirow option.
--
Bruno
I set up a quick experiment starting from two JPG with all EXIF data intact.
1. Created copies with no EXIF data:
cp 8165.JPG 8165stripped.jpg
cp 8166.JPG 8166stripped.jpg
exiftool -all= *.jpg
2. Used panostart to generate bootstrap Makefile files:
panostart -o withexif.mk *.JPG
panostart -o noexif.mk *.jpg
3. Generated pto files:
make -f withexif.mk
make -f noexif.mk
failed with an error:
make[1]: autopano-sift-c: Command not found
make[1]: *** [8165-8166_simple.a.pto] Error 127
make[1]: Leaving directory `/home/yuv/usecase'
make: *** [8165-8166.pto] Error 2
but the pointless pto files are generated.
4. Used cpfind on the pointless pto.
cpfind --kall -o pointless.pto 8165-8166.pointless.pto
creates the .key files (but no pointless.pto - cpfind bug?)
cpfind --kall -o stripped.pointless.pto 8165-8166stripped.pointless.pto
fails with
WARN: 08:17:11.927531 (/home/yuv/src/hugin/hugin-
tarball/hugin-2011.0.0/src/hugin_base/panodata/Panorama.cpp:1762) readData():
Failed to read from dataInput.
ERROR: couldn't parse panos tool script: '8165-8166stripped.pointless.pto'!
5. Dug a little deeper. Loaded 8165-8166stripped.pointless.pto into
Hugin GUI, edited the preferences to add the --kall switch and hit the button
to trigger cpfind. Log excerpts:
Hugins cpfind 2011.0.0.0fd3e119979c
based on Pan-o-matic by Anael Orlinski
Project contains the following images:
Image 0
Imagefile: /home/yuv/usecase/8165stripped.jpg
Remapped : no
Image 1
Imagefile: /home/yuv/usecase/8166stripped.jpg
Remapped : no
--- Analyze Images ---
i0 : Analyzing image...
i1 : Analyzing image...
6. Compared the resulting key files:
diff 8165.key 8165stripped.key
10808c10808
< /home/yuv/usecase/8165.JPG
---
> /home/yuv/usecase/8165stripped.jpg
7. Dug further. In the Hugin GUI, using the images without EXIF data, I
varied HFOV between 2 and 200 and changed the lens projection type.
CONCLUSIONS:
1. the key files seem to be independent of FOV input, confirming the first
half of my statement. Whether using the actual value computed from EXIF (9.8)
or the default guess in Hugin (50), or the broader manually edited range (2 to
200), the key files are invariant *as long as projection type (f) stays
constant*.
2. the key files do depend on projection type input. Changing from
Rectilinear (Hugin's default guess) to any other input projection yields
completely different results. My mistake was to take Hugin's default guess,
which works in my particular test case, for a more general rule.
3. I have not tested distortion and other lens parameters beyond
HFOV (v) and projection (f).
4. https://bugs.launchpad.net/hugin/+bug/786204
5. where are bugs in panotools-script tracked? it would be nice if it would
work without the autopano-sift-c dependency.
Yuv
did not work as expected in my case, although the initial body of CPs was
generated with the cpfind --multirow option.
I had to manually select images in the same column and trigger cpfind between
them to add good CPs.
Yuv
would still need to consider if they are stacked and/or how much overlap there
is between them.
> Another guess could be made to join image chains together, if several
> chains of images are created see if the start of one would join the start
> of the next.
yes, this makes sense, and then work progressively through the two chains.
> Once it has been established that there are multiple rows of images, then
> if an image currently only connects to 1 or 2 other images, it could
> probably use more connections.
I think that adding more connection to the upper / lower row would be
desirable in any case as it adds robustness to the project.
> More intelligence is welcome. And if the optimizer could also determine
> that some images are not connected to the main chain. That these images
> could be evenly spaced between connected images.
Yes, this kind of even spacing is usually just what it takes with the sky or
other featureless areas of a multirow pano.
Yuv
This certainly looks like a bug; there is no point changing
behaviour with projection if angle of view is ignored.
>5. where are bugs in panotools-script tracked? it would be nice if it would
>work without the autopano-sift-c dependency.
Panotools-Script bugs used to go in the sourceforge tracker. The
SVN version has used cpfind rather than autopano-sift-c since
January, but I need to do a release.
--
Bruno
On Thu, May 19, 2011 at 07:26:59PM -0400, Yuval Levy wrote:
>
> I don't have a strong preference either way but I do have a slight preference
> for no automatic matching attempt by default.
> If there is a switch to activate the extra feature, we can also add
> a preference to toggle it on by default for those who do not do 360
> and do not need this feature activated.
> Even one false positive is one too many. The cost of computation I
> don't mind. The cost of user time to undo the false positive I do
> mind.
In some applications a false positive is more troublesome than a false
negative. How about: "I think that that's Bin Laden, shoot him!".
Or an identification system saying: "Your biometrics match with Obama,
welcome mr president to the military compound".
The parameters of such an access control system can be tuned to make
false positives say 100x less likely than false negatives.
But in this case false negatives are maybe just as bad. And the user
forgetting to tick a "360 degree shot" box would imho result in a
"false negative".
> Defaults are meant to be set for the majority of use cases and
> preferences are the way to satisfy alternatives, not blind brute
> force (extra computation).
Hugin IS a program that invests brute computational force for
convenience. It is a convenience that we find control points, and it is
a convenience that hugin optimizes the position of the images based on
the control points.
So IMHO we should by default check for the wraparound in the images.
A configurable option might have a few settings. For example,
"never", "always", "ask", "based on FOV".
Or how about a popup (I hate popups) that says: "it seems you've shot
a 360 degree pano, correct?" when a match is detected between the first
and last shots. Then ONLY when a false negative is detected does it
require user intervention: a response to a popup question.
And that FOV setting can be useful too. When a match is found we have
a linear transformation that allows us to predict the next point,
right? So how about we extrapolate to calculate the left side of the
right image, and compute the overlap. Add all FOVs together, subtract
the overlaps, and if this comes to say > 330 degrees, do the
wraparound match...
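Roger's coverage heuristic in a few lines of Python (a sketch of the suggestion above, not anything Hugin implements; the 330-degree threshold is the example value from the text):

```python
def should_try_wraparound(hfovs, overlaps, threshold=330.0):
    """Estimate total horizontal coverage as the sum of per-image
    fields of view minus the estimated pairwise overlaps; above the
    threshold a full circle is plausible, so matching the first and
    last image is worth attempting."""
    coverage = sum(hfovs) - sum(overlaps)
    return coverage > threshold

# 12 shots at 50 degrees with ~20 degrees overlap between neighbours
hfovs = [50.0] * 12
overlaps = [20.0] * 11          # one overlap per adjacent pair
print(should_try_wraparound(hfovs, overlaps))   # → True  (600 - 220 = 380)
```

A four-shot partial pano with the same lens covers only 200 - 60 = 140 degrees, so the wraparound attempt would be skipped.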
Roger.
--
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ
that's theory. What I observe in practice are curled rows that taint the rest
of the strategy after the rows have been identified and optimized the first time.
this happens to me with an alpine valley scenery. The curl starts in those
images where the sky becomes a prominent part of the image (3/4 and more of
the surface). Those images have poor features in the left<->right overlap,
and good features in the top<->down overlap. But these good features are not
automatically matched, I guess because the initial optimization of the stripes
yields the curls and confuses the logic.
the way I understand multi-row now is that the rows are optimized and the
result is accepted blindly. maybe we should make an assumption that these
rows are somewhat straight and fix the curl?
my images were shot on a calibrated panohead and it was level (and even if it
was not, the camera's movement is linear to the horizon corrected for the
levelling factor, not curled).
> That's already done by the cache switch.
the cache switch would be redundant if the k-switches and the o switch behaved
in an intuitive way. I have attached a patch to my bug rep... ahem, feature
request. [0]
> Everybody is complaining about the documentation.
Nobody in this thread complained about the documentation.
Speaking for myself:
* I complained about an unintuitive behavior (for which I posted a patch [0]);
* I described the practical outcome and problems I am confronted with when
using the tool;
* I reported the result of an empirical test about the influence of FOV and
projection input on detection output;
* I was part of an open brain storming suggesting potential improvements to
the tool's strategy and defaults.
Did I make mistakes? of course I did. I am not using cpfind on a daily (or
even weekly) basis and don't have every small detail fresh in my mind.
Did others make mistakes? Probably, too.
Did this make the discussion wrong or irrelevant? I don't think so. If there
were no issues, there would be no discussion. And the issue is not with
documentation. It is with improving the tool and making it more intuitive to
the occasional user who does not have every small detail fresh in mind.
> nobody is reading the existing documentation. There all these
> issues are described. So why writing documentation, if nobody reads
> it.
I'm sorry to read you frustrated. I did read the documentation, it is
helpful, it does indeed address most of the issues described but it is not the
end of the story.
Do you read the documentation of your car every time before driving it?
There is a combination of factors at play that determine user proficiency all
together. I am not using cpfind on a daily basis and so I do not recall all
details. This is especially true for the counter-intuitive ones.
To me it is counter-intuitive that the -k switch overrides the -o switch and
no warning is displayed on the screen.
When I feed it a carefully shot multi-row panorama and it curls the row that
borders between land (good features) and sky (poor/no features), it tells me
that the strategy has room for improvement.
There will be many ideas thrown at it to improve it. Some will not be
feasible. Some will be based on wrong assumptions. Some will work best in
some cases and others in others. But please don't shout back RTFM at everybody
here who is trying to make sense of the current observed behavior in real life
and how it differs from the expected / predicted behavior in theory.
I really look forward to Python scripting advancing and becoming mainstream.
These strategy things are better left to a high level scripting language using
the detection/matching/optimization functionalities as building blocks.
Good news for Ubuntu users: I updated the wiki instructions and now everybody
with Lucid (10.04) and later can access python scripting; and Philipp updated
the nightlies build process so that now the python scripting interface is
available to those brave enough to type the following commands in the CLI:
sudo add-apt-repository ppa:hugin/hugin-builds
sudo add-apt-repository ppa:hugin/nightly
sudo apt-get update
sudo apt-get install hugin
Yuv
>the way I understand multi-row now is that the rows are optimized and the
>result is accepted blindly. maybe we should make an assumption that these
>rows are somewhat straight and fix the curl?
Actually cpfind already does this (or is supposed to); the estimated
positions used as a basis for subsequent overlap detection are
determined without optimising 'roll'.
These 'curls' sound to me like the initial estimate of the angle of
view of the photos is wrong (which may or may not be related to the
angle of view behaviour you noticed generating key files).
>> nobody is reading the existing documentation. There all these
>> issues are described. So why writing documentation, if nobody reads
>> it.
>
>I'm sorry to read you frustrated. I did read the documentation, it is
>helpful, it does indeed address most of the issues described but it is not the
>end of the story.
There have been a lot of comments regarding cpfind that seem to be
referring to experiences with other software, or suggesting features
that cpfind already has (or is supposed to have). It would be
difficult at this point for anyone who has just been reading the PTX
list to have an accurate idea of what cpfind does.
--
Bruno
it's a pitch+roll effect.
> These 'curls' sound to me like the initial estimate of the angle of
> view of the photos is wrong (which may or may not be related to the
> angle of view behaviour you noticed generating key files).
mhh... full EXIF is passed to Hugin and looking at the pto project that goes
into cpfind the intial estimate of hfov seems OK.
> >> nobody is reading the existing documentation. There all these
> >> issues are described. So why writing documentation, if nobody reads
> >> it.
> >
> >I'm sorry to read you frustrated. I did read the documentation, it is
> >helpful, it does indeed address most of the issues described but it is not
> >the end of the story.
>
> There have been a lot of comments regarding cpfind that seem to be
> referring to experiences with other software, or suggesting features
> that cpfind already has (or is supposed to have). It would be
> difficult at this point for anyone who has just been reading the PTX
> list to have an accurate idea of what cpfind does.
yes, cpfind needs to give more feedback. I think Thomas just committed
something in that sense. Also cpfind has evolved quite rapidly and things
have changed significantly between 2010.4 and 2011.0.
Yuv
the problem is that at this point it's already too late and I am in manual
mode. So I might as well select manually the image pairs and run cpfind each
time on them. If there was a button to trigger cpfind on the cp tab, this
would be an easy way to waste some time clicking away...
... but I can also try to script this in Python, can't I?
> I do a 360x180 with a fisheye as a backdrop and then 'pin' the longer-
> lens shots to it.
I did not have the luxury. There was no fisheye anywhere near me. But yes,
in ideal circumstances, that's the smart thing to do.
> I must admit that I hadn't looked into the cpfind documentation fore a
> while, and that the text that is now in the wiki is actually good and
> helpful. I often simply call the program in question, or call man with
> it, and what you get this way is pretty thin. Sorry for being sloppy.
I skimmed over the wiki page and it looks suspiciously similar to the man
page.
> I would dearly like to understand more of how cpfind is doing what it
> does. I made several attempts to look into the code but I find it hard
> to penetrate.
Yes, I found the code hard to navigate too. When I submitted my patch this
morning, I just peeled away at it onion layer after onion layer starting from
the CLI switches and did not get very far - just enough for a small hack.
The frustrating thing with this is that if I focus hard and apply myself, I
might get to understand it and do some things with it in a few hours. But two
weeks down the road this knowledge is already forgotten and I am back to
square one, trying to understand things that I understood weeks earlier but
that have gone out of my mind in the meantime.
> hsi/hpi is just a first step in the right direction. The helper
> programs (CPGs, warping, blending) can all be made into modules with a
> bit of SWIG magic
we need a list of functionalities already available; a todo-list of
functionality to expose next; a how-to for doing it.
> > Good news for Ubuntu users: I updated the wiki instructions and now
> > everybody with Lucid (10.04) and later can access python scripting; and
> > Philipp updated the nightlies build process so that now the python
> > scripting interface is available to those brave enough to type the
> > following commands in the CLI:
> >
> > sudo add-apt-repository ppa:hugin/hugin-builds
> > sudo add-apt-repository ppa:hugin/nightly
> > sudo apt-get update
> > sudo apt-get install hugin
>
> this sounds exciting! Does it mean that the hugin you get from the
> PPAs is now per default an hsi/hpi-enabled version?
Short answer: YES.
Long answer: there are two sets of binaries in the PPA: "nightlies" and
"builds". At this very moment the unconditional YES applies to the nightlies
only. Hugin-builds is for builds from tarball and the YES will apply to
hugin-builds shortly after the release of 2011.2.0_beta1. Users not fearing
the bleeding edge use the nightlies and have full access now.
Conclusion: we want to move fast on releasing 2011.2.0.
Yuv
not tonight, but i will want to write a small iterator that runs cpfind and
adds points to the project for a pair of images.
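An iterator along those lines could be sketched at the command level (my own sketch: the per-pair project files are hypothetical and must be prepared beforehand; cpfind and ptomerge are the tools already used earlier in this thread, assumed to be on $PATH):

```python
def pairwise_cpfind_commands(pair_ptos, merged="merged.pto"):
    """One cpfind run per manually selected image pair, then a
    ptomerge of all the results.  `pair_ptos` are small project
    files, each containing just one pair of images (hypothetical
    filenames, prepared beforehand)."""
    outs = ["pair%02d.pto" % i for i in range(len(pair_ptos))]
    cmds = [["cpfind", "-o", out, pto] for out, pto in zip(outs, pair_ptos)]
    cmds.append(["ptomerge", "-o", merged] + outs)
    return cmds

# To run (needs Hugin's CLI tools installed):
#   import subprocess
#   for cmd in pairwise_cpfind_commands(["col_1_26.pto", "col_2_27.pto"]):
#       subprocess.run(cmd, check=True)
```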
> > we need a list of functionalities already available; a todo-list of
> > functionality to expose next; a how-to for doing it.
>
> The easiest route is the one I took in hsi/hpi, which is basically
> wrapping the C++ headers. This takes little effort - if you look at
> hsi.i, the interface definition, it isn't large at all, and not very
> complicated either. The problem with this approach is that the
> resulting python module is precisely as difficult to grasp as the C++
> header itself. The objects are the same, the methods are, and if you
> dont know what they do in C++, being able to access them from Python
> doesn't help. If the wrapped C++ code is well-evolved and high-level,
> with teling names - and what I found in hugin usually was like that -
> your Python module is valuable, otherwise you have to put in more
> work.
Understood. So which headers would be next on the list of desirable but not
yet added to hsi.i?
Yuv