Finding control points for many images


Thomas Robitaille

May 15, 2011, 1:36:40 PM
to hugin and other free panoramic software
Hello,

I'm trying to make a panorama with thousands of images from a video
sequence. Finding the control points takes a long time, so I was
wondering if there is a way (in a script or Hugin) to only look for
control points in neighboring images, so e.g. image 2 would only have
control points with images 1 and 3, and so on?

Thanks for any help,

Thomas

Jordan Miller

May 15, 2011, 3:54:10 PM
to hugi...@googlegroups.com
Did you try it on the command line, with tools like autopano-sift-c or the more recent algorithms that work better in the latest Hugin?

jordan


Bruno Postle

May 15, 2011, 4:21:34 PM
to hugin and other free panoramic software

There is a cpfind --linearmatch option that only attempts alignment
between consecutive photos. I'm not sure if this was available in
2010.4.0 or if you need to use a 2011.0.0 snapshot.

My experience of doing this with other tools is that it only takes
one blurred frame to break the sequence; there is an additional
cpfind --linearmatchlen option that you can use to match further
than just the adjacent photo.
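For reference, the invocations would look something like this (a sketch only; project.pto is assumed to be a Hugin project that already lists the frames in shot order):

```shell
# Match each image only against its immediate neighbour in the sequence
cpfind --linearmatch -o matched.pto project.pto

# Allow matching up to 3 frames ahead, so a single blurred frame
# cannot break the chain
cpfind --linearmatch --linearmatchlen 3 -o matched.pto project.pto
```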

--
Bruno

Carlos Eduardo G. Carvalho (Cartola)

May 16, 2011, 8:11:05 AM
to hugi...@googlegroups.com
I am doing it in a way that can be scripted: I generate one PTO file for each pair of images, then join them with pto_merge, which comes with Hugin.
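A sketch of how that command list could be built in Python (the use of pto_gen and the exact file naming are my assumptions; cpfind and pto_merge are the Hugin tools mentioned above):

```python
# Build the commands for pair-wise matching: one project per
# neighbouring pair of images, then a single pto_merge call to
# join all the pair projects into one.
def pairwise_commands(images):
    cmds = []
    pair_ptos = []
    for i in range(len(images) - 1):
        pto = f"pair_{i:04d}.pto"
        pair_ptos.append(pto)
        # pto_gen writes a project containing just these two images
        cmds.append(["pto_gen", "-o", pto, images[i], images[i + 1]])
        # cpfind adds control points between them (writing back in place)
        cmds.append(["cpfind", "-o", pto, pto])
    # pto_merge joins all the pair projects into a single project
    cmds.append(["pto_merge", "-o", "merged.pto"] + pair_ptos)
    return cmds

if __name__ == "__main__":
    for cmd in pairwise_commands(["f0.jpg", "f1.jpg", "f2.jpg"]):
        print(" ".join(cmd))
```

Each command list can then be handed to subprocess.run, or printed into a shell script.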

[ ]s, Carlos.

2011/5/15 Bruno Postle <br...@postle.net>

David Haberthür

May 17, 2011, 8:16:56 AM
to hugi...@googlegroups.com

On 15.05.2011, at 22:21, Bruno Postle wrote:

> On Sun 15-May-2011 at 10:36 -0700, Thomas Robitaille wrote:
>>
>> I'm trying to make a panorama with thousands of images from a video
>> sequence. Finding the control points takes a long time, so I was
>> wondering if there is a way (in a script or Hugin) to only look for
>> control points in neighboring images, so e.g. image 2 would only have
>> control points with images 1 and 3, and so on?
>
> There is a cpfind --linearmatch option to only attempt alignment between consecutive photos. I'm not sure if this was available in 2010.4.0 or if you need to use a 2011.0.0 snapshot.

I didn't know about the --linearmatch option; that could have saved me quite a bit of hassle with this pano: http://flic.kr/p/9GvwJ6
I wasn't able to coerce cpfind into finding only 3-4 control points per image; it insisted on finding 10. In the sequence I ended up with 1000+ control points per image, since the overlap was rather big. This made running the optimizer rather slow :)
Definitely need to investigate this further now.

Habi

Jeffrey Martin

May 17, 2011, 11:30:38 AM
to hugi...@googlegroups.com
that reminds me, is there any way to limit the number of CPs per pair?

I fully agree, generating too many CPs causes problems, but sometimes it is necessary to jack up the sensitivity of cpfind to find any matches at all! So there is a problem here waiting to be solved :-)))

kfj

May 17, 2011, 1:23:08 PM
to hugin and other free panoramic software
On 17 Mai, 17:30, Jeffrey Martin <360cit...@gmail.com> wrote:
> that reminds me, is there any way to limit the number of CP's per pair?

I think the CPs are found anyway. Once all likely candidates are lined
up, the best ones are kept. Some CPGs have a built-in option to limit
the number of CPs per pair to a certain number, but cpfind doesn't
implement this (please correct me if I'm wrong). With apsc, the
relevant parameter is

--maxmatches <matches>   Output no more than this many control points
                         per image pair (default: 25, zero means unlimited)

but cpfind only offers

--minmatches <int> Minimum matches (default : 6)

What the intended use of this parameter is escapes me - if it can't
find any matches, how would it fabricate the requested minimum of six?
Anyway, once the CPs are there, you can try to throw away ones which
are 'worse' than others. The problem is which quality criterion you
use. One obvious criterion is the 'CP distance', and my simple
top_cps_only script does just that:

http://bazaar.launchpad.net/~kfj/+junk/script/view/head:/main/top_cps_only.py

The other criterion is an even spread of CPs, which is mathematically
more demanding; I've not dealt with that issue, so top_cps_only may
leave a bunch of CPs very close to each other, which is not really
what you want. You can also use cpclean (or click on 'clean CPs')
until you only have roughly the desired number left - the tool itself
doesn't offer to keep only a certain number, but removes a certain
percentage. I think it takes even spread into account, though, and
will keep a minimum number per pair as well.
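The distance-based pruning can be sketched like this (a toy illustration, not Kay's actual script; control points are represented here as (image_a, image_b, distance) tuples):

```python
from collections import defaultdict

def keep_top_cps(cps, per_pair=10):
    """Keep only the `per_pair` control points with the smallest
    reprojection distance for each image pair.

    `cps` is a list of (image_a, image_b, distance) tuples; a smaller
    distance means a better point after optimisation.
    """
    by_pair = defaultdict(list)
    for a, b, dist in cps:
        by_pair[(a, b)].append(dist)
    kept = []
    for (a, b), dists in by_pair.items():
        # sort by distance and keep the best few per pair
        for dist in sorted(dists)[:per_pair]:
            kept.append((a, b, dist))
    return kept
```

As the text notes, this can leave clusters of near-identical points; an even-spread criterion would need extra logic.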

While I'm going on about CPGs - it's also worth noting that using
cpfind (and, for that matter, panomatic) without --linearmatch is
particularly bad with many images, since both CPGs will look at all
image pairs, so processing time is quadratic in the number of
images. apsc on the other hand keeps all feature points from all
images in memory and then does a global search, so it is much better
than quadratic (I'm not sure about the precise mathematics, but my gut
feeling is N log N for the global search). I hope these are all just
teething problems - cpfind claims descent from panomatic and has
therefore inherited some idiosyncrasies from it, but I hope it'll
continue evolving at its current pace, and now that the feature
detection seems to be running very well indeed, maybe other parts of
it can be improved.

The look-at-all-pairs method has its merits (badly matched images are
more likely to have some CPs found for them), and if you only have up
to, say, two dozen images, processing time is still bearable. So my
point, in short, is: if you can't use --linearmatch and have many
images, try apsc instead of cpfind.

> i fully agree, generating too many CP's makes problems but sometimes it is
> necessary to jack up the sensitivity of cpfind to find any matches at all!
> so there is a problem here waiting to be solved :-)))

In my experience, having a large number of CPs is also helpful when
calibrating lenses, for multi-lens panoramas, and with handheld takes
with parallax problems. And as far as ramping up cpfind's sensitivity
is concerned, rejoice: with a (very recent) fix, the --fullscale
option now seems to work just fine.

Kay

Jeffrey Martin

May 18, 2011, 7:55:59 AM
to hugi...@googlegroups.com
thanks Kay for your insight into this issue.

the problem with --linearmatch is that (unless I'm mistaken) it never tries to join the first and last image of the sequence, so this is really not so useful in the real world (unless you really want the user to specify what type of pano they're shooting, which to me seems really unfriendly - this stuff should just work)

--multirow OTOH is very useful, but I've found it to be slightly less reliable than the "just search every damn image pair" mode :)

So, it seems the best approach would be to use a high sensitivity in cpfind, and then use cpclean to prune the control points to a reasonable number before optimizing?

another question (hopefully not digressing too much): is keypoint data useful only after an image's FOV (and lens type - rect or fish) is specified? i.e. you couldn't include some kind of SIFT data in the EXIF of the image unless you first specified lens type and FOV?

Jeffrey

Yuval Levy

May 18, 2011, 9:02:06 PM
to hugi...@googlegroups.com
On May 18, 2011 07:55:59 AM Jeffrey Martin wrote:
> the problem with --linearmatch is that (unless i'm mistaken) it never tries
> to join the first and last image of the sequence, so this is really not so
> useful in the real world (unless you really want the user to specify what
> type of pano they're shooting, which to me seems really unfriendly - this
> stuff should just work)

one could argue that it is more user friendly to try to join the first and
last image of the sequence (assuming a full circle) or that it is more user
friendly not to try to join them (assuming only a partial tour of the
horizon). the most popular use-case should be the default.

there is no way for cpfind to know or guess if the sequence covers a full
circle without trying to match them.

things could be more user-friendly with a switch in cpfind, e.g. --360, that
would do for the user what currently requires multiple steps:
* generate a pto file containing all images (all.pto)
* generate a pto file containing only the first and last images (seam.pto)
* cpfind --linearmatch -o sequence.pto all.pto
* cpfind -o link.pto seam.pto
* ptomerge -o 360project.pto sequence.pto link.pto

cpfind --360 --linearmatch -o 360project.pto all.pto would:
1. do the linear matching
2. try to find CPs between the first and last image
3. if found, add them to the project.
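The manual steps above could be sketched as a shell session (a sketch only: pto_gen is assumed for generating the project files, the image names are placeholders, and IMG_0050.jpg stands in for the last frame):

```shell
pto_gen -o all.pto IMG_*.jpg                      # project with all images
pto_gen -o seam.pto IMG_0001.jpg IMG_0050.jpg     # first and last image only
cpfind --linearmatch -o sequence.pto all.pto      # match consecutive pairs
cpfind -o link.pto seam.pto                       # match the wraparound pair
ptomerge -o 360project.pto sequence.pto link.pto  # join the two projects
```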


> --multirow OTOH is very useful, but i've found it to be slightly less
> reliable than the "just search every damn image pair" mode :)

I had mixed results with --multirow as well, especially with the "sky-
bordering" row. Individual images on that row link better to the image
underneath them than to those left and right, but what I see from --multirow
are just rows that are linked at the beginning and at the end. "Just search
every damn image pair" gets even worse results (plenty of false positives and
bad links that drive cpclean crazy). My solution is to manually select two
images from the same column and manually trigger cpfind on them. It would be
easier for me if I could trigger cpfind from the CP tab (where I get visual
confirmation that I am indeed matching two tiles of the same column); or even
better if cpfind had an option to add robustness to the multirow
solution and match columns after the rows have been determined, i.e. once it
is determined that images 1-25 are the first row and 26-50 the second row,
pair-match them 1-26, 2-27, 3-28, ... adding those CPs to the project.
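That column-matching pass is easy to express; a sketch under Yuv's assumption of rows of known length, using his 1-based image numbers:

```python
def column_pairs(row_starts, row_len):
    """Given the first image number of each row and the row length,
    return the vertical (column) pairs to hand to the CP generator.

    column_pairs([1, 26], 25) pairs image 1 with 26, 2 with 27, and
    so on up to 25 with 50.
    """
    pairs = []
    for upper, lower in zip(row_starts, row_starts[1:]):
        for offset in range(row_len):
            pairs.append((upper + offset, lower + offset))
    return pairs
```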


> So, it seems that the best would be to use a high sensitivity in cpfind,
> and then use cpclean to prune the images to a reasonable number of CP's
> before optimizing?

if there are too many bad CPs, cpclean can actually prune the good ones
instead.


> another question (hopefully not digressing too much) is keypoint data
> useful only after an image FOV (and lens type - rect or fish) is
> specified? i.e. you couldn't include some kind of SIFT data in the exif of
> the image (unless you first specified lens type and fov)?

AFAIK the keypoint data is independent of FOV and lens type. When I first
start a project that might get difficult, I run cpfind with the --kall option.
this stores the keys and speeds up subsequent interactions.

One last thing, Jeffrey. May I ask you to quote a few lines from what you are
referring to in your posting? I am reading this mailing list sporadically and
on a mail client. Without context, most of your messages are difficult to
make sense of.

Thank you for being considerate of mailing list users in the future.

Yuv


Jeffrey Martin

May 19, 2011, 10:59:53 AM
to hugi...@googlegroups.com
replies below.


On Thursday, May 19, 2011 3:02:06 AM UTC+2, Yuv wrote:
On May 18, 2011 07:55:59 AM Jeffrey Martin wrote:
> the problem with --linearmatch is that (unless i'm mistaken) it never tries
> to join the first and last image of the sequence, so this is really not so
> useful in the real world (unless you really want the user to specify what
> type of pano they're shooting, which to me seems really unfriendly - this
> stuff should just work)

one could argue that it is more user friendly to try to join the first and
last image of the sequence (assuming a full circle) or that it is more user
friendly not to try to join them (assuming only a partial tour of the
horizon).  the most popular use-case should be the default.

there is no way for cpfind to know or guess if the sequence covers a full
circle without trying to match them.


I think it should try by default. In my experience (but I'm biased towards 360s, admittedly) most of the panos people try to stitch are 360 images.

things could be more user-friendly with a switch in cpfind, e.g. --360, that
would do for the user what currently requires multiple steps:
* generate a pto file containing all images (all.pto)
* generate a pto file containing only the first and last images (seam.pto)
* cpfind --linearmatch -o sequence.pto all.pto
* cpfind -o link.pto seam.pto
* ptomerge -o 360project.pto sequence.pto link.pto

cpfind --360 --linearmatch -o 360project.pto all.pto would:
1. do the linear matching
2. try to find cps between the first and last image
3. if found them, add them to the project.


yes, interesting idea!
 

ah interesting.
can anyone confirm if this really is true? (that keypoints are independent of FOV)

One last thing, Jeffrey.  May I ask you to quote a few lines from what you are
referring to in your posting?  I am reading this mailing list sporadically and
on a mail client.  Without context, most of your messages are difficult to
make sense of.

Thank you for being considerate of mailing list users in the future.


yes of course, sorry about that!

Jeffrey

Gnome Nomad

May 19, 2011, 3:53:41 PM
to hugi...@googlegroups.com
Jeffrey Martin wrote:
> replies below.
>
> On Thursday, May 19, 2011 3:02:06 AM UTC+2, Yuv wrote:
>
> On May 18, 2011 07:55:59 AM Jeffrey Martin wrote:
> > the problem with --linearmatch is that (unless i'm mistaken) it never
> > tries to join the first and last image of the sequence, so this is
> > really not so useful in the real world (unless you really want the
> > user to specify what type of pano they're shooting, which to me seems
> > really unfriendly - this stuff should just work)
>
> one could argue that it is more user friendly to try to join the first
> and last image of the sequence (assuming a full circle) or that it is
> more user friendly not to try to join them (assuming only a partial
> tour of the horizon). the most popular use-case should be the default.
>
> there is no way for cpfind to know or guess if the sequence covers a
> full circle without trying to match them.
>
>
> I think it should try by default. in my experience (but i'm biased
> towards 360's admittedly) most panos people try to stitch are 360 images.

I've never even tried to shoot one, let alone stitch one. So I wouldn't
make it the default. But what's being said about adding this convenience
feature sounds good and workable to me.

--
Gnome Nomad
gnome...@gmail.com
wandering the landscape of god
http://www.cafepress.com/otherend/

Bart van Andel

May 19, 2011, 5:57:00 PM
to hugi...@googlegroups.com
On Thursday, May 19, 2011 9:53:41 PM UTC+2, GnomeNomad wrote:
Jeffrey Martin wrote:
>     one could argue that it is more user friendly to try to join the
>     first and last image of the sequence (assuming a full circle) or
>     that it is more user friendly not to try to join them (assuming
>     only a partial tour of the horizon).  the most popular use-case
>     should be the default.
>
>     there is no way for cpfind to know or guess if the sequence covers
>     a full circle without trying to match them.
>
>
> I think it should try by default. in my experience (but i'm biased
> towards 360's admittedly) most panos people try to stitch are 360 images.

I've never even tried to shoot one, let alone stitch one. So I wouldn't
make it the default. But what's being said about adding this convenience
feature sounds good and workable to me.


I don't see any harm in having cpfind *try* to match the first and last image. If it finds a connection, voila, probably it was indeed a 360 degree pano. If it doesn't find any matches, then probably it wasn't. How big are the chances of a false positive (e.g. it connects the images where it shouldn't)? Or a false negative (it doesn't where it should)? My guess is that in most cases it will work. So even though I don't often shoot a 360 (I've only done maybe 4 of them) I opt for trying to find a 360 by default. The small extra cost of computation I don't mind.

--
Bart
 

Yuval Levy

May 19, 2011, 7:26:59 PM
to hugi...@googlegroups.com
On May 19, 2011 05:57:00 PM Bart van Andel wrote:
> On Thursday, May 19, 2011 9:53:41 PM UTC+2, GnomeNomad wrote:
> > Jeffrey Martin wrote:
> > > one could argue that it is more user friendly to try to join the
> > > first and
> > > last image of the sequence (assuming a full circle) or that it is
> > > more user
> > > friendly not to try to join them (assuming only a partial tour of
> > > the horizon). the most popular use-case should be the default.
> > >
> > > there is no way for cpfind to know or guess if the sequence covers
> > > a full
> > > circle without trying to match them.
> > >
> > > I think it should try by default. in my experience (but i'm biased
> > > towards 360's admittedly) most panos people try to stitch are 360
> > > images.
> >
> > I've never even tried to shoot one, let alone stitch one. So I wouldn't
> > make it the default. But what's being said about adding this convenience
> > feature sounds good and workable to me.
>
> I don't see any harm in having cpfind to try

I don't have a strong preference either way but I do have a slight preference
for no automatic matching attempt by default.

If there is a switch to activate the extra feature, we can also add a
preference to toggle it on by default for those who do 360s and want this
feature always activated.

Even one false positive is one too many. The cost of computation I don't
mind. The cost of user time to undo the false positive I do mind.

Defaults are meant to be set for the majority of use cases and preferences are
the way to satisfy alternatives, not blind brute force (extra computation).

Yuv


kfj

May 20, 2011, 1:09:30 AM
to hugin and other free panoramic software


On 19 Mai, 23:57, Bart van Andel <bavanan...@gmail.com> wrote:

> I don't see any harm in having cpfind to *try* to match the first and last
> image. If it finds a connection, voila, probably it was indeed a 360 degree
> pano. If it doesn't find any matches, then probably it wasn't. How big are
> the changes of a false positive (e.g. it connects the images where it
> shouldn't)? Or a false negative (it doesn't where it should)? My guess is
> that in most cases it will work. So even though I don't often shoot a 360
> (I've only done maybe 4 of them) I opt to try finding a 360 by default. The
> small extra cost of computation I don't mind.

I agree with you. I'd propose that trying to match first and last should
be the default. The small chance of a false positive is, IMHO, not a big
issue, and doesn't cost much user time, as the user can simply select
the first and last image in the image tab and remove their CPs with a
single click. Furthermore I'd rely on the CPG to produce few such
mismatches. Still, a command line switch to force the behaviour this
way or that should be simple enough to add.

While this is pending, it's no big deal either to do a linear match
and then select the first and last image only in the images tab and
run the CPG just on those two images to finish a 360 degree panorama.

Let me remark that stuff like this is precisely what will become much
less of an issue once the writing and use of plugins becomes common.
Currently we have to discuss a new feature which has to be introduced
into the C++ code and may make it into the next release. The future is
glue scripts in python that orchestrate CPG use - change a line, add a
routine, put it online, let the others play with it - no compiling and
linking needed. You'll love it ;-)

Kay

Gnome Nomad

May 20, 2011, 3:36:22 AM
to hugi...@googlegroups.com

I'd prefer it not try unless I ask it to. Then I wouldn't have to deal
with any spurious connections. I'd know whether or not I'd shot a 360. I
suspect there are a lot fewer people shooting 360s than shooting smaller
panoramas.

paul womack

May 20, 2011, 5:12:13 AM
to hugi...@googlegroups.com
Jeffrey Martin wrote:

> I think it should try by default. in my experience (but i'm biased
> towards 360's admittedly) most panos people try to stitch are 360 images.

I think you have a self-selecting sample.

Most of my Hugin uses are NOT 360, either horizontal or vertical.

Just one (my) vote, obviously.

BugBear

Jim Watters

May 20, 2011, 11:16:02 AM
to hugi...@googlegroups.com
On 2011-05-18 10:02 PM, Yuval Levy wrote:
> On May 18, 2011 07:55:59 AM Jeffrey Martin wrote:
>> the problem with --linearmatch is that (unless i'm mistaken) it never tries
>> to join the first and last image of the sequence, so this is really not so
>> useful in the real world (unless you really want the user to specify what
>> type of pano they're shooting, which to me seems really unfriendly - this
>> stuff should just work)
> one could argue that it is more user friendly to try to join the first and
> last image of the sequence (assuming a full circle) or that it is more user
> friendly not to try to join them (assuming only a partial tour of the
> horizon). the most popular use-case should be the default.
>
> there is no way for cpfind to know or guess if the sequence covers a full
> circle without trying to match them.
A reasonable guess could be made from the FoV of the lens and the number of images.

Another guess could be made to join image chains together: if several chains of
images are created, see if the start of one would join the start of the next.

Once it has been established that there are multiple rows of images, then if an
image currently connects to only 1 or 2 other images, it could probably use more
connections.

>> --multirow OTOH is very useful, but i've found it to be slightly less
>> reliable than the "just search every damn image pair" mode :)
> I had mixed results with --multirow as well, especially with the "sky-
> bordering" row. individual images on that row link better to the image
> underneath them than to those left and right, but what I see from --multirow
> are just rows that are linked at the beginning and at the end. "just search
> every damn image pair" gets even worse result (plenty of false positives and
> bad links that drive cpclean crazy). My solution is to manually select two
> images from the same column and manually trigger cpfind on them. It would be
> easier for me if I could trigger cpfind from the CP tab (where I get visual
> confirmation that I am indeed matching two tiles of the same column); or even
> better if cpfind would have an option to add robustness to the multirow
> solution and match columns after the rows have been determined, i.e. once it
> is determined that images 1-25 are the first row and 26-50 the second row,
> pair-match them 1-26, 2-27, 3-28, ... adding those CPs to the project.

More intelligence is welcome. It would also help if the optimizer could
determine that some images are not connected to the main chain, so that
these images could be evenly spaced between connected images.

--
Jim Watters
http://photocreations.ca

Bruno Postle

May 20, 2011, 6:11:16 PM
to Hugin ptx
On Thu 19-May-2011 at 07:59 -0700, Jeffrey Martin wrote:
>On Thursday, May 19, 2011 3:02:06 AM UTC+2, Yuv wrote:
>> On May 18, 2011 07:55:59 AM Jeffrey Martin wrote:
>> > the problem with --linearmatch is that (unless i'm mistaken) it
>> > never tries to join the first and last image of the sequence,
>> > so this is really not so useful in the real world (unless you
>> > really want the user to specify what type of pano they're
>> > shooting, which to me seems really unfriendly - this stuff
>> > should just work)

I think there is some confusion here: given a single-row panorama,
the cpfind --multirow option will both try to match photos
sequentially and try to join the ends into a full circle.

The main advantage of the --linearmatch option is for video
sequences where every frame overlaps a large number of other frames.
Here it would be very inefficient to try to generate control points
for all these overlaps; it makes sense to only match photos that are
adjacent in the sequence.

>> AFAIK the keypoint data is independent of FOV and lens type.
>> When I first start a project that might get difficult, I run
>> cpfind with the --kall option. this stores the keys and speeds
>> up subsequent interactions.

>ah interesting. can anyone confirm if this really is true? (that
>keypoints are independent of FOV)

I'm pretty sure that the features are identified in conformal space
(as with autopano-sift-c), in which case they are very dependent on
all the lens parameters given in the input project.

--
Bruno

Bruno Postle

May 20, 2011, 6:11:24 PM
to Hugin ptx
On Fri 20-May-2011 at 12:16 -0300, Jim Watters wrote:
>
>Another guess could be made to join image chains together, if several
>chains of images are created see if the start of one would join the
>start of the next.
>
>Once it has been established that there are multiple rows of images,
>then if an image currently only connects to 1 or 2 other images, it
>could probably use more connections.

These are some of the steps of the cpfind --multirow option.

--
Bruno

Yuval Levy

May 21, 2011, 11:04:39 AM
to hugi...@googlegroups.com

I set up a quick experiment starting from two JPG with all EXIF data intact.

1. Created copies with no EXIF data:

cp 8165.JPG 8165stripped.jpg
cp 8166.JPG 8166stripped.jpg
exiftool -all= *.jpg


2. Used panostart to generate bootstrap Makefile files:

panostart -o withexif.mk *.JPG
panostart -o noexif.mk *.jpg


3. Generated pto files:

make -f withexif.mk
make -f noexif.mk

failed with an error:
make[1]: autopano-sift-c: Command not found
make[1]: *** [8165-8166_simple.a.pto] Error 127
make[1]: Leaving directory `/home/yuv/usecase'
make: *** [8165-8166.pto] Error 2

but the pointless pto files are generated.

4. used cpfind on the pointless pto.

cpfind --kall -o pointless.pto 8165-8166.pointless.pto

creates the .key files (but no pointless.pto - cpfind bug?)

cpfind --kall -o stripped.pointless.pto 8165-8166stripped.pointless.pto

fails with

WARN: 08:17:11.927531 (/home/yuv/src/hugin/hugin-
tarball/hugin-2011.0.0/src/hugin_base/panodata/Panorama.cpp:1762) readData():
Failed to read from dataInput.
ERROR: couldn't parse panos tool script: '8165-8166stripped.pointless.pto'!

5. dug a little bit deeper. Loaded 8165-8166stripped.pointless.pto into the
Hugin GUI, edited the preferences to add the --kall switch and hit the button
to trigger cpfind. Log excerpts:

Hugins cpfind 2011.0.0.0fd3e119979c
based on Pan-o-matic by Anael Orlinski

Project contains the following images:
Image 0
Imagefile: /home/yuv/usecase/8165stripped.jpg
Remapped : no
Image 1
Imagefile: /home/yuv/usecase/8166stripped.jpg
Remapped : no

--- Analyze Images ---
i0 : Analyzing image...
i1 : Analyzing image...

6. compared the resulting key files:

diff 8165.key 8165stripped.key
10808c10808
< /home/yuv/usecase/8165.JPG
---
> /home/yuv/usecase/8165stripped.jpg

7. dug further. In the Hugin GUI, using the images without EXIF data, I
varied HFOV between 2 and 200 and changed the lens projection type.

CONCLUSIONS:

1. the key files seem to be independent of FOV input, confirming the first
half of my statement. Whether using the actual value computed from EXIF (9.8),
the default guess in Hugin (50), or the broader manually edited range (2 to
200), the key files are unchanged *as long as projection type (f) stays
constant*.

2. the key files do depend on projection type input. Changing from
Rectilinear (Hugin's default guess) to any other input projection yields
completely different results. It was my mistake to take Hugin's default guess,
which works in my particular test case, for a more general rule.

3. i have not tested against distortion and other lens parameters other than
HFOV (v) and projection (f).

4. https://bugs.launchpad.net/hugin/+bug/786204

5. where are bugs in Panotools-Script tracked? It would be nice if it
worked without the autopano-sift-c dependency.

Yuv


Yuval Levy

May 21, 2011, 11:07:11 AM
to hugi...@googlegroups.com

did not work as expected in my case, although the initial body of CPs was
generated with cpfind --multirow option.

I had to manually select images in the same column and trigger cpfind between
them to add good CPs.

Yuv


Yuval Levy

May 21, 2011, 11:20:57 AM
to hugi...@googlegroups.com
On May 20, 2011 11:16:02 am Jim Watters wrote:
> On 2011-05-18 10:02 PM, Yuval Levy wrote:
> > there is no way for cpfind to know or guess if the sequence covers a full
> > circle without trying to match them.
>
> A reasonable guess could be done with the FoV of the lens and the number of
> images.

one would still need to consider whether they are stacked and/or how much
overlap there is between them.


> Another guess could be made to join image chains together, if several
> chains of images are created see if the start of one would join the start
> of the next.

yes, this makes sense, and then work progressively through the two chains.


> Once it has been established that there are multiple rows of images, then
> if an image currently only connects to 1 or 2 other images, it could
> probably use more connections.

I think that adding more connections to the upper / lower row would be
desirable in any case, as it adds robustness to the project.


> More intelligence is welcome. And if the optimizer could also determine
> that some images are not connected to the main chain. That these images
> could be evenly spaced between connected images.

Yes, this kind of even spacing is usually just what it takes with the sky or
other featureless areas of a multirow pano.

Yuv


Bruno Postle

May 21, 2011, 6:52:10 PM
to Hugin ptx
On Sat 21-May-2011 at 11:04 -0400, Yuval Levy wrote:
>
>1. the key files seem to be independent of FOV input, confirming the first
>half of my statement. Whether using the actual value computed from EXIF (9.8)
>or the default guess in Hugin (50), or the broader manually edited range (2 to
>200), the key files are invariable *as long as projection type (f) stays
>constant*.
>
>2. the key files do depend on projection type input. Changing from
>Rectilinear (Hugin's default guess) to any other input projection yields
>completely different results. My mistake to take Hugin's default guess that
>works in my particular test case for a more general rule.

This certainly looks like a bug; there is no point changing
behaviour with projection if the angle of view is ignored.

>5. where are bugs in panotools-script tracked? it would be nice if it would
>work without the autopano-sift-c dependency.

Panotools-Script bugs used to go in the sourceforge tracker. The
SVN version has used cpfind rather than autopano-sift-c since
January, but I need to do a release.

--
Bruno

Rogier Wolff

May 22, 2011, 1:34:56 AM
to hugi...@googlegroups.com


On Thu, May 19, 2011 at 07:26:59PM -0400, Yuval Levy wrote:
>
> I don't have a strong preference either way but I do have a slight preference
> for no automatic matching attempt by default.

> If there is a switch to activate the extra feature, we can also add
> a preference to toggle it on by default for those who do not do 360
> and do not need this feature activated.

> Even one false positive is one too many. The cost of computation I
> don't mind. The cost of user time to undo the false positive I do
> mind.

In some applications a false positive is more troublesome than a false
negative. How about: "I think that that's Bin Laden, shoot him!"
Or an identification system saying: "Your biometrics match with Obama,
welcome mr president to the military compound".

The parameters of such an access-control system can be tuned to make
false positives, say, 100x less likely than false negatives.

But in this case false negatives are maybe just as bad. And a user
forgetting to tick a "360 degree shot" box would IMHO result in a
false negative.

> Defaults are meant to be set for the majority of use cases and
> preferences are the way to satisfy alternatives, not blind brute
> force (extra computation).

Hugin IS a program that invests brute computational force for
convenience. It is a convenience that we find control points, and it is
a convenience that hugin optimizes the position of the images based on
the control points.

So IMHO we should by default check for the wraparound in the images.
A configurable option might have a few settings, for example
"never", "always", "ask", "based on FOV".

Or how about a popup (I hate popups) that says "it seems you've shot
a 360 degree pano, correct?" when a match is detected between the first
and last shots. Then user intervention (answering the popup question)
is only required to reject a false positive.

And that FOV setting can be useful too. When a match is found we have
a linear transformation that allows us to predict the next point,
right? So how about we extrapolate to calculate the left side of the
right image, and compute the overlap. Add all the FOVs together, subtract
the overlaps, and if this comes to, say, > 330 degrees, do the
wraparound match...
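The back-of-the-envelope check in the last paragraph is easy to sketch. A rough illustration in Python; the function name, inputs and the 330-degree threshold are assumptions for demonstration, not anything hugin actually exposes:

```python
# Sketch of the proposed heuristic: sum the horizontal FOVs of a matched
# chain of images, subtract the estimated pairwise overlaps, and attempt
# a first<->last ("wraparound") match if the total suggests a sweep
# close to 360 degrees. All names are illustrative, not hugin API.

def should_try_wraparound(hfovs, overlaps, threshold=330.0):
    """hfovs: per-image horizontal FOV in degrees.
    overlaps: estimated overlap in degrees for each adjacent pair."""
    total = sum(hfovs) - sum(overlaps)
    return total > threshold

# e.g. 12 shots of 40 degrees with ~10 degrees overlap per pair:
# 12*40 - 11*10 = 370 degrees -> worth attempting the wraparound match.
```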

Roger.

--
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ

kfj

unread,
May 22, 2011, 4:35:30 AM
to hugin and other free panoramic software
On 22 May, 07:34, Rogier Wolff <rew-googlegro...@BitWizard.nl> wrote:

> So IHMO we should by default check for the wraparound in the images.
> A configurable option might have a few settings. For example,
> "never", "always", "ask", "based on FOV".

I think the last option would approach a reasonable default. If
there's a match between not-in-line images and the FOV makes it
feasible that this match is a genuine overlap, assume that it is one.
After all, if the match-all-pairs strategy is used, this match between
not-in-line images will also be found, and no one is afraid of false
positives here. And hugin is not an image-recognition system; often
enough people just come in as ghosts to be promptly removed.

So I'd propose this method:

Linear matching finds one or more strips of connected image sequences.
Roughly optimize each strip separately with the image information the
pto has at that point (if the user has supplied wrong data it's his/her
fault). These roughly optimized strips can be checked for likely
overlaps or near-overlaps between their first and last image(s). If
overlaps are detected in this preliminary step, a 360-degree situation
is suspected and the images in question are fed to the CPG to see
whether the initial assumption of 360 degrees can be supported with
CPs. If so, the likelihood of 360 degrees is so large that hugin takes
them as such, and the user has to explicitly decouple the nonsequential
images if it's a false positive after all.

After all, users can simply press undo these days if they're not happy
with the result of an automatic operation. It might be helpful to put
the ring-closing operation into a separately undoable step to keep the
result of the linear matching. The detection of CPs between head and
tail images can be done quickly if the keypoints generated for the
images during the linear matching are kept.

I suppose working in strips will be a common enough work flow to make
this behaviour satisfactory for most users that venture as far as
choosing a CPG setting other than the standard. Checking for overlaps
is already in the code base.

While I'm at it - couldn't we introduce a mechanism (activated by a
preference) to automatically store and keep the keyfile for every
image once it's been calculated (like, IMG_1234.key in the same
directory) and have the CPG use the keyfile if it's there and only
calculate a new one in certain circumstances (like, it's been told to
do so, or the CPG settings are different to the ones the extant
keyfile has been made with)? Storage is cheap these days, and this
could save quite some time in certain scenarios. It would even be
quite feasible to just calculate keyfiles for a whole take or
collection, like right after importing the images, and whatever
matching strategies would be employed later could all go ahead without
calculating the feature points.
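A minimal sketch of the caching policy proposed above, assuming the simple rule "reuse IMG_1234.key if it exists and is newer than the image". The helper names and the policy details are my own illustration; only the idea of per-image keyfiles comes from the paragraph above:

```python
# Sketch of the keyfile-caching idea: keep IMG_1234.key next to
# IMG_1234.jpg and regenerate it only when it is missing or older than
# the image. The helper names are hypothetical, not hugin functions.
import os

def keyfile_for(image_path):
    # IMG_1234.jpg -> IMG_1234.key, in the same directory
    return os.path.splitext(image_path)[0] + ".key"

def needs_keyfile(image_path):
    key = keyfile_for(image_path)
    if not os.path.exists(key):
        return True
    # regenerate if the image is newer than its cached keypoints
    return os.path.getmtime(image_path) > os.path.getmtime(key)
```

The "regenerate when CPG settings differ" condition mentioned above could be added by stamping the settings into the keyfile itself or its name.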

Kay

T. Modes

unread,
May 22, 2011, 4:56:47 AM
to hugin and other free panoramic software
> So I'd propose this method:
>
> Linear matching finds one or more strips of connected image sequences.
> Roughly optimize each strip sperately with the image information the
> pto has then (if the user has supplied wrong data it's his/her fault).
> These roughly optimized strips can be checked for likely overlaps or
> near-overlaps between their first and last image(s), if overlaps are
> detected in this preliminary step, a 360 degree situation is suspected
> and the images in question are fed to the CPG to see if the initial
> assumption of 360-degrees can be supported with CPs. If so, the
> likelihood of 360 degrees is so large that hugin takes them as such
> and the user has to explicitly decouple the nonsequential images if
> it's a false-positive after all.

That's a description of the already existing multirow matching
strategy.

> While I'm at it - couldn't we introduce a mechanism (activated by a
> preference) to automatically store and keep the keyfile for every
> image once it's been calculated (like, IMG_1234.key in the same
> directory) and have the CPG use the keyfile if it's there and only
> calculate a new one in certain circumstances (like, it's been told to
> do so, or the CPG settings are different to the ones the extant
> keyfile has been made with)? Storage is cheap these days, and this
> could save quite some time in certain scenarios. It would even be
> quite feasible to just calculate keyfiles for a whole take or
> collection, like right after importing the images, and whatever
> matching strategies would be employed later could all go ahead without
> calculating the feature points.

That's already done by the cache switch.

Everybody is complaining about the documentation, but following this
thread nobody is reading the existing documentation, where all these
issues are described. So why write documentation, if nobody reads it?

Thomas

kfj

unread,
May 22, 2011, 5:33:50 AM
to hugin and other free panoramic software


On 22 May, 10:56, "T. Modes" <Thomas.Mo...@gmx.de> wrote:
> > So I'd propose this method:
>
> > Linear matching finds one or more strips of connected image sequences.
> > Roughly optimize each strip sperately with the image information the
> > pto has then (if the user has supplied wrong data it's his/her fault).
> > These roughly optimized strips can be checked for likely overlaps or
> > near-overlaps between their first and last image(s), if overlaps are
> > detected in this preliminary step, a 360 degree situation is suspected
> > and the images in question are fed to the CPG to see if the initial
> > assumption of 360-degrees can be supported with CPs. If so, the
> > likelihood of 360 degrees is so large that hugin takes them as such
> > and the user has to explicitly decouple the nonsequential images if
> > it's a false-positive after all.
>
> That's a description of the already existing multirow matching
> strategy.

Multirow matching puts all start- and end-images from the detected
strips into a separate matching step and also finds connections
between rows that way. I agree that my proposition is very similar to
multirow matching.

> > While I'm at it - couldn't we introduce a mechanism (activated by a
> > preference) to automatically store and keep the keyfile for every
> > image once it's been calculated (like, IMG_1234.key in the same
> > directory) and have the CPG use the keyfile if it's there and only
> > calculate a new one in certain circumstances (like, it's been told to
> > do so, or the CPG settings are different to the ones the extant
> > keyfile has been made with)? Storage is cheap these days, and this
> > could save quite some time in certain scenarios. It would even be
> > quite feasible to just calculate keyfiles for a whole take or
> > collection, like right after importing the images, and whatever
> > matching strategies would be employed later could all go ahead without
> > calculating the feature points.
>
> That's already done by the cache switch.
>

My proposition is to not store the keyfiles in the temp directory and
delete them soon after, but to make them permanent artifacts saved in
the image's directory with the image's name plus the .key extension.
I'd like to further decouple feature detection from matching and make
the keyfiles permanent. Maybe I misread the documentation, but it
looked to me as if reuse of keyfiles was currently limited.

> Everybody is complaining about the documentation. But following this
> thread nobody is reading the existing documentation. There all these
> issues are described. So why writing documentation, if nobody reads
> it.

Bear with us. Be gentle. If you feel that we should be made aware of
existing documentation, the friendly thing would be to post a helpful
link to it rather than telling us off. I think this is the link
needed:

http://wiki.panotools.org/Hugin_Parameters_for_Control_Point_Detectors_dialog

Is this what you meant? And maybe you can recommend other bits as
well?

Kay

Yuval Levy

unread,
May 22, 2011, 10:10:38 AM
to hugi...@googlegroups.com
On May 22, 2011 04:56:47 AM T. Modes wrote:
> That's a description of the already existing multirow matching
> strategy.

that's theory. What I observe in practice are curled rows that taint the rest
of the strategy after the rows have been identified and optimized first time.

this happens to me with an alpine valley scenery. The curl starts in those
images where the sky becomes a prominent part of the image (3/4 and more of
the surface). Those images have poor features in the left<->right overlap,
and good features in the top<->down overlap. But these good features are not
automatically matched, I guess because the initial optimization of the stripes
yields the curls and confuses the logic.

the way I understand multi-row now is that the rows are optimized and the
result is accepted blindly. maybe we should make an assumption that these
rows are somewhat straight and fix the curl?

my images were shot on a calibrated panohead and it was level (and even if it
was not, the camera's movement is linear to the horizon corrected for the
levelling factor, not curled).


> That's already done by the cache switch.

the cache switch would be redundant if the k-switches and the o switch behaved
in an intuitive way. I have attached a patch to my bug rep... ehem feature
request. [0]


> Everybody is complaining about the documentation.

Nobody in this thread complained about the documentation.

Speaking for myself:
* I complained about an unintuitive behavior (for which I posted a patch [0]);
* I described the practical outcome and problems I am confronted with when
using the tool;
* I reported the result of an empirical test about the influence of FOV and
projection input on detection output;
* I was part of an open brain storming suggesting potential improvements to
the tool's strategy and defaults.

Did I make mistakes? of course I did. I am not using cpfind on a daily (or
even weekly) basis and don't have every small detail at the forefront of my mind.

Did other make mistakes? probably too.

Did this make the discussion wrong or irrelevant? I don't think so. If there
were no issues, there would be no discussion. And the issue is not with
documentation. It is with improving the tool and making it more intuitive for
the occasional user who does not have every small detail at the forefront of
their mind.


> nobody is reading the existing documentation. There all these
> issues are described. So why writing documentation, if nobody reads
> it.

I'm sorry to read you frustrated. I did read the documentation, it is
helpful, it does indeed address most of the issues described but it is not the
end of the story.

Do you read the documentation of your car every time before driving it?

There is a combination of factors at play that together determine user
proficiency. I am not using cpfind on a daily basis and so I do not recall all
the details. This is especially true for the counter-intuitive ones.

To me it is counter-intuitive that the -k switch overrides the -o switch and
no warning is displayed on the screen.

When I feed it a carefully shot multi-row panorama and it curls the row that
borders between land (good features) and sky (poor/no features), it tells me
that the strategy has room for improvement.

There will be many ideas thrown at it to improve it. Some will not be
feasible. Some will be based on wrong assumption. Some will work best in
some cases and others in other. But please don't shout back RTFM at everybody
here who is trying to make sense of the current observed behavior in real life
and how it differs from the expected / predicted behavior in theory.

I really look forward to python scripting advancing and becoming mainstream.
These strategy things are better left to a high-level scripting language using
the detection/matching/optimization functionalities as building blocks.

Good news for Ubuntu users: I updated the wiki instructions and now everybody
with Lucid (10.04) and later can access python scripting; and Philipp updated
the nightlies build process so that now the python scripting interface is
available to those brave enough to type the following commands in the CLI:

sudo add-apt-repository ppa:hugin/hugin-builds
sudo add-apt-repository ppa:hugin/nightly
sudo apt-get update
sudo apt-get install hugin

Yuv


[0] https://bugs.launchpad.net/hugin/+bug/786204


kfj

unread,
May 22, 2011, 1:43:01 PM
to hugin and other free panoramic software
On 22 May, 16:10, Yuval Levy <goo...@levy.ch> wrote:

> that's theory.  What I observe in practice are curled rows that taint the rest
> of the strategy after the rows have been identified and optimized first time.

maybe the curled stripes react well to the horizon straightening
routine. Just an idea.

> this happens to me with an alpine valley scenery.  The curl starts in those
> images where the sky becomes a prominent part of the image (3/4 and more of
> the surface).  Those images have poor features in the left<->right overlap,
> and good features in the top<->down overlap.  But these good features are not
> automatically matched, I guess because the initial optimization of the stripes
> yields the curls and confuses the logic.

I do a 360x180 with a fisheye as a backdrop and then 'pin' the longer-
lens shots to it. This is one reason why I developed woa, because that
makes good matches between images of different lenses, but even
without woa, if the lenses aren't too vastly different and you have,
say, only two or three matches from each long-lens shot to the fisheye
shots, everything is already quite nicely in place.

> > nobody is reading the existing documentation. There all these
> > issues are described. So why writing documentation, if nobody reads
> > it.
>
> I'm sorry to read you frustrated.  I did read the documentation, it is
> helpful, it does indeed address most of the issues described but it is not the
> end of the story.

I must admit that I hadn't looked into the cpfind documentation for a
while, and the text that is now in the wiki is actually good and
helpful. I often simply call the program in question, or call man on
it, and what you get that way is pretty thin. Sorry for being sloppy.

I would dearly like to understand more of how cpfind is doing what it
does. I made several attempts to look into the code but I find it hard
to penetrate. The code is very sparsely documented. The other CPGs
usually aren't much better. I really appreciate a well-written and
well-presented technical paper as the ones coming with enblend/enfuse.
If anyone knows of any more in-depth information on cpfind, please let
me know, and please forgive me for not having found it myself. I'd be
curious about the 'gradient-based detector' and how it differs from
SURF.

> I really look forward for python scripting to advance and become mainstream.  
> These strategy things are better left to a high level scripting language using
> the detection/matching/optimization functionalities as building blocks.

absolutely. Stuff like orchestrating CPG use for specific shooting
patterns is an ideal field for scripting. One of my first python
scripts to deal with matters panoramic was a glue script that would be
called by hugin as if it were a CPG and would construct calls to
various CPGs. This allowed me to do stuff I couldn't do with the plain
edit-the-command-line approach (like call A. Jenny's autopano under
Linux using wine).

hsi/hpi is just a first step in the right direction. The helper
programs (CPGs, warping, blending) can all be made into modules with a
bit of SWIG magic, now that we have open source programs for them -
the path is now trodden, so it should be easier than the first one -
and with Python as glue their functionality would suddenly become
available inside the hugin process, no need to pass data via pipes,
parameters or files. My dream is to have a bunch of Python modules for
the various aspects of panography, a GUI (in wxPython) to control and
visualize stuff and a collection of plugins combining the module
functions into useful routines to be presented by the GUI. And a group
of users that can easily try, modify and share their code without
going through compile/link cycles.

> Good news for Ubuntu users:  I updated the wiki instructions and now everybody
> with Lucid (10.04) and later can access python scripting; and Philipp updated
> the nightlies build process so that now the python scripting interface is
> available to those brave enough to type the following commands in the CLI:
>
>   sudo add-apt-repository ppa:hugin/hugin-builds
>   sudo add-apt-repository ppa:hugin/nightly
>   sudo apt-get update
>   sudo apt-get install hugin

this sounds exciting! Does it mean that the hugin you get from the
PPAs is now per default an hsi/hpi-enabled version?

Kay

Bruno Postle

unread,
May 22, 2011, 5:26:19 PM
to Hugin ptx
On Sun 22-May-2011 at 10:10 -0400, Yuval Levy wrote:
>On May 22, 2011 04:56:47 AM T. Modes wrote:
>> That's a description of the already existing multirow matching
>> strategy.
>
>that's theory. What I observe in practice are curled rows that taint the rest
>of the strategy after the rows have been identified and optimized first time.

>the way I understand multi-row now is that the rows are optimized and the


>result is accepted blindly. maybe we should make an assumption that these
>rows are somewhat straight and fix the curl?

Actually cpfind already does this (or is supposed to), the estimated
positions used as a basis for subsequent overlap detection are
determined without optimising 'roll'.

These 'curls' sound to me like the initial estimate of the angle of
view of the photos is wrong (which may or may not be related to the
angle of view behaviour you noticed generating key files).

>> nobody is reading the existing documentation. There all these
>> issues are described. So why writing documentation, if nobody reads
>> it.
>
>I'm sorry to read you frustrated. I did read the documentation, it is
>helpful, it does indeed address most of the issues described but it is not the
>end of the story.

There have been a lot of comments regarding cpfind that seem to be
referring to experiences with other software, or suggesting features
that cpfind already has (or is supposed to have). It would be
difficult at this point for anyone who has just been reading the PTX
list to have an accurate idea of what cpfind does.

--
Bruno

Yuval Levy

unread,
May 22, 2011, 6:04:32 PM
to hugi...@googlegroups.com
On May 22, 2011 05:26:19 PM Bruno Postle wrote:
> >the way I understand multi-row now is that the rows are optimized and the
> >result is accepted blindly. maybe we should make an assumption that these
> >rows are somewhat straight and fix the curl?
>
> Actually cpfind already does this (or is supposed to), the estimated
> positions used as a basis for subsequent overlap detection are
> determined without optimising 'roll'.

it's a pitch+roll effect.


> These 'curls' sound to me like the initial estimate of the angle of
> view of the photos is wrong (which may or may not be related to the
> angle of view behaviour you noticed generating key files).

mhh... full EXIF is passed to Hugin and, looking at the pto project that goes
into cpfind, the initial estimate of hfov seems OK.

> >> nobody is reading the existing documentation. There all these
> >> issues are described. So why writing documentation, if nobody reads
> >> it.
> >
> >I'm sorry to read you frustrated. I did read the documentation, it is
> >helpful, it does indeed address most of the issues described but it is not
> >the end of the story.
>
> There have been a lot of comments regarding cpfind that seem to be
> referring to experiences with other software, or suggesting features
> that cpfind already has (or is supposed to have). It would be
> difficult at this point for anyone who has just been reading the PTX
> list to have an accurate idea of what cpfind does.

yes, cpfind needs to give more feedback. I think Thomas just committed
something to that effect. Also cpfind has evolved quite rapidly and things
have changed significantly between 2010.4 and 2011.0.

Yuv


Yuval Levy

unread,
May 22, 2011, 6:50:58 PM5/22/11
to hugi...@googlegroups.com
On May 22, 2011 01:43:01 PM kfj wrote:
> On 22 Mai, 16:10, Yuval Levy <goo...@levy.ch> wrote:
> > that's theory. What I observe in practice are curled rows that taint the
> > rest of the strategy after the rows have been identified and optimized
> > first time.
>
> maybe the curled stripes react well to the horizon straightening
> routine. Just an idea.

the problem is that at this point it's already too late and I am in manual
mode. So I might as well select the image pairs manually and run cpfind on
each of them. If there was a button to trigger cpfind on the cp tab, this
would be an easy way to waste some time clicking away...

... but I can also try to script this in python, can I?
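The pair-by-pair workflow (one project per pair, then a merge, as Carlos described earlier in the thread) could indeed be scripted. A sketch that only constructs the command lines rather than executing them, since exact tool options differ between hugin versions and should be checked against your install:

```python
# Build one small project per adjacent image pair, match each with
# cpfind, then join everything with pto_merge. Commands are constructed
# but not executed; pto_gen/cpfind/pto_merge ship with recent hugin,
# but verify the option spelling against your installed version.

def pair_commands(images):
    cmds = []
    for i in range(len(images) - 1):
        pto = "pair_%03d.pto" % i
        cmds.append(["pto_gen", "-o", pto, images[i], images[i + 1]])
        cmds.append(["cpfind", "-o", pto, pto])
    merge = ["pto_merge", "-o", "merged.pto"]
    merge += ["pair_%03d.pto" % i for i in range(len(images) - 1)]
    cmds.append(merge)
    return cmds

# each entry could then be run with subprocess.check_call(cmd)
```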


> I do a 360x180 with a fisheye as a backdrop and then 'pin' the longer-
> lens shots to it.

I did not have the luxury. There was no fisheye anywhere near me. But yes,
in ideal circumstances, that's the smart thing to do.


> I must admit that I hadn't looked into the cpfind documentation fore a
> while, and that the text that is now in the wiki is actually good and
> helpful. I often simply call the program in question, or call man with
> it, and what you get this way is pretty thin. Sorry for being sloppy.

I skimmed over the wiki page and it looks suspiciously similar to the man
page.


> I would dearly like to understand more of how cpfind is doing what it
> does. I made several attempts to look into the code but I find it hard
> to penetrate.

Yes, I found the code hard to navigate too. When I submitted my patch this
morning, I just peeled away at it, onion layer after onion layer, starting
from the CLI switches, and did not get very far - just enough for a small hack.

The frustrating thing with this is that if I focus hard and apply myself, I
might get to understand it and do some things with it in a few hours. But two
weeks down the road this knowledge is already forgotten and I am back to
square one, trying to understand things that I understood weeks earlier but
that have gone out of my mind in the meantime.


> hsi/hpi is just a first step in the right direction. The helper
> programs (CPGs, warping, blending) can all be made into modules with a
> bit of SWIG magic

we need a list of functionalities already available; a todo-list of
functionality to expose next; a how-to for doing it.


> > Good news for Ubuntu users: I updated the wiki instructions and now
> > everybody with Lucid (10.04) and later can access python scripting; and
> > Philipp updated the nightlies build process so that now the python
> > scripting interface is available to those brave enough to type the
> > following commands in the CLI:
> >
> > sudo add-apt-repository ppa:hugin/hugin-builds
> > sudo add-apt-repository ppa:hugin/nightly
> > sudo apt-get update
> > sudo apt-get install hugin
>
> this sounds exciting! Does it mean that the hugin you get from the
> PPAs is now per default an hsi/hpi-enabled version?

Short answer: YES.

Long answer: there are two sets of binaries in the PPA: "nightlies" and
"builds". At this very moment the unconditional YES applies to the nightlies
only. Hugin-builds is for builds from tarball and the YES will apply to
hugin-builds shortly after the release of 2011.2.0_beta1. Users not fearing
the bleeding edge use the nightlies and have full access now.

Conclusion: we want to move fast on releasing 2011.2.0.

Yuv


kfj

unread,
May 23, 2011, 2:43:44 AM5/23/11
to hugin and other free panoramic software
On 23 May, 00:50, Yuval Levy <goo...@levy.ch> wrote:

> If there was a button to trigger cpfind on the cp tab, this
> would be an easy way to waste some time clicking away...
> ... but I can also try to script this in python, can I?

I very much hope that you can not only try but even succeed in doing
so ;-)

> > I must admit that I hadn't looked into the cpfind documentation fore a
> > while, and that the text that is now in the wiki is actually good and
> > helpful. I often simply call the program in question, or call man with
> > it, and what you get this way is pretty thin. Sorry for being sloppy.
>
> I skimmed over the wiki page and it looks suspiciously similar to the man
> page.

hmm... it does. wonder how many more of my words I'll have to eat in
this thread?

> > I would dearly like to understand more of how cpfind is doing what it
> > does. I made several attempts to look into the code but I find it hard
> > to penetrate.
>
> Yes, I found the code hard to navigate too.  When I submitted my patch this
> morning, I just peeled away at it onion layer after onion layer starting from
> the CLI switches and did not get much far - just enough for a small hack.

> The frustrating thing with this is that if I focus hard and apply myself, I
> might get to understand it and do some things with it in a few hours.  But two
> weeks down the road this knowledge is already forgotten and I am back to
> square zero trying to understand things that I understood weeks earlier but
> that have gone out of my mind in the meantime.

This is why I make such an effort with documenting the code and writing
READMEs. I find that with my own code it takes longer to forget what
it does and how and why, but a year down the line I certainly
appreciate a little help. It gets worse as I get older, as well.

> > hsi/hpi is just a first step in the right direction. The helper
> > programs (CPGs, warping, blending) can all be made into modules with a
> > bit of SWIG magic
>
> we need a list of functionalities already available; a todo-list of
> functionality to expose next; a how-to for doing it.

The easiest route is the one I took in hsi/hpi, which is basically
wrapping the C++ headers. This takes little effort - if you look at
hsi.i, the interface definition, it isn't large at all, and not very
complicated either. The problem with this approach is that the
resulting python module is precisely as difficult to grasp as the C++
header itself. The objects are the same, the methods are, and if you
don't know what they do in C++, being able to access them from Python
doesn't help. If the wrapped C++ code is well-evolved and high-level,
with telling names - and what I found in hugin usually was like that -
your Python module is valuable; otherwise you have to put in more
work.

Kay

Yuval Levy

unread,
May 23, 2011, 11:50:51 PM5/23/11
to hugi...@googlegroups.com
On May 23, 2011 02:43:44 AM kfj wrote:
> On 23 Mai, 00:50, Yuval Levy <goo...@levy.ch> wrote:
> > If there was a button to trigger cpfind on the cp tab, this
> > would be an easy way to waste some time clicking away...
> > ... but I can also try to script this in python, can I?
>
> I very much hope that you can not only try but even succeed in doing
> so ;-)

not tonight, but I will want to write a small iterator that runs cpfind and
adds points to the project for a pair of images.


> > we need a list of functionalities already available; a todo-list of
> > functionality to expose next; a how-to for doing it.
>
> The easiest route is the one I took in hsi/hpi, which is basically
> wrapping the C++ headers. This takes little effort - if you look at
> hsi.i, the interface definition, it isn't large at all, and not very
> complicated either. The problem with this approach is that the
> resulting python module is precisely as difficult to grasp as the C++
> header itself. The objects are the same, the methods are, and if you
> dont know what they do in C++, being able to access them from Python
> doesn't help. If the wrapped C++ code is well-evolved and high-level,
> with teling names - and what I found in hugin usually was like that -
> your Python module is valuable, otherwise you have to put in more
> work.

understood. so which headers would be next on the list of those desirable but
not yet added to hsi.i?

Yuv
