idea for CP deletion


kevin

Dec 27, 2010, 9:14:12 AM12/27/10
to hugin and other free panoramic software
I was reading this thread [0], and the topic of panos with a large
number of images and a large number of CPs came up. I've done a number
of panos that contain between 200-300 images, and the resulting CPs
will be in the thousands. One reason is that when I've tried to delete
down to a reasonable number of CPs, I'll then have images that are
unconnected. Would there be a way to delete CPs so that we keep the
top 5 CPs between images?

An example might make it easier to understand what I'm trying to
say. If I have a pano with lots of images (200+), I'll first run a CP
finder, then fine-tune the CPs, then delete all control points that
score less than 0.8. But this might still leave me with thousands of
CPs. If I raise the threshold to, say, anything under 0.9 after
fine-tuning, then I might delete all CPs between certain images. So if
images 125 and 127 have 25 CPs between them, but they all score 0.85
after fine-tuning, deleting anything under 0.9 removes every CP
between those images.

For panos with many images it'd be nice to delete all CPs except for,
say, the top 5 between images. Say image 125 overlaps with 127, 131,
and 142. After running this type of deletion we'd be left with the
best 5 shared with 127, the best 5 shared with 131, and the best 5
shared with 142. This can be done manually by running ptosort, then
fine-tuning the CPs, and then looking at the CP table - but automated
would be nicer!
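For illustration, the pruning Kevin asks for - keep the best N control
points for every connected image pair, so no pair loses all of its
points - can be sketched in a few lines of Python. This is not an
existing Hugin tool; the data layout (image pair plus a quality score
per CP) is assumed:

```python
from collections import defaultdict

def keep_top_n(cps, n=5):
    """cps: list of (img_a, img_b, score, ...) tuples, where score is
    e.g. the correlation from fine-tuning.  Returns the n best-scoring
    CPs for every image pair, so no connected pair loses all points."""
    by_pair = defaultdict(list)
    for cp in cps:
        # normalize the pair key so (125, 127) and (127, 125) match
        by_pair[tuple(sorted(cp[:2]))].append(cp)
    kept = []
    for points in by_pair.values():
        points.sort(key=lambda cp: cp[2], reverse=True)
        kept.extend(points[:n])
    return kept
```

Run on Kevin's example - 25 points between images 125 and 127, all
scoring 0.85 - this keeps the best 5 instead of deleting all of them.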



[0] <http://groups.google.com/group/hugin-ptx/browse_thread/thread/
d47d94bc1bee7270?hl=en#>

kfj

Dec 27, 2010, 10:01:44 AM12/27/10
to hugin and other free panoramic software


On 27 Dez., 15:14, kevin <ke...@bluelavalamp.net> wrote:

> For large image panos it'd be nice to delete all CPs except for say
> the top 5 between images.  So image 125 overlaps with 127, 131, and
> 142.  So after running this type of deletion we'd be left with the
> best 5 between 127, the best 5 between 131, and the best 5 between
> 142.  This can be done manually by running ptosort, then fine-tuning
> CP points, and then looking at the CP table - but automated would be
> nicer!

You are right, this is an interesting problem. What slightly
complicates this initially straightforward-looking task is that the
pto file doesn't contain information about the 'distance' of a control
point pair, only the points' coordinates in image coordinates. So
there would have to be an additional stage of remapping the CP
coordinates to panorama coordinates to obtain the distance. There is a
tool for doing just that (pano_trafo), so I'll have a look and see if
I can come up with something to do the trick!
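Once both points of a CP pair have been remapped into panorama
coordinates (which is what pano_trafo provides), the 'distance' is
simply the Euclidean distance between the remapped positions. A
trivial sketch, with made-up coordinates:

```python
import math

def cp_distance(p1, p2):
    """Distance between the two points of a CP pair, both given in
    panorama (not image) coordinates as (x, y) tuples."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

# hypothetical remapped positions of one CP pair
print(cp_distance((1024.0, 512.0), (1027.0, 516.0)))  # → 5.0
```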

Kay

Bruno Postle

Dec 27, 2010, 10:24:26 AM12/27/10
to hugi...@googlegroups.com
On 27 December 2010 14:14, kevin <ke...@bluelavalamp.net> wrote:
> I was reading this thread [0] and it came up about panos with large
> number of images and large number of CP.  I've done a number of panos
> that contain between 200-300 images and the resulting CPs will be in
> the thousands.  One reason is because when I've tried to delete down
> to a reasonable number of CPs I'll then have images that are
> unconnected.  Would there be a way to delete CPs so that we keep the
> top 5 CPs between images?

This is more or less what the cpclean -p option does.

--
Bruno

kfj

Dec 27, 2010, 1:34:46 PM12/27/10
to hugin and other free panoramic software


On 27 Dez., 16:24, Bruno Postle <brunopos...@googlemail.com> wrote:

> This is more or less what the cpclean -p options does.

I've nevertheless written the script; it will keep the N best points
for every image pair. But I have an issue with pano_trafo which I need
to resolve first before my script can work. I'll post when it's done.

Kay

kfj

Dec 28, 2010, 6:25:53 AM12/28/10
to hugin and other free panoramic software


On 27 Dez., 15:14, kevin <ke...@bluelavalamp.net> wrote:

> Would there be a way to delete CPs so that we keep the
> top 5 CPs between images?

The script is ready, but it uses a modified version of pano_trafo
(pano_trafo_extended) which you can only have if you compile it from
the source, which I've put in the repo as well. It belongs in the
tools section, alongside pano_trafo. If you can't build from source,
you'll have to wait and see whether my modifications become
mainstream. The script is Python; I wrote it using Python 2.6 on a
Kubuntu 10.10 system. You may have to grab the Python argparse module,
which, I think, isn't yet in the standard library.

http://bazaar.launchpad.net/%7Ekfj/%2Bjunk/script/annotate/head%3A/main/top_cps_only.py
http://bazaar.launchpad.net/%7Ekfj/%2Bjunk/script/annotate/head%3A/main/pano_trafo_extended.cpp

If you're happy with Bruno's approach of removing a 'bad' percentage
instead, take that - it's a fine idea as well and it's already
mainstream ;-)

Kay

Yuval Levy

Dec 28, 2010, 8:47:15 AM12/28/10
to hugi...@googlegroups.com
On December 28, 2010 06:25:53 am kfj wrote:
> wait and see if my modifications maybe become mainstream.

I am inclined to add them to the main repository after 2010.4.0 is
declared final. We'll have to see how to integrate them neatly into
the file tree. I would, of course, prefer that the initiator and main
contributor have access to the repo directly, but this would require
you to sign up for a SourceForge account.


> The script is python, I wrote it using Python 2.6 on a
> Kubuntu 10.10 system; you may have to grab the python argparse module
> which, I think, isn't yet in the standard library.

Confirmed. And indeed I have already updated the Wiki instructions [0].


> If you're happy with Bruno's approach to rather remove a 'bad'
> percentage, take that - it's a fine idea as well and it's already
> mainstream ;-)

both approaches have their merits and deserve to be available in mainstream.

Yuv


[0] <http://wiki.panotools.org/Hugin_Compiling_Ubuntu#Dependencies>


kfj

Dec 28, 2010, 9:36:59 AM12/28/10
to hugin and other free panoramic software


On 28 Dez., 14:47, Yuval Levy <goo...@levy.ch> wrote:
> On December 28, 2010 06:25:53 am kfj wrote:
>
> > wait and see if my modifications maybe become mainstream.
>
> I am inclined to add them to the main repository after 2010.4.0 is declared
> final.  We'll have to see how to integrate neatly in the file tree.  I would,
> of course, prefer to have the initiator and main contributor having access to
> the repo directly, but this would require you to sign up to a SourceForge
> account.

I was referring to my modification of pano_trafo, which hopefully will
replace the current one in precisely the same place in the file tree.
As I mentioned in the other thread about pano_trafo, I'll follow your
proposed route to introduce my version of it into the body of code.

> > The script is python, I wrote it using Python 2.6 on a
> > Kubuntu 10.10 system; you may have to grab the python argparse module
> > which, I think, isn't yet in the standard library.
>
> confirmed.  And indeed I updated already the Wiki instructions [0]

Thanks. The argparse module is supposed to become part of the Python
Standard Library in the future, particularly in the 3.x series; I
think it may be standard there already, which is why I give it
preference over the current, less powerful, mechanism. I couldn't be
bothered to support both.

> > If you're happy with Bruno's approach to rather remove a 'bad'
> > percentage, take that - it's a fine idea as well and it's already
> > mainstream ;-)
>
> both approaches have their merits and deserve to be available in mainstream.

I'm glad you appreciate my contribution

Kay

Yuval Levy

Dec 28, 2010, 12:08:36 PM12/28/10
to hugi...@googlegroups.com
On December 28, 2010 09:36:59 am kfj wrote:
> I'm glad you appreciate my contribution

not only me

https://bugs.launchpad.net/hugin/+bug/685489

Yuv


Rogier Wolff

Dec 28, 2010, 6:29:17 PM12/28/10
to hugi...@googlegroups.com

Allow me to add that I really don't think that removing all the "far"
points is the best way to do this.

Often hugin will find control points in the grass near my feet, and
sometimes one or two near the horizon where it actually matters.

Now the 5 or 10 control points in the grass will "twist" the images so
that the one or two good ones end up among the furthest from their
proper mapping. But in fact they are the ones that matter most.

Throwing them away (well ok, commenting them out) isn't going to help
the pano.

You know what I'd like to do? I'd like to assign weights to the
control points. For example, all control points would start with
weight 10. But then I could set the weight of those in the distance to
100 to increase their influence, or set the weight of those close by
to 1 to reduce it (for an identical effect).
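Such weighting would slot naturally into the optimizer's error term:
instead of summing plain squared CP distances, each squared distance is
multiplied by its weight, so a weight-100 point pulls ten times as hard
as a weight-10 one. A sketch of that weighted error (names and numbers
are hypothetical, not Hugin's optimizer):

```python
def weighted_error(cps):
    """cps: list of (distance, weight) pairs.  Returns the weighted
    sum of squared distances - what an optimizer supporting CP
    weights would minimize."""
    return sum(w * d * d for d, w in cps)

# two grass points (weight 10) vs one horizon point (weight 100):
# the single horizon point now dominates the error term
print(weighted_error([(1.0, 10), (1.0, 10), (2.0, 100)]))  # → 420.0
```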

One step further we'd find different ways to set the weights
automatically. For example the similarity index might come into
play. Or I might be able, as a user, to indicate: the further up an
image the higher the weight.

When I shoot panos, I take portrait pictures. These overlap in a
narrow region with the next image. So I think I'd like about

--sieve2width 2 --sieve2height 5 --sieve1size 3

To have points near both edges, left and right, and a uniform
distribution along the height of the image. If in all ten areas 3
control points are found, 30 is a bit much. On the other hand, it
doesn't always find control points. Hmm. cpfind can't know the
overlapping area, right? So I would need say a 5x5 grid to guarantee
some points in the overlapping area. Oh well. Then we might have
even more control points if things match up well....

Roger.

--
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ

kfj

Dec 29, 2010, 4:50:17 AM12/29/10
to hugin and other free panoramic software


On 29 Dez., 00:29, Rogier Wolff <rew-googlegro...@BitWizard.nl> wrote:

> Allow me to add that I really don't think that removing all the "far"
> points is the best way to do this.

Overall you are probably right. I contributed my script for the
specific purpose of Kevin's initial request - he noticed that when
doing an overall delete-by-distance, some images would be left without
control points altogether, since they couldn't compete with other
image pairs where the situation was better. It is a real problem, I've
been there, I've sympathized, and I felt it worth my while to do
something about it, though my contribution will only become practical
once my modifications to pano_trafo have been accepted, which I hope
they will be. Until then, Bruno's solution ought to solve Kevin's
specific problem. If you look at the purpose of the script in a
different way, you can maybe appreciate its usefulness better: it's
insurance that a certain number of control points will be kept for
each image pair when throwing out less well fitted CPs.

> Often hugin will find control points in the grass near my
> feet. Sometimes one or two near the horizon where it actualy matters.
>
> Now the 5 or 10 control points in the grass will "twist" the images so
> that the one or two good ones are some of the furthest from their
> proper mapping. But in fact they are the ones that matter most.

as far as twisting is concerned, see my final remark

> Throwing them away (well ok, commenting them out) isn't going to help
> the pano.

I totally agree that a simple approach like the one currently taken -
i.e. just looking at the 'fit' or 'distance' - won't help, or may even
be detrimental, in certain situations. But it may work for quite a few
and be better than what there is now. At some point the domain of GUIs
or command line parameters isn't sufficient anymore to cater for
specific needs - this is where scripting begins. Designing the UI, one
walks a fine line between keeping it simple and comprehensible even
for less experienced users, and nevertheless offering powerful enough
features. A scripting interface is provided for those who want more,
if not total, control.

My current endeavours of producing a few simple Python scripts to deal
with the issue are just bait, really - born from my Python background
and my dislike of Perl. It seems to me, though, that Python is ideal
as a scripting language - it's free, it's easily embedded, the path is
well trodden, the syntax is easy... just to name a few advantages. My
peeking and poking in the pto files isn't really what I have in mind
in the longer term; it's because I haven't found an easy way into
hugin's body of code, since that isn't very well documented and, well,
organically grown - to be nice about it - and there isn't a scripting
interface yet, just a bunch of historically grown interfaces:

- the CPG interface, which isn't much to write home about by anyone's
standards, so I've written glue code for that
- pto files, which work but are ugly and dialect-ridden - but I have
written a pto parser to help
- lens.ini files, which are inflexible and a bit obscure
- the interfaces to other tools, which aren't even fully transparent
(at least not to me), so when I want to control these interfaces I
usually have to write glue code as well (as I have done to interface
with panomatic, which obstinately refuses to work with cropped images)

and there may be more than I can think of at short notice.

> You know what I'd like to do? I'd like to assign weights to the
> control points. For example, all control points would start with
> weight 10. But then I can set the weight of those in the distance to
> 100 to increase their weight. Or I can set the weight of those closeby
> to 1 to reduce it. (for an identical effect).

I'm with you there. To facilitate experimentation along these lines,
I'll do the following: In my Python script, I'll provide the set of
control points as a set of objects with more properties than the ones
which can be gleaned from the mere coordinates in the pto. I'll
include the 'distance' and the coordinates in pano space for a start.
Then everyone who's capable of writing a bit of code can interface
with that and flag the points they want deleted.
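An interface like the one Kay describes might look roughly like this -
a sketch only; the class, attribute, and function names here are
invented for illustration, not those of his actual script:

```python
class ControlPoint:
    """One CP pair, enriched with data not present in the pto itself
    (pano-space coordinates and the resulting 'distance')."""
    def __init__(self, img1, xy1, img2, xy2, pano_xy1, pano_xy2, distance):
        self.img1, self.xy1 = img1, xy1
        self.img2, self.xy2 = img2, xy2
        self.pano_xy1, self.pano_xy2 = pano_xy1, pano_xy2
        self.distance = distance   # in panorama space
        self.delete = False        # user code flags points here

def flag_far_points(cps, threshold):
    """Example user hook: flag every CP above a distance threshold
    for deletion, then return the surviving points."""
    for cp in cps:
        if cp.distance > threshold:
            cp.delete = True
    return [cp for cp in cps if not cp.delete]
```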

> One step further we'd find different ways to set the weights
> automatically. For example the similarity index might come into
> play. Or I might be able, as a user, to indicate: the further up an
> image the higher the weight.

As I have remarked previously, UI design is walking a thin line. If we
had a scripting interface, we could leave such more ambitious levels
of control to the domain of optional functionality: plugins. Most
large software packages introduce them at some point; that's one of
the advantages of a scripting interface. And many of them choose
Python (even if they started out with follies like Lisp dialects
initially, if you know who I mean).

> When I shoot panos, I take portrait pictures. These overlap in a
> narrow region with the next image. So I think I'd like about
>
>   --sieve2width 2 --sieve2height 5 --sieve1size 3
>
> To have points near both edges let and right, and a uniform
> distribution along the height of the image. If in all ten areas 3
> control points are found, 30 is a bit much. On the other hand, it
> doesn't always find control points. Hmm. cpfind can't know the
> overlapping area, right? So I would need say a 5x5 grid to guarantee
> some points in the overlapping area. Oh well. Then we might have
> even more control points if things match up well....

More room for scripting experiments. Let me just finish with a remark
I've made over and over: if your lens is well calibrated, you really
don't need that many CPs. Either you use a set with a great number of
CPs to calibrate your lens, or you have a well-calibrated lens and
only need the CPs to nudge your images in place. It shouldn't be
necessary at all to optimize lens parameters with every pano to 'bend'
the images to fit, and if you don't, what do you need so many CPs for?

> Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
> Does it sit on the couch all day? Is it unemployed? Please be specific!
> Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ

That actually made me laugh
Kay

kevin

Dec 29, 2010, 6:01:25 AM12/29/10
to hugin and other free panoramic software

On Dec 28, 6:25 am, kfj <_...@yahoo.com> wrote:

>
> If you're happy with Bruno's approach to rather remove a 'bad'
> percentage, take that - it's a fine idea as well and it's already
> mainstream ;-)
>
> Kay


Thanks for the script! I haven't had a chance to test it out yet, but
I hope to towards the end of this week. I'll let you know.

Kevin

kfj

Dec 29, 2010, 6:07:36 AM12/29/10
to hugin and other free panoramic software


On 29 Dez., 10:50, kfj <_...@yahoo.com> wrote:

> To facilitate experimentation along these lines,
> I'll do the following: In my Python script, I'll provide the set of
> control points as a set of objects with more properties than the ones
> which can be gleaned from the mere coordinates in the pto. I'll
> include the 'distance' and the coordinates in pano space for a start.
> Then everyone who's capable of writing a bit of code can interface
> with that and flag the points they want deleted.

script is up for grabs at

http://bazaar.launchpad.net/%7Ekfj/%2Bjunk/script/annotate/head%3A/main/cp_interface.py

Kay

Yuval Levy

Dec 29, 2010, 7:42:12 AM12/29/10
to hugi...@googlegroups.com
On December 28, 2010 06:29:17 pm Rogier Wolff wrote:
> Throwing [control points] away (well ok, commenting them out) isn't
> going to help the pano.

there should be a better way to enable/disable CPs selectively...


> You know what I'd like to do? I'd like to assign weights to the
> control points.

... and weighting would help in this and many other cases. For example in the
past the wish has been expressed to discern between user-generated CPs and
computer-generated CPs.

instead of commenting out CPs, you would set them to a weight of 0.

Still: I don't think CPs are the ultimate tool for the image alignment
process. They are historically grown: humans would pin down printed
photos to each other before digital stitching came along, and when
digital stitching arrived, its human user interface mimicked that
pre-digital interaction by pinning images to each other with CPs.

The availability of feature detection/matching algorithms enabled the
replication of the pinning process, but there must be a more efficient
way of aligning images that does not require CPs.

Then we can always use CPs as the human-entry interface to the process, or to
try to detect orientation in a situation with less than perfect input (no
orientation data in EXIF).

Yuv


Rogier Wolff

Dec 29, 2010, 8:09:33 AM12/29/10
to hugi...@googlegroups.com
On Wed, Dec 29, 2010 at 07:42:12AM -0500, Yuval Levy wrote:
> On December 28, 2010 06:29:17 pm Rogier Wolff wrote:
> > Throwing [control points] away (well ok, commenting them out) isn't
> > going to help the pano.
>
> there should be a better way to enable/disable CPs selectively...
>
>
> > You know what I'd like to do? I'd like to assign weights to the
> > control points.
>
> ... and weighting would help in this and many other cases. For example in the
> past the wish has been expressed to discern between user-generated CPs and
> computer-generated CPs.

Indeed. And a user-preference can set the default weight for both of
them. Some users trust themselves, others trust the computer
better. :-)

> instead of commenting out CPs, you would set them to a weight of 0.

Right. Or at least very low.

> Still: I don't think CPs are the ultimate tool for the image
> alignment process. They are historically grown, because humans
> would pin down printed photos to each other before digital stitching
> came along; and when digital stitching came along it was the human
> user interface to it mimicking the pre- digital interaction and
> pinning down images to each other by the use of CPs.

What I think should be possible is that you optimize the cross
correlation of, say, a 10x10 area of the image with another image.
This is computationally expensive, so it would only be feasible to do,
for example, within a 5 pixel radius around a point 20 pixels north of
a control point.
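The correlation measure being discussed here (normalized
cross-correlation of two equal-sized patches) can be sketched in plain
Python; this is purely illustrative, not code from any Hugin tool:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel lists.
    Returns 1.0 for patches identical up to gain/offset, near 0 for
    unrelated ones.  Fine texture decorrelates after a shift of only
    a pixel or so, which is why the measure is so shift-sensitive."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```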

I can't think of how to get an initial point to start working from
besides asking the user or doing the feature detection thingy....

But this is WAY beyond addition of an extra field in the
"controlpoint" structure.

kfj

Dec 29, 2010, 1:26:21 PM12/29/10
to hugin and other free panoramic software


On 29 Dez., 14:09, Rogier Wolff <rew-googlegro...@BitWizard.nl> wrote:
> On Wed, Dec 29, 2010 at 07:42:12AM -0500, Yuval Levy wrote:

> > ... and weighting would help in this and many other cases.  For example in the
> > past the wish has been expressed to discern between user-generated CPs and
> > computer-generated CPs.
>
> Indeed. And a user-preference can set the default weight for both of
> them. Some users trust themselves, others trust the computer
> better. :-)

User input is resource-intensive. If I actually set a CP, I'm rather
sure that it belongs there and I trust my vision. I invest time and
effort. This should be noted by the program - and honoured by a high
weight.

> > instead of commenting out CPs, you would set them to a weight of 0.
>
> Right. Or at least very low.

If the feature pops up in hugin, I'll adapt my script ;-)

> > Still: I don't think CPs are the ultimate tool for the image
> > alignment process...

I agree. But what else do we have? I'm asking this because I'm
genuinely curious and at a loss here.

>
> What I think should be possible is that you optimize the cross
> correlation of say a 10x10 area of the image with another image. This
> is computationally expensive. This would only be feasable to do for
> example for a 5 pixel radius of a point 20 pixels north of a
> controlpoint.

I've tinkered with cross-correlation. It is extremely sensitive to
small shifts, so you have to work on interpolated data and nudge them
by small increments until you land in the minimum (if you're lucky).
Cross-correlation is also very sensitive to (even small) distortions.
If you wrestle with it for a while, you become so frustrated that you
have to use SIFT feature points for a while to take heart again - at
least that's my experience. It sounds like a good idea until you
actually try to get it to work. There is an interesting article,
Mikolajczyk and Schmid: 'A Performance Evaluation of Local
Descriptors', where cross-correlation is discussed together with SIFT
and some others:

http://lear.inrialpes.fr/pubs/2005/MS05/mikolajczyk_pami05.pdf

Feature detectors may still well be the way to go - after all it's a
limited understanding of what they do if one just reduces them to the
'control point' one can derive from them.

> I can't think of how to get an initial point to start working from
> besides asking the user or doing the feature detection thingy....

Or use two feature detectors. My idea would be to use the fastest
available feature detector, pick its best few results in an ROI, and
then verify that these locations really do correspond, using a
heavy-duty, most-likely-to-succeed detector.

Kay

Yuval Levy

Jan 2, 2011, 9:36:59 AM1/2/11
to hugi...@googlegroups.com
On December 29, 2010 01:26:21 pm kfj wrote:
> On 29 Dez., 14:09, Rogier Wolff <rew-googlegro...@BitWizard.nl> wrote:
> > On Wed, Dec 29, 2010 at 07:42:12AM -0500, Yuval Levy wrote:
> > > Still: I don't think CPs are the ultimate tool for the image
> > > alignment process...
>
> I agree. But what else do we have? I'm asking this because I'm
> genuinely curious and at a loss here.

first and foremost is a calibrated lens. then you only deal with image
orientation and not with imperfections in the image.


> > I can't think of how to get an initial point to start working from
> > besides asking the user or doing the feature detection thingy....
>
> or use two feature detectors. My idea would be to use the fastest
> available feature detector, pick it's best few results in a ROI and
> then verify that these locations are indeed corresponding with a heavy-
> duty, most-likely-to-succeed detector.

or an image pyramid, starting detection on the low scale version of the
image...

Yuv


Yuval Levy

Jan 2, 2011, 9:37:00 AM1/2/11
to hugi...@googlegroups.com
On December 29, 2010 04:50:17 am kfj wrote:
> my contribution will only become practical
> once my modifications to pano_trafo have been accepted

done (yesterday).


> If you look at the purpose of the script in a
> different way, you can maybe appreciate it's usefulness better: it's
> an insurance that a certain number of control points will be kept for
> each image pair when throwing out less well fitted CPs.

makes sense and is indeed the weakness of keeping the top X% overall.


> At some point the domain of GUIs or
> command line parameters isn't sufficient anymore to cater for specific
> needs - this is where scripting begins.

While I agree with you that your script is useful, I disagree with the above
statement. This is not about GUI vs. CLI. It is about workflow.

Our GUI represents the workflow at a very superficial level:
load > generate cp > align > stitch

One level lower there are multiple steps involved, and problems happen
inevitably when we try to execute those multiple steps simultaneously.

For example, we use the optimizer to:
- correct for lens distortion
- align images on the panosphere
- correct for variations (intentional or unintentional) of the viewpoint

What we need are flows, and a script is indeed a very powerful way to express
and modify flows; and we need a strategy playing conditionally across flows.

In the case of CP generation, Thomas has introduced a simple but very powerful
flow based on the assumption that the images are taken in sequence.

We need a similar approach to the rest of the process, probably even
going back and forth at the superficial level a few times, simulating
what an expert user would do.

When designing these flows (or scripts), we need to ask ourselves:
what key metrics determine whether the step is necessary? What key
metrics determine whether the step succeeded or failed?

From the very start (and completely spontaneous, so there may be
errors):

1. Is EXIF data available? If not, the "smartsisstant" is screwed:
display a message asking the user about their pre-processing workflow
(in most cases we can assume digital input nowadays).

2. Based on the EXIF lens identification, FOV, and F-stop: do I have
lens correction parameters stored? If yes, load them. If not, suggest
lens calibration (as a separate process) and recommend the user
interrupt the stitch process until the lens is calibrated.

3. Is there a pattern to the exposure values? If they are all the
same, this is a single panorama. If they are regular (e.g. -2/0/+2),
this is a user who knows how to shoot HDR. If they are all over the
place, it can be a panorama shot in automatic mode (there is
additional info in EXIF too), or there may be one or two stray extra
exposures (e.g. for a door or window in the scene). And I have not yet
considered the case that multiple panoramas could be in the same set
of images.

and so on, and so on.
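Step 3 of this sketch - classifying the exposure pattern - could look
roughly like the following; the heuristic and its thresholds are
illustrative only, not part of any Hugin assistant:

```python
def classify_exposures(evs, tol=0.01):
    """Guess the shooting pattern from a list of EV values."""
    # collapse near-equal EVs before counting distinct values
    distinct = sorted(set(round(ev / tol) * tol for ev in evs))
    if len(distinct) == 1:
        return 'single panorama'
    k = len(distinct)
    # regular bracket: a small set of EVs repeats, e.g. -2/0/+2
    if k <= 5 and len(evs) >= 2 * k and len(evs) % k == 0:
        bracket = sorted(evs[:k])
        if all(sorted(evs[i:i + k]) == bracket
               for i in range(0, len(evs), k)):
            return 'bracketed (HDR) panorama'
    return 'irregular: auto exposure or stray extra shots'
```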


> Designing the UI, one walks a
> fine line between keeping it simple and comprehensible even for less
> experienced users, and nevertheless offering powerful enough features.
> A scripting interface is provided for those who want more, if not
> total, control.

this is not (yet) about designing the UI. It is one level deeper.


> My current endeavours of producing a few simple Python scripts to deal
> with the issue are just bait, really - and born from my Python
> background and my dislike of perl.

likes and dislikes aside, I agree with you that Python is the right way to go.
Python bindings into the Hugin codebase are long time on my wishlist.


> there isn't a scripting interface yet

well, theoretically you can shell-script or Perl-script or
Python-script all of the CLI tools. That's not optimal, but sufficient
for a start.

The 'old way' of doing things was to script something for the CLI
tools and, if it made sense, wait for a coder to implement it in the
main code and speed it up. With Python bindings that second part would
not be necessary, since your Python script would share the same memory
and variables as the code, without going through slow and expensive
read/write operations.


> To facilitate experimentation along these lines,
> I'll do the following: In my Python script, I'll provide the set of
> control points as a set of objects with more properties than the ones
> which can be gleaned from the mere coordinates in the pto. I'll
> include the 'distance' and the coordinates in pano space for a start.
> Then everyone who's capable of writing a bit of code can interface
> with that and flag the points they want deleted.

Good. Next, among those properties, we'd need to identify hand-picked
versus generated CPs, and here is the limitation of the approach: we
need to interface into the code and extend that structure there too.
And the PTO file, of course.

currently a CP line in the PTO file looks like:

c n0 N1 x2319.75430140533 y1306.89102657256 X635.409481924115 Y1295.02701452379 t0

what would it take to expand it to:

c n0 N1 x2319.75430140533 y1306.89102657256 X635.409481924115 Y1295.02701452379 t0 s0 a0 w100

with
s: source (0=unknown, 1=human, 2=cpfind, etc.)
a: active (0=yes, 1=no)
w: weight (0 to 100)

the big question is: would the tools that currently parse the PTO script
stumble on this, or is it safe to add?

if it is safe to add to the specs, let's do it. Then we can go about adding
support for the s/a/w script parameters, all while making the assumption that
they may or may not be present.
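Whether existing tools stumble depends on how leniently they tokenize.
A parser that maps each token's leading letter to its value and
simply carries unknown keys along would take s/a/w in stride. A sketch
of such tolerant parsing (this is not Hugin's actual C parser):

```python
def parse_cp_line(line):
    """Parse a pto 'c' line into a dict of field letter -> value,
    tolerating unknown fields such as the proposed s/a/w."""
    tokens = line.split()
    assert tokens[0] == 'c', 'not a control point line'
    fields = {}
    for tok in tokens[1:]:
        key, val = tok[0], tok[1:]
        fields[key] = float(val) if '.' in val else int(val)
    return fields

cp = parse_cp_line('c n0 N1 x2319.75 y1306.89 X635.40 Y1295.02 t0 s0 a0 w100')
print(cp['w'])  # → 100
```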

...

> More room for scripting experiments. Let me just finish with a remark
> I've made over and over: if your lens is well calibrated, you really
> don't need that many CPs. Either you use a set with a great number of
> CPs to calibrate your lens, or you have a well-calibrated lens and
> only need the CPs to nudge your images in place. It shouldn't be
> necessary at all to optimize lens parameters with every pano to 'bend'
> the images to fit, and if you don't, what do you need so many CPs for?

EXACTLY! Every time I see users coming back with thousands of CPs I
wonder if this is another meaningless attempt to assemble the largest
quantity of boring pixels, or an exercise in global warming by CPU
strain.

Theory says that three strategically placed CPs per image pair are
enough when the lens is already calibrated. I work with five CPs per
image pair, and for small projects (e.g. full sphericals with six
fisheye shots) repeating the left-click-right-click dance on the CP
tab is equally fast and yields better results than any CP generator
I've tried. A CP generator becomes useful only when dealing with a
large number of input images.

Yuv


kfj

Jan 2, 2011, 1:38:46 PM1/2/11
to hugin and other free panoramic software


On 2 Jan., 15:37, Yuval Levy <goo...@levy.ch> wrote:

> currently a CP line in the PTO file looks like:
>
> c n0 N1 x2319.75430140533 y1306.89102657256 X635.409481924115
> Y1295.02701452379 t0
>
> what would it take to expand it to:
>
> c n0 N1 x2319.75430140533 y1306.89102657256 X635.409481924115
> Y1295.02701452379 t0 s0 a0 w100

Just a quick answer to this one: my Python pto parser would accept it,
but the C parser used by hugin would probably throw a fit, so you'd
have to change code in there. But there is another way. It's what I
call an 'ugly duckling' in my python pto parser: you could introduce a
line in the guise of a comment, preceding the CP line, looking like

#-hugin cpWeight s0 a0 w100

AFAIK this is established practice for introducing additional
('noncritical') values into the pto without breaking compatibility
with other pto-using programs.

Kay

kevin

unread,
Jan 2, 2011, 2:33:31 PM1/2/11
to hugin and other free panoramic software
Kay,

Thanks for the script, it works great! It's helped in speeding up CP
optimization greatly.

kfj

unread,
Jan 2, 2011, 2:37:36 PM1/2/11
to hugin and other free panoramic software
You're welcome! By now my change to pano_trafo has also made it into
the repo. If you grab the latest code and build it, you'll get a version
of pano_trafo that can be called without the image number on the command
line and instead expects image numbers before the coordinates.

Kay

Bruno Postle

unread,
Jan 2, 2011, 6:34:51 PM1/2/11
to hugin and other free panoramic software
On Sun 02-Jan-2011 at 10:38 -0800, kfj wrote:
>
>#-hugin cpWeight s0 a0 w100
>
>AFAIK this is established practice for introducing additional
>('noncritical') values into the pto without breaking compatibility
>with other pto-using programs.

It is used for Hugin GUI metadata that doesn't materially alter the
project.

Your extra info for control points would presumably modify the
behaviour of the optimiser, so they need to be 'normal'
control-point parameters instead.

--
Bruno

Yuval Levy

unread,
Jan 2, 2011, 9:47:48 PM1/2/11
to hugi...@googlegroups.com
On January 2, 2011 01:38:46 pm kfj wrote:
> the C parser used by hugin would probably throw a fit

you'll be surprised to know that Hugin took a hand-crafted pto file with s/a/w
parameters without any problem, and ran the optimization as if they were not
there. So from that perspective, we can add them and then add support for them
in the relevant tools.


> #-hugin cpWeight s0 a0 w100
>
> AFAIK this is established practice for introducing additional
> ('noncritical') values into the pto without breaking compatibility
> with other pto-using programs.

Really? What other pto-using programs? And what compatibility?

I tried feeding a simple PTO file straight out of Hugin to PToptimizer, with
which the specs are supposedly 'shared'.

It chokes... already on the p line, so compatibility is broken anyway.

Even after removing the E/R/S parameters from the p line, it chokes again on
the Eev in the i line.

And: what you call 'established practice' is just a confusing hack. Let's do
things right.


A. PUT THE HISTORY OF THE PTO FORMAT TO REST

It was once derived from the original panotools syntax specified by Helmut
Dersch. In that sense it shares some DNA and a common ancestor with the
syntax of other PT based tools. But they have all evolved in different
directions and it can be safely assumed that:
1. the PTO format is specified by the Hugin project and for Hugin.
2. it has evolved to fit the growing needs of Hugin (e.g. adding photometric
variables, translation variables, etc.)
3. we are on our own and can keep it growing and try to steer the growth
toward a sensible solution


B. SPECIFYING THE NEW NEEDS

1. there is a case/request to activate/deactivate CPs
2. there is a case to weight CPs
3. there is a case to discern the source of CPs (human or computer generated)


C. EXTENDED SPECIFICATION OF THE PTO FORMAT TO MATCH THESE NEEDS

New variables on the 'c' lines:
- s: source (0=unknown, 1=human, 2=cpfind, etc.)
- a: active (0=yes, 1=no)
- w: weight (0 to 100)

Note that these params are optional and have sensible defaults for
backward compatibility if a parser is properly coded, maybe with the exception
of weight - should it be in a range from 1 to 100 to avoid all CPs having a
weight of 0?

If there are no objections I will update the specs both in Hugin and in
Libpano to make developers aware of things to come.

Ideally

http://hugin.hg.sourceforge.net/hgweb/hugin/hugin/file/default/doc/nona.txt
http://panotools.svn.sourceforge.net/viewvc/panotools/trunk/libpano/doc/stitch.txt
http://panotools.svn.sourceforge.net/viewvc/panotools/trunk/libpano/doc/Optimize.txt

should share a single, synchronized and coordinated spec.


D. PARSING A PTO FILE (or a PT dialect script)

I don't know how many parsers are out there that parse pto files. I
hope they are all designed as robustly as your Python parser or
Hugin's own parser. A good parser should be able to ignore noise such as
extra parameters that it does not need; or have a 'strict' syntax option to
throw warnings with a switch to disable those warnings. Feature request for
Panotools.

https://bugs.launchpad.net/panotools/+bug/696673

I don't know if Bruno's Perl tools are affected?


E. MAKING SENSE OF THE NEW PARAMETERS

It may or may not make sense for different tools to implement the new
parameters. Once they are specified and the parsers are robust enough, there
is no hurry. You can start playing around with these parameters in your
Python scripts; I can add the facility into the Hugin GUI to discern which
points are manually entered by the user; and so on. Eventually they will make
sense in the big scheme of things.

Yuv


Rogier Wolff

unread,
Jan 3, 2011, 5:19:53 AM1/3/11
to hugi...@googlegroups.com
On Sun, Jan 02, 2011 at 09:37:00AM -0500, Yuval Levy wrote:
> > More room for scripting experiments. Let me just finish with a remark
> > I've made over and over: if your lens is well calibrated, you really
> > don't need that many CPs. Either you use a set with a great number of
> > CPs to calibrate your lens, or you have a well-calibrated lens and
> > only need the CPs to nudge your images in place. It shouldn't be
> > necessary at all to optimize lens parameters with every pano to 'bend'
> > the images to fit, and if you don't, what do you need so many CPs for?
>
> EXACTLY! Every time I see users coming back with thousands of CPs I wonder if
> this is another meaningless attempt to assemble the largest quantity of boring
> pixels or if it is an exercise in global warming by CPU strain.
>
> Theory is that three strategically placed CPs per image pair are enough when
> the lens is already calibrated. I work with five CPs per image pair, and for
> small projects (e.g. full sphericals with six fisheye shots) repeating the
> left-click-right-click dance on the CP tab is equally fast and yields better
> results than any CP generator I've tried before. A CP generator becomes
> useful only when dealing with a large number of input images.

But do you find this fulfilling work? Setting aside the fact that
you've become convinced that the algorithms of today generate worse
results than you, wouldn't you prefer to have the computer do this
boring work for you?

You apparently do the "pano-with-six-fisheye-images" dance regularly.
Calibrating the lens and then just using that is much easier.

I have a zoomlens.

Focal length: 18-135mm
F-stop: f/2.0- f/32
focus: 1.5m - inf

So worst case I have to have a 3D matrix of lens calibration settings
for two analog and one digital parameter (one of which doesn't go
into the EXIF data). I think that calibrating the lens on the current
project is a reasonable compromise.


Roger.

--
** R.E....@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*

Rogier Wolff

unread,
Jan 3, 2011, 6:27:29 AM1/3/11
to hugi...@googlegroups.com

On Sun, Jan 02, 2011 at 09:47:48PM -0500, Yuval Levy wrote:
> > #-hugin cpWeight s0 a0 w100

Hey, do you see a need for an active-weight-zero control point?
Do you see a need for an inactive weight-not-zero control point?

If you set the weight to zero for control points that are not active,
the optimizer just has to add "cp->weight * " to the line that says

toterr += this_distance;

How about: negative weights are "inactive"?

Then my suggestion already doesn't work. Maybe then it would even be
better from a software engineering standpoint to just do it your way.


Why do you think weight is an integer?

When you start algorithmically assigning weights, it would make sense
to allow fractional weights.

What about setting the weight to "1" by default?

Then you can lower the weight by using values 0-0.99 and increase it
by using values above 1. So if you manually tag a control point as
being very valid, you can increase it outside the normal range.

There are two cases where above-default weights are useful. In the
first case, you simply have a very well defined control point. Say a
perfectly black 90 degree angle on a white background. That will match
up very well with another photo. So you have more confidence in this
point than the noisy control points that cpfind found in a perfectly
blue sky. A weight of say 2 or 3 may be appropriate here.

In the second case, you have some feature that ends up looking
crooked even if it's just a few pixels off. So you want that control
point to carry more weight than the others. The optimiser should be
able to pivot around that point, but land the parameters such that it
aligns almost perfectly. A weight of 100 might be appropriate here
(with my default-is-1.0 scheme).
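In code, the whole scheme stays about as small as the one-liner above suggests. A Python sketch of the weighted error sum (field names are illustrative, not Hugin's actual data structures):

```python
def total_error(control_points):
    """Weighted sum of control-point distances: toterr += w * dist.

    Assumed convention: weight defaults to 1.0; an inactive flag (or a
    weight of 0) removes a point's influence without deleting it.
    """
    total = 0.0
    for cp in control_points:
        if not cp.get("active", True):
            continue  # deactivated: keep the point, skip its error
        total += cp.get("w", 1.0) * cp["dist"]
    return total
```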

Bruno Postle

unread,
Jan 3, 2011, 9:49:21 AM1/3/11
to Hugin ptx
On Sun 02-Jan-2011 at 21:47 -0500, Yuval Levy wrote:
>
>If there are no objections I will update the specs both in Hugin and in
>Libpano to make developers aware of things to come.

There is no need to update the docs until the feature exists, the
implementation often determines the details anyway. e.g. weights
are unlikely to be integers as already noted.

Note that an existing hack to simulate weighting is to simply
duplicate control points, it works ok.

>I don't know how many parsers there are out there that do parse pto files. I
>hope for them that they are designed as robustly as your Python parser or
>Hugin's own parser. A good parser should be able to ignore noise such as
>extra parameters that it does not need; or have a 'strict' syntax option to
>throw warnings with a switch to disable those warnings. Feature request for
>Panotools.
>
>https://bugs.launchpad.net/panotools/+bug/696673
>
>I don't know if Bruno's Perl tools are affected?

Panotools::Script is ok, it will just ignore any unknown parameters
so long as they are implemented using the existing syntax.

I don't think there is a problem here, if extra data needs to be
added to the Hugin .pto format then it can be added, the other tools
can catch up. See the addition of photometric parameters and masks
to see how it can be done painlessly.

--
Bruno

Message has been deleted

Jeffrey Martin

unread,
Jan 4, 2011, 1:06:15 PM1/4/11
to hugi...@googlegroups.com
Bruno,

If I'm not mistaken, the -p in cpclean only optimizes each pair of images according to the standard deviation you specify - so you are still at risk of removing all CPs between images, right?


Bruno Postle

unread,
Jan 4, 2011, 5:52:11 PM1/4/11
to Hugin ptx
On Tue 04-Jan-2011 at 09:57 -0800, Jeffrey Martin wrote:
>i don't see a -p option for cpclean here
>http://wiki.panotools.org/Cpclean
>
>where is this documented?

The wiki pages can be just a general description. Usually the
definitive guide to any of these tools is given on the command-line,
e.g:

$ cpclean -h
cpclean: remove wrong control points by statistic method
cpclean version 2010.4.0.a26eaba2eda3

Usage: cpclean [options] input.pto

CPClean uses statistical methods to remove wrong control points

Step 1 optimises all images pairs, calculates for each pair mean
and standard deviation and removes all control points
with error bigger than mean+n*sigma
Step 2 optimises the whole panorama, calculates mean and standard deviation
for all control points and removes all control points with error
bigger than mean+n*sigma

Options:
-o file.pto Output Hugin PTO file. Default: '<filename>_clean.pto'.
-n num distance factor for checking (default: 2)
-p do only pairwise optimisation (skip step 2)
-w do optimise whole panorama (skip step 1)
-h shows help

This is definitive because this is the bit written by the
programmer. If there is more relevant but technical info then it
could also be in a man page or on the wiki (or both sometimes).
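The mean+n*sigma test at the heart of both steps is easy to sketch in Python (illustrative only, not cpclean's actual code):

```python
from statistics import mean, stdev

def prune_outliers(errors, n=2.0):
    """Keep only errors within mean + n*sigma, as cpclean's help describes."""
    if len(errors) < 2:
        return list(errors)  # too few points for a meaningful sigma
    m, s = mean(errors), stdev(errors)
    return [e for e in errors if e <= m + n * s]
```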

--
Bruno

Bruno Postle

unread,
Jan 4, 2011, 5:52:23 PM1/4/11
to Hugin ptx

No it should only remove some of the points with an error-distance
greater than the average, you will always have some points left.

--
Bruno

kfj

unread,
Jan 5, 2011, 4:05:28 AM1/5/11
to hugin and other free panoramic software


On 4 Jan., 23:52, Bruno Postle <br...@postle.net> wrote:
>     Usage:  cpclean [options] input.pto
>
>     CPClean uses statistical methods to remove wrong control points

Bruno, maybe you can help. You may have noticed my quick-shot Python
script to remove all but the N 'best' CPs for each image pair. Now
while in principle I'm happy with it, there is a fundamental problem:
my 'distance' calculation is wrong. I use the coordinate distance in
the panorama, but of course this strictly only works for mosaics, and
otherwise I should use the distance on the pano sphere. I could make a
temporary pano with equirect output and 360x180 fov, then I could
calculate the distance on the sphere from the polar coordinates. But
of course this is clumsy; maybe you can think of a more elegant way?
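For what it's worth, the selection itself is the easy half; the open question is only which distance to sort by. A sketch (Python; 'dist' stands for whatever error measure is finally used):

```python
from collections import defaultdict

def keep_best_n(cps, n=5):
    """Keep the n lowest-error control points per image pair.

    Each cp is assumed to be a dict with image indices 'n', 'N' and a
    precomputed error 'dist' - a sketch of the deletion idea from this
    thread, not my actual script.
    """
    by_pair = defaultdict(list)
    for cp in cps:
        by_pair[(cp["n"], cp["N"])].append(cp)
    kept = []
    for pair_cps in by_pair.values():
        kept.extend(sorted(pair_cps, key=lambda c: c["dist"])[:n])
    return kept
```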

Kay

Bruno Postle

unread,
Jan 5, 2011, 6:11:59 PM1/5/11
to hugin and other free panoramic software
On Wed 05-Jan-2011 at 01:05 -0800, kfj wrote:
>On 4 Jan., 23:52, Bruno Postle <br...@postle.net> wrote:
>>     Usage:  cpclean [options] input.pto
>>
>>     CPClean uses statistical methods to remove wrong control points
>
>Bruno, maybe you can help. You may have noticed my quick-shot Python
>script to remove all but the N 'best' CPs for each image pair. Now
>while in principle I'm happy with it, there is a fundamental problem:
>my 'distance' calculation is wrong. I use the coordinate distance in
>the panorama, but of course this strictly only works for mosaics, and
>otherwise I should use the distance on the pano sphere.

You could use PToptimizer which writes all this statistical
information about control point errors to the project file. Though
you have to clean a Hugin project file to make it acceptable to the
panotools parser in PToptimizer.

There are two tools by Iouri Ivliev in Panotools::Script to help
with this, unfortunately I have never had a chance to play with
them (it would be really useful if somebody could evaluate all this
stuff and report their experiences):

ptsed is a command-line editor for .pto projects, amongst other
things it will clean a .pto file so it can be read by
panotools/PTmender/PToptimizer.

ptscluster is another statistical filter for control points (like
ptoclean and cpclean), it uses the information in the PToptimizer
output and tries to leave a spread of control points rather than
just deleting the 'worst'.

>I could make a temporary pano with equirect output and 360X180 fov,
>then I could calculate the distance on the sphere from the polar
>coordinates. But of course this is clumsy; maybe you can think of a
>more elegant way?

You could reimplement a panorama transformation and measure the
angular distance directly, this isn't so difficult (there is one of
these in Panotools::Script used by ptoclean).

--
Bruno

kfj

unread,
Jan 6, 2011, 1:03:18 PM1/6/11
to hugin and other free panoramic software


On 6 Jan., 00:11, Bruno Postle <br...@postle.net> wrote:

> You could use PToptimizer which writes all this statistical
> information about control point errors to the project file.  Though
> you have to clean a Hugin project file to make it acceptable to the
> panotools parser in PToptimizer.

Bruno, thank you for your advice. I have started out on a new project
towards the goal of eventually providing a scripting interface for
hugin, prompted by the awkwardness of only interfacing with the pto,
when all the infrastructure needed is already there in hugin. You may
want to have a look at

http://groups.google.com/group/hugin-ptx/browse_thread/thread/6e0da57a1b1aefe9#

I've run into another problem with finding the CP errors here - if I
call the optimizer on the Panorama object, the errors are filled in,
but now I wonder if there isn't a way to just ask for the calculation
of the distances without actually optimizing the CPs? After all, if
the transformations for the images are known, the distances should be
calculable without optimization. cpclean also calls the optimizer,
probably because it had the same problem?

Kay

Bruno Postle

unread,
Jan 6, 2011, 5:53:00 PM1/6/11
to hugin and other free panoramic software
On Thu 06-Jan-2011 at 10:03 -0800, kfj wrote:
>
>Bruno, thank you for your advice. I have started out on a new project
>towards the goal of eventually providing a scripting interface for
>hugin

>I've run into another problem with finding the CP errors here - if I


>call the optimizer on the Panorama object, the errors are filled in,
>but now I wonder if there isn't a way to just ask for the calculation
>of the distances without actually optimizing the CPs?

Sorry, I'm not familiar with this code. Hugin updates all the
control point distances after optimisation and on initially loading
a project (so it isn't necessary to run the optimiser to get this
info).

--
Bruno

Pablo d'Angelo

unread,
Jan 6, 2011, 7:37:36 PM1/6/11
to hugi...@googlegroups.com
Am 06.01.2011 19:03, schrieb kfj:
>
> Bruno, thank you for your advice. I have started out on a new project
> towards the goal of eventually providing a scripting interface for
> hugin,

that would be really nice! I once tried with boost::python, but I hit
some strange roadblocks, which prevented me from creating a useful
interface...

> I've run into another problem with finding the CP errors here - if I
> call the optimizer on the Panorama object, the errors are filled in,
> but now I wonder if there isn't a way to just ask for the calculation
> of the distances without actually optimizing the CPs? After all, if
> the transformations for the images are known, the distances should be
> calculable without optimization. cpclean also calls the optimizer,
> probably because it had the same problem?

I don't think there is a ready-made function for computing the distances
inside the hugin source. The distances can also be computed using the
PTools::Transform objects: transform both points from input image
coordinates into equirect panorama space and compute the great-circle
distance (haversine formula) there. Note that the distances depend on
the output image projection and size.
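A sketch of that calculation (Python; assumes both points have already been transformed to yaw/pitch in degrees in equirect panorama space):

```python
from math import radians, degrees, sin, cos, asin, sqrt

def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Great-circle distance in degrees via the haversine formula."""
    l1, p1, l2, p2 = map(radians, (yaw1, pitch1, yaw2, pitch2))
    h = sin((p2 - p1) / 2) ** 2 + cos(p1) * cos(p2) * sin((l2 - l1) / 2) ** 2
    return degrees(2 * asin(sqrt(h)))
```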

ciao
Pablo

kfj

unread,
Jan 7, 2011, 4:04:57 AM1/7/11
to hugin and other free panoramic software


On 7 Jan., 01:37, Pablo d'Angelo <pablo.dang...@web.de> wrote:

> > Bruno, thank you for your advice. I have started out on a new project
> > towards the goal of eventually providing a scripting interface for
> > hugin,

> that would be really nice! I once tried with boost::python, but I hit
> some strange roadblocks, which prevented me from creating a useful
> interface...

I looked into using Boost and bjam, and I also considered pyrex and
Cython. SWIG won out in the end, even though there are a few valid
points of criticism. The situation we're facing here is interfacing
with a complete preexisting software, so we don't want to create
scripting language modules from scratch, but we want to wrap the
existing code to become accessible to the scripting language - and,
eventually, to enable the scripting language to accept hugin data
types to enable the creation of plugins and the use of the scripting
'glue' to pass hugin objects around in-memory. SWIG has quite
extensive support for C++ as well:

http://www.swig.org/Doc1.3/SWIGPlus.html#SWIGPlus

It seems to be mature and widely used software, and, as I pointed
out, it's well-documented, so well indeed that it took me only half a
day to get a prototype Python module up and running that linked in
whatever hugin functionality a program from the 'tools' section would
access - and that includes the time it took me to convince cmake to
build the module (sigh).

> I don't think there is a ready made function for computing the distances
> inside the hugin source. The distances can also be computed using the
> PTools::Transform objects (transform both points from input image
> coordinates into panorama space equirect and compute the great circle
> distance (haversine formula) there. Note that the distances depend on
> the output image projection and size.

I feel I've opened a can of worms here. The problem is this: If you
are creating a 'proper' panorama, where the model of a central
viewpoint and a surrounding pano sphere applies, calculating the
distance on a great circle on the pano sphere is the correct way to
go. This can obviously be achieved (and cpclean does it that way) by
setting the output projection to equirect and taking it from there.
But what about mosaics? Imagine a mosaic of a 100m wall full of
graffiti. Obviously, you want to measure CP error as standard flat
distance sqrt(x*x+y*y) here. So you'd have to measure your distance
from coordinates on the output rectilinear image, and the approach to
just convert to equirect and use that for distance calculation would
yield totally wrong results.

Apart from the output projection being set to rectilinear and the X,
Y and Z parameters hinting in that direction, there isn't really a
clear indication that the output is a mosaic (correct me if I'm wrong)
and the underlying engine is blissfully unaware of the distinction.

Kay

kfj

unread,
Jan 7, 2011, 4:07:41 AM1/7/11
to hugin and other free panoramic software


On 6 Jan., 23:53, Bruno Postle <br...@postle.net> wrote:

> Sorry, I'm not familiar with this code.  Hugin updates all the
> control point distances after optimisation and on initially loading
> a project (so it isn't necessary to run the optimiser to get this
> info).

If you could name the routine hugin calls when it updates the CP
distances 'on initially loading a project' I might be able to call
that. That would save me the hassle of peeking around in the code
myself ;-)

Kay

T. Modes

unread,
Jan 7, 2011, 1:06:31 PM1/7/11
to hugin and other free panoramic software
calcCtrlPointErrors

kfj

unread,
Jan 7, 2011, 3:37:05 PM1/7/11
to hugin and other free panoramic software


On 7 Jan., 19:06, "T. Modes" <Thomas.Mo...@gmx.de> wrote:
> calcCtrlPointErrors

Thanks, that was what I was looking for!

Kay

Tom Sharpless

unread,
Jan 8, 2011, 12:07:10 PM1/8/11
to hugin and other free panoramic software
Hey Yuv,

On Jan 2, 9:47 pm, Yuval Levy <goo...@levy.ch> wrote:
> B. SPECIFYING THE NEW NEEDS
>
> 1. there is a case/request to activate/deactivate CPs
> 2. there is a case to weight CPs
> 3. there is a case to discern the source of CPs (human or computer generated)
>
I would like to add

4. case to mark sets of CPs to be used for morph-to-fit.

This potentially very useful function deserves to be revived.
Dersch's original implementation is too limited and hard to use, but
with a better engine and a good UI, morph-to-fit could be a real
parallax-killer.

Best,
Tom

Yuval Levy

unread,
Jan 9, 2011, 12:05:01 AM1/9/11
to hugi...@googlegroups.com
On January 3, 2011 09:49:21 am Bruno Postle wrote:
> On Sun 02-Jan-2011 at 21:47 -0500, Yuval Levy wrote:
> >If there are no objections I will update the specs both in Hugin and in
> >Libpano to make developers aware of things to come.
>
> There is no need to update the docs until the feature exists, the
> implementation often determines the details anyway.

this is the wrong way of doing things. first write the specs, then
implement them. as an extra bonus, well-written specs are the documentation
that often lacks^H^H^H^H^Hfollows implementation.


> Note that an existing hack to simulate weighting is to simply
> duplicate control points, it works ok.

Hack. Hugin also has a function to remove duplicate CPs. I already see the
unpredictable results from improperly mixing the two.


> Panotools::Script is ok, it will just ignore any unknown parameters
> so long as they are implemented using the existing syntax.

good to know.


> I don't think there is a problem here, if extra data needs to be
> added to the Hugin .pto format then it can be added, the other tools
> can catch up. See the addition of photometric parameters and masks
> to see how it can be done painlessly.

thanks. I went by the book [0], posted [1] to panotools-devel and if there is
no objection I will unify the specs. Once the specs are unified there will be
work to do to bring the parser(s) up to date; and to actually implement the
new parameters. I know I will move forward at a snail's pace, so if somebody is
faster than me, be my guest and overtake me. Otherwise, I will appreciate
the mentorship of more experienced devs as I am making my mistakes while
trying to implement this.

Yuv

[0]
http://panotools.svn.sourceforge.net/viewvc/panotools/trunk/libpano/doc/developmentPolicy.txt?revision=1060&view=markup
[1] http://article.gmane.org/gmane.comp.graphics.panotools.devel/1761


Yuval Levy

unread,
Jan 9, 2011, 12:05:09 AM1/9/11
to hugi...@googlegroups.com
On January 3, 2011 05:19:53 am Rogier Wolff wrote:
> > Theory is that three strategically placed CPs per image pair are enough
> > when the lens is already calibrated. I work with five CPs per image
> > pair, and for small projects (e.g. full sphericals with six fisheye
> > shots) repeating the left-click-right-click dance on the CP tab is
> > equally fast and yield better results than any CP generator I've tried
> > before. A CP generator becomes useful only when dealing with a large
> > number of input images.
>
> But do you find this fulfilling work?

of course not, but I'll do what is faster.


> Setting aside the fact that
> you've become convinced that the algorithms of today generate worse
> results than you, wouldn't you prefer to have the computer do this
> boring work for you?

I am not convinced that today's algorithms generate worse results than me. I
had timed them to be slower back when I was doing this and I am convinced that
there is room for improvement.


> You apparently do the "pano-with-six-fisheye-images" dance regularly.

not at all. I can count on one hand the number of full sphericals I have shot
in the past two years. Maybe two hands. I made more partial stitches at
medium to long focal distance, and cpfind is really great on them. See
attached example.


> Calibrating the lens and then just using that is much easier.
>
> I have a zoomlens.
>
> Focal length: 18-135mm
> F-stop: f/2.0- f/32
> focus: 1.5m - inf
>
> So worst case I have to have a 3D matrix of lens calibration settings
> for two analog and one digital parameter. (one of which doesn't go
> into the exif data). I think that calibrating the lens on the current
> project is a reasonable compromise.

sure, calibrating the lens on the current project is a reasonable compromise -
and I even show it in my last tutorial [0].

the 3D matrix can be approximated / interpolated with multiple points in this
3D space, and if Hugin runs a calibration on every project you do and stores
the result in that 3D space, it will become increasingly better at processing
your panos.

Yuv
>
>
> Roger.
[0] http://panospace.wordpress.com/2010/09/19/linear-panoramas-mosaic-
tutorial/

gva.jpg

Yuval Levy

unread,
Jan 9, 2011, 12:05:23 AM1/9/11
to hugi...@googlegroups.com
On January 3, 2011 06:27:29 am Rogier Wolff wrote:
> On Sun, Jan 02, 2011 at 09:47:48PM -0500, Yuval Levy wrote:
> > > #-hugin cpWeight s0 a0 w100
>
> Hey, do you see a need for an active-weight-zero control point?

Not really, weight is relative.


> Do you see a need for an inactive weight-not-zero control point?

Yes - when I want to deactivate it without losing the weight information (as
you do presently implicitly by commenting the line rather than deleting it).
The active/not active toggles are like the visibility toggles in the fast
preview.


> Why do you think weight is an integer?

actually I did not, but I realize that my specs were ambiguous. Should have
read w100.0 to remove that ambiguity.


> What about setting the weight to "1" by default?

thought of it. the problem is that legacy projects will come with no s/a/w
parameters assigned. setting all defaults to 0 makes sure that there are no
unexpected behaviors when parsing old PTO files. But we can expect parsers to
be written to specification, right? so, yes, 1 makes for a sensible default.

Yuv


T. Modes

unread,
Jan 9, 2011, 4:23:24 AM1/9/11
to hugin and other free panoramic software


On 9 Jan., 06:05, Yuval Levy <goo...@levy.ch> wrote:
> On January 3, 2011 09:49:21 am Bruno Postle wrote:
>
> > On Sun 02-Jan-2011 at 21:47 -0500, Yuval Levy wrote:
> > >If there are no objections I will update the specs both in Hugin and in
> > >Libpano to make developers aware of things to come.
>
> > There is no need to update the docs until the feature exists, the
> > implementation often determines the details anyway.
>
> this is the wrong way of doing things.  first write the specs, then
> implement them.  as an extra bonus, well written specs are the documentation
> that often lacks^H^H^H^H^Hfollows implementation.
>

That's not fully correct. I agree with Bruno.
First write the spec of the *function* and *not of the file format*.
The implementation of the function then determines the modification of
the pto file format.
E.g. for the weight, first you need to know how the optimizer can work
with the weight factors: does the algorithm work better with float
numbers (0.0 - 1.0, or better 0.0 - 100.0), or is it better to use
integers (0 - 100 or 0 - 256 or ...)? So the implementation determines
the number format, and the number format then specifies the file spec.
If you write the file spec first and then implement the function, you
will often need to trade off between the spec of the file format and
the requirements of the existing code, which makes it more complicated.

So as a first step we need the spec of the function and not of the
file format.

Thomas

kfj

unread,
Jan 9, 2011, 7:45:32 AM1/9/11
to hugin and other free panoramic software


On 9 Jan., 10:23, "T. Modes" <Thomas.Mo...@gmx.de> wrote:

> That's not fully correct. I agree with Bruno.
> First write the spec of the *function* and *not of the fileformat*.

This is a chicken-and-egg problem. How did nature solve it? By evolution.
How do you evolve in software? You write a prototype, then privately
iterate { test, document, modify }. Then you introduce it as an
option and have it peer-reviewed, and once you get feedback (or
consent by silence), you might even do another couple of iterations
before introducing it to the public.

You can't reasonably expect to get either the docu or the
implementation right straight away, no matter how good your idea or
your coding skills are.

Kay

Yuval Levy

unread,
Jan 9, 2011, 9:21:14 AM1/9/11
to hugi...@googlegroups.com
On January 8, 2011 12:07:10 pm Tom Sharpless wrote:
> 4. case to mark sets of CPs to be used for morph-to-fit.

thanks for this, Tom. I'll update the work-in-progress specs accordingly by
transforming the active flag from a one bit to a two bits parameter:

- s: source (0=unknown, 1=human, 2=cpfind, etc.) - default 0
- a: active (0=yes, 1=not for optimization, 2=not for morph-to-fit, 3=neither
for optimization nor for morph-to-fit) - default 0
- w: weight (0.0 to 100.0) - default 1.0

let me know if this looks good enough for you or if you have better ideas.

Yuv


Rogier Wolff

unread,
Jan 10, 2011, 3:27:19 AM1/10/11
to hugi...@googlegroups.com
On Sun, Jan 09, 2011 at 09:21:14AM -0500, Yuval Levy wrote:
> - w: weight (0.0 to 100.0) - default 1.0

I still say: Don't mention an upper limit.

Weight: floating point, default 1.0. We currently don't expect
negative weights to be useful, but who knows.

You can add a remark: currently Yuval thinks values between about 0
and 100 make sense and outside are not really useful, but who knows
what may become useful in the future.

Do not write unnecessary things into the spec. From your spec someone
might implement a parser that reads one or two digits, then a period
followed by one more digit, or that special-cases 100.0.

Now weight 1 is invalid and should be 1.0 for that parser, and we can't
specify 1.05. 0.1 and 0.2 are a factor of two apart, and whereas we
have 1/1000 resolution when specifying high weights, we only have 1/10
resolution when specifying lower weights.

Yuval Levy

unread,
Jan 10, 2011, 7:46:57 AM1/10/11
to hugi...@googlegroups.com
On January 10, 2011 03:27:19 am Rogier Wolff wrote:
> On Sun, Jan 09, 2011 at 09:21:14AM -0500, Yuval Levy wrote:
> > - w: weight (0.0 to 100.0) - default 1.0
>
> I still say: Don't mention an upper limit.
>
> Weight: floating point, default 1.0. We currently don't expect
> negative weights to be useful, but who knows.

OK


> You can add a remark: currently Yuval thinks values between about 0
> and 100 make sense and outside are not really useful, but who knows
> what may become useful in the future.

not necessary.


> Do not write unnecessary things into the spec.

OK

Yuv
