Next GUI - take 2


Yuval Levy

Oct 13, 2009, 12:54:43 AM10/13/09
to hugi...@googlegroups.com
Hi all,

I think it will take a few iterations; here is the next one. The
"traditional preview vs. fast preview" discussion was focused too much
on one aspect of the whole - an important one, but when re-designing
the GUI we should not limit ourselves to that aspect, nor to the
pseudo-workflow aspect of the current tabbed design of the main
application window.

So I am wiping my drawing board clean and starting from scratch again.

First, a word about my motives and how I intend to go about this.

I am scratching my own itch. If it is helpful for others, good. If not,
that's OK too. I will try to avoid conflicts, which means that I will
not commit to trunk things that have not been agreed on - or if I do,
I will start a development codeline.

My first objective is to articulate and develop a clear vision. It is
better if it can be a common vision; and I appreciate everybody's
opinion - particularly those that differ from mine, because they widen
my thinking and may show me aspects that I did not consider.

Once the vision is clear, I intend to go about implementing it. To me
this is an exercise in learning wxWidgets and cross-platform GUI
implementation.

Consensus is improbable (but not impossible) on such a broad topic
loaded with subjectivity and preconceived notions; even more so when
discussed at a hypothetical level. I considered whether I should run my
thought process here in the open, or whether I should come up with
another mockup, maybe even a functional one. I decided for the open,
public approach because I believe that the view of a single individual
is narrower than the view of a group. I look forward to *you*
influencing my thinking.

I'll first allow myself to say what I like. Then I will lay out the
current state of my thinking.

I like (in order of priority):
* to use the largest possible part of the screen for preview,
CP-editing, mask-editing, input-crop-editing.
* to control all aspects of the process down to the smallest detail and
decide which part I delegate to automation and which one I override with
manual intervention.
* to use the mouse the least possible.
* to use the keyboard the least possible.

There are many aspects to panorama-creation - many dimensions from
which to look at things. The current application is workflow oriented.
IMO this kind of aspect is secondary and should not determine the
application's overall design.

There are two aspects that are central to me:
- the visual aspect (the input images and the output result)
- the control aspect (the transformations that these images undergo from
input to output)

The workflow is merely a consequence of the two - it can be fully
automated (in the case of the assisted workflow), or I can
influence/control different aspects of it.

The current hierarchy of things puts the workflow at the top (tabs). I
actually never used that order. It is not that I dislike it; it is just
not relevant (except with the 1/2/3 buttons on the assistant).

I do mainly two kinds of manipulation through the Hugin GUI:
visual-editing (V) and workflow-control (W). Currently they are mixed
throughout the application's different windows: there is some V (like
the CP-editor and crop tabs) in the main application tabbed interface
which is mostly W; and there is some W (like the choice of output
projection) in the preview window, which IMO should be V-only.

I would like Hugin to have two main windows, one V and one W. Some
popups if warranted, although I'd make the CP list (just to name one
popup) a tab in the W window.

On the (V)isual window I want the preview, the CP-tab; the crop tab; and
in future the masking tab and all other visual work that depends on
moving the mouse on the input images or output preview. What is common
to them is that they need a large surface; and that I usually work on
only one of them at a time. It could be tabbed, like Bruno started to
articulate in the previous thread about the traditional preview.

On the (W)orkflow window I want to have all the text/numeric "manual"
controls. What is common to them is that they have a logical sequence,
but I end up jumping back and forth as I fine tune the workflow.

Some tasks can be done with either a visual or a manual control - e.g.
dragging the final panorama crop visually or entering it numerically.
This is a user-preference thing and I believe both should be available
- in the V and the W window.

Right now we have unclear separation of V vs. W work, and this has
detrimental consequences on the usage of display space.

I want to have the largest possible V-Window. I think most people do,
although not always full screen. I'd like the development thrust to go
toward reducing its chrome to a minimum. There are already too many
controls on that window. Most users will open it in either full screen
or near full screen.

Bruno expressed some interesting ideas of how the V-Window may possibly
look, with the Compose/Crop/Layout/Slow tabs.

Things are a little bit more nuanced for the W-Window. I have two
personal preferences for it that depend on my work environment.

When I am on the dual screen workstation, I would like to have the
W-Window on the second screen.

I personally (this is something that will drive the newbie nuts) would
do away with all tabs and other artificial separation: simply order all
the controls somehow logically on a single large window and that's it.
V-Work on the left; W-Work on the right; all unfolded and wide open.
Let's call this "expert mode". If the Stitcher tab was scary, don't even
try to imagine this window. I know it would work for me, like the
current Stitcher tab works very well for me. But I also know that it
would turn newbies off, so I will not impose it - definitely not as default.

When I am on a single screen laptop, I would like to have the W-Window
floating on top of the V-Window. It will get in my way, so I will want
it to use the smallest surface that is reasonable. I would rather
expand it in width than in height, so that it can take the place of the
current preview's toolbar + images selection. This way I won't have to
move the palette around too much. Let's call this "palette mode". I know
it will turn off some experts like Thomas. There is a trade-off between
the size of the floating palette and the number of controls that can go
on a tab. In my previous mockup I was at the one extreme: smallest
possible palette size (drawback: highest number of tabs). The idea being
to disintegrate before re-integrating: each "micro-tab" would be one
step in the project (e.g. in the undo list, that could then be displayed
as a list of "micro-tabs").

"palette mode" - and the breaking down of the W-Window into smaller
chunks - has a major advantage for newbies: it focuses the user on the
one single step in the workflow, and only on the controls necessary for
it. Each such micro-step would line up like pearls in a (workflow) chain.

I would add (for others, not for me) two other modes:

"Single Window Mode" (SWM) - many users are intimidated/confused by the
windowing metaphor. In SWM the W-Window is simply integrated visually
into the top area of the V-Window.

"Palette + Help Mode" (PHM) - a larger W-Window that displays a help
text associated with each micro-step in the palette.

That's my current thinking. To summarize:

visual work on the left; control work on the right. Two windows.

The V-Window:
- Crop (from current tabbed main app)
- Control Points (from current tabbed main app)
- Fast Preview (with variations / functions as described by Bruno)
- Traditional Preview

The W-Window:
- List of Images / Camera / Lenses
- List of CP
- Optimizer
- Exposure
- Stitcher
- Workflow (Assistant)

For the V-Window:
* maximize footprint
* work with the mouse - slide, drag, etc.

For the W-Window:
* minimize footprint
* work with keyboard - tab between micro-steps and fields; use
keyboard shortcuts to quickly access individual micro-steps
* multiple versions:
** all-in-one, single page control center for the power user
** small palette with plenty of micro-steps for users who need guidance


Yuv

grow

Oct 13, 2009, 4:41:05 AM10/13/09
to hugin and other free panoramic software
Yuv,
You have lots of interesting ideas here.

My experience of the current model is that it is a pipeline with only
one possible input opening. One variation I have wished for when
problems arise: in the current model Hugin will give me some of the
intermediate files as well as the final result, allowing me to use them
in post-Hugin processing in Photoshop or Gimp etc. What I have often
itched for is for Hugin to let me intervene on some of those
intermediate files and then restart the Hugin process at a mid-point.

For example, last summer I shot a panorama from a small boat bobbing on
the sea beneath some cliffs ... I stitched two versions of it: one with
control points only on the boat and one with control points only on
the cliffs. The two versions are in my personal queue of Photoshop
jobs - it looks do-able and eventually I will combine them.

If I could have interleaved the intermediate files from the two
versions and then re-run just the enblend step, that might have been an
alternative to a lot of post-Hugin processing in Photoshop; I
certainly would have experimented with it.

I know that this sort of thing could be achieved by using the command
line but I suspect that there are a growing number of "intermediate"
level users, such as myself, who have come to grips with the basics,
have a reasonable understanding of Hugin's workflow but are not yet
confident to do this sort of thing on the raw command line.

To take a simpler example than the boat one ... I have had problems
in the past where the down shots came out rather dark ... I have
solved this by going back to the source images and pushing the
exposure for the three bracketed downshots by one or two stops and
then re-running the whole stitch ... had I been able to either:
- redo just the remapping and fusing of the adjusted down shot set
and then drop them in to the input set for the blend step
or
- take the intermediate fused file for the down shot, tweak its
"exposure" and then re-run the enblend step

it would have saved repeating the processing of lots of images that
were OK in the first place.

I think that what I am imagining would perhaps fit on the Stitcher tab
as some sort of expert-mode that would allow the user to select from
the
Remapper
Fuser
Blender
and run just one of them ... then intervene in some way before running
just another one, and so on - perhaps also setting or adjusting the
files to which the step would be applied.

This would be useful for people working on large projects producing
huge high resolution output files (or at least projects that are large
compared to the user's computing power) where a full Hugin cycle may
take several hours.

all the best

George

Bart van Andel

Oct 13, 2009, 9:48:58 AM10/13/09
to hugin and other free panoramic software
I like your ideas, Yuv. One aspect you are pointing out is that
different users, with different understandings of the way Hugin works
(or: the way you can use Hugin), may need different GUI layouts. Expert
users like yourself may find it useful to have as many controls as
possible on a single page, so using different tabs won't be needed.
Other users, however, will get lost when too many controls are shown at
once.

For this purpose, I like the concept of window layout presets
("workspaces") that several other complex tools use. These consist
of a combination of docked views and toolboxes, which may or may not
include a tabbed interface. Just a few examples:

* A workspace may consist of V and W (I'm copying Yuv's terminology).
For example, an "all-in-one" workspace may have a large V at the
center and some groups of W controls (e.g. exposure settings,
optimizer settings, output projection, cropping) on one or both sides.

* Or it may consist of an OpenGL preview window in one window with
projection options at the bottom, with control points views in another
tab, and another window with all other W controls.

* Or it may consist of one large V window, and another one with all
the W controls.

Hugin may provide some generally usable presets, which users can adapt
to their own preferences by opening/closing toolboxes and
dragging and dropping them into position. We won't have to reinvent
the wheel, because several "docking toolkits" are available. I've
quickly screened wxAUI (advanced user interface) [1] which apparently
even allows saving docking presets out of the box [2]. Another one is
called IFM [3], but the website does not provide a lot of information.


I've had another idea in my mind for some time now. It may sound a
little too fancy or even surreal, but I guess this shouldn't stop me
from writing it down here. The idea is probably a lot simpler to
explain than to implement. In my head it has some visual
resemblance to Microsoft's Photosynth [4], but with a completely
different purpose. It's like this (just the big picture).

Imagine a single OpenGL window where everything goes:
* You can drop your images there, and use the mouse to put them into
their approximate locations ("layout branch") or just run a control
point finder to do this for you.
* You can move around in 3D, as the images are all placed onto a
panosphere, or linearly (for a linear panorama), or just floating in
3D. This can be used to select a "camera location" which will be used
to render the output panorama. A bit like the Panini tool, but with
single images instead of a rendered panorama.
* Images can be clicked, which makes them show up full screen, so
cropping/masking can be edited.
* Another view (preferably selectable through a keyboard shortcut)
would be showing the images and their connections on an approximately
flat surface. This way, it can easily be determined whether there are
images connected by control points which shouldn't be connected.
Deleting the connection deletes all connecting control points. Adding
a connection opens up yet another view, where control points can be
added either manually or by using a control point finder. Clicking
such a connection also opens this view.
* Since OpenGL allows using multiple views at the same time, any view
could have an "undock" button which moves the fullscreen view into
just a part of the screen. This way, both the 3D (pano) view can be
opened and any of the other views.
* Ideally, the controls themselves are OpenGL thingies as well, such
that there is really only one window. This might mean we'd need to
write our own layout managers etc etc as well... It could be a fun
project but maybe a little too distant from the core ideas of Hugin.
Maybe for Hugin 3.0...

This is really a brain storm. Ideas / dreams may be way off, but still
contain interesting points. No need to constrain ourselves in the
brain storm phase, right? :)

[1] http://wiki.wxwidgets.org/WxAUI
[2] http://www.kirix.com/labs/wxaui/screenshots.html (video demo at
bottom of the page)
[3] http://wiki.wxwidgets.org/WxIFM
[4] http://photosynth.net/

--
Bart

Lukáš Jirkovský

Oct 13, 2009, 11:16:12 AM10/13/09
to hugi...@googlegroups.com
2009/10/13 Bart van Andel <bavan...@gmail.com>:
I have thought about something like this. Is it possible to also have a
minimize button on these floating windows? I don't see one in the
screenshots and I'd miss it.
That sounds cool, but I'm not sure a completely 3D workspace would be
clear enough to most users. For the majority of users the only 3D
workspace they've ever used is some action game.

>
> This is really a brain storm. Ideas / dreams may be way off, but still
> contain interesting points. No need to constrain ourselves in the
> brain storm phase, right? :)
>
> [1] http://wiki.wxwidgets.org/WxAUI
> [2] http://www.kirix.com/labs/wxaui/screenshots.html (video demo at
> bottom of the page)
> [3] http://wiki.wxwidgets.org/WxIFM
> [4] http://photosynth.net/
>
> --
> Bart

I don't have any strong opinion about hugin GUI except one. I think
there should be something like the current assistant tab. I can think
of some wizard (everyone loves wizards ;-).

Lukas

Oskar Sander

Oct 13, 2009, 11:30:45 AM10/13/09
to hugi...@googlegroups.com
Sounds like great ideas, Yuv.

V
* Maximize the effective image area within these windows, regardless of whether full screen is used (my interpretation of "maximize footprint")
* Hey, these functions could even all coexist open as parallel V-window instances!

W
* Concentrate & condense controls and information to save dialogue space (my interpretation of "minimize footprint")



2009/10/13 Bart van Andel <bavan...@gmail.com>
Imagine a single OpenGL window where everything goes:
* You can drop your images there, and use the mouse to put them into
their approximate locations ("layout branch") or just run a control
point finder to do this for you.


This is exactly what Photoshop Photomerge allows you to do [1]. You are working with a "lightboard" metaphor, and images not yet used are lined up at the top bar. The application will try to group and place images for you, but leaves unconnected images on the top bar.

A sneak peek at the competition is never a bad thing ;-)


[1] http://www.photoshopsupport.com/tutorials/cb/photomerge.html

--
/O

Yuval Levy

Oct 13, 2009, 1:18:40 PM10/13/09
to hugi...@googlegroups.com
Bart van Andel wrote:
> We won't have to reinvent
> the wheel, because several "docking toolkits" are available. I've
> quickly screened wxAUI (advanced user interface) [1] which apparently
> even allows saving docking presets out of the box [2].

interestingly, Thomas Modes came up with this in a private conversation.
Also: wxPython has the same widgets as wxWidgets in C++, and one of the
things to consider, since this is becoming a complete rewrite, is to
start a wxPython GUI from scratch to replace the current C++ GUI.


> I've had another idea in my mind for some time now. It may sound a
> little too fancy or even surreal, but I guess this shouldn't stop me
> from writing it down here.

that's a good idea, just a little bit ahead of time, and beyond the
means of this specific project team.

to push it even further and summarize it in a paragraph: imagine a tool
placing the photos in real 3D space. Imagine navigating the real 3D
space, moving between the images, freely. Place any 3D structure anywhere
in that space. Place a point somewhere in space and project from that
point to the 3D structure through the images. Want orthographic
projection? Possible too. And then all of a sudden the 3D pixel cloud
liberates itself from the flat 2D pictures....

Blender is the user interface for this. The project would not be the
next GUI for Hugin. It would be a full 3D extension for libpano, and
then integration in Blender.

Yuv

J. Schneider

Oct 13, 2009, 4:59:05 PM10/13/09
to hugi...@googlegroups.com
> I think that what I am imagining would perhaps fit on the Stitcher tab
> as some sort of expert-mode that would allow the user to select from
> the
> Remapper
> Fuser
> Blender
> and run just one of them ... then intervene in some way before running
> just another one and so on also perhaps setting are adjusting the
> files to which the step would be applied.
>
> This would be useful for people working on large projects producing
> huge high resolution output files (or at least projects that are large
> compared to the user's computing power) where a full Hugin cycle may
> take several hours.

I have wished for this several times when my computer crashed during a
stitch: I could at least have salvaged the remapping step if I had been
able to start over at the blending step. (Not possible on the command
line with cropped intermediate files.) Neither hugin nor the batch
processor allows this - you are just asked if the existing images should
be overwritten, not if they should be reused.
regards
Joachim

Bruno Postle

Oct 13, 2009, 5:33:09 PM10/13/09
to Hugin ptx
On Tue 13-Oct-2009 at 22:59 +0200, J. Schneider wrote:
>
>I have wished for this several times when my computer crashed during a
>stitch: I could at least have salvaged the remapping step if I had been
>able to start over at the blending step. (Not possible on the command
>line with cropped intermediate files.) Neither hugin nor the batch
>processor allows this

Yes, it is a shame that the Batch Processor doesn't support this,
though it works very well on the command-line.

The problem with cropped TIFF files can be worked around by turning
cropping off in Hugin, but really this needs to be fixed in your image
editor. The Gimp understands offsets in multi-page TIFFs, so it is
probably a really easy fix for single-layer files in the Gimp.

--
Bruno

Bart van Andel

Oct 14, 2009, 7:50:32 AM10/14/09
to hugin and other free panoramic software
On 13 okt, 19:18, Yuval Levy <goo...@levy.ch> wrote:
>   Also: wxPython has the same widgets as wxWidgets in C++ and one of the
> things to consider, since this is becoming a complete rewrite, is to
> start from scratch a wxPython GUI to replace the current C++ GUI.

I haven't programmed in either wx*, so I really don't know, but would
that allow the application to become completely scriptable? Like, you
could write a script which basically replaces the current wizard tab
(but still looks and works the same), and for a different workflow,
this script could be modified to show a slightly different wizard. In
other words: the GUI is built on-the-fly from a "GUI script" kind of
file. Could be XML or whatever. This way Hugin could be made even less
dependent on the exact tools used. Plugins could consist of (for
instance) an executable and an XML file which specifies the name,
version, what kind of executable it is, and how to use / call it
(e.g., "autopano", "keypoint finder", "autopano %imgfiles%").
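To make the idea concrete, here is a minimal sketch of what reading such a descriptor could look like. The XML format, the element and attribute names, and the %imgfiles% placeholder are all invented for illustration; this is not an existing Hugin format:

```python
import shlex
import xml.etree.ElementTree as ET

# Hypothetical plugin descriptor, invented for illustration only.
DESCRIPTOR = """
<plugin name="autopano" version="2.4" kind="keypoint-finder">
    <executable>autopano</executable>
    <invocation>autopano %imgfiles%</invocation>
</plugin>
"""

def build_command(descriptor_xml, image_files):
    """Parse a plugin descriptor and substitute the project's image
    list into the invocation template."""
    root = ET.fromstring(descriptor_xml)
    template = root.findtext("invocation")
    command = []
    for arg in shlex.split(template):
        if arg == "%imgfiles%":
            command.extend(image_files)   # expand placeholder
        else:
            command.append(arg)
    return root.get("name"), root.get("kind"), command

name, kind, cmd = build_command(DESCRIPTOR, ["a.jpg", "b.jpg"])
print(name, kind, cmd)   # → autopano keypoint-finder ['autopano', 'a.jpg', 'b.jpg']
```

The GUI would only need to know the descriptor schema, not the individual tools, which is the decoupling Bart describes.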

> > I've had another idea in my mind for some time now. It may sound a
> > little too fancy or even surreal, but I guess this shouldn't stop me
> > from writing it down here.
>
> that's a good idea, just a little bit ahead of time, and beyond the
> means of this specific project team.
>
> to push it even further and summarize it in a paragraph, imagine a tool
> placing the photos in real 3D space. imagine navigating the real 3D
> space, move between the images, freely. place any 3D structure anywhere
> in that space. place a point somewhere in space and project from that
> point to the 3D structure through the images. want orthographic
> projection? possible too. And then all of a sudden the 3D pixel cloud
> liberates itself from the flat 2D pictures....
>
> Blender is the user interface for this. The project would not be the
> next GUI for Hugin. It would be full 3D extension for libpano; and then
> integration in Blender.

Personally, the last time I tried Blender, I found it "not too
intuitive" to use. Writing a 3D engine with game-like controls isn't
necessarily hard, and the result could be far easier to navigate than
Blender (at least in my rather limited experience). Of course it might
be even better to create a package which could be used both as a plugin
for Blender *and* with a standalone 3D engine. Or we could use the
Quake 3 [1] or Doom 3 [2] engine ;)

--
Bart

[1] http://en.wikipedia.org/wiki/Id_Tech_3
[2] http://en.wikipedia.org/wiki/Id_Tech_4 (source not yet released)

Nicolas Pelletier

Oct 14, 2009, 10:29:23 AM10/14/09
to hugi...@googlegroups.com
There is one thing that the GUI (and the file format, possibly also constrained by the libraries) limits, and it does not seem natural.

What we see as a single process is in my opinion two steps... Maybe they could be seen as the V and W.

First step - Generate the "mapped" pano, which includes everything up to the point where everything is aligned.
Second step - Export/generate the pano, which includes choosing the projection, "recentering", choosing the output size, cropping, and then doing the actual stitch.

This is currently set up as a one-to-one match, but should be one-to-many, I think:

The first part could be done with (one per project):
- Many images with control points and aligned with nona
- A single equirectangular
- 6 cube faces

And the second one could be a series of outputs (many per project):
- A low-res stereographic projection centered at X
- A higher-res stereographic projection centered at Y and with a smaller FOV
- An HDR equirectangular for postprocessing elsewhere (with an alignment different from the other ones)

And for the outputs, you could process one or many.

Also, I'm all for a "quick expert" GUI, but I would like easier access to each step through scripting... I already do many steps through the command line, but some things, such as "pruning" the bad control points, can only really be done through the GUI, which sometimes slows down considerably with a high number of images. So if the Python widgets enable opening up the API to Python, I'm all in.
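To illustrate what scripted pruning could look like if such a Python API existed: the ControlPoint object and the prune function below are entirely hypothetical, not part of hugin or libpano, just a sketch of the operation Nicolas does today through the GUI.

```python
from dataclasses import dataclass

# Hypothetical objects, invented for illustration -- not the real API.
@dataclass
class ControlPoint:
    img1: int      # index of the first image
    img2: int      # index of the second image
    error: float   # reprojection distance in pixels

def prune_control_points(points, max_error):
    """Drop every control point whose reprojection error exceeds
    max_error pixels -- the "pruning" step done via the GUI today."""
    return [p for p in points if p.error <= max_error]

points = [
    ControlPoint(0, 1, 0.8),
    ControlPoint(0, 1, 12.5),   # a bad match worth pruning
    ControlPoint(1, 2, 2.1),
]
kept = prune_control_points(points, max_error=5.0)
print(len(kept))   # → 2
```

Run as a batch script, this would avoid loading the GUI at all for large projects.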

nick

Bruno Postle

Oct 14, 2009, 2:00:21 PM10/14/09
to Hugin ptx
On Wed 14-Oct-2009 at 10:29 -0400, Nicolas Pelletier wrote:
>
>This is currently set up as a one to one match, but should be a one to many
>I think:
>
>The first part could be done with (one per project):
>- Many images with control points and aligned with nona
>- A single equirectangular
>- 6 cube faces
>
>And the second one could be a series of output (many per project):
>- A low rez stereographic projection centered at X
>- A higher rez stereographic projection centered at Y and with a smaller FOV
>- A HDR equirectangular for postprocess elsewhere (with an alignment
>different from the other ones)

The problem with stitching one-to-many is that you really don't want
to repeat the stitch for each of your output 'views' - Not only will
this take forever, but your seam lines will be in different places
each time.

The right way to do this is to stitch a 'base' equirectangular of
the scene, then generate different projections/views from this.

You can do this in Hugin, just import the equirectangular into a new
project and use the various output projections, or use Tom's Panini
tool which is specifically designed for extracting different views.

What I'm trying to say is that the 'many' part is necessarily a
post-processing step, and doesn't really benefit from being
integrated into Hugin.
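The "generate different views from a base equirectangular" step boils down to a per-pixel remapping. Here is a simplified sketch (yaw only, no pitch or roll, no pixel interpolation) of mapping a rectilinear view pixel back to the equirectangular coordinates to sample; this is illustrative geometry, not Hugin's or Panini's actual code:

```python
import math

def rectilinear_to_equirect(x, y, view_w, view_h, hfov_deg,
                            equi_w, equi_h, yaw_deg=0.0):
    """Map a pixel (x, y) of a rectilinear output view to the (column,
    row) of the base equirectangular to sample from."""
    # focal length in pixels from the view's horizontal field of view
    f = (view_w / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    px = x - view_w / 2.0
    py = y - view_h / 2.0
    # direction of the ray through this pixel, as longitude/latitude
    lon = math.atan2(px, f) + math.radians(yaw_deg)
    lat = math.atan2(-py, math.hypot(px, f))
    # equirectangular: longitude maps to column, latitude to row
    col = (lon / (2.0 * math.pi) + 0.5) * equi_w
    row = (0.5 - lat / math.pi) * equi_h
    return col, row

# centre of a 90-degree view looks at the centre of the equirectangular
print(rectilinear_to_equirect(500, 500, 1000, 1000, 90, 4000, 2000))
# → (2000.0, 1000.0)
```

Because only this lookup changes per view, the expensive alignment and blending work is done once on the base equirectangular, which is exactly Bruno's point.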

--
Bruno

Bruno Postle

Oct 14, 2009, 2:20:41 PM10/14/09
to Hugin ptx
On Tue 13-Oct-2009 at 01:41 -0700, grow wrote:
>
>My experience of the current model is that it is a pipeline with only
>one possible input opening. One variation I have wished for when
>problems arise: in the current model Hugin will give me some of the
>intermediate files as well as the final result, allowing me to use them
>in post-Hugin processing in Photoshop or Gimp etc. What I have often
>itched for is for Hugin to let me intervene on some of those
>intermediate files and then restart the Hugin process at a mid-point.

The Makefile stitching process allows exactly this; unfortunately it
is currently only available on the command-line.

i.e. save the project in Hugin but stitch using the shell:

make -f project.pto.mk

Edit an intermediate file and repeat the command-line, only relevant
processing will be redone.

It would be really nice for the Batch Processor to support this kind
of workflow.
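What makes this work is make's timestamp rule: a target is rebuilt only if it is missing or older than one of its prerequisites. A minimal sketch of that rule in Python (file names are made up for the demo):

```python
import os
import tempfile

def needs_rebuild(target, dependencies):
    """make-style rule: rebuild the target if it is missing or if any
    dependency has a newer modification time."""
    if not os.path.exists(target):
        return True
    t = os.path.getmtime(target)
    return any(os.path.getmtime(d) > t for d in dependencies)

# Demo with made-up names: a remapped TIFF feeds the blended pano.
with tempfile.TemporaryDirectory() as tmp:
    remap = os.path.join(tmp, "img0000.tif")   # remapper output
    pano = os.path.join(tmp, "pano.tif")       # final blend output
    open(remap, "w").close()
    open(pano, "w").close()
    os.utime(remap, (1000, 1000))              # intermediate older...
    os.utime(pano, (2000, 2000))               # ...than the output
    print(needs_rebuild(pano, [remap]))        # → False: up to date
    os.utime(remap, (3000, 3000))              # "edit" the intermediate
    print(needs_rebuild(pano, [remap]))        # → True: redo the blend
```

This is why editing one intermediate file and re-running `make -f project.pto.mk` redoes only the downstream steps.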

--
Bruno

Nicolas Pelletier

Oct 14, 2009, 4:09:44 PM10/14/09
to hugi...@googlegroups.com
I'm not convinced it is a post-processing step. I think it depends on where we draw the "what should Hugin do" boundary.

I'm currently working with the exact workflow you mentioned: create a 360x180 equi and then use it as the first element in the chain. But if the other steps were only post-processing, then why should we have other projections and other import methods?

Also, from this equirectangular, if we had many outputs, we could have a nice workflow that generates all 6 faces of the cube for some VR interface without needing another tool, or swapping around with the same project or 6 projects.

only my 2 cents...

nick

Bruno Postle

Oct 14, 2009, 4:44:30 PM10/14/09
to Hugin ptx
On Wed 14-Oct-2009 at 16:09 -0400, Nicolas Pelletier wrote:
>I'm not convinced it is a post-processing step. I think it depends on where
>we draw the "what should Hugin do" boundary.

I guess where I'm coming from is that Hugin and the Stitcher
tab are complex enough as they are.

>I'm currently working with the exact workflow you mentioned, create a 360
>180 equi and then use this as the first element in the chain. But if the
>other steps were only post processing, then why should we have other
>projections and other import method?

Mainly because we can, but also because partial panoramas are
legitimate targets, and because people have a use for stitching
partial equirectangular and cylindrical input.

>Also, from this equirectangular, if we had many output, we could have a nice
>workflow that generate all 6 faces of the cubes for some VR interface
>without needing another tool, or swapping around with the same project or 6
>projects.

We do actually have much of this (cubic, little-planet, thumbnails,
QTVR, PanoSalado etc...) as targets in the Makefile.equirect.mk
plugin - It would be trivially easy to enable this stuff in the
Stitcher tab or the Batch Processor, but it adds a bunch of
dependencies (ImageMagick, perl(Panotools::Script)).

awbrody

Oct 14, 2009, 10:31:50 PM10/14/09
to hugin and other free panoramic software
I have been using hugin for about 3 years and follow the development
with a lot of interest.
It would be good if one were able to select and move individual images
and groups of images in the fast preview window. This would be useful
in the case of sky images with no coherent set of control points.
It may prove useful to implement something like the skeleton deformation
used in animation to warp the images, perhaps fine-tuning the
deformation to accommodate panoramas photographed without rotation
about the correct axes.

Tom Sharpless

Oct 15, 2009, 11:03:25 AM10/15/09
to hugin and other free panoramic software
Wow. Thanks, Yuv. Here is my 25 cents worth:

V & W is a good basis.

V must be fully OpenGL based and very responsive. The "traditional
preview" should be a generated QTVR file that gets displayed in an
external or internal spherical pano viewer (like in PTGui). V must
provide Layout, including predefined frameworks into which you can
drop individual images. And it should provide fast, editable access
to all of an images' properties (in a big popup window) with a right
click -- make that a stack of images, for HDR.

I think it would be nice if W were actually based on a flow diagram,
showing what you can (sanely) do -- and what you have already done --
with the contents of the project at the current time. That would
encompass everything from project operations (load, save, apply
settings...) to lens calibration to image processing details (align
stack, correct TCA, ..., ..., ...) and of course you would click on
active elements to drill down to the details. Note: if you can
display what has already been done, then you should be able to avoid
doing it over again needlessly when the Go button is pushed.

Some R&D would be needed to find a UI engine to support a "flow chart"
W. The stock tree or outline type provides neither for joining paths
nor for history; and the UIs often used for "visual programming" are
too formless (as well as low in information density, and plain ugly).
So does anyone know a good open tool for generating structured,
history-aware flow diagrams?

I agree with Bruno that the Makefile based processing is a strong
feature of Hugin that is far too hard to exploit. A GUI page that
actually laid out the same possibilities, and let you select and
configure them, would be a godsend for visualization-challenged people
like me.

It should not be forgotten that behind the GUI there need to be some
very capable processing engines, based on a sensible model of the
goals and possibilities. Helmut's clear conception of those matters
is one of the reasons why PT has been so successful for so long
(another is his insistence on doing the math right). But the range of
goals and possibilities has expanded greatly since 1995, and I think
it is now time to give some serious thought to redefining the "libpano
API" in the light of present needs. That process should be driven by
a thorough use-case analysis, which I hope the present discussion will
generate.

Best, Tom

Tom Sharpless

Oct 15, 2009, 12:43:31 PM
to hugin and other free panoramic software
As seems to be usual, I agree with Bruno.

The job of a stitcher is to prepare, match up, line up, and combine
images on the panosphere. Generating some flat projection of the
panosphere is a necessary part of that job, but generating all
possible projections is not. Creating printable views involves so
many possibilities, and so many aesthetic considerations, that it is a
speciality all its own. And repeating the entire process from source
images to get each view is a big waste of time: things like lens
correction, photometric correction and, especially, blending really
need to be done only once.

I think "a new Hugin" should provide only two direct stitching
targets: cube faces and equirectangular, and let you convert either of
those to other projections later.

I include equirectangular only for the sake of tradition. Cube faces
have several advantages. You can display them almost immediately with
QuickTime or Flash technology. Because they are rectilinear images,
they are easy to judge, and to retouch. And they even let you
stitch faster, because to map from a radially symmetric projection
(e.g. almost any photo) to a radially symmetric projection, you only
have to implement a nonlinear function in one dimension, along the
radius. That one dimensional mapping can be a lookup table; whereas
to stitch to equirectangular requires a full 2-D nonlinear mapping (I
use this trick in autopano-sift-c to generate stereographic images
faster than is possible with libpano).
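The radial trick fits in a few lines. As a sketch, assuming an equidistant fisheye source (r = f*theta) and a stereographic target (r = 2*f*tan(theta/2)), both radially symmetric, the entire geometric transform collapses to one tabulated function of radius:

```python
import numpy as np

def radial_lut(focal, r_max, n=1024):
    # Map source radius (equidistant fisheye: r = f*theta) to target
    # radius (stereographic: r = 2*f*tan(theta/2)). Because both
    # projections are radially symmetric, this 1-D table is the whole
    # geometric transform; no 2-D nonlinear mapping is needed.
    r_src = np.linspace(0.0, r_max, n)
    theta = r_src / focal                    # angle from the optical axis
    r_dst = 2.0 * focal * np.tan(theta / 2.0)
    return r_src, r_dst

# Near the axis the two projections agree; stereographic stretches the edge.
r_src, r_dst = radial_lut(focal=1000.0, r_max=1500.0)
```

Any pair of radially symmetric projections works the same way; only the two radius formulas change.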

For composing "just the right view" you need an interactive program,
that can custom fit the projection to the subject to some extent
(think of a view camera on steroids). Both my Panini, and Max Lyons'
PTAssembler preview window, come close (from different directions) to
what I would ideally want. Maybe the new Hugin suite should
incorporate such a program; but it ought not to be configured as a set
of sub-options on stitching.

Regards, Tom

Seb Perez-D

unread,
Oct 15, 2009, 4:31:08 PM10/15/09
to hugi...@googlegroups.com
On Thu, Oct 15, 2009 at 18:43, Tom Sharpless <tksha...@gmail.com> wrote:
> I think "a new Hugin" should provide only two direct stitching
> targets: cube faces and equirectangular, and let you convert either of
> those to other projections later.

There is however one big usage of Hugin, to generate partial
panoramas. For these, cylindrical or Mercator or rectilinear are more
appropriate. Generating a full equirectangular would not be a solution
in these cases. So Hugin has to keep *some* output projections in
addition to equirectangular and cube face.

My 0.02€

Cheers,

Seb

Tom Sharpless

Oct 15, 2009, 10:50:16 PM
to hugin and other free panoramic software
Hi Seb,

Yes, I would accept rectilinear, of arbitrary angular size, as a
necessary target format, because that is all that a great many
"long-lens" panoramic photographers want and need; sorry, I just tend
to "think spherical".

But what's wrong with cropped equirectangular? Anything that can be
stored as cylindrical or Mercator can just as well be stored as eqr.
If you know how it is cropped, there is no problem converting it.
True, PT currently does not handle such images, but that is just an
old mistake. (It has peeved me for a long time because I built some
cameras that generate partial equirectangular images, and always had
to pad them for input to PT.) But we don't need to perpetuate that
error, do we?

For that matter, a cropped cubic format is perfectly feasible, too,
though I doubt it could ever be popular.

Note: one improvement the "libpano API" badly needs is explicit
angular dimensions and explicit projection center coordinates for all
images. Then interconverting "cropped" and/or "decentered" images
would be no problem.
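To illustrate (the parameter names here are invented, not part of any existing libpano API): once an image carries explicit angular dimensions and a projection centre, recovering true spherical coordinates from a cropped or decentered eqr file is one line per axis, with no padding required:

```python
def pixel_to_lonlat(u, v, width, height, hfov, vfov,
                    center_lon=0.0, center_lat=0.0):
    # With explicit angular size (hfov, vfov, in degrees) and projection
    # centre stored alongside the image, each pixel of a cropped or
    # decentered equirectangular maps straight to the sphere.
    lon = center_lon + (u / width - 0.5) * hfov
    lat = center_lat - (v / height - 0.5) * vfov
    return lon, lat

# e.g. a 90 x 60 degree crop centred on lon=45, lat=10 needs no padding
# out to a full 360 x 180 canvas before it can be interpreted.
```

Interconverting cropped and full images then reduces to changing these four metadata numbers.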

Regards, Tom



Bart van Andel

Oct 16, 2009, 5:02:55 AM
to hugin and other free panoramic software
On 15 okt, 18:43, Tom Sharpless <tksharpl...@gmail.com> wrote:
> I think "a new Hugin" should provide only two direct stitching
> targets: cube faces and equirectangular, and let you convert either of
> those to other projections later.

When this is done internally (e.g., as an intermediate kind of file),
I don't see a problem. Hugin could just blend the images into an eqr
and then remap later.

However, I think the end user should be bothered with as few steps as
possible to get the desired output. It may be confusing if the output
is always eqr or cubic and the user has to load that file again (in
hugin or another (?) reprojection tool) to get the final output.

By the way, if cylindrical projection is the desired output, isn't
there a risk of compressing the zenith and nadir too much when using
eqr as an intermediate format? Just curious, I haven't tried (I don't
have high-res full spherical data available anyway).

--
Bart

Nicolas Pelletier

Oct 16, 2009, 7:07:37 AM
to hugi...@googlegroups.com
"I think "a new Hugin" should provide only two direct stitching
targets: cube faces and equirectangular, and let you convert either of
those to other projections later."

I agree with this completely. The only other point I was trying to make is that "converting to other projections" should not involve one project file per projection IMHO.

Thanks,

nick

Lukáš Jirkovský

Oct 16, 2009, 9:19:14 AM
to hugi...@googlegroups.com
Hi

2009/10/16 Nicolas Pelletier <nicolas....@gmail.com>:
> "I think "a new Hugin" should provide only two direct stitching
> targets: cube faces and equirectangular, and let you convert either of
> those to other projections later."
> I agree with this completely. The only other point I was trying to make is
> that "converting to other projections" should not involve one project file
> per projection IMHO.
> Thanks,
> nick


I don't like this. It adds an unnecessary step to panorama creation.
Usually I select the projection in the fast preview by trying all of
them and immediately comparing how nice they look. This change would
mean that I'd have to stitch the complete panorama (which I sometimes
crop quite a lot) and then reproject it somewhere. It would be better
if, when someone specifies several projections (this would also need
some GUI work), Hugin _internally_ used cube faces/equirect; otherwise
it would work the same.

Lukas

Tom Sharpless

Oct 17, 2009, 11:28:21 PM
to hugin and other free panoramic software
Hi Lukas,

On Oct 16, 9:19 am, Lukáš Jirkovský <l.jirkov...@gmail.com> wrote:
> Hi
>
> 2009/10/16 Nicolas Pelletier <nicolas.pellet...@gmail.com>:
>
> > "I think "a new Hugin" should provide only two direct stitching
> > targets: cube faces and equirectangular, and let you convert either of
> > those to other projections later."
> > I agree with this completely. The only other point I was trying to make is
> > that "converting to other projections" should not involve one project file
> > per projection IMHO.
> > Thanks,
> > nick
>
> I don't like this. It adds unnecessary step to panorama creation.
> Usually I select the projection in fast preview by trying all of them
> and comparing them how nice they are immediately. Now it would mean
> that I've to stitch complete panorama (sometimes I crop it quite a
> lot) and then reprojecting it somewhere. What would be better that
> when someone specifies more projections (this would need also some
> work on GUI) it would _internally_ use cube faces/equirect otherwise
> it would work the same.

What I had in mind is indeed like the fast preview window, with many
projections and cropping, except it is a fast "postview" window that
shows you an already stitched pano at high resolution and lets you
format a view interactively, with instant switching between
projections. As I have learned from writing Panini, that kind of
display is quite feasible with OpenGL technology. It works especially
well if the pano is in cubic format.

I believe (but have not yet seen) that it would be possible to render
final high resolution reprojections quite fast using the video
hardware via OpenGL. But even if the final rendering had to be done
in software, I would prefer this way of composing my images, as it is
pretty hard to get "just the right view" while looking at a low
resolution preview.
And if I want to keep several views it will probably save time,
because reprojecting a large image is a lot faster than stitching it
from small ones (no blending needed).
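Even a plain software remap of this kind is compact. A nearest-neighbour NumPy sketch (no claim to match nona's quality or interpolation) that pulls a rectilinear view out of an already-stitched equirectangular pano:

```python
import numpy as np

def eqr_to_rectilinear(pano, fov_deg, yaw_deg, pitch_deg, w, h):
    # Cast a ray through each output pixel, rotate it by the view's
    # pitch and yaw, then look up where the ray hits the panosphere.
    H, W = pano.shape[:2]
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal in pixels
    x, y = np.meshgrid(np.arange(w) - w / 2 + 0.5,
                       np.arange(h) - h / 2 + 0.5)
    d = np.stack([x, -y, np.full_like(x, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
    dy = d[..., 1] * np.cos(p) - d[..., 2] * np.sin(p)   # pitch about x
    dz = d[..., 1] * np.sin(p) + d[..., 2] * np.cos(p)
    dx = d[..., 0] * np.cos(yw) + dz * np.sin(yw)        # yaw about y
    dz = -d[..., 0] * np.sin(yw) + dz * np.cos(yw)
    lon, lat = np.arctan2(dx, dz), np.arcsin(np.clip(dy, -1, 1))
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((0.5 - lat / np.pi) * H).astype(int), 0, H - 1)
    return pano[v, u]
```

No blending is involved, so every additional view costs only this per-pixel lookup, exactly why reprojecting a finished pano beats restitching from the source frames.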

Regards, Tom

J. Schneider

Oct 18, 2009, 4:18:11 PM
to hugi...@googlegroups.com
Tom Sharpless wrote:
> I think it would be nice if W were actually based on a flow diagram,
> showing what you can (sanely) do -- and what you have already done --
> with the contents of the project at the current time. That would
> encompass everything from project operations (load, save, apply
> settings...) to lens calibration to image processing details (align
> stack, correct TCA, ..., ..., ...) and of course you would click on
> active elements to drill down to the details. Note: if you can
> display what has already been done, then you should be able to avoid
> doing it over again needlessly, when I push the Go button.

I can't tell about the way to program this, I'd just like to chip in
with my suggestion from May 1st '09:
"[hugin-ptx] adapting the GUI for different workflows" ->
http://www.joachimschneider.info/hugin_workflow_assembly.gif (+ *.pdf).

Best regards
Joachim

Bruno Postle

Oct 18, 2009, 4:18:24 PM
to Hugin ptx
On Wed 14-Oct-2009 at 19:31 -0700, awbrody wrote:
>
>It would be good if one were to be able to select and move individual
>and groups of images in the fast preview window. This would be useful
>in the case of sky images with no coherent set of control points.

This works already for 'unconnected groups' of photos. Any photo
without control points can be dragged around the Fast Preview
independently of the rest of the photos. Two photos connected
together by control points will drag around together etc...

It would be nice to have a 'modifier key' to drag photos individually
regardless of control points, but since the optimiser would undo
this movement, it wouldn't be much use.

>It may prove useful to implement something like skeleton deformation
>as it is used in animation to warp the images, perhaps a fine tuning
>the deformation to accommodate panoramas photographed without rotation
>about the correct axes.

libpano13 has a morph-to-fit function which distorts the reprojected
image to force a 'perfect' alignment, but nona has never been
modified to support it. It is a bit crude though, we have discussed
some kind of alternative fitting with a smooth spline patch but
nobody has written the code.
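For reference, the smooth-patch fit itself is not much code. A sketch of a 2-D thin-plate spline (the standard textbook formulation, not anything existing in libpano) that interpolates control-point displacements exactly with the smoothest possible deformation:

```python
import numpy as np

def tps_fit(src, dst):
    # Thin-plate spline: the smoothest 2-D warp carrying every source
    # control point exactly onto its destination point.
    n = len(src)

    def U(r2):  # radial basis r^2 * log(r^2), with U(0) = 0 by convention
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.nan_to_num(r2 * np.log(r2))

    K = U(((src[:, None] - src[None, :]) ** 2).sum(-1))
    P = np.hstack([np.ones((n, 1)), src])   # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    w = np.linalg.solve(A, b)               # spline + affine coefficients

    def warp(pts):
        k = U(((pts[:, None] - src[None, :]) ** 2).sum(-1))
        return k @ w[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ w[n:]

    return warp
```

Applied to nona's remap coordinates, something like this would give the "morph-to-fit" effect without the crude triangle distortion.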

--
Bruno

J. Schneider

Oct 18, 2009, 4:26:28 PM
to hugi...@googlegroups.com
Bart van Andel wrote:
> When this is done internally (e.g., as an intermediate kind of file),
> I don't see a problem. Hugin could just blend the images into an eqr
> and then remap later.
>
> However I think the end user should be bothered with as little steps
> as possible to get the desired output.
I support this. Usually I need only one projection in the end, and I
believe I can choose it quite well in the fast preview. Only more
precise cropping is done later with other tools. As long as I don't
see anything of this intermediate step and it doesn't take too much
additional time, it is OK. (Of course, when I want different
projections from one panorama I find it stupid to repeat remapping.)

> By the way, if cylindrical projection is the desired output, isn't
> there a risk of compressing the zenit and nadir too much when using

> eqr as an intermediate format? Just curious, [...]
That is what I fear as well. Isn't it a particular strength of PT to
calculate *one* operation to be applied to each pixel, out of all the
different deformations and transformations, so that there are no
subsequent steps that might degrade quality?
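The worry can be quantified. Equirectangular samples latitude uniformly, while a cylindrical output spaces its rows like tan(lat), so away from the equator each eqr intermediate row has to feed several cylindrical output rows. A back-of-envelope check (sec^2(lat) is the row-density ratio; a sketch, not a measurement of any actual stitcher):

```python
import math

def cyl_rows_per_eqr_row(lat_deg):
    # d(tan lat)/d(lat) = sec^2(lat): how many cylindrical output rows
    # correspond to one equirectangular intermediate row at this latitude
    # (normalised to 1.0 at the equator).
    lat = math.radians(lat_deg)
    return 1.0 / math.cos(lat) ** 2

# At 60 degrees latitude the eqr intermediate is already undersampled
# about 4x vertically relative to the cylindrical output.
ratio = cyl_rows_per_eqr_row(60)
```

So a cylindrical render taken from an eqr intermediate will indeed go soft toward the top and bottom, unless the intermediate is rendered at correspondingly higher resolution.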

regards
Joachim

J. Schneider

Oct 18, 2009, 4:31:32 PM
to hugi...@googlegroups.com
>> It may prove useful to implement something like skeleton deformation
>> as it is used in animation to warp the images, perhaps a fine tuning
>> the deformation to accommodate panoramas photographed without rotation
>> about the correct axes.
>
> libpano13 has a morph-to-fit function which distorts the reprojected
> image to force a 'perfect' alignment, but nona has never been
> modified to support it. It is a bit crude though, we have discussed
> some kind of alternative fitting with a smooth spline patch but
> nobody has written the code.

This is something I often miss with my badly shot handheld panos ...

regards
Joachim

James Legg

Oct 18, 2009, 5:17:09 PM
to hugi...@googlegroups.com
On Sat, 2009-10-17 at 20:28 -0700, Tom Sharpless wrote:
> Hi Lukas,
>
> On Oct 16, 9:19 am, Lukáš Jirkovský <l.jirkov...@gmail.com> wrote:
> > Hi
> >
> > 2009/10/16 Nicolas Pelletier <nicolas.pellet...@gmail.com>:
> >
> > > "I think "a new Hugin" should provide only two direct stitching
> > > targets: cube faces and equirectangular, and let you convert either of
> > > those to other projections later."
> > > I agree with this completely. The only other point I was trying to make is
> > > that "converting to other projections" should not involve one project file
> > > per projection IMHO.
> > > Thanks,
> > > nick
> >
> > I don't like this. It adds unnecessary step to panorama creation.
> > Usually I select the projection in fast preview by trying all of them
> > and comparing them how nice they are immediately. Now it would mean
> > that I've to stitch complete panorama (sometimes I crop it quite a
> > lot) and then reprojecting it somewhere. What would be better that
> > when someone specifies more projections (this would need also some
> > work on GUI) it would _internally_ use cube faces/equirect otherwise
> > it would work the same.

Agreed: If you only want one output projection, use that directly.
Otherwise you'll waste time and memory processing details that are too
fine to see in, or are outside of, the final projection; and you risk
having blurry sections where enough detail is present in the input
images.

> What I had in mind is indeed like the fast preview window, with many
> projections and cropping, except it is a fast "postview" window, that
> shows you an already stitched pano at high resolution, and lets you
> format a view interactively, with instant switching between
> projections. As I have learned from writing Panini, that kind of
> display is quite feasible with OpenGL technology. It works especially
> well if the pano is in cubic format.

Is this any different to loading an already stitched equirectangular (or
cube faces) into a new project? Either way, the panorama has to be
stitched before the fast *view can be shown. To me the fast preview is
already suitable for framing a transform of an equirectangular image.

> I believe (but have not yet seen) that would be possible to render
> final high resolution reprojections quite fast using the video
> hardware via OpenGL.

This is exactly what nona -g does.

> But even if the final rendering had to be done
> in software, I would prefer this way of composing my images, as it is
> pretty hard to get "just the right view" while looking at a low
> resolution preview.

It shouldn't be too low a resolution. Do you need more texture detail,
more accurate transforms, or a zoom function?

> And if I want to keep several views it will probably save time,
> because reprojecting a large image is a lot faster than stitching it
> from small ones (no blending needed).

True.

-James

Lukáš Jirkovský

Oct 19, 2009, 2:50:24 AM
to hugi...@googlegroups.com
Hi Tom,

2009/10/18 Tom Sharpless <tksha...@gmail.com>:
I've probably misunderstood you. This really doesn't sound that bad,
but I'm not sure whether waiting for this "postview" window wouldn't
be a bit confusing. I'd prefer to keep the possibility to decide (or
better, look at the possibilities) before stitching. The current fast
preview is really good for that because it uses a "good enough"
resolution.

Here's my idea. There would be something like the current fast
preview, which would use OpenGL for projection. You could select
several different projections there (and see them immediately, even
before stitching). These would be done by remapping. If you later
reopened this window, you could add more projections; these would be
made from the intermediate (cube faces) of the previous step, if it
exists.

>
> I believe (but have not yet seen) that would be possible to render
> final high resolution reprojections quite fast using the video
> hardware via OpenGL.  But even if the final rendering had to be done
> in software, I would prefer this way of composing my images, as it is
> pretty hard to get "just the right view" while looking at a low
> resolution preview.
> And if I want to keep several views it will probably save time,
> because reprojecting a large image is a lot faster than stitching it
> from small ones (no blending needed).

That's surely a big advantage over current system.
best regards,
Lukas

Lukáš Jirkovský

Oct 19, 2009, 2:58:30 AM
to hugi...@googlegroups.com
2009/10/19 Lukáš Jirkovský <l.jir...@gmail.com>:
Just one note: if only one projection is selected, I'd keep the same
workflow as we use now. So the only difference would be that the steps
which currently have to be done by hand for remapping (create a new
project, load the cube faces/equirect, and stitch) would run
automatically when more than one output projection is selected, and
thus yield some performance gain.

Bruno Postle

Oct 22, 2009, 5:35:16 PM
to Hugin ptx
On Tue 13-Oct-2009 at 06:48 -0700, Bart van Andel wrote:
>
>I've had another idea in my mind for some time now. It may sound a
>little too fancy or even surreal, but I guess this shouldn't stop me
>from writing it down here.
>
>Imagine a single OpenGL window where everything goes:
>* You can drop your images there, and use the mouse to put them into
>their approximate locations ("layout branch") or just run a control
>point finder to do this for you.

We almost have this, though at the moment you can only drag-and-drop
photos onto the Hugin main window and not the Preview window (a
bug).

Also, added photos inherit the same position as the first photo in
the project, so they are always hidden in the Preview initially - A
fixed drag-and-drop would use the mouse location to place the photo.

>* Images can be clicked, which makes them show up full screen, so
>cropping/masking can be edited.

This is an extension to the Identify tool that has been discussed
before: currently clicking on an overlap opens two images in the
Control Points tab, clicking on a single image should open that
image in the Images tab.

>* Another view (preferably selectable through a keyboard shortcut)
>would be showing the images and their connections on an approximately
>flat surface. This way, it can easily be determined whether there are
>images connected by control points which shouldn't be connected.
>Deleting the connection deletes all connecting control points. Adding
>a connection opens up yet another view, where control points can be
>added either manually or by using a control point finder. Clicking
>such a connection also opens this view.

This is a near description of the 'layout mode' in the
gsoc2009_layout branch.

--
Bruno

Oskar Sander

Oct 23, 2009, 6:26:12 AM
to hugi...@googlegroups.com


2009/10/22 Bruno Postle <br...@postle.net>

Also, added photos inherit the same position as the first photo in
the project, so they are always hidden in the Preview initially - A
fixed drag-and-drop would use the mouse location to place the photo.


A nice way to handle this would be to let unplaced (hidden) photos show up in a pane on the side, from where they can be dragged and dropped into place on the Preview window.


 
>Adding
>a connection opens up yet another view, where control points can be
>added either manually or by using a control point finder. Clicking
>such a connection also opens this view.

This is a near description of the 'layout mode' in the
gsoc2009_layout branch.

A fairly straightforward extension of that branch (once it is integrated) would be to run pairwise CP detection based on all "intended" connections.
Also, even though current detectors are scale- and orientation-invariant, an interesting thought is whether feeding in initial values for scale, orientation and overlap could assist a CP detection/pruning algorithm...
 

Cheers
O