Thanks, sounds fun. Tried to build it on Debian 10, got this:
~/pv$ make
clang++ -c -Ofast -std=c++11 -c pv_no_rendering.cc -o pv_no_rendering.o
clang++ -c -Ofast -std=c++11 -c pv_initialize.cc -o pv_initialize.o
clang++ -c -D PV_EXTERN=extern -mavx2 -D PV_ARCH=PV_AVX2 -Ofast -std=c++11 -c pv_rendering.cc -o pv_avx2.o
In file included from pv_rendering.cc:108:
In file included from ./pv_common.h:59:
In file included from /usr/include/Vc/Vc:30:
In file included from /usr/include/Vc/vector.h:35:
In file included from /usr/include/Vc/avx/vector.h:32:
In file included from /usr/include/Vc/scalar/../common/../avx/casts.h:33:
In file included from /usr/include/Vc/scalar/../common/../avx/../sse/casts.h:31:
/usr/include/Vc/scalar/../common/../sse/intrinsics.h:601:13: error: argument to '__builtin_ia32_vec_ext_v4sf' must be a constant integer
_MM_EXTRACT_FLOAT(f, v, i);
^~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/clang/7.0.1/include/smmintrin.h:890:11: note: expanded from macro '_MM_EXTRACT_FLOAT'
{ (D) = __builtin_ia32_vec_ext_v4sf((__v4sf)(__m128)(X), (int)(N)); }
^ ~~~~~~~~
1 error generated.
make: *** [makefile:38: pv_avx2.o] Error 1
Ideas?
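For what it's worth, clang expands _MM_EXTRACT_FLOAT to __builtin_ia32_vec_ext_v4sf, whose lane index must be a compile-time constant, so the macro fails whenever Vc passes it a runtime variable. A minimal sketch of the usual portable workaround (my own illustration, not a proposed Vc patch) is to spill the vector to memory and index it there:

```cpp
#include <xmmintrin.h>  // SSE: __m128, _mm_store_ps, _mm_setr_ps

// Extract one lane of an SSE vector when the lane index is only
// known at run time. _MM_EXTRACT_FLOAT can't do this under clang,
// because its underlying builtin requires a constant index.
float extract_lane(__m128 v, int i) {
    alignas(16) float tmp[4];
    _mm_store_ps(tmp, v);   // store all four lanes to aligned memory
    return tmp[i];          // index with the runtime value
}
```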
> * Access to the output is much more immediate, because the lengthy
> stitching process is avoided. As long as your image set remains the
> same, you can quickly reflect changes in the PTO file by simply
> pressing 'F1' in pv, which will reload the PTO file and reuse the
> interpolators, which otherwise take some time to set up. You can use
> pv as an 'external preview' while working on the PTO in a stitcher
> like hugin, save work in the stitcher and refresh pv to see the
> output straight away.
> * Interpolation is done directly on the source images. This preserves
> the full source image quality and resolution - you can, e.g., zoom
> into the view like into the source images, without being limited to
> the resolution you chose for the output of stitching, and without
> the images having been geometrically transformed and then stored
> before being read back in again, losing some quality in the process.
> Same goes for exposure.
> * pv uses a geometric approach to select from which source image(s) a
> target pixel should be taken. Without feathering or alpha
> processing, the simple rule is: 'take from the source image whose
> center is closest to the target pixel'. Seams are not explicitly
> defined but occur as an emergent phenomenon. The 'closest-center'
> rule automatically chooses those parts of the source images which
> are usually the best - technically speaking - because they are least
> encumbered with lens flaws.
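The 'closest-center' rule described above can be sketched in a few lines of C++. This is only an illustration under assumed conventions (image centers and the target ray as unit direction vectors on the viewing sphere); pv's actual implementation will differ:

```cpp
#include <vector>

// A 3D direction on the unit sphere.
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Pick the source image whose center direction is closest (smallest
// angle, i.e. largest cosine) to the ray through the target pixel.
// Seams then emerge wherever the winner changes from one pixel to
// the next, without being explicitly defined anywhere.
int closest_center(const Vec3& ray, const std::vector<Vec3>& centers) {
    int best = -1;
    double best_cos = -2.0;   // below any real cosine
    for (int i = 0; i < (int)centers.size(); ++i) {
        double c = dot(ray, centers[i]);
        if (c > best_cos) { best_cos = c; best = i; }
    }
    return best;
}
```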
> * You can experiment easily with several target projections (use
> --target_projection=...), again without having to stitch an output
> image. If you set up pv's viewer window with the right aspect ratio
> and use a snapshot magnification appropriate to your desired output
> size, making a snapshot of what you see is equivalent to stitching
> with a stitcher. The process is WYSIWYG, and output quality is
> defined by output size and the 'quality interpolator' which is
> currently in use (by default, a cubic B-spline). It's much faster
> than 'true' stitching. The rendition of a snapshot is delegated to a
> separate thread 'on the back burner', so you can keep on viewing
> while the snapshot(s) are completed in the background.
>
>
> Now for some disadvantages:
>
> * 'live stitching' is memory-hungry. Every source image is read from
> disk and converted to an internal representation (typically a set of
> two float-based image pyramids). You can save on memory by passing
> --build_pyramids=no, which will impose a few limits - for simply
> viewing the PTO it's okay, though. But even then, more memory is
> needed than for viewing a stitched panorama.
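To get a feel for the memory cost: each pyramid level is roughly a quarter the size of the one below, so a full pyramid converges to about 4/3 of its base level, and pv keeps two such float-based pyramids per source image. A back-of-the-envelope sketch (channel count and level count are assumptions for illustration, not pv's actual internals):

```cpp
#include <cstddef>

// Rough memory footprint of one float-based image pyramid:
// each level halves both dimensions, so the total converges
// to ~4/3 of the base level's size.
std::size_t pyramid_bytes(std::size_t w, std::size_t h,
                          int channels, int levels) {
    std::size_t total = 0;
    for (int l = 0; l < levels; ++l) {
        total += w * h * channels * sizeof(float);
        w = (w + 1) / 2;
        h = (h + 1) / 2;
    }
    return total;
}
```

For a 6000x4000 RGB image, one pyramid alone already runs to roughly 380 MB, which is why --build_pyramids=no helps so much on smaller machines.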
> * There is - as yet - no seam optimization. The seams occur where
> two source images 'meet', and there is nothing you can do about it -
> registration errors can be masked to a degree by using feathering,
> but that's it.
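Feathering, as a general technique (a sketch of the idea, not pv's exact code), just ramps the blending weight linearly across a transition zone around the seam, so registration errors inside that zone are smeared out instead of showing as a hard edge:

```cpp
// Linear feathering weight. 'dist' is the signed distance from the
// seam (positive = inside this image's territory) and 'feather' is
// the width of the transition zone. Returns this image's weight;
// the other image contributes with weight 1 - w.
float feather_weight(float dist, float feather) {
    float w = 0.5f + dist / feather;
    if (w < 0.0f) w = 0.0f;
    if (w > 1.0f) w = 1.0f;
    return w;
}
```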
> * Animation can be slow. When viewing single images/panoramas, pans,
> zooms, etc. should not stutter on a reasonably powerful system. But
> with 'live stitching', frame rates drop. Especially in views where
> many source images contribute, and with target projections other
> than rectilinear, frame rates may go down to a few per second.
>
>
> This is a wide topic, so there are many more factors to consider, but I
> won't go on here. What I'd like to mention is a second 'blending mode'
> which pv offers. If you pass --blending=hdr, you can view registered
> exposure series as if they had already been blended into an HDR output.
> While pv has only rudimentary tonemapping (you might say it's only range
> compression), this type of view can still give you a good idea of how
> well-suited an image set is for HDR blending. There is no deghosting,
> though. Making snapshots of hdr-blended views can preserve the full
> dynamic range, provided the output format is capable of doing so -
> typically you'd use OpenEXR output. So you can use pv to HDR-blend sets
> of images as well.
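The post doesn't say which operator pv's 'range compression' actually is; a common minimal example of the general idea is a Reinhard-style global operator, shown here purely to illustrate the term:

```cpp
// Reinhard-style global range compression: maps [0, inf) into [0, 1),
// compressing highlights strongly while leaving dark values nearly
// linear. Only an illustration of 'range compression' as a concept --
// pv's actual tonemapping operator is not specified in the post.
float range_compress(float v) { return v / (1.0f + v); }
```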
>
> Kay
--
David W. Jones
gnome...@gmail.com
wandering the landscape of god
http://dancingtreefrog.com
My password is the last 8 digits of π.