Hi Tom,
The proper solution is not to build or use the TraitsGeom library, as
the final public API is going to be closest to the one given in
lib/Alembic/AbcCoreAbstract. But even that one is very low-level, so
on top of it there will be a new library (AbcGeom) plus a helper
library called AbcNice. AbcNice + AbcGeom will compose into something
a lot like AlembicAsset/AlembicTraitsGeom. Here's an excerpt from a
PolyMesh writing test (the geometric data comes from an included file;
the 'g' in front of 'g_verts' etc. is for "global"):
namespace Abc = Alembic::AbcNice;
namespace Ago = Alembic::AbcGeom;

//-*****************************************************************************
// Global time information
Abc::chrono_t g_secondsPerFrame = 1.0 / 24.0;
Abc::chrono_t g_timeOfSample0 = 19.0;

void Brevity_Example1_MeshOut()
{
    Abc::OArchive archive( "brevityPolyMesh1.abc",
                           Abc::ErrorHandler::kThrowPolicy );

    Ago::OTransform xform( archive, "meshy_xform",
                           Abc::TimeSamplingType( g_secondsPerFrame ) );
    Ago::OPolyMesh mesh( xform, "meshy_shape" );

    Ago::OPolyMesh::Sample mesh_samp(
        Abc::OV3fArraySample( g_verts, g_numVerts ),
        Abc::OIntArraySample( g_indices, g_numIndices ),
        Abc::OIntArraySample( g_starts, g_numStarts ),
        Abc::OV3fArraySample( g_normals, g_numNormals ),
        Abc::OV2fArraySample( g_uvs, g_numUVs ) );
    mesh.set( mesh_samp );

    for ( Abc::index_t sampIndex = 0; sampIndex < 60; ++sampIndex )
    {
        Abc::chrono_t sampTime = g_timeOfSample0 +
            g_secondsPerFrame * ( Abc::chrono_t )sampIndex;

        Abc::M44d mtx;
        Abc::float64_t ytrans = 0.125 * ( Abc::float64_t )sampIndex;
        mtx.makeIdentity();
        mtx.setTranslation( Abc::V3d( 0.0, ytrans, 0.0 ) );

        Ago::OTransform::Sample xform_samp( mtx );
        xform.set( xform_samp,
                   Abc::OSampleSelector( sampIndex, sampTime ) );
    }

    std::cout << "Writing: " << archive.getName() << std::endl;
}
Doing tests on things like file size, I/O performance, etc. is
premature, as the binary format of Alembic files is different from the
one produced by AlembicAsset.
>
> Out of curiosity, is this build process going to be simplified when
> the final release is completed? There are quite a few steps and a
> number of opportunities to screw things up. Anything but the default
> builds, for Alembic and for all of its dependencies, really
> complicates the process.
>
Short answer: "probably". We've been working on making the build
setup as polished, foolproof, and easy as possible. However, building
is a difficult general problem: ABI-incompatibilities across different
C++ compiler versions (ensuring that if you're using, say, Maya, that
you must build your Alembic libraries and plugins with the same
compiler), constructing sane link paths, etc., all conspire to make it
impossible to reduce the complexity below a certain floor.
For that reason, we are recommending certain best-practices, such as
linking statically whenever possible (eg, in the case of libz, libm,
the ilmbase libs like Half, Iex, etc.), building HDF5 as a C-only
library (eliminating C++ ABI issues), and relying on a minimum set of
non-header-only Boost libs (and with work, we may be able to eliminate
all non-header-only Boost libraries, or possibly Boost entirely if
you're using a sufficiently modern C++ compiler -- but that is not a
priority at the moment). It may be that one's setup is very standard
and unified, though, and everything will "just work". But the reality
remains that you'll need:
- Boost 1.42 or greater (1.43 is good)
- ilmbase (the foundation of OpenEXR; used to provide Imath, etc.)
- HDF5 (it's likely we'll recommend 1.8.5 at the time of release,
though we're currently using 1.8.4-patch1)
The other reality is that we'll be providing pre-built bundles in
various configurations, and hopefully, that will save most people a
lot of pain.
Anyway, thank you for your interest and enthusiasm! If you'd like, it
might be a good idea to sign up to the alembic-announce list
(http://groups.google.com/group/alembic-announce). Once we have the
final release ready to go, we'll send a message to that list (it's not
a discussion list, it's announcements only). As we state on the
project's Google Code site, we'll be releasing the "1.0" version later
this month.
-Joe
For the GLUT Xmu problem, check that you have the libxmu-dev package
installed; if it is, the build should find it automatically.
We will be fixing support for compiling with Maya and Renderman in the
next push. Please try compiling without USE_MAYA and USE_PRMAN set to
true. In the mean time if you want to use Maya and Renderman you will
need to fiddle with the hard-coded paths in build/AlembicMaya.cmake
and build/AlembicPRMan.cmake, or pass the correct CMake arguments
using the alembic_bootstrap.py script. This script is also set to be
updated in the next push.
Please let us know if there are any other issues we can iron out.
Thanks
David
The reason AlembicHDF5 could not be found is that the build couldn't
find the HDF5 libraries to compile against. I'll put a guard in the
CMakeLists for the next push to check for this case. Where have you
installed HDF5? Have a look at the build/AlembicHDF5.cmake file.
You'll notice it uses FIND_PACKAGE( HDF5 ) - a CMake command that uses
the FindHDF5.cmake module shipped with CMake. This module is a little
convoluted; however, there are a number of ways to configure it:
1. Pass a CMake argument:
-D HDF5_ROOT:STRING=/usr/local/hdf5-1.8.4-patch1 (this is what the
bootstrap script does)
2. Hard-code the path here: SET( HDF5_ROOT "/usr/local/hdf5-1.8.4-patch1" )
3. Set an environment variable called HDF5_ROOT pointing to where HDF5
is installed. (This is not our preferred solution.)
4. Set an environment variable called HDF5_INCLUDE_DIR pointing to
where the HDF5 include directories are. (This is also not our
preferred solution.)
The next bootstrap script actually tests both that HDF5 is found and
that a test executable compiles and links successfully against the
found HDF5 library.
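In the meantime, you can sanity-check an install location yourself
before running CMake. Here's a minimal Python sketch (the function
name and the expectation of an include/hdf5.h header under the root
are my assumptions about a standard "make install" layout):

```python
import os

def looks_like_hdf5_root(root):
    # Heuristic: a standard HDF5 install root has an include/ dir
    # containing hdf5.h and a lib/ dir alongside it.
    return (os.path.isfile(os.path.join(root, "include", "hdf5.h"))
            and os.path.isdir(os.path.join(root, "lib")))

if __name__ == "__main__":
    # Example path only - substitute wherever you installed HDF5.
    print(looks_like_hdf5_root("/usr/local/hdf5-1.8.4-patch1"))
```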
It seems you are very close to compiling it successfully.
Thanks for your feedback.
David
The current iteration of the bootstrap script can actually find all
the HDF5 pieces needed to build without using CMake at all. This is an
excerpt from a mail about this sent to one of our external alpha
developers who has a goofy system setup:
"""
I'm pretty sure
the reason you got that message (HDF5 NOT FOUND) is that in spite of
your setting the libraries and include dir for hdf5, CMake was unable
to actually find the package HDF5, probably because the path to the
"h5cc" program was not in your $PATH, and you didn't have an HDF5_ROOT
environment variable set. h5cc just spits out information about how
you compiled hdf5, including its configured install location (so it
will be inaccurate if you did "--prefix=/foo" but actually manually
installed it to /bar; there's not a good way around this, aside from
not using cmake, or doing what I'm doing to fool it). Anyway, cmake
runs that program and parses the output to try to find where the hdf5
stuff is. I'm pretty sure this is very dumb, but again, cmake is
clunky and idiosyncratic (every "Find<foo>.cmake" script it comes with
works differently).
So, the bootstrap script will figure out a sane value for HDF5_ROOT
(by finding the longest common filepath between the hdf5 include dir
and the hdf5 lib dir), check os.environ for HDF5_ROOT, and if it's not
already there, it will do "os.environ['HDF5_ROOT']=<computed hdf5
root>". Then when CMake goes to find the hdf5 package, it will use
that to try to find the h5cc program. It can then try to do what it
does and find hdf5.
If cmake can't find hdf5, but you've used the bootstrap script to tell
it where the libraries and include dirs are, our build setup will
failover to what the bootstrapper told it. I tested a build at ILM
where I'd commented out the "FIND_PACKAGE( HDF5 )" line in
build/AlembicHDF5.cmake, and it built and tested correctly.
"""
So, there is that, and very very soon, the repo will be updated with
this. Thank you again for your patience and enthusiasm!
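To make the "longest common filepath" idea above concrete, here's a
rough Python sketch of that logic (the function names are mine, not
the actual bootstrap script's):

```python
import os

def compute_hdf5_root(include_dir, lib_dir):
    # Longest common filepath of the include and lib dirs, compared
    # per path component so "/usr/local/hdf5" and "/usr/local/hdf5-x"
    # don't collapse to a bogus character-level prefix.
    inc = include_dir.rstrip(os.sep).split(os.sep)
    lib = lib_dir.rstrip(os.sep).split(os.sep)
    common = []
    for a, b in zip(inc, lib):
        if a != b:
            break
        common.append(a)
    return os.sep.join(common) or os.sep

def ensure_hdf5_root(include_dir, lib_dir, environ=os.environ):
    # Respect an HDF5_ROOT that is already set; otherwise export the
    # computed one so CMake's FindHDF5 module can pick it up.
    environ.setdefault("HDF5_ROOT",
                       compute_hdf5_root(include_dir, lib_dir))
    return environ["HDF5_ROOT"]
```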
-Joe
Hi Markus, I replied privately (check your mail?), but the answer is
that that code is deprecated and will be removed very shortly. I
apologize for the state of the repo; it was a snapshot of our codebase
immediately prior to our SIGGRAPH demo, and represented a somewhat
disordered state. We're sorting out some non-technical issues
currently, though, and will update the main branch on the
code.google.com site as soon as that is done. Thanks again for your
patience and enthusiasm!
-Joe
Hi Markus, it looks like your version of HDF5 was not built correctly;
the "recompile with -fPIC" is a message from the linker regarding
libhdf5.a. This can happen if you installed hdf5 via a binary
package manager.
- If you did build hdf5 yourself, did you follow the instructions in
$ALEMBIC_SRC_ROOT/contrib/HDF5.build?
- What is the output of '/usr/local/hdf5/bin/h5cc -showconfig'?
-Joe
The short answer is that you need to make sure all the libraries that
Alembic links to are compiled with -fPIC and that you choose the
64-bit version of that library. That compile error looks like the zlib
library that comes installed with your ubuntu system is actually a
32-bit library. I'm assuming your system is 64-bit Ubuntu, in which
case you probably have both the 32-bit and 64-bit versions of zlib
installed and have chosen the 32-bit one (or don't have the 64-bit
version installed).
When the alembic_bootstrap script prompts for the ZLIB library, make
sure you choose the 64-bit version (e.g. option 2 below). I am not
sure we support 32-bit builds fully yet (we all run 64-bit builds
here)... though contributions in helping us get it compiling are
welcome!
--------
Number of your choice, new value, or blank to accept the default value:
[/usr/include/zlib.h]>
Using value '/usr/include/zlib.h'
Checking /usr/include/zlib.h
Using Zlib include directory: /usr/include
Please enter the full path to the zlib library
(eg, "/usr/lib/libz.a")
Enter the number of the choice you'd like to use, or enter
a different value if none of the choices are valid.
1) /usr/lib/libz.a
2) /usr/lib64/libz.a
------
Thanks for being an early adopter. It helps iron the bugs out for
future developers! :)
Cheers
David
Yes, currently, it will only build properly on 64-bit systems, due to
the Dimensions class (and the need to deal correctly with different
values for size_t). This issue is known to us and we will likely fix
that soon.
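The width difference is easy to see for yourself; this little Python
check (an illustration only, not Alembic code) shows why anything
that serializes raw size_t values isn't portable across 32- and
64-bit builds:

```python
import ctypes
import struct

# size_t tracks the platform word size: 8 bytes on a typical 64-bit
# build, 4 bytes on a 32-bit build. Anything that writes raw size_t
# values (as a Dimensions class might) therefore changes layout
# between the two architectures.
size_t_bytes = ctypes.sizeof(ctypes.c_size_t)
pointer_bytes = struct.calcsize("P")
print("size_t: %d bytes, pointers: %d bytes"
      % (size_t_bytes, pointer_bytes))
```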
-Joe
Hmm, it's strange that your version of libz was not built with
position-independent code (that's what the '-fPIC' flag does). You
can try a couple things, though:
- try using /usr/lib64/libz.so
- download the source to zlib (http://zlib.net/zlib-1.2.5.tar.gz),
and build it yourself, ensuring that it's built with -fPIC, then using
the one you built for Alembic
It exists in the Ubuntu package repository:
http://packages.ubuntu.com/lucid/zlib1g-dev ... you can re-compile it
as a static archive after customizing the build for -fPIC, but I
wouldn't recommend it (at least not overriding the version in
/usr/lib64). I assume the static zlib that ships with Ubuntu is
compiled without -fPIC for a reason - possibly performance, since
position-independent code carries a small overhead. I'm not sure
whether this is a bug in the static zlib library that comes with
Ubuntu. Anyway, if you want to try the libz.so file, that should work.
You can confirm whether the libz.a file is 64-bit by extracting an
object from it and inspecting the result:
cd /tmp; ar -x /usr/lib/libz.a compress.o; objdump -r compress.o
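If you'd rather not eyeball objdump output, a tiny Python helper (the
function is mine, just for checking) can read the ELF class byte
directly from an extracted object like /tmp/compress.o:

```python
def elf_class(path):
    # The fifth byte of an ELF file's identification header (EI_CLASS)
    # is 1 for 32-bit objects and 2 for 64-bit objects.
    with open(path, "rb") as f:
        ident = f.read(5)
    if len(ident) < 5 or ident[:4] != b"\x7fELF":
        return None  # not an ELF file
    return {1: 32, 2: 64}.get(ident[4])
```

For a healthy 64-bit libz.a, elf_class("/tmp/compress.o") should
return 64.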
Excellent! It sounds, then, like there's a bug with Ubuntu's zlib
package (the static archive should still be built with -fPIC). If you
get a chance, send me a private message with the output of 'dpkg-query
-s zlib1g' and 'cat /etc/issue', and I'll look into filing a bug with
Ubuntu :)
-Joe