Storing per-sample per-face-vertex normals is Alembic's current method of saving this information.
This suggestion is about more efficient disk-space usage. Storing a static array of face-vertex “hardness” booleans should be fine-grained enough to support any application that makes use of edge hardness. For example, both Maya’s edge-hardness boolean and Max’s smoothing groups could be implemented using this same array, and such caches would be interchangeable between applications.
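As a sketch of the claim that both representations reduce to the same boolean question, here is a minimal example (an assumption about the mapping, not anything either package actually ships): Max smoothing groups are bitmasks, and an edge shades smoothly when the two adjacent faces share a bit; Maya stores the hard-edge boolean directly. Either answer could then be baked down to the proposed per-face-vertex array.

```python
# Hypothetical mapping sketch: both Max smoothing groups and Maya
# hard-edge flags reduce to "is this edge hard?".

def edge_is_hard_from_smoothing_groups(sg_a, sg_b):
    # Max semantics: two adjacent faces shade smoothly across their
    # shared edge when their smoothing-group bitmasks overlap.
    return (sg_a & sg_b) == 0

def edge_is_hard_from_maya_flag(hard_edge):
    # Maya stores the boolean directly on the edge.
    return bool(hard_edge)

print(edge_is_hard_from_smoothing_groups(0b0001, 0b0010))  # True: no shared bits
print(edge_is_hard_from_smoothing_groups(0b0011, 0b0110))  # False: a bit is shared
```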
I would argue that compactness is very important, but getting the same model (or normals) in every application that loads an Alembic file is more important.
That would be more compact, agreed. However, Houdini's loader, for example, would need to understand how to interpret these booleans and turn them back into normals. RenderMan as well. So this would take a fairly important (and application-specific) modeling concept and move the logic for the "look" of the normals into either the Alembic library (yuck) or into each of the individual loaders (worse for consistency).
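To make the concern concrete, here is a minimal sketch of what each loader would have to implement. This is an assumption about one plausible interpretation, not any shipping code: a hard face-vertex takes its face's normal, and a smooth face-vertex takes the average of the normals of the smooth faces meeting at that point.

```python
# Sketch of reconstructing per-face-vertex normals from hardness booleans.
# `hardness` is flattened in the same face-vertex order as the faces.

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def normalize(v):
    l = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/l, v[1]/l, v[2]/l) if l else v

def face_vertex_normals(points, faces, hardness):
    # One (unnormalized-area) normal per face, from the first three vertices.
    fnormals = [normalize(cross(sub(points[f[1]], points[f[0]]),
                                sub(points[f[2]], points[f[0]]))) for f in faces]
    # Accumulate averaged normals per point from the smooth face-vertices only.
    smooth, i = {}, 0
    for fi, f in enumerate(faces):
        for v in f:
            if not hardness[i]:
                smooth[v] = add(smooth.get(v, (0.0, 0.0, 0.0)), fnormals[fi])
            i += 1
    out, i = [], 0
    for fi, f in enumerate(faces):
        for v in f:
            out.append(fnormals[fi] if hardness[i] else normalize(smooth[v]))
            i += 1
    return out
```

For a flat quad split into two triangles with all face-vertices smooth, every reconstructed normal comes out as (0, 0, 1), as expected.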
Dramatically smaller caches but with a non-negligible computation cost is a non-starter.
Are you sure that edge-hardness is consistent among nearly all applications, or just the applications you happen to be using?
Chadrik, just a quick note that you say edge hardness, but you describe a data set of booleans that are per face-vertex rather than per edge. Which is the preferred canonical representation?
Sent from my phone, sorry for my grammar and terseness.
> Dramatically smaller caches but with a non-negligible computation cost is a non-starter.

Why? Is it a stated goal of the Alembic project that computation speed is of the utmost priority above all other things?
Edge hardness in prman is an integer, not a boolean. Which of these apps supports an edge hardness of 5?
With #2 being the issue potentially at play here. But you are correct that the total cost of reading the geometry should be taken into account. It's certainly possible that a significantly smaller file, with some computation to fill in the missing normals, could actually end up being faster overall. Not yet proven, but plausible.
Let me confirm I understand the file-size issue correctly. You have deforming geometry, and in order to store the normals you need to store an extra vector for each vertex. In the case of soft (shared) normals, this adds one vector per point, potentially doubling the size of the uncompressed point data (or you can leave them off and let the host program add them by default). In the case of hard (unshared) normals, you could need as many extra vectors per point as the average number of sides on your n-gons, so maybe 3 times larger files for a triangle-based topology, or 4 times larger for a quad-based topology. And in your case, hard and soft normals are mixed on your animated models, so the importing program can't simply know which type of calculation to run on a per-object or per-shape basis. Is that what you're seeing?
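The back-of-the-envelope arithmetic above can be sketched as a tiny helper (hypothetical, for illustration; the actual ratios depend on topology and on Alembic's on-disk layout):

```python
def normal_float_count(num_points, face_vertex_counts, shared):
    """Floats needed to store one sample of normals.
    shared=True: one averaged normal per point (soft normals);
    shared=False: one normal per face-vertex (hard/mixed normals)."""
    per_normal = 3  # x, y, z
    if shared:
        return per_normal * num_points
    return per_normal * sum(face_vertex_counts)

# e.g. a cube: 8 points, 6 quad faces
print(normal_float_count(8, [4] * 6, shared=True))   # 24 floats, same size as the points
print(normal_float_count(8, [4] * 6, shared=False))  # 72 floats, 3x the point data
```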
To move forward, what needs to be proven is whether all of the modeling packages that support "hard" normals without explicitly outputting normals can be satisfied by a single definition of which vertices are hard and which are soft.
And can we transparently provide support for applications that don't support hard normals, in a way that works everywhere?
So far I've heard a couple of slightly different variations and no conclusion on widespread application support. If a single definition can be proven to work with all modeling and animation packages without significant compromise, and implementation is simple enough and fast enough not to break the Alembic Promises, we should consider either adding this ourselves or accepting a contribution to add this to Alembic.
--
You received this message because you are subscribed to the Google Groups "alembic-discussion" group.
Not all of these packages are used extensively in high end production, so I'd argue that it's questionable whether Alembic should attempt to serve the users of all these software packages - professional or otherwise.
Note also that these lists do not include Zeno, or any of the other proprietary tools used at major high end animation and visual effects studios - some of whom lead development on Alembic.
As someone who has wrestled for years to develop interfaces to pipeline infrastructure designed for specific prosumer 3D packages, I'm really opposed to integrating support for features of a specific application or application vendor into what is supposed to be a generic exchange format.