Legacy DCC compatibility and the new libstdc++ ABI

Nick Cannon

Jul 16, 2023, 1:22:10 PM
to vfx-platform-discuss
Studios upgrading from older Linux distros such as CentOS/RHEL 7 are moving to distros built around the new libstdc++ ABI, which introduces new compatibility considerations when older DCCs still need to run.

I'm aware of a couple of studios handling this successfully, and others who are unsure how best to address it.

For those of you at studios needing to run both modern and legacy DCCs on your workstations, how are you handling this? Dedicated hardware, VMs, containers, or software compatibility layers?

Please use this thread to share experiences and ideas so we can all learn from each other.

Nick

Neal Gompa

Jul 16, 2023, 2:06:47 PM
to nick....@gmail.com, vfx-platform-discuss
Why would you expect issues at runtime? Unless software is being
recompiled, it should just work: libstdc++ supports both ABIs at
runtime and only uses the new ABI for build-time work.

See: https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html
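
As a minimal sketch of what the dual ABI means in practice (assuming g++ with libstdc++ on Linux; the file names are just for illustration): a single libstdc++.so exports symbols for both the old and the new std::string/std::list layouts, and _GLIBCXX_USE_CXX11_ABI selects which one a translation unit is compiled against.

// abi_probe.cpp -- a minimal sketch, assuming g++ with libstdc++ on Linux.
// Compile it twice:
//   g++ -D_GLIBCXX_USE_CXX11_ABI=0 abi_probe.cpp -o probe_old
//   g++                            abi_probe.cpp -o probe_new
// Both binaries link against the same libstdc++.so; only the mangled names
// of string/list-using symbols differ between the two builds.
#include <cstdio>
#include <string>   // pulls in libstdc++'s configuration macros

int main() {
#if defined(_GLIBCXX_USE_CXX11_ABI)
    std::printf("_GLIBCXX_USE_CXX11_ABI = %d\n", _GLIBCXX_USE_CXX11_ABI);
#endif
#if defined(_GLIBCXX_RELEASE)
    std::printf("libstdc++ headers from GCC %d\n", _GLIBCXX_RELEASE);
#endif
    std::string s = "hello";  // std::__cxx11::basic_string under the new ABI
    std::printf("string says: %s\n", s.c_str());
    return 0;
}

Running probe_old and probe_new against the same runtime shows both working; the difference only appears in the mangled symbol names (nm -C on the new-ABI object shows std::__cxx11::basic_string).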



--
真実はいつも一つ!/ Always, there's only one truth!

Nick Cannon

Jul 16, 2023, 5:58:08 PM
to Neal Gompa, vfx-platform-discuss
I'll let others who are dealing with these challenges chime in with details, but the issues relate to the coexistence and integration of legacy software compiled with the old ABI and software compiled with the new ABI. Per the CY2023 guidance, software vendors should be building against the new ABI for all major DCC releases from this year.

The compatibility considerations are not just about the new ABI; they also include the complexities of running legacy software on a much more modern distro, in a studio environment where legacy versions sometimes need to run alongside current releases.

Nick

Kimball Thurston

Jul 16, 2023, 6:28:44 PM
to vfx-platform-discuss
We actually have two lightweight compatibility layers we maintain at Wētā FX:

First, a runtime layer that every version of the OS has in LD_LIBRARY_PATH. It has only a handful of things in it (I count 40 files and a symlink), the most important being the newest libstdc++, kept ahead of both the O.S. and the newest compiler we support. For example, I think we're on the libstdc++.so from gcc 12.x - so technically ahead of the VFX Reference Platform and the O.S., but only so we don't have to update quite as often. Given the symbol versioning in libstdc++, this is safe, and it really smooths out most of the compatibility issues we've had in the past. Similarly, runtimes like the address sanitizer / gcc / gfortran / etc. come along with it. We also provide an OpenMP that attempts to deal with the varied implementations (clang / gcc / intel) and their varied levels of support.
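
One small check that pairs well with this kind of layer (a sketch only, assuming g++ and glibc on Linux with a dynamically linked libstdc++; the file name is illustrative): ask the dynamic loader which libstdc++.so it actually resolved, to confirm the compat copy is winning over the O.S. one.

// where_is_libstdcxx.cpp -- a minimal sketch, assuming g++ and glibc on Linux.
// Build: g++ where_is_libstdcxx.cpp -o where_is_libstdcxx -ldl
// (g++ defines _GNU_SOURCE by default, which dladdr needs; -ldl is only
// required on glibc older than 2.34)
#include <cstdio>
#include <dlfcn.h>      // dladdr
#include <stdexcept>
#include <typeinfo>

int main() {
    // The typeinfo object for std::runtime_error is exported by libstdc++.so
    // (when linked dynamically), so dladdr on its address reports which copy
    // the loader actually picked.
    const void* sym = static_cast<const void*>(&typeid(std::runtime_error));
    Dl_info info{};
    if (dladdr(sym, &info) && info.dli_fname) {
        std::printf("libstdc++ resolved from: %s\n", info.dli_fname);
    } else {
        std::printf("could not resolve the libstdc++ location\n");
    }
    return 0;
}

Comparing the reported path against the O.S. default (or checking the newest GLIBCXX_* version tag with readelf -V) is a quick way to catch an environment where the layer isn't being picked up.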

One Reference Platform-specific thing of note here: we are also starting to do the same trick for libraries like TBB, which provides a backwards-compatible shared object, enabling people to mix TBB versions in an environment more easily. Basically, it's a solution born out of practicality when mixing environments with different versions of tools. The inline headers and all of that layer of TBB remain Reference Platform consistent; only the shared object is swapped.

We have also recently added a second compatibility layer that is O.S.-version specific, so we can support legacy DCC tools and the like that need older shared object versions they don't ship in their own distributions. Normally this might be handled by a "compat" package from the O.S., but we have found it less frustrating to construct the set manually; occasionally this involves relinking to avoid sprawl and to keep the layer as light-touch as possible, rather than just replicating entire old versions of an O.S. That layer currently has 60 or so shared objects and about the same number of symlinks for Ubuntu 22.04, so it's not horrible to maintain.

Both of these sets of shared objects are then added to our environments universally via LD_LIBRARY_PATH, since we already have that mechanism and it lets us test different setups. Once settled, though, they don't change very often, so we could easily do something less environment-variable dependent, via something like an overlay FS, an OCI container, or a simple package with ld.so.conf changes.

We have separately done something similar for jemalloc, as different DCCs compile jemalloc in different ways - essentially differing in which function names are exposed, while sharing the same SO name. So when we want to use jemalloc in our own tools, which may provide libraries for use in a particular DCC, we provide a jemalloc that exposes all the symbols (as symbol aliases). This is not yet included in the layers above but is managed separately, although we may look at combining them in the future.
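
For what the symbol-alias trick looks like in general (a sketch only, assuming GCC or Clang on Linux, with invented names rather than the real jemalloc entry points): a single definition is exported under two spellings, so callers expecting either name resolve to the same code.

// alias_shim.cpp -- a minimal sketch of exporting one implementation under two
// names, assuming GCC/Clang on Linux; the names here are invented, and a real
// jemalloc shim would alias each entry point (malloc/je_malloc, mallctl, and
// so on).
// Build as a shared object: g++ -fPIC -shared alias_shim.cpp -o libaliasshim.so
#include <cstdlib>

extern "C" {

// The single real implementation (it just forwards to the system allocator so
// the example is self-contained).
void* shim_alloc(std::size_t n) { return std::malloc(n); }

// A second, "prefixed" spelling of the same symbol via the GCC/Clang alias
// attribute: both names share one address, with no forwarding call.
void* je_shim_alloc(std::size_t n) __attribute__((alias("shim_alloc")));

}  // extern "C"

Whether the aliases live in a purpose-built jemalloc or in a thin shim in front of it is a packaging choice; the point is that one SO name can satisfy both naming conventions.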

Happy to provide more details as needed.

best,
Kimball

Jeff Bradley

Jul 17, 2023, 12:03:33 AM
to Kimball Thurston, vfx-platform-discuss
Our studio primarily uses a compatibility layer to aid the transition of older DCCs and internal software to newer distros. The compatibility layer contains both runtime and build-time objects. This worked well in the past. We have not completed our current testing, so I can’t say “mission accomplished” yet. We’re also looking into containers for some scenarios. VMs and dedicated hardware are backup plans.

Nick perfectly described our situation. We have hundreds of software packages under active development, with compatibility reaching back five or six years. To provide features and support to current and upcoming productions, we will continue development on leaf node packages in these “legacy” environments. On the other hand, the lower-level package dependencies won’t require changes, and therefore won’t need to be rebuilt. The compatibility layer helps minimize risk to production by continuing to allow forward progress while maintaining stability in as many packages as possible.

I'm intrigued by Kimball's description and will discuss it with my team. The common challenges my studio faces have not been solved by using software newer than the VFX Reference Platform. Instead, we've focused on simplifying our environments to improve consistency, which freed developers, TDs and users from worrying about the combinatorics. But your forward-facing compatibility approach could open up some valuable options for developers.

Thanks,
Jeff

Kimball Thurston

Jul 18, 2023, 9:09:05 AM
to Jeff Bradley, vfx-platform-discuss
Hey Jeff :)

We also have multiple hundreds of things in flight, so definitely share that pain :) The libstdc++ layer and friends I mentioned, which are a separate layer from the more traditional legacy compatibility layer, do not inherently lower the number of permutations in the build matrix. For several tools where we have stronger control and opinions about the API, and the resulting ABI and external dependencies, it absolutely does - but those are more the exception. Otherwise, because of common library dependencies and passing those richer data types around, it is still a bit of a mess. So it reduces compatibility support load and enables more flexible environment creation more than it shrinks the build matrix, if you will. Although there is more to do and consider around that.

Related, this is also one of the reasons there is a push within the OpenEXR team to become more backwards compatible than we've been historically, such that a new version won't (always) require recompiling / relinking everything. This is likely not feasible or even reasonable for every project, but maybe we can chip away at it over time.

If it would help, we could organize a meetup at SIGGRAPH / Open Source Days and get people involved who know more of the details than my blathering conveys.

best,

Kimball