Fwd: Still having trouble with Cortex and Houdini!


Hradec

Oct 9, 2016, 12:24:33 AM10/9/16
to cortexdev
Hi guys... me again with cortex/houdini problems...

To be exact, apart from ONE single time when I was able to build cortex with houdini 12 and boost 1.51, I have NEVER been able to build cortex with any other version of houdini... 13, 14 and now 15! It always crashes when I do "import IECore" inside houdini!

This time I have rebuilt my whole build mechanism, so now I can easily build any dependency package version with any gcc (I currently have a custom-built 4.1.2, 4.8.5 and the system gcc 6.1.2).

I'm still building everything with 4.1.2 because of nuke, which still ships with its own libstdc++ from gcc 4.1.2.
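As a side note, when a gcc 4.1-era libstdc++ (like the one Nuke bundles) has to coexist with modules built by newer compilers, one quick sanity check is to compare the GLIBCXX symbol versions the runtime provides against the ones a module requires. This is just an illustrative sketch; the paths are placeholders, not a real layout:

```shell
# Newest GLIBCXX version the app's bundled C++ runtime provides
# (path is a placeholder -- point it at the app's libstdc++):
strings /path/to/app/libstdc++.so.6 | grep '^GLIBCXX_' | sort -V | tail -1

# Newest GLIBCXX version a compiled binding module actually requires:
objdump -T /path/to/_IECore.so | grep -o 'GLIBCXX_[0-9.]*' | sort -Vu | tail -1

# If the module requires a newer GLIBCXX than the bundled runtime
# provides, loading it inside the app will fail (or silently pull in
# a second, mismatched libstdc++).
```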


But houdini 15 doesn't build with 4.1.2 anymore. So I'm using 4.8.5 to build IECoreHoudini, and I have tried boost 1.51 and now boost 1.55, which is the version recommended by SideFX for houdini 15.

But no matter what version of boost I use, or what version of gcc I build boost/IECore/IECorePython with, I always get this stack trace:

(gdb) bt
#0  0x00007fffdff33ce6 in __strcmp_ssse3 () from /usr/lib/libc.so.6
#1  0x00007fffd29cffa4 in std::_Rb_tree<boost::python::converter::registration, boost::python::converter::registration, std::_Identity<boost::python::converter::registration>, std::less<boost::python::converter::registration>, std::allocator<boost::python::converter::registration> >::insert_unique(boost::python::converter::registration const&) () from /atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/boost/1.51.0/lib/python2.7/libboost_python.so.1.51.0
#2  0x00007fffd29cf772 in boost::python::converter::(anonymous namespace)::get(boost::python::type_info, bool) () from /atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/boost/1.51.0/lib/python2.7/libboost_python.so.1.51.0
#3  0x00007fffcdd7a78e in boost::python::converter::detail::registry_lookup2<IECore::TypedData<Imath_2_2::Vec2<double> > const volatile> ()
    at /atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/boost/1.51.0/include/boost/python/converter/registered.hpp:87
#4  boost::python::converter::detail::registry_lookup1<IECore::TypedData<Imath_2_2::Vec2<double> > const volatile&> () at /atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/boost/1.51.0/include/boost/python/converter/registered.hpp:94
#5  __static_initialization_and_destruction_0 (__initialize_p=<optimized out>, __priority=<optimized out>) at /atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/boost/1.51.0/include/boost/python/converter/registered.hpp:105
#6  0x00007fffcdd7aff3 in global constructors keyed to std::string IECorePython::str<char>(char&) () at src/IECorePython/SimpleTypedDataBinding.cpp:438
#7  0x00007ffff7de94fa in call_init.part () from /lib64/ld-linux-x86-64.so.2
#8  0x00007ffff7de960b in _dl_init () from /lib64/ld-linux-x86-64.so.2
#9  0x00007ffff7dedb38 in dl_open_worker () from /lib64/ld-linux-x86-64.so.2
#10 0x00007ffff7de93a4 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
#11 0x00007ffff7ded2d9 in _dl_open () from /lib64/ld-linux-x86-64.so.2
#12 0x00007fffe7383ee9 in ?? () from /usr/lib/libdl.so.2
#13 0x00007ffff7de93a4 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
#14 0x00007fffe7384521 in ?? () from /usr/lib/libdl.so.2
#15 0x00007fffe7383f82 in dlopen () from /usr/lib/libdl.so.2
#16 0x00007ffff7b18d3e in _PyImport_GetDynLoadFunc (fqname=fqname@entry=0x7fffb98d9000 "IECore._IECore", shortname=shortname@entry=0x7fffb98d9007 "_IECore", 
    pathname=pathname@entry=0x7fffb98db000 "/atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/cortex/9.13.2/lib/boost1.51.0/python2.7/site-packages/IECore/_IECore.so", fp=fp@entry=0x7fffb9812280) at Python/dynload_shlib.c:130
#17 0x00007ffff7af650e in _PyImport_LoadDynamicModule (name=0x7fffb98d9000 "IECore._IECore", 
    pathname=0x7fffb98db000 "/atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/cortex/9.13.2/lib/boost1.51.0/python2.7/site-packages/IECore/_IECore.so", fp=0x7fffb9812280) at ./Python/importdl.c:42
#18 0x00007ffff7af40e9 in import_submodule (mod=mod@entry=0x7fffbdc70638, subname=subname@entry=0x7fffb98d9007 "_IECore", fullname=fullname@entry=0x7fffb98d9000 "IECore._IECore") at Python/import.c:2700
#19 0x00007ffff7af4d23 in load_next (p_buflen=<synthetic pointer>, buf=0x7fffb98d9000 "IECore._IECore", p_name=<synthetic pointer>, altmod=0x7ffff7da37e0 <_Py_NoneStruct>, mod=0x7fffbdc70638) at Python/import.c:2515
#20 import_module_level (locals=<optimized out>, level=<optimized out>, fromlist=0x7fffbdd90b90, globals=<optimized out>, name=0x0) at Python/import.c:2224
#21 PyImport_ImportModuleLevel (name=0x7fffbdd8d684 "_IECore", globals=<optimized out>, locals=<optimized out>, fromlist=0x7fffbdd90b90, level=<optimized out>) at Python/import.c:2288
#22 0x00007ffff7ad343f in builtin___import__ (self=<optimized out>, args=<optimized out>, kwds=<optimized out>) at Python/bltinmodule.c:49
#23 0x00007ffff7a247d3 in PyObject_Call (func=func@entry=0x7fffd7b60050, arg=arg@entry=0x7fffbf1ae940, kw=<optimized out>) at Objects/abstract.c:2529
#24 0x00007ffff7ad4e97 in PyEval_CallObjectWithKeywords (func=func@entry=0x7fffd7b60050, arg=arg@entry=0x7fffbf1ae940, kw=kw@entry=0x0) at Python/ceval.c:3890
#25 0x00007ffff7ad7e06 in PyEval_EvalFrameEx (f=f@entry=0x7fffb983b860, throwflag=throwflag@entry=0) at Python/ceval.c:2333
#26 0x00007ffff7adb4fd in PyEval_EvalCodeEx (co=co@entry=0x7fffbdd843b0, globals=globals@entry=0x7fffbc633da0, locals=locals@entry=0x7fffbc633da0, args=args@entry=0x0, argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=kwcount@entry=0, 
    defs=defs@entry=0x0, defcount=defcount@entry=0, closure=closure@entry=0x0) at Python/ceval.c:3253
#27 0x00007ffff7adb632 in PyEval_EvalCode (co=co@entry=0x7fffbdd843b0, globals=globals@entry=0x7fffbc633da0, locals=locals@entry=0x7fffbc633da0) at Python/ceval.c:667
#28 0x00007ffff7af2e8c in PyImport_ExecCodeModuleEx (name=name@entry=0x7fffb9828000 "IECore", co=co@entry=0x7fffbdd843b0, 
    pathname=pathname@entry=0x7fffb9832000 "/atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/cortex/9.13.2/lib/boost1.51.0/python2.7/site-packages/IECore/__init__.py") at Python/import.c:709
#29 0x00007ffff7af31ee in load_source_module (name=0x7fffb9828000 "IECore", pathname=0x7fffb9832000 "/atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/cortex/9.13.2/lib/boost1.51.0/python2.7/site-packages/IECore/__init__.py", 
    fp=<optimized out>) at Python/import.c:1099
#30 0x00007ffff7af4629 in load_package (name=0x7fffb9828000 "IECore", pathname=<optimized out>) at Python/import.c:1166
#31 0x00007ffff7af40e9 in import_submodule (mod=mod@entry=0x7ffff7da37e0 <_Py_NoneStruct>, subname=subname@entry=0x7fffb9828000 "IECore", fullname=fullname@entry=0x7fffb9828000 "IECore") at Python/import.c:2700
#32 0x00007ffff7af4d23 in load_next (p_buflen=<synthetic pointer>, buf=0x7fffb9828000 "IECore", p_name=<synthetic pointer>, altmod=0x7ffff7da37e0 <_Py_NoneStruct>, mod=0x7ffff7da37e0 <_Py_NoneStruct>) at Python/import.c:2515
#33 import_module_level (locals=<optimized out>, level=<optimized out>, fromlist=0x7ffff7da37e0 <_Py_NoneStruct>, globals=<optimized out>, name=0x0) at Python/import.c:2224
#34 PyImport_ImportModuleLevel (name=0x7fffbdd23fe4 "IECore", globals=<optimized out>, locals=<optimized out>, fromlist=0x7ffff7da37e0 <_Py_NoneStruct>, level=<optimized out>) at Python/import.c:2288
#35 0x00007ffff7ad343f in builtin___import__ (self=<optimized out>, args=<optimized out>, kwds=<optimized out>) at Python/bltinmodule.c:49
#36 0x00007ffff7a247d3 in PyObject_Call (func=func@entry=0x7fffd7b60050, arg=arg@entry=0x7fffbef36310, kw=<optimized out>) at Objects/abstract.c:2529
#37 0x00007ffff7ad4e97 in PyEval_CallObjectWithKeywords (func=func@entry=0x7fffd7b60050, arg=arg@entry=0x7fffbef36310, kw=kw@entry=0x0) at Python/ceval.c:3890
#38 0x00007ffff7ad7e06 in PyEval_EvalFrameEx (f=f@entry=0x7fffb983b6a0, throwflag=throwflag@entry=0) at Python/ceval.c:2333
#39 0x00007ffff7adb4fd in PyEval_EvalCodeEx (co=co@entry=0x7fffbdd84e30, globals=globals@entry=0x7fffd7bae6e0, locals=locals@entry=0x7fffd7bae6e0, args=args@entry=0x0, argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=kwcount@entry=0, 
    defs=defs@entry=0x0, defcount=defcount@entry=0, closure=closure@entry=0x0) at Python/ceval.c:3253
#40 0x00007ffff7adb632 in PyEval_EvalCode (co=co@entry=0x7fffbdd84e30, globals=globals@entry=0x7fffd7bae6e0, locals=locals@entry=0x7fffd7bae6e0) at Python/ceval.c:667
#41 0x00007ffff7adab2b in exec_statement (locals=0x7fffd7bae6e0, globals=0x7fffd7bae6e0, prog=0x7fffbdd84e30, f=0x7fffb9860c20) at Python/ceval.c:4718
#42 PyEval_EvalFrameEx (f=f@entry=0x7fffb9860c20, throwflag=throwflag@entry=0) at Python/ceval.c:1880
#43 0x00007ffff7adb4fd in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=2, kws=0x7fffb98607d0, kwcount=0, defs=0x0, defcount=0, closure=closure@entry=0x0)
    at Python/ceval.c:3253
#44 0x00007ffff7ada9e5 in fast_function (nk=<optimized out>, na=2, n=2, pp_stack=0x7fffba7fd7d0, func=0x7fffbef41848) at Python/ceval.c:4117
#45 call_function (oparg=<optimized out>, pp_stack=0x7fffba7fd7d0) at Python/ceval.c:4042
#46 PyEval_EvalFrameEx (f=f@entry=0x7fffb9860620, throwflag=throwflag@entry=0) at Python/ceval.c:2666
#47 0x00007ffff7adb4fd in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=3, kws=0x7fffb98605d0, kwcount=0, defs=0x7fffbf0bcb60, defcount=2, 
    closure=closure@entry=0x0) at Python/ceval.c:3253
#48 0x00007ffff7ada9e5 in fast_function (nk=<optimized out>, na=3, n=3, pp_stack=0x7fffba7fd9f0, func=0x7fffbef417d0) at Python/ceval.c:4117
#49 call_function (oparg=<optimized out>, pp_stack=0x7fffba7fd9f0) at Python/ceval.c:4042
#50 PyEval_EvalFrameEx (f=f@entry=0x7fffb9860420, throwflag=throwflag@entry=0) at Python/ceval.c:2666
#51 0x00007ffff7adaa93 in fast_function (nk=<optimized out>, na=2, n=2, pp_stack=0x7fffba7fdb70, func=0x7fffbef41b90) at Python/ceval.c:4107
#52 call_function (oparg=<optimized out>, pp_stack=0x7fffba7fdb70) at Python/ceval.c:4042
#53 PyEval_EvalFrameEx (f=f@entry=0x7fffb9811620, throwflag=throwflag@entry=0) at Python/ceval.c:2666
#54 0x00007ffff7adaa93 in fast_function (nk=<optimized out>, na=2, n=2, pp_stack=0x7fffba7fdcf0, func=0x7fffbe76b500) at Python/ceval.c:4107
#55 call_function (oparg=<optimized out>, pp_stack=0x7fffba7fdcf0) at Python/ceval.c:4042
#56 PyEval_EvalFrameEx (f=f@entry=0x7fffb985f420, throwflag=throwflag@entry=0) at Python/ceval.c:2666
#57 0x00007ffff7adb4fd in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1, kws=0x7fffb983aa28, kwcount=0, defs=0x7fffbe7689a8, defcount=1, 
    closure=closure@entry=0x0) at Python/ceval.c:3253
#58 0x00007ffff7ada9e5 in fast_function (nk=<optimized out>, na=1, n=1, pp_stack=0x7fffba7fdf10, func=0x7fffbe76b410) at Python/ceval.c:4117
#59 call_function (oparg=<optimized out>, pp_stack=0x7fffba7fdf10) at Python/ceval.c:4042
#60 PyEval_EvalFrameEx (f=f@entry=0x7fffb983a8a0, throwflag=throwflag@entry=0) at Python/ceval.c:2666
#61 0x00007ffff7adb4fd in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=args@entry=0x0, argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=kwcount@entry=0, defs=defs@entry=0x0, 
    defcount=defcount@entry=0, closure=closure@entry=0x0) at Python/ceval.c:3253
#62 0x00007ffff7adb632 in PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at Python/ceval.c:667
#63 0x00007fffeb2fe559 in PY_CompiledCode::evaluateUsingDicts(PY_Result::Type, void*, void*, PY_Result&) const () from /atomo/apps/linux/x86_64/houdini/hfs15.5.480/bin/../dsolib/libHoudiniUT.so
#64 0x00007fffeb2fe836 in PY_CompiledCode::evaluate(PY_Result::Type, PY_Result&) const () from /atomo/apps/linux/x86_64/houdini/hfs15.5.480/bin/../dsolib/libHoudiniUT.so
#65 0x00007fffeb307088 in pyPythonThreadStartCallback(void*) () from /atomo/apps/linux/x86_64/houdini/hfs15.5.480/bin/../dsolib/libHoudiniUT.so
#66 0x00007fffeb698523 in UT_Thread::threadWrapper(void*) () from /atomo/apps/linux/x86_64/houdini/hfs15.5.480/bin/../dsolib/libHoudiniUT.so
#67 0x00007fffe758e454 in start_thread () from /usr/lib/libpthread.so.0
#68 0x00007fffdfef37df in clone () from /usr/lib/libc.so.6
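For what it's worth, frames #0-#6 show the crash happening inside a static initializer of libboost_python's converter registry while the dynamic linker is still loading _IECore.so, which looks like the classic symptom of two incompatible copies of libboost_python (or two C++ runtimes) meeting in one process. A quick way to see which boost each module actually resolves at load time is something like this (paths are placeholders for your own install locations):

```shell
# Which libboost_python (and which libstdc++) does each module pull in?
# Substitute your own paths -- these are illustrative:
ldd /path/to/site-packages/IECore/_IECore.so | grep -iE 'boost|stdc\+\+'
ldd /path/to/houdini/dso/IECoreHoudini.so    | grep -iE 'boost|stdc\+\+'

# If the two modules resolve libboost_python to different files, or
# Houdini's own RPATH sneaks in a different boost first, that mismatch
# alone can corrupt the converter registry at import time.
```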

I have tried all possible combinations: building boost 1.55 with gcc 4.8.5 and IECoreHoudini with 4.8.5, building boost with gcc 4.1.2, building boost with the system gcc, etc. I'm now actually building one IECore/IECorePython PER BOOST VERSION, so I can try different combinations.

I would rather stick with gcc 4.1.2 for the packages, and only use a newer gcc to build cortex for the specific applications that require one, instead of building the dependency packages with different gcc versions...

The version of cortex is 9.13.2, the latest release on github.

Everything is working and loading correctly in Maya/Nuke/standalone python... everything but Houdini!!

To be fair, it's been almost 3 years now that I have been trying to get the studio to drop alembic and fbx and start using cortex and gaffer, but because I can't reliably support Houdini, we always end up going back... It's a really annoying situation!!

And I really can't figure this out anymore... any help is greatly appreciated!!!

btw, although the library paths say /atomo/pipeline/libs/linux/x86_64/gcc-6.2.120160830/, I am building the libraries with gcc 4.1.2 (or 4.8.5 for houdini only). That version in the path just identifies the current system (we use Arch linux, and whenever there's a major update, they bump the gcc version!)

cheers...

-H

Andrew Kaufman

Oct 11, 2016, 2:05:06 PM10/11/16
to cort...@googlegroups.com
Sorry you're having such struggles with this... we're definitely building the latest releases of Cortex successfully with both Houdini 15.0 and 15.5. We are on CentOS 7.1, using a custom-built gcc 4.8.3 along with Boost 1.55.0, OpenEXR 2.2.0, and TBB 4.3. One thing we seem to do differently from what you're describing is that we always build all dependencies and all of Cortex (not just IECoreHoudini) using the same gcc for any one app (and app version). So we have entirely separate installs of boost/exr/tbb that get used inside Maya, Nuke, and Houdini, and we do the same for IECore/IECorePython.

I know it's a larger overhead for the build system, but it's really the only reliable way to ensure you're not getting library-level clashes. Ideally everyone will get onto VFXPlatform and this stuff will become less of an issue, but for now we maintain the separate install stacks for all our libraries. Have you tried something like that, or are you trying to mix/match within an app?
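To sketch the idea of one dependency stack per host app, each built entirely with that app's compiler, a hypothetical layout might look like the following. Every name here (prefixes, compiler versions, SCons option names) is illustrative, not a prescription; check Cortex's own SConstruct for the exact build options it accepts:

```shell
# One install prefix per app/version; every library inside it is built
# with the gcc that matches the app. All paths/names are hypothetical.
APP=houdini-15.5
PREFIX=/opt/pipeline/stacks/$APP
export CC=gcc-4.8 CXX=g++-4.8     # match the app vendor's toolchain

# boost/tbb/exr each get configured into that prefix, e.g. for boost:
#   ./b2 --prefix=$PREFIX toolset=gcc install
# and Cortex is then pointed at the same stack, e.g. (illustrative
# SCons options):
#   scons INSTALL_PREFIX=$PREFIX \
#         BOOST_INCLUDE_PATH=$PREFIX/include \
#         BOOST_LIB_PATH=$PREFIX/lib ...
echo "$PREFIX"
```

The cost is N copies of boost/exr/tbb on disk, but no two C++ runtimes or boost sonames ever meet inside one process.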

Andrew



--
You received this message because you are subscribed to the "cortexdev" group.
To post to this group, send email to cort...@googlegroups.com
To unsubscribe from this group, send email to cortexdev-unsubscribe@googlegroups.com
For more options, visit this group at http://groups.google.com/group/cortexdev?hl=en

Hradec

Oct 12, 2016, 12:24:09 PM10/12/16
to cortexdev


I've never tried building the whole of cortex for each app! I'm always building IECore* with the lowest gcc possible (4.1.2), and the same for its dependencies (boost, tbb, exr, jpeg, tiff, freeglut, freetype, etc).

Literally all dependencies apart from the main system libc, which made our life simpler since we don't rely on a specific linux distro! Distro upgrades are a breeze!! (we can run the same pipe on fedora, debian, arch, etc).

Doing this seemed to work incredibly well for us, since we had no problems at all with cortex in all apps (maya, 3delight, prman, nuke)... apart from Houdini, unfortunately!

But I'll give that a try for sure! As I'm already building IECoreHoudini with gcc 4.8.5, I'll do what you're doing... gcc 4.8.3 to build boost 1.55, tbb 4.3 (we have been using tbb 4.4), exr 2.2 and the whole of cortex, only for Houdini!! Hopefully it will fix this once and for all! lol

thanks for the kind reply... really appreciated!!

cheers...

H



--
Hradec