using hierarchical shared libraries, best practices


Ondrej Platek

unread,
Apr 27, 2013, 9:06:13 AM4/27/13
to pytho...@googlegroups.com
Hello,

I am trying to figure out the best way to access C code that is split across several shared libraries.
I would like to minimize the amount of code compiled through cffi, because the shared libraries are already built from source by a Makefile.

1) One obvious way is to write a C program prog.c which uses all the shared libraries, e.g. libtest2.so and libtest3.so,
and use ffi.verify(source_prog).

This means that test2 and test3 stay independent and are connected only through prog.c.

prog.c    
             libtest2.so
             libtest3.so

In my case prog.c is rather large and complicated, so I would prefer method 2).
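A minimal sketch of option 1 in cffi terms, assuming test2() and test3() are the exported entry points and the .so files sit in the build directory (the signatures here are made up):

```python
from cffi import FFI

ffi = FFI()
# Declare only the functions that Python actually calls.
ffi.cdef("""
    int test2(void);
    int test3(void);
""")

def build():
    # verify() compiles a small extension module from the given C source
    # and links it against the already-built shared libraries.
    return ffi.verify(
        """
        int test2(void);
        int test3(void);
        """,
        libraries=["test2", "test3"],   # -ltest2 -ltest3
        library_dirs=["."],             # where libtest2.so / libtest3.so live
    )

# lib = build(); lib.test2()
```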

2) The other idea is to write the glue code in C and compile it as libtest1.so.
The glue code would expose a very simple API whose functions would be called directly from Python.

The hierarchy would change to:

prog.c
        libtest1.so // in libtest1.so are called functions defined in libtest2.so/libtest3.so
                    libtest2.so
                    libtest3.so
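Option 2 sketched from the Python side (run_all is a hypothetical name for the glue API):

```python
from cffi import FFI

ffi = FFI()
# Only the glue library's small API is declared; libtest2.so and
# libtest3.so stay hidden behind libtest1.so.
ffi.cdef("int run_all(void);")  # hypothetical glue entry point

def open_glue():
    # Works only if libtest1.so's own dependencies can be resolved at
    # load time (see the rest of the thread for the pitfall).
    return ffi.dlopen("./libtest1.so")

# lib = open_glue(); lib.run_all()
```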

I have written a prototype using the verify function. See:
The code
Simple instructions for compiling it

Actually, for me it would be sufficient to use dlopen if I could just open all three libraries at once!

Thanks for any hints

Ondra

Simon Sapin

unread,
Apr 27, 2013, 9:10:04 AM4/27/13
to pytho...@googlegroups.com
On 27/04/2013 15:06, Ondrej Platek wrote:
> Actually, for me it would be sufficient to use /dlopen/ if I just could
> open *all three libraries at once!*

I think you only need to dlopen() the libraries that export symbols that
you want to use. These libraries' own recursive dependencies are loaded
automatically.

For example, cairocffi just uses dlopen('cairo'). I don't need to
specify Freetype, Fontconfig or libpng explicitly, even though they are
used by cairo.
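A sketch of that pattern (cairocffi's actual loading code is a bit more involved; this assumes libcairo is installed and uses its real cairo_version() entry point):

```python
from cffi import FFI

ffi = FFI()
ffi.cdef("int cairo_version(void);")

def open_cairo():
    # dlopen only the library whose symbols we call; the dynamic loader
    # pulls in cairo's own dependencies (freetype, fontconfig, libpng,
    # ...) automatically via the DT_NEEDED entries recorded in libcairo.
    return ffi.dlopen("cairo")

# cairo = open_cairo(); print(cairo.cairo_version())
```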

--
Simon Sapin

Ondrej Platek

unread,
Apr 27, 2013, 10:49:43 AM4/27/13
to pytho...@googlegroups.com, simon...@exyr.org
ffi.dlopen('libtest1.so')
throws an error:
library not found: 'libtest1.so'

I tried setting LD_LIBRARY_PATH to the directory containing libtest1.so,
but that did not help either, so I tried to use ffi.verify.
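For what it's worth, one way to sidestep the library search is to hand dlopen an absolute path (a sketch, library name as in the thread):

```python
import os
from cffi import FFI

ffi = FFI()
ffi.cdef("int test1(void);")  # assumed prototype

def open_by_path(directory="."):
    # The dynamic loader reads LD_LIBRARY_PATH only once, at process
    # startup, so exporting it from inside a running interpreter has no
    # effect.  Passing an explicit absolute path avoids the search
    # entirely.
    return ffi.dlopen(os.path.abspath(os.path.join(directory, "libtest1.so")))
```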


Ondra

Ondrej Platek

unread,
Apr 27, 2013, 11:20:47 AM4/27/13
to pytho...@googlegroups.com, simon...@exyr.org
See the example code, which fails, in the attached test4.zip.
test4.zip

Ondrej Platek

unread,
Apr 27, 2013, 11:24:43 AM4/27/13
to pytho...@googlegroups.com, simon...@exyr.org
Interestingly,
lib = ffi.dlopen('libtest2.so')
lib.test2()

works fine!


Ondrej Platek

unread,
Apr 27, 2013, 11:38:18 AM4/27/13
to pytho...@googlegroups.com, simon...@exyr.org
Actually, I figured it out. I need to call dlopen inside test1.cc to open the
shared libraries, because when I tried lib = ffi.dlopen('libtest1.so', ffi.RTLD_LAZY)
the response was:

test1 # it works
python: symbol lookup error: libtest1.so: undefined symbol: test2

PS: Sorry for spamming with so many replies, but at least I figured it out. Tomorrow I will code it up.
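For completeness, a pure-Python alternative to calling dlopen inside test1.cc is to preload the dependencies with RTLD_GLOBAL so their symbols are already visible when libtest1.so is opened (a sketch, with the function signatures assumed):

```python
from cffi import FFI

ffi = FFI()
ffi.cdef("int test1(void);")  # assumed prototype of the glue entry point

def open_glue():
    # RTLD_GLOBAL publishes the dependencies' symbols into the global
    # namespace, so the later dlopen of libtest1.so can resolve its
    # undefined references (e.g. test2) instead of failing with
    # "undefined symbol: test2".
    ffi.dlopen("./libtest2.so", ffi.RTLD_GLOBAL)
    ffi.dlopen("./libtest3.so", ffi.RTLD_GLOBAL)
    return ffi.dlopen("./libtest1.so")

# lib = open_glue(); lib.test1()
```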



_k...@yahoo.com

unread,
May 1, 2013, 11:45:16 AM5/1/13
to pytho...@googlegroups.com


On Saturday, April 27, 2013 3:06:13 PM UTC+2, Ondrej Platek wrote:
> Hello,
>
> I am trying to figure out the best way how to access C code composed of several shared libraries.
> I would like to minimize the code compiled through cffi, because the shared libraries would be compiled from source through Makefile.
>
> 1) One obvious way is to write C program prog.c which uses all the shared libraries e.g library libtest2.so, and libtest3.so.
> and use ffi.verify(source_prog).

If you're going down that route, a neat method is not to actually build libtest1.so, but to keep the glue code in a Python string, cdef() the prototypes, and feed the code itself into verify(). Anyway, judging from my (limited) experience, the expensive step in the whole process is definitely not the call to verify() but the cdefs. This surprised me when I looked into it, but AFAICT it is because cdef processing is done with the pycparser module, which is pure Python. Compared to that, a call to the C compiler is very fast.
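Kay's suggestion might look like this (glue_run is a hypothetical name for the inline glue function):

```python
from cffi import FFI

ffi = FFI()
ffi.cdef("int glue_run(void);")  # hypothetical glue prototype

def build_glue():
    # The glue code lives in a Python string; verify() compiles it and
    # links it against the prebuilt libraries, so libtest1.so never
    # needs to exist as a separate artifact.
    return ffi.verify(
        """
        int test2(void);
        int test3(void);
        int glue_run(void) { return test2() + test3(); }
        """,
        libraries=["test2", "test3"],
        library_dirs=["."],
    )

# lib = build_glue(); lib.glue_run()
```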

Kay