"libpaths.so: undefined symbol: lua_gettop)"


Hugh Perkins

Aug 29, 2015, 9:23:48 PM8/29/15
to torch7
I'm trying to wrap torch in python, and I get this really odd error "libpaths.so: undefined symbol: lua_gettop" when I try to "require 'nn'" or "require 'paths'".  I've been fighting with this for hours, so I'm wondering if anyone has seen something similar, or found a solution?  To simplify the problem as much as possible, I created a C (or C++; I tried both :-P) shared object that does:

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

void go(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* equivalent of: nn = require('nn') */
    lua_getglobal(L, "require");
    lua_pushstring(L, "nn");
    lua_call(L, 1, 1);
    lua_setglobal(L, "nn");

    lua_close(L);
}

... nothing complex really.

If I build this as a .so and call it from a C (or C++) executable, it works ok.  If I build it as a .so and call it from python, it gives the undefined symbol error.  I'm linking the .so directly against liblua5.1.so.0, so lua_gettop should be present.  I also tried including the entire Lua source code, i.e. lapi.c and so on, in the shared library, and I still have the same issue.  I'm wondering if it's something to do with the linker options passed to the python wrapper, or maybe something in the python environment?  The python wrapper that calls the shared object is linked as:

c++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/pytest.o -Lbuild -L/usr/lib/x86_64-linux-gnu -Wl,-R. -Wl,-Rbuild -Wl,-R/usr/lib/x86_64-linux-gnu -ltestlib -llua5.1 -o build/lib.linux-x86_64-2.7/pytest.so

If anyone has any ideas what is going on, it would be much appreciated :-)  I don't need a full solution; even the smallest hint as to what might be breaking could be enough to somehow 'break through' this issue.
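One data point I collected while debugging (a diagnostic sketch; the os.RTLD_* constants are the python 3 names, on 2.7 they live in the DLFCN module):

```python
import os
import sys

# CPython loads extension modules via dlopen(); these are the flags it uses.
flags = sys.getdlopenflags()
print("dlopen flags:", flags)
print("RTLD_GLOBAL set:", bool(flags & os.RTLD_GLOBAL))
# On typical Linux builds this is RTLD_NOW only, without RTLD_GLOBAL,
# i.e. symbols of already-loaded libraries stay local to them.
```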

Hugh

Hugh Perkins

Aug 29, 2015, 10:39:38 PM8/29/15
to torch7
Finally fixed it, I think, by preloading liblua5.1.so with the `RTLD_GLOBAL` option:

#include <dlfcn.h>
#include <iostream>

#include <lua.hpp>

using namespace std;

void go(void) {
    // preload the Lua runtime with RTLD_GLOBAL, so its symbols become
    // visible to everything dlopen()ed afterwards (libpaths.so etc.)
    void *hdl = dlopen("liblua5.1.so", RTLD_NOW | RTLD_GLOBAL);
    if(hdl == NULL) {
        cout << dlerror() << endl;
        return;
    }
    cout << "loaded lua library" << endl;

    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    lua_getglobal(L, "require");
    lua_pushstring(L, "nn");
    lua_call(L, 1, 1);
    lua_setglobal(L, "nn");

    lua_close(L);
    cout << "done" << endl;
}

Hugh Perkins

Aug 29, 2015, 11:37:43 PM8/29/15
to torch7
Note to anyone who encounters a similar problem in the future: there is awesome information on this at http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/d/dlopen.html  In a very short page, it covers dlopen, RTLD_GLOBAL, -Wl,-E, -Wl,-Bsymbolic, and a few more things.  Excellent :-)

alban desmaison

Aug 30, 2015, 7:57:46 AM8/30/15
to torch7
Hi, if you are trying to make such a wrapper, you may want to take a look at this project: https://github.com/albanD/lunatic-python/tree/tensor_ndarray
It is a work in progress, with a lot of dependencies on fblualib still (these should be removed) and hard-coded paths.
There is an example of what can be done with it in this test file: https://github.com/albanD/lunatic-python/blob/3ca5b3f9fcfa493cb043a863418959fdebdaa100/test_net.py

I had the same problem with the libraries; it can be solved on the python side directly, as done in https://github.com/albanD/lunatic-python/blob/3ca5b3f9fcfa493cb043a863418959fdebdaa100/test_net.py#L3-L7 (I have not found a proper fix on the C side yet).
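Concretely, the python-side work-around is just to force RTLD_GLOBAL onto the interpreter's dlopen flags before importing the extension, then restore them (a sketch; the commented-out `lua` import stands in for lunatic-python's compiled module):

```python
import os
import sys

# Make every subsequently imported extension module export its symbols
# globally, so liblua's lua_gettop etc. become visible to libpaths.so.
old_flags = sys.getdlopenflags()
sys.setdlopenflags(old_flags | os.RTLD_NOW | os.RTLD_GLOBAL)
# import lua  # lunatic-python's compiled module would be imported here
sys.setdlopenflags(old_flags)  # restore the default behaviour
```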

If this kind of binding is interesting to you, let me know and I can spend more time cleaning it up so it can be used.

Hugh Perkins

Sep 5, 2015, 3:48:39 AM9/5/15
to torch7
Hi Alban.

I missed your post earlier and have only just seen it.  It looks like you have taken ownership of Lunatic Python?  That's cool.  We have so many languages around, each with its own set of libraries, and it would be good to be able to bridge between them easily, so we can leverage libraries from multiple languages.

By the way, I have started to write dedicated torch wrappers for python at https://github.com/hughperkins/pytorch   It's fairly early days yet.  There are two parts:
- Cython wrappers for the various Tensor and Storage classes.  These directly wrap the torch C functions that define the Tensor and Storage behavior, e.g. per-element add, matrix multiplication, and so on.
- Python wrappers for the lua classes, e.g. for 'nn', such as Linear and so on.  These directly wrap the lua classes, and probably work in a similar way to Lunatic Python.  Maybe I could use Lunatic Python instead, not sure.  Right now my lua wrappers seem to work fairly well, but obviously if Lunatic Python made it even easier, and even more automatic, then that would rock.

Hugh

Hugh Perkins

Sep 5, 2015, 7:26:06 AM9/5/15
to torch7
Hi Alban,

By the way, just wondering, can Lunatic Python be used to port char-rnn to Python? http://github.com/karpathy/char-rnn

Hugh

alban desmaison

Sep 7, 2015, 4:15:40 AM9/7/15
to torch7
Hi Hugh,

I did not take ownership; I was just looking for a way to easily call torch code from python.
The idea was to be able to use https://github.com/albanD/deep-visualization-toolbox/tree/lua_binding with a torch backend. In its current state, the code works for forward/backward on CPU. The changes I made to Lunatic Python were really just a hack to be able to use this toolbox.

I have seen your pytorch binding, it looks very interesting.

The Lunatic-python approach is really different:
  - Run a Lua VM at the same time as the python interpreter.
  - All calls to torch (and other lua functions) are done through this Lua VM.
  - In the python space, arrays are numpy ndarrays. They are transformed automatically into torch Tensors when going into the lua space (they share the memory storage, so there is no big memory overhead): you do not have Tensors in the python space.
  - There is a generic python object that can wrap any lua element: https://github.com/albanD/lunatic-python/blob/3ca5b3f9fcfa493cb043a863418959fdebdaa100/src/luainpython.cpp#L399 . It can wrap any torch object, for example a network as here https://github.com/albanD/lunatic-python/blob/3ca5b3f9fcfa493cb043a863418959fdebdaa100/test_net.py#L18, or a library like nn.
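The zero-copy sharing works the same way as a numpy view: two objects over a single buffer, so a write through one is visible through the other. A tiny plain-numpy illustration of the principle (no torch involved):

```python
import numpy as np

a = np.arange(6, dtype=np.float32)
view = a.reshape(2, 3)   # a view: shares a's buffer, no copy is made
view[0, 0] = 42.0        # a write through the view...
print(a[0])              # ...is visible through the original: 42.0
print(view.base is a)    # True: the view borrows a's memory
```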

I really like the idea of sharing objects between the python and lua spaces, though it needs some care around the validity of the objects. I have not run into problems with the GC yet, but I am sure some work needs to be done here.
I am also still wondering how to handle CudaTensor, since there is no such element in the standard numpy library (a workaround for now is to keep them in the lua space and call :float() before sending them back to the python space).

I have not looked into the char-rnn project yet; I will take a look while cleaning up my code. I would say there is no fundamental reason for it not to work, but we may run into some problems anyway.
I will update this thread when I have more information about this.

Hugh Perkins

Sep 26, 2015, 9:29:54 PM9/26/15
to torch7
Hi Alban,

Did you know the issue tracker is not activated on your lunatic-python fork?  Please activate it :-)

Hugh

Hugh Perkins

Sep 26, 2015, 9:35:15 PM9/26/15
to torch7
Also, there are a zillion forks at the moment.  Can you start merging them please :-)   You don't need to ask before merging: just merge everything that doesn't look like it will break something too horribly.  (Remember: our instinct is to think our own changes are super awesome, and everyone else's are bizarre and not needed, so push back against that instinct: unless something is clearly going to break, even if it makes you go 'ewww' a bit, I recommend just merging it anyway :-)   Obviously, it's possible to ask for something to be cleaned up slightly, but I don't think there's any need to ask for justification for new features and such in general: if someone put the time into writing it, it's at least useful to that person.)

alban desmaison

Sep 27, 2015, 8:41:51 AM9/27/15
to torch7
Hi Hugh,

I activated the issue tracker. I may add some stuff there too (better than the readme).
I looked very quickly at a few of the other forks. Some of them seem interesting, and I will take some time to merge them.
I am also finishing the tests on char-rnn, but I will be really busy until this Wednesday; I should be able to finish next weekend.

Hugh Perkins

Sep 27, 2015, 8:49:02 AM9/27/15
to torch7
> I activated the issue tracker. I may add some stuff there too (better than the readme).
> I looked very quickly at a few of the other forks. Some of them seem interesting, and I will take some time to merge them.
> I am also finishing the tests on char-rnn, but I will be really busy until this Wednesday; I should be able to finish next weekend.

Awesome!  Sounds excellent :-)

alban desmaison

Oct 11, 2015, 12:38:41 PM10/11/15
to torch7
Hi Hugh,

Sorry for the delay, but here is a functional version of char-rnn with lunatic-python:
Using this version https://github.com/albanD/lunatic-python (still a bit complex to install if you do not already have fblualib installed), you can run the python version of the char-rnn training script that is in this branch: https://github.com/albanD/char-rnn/tree/python
The python version is not very pythonic, but I kept it very close to the original lua script.

Hugh Perkins

Oct 11, 2015, 4:13:16 PM10/11/15
to torch7
Nice!  Will take a look :-)

Jin Ma

Jul 5, 2016, 2:24:12 AM7/5/16
to torch7

Nice answer. I solved my problem with it. Thanks.