Re: This platform lacks a functioning sem_open implementation


Dima Pasechnik

unread,
Apr 17, 2013, 7:59:16 AM
to sage-...@googlegroups.com
On 2013-04-17, pang <pablo....@uam.es> wrote:
> Hello!
>
> I have downloaded a Sage 5.8 binary for a 64-bit Atom server, and it
> worked. Then I tried to build a secure server. It takes more work
> than it did previously. I could install the openssl package, but then
> I had to rebuild Sage and it failed. It was the same error I get when
> I try to compile from source:

could be some binary incompatibility (binary Sage releases are known to
be deficient in this way).
It seems that the binary was built on a multiprocessor machine using
many threads, and this somehow "poisoned" it.

Can you build Sage completely from source instead?

>
> Building modified file sage/ext/interpreters/wrapper_el.pyx.
> Executing 340 commands (using 1 thread)
> Traceback (most recent call last):
> File "setup.py", line 835, in <module>
> execute_list_of_commands(queue)
> File "setup.py", line 278, in execute_list_of_commands
> execute_list_of_commands_in_parallel(command_list, nthreads)
> File "setup.py", line 228, in execute_list_of_commands_in_parallel
> p = Pool(nthreads)
> File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/__init__.py", line 232, in Pool
> return Pool(processes, initializer, initargs, maxtasksperchild)
> File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/pool.py", line 115, in __init__
> self._setup_queues()
> File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/pool.py", line 209, in _setup_queues
> from .queues import SimpleQueue
> File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/queues.py", line 48, in <module>
> from multiprocessing.synchronize import Lock, BoundedSemaphore, Semaphore, Condition
> File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/synchronize.py", line 59, in <module>
> " function, see issue 3770.")
> ImportError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.
> Error installing modified sage library code.
> ERROR installing Sage
>
> real 0m25.170s
> user 0m20.649s
> sys 0m3.808s
> ************************************************************************
> Error installing package sage-5.8
> ************************************************************************
> Please email sage-devel (http://groups.google.com/group/sage-devel)
> explaining the problem and including the relevant part of the log file
> /home/sageadmin/sage-5.8/spkg/logs/sage-5.8.log
> Describe your computer, operating system, etc.
> If you want to try to fix the problem yourself, *don't* just cd to
> /home/sageadmin/sage-5.8/spkg/build/sage-5.8 and type 'make' or whatever is appropriate.
> Instead, the following commands setup all environment variables
> correctly and load a subshell for you to debug the error:
> (cd '/home/sageadmin/sage-5.8/spkg/build/sage-5.8' && '/home/sageadmin/sage-5.8/sage' -sh)
> When you are done debugging, you can type "exit" to leave the subshell.
> ************************************************************************
> make[2]: *** [installed/sage-5.8] Error 1
> make[2]: Leaving directory '/home/sageadmin/sage-5.8/spkg'
> real 704m17.485s
> user 650m49.564s
> sys 27m49.360s
> Error building Sage.
> make: *** [build] Error 1
>

mabshoff

unread,
Apr 17, 2013, 8:06:02 AM
to sage-...@googlegroups.com


On Wednesday, April 17, 2013 11:45:13 AM UTC+2, pang wrote:
Hello!

I have downloaded a Sage 5.8 binary for a 64-bit Atom server, and it worked.

Until you invoke something that uses multiprocessing :)
 
Then I tried to build a secure server. It takes more work than it did previously. I could install the openssl package, but then I had to rebuild Sage and it failed. It was the same error I get when I try to compile from source:

Building modified file sage/ext/interpreters/wrapper_el.pyx.
Executing 340 commands (using 1 thread)
Traceback (most recent call last):
  File "setup.py", line 835, in <module>
    execute_list_of_commands(queue)
  File "setup.py", line 278, in execute_list_of_commands
    execute_list_of_commands_in_parallel(command_list, nthreads)
  File "setup.py", line 228, in execute_list_of_commands_in_parallel
    p = Pool(nthreads)
  File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/pool.py", line 115, in __init__
    self._setup_queues()
  File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/pool.py", line 209, in _setup_queues
    from .queues import SimpleQueue
  File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/queues.py", line 48, in <module>
    from multiprocessing.synchronize import Lock, BoundedSemaphore, Semaphore, Condition
  File "/home/sageadmin/sage-5.8/local/lib/python/multiprocessing/synchronize.py", line 59, in <module>
    " function, see issue 3770.")
ImportError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.
Error installing modified sage library code.
ERROR installing Sage

You need shared memory for multiprocessing to work. Check whether your /dev/shm is mounted. If you compile your own kernel you might not have shared memory enabled at all; I have seen that on some MIPS systems. There is also the possibility that, for some strange configuration reason, you do not have access to shared memory, or that you have exhausted your quota. So some more information on your system setup would help.
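The failing check from the traceback can be reproduced on its own, outside of the Sage build. A minimal sketch, assuming you run it with the same Python that Sage uses:

```python
# Importing multiprocessing.synchronize is exactly what raises the
# "lacks a functioning sem_open implementation" ImportError when POSIX
# semaphores are unusable (e.g. no tmpfs mounted at /dev/shm on Linux).
try:
    import multiprocessing.synchronize
    sem_open_ok = True
except ImportError:
    sem_open_ok = False

print("sem_open works" if sem_open_ok else "sem_open broken")
```

If this prints "sem_open broken" for the system Python too, the problem is in the OS setup rather than in Sage's build.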

Cheers,

Michael
 

pang

unread,
Apr 17, 2013, 5:32:52 PM
to sage-...@googlegroups.com


On Wednesday, April 17, 2013 1:59:16 PM UTC+2, Dima Pasechnik wrote:
Can you build Sage completely from source instead?

The above error was when trying to compile from source, starting from scratch. I removed the binaries before starting!

pang

unread,
Apr 17, 2013, 5:59:29 PM
to sage-...@googlegroups.com


On Wednesday, April 17, 2013 2:06:02 PM UTC+2, mabshoff wrote:


On Wednesday, April 17, 2013 11:45:13 AM UTC+2, pang wrote:
You need shared memory for multiprocessing to work. Check if your /dev/shm is mounted.

There is a strange line in the output of "mount"

none on /run/shm type tmpfs (rw,nosuid,nodev,relatime)
 
If you compile your own kernel you might not have shared memory enabled at all, I have seen that on some MIPS systems. There is also the possibility that you for some strange config reason do not have access to shared memory or exhausted your quota. So giving some more info on your system setup would help.

It's a server I rented at ovh.net. The kernel seems patched. What do you need?
This is the output of `cat /proc/cpuinfo`
 processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 54   
model name      : Intel(R) Atom(TM) CPU N2800   @ 1.86GHz
stepping        : 1
microcode       : 0x10d
cpu MHz         : 798.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes  
fpu_exception   : yes  
cpuid level     : 10   
wp              : yes  
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm arat dts
bogomips        : 3734.07
clflush size    : 64   
cache_alignment : 64   
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 54   
model name      : Intel(R) Atom(TM) CPU N2800   @ 1.86GHz
stepping        : 1
microcode       : 0x10d
cpu MHz         : 798.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes  
fpu_exception   : yes  
cpuid level     : 10   
wp              : yes  
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm arat dts
bogomips        : 3733.21
clflush size    : 64   
cache_alignment : 64   
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 54   
model name      : Intel(R) Atom(TM) CPU N2800   @ 1.86GHz
stepping        : 1
microcode       : 0x10d
cpu MHz         : 798.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
apicid          : 2
initial apicid  : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm arat dts
bogomips        : 3733.24
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 54  
model name      : Intel(R) Atom(TM) CPU N2800   @ 1.86GHz
stepping        : 1
microcode       : 0x10d
cpu MHz         : 798.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
apicid          : 3
initial apicid  : 3
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm arat dts
bogomips        : 3733.27
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:


and `cat /proc/meminfo`
 
MemTotal:        4026732 kB
MemFree:         3206676 kB
Buffers:           48184 kB
Cached:           245288 kB
SwapCached:            0 kB
Active:           500488 kB
Inactive:         198020 kB
Active(anon):     405124 kB
Inactive(anon):      400 kB
Active(file):      95364 kB
Inactive(file):   197620 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      12285944 kB
SwapFree:       12285944 kB
Dirty:                 4 kB
Writeback:             0 kB
AnonPages:        405036 kB
Mapped:            23888 kB
Shmem:               484 kB
Slab:              39372 kB
SReclaimable:      21772 kB
SUnreclaim:        17600 kB
KernelStack:        1304 kB
PageTables:         8844 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    14299308 kB
Committed_AS:     933084 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       93448 kB
VmallocChunk:   34359642556 kB
DirectMap4k:        2048 kB
DirectMap2M:     4175872 kB


Thanks for the answer, anyway!

Andrey Novoseltsev

unread,
Apr 18, 2013, 1:55:17 AM
to sage-...@googlegroups.com
(hate new google groups...)

I think you may be able to make a link from /dev/shm to /run/shm - I had the same error playing with LXC containers and it was resolved by linking/mounting tmpfs in the appropriate place. Unfortunately I don't remember precise steps, but it was possible to find them back then ;-)

Best regards,
Andrey

Michael Abshoff

unread,
Apr 18, 2013, 3:29:48 AM
to sage-devel
On Thu, Apr 18, 2013 at 7:55 AM, Andrey Novoseltsev <novo...@gmail.com> wrote:
(hate new google groups...)

I think you may be able to make a link from /dev/shm to /run/shm


That seems to be mostly an issue if you use Debian or a Debian-based distro, since they follow the FHS (Filesystem Hierarchy Standard) /run migration - see

   http://wiki.debian.org/ReleaseGoals/RunDirectory

 
- I had the same error playing with LXC containers and it was resolved by linking/mounting tmpfs in the appropriate place. Unfortunately I don't remember precise steps, but it was possible to find them back then ;-)


Yeah, a LXC based system will potentially give you some trouble, so either linking /dev/shm to /run/shm or changing the mount point might fix it, i.e.

   sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm

You might want to be careful with that since it mucks around deeply in your system. Way back in 2004 or so I accidentally deleted /dev/null, and it took a while to figure out what was wrong on the next reboot, since until then it did not really cause too much trouble.

I tried a little Google magic, but I did not see anything that pointed to a bug in the Python multiprocessing module hardcoding /dev/shm, and I am too lazy atm to take a look myself. Either way, glibc expects /dev/shm to be there for POSIX shared memory, so you might want to talk to the provider of your build machine to get that issue fixed ;).
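The checks discussed in this thread can be run in one go. A sketch; mount points vary by distro, so adjust the paths as needed:

```shell
# Diagnose POSIX shared memory availability on Linux.
ls -ld /dev/shm /run/shm 2>/dev/null  # one of these should be a world-writable, sticky tmpfs
mount | grep shm                      # confirm a tmpfs is actually mounted there
df -h /dev/shm 2>/dev/null            # and that it has free space left
```

A healthy setup shows something like `drwxrwxrwt` on the directory and a `tmpfs` entry in the mount output.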




Cheers,

Michael
 


mabshoff

unread,
Apr 18, 2013, 3:44:19 AM
to sage-...@googlegroups.com

<SNIP>


 
- I had the same error playing with LXC containers and it was resolved by linking/mounting tmpfs in the appropriate place. Unfortunately I don't remember precise steps, but it was possible to find them back then ;-)


Yeah, a LXC based system will potentially give you some trouble, so either linking /dev/shm to /run/shm or changing the mount point might fix it, i.e.

   sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm


<SNIP>

I just checked on my Ubuntu 12.10 based box which I booted by accident and /run/shm is a link to /dev/shm:

mabshoff@buildbox:~$ mount | grep shm
none on /run/shm type tmpfs (rw,nosuid,nodev)
mabshoff@buildbox:~$ ls -ald /dev/shm
lrwxrwxrwx 1 root root 8 Apr 18 09:34 /dev/shm -> /run/shm

So chances are that setting that link will fix the problem for you.

Cheers,

Michael

pang

unread,
Apr 18, 2013, 4:35:21 AM
to sage-...@googlegroups.com
sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm
 
It really looked promising, but unfortunately, it didn't work: is a reboot necessary?

Michael Abshoff

unread,
Apr 18, 2013, 4:36:26 AM
to sage-devel
On Thu, Apr 18, 2013 at 10:35 AM, pang <pablo....@uam.es> wrote:
sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm
 
It really looked promising, but unfortunately, it didn't work: is a reboot necessary?

Eh, it depends: What went wrong?

leif

unread,
Apr 18, 2013, 9:09:43 AM
to sage-...@googlegroups.com
I don't think so.

What does

$ ls -ld /run/shm

give? (I.e., are the permissions correct?)


-leif

--
() The ASCII Ribbon Campaign
/\ Help Cure HTML E-Mail

pang

unread,
Apr 18, 2013, 10:04:23 AM
to sage-...@googlegroups.com
On Thursday, April 18, 2013 3:09:43 PM UTC+2, leif wrote:
pang wrote:
>     sudo rm -rf /dev/shm && sudo ln -s /run/shm /dev/shm
>
>
> It really looked promising, but unfortunately, it didn't work: is a
> reboot necessary?

I don't think so.

What does

$ ls -ld /run/shm

give?  (I.e., are the permissions correct?)
 
root@ks3316508 ~# ls -ald /run/shm
drwxrwxrwt 2 root root 60 abr 18 10:32 /run/shm/
root@ks3316508 ~# ls -ald /dev/shm
lrwxrwxrwx 1 root root 9 abr 18 08:40 /dev/shm -> /run/shm//

I don't know what that double slash (/run/shm//) was doing there, but I repeated the process, got the following:

root@ks3316508 ~# ls -ald /run/shm
drwxrwxrwt 2 root root 60 abr 18 10:32 /run/shm/
root@ks3316508 ~# ls -ald /dev/shm
lrwxrwxrwx 1 root root 9 abr 18 08:40 /dev/shm -> /run/shm/

then "make clean", then "make", then I got the same error as always:


ImportError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.

etcetera.

Any more ideas?

Andrey Novoseltsev

unread,
Apr 18, 2013, 10:05:46 AM
to sage-devel
I also used the following little program to check whether semaphores
work:

#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <semaphore.h>
#include <sys/stat.h>

int main(void) {
    sem_t *a = sem_open("/autoconf", O_CREAT, S_IRUSR|S_IWUSR, 0);
    if (a == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }
    printf("All OK!");
    sem_close(a);
    sem_unlink("/autoconf");
    return 0;
}


In my lxc case it certainly was not a Python issue.

pang

unread,
Apr 18, 2013, 10:23:08 AM
to sage-...@googlegroups.com
Not a C expert, I get a:

sageadmin@ks3316508 ~> g++ semaphores.c
/tmp/cc4ALTKd.o: In function `main':
semaphores.c:(.text+0x22): undefined reference to `sem_open'
semaphores.c:(.text+0x59): undefined reference to `sem_close'
semaphores.c:(.text+0x63): undefined reference to `sem_unlink'

does this say something to you?

pang

unread,
Apr 18, 2013, 10:27:30 AM
to sage-...@googlegroups.com

Googled the thing, found I had to "link with pthread lib, using -lpthread option", compiled it, ran it: "All ok!"
 

leif

unread,
Apr 18, 2013, 11:05:00 AM
to sage-...@googlegroups.com
pang wrote:
> On Thursday, April 18, 2013 4:23:08 PM UTC+2, pang wrote:
>
> Not a c expert, I get a:
>
> sageadmin@ks3316508 ~> g++ semaphores.c
> /tmp/cc4ALTKd.o: In function `main':
> semaphores.c:(.text+0x22): undefined reference to `sem_open'
> semaphores.c:(.text+0x59): undefined reference to `sem_close'
> semaphores.c:(.text+0x63): undefined reference to `sem_unlink'
>
>
> does this say something to you?
>
>
> Googled the thing, found I had to "link with pthread lib, using
> -lpthread option", compiled it, ran it: "All ok!"

'man sem_open' would probably have been quicker... ;-)


Did you run that as root, or as the user you're building Sage with?


(And did you rebuild the Python spkg after creating the symlink?)

pang

unread,
Apr 22, 2013, 7:06:21 AM
to sage-...@googlegroups.com
On Thursday, April 18, 2013 5:05:00 PM UTC+2, leif wrote:
'man sem_open' would probably have been quicker... ;-)

There are more man pages than are dreamt of in my philosophy.
 
Did you run that as root, or as the user you're building Sage with?

"All ok!" for both.
 
(And did you rebuild the Python spkg after creating the symlink?)

I'm trying to compile the source code for the first time, so I guess there is no spkg to rebuild.

But...

I removed the folder, extracted again, tried 'make' again and it's now compiling past the sem_open point. So thanks a ton for your awesome help.