Severe slowness of the Singular interface on some machines partially caused by _synchronize


Simon King

Nov 18, 2010, 3:09:21 AM
to sage-devel
Hi!

In a different post, I asked about hardware tweaks that might help me
to overcome a massive performance loss using the pexpect interfaces.
Here I am opening a new thread, because it is about code.

Meanwhile I can narrow the problem down, at least one third of it.

When I define
{{{
def test(n):
    st = singular.cputime()
    ct = cputime()
    wt = walltime()
    for i in range(n):
        a = singular.eval('def a%d=%d'%(i,i))
    print "Wall time:", walltime(wt)
    print "Total CPU:", cputime(ct)+singular.cputime(st)
}}}
then I get
{{{
sage: test(1000)
Wall time: 40.0399508476
Total CPU: 0.0
}}}

Very bad. This is 2/3 of the overhead that I get from "a=singular(i)".

"eval" does at least two things: It synchronises the interface, and
then sends a command that is evaluated by singular.

When I replace "singular.eval..." by "singular._sendstr('def a%d=%d;
\n'%(i,i))" in the above function, I get
{{{
sage: test(1000)
Wall time: 0.015585899353
Total CPU: 0.04
sage: singular('a100')
100
}}}

Hence, very quick!

But when I replace "singular.eval..." by "singular._synchronize()", I
get
{{{
sage: test(1000)
Wall time: 19.9999539852
Total CPU: 0.0
}}}

Hence, _synchronize() is to blame for 1/3 of the abysmal overhead
reported in the other thread!

Do you have any idea how it can be explained that this overhead occurs
on the machines at my university, but apparently not (to such extent)
on sage.math or sage.bsd?

Could you please test if you can replicate the overhead? Because, if I
am the only person experiencing it, it would hardly be worth a ticket.

Cheers,
Simon

Simon King

Nov 18, 2010, 3:35:22 AM
to sage-devel
Running my test function with %prun revealed that the overhead is
caused in "select.select", which is called 1025 times with
tottime=20.306 seconds.

Isn't select.select related to regular expressions? Hence, could
_synchronize be optimized by pre-compiling some regular expressions
that are used in communication with the interface?

Best regards,
Simon

Alex Leone

Nov 18, 2010, 3:56:09 AM
to sage-...@googlegroups.com
select.select is a networking call - in this case (probably) to see if
there is any output from the singular process.  I would try running
singular directly to see if it takes ~20 seconds for the prompt to
appear.

 - Alex

http://docs.python.org/library/select.html
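
To illustrate Alex's point, here is a minimal sketch (my own illustration, not pexpect's actual code) of the polling pattern that select.select provides: ask whether a file descriptor has output ready, without blocking longer than a timeout. A plain pipe stands in for the Singular child process.

```python
import os
import select

def output_ready(fd, timeout=0.5):
    """Return True if fd has data to read within `timeout` seconds."""
    # select() blocks until one of the watched descriptors is ready,
    # or until the timeout expires with nothing to read.
    readable, _, _ = select.select([fd], [], [], timeout)
    return bool(readable)

# Demonstrate on a plain pipe: nothing to read at first, then data arrives.
r, w = os.pipe()
before = output_ready(r, timeout=0.0)   # empty pipe: not ready
os.write(w, b"output from the child process")
after = output_ready(r, timeout=0.0)    # data is now waiting: ready
```

In the interface code, the watched descriptor would be the pseudo-terminal connected to the Singular process rather than a pipe.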

Simon King

Nov 18, 2010, 4:14:34 AM
to sage-devel
Hi Alex,

On 18 Nov., 09:56, Alex Leone <acle...@gmail.com> wrote:
> select.select is a networking call

Yes, meanwhile I found the man pages.

> there is any output from the singular process.  I would try running
> singular directly to see if it takes ~20 seconds for the prompt to
> appear.

I think this can be excluded, since I did measure the CPU time spent
by Singular (which was a small fraction of a second).

Anyway. Is there a known reason (e.g., kernel configuration) that
might cause select() to be slow?

Cheers,
Simon

David Kirkby

Nov 18, 2010, 5:44:40 AM
to sage-...@googlegroups.com
On 18 November 2010 08:09, Simon King <simon...@uni-jena.de> wrote:
> Hi!
>
> In a different post, I asked about hardware tweaks that might help me
> to overcome a massive performance loss using the pexpect interfaces.
> Here I am opening a new thread, because it is about code.
>
> Meanwhile I can narrow the problem down, at least one third of it.
>
> When I define
> {{{
> def test(n):
>    st = singular.cputime()
>    ct = cputime()
>    wt = walltime()
>    for i in range(n):
>        a = singular.eval('def a%d=%d'%(i,i))
>    print "Wall time:", walltime(wt)
>    print "Total CPU:", cputime(ct)+singular.cputime(st)
> }}}
> then I get
> {{{
> sage: test(1000)
> Wall time: 40.0399508476
> Total CPU: 0.0
> }}}
>
> Very bad. This is 2/3 of the overhead that I get from "a=singular(i)".

<snip>

> Could you please test if you can replicate the overhead? Because, if I
> am the only person experiencing it, it would hardly be worth a ticket.
>
> Cheers,
> Simon

I don't see this issue on my OpenSolaris machine

sage: def test(n):
....:     st = singular.cputime()
....:     ct = cputime()
....:     wt = walltime()
....:     for i in range(n):
....:         a = singular.eval('def a%d=%d'%(i,i))
....:     print "Wall time:", walltime(wt)
....:     print "Total CPU:", cputime(ct)+singular.cputime(st)
....:
sage: sage: test(1000)
Wall time: 0.151273012161
Total CPU: 0.170581

I note we are not running the latest version of pexpect. We are
running 2.0, but the latest is 2.3

http://www.noah.org/wiki/pexpect#Download_and_Installation

I would be tempted to try updating the .spkg to the latest pexpect and
see if that fixes it.

Even if you are currently the only one experiencing this, it is worth
creating a ticket for it. But put as much information as possible
about your system. Are the file systems local or on an NFS system? If
on NFS, can you find out what the server is?

It could be that others get this too, but just don't notice it because
they don't use Singular, or have just accepted the slowness rather
than questioning it, as you are doing.

Do you get any warnings when the pexpect module is built?

If you run SAGE_CHECK=yes, what failures do you get? (Everyone seems
to get some failures.) I would add that to a trac ticket, as it might
be useful.

I wonder if there's any chance of getting the Singular developers to
provide a C API, which would mean we could dispense with using pexpect
for this.

Dave

David Kirkby

Nov 18, 2010, 6:04:56 AM
to sage-...@googlegroups.com
On 18 November 2010 08:09, Simon King <simon...@uni-jena.de> wrote:
> Hi!
>
> In a different post, I asked about hardware tweaks that might help me
> to overcome a massive performance loss using the pexpect interfaces.
> Here I am opening a new thread, because it is about code.
>
> Meanwhile I can narrow the problem down, at least one third of it.

<snip>

> Could you please test if you can replicate the overhead? Because, if I
> am the only person experiencing it, it would hardly be worth a ticket.
>
> Cheers,
> Simon

I don't know an awful lot about the doctests, but I wonder if it's
possible to create one that would detect the problem you see? Your run
takes 40 seconds of wall time, mine 0.15. If the number 'n' could be
adjusted so that the run would not take more than 1 second on even the
slowest hardware, then we could check that the total wall time used is
less than a second.
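
The regression check David suggests could be sketched roughly like this (all names here are hypothetical, not an actual Sage doctest):

```python
import time

def eval_overhead_ok(eval_fn, n=100, limit=1.0):
    """Return True if n cheap interface calls finish within `limit` seconds.

    eval_fn stands in for something like singular.eval; on an affected
    machine each call costs tens of milliseconds, so the total would
    blow well past the limit.
    """
    start = time.time()
    for i in range(n):
        eval_fn('def a%d=%d' % (i, i))
    return time.time() - start < limit

# With a no-op stand-in for the interface, the check passes trivially.
ok = eval_overhead_ok(lambda cmd: None)
```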

I somewhat doubt you are the only one seeing this to be honest, but
without some sort of test, we will never know.

Dave

Simon King

Nov 18, 2010, 6:05:49 AM
to sage-devel
Hi Dave!

On 18 Nov., 11:44, David Kirkby <david.kir...@onetel.net> wrote:
> ...
> I don't see this issue on my OpenSolaris machine

... and not on bsd.math and sage.math either.

> I note we are not running the latest version of pexpect. We are
> running 2.0, but the latest is 2.3

The overhead seems to be completely due to slowness of select.select
(according to prun). Would updating pexpect help to use select() more
efficiently?

> Even if you are currently the only one experiencing this, it is worth
> creating a ticket for it. But put as much information as possible
> about your system. Are the file systems local or on an NFS system? If
> on NFS, can you find out what the server is?

The commands that are sent through the Sage-Singular interface are
short; hence, files are not used in this case. But I experience it
both on an NFS system and when I define DOT_SAGE to be on the local
disk.

> Do you get any warnings when the pexpect module is built?

No. But since pexpect seems to be pure Python, I don't think compiler
problems can be involved.

> If you run SAGE_CHECK=yes, what failures do you get? (Everyone seems
> to get some failures.)

I did not compile with SAGE_CHECK=yes. Would "make ptestall" suffice?

> I wonder if there's any chance of getting the Singular developers to
> provide a C API, which means we could despense with using pexect for
> this.

I don't think that Singular is to blame. I observe a similar overhead
for the communication with Gap. And if I'm not mistaken, the C API is
used in libsingular.

Cheers,
Simon

David Kirkby

Nov 18, 2010, 7:11:53 AM
to sage-...@googlegroups.com
On 18 November 2010 11:05, Simon King <simon...@uni-jena.de> wrote:
> Hi Dave!
>
> On 18 Nov., 11:44, David Kirkby <david.kir...@onetel.net> wrote:
>> ...
>> I don't see this issue on my OpenSolaris machine
>
> ... and not on bsd.math and sage.math either.
>
>> I note we are not running the latest version of pexpect. We are
>> running 2.0, but the latest is 2.3
>
> The overhead seems to be completely due to slowness of select.select
> (according to prun). Would updating pexpect help to use select() more
> efficiently?

I've no idea, but I'd certainly give it a try myself.

>> Do you get any warnings when the pexpect module is built?
>
> No. But since pexpect seems to be pure Python, I think there can't
> compiler problems be involved.


>> If you run SAGE_CHECK=yes, what failures do you get? (Everyone seems
>> to get some failures.)
>
> I did not compile with SAGE_CHECK=yes. Would "make ptestall" suffice?

No, you would have to run
$ export SAGE_CHECK=yes
$ ./sage -f python-$version

That will list any failures. Conceivably pexpect might use a different
method depending on what bits of python work properly.


>> I wonder if there's any chance of getting the Singular developers to
>> provide a C API, which means we could despense with using pexect for
>> this.
>
> I don't think that Singular is to blame. I observe a similar overhead
> for the communication with Gap. And if I'm not mistaken, the C API is
> used in libsingular.

Just in general, it would be good if pexpect could be avoided whenever
possible. I believe for example someone on the Sage project is working
on one for Maxima.

> Cheers,
> Simon

Simon King

Nov 18, 2010, 10:32:52 AM
to sage-devel
Hi Dave,

On 18 Nov., 13:11, David Kirkby <david.kir...@onetel.net> wrote:
> > I did not compile with SAGE_CHECK=yes. Would "make ptestall" suffice?
>
> No, you would have to run
> $ export SAGE_CHECK=yes
> $ ./sage -f python-$version

Doing it now.

> Just in general, it would be good if pexpect could be avoided whenever
> possible. I believe for example someone on the Sage project is working
> on one for Maxima.

What would be the alternative? The Singular C API is already in use,
but a pseudo-terminal interface is still useful. And I think in the
case of GAP a C API will not be available any time soon. Is there a
better Python module for creating interfaces?

Cheers,
Simon

Simon King

Nov 18, 2010, 11:23:03 AM
to sage-devel
Hi Dave!

On 18 Nov., 16:32, Simon King <simon.k...@uni-jena.de> wrote:
> > No, you would have to run
> > $ export SAGE_CHECK=yes
> > $ ./sage -f python-$version
>
> Doing it now.

Result:

323 tests OK.
3 tests failed:
test_distutils test_urllib test_zlib
39 tests skipped:
test_aepack test_al test_applesingle test_bsddb test_bsddb185
test_bsddb3 test_cd test_cl test_codecmaps_cn test_codecmaps_hk
test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses
test_dbm test_dl test_gdbm test_gl test_imageop test_imgfile
test_kqueue test_linuxaudiodev test_macos test_macostools
test_normalization test_ossaudiodev test_pep277 test_py3kwarn
test_scriptpackages test_smtpnet test_socketserver test_startfile
test_sunaudiodev test_timeout test_urllib2net test_urllibnet
test_winreg test_winsound test_zipfile64
3 skips unexpected on linux2:
test_dbm test_gdbm test_bsddb

Some details:
test test_urllib failed -- Traceback (most recent call last):
  File "/mnt/local/king/SAGE/sage-4.6/spkg/build/python-2.6.4.p9/src/Lib/test/test_urllib.py", line 104, in setUp
    for k, v in os.environ.iteritems():
RuntimeError: dictionary changed size during iteration

test test_zlib failed -- Traceback (most recent call last):
  File "/mnt/local/king/SAGE/sage-4.6/spkg/build/python-2.6.4.p9/src/Lib/test/test_zlib.py", line 84, in test_baddecompressobj
    self.assertRaises(ValueError, zlib.decompressobj, 0)
AssertionError: ValueError not raised

test test_distutils failed -- errors occurred; run in verbose mode for
details


Does that seem related to select.select being slow?

Cheers,
Simon

David Kirkby

Nov 18, 2010, 6:09:16 PM
to sage-...@googlegroups.com
On 18 November 2010 16:23, Simon King <simon...@uni-jena.de> wrote:
> Hi Dave!
>
> On 18 Nov., 16:32, Simon King <simon.k...@uni-jena.de> wrote:
>> > No, you would have to run
>> > $ export SAGE_CHECK=yes
>> > $ ./sage -f python-$version
>>
>> Doing it now.
>
> Result:
>
> 323 tests OK.
> 3 tests failed:
>    test_distutils test_urllib test_zlib

Distutils is for building other python modules

http://docs.python.org/library/distutils.html

so conceivably a failure on that could cause an issue with a module
like pexpect.

But I think others have had that fail too. test_zlib has failed for
many people. I don't think I've ever seen anyone on the sage list
report a failure of test_urllib, but the name does not really suggest
it will be the cause.

3 failures is not an excessive number. Most people seem to get a few.


> 39 tests skipped:
>    test_aepack test_al test_applesingle test_bsddb test_bsddb185
>    test_bsddb3 test_cd test_cl test_codecmaps_cn test_codecmaps_hk
>    test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_curses
>    test_dbm test_dl test_gdbm test_gl test_imageop test_imgfile
>    test_kqueue test_linuxaudiodev test_macos test_macostools
>    test_normalization test_ossaudiodev test_pep277 test_py3kwarn
>    test_scriptpackages test_smtpnet test_socketserver test_startfile
>    test_sunaudiodev test_timeout test_urllib2net test_urllibnet
>    test_winreg test_winsound test_zipfile64

The reasons for skipping some of them are obvious, like test_sunaudiodev.

> 3 skips unexpected on linux2:
>    test_dbm test_gdbm test_bsddb

I don't know if these have any significance.

You might get more joy asking on the python forums, or perhaps a
pexpect mailing list, forum, or similar. The pexpect author should
have more idea than any of us.


> test test_distutils failed -- errors occurred; run in verbose mode for
> details
>
>
> Does that seem related to select.select being slow?

> Cheers,
> Simon

Dave

Dan Drake

Nov 18, 2010, 9:39:14 PM
to sage-...@googlegroups.com
Here's what I get on a 64-bit Ubuntu 10.10 system. I compiled Sage on
this machine.

On Thu, 18 Nov 2010 at 12:09AM -0800, Simon King wrote:
> When I define
> {{{
> def test(n):
>     st = singular.cputime()
>     ct = cputime()
>     wt = walltime()
>     for i in range(n):
>         a = singular.eval('def a%d=%d'%(i,i))
>     print "Wall time:", walltime(wt)
>     print "Total CPU:", cputime(ct)+singular.cputime(st)
> }}}
> then I get
> {{{
> sage: test(1000)
> Wall time: 40.0399508476
> Total CPU: 0.0
> }}}

I get:

sage: test(1000)
Wall time: 40.4516711235
Total CPU: 0.37

> When I replace "singular.eval..." by "singular._sendstr('def a%d=%d;
> \n'%(i,i))" in the above function, I get
> {{{
> sage: test(1000)
> Wall time: 0.015585899353
> Total CPU: 0.04
> sage: singular('a100')
> 100
> }}}

With the same change, I get:

sage: test(1000)
Wall time: 0.00672912597656
Total CPU: 0.02

> But when I replace "singular.eval..." by "singular._synchronize()", I
> get
> {{{
> sage: test(1000)
> Wall time: 19.9999539852
> Total CPU: 0.0
> }}}

Here, I get:

sage: test(1000)
Wall time: 20.219892025
Total CPU: 0.13

So your machine is not alone.

On sagenb.kaist.ac.kr, an 8-core Xeon machine running Ubuntu 10.04.1, I get:


sage: test(1000) # original singular.eval
Wall time: 40.0399620533
Total CPU: 0.4

sage: test(1000) # singular._sendstr
Wall time: 0.0119400024414
Total CPU: 0.04

sage: test(1000) # singular._synchronize
Wall time: 19.9999010563
Total CPU: 0.15

So your machine is *definitely* not alone.

Dan

--
--- Dan Drake
----- http://mathsci.kaist.ac.kr/~drake
-------


Simon King

Nov 19, 2010, 1:28:47 AM
to sage-devel
Hi Dan!

On 19 Nov., 03:39, Dan Drake <dr...@kaist.edu> wrote:
> ...
> So your machine is *definitely* not alone.

Relieving, in a way...

So, now the question is what all the machines have in common. I notice
that it is Ubuntu in many (or all?) cases. One of the machines at my
university used to *not* show the big overhead when it was running
Scientific Linux. Now it is (if I understood correctly what the
sysadmin said) Ubuntu.

Is there a counter example? Is sage.math under Ubuntu, too?

I'll try to find out how to contact the pexpect developers and ask
them if they know a solution.

Cheers,
Simon

Dima Pasechnik

Nov 19, 2010, 2:47:22 AM
to sage-devel


On Nov 19, 2:28 pm, Simon King <simon.k...@uni-jena.de> wrote:
> Hi Dan!
>
> On 19 Nov., 03:39, Dan Drake <dr...@kaist.edu> wrote:
>
> > ...
> > So your machine is *definitely* not alone.
>
> Relieving, in a way...
>
> So, now the question is what all the machines have in common. I notice
> that it is Ubuntu in many (or all?) cases. One of the machines at my
> university used to *not* show the big overhead when it was running
> Scientific Linux. Now it is (if I understood correctly what the
> sysadmin said) Ubuntu.
>
> Is there a counter example? Is sage.math under Ubuntu, too?

on Debian x64 (VMWare virtual machine) I get
sage: sage: test(1000)
Wall time: 16.1236770153
Total CPU: 0.19601

on Centos (a clone of Red Hat) x64 I get
sage: test(1000)
Wall time: 0.176967144012
Total CPU: 0.170977
sage:

which looks like confirmation of your theory that this is Debian/
Ubuntu specific.
Dima

Simon King

Nov 19, 2010, 2:47:38 AM
to sage-devel
On 19 Nov., 07:28, Simon King <simon.k...@uni-jena.de> wrote:
> So, now the question is what all the machines have in common. I notice
> that it is Ubuntu in many (or all?) cases. One of the machines at my
> university used to *not* show the big overhead when it was running
> Scientific Linux. Now it is (if I understood correctly what the
> sysadmin said) Ubuntu.

Small correction: One of the problematic machines runs Debian 6.0, not
Ubuntu. But according to our sysadmin, Ubuntu is based on Debian, so
one might conjecture that the problem occurs on machines that are in
some way based on Debian.

> Is there a counter example? Is sage.math under Ubuntu, too?

Or rather, is it under Debian (at least indirectly)?

So, we have overhead on
1.
Linux mpc721 2.6.32-24-server #43-Ubuntu SMP Thu Sep 16 16:05:42 UTC
2010 x86_64 GNU/Linux
Four Dual Core AMD Opteron(tm) Processors 270, 1800.000MHz
2.
Linux mpc622 2.6.34.linuxpool #0 SMP PREEMPT Wed May 19 16:32:19 CEST
2010 x86_64 GNU/Linux
(This is Debian 6.0 according to the sysadmin)
Intel(R) Core(TM) i3 CPU 530 @ 2.93GHz
3.
The 64-bit Ubuntu 10.10 system of Dan Drake
4.
sagenb.kaist.ac.kr, an 8-core Xeon machine running Ubuntu 10.04.1

We have no overhead on
1.
sage.math
2.
bsd.math
3.
The OpenSolaris machine of David Kirkby
4.
Machine #1 above, when it was running Scientific Linux.

Let's see if that gives some clue to the pexpect people...

Cheers,
Simon

Dan Drake

Nov 19, 2010, 6:33:47 AM
to sage-...@googlegroups.com
On Thu, 18 Nov 2010 at 10:28PM -0800, Simon King wrote:
> So, now the question is what all the machines have in common. I notice
> that it is Ubuntu in many (or all?) cases. One of the machines at my
> university used to *not* show the big overhead when it was running
> Scientific Linux. Now it is (if I understood correctly what the
> sysadmin said) Ubuntu.
>
> Is there a counter example? Is sage.math under Ubuntu, too?

For what it's worth, I ran your tests on my Macbook. It's an Intel
Macbook from 2006 or 2007, running OS X 10.5 and Sage 4.5. I got:

sage: test1(1000) # the basic "eval()" test
Wall time: 0.275583028793
Total CPU: 0.327884

sage: test3(1000) # the "synchronize()" test
Wall time: 0.106857776642
Total CPU: 0.141513

The _sendstr() test would work when I ran it a few times, but eventually it
would hang. I'm guessing that the interface got de-sync'ed.

I can check this on an Arch VM tomorrow.


Simon King

Nov 19, 2010, 8:30:19 AM
to sage-devel
Hi Dan!

On 19 Nov., 12:33, Dan Drake <dr...@kaist.edu> wrote:
> For what it's worth, I ran your tests on my Macbook. It's an Intel
> Macbook from 2006 or 2007, running OS X 10.5 and Sage 4.5.

Thank you!

Meanwhile I contacted Noah (the pexpect developer). Let's see what he
thinks about it.

Best regards,
Simon

Willem Jan Palenstijn

Nov 19, 2010, 9:12:52 AM
to sage-...@googlegroups.com
On Thu, Nov 18, 2010 at 11:47:38PM -0800, Simon King wrote:
> On 19 Nov., 07:28, Simon King <simon.k...@uni-jena.de> wrote:
> > So, now the question is what all the machines have in common. I notice
> > that it is Ubuntu in many (or all?) cases. One of the machines at my
> > university used to *not* show the big overhead when it was running
> > Scientific Linux. Now it is (if I understood correctly what the
> > sysadmin said) Ubuntu.
>
> Small correction: One of the problematic machines runs Debian 6.0, not
> Ubuntu. But according to our sysadmin, Ubuntu is based on Debian, so
> that one might conjecture that the problem occurs on machines that in
> some way are based on Debian.

Could you try to download, compile and run a small test program on a
problematic machine? It times how fast a pseudo-terminal responds, which might
be the problem judging by a few quick tests I ran.

wget http://www.usecode.org/misc/timeptmx.c
gcc -o timeptmx timeptmx.c
strace -o timeptmx.log -f -ttt ./timeptmx
grep aaa timeptmx.log

That should output something like this:

16095 1290175675.065705 write(3, "aaa", 3) = 3
16096 1290175675.065749 <... read resumed> "aaa", 256) = 3

The difference between these two timestamps seems to determine how fast pexpect
responds. In this case it's fast (1290175675.065705 to 1290175675.065749 is
only 44 microseconds), but I've seen 1.8ms on other machines with newer
kernels.

Something similar to this is run twice for every singular.eval() call, and
seems to be the major factor in execution (wall) time.
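
A rough Python analogue of that measurement (my own sketch; the actual timeptmx.c is a C program and may differ in detail) times the same pseudo-terminal round trip:

```python
import os
import pty
import select
import time

# Open a pseudo-terminal pair, write to the master side, and time how
# long it takes for the data to become readable on the slave side --
# the same round trip the strace output above shows.
master, slave = pty.openpty()

start = time.time()
os.write(master, b"aaa\n")                  # like write(3, "aaa", 3)
readable, _, _ = select.select([slave], [], [], 5.0)
data = os.read(slave, 256)                  # like the resumed read()
elapsed = time.time() - start
```

On a healthy machine `elapsed` should be well under a millisecond; on an affected kernel it would show the multi-millisecond latency discussed in this thread.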


-Willem Jan

Dima Pasechnik

Nov 19, 2010, 10:11:19 AM
to sage-devel


On Nov 19, 10:12 pm, Willem Jan Palenstijn <w...@usecode.org> wrote:
> On Thu, Nov 18, 2010 at 11:47:38PM -0800, Simon King wrote:
> > On 19 Nov., 07:28, Simon King <simon.k...@uni-jena.de> wrote:
> > > So, now the question is what all the machines have in common. I notice
> > > that it is Ubuntu in many (or all?) cases. One of the machines at my
> > > university used to *not* show the big overhead when it was running
> > > Scientific Linux. Now it is (if I understood correctly what the
> > > sysadmin said) Ubuntu.
>
> > Small correction: One of the problematic machines runs Debian 6.0, not
> > Ubuntu. But according to our sysadmin, Ubuntu is based on Debian, so
> > that one might conjecture that the problem occurs on machines that in
> > some way are based on Debian.
>
> Could you try to download, compile and run a small test program on a
> problematic machine? It times how fast a pseudo-terminal responds, which might
> be the problem judging by a few quick tests I ran.
>
> wget http://www.usecode.org/misc/timeptmx.c
> gcc -o timeptmx timeptmx.c
> strace -o timeptmx.log -f -ttt ./timeptmx
> grep aaa timeptmx.log
>
> That should output something like this:
>
> 16095 1290175675.065705 write(3, "aaa", 3) = 3
> 16096 1290175675.065749 <... read resumed> "aaa", 256) = 3
>
> The difference between these two timestamps seems to determine how fast pexpect
> responds. In this case it's fast (1290175675.065705 to 1290175675.065749 is
> only 44 microseconds), but I've seen 1.8ms on other machines with newer
> kernels.
>

it seems that on my pair of Linux machines (see my post in the thread)
I see exactly this happening:

Debian (testing), kernel 2.6.32, 6105 microseconds
dima@banana:/tmp/pexpect$ gcc -o timeptmx timeptmx.c
dima@banana:/tmp/pexpect$ strace -o timeptmx.log -f -ttt ./timeptmx
dima@banana:/tmp/pexpect$ grep aaa timeptmx.log
17157 1290178717.172903 write(3, "aaa", 3) = 3
17158 1290178717.179008 <... read resumed> "aaa", 256) = 3
dima@banana:/tmp/pexpect$ uname -a
Linux banana 2.6.32-5-amd64 #1 SMP Sat Oct 30 14:18:21 UTC 2010 x86_64
GNU/Linux

Centos, kernel 2.6.18, 109 microseconds
[dima@grapefruit pexpect]$ gcc -o timeptmx timeptmx.c
[dima@grapefruit pexpect]$ strace -o timeptmx.log -f -ttt ./timeptmx
[dima@grapefruit pexpect]$ grep aaa timeptmx.log
17650 1290178940.664300 write(3, "aaa", 3) = 3
17651 1290178940.664409 <... read resumed> "aaa", 256) = 3
[dima@grapefruit pexpect]$ uname -a
Linux grapefruit.local.spms.ntu.edu.sg 2.6.18-194.17.4.el5.centos.plus
#1 SMP Tue Oct 26 04:07:11 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux


Dima

David Kirkby

Nov 19, 2010, 10:18:42 AM
to sage-...@googlegroups.com


If there is no trac ticket for this, which I believe is the case, then
one should be created.

This could be a real mess if updates of the kernel are going to cause
this slowdown.

It must be worth trying the latest pexpect source code - we are several
revisions out of date.

Dave

Simon King

Nov 19, 2010, 12:39:06 PM
to sage-devel
Hi Willem!

On 19 Nov., 15:12, Willem Jan Palenstijn <w...@usecode.org> wrote:
> Could you try to download, compile and run a small test program on a
> problematic machine? It times how fast a pseudo-terminal responds, which might
> be the problem judging by a few quick tests I ran.
>
> wget http://www.usecode.org/misc/timeptmx.c
> gcc -o timeptmx timeptmx.c
> strace -o timeptmx.log -f -ttt ./timeptmx
> grep aaa timeptmx.log
>
> That should output something like this:
>
> 16095 1290175675.065705 write(3, "aaa", 3) = 3
> 16096 1290175675.065749 <... read resumed> "aaa", 256) = 3
>
> The difference between these two timestamps seems to determine how fast pexpect
> responds. In this case it's fast (1290175675.065705 to 1290175675.065749 is
> only 44 microseconds), but I've seen 1.8ms on other machines with newer
> kernels.

Thank you for that hint!

I get (on two machines)
16796 1290187196.906112 write(3, "aaa", 3) = 3
16797 1290187196.914130 <... read resumed> "aaa", 256) = 3
and
23921 1290187440.127914 write(3, "aaa", 3) = 3
23922 1290187440.150274 <... read resumed> "aaa", 256) = 3

And the differences are 0.00801801681518555 and 0.0223600864410400
seconds, respectively.

8 ms and 22 ms difference?? Quite a lot! And the 22 ms are on the
machine that shows the worst overhead.

> Something similar to this is run twice for every singular.eval() call, and
> seems to be the major factor in execution (wall) time.

Indeed. When synchronizing the interface (which is the first thing
done in singular.eval(...)), a random number is sent, and then we wait
for the prompt (and the number) to appear in the output stream. This,
if I understand correctly, is done using the select() call applied to
a pseudo terminal. And similar things happen when the actual command
is sent through the interface - again, one waits for the interface's
prompt to appear in the output stream.

But I don't know where the third call happens when creating a
SingularElement (recall: _synchronize() is responsible for one third,
the rest of eval(...) for another third of the overhead of creating a
SingularElement). Perhaps worth analyzing.

I will try to open a ticket for it, summarizing this thread in the
ticket description. But am I right that there should be two separate
tickets for the overhead issue and the pexpect upgrade, since it is
not proved that the new pexpect version does better?

Best regards, and thank you to everybody,
Simon

Simon King

Nov 19, 2010, 2:38:57 PM
to sage-devel
On 19 Nov., 18:39, Simon King <simon.k...@uni-jena.de> wrote:
> But I don't know where the third call happens when creating a
> SingularElement (recall: _synchronize() is responsible for one third,
> the rest of eval(...) for another third of the overhead of creating a
> SingularElement). Perhaps worth to analyze.

Got it: The Singular interface keeps track of variable names that are
no longer needed and should be killed in Singular (to save
memory). More precisely:
1) SingularElement.__del__() declares the underlying variable in
Singular as "to be killed".
2) In singular.eval(), these killings are actually committed.
3) Each variable is killed by one call of singular._eval_line(), which
then waits for the prompt to appear. It involves a call to
select.select().

So: If I have "singular(i)" in my test function, SingularElements are
created, and killed later, which is the third time that
select.select() is called. If I instead have "singular.eval(...)", no
SingularElements are involved, which avoids one out of three
select.select() calls.

But I wonder: Is it really a good strategy to kill SingularElements in
a method like singular.eval(), which is not (directly) related to
SingularElement?

Some arguments:
- If a programmer directly uses singular.eval, the reason could be to
explicitly avoid SingularElement (this is my motivation, sometimes).
So, why should singular.eval() waste time with killing
SingularElements?
- I don't see why killing the variable is not already done in
SingularElement.__del__, but postponed to a later call of
singular.eval. It might spare resources to kill a *group* of variables
with a single call. But such mass murder is not implemented anyway:
The variables are killed one by one, inside singular.eval.
- Heuristically, singular.eval is called far more often than
SingularElement.__init__. So, if one seeks to reduce the overhead by
killing variables in larger groups, one might consider doing it in
SingularElement.__init__, since it is called less often.
- Or, if one wants frequent kills: Why is it not done inside
singular._synchronize()? In that way, one would need to wait for the
prompt only *once*, rather than one time for the synchronization and
one time for each variable.
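
The batching idea in those points could be sketched as follows (a hypothetical helper, not Sage's actual implementation; it assumes Singular's `kill` accepts a comma-separated list of names):

```python
class KillQueue:
    """Collect dead variable names and emit one combined kill command,
    so the interface waits for the prompt only once per batch."""

    def __init__(self):
        self._pending = []

    def defer_kill(self, name):
        """Called from __del__: remember a variable to kill later."""
        self._pending.append(name)

    def flush_command(self):
        """Return one combined command for all pending kills, or None."""
        if not self._pending:
            return None
        cmd = 'kill %s;' % ','.join(self._pending)
        self._pending = []
        return cmd

q = KillQueue()
q.defer_kill('a100')
q.defer_kill('a101')
cmd = q.flush_command()
```

The returned command string would then be sent through the interface with a single prompt wait, instead of one _eval_line() round trip per variable.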

> I will try to open a ticket for it, summarizing this thread in the
> ticket description. But am I right that there should be two separate
> tickets for the overhead issue and the pexpect upgrade, since it is
> not proved that the new pexpect version does better?

I think I'll do three tickets: One for the issue that I describe in
this post, one for the machine-dependent slowness of pexpect (my
original post, "reported upstream") and one for upgrading pexpect.

Cheers,
Simon

Dan Drake

Nov 19, 2010, 8:39:19 PM
to sage-...@googlegroups.com
On Fri, 19 Nov 2010 at 02:12PM +0000, Willem Jan Palenstijn wrote:
> Could you try to download, compile and run a small test program on a
> problematic machine? It times how fast a pseudo-terminal responds, which might
> be the problem judging by a few quick tests I ran.
>
> wget http://www.usecode.org/misc/timeptmx.c
> gcc -o timeptmx timeptmx.c
> strace -o timeptmx.log -f -ttt ./timeptmx
> grep aaa timeptmx.log

drake@klee:/scratch/download$ grep aaa timeptmx.log
508 1290216318.809968 write(3, "aaa", 3) = 3
509 1290216318.837492 <... read resumed> "aaa", 256) = 3

So, about .027 seconds. That's on my 64-bit Ubuntu 10.10 machine, Core 2
Quad, with a current kernel:

Linux klee 2.6.35-22-generic #35-Ubuntu SMP Sat Oct 16 20:45:36 UTC 2010
x86_64 GNU/Linux

On the same machine, inside an Arch virtual machine running in
VirtualBox (I don't have cut and paste enabled for the VM, so I'm
manually typing this):

936 129...5.908400 write(3, "aaa"...
937 129...5.928973 >... read resumed...

So, the timing is faster, but still a couple orders of magnitude behind
your timings. Yikes.


Simon King

Nov 20, 2010, 4:24:23 AM
to sage-devel
On 19 Nov., 20:38, Simon King <simon.k...@uni-jena.de> wrote:
> I think I'll do three tickets: One for the issue that I describe in
> this post,  one for the machine-dependent slowness of pexpect (my
> original post, "reported upstream") and one for upgrading pexpect.

Done! It is #10296 for the Singular-specific issues, #10295 for
upgrading pexpect, and #10294 for the general problem of calls to
select() being slow on some systems (which may be solved by upgrading
pexpect or by switching to expect, but may very well be a "wontfix").

Best regards,
Simon

Dr. David Kirkby

Nov 20, 2010, 7:21:13 AM
to sage-...@googlegroups.com
That seems all very logical. But I can't help feeling that trying an upgrade of
pexpect will take only a few minutes, and might just solve all your problems. If
it fails to solve the problems, you could put a quick note on #10295 that it did
not help, then forget #10295.

I doubt it will help, but it is remotely possible, and the test would probably
take you less than 10 minutes - no need to bother with mercurial, patches,
review or anything else. Just try it and see if it works.


Dave

Simon King

Nov 20, 2010, 10:02:47 AM
to sage-devel
Hi David,

On 20 Nov., 13:21, "Dr. David Kirkby" <david.kir...@onetel.net> wrote:
> That seems all very logical. But I can't help feeling trying an upgrade of
> pexect will take only a few minutes, and might just solve all your problems.

Even if it does, the singular-specific issues raised in #10296 are
still valid.

First I'd like to see what the pexpect developer has to say. Francois
Bissey remarked that we are using pexpect 2.0 because later versions
were *slower*. Also, it is difficult to imagine that pexpect now works
around pseudo-terminals being slow - after all, it relies on
pseudo-terminals. So, an upgrade does not sound very promising.

Hence, for now I am focusing on #10296 (and on fitting the furniture
into my kitchen...)

Cheers,
Simon

François Bissey

Nov 20, 2010, 3:38:48 PM
to sage-...@googlegroups.com
Well, when you have time I think it would be a good idea to try Dave's
suggestion. Bumping the spkg to version 2.4 is almost trivial.
My notes indicate that
--- pexpect.py.orig 2007-04-16 07:08:24.000000000 -0700
+++ pexpect.py 2009-01-23 01:49:18.000000000 -0800
@@ -1130,7 +1130,7 @@
"""
# Special case where filename already contains a path.
if os.path.dirname(filename) != '':
- if os.access (filename, os.X_OK):
+ if os.access (filename, os.X_OK) and not os.path.isdir(f):
return filename

if not os.environ.has_key('PATH') or os.environ['PATH'] == '':
@@ -1145,7 +1145,7 @@

for path in pathlist:
f = os.path.join(path, filename)
- if os.access(f, os.X_OK):
+ if os.access(f, os.X_OK) and not os.path.isdir(f):
return f
return None

is still needed in 2.4, but not the other patch included in 2.0 as it has been
adopted upstream.
You can form your own opinion then. You should also try to do some
plotting in the notebook. In our case that was the deal breaker. It
would be interesting to know if your experience is different.

Francois

Johannes

Nov 21, 2010, 9:35:24 AM
to sage-...@googlegroups.com

24618 1290349603.215559 write(3, "aaa", 3) = 3
24619 1290349603.219923 <... read resumed> "aaa", 256) = 3

in the first case:
Wall time: 16.0200681686
Total CPU: 0.862045

in the second one:
Wall time: 0.0230610370636
Total CPU: 0.042001

in the third one:
Wall time: 8.00897812843
Total CPU: 0.300014

and my config:
Linux neo 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:48:22 UTC 2010
i686 GNU/Linux
j_schn14@neo:~$ sage -version
| Sage Version 4.6, Release Date: 2010-10-30

hope this helps.
greatz Johannes

Simon King

Nov 21, 2010, 2:20:15 PM
to sage-devel
Hi all!

At ticket #10294, Mike Hansen gave a very helpful remark. He pointed
to a thread at http://groups.google.com/group/linux.kernel/browse_thread/thread/5a2b00e35b0864a7

Our system administrator already replied that the solution
suggested in that thread (or in another thread that is linked there)
looks very promising: there is a "low latency mode" for pseudo-terminals
that apparently can be chosen by some kernel configuration. He will
not be here tomorrow, but he thinks that the problem is likely to be
resolved on Tuesday.

He only wonders why seemingly every distribution except Debian and
Ubuntu is using the patch, and why other Sage users have not complained
before about a problem affecting distributions as common as Debian and
Ubuntu.

Cheers,
Simon

Dima Pasechnik

Nov 22, 2010, 12:09:11 AM
to sage-devel
Are we talking about the same kernels?
The kernel de-commit described in the post was done in 2009,
and it's not clear to me whether this has made it into
production kernels already.
(almost certainly not into Debian stable...)

Dima

On Nov 22, 3:20 am, Simon King <simon.k...@uni-jena.de> wrote:
> Hi all!
>
> At ticket #10294, Mike Hansen gave a very helpful remark. He pointed
> to a thread at http://groups.google.com/group/linux.kernel/browse_thread/thread/5a2b...

Simon King

Nov 22, 2010, 1:46:10 AM
to sage-devel
Hi Dima!

On 22 Nov., 06:09, Dima Pasechnik <dimp...@gmail.com> wrote:
> Are we talking about the same kernels?
> The kernel de-commit described in the post was done in 2009,
> and it's not clear to me whether this has made it into
> production kernels already.
> (almost certainly not into Debian stable...)

Sorry, questions about kernels or about different distributions of
Linux are beyond my knowledge.

Cheers,
Simon

Alex Leone

Nov 22, 2010, 8:12:35 AM
to sage-...@googlegroups.com
for reference: I'm running Ubuntu 10.10 on a quad core, 2.6.35-22-generic

1. This is not an issue with select.  This is an issue with the low_latency pty stuff that Simon mentioned.  I tested this by modifying pexpect to use true non-blocking I/O (i.e. set the child_fd to O_NONBLOCK, and then just read() and ignore EAGAIN), and there was still a delay before read would return something other than EAGAIN.

2. There's nothing wrong with the singular interface.

3.  Possible fix: for singular and gap (and anything else that doesn't need a full tty, i.e. doesn't use fancy screen features like top), perhaps we should use subprocess.Popen(... stdout=PIPE, stdin=PIPE, stderr=PIPE) and then communicate through the pipes.  There shouldn't be any latency.  I'm currently modifying pexpect to do this and will report my findings.
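A minimal sketch of the pipe-based idea in point 3, using `cat` as a stand-in for an interface process (the real change would of course live inside the interface code, and the "command" here is only illustrative):

```python
import subprocess

# Talk to a child process through plain pipes instead of a pty.
# Pipes have no terminal line discipline, so none of the pty latency
# discussed in this thread applies.  `cat` stands in for a
# line-oriented interface process such as Singular or GAP.
p = subprocess.Popen(["cat"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)

p.stdin.write(b"def a1=1;\n")   # send one "command"...
p.stdin.flush()
reply = p.stdout.readline()     # ...and read one line of "output"
print(reply)                    # → b'def a1=1;\n'

p.stdin.close()
p.wait()
```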

 - Alex

Michael Brickenstein

Nov 22, 2010, 8:40:12 AM
to sage-...@googlegroups.com

Thanks for all your efforts about the performance of our interface.

For Singular, there exists the libsingular interface.
I would appreciate any help in making it the perfect Singular interface,
rather than optimizing the generic pexpect interface.

Cheers,
Michael

>
> - Alex
>
> --
> To post to this group, send an email to sage-...@googlegroups.com
> To unsubscribe from this group, send an email to sage-devel+...@googlegroups.com
> For more options, visit this group at http://groups.google.com/group/sage-devel
> URL: http://www.sagemath.org

-------------------------------------------
Dr. rer. nat. Michael Brickenstein
Mathematisches Forschungsinstitut Oberwolfach gGmbH
Schwarzwaldstr. 9 - 11
77709 Oberwolfach
Tel.: 07834/979-31
Fax: 07834/979-38

Simon King

Nov 22, 2010, 12:00:58 PM
to sage-devel
Hi Alex and Michael!

On 22 Nov., 14:40, Michael Brickenstein <brickenst...@mfo.de> wrote:
> Am 22.11.2010 um 14:12 schrieb Alex Leone:
>
> > for reference: I'm running Ubuntu 10.10 on a quad core, 2.6.35-22-generic
>
> ...
> > 2. There's nothing wrong with the singular interface.

The commands
a=gap(1)
a=gap(2)
require waiting for a Gap prompt twice.

The commands
a=singular(1)
a=singular(2)
require waiting for a Singular prompt five times.

So, there *is* something wrong with the Singular interface: The
overhead is considerably more than with the other interfaces!

And I do believe that this is worth a ticket (I think it is the last
of the three tickets that I opened for this thread).

> > 3.  Possible fix: for singular and gap (and anything else that doesn't need a full tty, ie doesn't use fancy screen stuff like top), perhaps we should use subprocess.Popen(... stdout=PIPE, stdin=PIPE, stderr=PIPE) and then communicate through the pipes.  There shouldn't be any latency.  I'm currently modifying pexpect to do this and will report my findings.

Would this be possible without too much effort? This sounds great!

But the Singular interface having more overhead than other interfaces
is orthogonal to the "pexpect vs. pipes" issue.

> For Singular, there exists the libsingular interface.
> I would appreciate every help making it the perfect Singular interface, instead of optimizing
> the generic pexpect interface.

1. Sage uses some pexpect interfaces that cannot be replaced by a C API
right now, and some will never be replaceable by one: Maxima and
GAP belong to the first category IIRC; Maple, Magma and other
interfaces belong to the second category (unless Magma changes to the
GPL...)

So, optimizing the generic pexpect interface, as suggested by Alex, is
a *very* interesting project.

2. IMHO, making libsingular "the perfect Singular interface" would
involve re-writing any Sage code that currently relies on the pexpect
interface. And that would certainly be a lot of stuff to re-write.

Best regards,
Simon

William Stein

Nov 22, 2010, 2:44:27 PM
to sage-...@googlegroups.com

That's not clear to me. Maybe the full singular command prompt could
be used via a library interface? After all, it is just a C++ program.
The C interface to GAP that I wrote just involves rewriting the
GAP read-eval-print loop (the command line interface) and turning it
into C library calls. I literally just stick some data in where the
read would put it, then tell GAP that it just read something in, so go
evaluate it, etc. I mentioned this to Martin Albrecht (author of
libsingular), and he said it would be more difficult to do that with
Singular. But that doesn't mean it is impossible. It might be better
to write something like this and not change any of Sage, than to have
to rewrite a lot of Sage.

-- William

Simon King

Nov 22, 2010, 4:43:12 PM
to sage-devel
Hi William,

On 22 Nov., 20:44, William Stein <wst...@gmail.com> wrote:
> > 2. IMHO, making libsingular "the perfect Singular interface" would
> > include to re-write any Sage code that currently relies on the pexpect
> > interface. And this would certainly be a lot of stuff to re-write.
>
> That's not clear to me.  Maybe the full singular command prompt could
> be used via a library interface?  After all, it is just a C++ program.

What I meant was: there is a difference between *being able* to do
something and actually *doing* it.

I am rather sure that there is much code in Sage, and probably
third-party code as well, that uses Singular via the pexpect
interface. It is one thing to say to the authors of that code:
"Meanwhile you can do all this via libsingular" (as far as I
understand, it *is* now possible). But it is a completely different
thing to make the authors change all their code so that it *does* use
libsingular.

Hence, if a small change in the singular pexpect interface reduces the
overhead by 2/3, then a lot of code would benefit - thus, such a small
change is worthwhile.

>     The C interface to GAP that I wrote just involves rewriting the
> GAP read-eval-print loop (the command line interface) and turning it
> into C library calls.  I literally just stick some data in where the
> read would put it, then tell GAP that it just read something in, so go
> evaluate it, etc.

I didn't know that there was so much progress - I remember someone
mentioning that certain internal aspects of GAP would make the
creation of "libgap" difficult.

Here is my personal list of pros and contras.

Improving the text interfaces by replacing pexpect:
- Such a radical change (as suggested by Alex) would probably require
much new library code to be written.
+ AFAIK, text interfaces *will* remain important in Sage
+ Lots of existing code would benefit.

Improving pexpect by reducing the latency time of pseudo-terminals:
- I doubt that Sage can change it, as this sounds like the job of a
system administrator.
+ Users might be happy to find an explanation in the manual of how
they (or their sysadmin) can improve the performance of all
existing interface-intensive Sage programs.

Reducing the particular overhead of the Singular interface:
+ The changes I am suggesting are small (doctests pass, so I guess I
can post a patch tomorrow)
+ Lots of existing code would benefit.

Making libsingular applicable to a broader range of functionality:
+ (a BIG plus) If something can be done via a C API then it is
certainly better than a text interface.
- Existing code would not benefit.

Everything has pros. So one simply has to decide on one's
priorities. Mine is to reduce the latency time on my machine and to
reduce the Singular interface overhead, since that promises the most
impact on my work in cohomology for the least effort.

Cheers,
Simon

Alex Leone

Nov 22, 2010, 5:56:20 PM
to sage-...@googlegroups.com
I wrote a small script (see attached) that reads ~1 second of the stdout of a command opened with subprocess.Popen().  For some reason singular doesn't like this and refuses to show any output.  However gap works fine:

singular output:

{{{
SAGE_ROOT=/home/alex/progs/src/sage-4.6.1.alpha2
(sage subshell) P5Q-E:sage-4.6.1.alpha2 alex$ python test_Popen.py /home/alex/progs/src/sage-4.6.1.alpha2/local/bin/Singular-3-1-1 -t
Starting subprocess with args=
['/home/alex/progs/src/sage-4.6.1.alpha2/local/bin/Singular-3-1-1', '-t']
p_pid=30070   p_stdinfd=4   p_stdoutfd=5   p_stderrfd=None
p.terminate()
returncode=-15
}}}


gap output:

{{{
SAGE_ROOT=/home/alex/progs/src/sage-4.6.1.alpha2
(sage subshell) P5Q-E:sage-4.6.1.alpha2 alex$ python test_Popen.py gap
Starting subprocess with args=
['gap']
p_pid=30073   p_stdinfd=4   p_stdoutfd=5   p_stderrfd=None
Got:
        
Got:
                #########           ######         ###########           ###  
             #############          ######         ############         ####  
            ##############         ########        #############       #####  
           ###############         ########        #####   ######      #####  
          ######         #         #########       #####    #####     ######  
         ######                   ##########       #####    #####    #######  
         #####                    ##### ####       #####   ######   ########  
         ####                    #####  #####      #############   ###  ####  
         #####     #######       ####    ####      ###########    ####  ####  
         #####     #######      #####    #####     ######        ####   ####  
         #####     #######      #####    #####     #####         #############
          #####      #####     ################    #####         #############
          ######     #####     ################    #####         #############
          ################    ##################   #####                ####  
           ###############    #####        #####   #####                ####  
             #############    #####        #####   #####                ####  
              #########      #####          #####  #####                ####  
                                                                              
         Information at:  http://www.gap-system.org
         Try '?help' for help. See also  '?copyright' and  '?authors'
        
       Loading the library. Please be patient, this may take a while.
Got:
    GAP4, Version: 4.4.12 of 17-Dec-2008, x86_64-unknown-linux-gnu-gcc
Got:
    gap> 
p.terminate()
returncode=-15
}}}


So it seems singular needs a tty.

- Alex
test_Popen.py

Mike Hansen

Nov 22, 2010, 6:02:25 PM
to sage-...@googlegroups.com
On Mon, Nov 22, 2010 at 2:56 PM, Alex Leone <acl...@gmail.com> wrote:
> I wrote a small script (see attached) that reads ~1 second of the stdout of
> a command opened with subprocess.Popen().  For some reason singular doesn't
> like this and refuses to show any output.  However gap works fine:

pexpect (and other expect variants) were written specifically to avoid
deadlocks, which is most likely what you are seeing with Singular. For
more info on this, check out
http://effbot.org/pyfaq/how-do-i-run-a-subprocess-with-pipes-connected-to-both-input-and-output.htm

--Mike
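The deadlock in question arises when the parent writes a large input and only then reads the output: once the OS pipe buffers fill (typically around 64 KiB), the child blocks writing its output while the parent blocks writing its input, and nothing moves. A sketch of the safe pattern, with `cat` standing in for an arbitrary filter process:

```python
import subprocess

# Dangerous pattern (can deadlock once the pipe buffers fill up):
#
#     p.stdin.write(big_input)   # parent blocks here while the child
#     out = p.stdout.read()      # blocks writing its own output
#
# communicate() avoids this by pumping stdin and stdout concurrently.
p = subprocess.Popen(["cat"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
big_input = b"x" * 1000000      # larger than any common pipe buffer
out, _ = p.communicate(big_input)
print(len(out))                  # → 1000000
```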

Alex Leone

Nov 22, 2010, 7:01:59 PM
to sage-...@googlegroups.com
This should be the problem:

1. If you start singular normally, it prints version info, etc. to stdout and then waits for stdin.  So if we read stdout, we should get something immediately after the process is started.

2. We set stdout to nonblocking (in the python process), so read() doesn't block.

3. I ran strace to see what was happening.  For some reason singular is trying to read from stdin before it prints anything to stdout.

relevant strace details from the python process:

{{{
pipe([3, 4])                            = 0
pipe([5, 6])                            = 0
pipe([7, 8])                            = 0
fcntl(8, F_GETFD)                       = 0
fcntl(8, F_SETFD, FD_CLOEXEC)           = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f8aaf3eb9d0) = 30213
close(8)                                = 0
close(3)                                = 0
close(6)                                = 0
...
fcntl(5, F_GETFL)                       = 0 (flags O_RDONLY)
fcntl(5, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
read(5, 0x1a9c8a4, 4096)                = -1 EAGAIN (Resource temporarily unavailable)
... (this read is repeated while the python process tries to read any output from the singular process for about a second)
...
kill(30213, SIGTERM)                    = 0
...
--- SIGCHLD (Child exited) @ 0 (0) ---
...
}}}


relevant strace details from the clone()-ed singular process:

{{{
close(4)                                = 0
close(5)                                = 0
close(7)                                = 0
dup2(3, 0)                              = 0
dup2(6, 1)                              = 1
dup2(6, 2)                              = 2
close(3)                                = 0
close(6)                                = 0
execve("/home/alex/progs/src/sage-4.6.1.alpha2/local/bin/Singular-3-1-1", ["Singular-3-1-1", "-t"], [/* 81 vars */]) = 0
...
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff17936e60) = -1 EINVAL (Invalid argument)
...
fstat(0, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff5da40b000
read(0, 0x7ff5da40b000, 4096)           = ? ERESTARTSYS (To be restarted)
--- SIGTERM (Terminated) @ 0 (0) ---
}}}


The above output was produced by
{{{
strace -o strace_python_singular_-t_noshell.txt -ff -F python test_Popen.py Singular-3-1-1 -t
}}}


 - Alex
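For reference, the non-blocking read set-up from point 2 boils down to something like this (a sketch demonstrated on a plain pipe; the same fcntl dance applies to pexpect's child_fd):

```python
import errno
import fcntl
import os

def set_nonblocking(fd):
    """Switch a file descriptor to non-blocking mode, so that
    os.read() fails with EAGAIN instead of blocking."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

def read_available(fd, size=4096):
    """Return whatever is currently readable, or b'' if nothing is."""
    try:
        return os.read(fd, size)
    except OSError as e:
        if e.errno == errno.EAGAIN:
            return b""
        raise

r, w = os.pipe()
set_nonblocking(r)
print(read_available(r))   # → b'' (nothing written yet)
os.write(w, b"aaa")
print(read_available(r))   # → b'aaa'
```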


Alex Leone

Nov 22, 2010, 7:03:44 PM
to sage-...@googlegroups.com
Sorry, I meant that deadlock *shouldn't* be the problem.  The problem is the call to read(0, )... by the singular process before it writes anything to stdout (e.g. write(1, "....)).

 - Alex

Michael Brickenstein

Nov 23, 2010, 2:47:02 AM
to sage-...@googlegroups.com

Am 22.11.2010 um 23:56 schrieb Alex Leone:

> Loading the library. Please be patient, this may take a while.
> Got:
> GAP4, Version: 4.4.12 of 17-Dec-2008, x86_64-unknown-linux-gnu-gcc
> Got:
> gap>
> p.terminate()
> returncode=-15
> }}}
>
>
> So it seems singular needs a tty.
>
> - Alex

sage -singular --help
Singular version 3-1-0 -- a CAS for polynomial computations. Usage:
Singular-3-1-0 [options] [file1 [file2 ...]]
Options:
...
-t --no-tty Do not redefine the terminal characteristics
...


Cheers,
Michael
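The -t / --no-tty flag matters here because a child started through pipes sees no terminal at all, which is easy to check (a sketch using a small Python child in place of Singular):

```python
import subprocess
import sys

# A child whose stdio is connected to pipes has no controlling tty:
# isatty() on its stdin is False.  A program that "redefines the
# terminal characteristics" on startup stumbles over exactly this,
# which is presumably what Singular's -t option is for.
p = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(sys.stdin.isatty())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out.strip())   # → b'False'
```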


