We're releasing Sage 5.0.beta8.
Source archive:
http://boxen.math.washington.edu/home/release/sage-5.0.beta8/sage-5.0.beta8.tar
Upgrade path:
http://boxen.math.washington.edu/home/release/sage-5.0.beta8/sage-5.0.beta8/
The source and upgrade path can also be found on the mirror network
(you might need to wait a while before the mirrors are synchronized):
http://www.sagemath.org/download-latest.html
Please build, test, and report! We'd love to hear about your
experiences with this release.
== Tickets ==
* We closed 292 tickets in this release. For details, see
http://boxen.math.washington.edu/home/release/sage-5.0.beta8/tickets.html
Closed tickets:
#2999: Some packages don't respect the CC environment variable [Reviewed
by Michael Orlitzky, R. Andrew Ohana]
#3000: Some packages don't respect the CXX environment variable
[Reviewed by Michael Orlitzky, R. Andrew Ohana]
#3631: Delete *.pyc files when building Sage specific spkgs like extcode
[Reviewed by Jeroen Demeyer]
#7626: delete PBUILD code in local/bin/sage-sage script [Reviewed by
Jeroen Demeyer]
#11303: Fix the documentation of attach [Reviewed by Florent Hivert]
Merged in sage-5.0.beta8:
#9128: Florent Hivert: Sphinx should be aware of all.py to find its
links [Reviewed by Andrey Novoseltsev, Nicolas M. Thiéry]
#10296: Simon King: Singular interface wasting time by waiting for the
prompt too often [Reviewed by Martin Albrecht]
#10682: Dima Pasechnik: Upgrade maxima to 5.26 [Reviewed by Jean-Pierre
Flori, Nils Bruin]
#10817: Christian Stump: implementation of the generalized associahedron
as a polyhedral complex [Reviewed by Frédéric Chapoton, Nicolas M. Thiéry]
#10976: Christopher Swenson: computing order of a certain subgroup of a
permutation group is double dog slow (compared to Magma) [Reviewed by
William Stein]
#12202: Sebastian Pancratz, David Loeffler: Bug in
hecke_operator_on_basis [Reviewed by Jan Vonk]
#12392: David Roe: Doctest fix in sage/categories/modules_with_basis.py
[Reviewed by Jim Stark]
#12397: David Roe: Change doctests to remove trailing backslashes
[Reviewed by Jim Stark]
#12405: Jeroen Demeyer: Add $SAGE_LOCAL/lib64 to LD_LIBRARY_PATH
[Reviewed by Volker Braun]
#12470: Jeroen Demeyer: Remove scripts related to the Debian
distribution [Reviewed by Punarbasu Purkayastha]
#12480: David Roe: NTL segfault on OS X 10.7 [Reviewed by William Stein,
Jeroen Demeyer]
#12519: Jeroen Demeyer: cvxopt should not add -lcblas and -latlas on
Darwin [Reviewed by Dmitrii Pasechnik]
#12562: Jeroen Demeyer: In Singular spkg-install, disable -pipe on SunOS
[Reviewed by John Palmieri]
#12564: Daniel Krenn: documentation of SR wildcard: n instead of i
[Reviewed by David Loeffler]
#12581: Karl-Dieter Crisman: Fix contour and other plot default aspect
ratio [Reviewed by Benjamin Jones, David Loeffler]
#12585: Hugh Thomas: Bring matrix/matrix0.pyx to 100% coverage [Reviewed
by David Loeffler, Karl-Dieter Crisman]
#12616: Nathann Cohen: The LP are not deallocated because of cyclic
references ! [Reviewed by Simon King]
#12618: Jeroen Demeyer: Don't delete dist/sage-rsync directory in
sage-rsyncdist script [Reviewed by David Roe]
#12625: David Roe: Conversion of pari elements to Sage fails on some
negative valuation elements [Reviewed by Xavier Caruso]
#12626: David Coudert: Kautz, Imase and Itoh, and Generalized de Bruijn
digraph generators [Reviewed by Nathann Cohen]
#12629: Jeroen Demeyer: Completely disable the LinBox commentator
[Reviewed by Martin Albrecht]
#12632: David Loeffler: bug comparing trivial Dirichlet characters
[Reviewed by Jonathan Bober]
#12633: Nils Bruin: Fix doc of attach [Reviewed by Justin Walker]
#12635: Jeroen Demeyer: Remove pbuild files [Reviewed by Punarbasu
Purkayastha]
#12637: John Palmieri: Follow-up to #4949: don't delete the current
working directory [Reviewed by Jeroen Demeyer]
#12642: Nils Bruin: magma_free interface is broken [Reviewed by William
Stein]
#12645: Simon King: Fix rst markup for sage/combinat/sf/sf.py (and add
to manual) and sage/structure/dynamic_class.py [Reviewed by Nicolas M.
Thiéry]
On Wed, Mar 14, 2012 at 1:09 AM, Jeroen Demeyer <jdem...@cage.ugent.be> wrote:
> Please build, test, and report! We'd love to hear about your
> experiences with this release.
Built fine on Ubuntu 11.10, Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
$ uname -a
Linux melb 3.0.0-16-generic-pae #29-Ubuntu SMP Tue Feb 14 13:56:31 UTC
2012 i686 i686 i386 GNU/Linux
Both the HTML and PDF versions of the documentation built OK. All
tests passed with "ptestlong".
--
Regards,
Minh Van Nguyen
http://sage.math.washington.edu/home/mvngu/
Also passed ptestlong on
Ubuntu 10.04.4 LTS x86_64 (AMD E-450, GCC 4.4.3, native code)
-leif
--
() The ASCII Ribbon Campaign
/\ Help Cure HTML E-Mail
Built from scratch on Mac OS X, 10.6.8 (Dual 6-core Xeon): no problems. All tests ('ptestlong') passed!
Justin
--
Justin C. Walker
Curmudgeon-at-large
--
Network, n., Difference between work
charged for and work done
Well, what's funny about that message?
More importantly, what does your $SAGE_ROOT/local/lib/pkgconfig/libR.pc
look like?
-leif
Did you do anything special (like reinstalling the R spkg) with your
original installation before you made the copy, or did anything unusual
happen during the build?
The .pc files get "initialized"* (the $SAGE_ROOT part of any paths
factored out into a SAGE_ROOT variable) during the first start-up of
Sage, which usually happens right after the build (if you type 'make'
rather than 'make build'); you can normally see this from the timestamps
of $SAGE_ROOT/local/lib/sage-started.txt and the .pc files (in
$SAGE_ROOT/local/lib/pkgconfig/), which are (almost) identical.
But maybe you just did 'make build' and made the copy before running
Sage once (running 'make doc' or 'make *test*' has the same effect),
which you [currently] shouldn't do... :-)
You should also check whether the libR.pc file of your original
installation is sane (i.e., got "initialized" as described above); if
not, try running Sage and take a second look at it.
-leif
_____
* in case they aren't yet; in principle an spkg's spkg-install script
should already set up the SAGE_ROOT variable there, and make all paths
relative to it.
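The "initialization" check described above can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not Sage's actual relocation code: a .pc file counts as initialized once it defines a SAGE_ROOT variable and no other variable hard-codes an absolute path.

```python
def pc_is_initialized(text):
    """Rough check of whether a pkg-config .pc file has been
    "initialized" as described above: it defines SAGE_ROOT, and every
    other variable goes through ${SAGE_ROOT} (or another variable)
    instead of hard-coding an absolute path.  Sketch only."""
    lines = text.splitlines()
    has_var = any(l.startswith("SAGE_ROOT=") for l in lines)
    # Any other 'name=/abs/path' line means a hard-coded path remains.
    hard_coded = any(
        "=" in l
        and not l.startswith("SAGE_ROOT=")
        and l.split("=", 1)[1].startswith("/")
        for l in lines
    )
    return has_var and not hard_coded

good = "SAGE_ROOT=/data/leif/Sage/sage-5.0.beta8\nrhome=${SAGE_ROOT}/local/lib/R\n"
bad = "rhome=/old/sage/local/lib/R\n"
print(pc_is_initialized(good), pc_is_initialized(bad))  # → True False
```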
P.S.: My (slightly redundant but correct) libR.pc looks like this:
SAGE_ROOT=/data/leif/Sage/sage-5.0.beta8
rhome=${SAGE_ROOT}/local/lib/R
rlibdir=${rhome}/lib
rincludedir=${SAGE_ROOT}/local/lib/R/include
Name: libR
Description: R as a library
Version: 2.14.0
Libs: -L${rlibdir} -lR
Cflags: -I${rincludedir} -I${rincludedir}
Libs.private:
>> More importantly, what does your $SAGE_ROOT/local/lib/pkgconfig/libR.pc
>> look like?
>
> The copied version still contains the path to the old version -- it's
> not been updated as it should have been.
Well, the problem is that the .pc file can [currently] only be updated
(i.e., paths adapted, or more precisely, the definition of SAGE_ROOT
changed) if it's previously been "initialized", which should have
happened in your original installation, as mentioned in my previous post.
To cure your installation(s), it should be sufficient to delete
$SAGE_ROOT/local/lib/sage-current-location.txt of your original
installation, run its Sage once, make a fresh copy, and run the copy's
Sage so that this time all hard-coded paths really get updated.
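The first step of that cure can be sketched as follows. The path layout is assumed from this thread; treat this as an illustration of what to delete, not an official Sage tool:

```python
import os
import tempfile

def force_reinitialize(sage_root):
    """Delete sage-current-location.txt so that the next Sage start-up
    re-runs its relocation logic and rewrites the hard-coded paths in
    the .pc files.  Sketch only; path assumed from the thread."""
    marker = os.path.join(sage_root, "local", "lib",
                          "sage-current-location.txt")
    if os.path.exists(marker):
        os.remove(marker)
        return True   # Sage will re-initialize on its next start
    return False      # nothing to do (marker already absent)

# Demo on a throw-away directory mimicking the assumed layout.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "local", "lib"))
open(os.path.join(root, "local", "lib",
                  "sage-current-location.txt"), "w").close()
print(force_reinitialize(root))  # → True
```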
I still wonder why your libR.pc didn't get "initialized" in the first place.
Do the [other] .pc files (except the symlinks) in
$SAGE_ROOT/local/lib/pkgconfig/ all have the same timestamp as
$SAGE_ROOT/local/lib/sage-started.txt?
(And did you build Sage with 'make build' or just 'make'?)
-leif
Best regards,
Alexander
--
Dr. rer. nat. Dipl.-Math. Alexander Dreyer
Abteilung "Systemanalyse, Prognose und Regelung"
Fraunhofer Institut für Techno- und Wirtschaftsmathematik (ITWM)
Fraunhofer-Platz 1
67663 Kaiserslautern
Telefon +49 (0) 631-31600-4318
Fax +49 (0) 631-31600-1099
E-Mail alexande...@itwm.fraunhofer.de
Internet http://www.itwm.fraunhofer.de/sys/dreyer.html
I have no idea why libR.pc has a more recent timestamp than the others! I think I built with "make ptestlong", actually, in order to build and test in one go. I'm doing another test build now, to see if I can replicate the problem consistently.
It's hard to imagine what kind of processor could be so slow that it times out on the kind of tests run in 'interrupt.pyx'. 30 minutes?
Isn't this kind of failure more indicative of a lost interrupt (or something similar)?
Justin
--
Justin C. Walker
Director
Institute for the Enhancement of the Director's Income
--
Fame is fleeting, but obscurity
just drags on and on. F&E
Volker Braun wrote:
> You need to increase the timeout if your processor is this slow (sorry ;-)

I guess you didn't mean this seriously...
If interrupt.pyx takes more than 30 minutes, this means it hangs.
I btw. noticed I get (busy) orphans on timeouts; this is relatively new,
i.e., this had IMHO been solved a while ago.
Best regards,
Andrey
I have seen this before, on a core i7. It went away when I tried
again, and I haven't seen it again, so I didn't think much of it. No,
your CPU is not too slow.
Is the timeout actually reproducible? My guess is no. That file seems
to have some imperfectly designed tests with race conditions that
could cause occasional failures. For example, the function
cdef void infinite_malloc_loop():
    cdef size_t s = 1
    while True:
        sage_free(sage_malloc(s))
        s *= 2
        if (s > 1000000): s = 1
will ignore an interrupt if it is received on either of the last two
lines, I think, and the tests are of the form "run this function and
then interrupt it to make sure that it can be interrupted."
(And this test will also leak memory if it is interrupted after the
malloc() but before the free().)
Actually, I think that something is waiting for something that goes way slower than it should: on different AMD processors, from a quite old Athlon 64 to a quite new Phenom II X6, some tests take forever without doing anything. The worst offender seems to be

sage -t -long sage/sandpiles/sandpile.py

When I run this test, I can see the process in the top output, almost always sleeping. There is also this command

Singular-3-1-3 -t --ticks-per-sec 1000

which seems to be associated with this doctest. So - no CPU activity, no disk activity, but this test takes like 8 minutes and used to take 20 on a 3.2GHz CPU. To get things going I was using -tp 24 (despite only 6 cores), in which case all long tests were done in 20 min. The CPU is not slow...
Best regards,
Andrey
So fermat runs Ubuntu 10.04, as opposed to selmer?
On Wednesday, 21 March 2012 05:49:55 UTC, Andrey Novoseltsev wrote:
I think this is probably orthogonal to Georg's problem in interrupt.pyx, but certain functions that call external packages such as Singular or Gap seem to take forever on certain machines, and sandpile.py is one of the worst offenders. Here at Warwick we have two boxes (fermat and selmer) which are used by the number theory group, which are not that dissimilar in architecture and CPU speed; but the sandpile.py test takes about four times longer on fermat than on selmer, and regularly exceeds the default 360-second timeout for standard (non-long) doctests.
Well, Debian fixed this relatively quickly, while Ubuntu didn't (in the
10.04 series; later releases never had this problem).
I run Ubuntu 10.04.4 LTS with a 2.6.38 kernel, so I no longer have this
problem.
> It was discussed on sage-devel at length, by Simon King and others.
And here as well IIRC.
(The 2.6.38 kernel is not from the .4 release; the latter has 2.6.32.)
@ selmer (older) is running Ubuntu 9.04 with 2.6.28-13-generic kernel
and gcc version 4.3.3 (Ubuntu 4.3.3-5ubuntu4)
@ fermat (less old) is running Ubuntu 10.04.3 LTS with
2.6.32-37-server kernel and gcc version 4.4.3 (Ubuntu
4.4.3-4ubuntu5.1)
Both are rather heavily used, and also physically inaccessible to us
without some hassle, so they do not get their software updates as often
as one might hope for.
John