Huge virtual memory usage


Andrey Novoseltsev

Mar 27, 2016, 3:28:42 PM
to sage-devel
Hello,

I'm struggling with a 1/0 error message while upgrading SageMathCell: instead of a division-by-zero error, I am getting a memory error during introspection. I also get a memory error on

l = [0]*10000

And perhaps it is related to:

novoselt@sage:~/sage$ ./sage
┌────────────────────────────────────────────────────────────────────┐
│ SageMath Version 7.2.beta0, Release Date: 2016-03-24               │
│ Type "notebook()" for the browser-based notebook interface.        │
│ Type "help()" for help.                                            │
└────────────────────────────────────────────────────────────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Warning: this is a prerelease version, and it may be unstable.     ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
sage: get_memory_usage()
21466.96484375

21.5 GB at startup!!! (The machine has 16 GB of RAM.)

On a server with several versions and 128 GB of RAM:
novoselt@sagenb:~$ /var/opt/sage-6.9/sage -c "print get_memory_usage()"
955.32421875
novoselt@sagenb:~$ /var/opt/sage-6.10/sage -c "print get_memory_usage()"
975.36328125
novoselt@sagenb:~$ /var/opt/sage-7.0/sage -c "print get_memory_usage()"
973.3046875
novoselt@sagenb:~$ sage -c "print get_memory_usage()"
48520.28125
(the last one is for 7.1)

I understand that in general virtual memory size does not matter much, but it does matter when there are ulimits on it. And come on: why a 20x (if not 50x) increase?!
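To check what limit a process actually sees, here is a minimal sketch using Python's standard resource module (POSIX-only; note that 'ulimit -v' takes kilobytes while getrlimit returns bytes):

import resource

# RLIMIT_AS is the address-space limit that `ulimit -v` sets.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
if soft == resource.RLIM_INFINITY:
    print("no address-space limit set")
else:
    print("address space capped at %.1f GB" % (soft / 1024.0**3))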

Andrey

Andrey Novoseltsev

Mar 27, 2016, 3:36:03 PM
to sage-devel
This WAS the problem. Such an allocation makes it completely pointless to limit the address space size to prevent accidents.

Ralf Stephan

Mar 28, 2016, 1:11:36 AM
to sage-devel
I get a 3.5x increase going from 6.9 to 7.2.beta0. Not as much as 20x, but still...

Volker Braun

Mar 28, 2016, 3:35:41 AM
to sage-devel
Presumably this is due to #19883

There isn't really any problem here, though. If you implement your own version of malloc, then in some implementations you'll need to reserve about as much virtual memory as you have RAM to do the accounting. It just uses part of your 128 TB of virtual address space.

On a related note, 'ulimit -v' is not a good way to restrict memory usage.
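To illustrate (a sketch, POSIX-only, not from Sage): an address-space reservation counts against 'ulimit -v' even if no page is ever touched, so an allocator that reserves a big arena up front fails while using almost no actual RAM:

import mmap
import resource

# RLIMIT_AS (what `ulimit -v` controls) caps address space, not physical RAM.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (1024**3, hard))  # cap at 1 GB

try:
    # Reserving 2 GB of anonymous address space now fails immediately,
    # even though no page is touched and almost no RAM would be used.
    buf = mmap.mmap(-1, 2 * 1024**3)
except (OSError, mmap.error) as e:
    print("reservation refused: %s" % e)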

Nils Bruin

Mar 28, 2016, 11:54:34 AM
to sage-devel
On Monday, March 28, 2016 at 12:35:41 AM UTC-7, Volker Braun wrote:
Presumably this is due to #19883

There isn't really any problem here, though. If you implement your own version of malloc, then in some implementations you'll need to reserve about as much virtual memory as you have RAM to do the accounting. It just uses part of your 128 TB of virtual address space.

On a related note, 'ulimit -v' is not a good way to restrict memory usage.

It's a simple one to set, though, and in many contexts it protects against silly memory errors (e.g., on a multi-user machine, it helps to kill runaway processes before their memory usage causes excessive thrashing). So I think in practice you'll often meet such limits. With #19883, 1/4 of the virtual address space is allocated for the pari stack. Perhaps we should take any ulimits into account when we compute our defaults?
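For concreteness, a hypothetical sketch of what that could look like (the function name, the 1/4 fraction as a parameter, and the 1 TB fallback are illustrative placeholders, not the actual #19883 code):

import resource

def pari_stack_cap(fraction=0.25, fallback=2**40):
    # Cap the pari stack at a fraction of the address-space ulimit;
    # with no limit set, fall back to a fixed default (1 TB placeholder).
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    if soft == resource.RLIM_INFINITY:
        return fallback
    return int(soft * fraction)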

Andrey Novoseltsev

Mar 28, 2016, 12:03:27 PM
to sage-devel

I didn't try fiddling with ulimits recently, but as I recall -v was the only memory-related one that actually worked; that's the reason I was using it for SageNB and SageMathCell... On a related note, Python's default recursion limit seems to be about 1000 independent of the machine, and those who don't like it can change it to suit their particular huge-memory situation. I think that is more sensible: usually a lot of memory is there not for the use of a single process, but to allow running a lot of things at once.
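For reference, that limit is just:

import sys

print(sys.getrecursionlimit())  # 1000 by default, regardless of machine size
sys.setrecursionlimit(10000)    # those who need deep recursion raise it explicitly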
 

Volker Braun

Mar 28, 2016, 1:15:44 PM
to sage-devel
On Monday, March 28, 2016 at 5:54:34 PM UTC+2, Nils Bruin wrote:
It's a simple one to set, though, and in many contexts it protects against silly memory errors (e.g., on a multi-user machine
 
But it doesn't help when you have multiple processes, as it's a per-process limit only.
 
Perhaps we should take any ulimits into account when we compute our defaults?

We do:

$ sage -c "print get_memory_usage()"
33707.9921875
$ ulimit -v 10000000
$ sage -c "print get_memory_usage()"
3554.1171875
$ ulimit -v 2000000
$ sage -c "print get_memory_usage()"
1600.98046875

William Stein

Mar 28, 2016, 2:16:36 PM
to sage-devel
On Mon, Mar 28, 2016 at 10:15 AM, Volker Braun <vbrau...@gmail.com> wrote:
> On Monday, March 28, 2016 at 5:54:34 PM UTC+2, Nils Bruin wrote:
>>
>> It's a simple one to set, though, and in many contexts it protects against
>> silly memory errors (e.g., on a multi-user machine
>
>
> But it doesn't help when you have multiple processes, as it's a per-process
> limit only.

Another very common thing to ulimit is the number of processes;
e.g., I think the default on Ubuntu is around 1000.

I'm really sad to see that get_memory_usage() is suddenly no longer of
any use in seeing how much memory Sage is using. It used to be
extremely useful as a first check of usage.
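A resident-size number would serve that first check; a minimal Linux-only sketch (reading /proc directly, not an existing Sage function):

def resident_memory_mb():
    # Read this process's resident set size from /proc (Linux-only sketch).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0  # value is reported in kB
    return None

print(resident_memory_mb())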

-- William

>> Perhaps we should take any ulimits into account when we compute our
>> defaults?
>
>
> We do:
>
> $ sage -c "print get_memory_usage()"
> 33707.9921875
> $ ulimit -v 10000000
> $ sage -c "print get_memory_usage()"
> 3554.1171875
> $ ulimit -v 2000000
> $ sage -c "print get_memory_usage()"
> 1600.98046875



--
William (http://wstein.org)

Volker Braun

Mar 28, 2016, 4:29:45 PM
to sage-devel
On Monday, March 28, 2016 at 8:16:36 PM UTC+2, William wrote:
Another very common thing to ulimit is the number of processes;
e.g., I think the default on Ubuntu is around 1000.

Yes, and 1000 * RLIMIT_AS is almost always enough to make things very sloooow...

I'm really sad to see that get_memory_usage() is suddenly no longer of
any use in seeing how much memory Sage is using.  It used to be
extremely useful as a first check of usage.

It has always shown the virtual memory usage, which is almost never useful.

A much better diagnostic would be the unique set size (USS), the memory that is unique to that process. It underestimates because it excludes shared memory, but it is probably the most useful number for finding memory leaks... See e.g. the smem utility (itself written in Python).
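If the third-party psutil package is available (an assumption; it is not part of a stock Sage install), USS can be read directly:

import psutil  # third-party package, assumed installed

info = psutil.Process().memory_full_info()  # may need extra privileges on some OSes
print("USS: %.1f MB" % (info.uss / 1024.0**2))  # memory unique to this process
print("RSS: %.1f MB" % (info.rss / 1024.0**2))  # resident size, includes shared pages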