
Sep 10, 2005, 8:21:34 AM

During this calculation:

m= Table[ (i-j)+3.*j^-4 +50*i , {i,1,10000} , {j,1,10000} ];

Det[m]

the kernel crashed and the following message was displayed:

No more memory available.

Mathematica kernel has shut down.

This is not a trivial problem; I run into issues like this often, and they really make Mathematica unusable for my purposes. In such cases I am forced to switch to Matlab, which has always behaved better. But I love Mathematica and I want to find a solution rather than abandon it.

Let me be clear: I know Mathematica fairly well, so I am not shocked if the computation time grows for problems like this; what I really don't understand is why it actually crashes.

I have raised this problem on numerous occasions and nobody has ever been able to give me a satisfactory explanation. Obviously I don't expect Mathematica to be as fast as a compiled program, but what I do expect from such a capable program is that, if it cannot complete a calculation, it tells me the real reason.

Is it wrong to expect that?

The amount of memory requested is not limited by Windows XP or by my hardware/OS configuration; in fact the same error also occurs on XP 64-bit with Mathematica 5.2 (64-bit) on an Athlon 64 with 2 GB of RAM.

Is this perhaps a serious limitation of Mathematica in managing large data?

Are these problems the true limit of Mathematica?

Is it possible to overcome this obstacle, or where am I going wrong?

Thanks for the help

Sep 10, 2005, 10:47:21 AM

Hi. It looks like, as the size of your matrix gets bigger, the determinant tends towards zero.

However, I can't explain why, with size 100*100 (the first output), it returns "0.", while the larger sizes give numbers that are very small.


In[1]:=

Simplify[(i-j)+3/j^4+50*i]

Out[1]=

51*i+3/j^4-j

In[2]:=

Table[Det[Table[51.*i+3/j^4-j,{i,n},{j,n}]],{n,100,400,100}]

Out[2]=

{0.,

-1.1058552925176574276594077*^-2291,

7.9451068970561274753784538*^-3316,

-2.0828859268407618803830820*^-4315

}

--

Dana DeLouis

Mathematica 5.2

"LumisROB" <lumisrob...@yahoo.com> wrote in message

news:loj5i15en27o6sqff...@4ax.com...

> Is it possible to overcome this obstacle? ..or Where am I being wrong?

>

>

> Thanks for the help

Sep 10, 2005, 12:26:08 PM

Dana wrote:

> Hi. Looks like as the size of your matrix gets bigger, the Determinant

> tends towards zero.

> However, I can't explain why with a size 100*100 (the first output), it

> returns "0." and with the largest sizes, numbers that are very small.

>

> In[1]:=

> Simplify[(i-j)+3/j^4+50*i]

>

> Out[1]=

> 51*i+3/j^4-j

>

> In[2]:=

> Table[Det[Table[51.*i+3/j^4-j,{i,n},{j,n}]],{n,100,400,100}]

>

>

> Out[2]=

> {0.,

> -1.1058552925176574276594077*^-2291,

> 7.9451068970561274753784538*^-3316,

> -2.0828859268407618803830820*^-4315

> }

>

>

>

>

In Mathematica 5.0.0 on an Intel Pentium 4, the corresponding results are:

{-2.3072190778544011494885203792388`15.954589770191005*^-1432,
 1.6138130601828444080943999966955787`15.954589770191005*^-2817,
 -2.17670639763548416057774043151`15.954589770191005*^-4180,
 1.283924316663441664176128821`15.954589770191005*^-5502}

So the result "0.0" is a recent innovation. On my

system, the Precision of the first number is 15.9546

(decimal digits), not MachinePrecision, which makes

one wonder what is going to happen in version 5.3 or 5.4.

Mathematica's numerical model has been criticized by

several people (including me), since Mathematica 1.0.

If you do the calculation EXACTLY by using 51 instead of "51."

the determinant of the 100X100 matrix comes out exactly 0.

In fact, the determinant of the 10x10, 20x20 etc all come out

exactly 0.

The advantage of doing exact arithmetic (and using a CAS

like Mathematica, Maple, Macsyma, etc.) over Matlab is that the results

can often be exactly right. In Matlab the result will

be exactly right only by coincidence.

For example, is there anything special about 51.0, 51.000000000000000, or exactly 51? Well, why not just put in a symbol, like A, instead of 51, and compute the determinant? The 10x10 determinant is exactly zero. I assume that, knowing the answer, you can prove the determinant is zero always.
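The rank argument can be checked numerically. Here is a sketch (Python/NumPy, added for illustration; the thread's own language is Mathematica) using the fact that row i equals 51*i times the all-ones vector plus the fixed vector v with v[j] = 3/j^4 - j, so the rank is at most 2 and the determinant vanishes for every n >= 3:

```python
import numpy as np

def matrix(n):
    # entry (i, j) = 51*i + 3/j^4 - j, with 1-based i and j as in the thread
    i = np.arange(1, n + 1, dtype=float).reshape(-1, 1)
    j = np.arange(1, n + 1, dtype=float).reshape(1, -1)
    return 51.0 * i + 3.0 / j**4 - j

for n in (3, 10, 100):
    m = matrix(n)
    # every row lies in span{ones, v}, so the numerical rank is 2
    # and the determinant is zero (up to rounding) for n >= 3
    print(n, np.linalg.matrix_rank(m), np.linalg.det(m))
```

The same decomposition explains Dana's observation: past n = 2 the exact determinant is identically zero, and the tiny machine-precision values are pure rounding residue.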

Try that in Matlab. Actually, Matlab can be coerced into doing it, but only by calling Maple. A much more versatile system, then, would actually be something like Mathematica, which could, if necessary for efficiency, call Matlab. The Mathematica people claim to be doing that, or maybe they claim to have already done that.

So, for the original poster: your complaint about Mathematica

running out of memory has not been solved, but your reason for

trying that computation is probably no longer compelling.

RJF

Sep 10, 2005, 2:04:13 PM

LumisROB schrieb:

Hi Roberto,

as Paul Abbott and I wrote in comp.soft-sys.math.mathematica, in messages <dfovu8$ffs$1...@smc.vnet.net> (Paul) and <dfov8q$f97$1...@smc.vnet.net> (me), where you posted a very similar matrix with complex entries, the first three columns are not independent. The same is true in this case:

In[1]:=

m = Table[(i - j) + 3/j^4 + 50*i, {i, 1, 10000}, {j, 1, 3}];

Solve[m . {x, y, z} == 0]

Out[2]=

{{x -> (497*z)/1647,

y -> -((2144*z)/1647)}}

If you've got such big matrices, try to play around with similar smaller

ones (like Dana did) using exact numbers, where possible (like Richard

did) and it is often easy to see simple relations (like Peter and Paul

;-) did).

Hope this helps,

Peter

--

Peter Pein, Berlin

GnuPG Key ID: 0xA34C5A82

http://people.freenet.de/Peter_Berlin/

Sep 10, 2005, 2:31:35 PM

Hi. Thanks. I didn't even think about testing the equation exactly. You're right, the Det quickly drops to zero from 3 onward.

I looked at a small table matrix, but I'm not good enough to quickly spot the reason the Det is zero.


equ=Simplify[(i-j)+3/j^4+50*i]

51*i+3/j^4-j

Table[Det[Table[equ,{i,n},{j,n}]],{n,1,10}]

{53, 3111/16, 0, 0, 0, 0, 0, 0, 0, 0}

$Version

"5.2 for Microsoft Windows (June 20, 2005)"

Here's another technique for posting those small numbers: break each number into its mantissa and exponent.

MantissaExponent[Table[Det[Table[equ,{i,n},{j,n}]],{n,100,600,100}]]

{{0.,-307},

{-0.1105855292517657,-2290},

{ 0.7945106897056127,-3315},

{-0.2082885926840762,-4314},

{-0.4993937307336691,-5285},

{-0.1384814122285155,-6265}

}

--

Dana DeLouis

"Richard Fateman" <fat...@cs.berkeley.edu> wrote in message

news:AODUe.5012$wk6....@newssvr11.news.prodigy.com...


Sep 10, 2005, 2:49:45 PM

In exact arithmetic such matrices have only rank 2 for orders n>=2,

so they are obviously singular for n>2. What is the point

of going to n=100000? That matrix will take >10^10*8 bytes if entries

are stored as 64-bit integers. Not many computers have 80 GB of RAM.


Sep 10, 2005, 3:59:38 PM

Oops, sorry "only" n=10000. (Thanks Richard) That should fit on a 1GB

machine.

In any case, the exercise is pointless beyond n=3.

A 10000 x 10000 instance is 9998 times singular.


Sep 11, 2005, 9:15:02 AM

On Sat, 10 Sep 2005 20:04:13 +0200, Peter Pein <pet...@dordos.net>

wrote:


>LumisROB schrieb:

>> During this calculation:

>>

>

>Hi Roberto,

>

>as Paul Abbott and me, we have written in comp.soft-sys.math.mathematica

>in messages <dfovu8$ffs$1...@smc.vnet.net> (Paul) and

><dfov8q$f97$1...@smc.vnet.net> (me), where you posted a very similar matrix

>with complex entries,

>Peter

Hi Peter,

Thanks to you and all the others for your help.

I regret that my replies on comp.soft-sys.math.mathematica have not been published. I have sent them several times but they always get lost. A very strange thing. I will now write those considerations of mine here:

I apologize for the error, which in any case I corrected immediately. The problem is that, as has already happened to me before, when I criticize Mathematica my message does not reach its destination (on comp.soft-sys.math.mathematica). Since I am not a newsgroup expert, can someone tell me why this happens? Who filters the messages? Is it moderated by Wolfram?!

I also sent a complete message in which I said that, in my view, Mathematica may have a serious design defect with respect to the management of large data, but this too was lost.

I will now send this one, and then the other again.

I apologize, therefore, to all the people who helped me with very thorough answers that unfortunately referred to the wrong problem because of my error (the exact value is i, not the complex constant I, even though for the purposes of the calculation nothing changes with I as the iteration variable; the error was in fact due to an unlucky copy-and-paste in which i was turned into I).

My problem is: During this calculation:

m= Table[ (i-j)+3.*j^-4 +50*i , {i,1,10000} , {j,1,10000} ];

Det[m]

the kernel crashed and the following message was displayed:

No more memory available.

Mathematica kernel has shut down.

For the poster Richard: Hi Richard,

Thanks for the answer, but the problem is not the value of the Det (in fact I forced the calculation to be carried out in machine precision) but the interruption of the calculation, which suggests either a bug or, as I believe, very poor design of the numerical engine's handling of large data.

You understand that saying this is quite different from saying: Mathematica is a CAS with the typical merits of arbitrary precision, but it is slow in numerical calculations. The problem is that Wolfram is advertising version 5.2 saying that its only limits are those imposed by 64 bits... but this does not seem true to me. In my calculation the limit, if there is one, is not in the available resources but in a bug or in the design... which they have to explain to me... or I am wrong, and then that calculation should have completed.

This is not a trivial problem; I run into issues like this often, they really make Mathematica unusable for my purposes, and they always occur in big calculations.

In such cases I am forced to switch to Matlab (or a compiled C++ program), which has always behaved better, and this confirms that the problem is not due to a lack of memory, or, if it is, that Mathematica manages memory badly... Or where is my reasoning wrong?

Try to compute the matrix I indicated, which has 10^5 x 10^5 elements. Around 100 GB are needed in total, but Mathematica, even in the presence of such resources, aborts. Matlab does not; why? And mind you, I am not a fan of Matlab... I love Mathematica and I want to find a solution rather than abandon it.

Let me be clear: I know Mathematica fairly well, so I am not shocked if the computation time grows for problems like this; what I really don't understand is why it actually crashes.

I have raised this problem on numerous occasions and nobody has ever been able to give me a satisfactory explanation. Obviously I don't expect Mathematica to be as fast as a compiled program, but what I do expect from such a capable program is that, if it cannot complete a calculation, it tells me the real reason.

Is it wrong to expect that?

The amount of memory requested is not limited by Windows XP or by my hardware/OS configuration; in fact the same error also occurs on XP 64-bit with Mathematica 5.2 (64-bit) on an Athlon 64 with 2 GB of RAM.

Is this perhaps a serious limitation of Mathematica in managing large data?

Are these problems the true limit of Mathematica?

I think that if the usual way of estimating resources cannot tell us whether a calculation will finish (or rather, if all the rules of numerical analysis are distorted), then Wolfram should explain what Mathematica actually does in these kinds of calculations.

Sep 11, 2005, 1:07:25 PM

LumisROB wrote:

<snip>

> when I

> criticize Mathematica my message it doesn't reach destination (on

> comp.soft-sys.math.mathematica). Now since I am not an expert of

> neswgroup someone can say to me ……why this thing happens? who filters

> the messages? Is it moderate by Wolfram !!?


No, Steve Christensen is the moderator.

Maybe your message is not getting through (but you seem to be able to send to this newsgroup!)

You do not know how much memory Mathematica uses up unless you

measure it. It may use several times as much memory as you think

it needs. In some cases it may use O(n^2) memory when you think it

needs only O(n). It may run out of memory on a stack which is allocated

a finite resource. ($RecursionLimit is 256. But the error message

you got is less informative.) Read about ByteCount and Memory Management.
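A side note on the stack point (a Python sketch added for illustration; Mathematica's $RecursionLimit is its own mechanism, but it behaves analogously): a recursion-depth cap can stop a computation even when plenty of heap memory remains free.

```python
import sys

def depth(n=0):
    """Recurse until the interpreter's stack limit stops us and report
    how deep we got; heap memory is barely touched along the way."""
    try:
        return depth(n + 1)
    except RecursionError:
        return n

sys.setrecursionlimit(2000)   # a deliberately small, finite stack budget
print(depth())                # stops near the limit, not for lack of RAM
```

This is the kind of "out of memory" failure that no amount of RAM fixes, which is why the generic error message is so unhelpful.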

If you habitually run out of memory in a computer algebra system,

you will have to find some way to be clever to reduce the size

of the problem, or find a different method.

You may very well have found a bug in Mathematica, but that may

not solve your problem. You may just convert a program that runs

out of memory into one that runs (almost) forever.

RJF

Sep 11, 2005, 3:57:48 PM

Hi Richard,

Thanks for the help. I don't think it is a bug; rather, I think the numerical engine has no great limits in speed, but does in the efficiency of its memory use.

Thanks again

Sep 11, 2005, 5:51:10 PM

IMHO a well implemented data structure for lists of repeating objects

should have a memory overhead of only O(1) for a 1D n-object list,

O(n) for a 2D n x n objects list, O(n^2) for a 3D n x n x n objects

list, and so on.


That was my experience in writing database systems 3 decades

ago. Are those orders of magnitude still considered correct? For

example, what would be the overhead in Lisp for an n x n table

of identical objects?

Sep 11, 2005, 8:16:46 PM

What makes you think that Mathematica is "well implemented"?

consider this program

Test[lim_] := Do[{a = Table[ i, {i, 10^n}];

Print[Timing[a[[1]] = a[[1]] + 1;]]},

{n, 1, lim}]

What this does is it allocates a vector of size 10, 100, 1000, etc.

and then it times the operation of incrementing just one element of the

vector. In a well-implemented system, the time to change one

element of a vector, especially the first element, should be a constant

regardless of the length of the vector. In Mathematica, the time

depends on the length of the vector. In my test, going from n=6 to

n=7, the time jumps by a factor of 100. In going from n=7 to n=8

the time jumps by a factor of 8.

Try Test[8], or if you dare, Test[9].
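For contrast, here is what constant-time element update looks like (a NumPy sketch added for illustration; it demonstrates the expected O(1) behavior in an ordinary flat array and says nothing about why Mathematica differs):

```python
import time
import numpy as np

for n in (10**5, 10**6, 10**7):
    a = np.arange(n, dtype=np.int64)
    t0 = time.perf_counter()
    a[0] += 1                       # in-place write to a single machine word
    dt = time.perf_counter() - t0
    print(f"n={n:>8}: {dt * 1e6:.2f} microseconds")
```

The printed times vary by machine, but they should stay essentially flat as n grows, unlike the timings from the `Test` function above.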

In Lisp, normally implemented arrays of size n x m of identical objects (i.e. pointers to the same object) should be of size proportional to n*m.

I do not know if the language definition requires this, however.

RJF

Sep 12, 2005, 11:13:53 AM

On Sun, 11 Sep 2005 17:07:25 GMT, Richard Fateman

<fat...@cs.berkeley.edu> wrote:


>LumisROB wrote:

><snip>

.....

>

>You may very well have found a bug in Mathematica, but that may

>not solve your problem. You may just convert a program that runs

>out of memory into one that runs (almost) forever.

>

>RJF

Hi Richard,

Thanks for the help. You have truly understood my problem: namely, the impossibility, with Mathematica, of foreseeing whether a numerical problem can be completed by making a simple rough estimate of RAM and virtual-memory usage (I often use this approach in borderline calculations when I use Matlab or C). So I give up on large numerical calculations (I treat structural-engineering problems with FEM) in Mathematica. Unfortunately, some interaction with a program like Mathematica would often have been really useful to me.

Heartfelt thanks to everybody,

Roberto

Sep 12, 2005, 12:58:33 PM

My guess (without proof) is that the internal Mathematica data

structures are

a combination of linked-list and paging. That would explain the

erratic timings.


Sep 12, 2005, 1:32:13 PM (to car...@colorado.edu)

The primary reason is that when you change an element in a

vector, Mathematica believes it must re-evaluate all the

elements of the vector. This means that an assignment to

a vector of length n is not O(1), but O(n), even if only one element

changes. Of course paging can also add to the expense

for large vectors.

RJF
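Restated as a sketch (plain Python, purely illustrative of the complexity claim, not of Mathematica's actual internals): if the container's semantics force every element to be revisited on each assignment, a one-element update costs O(n); a true in-place write costs O(1). Both produce the same result; only the cost differs.

```python
def update_with_revisit(lst, i, val):
    # semantics that re-examine every element on each assignment: O(n) work
    return [val if k == i else x for k, x in enumerate(lst)]

def update_in_place(lst, i, val):
    # touches a single slot: O(1) work
    lst[i] = val
    return lst

data = list(range(10))
print(update_with_revisit(data, 0, 99)[:3])
print(update_in_place(data, 0, 99)[:3])
```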

Sep 12, 2005, 2:52:08 PM

This is primarily a first-time hit. Subsequent alterations do not seem

to incur this penalty. Indeed, one can modify all of the elements in

O(n). The test below illustrates.

test[lim_] := Do[

a = Range[10^n];

Print[n];

tt1 = Timing[a[[44]]++;];

tt2 = Timing[a[[1]]++;];

tt3 = Timing[a[[10^(n-1)-2]]++;];

tt4 = Timing[a+=4];

Print[Chop[Map[First,{tt1,tt2,tt3,tt4}]]/Second];

,{n, 5, lim}]

In[2]:= test[8]

5

{0.001, 0, 0, 0.001}

6

{0.011998, 0, 0, 0.020997}

7

{0.088987, 0, 0, 0.125981}

8

{0.679896, 0, 0, 1.12283}

The reevaluation semantics used in Mathematica are admittedly not always optimal. But they are also, in this example, not as bad as one might guess based only on knowing the first-time behavior.

Daniel Lichtblau

Wolfram Research

Sep 12, 2005, 3:41:02 PM

A linked list with no heap overflow area would lead to that kind of behavior. But it is erratic, so there must be another factor.

Sep 12, 2005, 6:04:03 PM

Hi Carlos

These problems are all predictable if you study the Mathematica kernel in depth. What is mysterious and incomprehensible is why Mathematica, on numerical problems that should be solvable, aborts for lack of memory.

Cheers

Roberto

Sep 12, 2005, 6:23:06 PM

On 12 Sep 2005 11:52:08 -0700, "Daniel Lichtblau" <da...@wolfram.com>

wrote:


>

>

>In[2]:= test[8]

>5

>{0.001, 0, 0, 0.001}

>6

>{0.011998, 0, 0, 0.020997}

>7

>{0.088987, 0, 0, 0.125981}

>8

>{0.679896, 0, 0, 1.12283}

>

>This reevaluation semantics used in Mathematica is admittedly not

>always optimal. But it is also, in this example, not so bad as one

>might guess based only on knowing the first time behavior.

>

>Daniel Lichtblau

>Wolfram Research

OK, I have understood this concept. This behavior is predictable once you know how the Mathematica kernel is implemented.

The true problem is: given a computer, an OS (32- or 64-bit), and Mathematica (32- or 64-bit), is it possible to foresee in advance what size of a given problem I can solve?

With Matlab or Maple or C I have always managed to make this estimate in a trivial way (I work on problems treated with the finite element method). With Mathematica I cannot, and nobody gives me a practical example.

For instance, why does it not succeed in calculating

m = Table[(i - j) + 3./j^4 + 50*i, {i, 1, 10^5}, {j, 1, 10^5}];

m[[1]].m[[1]]

ByteCount[1.0] = 16, so if I go to compute this matrix, logic suggests 10^5 * 10^5 * 16 bytes * 10^-9 = 160 GB.

On my hardware this problem is interrupted for lack of memory (as usual). With Matlab or Maple on the same hardware it is solved (OK, with exasperating slowness, but they solve it).

Matlab (or a compiled program) performs this calculation because, I believe, the data that must be in memory at any one time do not exceed the available RAM, and virtual memory handles the rest.

Why does nobody respond to this question? Is the error I am making perhaps so trivial that it does not even deserve an answer?

Cheers

Roberto

Sep 12, 2005, 10:43:38 PM

> [...]

I can respond to some of this; other parts I simply do not understand.

First, the matrix you create takes up a considerable amount of space.

If it is an array of machine doubles one would expect it to occupy

10^4*10^4*8 bytes. But let's have a quick look with a smaller version

of the matrix.

In[4]:=

tt[n_]:=Table[(i-j)+3./j^4+50*i,{i,n},{j,n}]

In[10]:=

t200=tt[200];

b200=ByteCount[t200];

b200/200.^2

Out[12]= 8.0015

Fine so far. Let's go ever so slightly larger, to 216 rows.

In[13]:=

t216=tt[216];

b216=ByteCount[t216];

b216/216.^2

Out[15]= 20.1301

Okay...we're now using over 20 bytes per entry on average. What

happened? Apparently we no longer get a packed array (One might verify

this using the predicate Developer`PackedArrayQ). This is quite likely

a bug and in any case we'll investigate further. Right now one can work

around this problem by explicit conversion of inputs to machine

doubles.

ttpack[n_] := Table[N[(i - j)] + 3./N[j]^4 + N[50*i], {i, n}, {j, n}]

(One might wonder where and why the breakdown occurs. It happens when

we go from 215 to 216 rows. Packed arrays can handle 2^31 elements in

total, and we are crossing the threshold of 2^(31/2) elements here,

which I'm sure will figure into the diagnosis.)

Next question is what happens if you do obtain a packed array, and then

try to take the determinant. There are a few ways in which one can run

out of memory. One is that most 32 bit Mathematica platforms have a 2

Gb limit on total space. The determinant computation will require at

least one full sized copy of the matrix (using the Lapack functions

appropriate for computing such things). I'm not certain that only one

copy is utilized; there may be a need for more workspace. Also the

800,000,000 byte matrices each need contiguous storage, which may or

may not be available. Also Mathematica itself occupies space. The

upshot is it is not hard to see memory being an issue.

As of version 5.2 there is support for 64-bit platforms, with greater

memory capabilities. We have confirmed that on 64 bit systems with

sufficient RAM the example above can be handled (tested thus far on a

Linux and a Windows machine). Specifically, one can allocate the matrix

and compute its (possibly ill conditioned) determinant.

I'll mention that we tried creating a 8000 x 8000 matrix of random

reals in one of the other languages you indicate, in order to test the

claim that it can handle matrices in that size range. We got an

out-of-memory message. It is not clear to me what exactly you do to get

different behavior. Quite likely this, too, hinges on configuration

specifics such as available RAM.

For the record, whatever you may think you did, you did not do it on a

desktop machine with a matrix occupying 160 Gb of memory. Maybe you

meant 1.6 Gb?

Daniel Lichtblau

Wolfram Research

Sep 13, 2005, 11:37:48 AM

On 12 Sep 2005 19:43:38 -0700, "Daniel Lichtblau" <da...@wolfram.com>

wrote:


>

>(One might wonder where and why the breakdown occurs. It happens when

>we go from 215 to 216 rows. Packed arrays can handle 2^31 elements in

>total, and we are crossing the threshhold of 2^(31/2) elements here,

>which I'm sure will figure into the diagnosis.)

Does this packed-array limit apply to Mathematica 5.2?

Where can accurate information on the limits of packed arrays, etc., be found? If it is in the documentation and I have not found the time to read it, I apologize. Otherwise, can you point me to where I can study the Mathematica kernel in depth? I am sure none of this exists in the books. I need it because I would like to write an FEM program that performs heavy numerical computations.

>Next question is what happens if you do obtain a packed array, and then

>try to take the determinant. There are a few ways in which one can run

>out of memory. One is that most 32 bit Mathematica platforms have a 2

>Gb limit on total space.

For exactly this reason I was not surprised when the calculation was aborted on Mathematica 5.1 + XP (32-bit). I was surprised, however, when it was aborted on Mathematica 5.2 + WinXP (64-bit), with 2 GB RAM and a 400 GB HD.

> The determinant computation will require at

>least one full sized copy of the matrix (using the Lapack functions

>appropriate for computing such things). I'm not certain that only one

>copy is utilized; there may be a need for more workspace. Also the

>800,000,000 byte matrices each need contiguous storage, which may or

>may not be available. Also Mathematica itself occupies space. The

>upshot is it is not hard to see memory being an issue.

>

>As of version 5.2 there is support for 64-bit platforms, with greater

>memory capabilities. We have confirmed that on 64 bit systems with

>sufficient RAM the example above can be handled (tested thus far on a

>Linux and a Windows machine). Specifically, one can allocate the matrix

>and compute its (possibly ill conditioned) determinant.

Could you tell me how much RAM would be necessary to create a matrix (if it is possible) of 10^5 x 10^5 elements and to perform a single product (BLAS 1) of the type m[[i]].m[[j]]? (Mathematica 5.2 + XP (64) or SuSE Linux 64)

>I'll mention that we tried creating a 8000 x 8000 matrix of random

>reals in one of the other languages you indicate, in order to test the

>claim that it can handle matrices in that size range. We got an

>out-of-memory message. It is not clear to me what exactly you do to get

>different behavior. Quite likely this, too, hinges on configuration

>specifics such as available RAM.

Personally, I have had positive results using GNU gcc + Ubuntu Linux 5.04 (64-bit) + ACML (AMD BLAS+LAPACK), with 2 GB RAM, a 400 GB HD, and an Athlon 64.

>

>

>For the record, whatever you may think you did, you did not do it on a

>desktop machine with a matrix occupying 160 Gb of memory. Maybe you

>meant 1.6 Gb?

Note that in this case I clearly specified that I only create the matrix and do not compute the determinant.

In any case, heartfelt thanks. I have finally understood the true essence of the problem. I believe that without a good understanding of how the packed-array technology is implemented, it is useless to try to make analytical estimates of the memory used during the calculation.

Cheers

Rob

Sep 13, 2005, 2:45:35 PM

LumisROB wrote:

> On 12 Sep 2005 19:43:38 -0700, "Daniel Lichtblau" <da...@wolfram.com>

> wrote:

>

> >

> >(One might wonder where and why the breakdown occurs. It happens when

> >we go from 215 to 216 rows. Packed arrays can handle 2^31 elements in

> >total, and we are crossing the threshhold of 2^(31/2) elements here,

> >which I'm sure will figure into the diagnosis.)

>

> Is this limit of the packedArrays worth for Mathem 5.2?

Actually I found out it is correct behavior. There is an overflow of

machine integer arithmetic in computing j^4 when j>=216. Using explicit

conversion of j to a double (via N[]) is the appropriate thing to do in

order to avoid this pitfall.
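The arithmetic behind that threshold is easy to check (a sketch; Python integers are unbounded, so the 32-bit limit is compared explicitly): 215^4 still fits in a signed 32-bit machine integer, while 216^4 exceeds it, which is why j^4 leaves the machine-integer range at j = 216.

```python
INT32_MAX = 2**31 - 1                # largest signed 32-bit machine integer

print(215**4, 215**4 <= INT32_MAX)   # fits in a machine integer
print(216**4, 216**4 <= INT32_MAX)   # would overflow a machine integer
```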

> Where these accurate information can be retrieved on the limits of the

> packeds array ...etc

As of version 5.2 the maximum number of elements in a packed array is,

I believe, 2^30 (my claim of 2^31 was probably mistaken). To the best

of my knowledge this is not documented.

> [...]

> >Next question is what happens if you do obtain a packed array, and then

> >try to take the determinant. There are a few ways in which one can run

> >out of memory. One is that most 32 bit Mathematica platforms have a 2

> >Gb limit on total space.

>

> Just for this motive I have not marveled when the calculation is

> arrested on Mathematica 5.1 + XP(32Bit)

> I have marveled instead when the calculation is arrested on

> Mathematica 5.2 + WinXP(64Bit) 2GB Ram 400GB HD

As I noted, we are able to do the computation on 64 bit platforms. One

thing to note: I am told that if you are using a beta version of 64-bit

WinXP then your Mathematica installation will most likely be for the

32-bit version.

> > [...]

> >As of version 5.2 there is support for 64-bit platforms, with greater

> >memory capabilities. We have confirmed that on 64 bit systems with

> >sufficient RAM the example above can be handled (tested thus far on a

> >Linux and a Windows machine). Specifically, one can allocate the matrix

> >and compute its (possibly ill conditioned) determinant.

>

> Could I know how much RAM could be necessary to create a matrix (if it

> is possible ) of 10^5 * 10^5 elements and to operate one single

> products (Blas1) of the type m[[i]]. m[[j]] ? (Mathem 5.2 + XP(64) or

> Linux Suse 64)

That would be 10^10 (10 billion) elements. Mathematica will not support

matrices of that size. It does not matter how much RAM you have.

> >I'll mention that we tried creating a 8000 x 8000 matrix of random

> >reals in one of the other languages you indicate, in order to test the

> >claim that it can handle matrices in that size range. We got an

> >out-of-memory message. It is not clear to me what exactly you do to get

> >different behavior. Quite likely this, too, hinges on configuration

> >specifics such as available RAM.

>

>

> Personally I have experimented in positive sense using Gnu gcc +

> UbuntuLinux 5.04 (64Bit) + ACML (amd) (Blas+Lapack) 2GB RAM+ 400GB HD

> +Athlon64

We tested one of the Ma... languages you had mentioned. We were unable

to verify your claim that it handled a 10^4 x 10^4 matrix determinant

in any way other than to run out of memory.

> >For the record, whatever you may think you did, you did not do it on a

> >desktop machine with a matrix occupying 160 Gb of memory. Maybe you

> >meant 1.6 Gb?

>

> Attention in this case I have well specified that I create only the

> matrix and I don't calculate the determinant

(1) We were unable to create a random matrix of dimension 11000 x 11000

in the language we tried. Let alone 10^5 x 10^5.

(2) Why do you specifically fault Mathematica on memory capacity for

failing to find the determinant, if Matlab and Maple cannot find it

either? You rather strongly imply that the memory restriction you

encounter is specific to Mathematica. As best we can verify, this is

not correct. At least one of those others, on our installation, got

swamped even earlier (below 8000 x 8000).

On some 32-bit platforms Mathematica will compute the determinant of a

8000 x 8000 matrix, and on others it will run out of memory. So I would

suspect there are similar memory restrictions between languages in

regard to this type of computation, at least on 32 bit platforms.

In[1]:= mat = Table[Random[], {8000}, {8000}];

In[2]:= Timing[dd = Det[mat]]

Out[2]= {69.79 Second, 2.731952583914477*10^9561}

> However, heartfelt thanks. I have finally understood the true essence of

> the problem. I believe that without a good understanding of how the

> packed-array technology is implemented, it is useless to try to make

> analytical estimates of the memory used during a calculation

>

>

> Cheers

> Rob

It is not terribly difficult to make memory estimates involving packed

arrays in Mathematica. The only information not documented is the total

array-size limit of 2^30 elements. You certainly know that, when packed, they

occupy 8 bytes each for machine doubles. You can assume that a

workspace copy will be required for matrix manipulations that require

level 3 BLAS. There might be other considerations related to your

machine, installation, availability of contiguous memory, etc. But most

such issues will not be specific to Mathematica.

Daniel Lichtblau

Wolfram Research
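A rough version of that estimate (illustrative Python, not from the original post; it assumes 8 bytes per packed machine double plus one full workspace copy for a level 3 BLAS operation):

```python
# Peak memory for a dense n x n operation (e.g. Det via LU) that
# needs one workspace copy of the input matrix.

def estimate_mb(n, copies=2, bytes_per_element=8):
    """n x n doubles, times the number of simultaneous copies, in MiB."""
    return n * n * bytes_per_element * copies / 2**20

print(round(estimate_mb(8000)))   # 977 MiB peak for the 8000 x 8000 case
print(round(estimate_mb(10000)))  # 1526 MiB: already tight in a 2 GB address space
```

This is consistent with the behavior reported in the thread: 8000 x 8000 sometimes succeeds on 32-bit systems, while 10^4 x 10^4 runs out of memory.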

Sep 13, 2005, 4:34:24 PM9/13/05

to

In article <4324CB72...@cs.berkeley.edu>, Richard Fateman

<fat...@cs.berkeley.edu> wrote:


> car...@colorado.edu wrote:

> > IMHO a well implemented data structure for lists of repeating objects

> > should have a memory overhead of only O(1) for a 1D n-object list,

> > O(n) for a 2D n x n objects list, O(n^2) for a 3D n x n x n objects

> > list, and so on.

> >

> > That was my experience in writing database systems 3 decades

> > ago. Are those orders of magnitude still considered correct? For

> > example, what would be the overhead in Lisp for a n x n table

> > of identical objects?

> >

>

> What makes you think that Mathematica is "well implemented"?

>

> consider this program

> Test[lim_] := Do[{a = Table[i, {i, 10^n}];

>     Print[Timing[a[[1]] = a[[1]] + 1;]]},

>   {n, 1, lim}]

>

> What this does is it allocates a vector of size 10, 100, 1000, etc.

> and then it times the operation of incrementing just one element of the

> vector. In a well-implemented system, the time to change one

> element of a vector, especially the first element, should be a constant

> regardless of the length of the vector. In Mathematica, the time

> depends on the length of the vector...

This can be infuriating. I remember writing a database program in UCSD

Pascal thirty years ago, only to discover that the time UCSD took to

look up a record in the database depended LINEARLY on the record number

n. Why? Because, to look up byte number n*sizeof(record), it ADDED

sizeof(record) to itself n times, instead of multiplying n times

sizeof(record). Completely useless. I had to write my own file

routines.

--Ron Bruck

Sep 13, 2005, 5:35:37 PM9/13/05

to

I agree; the kernel is well overdue for a thorough

rewrite. It should facilitate numerical mathematics with

zero-overhead MD arrays as primitive (atomic

types) and direct access to LAPACK and PETSc.

But I cannot "fund" the study. Sorry, I don't have

that kind of money.


Sep 13, 2005, 6:11:27 PM9/13/05

to

car...@colorado.edu wrote:

> I agree; the kernel is well overdue for a thorough

> rewrite. It should facilitate numerical mathematics with

> zero-overhead MD arrays as primitive (atomic

> types)

What exactly do you think a packed array is in Mathematica?

> and direct access to LAPACK and PETSc.

> But I cannot "fund" the study. Sorry, I don't have

> that kind of money.

A considerable amount of interface to Lapack and BLAS is there but not

documented. To get some idea of the functions involved, have a look at

??LinearAlgebra`*

and

??LinearAlgebra`*`*

None of this has anything much to do with "a full rewrite" of the

Mathematica kernel, whatever that might mean.

Daniel Lichtblau

Wolfram Research

Sep 13, 2005, 7:17:18 PM9/13/05

to

On 13 Sep 2005 11:45:35 -0700, "Daniel Lichtblau" <da...@wolfram.com>

wrote:


>>

>> For just this reason I was not surprised when the calculation was

>> halted on Mathematica 5.1 + XP (32 bit).

>> I was surprised instead when the calculation was halted on

>> Mathematica 5.2 + WinXP (64 bit), 2 GB RAM, 400 GB HD

>

>As I noted, we are able to do the computation on 64 bit platforms. One

>thing to note: I am told that if you are using a beta version of 64-bit

>WinXP then your Mathematica installation will most likely be for the

>32-bit version.

>

Does the trial version of Mathematica 5.2 that you distribute have limitations?

How do I find out whether the installation of Mathematica 5.2 on Win XP

64 is really running in 64-bit mode?

>>

>>

>> Personally I have had positive experiences using GNU gcc +

>> Ubuntu Linux 5.04 (64 bit) + ACML (AMD) (BLAS+LAPACK), 2 GB RAM + 400 GB HD

>> + Athlon 64

>

>We tested one of the Ma... languages you had mentioned. We were unable

>to verify your claim that it handled a 10^4 x 10^4 matrix determinant

>in any way other than to run out of memory.

>

>

>

>(1) We were unable to create a random matrix of dimension 11000 x 11000

>in the language we tried. Let alone 10^5 x 10^5.

>

>(2) Why do you specifically fault Mathematica on memory capacity for

>failing to find the determinant, if Matlab and Maple cannot find it

>either? You rather strongly imply that the memory restriction you

>encounter is specific to Mathematica. As best we can verify, this is

>not correct. At least one of those others, on our installation, got

>swamped even earlier (below 8000 x 8000).

>

I never said this; I never said that I had solved the calculation of

the determinant with Matlab, and I have pointed this out several

times. What I said is that on many occasions in which a numerical

calculation was interrupted in Mathematica, I succeeded in solving the

problem in Matlab or C.

However, that is not the issue. I have said several times that I

love Mathematica, and precisely for this reason I wanted to look for

remedies so that I can use it in depth

>

>It is not terribly difficult to make memory estimates involving packed

>arrays in Mathematica. The only information not documented is total

>array size of 2^30 elements. You certainly know that, when packed, they

>occupy 8 bytes each for machine doubles. You can assume that a

>workspace copy will be required for matrix manipulations that require

>level 3 BLAS. There might be other considerations related to your

>machine, installation, availability of contiguous memory, etc. But most

>such issues will not be specific to Mathematica.

>

>

Many thanks; as soon as the time at my disposal allows me to

study these matters in depth and apply these gems of advice, I will

certainly do it.

Many thanks for the precious help

Roberto

Sep 13, 2005, 8:15:52 PM9/13/05

to

By "rewrite" I mean *a* kernel for computational mathematics,

and nothing else. Call it a toolbox if you want.

Sort of a "reverse Matlab": in Matlab, symbolic services

form a separate toolbox. Here the

computational kernel should be largely a bridge to public

domain libraries, of which there are several excellent ones.


The symbolic kernel should be separate from the computational

one, with each interacting with the front end. One Cell property

would be the appropriate kernel on which it executes.

Interweaving high-performance numerics and symbolics in the

same pot risks the danger of "bug instability": fixing a bug

introduces 10 more. Besides, the appropriate data structures are

different. Uncontrollable complexity has plagued many projects:

eventually the whole programming corps is busy just chasing bugs.

Sep 13, 2005, 8:52:18 PM9/13/05

to

Daniel wrote:

>>

>

>

My apologies; let me give you an example that, on my hardware-software

configuration (WinXP 32-bit + 2 GB RAM), makes Mathematica fail while

it is solved by another tool (whose name I will not give,

because it would look as if I wanted to advertise other tools, while my

only purpose is to understand the numerical-calculation engine of

Mathematica)

m = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

Norm[m, Infinity]

No more memory available.

Mathematica kernel has shut down.

I created this one just now because, as I have often told you, this has

happened to me in the past, and therefore I think I can create

many more of them. This last problem, as you can easily verify,

does not hit limits in Lapack (or other numerical libraries), in the

hardware, or in the OS (32 bit) that I use.

However, thanks for your help. It has been decisive in clarifying

some concepts for me, even if the situation is still not entirely clear;

I will think it over a little

Kindest regards

Roberto

Sep 14, 2005, 11:23:09 AM9/14/05

to

car...@colorado.edu wrote:

> By "rewrite" I mean *a* kernel for computational mathematics,

> and nothing else. Call it a toolbox if you want.

> Sort of a "reverse Matlab": in Matlab, symbolic services

> form a separate toolbox. Here the

> computational kernel should be largely a bridge to public

> domain libraries, of which there are several excellent ones.

Thank you for clarifying what you had in mind.

> The symbolic kernel should be separate from the computational

> one w/each interacting with the front end. One Cell property

> would be the appropriate kernel it will execute.

I fail to see how this strategy would be useful, if the main purpose is

to allow symbolic and numeric computations.

I certainly understand that various libraries are powerful and useful.

But some, e.g. for dense and sparse linear algebra, are already included

in the distribution of Mathematica, and others might be accessed by

external calls, e.g. via MathLink.

One necessity is that there be adequate support for needed underlying

data structures, which differ between various libraries (for good

reason). This is in general supported by the capability of importing

from and exporting to various formats, in particular some sparse matrix

formats. Certainly there may be important ones (from the point of view

of specific purposes) missing. But supporting more formats is most

likely unrelated to what you have in mind. So I would still raise the

question of specifically what and how a "numerical kernel" strategy

that relies on arbitrary (or even specific) external libraries would

do, or how it would interact with the rest of Mathematica in terms of

passing back results.

If what you want is incorporation of more external libraries in the

Mathematica distribution, that is another matter. From what you wrote I

am assuming this is not really what you have in mind.

> Interweaving high performance numerics and symbolics in the

> same pot risks the danger of "bug instability": fixing a bug

> introduces 10 more. Beside the appropriate data structures are

> different. Uncontrollable complexity has bothered many projects:

> eventually the whole programming corps is busy just chasing bugs.

I find that to be often the case due to various types of "code

fragility". In particular it can arise when one function relies on

results from many others in such a way that small, defensible changes

down below lead to worsened behavior in the caller. Symbolic

integration appears to be particularly vulnerable to this phenomenon.

What is not so clear to me is how this sort of fragility problem, if I

correctly understand you, might particularly afflict either numeric or

symbolic computation simply from their coexistence in the same general

framework. Possibly if you could give an example or two...?

One thing I will note is there is a slowly growing area of "hybrid

symbolic-numeric computation" wherein methods and algorithms from one

field get applied to the other. This interplay provides some incentive

for keeping the two regimes yoked. I'm not sure this conflicts with

what you advocate, but it seems as though it might.

Let me make a general remark or two about what I think you are

proposing. Dedicated data structures certainly have their place. For

example, the SparseArray objects introduced in version 5 of Mathematica

were intended as providing reasonable means to do fast sparse linear

algebra. But certainly they will not provide sufficient flexibility to

accommodate any and all external library needs, which again implies the

need for import/export capabilities. On the flip side, there is not

always a good reason to form special data structures when a general one

will suffice. I think packed arrays, which are just efficiently

allocated/addressed tensor structures, provide an example of this in

that they are regarded just as lists (and basically indistinguishable

therefrom) in Mathematica.

By the way, I agree with the general sentiment that Mathematica is not

amenable to implementation and effective use of arbitrary data

structures. (Such can be done, but one has to know the magic

incantations...). I find that this sort of shortcoming can be an issue

when dealing with various algorithms from computer science, for

example. It can impede Mathematica efficiency as well in the numeric

realm, when, for example, use of Compile is precluded by a need for

non-tensorial data structures. But again I do not see how your

proposal, as I understand it, would address any of this.

Daniel Lichtblau

Wolfram Research

Sep 14, 2005, 11:31:43 AM9/14/05

to

LumisROB wrote:

> Daniel wrote:

>

> My apologies; let me give you an example that, on my hardware-software

> configuration (WinXP 32-bit + 2 GB RAM), makes Mathematica fail while

> it is solved by another tool (whose name I will not give,

> because it would look as if I wanted to advertise other tools, while my

> only purpose is to understand the numerical-calculation engine of

> Mathematica)

>

>

> m = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

> Norm[m, Infinity]

>

> No more memory available.

> Mathematica kernel has shut down.

I suspect this is because the j^4 overflows machine integer arithmetic,

causing the matrix not to be packed. When I modify this slightly to use

only machine doubles (by computing N[j]^4 instead of j^4) I get a

result.

In[17]:= mat = Table[i - j + 3./N[j]^4 + 50*i, {i, 10^4}, {j, 10^4}];

In[18]:=Developer`PackedArrayQ[mat]

Out[18]= True

In[19]:= ByteCount[mat]

Out[19]= 800000060

In[20]:= Norm[mat, Infinity]

Out[20]= 5.05*10^9

I would guess this is small enough to work with as a packed array but a

bit too large to handle in Mathematica, at least on 32-bit

installations, when not explicitly packed.

> [...]

> Kindest regards

>

> Roberto

Daniel Lichtblau

Wolfram Research
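That overflow explanation can be checked numerically (illustrative Python; Python integers are unbounded, so the 32-bit machine-integer bound is compared explicitly):

```python
# For j up to 10^4, j^4 reaches 10^16, far beyond a signed 32-bit
# machine integer (max 2^31 - 1), so a 32-bit kernel cannot keep the
# intermediate value as a machine integer and the array stays unpacked.
INT32_MAX = 2**31 - 1

print((10**4)**4)                # 10000000000000000
print((10**4)**4 > INT32_MAX)    # True

# Smallest j whose fourth power already overflows 32-bit integers:
j_min = next(j for j in range(1, 10**4 + 1) if j**4 > INT32_MAX)
print(j_min)                     # 216 (215^4 = 2136750625 still fits)
```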

Sep 14, 2005, 5:58:09 PM9/14/05

to

On 14 Sep 2005 08:31:43 -0700, "Daniel Lichtblau" <da...@wolfram.com>

wrote:


>

>LumisROB wrote:

>> Daniel wrote:

>>

>> My apologies; let me give you an example that, on my hardware-software

>> configuration (WinXP 32-bit + 2 GB RAM), makes Mathematica fail while

>> it is solved by another tool (whose name I will not give,

>> because it would look as if I wanted to advertise other tools, while my

>> only purpose is to understand the numerical-calculation engine of

>> Mathematica)

>>

>>

>> m = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

>> Norm[m, Infinity]

>>

>> No more memory available.

>> Mathematica kernel has shut down.

>

>I suspect this is because the j^4 overflows machine integer arithmetic,

>causing the matrix not to be packed. When I modify this slightly to use

>only machine doubles (by computing N[j]^4 instead of j^4) I get a

>result.

>

>

>

>Daniel Lichtblau

>Wolfram Research

Indeed, this explanation accounts for a lot of the anomalies that I found

only in numerical calculations. I need to know a bit more about

packed arrays. Where can I look? Could you point out a text that

explains the relationships among RAM, virtual memory, paging, low-level

data flow, et cetera, at a general level that can give an overall picture?

Cheers

Roberto

Sep 15, 2005, 4:01:14 PM9/15/05

to

On 14 Sep 2005 08:31:43 -0700, "Daniel Lichtblau" <da...@wolfram.com>

wrote:


I have verified the things that you have written me, let's see:

>It is not terribly difficult to make memory estimates involving packed

>arrays in Mathematica. The only information not documented is total

>array size of 2^30 elements. You certainly know that, when packed, they

>occupy 8 bytes each for machine doubles. .

>Daniel Lichtblau

>Wolfram Research

Sorry, this is not always true, as I have verified on several occasions.

The created object is a packed array, but the amount by which the

memory increases does not correspond to what you say. It is trivial to

make an example (build my matrix with a smaller dimension and you

will see)

>You can assume that a

>workspace copy will be required for matrix manipulations that require

>level 3 BLAS. There might be other considerations related to your

>machine, installation, availability of contiguous memory, etc. But most

>such issues will not be specific to Mathematica

Sorry, this occupation of memory is needed not only for operations

of the BLAS3 type.

That much would be logical. In fact, if I build a random matrix and then

build another one without deleting the first, Mathematica

requires that both reside in RAM even though I perform no

calculations that involve both. It seems that virtual memory is

not exploited. Why doesn't Mathematica move the first matrix into

virtual memory to make room for the creation of the second?

I still do not understand how Mathematica actually

manages memory, and the manual does not help one dig deeper

Cheers

Roberto

Sep 15, 2005, 5:19:32 PM9/15/05

to

LumisROB wrote:

> On 14 Sep 2005 08:31:43 -0700, "Daniel Lichtblau" <da...@wolfram.com>

> wrote:

>

> I have verified the things that you have written me, let's see:

>

>

> >It is not terribly difficult to make memory estimates involving packed

> >arrays in Mathematica. The only information not documented is total

> >array size of 2^30 elements. You certainly know that, when packed, they

> >occupy 8 bytes each for machine doubles. .

> >Daniel Lichtblau

> >Wolfram Research

>

> Sorry, this is not always true, as I have verified on several occasions.

> The created object is a packed array, but the amount by which the

> memory increases does not correspond to what you say. It is trivial to

> make an example (build my matrix with a smaller dimension and you

> will see)

Show me an example of code that

(1) Creates a matrix of machine doubles in Mathematica, such that

(2) The matrix is packed, according to Developer`PackedArrayQ, and

(3) The space occupied by the matrix, according to ByteCount, deviates

notably from 8 x total number of elements.

I have seen no indication of any matrix created in Mathematica and

satisfying these three considerations. What I have seen are matrices

with byte count approximately 20 x number of elements, but they were

not packed (I explained this once or twice).
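The packed-versus-unpacked gap (8 bytes per element packed, roughly 20 per element otherwise) has an analogue in other environments (illustrative Python using only the standard library; exact per-object sizes vary by interpreter and platform):

```python
import array
import sys

# Packed: a contiguous buffer of machine doubles, 8 bytes each.
packed = array.array('d', [0.0] * 1000)
print(packed.itemsize)                  # 8

# Unpacked: a list of boxed float objects; each costs the object
# header plus an 8-byte pointer slot in the list.
boxed = [float(i) for i in range(1000)]
per_element = sys.getsizeof(boxed[0]) + 8
print(per_element)                      # around 32 on 64-bit CPython
```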

> >You can assume that a

> >workspace copy will be required for matrix manipulations that require

> >level 3 BLAS. There might be other considerations related to your

> >machine, installation, availability of contiguous memory, etc. But most

> >such issues will not be specific to Mathematica

>

> Sorry, this occupation of memory is needed not only for operations

> of the BLAS3 type.

> This would be logical.

I'm pretty sure I did not say that level 3 BLAS requirements are the

ONLY reason more memory might be required. But they are a major one. In

general operations that use Lapack and require manipulation of an

entire matrix will result in Mathematica making a copy of the input

matrix.

> In fact, if I build a random matrix and then

> build another one without deleting the first, Mathematica

> requires that both reside in RAM even though I perform no

> calculations that involve both.

It is not clear to me what you are claiming. If you do

mat1 = Table[Random[],{10^3},{10^3}];

mat2 = Table[Random[],{10^3},{10^3}];

then certainly you now have two matrices. Whether both reside in RAM

will be operating system dependent.

Is this what you mean?

> it seems that virtual memory is

> not exploited. Why doesn't Mathematica move the first matrix into

> virtual memory to make room for the creation of the second?

> I still do not understand how Mathematica actually

> manages memory, and the manual does not help one dig deeper

>

> Cheers

> Roberto

Mathematica does nothing special with memory allocation (it is handled

via C malloc). This is operating system dependent.

Daniel Lichtblau

Wolfram Research

Sep 16, 2005, 12:33:42 AM9/16/05

to

In article <makji1d5efmn4fl3u...@4ax.com>, LumisROB

<lumisrob...@yahoo.com> wrote:

> .... it seems that virtual memory is

> not exploited. Why doesn't Mathematica move the first matrix into

> virtual memory to make room for the creation of the second?

> I still do not understand how Mathematica actually

> manages memory, and the manual does not help one dig deeper


Huh? How do you "exploit virtual memory"? VM should be TRANSPARENT to

the program.

Are you, perchance, using Windows? Then get a real OS! (Although

Windows VM behaves like any other, AFAIK.)

--Ron Bruck

Sep 16, 2005, 2:39:15 AM9/16/05

to

On Thu, 15 Sep 2005 21:33:42 -0700, Ronald Bruck <br...@math.usc.edu>

wrote:

Hi Ronald,


>Huh? How do you "exploit virtual memory"? VM should be TRANSPARENT to

>the program.

I too have always thought this,

and I must say it has almost always been the case when I "played"

with the computer

>

>Are you, perchance, using Windows? Then get a real OS! (Although

>Windows VM behaves like any other, AFAIK.)

I am starting to think that you are right:

what I manage to do with Ubuntu Linux (64 bit) compared to

Windows is... astonishing,

and mind you, I am a Linux novice.

I have to solve numerical problems with huge masses of data: does there

exist on Linux a program similar to Matlab or Mathematica that is

reliable?

Cheers

Rob

Sep 16, 2005, 3:55:42 AM9/16/05

to

On 15 Sep 2005 14:19:32 -0700, "Daniel Lichtblau" <da...@wolfram.com>

wrote:


>

>Show me an example of code that

>(1) Creates a matrix of machine doubles in Mathematica, such that

>(2) The matrix is packed, according to Developer`PackedArrayQ, and

>(3) The space occupied by the matrix, according to ByteCount, deviates

>notably from 8 x total number of elements.

>

>I have seen no indication of any matrix created in Mathematica and

>satisfying these three considerations. What I have seen are matrices

>with byte count approximately 20 x number of elements, but they were

>not packed (I explained this once or twice).

>

In[1]:=

MaxMemoryUsed[ ] * 10^-6 MB

Out[1]=

2.214328 MB

In[2]:=

m=Table[(i-j)+3./N[j] ^4+50*i,{i,1,1000},{j,1,1000}];

In[3]:=

ByteCount[m[[1,1]] ]

Out[3]=

16

In[4]:=

Developer`PackedArrayQ[m]

Out[4]=

True

In[5]:=

MaxMemoryUsed[ ] * 10^-6 MB

Out[5]=

10.132752 MB

----------------------------- OK!!!!!!!

In[6]:=

1000 * 1000 * 8 bytes /. bytes -> 10^-6 MB

Out[6]=

8. MB

In[14]:=

m2 = Table[(i - j) + 3./N[j]^4 + 50*i, {i, 1, 1000}, {j, 1, 1000}];

Developer`PackedArrayQ[m2]

MaxMemoryUsed[ ] * 10^-6 MB

Out[15]=

True

Out[16]=

42.14968 MB

-------------------------------- WHY ??????..... try defining the SAME

object more than once

>It is not clear to me what you are claiming. If you do

>

>mat1 = Table[Random[],{10^3},{10^3}];

>mat2 = Table[Random[],{10^3},{10^3}];

>

>then certainly you now have two matrices. Whether both reside in RAM

>will be operating system dependent.

>

>Is this what you mean?

OK, but the problem I am posing is a different one.

If I cannot get the work done because the address space (on

Windows) limits me to 2 GB (max 3 GB), then I try to partition my

job in order to reach the finish line. In this procedure I have to

try to break the problem into many small problems. Those small

problems, however, must be effectively managed in virtual memory.

Is this route practicable in Mathematica?

For example, is it not possible to get around memory limitations in

a matrix multiplication by passing through block matrix multiplication?
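The blocked-multiplication idea asked about here can be sketched as follows (illustrative Python with plain lists; a real out-of-core version would read and write the blocks on disk so that only a few of them are resident at any time):

```python
def matmul_blocked(a, b, n, bs):
    """Multiply two n x n matrices block by block with block size bs.
    Only one block-triple (a-block, b-block, c-block) needs to be 'hot'
    at a time, which is what makes out-of-core variants possible."""
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, bs):
        for j0 in range(0, n, bs):
            for k0 in range(0, n, bs):
                for i in range(i0, min(i0 + bs, n)):
                    for k in range(k0, min(k0 + bs, n)):
                        aik = a[i][k]
                        for j in range(j0, min(j0 + bs, n)):
                            c[i][j] += aik * b[k][j]
    return c

# Tiny sanity check against the direct product:
a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_blocked(a, b, 2, 1))   # [[19.0, 22.0], [43.0, 50.0]]
```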

>

>Mathematica does nothing special with memory allocation (it is handled

>via C malloc). This is operating system dependent.

Personally I don't believe that Mathematica manages the heap and the stack in the usual way.

Surely it does not manage memory in the same way that

memory (heap and stack) is managed in a C++ program... but this

does not surprise me; Mathematica is obviously a different tool.

How does it manage it?

This cannot be understood from the manual. The indications are very generic.

>Daniel Lichtblau

>Wolfram Research

Sep 16, 2005, 5:56:22 AM9/16/05

to

On Fri, 16 Sep 2005 07:55:42 GMT, LumisROB

<lumisrob...@yahoo.com> wrote:


>In[14]:=

>m2 = Table[ (i_ j) + 3. / N[j] ^4+ 50 * i, {i, 1, 1000}, {j, 1,

>1000}];

>Developer`PackedArrayQ[m2]

>MaxMemoryUsed[ ] * 10^-6 MB

>Out[15]=

>True

>Out[16]=

>42.14968 MB

>-------------------------------- WHY ??????..... tries to define more

>than once the SAME object

>

Sorry, I made a naive mistake. Since $HistoryLength was active by

default, I had not realized what was actually being kept in memory

Cheers Rob

Sep 16, 2005, 11:37:12 AM9/16/05

to

In article <hhpki1lq7q127m3l1...@4ax.com>, LumisROB

<lumi...@yahoo.com> wrote:

...

> I have to solve numerical problems with huge masses of data: does there

> exist on Linux a program similar to Matlab or Mathematica that is

> reliable?


Well, both MATLAB and Mathematica have UNIX versions. And Linux

versions. They're not quite as nice as the Mac/Windows versions in

terms of licensing (the licenses are far more sensitive to hardware

changes; both companies are responsive to requests for new passwords,

but I learned to dislike this method when Macsyma went belly-up and I

lost everything).

My impression is both programs are FASTER in Linux.

--Ron Bruck

Sep 16, 2005, 12:36:07 PM9/16/05

to

The performance of VM, if implemented at all, is OS and hardware

dependent. Even for different Unix flavors (that includes Mac OSX)

your mileage may vary.


The best implementation I can recall was VAX-VMS (it had to be, given

their OS name). For the Cray I had to write my own using the idea of

a "RAM device", and it was a hell of a tune-up. Any comments on Windows?

I know they have used some software components from VMS after 2000.

Sep 16, 2005, 4:34:40 PM9/16/05

to

On 16 Sep 2005 09:36:07 -0700, car...@colorado.edu wrote:

Hi Carlos

on these matters I believe that few people know things in

depth. Personally, as soon as I think I have understood something,

along comes the test that blows everything up. It is one thing to

manage heap and stack in a program, and another thing to understand the

low-level operation.

How is it possible to solve a memory-limit problem? If I have

to create a huge matrix and the system does not allow me to, I

thought about breaking up the problem and solving it piecewise... but it

is not so simple, and the documentation is often poor (and if this can

be justified in a free library, I really do not understand why it

happens in commercial products of elevated cost).

I have an admiration for the VAX-VMS machines, if nothing else because I

think they forced you to use your head and not act like a robot.

Do you know some book that tackles these matters and is worthwhile?

Heartfelt thanks

Sep 17, 2005, 10:54:14 AM9/17/05

to

On 15 Sep 2005 14:19:32 -0700, "Daniel Lichtblau" <da...@wolfram.com>

wrote:


>> m = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

>> Norm[m, Infinity]

>>

>> No more memory available.

>> Mathematica kernel has shut down.

>

>I suspect this is because the j^4 overflows machine integer arithmetic,

>causing the matrix not to be packed. When I modify this slightly to use

>only machine doubles (by computing N[j]^4 instead of j^4) I get a

>result.

>

>

>

>Daniel Lichtblau

>Wolfram Research

Indeed, this explanation accounts for a lot of the anomalies that I found

only in numerical calculations, but I think this cause does not

exclude others.

Let me explain:

I performed the calculation below on the configuration WinXP (64

bit) + Mathematica 5.2 + 2 GB RAM + 400 GB hard disk. Working with other

tools I manage to solve it without even worrying about

virtual memory; everything is managed transparently, as I have

always thought it should happen. What happens in Mathematica? (Even

making sure that packed arrays are built... In this problem I

do not exceed the limit of packed arrays, nor the limit

of the address space of the operating system.)

m1 = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

m2 = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

m3 = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

m4 = Table[i - j + 3./j^4 + 50*i, {i, 1, 10^4}, {j, 1, 10^4}];

No more memory available.

Mathematica kernel has shut down.

>It is not terribly difficult to make memory estimates involving packed

>arrays in Mathematica. The only information not documented is.....

>There might be other considerations related to your

>machine, installation, availability of contiguous memory, etc. But most

>such issues will not be specific to Mathematica.

>Daniel Lichtblau

>Wolfram Research

I still think you are too optimistic about Mathematica.

Please explain to me, with some numbers, why the interruption reported

above happens, so that I can understand well where I (or my

computer + Mathematica !!!) am going wrong

Thanks of your time

Rob

Sep 17, 2005, 12:18:54 PM9/17/05

to

Dear Rob:

1. Yes, it is a poor behavior of the system if it gives you

no indication that you are approaching the limit of available memory.

Especially bad if it reaches that limit and then crashes

with loss of all work.

Mathematica should address

this issue. Some Lisp systems have a better

engineered approach to dealing with such problems,

and Mathematica could be improved.

But how many times must you post the same message?

2. The fact that you can find out how much space

is used by a particular object in Mathematica

(ignoring sharing), by ByteCount should make it

possible for you to study the phenomenon without

posting more questions.

3. The size of your computer hard disk is irrelevant,

as Daniel L. has pointed out. Please read the responses

to your questions. Perhaps find someone who is

more fluent in English to translate to your native

language.

Sep 17, 2005, 2:56:20 PM9/17/05

to

28. Ronald Bruck Sep 15, 10:33 pm

Newsgroups: sci.math.symbolic

From: Ronald Bruck <b...@math.usc.edu>

Date: Thu, 15 Sep 2005 21:33:42 -0700

Subject: Re: a big limit of mathematica?

> Huh? How do you "exploit virtual memory"? VM should be TRANSPARENT to

> the program.

Not necessarily. In a modular and well-documented OS (that is, BTW,

the case with most Unix flavors) you should be able to turn off the

system VM and write your own VM pager, customized to your

application's "granularity".

For example, your application might specify the page size.

Sep 17, 2005, 3:18:15 PM9/17/05