Thank you
--
Alexander Vigodner | E-mail: sa...@bfr.co.il,
Bloomberg L.P., Financial Research | Work: 972-3-6944202,
IBM house, 10th floor, 2 Weizmann St., | Fax: 972-3-6944225,
Tel Aviv, 6136, Israel | Home: 972-9-8651680.
> Hi,
> I hope that somebody can answer the following simple question.
> The parameter 'n' in stacksize(n) is limited by my hardware (so to
> increase n
> I need more memory, for example) or 'n' is limited by SciLab itself. At
> my
> computer I cannot set n more than 1e7 (10000000)
This limit is set by your OS, which is of course aware of your RAM and
your swap space. It is in fact how much you can allocate dynamically in C
using free in one shot.
Note that 1e7 is huge. n is the number of double precision numbers, so
it corresponds to 8e7 bytes. That is 80 MB.
Hope that helps
Ramine
Aka "virtual memory". You can check/set your limits with
limit ([t]csh)
or
ulimit -a (bash)
The machine from which I'm writing this mail has no problems
at all with stacksize(10000000).
$ free
             total     used     free   shared  buffers   cached
Mem:        256876   251580     5296    45852   100684    90576
-/+ buffers/cache:    60320   196556
Swap:       618320     4872   613448
> It is in fact how much you can allocate dynamically in C
> using free in one shot.
I would have thought it uses malloc(3). One learns something new every
day. Moreover, malloc(3) is just a front-end to the kernel's
memory allocator; C as a language is not involved here.
> Note that 1e7 is huge. n is the number of double precision numbers. So
> it corresponds to 8e7 bytes! That is 80 Meg.
Well, what is huge for one is tiny for the other...
<personal_opinion>
IMHO anything below 4 GB of virtual memory should be doable without major
headaches.
</personal_opinion>
lvd