laurent series

Ralf Hemmecke

Jun 12, 2023, 4:49:06 AM
to fricas-devel
Dear Waldek,

I am heavily using UnivariateLaurentSeries. Unfortunately, now I am
running into a memory problem (see end of mail).

These numbers don't tell me anything, but what I see is that my main
memory (32 GB) is filled completely before I am put into ldb.
Unfortunately, I am too inexperienced with ldb to dig out the relevant
information.

However, I think the problem must come from storing all intermediate
results that are not garbage collected.

-- red loop:=[288, 0][[28158, 9], [28150, 1]]
-- redx:=[288, 0][[28158, 9], [28150, 1]]
196

What you read above are some statistics I printed during the computation
to figure out the source of the problem.

The computation works with two Laurent series, with orders -288 and 0.
coefficient(ser, 0) has 28158 binary digits in the numerator and 9
binary digits in the denominator. Similarly for the second series.

If I estimate the amount of memory, then I roughly get

(28158/8) bytes * 289 (num of coefficients -288..0) * 200 (num of such
series)

That gives roughly 200 MB if I am not wrong.
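Redoing this arithmetic as a quick Python sanity check (using only the numbers quoted above), the estimate comes out at about 200 MB, in any case far below an 11 GB heap:

```python
# Back-of-the-envelope check of the memory estimate above.
bits_per_numerator = 28158               # binary digits of coefficient(ser, 0)
bytes_per_coeff = bits_per_numerator / 8 # ~3520 bytes per coefficient
coeffs_per_series = 289                  # exponents -288..0
num_series = 200                         # number of such series

total_bytes = bytes_per_coeff * coeffs_per_series * num_series
print(f"{total_bytes / 10**6:.0f} MB")   # prints "203 MB"
```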

I understand that UnivariateLaurentSeries internally must keep some
structure in order to compute the next coefficient (I'm actually only
interested in the coefficients -289..0, eventually.)

Anyway, I have no clue how 30 GB get filled.

Question: how can I find out whether there is some stuff that could
perhaps be garbage collected? With gnome-system-monitor I sometimes see
jumps from 7.5 GB to 7 GB, i.e. there is some garbage collection going
on, but not enough.

I have already tried to additionally extend the principal part of any
argument series. The memory fills up with and without that strategy.

Of course, the whole problem can lie on my side, but I currently do not
know where in my code I would have to explicitly remove a reference to
some memory.

I understand that it is hard, if not impossible, for you to give me a
hint without seeing my code, but I am hoping.

Let me say just this. The series involved is the Klein j-invariant, from
which I will roughly have to take the 17th power of j(tau) and of
j(17 tau) and do reductions of another Laurent series with some "basis"
polynomials bi(j, j17) where p is of degree < [17,17] in both series.
Then I multiply the result with j and reduce again by the basis. I need
roughly 300 of these rounds, but my memory gets full in round 200.

BTW, the coefficient of q^10000 in j^17 has 7466 binary digits, and I
can compute it in about 420 seconds.

Hmmm... seems like UnivariateLaurentSeries is not the problem.

I'll investigate my code further, but maybe you have a quick hint.

Ralf

I compute j via

kleinJInvariant(): L ==
    ef: L := eulerFunction 1
    ef2: L := ef * ef
    ef4: L := ef2 * ef2
    ps8: L := recip(ef4 * ef4)::L
    monomial(1, -1) * (eisensteinE4() * ps8)^3
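As a cross-check of this construction, here is a small Python sketch (truncated integer series, not the FriCAS code above) that builds j = E4^3/Delta the same way: E4 from the divisor sums sigma_3, Delta/q as the 24th power of the Euler function, then one series division:

```python
def sigma3(n):
    """Sum of d^3 over the divisors d of n."""
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    """Product of two truncated power series (coefficient lists)."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(len(a))]

def inv(a):
    """Truncated multiplicative inverse; assumes a[0] == 1."""
    b = [1] + [0] * (len(a) - 1)
    for k in range(1, len(a)):
        b[k] = -sum(a[i] * b[k - i] for i in range(1, k + 1))
    return b

N = 4
e4 = [1] + [240 * sigma3(n) for n in range(1, N)]  # Eisenstein series E4
euler = [1, -1, -1, 0]        # (q; q)_inf = 1 - q - q^2 + O(q^5)
delta_over_q = [1, 0, 0, 0]
for _ in range(24):           # Delta/q = eulerFunction^24
    delta_over_q = mul(delta_over_q, euler)

j_times_q = mul(mul(e4, mul(e4, e4)), inv(delta_over_q))
print(j_times_q)  # [1, 744, 196884, 21493760]: j = 1/q + 744 + 196884 q + ...
```

The printed coefficients match the well-known expansion of the j-invariant, so the recip/power structure of kleinJInvariant above is indeed just E4^3 divided by the discriminant.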

where eisensteinE4 and eulerFunction are defined as

eisensteinStream(c: R, k: NN, n: PP): Stream R == delay
    s: ZZ := sumOfKthPowerDivisors(n, k)$IntegerNumberTheoryFunctions
    cons(s*c, eisensteinStream(c, k, n+1))

eisensteinE4(): L ==
    laurent(0, cons(1, eisensteinStream(240::R, 3, 1)))

-- This computes (q^s; q^s)_\infty. Input condition: s > 0.
-- Faster version of eulerFunction building on the
-- pentagonal number theorem (4 coefficients at once).
eulerStream(s: PP, n: ZZ, m: ZZ, z: S): S == delay
    -- We compute the series for q=q^s, so the parameters
    -- correspond to n=2ks, m4="last exponent for the previous k",
    -- z="remaining stream".
    n1 := n - s           -- corresponds to (2k-1)s
    n2 := n + n1          -- corresponds to (4k-1)s
    m1 := m + n2 - s - s  -- 1st exponent = m + (4k-3)*s
    m2 := m1 + n1         -- 2nd exponent
    m3 := m2 + n2         -- 3rd exponent
    m4 := m3 + n          -- 4th exponent
    cons([m1, -1], cons([m2, -1], cons([m3, 1], cons([m4, 1],
        eulerStream(s, n+s+s, m4, z)))))
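For comparison, a minimal Python analogue of the pentagonal number theorem trick (an illustration, not the FriCAS code above): by Euler's identity, (q; q)_inf = sum over k of (-1)^k q^(k(3k-1)/2), so only the generalized pentagonal exponents carry nonzero coefficients.

```python
def euler_function_coeffs(n):
    """Coefficients of (q; q)_infinity up to q^n via the pentagonal
    number theorem: nonzero only at exponents k(3k-1)/2 and k(3k+1)/2,
    with sign (-1)^k."""
    c = [0] * (n + 1)
    c[0] = 1
    k = 1
    while k * (3 * k - 1) // 2 <= n:
        sign = -1 if k % 2 else 1
        g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
        g2 = k * (3 * k + 1) // 2
        c[g1] += sign
        if g2 <= n:
            c[g2] += sign
        k += 1
    return c

# 1 - q - q^2 + q^5 + q^7 - q^12 - q^15 + ...
print(euler_function_coeffs(15))
```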


Heap exhausted during garbage collection: 160 bytes available, 544 requested.
Gen Boxed Code    Raw LgBox LgCode LgRaw  Pin      Alloc     Waste       Trig    WP GCs Mem-age
  3 12410    0 212978     0      0     0 1701 3448270144 244486848 2795436714     0   1  0.7786
  4 24632    1 421238     0      0     0 4226 6892924352 412226112    2000000 10271   0  0.6782
  5     0    0      0     0      0     0    0          0         0    2000000     0   0  0.0000
  6  1959   10    847   241      0   116    0   50760704   1225728    2000000  2989   0  0.0000
Total bytes allocated = 10391955200
Dynamic-space-size bytes = 11049893888
GC control variables:
*GC-INHIBIT* = true
*GC-PENDING* = true
*STOP-FOR-GC-PENDING* = false
fatal error encountered in SBCL pid 776724 tid 776724:
Heap exhausted, game over.

Welcome to LDB, a low-level debugger for the Lisp runtime environment.
ldb>

Waldek Hebisch

Jun 12, 2023, 7:50:37 AM
to fricas...@googlegroups.com
On Mon, Jun 12, 2023 at 10:49:02AM +0200, Ralf Hemmecke wrote:
> Dear Waldek,
>
> I am heavily using UnivariateLaurentSeries. Unfortunately now I am running
> into a memory problem (See end of mail.)
>
> These numbers don't tell me anything, but what I see is that my main memory
> (32 GB) is filled completely before I am put into ldb.

Below it looks like your FriCAS is configured to use 11 GB. If you
have 32 GB you can try to give more memory to FriCAS (or less, to
figure out how far you can get in smaller memory).

> Unfortunately, I am totally inexperienced to figure out relevant information
> with ldb.
>
> However, I think the problem must come from storing all intermediate
> results that are not garbage collected.
>
> -- red loop:=[288, 0][[28158, 9], [28150, 1]]
> -- redx:=[288, 0][[28158, 9], [28150, 1]]
> 196
>
> What you read above are some statistics I printed during the computation to
> figure out the source of the problem.
>
> The computation works with two Laurent series, with orders -288 and 0.
> The coefficient(ser,0) has 28158 binary digits in the numerator and 9 binary
> digits in the denominator. Similar for the second series.
>
> If I estimate the amount of memory, then I roughly get
>
> (28158/8) bytes * 289 (num of coefficients -288..0) * 200 (num of such
> series)
>
> That gives roughly 200 MB if I am not wrong.
>
> I understand that UnivariateLaurentSeries internally must keep some
> structure in order to compute the next coefficient (I'm actually only
> interested in the coefficients -289..0, eventually.)

There is an overhead of one list node per stream term. In your case this
should be negligible (a list node needs 16 bytes; your data is much
larger).

I would have doubts about the number of series that you keep. Namely,
to be able to compute further terms, series keep references to their
arguments. For multiplication that means the arguments of a product are
needed as long as the product is needed. If you do, say, 1000 series
multiplications, then you actually keep all the arguments to those
multiplications, which probably runs into the thousands. Addition can
discard "used" terms, but if you add a product then this still keeps
references to the arguments of the multiplication.

When doing computations with series, the conservative assumption is
that you need to keep the whole computation tree in memory. In other
words, memory taken by a series can be garbage collected only when the
whole computation is done and all resulting series are discarded.
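This retention effect can be illustrated with a small Python sketch (a stand-in for the SPAD stream machinery, not FriCAS code): a lazily multiplied series closes over both factors, so neither factor can be collected while the product is reachable, and a chain of products keeps the whole computation tree alive.

```python
class LazySeries:
    """Minimal lazy power series: coefficients are computed on demand
    and cached, like FriCAS streams."""
    def __init__(self, coeff_fn):
        self._fn = coeff_fn      # may close over other LazySeries objects
        self._cache = []

    def coeff(self, n):
        while len(self._cache) <= n:
            self._cache.append(self._fn(len(self._cache)))
        return self._cache[n]

    def __mul__(self, other):
        # Cauchy product. The lambda keeps references to self and other:
        # both factors stay in memory as long as the product is alive.
        return LazySeries(lambda n: sum(self.coeff(i) * other.coeff(n - i)
                                        for i in range(n + 1)))

geom = LazySeries(lambda n: 1)   # 1/(1 - q)
prod = geom * geom               # 1/(1 - q)^2: coefficient of q^n is n + 1
print(prod.coeff(3))             # prints 4
```

Even after `prod` has cached all the coefficients one will ever ask for, `geom` remains referenced through the closure; only dropping `prod` itself frees the factors.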

> Anyway. I have no clue how 30 GB are filled.
>
> Question: how can I find out whether there is some stuff that could perhaps
> be garbage collected? With gnome-system-monitor I sometimes see jumps from
> 7.5 GB to 7 GB, i.e. there is some garbage collection going on, but not
> enough.
>
> I have already tried to additionally extend the principal part of any
> argument series. The memory fills up with and without that strategy.
>
> Of course, the whole problem can lie on my side, but I currently do not
> know where in my code I would have to explicitly remove a reference to
> some memory.
>
> I understand that it is hard, if not impossible, for you to give me a
> hint without seeing my code, but I am hoping.
>
> Let me say just this. The series involved is the Klein j-invariant, from
> which I will roughly have to take the 17th power of j(tau) and of
> j(17 tau) and do reductions of another Laurent series with some "basis"
> polynomials bi(j, j17) where p is of degree < [17,17] in both series.
> Then I multiply the result with j and reduce again by the basis. I need
> roughly 300 of these rounds, but my memory gets full in round 200.

It looks like your reduction step creates something like 200 series as
intermediate results. With 200 rounds that may give enough series to
fill memory.

--
Waldek Hebisch

Ralf Hemmecke

Jun 12, 2023, 9:38:48 AM
to fricas...@googlegroups.com
Dear Waldek,

thank you for answering so quickly.

On 12.06.23 14:33, Waldek Hebisch wrote:
> It looks that your reduction step is likely to create something
> like 200 series as intermediate results. With 200 rounds that
> may give enough series to fill memory.

Yes, meanwhile I have also arrived at this observation.

Perhaps for this special case I can do something by computing an
estimate of the extension point, i.e. which coefficients of my initial
kleinJ series will actually be relevant for the result, and then doing
the whole computation with univariate (Laurent) polynomials.

> I would have doubts about number of series that you keep. Namely,
> to be able to compute next terms series keep references to arguments.
> For multiplication that means argument to product are needed as long
> as product is needed. If you do say 1000 series multiplications,
> then you actually keep all arguments to multiplications which
> probably goes in thousends.

OK, that would explain why

ar := reduce(aj0^estart * af, aab)$QMRED(CX, An)
ars := [ar]
for i in estart .. eend repeat
    print(i)
    -- ars := cons(ar, ars)
    ar := reduce(aj0 * ar, aab)

fills my memory with or without the cons line. (But I need all of the
coefficients -300..0 of all the elements in ars, eventually.)

However, as you can see, that is only roughly one multiplication per
round. I must investigate the reduction, but I do not expect that there
is any multiplication there, or perhaps only one. (Maybe I'm wrong;
more after checking this.)

For FriCAS: maybe it would be worthwhile to introduce a special function
reduce(x, c, y) for Laurent and Taylor series (and streams) so that it
behaves like x - c*y, but does not create an intermediate series c*y for
a constant c.
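In a Python-generator analogue (an illustration of the proposal, not FriCAS code), such a fused operation is a single loop over the two input streams that never materializes the scaled series c*y:

```python
def reduce_axpy(xs, c, ys):
    """Yield the coefficients of x - c*y directly from the two input
    coefficient streams; the intermediate series c*y is never built."""
    for a, b in zip(xs, ys):
        yield a - c * b

# x = 5 + 4q + 3q^2,  y = 1 + q + q^2,  c = 2   ->   3 + 2q + q^2
print(list(reduce_axpy(iter([5, 4, 3]), 2, iter([1, 1, 1]))))  # [3, 2, 1]
```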

Ralf

Ralf Hemmecke

Jun 12, 2023, 9:44:12 AM
to fricas...@googlegroups.com
In my last message I copied a huge text into the subject line without
realizing it.

Sorry for that.

Ralf