16gb of the divisors of the monster group


Jim Dupont

Apr 30, 2026, 12:22:16 PM
to seq...@googlegroups.com
https://archive.org/download/monster_divisors/monster_divs.txt
16 GB of the divisors of the monster group for your reading pleasure; one day this could be an OEIS sequence, maybe.

Fred Lunnon

Apr 30, 2026, 7:23:17 PM
to seq...@googlegroups.com
   << divisors of the monster group >> 
Does the subject line mean "divisors of the order of the monster group"? 
No way am I going to try downloading that text file to find out! 

WFL 
_

On Thu, Apr 30, 2026 at 5:22 PM Jim Dupont <jmiked...@gmail.com> wrote:
https://archive.org/download/monster_divisors/monster_divs.txt
16 GB of the divisors of the monster group for your reading pleasure; one day this could be an OEIS sequence, maybe.

--
You received this message because you are subscribed to the Google Groups "SeqFan" group.
To unsubscribe from this group and stop receiving emails from it, send an email to seqfan+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/seqfan/CAAaxPMtT0B-nYopOdZTAXuukhaRED4s89LxMJyoG-p7LE%3DrknQ%40mail.gmail.com.

Sean A. Irvine

Apr 30, 2026, 7:27:30 PM
to seq...@googlegroups.com
We have a sequence for the divisors of the order already: A174670.

Jim Dupont

Apr 30, 2026, 7:27:57 PM
to seq...@googlegroups.com

M F Hasler

Apr 30, 2026, 7:55:48 PM
to seq...@googlegroups.com
On Thu, Apr 30, 2026 at 7:27 PM Sean A. Irvine <sai...@gmail.com> wrote:
We have a sequence for the divisors of the order already: A174670.

where Charles Greathouse added in 2015:
(PARI) divisors(Mnr) \\ Warning: output is ~13 GB

I think it's not very interesting to fill disk space and server load with data that can probably be computed faster than it can be downloaded (and unzipped, etc.). Of course, that assumes you have enough (free) memory;
otherwise it's even less interesting.
(Divisors can be produced in increasing order, without computing all of them, using a "priority queue" approach, so one can also process them programmatically without needing to store them.)
FWIW, I'll add Python code in A174670  to generate the sequence term by term.
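[For illustration, a minimal sketch of such a priority-queue generator. This is not the A174670 code, just a self-contained hedged version of the idea: each heap entry is a (divisor, index, exponent) triple, where `exponent` is the power of `primes[index]` present in `divisor`.]

```python
import heapq

def divisors_in_order(primes, exps):
    """Yield the divisors of prod(p**e) in increasing order.

    Heap entries are (divisor, index, exponent) triples.  Each divisor
    is built by multiplying primes in nondecreasing index order, so
    every divisor is pushed onto the heap exactly once.
    """
    heap = [(1, 0, 0)]
    while heap:
        d, i, c = heapq.heappop(heap)
        yield d
        if c < exps[i]:                      # raise primes[i] one step further
            heapq.heappush(heap, (d * primes[i], i, c + 1))
        for j in range(i + 1, len(primes)):  # or bring in a later prime
            heapq.heappush(heap, (d * primes[j], j, 1))

# e.g. list(divisors_in_order([2, 3], [2, 1])) gives the divisors of 12.
```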

- Maximilian

On Fri, 1 May 2026 at 11:23, Fred Lunnon <fred....@gmail.com> wrote:
   << divisors of the monster group >> 
Does the subject line mean "divisors of the order of the monster group"? 
No way am I going to try downloading that text file to find out! 

WFL 
_

On Thu, Apr 30, 2026 at 5:22 PM Jim Dupont <jmiked...@gmail.com> wrote:
https://archive.org/download/monster_divisors/monster_divs.txt 16gb of the divisors of the monster group for your reading pleasure, one day this could be an oeis sequence maybe.
 

Jim Dupont

Apr 30, 2026, 8:05:47 PM
to seq...@googlegroups.com
Yeah, I mean, this was generated by a simple script. I'm working on some interesting packed representations that I'll share later; I just thought I would drop the link in case someone wants it.


Brendan

Apr 30, 2026, 9:48:28 PM
to seq...@googlegroups.com
It would be more interesting to see the orders of the subgroups of the Monster.  Do we have that sequence?  Brendan.


Tim Peters

Apr 30, 2026, 9:53:10 PM
to seq...@googlegroups.com
[M F Hasler <mha...@dsi972.fr>]
> (Divisors can be produced in increasing order without computing all
> of them using a "priority queue" approach, so one can also parse them
> programmatically without need to store them.)
> FWIW, I'll add Python code in A174670 to generate the sequence term by term.

That would be welcome! I wonder whether it's practical, though. I find
424_488_960 divisors of the order of the monster group, with 15
distinct prime divisors. I fear the "frontier" of the priority queue
will grow too fast for most boxes to handle. It only holds a
fraction of the results at a time, but it seems to need to save
substantial state for each of them (like a tuple recording the
exponent to which each base prime has already been raised). For
example, after popping 1 from the queue at the start, 15 new entries
have to be pushed (one for each base prime).

So I used `itertools.product()` to generate them without regard to
order, which requires a trivial amount of memory.

FWIW, converting each to a decimal string and adding a newline summed to

12_078_286_020

bytes of output.
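[A hedged sketch of that unordered approach. The prime factorization of the Monster order is the standard one; the function and constant names below are illustrative, not taken from Tim's actual script.]

```python
from itertools import product
from math import prod

# Prime factorization of |M|, the order of the Monster group:
# 2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71
MONSTER = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3, 17: 1, 19: 1,
           23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}

def divisors_unordered(factors):
    """Yield each divisor exactly once, in no particular order.

    itertools.product enumerates the exponent tuples lazily, so memory
    use is independent of the number of divisors.
    """
    primes = list(factors)
    for exps in product(*(range(e + 1) for e in factors.values())):
        yield prod(p ** e for p, e in zip(primes, exps))

# The divisor count 424_488_960 quoted above is just prod(e + 1):
assert prod(e + 1 for e in MONSTER.values()) == 424_488_960
```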

Jim Dupont

Apr 30, 2026, 9:54:12 PM
to seq...@googlegroups.com
I'll update the archive and split it up into groups, because that large file contains the orders of the smaller groups inside it: they're just regions of the larger file. The irreducible representations can also be seen as sections of it. I have much more to say on this topic.

M F Hasler

Apr 30, 2026, 10:23:59 PM
to seq...@googlegroups.com
On Thu, Apr 30, 2026 at 9:48 PM Brendan <brend...@gmail.com> wrote:
It would be more interesting to see the orders of the subgroups of the Monster.  Do we have that sequence?  Brendan.

I think that's still an area of active research, and we know only the maximal subgroups of the monster,
and the smaller ones (Sylow subgroups, cyclic groups, ...), but not all of them (not even their orders, I think).

On Thu, Apr 30, 2026 at 9:53 PM Tim Peters <tim.p...@gmail.com> wrote:
[M F Hasler <mha...@dsi972.fr>]

> FWIW, I'll add Python code in A174670  to generate the sequence term by term.

That would be welcome!

It's done, just not yet "accepted" = published.

I wonder whether it's practical, though. I find 424_488_960 divisors of the order of the monster group, with 15 distinct prime divisors.

That's right.
 
I fear the "frontier" of the priority queue will grow too fast for most boxes to handle.

I think that it might grow to a few million, which might require 1-2 GB for the heap, 
which would be ok for many modern laptops. But maybe I underestimate...

FWIW, converting each to a decimal string and adding a newline summed to 12_078_286_020 bytes of output.

That might correspond to Charles' 13 GB value, if he meant the b-file: with 424_488_960 additional "indices" (mostly nine digits) + 1 space per line.

- Maximilian

Allan Wechsler

Apr 30, 2026, 11:07:11 PM
to seq...@googlegroups.com
This thread, together with the previous one about trying to rigorize the notion of the "Fibonaccishness" of the divisor sequence of a number, has synergized in my brain and produced a lot of curiosity about exactly what one ought to expect from the divisor sequences of various numbers. I'm having trouble actually posing any coherent, rigorous questions, though.

I started by looking at the graph of oeis.org/A174670. I was very surprised by how smooth and analytic it looked -- as if it had a simple closed-form expression or an easy power series. Obviously it doesn't, but, is it close to some simple function?

I'm pretty sure this is not simply typical behavior for big numbers. The integers just below and above the Monster number are squarefree and have four and two prime factors, respectively, so they have 16 and 4 divisors, not enough to draw a curve that looks like anything at all.

Is it typical for big numbers with lots of divisors? I'm very far from sure this question even makes sense.

Is it possible that only really magical numbers, like the orders of gargantuan sporadic simple groups, have divisor-sequences that approximate such elegant curves? Surely not. I mean, 2^1061*3^400*7*43 probably has a really nice smooth divisor-curve too, and there's nothing special about it because I just pulled it out of, um, the air.

Call a growth curve inexorable if all the coefficients of its power series are positive. These divisor-curves all look like approximations to inexorable functions to me. Are there any obvious counter-examples? Are there any numbers whose divisor-sets are more "clumpy", where you have a relative drought, and then a glut, and then another drought?

Am I, in fact, talking about anything, or am I just seeing camels, weasels, and whales in the clouds?

-- Allan


Tim Peters

May 1, 2026, 11:09:08 PM
to seq...@googlegroups.com
[Tim]
>> I fear the "frontier" of the priority queue will grow too fast for most
>> boxes to handle.

[M F Hasler <mha...@dsi972.fr>]
> I think that it might grow to a few million, which might require 1-2 GB for the heap,
> which would be ok for many modern laptops. But maybe I underestimate...

It "blew up" for me, I went to your pending edit for A174670 and tried
your Python code there. As feared, the frontier is "too large" for my
box. At the time it generated the 70 millionth divisor (a good start,
but still far short of finishing), the heap contained over 30 million
divisors, and about 9 GB of RAM were in use. As of now, about 11 GB
after 72 million divisors.

The 15-element exponent list attached to each frontier divisor is a
major RAM hog. An easy way to cut that is to use an array.array("B")
instead (stored as a C array of 15 bytes). But that's slower.

I killed the job, because it's obvious it will exhaust my RAM.

A different approach runs under 400 MB total. Basically throw out the
2^46 factor; use itertools.product() to compute the ~9M unique
divisors remaining directly, without regard to order; sort that set;
then use a 47-way heapq.merge() to merge the sorted list multiplied by
the 47 possible powers of 2. "Almost all" the total time is spent in
the merge step, but that's built up with generators, so "almost all"
the memory is consumed by the sorted list near the start. Under PyPy,
that finishes generating all the divisors in under 4 minutes (closer
to 8 under CPython). But while not sprawling, that's a lot more code
than yours.
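[A compressed sketch of that split-and-merge strategy, under stated assumptions: Tim's actual code is not shown in the thread, and the function name and `special` parameter here are illustrative. One prime is factored out, the divisors of the remaining part are sorted once, and heapq.merge lazily interleaves the scaled copies.]

```python
import heapq
from itertools import product
from math import prod

def divisors_via_merge(primes, exps, special=0):
    """Yield all divisors of prod(p**e) in increasing order.

    Divisors not involving primes[special] are generated unordered and
    sorted once; the (exps[special] + 1) streams obtained by scaling
    that list with each power of primes[special] are then merged
    lazily, so memory is dominated by the single sorted list.
    """
    rest = [(p, e) for i, (p, e) in enumerate(zip(primes, exps)) if i != special]
    base = sorted(prod(p ** k for (p, _), k in zip(rest, ks))
                  for ks in product(*(range(e + 1) for _, e in rest)))

    def scaled(q):  # helper so each stream captures its own multiplier
        return (q * d for d in base)

    p, e = primes[special], exps[special]
    return heapq.merge(*(scaled(p ** k) for k in range(e + 1)))
```

For the Monster order one would take `special` to be the index of 2 (exponent 46), leaving ~9M odd divisors to sort and a 47-way merge, matching the proportions Tim describes.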

M F Hasler

May 2, 2026, 5:45:38 PM
to seq...@googlegroups.com
On Fri, May 1, 2026 at 11:09 PM Tim Peters <tim.p...@gmail.com> wrote:
[M F Hasler <mha...@dsi972.fr>]
> I think that it might grow to a few million, which might require 1-2 GB for the heap,
> which would be ok for many modern laptops. But maybe I underestimate...

It "blew up" for me, I went to your pending edit for A174670 and tried
your Python code there. As feared, the frontier is "too large" for my
box. At the time it generated the 70 millionth divisor (a good start,
but still far short of finishing), the heap contained  over 30 million
divisors, and about 9 GB of RAM were in use. As of now, about 11 GB
after 72 million divisors.

Martin Fuller confirmed that my script used a maximum of 1 GB when it reached the "middle divisor" of size ~ sqrt(|M|)
(i.e., after computing 424488960/2 ~ 200 million divisors), which took about 600 seconds for him.
Of course, if you keep all of the divisors in memory, it DOES take between 13 and 16 GB,
as we have known since Charles' 2015 comment on the naive PARI code,
and as is written in the title of this email thread.

The best algorithm cannot change the amount of memory needed to store 400 million integers of average size 
sqrt(808017424794512875886459904961710757005754368000000000)!

The 15-element exponent list attached to each frontier divisor is a major RAM hog.

You must be referring to the first version of my algorithm, which stored all 15 exponents,
but shortly after (still on April 30, 24 hours before your email)
I improved it to a version that stores only one exponent in each triple (divisor, index, exponent) pushed on the heap.

- Maximilian

Jim Dupont

May 2, 2026, 7:02:45 PM
to seq...@googlegroups.com
gist.github.com/jmikedupont2/b3ae63c048b7faec439c15d8335d65f8
Here is the code I used, if you are interested; it ran surprisingly fast.


Jim Dupont

May 2, 2026, 8:19:28 PM
to seq...@googlegroups.com
https://archive.org/details/monster-divisors-mod2
Here I split it up into 46 files of ~200 MB each, one for each power of 2.

Tim Peters

May 2, 2026, 8:29:53 PM
to seq...@googlegroups.com
[M F Hasler <mha...@dsi972.fr>]
> Martin Fuller confirmed that my script used a maximum of 1GB
> when it reached the "middle divisor" of size ~ sqrt(|M|),
> (i.e., after computing 424488960/2 ~ 200 million divisors), which took
> about 600 seconds for him.
> Of course, if you keep all of the divisors in your memory, it DOES
> take between 13 - 16 GB,

I copied and pasted the draft edit code verbatim, and ran it _as_ a
generator. I didn't save any of the yielded results in any way.

> You must be referring to the first version of my algorithm, which stored all 15 exponents,
> but shortly after (still on April 30, 24 hours before your email)

Bingo! Not the first time I've been fooled by unwittingly assuming the
most recent changes are at the top.

> I improved it to a version that stores only one exponent in each
> triple (divisor, index, exponent) pushed on the heap.

I had done that too, but it "only" reduced max RAM on my box (Python
3.14.4, Win10 64-bit) to 4 GB. The heap grew to a maximum size of
45_505_038 entries.

It takes over 3 GB of RAM just to store that many 3-tuples containing
singleton tiny ints.

>>> sys.getsizeof(((1,2,3)))
72
>>> 72 * 45_505_038
3276362736
>>> _ >> 30
3

I have no guess as to how a 1 GB maximum could be possible.

Max heap size was reached after about 192 million divisors had been
generated, before the 200 million mark the other report reached.

But I'll stay out of this now, with apologies for the noise.