Personal musings on the Norman/Watt paper


Tim Daly

Mar 20, 2026, 1:26:27 AM (3 days ago) Mar 20
to fricas...@googlegroups.com, Tim Daly, Arthur Norman, smw...@uwaterloo.ca
This is a very interesting document. Thanks for bringing this to light.

I have several opinionated comments to make. These are clearly not a
criticism of the authors but only my personal viewpoint.

I include a page number for the quotations.

------------------------------------------------------------------

[p3] In our discussion of Aldor [49] and the Axiom [27] system it was
associated with, we note that at the time NAG Ltd took over
responsibility for marketing Axiom it could run on personal-scale
hardware at least in cases where the machine was generously (for the
time) endowed with memory.

[me] I was involved with the NAG transition including spending time in
England during the transfer. Prior to the transfer by a year or so I
had ported Scratchpad to one of the very early laptops where memory
was measured in kilobytes. Several modifications were made to make it
"fit". One is still visible called "compress.nrlib" which I wrote to
replace long symbol names with shorter versions. Axiom would run on
minimal hardware before NAG ever saw it. It fit on three 1.44 MB floppies
and ran under the original Linux.

NAG did a major rewrite of the lowest level Lisp primitives to make
the system "more portable". When they released the code to me in 2000
I "unwrote" their Lisp back to AKCL. In fact it was the death of
William Schelter, the author of AKCL, that alerted me to the fact that
Axiom was withdrawn from the market.

I disagreed with the NAG rewrite portability decision. I built and ran
Axiom on a daily basis on half-a-dozen Lisp systems including Gold
Hill (a small PC version) up to my Symbolics Common Lisp. The build
system was specifically designed to build on all systems in parallel.
Thus any conforming Common Lisp was acceptable.

I will note that Bill Schelter and I spent a lot of time and effort
optimizing AKCL specifically for Axiom. I helped with issues like
tail-recursive calls and garbage collection.

-------------------------------------------------------------------

[p4] Also re-working build scripts and the schemes for packing
software for distribution such that recipients who have both old and
new architectures both have a seamless experience takes effort and
sometimes really detailed work

[me] A great deal of work went into build scripts. Scott Morrison gave
great advice on structuring the builds into system independent and
system dependent steps. The end result was that a simple call to
"make" would kick off a 3 day distributed system build across many
platforms in parallel.

The systems were quite diverse from Microsoft DOS (using DJGPP), IBM's
RS6000 using AIX, IBM RT, a laptop, and a Symbolics machine. The
various "Common Lisps" (which were "evolving in parallel" with the
work) also varied. The Linux box was a reworked IBM PC.

The underlying hardware was diverse. The RT was one of the early
Reduced Instruction Set Computers (RISC), the IBM PS2 had the "micro
channel architecture", the laptop used the 68000, the DOS box used an
8086 with the 8087 floating point unit. The Symbolics machine was Lisp
all the way to the hardware. The RS6000 was another kind of RISC
architecture.

To say "sometimes really detailed work" understates the effort.

-------------------------------------------------------------------

[p4] A more serious “porting” challenge can arise when operating
system vendors change their systems.

[me] Vendor system changes were always a problem. The original build
used X10 and then transitioned to X11. Jim Wen suffered most of that
transition trauma. Scott Morrison did an amazing job on graphics.

The fight between the 68000 and the 8086/8087 was ugly. The systems
were "innovating" on what seemed like an hourly basis.

We also had to deal with an IBM SCCS spinoff (think git) which had the
annoying feature that only one person could "check out" a piece of
code, all others had to wait until it was "checked in".

The RS6000 used 3 disks in a Raid configuration. We centralized all of
our work onto it which was great until one of the disks failed and
thus the system could not reach "quorum". All of the code was "lost"
and had to be recovered by hand scanning the disks in "raw mode".
Sigh.

----------------------------------------------------------------

[p4] Albert Rich had been taking an almost diametrically opposite
approach to indefinite integration, and had been developing a very
extensive rule-based system called RUBI [26].

[p5] The code there is still available and can be useful for those who
have a Mathematica license (or for that company to embed within their
commercial system), but porting it out for use elsewhere has proved
non-trivial [40].

[me] Prior to joining the Scratchpad group I was one of the co-authors
of the IBM Rule-Based system ECLPS based on Charles Forgy's Rete
algorithm so I was quite familiar with rule-based programming.

I had several interactions with Albert based on my efforts to extend
Axiom with RUBI-like rules. It was a non-trivial effort to try to
insert RUBI rules into Axiom's algebraic structure.

RUBI makes "leaps" as I called them that simply stepped over
mathematical structures that Axiom supported. The strong typing of
Axiom made the two systems incompatible, at least at the level of
skill I could bring to bear.

I tried to "lift" both systems results to Hypergeometric Functions
which would regularize both Axiom and RUBI results but failed, mostly
due to needing to learn too much new mathematics. Serge Lang's book
was a bridge too far for me.

That said, I think Albert's work was excellent.
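To make the "rule-based" style concrete for readers who have not seen RUBI: a toy pattern-driven integrator (purely illustrative; the rules and the expression encoding are invented for this note, not taken from RUBI or Axiom) looks roughly like this:

```python
# Toy sketch of RUBI-style rule-based integration (illustrative only;
# these rules and the tuple encoding are invented for this example).
# Expressions are nested tuples: ("pow", "x", 3) means x**3.

def power_rule(expr, var):
    # integral of x**n dx = x**(n+1)/(n+1), with side condition n != -1
    if expr == var:  # bare x is x**1
        return ("div", ("pow", var, 2), 2)
    if isinstance(expr, tuple) and expr[0] == "pow" and expr[1] == var:
        n = expr[2]
        if n != -1:
            return ("div", ("pow", var, n + 1), n + 1)
    return None

def log_rule(expr, var):
    # integral of x**-1 dx = log|x|
    if expr == ("pow", var, -1):
        return ("log", ("abs", var))
    return None

RULES = [power_rule, log_rule]  # order matters in real rule systems

def integrate(expr, var):
    """Try each rewrite rule in turn; None means no rule matched."""
    for rule in RULES:
        result = rule(expr, var)
        if result is not None:
            return result
    return None  # a full system would fall back to algorithmic methods
```

Real RUBI has thousands of carefully ordered rules with side conditions; the point here is only the untyped dispatch style, which is exactly what resists embedding in a strongly typed algebra hierarchy.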

----------------------------------------------------------------

[p6] So there are perhaps three classes of people who worry about
portability:

[3] Developers of open source software that manages to attract
attention and get used, because the users may be distributed across a
wide range of computing environments and will report when the software
fails to behave for them.

[me] Open source software development relies on trust and cooperation.
Shared goals are vital to the success of a project. A great deal of
people-related management skill is also required. As I am barely
able to manage myself, I was clearly not the right leader for an
open source project, a fact which gradually became obvious.

In addition open source requires an enormous amount of time and
commitment to handle a wide range of problems. Reporting an issue
requires a lot of detective work. Reported issues often don't come
with recommended fixes as few people have the expertise to do deep
debugging on complex software systems.

Not everyone has a deep understanding of the various mathematics such
as Trager and Bronstein's integration algorithms. It is even more
challenging when the authors (e.g. Bronstein) are no longer available
for consultation.

Another aspect is to try to understand the domain of the user. For
example, the need to understand Reed-Solomon and Low-Density
Parity-Check codes in order to have a clue about what the user
is actually trying to do. This takes a lot of time.

Another aspect of portability is the merging of and/or cooperation
with various other systems. I spent time corresponding with William
Stein (Sagemath). I tried to integrate Gilbert Baumslag's Infinite
Group Theory work when I was at CCNY. As mentioned above I tried to
merge Rich's RUBI pattern matching. I translated BLAS and LAPACK to
Common Lisp and worked to provide "cover functions" so they could be
used directly.

Even deeper efforts involved spending 6 years at CMU working on
merging LEAN program proof technology (always "in flux") with Axiom to
support proving Axiom's algorithms correct. That work extended all the
way down to the hardware using an FPGA running a RISC-V kernel and an
embedded LEAN proof verifier in Verilog. Challenges included modifying
the ELF file format to proof-carry code.

It ain't easy being free.

----------------------------------------------------------------

[p6] But more than that we are in the third category where we take
pride in keeping the software we support going on older systems but
also arranging that it can take advantage of whatever the latest platform
offers. In that context we also believe that if open source software is to
attract a new generation of maintainers the code should look tidy (and
ideally this associates with portability) and is not limited by only
running on archaic platforms.

[me] Axiom runs on Linux, Windows, Macs, and Raspberry Pis. The Linux
and Raspberry Pi versions are transitioning from X11 to Wayland. The Windows
version runs under Windows Subsystem for Linux (WSL) and the graphics
is remoted to the Windows desktop using a native X11 server.

As for "taking advantage of whatever" there is the growing question of
working with LLMs and re-hosting some of the algorithms using GPUs.
All of the linear algebra algorithms could benefit wildly by using GPU
kernels. In addition attaching to an MCP server would enable answering
mathematical questions using the system as an agent.

----------------------------------------------------------------------

[p6-7] Making code portable to begin with and keeping it able to run
in a changing world is not always glamorous and if done well will
be almost invisible — so we believe that there are not a huge number
of presentations at academic gatherings outside those that specialise
in software engineering that report on it. We hope here to in some
small way address the imbalance where the total work going into
invisible (but challenging) system maintenance may be very comparable
with the amount that goes into developing a new and flashy specialist
extension. That will then need long term support if it is to truly
deliver on its promise!

[me] The only recent report on the effort to merge LEAN and Axiom was
a talk given at Notre Dame. As I have no current University
affiliation there is no point attempting to publish in competition with
those whose grants depend on publications.

----------------------------------------------------------------------

[p8] In each of these cases, the Lisp system isolated the algebra
system from the details of the supporting hardware, and storage
management, system checkpointing and file handling issues were handled
with it — with extended precision integer arithmetic soon joining the
collection of expected capabilities.

[me] I did not participate in the Aldor effort. I felt that it was a
distraction from the focus on improving Axiom. The team did some very
impressive work but the massive effort yielded little of long term
consequence. That's just my biased professional opinion. Others will
disagree.

----------------------------------------------------------------------

[p9] 3.3 1980s and beyond: Lisp and C

[p9] In the mid 1980s, the development of Scratchpad II [28] was
underway. Its implementation used a layered set of programming
languages.

[me] Scratchpad originally had Dick Jenks' META system as the first
stage of compilation. This was the result of his PhD work I believe.
As Bill Burge was constantly changing the SPAD syntax for the
new-new-new version the META compilation step was a waste of time. I
removed it. Dick was most unhappy about that.

----------------------------------------------------------------------

[p9] At the core was the Lisp/VM system, itself built on LAP
assembly code and Lisp bootstrap files. The next layer, including most
of the interactive system and first compiler, was written in Boot, a
syntactic sugaring of Lisp to an Algol-like syntax and with
destructuring assignments and other features that its authors expected
to be particularly relevant while constructing a symbol manipulating
program.

[me] Boot is a pointless language known only to a very few people. It
adds abstraction using macros but adds no value. The idea was that
Boot would eventually be "lifted" to Spad but nobody ever even
entertained more than the idea of that effort. I removed it.

----------------------------------------------------------------------

[p12] The second challenge was of course performance.

[me] Bill Schelter and I spent a LOT of time on AKCL performance. We
used a suite of tools including C-level function counting, memory
churn, time-in-function metrics, "fast path cheating" where we could
pre-compute the answers, deep macro optimizations, cache tracking,
paging tools, etc. to see where time was spent. Every cycle mattered
and was optimized.

I will also comment that Bill was an astonishing programmer. I learned
a lot working with him.

----------------------------------------------------------------------

[p18] the Common Lisp language standard

[me] I read and responded, usually through Fred Blair but occasionally
directly to the Common Lisp community thread, to every proposal and
publication the X3J13 produced. I ran every version of Common Lisp
available. I got pre-release versions of Lucid Common Lisp to test. I
was responsible for large portions of the effort to move Scratchpad
from the Lisp/VM mainframe version to other hardware running Common
Lisp versions.

----------------------------------------------------------------------

[p19] libraries of formalized mathematical proofs (e.g. Lean's
mathlib). As yet, these areas are not problematic, but it is not
unreasonable to anticipate that one or more of them may become
difficulties until standardised.

[me] LEAN 3 has matured to LEAN 4 which is a full programming language
that generates code that can be proven correct by the kernel.

It is my opinion that Spad should be re-implemented (rather more
cleanly) in LEAN 4, which would make it possible to prove the algebra
correct. Correct computational mathematics is a worthwhile goal.
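As a small illustration of that claim (a toy example of my own devising, not from any Axiom or Spad source), LEAN 4 lets an executable definition and a kernel-checked theorem about it sit side by side:

```lean
-- A Lean 4 definition that is both runnable code and a proof subject.
def double (n : Nat) : Nat := n + n

-- The kernel checks this proof; `rfl` succeeds because `double n`
-- unfolds definitionally to `n + n`.
theorem double_def (n : Nat) : double n = n + n := rfl

#eval double 21  -- 42
```

A re-implemented Spad could, in the same spirit, carry proofs of its algebraic properties alongside the code they describe.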

----------------------------------------------------------------------

[p22] The second conclusion is that portability is not automatically
easy, and the main challenges have altered over time-scales that are
relevant to some software

[me] There needs to be a focus on LLMs, MCP, and Agents. Without this
connection all of the current efforts will be irrelevant in the very
near future. If an agent can't talk to your system you won't continue
to exist.

----------------------------------------------------------------------

[p23] The final conclusion is that the people who care about portability
are mostly invisible, but the rest of the world depends on the work they
do. Take, for example, the many, many who keep Linux afloat. For (much)
smaller projects there is a (much) smaller pool of expertise, but
maintenance and porting are very close neighbours.

[me] It is not only portability it is the need for long-term goals.
Knowing where you want to go is fundamental.

----------------------------------------------------------------------

As mentioned at the outset this is just my opinion so feel free to
ignore me. I felt the need to respond as the paper misses so much.

I have the utmost respect for the people working on this project.
Nothing I have said is personally directed or intended.

Tim

Hill Strong

Mar 20, 2026, 7:59:20 AM (2 days ago) Mar 20
to fricas...@googlegroups.com, Tim Daly, Arthur Norman, smw...@uwaterloo.ca
Tim,

Thank you for your review and history. This is the kind of thing that goes missing over time.



--
You received this message because you are subscribed to the Google Groups "FriCAS - computer algebra system" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fricas-devel...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/fricas-devel/CAJn5L%3D%2BvDt1QT56aYO73-%2B0_WUPARNHcLYkPTfiVwWM0u1ZMBw%40mail.gmail.com.

Waldek Hebisch

Mar 20, 2026, 9:30:53 AM (2 days ago) Mar 20
to fricas...@googlegroups.com
On Fri, Mar 20, 2026 at 01:26:12AM -0400, Tim Daly wrote:
>
> [p4] Albert Rich had been taking an almost diametrically opposite
> approach to indefinite integration, and had been developing a very
> extensive rule-based system called RUBI [26].
>
> [p5] The code there is still available and can be useful for those who
> have a Mathematica license (or for that company to embed within their
> commercial system), but porting it out for use elsewhere has proved
> non-trivial [40].
>
> [me] Prior to joining the Scratchpad group I was one of the co-authors
> of the IBM Rule-Based system ECLPS based on Charles Forgy's Rete
> algorithm so I was quite familiar with rule-based programming.
>
> I had several interactions with Albert based on my efforts to extend
> Axiom with RUBI-like rules. It was a non-trivial effort to try to
> insert RUBI rules into Axiom's algebraic structure.
>
> RUBI makes "leaps" as I called them that simply stepped over
> mathematical structures that Axiom supported. The strong typing of
> Axiom made the two systems incompatible, at least at the level of
> skill I could bring to bear.
>
> I tried to "lift" both systems results to Hypergeometric Functions
> which would regularize both Axiom and RUBI results but failed, mostly
> due to needing to learn too much new mathematics. Serge Lang's book
> was a bridge too far for me.
>
> That said, I think Albert's work was excellent.

I think that there are no principal difficulties integrating
Rubi or something Rubi-like as long as you accept the fact that
it is a "foreign body". That is, it needs its own data structures,
which probably should be organized as a specific domain.

I think that Albert did not fully understand open source dynamics.
Namely, nobody wanted to fork Rubi, that is, maintain their own
version. But Rubi was distributed as Mathematica code, so
using "official" Rubi means needing to implement a Mathematica
parser. There was a smaller annoyance: Albert used various
special symbols to name rules, so merely tracking the evolution of
a single rule required some effort. A bigger problem was
dependence on specific features of Mathematica. One example
of this is the use of Mathematica's partial fraction decomposition;
that one is easy to emulate in another system once you understand
that Rubi uses it. But apparently there are subtler dependencies
on the Mathematica pattern matcher. Namely, IIUC the Sympy folks implemented
Mathematica-compatible data structures and a pattern matcher, but
their success rate on the Rubi test suite was quite low. They attributed
the problem to differences between their pattern matcher and the
Mathematica one.

From my point of view a big problem with Rubi is bulk: the rules amount
to about 20000 lines. Most of the rules are quite uninteresting;
they just implement special cases of Hermite integration (which
is a rather simple algorithm). But if one wants to mechanically
adapt Rubi rules, then one needs to deal with Mathematica
specifics. Due to the bulk, trying to find interesting rules and
adapting them by hand would require quite a lot of work.

One extra remark about hypergeometric functions: IMO their use
is misleading. "True" hypergeometric functions should never
appear in the result of integrating an elementary function. What
may appear are "degenerate" cases, where the hypergeometric
function reduces to a Liouvillian one. In particular, probably
about 90% of the cases where the Rubi test suite has an answer in terms
of hypergeometric functions should be expressed in terms of the
incomplete beta function.
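For concreteness, the standard identity behind this reduction (a well-known fact, cf. DLMF 8.17.7) expresses the incomplete beta function as a degenerate Gauss hypergeometric:

```latex
% Incomplete beta function as a degenerate {}_2F_1 (DLMF 8.17.7)
B_x(a,b) = \int_0^x t^{a-1}(1-t)^{b-1}\,dt
         = \frac{x^a}{a}\,{}_2F_1(a,\,1-b;\,a+1;\,x)
```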

Another thing is that AFAIK Albert never explained the methods that
he used to create and manage the rules. The basic approach is clear:
namely, he systematically generated a collection of expressions,
fed them to various CAS-es (mainly Mathematica), and looked
for expressions that a given CAS could integrate. This led
to initial candidates for rules. But beyond that, it is not clear
how he handled possible redundancy, how he resolved possible
cycles, etc.

BTW: Mathematica has an algorithm that expresses various integrals
in terms of hypergeometric functions. Clearly Mathematica output
is the source of hypergeometric functions in Rubi results.

--
Waldek Hebisch

Tim Daly

Mar 20, 2026, 10:24:58 AM (2 days ago) Mar 20
to Arthur Charles Norman, fricas...@googlegroups.com, smw...@uwaterloo.ca
Given the many years at IBM Research and the last 25 years of
maintaining Axiom as open source, combined with my obvious
lack of social skills, I fear I am too biased to be rational.

There is a lot of history that never saw the light of day. For example,
the original NAG release to me needed a running Axiom to build a
new Axiom. NAG would not let me distribute their binary so it took a
year or so to refactor the code to build stand-alone. In this regard Gilbert 
Baumslag (CCNY) deserves a lot of credit. My original employment contract 
stipulated that I would not do any development work on CCNY time
or equipment as that would enable them to claim ownership. Gilbert
eventually said I could do some of the development during working
hours if I also did work on Magnus, his Infinite Group Theory software.
So a fair portion of my time was spent trying to merge the two code piles.
That effort sorta-worked but never saw the light of day for non-technical
reasons, thus "dying on the vine". Lost among that heap was a truly
unique mathematical user interface, much better than anything existing
today. I called it the Apple GUI vs DOS text transition.

Gilbert wanted his Magnus system introduced into the wider computer
algebra community. To that end I created a "live CD" that could run all
of the currently available computer algebra and Magnus software.
I gave out copies of the live CD at the next conference, which pleased him.

The almost daily work on the system for the last 25 years has been one
of the great rewards of my life. I got to work with and correspond with some
of the best-and-brightest in the world, something my talents didn't
really deserve.

Sadly "The 30 Year Horizon" goals (e.g. literate programming, proving
Axiom's algorithms correct, etc.) were never accepted by the community.

I felt the stated goals were fundamental. For instance, without literate
programming there will come a time when no-one knows enough to
maintain the code. Without proofs Axiom isn't "mathematics", it is just
some seriously intelligent hand-waving. Most don't agree. But there is
a world of difference between "it works for me" and creating a solid
foundation for future mathematics research. I want the latter.

But, as I said, I'm something of a curmudgeon. :-)

It was good to hear from you. I hope you and yours are doing well.
If you have the misfortune of visiting Pittsburgh look me up.
I live near CMU and the airport.

Tim



On Fri, Mar 20, 2026 at 6:56 AM Arthur Charles Norman <ac...@cam.ac.uk> wrote:
Thank you very much. I am just starting a trip around Patagonia and all your comments and insights call for more time and a bigger screen to absorb properly. I suspect related experiences viewed from differing perspectives may reveal varying emphasis, but on a very quick scan I think I see you really having to confront the pain of keeping stuff working and up to date and to your taste. Re your very last remark, knowing where you want to go is vital for the big project, but does not help firefighting operating system and underpinning library and compiler etc changes. Keeping software alive in any sense at all takes a lot of work! Doing so ideally needs superhuman energy and skill.
But now I am off to say hello to elephant seals!!!!!!
Arthur



Tim Daly

Mar 20, 2026, 10:40:20 AM (2 days ago) Mar 20
to Arthur Charles Norman, fricas...@googlegroups.com, smw...@uwaterloo.ca
On another interesting note I have a "common law trademark" in the word
"Axiom" when applied to computational mathematics. There are dozens of
other uses of the word (e.g. the new Axiom rocket company) which are not
infringing uses.

The trademark law requires me to "actively defend the trademark" or I lose it.
I really hate that part as I hate studying law but was forced to.

There is a new company axiommath.ai using the trademark without permission.
They have not responded to my attempts at contact.

Given that this company just raised dozens of millions and is run by a person
who has legal training I suspect I will lose the trademark. The potential downside
would be that Axiom would need to be removed from the public and I could no
longer publish using that term.

The really painful part for me is that they are working with LEAN and CMU,
something I have devoted over a decade of effort to.

Forking was painful but legal erasure feels worse.

Tim



Dima Pasechnik

Mar 20, 2026, 10:57:13 AM (2 days ago) Mar 20
to fricas...@googlegroups.com
there is a grant call for use of AI in maths software etc, see
<https://www.renaissancephilanthropy.org/insights/renaissance-philanthropy-and-xtx-markets-additional-13-million>

The deadline for an initial submission is pretty close, but we can still throw together a few things - I am writing a proposal along the lines of, among other things, securing the future of existing computer algebra software.

Dima

Tim Daly

Mar 20, 2026, 11:32:38 AM (2 days ago) Mar 20
to FriCAS - computer algebra system
Don't bother. There was a fund announcement many years ago to fund
various open source projects. I won one of the grants. Within a week
they withdrew the funding and gave it to a different project. I have applied
to a dozen companies (IBM, TI, etc.) and a few Philanthropic funds (one was
run by Ralph Gomory, the former head of IBM Research). I also tried for various NSF
and Navy research grants. My DARPA contact told me off the record that they
would never consider Axiom for funding as it was not "leading edge" but was
considered "old 70s technology". Sigh.

In addition this work does not fit their goals.

Their goal statement reads:

The AI for Math Fund seeks to advance the space and impact of math
discovery by supporting projects that are important for the field, yet 
unlikely to be undertaken by any single academic or industry lab. 
The fund will support projects that
1) are less likely to happen in a business-as-usual scenario; and
2) have the potential to advance the field as a whole.

These include: developing open-source, production-quality tools; 
increasing the size, diversity, and quality of datasets required for 
training AI models; and increasing the ease-of-use of tools so that 
they are adopted by mathematicians.

Goal 1 fails since Mathematica exists.
Goal 2 fails as there is no interest or effort being applied to
things like connecting to an MCP server and agents and this
is clearly an AI-focused company. No AI, no funds.

Philanthropy is defined as the desire to promote the welfare of others, 
expressed especially by the generous donation of money to good causes.

From their mission statement "The organization designs time-bound, 
thesis-driven funds led by field experts". As far as I know there are no
"thesis-driven" efforts in this open source project.

Of course, after 11 years in the PhD program in my past, perhaps this
is my "golden ticket" to a long-desired PhD :-)

Tim

Dima Pasechnik

Mar 20, 2026, 11:53:21 AM (2 days ago) Mar 20
to fricas...@googlegroups.com


On March 20, 2026 10:32:38 AM CDT, Tim Daly <axio...@gmail.com> wrote:
>Don't bother. There was a fund announcement many years ago to fund
>various open source projects. I won one of the grants. Within a week
>they withdrew the funding and gave it to a different project. I have applied
>to a dozen companies (IBM, TI, etc.) and a few Philanthropic funds (one was
>run by Ralph Gomery the ex-IBM Research CEO). I also tried for various NSF
>and Navy research grants. My DARPA contact told me off the record that they
>would never consider Axiom for funding as it was not "leading edge" but was
>considered "old 70s technology". Sigh.
>
>In addition this work does not fit their goals.
>
>Their goal statement reads:
>
>The AI for Math Fund seeks to advance the space and impact of math
>discovery by supporting projects that are important for the field, yet
>unlikely to be undertaken by any single academic or industry lab.
>The fund will support projects that
>1) are less likely to happen in a business-as-usual scenario; and
>2) have the potential to advance the field as a whole.
>
>These include: developing open-source, production-quality tools;
>increasing the size, diversity, and quality of datasets required for
>training AI models; and increasing the ease-of-use of tools so that
>they are adopted by mathematicians.
>
>Goal 1 fails since Mathematica exists.

Mathematica is known to have long-standing bugs, and it falls well short on many specialised areas, such as group theory or commutative algebra/algebraic geometry.

Because it is closed source, there is no way it can be certified to be correct by the community
(insert appropriate quote from Yuri Manin here: 'A proof only becomes a proof after the social act of "accepting it as a proof".')


>Goal 2 fails as there is no interest or effort being applied to
>things like connecting to an MCP server and agents and this
>is clearly an AI-focused company. No AI, no funds.
>

They are apparently funding a full-blown effort of developing a computer algebra system in Idris (?), seemingly from scratch, AI or not. I know a few UK academics who left to work on this project.

But any new system needs to be benchmarked and tested against old ones.

Tim Daly

Mar 20, 2026, 12:21:28 PM (2 days ago) Mar 20
to FriCAS - computer algebra system
>They are apparently funding a full-blown effort of developing a computer 
> algebra system in Idris (?), seemingly from scratch

That's a mistake. I spent years "qualifying" all of the available proof
systems to use in Axiom, from the J Strother Moore's work in Austin
to Lamport's system to Idris, Coq, etc.

I'd love to read THAT funding proposal.

In my professional opinion they chose the wrong path. So far the
only system I trust at this point is LEAN.

And why, oh why, "start from scratch"? Do they really think they have
the deep expertise to correctly implement integration? Davenport,
Trager, Bronstein, etc. don't come along all that often. Risch is hard.

Tim

Dima Pasechnik

Mar 20, 2026, 12:49:16 PM (2 days ago) Mar 20
to fricas...@googlegroups.com
On Fri, Mar 20, 2026 at 11:21 AM Tim Daly <axio...@gmail.com> wrote:
>
> >They are apparently funding a full-blown effort of developing a computer
> > algebra system in Idris (?), seemingly from scratch
>
> That's a mistake. I spent years "qualifying" all of the available proof
> systems to use in Axiom, from J Strother Moore's work in Austin
> to Lamport's system to Idris, Coq, etc.
>
> I'd love to read THAT funding proposal.
>
> In my professional opinion they chose the wrong path. So far the
> only system I trust at this point is LEAN.
I would not trust Lean too much; they are too hype-oriented.
Although one cannot blame them: they come from industry research labs,
and they always had to justify their existence in a way understood by
pointy-headed suits.

>
> And why, oh why, "start from scratch"? Do they really think they have
> the deep expertise to correctly implement integration? Davenport,
> Trager, Bronstein, etc. don't come along all that often. Risch is hard.

They start from the very top, from computable category theory, as far
as I understand.
Although perhaps I misunderstood what they are doing. They say they "verify AI".
Here is their proposal:
https://www.renaissancephilanthropy.org/a-structured-representation-of-tactics-for-machine-assisted-theorem-proving

Tim Daly

Mar 21, 2026, 3:04:50 AM (yesterday) Mar 21
to FriCAS - computer algebra system
> I would not trust Lean too much, they are too much hype-oriented.

LEAN relies on reducing everything to checking by a small trusted kernel.
What you program in LEAN 4 is "compiled" into something that
has to pass the kernel. All of the proofs have to pass the kernel,
so it all comes down to whether the kernel is correct. If you have an
algorithmic proof of the GCD it has to pass through the kernel.
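As a toy illustration of that kernel-checking point, here is a minimal Lean 4 sketch. The theorem name `gcd_of_self` is my own; `Nat.gcd_self` is, to my knowledge, a lemma in Lean's core library:

```lean
-- Whatever surface syntax or tactics produced it, the elaborated proof
-- term must be accepted by Lean's small kernel before the theorem is
-- admitted into the environment.
theorem gcd_of_self (n : Nat) : Nat.gcd n n = n :=
  Nat.gcd_self n
```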

> I'd love to read THAT funding proposal.

The proposal you sent tries to find tactics automatically. Tactics
are "compiled forms of reasoning". Rather than walking through
the primitive steps of reasoning, the tactic performs the steps.

Think of a tactic as something you might mention in a lecture
where you say "I rely on the known fact that every number has
a prime representation" without having to include the steps.
The tactic steps are expanded before the kernel sees them
so they are still required to be correctly applied.
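For instance, a decision-procedure tactic like Lean's `decide` stands in for many primitive steps, yet the proof term it produces is still kernel-checked (a sketch; small concrete goals like this are where `decide` applies):

```lean
-- `decide` compresses the computation into one step, but the kernel
-- still checks the resulting proof term, so the hand-waving is
-- "proven hand-waving".
example : Nat.gcd 12 18 = 6 := by decide
```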

Using AI to collect these "proven hand-wavings" is interesting.
The proposal seems to suggest "watching" forms of reasoning
and generalizing them to form tactics. This is interesting and
worth funding. Then you can "hand-wave" about primes.

>They start from the very top, from computable category theory, as far
> as I understand.

Category theory is very general, trying to find fundamental patterns.
It is developing an interesting general language for talking about the
structure of mathematics, similar to the development of group
theory but at a more general level. Category theory is not by itself a
basis for proofs but, like Axiom, it could form a general tower of "categories".

My work on the SANE (rational, coherent, judicious, sound) version
of Axiom builds a new LEAN tower parallel to the category and domain
towers. Each layer of the tower holds theorems. These theorems
are available to proofs of domain functions. Thus when trying to prove
the GCD function in Axiom there is a whole tower of theorems that
can be applied. Once the GCD function is proven the proof is added
to the theorem tower. Later functions which use the GCD can refer
to that theorem in the proof.
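The "theorem tower" idea can be sketched in Lean 4; the name `tower_gcd_comm` is illustrative, and I am assuming `Nat.gcd_comm` is available from the standard library or Mathlib:

```lean
-- A theorem proven once joins the tower and is stored by name.
theorem tower_gcd_comm (a b : Nat) : Nat.gcd a b = Nat.gcd b a :=
  Nat.gcd_comm a b

-- A later function's proof obligation cites the stored theorem
-- instead of re-deriving it.
example (a : Nat) : Nat.gcd a 0 = Nat.gcd 0 a := tower_gcd_comm a 0
```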

In the ideal case the Axiom functions would be written in LEAN
rather than SPAD. In the current case all of the functions have
been translated to Common Lisp. Everything everywhere has been
"compiled" to Common Lisp, so SPAD is gone.

This has the advantage that I can use things like CLOS. This allows 
deep "adjustments" such as walking the Axiom type tower in novel
ways during type lookup. It also allows the LEAN theorem tower
theorems to be automatically collected as part of the type lookup.

This has the disadvantage that you either "get Lisp" or you don't.
This "epiphany" is a rare thing, almost religious in nature :-)
So between the three of us (me, myself, and I) the circle is small :-)

Tim

Tim Daly

Mar 21, 2026, 8:35:20 AM (yesterday) Mar 21
to FriCAS - computer algebra system
Andrej Karpathy On Code Agents, Autoresearch, and the Loopy Era of AI

I'm pondering how this affects computer algebra research.
Can we take a math research result and ask an AI to implement it?

One of my areas of research is cryptography. One-time pads are
provably secure. If you have an infinite stream of one-time pad
data you can encrypt anything. There are infinitely many
irrational numbers between 0 and 1. Like pi's, their digits can be
computed but have no obvious pattern. So if you and a partner
choose one of these numbers you have a secure one-time pad.
Is there a way to choose these irrationals and implement the
generation of the infinite expansion? Doing so would completely
eliminate the quantum computing threat.
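A minimal sketch of the digit-stream idea, using sqrt(2) purely as a stand-in for a secretly chosen irrational (all function names here are my own; this is illustrative, not a vetted cryptosystem):

```python
# Sketch: generate fractional decimal digits of an irrational number
# with exact integer arithmetic, then use them as a pad for XOR.
from math import isqrt

def sqrt2_digits(n):
    """Return the first n fractional decimal digits of sqrt(2) as a string."""
    # isqrt(2 * 10**(2n)) is sqrt(2) * 10**n truncated to an integer,
    # e.g. n=5 gives 141421.
    s = str(isqrt(2 * 10 ** (2 * n)))
    return s[1:1 + n]          # drop the leading "1", keep n fractional digits

def pad_bytes(n):
    """Pack pairs of digits into n pad bytes (each in 0..99)."""
    d = sqrt2_digits(2 * n)
    return bytes(int(d[2 * i:2 * i + 2]) for i in range(n))

def xor(data, pad):
    """XOR is its own inverse, so the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, pad))

msg = b"RISCH"
pad = pad_bytes(len(msg))
ct = xor(msg, pad)
assert xor(ct, pad) == msg     # decryption recovers the plaintext
```

Both parties can regenerate the same unbounded pad on demand from the shared choice of irrational, which is the property the paragraph above is after.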

For a programmer and pseudo-mathematician like me, the whole game has changed.
How can we apply this in computer algebra? Perhaps that is a path toward
getting funding from the Philanthropy group.

Tim