For the cognoscenti, following up on a -announce post just now. There is even
an AI tale if you read far enough.
Support for LaTeX's \intertext{} feature may have been a rookie mistake. But
we have it, and it is something rather nice that demonstrates LaTeX's prowess at
typesetting. So it has not gone away.
A nice thing about XSL is that, if you are outputting HTML/XML, it is
impossible to emit an opening element without its closing element - being
unbalanced simply cannot happen. At first this is frustrating, but then you
realize it encourages good habits. Not so when you output LaTeX - as far as
XSL knows, you are just making text. So you can have a template output a
\end{align} followed by a paired \begin{align}. And we did.
text. So you can have a template output a \end{align} followed by a paired
\begin{align}. And we did.
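As a hypothetical illustration (not the actual PreTeXt template, and the
element name "intertext" here is just our schema's): an XSL template like
this emits delimiters that are unbalanced within each text node, and the
processor is perfectly happy, since it is all just character data to it.

```xml
<!-- Hypothetical sketch: interrupt a LaTeX align environment with
     intervening text.  The \end{align} and \begin{align} below are
     plain text to the XSL processor, so nothing checks balance. -->
<xsl:template match="intertext">
  <xsl:text>\end{align}&#xa;</xsl:text>
  <xsl:apply-templates/>
  <xsl:text>&#xa;\begin{align}&#xa;</xsl:text>
</xsl:template>
```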
Until today, we just naturally made something like a \begin{align} with
intervening \intertext{} and a concluding \end{align} - *for the conversion to
LaTeX*. Once MathJax gets involved, we made each contiguous run of "mrow" into
its own "md" and the "intertext" just became intervening text. Any notion of
preserving alignments was surrendered. Once for HTML, and once for WeBWorK
(PG), and all the conversions derived from the HTML code.
Now, we are exploding a single "md" with "intertext" into lots of "md" with
intervening text *in the pre-processor*. But we also record information about
the structure of the original "md". So now *every* conversion has surrendered
any hope of preserving alignments. But for LaTeX output only, enough
information is recorded during the explosion that *we can reconstruct one big
LaTeX math environment with \intertext{}*. Quality typesetting preserved.
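Here is a minimal sketch of the idea in Python, assuming a toy version of the
schema (the real pre-processor is written in XSL, and the element and function
names here are invented for illustration): split one "md" at each "intertext",
keep the pieces grouped, and then, for the LaTeX path only, use that grouping
to rebuild a single align with \intertext{}.

```python
# Hypothetical sketch of the explosion, NOT the actual PreTeXt code:
# one <md> containing <intertext> becomes several groups of <mrow>,
# with intervening text, and the grouping is retained so the LaTeX
# conversion can reassemble one big aligned environment.
import xml.etree.ElementTree as ET

SOURCE = """<p>
<md>
  <mrow>a = b</mrow>
  <mrow>c = d</mrow>
  <intertext>and therefore</intertext>
  <mrow>e = f</mrow>
</md>
</p>"""

def explode(md):
    """Split one md into an ordered list of ("md", [mrow, ...]) and
    ("text", string) pieces, preserving the original structure."""
    pieces, current = [], []
    for child in md:
        if child.tag == "intertext":
            pieces.append(("md", current))
            pieces.append(("text", child.text))
            current = []
        else:
            current.append(child)
    pieces.append(("md", current))
    return pieces

def reconstruct_latex(pieces):
    """LaTeX-only path: rebuild a single align with \\intertext{},
    so alignment across the interruption is preserved."""
    lines = [r"\begin{align}"]
    for kind, payload in pieces:
        if kind == "md":
            for mrow in payload:
                lines.append(mrow.text + r"\\")
        else:
            lines.append(r"\intertext{%s}" % payload)
    lines.append(r"\end{align}")
    return "\n".join(lines)

md = ET.fromstring(SOURCE).find("md")
print(reconstruct_latex(explode(md)))
```

Every other conversion would consume the exploded pieces as separate displays
with ordinary text in between; only the LaTeX path calls something like
reconstruct_latex to stitch the alignment back together.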
[@Alex - could you look at the before/after diff for PG? Some macro definitions
are being added (good?), there is an extra newline at the end of displayed math
(harmless?), and the intertext now seems to be wrapped in a #p (too much? and
why?).]
The explosion in the pre-processor is a bit delicate. (Study
https://github.com/PreTeXtBook/pretext/commit/f2baea87c7e5156dcd80beb37812e263b2a3e928
if you want to really increase your XSL-fu.) So I thought I would give careful
instructions to Chat GPT to see what I might get back. First draft was really
not bad. But two naive approaches to certain aspects. Quickly corrected when I
constructively complained. Then I looked at the part I really thought I needed
help with (collecting maximum groups of #md between #intertext). Looked good
at first. No, on more careful inspection, it was very wrong. Once I pointed it
out, it was corrected. Now I complained that the result was overly complicated,
and suggested a simpler alternative. My turn, I was wrong. And I got an
explanation that accurately identified the misunderstanding I was making.
Interesting experience, and the final result is quite clean, I think.
Rob