I assumed MathML would be involved when I first started thinking about this as well. In theory we could implement support for this inside MathML, but I think it would involve using MathML in a non-standard way.
Essentially, what we want is something akin to numpy.average(foo, axis=0), i.e. a function that takes a second 'axis' argument. But the 'mean' function (or max, min, etc.) in MathML has no such second argument; it can only take a flat list of values and collapse them all to a single scalar.
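To make the contrast concrete, here is a small numpy sketch (the array values are made up for illustration): the 'axis' argument is exactly the piece MathML's 'mean' lacks.

```python
import numpy as np

# Hypothetical 2-D result set: 3 repeats x 4 time points.
foo = np.array([[1.0, 2.0, 3.0, 4.0],
                [2.0, 3.0, 4.0, 5.0],
                [3.0, 4.0, 5.0, 6.0]])

# Reduce over the 'repeat' dimension only: one mean per time point.
per_time = np.average(foo, axis=0)   # shape (4,)

# Without 'axis', everything collapses to a single scalar -- the only
# behavior MathML's <mean/> can express.
overall = np.average(foo)
```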
(I suppose MathML v3 might have something we could use more cleanly here, but that would clearly involve a larger change to SED-ML than we can afford right now.)
The other problem with this approach is that in general, any MathML that we use in SED-ML is assumed to work on an element-by-element basis. In essence, each symbol in the MathML is interpreted as a scalar; the vectors/matrices are built up from that scalar level, and we never work 'backwards' from a vector to a scalar again.
There are ways to 'annotate' elements in MathML, so it would be possible to annotate the 'mean' function to say 'leave this dimension'. This is a viable possibility in theory, but we've struggled to make the concepts work in our previous meetings.
This way of encoding 'dimension-reducing' functions allows us to 'work backwards' in a way that is cleanly separated from the MathML, which can remain at the scalar level for all symbols. I don't remember who came up with this scheme, other than that it wasn't me--I remember the discussion that led to the concept of the 'RemainingDimensions' class, but not how it ended up getting used outside of the MathML instead of inside it. But having implemented (basic) support for it at this point, I do like it.
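A toy sketch of the idea (the function and dimension names are mine, not the SED-ML schema): the math itself stays scalar ('mean of x'), while a separate list of dimension ids, in the spirit of 'RemainingDimensions', says which axes survive the reduction. Every axis not listed is collapsed.

```python
import numpy as np

def reduce_keeping(data, dim_names, remaining_dims):
    """Collapse every axis NOT listed in remaining_dims with a mean.

    Hypothetical helper: the scalar-level math ('mean') lives here,
    while remaining_dims plays the role of the annotation that is
    kept outside the MathML.
    """
    axes = tuple(i for i, name in enumerate(dim_names)
                 if name not in remaining_dims)
    return data.mean(axis=axes)

# Made-up 3-D result set: (repeat, species, time).
runs = np.arange(24.0).reshape(2, 3, 4)

# Keep only the 'time' dimension; 'repeat' and 'species' are averaged out.
per_time = reduce_keeping(runs, ('repeat', 'species', 'time'), {'time'})
```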
-Lucian