Looking at our wiki table here:
https://wiki.mozilla.org/Accessibility/Math_Accessibility
Braille support would come from a library. However, there are a number 
of libraries available now, each supporting different Braille codes. How 
to choose?
The front runners would seem to be liblouisxml and UMCL. A quick 
comparison:
* They are both LGPL.
* Liblouisxml works with liblouis, which NVDA and Orca already use. I'm 
not sure whether that factors in much, and I'm not sure what UMCL uses 
for any text it encounters within the math.
* They both can translate from MathML. However, UMCL also translates 
from TeX. This is done by first transforming the TeX to MathML, which 
liblouisxml could also do. John Boyer mentioned that he's considering 
developing a liblouistex, and was looking for feedback on that.
* They both can translate to the Nemeth, Marburg and British codes. 
However, UMCL also adds French and Italian.
* They are both written in C, but have different formats for the tables. 
UMCL uses XSLT, and liblouisxml uses a special format it has developed.
Some questions would be good to answer. Given the similarities in 
philosophy, license and programming language, is it possible for the two 
projects to join forces? My experience says this is unlikely -- once 
projects go in different directions, it's nearly impossible to share 
code. I'd love for someone to look into the possibility anyway.
Another idea is to come up with a more abstract API for dealing with 
Braille translation in general. This API would allow Braille translation 
services to be installed like plugins. A user could install a Braille 
translation package and the screen reader would add it, possibly 
providing preferences if more than one translator is registered for the 
same type of content. Not only screen readers, but Braille publishing 
systems could also benefit from this approach. It would allow 
translation engines to compete on speed, features and quality 
separately, and make it easier for users to experiment with which ones 
work best, mixing and matching depending on their needs.
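To make the plugin idea concrete, here is a minimal sketch in Python. 
Every name in it (BrailleTranslator, register, translators_for) is 
hypothetical -- this is not the API of liblouisxml, UMCL, or any screen 
reader, just one shape such an abstraction could take:

```python
# Hypothetical sketch of an abstract Braille-translation plugin API.
# A wrapper around liblouisxml or UMCL would each be one plugin.

class BrailleTranslator:
    """Interface a translation plugin would implement."""
    name = "base"
    codes = ()          # Braille codes supported, e.g. ("nemeth",)
    content_types = ()  # input formats, e.g. ("application/mathml+xml",)

    def translate(self, source: str, code: str) -> str:
        raise NotImplementedError

# The host (screen reader or publishing system) keeps a registry.
_registry = []

def register(translator):
    """Called when a Braille translation package is installed."""
    _registry.append(translator)

def translators_for(content_type, code):
    """List candidate translators for some content. If more than one
    matches, the host can expose a user preference for which to use."""
    return [t for t in _registry
            if content_type in t.content_types and code in t.codes]
```

The point of keeping the interface this small is that engines could then 
compete on speed, features and quality behind it, while the host only 
needs to know how to register plugins and pick among them.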
Now, for text-to-speech, a different approach may be needed. I think 
that there is a need to tie more tightly to a given screen reader, so 
that the navigation, voices and rest of the experience matches the rest 
of the screen reader experience. A screen reader project like NVDA would 
probably look for ideas from things like ASTER and latex-access, and 
implement something for the community to at least play with, tweaking as 
they go. Since latex-access is in Python & GPL as well, it would be 
worth investigating whether a few changes could allow it to become part 
of NVDA. Or perhaps the same principle of abstracting the API for this 
should apply to TTS as well as Braille.
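If the abstraction were extended to speech, the interface would have to 
differ from the Braille one in one key way: the screen reader keeps 
control of voices and navigation, so a plugin should hand back navigable 
segments rather than a finished utterance. A hypothetical sketch (none 
of these names come from any existing project, and the tag-stripping 
rule is a toy stand-in for real ASTER- or latex-access-style rules):

```python
import re

class MathSpeechRenderer:
    """Hypothetical interface for a math-to-speech plugin. The host
    screen reader owns voices and navigation commands; the plugin only
    turns math markup into speakable segments."""
    content_types = ()

    def speak(self, source: str) -> list:
        """Return one speakable string per navigable sub-expression,
        so the host can move through them with its own commands."""
        raise NotImplementedError

class SimpleMathMLSpeech(MathSpeechRenderer):
    content_types = ("application/mathml+xml",)

    def speak(self, source):
        # Toy rule: strip tags and treat each remaining token as a
        # navigable segment. A real renderer would apply proper
        # semantic rules for fractions, scripts, and so on.
        return re.sub(r"<[^>]+>", " ", source).split()
```

Whether this lives inside one screen reader or behind a shared plugin 
API is exactly the open question above, but the segment-based return 
value is what would let the navigation experience stay consistent with 
the rest of the screen reader.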
I hope this starts a useful conversation. Thoughts are welcome.
- Aaron