Thank you, Rif (love the name, btw) -- I'll try to keep this super-brief so as not to bore you. We're a K-12 education company. We receive loads of digitized student work, and the R&D team's job is to build online scoring systems that suggest to teachers what each student needs. We already have transformer-based scorers for student writing running in production at scale, so we're comfortable with the operational side of all this. Our other scoring software uses non-DL tools: R libraries, custom code, even C.
So we'd like to unify onto a single platform and toolset, and the _idea_ of TFP seems perfect for this. The catch is that we'd have to retrain a bunch of "traditional" statisticians, so the tooling needs to be solid and, especially, well-documented. The TFP project seemed to have a lot of momentum in 2019 -- videos, conference talks, tutorials -- but not much since then, and the current state of the documentation is really not great, tbh. As for the wider ecosystem: I've taken the Coursera course from Imperial, which also feels like it's aging and is sparsely attended, and the Manning book is superficial and riddled with errors.
I guess I'm trying to get a sense of whether TFP is basically in maintenance mode (if even that), or whether there are Big Plans. I realize, though, that Google may not want to commit one way or the other.