Attached is the cited paper. The maths is beyond me, but the summary is clear enough:
"...we obtain models that are between one and five orders of magnitude faster in training and inference compared with differential equation-based counterparts. More importantly, ...closed-form networks can scale remarkably well compared with other deep learning instances. Lastly, as these models are derived from liquid networks, they show good performance in time-series modelling compared with advanced recurrent neural network models."
Edward