Hello -- Thank you for your email. We would like to have them, and I'm embarrassed we don't. The reason is basically technical: we don't know how to decode and render CTA-708/608 captions from individual segments of the user-data stream, i.e. without requiring a persistent process or inter-segment state (the Puffer system invokes its video and audio filters and encoders to completion, in parallel, on decoded segments of the input). Or, plan B, we don't know how to modify our one persistent ingest process to maintain that intermediate state for the CTA-708/608 stream, so that it can write out individual standalone captions alongside the video/audio segments -- and then how to decode and render those.
If anybody has (or knows somebody with) the technical know-how to solve this problem, we have some funding that could pay for a solution -- please get in touch.
Sincerely,
Keith