LTC bit depth

Rich Walsh

Oct 13, 2018, 5:41:00 AM
to ql...@googlegroups.com
I have a dumb LTC question.

I need to generate a new LTC audio file (there are reasons I can't have QLab generate it on the fly). I've found an online resource to do it, as the only DAW I know of that can is Reaper, which I don't have. There is a bit-depth option of "8-bit unsigned int" or "16-bit signed int", and I can't find any reference to the difference… Which one is "standard"? Is there a clever way of analysing the existing file I have used successfully before to work out which it is?

Thanks.

Rich

fishmonkey

Oct 14, 2018, 4:35:03 AM
to QLab
my guess would be that 16-bit is more commonly used, but either should work fine, since they both effectively contain the exact same audio signal.

you can compare raw LTC audio files by opening them in QuickTime Player and using the Inspector.

Rich Walsh

Oct 14, 2018, 5:04:56 AM
to ql...@googlegroups.com
I have many tools alongside QuickTime that can tell me that the file is currently 24-bit, but that doesn't tell me the bit depth of the data, i.e. how many of those 24 bits are actually being used. If the file has been normalised I can't even use something like Ozone's Bit Meter: the bottom 8 bits will still be "used" even if they only contain zero crossings.

However, watching the flashing bits, it looks like the bottom 8 follow a pattern that the top 16 (or in fact 14 of them, as it's not peaking at 0 dBFS) don't, so I'm going to go with it being 16-bit originally. With an 8-bit source file, the bottom 16 bits flash in a pattern when converted to 24-bit. I think this is probably the tool I was trying to remember.
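
(For anyone wanting to do the same check without staring at a bit meter, here is a rough Python sketch. It's only an illustration, assuming a 24-bit PCM WAV that hasn't been normalised or otherwise reprocessed; "ltc.wav" is a placeholder filename. If the bottom byte of every sample is zero, the file was almost certainly padded up from a 16-bit original.)

    import wave
    from collections import Counter

    def low_byte_histogram(path):
        """Count values of the least significant byte of every 24-bit sample."""
        with wave.open(path, "rb") as w:
            if w.getsampwidth() != 3:
                raise ValueError("expected 24-bit PCM")
            frames = w.readframes(w.getnframes())
        # 24-bit WAV samples are consecutive 3-byte little-endian groups
        # (channels interleaved), so every third byte starting at 0 is an LSB.
        return Counter(frames[0::3])

    hist = low_byte_histogram("ltc.wav")  # placeholder filename
    if set(hist) == {0}:
        print("bottom 8 bits always zero: probably padded up from 16-bit")
    else:
        print("bottom 8 bits carry data: genuine 24-bit, or the file has been processed")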

Rich

fishmonkey

Oct 14, 2018, 5:57:10 AM
to QLab
oh, i misunderstood what you are asking. i was assuming that the two generated signals would be the same, except that the 16-bit one potentially has a lower noise floor.

if you are planning on increasing the bit depth to 24 bits later anyway, isn't this a moot point?

Rich Walsh

Oct 14, 2018, 6:21:25 AM
to ql...@googlegroups.com
I realise I'm slightly thinking out loud on this, and wasn't paying proper attention. The bit depth does of course relate to the WAV file the site makes, not to the way the timecode itself is encoded, which is made up of 80-bit words encoded as a bi-phase signal oscillating between two audio tones, I think. Something like that anyway.
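
(As a toy illustration of that bi-phase idea, ignoring the real 80-bit LTC word layout and sync pattern entirely: the level flips at every bit boundary, and flips again mid-bit for a 1, which is why a run of 1s comes out as a tone at twice the frequency of a run of 0s. A rough sketch in Python:)

    # Toy sketch of bi-phase mark encoding, not real LTC framing:
    # returns one square-wave level (+1/-1) per half-bit period.
    def biphase_mark(bits, level=1):
        out = []
        for b in bits:
            level = -level        # transition at the start of every bit cell
            out.append(level)
            if b:
                level = -level    # extra mid-cell transition encodes a 1
            out.append(level)
        return out

    print(biphase_mark([0, 1, 1, 0]))
    # [-1, -1, 1, -1, 1, -1, 1, 1]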

The choice is between an 8-bit audio file and a 16-bit audio file to represent that analogue signal. The "unsigned" bit threw me… I thought there were two different types of LTC on offer!

You're right, if it is just about audio sampling depth it doesn't matter.

Thanks.

Rich

richardmoores

Oct 14, 2018, 6:24:19 AM
to QLab
Doesn't the integer type refer to the PCM encoding, irrespective of whether the file contains music or timecode? I think all 8-bit WAVs use unsigned ints by convention, and 16-bit WAVs use signed ints. Most DACs would honour that convention, I should think.
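
(Just to illustrate that PCM convention with a rough, non-authoritative sketch: 8-bit WAV samples run 0 to 255 with silence at 128, while 16-bit samples are signed with silence at 0, so either file carries the same waveform.)

    # Rough sketch of the WAV sample conventions, nothing LTC-specific:
    # 8-bit PCM is unsigned (silence = 128), 16-bit PCM is signed (silence = 0).
    def u8_to_s16(sample_u8):
        return (sample_u8 - 128) << 8  # re-centre, then shift into the top 8 bits

    for s in (0, 128, 255):
        print(s, "->", u8_to_s16(s))
    # 0 -> -32768, 128 -> 0, 255 -> 32512
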
The LTC encoding is another thing… that's 80 bits of Manchester code or something like that.

Apologies if I've got the wrong end of the stick here.
r

John Huntington

Oct 14, 2018, 9:29:04 AM
to QLab
AFAIK, as you said, the bit depth refers to the WAV file, not the timecode.

Your 80-bit description of LTC is correct: it's digital bits encoded using bi-phase modulation in the audio frequency range. So the digital timecode bits form an audio signal, somehow A-D converted for the WAV file, and then D-A converted for your target device, which takes the bi-phase signal and pulls the bits back out. :-)

Someday we might get a networked timecode standard :-) SMPTE was looking to update timecode a couple of years ago; I will follow up on that when I'm on sabbatical in the spring...

John
P.S. I had a book from one timecode interface manufacturer (EECO) which said that the origins of LTC go back to the Apollo program, for synchronizing tapes.
