smpte reader, work in progress


Davide D'Angelo

Mar 18, 2019, 2:48:11 PM
to PraxisLIVE software discussion group
Hi, I'm very new to Praxis, which seems wonderful, but I'm a veteran of Processing.

I'm trying to port a class I've written to read SMPTE Linear Time Code, but it is not so simple.
First of all, I want to ask: how can one debug with System.out.print()? Where is the output?

Then, with my class I used the javax.sound.sampled library, and in particular the TargetDataLine, to read the audio samples and convert from high/low signals to bits; every 80 bits an SMPTE packet gave me the time in HH:MM:SS:FF.

It doesn't work in Praxis, so do you have any suggestions on how to read the audio buffer samples?

Thank you very much. I hope I can soon program easily in Praxis, because it seems very powerful.
Davide.

Neil C Smith

Mar 18, 2019, 2:56:15 PM
to Praxis LIVE software discussion group
On Mon, 18 Mar 2019 at 18:48, Davide D'Angelo <77d...@gmail.com> wrote:
> Hi, I'm very new to Praxis, which seems wonderful, but I'm a veteran of Processing.

Welcome! And thanks.

> I'm trying to port a class I've written to read SMPTE Linear Time Code, but it is not so simple..
> First of all, I want to ask How can one debug with System.out.print()? where is the output?

You can't use this at the moment, but you can use log(INFO, "<message>");
This correctly handles sending output to the main IDE even if the graph is running in a different process or on a different machine (distributed hubs).

> Then, With my class I used the javax.sound.sampled library, and in particular the TargetDataLine to read the audio samples and convert from Hi/Low signals to bits, and every 80 bits a smpte packet gave me the time in HH:MM:SS:FF.
>
> It doesn't work into Praxis, so do you have any suggestion on how to read the audio buffer samples?

While it might be possible to get what you're trying there to work,
the best way to handle this is to create an audio patch, add an
audio:input, put a custom audio component between in and out, then
send the info from there to where you need it using core:routing:send.
Unlike Processing, PraxisLIVE is built to correctly handle lock-free
messaging between different media pipelines (eg. audio and video). You
should keep your audio and other code separate.

If you can share (some of) the code you have I can help with how to convert it.

Best wishes,

Neil


--
Neil C Smith
Artist & Technologist
www.neilcsmith.net

PraxisLIVE - hybrid visual live programming
for creatives, for programmers, for students, for tinkerers
www.praxislive.org

Davide D'Angelo

Mar 18, 2019, 3:17:58 PM
to PraxisLIVE software discussion group
Hi Neil, thanks for your fast answer.
Indeed I started from an audio custom component and added an AudioIn, but then I didn't know how to go on. I just found the AudioIn.process() method to get the buffer, but I was stuck and couldn't find info on how to proceed.

Anyway, this is my Processing class:

```
import javax.sound.sampled.*;

class SmpteReader implements Runnable {

  // Class that reads SMPTE LTC from the selected audio input.
  // Tested with sample rates of 44100 and 48000 Hz.
  // Tested with 24 and 25 frames per second.
  // Tested with a quantization of 8 and 16 bit (ready to work with 24 bit, but not tested yet).

  // TODO: check and code for other non-integer fps with drop frame.

  private AudioFormat format;

  private float sampleRate;          // sample freq.
  private int sampleSizeInBits;      // quantization.
  private int channels;              // 1 = mono.
  private boolean signed;            // true = 0 in the middle.
  private boolean bigEndian;         // usually true.
  private int framePerSecond;        // PAL system = 25 fps.
  private int bitsPerMessage = 80;   // A SMPTE message is composed of 80 bits.

  private TargetDataLine targetDataline; // audio input
  private Thread thread;
  private int precRead = 0;
  private int thisRead = 0;
  private int word;
  private boolean readComplete = true;
  private int dataIntCnt = 0;
  private int bCnt = 0;
  private int precCnt = 0;
  private int threshold;
  private int bufferLengthInBytes, bufferSubLength, frameSizeInBytes;
  public int numBytesRead;
  private byte data[];
  private int dataInt[];
  private int bits[] = new int[bitsPerMessage];

  public int frame, second, minute, hour;
  public String timeStr = "";

  public SmpteReader() {  // Empty constructor for the most common SMPTE format:
    this(44100, 16, 1, true, true, 25);
  }

  public SmpteReader(float _sampleRate, int _sampleSizeInBits, int _channels, boolean _signed, boolean _bigEndian, int _framePerSecond) {
    sampleRate = _sampleRate;
    sampleSizeInBits = _sampleSizeInBits;
    channels = _channels;
    signed = _signed;
    bigEndian = _bigEndian;
    framePerSecond = _framePerSecond;

    format = new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);

    // Opening the audio line in:
    DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
    if (!AudioSystem.isLineSupported(info)) {
      System.out.println("Line " + info + " is not supported.");
    } else {
      try {
        targetDataline = (TargetDataLine) AudioSystem.getLine(info);
        targetDataline.open(format);
      }
      catch (LineUnavailableException ex) {
        System.out.println("Cannot open input line: " + ex);
        return;
      }
    }

    targetDataline.start();

    // Reading audio format:
    frameSizeInBytes = format.getFrameSize();
    //println("Frame Size: "+frameSizeInBytes+" bytes");
    bufferSubLength = (int)(targetDataline.getBufferSize()/8);
    //println("Buffer sub length: "+bufferSubLength+" frames");
    bufferLengthInBytes = (bufferSubLength * frameSizeInBytes);
    //println("Buffer length: "+bufferLengthInBytes+" bytes");

    // This threshold is the number of samples read after the zero crossing for half of a bit:
    // if that count is more than threshold we have a bit = 0 (two consecutive equal readings);
    // else, on the second consecutive change before threshold samples, we have a bit = 1 (two consecutive different readings).
    // More on that here: https://en.wikipedia.org/wiki/SMPTE_timecode
    threshold = int(((sampleRate/framePerSecond)/(bitsPerMessage*2))*1.5); // *2 because we need 2 peaks to make a bit, and *1.5 because we are reading the peak between one half bit and the other.

    // Array to record incoming bytes (samples):
    data = new byte[bufferLengthInBytes];
    if (sampleSizeInBits == 24) {
      dataInt = new int[data.length/3];
    } else if (sampleSizeInBits == 16) {
      dataInt = new int[data.length/2];
    } else if (sampleSizeInBits == 8) {
      dataInt = new int[data.length];
    }
    println("SMPTE Thread started:");
    println("Sample Rate: "+sampleRate+" Hz");
    println("Sample Size: "+sampleSizeInBits+" bit");
    println("Frames Per Second: "+framePerSecond+" fps");
    thread = new Thread(this);
    thread.start();
  }

  public void stop() {
    thread = null;
  }

  public void run() {
    try {
      while (thread != null) {
        if (targetDataline.available() > 0) {

          // Sample audio into data[] and return the number of bytes read:
          numBytesRead = targetDataline.read(data, 0, bufferLengthInBytes);
          //println("Read byte number: "+numBytesRead);
          if (sampleSizeInBits == 24) {

            // Sum bytes to get ints (24 bit quantization):
            dataIntCnt = 0;
            for (int i = 2; i < data.length; i+=3) {
              dataInt[dataIntCnt] = (data[i-2] << 16) + (data[i-1] << 8) + int(data[i]);
              dataIntCnt++;
            }
          } else if (sampleSizeInBits == 16) {

            // Sum bytes to get ints (16 bit quantization):
            dataIntCnt = 0;
            for (int i = 1; i < data.length; i+=2) {
              dataInt[dataIntCnt] = (data[i-1] << 8) + int(data[i]);
              dataIntCnt++;
            }
          } else if (sampleSizeInBits == 8) {

            // Just copy bytes to int (8 bit quantization):
            dataInt = int(data);
          }

          // Samples reading iteration:
          for (int k = 0; k < dataInt.length; k++) {
            thisRead = dataInt[k];

            // If we read something:
            if (thisRead != 0 && precRead != 0) {

              // Sync on zero crossing (if reading goes from <0 to >0, or from >0 to <0):
              if (thisRead/abs(thisRead) != precRead/abs(precRead)) {
                // precCnt is the last reading counter.
                if (k < precCnt) {
                  // Means we wrapped back to the beginning of dataInt[] without finding
                  // the end of the SMPTE message, so we translate it into a negative
                  // counter to sync the next reading.
                  precCnt = -(dataInt.length - precCnt);
                }
                if ((k - precCnt) > threshold) {
                  // TODO: throw away the packet if > 2*threshold?
                  // We have read 2 consecutive high samples or low samples, so we suppose this bit = 0.
                  readComplete = true;
                  precCnt = k;
                  // bCnt is the SMPTE message counter.
                  bits[bCnt] = 0;
                  // word is the message as an int to check the sync (bits 64 to 79).
                  word = word << 1;
                  bCnt++;
                } else if ((k - precCnt) < threshold && (k - precCnt) > 0) {
                  // readComplete = false if it is the first peak we are reading.
                  readComplete = !readComplete;
                  precCnt = k;
                  // On the second different consecutive peak:
                  if (readComplete) {
                    // This bit = 1.
                    bits[bCnt] = 1;
                    word = (word << 1) + 1;
                    bCnt++;
                  }
                }
                if (char(word) == 16381) {
                  // Sync word:
                  calcTime();
                  word = 0;
                  bCnt = 0;
                }

                // Check to avoid array overflow: shift all the bits one position to free the last:
                if (bCnt > bits.length-1) {
                  for (int i = 1; i < bits.length; i++) {
                    bits[i-1] = bits[i];
                  }
                  bCnt--;
                }
              }
            }
            precRead = thisRead;
          }
        }
      }
      targetDataline.stop();
      targetDataline.close();
    }
    catch (Exception e) {
      println("Error: Force quit...");
      print(e);
    }
  }

  private void calcTime() {
    frame = ((bits[9] << 1) + bits[8])*10 + (bits[3] << 3) + (bits[2] << 2) + (bits[1] << 1) + bits[0];
    second = ((bits[26] << 2) + (bits[25] << 1) + bits[24])*10 + (bits[19] << 3) + (bits[18] << 2) + (bits[17] << 1) + bits[16];
    minute = ((bits[42] << 2) + (bits[41] << 1) + bits[40])*10 + (bits[35] << 3) + (bits[34] << 2) + (bits[33] << 1) + bits[32];
    hour = ((bits[57] << 1) + bits[56])*10 + (bits[51] << 3) + (bits[50] << 2) + (bits[49] << 1) + bits[48];
    timeStr = nf(hour, 2)+":"+nf(minute, 2)+":"+nf(second, 2)+"."+nf(frame, 2);
  }

  public String getTimeStr() {
    return timeStr;
  }

  public int getHour() {
    return hour;
  }

  public int getMinute() {
    return minute;
  }

  public int getSecond() {
    return second;
  }

  public int getFrame() {
    return frame;
  }

  public int[] getTimeArrayInt() {
    int time[] = {hour, minute, second, frame};
    return time;
  }

  public long getTimeInFramesLong() {
    long time = frame + (second * 25) + (minute * 1500) + (hour * 90000);
    return time;
  }
}
```
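The bit positions in calcTime() follow the LTC frame layout: each timecode field is stored as binary-coded decimal, with the units and tens digits in separate bit groups, least significant bit first. As a standalone illustration of that decode (plain Java; the class and method names here are invented for the sketch, not part of the reader above):

```java
// Sketch of the BCD decode used in calcTime(): each timecode field is
// two binary groups (units and tens), stored least significant bit first.
public class LtcDecodeSketch {

    // Read `count` bits starting at `start` (LSB first) as a binary value.
    static int field(int[] bits, int start, int count) {
        int value = 0;
        for (int i = count - 1; i >= 0; i--) {
            value = (value << 1) + bits[start + i];
        }
        return value;
    }

    // Combine the tens and units groups of one timecode field.
    static int bcd(int[] bits, int unitsStart, int unitsCount, int tensStart, int tensCount) {
        return field(bits, tensStart, tensCount) * 10 + field(bits, unitsStart, unitsCount);
    }

    public static void main(String[] args) {
        int[] bits = new int[80];
        // Encode 12:34:56.17 into the frame for demonstration:
        bits[0] = 1; bits[1] = 1; bits[2] = 1; // frame units = 7 (bits 0-3)
        bits[8] = 1;                           // frame tens  = 1 (bits 8-9)
        bits[17] = 1; bits[18] = 1;            // second units = 6 (bits 16-19)
        bits[24] = 1; bits[26] = 1;            // second tens  = 5 (bits 24-26)
        bits[34] = 1;                          // minute units = 4 (bits 32-35)
        bits[40] = 1; bits[41] = 1;            // minute tens  = 3 (bits 40-42)
        bits[49] = 1;                          // hour units   = 2 (bits 48-51)
        bits[56] = 1;                          // hour tens    = 1 (bits 56-57)

        int frame = bcd(bits, 0, 4, 8, 2);
        int second = bcd(bits, 16, 4, 24, 3);
        int minute = bcd(bits, 32, 4, 40, 3);
        int hour = bcd(bits, 48, 4, 56, 2);
        System.out.println(String.format("%02d:%02d:%02d.%02d", hour, minute, second, frame));
        // prints 12:34:56.17
    }
}
```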


Here's a link to SMPTE LTC on Wikipedia, if others need to know: https://en.wikipedia.org/wiki/SMPTE_timecode

With the class I wrote I need to read every sample, knowing the sample rate, frames per second and sample size, and then translate the sample readings to bits.
Maybe you can find an easier way... or just point me in the right direction to start, and I will complete it.

Thank you in advance.
Davide.

Neil C Smith

Mar 18, 2019, 3:35:28 PM
to Praxis LIVE software discussion group
On Mon, 18 Mar 2019 at 19:18, Davide D'Angelo <77d...@gmail.com> wrote:
>
> Hi Neil, Thanks for your fast answer.
> Indeed I started from a audio custom component, added an AudioIn, but then I didn't know how to go on

This might be a start using audio:custom

```
@In(1) AudioIn in;
@Out(1) AudioOut out;

@P(1) double last;

@Override
public void init() {
    log(INFO, "Sample rate is " + sampleRate);
    log(INFO, "Buffer size is " + blockSize);

    link(in, fn(s -> {
        if (s > 0 && last < 0 || s < 0 && last > 0) {
            log(INFO, "zero crossing at " + millis());
        }
        last = s;
        return s;
    }), out);
}
```

If using fn() you can only supply a lambda that works one sample at a
time. millis() will give you the clock time (not audio time) at the
start of the sample block (default 64 samples). You could also count
samples in the lambda, or alternatively look at the FFT component in
the example projects, which uses an alternative for block processing.
Also take a look at how audio:clock uses sample buffers and
update() to measure time.
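For context, what such a lambda has to recover is LTC's biphase mark coding: there is a level transition at every bit boundary, and a '1' bit adds an extra transition mid-bit. A rough standalone sketch of classifying zero-crossing intervals (plain Java; the class and method names are invented for illustration, not PraxisLIVE API):

```java
// Sketch: classify intervals between zero crossings as LTC bits.
// Biphase mark coding: a transition at every bit boundary; a '1' bit
// has an extra transition in the middle, a '0' bit has none.
public class BiphaseSketch {
    final double samplesPerBit;
    boolean pendingHalf = false;   // saw the first half of a '1'?
    final StringBuilder bits = new StringBuilder();

    BiphaseSketch(double sampleRate, double fps, int bitsPerFrame) {
        this.samplesPerBit = sampleRate / fps / bitsPerFrame;
    }

    // Called with the number of samples since the previous zero crossing.
    void onInterval(double samples) {
        if (samples > samplesPerBit * 0.75) {
            bits.append('0');          // full-period interval: bit = 0
            pendingHalf = false;
        } else {
            pendingHalf = !pendingHalf;
            if (!pendingHalf) {
                bits.append('1');      // second half-period interval: bit = 1
            }
        }
    }

    public static void main(String[] args) {
        // 44100 Hz, 25 fps, 80 bits -> 22.05 samples per bit, 11.025 per half bit.
        BiphaseSketch d = new BiphaseSketch(44100, 25, 80);
        // Intervals for the bit sequence 0, 1, 0: full, half, half, full.
        d.onInterval(22);
        d.onInterval(11);
        d.onInterval(11);
        d.onInterval(22);
        System.out.println(d.bits);   // prints 010
    }
}
```

The 0.75 factor plays the same role as the threshold in the Processing class: anything longer than three quarters of a bit period is treated as a full-bit interval.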

Hope somewhere in there is what you need.

Davide D'Angelo

Mar 18, 2019, 4:22:52 PM
to PraxisLIVE software discussion group
Ok, nice.
I added a double to count the samples, but before going on I have to ask one thing:
the SMPTE audio file I'm sending into the player is 44100 Hz, but the log of sampleRate returns 48000. Why?

```
@In(1) AudioIn in;
@Out(1) AudioOut out;

@P(1) double last;
@P(2) double cnt;

@Override
public void init() {

    log(INFO, "Sample rate is " + sampleRate);
    log(INFO, "Buffer size is " + blockSize);

    link(in, fn(s -> {
        cnt++;
        if (s > 0 && last < 0 || s < 0 && last > 0) {
            log(INFO, "zero crossing at time " + millis());
            log(INFO, "cnt: " + cnt);
            cnt = 0;
        }
        last = s;
        return s;
    }), out);
}
```

Davide D'Angelo

Mar 18, 2019, 4:25:39 PM
to PraxisLIVE software discussion group

And also, it is difficult for me to read the function you wrote. I mean, where does 's' take its value? Is it a prebuilt var?
I understand, of course, that it is the sample level reading, but where does it come from?

Neil C Smith

Mar 18, 2019, 5:10:42 PM
to Praxis LIVE software discussion group
Hi,


On Mon, 18 Mar 2019, 20:25 Davide D'Angelo, <77d...@gmail.com> wrote:

> And also, it is difficult for me to read the function you wrote, I mean, where does 's' take its value? is it a prebuilt var?

No. Have you used Java lambdas before? The fn(...) method accepts a DoubleUnaryOperator and creates a unit generator that calls a method on that interface for every sample. So the 's' is just the name of the input parameter. You can use any name you want. 


The samplerate is defined by the root properties. Click on the background and it will show up in the properties tab. Or click the button next to play in the audio patch tab toolbar.

You can only set the samplerate when the patch isn't playing. 

Best wishes, 

Neil

Davide D'Angelo

Mar 18, 2019, 5:15:01 PM
to PraxisLIVE software discussion group
Thank you very much for your help.
No, I didn't use lambdas before; I have read the Java docs, and now I can read your code.
I didn't know fn() creates a ugen, is that written in the docs?

I think I have enough things to go on for now.
Thanks, Davide.

Neil C Smith

Mar 18, 2019, 5:18:39 PM
to Praxis LIVE software discussion group
Hi,


On Mon, 18 Mar 2019, 21:15 Davide D'Angelo, <77d...@gmail.com> wrote:
> Didn't know fn() creates a ugen, is that written in the docs?

Yes, although maybe it should be clearer?


Neil

Davide D'Angelo

Mar 18, 2019, 7:48:49 PM
to PraxisLIVE software discussion group
...aaaaand Thank you so much!!! I did it! (better say we did it..)

It is waaay more simple than what I did in Processing.

Here's the code:
    @In(1) AudioIn in;
    @Out(1) AudioOut out;
    @AuxOut(1) Output smpteTime;
    
    private double last; 
    private double sampleCnt;
    private int bitsPerMessage = 80;
    private int bits[] = new int[bitsPerMessage];
    private int bitArrayCnt;
    private boolean bitReadComplete = true;
    private int word;
    
    public int frame, second, minute, hour;
    public String timeStr = "";
  
    @UGen Gain gain;
    
    @P(2) @Type.Number(min=0, max=2, skew=4)
            Property level;
    
    @Override
    public void init() {
        level.link(gain::level);
        
        log(INFO, "Sample rate is " + sampleRate); 
        log(INFO, "Buffer size is " + blockSize); 

        link(in, gain, fn(s -> {
            sampleCnt++;            
            if (s > 0 && last < 0 || s < 0 && last > 0) {
                //log(INFO, "cnt: "+sampleCnt);
                // Zero Crossing:
                if (sampleCnt < 13 && sampleCnt > 10) {
                    // every half bit is: (44100 / 25)/(80*2) = 11.025
                    // 11.025 frames
                    bitReadComplete = !bitReadComplete;
                    if (bitReadComplete) {
                        //log(INFO, "Adding a UNO 1");
                        word = (word << 1) + 1;
                        bits[bitArrayCnt] = 1;
                        bitArrayCnt++;
                    } else {
                        //log(INFO, "Read first short.......");
                    }
                    
                } else if (sampleCnt > 21 && sampleCnt < 24) {
                    // 22.050 frames means a bit = 0;
                    //log(INFO, "Adding a ZERO");
                    bitReadComplete = true;
                    word = word << 1;
                    bits[bitArrayCnt] = 0;
                    bitArrayCnt++;
                }
                //log(INFO, "Word = "+Integer.toBinaryString(word & 65535));
                if ((word & 65535) == 16381) {                  
                  // Sync word:
                  //log(INFO, "Sync word!");
                  calcTime();
                  sendTime();
                  word = 0;        
                  bitArrayCnt = 0;
                  for (int i = 0; i < bits.length-1; i++) {
                      bits[i] = 0;
                  }
                }
                if (bitArrayCnt > bits.length-1) {
                  for (int i = 1; i < bits.length; i++) {
                    bits[i-1] = bits[i];
                  }
                  bitArrayCnt--;
                }
                sampleCnt = 0;
            } 
            last = s; 
            return s; 
        }), out); 
    }

    
    @Override
    public void update() {

    }
    
    private void calcTime() {
        frame = ((bits[9] << 1) + bits[8])*10 + (bits[3] << 3) + (bits[2] << 2) + (bits[1] << 1) + bits[0];
        second = ((bits[26] << 2) + (bits[25] << 1) + bits[24])*10 + (bits[19] << 3) + (bits[18] << 2) + (bits[17] << 1) + bits[16];
        minute = ((bits[42] << 2) + (bits[41] << 1) + bits[40])*10 + (bits[35] << 3) + (bits[34] << 2) + (bits[33] << 1) + bits[32];
        hour = ((bits[57] << 1) + bits[56])*10 + (bits[51] << 3) + (bits[50] << 2) + (bits[49] << 1) + bits[48];
        timeStr =  String.format("%02d" , hour)+":"+String.format("%02d" , minute)+":"+String.format("%02d" , second)+"."+String.format("%02d" , frame);
        log(INFO, timeStr);
    }
    
    private void sendTime() {
        try {
            smpteTime.send(timeStr);
        } catch (Exception ex) {
            log(ERROR, ex);
        }
    }
The output string is sent to a p2d sketch to write the SMPTE time.
It works like a charm!!


A stupid question: is there something like Processing's textAlign(CENTER)?

And, is there a way to automagically set the project sample frequency from the incoming audio, or at least with a switch? I ask this because now I'm working with an audio file, but usually this is meant to work on audio in, where maybe one doesn't know the sample frequency, or fps, or sample bits.

If you have some corrections on my code you're welcome! Do you think it is useful enough to be added to the extra components?

Thank you very much...

p.s. now my next goal will be reading an rtsp streaming camera..
Schermata del 2019-03-19 00-29-54.png

Neil C Smith

Mar 19, 2019, 8:29:46 AM
to Praxis LIVE software discussion group
On Mon, 18 Mar 2019 at 23:48, Davide D'Angelo <77d...@gmail.com> wrote:
> ...aaaaand Thank you so much!!! I did it! (better say we did it..)

:-) You're welcome!

> It is waaay more simple than what I did in Processing.

Good to hear!

A few comments on your source.

- that's quite a lot of code for a lambda - you could also split that
out into a separate method eg. `double processSample(double s) { ...
}` then use a method reference (like you have with level.link) inside
init() - so you end up with link(in, fn(this::processSample), out);

- ideally you shouldn't use the Output inside a unit generator,
although in your case this shouldn't cause issues. You'd be better
having an update() method something along the lines of

```
public void update() {
if (!timeStr.isEmpty()) {
smpteTime.send(timeStr);
timeStr = "";
}
}
```

- you shouldn't need a try / catch around the send??
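Those two suggestions combine into a shape like this (a plain-Java sketch of the structure, outside PraxisLIVE; all class, field and method names here are illustrative stand-ins, not PraxisLIVE API):

```java
// Sketch of the suggested structure: per-sample processing in a named
// method (used via a method reference), with results handed off to an
// update() hook rather than sent from inside the unit generator.
import java.util.function.DoubleUnaryOperator;

public class ComponentShapeSketch {
    private double last;
    private String timeStr = "";        // written by the sample-rate code
    private final StringBuilder sent = new StringBuilder();

    // Equivalent of link(in, fn(this::processSample), out);
    double processSample(double s) {
        if (s > 0 && last < 0 || s < 0 && last > 0) {
            timeStr = "crossing";       // stand-in for a decoded timecode
        }
        last = s;
        return s;                       // pass audio through unchanged
    }

    // Called once per processing cycle, outside the audio inner loop.
    void update() {
        if (!timeStr.isEmpty()) {
            sent.append(timeStr);       // stand-in for smpteTime.send(timeStr)
            timeStr = "";
        }
    }

    public static void main(String[] args) {
        ComponentShapeSketch c = new ComponentShapeSketch();
        DoubleUnaryOperator fn = c::processSample;  // the method reference
        double[] block = {0.5, -0.5, -0.2, 0.3};
        for (double s : block) fn.applyAsDouble(s);
        c.update();
        System.out.println(c.sent);     // prints crossing
    }
}
```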

> A stupid question, is there something like the processing textAlign(CENTER);?

hmm .. not mapped yet, and can't remember why - problem with all the
different places CENTER is used I think. PraxisLIVE aims to be source
code compatible, but all the constants are enums rather than ints -
makes code completion a lot more useful, but has the odd annoyance
like this.

Assuming you've got the custom components installed, add a gl-text.pxg
from video:gl:custom and have a look at how it achieves alignment.

> and, is there a way to automagically set the project sampleFrequency from the incoming audio? or at least with a switch? I ask this because now I'm working with an audio file, but usually this is meant to work on audio in, which maybe one doesn't know the sample frequency, or fps or sample bits..

I'm not sure that makes sense?! The sample frequency of the graph is
what is used to open the soundcard - it should not be possible for the
audio in to be at a different rate.

> If you have some correction on my code you're welcome! do you think it is useful enough to be added in the extra components?

With a couple of changes as above, it would definitely be interesting.
There are a few components in https://github.com/praxis-live/pxg/ that
do not get automatically installed in the palette - it would at least
be good to add to the repository if you're happy for that. Or the
whole project might be an interesting example?

> p.s. ..now my next goal will be reading a rtsp streaming camera..

Well, that should hopefully be easy! Add a video:player component,
and set the video property to an rtsp url instead of a file (eg. just
tested with rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov )
Only problem might be with that component expecting the stream to have
a duration.

Davide D'Angelo

Mar 19, 2019, 10:08:14 AM
to PraxisLIVE software discussion group
Hi, thank you for your hints.
I made the proposed corrections, and I feel happy with it.

I attach the component here; then if you think it's OK, I'll upload it into the repository.

The problem (not so important..) with the sample frequency is that if the project is set at 48000, but the SMPTE I'm going to read has been sampled at 44100, the maths go wrong,
because the way I decode the SMPTE bits is based on counting the number of samples between two different audio peaks (low to high or the opposite): two lows or two highs are a 0, while two different samples are a 1.
(sampleFreq / framesPerSecond) / (wordBitNumber * 2) gives me the number of samples for half a bit (two half bits make one bit),
so (48000/25)/(80*2) = 12,
while (44100/25)/(80*2) = 11.025.

But I think I'll find a workaround for this... no problem.
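That arithmetic can be sanity-checked directly (plain Java; the class and method names are invented for the sketch):

```java
// Samples per half bit of LTC: (sampleRate / fps) / (bitsPerWord * 2).
public class HalfBitMath {
    static double samplesPerHalfBit(double sampleRate, double fps, int bitsPerWord) {
        return (sampleRate / fps) / (bitsPerWord * 2);
    }

    public static void main(String[] args) {
        System.out.println(samplesPerHalfBit(48000, 25, 80));  // prints 12.0
        System.out.println(samplesPerHalfBit(44100, 25, 80));  // prints 11.025
    }
}
```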


There is one last thing in this project that seems odd to me. I've added a gain input to regulate the SMPTE audio level to be read, and another gain linked to the SMPTE output, to use as an output volume, which normally is going to be set at zero; but if I set it to zero the SMPTE component stops reading.





smpte_reader.pxg

Neil C Smith

Mar 19, 2019, 10:15:00 AM
to Praxis LIVE software discussion group
Hi,

On Tue, 19 Mar 2019 at 14:08, Davide D'Angelo <77d...@gmail.com> wrote:
> I attach the component here, then If you think it's ok, i'll upload into the repository.

Thanks! I'll take a look, although might not be today.

> The problem (not so important..) with the sampleFrequency is that if the project is set at 48000, but the smpte I'm going to read has been sampled at 44100 the maths go wrong.

I'm confused by this - how do you get SMPTE at 44100 into a soundcard
running at 48000? If it's from a file then that might be at a
different sample rate, but it should be resampled automatically IIRC.

> There is one last thing in this project that seems odd to me. I've added a gain input to regulate smpte audio level to be read, and another gain linked to smpte output, to use as a volume output, which normally is going to be set at zero, but if I set it to zero the smpte component stops reading.

Yes, this is covered at https://docs.praxislive.org/coding-audio/#gain
It's for dynamic switching off of processing.

If you really want silence output then you could replace the gain with
fn(s -> 0) instead.

Davide D'Angelo

Mar 19, 2019, 10:51:08 AM
to PraxisLIVE software discussion group
> I'm confused by this - how do you get SMPTE at 44100 into a soundcard
> running at 48000? If it's from a file then that might be at a
> different sample rate, but it should be resampled automatically IIRC.

It just changes the number of samples per second, and since the way I decode the SMPTE words depends on this, syncing the reader sample frequency with the generator output sample frequency makes the difference. But, I repeat, it's not a real problem, because there are workarounds for this.