Hi folks,
This node allows me to get an MJPEG stream from an IP camera, and create an infinite stream of messages:

Each output message payload contains a single camera image.
This works nicely: I get an average of 15 messages per second (depending on the load of my home network).
However, now I want to take the next step: in this case audio streaming.
My Panasonic camera also offers a URL to get an audio stream from its built-in microphone:

Haven't tested it yet, but I should be able to get this multipart stream easily with my multipart decoder node.
Problem is that I don't have a clue at the moment how to handle the audio data :-(
- I have been looking at the code of the various audio contributions for Node-Red, and also at this issue. However, it looks to me that all those nodes expect a Buffer containing a full audio file. But I may be mistaken! Anyway, that is not what I want, since I will receive an infinite stream of audio samples. Do I have to create finite-length audio buffers (containing N samples) and pass those to the contributions? But I assume I will then get delays and, as a result, jitter?
- And the sample rate seems to be much higher (e.g. 44000 samples per second) compared to image rates. Do I have to create a single message for each sample? Probably not, but how many samples should I collect in a single message?
- In which format should those samples be passed through the flow? I assume as Buffers. But if I need to collect N samples in a single message: does the payload need to be a single Buffer containing N samples, or an array of N Buffers (each containing a single sample)?
- When I need to make a collection of N samples, I assume there needs to be some kind of header and trailer? Or do I need to convert the N samples to some audio format (e.g. WAV)?
- ...
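To make my chunking question a bit more concrete, something like this is what I have in mind (plain Node.js, so the same logic could live in a function node). The sample rate, sample size and chunk duration below are pure assumptions on my side, not from the camera docs:

```javascript
// Sketch: chunk an infinite raw PCM stream into fixed-size messages.
// Assumed format: 16-bit little-endian mono PCM at 8000 Hz; the real
// camera may well differ.

const SAMPLE_RATE = 8000;     // assumed sample rate
const BYTES_PER_SAMPLE = 2;   // 16-bit PCM
const CHUNK_MS = 100;         // ~100 ms of audio per message
const CHUNK_BYTES = SAMPLE_RATE * BYTES_PER_SAMPLE * CHUNK_MS / 1000; // = 1600

let pending = Buffer.alloc(0); // leftover bytes carried between calls

// Call once per incoming piece of the stream; returns an array of
// complete chunks, each a single Buffer holding N samples.
function collectChunks(data) {
  pending = Buffer.concat([pending, data]);
  const chunks = [];
  while (pending.length >= CHUNK_BYTES) {
    chunks.push(pending.subarray(0, CHUNK_BYTES));
    pending = pending.subarray(CHUNK_BYTES);
  }
  return chunks;
}
```

Each returned chunk would become one message payload, which answers my "single Buffer of N samples vs array of N Buffers" question in favour of the former, if I'm on the right track.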
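And about the "header and trailer" question: if those audio nodes really do need a complete file, maybe I could prepend a standard 44-byte WAV (RIFF) header to each chunk. A rough sketch, again assuming 16-bit little-endian mono PCM at 8000 Hz:

```javascript
// Sketch: wrap one Buffer of raw PCM samples in a minimal 44-byte
// WAV (RIFF) header, so nodes that expect a complete audio file can
// handle it. Assumed format: 16-bit little-endian mono PCM, 8000 Hz.

const SAMPLE_RATE = 8000;
const CHANNELS = 1;
const BYTES_PER_SAMPLE = 2;

function wrapInWav(pcm) {
  const header = Buffer.alloc(44);
  header.write('RIFF', 0);
  header.writeUInt32LE(36 + pcm.length, 4);     // RIFF chunk size
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);                 // fmt sub-chunk size
  header.writeUInt16LE(1, 20);                  // audio format 1 = PCM
  header.writeUInt16LE(CHANNELS, 22);
  header.writeUInt32LE(SAMPLE_RATE, 24);
  header.writeUInt32LE(SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE, 28); // byte rate
  header.writeUInt16LE(CHANNELS * BYTES_PER_SAMPLE, 32);               // block align
  header.writeUInt16LE(8 * BYTES_PER_SAMPLE, 34);                      // bits per sample
  header.write('data', 36);
  header.writeUInt32LE(pcm.length, 40);         // data sub-chunk size
  return Buffer.concat([header, pcm]);
}
```

No idea yet whether emitting a stream of tiny self-contained WAV files is sensible, or whether it would produce audible gaps, so corrections are very welcome.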
Hopefully I will get some sound from the community, and preferably not too noisy ;-)
Thanks in advance !!!!
Bart