Algorithm to compute mean/std on a per-sample basis:


Tom Enos

Jun 29, 2015, 11:51:36 PM
to lightsh...@googlegroups.com
I've been playing with the algorithm that Todd pointed out in synchronized_lights.py.

I can only test it by using it in play_song() (I don't have a USB sound card), but it looks like it works great. In fact, the initial playback of an audio file looks very close to playback using a sync file. It might be worth adding it there too for the initial playback of audio files.

I've attached the RunningStats module and synchronized_lights with the audio_in() function set up to test the algorithm. Could someone test it out and let me know if I am at least going in the right direction?
This is just a first draft. I'm sure there will be a few things that need to be worked out. For example, I did not account for channels that aren't connected. But if this first draft at least shows promise, then it should be an easy matter to add that in.
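For anyone who wants the gist without opening the attachment: a per-sample running mean/std can be kept with Welford's online algorithm, something roughly like this (a generic sketch, not the exact RunningStats module I attached):

```python
class RunningStats:
    """Welford's online algorithm: update mean/std one sample at a time,
    without storing the sample history."""

    def __init__(self):
        self.n = 0       # number of samples seen
        self.mean = 0.0  # running mean
        self.m2 = 0.0    # running sum of squared deviations from the mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self):
        # Population standard deviation; 0 until we have at least 2 samples.
        return (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0
```

Each push is O(1), which is what makes it usable per audio chunk.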

rolling_std_mean.tar.gz

Tom Enos

Jul 2, 2015, 1:02:31 PM
to lightsh...@googlegroups.com
So I happened to look in a box of my old junk and found an old USB headset that had a mic.
That code sample seems to work fine. It's a little heavy on the processor, but as I said, it's a first draft.
I've come up with a new way to implement it using numpy, and that works a lot better.
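The numpy version amounts to running the same Welford update across all channels at once instead of one Python loop per channel. Something like this sketch (class and method names are hypothetical, not the code I'll be posting):

```python
import numpy as np

class RunningStatsNP:
    """Vectorized Welford update: one push handles every channel at once."""

    def __init__(self, num_channels):
        self.n = 0
        self.mean = np.zeros(num_channels)
        self.m2 = np.zeros(num_channels)

    def push(self, levels):
        """levels: one brightness/level value per channel for this chunk."""
        levels = np.asarray(levels, dtype=float)
        self.n += 1
        delta = levels - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (levels - self.mean)

    def std(self):
        if self.n < 2:
            return np.zeros_like(self.mean)
        return np.sqrt(self.m2 / self.n)  # population std per channel
```

With 16 channels this replaces 16 per-sample Python updates with a handful of array operations.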

I also noticed that it was really easy to add output to the Raspberry Pi's audio-out jack, so there is no need to use a splitter on the line-in. You could probably also just use the USB sound card's audio out (the headset I have is really old, USB 1.1, and I had a few hiccups).

After the latest pull requests get finished, I'll put these mods up for review.

Tom Enos

Jul 10, 2015, 2:58:49 AM
to lightsh...@googlegroups.com
I have an additional question. With stable and master I see pauses in the show with audio-in. It seems to happen when there is a quiet spot in the audio and when the std and mean are being updated. Either it's my old hardware (an old USB 1.0 headset that I cut up) or it's in the code. Can somebody set me straight on what is currently happening? Is it just my old sound card, or is this the norm?

And is audio-in hard on the CPU? With the current stable/master, audio-in maxes out the CPU with 16 channels in PWM mode. I know that I am sending the audio back to the RPi line-out instead of using a splitter, but I'm getting a lot of stutter just from adding the output to stable. Is this normal?

Todd Giles

Jul 13, 2015, 10:12:05 PM
to lightsh...@googlegroups.com
On Fri, Jul 10, 2015 at 12:58 AM Tom Enos <tom....@overclocked.net> wrote:
I have an additional question. With stable and master I see pauses in the show with audio-in. It seems to happen when there is a quiet spot in the audio and when the std and mean are being updated. Either it's my old hardware (an old USB 1.0 headset that I cut up) or it's in the code. Can somebody set me straight on what is currently happening? Is it just my old sound card, or is this the norm?

And is audio-in hard on the CPU? With the current stable/master, audio-in maxes out the CPU with 16 channels in PWM mode. I know that I am sending the audio back to the RPi line-out instead of using a splitter, but I'm getting a lot of stutter just from adding the output to stable. Is this normal?

This is normal - audio-in mode has to do the FFT on the fly, as well as the mean/std-dev calculations on the fly, so it is much more CPU intensive than using cached FFT/std-dev/mean data when working with known audio.
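To make the cost concrete, every audio-in chunk has to do something like the following before the lights can even be updated (a rough sketch of the per-chunk work, not the actual synchronized_lights.py code; the bin edges and function name are made up for illustration):

```python
import numpy as np

RATE = 44100   # sample rate (Hz)
CHUNK = 2048   # samples per audio-in read

def chunk_levels(samples, freq_bins):
    """One audio-in iteration: window + FFT + per-channel power, all on the fly.

    samples:   1-D array of CHUNK audio samples
    freq_bins: list of (low_hz, high_hz) ranges, one per light channel
    """
    windowed = samples * np.hanning(len(samples))       # rebuilt every chunk
    power = np.abs(np.fft.rfft(windowed)) ** 2          # FFT every chunk
    freqs = np.fft.rfftfreq(len(samples), 1.0 / RATE)   # rebuilt every chunk
    levels = np.empty(len(freq_bins))
    for i, (lo, hi) in enumerate(freq_bins):
        mask = (freqs >= lo) & (freqs < hi)
        levels[i] = np.log10(power[mask].sum() + 1e-12)
    return levels
```

With cached playback all of this is precomputed once per song, which is why audio-in is so much heavier.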

Any performance improvements here would of course be welcome, but for my personal use case (karaoke) the code as-is has worked perfectly fine for me. That said, I've been using a splitter and not using audio-out from the RPi when using audio-in mode ...
 



Tom Enos

Jul 13, 2015, 10:35:20 PM
to lightsh...@googlegroups.com
Good to know - this method for updating is a lot better. With a modified version of the one I posted here, plus some tweaks to the FFT I figured out, we have a big increase in performance.

Caching the hanning window and the indices that piff returned (added some global variables) and using numpy to a fuller extent makes a big difference. I can play back 16 PWM channels without stutter.
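The caching idea looks roughly like this (a sketch only - the bin cutoffs, constants, and function name here are invented for illustration, not the actual piff/lightshowpi values): build the window and the per-channel frequency masks once at module load instead of on every chunk.

```python
import numpy as np

RATE = 44100
CHUNK = 2048

# Precomputed once, instead of being rebuilt on every audio chunk:
WINDOW = np.hanning(CHUNK)
FREQS = np.fft.rfftfreq(CHUNK, 1.0 / RATE)
BIN_MASKS = [(FREQS >= lo) & (FREQS < hi)
             for lo, hi in [(0, 156), (156, 313), (313, 625),
                            (625, 1250), (1250, 2500), (2500, 5000),
                            (5000, 10000), (10000, 20000)]]

def fast_levels(samples):
    """Per-chunk work is now just a multiply, one FFT, and masked sums."""
    power = np.abs(np.fft.rfft(samples * WINDOW)) ** 2
    return np.array([np.log10(power[m].sum() + 1e-12) for m in BIN_MASKS])
```

Since the window and masks never change for a fixed chunk size and sample rate, hoisting them out of the loop removes most of the redundant per-chunk setup.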