Audio Analysis using Beads (Noise level, reverb time and spectrum analysis)


Tom Collins

Jan 26, 2017, 7:35:23 PM
to beadsproject
Hello,

I'm doing a project in Java for analysing room acoustics (a bit like roomeqwizard, if you're familiar with that).

From what I've seen so far, it seems like this library has the functionality I need to perform the basic measurements I'd like to (background noise level, impulse/reverb time and frequency response), do you agree?

I've just started writing code for measuring the background noise level. At the moment I've worked out how to record audio from the microphone, but I'd like a good way of seeing how loud it is so I can compare it with other recordings. From looking at the docs, the RMS class seems like the place to start, but I'm not sure how to use it. Also, is there a way to draw this as a waveform (similar to what you get in DAW software or Audacity)?

Any advice or help would be much appreciated,

Tom

Oliver Bown

Jan 29, 2017, 4:19:05 PM
to beadsp...@googlegroups.com
Hi Tom,

Yes, Beads can do this, but the analysis classes are quite basic and don’t employ the very best algorithms. That said, classes like Power and PowerSpectrum are pretty solid.

The basic idea is that you set up a ShortFrameSegmenter. This is a UGen that has one input and no outputs. Like Clock it needs to be added as a dependent to be updated. It turns a stream of audio into a stream of overlapping grains for analysis.

ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
ac.out.addDependent(sfs);

Then from there you start to add listeners in analysis chains.

FFT fft = new FFT();
sfs.addListener(fft);
PowerSpectrum ps = new PowerSpectrum();
fft.addListener(ps); //the power spectrum takes FFT as its input. It’s not obvious what chains are available unfortunately!

Lastly, you need to add a segment listener, which is where you receive the notification that you have new analysis data. Here’s the whole thing…

AudioContext ac = new AudioContext();
ShortFrameSegmenter sfs = new ShortFrameSegmenter(ac);
ac.out.addDependent(sfs);

FFT fft = new FFT();
sfs.addListener(fft);
PowerSpectrum ps = new PowerSpectrum();
fft.addListener(ps);

sfs.addSegmentListener(new SegmentListener() {
    @Override
    public void newSegment(TimeStamp startTime, TimeStamp endTime) {
        float[] psData = ps.getFeatures();
        // do your stuff here
    }
});
So based on this you may want to try writing your own feature extractors. Sorry the documentation for this stuff has never been very good.
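If you do roll your own extractor for the background-noise measurement, the per-frame maths is simple enough to sketch without Beads at all: compute the RMS of each frame and convert it to dB relative to full scale. A minimal plain-Java sketch (the class and method names here are mine for illustration, not part of the Beads API; the float[] frame would be what your segmenter/extractor hands you):

```java
public class RmsUtil {

    // Root-mean-square of one audio frame; samples assumed in [-1, 1].
    public static float rms(float[] frame) {
        float sum = 0f;
        for (float s : frame) {
            sum += s * s;
        }
        return (float) Math.sqrt(sum / frame.length);
    }

    // RMS level in dBFS (0 dBFS corresponds to an RMS of 1.0).
    public static float rmsDb(float[] frame) {
        return (float) (20.0 * Math.log10(rms(frame)));
    }
}
```

Comparing two recordings then reduces to comparing their per-frame (or averaged) dB values.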

Ollie



Tom Collins

Jan 30, 2017, 7:06:04 AM
to beadsproject
Hi Ollie, 

Thanks so much for your detailed reply, I will see what I can do and get back to you :)

Tom

Tom Collins

Feb 14, 2017, 5:26:09 AM
to beadsproject
Hi again Ollie,

I realise this is slightly off topic so you may be unable to help, but I figured it was worth a try because I don't know Java audio very well - there might be an easy solution.

I found a good microphone GUI element on Stack Overflow and managed to implement it in my project, but because it uses javax.sound.sampled rather than Beads I get a clash. It seems that when it is accessing the audio input (microphone), Beads is unable to access it at the same time (the mic meter runs in a separate thread). Is there a way I can let them both use the microphone at the same time? Or maybe a way to use Beads for this as well that solves the problem? At the moment I cannot have the mic input meter running and record from the microphone (which I do with Beads in my project) at the same time.

The error from beads is:
net.beadsproject.beads.core.io.JavaSoundAudioIO$JavaSoundRTInput : Error getting line

The mic meter code is in the top answer here (I've modified it to be a part of my GUI rather than a separate application):

I appreciate any help or advice you can offer (I'm in over my head with this project...).



Oliver Bown

Feb 14, 2017, 5:35:57 PM
to beadsp...@googlegroups.com
Not fixable without looking into JavaSoundAudioIO and hacking it so that it uses the same SourceDataLine or TargetDataLine (I always forget which is which) as the other one. It might even be easier to dig into the mic GUI code and work out how to make a Beads UGen out of it. You can make a Beads UGen pretty easily…

UGen myUGen = new UGen(ac, ins, outs) {
    @Override
    public void calculateBuffer() {
        // do your computation using bufOut[i][j] and bufIn[i][j]
    }
};

Ah, I guess it’s not that simple. Beads does also have mic input of course, so it might be easiest to make your own GUI?
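A home-made meter GUI can be very small in Swing. A minimal sketch of the idea (all names here are mine, not from the Stack Overflow code or Beads): keep the latest level in a volatile field that the audio side writes, and let the paint code turn it into a bar.

```java
import javax.swing.JComponent;
import java.awt.Color;
import java.awt.Graphics;

// Minimal level-meter component: the audio thread calls setLevel(),
// Swing repaints the bar on the Event Dispatch Thread.
public class LevelMeter extends JComponent {

    private volatile float level = 0f; // written by audio thread, read on EDT

    // Pure helper: map an amplitude in [0, 1] to a bar width in pixels.
    public static int barWidth(float amplitude, int componentWidth) {
        float clamped = Math.max(0f, Math.min(1f, amplitude));
        return Math.round(clamped * componentWidth);
    }

    public void setLevel(float amplitude) {
        level = amplitude;
        repaint(); // repaint() is safe to call from any thread
    }

    @Override
    protected void paintComponent(Graphics g) {
        g.setColor(Color.GREEN);
        g.fillRect(0, 0, barWidth(level, getWidth()), getHeight());
    }
}
```

You would add an instance of this to your existing layout and feed it from wherever your analysis produces a level value.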

O

Tom Collins

Feb 20, 2017, 5:47:49 AM
to beadsproject
Hi Ollie,

Thanks for your suggestion. If I understand correctly, I think the only section that would need changing is the run() function.

public void run() {
    AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
    final int bufferByteSize = 2048;

    TargetDataLine line;
    try {
        line = AudioSystem.getTargetDataLine(fmt);
        line.open(fmt, bufferByteSize);
    } catch (LineUnavailableException e) {
        System.err.println(e);
        return;
    }

    byte[] buf = new byte[bufferByteSize];
    float[] samples = new float[bufferByteSize / 2];

    float lastPeak = 0f;

    line.start();
    for (int b; (b = line.read(buf, 0, buf.length)) > -1;) {

        // convert bytes to samples here
        for (int i = 0, s = 0; i < b;) {
            int sample = 0;

            sample |= buf[i++] & 0xFF; // (reverse these two lines
            sample |= buf[i++] << 8;   //  if the format is big endian)

            // normalize to range of +/-1.0f
            samples[s++] = sample / 32768f;
        }

        float rms = 0f;
        float peak = 0f;
        for (float sample : samples) {
            float abs = Math.abs(sample);
            if (abs > peak) {
                peak = abs;
            }
            rms += sample * sample;
        }

        rms = (float) Math.sqrt(rms / samples.length);

        if (lastPeak > peak) {
            peak = lastPeak * 0.875f;
        }

        lastPeak = peak;

        setMeterOnEDT(rms, peak);
    }
}

So I would need to make a UGen, and have all the code from the run() function inside the calculateBuffer() function? The part I'm struggling with is the Beads equivalent of this section; I think the other parts are straightforward:
TargetDataLine line;
try {
    line = AudioSystem.getTargetDataLine(fmt);
    line.open(fmt, bufferByteSize);
} catch (LineUnavailableException e) {
    System.err.println(e);
    return;
}

byte[] buf = new byte[bufferByteSize];

Is this where I use the bufOut[i][j] and bufIn[i][j] parts?

If I used a beads UGen, does this solve the problem of having multiple threads accessing the mic at once, or would I need to combine this with my recording class for it to work?

thanks again for your help,

Tom

Oliver Bown

Feb 21, 2017, 5:55:42 PM
to beadsp...@googlegroups.com
Yeah, so assuming you’re working in mono, bufIn[0] will give you the current buffer as an array of floats.
That should be the equivalent of your samples[] variable, so you can get rid of most of the code there.
With thread issues, you might need to do some synchronisation management, but if you’re just pulling the peak value from the UGen (or pushing it into the GUI element) then you should be all good.
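For the "pulling the peak value from the UGen" part, one lock-free way to hand a single float from the audio thread to the GUI thread is to store its bit pattern in an AtomicLong. A small sketch under that assumption (the class and method names are mine, not part of Beads; a plain volatile float also works with a single writer):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hands one float value from the audio thread to the GUI thread
// without locks, by storing the float's bit pattern atomically.
public class PeakHolder {

    private final AtomicLong bits = new AtomicLong(Float.floatToIntBits(0f));

    // Called from the audio side, e.g. inside calculateBuffer().
    public void publish(float peak) {
        bits.set(Float.floatToIntBits(peak));
    }

    // Called from the GUI side, e.g. a Swing timer on the EDT.
    public float read() {
        return Float.intBitsToFloat((int) bits.get());
    }
}
```

The UGen publishes its per-buffer peak, and a periodic Swing timer reads it and updates the meter; neither side ever blocks the other.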

O