Measuring actual sound volume rather than set sound volume


Kate Cox

Apr 30, 2014, 10:27:39 PM
to psychop...@googlegroups.com
Hi, this is my first time using PsychoPy. I am loving the program, but I have come across a problem I can't figure out a solution to and was hoping someone could help.

Basically I want to know: is there a way to get PsychoPy to measure the actual volume of a sound at each trial, rather than the volume that was set in the sound component's properties? I've tried using getVolume(), but it just reports the set volume, even during a blank "dead air" section of my audio file.

In case it helps answer my question here are the details of my experiment:

I'm making a dual attention task where participants have to use the mouse to track a moving target while listening to an unrelated audio file, which tells them information that they will need to recall later. The auditory information is given in pieces, with breaks in between during which the participant can focus wholly on the tracking task. The idea is to compare tracking task performance during times when auditory information is and isn't being presented.

I'm using v1.80.03

I made the experiment in Builder and then added a couple of extra bits to the compiled code because I couldn't make them work in Builder (but I have never used Python before).

The experiment is structured so that a code component starts the audio file at the start of the experiment and then loops through a routine which tracks the mouse location and target location (the latter is specified in my Excel conditions file) every 10 ms. I've pasted the compiled code below so you can see exactly what I mean.

I thought there might be some way of reworking the microphone response component so that it listens to the sound being played rather than to a microphone (which I don't have), but I couldn't work out how to do that.

Any help that anyone can offer would be hugely appreciated.
A big thank you in advance
Kate

from __future__ import division  # so that 1/3=0.333 instead of 1/3=0
from psychopy import visual, core, data, event, logging, sound, gui
from psychopy.constants import *  # things like STARTED, FINISHED
import numpy as np  # whole numpy lib is available, prepend 'np.'
from numpy import sin, cos, tan, log, log10, pi, average, sqrt, std, deg2rad, rad2deg, linspace, asarray
from numpy.random import random, randint, normal, shuffle
import os  # handy system and path functions

# Store info about the experiment session
expName = 'Working_running_adding_sound_measurer'  # from the Builder filename that created this script
expInfo = {u'session': u'001', u'participant': u''}
dlg = gui.DlgFromDict(dictionary=expInfo, title=expName)
if dlg.OK == False: core.quit()  # user pressed cancel
expInfo['date'] = data.getDateStr()  # add a simple timestamp
expInfo['expName'] = expName

# Setup filename for saving
filename = 'data/%s_%s_%s' %(expInfo['participant'], expName, expInfo['date'])

# An ExperimentHandler isn't essential but helps with data saving
thisExp = data.ExperimentHandler(name=expName, version='',
    extraInfo=expInfo, runtimeInfo=None,
    originPath=None,
    savePickle=True, saveWideText=True,
    dataFileName=filename)
#save a log file for detail verbose info
logFile = logging.LogFile(filename+'.log', level=logging.EXP)
logging.console.setLevel(logging.WARNING)  # this outputs to the screen, not a file

endExpNow = False  # flag for 'escape' or other condition => quit the exp

# Start Code - component code to be run before the window creation

# Setup the Window
win = visual.Window(size=(1280, 1024), fullscr=True, screen=0, allowGUI=True, allowStencil=False,
    monitor='testMonitor', color='black', colorSpace='rgb',
    blendMode='avg', useFBO=True,
    units='pix')
# store frame rate of monitor if we can measure it successfully
expInfo['frameRate']=win.getActualFrameRate()
if expInfo['frameRate']!=None:
    frameDur = 1.0/round(expInfo['frameRate'])
else:
    frameDur = 1.0/60.0 # couldn't get a reliable measure so guess

# Initialize components for Routine "TargetSound"
TargetSoundClock = core.Clock()
Words = sound.Sound(u'LIST1 with 1 sec words.wav') 
polygon = visual.Polygon(win=win, name='polygon',units='pix', 
    edges = 100, size=[3, 3],
    ori=0, pos=[0,0],
    lineWidth=.1, lineColor=[1,1,1], lineColorSpace='rgb',
    fillColor='red', fillColorSpace='rgb',
    opacity=1,interpolate=True)
mouse = event.Mouse(win=win)
x, y = [None, None]

# Create some handy timers
globalClock = core.Clock()  # to track the time since experiment started
routineTimer = core.CountdownTimer()  # to track time remaining of each (non-slip) routine 

# set up handler to look after randomisation of conditions etc
trials = data.TrialHandler(nReps=1, method='sequential', 
    extraInfo=expInfo, originPath=None,
    trialList=data.importConditions('workedXY20sec.xlsx'),
    seed=None, name='trials')
thisExp.addLoop(trials)  # add the loop to the experiment
thisTrial = trials.trialList[0]  # so we can initialise stimuli with some values
# abbreviate parameter names if possible (e.g. rgb=thisTrial.rgb)
if thisTrial != None:
    for paramName in thisTrial.keys():
        exec(paramName + '= thisTrial.' + paramName)

for thisTrial in trials:
    currentLoop = trials
    # abbreviate parameter names if possible (e.g. rgb = thisTrial.rgb)
    if thisTrial != None:
        for paramName in thisTrial.keys():
            exec(paramName + '= thisTrial.' + paramName)
    
    #------Prepare to start Routine "TargetSound"-------
    t = 0
    TargetSoundClock.reset()  # clock 
    frameN = -1
    routineTimer.add(0.010000)
    # update component parameters for each repeat
    if trials.thisTrialN == 0: Words.play()  # thisTrialN is 0-indexed
    # i.e. only on the very first trial, start the sound, 
    # which will keep playing in the background for 90 s. 
    polygon.setPos(TargetXY)
    # setup some python lists for storing info about the mouse
    # keep track of which components have finished
    TargetSoundComponents = []
    TargetSoundComponents.append(polygon)
    TargetSoundComponents.append(mouse)
    for thisComponent in TargetSoundComponents:
        if hasattr(thisComponent, 'status'):
            thisComponent.status = NOT_STARTED
    
    #-------Start Routine "TargetSound"-------
    continueRoutine = True
    while continueRoutine and routineTimer.getTime() > 0:
        # get current time
        t = TargetSoundClock.getTime()
        frameN = frameN + 1  # number of completed frames (so 0 is the first frame)
        # update/draw components on each frame
        
        
        # *polygon* updates
        if t >= 0.0 and polygon.status == NOT_STARTED:
            # keep track of start time/frame for later
            polygon.tStart = t  # underestimates by a little under one frame
            polygon.frameNStart = frameN  # exact frame index
            polygon.setAutoDraw(True)
        elif polygon.status == STARTED and t >= (0.0 + (.01-win.monitorFramePeriod*0.75)): #most of one frame period left
            polygon.setAutoDraw(False)
        
        # check if all components have finished
        if not continueRoutine:  # a component has requested a forced-end of Routine
            routineTimer.reset()  # if we abort early the non-slip timer needs reset
            break
        continueRoutine = False  # will revert to True if at least one component still running
        for thisComponent in TargetSoundComponents:
            if hasattr(thisComponent, "status") and thisComponent.status != FINISHED:
                continueRoutine = True
                break  # at least one component has not yet finished
        
        # check for quit (the Esc key)
        if endExpNow or event.getKeys(keyList=["escape"]):
            core.quit()
        
        # refresh the screen
        if continueRoutine:  # don't flip if this routine is over or we'll get a blank screen
            win.flip()
    
    #-------Ending Routine "TargetSound"-------
    for thisComponent in TargetSoundComponents:
        if hasattr(thisComponent, "setAutoDraw"):
            thisComponent.setAutoDraw(False)
    currentT = globalClock.getTime()
    WrdVol = Words.getVolume()
    # store data for trials (TrialHandler)
    x, y = mouse.getPos()
    buttons = mouse.getPressed()
    trials.addData('mouse.x', x)
    trials.addData('mouse.y', y)
    trials.addData('Timer',currentT)
    trials.addData('Sound',WrdVol)
    thisExp.nextEntry()
    
# completed 1 repeats of 'trials'


win.close()
core.quit()

ShoinExp

May 1, 2014, 4:56:57 AM
to psychop...@googlegroups.com
Hiya,
    I'm not sure if this answers your question, but as I understand it you can't get software to measure the "actual" volume at which a sound is played (in absolute terms), because that depends on the quality of your speakers and many other hardware components. I've run into the need to measure actual sound levels in experiments before, and the conclusion always seems to be that you need an external sound level meter. If you are instead asking whether there is a way to confirm that the software correctly played the (relative) sound level that was specified (rather than how much sound ended up coming out of the speakers), then that's a software issue that I know nothing about.
    Mark

Daniel E. Shub

May 1, 2014, 5:35:33 AM
to psychop...@googlegroups.com
You cannot easily do this in absolute terms. If your sound card, amplifier, and transducer (headphone/speaker) are linear, then you can estimate the relationship between sound level and PsychoPy "volume" in relative terms. (N.B. the system is unlikely to be linear if you are driving low-impedance headphones or a speaker directly from the sound card.) The "volume" in PsychoPy is essentially the proportion of the maximum output voltage that the sound card can produce. For a linear system and a pressure transducer (i.e., typical headphones and speakers), the relationship between output voltage and sound level is 20*log10 (i.e., level in dB = 20*log10(V/Vref)). This means that changing the PsychoPy "volume" from 1 to 0.1, or from 0.1 to 0.01, will result in a 20 dB drop in the sound level (approximately a 6 dB drop for every halving of the volume). Sound cards, headphones/speakers, and typical listening environments have limited headroom, so the range over which this approximation works is pretty limited (maybe 40 dB).
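That relative-level arithmetic can be sketched in a few lines of Python (the function name is mine, for illustration; the only assumption is the linear playback chain described above):

```python
import math

def level_change_db(volume_from, volume_to):
    """Change in sound level (dB) when the PsychoPy 'volume' setting moves
    from volume_from to volume_to, assuming the whole playback chain is
    linear (output voltage proportional to the 'volume' setting)."""
    return 20 * math.log10(volume_to / volume_from)

print(level_change_db(1.0, 0.1))   # -20.0 dB
print(level_change_db(1.0, 0.5))   # about -6.02 dB (halving the volume)
```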

The absolute sound level (e.g., in dB SPL) at the ear depends on a number of
factors including the sound file that is being played, the PsychoPy volume, the
maximum output voltage of the sound card, the voltage across the transducer,
the sensitivity of the transducer (i.e., the model of headphone or speaker),
the location of the listener and transducer (speaker/headphone) in the room,
and the room itself.

For a fixed set of hardware and sound file, you can make an estimate of the
sound level for any "volume" in PsychoPy by making the same linearity
assumptions and knowing/measuring the sound level for a single PsychoPy
volume. If you cannot measure the sound level, the specifications for most
transducers include a reference sensitivity (often measured in dB/Volt at 1
kHz). If you are willing to assume the sensitivity of the average transducer
matches the sensitivity of your transducer (this is generally reasonable for
high quality transducers) then you can estimate the sound level for any
"volume" by measuring the voltage across the transducer for a single PsychoPy
volume. This requires using a splitter between the sound card and transducer
so the system can be properly loaded. If you are using a power amplifier
between the sound card and transducer, then you can measure the output voltage
at the sound card directly.
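As a sketch of that estimate (the function name and the example numbers are mine, not from any spec sheet): given a sensitivity quoted in dB SPL at 1 V and the RMS voltage measured across the transducer at a known PsychoPy volume, the same linearity assumption gives the level at any other volume:

```python
import math

def estimated_spl(volume, measured_volts_at_vol1, sensitivity_db_spl_at_1v):
    """Rough dB SPL estimate for a given PsychoPy 'volume'.

    Assumes a linear chain (voltage across the transducer proportional to
    the 'volume' setting) and SPL = sensitivity + 20*log10(voltage / 1 V).
    measured_volts_at_vol1: RMS volts across the transducer at volume == 1.
    sensitivity_db_spl_at_1v: spec-sheet sensitivity (dB SPL for 1 V RMS).
    """
    volts = volume * measured_volts_at_vol1
    return sensitivity_db_spl_at_1v + 20 * math.log10(volts)

# e.g. 0.5 V RMS at full volume, headphones rated 100 dB SPL at 1 V:
print(estimated_spl(1.0, 0.5, 100.0))   # about 94.0 dB SPL
print(estimated_spl(0.1, 0.5, 100.0))   # about 74.0 dB SPL
```

Remember the headroom caveat above: outside roughly a 40 dB range the linearity assumption, and hence this estimate, breaks down.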

To generalize this across sound files you need to define what you mean by sound level. If the sound file contains both blank/dead-air sections and non-blank sections, then the sound level is changing over time. Often a long-term RMS of the sound file is used. If the transducer's response is flat this is fine, but if it doesn't have good low/high frequency response characteristics your estimate will be off.
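The long-term RMS of a sound file is easy to compute offline; here is a minimal sketch (mine, not from the thread) using only the standard library plus NumPy, assuming a mono 16-bit WAV like the one Kate describes:

```python
import wave
import numpy as np

def longterm_rms_dbfs(path):
    """Long-term RMS level of a mono 16-bit WAV file, in dB re full scale."""
    with wave.open(path, 'rb') as w:
        raw = w.readframes(w.getnframes())
    # scale int16 samples to the range [-1, 1)
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(rms)
```

Applying the same RMS computation to short windows (say 100 ms) rather than the whole file would also distinguish the spoken sections from the dead-air gaps, which is the information getVolume() cannot provide.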

Hope this helps.





Kate Cox

May 5, 2014, 5:37:41 AM
to psychop...@googlegroups.com, danie...@nottingham.ac.uk
Thank you both.
It sounds like what I'm wanting to do is either not possible or, probably more likely, far beyond my capabilities at this stage. So I think I'll have to make do without it.

But your help is very much appreciated.


Cheers
Kate