Viewing turn-taking data


Giacomo Verlicchi

Nov 13, 2019, 8:45:12 AM
to Rhythm Badges
Dear community,

I'd like to know if there is a way to see, in terms of turn-taking, how participation changed over the course of a meeting. In other words, just as the df_stitched dataframe tells me, for each time interval, whether each member spoke (True or False), I would like a chart showing how many turns each member 'took' in given intervals (e.g. every minute). I guess I have to count how many times, within a given interval, the boolean value changed, but I find that hard to code.
With that information, I would be able to see in more detail how each participant's contribution changed as the meeting progressed.
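
A minimal sketch of one way to do this (turns_per_interval is a made-up helper name, and this assumes df_stitched is a boolean pandas DataFrame with a DatetimeIndex and one column per member, True where that member is speaking in that slice): count False-to-True transitions as turn starts, then sum them per interval.

import pandas as pd

def turns_per_interval(df_stitched, freq="1min"):
    # A turn starts on a rising edge: speaking in this slice (True)
    # but not in the previous one (False).
    starts = df_stitched & ~df_stitched.shift(1, fill_value=False)
    # Count the turn starts that fall inside each interval.
    return starts.resample(freq).sum()

turn_counts = turns_per_interval(df_stitched, freq="1min")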

Of course, the only turn-taking output I can get directly from core.py in the openbadge-analysis GitHub repository is the total turn count for each member.

All advice welcome :)
Thanks a lot in advance for your help.

Oren Lederman

Nov 13, 2019, 11:28:55 AM
to Rhythm Badges
Hi Giacomo! I'm re-posting some info from an older post:

Turns calculation: first, a disclaimer. While the badges can do this, the feature works in relatively quiet environments (meeting rooms). Don't expect it to work well in a noisy environment (open spaces, large gatherings). It might work with a high enough threshold and some data loss, but I haven't tried that yet. Also, the code supporting this feature is one of the oldest parts of our analysis code and is a bit messy (in particular the hard-coded timezone in the function that reads the raw data).

You can find examples of how to handle audio data here: https://github.com/HumanDynamics/openbadge-analysis-examples/blob/master/notebooks/meeting_simple_plots.ipynb (focus on the sample2data and make_df_stitched functions).

There is also a new voice activity detection (VAD) function, created by my colleague, that is much better than the one I use in my examples; it supports overlaps and interruptions. We haven't fully integrated it into the pipeline yet, but there is a detailed explanation (with code examples) of how to use it here: https://github.com/HumanDynamics/openbadge-analysis-examples/blob/master/notebooks/multi-channel_VAD_illustration.ipynb

Is this helpful? I haven't worked with audio data for a couple of years, so I don't have newer examples. The last graph in the first notebook above might be a good starting point; it uses an old version of Bokeh, and the code that generates it is buried in our analysis package, but hopefully it gives you something to build on.
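
As a rough illustration (not from the original thread), assuming per-minute turn counts like the turn_counts sketched earlier in this thread (one column per member), a plain matplotlib line chart avoids the old Bokeh dependency:

import matplotlib.pyplot as plt

ax = turn_counts.plot(figsize=(12, 4))  # one line per member
ax.set_xlabel("Time")
ax.set_ylabel("Turns started per minute")
plt.tight_layout()
plt.show()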


Cheers,
Oren