tools for notation of EA / digital media


CEC jef chippewa

Aug 25, 2016, 2:31:57 PM
to cec-con...@googlegroups.com

a simple question: what tools do you use?

jef

--
[CALL] eContact! 18.3 -- DIY, Hacking, Repurposing (deadline 15 Sep)
[CALL] eContact! 18.4 -- Notation (deadline 30 Sep)
http://econtact.ca/call.html

Communauté électroacoustique canadienne (CEC)
Canadian Electroacoustic Community
HOME http://cec.sonus.ca \|\ JOURNAL http://econtact.ca
http://FACEBOOK.com/cec.sonus \|\ http://TWITTER.com/cec_ca

David Gray

Aug 26, 2016, 2:11:29 PM
to cec-con...@googlegroups.com
I have a PhD thesis, 'Visualization and Representation of Electroacoustic Music'. I can email it if you'd be interested, but in about a week, when I get back home and have wifi. It's about 360 MB. I did it in 2013 at De Montfort with Simon Emmerson.
Best wishes
Best wishes

Sent from my iPhone

Richard Scott

Aug 26, 2016, 3:19:02 PM
to cec-con...@googlegroups.com
despair mostly

On Thursday, 25 August 2016, CEC jef chippewa <j...@econtact.ca> wrote:

a simple question: what tools do you use?

jef


David Gray

Aug 26, 2016, 3:40:08 PM
to cec-con...@googlegroups.com
It has lots of examples in it and lots of composer case studies both current and historical.

Sent from my iPhone

CEC jef chippewa

Aug 26, 2016, 3:47:03 PM
to cec-con...@googlegroups.com

nice! is it open source or shareware?

pmre...@gmail.com

Aug 26, 2016, 4:06:27 PM
to cec-con...@googlegroups.com
I'm guessing it's subscription. 

Sent from my iPhone

Bedard martin

Aug 26, 2016, 4:15:49 PM
to cec-con...@googlegroups.com

Hi David,


I think it's possible to find your thesis here…


https://www.dora.dmu.ac.uk/handle/2086/10561


Is it the complete document?


Martin Bédard



David Gray

Aug 27, 2016, 1:57:12 AM
to cec-con...@googlegroups.com
Yes looks like it but I've not got enough data here on holiday to check the illustrations. Roaming charges aaah!

Sent from my iPhone

couprie...@free.fr

Aug 27, 2016, 3:25:07 AM
to cec-con...@googlegroups.com
Hello,

You can use my free software EAnalysis: http://eanalysis.pierrecouprie.fr.

If you read French, I studied the relations between analysis, representation and theory in my Habilitation thesis (http://www.pierrecouprie.fr/?page_id=1350). You can also find several papers in French and English on my website (http://www.pierrecouprie.fr/?page_id=8).

Best,

Pierre Couprie

--
Pierre Couprie
Associate professor
School of Education, Paris-Sorbonne University
http://www.pierrecouprie.fr
Institut de recherche en musicologie (UMR 8223 CNRS)
http://www.iremus.cnrs.fr




David Gray

Aug 27, 2016, 6:37:40 AM
to cec-con...@googlegroups.com
I've checked with Wifi and it is complete. 
Best wishes

Sent from my iPhone

Michael Matthews

Aug 27, 2016, 8:12:06 AM
to cec-con...@googlegroups.com
are there plugins?

David Gray

Aug 27, 2016, 8:24:41 AM
to cec-con...@googlegroups.com
No

Sent from my iPhone

Kevin Austin

Aug 27, 2016, 4:05:47 PM
to cec-con...@googlegroups.com
In my work I have found Dis-Pair to be extremely buggy. While being open source and shareware, I have found it to be problematic in any specific individual application. Rather than plugins, I would suggest work-arounds — avoidance being the better part of pallor. The work-arounds often include prescriptions, and frequently, for short-term use, I have found proscription to be adequate.

I think that it is the ‘duality’ nature of Dis-Pair that needs to be addressed. Sometimes a simple reboot, in the right place, fixes issues, but sometimes a more complete system reinstall has been needed. In more extreme cases, a system re-write has been necessary. In extreme cases, a personality transplant can be attempted.

A mentor of mine proposed that it is possible to go to non-prescription options to smooth the path — anything to get the work done.


Kevin




Kevin Austin

Aug 27, 2016, 4:11:47 PM
to cec-con...@googlegroups.com
Thank you.

A treasure trove.

Kevin

David Hirst

Aug 28, 2016, 12:35:39 AM
to cec-con...@googlegroups.com, David Hirst
I have used a number of tools and notations for the analyses I have done of ea music. Many are documented on the OREMA Project website.

As a part of my PhD I did a detailed analysis of Smalley's Wind Chimes. A summary can be found here:

This analysis used a scheme I developed called the SIAM Framework. SIAM stands for Segregation, Integration, Assimilation and Meaning; it is summarised in the following PDF:

In this analysis I developed a display that showed a spectrogram of two-minute sections of the work; underneath this I included symbols, text and timing information relating to sound objects. The whole thing was drawn and programmed in Flash, so that one could play the sound file, see the sound objects, and see where they were placed in relation to the time scale, with a cursor that followed the playback. For copyright reasons, I can't put the whole animation online, but there are screen shots for the whole piece in the following PDF. You could import them and a purchased copy of Wind Chimes into Pierre Couprie's EAnalysis program:
Also included in that PDF are screen shots of a tabulated "reduction" of all the initial information, summarised into half the time frame (another form of representation: a "time-span reduction").
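
For anyone wanting to mock up that kind of display today, here is a minimal sketch in Python (this is not the original Flash code; the audio file name and the sound-object annotations are placeholders, and the moving playback cursor is left out):

# Minimal sketch: spectrogram of a two-minute section with placeholder
# sound-object labels on a strip underneath.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("excerpt.wav", offset=0.0, duration=120.0)   # placeholder file
S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

# Placeholder sound-object annotations: (onset time in seconds, label)
objects = [(3.2, "metallic attack"), (17.8, "granular texture"), (42.5, "swell")]

fig, (ax_spec, ax_obj) = plt.subplots(2, 1, sharex=True,
                                      gridspec_kw={"height_ratios": [4, 1]},
                                      figsize=(12, 5))
librosa.display.specshow(S, sr=sr, x_axis="time", y_axis="log", ax=ax_spec)
for t, label in objects:
    ax_obj.axvline(t, color="k")
    ax_obj.text(t, 0.5, label, rotation=45, fontsize=8)
ax_obj.set_yticks([])
ax_obj.set_xlabel("time (s)")
plt.tight_layout()
plt.show()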

I used Pierre Couprie's program and Sonic Visualiser to carry out an analysis of Jonty Harrison's Unsound Objects in the first edition of the eOREMA Journal. There are a number of different representations in that article; for the pictures and text, see:

This analysis provoked a further interest in the representation of "activity" in ea music and how this kind of temporal analysis could be automated and represented. The following paper summarises work to explore the "Rhythmogram" representation of sonic events, and work to represent and automate activity and segmentation creation:

Related papers on this work can be found in the following:

Hirst, D. (2014). The Use of Rhythmograms in the Analysis of Electro-acoustic Music, with Application to Normandeau's Onomatopoeias Cycle. Proceedings of the International Computer Music Conference 2014, Athens, Greece, 14-20 September 2014, pp. 248-253.

Hirst, D. (2014). Determining Sonic Activity in Electroacoustic Music. Harmony: Proceedings of the Australasian Computer Music Conference 2014, hosted by the Faculty of the Victorian College of the Arts (VCA) and the Melbourne Conservatorium of Music (MCM), 9-13 July 2014, pp. 57-60.

The Rhythmogram can be used to depict both long-term structures (the whole piece) and short-term, detailed structures (say, 10 seconds). The pictures have been said to have some resemblance to the hierarchical diagrams of tonal music by Lerdahl and Jackendoff.

MATLAB and the MIRtoolbox, plus the Auditory Toolbox were used for the Rhythmograms and automated sound segregation.
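
As a rough, illustrative analogue in Python (not the MATLAB / MIRtoolbox code used in the papers above), the basic rhythmogram idea can be sketched as an onset-strength curve smoothed at progressively longer time scales and stacked into an image; the input file name is a placeholder:

# Rhythmogram-style picture: onset strength smoothed at many time scales.
import numpy as np
import librosa
from scipy.ndimage import gaussian_filter1d
import matplotlib.pyplot as plt

y, sr = librosa.load("piece.wav")                       # placeholder input
env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=512)
hop_dur = 512 / sr                                      # frame duration in seconds

scales = np.linspace(0.05, 10.0, 60)                    # smoothing scales, 50 ms to 10 s
rhythmogram = np.vstack([gaussian_filter1d(env, s / hop_dur) for s in scales])

plt.imshow(rhythmogram, aspect="auto", origin="lower",
           extent=[0, len(env) * hop_dur, scales[0], scales[-1]])
plt.xlabel("time (s)")
plt.ylabel("smoothing scale (s)")
plt.show()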

A lot more detail can be found in my book:
Hirst, D. (2008). A Cognitive Framework for the Analysis of Acousmatic Music: Analysing Wind Chimes by Denis Smalley. VDM Verlag Dr. Müller Aktiengesellschaft & Co. KG, Saarbrücken.
It is available through your local amazon.com online store.

Or just download my PhD from:

I have some other stuff too, but that is enough for now. 👴🏼

Cheers,
David

Dr David Hirst
Honorary Principal Fellow
Melbourne Conservatorium of Music 
University of Melbourne 
Parkville, Vic
Australia

Andreas Bergsland

Aug 28, 2016, 3:14:44 PM
to cec-con...@googlegroups.com
A lot of interesting approaches here that I would like to check out!
I have developed a framework for the analysis of voices in EAM using a seven-axis model, where I have made "graphs" in seven colours with the Acousmographe and then combined them with multi-axis diagrams made in Excel.
Here is an example (without the music):
http://folk.ntnu.no/andbe/PhD/Images/Min-max-model_Bergsland.png
(the whole thesis is at folk.ntnu.no/andbe/PhD/PhD_Thesis_Bergsland_WEB.pdf, analyses are given in chapter 12, from p.310).
See also my article in Organised Sound: (http://ejournals.ebsco.com/Article.asp?ContributionID=30674110)
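
For readers without Excel, a multi-axis diagram of this general kind can be mocked up in a few lines of Python (the axis names and values below are placeholders, not data from the model described above):

# Seven-axis ("radar") diagram with placeholder axis names and ratings.
import numpy as np
import matplotlib.pyplot as plt

axes = ["axis 1", "axis 2", "axis 3", "axis 4", "axis 5", "axis 6", "axis 7"]
values = [3, 5, 2, 4, 1, 5, 3]                          # placeholder ratings

angles = np.linspace(0, 2 * np.pi, len(axes), endpoint=False)
angles = np.concatenate([angles, angles[:1]])           # close the polygon
vals = values + values[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, vals)
ax.fill(angles, vals, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(axes)
plt.show()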
Best,
Andreas Bergsland
-- 
Andreas Bergsland

Associate professor - førsteamanuensis
Music Technology Programme - Musikkteknologiseksjonen
Department of Music - Institutt for musikk
Olavskvartalet
NTNU (Norwegian University of Science and Technology)
7491 Trondheim
NORWAY

Visiting address/besøksadresse: Fjordgt.1 (3.etg.)
e-mail: andreas....@ntnu.no
Web: http://folk.ntnu.no/andbe
Office phone: 7359 0096
Mobile:       4566 3316

Phivos-Angelos Kollias

Aug 29, 2016, 10:32:12 AM
to cec-con...@googlegroups.com

I find rather useful – at least as a tool for my mental “””representation””” of sound – the approach of Lasse Thoresen, which combines Schaeffer and Smalley. They have also built a font for representation called “Sonova”. See: Lasse Thoresen: Spectromorphological Analysis of Sound Objects. https://is.gd/nantan

In addition, there is a useful plugin for the Acousmographe that includes Thoresen's font: https://is.gd/2YKfT7

Regards,

Phivos


Phivos-Angelos Kollias


http://phivos-angelos-kollias.com




David Hirst

Aug 30, 2016, 1:53:38 AM
to cec-con...@googlegroups.com, David Hirst
Nice reference, and work, thanks for that.

BTW: If you’re having trouble with any of the OREMA links, like

I think the site is a bit flakey, sometimes up and sometimes down. (I get errors too.) But ...

This Youtube video is a 20 min presentation I made at DMU:

If you skip the start and jump to 9’02”, there is an explanation and demonstration of the “Interactive Study Score” for Smalley’s Wind Chimes - with excerpts from here and there within the piece, highlighting different phenomena from different sections of the work. Using an Interactive Study Score allows one to jump to different parts of the work to make these sorts of comparisons.

Cheers,
David


David Hirst

Aug 30, 2016, 1:57:20 AM
to cec-con...@googlegroups.com
Or even: 

dum, doh!

DH

Risto Holopainen

Aug 30, 2016, 9:12:09 AM
to cec-con...@googlegroups.com

Here is an analysis of Parmerud's Les objets obscures using those tools:

As with Smalley's Wind Chimes, the texture is rather sparse, which perhaps makes it feasible to analyse the piece. On the other hand, really dense textures, such as Bohor by Xenakis, should be representable by just a few signs.

I agree that this type of notation can be useful for the mental representation of sound; it could even be used in the planning phase of composition. But I do think there is a limitation in its disregard for sound sources or indexical listening. It doesn't have the advantage of an orchestral score, where you can see what timbre to expect from the vertical position on the page.

By the way, another resource is Stéphane Roy's book L'analyse des musiques électroacoustiques, at least the first part of it. It doesn't introduce any new notations, but there are some of those stylised spectrograms that have been used by some composers.


Risto Holopainen

David Hirst

Aug 30, 2016, 7:25:10 PM
to cec-con...@googlegroups.com
Very interesting visually and a lot of work!

I had a talk with Cat Hope once about the notion of scrolling scores. My view is that a scrolling score is harder on the eye to keep up with, and seems to carry a higher cognitive load, when compared with using a static 'page' or screen with a moving cursor and then changing pages at the appropriate time. The following has the same notation for the same piece, but uses static screens and a moving cursor:

I agree with your comment regarding sparser pieces and denser pieces.

Re disregard for sound sources: I agree, and that is why I like to include hard information in my representations, like exact times, frequencies, and a label or descriptor. In the case of the latter, it enables indexical or semantic listening or interpretation (at least). In the case of my analysis of Harrison's Unsound Objects, I was able to tabulate these semantic-type descriptors, in time order, to try to observe any kind of patterning in their use.

See a Table 1 and the appendix of:
... And Section 4, which deals with Sonic Archetypes, where it talks about the fact that: "Sonic archetypes can be mimetic, or they can be functional or structural models."
BTW: the OREMA site seems to be up and running ok now.

BTW 2: With regard to composition: I am currently working on a piece where, working with a lot of sound source files, I'm kind of reverse-engineering the above analysis process, whereby I am listing sound sources, grouping them and re-ordering them prior to processing and assemblage. A sort of pre-planning process, to organise a large number of sounds. How it will turn out will be totally up to the interactions between me and the sounds, and between the sounds themselves, in the creation process - that mix of the mimetic, functional, structural and decorative. That combination of working in the spectral domain and the soundscape-referent domain that is unique to certain ea music. (And that is so difficult to notate for/about!)
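
A toy sketch of that kind of pre-planning in Python (the file names and descriptor tags are placeholders, not the actual material):

# Catalogue source files with descriptor tags, group them by tag,
# and order each group before any processing or assemblage.
from collections import defaultdict

catalogue = [
    ("door_slam_01.wav", "impact"),
    ("rain_gutter.wav", "texture"),
    ("glass_chime_03.wav", "impact"),
    ("vocal_fragment.wav", "utterance"),
    ("tram_pass.wav", "texture"),
]

groups = defaultdict(list)
for filename, tag in catalogue:
    groups[tag].append(filename)

for tag in sorted(groups):                  # a simple, re-orderable working plan
    for filename in sorted(groups[tag]):
        print(tag, filename)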

Best Regs,
David

Dr David Hirst
Melbourne - where it is Spring tomorrow!

Kevin Austin

Aug 31, 2016, 12:20:22 AM
to cec-con...@googlegroups.com
In my reading there has been an occasional conflation of ‘description’ and ‘analysis’.

In my classes I propose about seven ways of describing / analyzing pieces listened to. The major division is whether the system is based on a temporal method (some kind of timeline), or is non-temporal, such as an image or a textual commentary.

For me, "analysis" will reflect relationships, and in a more advanced form, the 'meaning' [semantic] of these relationships. The analysis also often requires that a system of data reduction be invoked. For example, from the classical repertoire, the substitution of the bassoon (mm. 303 ff.) for the horn (mm. 59 ff.) in the transition between first and second subjects in the first movement of the Beethoven Fifth carries 'little' meaning. It is done because the transposed notes could not be played by the horn.

As Risto mentions, the analysis [description] of sparse works is simpler than that of works whose basic language is very complex, such as the tape part in Kathinkas Gesang. On a simple listening, many of the 'phased-like' sounds will appear integrated, but a careful listening will allow the listener to perceptually start to separate out [segregate] inner elements. My listening here is extremely hierarchical. I am unable to provide a mental representation of what I hear.

Part of this is in the extensive use of multi-channel playback systems. Sorry, but I will largely discount amplitude-based 'panning' [sic] or diffusion of two-channel pieces. With point sources in from 4 to 8 to nn channels, sounds may be intended to be perceived as occupying three dimensions. Individual listeners will integrate or segregate sounds as they occur.

The possibility of complex composed hierarchical relationships multiplies in multi-channel space. The mind will group elements — first of all having decided where to segment the auditory stream. :-] If the elements are integrated, there is for me a sense of 'texture' [see also harmony]; if they remain segregated, there is gestural evolution [see also counterpoint].

The segmentations, and their accompanying or defining points of articulation, are in my experience mostly language-based. There is no 'naive listening', for everyone brings their own listening experience, upon which they build their individual representation of auditory reality.

I read jef’s invitation as a prod, or stimulus for discussion of some of the larger topics related to the question of “What do you mean by analysis, anyway?” I would like to see and hear responses to this question presented in a way which is more 21st century than 19th century, that is, as documents with sound files linked directly to the text.  :-)

Pablo Garcia-Valenzuela

Aug 31, 2016, 2:20:35 PM
to cec-con...@googlegroups.com

Hi everybody,


I am looking for a way to do live spatialisation with electronic drums (pads) and keyboard. A "multichannel sampler driven by MIDI data", I would say. For example, velocity info on a snare will determine a specific multichannel design, including multichannel reverb of course, which may be either a particular spatial scene, or movement(s) (panning), or both! So I can create a complex system.


Any advice? I guess the obvious choice is Max? would you confirm this? any other tools/systems/samplers I should consider?


Thanks! Best wishes!


Pablo

Kevin Austin

Aug 31, 2016, 2:59:58 PM
to cec-con...@googlegroups.com

Depending upon your skills and how far you want to go, you may want to look at the new ‘Core’ structure in Reaktor 6 as well.

Kevin





Eliot Britton

Sep 1, 2016, 9:29:09 AM
to cec-con...@googlegroups.com
If you are looking for a stable / straightforward solution, NI Kontakt has functional surround panning / modulation capabilities built in. More importantly it has excellent velocity layering and a deep sampler engine.

Surround features are added functions in the engine, so they take a little figuring out, but they are very functional once up and running. I think my setup was 16-channel surround. One neat bonus feature is that you can re-map to various surround formats with a few clicks. Modulation and sample data can stay the same, which is very convenient for obvious reasons.

I have mapped velocity to XY position within a multi-channel setup and it was effective. Each velocity curve can be adjusted, which is useful for drum pads (Roland V-Drums). Subtle modulation can be introduced to the panning position to give a wider and more dynamic spatial image. It's not the same mindset as working with SPAT, though; it's more of a sampler / cinema-driven workflow / conception.
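
To make the idea concrete, here is a small sketch of that mapping in Python (this is not Kontakt's KSP scripting; the speaker layout and the velocity-to-position curve are assumptions for illustration only):

# Map MIDI velocity to an XY position, then derive per-speaker gains
# by simple distance-based amplitude panning over an assumed 8-speaker ring.
import math

SPEAKERS = [(math.cos(a), math.sin(a)) for a in [i * math.pi / 4 for i in range(8)]]

def velocity_to_xy(velocity, max_radius=1.0):
    """Soft hits stay near the centre; hard hits push towards the edge."""
    r = max_radius * (velocity / 127.0) ** 2            # adjustable curve
    theta = (velocity * 0.37) % (2 * math.pi)           # arbitrary angular spread
    return r * math.cos(theta), r * math.sin(theta)

def speaker_gains(x, y, rolloff=2.0):
    """Closer speakers get more gain; gains are power-normalised."""
    gains = [1.0 / (0.1 + math.dist((x, y), s)) ** rolloff for s in SPEAKERS]
    total = sum(g * g for g in gains) ** 0.5
    return [g / total for g in gains]

# Example: a snare hit at velocity 96
x, y = velocity_to_xy(96)
print([round(g, 3) for g in speaker_gains(x, y)])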

Multi-channel wave files can be triggered as samples, or created from scratch in the Kontakt editor, etc. The reverb isn't so great, but you can always put something on the mixer stage.

Also, Kontakt is limited to cinematic surround formats (quad, 5.1, 7.1, 11.2, etc.). In performance I have found them to be a functional substitute for SPAT. However, at the time when I was using this system, SPAT hadn't reached a stable build and this was the best solution to a difficult problem. SPAT and Komplete could probably be used in conjunction, with Kontakt triggering / panning the samples in the surround image and SPAT v3 taking care of the heavy-lifting reverb.

Best,

-Eliot 

Pablo Garcia-Valenzuela

Sep 5, 2016, 4:57:07 PM
to cec-con...@googlegroups.com

Thanks Eliot, thanks Kevin. I was expecting a bit more enthusiasm from the list on this subject (perhaps it was discussed in full in the past and I missed it?). Anyway, once again thank you; your replies were very useful. I am already into Kontakt all day long and I am getting results much faster than I thought.


Best!


Pablo





Kevin Austin

Sep 5, 2016, 5:38:59 PM
to cec-con...@googlegroups.com
Hmmm . . . I’m not sure that live spatialization is very ‘big’, and there are comparatively few opportunities to try it. Some universities / institutions have 8 or more channels, but access to these facilities is a luxury.

As you note, many of the traditional problems with interfaces have disappeared as more packages contain automatic features for receiving, assigning and sending data and signals. The issue as I have seen it was one of successful design of the instrument [assignment of information].


Kevin




Pablo Garcia-Valenzuela

Sep 5, 2016, 6:22:36 PM
to cec-con...@googlegroups.com

Yes, agreed, Kevin, it is about successful design of the instrument; I just need the tools. Thanks!





Eliot Britton

Sep 5, 2016, 8:09:06 PM
to cec-con...@googlegroups.com
With Kontakt it is straightforward to combine 8-channel wave files and vector-panned / modulated mono files, creating a dynamic, real-time, multi-channel setup. The trick is really getting those sample layers, velocity curves and gain stages into expressive alignment.

If there is something else that can do that type of thing I would be interested in hearing about it as well.

Best,

-Eliot