paths vs samples


Jeffrey Trevino

Nov 21, 2011, 3:09:47 AM
to athenacl
Hi there,

I have another "guts" question about how the sampler-oriented CSoundNative event mode instruments work in athenaCL:

The documentation for the rhythmic objects refers to pitches in a path; for example, in the documentation for binaryAccent,

"Deploys two Pulses based on event pitch selection. Every instance of the first pitch in the current set of a Texture's Path is assigned the second Pulse; all other pitches are assigned the first Pulse. Amplitude values of events that have been assigned the second pulse are increased by a scaling function."

However, if I'm using a sampler, and parameter x0 is set to choose randomly from a specified set of filenames, what happens to this pitch-based selection? Is there some kind of mapping between the filenames and pitches that happens automatically? For example, does it treat the first listed file as the "first pitch," playing it at each instance of the second specified pulse? Should I have the same number of files as pitches in a path in order for the rhythm object to work as described?

I'm curious how others approach the use of non-pitch-shifting, sample-based instruments that nonetheless have pitch-based paths attached to them. When I render new event lists, I get a funny cluster of lots of samples at the beginning of the file (this may or may not be related).

Here's the texture instance I'm working with:

TI: pops, TM: DroneArticulate, TC: 0, TT: TwelveEqual
pitchMode: pitchSpace, silenceMode: off, postMapMode: on
midiProgram: piano1
      status: +, duration: 000.0--60.13
(i)nstrument        31 (csoundNative: samplerRaw)
(t)ime range        00.0--60.0
(b)pm               constant, 120
(r)hythm            binaryAccent, ((9,6,+),(9,2,+))
(p)ath              forBob
                    (C4,D4,E4,F4,G4,A4,B4)
                    60.00(s)
local (f)ield       constant, 0
local (o)ctave      constant, 0
(a)mplitude         randomBeta, 0.4, 0.4, (constant, 0.7), (constant, 0.9)
pan(n)ing           constant, 0.5
au(x)iliary
      x0            sampleSelect,
                    (drum01.aif,mpHit2.wav,mpHit3.wav,mpHit4.wav,mpHit5.wav,mpHit6.wav),
                    randomChoice
texture (s)tatic
      s0            maxTimeOffset, 0
      s1            levelFieldMonophonic, event
      s2            levelOctaveMonophonic, event
texture (d)ynamic   none

wondering,
Jeff

--
《〠》】〶【〖〠〗〶〛〷〚
Jeff Treviño
PhD Candidate in Music Composition
@ the University of California, San Diego
〖〠〗〶〛〷〚《〠》】〶
Skype: jeffreytrevino
E-mail: jeffrey...@gmail.com
〚《〠》】〶【〖〠〗〶〛〷
9310H Redwood Dr.
La Jolla, CA 92037
USA
〖〠〗〶〛〷〚《〠》】〶【

Jeffrey Trevino

Nov 21, 2011, 6:15:57 PM
to athenacl
Ah - I now see that I misunderstood the DroneArticulate texture module. Because it renders each pitch of the path throughout the specified duration, I was getting six voices per texture instance. Now I realize that if I want a monophonic texture instance with the DroneArticulate parent module, I need to associate my texture instance with a path that contains just one pitch. In the raw sampler instrument, each instance of this pitch becomes a randomly selected sample. At least -- this is what I hear!
J

christopher ariza

Nov 22, 2011, 8:06:52 AM
to athe...@googlegroups.com

hi jeff. great questions.

binaryAccent is one of the oldest parameterObjects (circa 2000, in fact!). as you have found, it deploys different rhythms based on pitch selection from the Path. however, it works independently of any particular instrument, including csound instruments. thus, the selection of pitch will not correlate with sample selection (which is itself controlled by its own parameterObject, sampleSelect in your case).
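The rule described above can be sketched as follows. This is a hypothetical illustration, not athenaCL's actual implementation: the function name, the event representation as (pitch, amplitude) pairs, and the accent scaling factor are all assumptions made for the sketch. The point it demonstrates is that pulse assignment keys off the pitch stream alone, so an independent sample-selection stream is unaffected.

```python
# Hypothetical sketch of the binaryAccent rule (not athenaCL source).
# Events on the first pitch of the current Path set get the second
# Pulse and a scaled-up amplitude; all other events get the first Pulse.
def binary_accent(events, first_pitch, pulse_a, pulse_b, accent_scale=1.25):
    """events: list of (pitch, amplitude); returns (pitch, pulse, amplitude)."""
    out = []
    for pitch, amp in events:
        if pitch == first_pitch:
            # accented event: second pulse, boosted amplitude
            out.append((pitch, pulse_b, amp * accent_scale))
        else:
            # ordinary event: first pulse, amplitude unchanged
            out.append((pitch, pulse_a, amp))
    return out

events = [("C4", 0.7), ("D4", 0.8), ("C4", 0.7), ("E4", 0.8)]
result = binary_accent(events, "C4", (9, 6, "+"), (9, 2, "+"))
```

Note that nothing here consults the sample list: if x0 is drawing filenames at random, the accent pattern and the sample choice remain two uncorrelated streams.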

now, the DroneArticulate texture module, as you have found, will try to create as many voices as there are pitches in the Path. if the instrument realizing these voices does not take pitch information into account (as with csound sample playback), then you will simply get as many voices as the Path specifies.

nonetheless, DroneArticulate may still be useful, as it writes each voice one at a time, looping around (over the duration) to write additional voices. this behavior causes a sort of shifting/phasing of parameter generator output, which can be effective even for non-pitched output.
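The voice-per-pitch layout and the phasing effect described above can be sketched like this. Again a hypothetical illustration, not athenaCL source: the function, the toy cyclic amplitude generator, and the fixed event step are assumptions. Because voices are written sequentially and share one stateful generator, each new voice picks up where the previous voice left the generator, shifting the pattern across voices.

```python
# Hypothetical sketch of DroneArticulate's voice writing (not athenaCL
# source): one full-duration voice per Path pitch, written one voice at
# a time, polling a shared stateful parameter generator in voice order.
from itertools import count

def drone_articulate(path_pitches, duration, step, generator):
    """Return (pitch, time, amplitude) events, one voice per pitch."""
    score = []
    for pitch in path_pitches:          # loop around: one pass per voice
        t = 0.0
        while t < duration:
            # generator state carries over between voices -> phasing
            score.append((pitch, t, next(generator)))
            t += step
    return score

# toy amplitude generator cycling 0.5, 0.6, 0.7, 0.8, ...
amps = (round(0.5 + 0.1 * (i % 4), 1) for i in count())
score = drone_articulate(["C4", "D4", "E4"], 2.0, 1.0, amps)
# 3 voices of 2 events; the D4 voice starts at 0.7, where C4's left off
```

With a pitch-ignoring sampler instrument, the three "voices" are indistinguishable by pitch, but the shifted generator output still differentiates them, which is why the module can be worthwhile even for non-pitched material.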

for the simplest, linear monophonic behavior, TM LiteralHorizontal might be a good choice. for creating overlapping, non-metered multi-voice textures (like DroneArticulate), TMs TimeFill and TimeSegment are good choices.
