Jon, either way, those were just ideas. The real point was that we
have flexibility of design. We have the event list and we can output
it however we want. The main thing is that we're not tying the event
list down to MIDI. Carl posted a huge list of instruments that
supported MTS a while ago on the original "Composition Software"
thread that I made on MMM. I'm not sure if all of those really have
MTS, but some of them do.
I suggested using only MTS and OSC, or something straightforward like
that (i.e. not pitch bending and channel swapping), for the first
version, because then at least we'll have a working sequencer. If we
really want compatibility with other synths that don't support MTS,
pitch bending and channel swapping is one way to do it, and I'm sure
it's not the only way.
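For reference, the pitch-bend route is mostly bookkeeping. Here's a
minimal sketch of the arithmetic in Python, assuming the synth's common
default bend range of +/- 2 semitones; all of the names are hypothetical:

# Sketch: retune a note via MIDI pitch bend, rotating channels so
# simultaneous notes with different bends don't clobber each other.
# Assumes a +/- 2 semitone bend range (the common default).

BEND_RANGE_SEMITONES = 2.0
BEND_CENTER = 8192          # 14-bit pitch bend, 0..16383, center = no bend

def bend_for_cents(cents_offset):
    """14-bit bend value for a deviation in cents from the nearest MIDI note."""
    semitones = cents_offset / 100.0
    value = BEND_CENTER + round(semitones / BEND_RANGE_SEMITONES * BEND_CENTER)
    return max(0, min(16383, value))

class ChannelRotator:
    """Hand out MIDI channels round-robin so each sounding note keeps its own bend."""
    def __init__(self, channels=range(16)):
        self.channels = list(channels)
        self.next = 0
    def take(self):
        ch = self.channels[self.next % len(self.channels)]
        self.next += 1
        return ch

# Example: play 60.5 (a quarter tone above middle C)
rot = ChannelRotator()
ch = rot.take()
bend = bend_for_cents(50)   # +50 cents
# send: pitch bend (bend) on channel ch, then note-on 60 on channel ch
print(ch, bend)             # -> 0 10240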
Just to get the core done, I suggested only MTS and OSC, simply to have
something that works. As for ways besides those two to implement
microtonal playback: we need to carefully examine how people are doing
it NOW, and see what parts of the process we can automate. Do people
use Scala files, for example? If so, we can make the software read
Scala files, and load the same file into the software that we load
into the synth.
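Reading Scala files is cheap to support: a .scl file is a description
line, a note count, and one pitch per line (cents if the value contains
a ".", otherwise a ratio), with "!" marking comment lines. A minimal
reader sketch, skipping edge cases:

import math
from fractions import Fraction

def read_scl(path):
    """Minimal Scala .scl reader: (description, cents value per scale step).

    Lines starting with '!' are comments.  The first real line is a text
    description, the second the number of notes, then one pitch per line:
    a value containing '.' is in cents, anything else is a ratio (3/2, 2, ...).
    """
    lines = [ln.strip() for ln in open(path) if not ln.strip().startswith('!')]
    description = lines[0]
    count = int(lines[1])
    cents = []
    for entry in lines[2:2 + count]:
        token = entry.split()[0]          # pitch may carry a trailing comment
        if '.' in token:
            cents.append(float(token))
        else:
            cents.append(1200.0 * math.log2(float(Fraction(token))))
    return description, cents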
You are correct in stating that coming up with completely revamped
underlying technologies to handle microtonal playback is too lofty a
goal for right now. And that never was my goal - I only want to round
up what we have now into one complete package, which in and of itself
would be enough of an improvement over what I've got that I'd find it
useful.
The technologies we have right now, as far as I know, are MTS, OSC,
Scala files, and MIDI with pitch bends and channel swapping - is there
anything else that people do? And if, by the time we have all of that
implemented, Yamaha meets up with Roland and drafts a brand new
microtonal spec that all of the new synths are using, then we'll add
that too. Adding different MIDI-esque output engines is simple with
the model I've been using - it's as trivial as following the spec. In
a sense, it's as easy as writing a "plugin" that reads the event list
differently. There are going to be a few extremely complicated parts
to this project, but formatting the output data shouldn't be one of
them.
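To make that concrete, the "plugin" boundary might look something like
this sketch; every name below is hypothetical:

from dataclasses import dataclass

@dataclass
class NoteEvent:
    time: float        # onset in seconds
    duration: float    # seconds
    pitch: float       # fractional MIDI number, e.g. 60.5
    velocity: int      # 0-127

class OutputEngine:
    """Base class: MTS, OSC, pitch-bend MIDI, ... each override play()."""
    def play(self, events):
        raise NotImplementedError

class DebugEngine(OutputEngine):
    """Toy backend: just prints the event list instead of making sound."""
    def play(self, events):
        for ev in events:
            print(f"{ev.time:8.3f}s  pitch={ev.pitch}  vel={ev.velocity}")

DebugEngine().play([NoteEvent(0.0, 1.0, 60.5, 96)])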
The question is, how much of this should we tackle as part 1 of this
project, and how much is stuff to worry about later?
-Mike
I really don't think it's possible to condense this discussion into
short posts. I'm trying the best I can here. I lack the relevant
skillset, perhaps.
Either way, I'm going to bed. Good night, sir.
-Mike
Carl said "We're relying on 3rd party synths for audio production.
The notation editor needs to send signals to the synth(s). MIDI is
one way to do that, OSC is another. MIDI will require the user run
multiple instances of synths in some cases, due to the 128 note wall.
OSC doesn't have that limitation, but I only know of two synths that
support it, and I don't know if either of them support it enough to
make it work microtonally."
Yeah, like Carl said, I do understand OSC. Conceptually it's quite simple and
just takes a couple of days to figure out. Programming encoders and decoders is
a bit more complicated, but it's not hard, and there are a number of existing
libraries for dealing with OSC, so that in itself isn't really a concern.
For me the important aspect would be to define a protocol on top of OSC,
because OSC itself doesn't define anything specifically related to audio or
music; it just defines an addressing scheme and how to format data for
transport from one place to another.
I sent a link in an earlier email with ideas on how to control synths via OSC
in a way that takes microtuning into account:
http://stud3.tuwien.ac.at/~e0725639/OSC-SYN.txt
So something like that can be used for tuning and triggering instruments.
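For a flavour of what that looks like on the wire, here's a minimal
sketch using the python-osc library. The /voice/... addresses are
invented for illustration, loosely in the spirit of the linked
proposal, and aren't taken from it:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # host and port are assumptions

# Turn on a voice with an explicit frequency rather than a MIDI note number.
client.send_message("/voice/1/on", [271.22, 0.8])   # frequency in Hz, amplitude
client.send_message("/voice/1/off", 1)              # release that voice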
Then the other side is the possibility of making various bits of the API
available as OSC commands, so each tool is open to communication from anywhere:
other tools in the toolset we're trying to establish here, existing OSC-capable
tools (Reaktor, Plogue Bidule, SuperCollider, Max/MSP), and also scripts and
code that anyone wants to create for themselves.
> I don't know the answer, folks, other than one suggestion I made:
> there are a couple people with connections to MMM that have created
> microtunable VSTi, and if one could find some decent code for basic
> VST hosting, you could have someone write a Very Simple Instrument
> (hell, sine or square waves or something!) that would be internal, and
> could at least give you aural feedback as to what your score is
> sounding like.
We can embed Timidity or FluidSynth or something. Not a big
deal. Not something we need to discuss now.
Graham
OSC ends up in Reaktor as any other event data. Reaktor does have some
limitations in its OSC implementation which I can't remember off the top of my
head, but I think it's something stupid like no strings in the data section and
no bundles.
But for doing something like transmitting tuning tables or note events, it's
got everything you need.
> I get the sense that OSC is so vague that no synth will be able to
> use it as a MIDI replacement without an additional standard on top.
> Is that wrong?
No, that's absolutely correct, and I think this is an area which needs to be
dealt with so that anyone creating an instrument (hardware or software) who is
considering OSC can find that the hard work is done. Then they can just pull the
bits they need 'off the shelf' and we all live happily ever after in a world of
microtonally capable instruments where MIDI is but a distant and sordid memory. :)
Martin.
As for the score editor, if all an OSC implementation requires is to
write an alternate output engine to deal with the sequential list,
then I don't expect it would be too much of a problem. Might be
something someone focuses on in a separate branch.
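If that's the shape of it, the OSC engine reduces to a loop over the
sequential list. A rough sketch, reusing the hypothetical event shape
from earlier (addresses invented):

import time
from pythonosc.udp_client import SimpleUDPClient

def play_osc(events, host="127.0.0.1", port=9000):
    """Walk a sequential event list and emit one (invented) /note/on per event."""
    client = SimpleUDPClient(host, port)
    clock = 0.0
    for start, pitch, velocity in sorted(events):   # (seconds, float pitch, 0-127)
        time.sleep(max(0.0, start - clock))
        clock = start
        client.send_message("/note/on", [pitch, velocity])

play_osc([(0.0, 60.5, 96), (0.5, 67.0, 80)])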
-Mike
Being as this list is about making micro tools, and since it's
inevitable that we're going to be making more than one tool, at some
point some further organization and sub-foruming will be necessary
anyway.
-Mike
> For microtonal music, Open Sound Control (OSC) has a very important
> advantage over MIDI: MIDI note pitches are limited to 0-127. In OSC,
> the pitches can be floats. For example, you can specify 60.5 to mean a
> quarter tone above the MIDI pitch 60 (middle C). This means that you
> can specify any microtonal pitch with a single note message -- you
> don't necessarily need something like pitchbend.
>
Two very early music languages, Formula and HMSL -- both in FORTH --
allowed decimals past the MIDI pitch number to control pitch bends.
djw
This is sort of how MTS works, but support for it is pretty limited right now.
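As a point of reference, MTS describes each key's target as a base MIDI
note plus a 14-bit fraction of a semitone. A sketch of that arithmetic
(sysex framing omitted, A4 = 440 Hz assumed; check against the spec
before relying on it):

import math

def freq_to_mts(freq_hz):
    """Split a frequency into (base MIDI note, 14-bit semitone fraction),
    roughly the per-note shape MTS uses."""
    fractional_note = 69 + 12 * math.log2(freq_hz / 440.0)
    base = int(fractional_note)                             # note below the target
    fraction = round((fractional_note - base) * (1 << 14))  # units of 2**-14 semitone
    return base, fraction

print(freq_to_mts(440.0))    # -> (69, 0)
print(freq_to_mts(452.9))    # ~quarter tone up: (69, ~8192)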
-Mike
Sure, many formats support this (synthesis systems where users freely
define the meaning of parameters, like Csound scores, and also several
composition systems like Common Music, PWGL's ENP, ...).
The advantage of OSC, however, is that you can use it much like MIDI.
You can send it in realtime to many applications (including
commercial apps like Reaktor), even across the network.
Best
Torsten
--
Torsten Anders
Interdisciplinary Centre for Computer Music Research
University of Plymouth
Office: +44-1752-586219
Private: +44-1752-558917
http://strasheela.sourceforge.net
http://www.torsten-anders.de
He uses the concept of a 'voice' rather than notes:
In the SYN proposal, you don't turn on notes, you turn on voices, which take a
note or frequency argument. The big advantage of this system is that you can
e.g. have as many 'c3' notes playing as you like, each one with different filter
cutoff values set.
Then he defines a selection of voice commands with features like absolute or
relative pitch, velocity, volume, pan and arbitrary control parameters.
Although for the parameter section I would say the same as Thor: rather than
specifically mapping synth params to a non-meaningful namespace (P1, P2, etc.
in his proposal), it would be better to have a system whereby you can discover
the synth parameters dynamically and use appropriately named messages, e.g.
filter1/cutoff, osc1/pulsewidth, etc.
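As a concrete illustration of both points, here are hypothetical
messages (all addresses invented) for two simultaneous C3 voices with
individually named parameters:

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # host and port are assumptions

client.send_message("/voice/1/note", 48)                # one C3...
client.send_message("/voice/1/filter1/cutoff", 800.0)
client.send_message("/voice/2/note", 48)                # ...and another C3,
client.send_message("/voice/2/filter1/cutoff", 2500.0)  # each with its own cutoff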
Also, what helps me to mentally tackle OSC from a technical perspective is just
to consider it another type of RPC (remote procedure call), albeit one where
you don't necessarily get a response.
From there it's easy to imagine mapping an OSC namespace to an API which could
be generated dynamically, either at compile time or runtime (or a mixture of
both). Of course that leads into the realm of things like type conversion /
mapping and other fun stuff, but all those kinds of ideas can wait until there
is actually something more concrete to deal with.
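To make the RPC analogy concrete, here's a small sketch using
python-osc's dispatcher; the address and handler are made up:

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def set_cutoff(address, value):
    """Each incoming message arrives as a plain function call."""
    print(f"{address} -> {value}")

dispatcher = Dispatcher()
dispatcher.map("/filter1/cutoff", set_cutoff)   # the address acts as the procedure name

server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
server.serve_forever()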
Martin.
If anyone wants to go straight to the technical details, it's here: