Virtual instruments based on samples

Jorge W

Mar 13, 2023, 4:07:57 AM
to SpectMorph
Hi y'all. I just came across SpectMorph while looking for algorithms that can be applied to sample looping and sample transitions in virtual instruments built from samples.
Kontakt, one of the best-known sample engines, is paradoxically very limited for that purpose. The methods commonly used for this task are based on matching waveforms in the time domain and crossfading between audio fragments, and the results are far from perfect.
So I decided to look for something that takes a deeper approach to the problem.
A practical example would be playing human voice samples (of different pitches) in legato mode, similar to a monophonic analog synth and to "real life" singing.

Have you ever considered this kind of use for the SpectMorph engine? I would be available for collaboration, since I have some technical background.

Best

Stefan Westerfeld

Mar 15, 2023, 9:36:37 AM
to spect...@googlegroups.com
Hi!

When I initially developed SpectMorph, one of the use cases I tested was
a crescendo between samples. So you would have (for instance) three looped
samples for different volume levels, like p, mf and f recordings of the
same instrument. Then you could morph between these samples using
SpectMorph. The results were usually much better than a crossfade,
because in the SpectMorph case you don't get any cancellation
effects between the partials of the samples. Of course that assumes
the input material is suitable for SpectMorph in the first place
(orchestral instruments typically are).

Technically, the tests were done using a MorphGrid operator containing
SpectMorph versions of the three samples, with the volume level as the
MorphGrid operator's input. As far as I remember, it was best to do
volume normalization before morphing and re-apply the correct volume
afterwards. The main reason SpectMorph doesn't do that "in production"
yet is that we don't have the required annotated sample material for
all instruments.
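
Roughly, a minimal sketch of that normalize/morph/rescale idea could look
like the code below. This is hypothetical illustration code, not
SpectMorph's actual API, and it assumes the partials of both frames are
already matched up one-to-one and that neither frame is silent.

#include <cmath>
#include <vector>

struct Partial { double freq; double amp; };
using Frame = std::vector<Partial>;   // one analysis frame = a set of partials

static double
frame_energy (const Frame& f)
{
  double e = 0;
  for (const auto& p : f)
    e += p.amp * p.amp;
  return std::sqrt (e);
}

// Morph two frames with matched partials; 'pos' in [0,1] selects between the
// quiet recording (a) and the loud recording (b).  The amplitudes are
// normalized before interpolation and the interpolated loudness is re-applied
// afterwards; frequencies are interpolated directly, so no phase cancellation
// between partials can occur the way it does in a time-domain crossfade.
Frame
morph_frames (const Frame& a, const Frame& b, double pos)
{
  const double ea = frame_energy (a);   // assumed non-zero (non-silent frames)
  const double eb = frame_energy (b);
  const double target = (1 - pos) * ea + pos * eb;

  Frame out (a.size());
  for (size_t i = 0; i < a.size(); i++)
    {
      out[i].freq = (1 - pos) * a[i].freq + pos * b[i].freq;
      out[i].amp  = ((1 - pos) * a[i].amp / ea + pos * b[i].amp / eb) * target;
    }
  return out;
}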

I would also expect the SpectMorph algorithm to work well
between recordings of the same human voice on different notes. The
question isn't so much whether it works at all, but how to get it into
the existing infrastructure. Would you use VST+MPE with pitch bend to
control the morphing? How would it look in the UI? How would it be
saved? ...

In principle, a first step to play around with this would be to use a
morph grid with just a few different voice recordings (maybe via the
instrument editor / WavSource). Then you'd have to set the morphing
position of the morph grid operator according to the pitch. For a first
experiment you could use a DAW and control the pitch (pitch bend/MPE)
and the morphing of the grid at the same time.

In general it would be possible to add whatever is needed to do this
nicely to SpectMorph (such as additional operators, or different
morphing strategies when receiving pitch bend), but it would have to be
really stable, since once it is in a release, backwards compatibility
needs to be kept.

Cu... Stefan

--
Stefan Westerfeld, http://space.twc.de/~stefan

Jorge W

Mar 15, 2023, 12:11:45 PM
to spect...@googlegroups.com

I still don't know how instruments are loaded into SpectMorph. Would it be possible to load a bunch of samples mapped to the keyboard, as in virtual samplers? In that case the melismatic/gliding/legato effect could be produced automatically, in the same way as on a monophonic analog synth: when you play a key while another key is held down, you get that gliding effect between the two notes, so there is no need for pitch bend control. A glide time parameter would adjust the speed of the glide.
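
A minimal sketch of that glide behaviour (hypothetical C++, not SpectMorph
code; the class and member names are invented for illustration): a legato
note-on does not restart the voice, instead the played pitch ramps from the
previous note to the new one over the glide time in milliseconds.

#include <algorithm>
#include <cmath>

class MonoGlide
{
  double start_note_  = 60;    // MIDI note numbers, fractional while gliding
  double target_note_ = 60;
  double glide_ms_    = 200;   // time to reach the new note
  double elapsed_ms_  = 0;
public:
  void set_glide_time (double ms) { glide_ms_ = ms; }

  // legato == true means another key was still held when this one was pressed
  void note_on (int midi_note, bool legato)
  {
    start_note_  = legato ? current_note() : double (midi_note);
    target_note_ = midi_note;
    elapsed_ms_  = 0;
  }
  // the note we are currently at, somewhere between start and target while gliding
  double current_note() const
  {
    const double f = glide_ms_ > 0 ? std::min (elapsed_ms_ / glide_ms_, 1.0) : 1.0;
    return start_note_ + f * (target_note_ - start_note_);
  }
  // advance by one audio block and return the pitch to synthesize, in Hz
  double process (double block_ms)
  {
    elapsed_ms_ += block_ms;
    return 440 * std::pow (2.0, (current_note() - 69) / 12);   // MIDI note -> Hz
  }
};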

Stefan Westerfeld

Mar 15, 2023, 1:55:06 PM
to spect...@googlegroups.com
Hi!

Well, there are instruments made by me which are pre-packaged, so they
ship with SpectMorph. These are already multisamples, so they contain
many samples for different pitches. There are voice samples available,
like "Sven Ah" or "Mirko Ah". If you want to experiment with your own
samples, there is a built-in instrument editor

https://www.youtube.com/watch?v=JlugWYPDp84

so you can assign different samples to different MIDI notes and set loop
points.

There is also a "mono mode" which can be enabled in the output operator,
and it has a "glide time" in milliseconds. However, while gliding,
SpectMorph currently keeps playing the "sample" it selected for the MIDI
note-on event. So you get rather artificial-sounding voice output if you
pitch "Sven Ah" one octave up.

For SpectMorph, processing pitch bend events and using the mono mode
currently do the same thing: it keeps playing the sample it has, just
faster/slower. For pitch bend (and also for the new CLAP plugin, which
has a note expression for tuning), we don't really know what sample we
are morphing into. We just know the pitch we want.

So if this is the generic case that needs to be handled, one would have
to dynamically reselect a good recording, possibly for each frame that
is synthesized. That's why I suggested a grid with a few samples as a
first experiment, which is pretty close to this idea. It is also similar
to the crescendo case, which also used a grid.
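
A hypothetical sketch of what "dynamically reselecting a good recording"
could look like (not existing SpectMorph code; the Recording struct and
function names are invented): for the pitch wanted in the current frame,
find the two multisample recordings that bracket it and compute a morph
weight between them, which could then drive the same kind of partial
morphing as in the grid/crescendo case.

#include <cassert>
#include <vector>

struct Recording
{
  double midi_note;   // the note this recording was sampled at
  // ... plus the analyzed SpectMorph model data for this recording
};

struct MorphChoice
{
  const Recording* lower;
  const Recording* upper;
  double           weight;   // 0 -> only 'lower', 1 -> only 'upper'
};

// 'recordings' must be sorted by midi_note; 'target_note' may change every frame
MorphChoice
select_recordings (const std::vector<Recording>& recordings, double target_note)
{
  assert (!recordings.empty());

  if (target_note <= recordings.front().midi_note)
    return { &recordings.front(), &recordings.front(), 0 };
  if (target_note >= recordings.back().midi_note)
    return { &recordings.back(), &recordings.back(), 0 };

  for (size_t i = 1; i < recordings.size(); i++)
    if (target_note <= recordings[i].midi_note)
      {
        const Recording& lo = recordings[i - 1];
        const Recording& hi = recordings[i];
        const double w = (target_note - lo.midi_note) / (hi.midi_note - lo.midi_note);
        return { &lo, &hi, w };
      }
  return { &recordings.back(), &recordings.back(), 0 };   // not reached
}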

Cu... Stefan
