Sample Modeling - The Saxophones 1.1.1 VSTi X86 X64

Mariam Obregon

Aug 19, 2024, 6:05:22 PM
to oxosenib

I am surprised that more companies are not doing this type of development work. Instead of sampling gigs and gigs of waveforms, every articulation, and so on, why not analyze the structure of an instrument and the wave signatures that result when it is played, and model that, perhaps using AI?

And imagine an expansion of NKS that standardizes the parameters for interpreting the physical realm and the articulations of interaction within it... and a new generation of controllers optimized for human kinetics and ergonomics.

Modeling is great. I have almost all of the SWAM products, as well as modeled or hybrid libraries by Sample Modeling and IK Multimedia. After working with modeled libraries, sample libraries can seem so frustratingly rigid.

But while modeled libraries are the cat's meow in terms of flexibility, sampling captures a snapshot of the instrument in performance. When it comes to sound, nothing (yet) beats an actual sampled articulation. Straight Ahead Samples, for example, took sample libraries to another level with their Smart Delay feature, which assembles performances from a huge pool of samples. The result is unbeatable sound, but it comes at the cost of instruments that are arguably intolerably inflexible.
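Smart Delay itself is proprietary, but the general idea of assembling a performance by matching samples to musical context can be sketched. The following Python sketch is a hypothetical illustration only: the pool format and every name are assumptions, not Straight Ahead's actual algorithm. It simply picks the recorded sample whose performance context best matches the incoming note.

```python
def pick_sample(pool, interval, duration_ms):
    """Pick the recorded sample whose context best matches the incoming note.

    `pool` is a hypothetical list of (preceding_interval_semitones,
    note_duration_ms, sample_id) tuples. Illustration only, not the
    actual Smart Delay feature.
    """
    def mismatch(entry):
        ctx_interval, ctx_duration, _ = entry
        # Weight interval mismatch in semitones against duration
        # mismatch in tenths of a second.
        return abs(ctx_interval - interval) + abs(ctx_duration - duration_ms) / 100.0
    return min(pool, key=mismatch)[2]

# Tiny example pool: a descending step played short, an upward leap
# played long, and a repeated note of medium length.
pool = [(-2, 200, "desc_step_short"), (7, 800, "leap_long"), (0, 400, "repeat_med")]
print(pick_sample(pool, -1, 250))  # closest context is the short descending step
```

A real engine would match on far more dimensions (tempo, phrase position, dynamics), which is exactly why the sample pools get so huge.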

There are companies out there continually researching and developing modeling and other technologies. AcousticSamples' V-Horns are relatively new hybrid libraries. And just look at what Dreamtonics has done with vocal synthesis in the past two years. The industry will undoubtedly produce more of what you're looking for, but we're still in the early stages of development. The best is yet to come. It won't be long.

I'm a big fan of modeling, but in the right places, and for me, instruments are not the right place. Modeling, by definition, means creating a mathematical formula that simulates a real-world characteristic, and that's pretty hard to define for something like a trumpet or a string section with its various articulations. It can be done, and it's better than synth-based models, but it doesn't compare with an instrument captured while being played by a seasoned professional in a near-perfect environment, because so much of that depends on playing technique that varies from person to person. That's why one of the biggest concerns many people have is that models can strip away the emotion of the player, because that's very hard to simulate mathematically.
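To make "a mathematical formula that simulates a real-world characteristic" concrete, here is the classic Karplus-Strong plucked-string algorithm, one of the simplest physical models: a burst of noise circulating through an averaging delay line decays remarkably like a plucked string. This is a generic textbook sketch in Python, not what any commercial modeler actually ships.

```python
import random

def karplus_strong(freq_hz, sample_rate=44100, duration_s=1.0, decay=0.996):
    """Karplus-Strong plucked string: a noise burst circulates through a
    delay line and is repeatedly averaged, damping high frequencies the
    way a real string loses energy. Textbook sketch only."""
    n = int(sample_rate / freq_hz)                 # delay length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # the "pluck"
    out = []
    for _ in range(int(sample_rate * duration_s)):
        first = buf.pop(0)
        buf.append(decay * 0.5 * (first + buf[0]))  # low-pass + energy decay
        out.append(first)
    return out

random.seed(0)                                     # reproducible noise burst
string = karplus_strong(220.0, duration_s=0.5)     # half a second of A3
```

Even this ten-line model responds continuously to its parameters, which is exactly the flexibility samples lack; the hard part, as the post says, is that a trumpet or a string section needs a formula vastly more complicated than a plucked string.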

It is also true that modeling is a very different specialty from sampling, practiced by people who aren't necessarily recording engineers but are highly capable technical developers who can code real-time processing applications.

Native Instruments should think deeply about how this first thing could exist within an enhanced NKS. NI is positioned, with NKS, to set an "industry standard" for how this collection of physical masses is presented as parameters for interaction. This could be their biggest thing ever.

Getting this first thing right would set the stage (pun intended) for a rich environment of intelligent Performance Libraries. This would be a huge new space for third parties to contribute to, leaping over MIDI 2.0 and creating something far more forward-thinking.

I think the key for modeling to supplant samples won't necessarily be found in the quality of libraries from NI, but probably in libraries from deep-sampling companies such as 8DIO or Spitfire. If modeling could successfully simulate NI libraries, you might be willing to adopt it for the savings in storage over sampled libraries, and maybe even some savings in latency. But the hyper-realistic sounds one can achieve through deep sampling of instruments, such as those in the 8DIO Ostinato libraries, which are sensitive to the proximity of notes in fast or slow ostinato phrasing with multiple instruments, would need to be the target for modeling to be worthy of displacing sampled libraries, IMHO.

And the catch with modeling is that the more complex the modeling characteristics are, the fewer people are willing to overcome the learning curve of setting things up correctly to get the sound they're after. Maybe if in the near future companies can successfully emulate the expertise of the 10 to 15% of the people that can successfully use the complex modelers to their fullest extent through some form of AI, you might have something.

Pulled from quote: "Maybe if in the near future companies can successfully emulate the expertise of the 10 to 15% of the people that can successfully use the complex modelers to their fullest extent through some form of AI, you might have something."

Exactly. Separating the generative AI for Performance from that of the Instruments themselves is a pathway to achieving this. Imagine performing a "sketch" version of a part you imagine (humming, finger taps, and/or whatever skill you may have on some purpose-built controller) and having an AI Performance Library flesh it out into versions for you to choose between.

Perhaps, but I find the Audio Modeling violins that I am playing with far more emotional and expressive than the sampled versions. Articulations can also be modeled, as actions performed on the instrument. Snapshots of recordings are great, but they are outdated technology, literally descended from taped strings, just in digital form. But yes, I am hopeful that one of my favorite companies, Native Instruments, will eventually get into this technology and make something more interesting than an FM/analog synthesizer model.

Nice sound, but it's hard to really get a feel for whether or not there are realistic articulation capabilities such as growls and growl vibrato, rips, grace notes, grace vibrato, trills and forte-piano-crescendo in that particular piece. Not to mention the mouth elements like "ta" versus "da", all of which are generally individually captured in samples.

I think one of the things that keeps me locked into sampling is that the technology proficiency and the musical proficiency don't come from the same place. The musician can provide depth in terms of the various ways the instrument is commonly used in practice and the technologist can translate that into what samples are needed in order to achieve those things. I think we'll eventually get there as we get more and more sophisticated in layering synths, but it may be a while for some of the processing power to catch up to avoid any latency. When it does happen I'll be more than happy to get back some of my disk storage space!!!

The saxophone is probably the most difficult instrument to emulate. Despite several attempts using different technologies, ranging from traditional sampling to sophisticated physical modeling, the results obtained so far have fallen short of expectations. This is particularly true where real-time playing is concerned.

The Sax Brothers employ an entirely new technology, developed by Stefano Lucato when it became clear that all previously applied approaches simply would not do. The technical name is Synchronous Wave Triggering. It uses samples as base material, performed chromatically by professional sax players over a very wide dynamic range and recorded with state-of-the-art technology. The resulting timbre is therefore that of the real instrument. But the analogy with a sample-based library ends here. The underlying proprietary technology allows continuous interpolation among different vectors such as time, dynamics, pitch and formants. Advanced real-time processing techniques allow realistic legato/portamento, vibrato, ornamentations and trills with phase continuity, constant-formant pitch bends, subharmonics, growl and flutter tongue to be performed in real time.
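The proprietary interpolation engine is not public, but the basic idea of a continuous dynamics vector can be sketched as an equal-power crossfade between two recorded dynamic layers. The Python below is a generic illustration with assumed names, not Sample Modeling's actual algorithm; it shows how two discrete recordings yield a continuous dynamics axis.

```python
import math

def interpolate_dynamics(layer_soft, layer_loud, dynamics):
    """Equal-power crossfade between two recorded dynamic layers.

    `dynamics` runs continuously from 0.0 (soft layer only) to 1.0
    (loud layer only); intermediate values blend the two recordings
    at roughly constant perceived power. Generic illustration only.
    """
    theta = dynamics * math.pi / 2.0
    g_soft, g_loud = math.cos(theta), math.sin(theta)
    return [g_soft * s + g_loud * l for s, l in zip(layer_soft, layer_loud)]

soft = [0.1, 0.2, 0.1]   # pretend frames of a pp sample
loud = [0.8, 0.9, 0.7]   # pretend frames of an ff sample
mezzo = interpolate_dynamics(soft, loud, 0.5)  # a dynamic that was never recorded
```

A real engine interpolates along several such vectors at once (time, pitch, formants) with phase alignment between layers, which is where the hard engineering lives.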

Mr. Sax A, B and T are all supplied with the NI Kontakt Player 4, the read-only version of Native Instruments' flagship sampler. It includes many new features and bugfixes, including 64-bit support, extended memory and multicore support, DFD optimisation, better compliance with some OSes and hosts, aftertouch support, etc. The Player is included as a separate installer, and no additional software is required to play the instrument. Stand-alone mode, as well as the VST, DXi, RTAS and AU plugin formats, is supported.

The instruments can also be loaded and played in the full version of Kontakt. Please note, however, that they cannot be opened or modified, and no access to the samples, impulse responses or instrument programming is provided.

Pretty good simulation. It would be acceptable to most non-sax players. Real sax players (I'm one) hear the synthetic sound right away. I have a Roland Aerophone AE-10 (wind controller). Some of the people who have this digital sax-like instrument use it with the SWAM sax sounds and like it.

The sound realism is maybe 85%, because it's modeled, and dead-on real phrasing is tough. Okay. So I wouldn't use this where the sax is "the" lead instrument. I wouldn't use it in a bare mix. But it might add something when inside a mix that could cover up the deficiencies. After all, this is going on in every film that has CGI effects.

But even if the sound were 100% on, I wouldn't use it in a jazz band. Just like I wouldn't use a harpsichord VST in a baroque orchestra. Now we're talking about the aesthetics of particular music scenes. I imagine that the SWAM instruments could have been big on stage in the 80s. But those days are gone. Rock bands are moving into the past, like jazz combos and big bands have gone before. In a live setting I would rather hear a guitar play the Baker Street lick than a synth trying to sound like a sax. Or a rich distorted B3 playing the Stevie Wonder horn lines in Superstition. YMMV.

For music that is newer than rock, has there been a bifurcation, where some is completely acoustic (think Live From Here) and some has electronics with no attempt to sound like acoustic instruments? If so, I don't see a place to use these hyper-realistic synthetic acoustic VSTs in those scenes.

I've been a trumpet and flugelhorn player for 30-some years and, to be honest, these instruments sound fake. The dirty attack on the trumpet sounds nasty and virtually the same across all three bundles. As with Pianoteq, however, although the sound can be way off or unrealistic, the articulation options for these instruments are well worth it. Mixing these with sample-based instruments will get you a long way. I've pre-ordered the bundle. The pre-order discount is probably the highest discount Audio Modeling will ever give, so if you're interested I would take it now.
