Can someone point me to an in-depth explanation of these attributes in spatialized? I kind of understand what they do at a high level, but am having trouble grasping what behavior to expect from stereo files in 3d events. Thank you!
I think what you are seeing is a slightly degenerate case of multichannel rotation. Instead of thinking of stereo, consider 5.1, both source and target. The 5.1 source is a sound field and when the listener is in it the event rotation moves the source channels around the listener channels. Then as the event moves away the 5.1 source signals get collapsed to the listener arc more and more, yet still respect the rotation of the Event.
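One way to picture that collapse is to rotate each source channel's azimuth with the event, then interpolate the rotated angles toward the event's direction as distance grows. Here is a minimal Python sketch of that idea; the function, its parameters, and the linear spread model are my own assumptions for illustration, not FMOD's actual panner math:

```python
import math

def collapsed_azimuths(channel_azimuths_deg, event_rotation_deg,
                       event_direction_deg, distance, min_dist, max_dist):
    """Illustrative sketch (not FMOD's actual panner): rotate the source
    channel layout with the event, then shrink the layout toward the
    event's direction as the listener-to-event distance grows."""
    # 1.0 = full surround spread at/inside min_dist, 0.0 = point source at max_dist
    t = (distance - min_dist) / (max_dist - min_dist)
    spread = 1.0 - max(0.0, min(1.0, t))
    out = []
    for az in channel_azimuths_deg:
        rotated = az + event_rotation_deg          # event rotation still applies
        # signed offset of this channel from the event's direction, in (-180, 180]
        offset = ((rotated - event_direction_deg + 180.0) % 360.0) - 180.0
        # scale the offset down: channels converge on the event's direction
        out.append((event_direction_deg + offset * spread) % 360.0)
    return out

# 5.1 channel angles (L, R, C, Ls, Rs; LFE omitted), event 90 degrees to the right
print(collapsed_azimuths([-30, 30, 0, -110, 110], 45.0, 90.0,
                         distance=8.0, min_dist=1.0, max_dist=10.0))
```

At `min_dist` the channels keep their full rotated layout; at `max_dist` they all collapse onto the event's direction while still following its rotation in between.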
SAVE SAMPLE COPY will save the complete sample to the Compact Flash card as a new file. Audio outside the trim section will also be included. Both mono and stereo samples can be saved, in either 16 bit or 24 bit format, depending on the original sample format. Recorder buffer content is always saved as stereo samples.
FILE BROWSER
When the cursor is moved over a sample, the smiley symbol at the bottom of the screen will show a happy face if the sample is ready to be loaded without any problems. If the file is too big to be loaded, or if the file is incompatible with the Octatrack audio engine, the smiley will look sad. Samples with an unsupported sample rate, like 48 kHz, will make the smiley look indifferent, indicating the sample will be played back albeit at the wrong speed. To the right of the smiley, the sample rate, bit depth and number of channels of the selected sample are shown.
It is possible to load samples to the recorder buffers, just as if they were Flex sample slots. The recorder buffers, which contain any audio captured by the track recorders, are found in the Flex sample slot list, located above Flex sample slot position 1. The length of the sample is restricted by the reserved memory of the buffer. If the loaded sample is longer than allowed by the buffer it will be truncated. Mono samples will also be converted to stereo.
The SuperPhat algorithm is a playback spatializer that accepts a stereo signal and widens its image for playback from two closely spaced speakers, outputting an enhanced stereo image. The algorithm is based on proprietary filtering and gain adjustment to produce the widened image.
The two parameters available for adjustment, Spread Frequency and Effect Gain, change the responsiveness of the effect. Depending on the actual physical end system, different values should be used to obtain the optimal effect. Subjective listening tests are the recommended way to set these parameters.
The following schematic image shows the SuperPhat algorithm in comparison with the Phat-Stereo algorithm. Both algorithms have similar functions, but the effect is more pronounced with the SuperPhat algorithm at the cost of more instructions. This image shows the Inputs, stereo mux, and outputs, along with the two methods for stereo spatialization offered in the SigmaStudio library.
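Since ADI's filtering is proprietary, the exact processing can't be shown, but the gain-adjustment idea common to stereo wideners can be sketched as a simple mid/side manipulation. This Python sketch is an illustration of that general class only; `widen_stereo` and its `width` parameter are invented here and are not SuperPhat's actual algorithm:

```python
def widen_stereo(left, right, width=1.5):
    """Generic mid/side widener (illustrative only; ADI's SuperPhat
    filtering is proprietary and more elaborate than this)."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)      # common (center) content
        side = 0.5 * (l - r)     # difference (spatial) content
        side *= width            # boosting the side signal widens the image
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

With `width=1.0` the signal passes through unchanged; values above 1.0 exaggerate the left/right difference, which is the basic effect a widener trades against tonal coloration.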
ADI's SuperPhat Spatializer works well with stereo program material and is especially useful with headphones. It doesn't do much, however, for monaural audio fed to both its inputs. Here's a way to get a "spatial" sound from a mono signal (for example, from a Zoom call). It replaces the "in your head" effect often heard through headphones with a sweet room-like ambience.
The mono input arriving from the left is phase-shifted into two signals via the Hilbert Transform. The Spatializer works nicely with this pseudo-stereo input. Adjust Delay1 to center the sound field in your headphones -- otherwise the sound favors one side or the other. The effect is quite natural: it feels like you're in a live setting rather than listening to a remote meeting through cans. In fact, you may get used to it and thus forget it until you switch it OFF -- you'll want to turn it back ON. Have fun with it.
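The Hilbert-transform trick can be sketched in plain Python as a windowed-FIR approximation that derives two signals a constant 90 degrees apart from one mono input. This is a software illustration of the idea only, not SigmaStudio's Hilbert cell; `pseudo_stereo` and its tap count are assumptions:

```python
import math

def pseudo_stereo(mono, taps=31):
    """Derive two ~90-degree-apart signals from a mono input using a
    windowed FIR Hilbert transformer (illustrative sketch)."""
    mid = taps // 2
    # ideal Hilbert FIR: 2/(pi*k) at odd offsets k from center, 0 elsewhere
    h = [0.0] * taps
    for n in range(taps):
        k = n - mid
        if k % 2 != 0:
            # Hann window tames the ripple from truncating the ideal response
            w = 0.5 - 0.5 * math.cos(2 * math.pi * n / (taps - 1))
            h[n] = w * 2.0 / (math.pi * k)
    padded = [0.0] * (taps - 1) + list(mono)
    left, right = [], []
    for i in range(len(mono)):
        # left: the mono signal delayed by the filter's group delay (mid samples)
        left.append(padded[i + mid])
        # right: the Hilbert-shifted copy (direct-form FIR convolution)
        right.append(sum(h[t] * padded[i + taps - 1 - t] for t in range(taps)))
    return left, right
```

Because the filter is antisymmetric, the two outputs are in exact quadrature at every frequency; in the hardware flow, Delay1 then trims any residual imbalance between the two paths.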
This is written for the ADAU1452RevB eval board, which has an AUXADC volume pot used with the external volume control cell. Replace it with a manual volume control in SigmaStudio if you are not running the RevB board.
I added the mono version to the Spatial Widening test project. A mono source can be applied to either or both channels, and a stereo source mixes to mono before processing. As a result, the effect with stereo sources compares poorly with the other selections -- yet that's normal since this one is designed for mono input.
The Clarity algorithm lets you minimize colorations resulting from the binaural rendering and individually adjust the balance between externalized spatial perception and overall tonal preservation.
Using the included SPATIAL CONNECT ADAPTER application and a compatible OSC head tracker, you gain extended head-tracking control and a much more natural way to judge immersive productions.
You can use dearVR PRO 2 with any stereo or mono source. The spatializer works perfectly with regular microphones, direct signals, and virtual instruments. Just insert dearVR PRO 2 as the last plugin in your plugin chain and place the audio in a three-dimensional space.
Multi-channel speaker output: Nothing additional is necessary as long as you have an appropriate speaker setup; just route the output as appropriate.
You don't have a multi-channel loudspeaker setup, or you want to check your mixes on the road? Use dearVR MONITOR to listen to all common speaker setups, from stereo up to 9.1.6, in a virtual mixing room using your headphones.
Ambisonics: You will need an additional plugin to listen to your mix, as Ambisonics always requires decoding. Fortunately, we offer dearVR AMBI MICRO for free; it can easily render Ambisonics to binaural audio suitable for listening on headphones.
3D audio enriches all kinds of audio production with a new dimension - whether music, podcast, film, game audio, VR and AR or any other. Let your audio become a full 360 sound experience on headphones or loudspeakers.
Discover how your music production benefits from the latest immersive audio developments. Dive deeper into the world of audio spatializer plugins, advanced virtual monitoring solutions, and innovative mixing controllers.
Add a new level of user experience to your interactive productions by creating a realistic 3D sound field around the listener. Rely on proven solutions offering a true-to-life perception of direction, distance, reflections and reverb.
The audio spatializer SDK provides controls to change the way your application transmits audio from an audio source into the surrounding space. It is an extension of the native audio plugin SDK.
The built-in panning of audio sources is a simple form of spatialization. It takes the source and regulates the gains of the left and right ear contributions based on the distance and angle between the Audio Listener and the Audio Source. This provides simple directional cues for the player on the horizontal plane.
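That gain regulation can be sketched as a constant-power pan combined with distance rolloff. The following Python is a hand-written illustration of the concept, not Unity's actual panner; the function name and the simple 1/d attenuation are assumptions:

```python
import math

def simple_pan(source_pos, listener_pos):
    """Illustrative 2D panner sketch (not Unity's implementation):
    the listener faces +z; left/right gains come from the horizontal
    angle and overall loudness from a 1/d distance rolloff."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dz)
    attenuation = 1.0 / max(1.0, distance)   # flat within 1 unit, 1/d beyond
    azimuth = math.atan2(dx, dz)             # 0 = straight ahead, +90 deg = right
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    # constant-power law keeps perceived loudness steady as the source moves
    left = math.cos((pan + 1.0) * math.pi / 4.0) * attenuation
    right = math.sin((pan + 1.0) * math.pi / 4.0) * attenuation
    return left, right
```

A source directly ahead yields equal gains of about 0.707 each; a source hard right sends everything to the right channel, scaled down with distance.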
To provide flexibility and support for working with audio spatialization, Unity has an open interface, the Audio Spatializer SDK, as an extension on top of the Native Audio Plugin SDK. You can replace the standard panner in Unity with a more advanced one, and give it access to important meta-data about the source and listener needed for the computation.
For an example of a native spatializer audio plugin, see the Unity Native Audio Plugin SDK. The plugin supports only a direct Head-Related Transfer Function (HRTF) and is intended for example purposes.
Unity applies spatialization effects directly after the Audio Source decodes audio data. This produces a stream of audio data in which each source has its own separate effect instance. Unity only processes the audio from that source with its corresponding effect instance.
If you set the UnityAudioEffectDefinitionFlags_IsSpatializer flag, Unity recognizes the plugin as a spatializer during the plugin scanning phase. When Unity creates an instance of the plugin, it allocates the UnityAudioSpatializerData structure for the spatializerdata member of the UnityAudioEffectState structure.
Then, in the Inspector window for an Audio Source that you want to use with the spatializer plugin, enable Spatialize:
In an application with many sounds, you might want to enable the spatializer only on nearby sounds and use traditional panning on distant ones, to reduce the CPU load the spatializer effects place on the mixing thread.
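That near/far split might be scheduled like this. This is hypothetical bookkeeping code, not a Unity API; the `hrtf_budget` and `cutoff` parameters are invented for the sketch:

```python
def choose_spatialization(sources, listener_pos, hrtf_budget=8, cutoff=15.0):
    """Hypothetical scheduler sketch: spend the expensive spatializer on
    the nearest sources and fall back to plain panning for the rest."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s["pos"], listener_pos)) ** 0.5
    ranked = sorted(sources, key=dist)   # nearest sources first
    plan = {}
    for i, s in enumerate(ranked):
        # spatialize only while within budget and inside the cutoff radius
        near = i < hrtf_budget and dist(s) <= cutoff
        plan[s["name"]] = "spatializer" if near else "panner"
    return plan
```

In an actual Unity project the equivalent decision would toggle the Audio Source's Spatialize setting per frame or on distance thresholds.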
If a plugin initializes with the UnityAudioEffectDefinitionFlags_NeedsSpatializerData flag, the plugin receives the UnityAudioSpatializerData structure, but only the listenermatrix field is valid. For more information about UnityAudioSpatializerData, see the Spatializer effect meta-data section.
The UnityAudioEffectDefinitionFlags_AppliesDistanceAttenuation flag indicates to Unity that the spatializer handles the application of distance-attenuation. For more information on distance-attenuation, see the Attenuation curves and audibility section.
Unlike other Unity audio effects that run on a mixture of sounds, Unity applies spatializers directly after the Audio Source decodes audio data. Each instance of the spatializer effect has its own instance of UnityAudioSpatializerData, mainly associated with data about the Audio Source.