I use the Fireface UFX (not the UFX II) for music listening as a DAC with AES or SPDIF (Toslink) input. The digital signal is routed to a software convolution engine (Acourate) via ASIO. When the source sample rate is constant (e.g. 44.1 kHz, 96 kHz, etc.), it works well. However, if the source rate changes and I do not change the input source rate in the Fireface USB settings panel, my setup fails.
The RME ADI-192DD appears to be a solution that would allow any source rate to be sent to the UFX by means of sample rate conversion to a constant input, such as 192. However, it is a big rig for a single source conversion job.
I use the UFX with Acourate Convolver over USB.
Player is JRiver MC24.
I can play stereo as well as 5.1 surround.
By the way, JRiver does sample rate conversion if you wish (up, down, DSD, etc.). It is the cheapest solution I am aware of (the whole player software costs under USD 50).
Thanks, Peter. Unfortunately, I do not like the JRiver interface for classical music (I use Daphile, a Squeezebox player and server). But I am happy to know that JRMC can pass rate changes to Acourate Convolver.
Murray
I listen to a broad variety, from classical to rock.
I like the flexibility of organization in JRiver. It's all based on the tags (not specific to JRiver, but a feature of ALAC, FLAC, MP3).
JRiver just sorts and displays the music however you decide to sort it. Various views are possible.
It would be great if there was the option to resample in the output section. Then it could be saved as a preset and all the different file outputs could be done in the Render tab. It would save a lot of time and less to go wrong.
In other words, all the processing aside from dither gets locked in at 64-bit float and the native sample rate by rendering a full WAV of the entire montage in one pass. This also helps mitigate plugin rendering issues, often found at the very first and very last samples (cold starts and hard endings) when you render track by track with lots of plugins running.
I downsample that 64-bit float and native Sample Rate WAV of the full montage with RX (I could also use WaveLab batch processor) to floating point 44.1k WAV and then use Custom Montage Duplicate to have WaveLab again recreate the montage at 44.1k sample rate.
The other bonus is that I can analyze the 44.1k downsample before rendering the 44.1k master WAV files and decide if I want to add something like Tokyo Dawn Limiter 6 GE using JUST the true peak mode set to a -0.1 ceiling before the dither. In this mode, it only catches new peaks from the SRC process that go higher than -0.1dB or whatever you decide you want to use.
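The intersample-peak effect described above can be sketched numerically. This is a minimal illustration, not the actual workflow from the post, and it assumes NumPy is available: a sine at exactly fs/4 with a 45-degree phase offset lands every sample at about 0.707 of its amplitude, yet a crude 4x oversampler (a stand-in for what sample-rate conversion does internally) reveals the true peak near full scale.

```python
import numpy as np

fs = 44100
n = np.arange(1024)
# Sine at exactly fs/4 with 45-degree phase: every sample lands at
# +/-0.707 of the amplitude, but the continuous waveform peaks at 0.99.
x = 0.99 * np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

sample_peak = np.max(np.abs(x))      # about 0.70 (roughly -3.1 dBFS)

# 4x oversample: zero-stuff, then apply a windowed-sinc low-pass.
L = 4
up = np.zeros(len(x) * L)
up[::L] = x
taps = np.arange(-64, 65)
h = L * np.sinc(taps / L) * np.hamming(len(taps))
y = np.convolve(up, h, mode="same")

true_peak = np.max(np.abs(y))        # near 0.99, well above the sample peak
```

This is why a true-peak limiter placed after SRC, as described above, can catch peaks that a plain sample-peak meter never sees.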
Thanks. Yes I am rendering the whole montage as 64 float before I output different file formats and dither. If I send a DDP to a client or even MP3s from this montage and they want one song louder or a different gap, I guess I need to go back to the montage with separate clips, make the change and render the montage at 64 float again. It seems a bit long winded when making all those final adjustments. Or is there a quicker way?
Any shortcuts would involve throwing that first part out the window and then you still have to insert the newly render part somewhere right? That seems like more work and opens up room for error in a number of ways.
No matter what change I make, big or small, one or many, the master montage gets a new version number (V2, V3, etc.) so that I always know the approved version number of the full project, and I can always open previous versions to see what might have been done if we need to revert to a previous version.
I know a shortcut seems tempting even if the change is minor but my preferred method is to render the full project again. There are always emails to write, messages to send, and other tasks to do while the rendering is occurring. I also know some engineers that use a dedicated rendering computer so they can be doing other things while something renders if it will take time.
With a few modern exceptions (Sound Devices units), nearly all analog-to-digital converters are 24-bit, so it makes no sense to record an analog source at 32-bit or 64-bit float. It would just be empty data. The bit-depth meter in WaveLab can help show you this.
I record back from analog at 24-bit, but since I always apply some additional digital processing after the capture, such as a digital limiter, or perhaps subtle EQ before the limiter, I first render to 64-bit floating point (at the native sample rate) to lock in all the digital processing. Then I insert a dither plugin with the correct setting (24-bit or 16-bit) when I render my final master files, such as WAV files of each album track.
These files are files that require ZERO additional processing. No fades, no sample rate conversion, no level adjustments. I only apply dithering and render to 24-bit or 16-bit as the VERY VERY VERY VERY last step. Very last.
I do apply dither as the very last plug-in before feeding my analog chain because even if the source audio is 24-bit, my pre-analog digital processing will increase that audio to 64-bit float and in theory, one could argue it is worth dithering to 24-bit before it goes analog via your 24-bit digital to analog converter.
WaveLab has a bit-depth meter that can be helpful in showing you what happens to 24 and 16-bit audio when digital processing is applied, and what is actually going on in your WaveLab session, so you can determine whether dithering should be considered or not.
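As a rough illustration of what that final dither stage does, here is a minimal TPDF-dither quantizer sketch. This assumes NumPy; the function name is hypothetical and is not a WaveLab API, just an illustration of the principle.

```python
import numpy as np

def quantize_with_tpdf_dither(x, bits, rng=None):
    """Quantize float audio in [-1, 1) to `bits` bits with TPDF dither.

    The +/-1 LSB triangular noise decorrelates the quantization error
    from the signal, which is why dither goes on as the very last step,
    after all the 64-bit float processing is locked in.
    """
    rng = rng or np.random.default_rng(0)
    q = 2.0 ** (bits - 1)                 # quantization steps per unit
    # Triangular PDF = sum of two independent uniform sources, +/-1 LSB
    dither = rng.random(x.shape) - rng.random(x.shape)
    return np.round(x * q + dither) / q

ramp = np.linspace(-0.5, 0.5, 1000)       # stand-in 64-bit float signal
y16 = quantize_with_tpdf_dither(ramp, 16)
max_err = np.max(np.abs(y16 - ramp))      # bounded by about 1.5 LSB
```

The total error per sample stays within roughly 1.5 LSB (0.5 LSB from rounding plus up to 1 LSB of dither), but unlike undithered truncation it is not correlated with the signal.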
So after you capture/record from analog back into WaveLab, do you mean you render the files? Or is it already in a folder automatically when capturing? (In my case I create a folder under Recording, choose it when recording, etc.)
Do you mean to do this when capturing through analog? I added a screenshot; do you mean like this, or at the end of the process when you do the in-the-box master, etc.?
(Screenshot 2024-03-25 at 12.03.56 AM, 1920x1080, 166 KB)
I usually set my projects to 44100 sample rate with 24-bit resolution.
A few days ago I imported some WAVs sent to me by a client for mastering, they were recorded at 88200 sample rate, 24-bit resolution.
I first noticed it when we were doing our CD 10 years ago. We used a couple of musicians, one of which was using keyboards, so his were rendered MIDI, but the other did all his as recorded audio. I just thought the latter was a bit sloppy with minimising recording noises, until I noticed the clicks in the sample editor, and realised the noises occurred before AND after some notes.
On topic: I once got some exports (Pro Tools) from a studio and they had occasional glitches. It was only for guide tracks, but still a bit of a nuisance at the time. These things happen, so I asked for them to be redone and all was OK again.
Sample-rate conversion, sampling-frequency conversion or resampling is the process of changing the sampling rate or sampling frequency of a discrete signal to obtain a new discrete representation of the underlying continuous signal.[1] Application areas include image scaling[2] and audio/visual systems, where different sampling rates may be used for engineering, economic, or historical reasons.
For example, Compact Disc Digital Audio and Digital Audio Tape systems use different sampling rates, and American television, European television, and movies all use different frame rates. Sample-rate conversion prevents changes in speed and pitch that would otherwise occur when transferring recorded material between such systems.
Conceptual approaches to sample-rate conversion include: converting to an analog continuous signal, then re-sampling at the new rate, or calculating the values of the new samples directly from the old samples. The latter approach is more satisfactory, since it introduces less noise and distortion.[3] Two possible implementation methods are as follows:

Method 1: upsample the signal by an integer factor (inserting zeros between the original samples), apply a digital low-pass filter, then downsample by another integer factor; the rational ratio of the two factors gives the conversion ratio.

Method 2: compute the value of each new output sample directly from the old samples, by evaluating an interpolation function at the required instants.
The two methods are mathematically identical: picking an interpolation function in the second scheme is equivalent to picking the impulse response of the filter in the first scheme. Linear interpolation is equivalent to a triangular impulse response; windowed sinc approximates a brick-wall filter (it approaches the desirable brick-wall filter as the number of points increases). The length of the impulse response of the filter in method 1 corresponds to the number of points used in interpolation in method 2.
In method 1, a slow pre-computation (such as the Remez algorithm) can be used to obtain an optimal (per application requirements) filter design. Method 2 will work in more general cases, e.g. where the ratio of sample rates is not rational, or two real-time streams must be accommodated, or the sample rates are time-varying.
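For illustration, here is a minimal sketch of method 2 using the simplest interpolation function, linear interpolation (the triangular impulse response mentioned above). NumPy is assumed, and the function name is ours; the ratio may be any positive real number, which is exactly the generality the text attributes to this method.

```python
import numpy as np

def resample_linear(x, ratio):
    """Method 2 sketch: compute each output sample directly by linear
    interpolation between the two nearest input samples.

    Works for any (even irrational) ratio, at the cost of a triangular
    frequency response that is far from an ideal brick-wall filter.
    """
    n_out = int(len(x) * ratio)
    t = np.arange(n_out) / ratio       # output times in input-sample units
    i = np.minimum(t.astype(int), len(x) - 2)
    frac = t - i
    return (1 - frac) * x[i] + frac * x[i + 1]

ramp = np.arange(10.0)
doubled = resample_linear(ramp, 2.0)   # a linear ramp survives exactly
```

Swapping the linear interpolation for a windowed-sinc kernel moves this scheme toward the brick-wall behavior of method 1, as the equivalence described above suggests.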
The slow-scan TV signals from the Apollo Moon missions were converted to the conventional TV rates for the viewers at home. Digital interpolation schemes were not practical at that time, so analog conversion was used. This was based on a TV rate camera viewing a monitor displaying the Apollo slow-scan images.[6]