He is very careful not to hit Full Scale on his
A-to-D converters, but I don't see how you can do this very easily,
especially with an erratic drummer, who will certainly hit the snare or
drums harder than when you ask him to do a sound check. So maybe he is
ultra conservative and sets the levels really low, but then he is giving
up dynamic range.
And if he hits Full Scale on his ADCs, then the digital distortion
sounds horrible, and no amount of DSP processing can fix that kind of
distortion.
When I recorded with my Roland VS-840, I often would soft-knee compress
or limit BEFORE I went into the inputs, to prevent saturating the ADCs.
What do you professionals out there do?
Most of "us professionals" record to 24bit audio.
On 16 bit you have 65.336 steps in "volume".
With 24 bit you'll have 16.777.216 steps.
You can see that with 24bit low volume is not a big issue anymore...
F.
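A quick back-of-the-envelope check of those step counts, in plain
Python (the 2**N figures are simple arithmetic; the bit depths are the
only thing taken from the post above):

for bits in (16, 24):
    steps = 2 ** bits                      # number of quantization levels
    print(bits, "bits ->", steps, "steps")
# 16 bits -> 65536 steps
# 24 bits -> 16777216 steps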
You mean 65536 steps.
> With 24 bit you'll have 16.777.216 steps.
> You can see that with 24bit low volume is not a big issue anymore...
> F.
Good point, and since you will dither to 16 bits
for a Redbook CD anyways, you can compress quite
a bit.
But you can STILL over-saturate the ADC, and
end up with bad distortion you cannot get rid of.
> But you can STILL over-saturate the ADC, and
> end up with bad distortion you cannot get rid of.
And you still have to watch the level meters during recording, even though
you might have thought the levels were set low enough... If a drummer hits
too hard, lower the input level into the ADC and have the drummer re-do the
take.
Phil
Paul wrote:
> My buddy who has Protools claims that he doesn't apply
> compression and limiting until after recording tracks, until
> after everything is in the digital domain.
>
> He is very careful not to hit Full Scale on his
> A-to-D converters, but I don't see how you can do this very easily,
> especially with an erratic drummer, who will certainly hit the snare
> or drums harder than when you ask him to do a sound check. So maybe
> he is ultra conservative and sets the levels really low, but then he
> is giving up dynamic range.
Well, there's usually more than enough of that @ 24 bit!
Graham
Paul wrote:
> On Apr 11, 12:15 am, "Federico" <8....@tiscali.it> wrote:
> > > What do you professionals out there do?
> >
> > Most of "us professionals" record to 24bit audio.
> > On 16 bit you have 65.336 steps in "volume".
>
> You mean 65536 steps.
He's using the method of showing numbers as practiced in Italy. They use a
'full stop' where we'd use a comma and vice-versa IIRC.
> > With 24 bit you'll have 16.777.216 steps.
> > You can see that with 24bit low volume is not a big issue anymore...
> > F.
>
> Good point, and since you will dither to 16 bits
> for a Redbook CD anyways, you can compress quite
> a bit.
>
> But you can STILL over-saturate the ADC, and
> end up with bad distortion you cannot get rid of.
But that will be after you've done your DRC @ 24 bit accuracy.
Graham
Phil W wrote:
Exactly. Keeping 10-12 dB shy of a digital clip is no big deal.
Graham
>> > Most of "us professionals" record to 24bit audio.
>> > On 16 bit you have 65.336 steps in "volume".
>>
>> You mean 65536 steps.
>
>He's using the method of showing numbers as practiced in Italy. They use a
>'full stop' where we'd use a comma and vice-versa IIRC.
How do they show a decimal?
That's what most folks do today.
>He is very careful not to hit Full Scale on his
>A-to-D converters, but I don't see how you can do this very easily,
>especially with an erratic drummer, who will certainly hit the snare
>or drums harder than when you ask him to do a sound check. So maybe
>he is ultra conservative and sets the levels really low, but then he
>is giving up dynamic range.
Right but this is 2009 and we have outrageous amounts of dynamic range,
far more than we'll ever need even for high grade classical work. In
most cases we have more dynamic range in the recording process than we
have on the release medium, so there's no reason not to lose a little
bit.
>And if he hits Full Scale on his ADCs, then the digital distortion
>sounds horrible, and no amount of DSP processing can fix that kind
>of distortion.
Right. Don't ever come anywhere near FS. If you set your peaks during
rehearsal so they come around -18dBFS, that means during the actual take
they'll tend to be around -12dBFS, and that's a reasonable place to be.
>When I recorded with my Roland VS-840, I often would soft-knee compress
>or limit BEFORE I went into the inputs, to prevent saturating the ADCs.
>
>What do you professionals out there do?
Depends, lots of professionals today are still using 2" analogue tape,
where the dynamic range of the tracking medium is comparatively limited
and where it's important to keep levels up to tape. But folks using
modern digital stuff don't need to worry about any of that... leave
plenty of headroom.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Laurence Payne wrote:
With a comma. The French call it the floating comma (literally translated).
http://en.wikipedia.org/wiki/Floating_point
" There are several mechanisms by which strings of digits can represent
numbers. In common mathematical notation, the digit string can be of any
length, and the location of the radix point is indicated by placing an explicit
"point" character (dot or comma) there. "
Graham
Scott Dorsey wrote:
> Paul <Quill...@gmail.com> wrote:
> >
> >What do you professionals out there do?
>
> Depends, lots of professionals today are still using 2" analogue tape,
> where the dynamic range of the tracking medium is comparatively limited
> and where it's important to keep levels up to tape. But folks using
> modern digital stuff don't need to worry about any of that... leave
> plenty of headroom.
I know a certain studio that owns no less than 5 Studer 2" A800s that are
rarely used and a 1" stereo headblock ATR100.
Most of their work goes through Prism Sound converters straight to HD.
http://prismsound.com/music_recording/products_subs/ada8xr/ada8xr_home.php
I used to work for Prism Sound, FWIW. Most of the top engineers were
ex-Neve, including myself.
Graham
> My buddy who has Protools claims that he doesn't apply
> compression and limiting until after recording tracks,
> until after everything is in the digital domain.
Seems like good practice.
> He is very careful not to hit Full Scale on his
> A-to-D converters,
Again good practice, but not rocket science to do.
> but i don't see how you can do this
> very easily, especially with an erratic drummer,
Everything has limits, even an erratic drummer.
> who will certainly hit the snare
> or drums harder than when you ask him to do a sound check.
That's why you leave headroom.
> So maybe he is ultra conservative,
> and sets the levels really low, but then he is giving up
> dynamic range.
Not necessarily.
Look at it this way. People were able to make fairly good recordings back in
the early days, when the available dynamic range on the then-available media
was maybe 50-55 dB. With 16 bits you have about 96 dB of dynamic range, which
is 41-46 dB more. 40 dB is lots.
Look at it another way. The noise floor in a home studio might be 30-40 dB
SPL. Add 96 dB to that and you might have over 130 dB, which is above the
threshold of doing physical damage to the listener's organs, not just his
ears. The point is that we don't go there, so with 16 bit digital, available
dynamic range actually abounds.
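A minimal sketch of that arithmetic (assuming the textbook 6.02*N + 1.76 dB
figure for an ideal N-bit quantizer, and taking the 30-40 dB SPL room figure
from the paragraph above):

def quant_range_db(bits):
    # theoretical SNR of an ideal N-bit quantizer with a full-scale sine
    return 6.02 * bits + 1.76

room_floor_spl = 35                          # dB SPL, a typical home studio
print(round(quant_range_db(16), 1))          # ~98.1 dB, "about 96 dB" in round numbers
print(room_floor_spl + quant_range_db(16))   # ~133 dB SPL, well past comfortable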
> And if he hits Full Scale on his ADCs, then the digital distortion
> sounds horrible, and no amount of DSP processing can fix that kind
> of distortion.
Not necessarily. While the distortion in analog tape is more like gentle
compression, clipping in a mic preamp is not very nice. It's often
indistinguishable from clipping in a modern ADC. Protracted clipping sounds
bad, but brief clipping of an occasional peak will sneak past even the
pickiest ear.
Look at it yet another way. A good garden variety condenser mic will have a
noise floor around 16-18 dB SPL. Add 96 dB to that and you have 112 dB. 112
dB is pretty loud at most frequencies. Very few rooms have background noise
that is at or below 20 dB SPL. Most have 10-20 dB more.
> When I recorded with my Roland VS-840, i often would soft-knee compress
> or limit BEFORE i went into the inputs, to prevent saturating the ADCs.
That's what you did, but was it really necessary?
I record band and chorus festivals. One band will be Jr. High and have 20
little kids, the next has 80 high school seniors who can be very big kids.
Yet, I record them with a CD recorder (16 bits for sure) and rarely if ever
have any clipping or a noisy background.
> What do you professionals out there do?
Leave 10-20 dB headroom depending on my level of experience with the group
being recorded. If I guess wrong I can make it up in the mixdown.
And we think of drummers as being tough for this, but they're really
among the easiest. If you use a stick and bring it down as hard as
you can on a drum, that's more or less as loud as it's going to be.
If a gorilla drummer plays an unbelievably hard fill, well, it's not
going to get any louder on that drum track than that. He can't step
on a pedal, turn a knob or move closer to the mic. Even very dynamic
drummers are limited in where the meter's going to go. It's the
nature of hitting things.
In fact, that's what's horribly uncreative about being a recording
engineer these days. See where the loudest is and back it down so it
never clips. Remember when we had to hit tape at all different levels
for the different sources? Snare: let it peak, kick: don't let it
fart the tape, hats: keep them low or the highs will distort. You
don't have to do any of that anymore. Just get it in there and don't
clip, back it down more if it looks like it might. Some
creativity! : )
*sigh*
d
Also, of course... slight clipping is less audible on a drum strike than on
most other instruments....
>In fact, that's what's horribly uncreative about being a recording
>engineer these days. See where the loudest is and back it down so it
>never clips. Remember when we had to hit tape at all different levels
>for the different sources? Snare: let it peak, kick: don't let it
>fart the tape, hats: keep them low or the highs will distort. You
>don't have to do any of that anymore. Just get it in there and don't
>clip, back it down more if it looks like it might. Some
>creativity! : )
If you like working like that (and it can be fun), there are plenty of
2" machines going for a song right now.
>
> Also, of course... slight clipping is less audible on a drum strike than on
> most other instruments....
And plus, you clip the snare track and then bring it up with the 8
other tracks that have snare in it, and no one points out the clipped
snare.
> If you like working like that (and it can be fun), there are plenty of
> 2" machines going for a song right now.
If only the business to use it for were going for a song as
well : ) And the tape ops! Come back! (And I don't even have the
storage for the tapes much less a corner to put a 2" machine)
Oh well...I guess for creativity I can record 8 bit and go in really
low for that vintage 1985 sound.
This is a very sensible way of working. People today are far too
worried about not recording at a high enough level. I think this is
a result of everything commercial being at such a high level that
they've forgotten what the volume control on their monitor system
does.
If the drummer is erratic, he shouldn't be recording. You're going
to have to do too much fix-up to the track and it won't sound natural.
Sometimes it's convenient to use some compression when tracking
because it actually improves the sound, but using it as a substitute
for setting the level or having players who can't control themselves
is never going to make for good recordings.
For 50 years before we had DAWs and cheap compressors, we
set the record level with the understanding that people usually do
play a little louder when recording than when checking, but we
also LISTEN to what's going on during the take. If the band starts
too loud, you stop them, pardon yourself, drop the record level,
and tell them to go again. If they're listening on headphones, turn
up the headphone level so they don't play louder still.
One of the reasons why "engineers" today don't take quite so much
care with tracking as they used to is because they don't have a lot of
experience. Another reason is that so many people record themselves
and it's really hard to concentrate on both your playing and your
engineering. But the good news is that you can be more conservative
with your recording level, make adjustments later, and still get a
good recording.
> And if he hits Full Scale on his ADCs, then the digital distortion
> sounds
> horrible, and no amount of DSP processing can fix that kind of
> distortion.
Any overload sounds terrible. So you do the same thing that engineers
have done for 50 years. You say "Sorry, we had an overload and we
have to do another take." You listen to what's being recorded, you listen
to a playback, you pay attention, and if there's a problem you discover it
right then and fix it. You don't wait until you've recorded 16 tunes and
get around to mixing them six months after the band breaks up. You so
often read someone's plight of "We can't do a retake because the singer
moved to India to study with a guru and won't be back for two years."
> What do you professionals out there do?
Be professional, which means doing what makes technical sense. If putting
a compressor in line with your recorder makes it sound better, then do it.
If you're putting it there to protect yourself from carelessness, then
leave it out and be more careful. Or at least set the input level so that
the compressor is evening out small variations in level (which will make
mixing easier) rather than simply preventing overloads.
--
If you e-mail me and it bounces, use your secret decoder ring and reach
me here:
double-m-eleven-double-zero at yahoo -- I'm really Mike Rivers
(mriv...@d-and-d.com)
Ok, this sounds good. Because it's 6 dB/bit, so if you
peak around -12dBFS, that means you are losing 2 bits, so
it's like you are recording at 22 bits instead of 24. Which
is no big deal, since you will dither to 16 bits in the end anyways.
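A one-line check of that 6 dB/bit reasoning (my own illustration, using
the usual ~6.02 dB per bit figure):

headroom_db = 12                   # peaks sitting around -12 dBFS
bits_unused = headroom_db / 6.02   # roughly 6 dB per bit
print(round(bits_unused, 1))       # ~2.0, so 24-bit capture behaves like ~22 bits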
Wow, technology moves very fast. The last time
I recorded was about 10 years ago, and now 24/96k is
the new standard.
I suppose everyone records at 24bit/96kHz, and then
dithers to 16bit/44.1kHz for the Redbook CD, right?
I remember people being blown away by having a
Nyquist of 96kHz. But the reason to have it is that the
alias frequencies are much higher, so your recording and
digital processing is much cleaner, even though it will move
to 44.1 anyways.
And I'm blown away by all the Plug-ins that
can fix bad vocal takes, and off-time drumming.
I suppose soon they will have Plug-ins that
can write hit songs, or fix bad songs! Or Plug-ins
that can get the band good drugs, etc.....
:)
Actually, most folks record at 24 bit, 44.1 ksamp/sec if they are
intending on releasing on CD. The higher sampling rates usually don't
buy you anything other than storage space and rate conversion artifacts.
> I remember people being blown away by having a
>Nyquist of 96kHz. But the reason to have it is that the
>alias frequencies are much higher, so your recording and
>digital processing is much cleaner, even though it will move
>to 44.1 anyways.
No, not really. There are some things where the wider bandwidth can
be useful, like for LP transcription where the ultrasonic stuff can
help the noise reduction algorithm figure out what is there. But
for the most part the wider bandwidth just brings people grief.
Aliasing issues are pretty much a non-issue due to sigma-delta
converters and oversampling systems which started to become popular
in the late eighties.
> And i'm blown away by all the Plug-ins that
>can fix bad vocal takes, and off-time drumming.
>
> I suppose soon they will have Plug-ins that
>can write hit songs, or fix bad songs! Or Plug-ins
>that can get the band good drugs, etc.....
Sadly all of this processing doesn't really fix things, it just hides
things, and I don't think it's a good thing for music in general. But
that's just me, and a lot of folks feel differently about that.
Ok, so if there are conversion artifacts when you move
from 96k to 44.1k, then I suppose you should stay at 96k if
you decide to record there.
But I didn't know most people use 24 bits/44.1k. They
still want the extra dynamic range, as per my compression
question.
> Aliasing issues are pretty much a non-issue due to sigma-delta
> converters and oversampling systems which started to become popular
> in the late eighties.
>
> > And I'm blown away by all the Plug-ins that
> > can fix bad vocal takes, and off-time drumming.
>
> > I suppose soon they will have Plug-ins that
> > can write hit songs, or fix bad songs! Or Plug-ins
> > that can get the band good drugs, etc.....
>
> Sadly all of this processing doesn't really fix things, it just hides
> things, and I don't think it's a good thing for music in general. But
> that's just me, and a lot of folks feel differently about that.
> --scott
>
Yeah, I was just kidding about that one. But it's funny
how bad even artists from the old, pre-computer days can be,
when you hear a live recording of them:
http://www.youtube.com/watch?v=yRv34Cat3Vw
Although, granted, the Beatles didn't have proper
monitors here. But my point is that even back then, the
studio was your friend. You could do take after take, and
punch-in until you had what you wanted. The computer just
makes it a bit easier and faster.
> Ok, this sounds good. Because it's 6 dB/bit, so if you
> peak around -12dBFS, that means you are losing 2 bits, so
> it's like you are recording at 22 bits instead of 24. Which
> is no big deal, since you will dither to 16 bits in the end anyways.
The no big deal part is that we still don't have real 24-bit
converters, so the bits you're "losing" are noise anyway.
> Wow, technology moves very fast. The last time
> i recorded was about 10 years ago, and now 24/96k is
> the new standard.
Not around here, yet. I still record at 44.1 kHz, though I'll
use 24 bits if I have the space.
> I remember people being blown away by having a
> Nyquist of 96kHz. But the reason to have it is that the
> alias frequencies are much higher
No, the reason to use 96 kHz sample rate is so that you'll
have wider frequency response. If you have aliasing, you
have a bad converter. Before they used oversampling
converters and digital filtering, sampling at 2X meant you
could start your anti-aliasing filter at 24 kHz and it didn't
need to reject darn near everything until half the sample rate,
so you could use a better-behaved filter.
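A minimal sketch of why that transition band matters (assuming a 20 kHz
passband; the sample rates are the only thing taken from the thread):

for fs in (44100, 96000):
    passband = 20000.0         # keep the audio band flat out to 20 kHz
    nyquist = fs / 2.0         # filter must reach near-total rejection here
    print(fs, "Hz sample rate:", nyquist - passband, "Hz of transition band")
# 44100 Hz sample rate: 2050.0 Hz of transition band
# 96000 Hz sample rate: 28000.0 Hz of transition band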
> And i'm blown away by all the Plug-ins that
> can fix bad vocal takes, and off-time drumming.
I'm not. I make the players work for their records. If
they can't sing, I can find them a vocalist. If they can't
keep time, I can find them a drummer. And if they HAVE
to do it themselves and patch it up, I send them to another
studio.
But that's getting off the subject, I think.
It's just that with 16 bit systems the converters usually sounded bad
unless you used most of the available bits. This isn't the case with
24 bit systems. Simply put, you needn't push every bit for good sound
anymore - in fact that can make the end result sound worse.
Realize more bits don't mean you can get louder, they mean you
can get *softer.* And regarding dither: when the reverb in a
recording of a real hall diminishes to the last available bits, those
last couple of bits will start toggling on and off. This produces a
ragged sound, very very soft but still ragged. Dithering noise
prevents that toggling back and forth, so you get a smoother sounding
tail.
You can hear this: play a track and record a fade out with and
without dither, then normalize both recorded tracks at the tails.
Watch your speaker volume, it'll be loud. You will hear the bits
toggling back and forth on the track with no dither. When they chop
off the bottom bits - reduce the bit depth for CD prep - adding dither
makes it sound smoother.
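For anyone who wants to actually run that fade test, here is a rough
numpy sketch of it (my own illustration, not anything from the thread:
it quantizes a fading sine to 16 bits with and without TPDF dither so
you can normalize the tails and compare):

import numpy as np

fs = 44100
t = np.arange(fs * 5) / fs                       # 5 seconds
fade = np.linspace(1.0, 0.0, t.size)             # linear fade to silence
x = 0.5 * np.sin(2 * np.pi * 440 * t) * fade     # high-precision source

lsb = 1.0 / 32768.0                              # 16-bit step, full scale = +/-1.0
truncated = np.floor(x / lsb) * lsb              # plain truncation to 16 bits

tpdf = (np.random.rand(x.size) - np.random.rand(x.size)) * lsb  # TPDF dither, +/-1 LSB
dithered = np.floor((x + tpdf) / lsb) * lsb      # add dither, then truncate

tail = slice(-fs // 2, None)                     # last half second of the fade
for name, y in (("truncated", truncated), ("dithered", dithered)):
    err = y[tail] - x[tail]
    print(name, "tail error RMS in LSBs:", np.sqrt(np.mean(err ** 2)) / lsb)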
>
> Wow, technology moves very fast. The last time
> i recorded was about 10 years ago, and now 24/96k is
> the new standard.
>
> I suppose everyone records at 24bit/96kHz, and then
> dithers to 16bit/44.1kHz for the Redbook CD, right?
I think most use 24/44.1, and dither is added at CD prep. If you
want to record ultrasonic material like birdcalls and wolf whistles,
you definitely need 96k, which provides stratospheric frequency
response. But in practice, music that really requires that is
exceptional; with most material it doesn't make a difference - besides
doubling the file size.
It is a good sales point, and if you might be mixing classical
music or film scores you want to be able to do that.
> I suppose soon they will have Plug-ins that
> can write hit songs, or fix bad songs!
And replace audio guys, anyone who can sing or actually play an
instrument, and eventually all human beings entirely.
Will Miho
NY TV/Audio Post/Music/Live Sound Guy
"The large print giveth and the small print taketh away..." Tom Waits
> Ok, so if there are conversion artifacts when you move
> from 96k to 44.1k, then i suppose you should stay at 96k if
> you decide to record there.
Sure, if you just want to listen to your own stuff at home. But if
you give someone a disk with a 96 kHz file on it, they probably
won't be able to play it in their car.
> Yeah, i was just kidding about that one. But it's funny
> how bad even artists from the old, pre-computer days can be,
> when you hear a live recording of them:
Does this spoil the music for you? Perhaps it sounds odd if
you've become accustomed to listening to artificially perfect
recordings, but there's something nice about knowing that
the guy who gets paid the big bucks for singing isn't perfect
all the time, but he knows how to put over a song so you
don't even notice that he's not perfect. That's what makes
the difference between a performer and a songwriter with
a guitar or piano and an ego.
> my point is that even back then, the
> studio was your friend. You could do take after take, and
> punch-in until you had what you wanted. The computer just
> makes it a bit easier and faster.
Sure, there was a lot of that, but there were also a lot of good
records recorded directly to disk masters, or edited
by cutting tape.
They aren't severe, and they get better every day, but it's more
processing that isn't hard to avoid.
> But i didn't know most people use 24Bits/44.1k. They
>still want the extra dynamic range, as per my compression
>question.
You can never have too much dynamic range. With good 24 bit converters
these days, though, dynamic range has ceased to be a real issue.
On the OTHER hand, the outrageously low noise floors that are available
today are part of what makes it possible to abusively overcompress
tracks.... people routinely crush stuff today in ways that would have
been impossible on a good 16-track machine because the noise floor would
have come up way too high in the process.
> Yeah, i was just kidding about that one. But it's funny
>how bad even artists from the old, pre-computer days can be,
>when you hear a live recording of them:
>
> http://www.youtube.com/watch?v=yRv34Cat3Vw
>
> Although, granted, the Beatles didn't have proper
>monitors here. But my point is that even back then, the
>studio was your friend. You could do take after take, and
>punch-in until you had what you wanted. The computer just
>makes it a bit easier and faster.
Right... on the other hand, we had guys like Sinatra who would refuse
to do a second take for protection because he thought the first take
was just fine. And it usually was.
There were a lot of 16 bit systems out there that were really only
acceptably linear down to the 10th or 12th bit... thank God those
days are over.
Most 24 bit converters today have at least 20 valid bits. And a
20 bit converter calling itself a 24 bit converter is a whole hell
of a lot better than a 10 bit converter calling itself a 16 bit one.
> I suppose everyone records at 24bit/96kHz, and then
> dithers to 16bit/44.1kHz for the Redbook CD, right?
>
I'd say 24/48 is more of a regular occurrence...generally speaking.
Used to be if you were going to release on CD, you'd record at 44.1,
if you were going to release on video, you'd record at 48. But these
days with so many projects going to both....
---scott
Same difference. The higher Nyquist gives you
a wider bandwidth, and the alias frequencies are also
higher....
Are there substantial conversion artifacts going from 48 to 44.1?
You'd have to do it eventually.
The VS-840 did pretty good for my CD ten years ago.
However, at very, very low playback fadeouts, you could
hear the quantization noise....a very ragged sound as you say.
Yes, they call that the Effective Number of Bits, or ENOB
for short. It's set by the self-noise, or the self-jitter of the
ADC.
There is thermal noise added at the moment the sample-and-hold
capacitor is measured for voltage, and jitter means the sampling
period is never exactly the same length of time; both lead to
distortion.
And so the Signal to Noise + Distortion ratio, or SINAD, sets the ENOB.
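For reference, the usual rule of thumb relating the two (the 1.76 and
6.02 constants come from the ideal-quantizer SNR formula; the example
SINAD figures below are just illustrative):

def enob(sinad_db):
    # effective number of bits implied by a measured SINAD figure
    return (sinad_db - 1.76) / 6.02

print(round(enob(110), 1))   # a very good "24-bit" converter: ~18 effective bits
print(round(enob(98), 1))    # ideal 16-bit performance: ~16 bits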
There are some.
> You'd have to do it eventually.
Not if you never release on CD. Which is the case for lots of film
and video projects.
Can be, but it can also be limited by the linearity of the converter.
Some older ladder-type converters weren't linear at low levels... a
1 kHz sine wave sounded fine at 0 dBFS, but drop it down to -60 dBFS
and it sounded very buzzy and distorted, even with 16-bit converters.
The noise floor was nice and low, but the bits coming out weren't
the right ones.
>There is thermal noise added to the moment when the sample-and-hold
>capacitor is measured for voltage, so that the sampling period is
>never the same amount of time, and leads to distortion.
>
> And so the Signal to Noise + Distortion, or SINAD noise
>floor, sets the ENOB.
Yes, that distortion is a big deal. In the case of ladder DACs, there
are issues due to the sample and hold rattling around, the ladders not
being perfectly trimmed, the ladders not staying trimmed as the
temperature changes, etc.
Just thinking about it makes me appreciate my Ampex.
<< I suppose everyone records at 24bit/96kHz, and then
dithers to 16bit/44.1kHz for the Redbook CD, right?>>
No; unless recording for future DVD use, a lot of folks (around here at
least, and in my experience many places in the audio industry) record 24 bit
44.1kHz, not wanting to accept the degradation of a sample rate conversion
(see the thread on that, currently in your theatres) for CD release.
Peace,
Paul
> Are there substantial conversion artifacts going
> from 48 to 44.1?
Depends on the hardware or software you use to do the SRC.
The one artifact that I object to is the time it takes.
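If anyone wants to try it for themselves, here's a minimal sketch using
SciPy's polyphase resampler (44100/48000 reduces to 147/160; which
artifacts you get, if any, depends entirely on the resampler you use):

import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1000 * t)        # one second of a 1 kHz test tone at 48 kHz

y = resample_poly(x, 147, 160)          # 48000 * 147 / 160 = 44100
print(len(x), "->", len(y), "samples")  # 48000 -> 44100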
>
> It's just with 16 bit systems the converters usually
> sounded bad unless you used most of the available bits.
Most of the available bits of a 16 bit converter would be 9 bits, and yes,
that is not enough for good sound.
OTOH, 15 or 16 bits are a whole different story.
> This isn't the case with 24 bit systems. Simply put you
> needn't push every bit for good sound anymore - in fact
> that can make the end result sound worse.
While the best 24 bit systems have 19 or 20 bits, some of them have 15 or 16
bits.
> Realize more bits don't mean you can get louder, they
> mean you can get *softer.* And regarding dither. When
> the reverb in a recording of a real hall diminishes to
> the last available bits, those last couple bits will
> start toggling on and off.
Actually, the last couple of bits are toggling on and off like crazy, all
the time. You do not seem to have a good practical understanding of what
dither does.
What quiet sounds do is fail to toggle the upper 8 or more bits.
> This produces a ragged sound,
> very very soft but still ragged.
The noise floor of a good quiet room is usually from 60 to 70 dB below the
loudest sound. Rarely more. That leaves 26 dB or more of "foot room" with
a 16 bit system - more than 4 bits' worth.
> Dithering noise
> prevents that toggling back and forth, so get a smoother sounding tail.
Now you've proven me right - you really don't know what dither does. In fact
what dither does is make sure that the low-order bit is toggling back and
forth, no matter what.
> You can hear this, play a track and record a fade out
> with and without dither, then normalize both recorded
> tracks at the tails. Watch your speaker volume, it'll be
> loud. You will hear the bits toggling back and forth on
> the track with no dither. When they chop off the bottom
> bits - reduce the bitrate for CD prep - adding dither
> makes it sound smoother.
This little anecdote is almost exactly the opposite of what actually
happens. Just goes to show that you hear what you believe, no matter how
wrong your beliefs are.
> I think most use 24/44.1, and dither is added at CD
> prep.
I did a bunch of work at 44.1/24 and noticed that long recordings with many
tracks took up about half again as much space. I switched to 44.1/16 and
found that nothing changed, audibly.
> Same difference. The higher Nyquist gives you
> a wider bandwidth, and the alias frequencies are also
> higher....
True, if you let them through, but that's a no-no in converter
design. There ARE no alias frequencies, at least not in the
converters that I use, and I use some pretty cheap converters.
This is a design issue that has been dealt with and is no longer
a potential problem. Higher sample rate, in practice, today, only
gives wider bandwidth. While it has been shown that there's
sound above 20 kHz and that it can be captured by a microphone,
(or, in the case of Scott's example, by a phono cartridge) there
are few speakers that can reproduce it and few ears that can
appreciate it.
So 96 kHz sample rate is not yet a "requirement" for contemporary
recording, and assuming you're not using a ten year old 96 kHz A/D
converter, won't make your recordings sound any better in the end.
It won't make them sound worse, (though many converters in the
ten year old time frame DID sound worse at 2x sample rates than at
standard, because of the components available to the designers at
the time) but you'll have bigger pieces to deal with that aren't necessary.
> Are there substantial conversion artifacts going from 48 to 44.1?
Will you accept "no" as an answer and move on? This is a well established
process. Some programs and devices do it better than others, but you can
avoid the ones that you don't like. Mastering engineers who have the job of
not causing any harm have this well in hand today.
And before you ask, converting from 88.2 kHz to 44.1 kHz is no simpler
or less damaging than converting from any other sample rate. You can't
just leave out every other sample; you have to resample in order to do it
right.
> The VS-840 did pretty good for my CD ten years ago.
>
> However, at very, very low playback fadeouts, you could
> hear the quantization noise....a very ragged sound as you say.
But that was ten years ago, and how many people listen to fadeouts
with the volume turned up high enough so that you can hear the
deficiencies in the system? If you look and listen hard enough you
can find something wrong with just about anything in audio.
So I offend you, Arny? This is hardly a question of "belief",
it's an educational issue. An AES guy who is a manufacturer's rep
explained and demonstrated it to me this way a while back; I didn't
create the example as a proof. If my concept is wrong, fine - how
about explaining exactly what _is_ happening and maybe leaving the RAO
attitude outside?
You say the "exact opposite" of preventing bits from toggling on
and off is what dither does - explain that, please. And when one
dithers down from 24 to 16 bits with dither, is something besides
simple truncation with added noise is going on? If that is all that
is happening, why does dither make it sound smoother?
> On Apr 12, 7:55 am, "Arny Krueger" <ar...@hotpop.com>
> wrote:
>> "WillStG" <will...@aol.com> wrote in message
>>> You can hear this, play a track and record a fade
>>> out with and without dither, then normalize both
>>> recorded tracks at the tails. Watch your speaker
>>> volume, it'll be loud. You will hear the bits toggling
>>> back and forth on the track with no dither. When they
>>> chop off the bottom bits - reduce the bitrate for CD
>>> prep - adding dither makes it sound smoother.
>> This little anecdote is almost exactly the opposite of
>> what actually happens. Just goes to show that you hear
> what you believe, no matter how wrong your beliefs are.
> So I offend you Arny?
Not really, you just screwed up, and it provided a good example of how
audiophile myths propagate.
> This is a hardly a question of "belief", it's an educational issue.
The outcome of education is hopefully belief in the principles that the
education is conveying.
> An AES guy who is a
> manufacturer's rep who explained and demonstrated it to
> me this way a while back, I didn't create the example as
> a proof. If my concept is wrong, fine - how about
> explaining exactly what _is_ happening and - maybe leave
> the RAO attitude outside?
First off, just about any real world recording has enough environmental
noise in it that conversion from 24 to 16 bits can be done with zero added
dither.
I've been looking at the noise floors of recordings of all kinds for about a
decade now, and there are very clear trends. Most recordings have an ambient
noise floor that is from 65 to 75 dB down. The noise is usually from the
venue but there's also significant electronic noise from microphones and mic
preamps. All it takes for a proper job of dithering is 1/2 LSB of noise,
which translates into a noise floor that is about 93 dB down. While the
environmental noise is not ideal, there's also about 10 times more of it
than the bare minimum.
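A quick check of that "about 93 dB down" figure (assuming "1/2 LSB of
noise" means 0.5 LSB RMS, measured against a full-scale 16-bit sine):

import math

fs_sine_rms = (2 ** 15) / math.sqrt(2)   # full-scale 16-bit sine, RMS in LSBs
noise_rms = 0.5                          # half an LSB of noise, RMS
print(round(20 * math.log10(fs_sine_rms / noise_rms), 1))   # ~93.3 dB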
If you get a really loud set of instruments working in a very quiet room,
the noise floor might be about 80 dB down. After searching for over a
decade, I found some orchestral recordings of Beethoven symphonies on the
Swedish BIS label that are truly exceptional - their noise floor is about 80
dB down. When you push environmental noise that low, the noise from the
electronics predominates. Noise from electronics generally more closely
approximates ideal dither. Notice that in this case the environmental noise
is still about 5 times more than the bare minimum.
Even if you screw up and record with the peaks 20 dB lower than FS, there
will generally still be enough ambient noise to properly dither a conversion
from 24 bit to 16 bits. Note that I'm not saying don't dither, I'm saying
that any added dither will probably be far more insurance than actual
audible benefit.
Secondly, if somehow one were to make a recording that was so noise free
that it actually needed added dither for a good-sounding recording,
actually analyzing the quantization error shows that the nature of
quantization error is bits flipping. So the sound of bits flipping is not
the issue, as they will be flipping with or without dither.
> You say the "exact opposite" of preventing bits from
> toggling on and off is what dither does - explain that,
> please.
Without dither, the bits will be toggling based on mixing of the sampling
frequency and the music being truncated. The sampling frequency is high, so
the toggling will be very rapid. With dither this toggling will be altered
by the dither. The audible problem comes when the truncation error is
correlated with the music, since the sampling frequency is a single
frequency and is fixed. The key to good subjective performance during a
truncation is breaking the correlation of the truncation error with the
music. For example, adding another high frequency, low level tone can break
this correlation in an audibly effective, but less theoretically satisfying
way.
> And when one dithers down from 24 to 16 bits
> with dither, is something besides simple truncation with
> added noise going on?
No. However, the usual case is that the truncation error is adequately
randomized by the noise that is already in the signal. You can google
"self-dither" and find that a goodly number of people have been observing
this for quite a while.
> If that is all that is
> happening, why does dither make it sound smoother?
The smoother sound is generally a product of self-suggestion, since almost
all musical recordings have enough background or environmental noise, both
acoustic and electronic, to randomize truncation error without added dither.
Again, I'm not saying don't dither, but I am saying don't expect any audible
benefits most of the time.
These days most hardware and software that people commonly use to convert 24
bit recordings to 16 bits will dither by default. It would be stupid to try
to minimize or eliminate this added dither. It is, shall we say, very
optimistic to believe that it is anything more than low cost insurance
against something that rarely if ever actually is an audible problem.
If you like distortion you can just truncate anything.
>
> I've been looking at the noise floors of recordings of all kinds for about a
> decade now, and there are very clear trends. Most recordings have an ambient
> noise floor that is from 65 to 75 dB down. The noise is usually from the
> venue but there's also significant electronic noise from microphones and mic
> preamps. All it takes for a proper job of dithering is 1/2 LSB of noise,
> which translates into a noise floor that is about 93 dB down. While the
> environmental noise is not ideal, there's also about 10 times more of it
> than the bare minimum.
>
> If you get a really loud set of instruments working in a very quiet room,
> the noise floor might be about 80 dB down. After searching for over a
> decade, I found some orchestral recordings of Beethoven symphonies on the
> Swedish BIS label that are truly exceptional - their noise floor is about 80
> dB down. When you push environmental noise that low, the noise from the
> electronics predominates. Noise from electronics generally more closely
> approximates ideal dither. Notice that in this case the environmental noise
> is still about 5 times more than the bare minimum.
>
> Even if you screw up and record with the peaks 20 dB lower than FS, there
> will generally still be enough ambient noise to properly dither a conversion
> from 24 bit to 16 bits.
Electronics noise will rarely prevent distortion. Perhaps had I
specifically said "truncation distortion" rather than "bits flipping"
it wouldn't have pissed you off as much. And your comment about noise
- in my recording - is really an assumption. If I record a software
instrument in my DAW, rendered the track ("freeze" the track) and
fade that out in the DAW, what's the noise floor then?
>
> Secondly, if somehow one were to make a recording that was so noise free
> that it actually needed dither to add dither for a good-sounding recording,
> actually analyzing the quantization error shows that the nature of
> quantization error is bits flipping. So the sound of bits flipping is not
> the issue as they will be flipping with or without dither.
A fade in a DAW crosses the noise floor threshold of a recording
at some point. Without dither, this causes distortion. Dither allows
the noise floor to fade smoothly. Dither is effective at levels at
least 10 db lower than the noise floor. Record some fades with and
without dither, normalize them, and listen to them.
>
> > You say the "exact opposite" of preventing bits from
> > toggling on and off is what dither does - explain that,
> > please.
>
> Without dither, the bits will be toggling based on mixing of the sampling
> frequency and the music being truncated. The sampling frequency is high, so
> the toggling will be very rapid. With dither this toggling will be altered
> by the dither. The audible problem comes when the truncation error is
> correlated with the music, since the sampling frequency is a single
> frequency and is fixed. The key to good subjective performance during a
> truncation is breaking the correlation of the truncation error with the
> music. For example, adding another high frequency, low level tone can break
> this correlation in an audibly effective, but less theoretically satisfying
> way.
>
Bob Ohlsson has said "self dithering is bs": dither may be
noise, but noise isn't dither. Electronics or room noise isn't
particularly well distributed and has very different spectral
content. To prevent distortion artifacts, you need the right kind of
noise, not just any noise.
> > And when one dithers down from 24 to 16 bits
> > with dither, is something besides simple truncation with
> > added noise going on?
>
> No. However, the usual case is that the truncation error is adequately
> randomized by the noise that is already in the signal. You can google
> "self-dither" and find that a goodly number of people have been observing
> this for quite a while.
Electronics noise is not broadband enough to prevent distortion.
Dither absolutely prevents truncation distortion - you suggest nothing
is quiet enough for dither to make any difference, but dither prevents
distortion at levels lower than random electronics noise.
Look at it this way, electronics and room noise sounds better
with dither.
>
> > If that is all that is
> > happening, why does dither make it sound smoother?
>
> The smoother sound is generally a product of self-suggestion, since almost
> all musical recordings have enough background or environmental noise, both
> acoustic and electronic, to randomize truncation error without added dither.
> Again, I'm not saying don't dither, but I am saying don't expect any audible
> benefits most of the time.
"Self Suggesting". This is also the condescending - albeit more
tactfully offered - opinion that DigiDesign had, when they told users
there was no real world advantage in offering a 24 bit mixer. It
should be sufficient to point out that you are directing this "Self
Suggesting" comment at the majority of working Mastering Engineers.
In fact, haven't you also been saying we might as well work at 16
bits, as you can't hear the difference?
Arny, Arny...
> I've been looking at the noise floors of recordings of all kinds for about a
> decade now, and there are very clear trends. Most recordings have an ambient
> noise floor that is from 65 to 75 dB down. The noise is usually from the
> venue but there's also significant electronic noise from microphones and mic
> preamps. All it takes for a proper job of dithering is 1/2 LSB of noise,
> which translates into a noise floor that is about 93 dB down. While the
> environmental noise is not ideal, there's also about 10 times more of it
> than the bare minimum.
>
> If you get a really loud set of instruments working in a very quiet room,
> the noise floor might be about 80 dB down. After searching for over a
> decade, I found some orchestral recordings of Beethoven symphonies on the
> Swedish BIS label that are truly exceptional - their noise floor is about 80
> dB down. When you push environmental noise that low, the noise from the
> electronics predominates. Noise from electronics generally more closely
> approximates ideal dither. Notice that in this case the environmental noise
> is still about 5 times more than the bare minimum.
> (etc)
Arny, correct me if I'm wrong, but all of your noise floor examples
seem to be taken from (either in your recording or listening
experience) live stage performances going directly to two track mix.
Many of the parameters in these situations don't apply for indoor,
controlled studio multitracking. I'd be interested in your
examination of isolated tracks in a multitrack project. My own
informal tests told me that live stage recordings don't benefit much
from 24 over 16 bit, but the rest of my day to day work absolutely
does, no question about it. Completely worth the extra file size.
I could probably toggle my DAW to 16 bit for live stage direct to 2
track mix and not miss anything. But if I toggled it for most else I
likely would.
RB
> If you like distortion you can just truncate anything.
Probably true if you truncate to 10 bits or so, but frankly, I have a hard
time hearing the effect of truncating a 24-bit recording to 16 bits, even by
the gross (and surely truncated - all mechanics and no mathematics) method
of connecting the digital output of the 24-bit source to the digital input
of my plain old CD recorder.
Of course if I'm making a CD "master" I'll use one of the dithering options
provided by WaveLab or Sound Forge, whatever I'm using, probably
triangular. But because our modern 24-bit hardware is plenty good enough,
truncation, unless you have program material that's particularly revealing
and you really crank things up so you can hear it, just isn't the big deal
that we used to consider it to be.
This is another case of "why not do it right, since it's no more difficult
than doing it wrong?" But if you don't have the tools, it's not something
to worry about.
> If I record a software
> instrument in my DAW, rendered the track ("freeze" the track) and
> fade that out in the DAW, what's the noise floor then?
Whatever the noise is in the originally recorded samples plus whatever
the sampling engine contributes in manipulating the samples to make a
complete instrument. Nothing is noise-free. Good virtual instruments are
good. Cheap and crummy ones are probably not as good as you could
do in a decent studio with a good mic, preamp, and A/D converter.
> A fade in a DAW crosses the noise floor threshold of a recording
> at some point. Without dither, this causes distortion.
Anything that you don't want to hear is technically "distortion." It may
be confusing to some people to say it that way because it's not the
kind of distortion that we associate with the term - clipping, introducing
harmonically related frequencies that weren't present in the original
input, and so on. And it's only a form of noise that you can hear if
you bring a practically inaudible signal up to audible level. In most
practical listening situations, it will be below the acoustic noise floor.
> If you like distortion you can just truncate anything.
That is a potentially failing scheme. The failure is that you truncate the
audio, and the expected distortion does not materialize.
>> Even if you screw up and record with the peaks 20 dB
>> lower than FS, there will generally still be enough
>> ambient noise to properly dither a conversion from 24
>> bit to 16 bits.
> Electronics noise will rarely prevent distortion.
Except of course that it will, and I've proven it in the real world.
> Perhaps had I specifically said "truncation distortion"
> rather than "bits flipping" it wouldn't have pissed you
> off as much.
No, your two previous comments in this reply have continued your pattern of
behavior.
> And your comment about noise - in my
> recording - is really an assumption.
It is an assumption that has considerable theoretical and practical support.
> If I record a software instrument in my DAW, rendered the track
> ("freeze" the track) and fade that out in the DAW, what's
> the noise floor then?
That would depend on some details that you haven't presented, like how many
bits in the DAW, how long the fade is, etc.
>> Secondly, if somehow one were to make a recording that
>> was so noise free that it actually needed dither to add
>> dither for a good-sounding recording, actually analyzing
>> the quantization error shows that the nature of
>> quantization error is bits flipping. So the sound of
>> bits flipping is not the issue as they will be flipping
>> with or without dither.
> A fade in a DAW crosses the noise floor threshold of
> a recording at some point.
Agreed.
> Without dither, this causes distortion.
My DAW software dithers its fades, doesn't yours?
I generally do my production on files that have the same number of bits as
the distribution media does, don't you?
> Dither allows the noise floor to fade smoothly.
Yes, which is one reason that most DAW software does that by default.
> Dither is effective at levels at least 10 db
> lower than the noise floor.
Dither that is effective at all is effective at all signal levels, whether
above or below the LSB.
> Record some fades with and
> without dither, normalize them, and listen to them.
I don't do that for production work, so why would I do that to guide how I
do production work?
>>> You say the "exact opposite" of preventing bits from
>>> toggling on and off is what dither does - explain that,
>>> please.
>> Without dither, the bits will be toggling based on
>> mixing of the sampling frequency and the music being
>> truncated. The sampling frequency is high, so the
>> toggling will be very rapid. With dither this toggling
>> will be altered by the dither. The audible problem comes
>> when the truncation error is correlated with the music,
>> since the sampling frequency is a single frequency and
>> is fixed. The key to good subjective performance during
>> a truncation is breaking the correlation of the
>> truncation error with the music. For example, adding
>> another high frequency, low level tone can break this
>> correlation in an audibly effective, but less
>> theoretically satisfying way.
> Bob Ohlsson has said "self dithering is bs,"
That's not the only technical mistake he's made. However, if you wish to
deify him, be my guest! ;-)
> dither may be noise but noise isn't dither.
That's a cute phrase and, at its limits, it's true. However, most real world
audio is done some distance from the limits. Furthermore, I've been very
consistent about saying that everything should be dithered even though it may
be futile. I've also said that the true meaning of "self dither" is that
much of the time, dithering makes no audible difference at all.
This may conflict with people of the audio high priest persuasion, but it
simply is how things are out here in the real world.
> Electronics
> or room noise isn't particularly well distributed and has
> very different spectral content.
Agreed, but when your noise is 10-20-30 dB above LSB, it need not be optimal
to be highly effective. All of the noise in the world can be broken into
two categories. In one category, the noise could be modeled as being some
really good optimal dither, with other stuff added. In the other category,
things are so spread out or concentrated in the wrong places that no way can
you model it as being really good optimal dither with other stuff added.
Obviously, noises in the second category are inadequate to be called "self
dither". But noises in the first category can be called "self dither".
> To prevent distortion artifacts, you need the right kind of noise, not
> just any
> noise.
Agreed, but I never said that any noise will do. I said that many common
noises will do, but don't bet your life on it. I said don't be surprised if
dithering with the latest-greatest dither doesn't make any audible
difference at all, because you may have stumbled into one of the multitude
of cases where self-dither is working.
>>> And when one dithers down from 24 to 16 bits
>>> with dither, is something besides simple truncation with
>>> added noise going on?
>> No. However, the usual case is that the truncation
>> error is adequately randomized by the noise that is
>> already in the signal. You can google "self-dither" and
>> find that a goodly number of people have been observing
>> this for quite a while.
> Electronics noise is not broadband enough to prevent distortion.
That's a global generalization that many will immediately see as being
false. Back in the day, there were applications for dither, but digital
dither was not always readily available. People used analog noise generators
to produce dither, and it worked. Analog noise generators were built by
finding some "electronics noise" and amplifying it.
> Dither absolutely prevents truncation
> distortion - you suggest nothing is quiet enough for
> dither to make any difference,
Never said any such thing. What I said is that few things that we encounter
in the real world are quiet enough all by themselves for dither to make a
difference. Sure, you can attenuate something by 70 dB and that is pretty
well guaranteed to put its internal noise below LSB.
> but dither prevents
> distortion at levels lower than random electronics noise.
Let's put it this way. The effectiveness of dither is based on its spectral
content and amplitude distribution. TPDF is known to be generally the best
amplitude distribution. Concentrating energy just below the Nyquist
frequency is often the best spectral distribution. Neither spectral
content nor amplitude distribution is inherently analog or digital.
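For the record, TPDF dither is usually generated as the sum of two
independent uniform (RPDF) sources of +/-0.5 LSB each; a minimal sketch
of the two distributions (my own illustration, nothing from the thread):

import numpy as np

n = 1_000_000
rpdf = np.random.rand(n) - 0.5                                  # uniform, +/-0.5 LSB
tpdf = (np.random.rand(n) - 0.5) + (np.random.rand(n) - 0.5)    # triangular, +/-1 LSB
print(round(rpdf.std(), 3), round(tpdf.std(), 3))               # ~0.289 and ~0.408 LSB RMS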
> Look at it this way, electronics and room noise
> sounds better with dither.
Not necessarily a generality, but people who pontificate on the effectiveness
of blind testing won't ever find out.
>>> If that is all that is
>>> happening, why does dither make it sound smoother?
>
>> The smoother sound is generally a product of
>> self-suggestion, since almost all musical recordings
>> have enough background or environmental noise, both
>> acoustic and electronic, to randomize truncation error
>> without added dither. Again, I'm not saying don't
>> dither, but I am saying don't expect any audible
>> benefits most of the time.
> "Self Suggesting". This is also the condescending -
Anybody who denies the effects of self-suggestion on their audio production
work has got a problem, whether with education, practical experience or
their egos.
Show me someone who says that they never self-suggest themselves into making
errors, and I'll show you someone who is doing it a lot of the time.
> albeit more tactfully offered - opinion that DigiDesign
> had, when they told users there was no real world
> advantage in offering a 24 bit mixer.
You seem to be basing global truths on anecdotes.
> It should be
> sufficient to point out that you are directing this "Self
> Suggesting" comment at the majority of working Mastering
> Engineers.
"Majority of working Mastering Engineers". Sounds like a global truth based
on presumptions.
> In fact, haven't you also been saying we might as
> well work at 16 bits, as you can't hear the difference?
Condescending comment noted.
In my defense, I follow my own suggestion and dither all of the work that I
do, whether I think it's absolutely necessary or not.
> Arny, Arny...
Cascading condescending comments noted.
> Arny, correct me if I'm wrong, but all of your noise
> floor examples seem to be taken from (either in your
> recording or listening experience) live stage
> performances going directly to two track mix. Many of the
> parameters in these situations don't apply for indoor,
> controlled studio multitracking.
Not all of them.
> I'd be interested in
> your examination of isolated tracks in a multitrack
> project.
Put zip files of wav file excerpts on some free file sharing service
someplace, and we'll see what is going on.
> My own informal tests told me that live stage
> recordings don't benefit much from 24 over 16 bit, but
> the rest of my day to day work absolutely does, no
> question about it. Completely worth the extra file size.
Almost all the gear I use - the Microtrack, the Delta cards from the 1010 to
the 24/192, and the LynxTwo card - runs at either 16 or 24 bits and samples up
to at least 96 kHz. I run jobs at more than 16/44 whenever the spirit moves. I
both look at them and listen to them. I don't see or hear anything that is
worth the extra space.
> I could probably toggle my DAW to 16 bit for live stage
> direct to 2 track mix and not miss anything. But if I
> toggled it for most else I likely would.
So you say - show me your time-synched, level-matched, blind tests.
I understand the "you can't hear it" argument you are repeating.
That's why I said do an A/B comparison, record with and without dither
and normalize the tails. Then we can hear what is actually going
on.
>
> Of course if I'm making a CD "master" I'll use one of the dithering options
> provided by WaveLab or Sound Forge, whatever I'm using, probably
> triangular. But because our modern 24-bit hardware is plenty good enough,
> truncation, unless you have program material that's particularly revealing
> and you really crank things up so you can hear it, just isn't the big deal
> that we used to consider it to be.
>
Audio standards and conventions are, of course, intended to keep us
out of trouble in the widest variety of circumstances. And mastering
and two track work is one thing, multitrack recording another; the
effect is certainly cumulative.
I'm not saying how big a deal it is. But it isn't pleasant
sounding.
> This is another case of why not do it right, since it's no more
> difficult than doing it wrong. But if you don't have the tools, it's
> not something to worry about.
>
> > If I record a software
> > instrument in my DAW, render the track ("freeze" the track) and
> > fade that out in the DAW, what's the noise floor then?
>
> Whatever the noise is in the originally recorded samples plus whatever
> the sampling engine contributes in manipulating the samples to make a
> complete instrument. Nothing is noise-free. Good virtual instruments are
> good. Cheap and crummy ones are probably not as good as you could
> do in a decent studio with a good mic, preamp, and A/D converter.
>
> > A fade in a DAW crosses the noise floor threshold of a recording
> > at some point. Without dither, this causes distortion.
>
> Anything that you don't want to hear is technically "distortion." It may
> be confusing to some people to say it that way because it's not the
> kind of distortion that we associate with the term - clipping, introducing
> harmonically related frequencies that weren't present in the original
> input, and so on. And it's only a form of noise that you can hear if
> you bring a practically inaudible signal up to audible level. In most
> practical listening situations, it will be below the acoustic noise floor.
Well, all recording is by definition "distortion," but that's not
exactly what we're talking about either. Truncation distortion
sounds crappy, even with modern converters, if you listen to it. And
dither also has the beneficial effect of reducing the non-linearities
of software mixers and plugins, as I understand it.
But umm - close enough for Government work, eh Mike? <g>
Why do you bother with a comment like that? Do it, and tell me
what you hear.
> > Electronics noise will rarely prevent distortion.
>
> Except of course that it will, and I've proven it in the real world.
That you don't hear it does not prove there is no distortion.
Making it audible might, and comparing null tests might.
> > If I record a software instrument in my DAW, render the track
> > ("freeze" the track) and fade that out in the DAW, what's
> > the noise floor then?
>
> That would depend on some details that you haven't presented, like how many
> bits in the DAW, how long the fade is, etc.
My point is that you are being overly broad, Arny.
> > A fade in a DAW crosses the noise floor threshold of
> > a recording at some point.
>
> Agreed.
>
> > Without dither, this causes distortion.
>
> My DAW software dithers its fades, doesn't yours?
There is so-called "intermediate dither", typical of 32-bit float
mixers and some plugins, but that alone does not prevent distortion
and non-linearities when "bouncing"/truncating with fades. According
to tests by the guys over on Brad Blackwell's Mastering Forum, adding
dither prevents measurable non-linearities that "intermediate dither"
does not in that situation.
You like to measure stuff, right Arny? Hang out with those guys
for a while.
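For readers curious what that kind of measurement looks like, here is a rough
sketch in Python with numpy. The -70 dBFS 1 kHz tone and the FFT size are
assumptions chosen for illustration, not the forum's actual test material.

import numpy as np

fs = 44100
n = 1 << 16
t = np.arange(n) / fs
tone = (10 ** (-70.0 / 20.0)) * np.sin(2 * np.pi * 1000 * t)   # -70 dBFS

def quantize16(x, dither=False):
    y = x * 32767.0
    if dither:
        y = y + (np.random.rand(x.size) - np.random.rand(x.size))  # TPDF
    return np.round(y) / 32767.0

def spectrum_db(x):
    win = np.hanning(x.size)
    mag = np.abs(np.fft.rfft(x * win)) / (x.size / 2.0)
    return 20.0 * np.log10(mag + 1e-12)

undithered = spectrum_db(quantize16(tone))
dithered   = spectrum_db(quantize16(tone, dither=True))
# Plot the two: the undithered spectrum sprouts harmonics of 1 kHz
# (error correlated with the signal); the dithered one is just a flat,
# slightly higher noise floor.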
> > Dither allows the noise floor to fade smoothly.
>
> Yes, which is one reason that most DAW software does that by default.
So what's the problem? You have ceded my point, but hardly
explained the "delusion" you have assigned to me.
>
> > Record some fades with and
> > without dither, normalize them, and listen to them.
>
> I don't do that for production work, so why would I do that to guide how I
> do production work?
>
Because your snarky comments have their basis in your insistence
that there is no difference. And it is better to prove yourself wrong
than to have someone else prove you wrong.
>
> >>> You say the "exact opposite" of preventing bits from
> >>> toggling on and off is what dither does - explain that,
> >>> please.
> >> Without dither, the bits will be toggling based on
> >> mixing of the sampling frequency and the music being
> >> truncated. The sampling frequency is high, so the
> >> toggling will be very rapid. With dither this toggling
> >> will be altered by the dither. The audible problem comes
> >> when the truncation error is correlated with the music,
> >> since the sampling frequency is a single frequency and
> >> is fixed. The key to good subjective performance during
> >> a truncation is breaking the correlation of the
> >> truncation error with the music. For example, adding
> >> another high frequency, low level tone can break this
> >> correlation in an audibly effective, but less
> >> theoretically satisfying way.
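A small numeric illustration of that "breaking the correlation" point, in
Python with numpy. The 100 Hz tone at roughly 20 LSB is made up for the
example; nothing here comes from the posters' own tests.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 20.5 * np.sin(2 * np.pi * 100 * t)        # signal expressed in LSBs

def quant_error(x, dither=False):
    d = (np.random.rand(x.size) - np.random.rand(x.size)) if dither else 0.0
    return np.round(x + d) - x                # total error after rounding

period = fs // 100                            # 100 Hz = exactly 441 samples
e_plain = quant_error(x)
e_dith  = quant_error(x, dither=True)

# Undithered, the error is a deterministic function of the signal, so it
# repeats exactly every signal period. With TPDF dither it does not.
print(np.corrcoef(e_plain[:-period], e_plain[period:])[0, 1])  # ~1.0
print(np.corrcoef(e_dith[:-period],  e_dith[period:])[0, 1])   # ~0.0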
> > Bob Ohlsson has said "self dithering is bs,"
>
> That's not the only technical mistake he's made. However, if you wish to
> deify him, be my guest! ;-)
It is the general consensus of Mastering Engineers, Arny.
Certainly enough well-respected professionals with a lot more
experience than you understand it to be so that you might consider
tempering the snide comments, or at least try to qualify or limit
them to the actual parameters of your own work.
>
> > dither may be noise but noise isn't dither.
>
> That's a cute phrase, and at its limits it's true. However, most real-world
> audio is done some distance from the limits. Furthermore, I've been very
> consistent about saying that everything should be dithered even though it may
> be futile. I've also said that the true meaning of "self dither" is that much
> of the time, dithering makes no audible difference at all.
>
> This may conflict with people of the audio high priest persuasion, but it
> simply is how things are out here in the real world.
Maybe your Church and the school kids you record do not care if
you truncate their concerts, or whether you add dither or not. That's
fine; my 4-year-old watches Barney and Dave Hood.
Go over to Brad Blackwell's Mastering Forum and tell all of them
how self-deluded you think they are. Even Ethan Winer, who doesn't
think truncation distortion matters because of its low level, doesn't
deny that it objectively exists - he has measured it.
> > To prevent distortion artifacts, you need the right kind of noise, not
> > just any
> > noise.
>
> Agreed, but I never said that any noise will do. I said that many common
> noises will do, but don't bet your life on it. I said don't be surprised if
> dithering with the latest-greatest dither doesn't make any audible
> difference at all, because you may have stumbled into one of the multitude
> of cases where self-dither is working.
You didn't say that here, to me, at all. Here you made an overly
broad claim that dither isn't needed, because the noise inherent in
even the best recordings "self-dithers" a truncated file. I'm telling
you that if you normalize the fade tail of most truncated files, you
will hear all kinds of crap that you will not hear if you apply
dither to the fade.
Why do you refuse to do a simple normalizing test, Arny? Pride
goeth before a fall. You can certainly do a "blind test" on the
results.
>
> >>> If that is all that is
> >>> happening, why does dither make it sound smoother?
>
> >> The smoother sound is generally a product of
> >> self-suggestion, since almost all musical recordings
> >> have enough background or environmental noise, both
> >> acoustic and electronic, to randomize truncation error
> >> without added dither. Again, I'm not saying don't
> >> dither, but I am saying don't expect any audible
> >> benefits most of the time.
> >>
> > "Self Suggesting". This is also the condescending -
>
> Anybody who denies the effects of self-suggestion on their audio production
> work has got a problem, whether with education, practical experience or
> their egos.
>
Either all 24-bit files - recorded in the "real world," as you put
it - will self-dither when truncated, or they will not. Either there
is truncation distortion from doing that, or there is not. You made a
broad generalization that there is none, and you have "proved it."
Calling me self-deluded - while refusing a simple normalization
test of your claim - that's weak, Arny. This is why you have ended up
swapping spit with all your buddies over on Rec.Audio.Opinion.
> Show me someone who says that they never self-suggest themselves into making
> errors, and I'll show you someone who is doing it a lot of the time.
Sigh... Arny. You have issues.
> I understand the "you can't hear it" argument you are repeating.
> That's why I said do an A/B comparison, record with and without dither
> and normalize the tails. Then we can hear what is actually going
> on.
Why normalize the tails? Do you watch your TV set 2 inches from the
screen? Why conduct your listening tests under unrealistic conditions?
The classic experiment is to record something, keeping the peaks below
-60 dBFS, then amplify it by 60 dB. Sure, you'll hear noise and distortion.
But amplify a "normal" recording by 60 dB and you'll probably blow your
speakers.
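That classic experiment is easy to sketch numerically (Python with numpy
again; the test signal is an arbitrary stand-in for "something recorded"):

import numpy as np

fs = 44100
t = np.arange(fs * 2) / fs
program = np.sin(2 * np.pi * 440 * t) * np.sin(2 * np.pi * 0.5 * t)  # stand-in

quiet = program * (10 ** (-60.0 / 20.0))          # peaks kept below -60 dBFS
captured = np.round(quiet * 32767.0) / 32767.0    # 16-bit capture, no dither
boosted = captured * (10 ** (60.0 / 20.0))        # bring it back up 60 dB

# 'boosted' plays at a normal level but only ever exercised ~5-6 bits,
# so the quantization grunge is plainly audible. Applying the same +60 dB
# to a normally recorded, near full-scale file would clip hard and, as the
# post above says, probably take your speakers with it.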
> Audio standards and conventions are, of course, intended to keep us
> out of trouble in the widest variety of circumstances.
That's certainly a valid argument. As long as you have the tools, why not
use them?
> And mastering
> and two-track work is one thing, multi-track recording another; the
> effect is certainly cumulative.
But you don't normally truncate or bit-reduce individual tracks before
mixing. Most DAWs have gobs of bit-headroom and don't bring the mix
down to the desired word length until the final output. Back when DAW
mix engines worked at 16-bit resolution, it was essential to dither after
every processing application, but they don't work that way any more.
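As a sketch of that flow - tracks kept in floating point, one dithered
word-length reduction at the very end - something like the following, in
Python with numpy. The three sine "tracks" and the gains are invented for
the example, not a description of any particular DAW's engine.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
tracks = [0.3 * np.sin(2 * np.pi * f * t) for f in (110.0, 220.0, 330.0)]
gains = (0.5, 0.7, 0.4)

# Everything up to here stays in float: per-track gain, summing, plugins.
# No per-track truncation, so nothing to dither yet.
mix = np.clip(sum(g * trk for g, trk in zip(gains, tracks)), -1.0, 1.0)

def render_16bit_tpdf(x):
    # The only word-length reduction in the chain, TPDF dithered.
    d = np.random.rand(x.size) - np.random.rand(x.size)    # +/-1 LSB
    return np.clip(np.round(x * 32767.0 + d), -32768, 32767).astype(np.int16)

final = render_16bit_tpdf(mix)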
> But umm - close enough for Government work, eh Mike? <g>
I retired from the Government nearly ten years ago. I can afford to get
closer if I think it's justified. <g>