
Note to the Idiot


George M. Middius

Dec 24, 2003, 5:03:54 PM

I just read Sanders' latest salvo at you in which he claimed that at a
social dinner, you launched into a rant about Stereophile not doing
blind testing of equipment.

If that's true -- and I accept it at face value based on the little I
know of you -- I am absolutely certain that any time I spent in your
company would be wasted and a total bore.

ScottW

Dec 24, 2003, 6:08:59 PM

"George M. Middius" <Spam-...@resistance.org> wrote in message
news:073kuv0p4524k8th4...@4ax.com...

If you accept anything Sanders says at face value you're much stupider
than I thought.

But explain this contradiction in Stereophile.

They provide eloquent subjective appraisals of equipment including
lots of words on the "sound" of the equipment.

They also provide detailed test measurements.

Sometimes the two don't fully concur with one another. Why?

ScottW


George M. Middius

Dec 24, 2003, 6:14:05 PM

Yappity-yappity-yap.

> > I just read Sanders' latest salvo at you in which he claimed that at a
> > social dinner, you launched into a rant about Stereophile not doing
> > blind testing of equipment.
> >
> > If that's true -- and I accept it at face value based on the little I
> > know of you -- I am absolutely certain that any time I spent in your
> > company would be wasted and a total bore.
>
> If you accept anything Sanders says at face value you're much stupider
> than I thought.

You don't really "think", anyway, so that's not much of an insult.

> But explain this contradiction in Stereophile.

The weight of evidence tilts the scale toward Sanders' version.

ScottW

Dec 24, 2003, 6:21:24 PM

"George M. Middius" <Spam-...@resistance.org> wrote in message
news:8c7kuvslmefgeokrs...@4ax.com...

He spent at least 10x the time on his "joke" as I did on
stating my view on Stereophile reviews.
You can let him spin that into a rant if it suits your
purpose. Truth does not often suit your purpose.

ScottW


Sockpuppet Yustabe

Dec 24, 2003, 9:09:46 PM

"ScottW" <Scot...@hotmail.com> wrote in message
news:p0pGb.37791$m83.36994@fed1read01...

The measurements do not adequately describe the perceptions of the sound of
the music



John Atkinson

Dec 24, 2003, 10:18:18 PM
"ScottW" <Scot...@hotmail.com> wrote in message
news:<p0pGb.37791$m83.36994@fed1read01>...
> explain this contradiction in Stereophile.
>
> They provide eloquent subjective appraisals of equipment including
> lots of words on the "sound" of the equipment.
>
> They also provide detailed test measurements.
>
> Sometimes the two don't fully concur with one another. Why?

Hi Scott, I have written at length about this occasional lack of
correlation in the magazine, most recently in the current
(January) issue. I don't see it as an indictment of my policy, merely a
byproduct of my trying to be open about the subject with my readers and
of giving them as much information about a product as I can.

Happy holidays.

John Atkinson
Editor, Stereophile

ScottW

Dec 24, 2003, 10:18:27 PM

"Sockpuppet Yustabe" <idk...@comcast.net> wrote in message
news:3fea46e7$1...@127.0.0.1...

>
> "ScottW" <Scot...@hotmail.com> wrote in message
> news:p0pGb.37791$m83.36994@fed1read01...
> >
> > "George M. Middius" <Spam-...@resistance.org> wrote in message
> > news:073kuv0p4524k8th4...@4ax.com...
> > >
> > >
> > > I just read Sanders' latest salvo at you in which he claimed that at a
> > > social dinner, you launched into a rant about Stereophile not doing
> > > blind testing of equipment.
> > >
> > > If that's true -- and I accept it at face value based on the little I
> > > know of you -- I am absolutely certain that any time I spent in your
> > > company would be wasted and a total bore.
> >
> > If you accept anything Sanders says at face value you're much stupider
> > than I thought.
> >
> > But explain this contradiction in Stereophile.
> >
> > They provide eloquent subjective appraisals of equipment including
> > lots of words on the "sound" of the equipment.
> >
> > They also provide detailed test measurements.
> >
> > Sometimes the two don't fully concur with one another. Why?
> >
> > ScottW
> >
> >
>
> The measurements do not adequately describe the perceptions of the sound of
> the music

Possibly, and occasionally the perceptions only exist in the ear of the
beholder.

ScottW


George M. Middius

Dec 24, 2003, 10:59:14 PM

The moribund M&M "life"style gets an infusion of doggie breath.

> > The measurements do not adequately describe the perceptions of the sound of
> > the music

> Possibly, and occasionally the perceptions only exist in the ear of the
> beholder.

Nobody's buying your antihuman propaganda, little 'borg. Go suck a
bone.

This post reformatted by the Resistance,
laboring tirelessly to de-Kroogerize Usenet.

Sockpuppet Yustabe

Dec 24, 2003, 11:19:56 PM

"ScottW" <Scot...@hotmail.com> wrote in message
news:hGsGb.37833$m83.16466@fed1read01...

Then you must worship at the feet of the Gods of Accuracy
and listen to music that 'tests' perfectly, no matter whether
it is perceived to sound good, or not.

I will listen to what I perceive as sounding good.

The interesting point is whether or not one would
change one's perception of the sound of the music,
after learning how accurate or not the equipment tests.
A reverse expectation effect.

George M. Middius

Dec 24, 2003, 11:36:44 PM

Sockpuppet Yustabe said:

> The interesting point is whether or not one would
> change one's perception of the sound of the music,
> after learning how accurate or not the equipment tests.


If you ask the Krooborg, it will tell you that music is "irrelevant"
for evaluating audio equipment.


S888Wheel

Dec 25, 2003, 12:09:52 AM
>
>But explain this contradiction in Stereophile.
>
>They provide eloquent subjective appraisals of equipment including
>lots of words on the "sound" of the equipment.
>
>They also provide detailed test measurements.
>
>Sometimes the two don't fully concur with one another. Why?
>
>ScottW

Interesting question. Could it be that in some cases the measured performance
doesn't really say much about the subjective performance? Maybe in some cases
there were other influences including component synergy. Maybe in some cases
the reviewer was just off the mark.

George M. Middius

Dec 25, 2003, 12:28:08 AM

S888Wheel said:

> >Sometimes the two don't fully concur[sic] with one another. Why?



> Interesting question. Could it be that in some cases the measured performance
> doesn't really say much about the subjective peformance? Maybe in some cases
> there were other influences including component synergy. Maybe in some cases
> the reviewer was just off the mark.


Maybe measurements are meaningless for consumers.


ScottW

Dec 25, 2003, 12:37:57 AM

"Sockpuppet Yustabe" <idk...@comcast.net> wrote in message
news:3fea6...@127.0.0.1...

>
> > > The measurements do not adequately describe the perceptions of the sound of
> > > the music
> >
> > Possibly, and occasionally the perceptions only exist in the ear of the
> > beholder.
> >
>
> Then you must worship at the feet of the Gods of Accuracy
> and listen to music that 'tests' perfectly, no matter whether
> it is perceived to sound good, or not.

Don't go and stick words in my mouth. You didn't hear
me profess the need for absolute accuracy or even realism.

What I am referring to are the reviews where different units
are compared and perceptions of differences in sonic
performances are claimed which can't
be validated through differences in measured performance.
Accuracy or lack thereof is irrelevant. I would like to see
these subjective perceptions of difference validated through
DBTs. I don't think that is too much to ask of the
professionals performing these reviews.

>
> I will listen to what I perceive as sounding good.

As do I. I am not talking about listening. I am talking
about reading, actually paying for a professional's opinion
on the sonic characteristics of equipment.


>
> The interesting point is whether or not one would
> change one's perception of the sound of the music,
> after learning how accurate or not the equipment tests.
> A reverse expectation effect.

I have heard systems which are supposedly far more accurate
than mine which weren't as pleasing to me. I do realize that
we get accustomed to things. I still enjoy my old Large Advents.
Every time I play Selling England I long for those speakers just
because of the unique way they'd nearly explode on that low
organ note on Firth of Fifth. Nothing accurate about it, but I still like it.

BTW, Merry Christmas. I hope you're recovering from your flood.

ScottW


JBorg

Dec 25, 2003, 12:45:29 AM
> ScottW wrote:


> ...

> If you accept anything Sanders says at face value you're much stupider
> than I thought.
>
> But explain this contradiction in Stereophile.
>
> They provide eloquent subjective appraisals of equipment including
> lots of words on the "sound" of the equipment.
>
> They also provide detailed test measurements.
>
>
>
>
> Sometimes the two don't fully concur with one another. Why?

Why? It's because you're attempting to compare and then collate
the results from two incongruent sources.


Merry Christmas!


> ScottW

S888Wheel

Dec 25, 2003, 12:47:48 AM
>
>What I am referring to are the reviews where different units
>are compared and perceptions of differences in sonic
>performances are claimed which can't
>be validated through differences in measured performance.
>Accuracy or lack thereof is irrelevant. I would like to see
>these subjective perceptions of difference validated through
>DBTs. I don't think that is too much to ask of the
>professionals performing these reviews.

Actually I think it might be too much to ask. I don't know that I would call
all the reviewers for Stereophile professionals, in that most of them are not
making a living reviewing equipment and it really is a hobby for them. Asking
such people to do worthwhile DBTs is asking a lot IMO.

S888Wheel

Dec 25, 2003, 12:49:22 AM
>
>
>Maybe measurements are meaningless for consumers.
>

I think some are and some are not.

George M. Middius

Dec 25, 2003, 12:52:44 AM

S888Wheel said:

> >Maybe measurements are meaningless for consumers.

> I think some are and some are not.

You can have the ones allocated for me. My Xmas present to you.


ScottW

Dec 25, 2003, 12:54:51 AM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031225000952...@mb-m13.aol.com...

> >
> >But explain this contradiction in Stereophile.
> >
> >They provide eloquent subjective appraisals of equipment including
> >lots of words on the "sound" of the equipment.
> >
> >They also provide detailed test measurements.
> >
> >Sometimes the two don't fully concur with one another. Why?
> >
> >ScottW
>
> Interesting question. Could it be that in some cases the measured performance
> doesn't really say much about the subjective performance?

There is almost infinite depth of detail one can explore in
measurements. Occasionally my work is to conduct a
detailed performance evaluation of a cellular data modem
in harsh environments.
I can almost guarantee you I can find a deficiency if you let
me test long enough. The last one dropped 10 dB in receive
sensitivity only after a channel handoff at cold temps.
Dumb luck we found it.

> Maybe in some cases
> there were other influences including component synergy.

Still, that should be measurable.

>Maybe in some cases
> the reviewer was just off the mark.

I think the easiest way to know is the DBT.
If the reviewer is not off the mark, then I'd like
to see Atkinson embark on figuring out which
measurement needs to be added to his repertoire
to show the delta.

ScottW


ScottW

Dec 25, 2003, 1:17:51 AM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031225004748...@mb-m13.aol.com...

I don't see the big deal. Let's have Arny create a PC-controlled
switch box which stores results over the net in a secure server.
All the reviewer has to do is hook it up and make his
selections. Results tallied and bingo.

Performing the DBTs would be a snap if Atkinson set 'em up
with the tools to do it. The fact that they don't even create
the tools to do it is telling to me.

ScottW
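[The switch-box scheme ScottW proposes above can be sketched in a few lines. This is a minimal simulation of the idea, not a description of any real device; the function names and the 16-trial session length are illustrative assumptions.]

```python
import random

def run_blind_session(num_trials, choose):
    """Simulate one blind comparison session on the hypothetical switch box.

    On each trial the box secretly routes the signal through component
    'A' or 'B'; `choose` is the listener's guess function and never sees
    the hidden routing.  Returns the number of correct identifications,
    which the box would tally and upload.
    """
    correct = 0
    for _ in range(num_trials):
        hidden = random.choice("AB")  # the box's secret routing
        if choose() == hidden:        # listener answers 'A' or 'B'
            correct += 1
    return correct

random.seed(42)
# A listener who hears no difference is reduced to coin-flipping:
score = run_blind_session(16, lambda: random.choice("AB"))
```

[Scoring near 8 of 16 is what chance predicts; sessions scoring consistently above that would be the "validated through DBTs" evidence asked for earlier in the thread.]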


Arny Krueger

Dec 25, 2003, 7:49:29 AM
"ScottW" <Scot...@hotmail.com> wrote in message
news:p0pGb.37791$m83.36994@fed1read01

> But explain this contradiction in Stereophile.

> They provide eloquent subjective appraisals of equipment including
> lots of words on the "sound" of the equipment.

> They also provide detailed test measurements.

> Sometimes the two don't fully concur with one another. Why?

First off, Stereophile doesn't always do appropriate kinds of listening
tests. Their dogmatic adherence to sighted, level-matched,
single-presentation listening techniques minimizes real listener
sensitivity and maximizes the possibility of imaginary results. The only
thing they do right is the level-matching, and I suspect that their reviewers
don't always adhere to that.

Stereophile goes out of its way to avoid time-synchronization and formal
bias controls, despite all the evidence that these are critical if
sensitive, reliable results are desired. I've concluded that Stereophile
does not want to do listening tests that are sensitive and reliable, because
they are afraid of the results. Science can be very unpredictable, and the
results could easily go against years of grotesquely-flawed editorial
policies such as the RCL, and embarrass many advertisers.

So, any Stereophile comparison of ear versus gear can easily be garbage-in,
garbage-out on the ear side of the equation.

Secondly, Stereophile does some really weird measurements, such as their
undithered tests of digital gear. The AES says don't do it, but John
Atkinson appears to be above all authority but the voices that only he
hears. He does other tests, relating to jitter, for which there is no
independent confirmation of reliable relevance to audibility. I hear that
this is not because nobody has tried to find correlation. It's just that the
measurement methodology is flawed, or at best has no practical advantages
over simpler methodologies that correlate better with actual use.

Thirdly, there are whole classes of equipment, mostly relating to snake-oil
toys and vinyl, for which Stereophile doesn't perform any relevant technical
tests at all. No test gear is used, so there is no possibility of a
valid ear versus gear comparison.

Finally, Stereophile seems to bend over backward to avoid mentioning an
increasingly-common situation where the equipment is so accurate that it has
no sonic character at all, or very little sonic character. In these cases
Stereophile's measurements are effectively meaningless when it comes to
describing sonic character, because there is precious little or no sonic
character to describe.


Arny Krueger

Dec 25, 2003, 7:55:25 AM
"JBorg" <JB...@eudoramail.com> wrote in message
news:ff0ab388.03122...@posting.google.com


> Why? It's because you're attempting to compare and then collate
> the results from two incongruent sources.

Here's another idiot who obviously doesn't know the difference between collate
and correlate. Probably due to a lifetime of dead-end clerical jobs.


Sockpuppet Yustabe

Dec 25, 2003, 9:36:05 AM

"ScottW" <Scot...@hotmail.com> wrote in message
news:2JuGb.38272$m83.24241@fed1read01...
>

>
> I have heard systems which are supposedly far more accurate
> than mine which weren't as pleasing to me. I do realize that
> we get accustomed to things. I still enjoy my old Large Advents.
> Everytime I play Selling England I long for those speakers just
> because of the unique way they'd nearly explode on that low
> organ note on Firth of Fifth. Nothing accurate about it, but I still like
> it.
>

If you are ever in Maryland, you will be able to hear that on
stacked Advents

JBorg

Dec 25, 2003, 2:35:37 PM
> Arny Krueger wrote:
>> JBorg wrote in message

>
>
>
>
>> Why? It's because you're attempting to compare and then collate
>> the results from two incongruent sources.
>
>
>
> Here's another idiot who obviously doesn't know the difference between
> collate and correlate. Probably due to a lifetime of dead-end clerical
> jobs.


Shooooooo... not you. Go awayyy.


To correlate is to bring into causal, complementary, parallel, or
reciprocal relation. That is by way of saying-- to bring the
reviewer's perception into causal relation with the detailed test
measurements.

To collate is to examine and compare carefully in order to note
points of disagreement. That is, to establish and to verify the
point of differences between the reviewer's perception against the
results of the detailed test measurements. Here lies the original
poster's curiosity.

To wit: The eloquent subjective appraisals of the reviewers do
not concur with test measurements.

Scott Gardner

Dec 25, 2003, 2:56:03 PM
On Thu, 25 Dec 2003 07:49:29 -0500, "Arny Krueger" <ar...@hotpop.com>
wrote:
<snip>

>Finally, Stereophile seems to bend over backward to avoid mentioning an
>increasingly-common situation where the equipment is so accurate that it has
>no sonic character at all, or very little sonic character. In these cases
>Stereophile's measurements are effectively meaningless when it comes to
>describing sonic character, because there is precious little or no sonic
>character to describe.
>
>
Along these lines, who was it back in the sixties that first said "All
sonically-accurate equipment must, by definition, sound alike"? (I'm
paraphrasing, but that's the gist of the statement.)

Scott Gardner

Lionel

Dec 25, 2003, 3:32:10 PM
George M. Middius wrote:

Asslicker!

Arny Krueger

Dec 25, 2003, 4:58:23 PM
"Scott Gardner" <gardn...@cox.net> wrote in message
news:3feb4063...@news.east.cox.net

Sounds like the sort of thing that the late Julian Hirsch would say. I
don't know if he said this in the 60s or 70s, but it was about then that at
least a modest amount of sonically-accurate or nearly-sonically-accurate
equipment started showing up on the market.


Scott Gardner

Dec 25, 2003, 5:08:01 PM
On Thu, 25 Dec 2003 16:58:23 -0500, "Arny Krueger" <ar...@hotpop.com>
wrote:

>"Scott Gardner" <gardn...@cox.net> wrote in message
>news:3feb4063...@news.east.cox.net
>> On Thu, 25 Dec 2003 07:49:29 -0500, "Arny Krueger" <ar...@hotpop.com>
>> wrote:
>> <snip>
>>> Finally, Stereophile seems to bend over backward to avoid mentioning
>>> an increasingly-common situation where the equipment is so accurate
>>> that it has no sonic character at all, or very little sonic
>>> character. In these cases Stereophile's measurements are effectively
>>> meaningless when it comes to describing sonic character, because
>>> there is precious little or no sonic character to describe.
>>>
>>>
>> Along these lines, who was it back in the sixties that first said "All
>> sonically-accurate equipment must, by definition, sound alike"? (I'm
>> paraphrasing, but that's the gist of the statement.)
>
>Sounds like the sort of thing that the late Julian Hirsch would say. I
>don't know if he said this in the 60s or 70s, but it was about then that at
>least a modest amount of sonically-accurate or nearly-sonically-accurate
>equipment started showing up on the market.

I came across the quote when I was reading about Richard
Clark's "Amplifier Challenge". The statement seems pretty obvious to
me, but the author of the article I was reading implied that it was a
pretty ground-breaking assertion at the time it was originally made.
The idea that audible differences between two high-end pieces
of equipment are proof that one (or both) of them is noticeably
inaccurate is a powerful statement, and one that doesn't seem to get
much mention in the literature these days.

Scott Gardner


Arny Krueger

Dec 25, 2003, 5:26:13 PM
"Scott Gardner" <gardn...@cox.net> wrote in message
news:3feb5e7a...@news.east.cox.net

Having been a reader of Stereo Review when Hirsch first started saying
things like this, I would be prone to agree with Richard Clark. BTW, I have
quite a bit of respect for Richard Clark.

> The idea that audible differences between two high-end pieces
> of equipment is proof that one (or both) of them is noticeably
> inaccurate is a powerful statement, and one that doesn't seem to get
> much mention in the literature these days.

As you are no doubt aware, I touched on that possibility with the
following sentence from my earlier post:

"Science can be very unpredictable and the results could easily go against
years of grotesquely-flawed editorial policies such as the RCL, and
embarrass many advertisers."

Stereophile's basic editorial policy is clearly that everything sounds
different, with heavy emphasis on the word everything. In fact, some things
sound different, and some things don't.


S888Wheel

Dec 25, 2003, 7:17:37 PM
<<
I don't see the big deal. Lets have Arny create a PC controlled
switch box which stores results over the net in a secure server.
All the reviewer has to do is hook it up and make his
selections. Results tallied and bingo.
>>

I don't see Arny working with Stereophile.


<<
Performing the DBTs would be a snap if Atkinson set 'em up
with the tools to do it. >>


I am not so convinced it is a snap to do them well. I think if Stereophile were
to do something like this it would be wise for them to consult someone like JJ
who conducted such tests for a living. Would you suggest that such DBTs be
limited to comparisons of cables, amps, and preamps? I think DBTs with speakers
and source components are quite a bit more difficult. Would you limit such
tests to verification of actual audible differences? Personally, I like blind
comparisons for preferences. They are more difficult than sighted comparisons
for obvious reasons.


<< The fact that they don't even create
the tools to do it is telling to me.
>>


How so?

George M. Middius

Dec 25, 2003, 7:26:01 PM

S888Wheel said to The Idiot:

> << The fact that they don't even create
> the tools to do it is telling to me.

> How so?

Did you notice the title of this thread?


S888Wheel

Dec 25, 2003, 7:27:41 PM

I said


<<
There is almost infinite depth of detail one can explore in
measurements. Occasionally my work is to conduct a
detailed performance evaluation of a cellular data modem
in harsh environments.
I can almost guarantee you I can find a deficiency if you let
me test long enough. The last one dropped 10 dB in receive
sensitivity only after a channel handoff at cold temps.
Dumb luck we found it. >>


Maybe we are simply being too general in this discussion. You seem to think
there have been specific measurements that would suggest audible performance
that is in conflict with the subjective report of specific gear. If that is an
accurate assessment then it might be better to discuss such specific reports.


I said


<<
> Maybe in some cases
> there were other influences including component synergy.
>>

Scott said


<<
Still, that should be measurable.
>>


But they have to be measured. Are you suggesting that maybe Stereophile is not
making measurements they should be making?

I said

<<
>Maybe in some cases
> the reviewer was just off the mark.
>>


Scott said

<<

I think the easiest way to know is the DBT.
If the reviewer is not off the mark, then I'd like
to see Atkinson embark on figuring out which
measurement needs to be added to his repertoire
to show the delta. >>


I am not against it but I think you are suggesting that Stereophile should
conduct some very challenging research to correlate subjective impressions with
measured performance.

George M. Middius

Dec 25, 2003, 7:37:30 PM

S888Wheel said to The Idiot:

> I think the easiest way to know is the DBT.


> If the reviewer is not off the mark, then I'd like
> to see Atkinson embark on figuring out which
> measurement needs to be added to his repertoire
> to show the delta. >>

> I am not against it but I think you are suggesting that Stereophile should
> conduct some very challenging research to correlate subjective impressions with
> measured performance.

From the 'borg viewpoint, no expense is too great, no undertaking too
complex, if there's the tiniest chance that the E.H.E.E. will be
"exposed" as the "scam operation" the 'borgs know it to be.


Lionel

Dec 25, 2003, 8:02:48 PM
Arny Krueger wrote:

> Stereophile's basic editorial policy is clearly that everything sounds
> different, with heavy emphasis on the word everything. In fact, some things
> sound different, and some things don't.
>

As soon as a manufacturer is an
"interesting-potential-advertisement-customer" his products start to
sound different... ;-)

IMO as soon as you have eliminated the poorly constructed electronics you
can focus 99.999% on speakers, their placement, and the listening-room
acoustics.
The expensive accessories are only 0.001% of the final result... but
good customers for magazine advertisers.

Arny Krueger

Dec 25, 2003, 8:20:22 PM
"S888Wheel" <s888...@aol.com> wrote in message
news:20031225191737...@mb-m21.aol.com

>
> I am not so convinced it is a snap to do them well. I think if
> Stereophile were to do something like this it would be wise for them
> to consult someone like JJ who conducted such tests for a living.

JJ was a free agent for a while after Lucent fired him, and before Microsoft
hired him. However, JJ seems to be too much of a closet golden ear to be as
aggressive and pragmatic as scientific objectivity demands. This allows him
to curry favor with the golden-ear press, which he actively did for a while.
Yet he talks the talk, maintaining a veneer of scientific respectability.
Hey, it's what he seems to need to be comfortable.

> Would you suggest that such DBTs be limited to comparisons of cables
> amps and preamps?

It's not that tough to DBT just about any audio component if you are
pragmatic enough. JJ's incessant public mindless and evidenceless criticism
of PCABX convinced me that he's simply not pragmatic enough to be worth much
trouble.

> I think DBT with speakers and source components are
> quite a bit more difficult.

Shows how little you know, sockpuppet wheel.

> Would you limit such tests to
> verification of actual audible differences? Personally, I like blind


> comparisons for preferences. They are more difficult than sighted
> comparisons for obvious reasons.

Preference comparisons make no sense if there are no audible differences.
There are two major DBT protocols:

ABX for sensitive detection of differences.

ABC/hr for determining degree of impairment or degradation, which roughly
equates to preferences if you presume that audiophiles naturally prefer
undegraded sound or sound that is less degraded or less impaired. Since
there are so-called audiophiles who prefer the sound of tubes and vinyl,
which can be rife with audible degradations, it's not clear that one can
blithely presume that all audiophiles prefer sound that has less impairment.
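[The arithmetic behind scoring detection trials under either protocol is plain binomial probability: with no audible difference, each trial is a fair coin flip, so the chance of a high score by guessing alone is the upper binomial tail. A quick sketch; the 12-of-16 criterion is a common ABX convention, used here only as an example.]

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of at least `correct` right answers out of `trials`
    ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct happens by luck only about 3.8% of the time,
# which is why it is conventionally accepted as significant (p < 0.05):
p = abx_p_value(12, 16)  # ≈ 0.0384
```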

> << The fact that they don't even create the tools to do it is telling to me. >>

The tools for doing DBTs of just about *everything* are readily available,
presuming that the investigator is sufficiently pragmatic. Since we're
talking religious beliefs, we can't presume pragmatic investigators in every
case.

In the case of Stereophile, the use of DBTs would no doubt embarrass the
management and many of the advertisers. Therefore, Stereophile has maximal
incentive to be as non-pragmatic as possible. They simply behave
predictably.

Arny Krueger

Dec 25, 2003, 8:23:11 PM
"George M. Middius" <Spam-...@resistance.org> wrote in message
news:dk0nuv02qqgka1rdb...@4ax.com

> S888Wheel said to The Idiot:

>> I think the easiest way to know is the DBT.
>> If the reviewer is not off the mark, then I'd like
>> to see Anderson embark on figuring out which
>> measurement needs to be added to his repertoire
>> to show the delta. >>


>> I am not against it but I think you are suggesting that Stereophile

>> should conduct some very challenging research to correlate subjective
>> impressions with measured performance.

Tain't gonna happen. Properly-run DBTs would and have exposed Stereophile
for the scam we've long thought it was.

> From the 'borg viewpoint, no expense is too great, no undertaking too
> complex, if there's the tiniest chance that the E.H.E.E. will be
> "exposed" as the "scam operation" the 'borgs know it to be.

The exposure was a done deal a decade or more ago. It has taken a long time
for it to sink in on a few die-hards, but anybody who wants to know what's
really happening has the tools at their disposal for doing so.


Arny Krueger

Dec 25, 2003, 8:24:55 PM
"Lionel" <lionel....@piegeaspam.free.fr> wrote in message
news:bsg1aq$hkk$1...@news-reader1.wanadoo.fr

> Arny Krueger wrote:
>
>> Stereophile's basic editorial policy is clearly that everything
>> sounds different, with heavy emphasis on the word everything. In
>> fact, some things sound different, and some things don't.

> As soon as a manufacturer is an
> "interesting-potential-advertisement-customer" his products start to
> sound different... ;-)

Stereophile seems to be a little more farsighted than that.

> IMO as soon as you have eliminated the poor constructed electronics
> you can focus at 99.999% on speakers, their placement and the
> listening room acoustic.

Agreed.

> The expensive accessories are only 0.001 % of the final result... But
> good customers for magazines advertisers.

Agreed.


ScottW

Dec 25, 2003, 10:57:13 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031225191737...@mb-m21.aol.com...

> <<
> I don't see the big deal. Lets have Arny create a PC controlled
> switch box which stores results over the net in a secure server.
> All the reviewer has to do is hook it up and make his
> selections. Results tallied and bingo.
> >>
>
> I don't see Arny working with Stereophile.

The point is that creating a tool to minimize the labor
involved in a DBT is no great endeavour.


>
>
> <<
> Performing the DBTs would be a snap if Atkinson set 'em up
> with the tools to do it. >>
>
>
> I am not so convinced it is a snap to do them well.

I guess I need your definition of well.
No more difficult than listening to gear,
subjectively characterizing the sound and putting
that to paper.

> I think if Stereophile were
> to do something like this it would be wise for them to consult someone like JJ
> who conducted such tests for a living. Would you suggest that such DBTs be
> limited to comparisons of cables, amps, and preamps?

Those are certainly the easiest components.

Digital sources come next, with the challenge of syncing them
such that the subject isn't tipped off.

> I think DBTs with speakers
> and source components are quite a bit more difficult.

Speakers are definitely out. It could be done but not without
significant difficulty.

> Would you limit such
> tests to verification of actual audible differences?

Yes, if that fails then the preference test is really
kind of pointless.

> Personally, I like blind comparisons for preferences. They are more
> difficult than sighted comparisons for obvious reasons.
>
>
> << The fact that they don't even create
> the tools to do it is telling to me.
> >>
>
>
> How so?

I think they are afraid of the possible (or even probable)
outcome.

ScottW


ScottW

Dec 25, 2003, 11:21:17 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031225192741...@mb-m21.aol.com...

>
> I said
>
>
> <<
> There is almost infinite depth of detail one can explore
> measurements. Occasionally my work is to conduct a
> detailed performance evaluation of a cellular data modem
> in harsh environments.
> I can almost guarantee you I can find a deficiency if you let
> me test long enough. Last one dropped 10 db in receive
> sensitivity only after a channel handoff at cold temps.
> Dumb luck we found it. >>
>
>
> Maybe we are simply being too general in this discussion. You seem to
> think there have been specific measurements that would suggest audible
> performance that is in conflict with the subjective report of specific
> gear. If that is an accurate assessment then it might be better to
> discuss such specific reports.

I'll have to browse the archives. I'm sure a good example shouldn't be
hard to find. I'm also sure avid Stereophile readers could point out a few
examples with ease. I've been a casual reader at best.


>
>
> I said
>
>
> <<
> > Maybe in some cases
> > there were other influences including component synergy.
> >>
>
> Scott said
>
>
> <<
> Still, that should be measurable.
> >>
>
>
> But they have to be measured. Are you suggesting that maybe Stereophile
> is not making measurements they should be making?

No, not until the measurements say there isn't an audible
difference yet a DBT confirms there is.

>
> I said
>
> <<
> >Maybe in some cases
> > the reviewer was just off the mark.
> >>
>
>
> Scott said
>
> <<
>
> I think the easiest way to know is the DBT.
> If the reviewer is not off the mark, then I'd like
> to see Atkinson embark on figuring out which
> measurement needs to be added to his repertoire
> to show the delta. >>
>
>
> I am not against it but I think you are suggesting that Stereophile
> should conduct some very challenging research to correlate subjective
> impressions with measured performance.

Well, let's first remove the subjectivity and simply
confirm audible differences.

ScottW


S888Wheel

Dec 26, 2003, 12:54:45 AM
I said

<<
> I am not so convinced it is a snap to do them well. I think if
> Stereophile were to do something like this it would be wise for them
> to consult someone like JJ who conducted such tests for a living.
>>


Arny said

<<
JJ was a free agent for a while after Lucent fired him, and before Microsoft
hired him. However, JJ seems to be too much of a closet golden ear to be as
aggressive and pragmatic as scientific objectivity demands. >>


That's a load of crap. Unlike you, he made his living at it.

Arny said

<< This allows him
to curry favor with the golden ear press which he actively did for a while. >>


Nonsense. It is his professional pedagree that gives him credibility.

Arny said

<< Yet he talks the talk, maintaining a veneer of scientific respectability. >>


No, he simply is respectable scientifically.

Arny said

<< Hey, it's what he seems to need to be comfortable. >>


No, it was what he needed to do his job all those years.

Arny said

<<
It's not that tough to DBT just about any audio component if you are
pragmatic enough. JJ's incessant public mindless and evidenceless criticism
of PCABX convinced me that he's simply not pragmatic enough to be worth much
trouble. >>


So said the novice about the pro.

I said

<<
> I think DBT with speakers and source components are
> quite a bit more difficult.
>>


Arny said

<<
Shows how little you know, sockpuppet wheel.
>>


Nonsense.


I said

<<
> Would you limit such tests to
> verification of actual audible differences? Personally, I like blind
> comparisons for preferences. They are more difficult than sighted
> comparisons for obvious reasons. >>


Arny said

<<

Preference comparisons make no sense if there are no audible differences.
There are two major DBT protocols: >>


No shit Sherlock. No one said otherwise.

Arny said

<<
ABC/hr for determining degree of impairment or degradation, which roughly
equates to preferences if you presume that audiophiles naturally prefer
undegraded sound or sound that is less degraded or less impaired. Since
there are so-called audiophiles who prefer the sound of tubes and vinyl
which can be rife with audible degradations, it's not clear that one can
blithely presume that all audiophiles prefer sound that has less impairment.
>>


One can do preference tests blind the same way they do them sighted by
comparing A to B and forming a preference only without knowing what is A and
what is B. One can form a preference regardless of your hangups and do it
without the effects of sighted bias.

Arny said

<<

The tools for doing DBTs of just about *everything* are readily available,
presuming that the investigator is sufficiently pragmatic. Since we're
talking religious beliefs, we can't presume pragmatic investigators in every
case. >>


We are not talking about religious beliefs here unless you insist on inserting
your religious beliefs. We were talking about the practice of subjective review
by a particular publication


Arny said

<<

In the case of Stereophile, the use of DBTs would no doubt embarrass the
management and many of the advertisers. Therefore, Stereophile has maximal
incentive to be as non-pragmatic as possible. They simply behave
predictably. >>


So says the novice who thinks he is objective. You wear your prejudices like a
badge.

S888Wheel

Dec 26, 2003, 1:09:51 AM
Scott said

<<
> Performing the DBTs would be a snap if Atkinson set 'em up
> with the tools to do it. >>
> >>


I said

<<
> I am not so convinced it is a snap to do them well. >>


Scott said

<<

I guess I need your definition of well.
No more difficult than listening to gear,
subjectively characterizing the sound and putting
that to paper. >>


"Well" would be within the bounds of rigor that would be scientifically
acceptable. I see no point in half-assing an attempt to bring greater
reliability to the process of subjective review. Let's just say Howard fell way
short in his endeavours and the results spoke to that fact.

I said

<<

> I think if Stereophile were to do something like this it would be wise
> for them to consult someone like JJ who conducted such tests for a
> living. Would you suggest that such DBTs be limited to comparisons of
> cables, amps, and preamps? >>


Scott said

<<
Those are certainly the easiest components.

Digital sources being next with a challenge to sync them
such that the subject isn't tipped off.
>>

I said


<<
> I think DBT with speakers
> and source components are quite a bit more difficult. >>


Scott said

<<

Speakers are definitely out. It could be done but not without
significant difficulty. >>


I did do some single blind comparisons. The dealer was very nice about it.

I said

<<
> Would you limit such
> tests to verification of actual audible differences? >>


Scott said

<<

Yes, if that fails then the preference test is really
kind of pointless.
>>


I don't think so. It has been shown that with components that are agreed to
sound different, sighted bias can still have an effect on preference.

Scott said

<<
>
> << The fact that they don't even create
> the tools to do it is telling to me.
> >> >>


I said

<<
>
> How so?
>>


Scott said

<<
I think they are afraid of the possible (or even probable)
outcome. >>


Maybe but I am skeptical of this. It didn't seem to hurt Stereo Review to take
the position that all amps, preamps and cables sounded the same. Stereophile
did take the Carver challenge. They weren't afraid of the outcome of that.

S888Wheel

Dec 26, 2003, 1:14:20 AM
<<
Well, let's first remove the subjectivity and simply
confirm audible differences.

ScottW >>


I don't have a problem with that. But if we are going to expect Stereophile to
hang their hats on the results, the protocols have to be up to valid scientific
standards IMO. I think this is where DBTs stop being a snap.
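Wheel's point about rigor has a concrete core: a run of trials only supports a claim of audibility if the number of correct identifications would be unlikely under pure guessing. A sketch of the standard one-sided binomial check (the 16-trial run and the 0.05 criterion are illustrative conventions, not anything Stereophile has published):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of at least `correct` hits in `trials` fair coin flips."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct comes out near p = 0.038, inside a conventional
# 0.05 criterion; 10 of 16 (p ~= 0.23) does not.
print(round(abx_p_value(12, 16), 3))
print(round(abx_p_value(10, 16), 3))
```

This is the part that stops being "a snap": deciding trial counts, criteria, and listener training before the data comes in, not after.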

Arny Krueger

Dec 26, 2003, 5:47:48 AM
"S888Wheel" <s888...@aol.com> wrote in message
news:20031226005445...@mb-m25.aol.com

>
> Arny said
>
> <<
> JJ was a free agent for a while after Lucent fired him, and before
> Microsoft hired him. However, JJ seems to be too much of a closet
> golden ear to be as aggressive and pragmatic as scientific
> objectivity demands. >>

> That's a load of crap.

Prove it.

>Unlike you, he made his living at it.

Not proof. The basic fallacy is that anybody who does something for money
is perfect.

> Arny said
>
> << This allows him
> to curry favor with the golden ear press which he actively did for a
> while. >>

> Nonsense. It is his professional pedagree that gives him credibility.

I'm quite sure that JJ has no "pedagree". Sockpuppet Wheel, why don't you
learn to spell at the 6th grade level and work up to the adult level from
there?

> Arny said
>
> << Yet he talks the talk, maintaining a veneer of scientific
> respectability. >>

> No, he simply is respectable scientifically.

In your mind all kinds of charlatans seem to be credible, and people who do
work to extend scientific objectivity are fools.

> Arny said
>
><< Hey, it's what he seems to need to be comfortable. >>


> No, it was what he needed to do his job all those years.

Didn't work in the long run, did it?

> Arny said
>
> <<
> It's not that tough to DBT just about any audio component if you are
> pragmatic enough. JJ's incessant public mindless and evidenceless
> criticism of PCABX convinced me that he's simply not pragmatic enough
> to be worth much trouble. >>

> So said the novice about the pro.

Not proof. The basic fallacy is that anybody who does something for money
is perfect.

> I said
>
> <<
>> I think DBT with speakers and source components are
>> quite a bit more difficult.
> >>
>
>
> Arny said
>
> <<
> Shows how little you know, sockpuppet wheel.
> >>
>
>
> Nonsense.
>
>
> I said
>
> <<
>> Would you limit such tests to
>> verification of actual audible differences? Personally, I like blind
>> comparisons for preferences. They are more difficult than sighted
>> comparisons for obvious reasons. >>
>
>
> Arny said
>
> <<
>
> Preference comparisons make no sense if there are no audible
> differences. There are two major DBT protocols: >>

> No shit Sherlock. No one said otherwise.

Defensive little turd, aren't you?

> Arny said
>
> <<
> ABC/hr for determining degree of impairment or degradation, which
> roughly equates to preferences if you presume that audiophiles
> naturally prefer undegraded sound or sound that is less degraded or
> less impaired. Since there are so-called audiophiles who prefer the
> sound of tubes and vinyl which can be rife with audible degradations,
> it's not clear that one can blithely presume that all audiophiles
> prefer sound that has less impairment. >>

> One can do preference tests blind the same way they do them sighted by
> comparing A to B and forming a preference only without knowing what
> is A and what is B. One can form a preference regardless of your
> hangups and do it without the effects of sighted bias.

I never said otherwise, did I? ABC/hr just happens to be a recognized,
standardized means for doing that.

> Arny said
>
> <<
>
> The tools for doing DBTs of just about *everything* are readily
> available, presuming that the investigator is sufficiently pragmatic.
> Since we're talking religious beliefs, we can't presume pragmatic
> investigators in every case. >>

> We are not talking about religious beliefs here unless you insist on
> inserting your religious beliefs.

You may be too naive to recognize religious beliefs about audio when you see
them, sockpuppet Yustabe.

>We were talking about the practice
> of subjective review by a particular publication

That publication seems to have a lengthy track record for forming unfounded
and therefore irrational beliefs in its readers' minds. These kinds of
beliefs are often called "religious". Since you can't spell worth a hill of
beans, and are too arrogant to use a proper spell-checker, I thought I'd try
to bring you up-to-date, sockpuppet wheel.

> Arny said
> <<

> In the case of Stereophile, the use of DBTs would no doubt embarrass
> the management and many of the advertisers. Therefore, Stereophile
> has maximal incentive to be as non-pragmatic as possible. They simply
> behave predictably. >>

> So says the novice who thinks he is objective.

Shows how little you understand. I favor bias controls BECAUSE I believe
that I am biased. If I thought that any listener including myself could be
perfectly objective I wouldn't favor the use of bias controls, now would I?

>You wear your prejudices like a badge.

I simply know a little something about myself and everybody else in the
world. We have a hard time behaving in perfectly objective ways. Every human
has biases. Therefore listening tests that are at all difficult or
controversial, need bias controls.


Arny Krueger

Dec 26, 2003, 5:53:11 AM
"ScottW" <Scot...@hotmail.com> wrote in message
news:DkOGb.41378$m83.22809@fed1read01

> "S888Wheel" <s888...@aol.com> wrote in message
> news:20031225191737...@mb-m21.aol.com...
>> <<
>> I don't see the big deal. Lets have Arny create a PC controlled
>> switch box which stores results over the net in a secure server.
>> All the reviewer has to do is hook it up and make his
>> selections. Results tallied and bingo.
>> >>
>>
>> I don't see Arny working with Stereophile.
>
> The point is that creating a tool that would
> minimize the labor involved in a DBT is
> no great endeavour.

It's been done.

>> <<
>> Performing the DBTs would be a snap if Atkinson set 'em up
>> with the tools to do it. >>

>> I am not so convinced it is a snap to do them well.

> I guess I need your definition of well.
> No more difficult than listening to gear,
> subjectively characterizing the sound and putting
> that to paper.

Having done dozens of DBTs, probably 100's by now, I think that doing DBTs
takes more work than the schlock procedures that the Stereophile reviewers
use.

>> I think if Stereophile were
>> to do something like this it would be wise for them to consult
>> someone like JJ who conducted such tests for a living. Would you

>> suggest that such DBTs be limited to comparisons of cables amps and
>> preamps?

> Those are certainly the easiest components.

Digital players are just as easy.

> Digital sources being next with a challenge to sync them
> such that the subject isn't tipped off.

The same requirements for synchronization apply to analog sources as well.

>> I think DBT with speakers
>> and source components are quite a bit more difficult.

> Speakers are definitely out. It could be done but not without
> significant difficulty.

The biggest problem is not with DBTs, but a factor that affects sighted
evaluations as well. Speakers are profoundly affected by their location in
the room, and two speakers can't occupy the same location at the same time.

>> Would you limit such
>> tests to verification of actual audible differences?

> Yes, if that fails then the preference test is really
> kind of pointless.

>> Personally, I like blind
>> comparisons for preferences. They are more difficult than sighted
>> comparisons for obvious reasons.
>
>> << The fact that they don't even create
>> the tools to do it is telling to me.
>> >>
>>
>>
>> How so?

> I think they are afraid of the possible (or even probable)
> outcome.

I'm quite sure of it. Atkinson has tried to slip hidden sources of bias into
his alleged DBTs. He seems to have this need to control the outcome of the
listening tests that are done for his ragazine.

Arny Krueger

Dec 26, 2003, 5:55:26 AM
"S888Wheel" <s888...@aol.com> wrote in message
news:20031226010951...@mb-m25.aol.com

>
> Scott said
>
> <<
> I think they are afraid of the possible (or even probable)
> outcome. >>
>
>
> Maybe but I am skeptical of this. It didn't seem to hurt Stereo
> Review to take the position that all amps, preamps and cables sounded
> the same.

Stereo Review works in a different, more pragmatic market than the high end
ragazines.

>Stereophile did take the Carver challenge.

Really? What Stereophile issue describes that?

> They weren't afraid of the outcome of that.

Awaiting details of this test.


Arny Krueger

Dec 26, 2003, 5:56:21 AM
"S888Wheel" <s888...@aol.com> wrote in message
news:20031226011420...@mb-m25.aol.com
> <<


> I don't have a problem with that. But if we are going to expect
> Stereophile to hang their hats on the results the protocols have to
> be up to valid scientific standards IMO. I think this is where DBTs
> stop being a snap.

Shows how little you know about the existing scientific standards for doing
blind tests, sockpuppet wheel.


John Atkinson

Dec 26, 2003, 9:00:47 AM
In message <U5KdnRwCRMT...@comcast.com>
Arny Krueger (ar...@hotpop.com) wrote:
> The only thing [Stereophile does] right is the level-matching and
> I suspect that their reviewers don't always adhere to that.

Amazing! I never suspected that when I perform listening tests you
are right there in the room observing me, Mr. Krueger. Nevertheless,
whatever you "suspect," Mr. Krueger, I match levels to within less
than 0.1dB whenever I directly compare components. See my review
of the Sony SCD-XA9000ES SACD player in the December issue for an
example (http://www.stereophile.com/digitalsourcereviews/1203sony).
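Level matching of the kind Atkinson describes is a simple measurement loop: capture the same reference tone through both units, compare RMS levels in dB, and trim until the difference sits inside the window. A hedged sketch of the arithmetic (the lists stand in for captured sample blocks; the function names are mine, not Stereophile's):

```python
import math

def rms_db(samples):
    """RMS level of a sample block, in dB re full scale 1.0."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def matched(block_a, block_b, window_db=0.1):
    """True if the two captures agree in level within the window."""
    return abs(rms_db(block_a) - rms_db(block_b)) <= window_db

# Two captures of the same 1 kHz tone, the second 0.05 dB hotter
a = [0.5 * math.sin(2 * math.pi * 1000 * n / 48000) for n in range(4800)]
b = [s * 10 ** (0.05 / 20) for s in a]
print(matched(a, b))  # inside the 0.1 dB window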

> Stereophile does some really weird measurements, such as their
> undithered tests of digital gear. The AES says don't do it, but
> John Atkinson appears to be above all authority but the voices
> that only he hears.

It is always gratifying to learn, rather late of course, that I had
bested Arny Krueger in a technical discussion. My evidence for
this statement is his habit of waiting a month, a year, or even more
after he has ducked out of a discussion before raising the subject
again on Usenet as though his arguments had prevailed. Just as he has
done here. (This subject was discussed on r.a.o in May 2002, with Real
Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
out the flaws in Mr. Krueger's argument.)

So let's examine what the Audio Engineering Society (of which I am
a long-term member and Mr. Krueger is not) says on the subject of
testing digital gear, in their standard AES17-1998 (revision of
AES17-1991):

Section 4.2.5.2: "For measurements where the stimulus is generated in
the digital domain, such as when testing Compact-Disc (CD) players,
the reproduce sections of record/replay devices, and digital-to-analog
converters, the test signals shall be dithered."

I imagine this is what Mr. Krueger means when he wrote "The AES says don't
do it." But unfortunately for Mr. Krueger, the very same AES standard
goes on to say in the very next section (4.2.5.3):

"The dither may be omitted in special cases for investigative purposes.
One example of when this is desirable is when viewing bit weights on an
oscilloscope with ramp signals. In these circumstances the dither signal
can obscure the bit variations being viewed."

As the first specific test I use an undithered signal for is indeed for
investigative purposes -- looking at how the error in a DAC's MSBs
compare to the LSB, in other words, the "bit weights" -- it looks as if
Mr. Krueger's "The AES says don't do it" is just plain wrong.

Mr. Krueger is also incorrect about the second undithered test signal
I use, which is to examine a DAC's or CD player's rejection of
word-clock jitter, to which he refers in his next paragraph:

> He does other tests, relating to jitter, for which there is no
> independent confirmation of reliable relevance to audibility. I hear
> that this is not because nobody has tried to find correlation. It's
> just that the measurement methodology is flawed, or at best has no
> practical advantages over simpler methodologies that correlate better
> with actual use.

And once again, Arny Krueger's lack of comprehension of why the latter
test -- the "J-Test," invented by the late Julian Dunn and implemented as
a commercially available piece of test equipment by Paul Miller -- needs
to use an undithered signal reveals that he still does not grasp the
significance of the J-Test or perhaps even the philosophy of measurement
in general. To perform a measurement to examine a specific aspect of
component behavior, you need to use a diagnostic signal. The J-Test
signal is diagnostic for the assessment of word-clock jitter because:

1) As both the components of the J-Test signal are exact integer
fractions of the sample frequency, there is _no_ quantization error.
Even without dither. Any spuriae that appear in the spectra of the
device under test's analog output are _not_ due to quantization.
Instead, they are _entirely_ due to the DUT's departure from
theoretically perfect behavior.

2) The J-Test signal has a specific sequence of 1s and 0s that
maximally stresses the DUT and this sequence has a low-enough frequency
that it will be below the DUT's jitter-cutoff frequency.

Adding dither to this signal will interfere with these characteristics,
rendering it no longer diagnostic in nature. As an example of a
_non-diagnostic_ test signal, see Arny Krueger's use of a dithered
11.025kHz tone in his website tests of soundcards at a 96kHz sample
rate. This meets none of the criteria I have just outlined.
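Point 1 is easy to verify numerically: because both components divide the sample rate exactly, every sample of such a signal already lands on an integer code, so omitting dither introduces no quantization error at all. A rough Python sketch of the construction described above (amplitudes are illustrative; this is not Dunn's exact published specification):

```python
A = 2 ** 14       # Fs/4 tone amplitude in 16-bit codes (illustrative)
PERIOD = 192      # period of the Fs/192 square wave, in samples

def jtest_like(n_samples):
    """Fs/4 tone (codes 0, +A, 0, -A) plus an LSB square wave at Fs/192."""
    tone = [0, A, 0, -A]
    out = []
    for n in range(n_samples):
        lsb = 1 if (n // (PERIOD // 2)) % 2 == 0 else -1
        out.append(tone[n % 4] + lsb)
    return out

sig = jtest_like(192 * 4)
# Every value is an exact integer code, so quantizing changes nothing:
# any spuriae at the DUT's analog output are the DUT's own.
print(sorted(set(sig)))
```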

> He does other tests, relating to jitter, for which there is no
> independent confirmation of reliable relevance to audibility.

One can argue about the audibility of jitter, but the J-Test's lack of
dither does not render it "really weird," merely consistent and
repeatable. These, of course, are desirable in a measurement technique.
And perhaps it is worth noting that, as I have pointed out before,
consistency is something lacking from Mr. Krueger's own published
measurements of digital components on his website, with different
measurement bandwidths, word lengths, and FFT sizes making comparisons
very difficult, if not impossible.

I have snipped the rest of Mr. Krueger's comments as they reveal merely
that he doesn't actually read the magazine he so loves to criticize. :-)

John Atkinson
Editor, Stereophile

S888Wheel

Dec 26, 2003, 11:57:16 AM
<<
> Arny said
>
> <<
> JJ was a free agent for a while after Lucent fired him, and before
> Microsoft hired him. However, JJ seems to be too much of a closet
> golden ear to be as aggressive and pragmatic as scientific
> objectivity demands. >>
>>


I said

<<
> That's a load of crap.
>>


Arny said

<<
Prove it.
>>


His career is proof. You are the one challenging his objectivity despite the
fact that his objectivity was an important factor in his ability to do his job
correctly. You made the attack on JJ it is up to you to prove it is true.

I said

<<
>Unlike you, he made his living at it.
>>


Arny said

<<
Not proof. The basic fallacy is that anybody who does something for money
is perfect.
>>


Yes it is proof. I made no such premise that pros are inherently perfect. My
premise is that pros are more likely to do their job better than hobbyists
would do the same job.

<<
> Arny said
>
> << This allows him
> to curry favor with the golden ear press which he actively did for a
> while. >>
>>


I said

<<

> Nonsense. It is his professional pedagree that gives him credibility.
>>


Arny said

<<
I'm quite sure that JJ has no "pedagree". Sockpuppet Wheel, >>


"Why note?"

Arny said

<< why don't you
learn to spell at the 6th grade level and work up to the adult level from
there? >>


Of course this is typical of you, Arny. Attack the spelling once you've been
beaten by the logic of a post. You are quite the "characture" and hypocrite.

<<

> Arny said
>
> << Yet he talks the talk, maintaining a veneer of scientific
> respectability. >>
>>


I said

S888Wheel

Dec 26, 2003, 12:25:25 PM
Arny said

<<
In your mind all kinds of charlatans seem to be credible, and people who do
work to extend scientific objectivity are fools.
>>


So Arny considers legitimate scientists and accomplished industry pros to be
charlatans. That figures. The creationists consider the body of scientists who
believe life evolved to be charlatans. You fit in quite well with the
creationists with your audio religion and facade of science. No wonder you
would attack a real scientist like this.

<<
> Arny said
>
><< Hey, it's what he seems to need to be comfortable. >>

>>


I said

<<
> No, it was what he needed to do his job all those years.
>>


Arny said

<<
Didn't work in the long run, did it?
>>


Compared to you? LOL

<<
> Arny said
>
> <<
> It's not that tough to DBT just about any audio component if you are
> pragmatic enough. JJ's incessant public mindless and evidenceless
> criticism of PCABX convinced me that he's simply not pragmatic enough
> to be worth much trouble. >>
>>


I said

<<
> So said the novice about the pro.
>>


Arny said

<<
Not proof. The basic fallacy is that anybody who does something for money
is perfect. >>


It's a matter of credibility. JJ has it and you don't.


<<
> I said
>
> <<
>> I think DBT with speakers and source components are
>> quite a bit more difficult.
> >>
> >>


<<
>
> Arny said
>
> <<
> Shows how little you know, sockpuppet wheel.
> >> >>


I said

<<
> Nonsense >>


No response? Did you figure out how stupid your comment was?

<<
> I said
>
> <<
>> Would you limit such tests to
>> verification of actual audible differences? Personally, I like blind
>> comparisons for preferences. They are more difficult than sighted
>> comparisons for obvious reasons. >>
> >>


<<
> Arny said
>
> <<
>
> Preference comparisons make no sense if there are no audible
> differences. There are two major DBT protocols: >>
>>


I said

<<
> No shit Sherlock. No one said otherwise.
>>


Arny said

<<
Defensive little turd, aren't you?
>>


One must be defensive when dealing with one who is so offensive.

<<
> Arny said
>
> <<
> ABC/hr for determining degree of impairment or degradation, which
> roughly equates to preferences if you presume that audiophiles
> naturally prefer undegraded sound or sound that is less degraded or
> less impaired. Since there are so-called audiophiles who prefer the
> sound of tubes and vinyl which can be rife with audible degradations,
> it's not clear that one can blithely presume that all audiophiles
> prefer sound that has less impairment. >>
>>


I said

<<
> One can do preference tests blind the same way they do them sighted by
> comparing A to B and forming a preference only without knowing what
> is A and what is B. One can form a preference regardless of your
> hangups and do it without the effects of sighted bias.
>>


Arny said

<<
I never said otherwise, did I? >>


Who is being defensive now?

Arny said

<< ABC/hr just happens to be a recognized,
standardized means for doing that.
>>


It is different, though, which is why I pointed this out. A/B comparisons do not
need a reference. ABC/hr presumes the reference is the ideal. One does not
always have access to the ideal when judging audio.
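The mechanical difference Wheel points to is small: a blind preference trial needs only concealed identities, not a designated reference against which impairment is graded. A minimal sketch (the device names and the random "listener" are stand-ins, not any published protocol):

```python
import random

def blind_preference(trials=10, seed=0):
    """Present two devices under anonymous labels each trial and map the
    vote back to the hidden identity; no reference signal is involved."""
    rng = random.Random(seed)
    votes = {"amp_a": 0, "amp_b": 0}
    for _ in range(trials):
        order = ["amp_a", "amp_b"]
        rng.shuffle(order)          # which device is presented as "1"
        pick = rng.choice("12")     # stand-in for the listener's preference
        votes[order[int(pick) - 1]] += 1
    return votes

print(blind_preference())
```

ABC/hr, by contrast, asks the listener to grade each unknown against a known reference, which only makes sense when an agreed ideal exists.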

<<
> Arny said
>
> <<
>
> The tools for doing DBTs of just about *everything* are readily
> available, presuming that the investigator is sufficiently pragmatic.
> Since we're talking religious beliefs, we can't presume pragmatic
> investigators in every case. >> >>


I said

<<

> We are not talking about religious beliefs here unless you insist on
> inserting your religious beliefs. >>


Arny said


<<
You may be too naive to recognize religious beliefs about audio when you see
them, sockpuppet Yustabe.
>>


And you may be too stupid to keep track of who you are talking to in a single
post. I recognize your religious beliefs about audio Arny. I'm one up on you
there. You clearly don't recognize your religious beliefs on audio for what
they are.

I said

<<
>We were talking about the practice
> of subjective review by a particular publication
>>


Arny said

<<
That publication seems to have a lengthy track record for forming unfounded
> and therefore irrational beliefs in its readers' minds. These kinds of
beliefs are often called "religious". Since you can't spell worth a hill of
beans, and are too arrogant to use a proper spell-checker, I thought I'd try
to bring you up-to-date, sockpuppet wheel. >>


"Religious" is the correct spelling fool. You remain quite the hypocrite for
making poor spelling an issue. You remain quite the "characture." You also
managed to fall on your face in your feeble attempt to actually get back onto
the thread subject. Good job.

<<
> Arny said
> <<

> In the case of Stereophile, the use of DBTs would no doubt embarrass
> the management and many of the advertisers. Therefore, Stereophile
> has maximal incentive to be as non-pragmatic as possible. They simply
> behave predictably. >>
>>


I said

<<
> So says the novice who thinks he is objective.
>>


Arny said

<<

Shows how little you understand. I favor bias controls BECAUSE I believe
that I am biased. If I thought that any listener including myself could be
perfectly objective I wouldn't favor the use of bias controls, now would I?
>>


Shows how lacking you are in self-awareness. You were the one attacking
legitimate researchers like JJ who not only believe in bias control but
actually made a living using it.

I said

<<
>You wear your prejudices like a badge.
>>


Arny said

<<
I simply know a little something about myself and everybody else in the
world. >>


If that were really true you would never stop vomiting.

Arny said

<< We have a hard time behaving in perfectly objective ways. Every human
has biases. Therefore listening tests that are at all difficult or
controversial, need bias controls.


>>


Yep. And it gets worse when people with an agenda hide behind a facade of
science to try to give their biases more credibility. I'll take honest biases
over your agenda any day.

S888Wheel

Dec 26, 2003, 12:28:58 PM
<<
Stereo Review works in a different, more pragmatic market than the high end
ragazines. >>


They reviewed some very expensive equipment including high-end tubed
electronics. Stereophile OTOH has reviewed some very inexpensive equipment.


I said

<<
>Stereophile did take the Carver challenge.
>>


Arny said

<<
Really? What Stereophile issue describes that? >>


I don't remember. You could always check their archives.

I said

<<

> They weren't afraid of the outcome of that. >>


Arny said

<<

Awaiting details of this test.

>>


Look 'em up yourself.

Arny Krueger

Dec 26, 2003, 12:33:33 PM
"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
news:113bd5e2.03122...@posting.google.com

> In message <U5KdnRwCRMT...@comcast.com>
> Arny Krueger (ar...@hotpop.com) wrote:
>> The only thing [Stereophile does] right is the level-matching and
>> I suspect that their reviewers don't always adhere to that.
>
> Amazing! I never suspected that when I perform listening tests you
> are right there in the room observing me, Mr. Krueger. Nevertheless,
> whatever you "suspect," Mr. Krueger, I match levels to within less
> than 0.1dB whenever I directly compare components. See my review
> of the Sony SCD-XA9000ES SACD player in the December issue for an
> example (http://www.stereophile.com/digitalsourcereviews/1203sony).

I guess that Atkinson wants us to believe that when one speaks of "all their
reviewers", one speaks only of him.

>> Stereophile does some really weird measurements, such as their
>> undithered tests of digital gear. The AES says don't do it, but
>> John Atkinson appears to be above all authority but the voices
>> that only he hears.

> It is always gratifying to learn, rather late of course, that I had
> bested Arny Krueger in a technical discussion.

I mention Atkinson's delusions, and he gifts us with another one - that he's
bested me in a technical discussion.

> My evidence for
> this statement is his habit of waiting a month, a year, or even more
> after he has ducked out of a discussion before raising the subject
> again on Usenet as though his arguments had prevailed. Just as he has
> done here. (This subject was discussed on r.a.o in May 2002, with Real
> Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
> out the flaws in Mr. Krueger's argument.)

So, as Atkinson's version of the story evolves, it wasn't him alone that
bested me, but the dynamic trio of Atkinson, Bamborough, and Zelniker.
Notice how the story is changing right before our very eyes! In fact
Bamborough and Zelniker use the same hit-and-run confuse-not-convince
"debating trade" tactics that Atkinson uses here.

> So let's examine what the Audio Engineering Society (of which I am
> a long-term member and Mr. Krueger is not) says on the subject of
> testing digital gear, in their standard AES17-1998 (revision of
> AES17-1991):

> Section 4.2.5.2: "For measurements where the stimulus is generated in
> the digital domain, such as when testing Compact-Disc (CD) players,
> the reproduce sections of record/replay devices, and digital-to-analog
> converters, the test signals shall be dithered.

> I imagine this is what Mr. Krueger means when wrote "The AES says
> don't do it." But unfortunately for Mr. Krueger, the very same AES
> standard goes on to say in the very next section (4.2.5.3):

> "The dither may be omitted in special cases for investigative
> purposes. One example of when this is desirable is when viewing bit
> weights on an oscilloscope with ramp signals. In these circumstances
> the dither signal can obscure the bit variations being viewed."
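The audible consequence the standard is guarding against is easy to sketch: truncating to a fixed word length without dither leaves an error correlated with the signal (harmonic distortion), while adding roughly 1 LSB of triangular (TPDF) dither first turns that error into benign, signal-independent noise. A toy illustration only, not the AES17 procedure; the function and parameter names are mine:

```python
import random

def quantize(sample: float, bits: int = 16, dither: bool = True) -> int:
    """Quantize a full-scale float (-1.0..1.0) to an integer sample,
    optionally adding TPDF dither (two uniform randoms summed, +/-1 LSB
    peak) before rounding -- the step the standard says to include."""
    scaled = sample * (2 ** (bits - 1) - 1)
    if dither:
        scaled += random.random() - random.random()  # triangular PDF
    return round(scaled)

# Undithered, a very low-level sine collapses to a few discrete levels
# (correlated error); dithered, the same levels vary randomly and the
# error averages out to noise.
```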

At this point Atkinson tries to confuse "investigation" with "testing
equipment performance for consumer publication reviews." Of course these are
two very different things, but in the spirit of his shifting claims in the
matter already demonstrated once above, let's see where this goes...

> As the first specific test I use an undithered signal for is indeed
> for investigative purposes -- looking at how the error in a DAC's MSBs
> compare to the LSB, in other words, the "bit weights" -- it looks as
> if Mr. Krueger's "The AES says don't do it" is just plain wrong.

The problem here is that again Atkinson has confused detailed investigations
into how individual subcomponents of chips in the player work (i.e.,
"investigation") with the business of characterizing how it will satisfy
consumers. Consumers don't care about whether one individual bit of the
approximately 65,000 levels supported by the CD format works; they want to
know how the device will sound. It's a simple matter to show that nobody,
not even John Atkinson, can hear a single one of those bits working or not
working. Yet he deems it appropriate to confuse consumers with this sort of
minutiae, perhaps so that they won't notice his egregiously flawed subjective
tests.


Notice that *none* of the above minutiae and fine detail addresses my opening
critical comment:

"He does other tests, relating to jitter, for which there is no independent
confirmation of reliable relevance to audibility".

Now did you see anything in Atkinson's two numbered paragraphs above and the
subsequent unnumbered paragraph that addresses my comment about listening
tests and independent confirmation of audibility? No, you didn't!

What you saw is the same old Atkinson song-and-dance, which reminds many
knowledgeable people of that old carny's advice: "If you can't convince
them, confuse them!"

>> He does other tests, relating to jitter, for which there is no
>> independent confirmation of reliable relevance to audibility.

> One can argue about the audibility of jitter, but the J-Test's lack of
> dither does not render it "really weird," merely consistent and
> repeatable.

A repeatable test with no real-world confirmation (i.e., audibility) is just
a reliable producer of meaningless garbage. Is a reliable producer of
irrelevant garbage numbers better or worse than an unreliable producer of
irrelevant garbage numbers?

> These, of course, are desirable in a measurement
> technique. And perhaps it worth noting that, as I have pointed out
> before, consistency is something lacking from Mr. Krueger's own
> published measurements of digital components on his website, with
> different measurement bandwidths, word lengths, and FFT sizes making
> comparisons very difficult, if not impossible.

This is just more of Atkinson's "confuse 'em if you can't convince 'em"
schtick. My web sites test a wide range of equipment, in virtually every
performance category from the sublime to the ridiculous. Of course I pick
testing parameters that are appropriate to the general quality level and
characteristics of the equipment I test. I've also evolved my testing
techniques as I learned more about how audio equipment works.

BTW, note that Atkinson complains that I use different measurement
bandwidths, word lengths and FFT sizes. Atkinson doesn't test the wide range
of equipment I test, and he doesn't test it as thoroughly. For example,
compare his test of the Card Deluxe to mine. The relevant URLs are

http://www.pcavtech.com/soundcards/CardDDeluxe/index.htm


and

http://www.stereophile.com/digitalsourcereviews/280/index4.html


Compare Atkinson's Figure 2 to my

http://www.pcavtech.com/soundcards/CardDDeluxe/SNR_2496-a_FS.gif

http://www.pcavtech.com/soundcards/CardDDeluxe/SNR_2444-a_FS.gif

http://www.pcavtech.com/soundcards/CardDDeluxe/SNR_1644-a_FS.gif


First off, you will notice that Atkinson shows the results of his 1 kHz
performance test under just one operational mode, while I provided data
about three different and highly relevant operational modes.

Note that my plots document measurement bandwidths, word lengths and FFT
sizes, while Atkinson's figure 2 and supporting text don't document this
very information that Atkinson complained about. So, he's complaining about
information that I publish with every test as a matter of course, while he
doesn't put the same information into his own reports as they are published
in his magazine and on his web site.

Note that my plots provide high-resolution information down to below 20 Hz
while Atkinson's plot squishes all data below 1 kHz into a tiny strip along
the left edge of the plot where it is difficult or impossible to analyze. My
plots allow people to determine if there are low-frequency spurious
responses, hum, or significant amounts of 1/f noise. Atkinson's don't.
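His complaint about documenting bandwidth, word length and FFT size is not pedantry: the same noise, analyzed with a longer FFT, reads as a lower per-bin floor. A sketch of the effect (all parameters here are assumed for illustration; this is neither site's actual procedure):

```python
import numpy as np

RATE = 48000
rng = np.random.default_rng(0)

def noise_floor_db(fft_size: int, noise_rms: float = 1e-4) -> float:
    """Median per-bin level (dB re full scale) of a 1 kHz tone plus noise,
    Hann-windowed; dividing by fft_size/4 normalizes a full-scale sine
    to roughly 0 dBFS."""
    n = np.arange(fft_size)
    x = np.sin(2 * np.pi * 1000 * n / RATE) + rng.normal(0, noise_rms, fft_size)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(fft_size))) / (fft_size / 4)
    return 20 * np.log10(np.median(spectrum))

# A 16x longer FFT spreads the same noise power over 16x more bins,
# dropping the apparent per-bin floor by about 10*log10(16) ~= 12 dB --
# which is why a plot must state its analysis parameters to be comparable.
print(round(noise_floor_db(2048) - noise_floor_db(32768), 1))
```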

> I have snipped the rest of Mr. Krueger's comments as they reveal
> merely that he doesn't actually read the magazine he so loves to
> criticize. :-)

It appears that Atkinson is up to tricks as usual. If you analyze his
technical critique of my published tests you find that he's basically
faulting me for testing equipment in more operational modes than he does,
and providing more documentation about test conditions than he provides.

Let me add that I am fully aware of the effects of testing equipment with
different measurement bandwidths, word lengths and FFT sizes. I take steps
to ensure that any variations in test conditions don't adversely affect my
summary evaluation of equipment performance.

Furthermore, while Atkinson asks you to trust his poorly contrived listening
tests, I provide the means for people to audition the Card Deluxe with their
own speakers and ears at PCAVTech's sister site, www.pcabx.com.

S888Wheel

Dec 26, 2003, 12:32:17 PM
I said

<<

> I don't have a problem with that. But if we are going to expect
> Stereophile to hang their hats on the results the protocols have to
> be up to valid scientific standards IMO. I think this is where DBTs
> stop being a snap.
>>


Arny said

<<
Shows how little you know about the existing scientific standards for doing
blind tests, sockpuppet wheel >>


Given your comment about it in a previous post:

"Having done dozens of DBTs, probably 100's by now, I think that doing DBTs
takes more work than the schlock procedures that the Stereophile reviewers
use."

You are clearly talking out both sides of your mouth. Business as usual.

Arny Krueger

Dec 26, 2003, 12:36:05 PM
"S888Wheel" <s888...@aol.com> wrote in message
news:20031226122525...@mb-m27.aol.com

> Arny said
>
> <<
> In your mind all kinds of charlatans seem to be credible, and people
> who do work to extend scientific objectivity are fools.
> >>
>
>
> So Arny considers legitimate scientists and accomplished industry
> pros to be charlatans.

Obviously, if they are legitimate scientists then they aren't charlatans and
if they are charlatans, they aren't legitimate scientists, so on the face of
it, this claim is inherently false.

The rest of this post is full of similar non sequiturs, not to mention a lot
of bad writing and misformatted text that would be more trouble to respond
to than it's worth.


S888Wheel

Dec 26, 2003, 12:49:28 PM
<< > Arny said
>
> <<
> In your mind all kinds of charlatans seem to be credible, and people
> who do work to extend scientific objectivity are fools.
> >>
> >>


I said

<<
>
> So Arny considers legitimate scientists and accomplished industry
> pros to be charlatans.
>>


Arny said

<<
Obviously, if they are legitimate scientists then they aren't charlatans and
if they are charlatans, they aren't legitimate scientists, so on the face of
it, this claim is inherently false.
>>


It seems false until we start naming names. Quiz time: which of the following
are and are not legitimate scientists? 1. Arny Krueger and 2. JJ?

Arny said

<<
The rest of this post is full of similar non sequiturs, >>


Similar non sequiturs? How can you claim similar non sequiturs when you have
failed to establish the first non sequitur? Obviously you are burning a straw
man. You lost the argument and now you are running away while tossing out
jargon as excuses.

Arny said

<< not to mention a lot
of bad writing and misformatted text that would be more trouble to respond
to than its worth.
>>


Prove the writing was bad. You actually accused me of misspelling words that
weren't misspelled. You are in over your head as usual. Run away.

Powell

Dec 26, 2003, 1:35:24 PM

"S888Wheel" wrote

> >Stereophile did take the Carver challenge.
> >

> Really? What Stereophile issue describes that? >>
>
>
> I don't remember. you could always check their archives.
>
>

> > They weren't afraid of the outcome of that. >>
>

> Awaiting details of this test.
>

The Carver Challenge. Bob Carver claimed
that he could replicate any amplifier design using a
technique called "transfer function." Stereophile
took up his challenge, wanting Bob to replicate the
sound of the Conrad-Johnson Premier 5 mono-blocks.
I think that over a two-day period he accomplished
that task to Holt's and Archibald's satisfaction. From
there the Carver M1.0t was born.

Based on this experience (the Carver Challenge) Bob
set about refining the "transfer function." He next
built a reference vacuum-tube amp of his own; this later
became the Silver Seven. Based on this amp he
developed a technique called "Vacuum Tube
Transfer." The TFM line of amps was created from
this experience, the hallmark being the Silver Seven-t.

On the dark side to this tale, Stereophile was later
prohibited from publishing any reference to Carver
after trying to undo (publish/verbally) the results of the
empirical findings.

George M. Middius

Dec 26, 2003, 1:45:57 PM

S888Wheel said:

> So Arny considers legitimate scientists and accomplished industry pros to be
> charlatans.

Shall we compile a list? ;-)

• Phoebe Johnston
• John Atkinson
• Paul Frindle
• Earl Geddes
• Glenn Zelniker
• Dan Lavry
• Paul Bamborough

And those are just some of the Real Audio Guys whom Krooger has
trashed on RAO. I'm sure there are others. Plus the equally
accomplished people whom Krooger has "deconstructed" <snicker> on
RAPro and probably other forums.

All of these guys appear to be successful, knowledgeable, talented,
and productive, but according to Krooger, they are time-wasters,
blowhards, ignorant twerps, etc. What would we do without
Mr. Shit's fabulous enlightenment to set us right? ;-)


Bruce J. Richman

Dec 26, 2003, 3:22:54 PM
George M. Middius wrote:

If memory serves, we could also add Pete Goudreau to the list. Yes, indeedy.
Krueger's omniscient "exposure" of all these evil conspirators proves that we
all seriously erred in thinking that he was perhaps a little bit (or more)
paranoid and delusional. Perhaps some day those twin bastions of authenticity,
McDonald's and Circuit City, will give him his long overdue medals for
accomplishment in the fields of statistical analysis and objective audio
reviewing.

Bruce J. Richman

ScottW

Dec 26, 2003, 3:29:56 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031226010951...@mb-m25.aol.com...

> Scott said
>
> <<
> > Performing the DBTs would be a snap if Atkinson set 'em up
> > with the tools to do it. >>
> > >>
>
>
> I said
>
> <<
> > I am not so convinced it is a snap to do them well. >>
>
>
> Scott said
>
> <<
>
> I guess I need your definition of well.
> No more difficult than listening to gear,
> subjectively characterizing the sound and putting
> that to paper. >>
>
>
> Well would be within the bounds of rigor that would be scientifically
> acceptable. I see no point in half-assing an attempt to bring greater
> reliability to the process of subjective review. Let's just say Howard
> fell way short in his endeavors and the results spoke to that fact.

I am talking about reviews in a magazine. Who said it
had to be "scientifically acceptable"? That would require
independent witnesses which would make it way beyond the
scope of what I am talking about.
A tool to allow a single person to conduct and report
statistically valid results (if not independently witnessed)
would be required. After that, conducting the tests would
be relatively easy.

>
> I said
>
> <<
>
> > I think if Stereophile were
> > to do something like this it would be wise for them to consult someone
> like JJ
> > who conducted such tests for a living. Would you suggest that such DBTs
> > be limited to comparisons of cables, amps and preamps? >>
>
>
> Scott said
>
> <<
> Those are certainly the easiest components.
>
> Digital sources being next with a challenge to sync them
> such that the subject isn't tipped off.
> >>
>
> I said
>
>
> <<
> > I think DBT with speakers
> > and source components are quite a bit more difficult. >>
>
>
> Scott said
>
> <<
>
> Speakers are definitely out. It could be done but not without
> significant difficulty. >>
>
>
> I did do some single blind comparisons. The dealer was very nice about
it.

Problem is speaker location. What if both speakers' optimal locations are
within the same space? Even if they are not, the spacing should be relatively
easy to differentiate.
It is quite difficult to conduct such a test truly blind.


>
> I said
>
> <<
> > Would you limit such
> > tests to verification of actual audible differences? >>
>
>
> Scott said
>
> <<
>
> Yes, if that fails then the preference test is really
> kind of pointless.
> >>
>
>
> I don't think so. It has been shown that with components that are agreed to
> sound different, sighted bias can still have an effect on preference.

I did say if the difference test fails. Obviously if the difference test is
passed then a preference test could be undertaken. I'm not that concerned
about people being influenced on preference. Preference can be learned.


>
> Scott said
>
> <<
> >
> > << The fact that they don't even create
> > the tools to do it is telling to me.
> > >> >>
>
>
> I said
>
> <<
> >
> > How so?
> >>
>
>
> Scott said
>
> <<
> I think they are afraid of the possible (or even probable)
> outcome. >>
>
>
> Maybe but I am skeptical of this. It didn't seem to hurt Stereo Review to
> take the position that all amps, preamps and cables sounded the same.
> Stereophile did take the Carver challenge. They weren't afraid of the
> outcome of that.

Then why not? A chance to cater to both objectivists and subjectivists.
Sounds like a win-win.

ScottW


George M. Middius

Dec 26, 2003, 3:34:34 PM

Terriers are bothered by fleas. The RAO Terrierborg appears seriously
bothered. Does it follow that he must, perforce, have fleas?

> > > Performing the DBTs would be a snap if Atkinson set 'em up
> > > with the tools to do it. >>

> > > I am not so convinced it is a snap to do them well. >>

> > I guess I need your definition of well.

> > Well would be within the bounds of rigor that would be scientifically
> > acceptable.

> I am talking about reviews in a magazine. Who said it
> had to be "scientifically acceptable"?

Are you always this stupid? Wait, I know that one......

ScottW

Dec 26, 2003, 4:07:14 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031226011420...@mb-m25.aol.com...

Ok, then let's not impose such a level of rigor.
Nothing else reported in Stereophile has to meet this criteria,
why impose it on DBTs?

ScottW


ScottW

Dec 26, 2003, 4:17:25 PM

"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
news:113bd5e2.03122...@posting.google.com...

Isn't there a question about the validity of applying this test to CD
players, which don't have to regenerate the clock?

I thought it was generally applied to HT receivers with DACs
and external DACs?

ScottW


S888Wheel

Dec 26, 2003, 4:22:38 PM
Scott said

>
>>
>> I guess I need your definition of well.
>> No more difficult than listening to gear,
>> subjectively characterizing the sound and putting
>> that to paper. >>
>>

I said

>
>> Well would be within the bounds of rigor that would be scientifically
>> acceptable. I see no point in half-assing an attempt to bring greater
>> reliability to the process of subjective review. Let's just say Howard
>> fell way short in his endeavors and the results spoke to that fact.

Scott said

>
>
> I am talking about reviews in a magazine. Who said it
>had to be "scientifically acceptable"?

You asked for my definition of well-done DBTs.

Scott said

> That would require
>independent witnesses which would make it way beyond the
>scope of what I am talking about.

I don't think it would require independent witnesses but it would require
Stereophile to establish their own formal peer review group. But we are talking
about Stereophile dealing with the current level of uncertainty that now exists
with the current protocols. I think to do standard DBTs right would be a major
pain in the ass for them. Even the magazines which make a big issue out of such
tests don't often actually do such tests, and when they do they often do a crap
job of it.

Scott said

>A tool to allow a single person to conduct and report
>statistically valid results (if not independently witnessed)
>would be required. After that, conducting the tests would
>be relatively easy.


Is it ever easy? Look what Howard did with such a tool.

I said

>
>> I did do some single blind comparisons. The dealer was very nice about
>it.

Scott said

>
>Problem is speaker location. What if both speakers' optimal locations are
>within the same space? Even if they are not, the spacing should be relatively
>easy to differentiate.
>It is quite difficult to conduct such a test truly blind.
>>

It was difficult. The speakers had to be moved between each listening
session. It was blind because I had my eyes closed. It all felt a bit
ridiculous but it worked. I didn't know which speaker was which on the first
samples. It didn't take long for me to figure out which was which just by
listening, though. At that point I didn't bother with the closing of eyes.

Scott said

>
> I did say if the difference test fails. Obviously if the difference test is
> passed then a preference test could be undertaken. I'm not that concerned
> about people being influenced on preference. Preference can be learned.

Yes you did. My mistake. But sighted bias does affect preference. That has been
proven. I wanted to compare the Martin Logans to the Apogees blind for that
very reason. I knew I liked the looks of the Martin Logans.

I said

>
>> Maybe but I am skeptical of this. It didn't seem to hurt Stereo Review to
>> take the position that all amps, preamps and cables sounded the same.
>> Stereophile did take the Carver challenge. They weren't afraid of the
>> outcome of that.

Scott said

>
>Then why not? A chance to cater to both objectivists and subjectivists.
>Sounds like a win-win.
>

I cannot speak for Stereophile and I cannot rule out your hunch. But you cannot
rule out the possibility that the cost and inconvenience of proper
implementation of such protocols on a staff comprised largely of hobbyists is a
factor.

S888Wheel

Dec 26, 2003, 4:33:46 PM
>
> Ok, then lets not impose such a level of rigor.
> Nothing else reported in Stereophile has to meet this criteria,
> why impose it on DBTs?

For the sake of improving protocols to improve the reliability of subjective
reports. When it comes to such protocols I think quality is more important than
quantity. If DBTs aren't done well they will not improve the state of reviews
published by Stereophile. The source of your beef with Stereophile is that it
lacks reliability now, is it not?

John Atkinson

Dec 26, 2003, 4:46:14 PM
"Powell" <nos...@noquacking.com> wrote in
message news:<vuovogq...@corp.supernews.com>...

> On the dark side to this tail, Stereophile was later
> prohibited from publishing any reference to Carver
> after trying to undo (publish/verbally) the results of the
> empirical findings.

This is simply not true, Mr. Powell. You can retrieve previous
discussions of this subject from Google, but I will dig up the story
from my archives and post it to r.a.o.

John Atkinson
Editor, Stereophile

ScottW

Dec 26, 2003, 4:58:50 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031226162238...@mb-m02.aol.com...

> Scott said
>
> >
> >>
> >> I guess I need your definition of well.
> >> No more difficult than listening to gear,
> >> subjectively characterizing the sound and putting
> >> that to paper. >>
> >>
>
> I said
>
> >
> >> Well would be within the bounds of rigor that would be scientifically
> >> acceptable. I see no point in half-assing an attempt to bring greater
> >> reliability to the process of subjective review. Let's just say Howard
> >fell way
> >> short in his endevours and the results spoke to that fact.
>
> Scott said
>
> >
> >
> > I am talking about reviews in a magazine. Who said it
> >had to be "scientifically acceptable"?
>
> You asked for my definition of well done DBTs

But I don't agree that it is necessary for Stereophile to
implement.

>
> Scott said
>
> > That would require
> >independent witnesses which would make it way beyond the
> >scope of what I am talking about.
>
> I don't think it would require independent witnesses but it would require
> Stereophile to establish their own formal peer review group.

Let me be clear, I don't want to impose a bunch of requirements that
make this effort too difficult to implement.

> But we are talking
> about Stereophile dealing with the current level of uncertainty that now
> exists with the current protocols.

? Stereophile has conducted elaborate DBTs with more rigorous
protocols than I call for. Let them establish a protocol
that is workable for their reviewers to conduct.
Publish it for comment; it should be very interesting.

>I think to do standard DBTs right would be a major
> pain in the ass for them. Even the magazines which make a big issue out of
> such tests don't often actually do such tests and when they do they often
> do a crap job of it.

If they have a tool which controls switching and tabulates results,
I really don't see what the problem is.
What needs to happen is that a level of automation is provided
to match the skill level of the tester. That wouldn't be that difficult.
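The tabulation half of such a tool is genuinely simple: a trial log plus an exact binomial significance check. A sketch of that check (hypothetical; not Howard's box or any shipping product):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting `correct` or more right out of `trials`
    purely by guessing (p = 0.5 per trial) -- the number a reviewer
    would report alongside the raw score."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 right out of 16 trials: p ~= 0.038, under the usual 0.05 criterion,
# so chance guessing is an unlikely explanation for the score.
print(round(abx_p_value(12, 16), 3))  # -> 0.038
```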


>
> Scott said
>
> >A tool to allow a single person toconduct and report
> >statistically valid results (if not independently witnessed)
> >would be required. After that, conducting the tests would
> >be relatively easy.
>
>
> Is it ever easy? Look what Howard did with such a tool.

I don't believe Howard's ABX box provided the level of automation
I am talking about.

ScottW


George M. Middius

Dec 26, 2003, 5:04:59 PM

S888Wheel said to the Terrierborg:

> The source of your beef with Stereophile is that it
> lacks reliability now is it not?

Of course not. Woofies hates Stereophile because it describes,
evaluates, and ultimately endorses luxury goods. Communists don't need
luxury goods.


George M. Middius

Dec 26, 2003, 5:05:46 PM

John Atkinson said:

> > On the dark side to this tail, Stereophile was later
> > prohibited from publishing any reference to Carver
> > after trying to undo (publish/verbally) the results of the
> > empirical findings.
>
> This is simply not true, Mr. Powell.

Powell has told us he prefers to be addressed as "No Stick Um". No
honorific, of course.

Ernst Raedecker

Dec 26, 2003, 5:24:35 PM
On Wed, 24 Dec 2003 21:37:57 -0800, "ScottW" <Scot...@hotmail.com>
wrote:

>What I am referring to are the reviews where different units
>are compared and perceptions of differences in sonic
>performances are claimed which can't
>be validated through differences in measured performance.

This is a very valid point. Many times what we hear is not what we
measure, and what we measure is not confirmed by our hearing. How is
it possible that our hearing and our measurements many times do NOT
correlate?

The answer is that there are problems with:
(1) our measurements: they are NOT really objective.
(2) our hearing: this depends MORE on our signal-processing ability in
the brain than on the data-collecting ability of the ear.

(1) Let's discuss the "measured performance". The hearing stuff will
have to wait.

Contrary to common belief there is NOT an objective standard for
"measured performance" or "THE measured performance". What you choose
to measure is subjective, and the weight you give to certain elements
of your measurements is also subjective. Of course statisticians have
known all this for 70 years and more. Unfortunately very few audio
testers have a thorough knowledge of statistics and the fallacies of
statistics.

So if you claim that certain things we hear, or think we hear, or
Stereophile has heard, cannot "...be validated through differences in
measured performance" then you should first try to establish which
measurements under which conditions are relevant and which processing
of the results is relevant.

Recently there has been renewed interest in the old question of HOW we
should measure audio equipment, and WHICH measurements are relevant
and give us results that correlate with what we hear.

I would like to remind you of the work of Richard Cabot, for example
his "Fundamentals of modern audio measurement", first presented at the
103rd convention of the AES, 1997, available in pdf format on the
internet. In another paper, "COMPARISON OF NONLINEAR DISTORTION
MEASUREMENT METHODS", also on the internet in pdf format, he
introduces his famous FastTest methodology.

Reading these two papers alone will make it clear to you that there is
so much more to say about measurements that it is far too simple to speak
of "measured performance" as such.

But there is more.

Not so long ago Daniel Cheever wrote a nice paper presenting new
measurements that SHOW that Single Ended Triode amps without negative
feedback do distort LESS, FAR LESS than comparable transistor
amplifiers. Would you believe that?
All those years the Objectivist League has told us that transistor
amps **measure** "objectively" much better than SETs, and that SETs AT
BEST add "euphonic distortions" that are pleasing to the ear, and now
this guy tells us that SETs "objectively" MEASURE BETTER than
transistor amps!!!!

So the Hard Line Objectivists were wrong all the time, not only
soundwise but also measurementwise. What Subjectivists had heard all
the time, namely that SETs sound better, HAS NOW BEEN VALIDATED
through differences in measured performance!!!!!

(See: Daniel Cheever, "A NEW METHODOLOGY FOR AUDIO FREQUENCY
POWER AMPLIFIER TESTING BASED ON PSYCHOACOUSTIC
DATA THAT BETTER CORRELATES WITH SOUND QUALITY", dec 2001, also in pdf
format on the internet)

As it is, I believe there are some qualifications to be made about
Cheever's paper, but I won't make them. I leave it up to you to look
his paper up on the internet and read HOW he constructs his set of
measurements and processing methods. After all, you show an interest
in discussing the validity of measurements, so you are allowed to do
some homework.

Well, I will help you out a bit. Cheever's basic tenet is that the
supposedly "objective" measure of THD is not objective at all. THD is
measured as the root mean square of all the harmonics of a fundamental
in the audible range. This leads to an unweighted sum of harmonics
relative to the fundamental as a distortion percentage.
His point is that the SUM of the distortion is not really important,
but that the STRUCTURE of the produced harmonics is important. The
more this structure deviates from the natural nonlinearities produced
in the ear itself, the more audible the distortions become.

You see, whether he is completely correct in his stressing of aural
harmonics as the basis of distortion measurements (I believe there is
more to say than he does), is not the point.

The point is that he makes clear that the so-called "objective"
measurement of THD is not at all objective. There is no reason at all
why an unweighted summation of (the energy in) harmonics relative to
the fundamental would be an "objective" or a relevant measurement. It
is weighting with a value of 1 for each harmonic. Why not diminishing
weights? Why not increasing weights?

By the way, I **personally** do not think that SETs are really the
excellent amplifiers that Subjectivists and Cheever take them to be,
but that is not the point either.

The point is that it IS possible, and it IS done, to construct a set of
serious measurements that DO show that SETs measure "objectively"
BETTER than transistor amps, while the whole Objectivist community
lives in the mind-set that this cannot "objectively" be done.

In short, measurements, and especially the processing of measurements,
are NOT objective. If they correlate with what we hear, we may
consider them relevant. If they don't, then they are not so relevant.

You are also advised to take notice of the recent work of Earl and
Lidia Geddes on sound quality and the perception and measurement of
distortion, also presented recently at the AES. See their website at:

http://www.gedlee.com/distortion_perception.htm

You are also advised to take a look at the newest issue of the Journal
of the AES (Nov. 2003, Vol. 51, No. 11). As you know, these guys of the
AES are not really soft-in-the-head Subjectivists. Let's look at the
contents and quote the abstract of the main paper in this issue:

[quote]
The Effect of Nonlinear Distortion on Perceived Quality of Music and
Speech Signals
Chin-Tuan Tan, Brian C. J. Moore, and Nick Zacharov 1012

The subjective evaluation of nonlinear distortions often shows a weak
correlation with physical measures because the choice of distortion
metrics is not obvious. In reexamining this subject, the authors
validated a metric based on the change in the spectrum in a series of
spectral bins, which when combined leads to a single distortion
metric. Distortion was evaluated both objectively and subjectively
using speech and music. Robust results support the hypothesis for this
approach.
[unquote]
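Loosely in the spirit of that abstract, a spectral-bin change metric can be sketched in a few lines of Python (a toy version only; the paper's actual metric, bin widths, and weighting are more sophisticated):

```python
import cmath
import math

N = 256
F = 10  # test-tone frequency, in DFT bins

def dft_mag(x):
    """Magnitude spectrum via a plain DFT (fine at this toy size)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

tone = [math.sin(2 * math.pi * F * t / N) for t in range(N)]
# Stand-in nonlinear device: a soft clipper (illustrative, not any real unit).
soft_clip = [math.tanh(1.5 * s) / math.tanh(1.5) for s in tone]

def bin_change_metric(reference, device, width=8):
    """Collapse the spectra into coarse bins and sum, over all bins,
    the absolute change in energy the device introduces."""
    ref, dev = dft_mag(reference), dft_mag(device)
    total = 0.0
    for start in range(0, len(ref), width):
        r = sum(v * v for v in ref[start:start + width])
        d = sum(v * v for v in dev[start:start + width])
        total += abs(d - r)
    return math.sqrt(total)

print(f"distortion metric (soft clipper): {bin_change_metric(tone, soft_clip):.4f}")
```

A perfectly linear pass-through scores exactly zero on this metric; the soft clipper scores well above it, because both its harmonics and its fundamental compression change the per-bin energies.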

So you are not the only one asking himself why ...
"The subjective evaluation of nonlinear distortions often shows a weak
correlation with physical measures..."

It is, as I have made now ABUNDANTLY clear, ...
"because the choice of distortion metrics is not obvious."

Yeah.

There is another very interesting article in the same issue of the
JAES, one which will make the Hard Line Objectivists puke, as it makes
clear that even the tough AES boys roll over to the soft-in-the-head
camp:

[quote]
Large-Signal Analysis of Triode Vacuum-Tube Amplifiers
Muhammad Taher Abuelma'atti 1046

With the renewed interest in vacuum tubes, the issue of intrinsic
distortion mechanisms becomes relevant again. The author demonstrates
a nonlinear model of triodes and pentodes that leads to a closed-form
solution when the nonlinearity is represented by a Fourier expansion
rather than the conventional Taylor series. When applied to a two-tone
sine wave, the analysis shows that the distortion in tube amplifiers
is similar to that of the equivalent transistor amplifier. A SPICE
analysis confirms the approach.
[unquote]

Yeah, even with simple two-tone sine waves it is now ESTABLISHED
OBJECTIVELY that tube amps do NOT distort more than transistor amps.
So it says.
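The two-tone analysis itself is easy to reproduce numerically for any memoryless nonlinearity, whatever device produced the transfer curve (a sketch with a made-up cubic coefficient; a cubic term predicts products at 2f1-f2, 2f2-f1, 3f1, and 3f2):

```python
import cmath
import math

N = 256
f1, f2 = 20, 27  # the two test tones, in DFT bins

def spectrum(x):
    """Single-sided magnitude spectrum via a plain DFT."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

two_tone = [math.sin(2 * math.pi * f1 * t / N) + math.sin(2 * math.pi * f2 * t / N)
            for t in range(N)]
# Generic memoryless nonlinearity standing in for an amplifier's transfer
# curve; the 0.05 cubic coefficient is an illustrative value.
distorted = [s - 0.05 * s ** 3 for s in two_tone]

mag = spectrum(distorted)
for k in (2 * f1 - f2, 2 * f2 - f1, 3 * f1, 3 * f2):  # predicted products
    print(f"bin {k}: {mag[k]:.4f}")
```

Energy appears at exactly the predicted intermodulation and harmonic bins, and at essentially nothing elsewhere; the structure of the products follows from the shape of the curve, not from whether a tube or a transistor realized it.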

===========

Oh, WHAT a field day for the Subjectivists today.

Oh, WHAT a dismal day for the HLOs like Pinkerton, Krueger, Ferstler
and all the rest of them.

All those years they have thought that they have at least the AES on
their side, and now the AES deserts them. It must be an annus
horribilis for them.

Merry Xmas.

Ernesto.


"You don't have to learn science if you don't feel
like it. So you can forget the whole business if
it is too much mental strain, which it usually is."

Richard Feynman

ScottW

Dec 26, 2003, 5:38:42 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031226163346...@mb-m02.aol.com...

> >
> > Ok, then lets not impose such a level of rigor.
> > Nothing else reported in Stereophile has to meet this criteria,
> > why impose it on DBTs?
>
> For the sake of improving protocols to improve reliability of subjective
> reports.

I don't agree. Sufficient DBT protocols exist. Stereophile has used them.
No "improvement" in DBT protocols is required. Applying existing DBT
protocols would be sufficient to confirm or deny that audible differences
exist.

>When it comes to such protocols I think quality is more important than
> quantity.

Unfortunately this opens a major loophole: cherry-picking the
units to be tested such that audible differences are assured.
I would like to see DBTs become part of the standard review
protocol for select categories of equipment.
Most reviewers like to compare equipment under review to
their personal reference systems anyway.

> If DBTs aren't done well they will not improve the state of reviews
> published by Stereophile.

We differ on how well done they need to be to add
credibility to audible difference claims.

>The source of your beef with Stereophile is that it
> lacks reliability now is it not?

Not exactly. I find the subjective perceptions
portion of some reviews to lack credibility.

ScottW


Socko Van Puppet

Dec 26, 2003, 5:49:57 PM
From: ern...@xs4all.nl (Ernst Raedecker)

>Oh, WHAT a field day for the Subjectivists today.

I'm guessing, too, that sooner or later (I myself have neither the time nor the
inclination) someone will demonstrate that the mental gymnastics required by
fast-switching ABX type tests themselves are the critical variable in any
experiment. But for now... Who cares about that happy horseshit? Enjoy!!!

Socko

Lionel

Dec 26, 2003, 5:53:13 PM
George M. Middius wrote:

George, my pooooooooor little RAO baby...
When will you stop writing such naive absurdities?

ScottW

Dec 26, 2003, 6:12:07 PM

"Ernst Raedecker" <ern...@xs4all.nl> wrote in message
news:3feca680...@newszilla.xs4all.nl...

> On Wed, 24 Dec 2003 21:37:57 -0800, "ScottW" <Scot...@hotmail.com>
> wrote:
>
> >What I am referring to are the reviews where different units
> >are compared and perceptions of differences in sonic
> >performances are claimed which can't
> >be validated through differences in measured performance.
>
> This is a very valid point. Many times what we hear is not what we
> measure, and what we measure is not confirmed by our hearing. How is
> it possible that our hearing and our measurements many times do NOT
> correlate?
>

Interesting stuff. You have supported very well one of my original
comments that there is a near-infinite depth of detail to measurements
which can be explored.

I think it is far easier to first confirm audible differences
and then pursue validating those differences through measurement.

Without the listening tests, there is still no demonstration of whether
measurement differences are in fact audible.
I think the cart is before the horse.

With regard to the work of the Geddes, it is apparent that distortion
measures that are more applicable to amplifiers don't work
particularly well for speakers. This does not in any
way invalidate those measures for amplifier performance.

ScottW


Sockpuppet Yustabe

Dec 26, 2003, 6:17:44 PM

"George M. Middius" <Spam-...@resistance.org> wrote in message
news:j2cpuv8odu2st9l87...@4ax.com...

It depends if they are Communist leaders or Communist followers



George M. Middius

Dec 26, 2003, 6:28:49 PM

The Terrierborg must've had a very disappointing Xmas.

> > This is a very valid point. Many times what we hear is not what we
> > measure, and what we measure is not confirmed by our hearing. How is
> > it possible that our hearing and our measurements many times do NOT
> > correlate?

> Interesting stuff.

No, it's exceedingly boring, trivial, and pointless. Oh wait -- you
were masturbating again, weren't you?


> You have supported very well one of my original
> comments that there is near infinite depth of detail to measurements
> which can be explored.

Or you could just listen to some music.

> I think it is far easier to first confirm audible differences
> and then pursue validating those differences through measurement.

"Far easier" than gouging out your eyeballs?

> Without the listening tests, there is still no demonstration that
> measurement differences are in fact audible or not.
> I think the cart is before the horse.

I'll bet you're an expert at fucking yourself with a garden hose.


George M. Middius

Dec 26, 2003, 6:29:14 PM

Sockpuppet Yustabe said:

> > > The source of your beef with Stereophile is that it
> > > lacks reliability now is it not?
> >
> > Of course not. Woofies hates Stereophile because it describes,
> > evaluates, and ultimately endorses luxury goods. Communists don't need
> > luxury goods.

> It depends if they are Communist leaders or Communist followers

The Terrierborg is your buddy. You tell us.

John Atkinson

Dec 26, 2003, 7:26:17 PM
"Arny Krueger" <ar...@hotpop.com> wrote in message
news:<WLOdnRHohKn...@comcast.com>...

>"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
>news:113bd5e2.03122...@posting.google.com
>>In message <U5KdnRwCRMT...@comcast.com>
>>Arny Krueger (ar...@hotpop.com) wrote:
>>> Stereophile does some really weird measurements, such as their
>>> undithered tests of digital gear. The AES says don't do it, but
>>> John Atkinson appears to be above all authority but the voices
>>> that only he hears.
>>
>> It is always gratifying to learn, rather late of course, that I had
>> bested Arny Krueger in a technical discussion. My evidence for
>> this statement is his habit of waiting a month, a year, or even more
>> after he has ducked out of a discussion before raising the subject
>> again on Usenet as though his arguments had prevailed. Just as he has
>> done here. (This subject was discussed on r.a.o in May 2002, with Real
>> Audio Guys Paul Bamborough and Glenn Zelniker joining me in pointing
>> out the flaws in Mr. Krueger's argument.)
>
> So, as Atkinson's version of the story evolves, it wasn't him alone
> that bested me, but the dynamic trio of Atkinson, Bamborough, and
> Zelniker. Notice how the story is changing right before our very eyes!

"Our?" Do you have a frog in your pocket, Mr. Krueger? No, Mr. Krueger.
The story hasn't changed. I was merely pointing out that Paul Bamborough
and Glenn Zelniker, both digital engineers with enviable reputations,
posted agreement with the case I made, and as I said, joined me in
pointing out the flaws in your argument.


>> So let's examine what the Audio Engineering Society (of which I am
>> a long-term member and Mr. Krueger is not) says on the subject of
>> testing digital gear, in their standard AES17-1998 (revision of
>> AES17-1991):
>> Section 4.2.5.2: "For measurements where the stimulus is generated in
>> the digital domain, such as when testing Compact-Disc (CD) players,
>> the reproduce sections of record/replay devices, and digital-to-analog
>> converters, the test signals shall be dithered."
>>
>> I imagine this is what Mr. Krueger means when wrote "The AES says
>> don't do it." But unfortunately for Mr. Krueger, the very same AES
>> standard goes on to say in the very next section (4.2.5.3):
>> "The dither may be omitted in special cases for investigative
>> purposes. One example of when this is desirable is when viewing bit
>> weights on an oscilloscope with ramp signals. In these circumstances
>> the dither signal can obscure the bit variations being viewed."
>
> At this point Atkinson tries to confuse "investigation" with "testing
> equipment performance for consumer publication reviews." Of course these
> are two very different things...

Not at all, Mr. Krueger. As I explained to you back in 2002 and again
now, the very test that you describe as "really weird" and that you
claim the "AES says don't do" is specifically outlined in the AES
standard as an example of a test for which a dithered signal is
inappropriate, because it "can obscure the bit variations being viewed."

It is also fair to point out that both the undithered ramp signal and
the undithered 1kHz tone at exactly -90.31dBFS that I use for the same
purpose are included on the industry-standard CD-1 Test CD, that was
prepared under the aegis of the AES.

If you continue to insist that the AES says "don't do it," then why on
earth would the same AES help make such signals available?
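The role of dither in such low-level tests is easy to demonstrate numerically (a sketch with illustrative parameters; a sine with a peak of 1 LSB sits at about -90.31 dBFS in a 16-bit system):

```python
import math
import random

random.seed(20031226)  # fixed seed so the dither run is reproducible
AMP = 1.0  # tone peak in LSBs; 1 LSB re 16-bit full scale is about -90.31 dBFS

def quantize(x):
    """Round to the nearest integer LSB, as an undithered quantizer would."""
    return math.floor(x + 0.5)

n = 2000
tone = [AMP * math.sin(2 * math.pi * t / n) for t in range(n)]

plain = [quantize(s) for s in tone]
# TPDF dither: the sum of two independent +/-0.5 LSB uniform variables.
dithered = [quantize(s + random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5))
            for s in tone]

print("codes without dither:", sorted(set(plain)))     # exactly three codes
print("codes with dither:   ", sorted(set(dithered)))  # more codes in play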


>> As the first specific test I use an undithered signal for is indeed
>> for investigative purposes -- looking at how the error in a DAC's MSBs
>> compare to the LSB, in other words, the "bit weights" -- it looks as
>> if Mr. Krueger's "The AES says don't do it" is just plain wrong.
>
> The problem here is that again Atkinson has confused detailed
> investigations into how individual subcomponents of chips in the player
> works (i.e., "[investigation]") with the business of characterizing how
> it will satisfy consumers.

The AES standard concerns the measured assessment of "Compact-Disc (CD)
players, the reproduce sections of record/replay devices, and
digital-to-analog converters." As I pointed out, it makes an exception
for "investigative purposes" and makes no mention of such "purposes"
being limited to the "subcomponents of chips." The examination of "bit
weights" is fundamental to good sound from a digital component, because
if each one of the 65,535 integral step changes in the digital word
describing the signal produces a different-sized change in the
reconstructed analog signal, the result is measurable and audible
distortion.
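A toy model shows why: a single mis-weighted bit injects a signal-correlated error every time that bit toggles, which is distortion, not benign noise (the 0.5% MSB error below is a made-up figure, not taken from any reviewed product):

```python
import math

BITS = 16
ideal_w = [2 ** b for b in range(BITS)]
faulty_w = ideal_w[:]
faulty_w[BITS - 1] *= 1.005  # hypothetical DAC whose MSB weighs 0.5% too much

def reconstruct(code, weights):
    """Sum the weights of the bits that are set in an offset-binary code."""
    return sum(w for b, w in enumerate(weights) if code >> b & 1)

# Quantize one cycle of a near-full-scale sine to 16-bit offset-binary codes.
codes = [round((math.sin(2 * math.pi * t / 256) * 0.9 + 1) * 32767)
         for t in range(256)]

# Error of the faulty DAC relative to the ideal one, in LSBs. The error
# switches on and off with the MSB, i.e. with the sign of the signal.
errors = [reconstruct(c, faulty_w) - reconstruct(c, ideal_w) for c in codes]
print(f"worst-case midscale error: {max(abs(e) for e in errors):.1f} LSBs")
```

A 0.5% MSB error alone produces a glitch of roughly 164 LSBs every time the signal crosses midscale, which is why a bit-weight test catches faults that a simple noise-floor measurement can miss.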


> Consumers don't care about whether one individual bit of the
> approximately 65,000 levels supported by the CD format works, they
> want to know how the device will sound.

Of course. And being able to pass a "bit weight" test is fundamental to
a digital component being able to sound good. This is why I publish the
results of this test for every digital product reviewed in Stereophile.
I am pleased to report that the bad old days, when very few DACs could
pass this test, are behind us.


> It's a simple matter to show that nobody, not even John Atkinson can
> hear a single one of those bits working or not working.

I am not sure what this means. If a player fails the test I am describing,
both audible distortion and sometimes even more audible changes in pitch
can result. I would have thought it important for consumers to learn
of such departures from ideal performance.


> Yet he deems it appropriate to confuse consumers with this sort of
> [minutiae], perhaps so that they won't notice his egregiously-flawed
> subjective tests.

In your opinion, Mr. Krueger, and I have no need to argue with you
about opinions, only when you misstate facts. As you have done in this
instance. To recap:

I use just two undithered test signals as part of the battery of tests
I perform on digital components for Stereophile. Mr. Krueger has
characterized my use of these test signals as "really weird" and has
claimed that their use is forbidden by the Audio Engineering
Society. Yet, as I have shown by quoting the complete text of the
relevant paragraphs from the AES standard on the subject, one of the
tests I use is specifically mentioned as an example as the kind of test
where dither would interfere with the results and where an undithered
signal is recommended.

As my position on this subject has been supported by two widely
respected experts on digital audio, I don't think that anything more
needs to be said about it.

And as I said, Mr. Krueger is also incorrect about the second
undithered test signal I use, which is to examine a DAC's or CD player's
rejection of word-clock jitter. My use is neither "really weird," nor
is it specifically forbidden by the Audio Engineering Society.
>> example of a _non_-diagnostic test signal, see Arny Krueger's use of
>> a dithered 11.025kHz tone in his website tests of soundcards at a
>> 96kHz sample rate. This meets none of the criteria I have just
>> outlined.
>
> Notice that *none* of the above minutiae and fine detail addresses my
> opening critical comment: "He does other tests, relating to jitter for
> which there is no independent confirmation of reliable relevance to
> audibility".
>
> Now did you see anything in Atkinson's two numbered paragraphs above and
> the subsequent unnumbered paragraph that addresses my comment about
> listening tests and independent confirmation of audibility? No you
> didn't!

You are absolutely correct, Mr. Krueger. There is nothing about the
audibility of jitter in these paragraphs. This is because I was
addressing your statements that this test, like the one examining bit
weights, was "really weird" and that "The AES says don't do it, but
John Atkinson appears to be above all authority but the voices that
only he hears."

Regarding audibility, I then specifically said, in my next paragraph,
that "One can argue about the audibility of jitter..." As you _don't_
think it is audible but my experience leads me to believe that it _can_
be, depending on level and spectrum, again I don't see any point in
arguing this subject with you, Mr. Krueger. All I am doing is
specifically addressing the point you made in your original posting
and showing that it was incorrect. Which I have done.

Finally, you recently claimed in another posting that your attacking me
was "highly appropriate," given that my views about you "are totally
fallacious, libelous and despicable." I suggest to those reading this
thread that they note that Arny Krueger has indeed made this discussion
highly personal, using phrases such as "the voices that only [John
Atkinson] hears"; "Notice how [John Atkinson's] story is changing right
before our very eyes!"; "the same-old, same-old old Atkinson
song-and-dance which reminds many knowledgeable people of that old
carny's advice 'If you can't convince them, confuse them!'"; "This is
just more of Atkinson's 'confuse 'em if you can't convince 'em'
schtick"; "Atkinson is up to tricks as usual."

I suggest people think for themselves about how appropriate Mr.
Krueger's attacks are, and how relevant they are to a subject where
it is perfectly acceptable for people to hold different views.

John Atkinson
Editor, Stereophile

Powell

Dec 26, 2003, 8:02:20 PM

"John Atkinson" wrote

> > On the dark side to this tale, Stereophile was later
> > prohibited from publishing any reference to Carver
> > after trying to undo (publish/verbally) the results of the
> > empirical findings.
>
> This is simply not true, Mr. Powell. You can retrieve
> previous discussions of this subject from Google,
> but I will dig up the story from my archives and post
> it to r.a.o.
>

"From my archives"... Please do, and post TAS's
version, too. I take it you have no problem with
the other facts stated in the post.


George M. Middius

Dec 26, 2003, 8:20:12 PM

John Atkinson said:

> I suggest people think for themselves about how appropriate Mr.
> Krueger's attacks are, and how relevant they are to a subject where
> it is perfectly acceptable for people to hold different views.

So you're saying audio is an intellectual, financial, vocational (or
avocational), and otherwise mundane endeavor? I'm sorry, but this is
where you earn your excommunication. Krooger is clearly high up on his
overflowing toilet, preaching to his choir. This is a matter of faith
for Turdborg, and well you should note that, sir.


Arny Krueger

Dec 26, 2003, 8:51:03 PM
"S888Wheel" <s888...@aol.com> wrote in message
news:20031226122858...@mb-m27.aol.com

> <<
> Really? What Stereophile issue describes that? >>
>
>
> I don't remember. you could always check their archives.

Bad idea, given how brain-dead their search engine is.

Google yields:

http://www.google.com/groups?selm=2klvgd%24qui%40rmg01.prod.aol.net

"Okay, the first part involved a challenge that Bob Carver made, that he
could make one of his amplifiers sound identical to any amplifier selected
by the Stereophile editors. Although the article didn't specifically state
this, they chose a Conrad-Johnson tube amplifier (as far from a solid state
mid-priced amplifier as you can get), which Bob proceeded to match up
against his M1.0 amplifier. The article stated that the Stereophile editors
could not hear a difference between their source amplifier and the modified
Carver amplifier. The modified amplifier was then used as a prototype for a
production model, the M1.0t. The "t" stands for transfer-function-modified.

"Stereophile later claimed that the production amplifier didn't match the
original tube amplifier. Carver said that it did--and the beat went on.

"The lawsuit was a separate issue. Carver Corporation charging that
Stereophile had engaged in a campaign to discredit Carver or some such. In
a settlement, they agreed not to mention Carver in their editorial pages,
although the company was free to continue to advertise in the magazine.

"My feeling, based on knowing Carver and talking with him about his
technique, is that there is no black magic involved in infusing the sonic
character of a tube amplifier onto a solid state amplifier, so long as the
destination amplifier has equal or superior output/current and distortion
characteristics. Since I don't believe in audio mysticism yet, I have no
reason to believe that any knowledgeable audio engineer, given enough time,
couldn't do the same thing.

Arny Krueger

Dec 26, 2003, 9:34:18 PM
"Ernst Raedecker" <ern...@xs4all.nl> wrote in message
news:3feca680...@newszilla.xs4all.nl
> On Wed, 24 Dec 2003 21:37:57 -0800, "ScottW" <Scot...@hotmail.com>
> wrote:

>> What I am referring to are the reviews where different units
>> are compared and perceptions of differences in sonic
>> performances are claimed which can't
>> be validated through differences in measured performance.

> This is a very valid point. Many times what we hear is not what we
> measure, and what we measure is not confirmed by our hearing. How is
> it possible that our hearing and our measurements many times do NOT
> correlate?

> The answer is that there are problems with:
> (1) our measurements: they are NOT really objective.

They are plenty objective; the real problem is that we can't relate them to
subjective perceptions as well as we might like.

> (2) our hearing: this depends MORE on our signal processing ability in
> the brain than on our data-collecting ability of the ear.

Not news. Hence generalized listener training and other more specific
procedures for improving listener sensitivity in a particular test. Working
examples of this can be found at the www.pcabx.com web site.

> (1) Let's discuss the "measured performance". The hearing stuff will
> have to wait.

> Contrary to common belief there is NOT an objective standard for
> "measured performance" or "THE measured performance". What you choose
> to measure is subjective, and the weight you give to certain elements
> of your measurements is also subjective. Of course statisticians have
> known all this for 70 years and more. Unfortunately very few audio
> testers have a thorough knowledge of statistics and the fallacies of
> statistics.

There are standards for measured performance. The real problem is that they
aren't generally agreed-upon. It's a complex situation. A major stumbling
block is the radical subjectivist rejection of any and all reliable and
bias-controlled listening procedures that have been proposed to date.

> So if you claim that certain things we hear, or think we hear, or
> Stereophile has heard, cannot "...be validated through differences in
> measured performance" then you should first try to establish which
> measurements under which conditions are relevant and which processing
> of the results is relevant.

I think that JJ once said that if all artifacts are 100 dB down, no audible
differences will be heard. I think that this is correct as far as it goes,
but in a great many circumstances this is far too rigorous a standard.

> Recently there has been renewed interest in the old question of HOW we
> should measure audio equipment, and WHICH measurements are relevant
> and give us results that correlate with what we hear.

The interest never stopped except in the minds that stopped thinking
rationally.

> I would like to remind you of the work of Richard Cabot, for example
> his "Fundamentals of modern audio measurement", first presented at the
> 103rd convention of the AES, 1997, available in pdf format on the
> internet. In another paper, "COMPARISON OF NONLINEAR DISTORTION
> MEASUREMENT METHODS", also on the internet in pdf format, he
> introduces his famous FastTest methodology.

Good paper, lots of good ideas, but also something that has been arguably
eclipsed by more recent developments. These papers don't really talk about
what levels of distortion are good or bad. They focus on how to measure
common forms of noise and distortion. Therefore, their introduction in the
discussion at this point is actually irrelevant, because we're discussing
how much noise and distortion is audible, not how to measure it.

> Reading these two papers alone will make it clear to you that there is
> so much more to say about measurements that it is far too simple to speak
> of "measured performance" as such.

Not at all. If one understands what these papers are trying to say and what
they don't say, it's just reading material about how to measure noise and
distortion, and they don't say much at all about what the resulting numbers
mean.

> But there is more.

Yes, ranging from Zwicker and Fastl to the two Geddes/Lee papers from the
last major AES.

> Not so long ago Daniel Cheever has written a nice paper presenting new
> measurements that SHOW that Single Ended Triode amps without negative
> feedback do distort LESS, FAR LESS than comparable transistor
> amplifiers. Would you believe that?

The whole discussion obviously rests on how you characterize distortion and
what you call "comparable" transistor amplifiers. SETs are basically what
results when you throw away just about every important technical innovation
relating to power amps that was developed between about 1925 and 1965. This
includes biasing, load lines, push-pull operation and the long and bloody
development of a number of different flavors of inverse feedback. I suspect
that if one is equally stupid about designing SS amps, some really horrid
equipment just might result. Garbage in, garbage out.

> All those years the Objectivist League has told us that transistor
> amps **measure** "objectively" much better than SETs, and that SETs AT
> BEST add "euphonic distortions" that are pleasing to the ear, and now
> this guy tells us that SETs "objectively" MEASURE BETTER than
> transistor amps!!!!

Cheever's paper http://web.mit.edu/cheever/www/cheever_thesis.pdf is a bit
of a joke. He arbitrarily assigns sound quality characterizations to a
number of amps and then attempts to justify his arbitrary choices by
mathematical means. Bad science or weird science?

> So the Hard Line Objectivists were wrong all the time, not only
> soundwise but also measurementwise. What Subjectivists had heard all
> the time, namely that SETs sound better, HAS NOW BEEN VALIDATED
> through differences in measured performance!!!!!

No, it's been supported by the usual subjectivist means - arbitrary personal
decisions offered without any reliable, believable support.

> (See: Daniel Cheever, "A NEW METHODOLOGY FOR AUDIO FREQUENCY
> POWER AMPLIFIER TESTING BASED ON PSYCHOACOUSTIC
> DATA THAT BETTER CORRELATES WITH SOUND QUALITY", dec 2001, also in pdf
> format on the internet)

Here's the URL again, for people who need a good laugh:

http://web.mit.edu/cheever/www/cheever_thesis.pdf

> As it is, I believe there are some qualifications to be made on
> Cheever's paper, but I won't make them. I leave it up to you to look
> his paper up on the internet and read HOW he construes his set of
> measurements and processing methods. After all, you show an interest
> in discussing the validity of measurements, so you are allowed to do
> some homework.

Homework which, rather obviously, the author of the post I'm responding to
didn't do.

> Well, I will help you out a bit. Cheever's basic tenet is that the
> supposedly "objective" measure of THD is not objective at all. THD is
> measured as the root mean square of all the harmonics of a fundamental
> in the audible range. This leads to an unweighted sum of harmonics
> relative to the fundamental as a distortion percentage.

What's really going on is that the means used to weight harmonics in a THD
measurement are arbitrary, and have never been seriously claimed by anybody
to have psychoacoustic justification. The measurement is objective, the
analysis is objective, but the particular analysis is not justified by
modern perceptual research.

> His point is that the SUM of the distortion is not really important,
> but that the STRUCTURE of the produced harmonics is important.

It's not Cheever's point, it's Crowhurst's point. BTW, Crowhurst's paper is on
the Cheever site at

http://web.mit.edu/cheever/www/crow1.htm

BTW the official info about this paper (date, abstract, etc) is:

Some Defects in Amplifier Performance Not Covered by Standard
Specifications 1039524 bytes (CD aes7)
Author(s): Crowhurst, Norman H.
Publication: Preprint 12; Convention 9; October 1957
Abstract: Physiological research has shown that the very low orders of
distortion represented by the specifications of modern amplifiers should
be inaudible, but the fact remains that considerable distortion can be heard
in cases where the measured distortion, by methods at present standard, is
far below the limits determined to be audible. This paper examines
critically some of the possible forms of distortion that can be audible
under such circumstances. Methods of detecting their presence are described,
with the intention of providing a basis for future forms of specification,
more indicative of significant practical amplifier performance than are the
present standards.


> The more this structure deviates from the natural nonlinearities produced
> in the ear itself, the more audible the distortions become.

This is a wonderfully ignorant statement. The nonlinearities of the ear are
strongly SPL-dependent, and by this I mean the SPL at the ear. So, if you
keep the SPL levels down, the ear's nonlinearities are more-or-less under
control. The more significant source of the problem is well-known to
everybody who has seriously studied psychoacoustics since about 1985 -
masking.

> You see, whether he is completely correct in his stressing of aural
> harmonics as the basis of distortion measurements (I believe there is
> more to say than he does), is not the point.

Well, there is certainly more to say than Cheever says, as Geddes and
Lee recently demonstrated.

> The point is that he makes clear that the so-called "objective"
> measurement of THD is not at all objective.

Sure it is. The problem is not that it isn't objective, the problem is that
it is not perceptually-based.

>There is no reason at all
> why an unweighted summation of (the energy in) harmonics relative to
> the fundamental would be an "objective" or a relevant measurement. It
> is weighting with a value of 1 for each harmonic. Why not diminishing
> weights? Why not increasing weights?

Any consistent means for weighting harmonics is objective, but it just might
be very suboptimal because it is not based on what has developed in terms of
knowledge about human perception since THD was first suggested (the early
1930s, I believe).

> By the way, I **personally** do not think that SETs are really the
> excellent amplifiers that Subjectivists and Cheever take them to be,
> but that is not the point either.

Nice job of building Cheever up and then cutting him down.

> The point is that it IS possible, and it IS done, to construe a set of
> serious measurements that DO show that SETs measure "objectively"
> BETTER than transistor amps, while the whole Objectivist community
> lives in the mind-set that this cannot "objectively" be done.

The recent Geddes/Lee AES papers disprove this claim quite thoroughly.

> In short, measurements, and especially the processing of measurements,
> are NOT objective.

Sure they are, but that doesn't mean that measures dating back to maybe 1925
are the best that we can do.

> If they correlate with what we hear, we may
> consider them relevant. If they don't, then they are not so relevant.

How about that.

> You are also advised to take notice of the recent work of Earl and
> Lidia Geddes on sound quality and the perception and measurement of
> distortion, also presented recently at the AES. See their website at:

> http://www.gedlee.com/distortion_perception.htm

Nice job of self-deconstruction.

> You are also advised to take a look at the newest issue of the Journal
> of the AES (nov 2003, vol 51, no 11). As you know, these guys of the
> AES are not really soft-in-the-head Subjectivists. Let's look at the
> contents and quote the abstract of the main paper in this issue:
>
> [quote]
> The Effect of Nonlinear Distortion on Perceived Quality of Music and
> Speech Signals
> Chin-Tuan Tan, Brian C. J. Moore, and Nick Zacharov 1012

> The subjective evaluation of nonlinear distortions often shows a weak
> correlation with physical measures because the choice of distortion
> metrics is not obvious. In reexamining this subject, the authors
> validated a metric based on the change in the spectrum in a series of
> spectral bins, which when combined leads to a single distortion
> metric. Distortion was evaluated both objectively and subjectively
> using speech and music. Robust results support the hypothesis for this
> approach.
> [unquote]

> So you are not the only one asking himself why ...
> "The subjective evaluation of nonlinear distortions often shows a weak
> correlation with physical measures..."

No, it's something that those nasty old objectivists have been discussing
quite a bit, and for years.

> It is, as I have made now ABUNDANTLY clear, ...
> "because the choice of distortion metrics is not obvious."

This is supposed to be news?

> Yeah.
>
> There is another very interesting article in the same issue of the
> JAES, one which will make the Hard Line Objectivists puke, as it makes
> clear that even the tough AES boys roll over to the soft-in-the-head
> camp:

> [quote]
> Large-Signal Analysis of Triode Vacuum-Tube Amplifiers
> Muhammad Taher Abuelma'atti 1046

> With the renewed interest in vacuum tubes, the issue of intrinsic
> distortion mechanisms becomes relevant again. The author demonstrates
> a nonlinear model of triodes and pentodes that leads to a closed-form
> solution when the nonlinearity is represented by a Fourier expansion
> rather than the conventional Taylor series. When applied to a two-tone
> sine wave, the analysis shows that the distortion in tube amplifiers
> is similar to that of the equivalent transistor amplifier. A SPICE
> analysis confirms the approach.
> [unquote]

> Yeah, even with simple two-tone sine waves it is now ESTABLISHED
> OBJECTIVELY that tube amps do NOT distort more than transistor amps.

Actually, that's not what it says, but straightening out Raedecker would be
a full-time job for a larger committee than just me.

> So it says.

> ===========

> Oh, WHAT a field day for the Subjectivists today.

Why?

> Oh, WHAT a dismal day for the HLOs like Pinkerton, Krueger, Ferstler
> and all the rest of them.

Really?

> All those years they have thought that they have at least the AES on
> their side, and now the AES deserts them. It must be an annus
> horribilis for them.

Not at all. This is just more of Raedecker's ignorant, self-contradictory
posturing.

John Atkinson

unread,
Dec 26, 2003, 9:55:32 PM12/26/03
to
"ScottW" <Scot...@hotmail.com> wrote in message
news:<Pz1Hb.41708$m83.13206@fed1read01>...

> Isn't there a question about the validity of applying this test to CD
> players which don't have to regenerate the clock?
>
> I thought it was generally applied to HT receivers with DACs
> and external DACs?

Hi ScottW, yes, the J-Test was originally intended to examine devices where
the clock was embedded in serial data. What I find interesting is that
CD players do differ quite considerably in how they handle this signal,
meaning that there are other mechanisms going on producing the same effect.
(Meitner and Gendron's LIM, for example, which they discussed in an AES paper
about 10 years ago.) And of course, those CD players that use an internal
S/PDIF link stand revealed for what they are on the J-Test.

BTW, you might care to look at the results on the J-Test for the Burmester
CD player in our December issue (available in our on-line archives). It did
extraordinarily well on 44.1k material, both on internal CD playback and
on external S/PDIF data, but failed miserably with other sample rates.
Most peculiar. My point is that the J-Test was invaluable in finding
this out.
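For readers unfamiliar with it, Julian Dunn's J-Test is usually described as a
sine at exactly one quarter of the sample rate at -3dBFS, summed with an
LSB-level square wave at 1/192 of the sample rate; the low-level square wave
gives the data pattern the periodicity that makes data-correlated jitter show
up as sidebands. A minimal sketch under that common description (the function
name and exact scaling are illustrative, not Stereophile's or anyone else's
actual test code):

```python
import math

def j_test(n_samples, bits=16):
    """Sketch of the J-Test stimulus as usually described: an fs/4 sine at
    -3 dBFS plus an fs/192 square wave whose amplitude is one LSB."""
    lsb = 1.0 / (2 ** (bits - 1))      # one 16-bit LSB (full scale = 1.0)
    amp = 10 ** (-3.0 / 20.0)          # -3 dBFS
    out = []
    for n in range(n_samples):
        tone = amp * math.sin(2 * math.pi * n / 4)    # fs/4 sine
        square = lsb if (n // 96) % 2 == 0 else -lsb  # fs/192 square wave
        out.append(tone + square)
    return out
```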

John Atkinson
Editor, Stereophile

John Atkinson

unread,
Dec 26, 2003, 10:01:13 PM12/26/03
to
"ScottW" <Scot...@hotmail.com> wrote in message
news:<Pz1Hb.41708$m83.13206@fed1read01>...
> I thought it was generally applied to HT receivers with DACs
> and external DACs?

Oh, and one more thing. I have no problem with people not thinking this
test is useful, or is not being applied appropriately, or offers no
proven correlation with audible problems. If those are your opinions, I
have no intention of arguing with you. They just don't happen to be _my_
opinions. I see nothing wrong in us agreeing to disagree. What I _am_
objecting to is Arny Krueger's trying to disseminate something that is
not true, which is his statement that tests that don't use dither are
forbidden by an AES standard. For him to keep repeating this falsehood
is dirty pool.

John Atkinson
Editor, Stereophile

Arny Krueger

unread,
Dec 26, 2003, 10:21:28 PM12/26/03
to

It says that it's your mother, Mr. Atkinson. I don't believe it.

>No, Mr.
> Krueger. The story hasn't changed. I was merely pointing out that
> Paul Bamborough and Glenn Zelniker, both digital engineers with
> enviable reputations, posted agreement with the case I made, and as I
> said, joined me in pointing out the flaws in your argument.

Zelniker and Bamborough have both gone on long and loud about their personal
disputes with me and how low their personal opinions of me are. Therefore, they
are not unbiased judges of this matter and should be ignored. If they were
honest men, they would recuse themselves, but of course they are not honest
men.

>>> So let's examine what the Audio Engineering Society (of which I am
>>> a long-term member and Mr. Krueger is not) says on the subject of
>>> testing digital gear, in their standard AES17-1998 (revision of
>>> AES17-1991):
>>> Section 4.2.5.2: "For measurements where the stimulus is generated
>>> in the digital domain, such as when testing Compact-Disc (CD)
>>> players, the reproduce sections of record/replay devices, and
>>> digital-to-analog converters, the test signals shall be dithered.

>>> I imagine this is what Mr. Krueger means when wrote "The AES says
>>> don't do it." But unfortunately for Mr. Krueger, the very same AES
>>> standard goes on to say in the very next section (4.2.5.3):
>>> "The dither may be omitted in special cases for investigative
>>> purposes. One example of when this is desirable is when viewing bit
>>> weights on an oscilloscope with ramp signals. In these circumstances
>>> the dither signal can obscure the bit variations being viewed."

>> At this point Atkinson tries to confuse "investigation" with "testing
>> equipment performance for consumer publication reviews" Of course
>> these are two very different things...

> Not at all, Mr. Krueger. As I explained to you back in 2002 and again
> now, the very test that you describe as "really weird" and that you
> claim the "AES says don't do" is specifically outlined in the AES
> standard as an example of a test for which a dithered signal is
> inappropriate, because it "can obscure the bit variations being
> viewed."

Apparently the meaning of the word "investigation" is very unclear to you,
Atkinson. At this late time in your education, I won't be of much help in
terms of clarifying it for you.

> It is also fair to point out that both the undithered ramp signal and
> the undithered 1kHz tone at exactly -90.31dBFS that I use for the same
> purpose are included on the industry-standard CD-1 Test CD, that was
> prepared under the aegis of the AES.

Irrelevant to the issue of subjective relevancy.
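As an aside for readers wondering where the -90.31dBFS figure quoted above
comes from: it is the level at which a sine wave's peak equals exactly one
16-bit LSB. A quick arithmetic check (illustrative only, not part of either
party's test procedure):

```python
import math

# -90.31 dBFS is the level of a sine whose peak is exactly one LSB of a
# 16-bit word (full scale = 32768 LSBs), so an undithered tone at this
# level exercises only the three lowest codes: -1, 0, and +1.
lsb_peak_dbfs = 20 * math.log10(1 / 32768)
print(round(lsb_peak_dbfs, 2))  # -90.31
```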

> If you continue to insist that the AES says "don't do it," then why on
> earth would the same AES help make such signals available?

For the purpose of scientific and advanced technical investigation, not for
routine testing for a consumer publication.

>>> As the first specific test I use an undithered signal for is indeed
>>> for investigative purposes -- looking at how the error in a DAC's
>>> MSBs compare to the LSB, in other words, the "bit weights" -- it
>>> looks as if Mr. Krueger's "The AES says don't do it" is just plain
>>> wrong.

>> The problem here is that again Atkinson has confused detailed
>> investigations into how individual subcomponents of chips in the
>> player work (i.e., "[investigation]") with the business of
>> characterizing how it will satisfy consumers.

> The AES standard concerns the measured assessment of "Compact-Disc
> (CD) players, the reproduce sections of record/replay devices, and
> digital-to-analog converters." As I pointed out, it makes an exception
> for "investigative purposes" and makes no mention of such "purposes"
> being limited to the "subcomponents of chips."

Apparently the meaning of the word "investigation" is very unclear to you,
Atkinson. At this late time in your education, I won't be of much help in
terms of clarifying it to you.

>The examination of "bit
> weights" is fundamental to good sound from a digital component,
> because if each one of the 65,535 integral step changes in the
> digital word describing the signal produces a different-sized change
> in the reconstructed analog signal, the result is measurable and
> audible distortion.

It is an absolute and total falsehood that missing one of the 65,535 integral
step changes in the digital words describing the signal, producing a
different-sized change in the reconstructed analog signal, will result in
measurable and audible distortion.

A careful reading of the following AES standards documents will show why:

AES-6id-2000 AES information document for digital audio -- Personal computer
audio quality measurements (35 pages) [2000-05-01 printing]

AES17-1998 (revision of AES17-1991) AES standard method for digital audio
engineering -- Measurement of digital audio equipment

If you are unaware of the reasons for my claim, I would be happy to explain
them to you, Atkinson.


>> Consumers don't care about whether one individual bit of the
>> approximately 65,000 levels supported by the CD format works, they
>> want to know how the device will sound.
>
> Of course. And being able to pass a "bit weight" test is fundamental
> to a digital component being able to sound good.

Absolutely not. It's easy to show that many bits can be dropped and/or
otherwise mangled with zero measurable and audible results.

> This is why I
> publish the results of this test for every digital product reviewed
> in Stereophile. I am pleased to report that the bad old days, when
> very few DACs could pass this test, are behind us.

True, but loss of a few bit levels here and there can have zero audible and
measured effects.

>> It's a simple matter to show that nobody, not even John Atkinson can
>> hear a single one of those bits working or not working.

> I am not sure what this means.

It means that you are wrong.

> If a player fails the test I am
> describing, both audible distortion and sometimes even more audible
> changes in pitch can result.

I can cite a great many cases where neither audible nor measurable effects
occur, other than of course in some of your sonically-irrelevant tests.

> I would have thought it important for
> consumers to learn of such departures from ideal performance.

Atkinson, you simply don't know what you are talking about when it comes to
the audibility of small differences.

>> Yet he deems it appropriate to confuse consumers with this sort of
>> [minutiae], perhaps so that they won't notice his egregiously-flawed
>> subjective tests.

> In your opinion, Mr. Krueger, and I have no need to argue with you
> about opinions, only when you misstate facts. As you have done in this
> instance.

I have done no such thing.

> To recap:


> I use just two undithered test signals as part of the battery of tests
> I perform on digital components for Stereophile. Mr. Krueger has
> characterized my use of these test signals as "really weird" and has
> claimed that their use is forbidden by the Audio Engineering
> Society. Yet, as I have shown by quoting the complete text of the
> relevant paragraphs from the AES standard on the subject, one of the
> tests I use is specifically mentioned as an example as the kind of
> test where dither would interfere with the results and where an
> undithered signal is recommended.

Apparently the meaning of the word "investigation" is very unclear to you,
Atkinson. At this late time in your education, I won't be of much help in
terms of clarifying it to you.

> As my position on this subject has been supported by two widely
> respected experts on digital audio, I don't think that anything more
> needs to said about it.

Zelniker and Bamborough have both gone on long and loud about their personal
disputes with me and how low their personal opinions of me are. Therefore, they
are not unbiased judges of this matter and should be ignored. If they were
honest men, they would recuse themselves, but of course they are not honest
men.


> And as I said, Mr. Krueger is also incorrect about the second
> undithered test signal I use, which is to examine a DAC's or CD
> player's rejection of word-clock jitter. My use is neither "really
> weird," nor is it specifically forbidden by the Audio Engineering
> Society.

But it is. The problem is that Atkinson doesn't know what the word
"investigation" means. He has obviously forgotten that he is testing audio
gear for consumers, and that perceived sound quality should guide his
testing procedures.

Not at all. You've just talked around the issue one more time without
dealing with it.

> Finally, you recently claimed in another posting that your attacking
> me was "highly appropriate," given that my views about you "are
> totally fallacious, libelous and despicable." I suggest to those
> reading this thread that they note that Arny Krueger has indeed made
> this discussion highly personal, using phrases such as "the voices
> that only [John Atkinson] hears"; "Notice how [John Atkinson's] story
> is changing right before our very eyes!"; "the same-old, same-old old
> Atkinson song-and-dance which reminds many knowledgeable people of
> that old carny's advice 'If you can't convince them, confuse them!'";
> "This is just more of Atkinson's 'confuse 'em if you can't convince
> 'em' schtick"; "Atkinson is up to tricks as usual."

I can't change the facts about your many deceptions, Atkinson. Only you can
do that.

> I suggest people think for themselves about how appropriate Mr.
> Krueger's attacks are, and how relevant they are to a subject where
> it is perfectly acceptable for people to hold different views.

The real problem is that there's an unhidden agenda in this discussion,
which is audibility. Atkinson refuses to deal with this issue directly and
appropriately because he knows that he will lose. His many deceptions,
attempts to cite false witnesses, and inability to clearly and directly deal
with the issues that I raised should be clear to all readers but numskulls
like George Middius.

Arny Krueger

unread,
Dec 26, 2003, 10:31:29 PM12/26/03
to
"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
news:113bd5e2.03122...@posting.google.com

>What I _am_ objecting to is Arny Krueger's trying to
> disseminate something that is not true, which is his statement that
> tests that don't use dither are forbidden by an AES standard. For him
> to keep repeating this falsehood is dirty pool.

Apparently the meaning of the word the AES uses, namely "investigation," is
very unclear to you, Atkinson. At this late time in your education, I won't
be of much help in terms of clarifying it for you.

Since all of the media that consumers play on their digital equipment is
supposed to be dithered and generally is, there's no justification for the
use of undithered test signals when testing consumer equipment for the
purpose of reporting performance to consumers.
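The practical difference dither makes at low levels can be seen in a toy
experiment: a tone a quarter of an LSB in amplitude quantizes to dead silence
without dither, but is preserved on average once TPDF dither is added before
rounding. This is a hedged sketch of the general principle only, not a
reconstruction of any test discussed in this thread:

```python
import math, random

random.seed(1)

def quantize(x, dither=False):
    # Round x (expressed in LSB units) to an integer code, optionally
    # adding TPDF dither of +/-1 LSB peak before rounding.
    d = (random.random() - random.random()) if dither else 0.0
    return round(x + d)

n = 20000
# A 997 Hz tone with a peak of one quarter LSB, sampled at 44.1 kHz.
signal = [0.25 * math.sin(2 * math.pi * 997 * t / 44100) for t in range(n)]

plain = [quantize(s) for s in signal]            # quantizes to dead silence
dith = [quantize(s, dither=True) for s in signal]

# Cross-correlating with the input recovers the tone only when dithered.
corr = sum(a * b for a, b in zip(dith, signal))
print(all(v == 0 for v in plain), corr > 0)      # True True
```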

In your last post you made a fairly telling false claim, Atkinson:

"The examination of "bit weights" is fundamental to good sound from a
digital component, because if each one of the 65,535 integral step changes
in the digital word describing the signal produces a different-sized change
in the reconstructed analog signal, the result is measurable and audible
distortion."

This is easy to show to be a grotesquely false claim. At this time I'm leaving
disproving it as an exercise, but in due time I will conclusively show why
it is a false claim on the measurement side, using the following AES
standards documents:

AES-6id-2000 AES information document for digital audio -- Personal computer
audio quality measurements (35 pages) [2000-05-01 printing]

AES17-1998 (revision of AES17-1991) AES standard method for digital audio
engineering -- Measurement of digital audio equipment

Anybody who wants to study the audibility of systematic bit reduction for
themselves need only visit www.pcabx.com .

The most relevant page at the PCABX web site would be:

http://www.pcabx.com/technical/bits44/index.htm

and

http://www.pcabx.com//technical/sample_rates/index.htm

Both pages show musical samples with fairly gross removal of bits that is
also completely undetectable to even the most sensitive listeners using SOTA
or near-SOTA monitoring equipment. In this particular case, PCABX is a
rigorous and exact methodology as opposed to some other cases where PCABX is
merely a highly convenient and effective approximation of rigorous and exact
methods.


ScottW

unread,
Dec 26, 2003, 10:36:10 PM12/26/03
to

"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
news:113bd5e2.03122...@posting.google.com...

Of course the test is valid for external data. Still interesting
that the reviewer never picked this up.

I would be interested to see a DBT to support Brian's perception
that, "as the centerpiece of a digital-only system, running balanced
from stem to stern, the Burmester 001 is the best digital front-end
I've ever heard." I'd be interested in seeing if he really could
identify balanced from single ended.

ScottW


ScottW

unread,
Dec 26, 2003, 10:41:08 PM12/26/03
to

"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
news:113bd5e2.03122...@posting.google.com...
> "ScottW" <Scot...@hotmail.com> wrote in message
> news:<Pz1Hb.41708$m83.13206@fed1read01>...
> > I thought it was generally applied to HT receivers with DACs
> > and external DACs?
>
> Oh, and one more thing. I have no problem with people not thinking this
> test is useful, or is not being applied appropriately, or offers no
> proven correlation with audible problems. If those are your opinions, I
> have no intention of arguing with you. They just don't happen to be _my_
> opinions.

I have no opinion on correlation with audible problems as I haven't
seen anyone show it does or does not exist.

Upon what basis have you formed your opinion?

Looks like you might have a good candidate for testing
in the Burmester.

> I see nothing wrong in us agreeing to disagree. What I _am_
> objecting to is Arny Krueger's trying to disseminate something that is
> not true, which is his statement that tests that don't use dither are
> forbidden by an AES standard. For him to keep repeating this falsehood
> is dirty pool.

Agreed. He is becoming a bit repetitious with this assertion.
I try not to let him be too distracting in these moments.

ScottW


S888Wheel

unread,
Dec 26, 2003, 10:49:17 PM12/26/03
to
I said

>
>> You asked for my definition of well done DBTs

Scott said

>
> But I don't agree that is necessary for Stereophile to
>implement.
>

Fair enough, but we disagree. I believe that if Stereophile were to go through
the trouble to enforce a standard DBT for all reviewers to use in order to
improve the reliability of reviews, it only makes sense to me to have the
protocols adhere to a scientifically acceptable standard. Anything less looks
like a large demand made on hobbyist reviewers with little assurance that it
will improve the reliability of such reviews.

I said

>
>> I don't think it would require independent witnesses but it would require
>> Stereophile to establish their own formal peer review group.

Scott said

>
>Let me be clear, I don't want to impose a bunch of requirements that
>make this effort too difficult to implement.
>

Then how could the readers be sure valid tests are being conducted and
reported?

I said

>
>> But we are talking about Stereophile dealing with the current level of
>> uncertainty that now exists with the current protocols.

Scott said

>
> ? Stereophile has conducted elaborate DBTs with more rigorous
>protocols than I call for.

Yes. So they know how difficult it would be to make this a requirement before
any subjective review may go forward.


Scott said

> Let them establish a protocol
>that is workable for their reviewers to conduct.
>Publish it for comment, should be very interesting.
>

I'm not clear what you want now. You want Stereophile to establish a protocol,
but you want the protocol to be easily executed, and it need not meet the
rigors demanded by science? What would such protocols look like that they would
substantially improve the reliability of subjective reviews and yet not meet
the standards demanded by science and be easy for all reviewers to do?

I said

>> I think to do standard DBTs right would be a major pain in the ass for
>> them. Even the magazines which make a big issue out of such tests don't
>> often actually do such tests and when they do they often do a crap job
>> of it.

Scott said

>
> If they have a tool which controls switching and tabulates results,
> I really don't see what the problem is.

Would you want someone to do this without any training? Would you like this to
happen with no calibration of test sensitivity or listener sensitivity? Would
you want such testing to take place absent verification with a separate set of
tests conducted by another tester?

Scott said

>
> What needs to happen is that a level of automation is provided
> to match the skill level of the tester. That wouldn't be that difficult.

I'm sure it would help. I'm not sure it wouldn't still be challenging for
hobbyists to get reliable results.

>
>> Scott said
>>
>> >A tool to allow a single person to conduct and report
>> >statistically valid results (if not independently witnessed)
>> >would be required. After that, conducting the tests would
>> >be relatively easy.

I said

>
>> Is it ever easy? Look what Howard did with such a tool.
>

Scott said

>
> I don't believe Howard's ABX box provided the level of automation
>I am talking about.

I think the box was fine. I think the problem was Howard. Were it not for some
stupid mistakes he made in math, we may never have known just how bad his tests
really were. Let's not forget they were published.

John Atkinson

unread,
Dec 26, 2003, 10:51:04 PM12/26/03
to
"Powell" <nos...@noquacking.com> wrote in message
news:<vuovogq...@corp.supernews.com>...
> The Carver Challenge. Bob Carver made statements
> that he could replicate any amplifier design using a
> technique called "transfer function." Stereophile
> took up his challenge wanting Bob to replicate the
> sound of the Conrad-Johnson Premier 5 mono-blocks.
> I think that over a two-day period he accomplished
> that task to Holt's and Archibald's satisfaction. From
> there the Carver M1.0t was born...

>
> On the dark side to this tale, Stereophile was later
> prohibited from publishing any reference to Carver
> after trying to undo (publish/verbally) the results of the
> empirical findings.

Hi Powell, I dug into my archives as promised. Here is a blow-by-blow
account of what happened:

Sometime in 1985, Bob Carver challenged Stereophile's then editor, J.
Gordon Holt, and its publisher, Larry Archibald, that he could match
the sound of any amplifier they chose with one of his inexpensive
"Magnetic-Field" power amplifiers. Mr. Carver subsequently visited
Santa Fe to try to modify his amplifier so that it matched the sound
of an expensive tube design. (This was indeed a Conrad-Johnson
monoblock, though that was not reported at the time. I am not aware of
the reasons why not as I didn't join Stereophile's staff until May 1986.)

The degree to which the two amplifiers matched was confirmed using a
null test. At first, however, even though the measured null was 35dB
down from the amplifiers' output levels (meaning that any difference
was at the 1.75% level), JGH and LA were able to distinguish the Carver
from the target amplifier by ear. It was only when Bob Carver lowered
the level of the null between the two amplifiers to -70dB -- a 0.03%
difference -- that the listeners agreed that the Carver and the
reference amplifier were sonically indistinguishable. (This entire
series of tests was reported on in the October 1985 issue of
Stereophile, Vol.8 No.6.)

Neither Gordon Holt nor myself had further access to the original
prototype amplifier that had featured in the 1985 tests. Some 18 months
later, however, after I joined the magazine, Stereophile was sent a
production version of the Carver M-1.0t. This was an amplifier that we
had understood Bob Carver intended to sound and measure identically to
the prototype that had featured in the 1985 listening tests. Because of
this sonic "cloning," the production M-1.0t was advertised by Carver
as sounding identical to the tube amplifier that had featured in the
1985 Stereophile tests: "Compare the new M-1.0t against any and all
competition. Including the very expensive amplifiers that have been
deemed the M-1.0t's sonic equivalent," stated Carver Corporation
advertisements in Audio (October 1986), Stereo Review (February 1987),
and in many other issues of those magazines.

After careful and independent auditioning of the production M-1.0t,
Gordon and I felt that while the M-1.0t was indeed similar-sounding
to the tube design, it did not sound identical. Certainly it could not
be said that we deemed it the sonic equivalent of the "very expensive
amplifiers." Measuring the outputs of the two amplifiers at the
loudspeaker terminals while each was driving a pair of Celestion
SL600 speakers revealed minor frequency-response differences due to
the amplifiers having different output impedances. (The Carver had a
conventionally low output impedance; in subsequent auditioning, Bob
Carver connected a long length of Radio Shack hook-up cable in series
with its output to try to reduce the sonic differences between the
amplifiers.)

In addition, carrying out a null test between the production M-1.0t
amplifier sent to Stereophile by Carver and the reference tube
amplifier revealed that, at best, the two amps would only null to
-40dB at 2kHz, a 1% difference, diminishing to -20dB below 100Hz and
above 15kHz, a 10% difference.
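The dB and percentage figures quoted throughout this account are related by a
single conversion from null depth (a voltage ratio) to residual difference;
note that 35dB works out to about 1.78%, which the original article rounded to
1.75%. A small sketch of the arithmetic (illustrative only, not the
measurement procedure):

```python
def null_pct(depth_db):
    # Residual difference between two amplifiers, as a percentage of their
    # output level, implied by a null depth_db below that level (voltage
    # ratio, hence the divide-by-20).
    return 100 * 10 ** (-depth_db / 20)

for db in (20, 35, 40, 70):
    print(db, "dB ->", round(null_pct(db), 2), "%")
# 20 dB -> 10.0 %, 35 dB -> 1.78 %, 40 dB -> 1.0 %, 70 dB -> 0.03 %
```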

These null figures were not significantly altered by changing the tube
amplifier's bias or by varying the line voltage with a Variac. We then
borrowed another sample of the M-1.0t from a local dealer and nulled
that against the target amplifier. The result was an even shallower
null. In the original 1985 tests, JGH and LA had proven to Mr. Carver
that they could identify two amplifiers that had produced a 35dB null
by ear. These 1987 measurements therefore reinforced the idea that the
production M-1.0t did not sound the same as the target tube amplifier
even though the original hand-tweaked prototype did appear to have done
so, driving the midrange/treble panels of Infinity RS1B loudspeakers
(but not the woofers).

Upon being informed of these results, Bob Carver flew out to Santa Fe
at the end of February 1987 and carried out a set of null tests that
essentially agreed with my measurements, a fact confirmed by Mr. Carver
in a letter published in Stereophile: "...the null product between the
amplifiers...is 28dB, not 70dB! Your tests showed this, and so did mine,"
he wrote, and went on to say that "Since my own research has shown that
the threshold for detecting differences is about 40dB, I knew there was
enough variance between the amps to be detectable by a careful listener."

Mr. Carver then asked us to participate in a series of single-blind
tests comparing the production M-1.0t with the tube amplifier, with
himself acting as the test operator. We agreed, and Gordon Holt proved
to be able to distinguish the two amplifiers by ear alone in these tests.
"J. Gordon was able to hear the difference between my M-1.0t and the
reference amp in a blind listening test," Mr. Carver wrote in the letter
referred to above, continuing, "An earlier test had shown Gordon's
hearing to be flawless, like that of a wee lad." (All of Stereophile's
tests and auditioning of the production M-1.0t, the events that occurred
during the February '87 weekend Mr. Carver spent in Santa Fe, and Mr.
Carver's subsequent letter were reported on and published in the
April/May 1987 issue of Stereophile, Vol.10 No.3.)

Bob Carver subsequently reinforced the idea that it is hard to
consistently manufacture amplifiers which differ from a target design
by less than 1% or so: ie, cannot produce a null deeper than -40dB. In
an interview with me that appeared in the February 1990 issue of
Stereophile (Vol.13 No.2), he said, "A 70dB null is a very steep null.
It's really down to the roots of the universe and things like that.
70dB nulls aren't possible to achieve in production." I asked him,
therefore, what his target null was between the Silver Seven-t and the
original Silver Seven amplifier, two more-recent Carver amplifiers that
are stated in Carver's literature to sound very similar. "About 36dB,"
was his reply. "When you play music, the null will typically hover
around the 36dB area." (Bob Carver subsequently confirmed that I had
correctly reported his words in this interview.)

Far from recanting their original findings, Stereophile's staff
reported what they had measured and what they had heard under
carefully controlled conditions regarding the performance of the
production Carver M-1.0t amplifier, just as they do with any component
being reviewed. The fact that those findings were at odds with their
earlier experience can be explained by the fact that the amplifiers
auditioned were _not_ identical: the 1985 tests involved a
hand-tweaked prototype based on a Carver M-1.5 chassis; the 1987 tests
involved a sample M-1.0t taken from the production line and it is my
understanding that there was no production hand-nulling.

In addition, the 1985 tests only involved the midrange and treble towers
of Infinity RS-1b loudspeakers, the woofers being driven by a different
amplifier. For the 1987 review and subsequent tests, the Carver and
C-J amps were used full-range.

While it would indeed appear possible for Mr. Carver on an individual,
hand-tweaked basis to achieve a null of -70dB between two entirely
different amplifiers (meaning that it would be unlikely for them to
sound different), routinely repeating this feat in production is not
possible (something implied by Mr. Carver's own statements). And if it
is not possible, then it is likely that such amplifiers could well sound
different from one another, just as Stereophile reported.

Regarding subsequent events, we published reviews of the Carver TFM-25
and Silver Seven-t amplifiers in 1989 and the Carver Amazing loudspeaker
in 1990. Also in 1990, an edited version of a Stereophile review
appeared in Carver literature and in an advertisement that appeared in
the May/June 1990 issue of The Absolute Sound. We took legal action to
prevent this from happening, as we do in any instance of infringement
of our copyright. Bob Carver responded by filing a countersuit for
defamation, trade disparagement, product disparagement, and
interference with a business expectancy, against Stereophile Inc.,
against Larry Archibald, against Robert Harley, and against myself,
claiming "in excess of $50,000" in personal damages for Mr. Carver
and "in excess of $3 million" in lost sales for the Carver Corporation.
(This sum was later raised to $7 million on the appearance of a Follow-Up
review of the Carver Silver Seven-t amplifier in our October 1990 issue.)

Carver's countersuit included some 42 individual counts of purported
defamation dating back to J. Gordon Holt's reporting of the original
"Carver Challenge" in the middle of 1985. J. Gordon Holt was _not_
named in the countersuit, however, and neither were Dick Olsher and Sam
Tellig, who had also written reviews of Carver products. In effect, I
was being sued for things that had been published in Stereophile a year
before I joined.

What we had expected to be a conventional copyright case had turned
into something much greater in its scope and financial consequences.
Neither case ever went to court, however, the two sides agreeing to
an arbitrated settlement in late December 1990, with the help of a
court-appointed mediator. Agreement was reached for a settlement with
prejudice (meaning that none of the claims and counterclaims can be
revived by either side) that took effect on 1/1/91.

The settlement agreement was made a public document. The main points
were:

a) Neither side admitted any liability.

b) Neither side paid any money to the other.

c) Carver recalled all remaining copies of the unauthorized reprint for
destruction.

d) With the exception of third-party advertisements, Stereophile agreed
not to mention in print Carver the man, Carver the company, or Carver
products for a cooling-off period of three years starting 1/1/91 or
until the principals involved were no longer with their respective
companies.

That's it. Stereophile returned to giving review coverage to Carver
products in the usual manner after 1/1/94.

John Atkinson
Editor, Stereophile

S888Wheel

Dec 26, 2003, 11:04:54 PM
Scott said

>
>> > Ok, then lets not impose such a level of rigor.
>> > Nothing else reported in Stereophile has to meet this criteria,
>> > why impose it on DBTs?
>>

I said

>
>> For the sake of improving protocols to improve reliability of subjective
>> reports.

Scott said

> I don't agree. Sufficient DBT protocols exist. Stereophile has used them.
>No "improvement" in DBT protocols is required. Applying existing DBT
>protocols
>would be sufficient to confirm or deny audible differences exist.

What exactly are those protocols? I would say the protocols used by those
advocating the use of DBTs in other publications were lacking and they were
being passed off as definitive proof of universal truths. That would be
counterproductive IMO. I don't know what protocols Stereophile used in their
DBTs but I am willing to bet that if they were scientifically valid they were
far too burdensome to be done before every subjective review published in
Stereophile. Maybe John Atkinson could comment on that. I am speculating.

I said

>
>>When it comes to such protocols I think quality is more important than
>> quantity.

Scott said

>
> Unfortunately this opens a major loophole. Cherry picking the
>units to be tested such that audible differences are assured.

I don't follow.

Scott said

>I would like to see DBTs become part of the standard review
>protocol for select categories of equipment.

Isn't that cherry picking?

Scott said

>
>Most reviewers like to compare equipment under review to
>their personal reference systems anyway.

That in and of itself presents a problem. One cannot always draw universal
truths about any component based on use in one system. I think multiple
reviewers for any one piece of equipment would be a very good idea but I
suspect it isn't practical or financially feasible.

I said

>
>> If DBTs aren't done well they will not improve the state of reviews
>> published by Stereophile.
>

Scott said

>
> We differ on how well done they need to be to add
>credibility to audible difference claims.

I agree. So in light of our disagreement imagine the difficulty of enforcing a
standard of rigor and protocol on a group of reviewers who mostly review as a
hobby. Imagine the expense involved in initiating such protocols. I think even
if we don't agree on the standard we probably agree that there should be a
standard.

I said

>
>>The source of your beef with Stereophile is that it
>> lacks reliability now is it not?

Scott said

>
> Not exactly. I find the subjective perceptions
>portion of some reviews to lack credibility.
>

I take them as anecdotes just as I take opinions of other audiophiles as
anecdotes. Their level of unreliability does not bother me because I assume
they are unreliable as anecdotes tend to be.

ScottW

Dec 26, 2003, 11:14:17 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031226224917...@mb-m06.aol.com...

Same way they accept the measurements.
Faith in integrity.

>
> I said
>
> >
> >> But we are talking
> >> about Stereophile dealing with the current level of uncertainty that now
> >> exists with the current protocols.
>
> Scott said
>
> >
> > ? Stereophile has conducted elaborate DBTs with more rigourous
> >protocols than I call for.
>
> Yes. So they know how difficult it would be to make this a requirement before
> any subjective review may go forward.

From what I recall, they were somewhat manually conducted
and required more than one person.
This does not have to be the case if the right tools
are developed.


>
>
> Scott said
>
> > Let them establish a protocol
> >that is workable for their reviewers to conduct.
> >Publish it for comment, should be very interesting.
> >
>
> I'm not clear what you want now. You want Stereophile to establish a protocol
> but you want the protocol to be easily executed and it need not meet the rigors
> demanded by science?

Please define the "rigors demanded by science" so I know what you mean.
I keep thinking of tests conducted by pharmaceutical companies against
FDA requirements.


>What would such protocols look like that they would
> substantially improve the reliability of subjective reviews and yet not meet
> the standards demanded by science and be easy for all reviewers to do?

A laptop that controlled an ABX switch device, which captured results
and downloaded them to a secure website where they were statistically
analyzed. The reviewer would have the capability to conduct the trials
without assistance and would only be required to perform the connections,
listen (which he is doing anyway), and choose.
Science would not be satisfied as no one independently confirmed the
connections and witnessed the procedure.
A fair amount of faith in the integrity of the reviewer would be granted.
The reviewer could conduct as many trials as they wish over the course
of the review.
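
A trial loop of the sort this hypothetical tool would run can be sketched in a few lines (illustrative only; the function names and the random hidden assignment of X are assumptions, not any actual Stereophile or PCABX implementation):

```python
import random

def run_abx_trials(n_trials: int, answer_fn) -> int:
    """Run n_trials ABX trials. For each trial the tool secretly assigns
    X to be source 'A' or 'B'; answer_fn(trial_number) returns the
    listener's guess. Returns the number of correct identifications."""
    correct = 0
    for trial in range(n_trials):
        x_is = random.choice("AB")      # hidden assignment, unknown to listener
        if answer_fn(trial) == x_is:    # listener's choice for this trial
            correct += 1
    return correct

# A listener with no ability to hear a difference scores near 50%:
random.seed(1)
print(run_abx_trials(100, lambda t: "A"))
```

The tabulated `correct` count is what the downstream statistical analysis would operate on.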


>
> I said
>
> >>I think to do standard DBTs right would be a major
> >> pain in the ass for them. Even the magazines which make a big issue out
> >> of such tests don't often actually do such tests and when they do they
> >> often do a crap job of it.
>
> Scott said
>
> >
> > If they have a tool which controls switching and tabulates results,
> > I really don't see what the problem is.
>
> Would you want someone to do this without any training?

The reviewer has plenty of time for training. Aren't they
doing this as a normal part of the review process, familiarizing
themselves with the sonic characteristics of the equipment and
comparing them to reference pieces?

> Would you like this to
> happen with no calibration of test sensitivity or listener sensitivity?

No. We are specifically trying to validate the reviewer's
perception that it sounds different. Take the Burmester review
where he said it sounded better in balanced mode than SE.
Prove it sounded different. The tests didn't support that assertion.

>Would
> you want such testing to take place absent verification with a separate set of
> tests conducted by another tester?

Yes, I trust their integrity not to cheat. I also accept that
requiring verification is a substantial cost prohibiter.


>
> Scott said
>
> >
> > What needs to happen is a level of automation is provided
> >to match the skill level of the tester. That wouldn't be that difficult.
>
> I'm sure it would help. I'm not sure it wouldn't still be challenging for
> hobbyists to get reliable results.

Define reliable? Subjective listening tests on one subject
can only apply to that subject. Still, in the case of an equipment
reviewer that one subject is of interest to a large number
of people.


>
> >
> >> Scott said
> >>
> >> >A tool to allow a single person toconduct and report
> >> >statistically valid results (if not independently witnessed)
> >> >would be required. After that, conducting the tests would
> >> >be relatively easy.
>
> I said
>
> >
> >> Is it ever easy? Look what Howard did with such a tool.
> >
>
> Scott said
>
> >
> > I don't believe Howard's ABX box provided the level of automation
> >I am talking about.
>
> I think the box was fine. I think the problem was Howard. Were it not for some
> stupid mistakes he made in math we may have never known just how bad his tests
> really were. Let's not forget they were published.

The math is trivial. We can set up a spreadsheet. In fact I've seen a
couple on the Web. Let John do the math and include the outcome in his
measurements section.
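
The spreadsheet math in question is just the one-tailed binomial probability of scoring at least that well by guessing; a sketch (illustrative, not any published Stereophile analysis):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-tailed probability of getting at least `correct` answers right
    out of `trials` ABX trials by chance alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials would occur by guessing about 3.8% of the
# time, below the conventional 5% significance threshold:
print(round(abx_p_value(12, 16), 4))   # 0.0384
```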

ScottW


ScottW

Dec 26, 2003, 11:25:30 PM

"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
news:113bd5e2.03122...@posting.google.com...
> "Powell" <nos...@noquacking.com> wrote in message
> news:<vuovogq...@corp.supernews.com>...
> > The Carver Challenge. Bob Carver made statements
> > that he could replicate any amplifier design using a
> > technique called "transfer function." Stereophile
> > took up his challenge wanting Bob to replicate the
> > sound of the Conrad-Johnson Premier 5 mono-blocks.
> > I think that over a two-day period he accomplished
> > that task to Holt's and Archibald's satisfaction. From
> > there the Carver M1.0t was born...
> >
> > On the dark side to this tale, Stereophile was later
> > prohibited from publishing any reference to Carver
> > after trying to undo (publish/verbally) the results of the
> > empirical findings.
>
> Hi Powell, I dug into my archives as promised. Here is a blow-by-blow
> account of what happened:
>
> Sometime in 1985, Bob Carver challenged Stereophile's then editor, J.
> Gordon Holt, and its publisher, Larry Archibald, that he could match
> the sound of any amplifier they chose with one of his inexpensive
> "Magnetic-Field" power amplifiers. Mr. Carver subsequently visited
> Santa Fe to try to modify his amplifier so that it matched the sound
> of an expensive tube design. (This was indeed a Conrad-Johnson
> monoblock, though that was not reported at the time. I am not aware of
> the reasons why not as I didn't join Stereophile's staff until May 1986.)
>
Fascinating tale.
I wonder if two production versions of the Conrad-Johnson
monoblocks would null better than 40 dB?

ScottW


George M. Middius

Dec 26, 2003, 11:29:51 PM

Scottie CellPhone is roaming again.

> Upon what basis have you formed your opinion?

Have you tried combining your dedication to beating on cellphones with
your futile obsession over tests that will never come to pass? You
could try "testing" stereos with the earphone, on speaker, and in a
three-way, and then report to the group about how much of an
audiophile you are.

ScottW

Dec 26, 2003, 11:55:20 PM

"S888Wheel" <s888...@aol.com> wrote in message
news:20031226230454...@mb-m06.aol.com...

> Scott said
>
> >
> >> > Ok, then lets not impose such a level of rigor.
> >> > Nothing else reported in Stereophile has to meet this criteria,
> >> > why impose it on DBTs?
> >>
>
> I said
>
> >
> >> For the sake of improving protocols to improve reliability of
subjective
> >> reports.
>
> Scott said
>
> > I don't agree. Sufficient DBT protocols exist. Stereophile has used
them.
> >No "improvement" in DBT protocols is required. Applying existing DBT
> >protocols
> >would be sufficient to confirm or deny audible differences exist.
>
> What exactly are those protocols?

ITU-R BS.1116 would be a good place to start.

> I would say the protocols used by those
> advocating the use of DBTs in other publications were lacking and they were
> being passed off as definitive proof of universal truths.

I think there have been a couple of blind tests documented for AES

Here's a paper that discusses analysis with 2-tailed statistics (better or
worse than chance):

http://www.aes.org/journal/toc/oct96.html

>That would be
> counterproductive IMO. I don't know what protocols Stereophile used in their
> DBTs but I am willing to bet that if they were scientifically valid they were

My only exception to "scientific validity" is allowing the reviewer to
conduct the test for himself alone, unmonitored.
If he cheats, he cheats. If not the results will withstand scrutiny.

> far too burdensome to be done before every subjective review published in
> Stereophile. Maybe John Atkinson could comment on that. I am speculating.
>
> I said
>
> >
> >>When it comes to such protocols I think quality is more important than
> >> quantity.
>
> Scott said
>
> >
> > Unfortunately this opens a major loophole. Cherry picking the
> >units to be tested such that audible differences are assured.
>
> I don't follow.

I've seen a few tests with positive results where the amps selected
were substantially different.

For example

http://www.stereophile.com/reference/587/index.html

I've seen people point to this test as some kind of revelation
that amps aren't amps.

>
> Scott said
>
> >I would like to see DBTs become part of the standard review
> >protocol for select categories of equipment.
>
> Isn't that cherry picking?

Faced with the reality that some categories, speakers for example,
may be too difficult, we are forced to allow some exceptions.

>
> Scott said
>
> >
> >Most reviewers like to compare equipment under review to
> >their personal reference systems anyway.
>
> That in and of itself presents a problem. One cannot always draw universal
> truths about any component based on use in one system. I think multiple
> reviewers for any one piece of equipment would be a very good idea but I
> suspect it isn't practical or financially feasible.
>
> I said
>
> >
> >> If DBTs aren't done well they will not improve the state of reviews
> >> published by Stereophile.
> >
>
> Scott said
>
> >
> > We differ on how well done they need to be to add
> >credibility to audible difference claims.
>
> I agree. So in light of our disagreement imagine the difficulty of enforcing a
> standard of rigor and protocol on a group of reviewers who mostly review as a
> hobby. Imagine the expense involved in initiating such protocols. I think even
> if we don't agree on the standard we probably agree that there should be a
> standard.

I think it already exists.

ScottW

ScottW

Dec 27, 2003, 12:02:10 AM

"George M. Middius" <Spam-...@resistance.org> wrote in message
news:ih2quvsghtrc75n9p...@4ax.com...

Poor George, the man (or boy or ?) who can't tell a futile obsession
from a mild interest.

Anyone notice that as the audio content of the group rises,
George appears to become almost frantic in his attempts
to derail discussion? Calm down George, I'm sure
the group will return to the normalcy you seek in short order.

ScottW


John Atkinson

Dec 27, 2003, 8:52:05 AM
"Powell" <nos...@noquacking.com> wrote in message news:<vupmdpr...@corp.supernews.com>...
> "John Atkinson" wrote

>> I will dig up the story from my archives and post it to r.a.o.
>
> "from my archives "...

Yes. As many of the questions I am asked return on a regular basis, I
keep a file of what I have written on the subjects. Do you have a problem
with that term?


> Please do and post TAS's version, too.

"TAS"'s version? As no-one at TAS at that time was involved in the Carver
Challenge -- Round 1 in 1985 involved Bob Carver, Larry Archibald, and J.
Gordon Holt; Round 2 in 1987 involved those 3 people plus myself -- I
don't see what anyone at TAS could know about it.


> I take it you have no problem with the other facts stated in the post.

I refer you to my previous posting for a complete discussion of what
happened. I am assuming your knowledge of the case is based on some of the
reports published in magazines other than Stereophile. Subsequent to the
settling of the lawsuit, a number of stories appeared in some audio
magazines, based on interviews with Bob Carver. None of the journalists
involved had spoken with Larry Archibald, Gordon Holt, or myself. At least
one even appeared to have a direct financial arrangement with Mr. Carver.
Both of these facts make their reporting suspect, in my opinion.

John Atkinson
Editor, Stereophile

John Atkinson

Dec 27, 2003, 8:57:27 AM
"Arny Krueger" <ar...@hotpop.com> wrote in message
news:<m9OdncI8HY6...@comcast.com>...
> http://www.google.com/groups?selm=2klvgd%24qui%40rmg01.prod.aol.net

>
> "The lawsuit was a separate issue. Carver Corporation charging that
> Stereophile had engaged in a campaign to discredit Carver or some such.
> In a settlement, they agreed not to mention Carver in their editorial
> pages, although the company was free to continue to advertise in the
> magazine."

This not correct. Here are the relevant paragraphs from the settlement,
which was made a public document in order to clarify the affair:


a) Neither side admitted any liability.

b) Neither side paid any money to the other.

c) Carver recalled all remaining copies of the unauthorized reprint for
destruction.

d) With the exception of _third-party advertisements_ [my underlining],

Stereophile agreed not to mention in print Carver the man, Carver the
company, or Carver products for a cooling-off period of three years
starting 1/1/91 or until the principals involved were no longer with
their respective companies.

John Atkinson
Editor, Stereophile

Arny Krueger

Dec 27, 2003, 9:32:05 AM
"John Atkinson" <Stereophi...@Compuserve.com> wrote in message
news:113bd5e2.03122...@posting.google.com

> The degree to which the two amplifiers matched was confirmed using a


> null test. At first, however, even though the measured null was 35 dB
> down from the amplifiers' output levels (meaning that any difference
> was at the 1.75% level), JGH and LA were able to distinguish the
> Carver from the target amplifier by ear.

This statement is self-congratulatory ("even though"), likely to be false
(no evidence that "able to distinguish" involved a blind test), and
ludicrous (the 1.75% difference can include nearly 2% IM, when the threshold
of hearing for IM can be as little as 0.1%).

> It was only when Bob Carver

> lowered the level of the null between the two amplifiers to -70 dB --


> a 0.03% difference -- that the listeners agreed that the Carver and
> the reference amplifier were sonically indistinguishable. (This entire
> series of tests was reported on in the October 1985 issue of
> Stereophile, Vol.8 No.6.)

This statement is still likely to be false (no evidence that "able to
distinguish" involved a blind test).


> Neither Gordon Holt nor myself had further access to the original
> prototype amplifier that had featured in the 1985 tests. Some 18
> months later, however, after I joined the magazine, Stereophile was
> sent a production version of the Carver M-1.0t. This was an amplifier
> that we had understood Bob Carver intended to sound and measure
> identically to the prototype that had featured in the 1985 listening
> tests. Because of this sonic "cloning," the production M-1.0t was
> advertised by Carver as sounding identical to the tube amplifier that
> had featured in the 1985 Stereophile tests: "Compare the new M-1.0t
> against any and all competition. Including the very expensive
> amplifiers that have been deemed the M-1.0t's sonic equivalent,"
> stated Carver Corporation advertisements in Audio (October 1986),
> Stereo Review (February 1987), and in many other issues of those
> magazines.

AFAIK the most visible of Carver's mods to the M-1.0t was the addition of a
switchable resistor in series with the output terminals. More evidence that
the most audible difference between SS amps and tubed amps is their output
impedance, not all the far-more-subtle nonlinear distortions that tube
bigots tend to obsess over.

> In addition, carrying out a null test between the production M-1.0t
> amplifier sent to Stereophile by Carver and the reference tube
> amplifier revealed that, at best, the two amps would only null to

> -40 dB at 2 kHz, a 1% difference, diminishing to -20 dB below 100 Hz and
> above 15 kHz, a 10% difference.

> These null figures were not significantly altered by changing the tube
> amplifier's bias or by varying the line voltage with a Variac.

It's pretty foolish to try to make substantial sonic improvements by varying
line voltage away from the optimum that the amplifier was designed for. The
same bad logic gives us some of the heaviest grade of audio snake oil -
power line conditioners and regenerators.


> Upon being informed of these results, Bob Carver flew out to Santa Fe
> at the end of February 1987 and carried out a set of null tests that
> essentially agreed with my measurements, a fact confirmed by Mr.
> Carver in a letter published in Stereophile: "...the null product

> between the amplifiers...is 28 dB, not 70 dB! Your tests showed this,


> and so did mine," he wrote, and went on to say that "Since my own
> research has shown that the threshold for detecting differences is

> about 40 dB, I knew there was enough variance between the amps to be


> detectable by a careful listener."

Using PCABX technology, I've been able to improve the threshold for
detecting differences from 40 dB to nearly 60 dB.


> Regarding subsequent events, we published reviews of the Carver TFM-25
> and Silver Seven-t amplifiers in 1989 and the Carver Amazing
> loudspeaker in 1990. Also in 1990, an edited version of a Stereophile
> review appeared in Carver literature and in an advertisement that
> appeared in the May/June 1990 issue of The Absolute Sound. We took
> legal action to prevent this from happening, as we do in any instance
> of infringement of our copyright. Bob Carver responded by filing a
> countersuit for defamation, trade disparagement, product
> disparagement, and interference with a business expectancy, against
> Stereophile Inc., against Larry Archibald, against Robert Harley, and
> against myself, claiming "in excess of $50,000" in personal damages
> for Mr. Carver and "in excess of $3 million" in lost sales for the
> Carver Corporation. (This sum was later raised to $7 million on the
> appearance of a Follow-Up review of the Carver Silver Seven-t
> amplifier in our October 1990 issue.)

Remember this is the same Bob Carver that was using legal threats to extract
(see, I didn't use the other ex-word that was in my mind) money from
subwoofer manufacturers too economically weak to defend themselves from his
IMO illegal and patently ridiculous claims.


> Carver's countersuit included some 42 individual counts of purported
> defamation dating back to J. Gordon Holt's reporting of the original
> "Carver Challenge" in the middle of 1985. J. Gordon Holt was _not_
> named in the countersuit, however, and neither were Dick Olsher and
> Sam Tellig who had also written reviews of Carver products. In
> effect, I was being sued for things that had been published in
> Stereophile a year before I joined.

Atkinson took this way too personally. Perhaps in retrospect, he might learn
from this and recent events.

Arny Krueger

Dec 27, 2003, 9:33:49 AM
"ScottW" <Scot...@hotmail.com> wrote in message
news:8R7Hb.41748$m83.22687@fed1read01

>> Sometime in 1985, Bob Carver challenged Stereophile's then editor, J.
>> Gordon Holt, and its publisher, Larry Archibald, that he could match
>> the sound of any amplifier they chose with one of his inexpensive
>> "Magnetic-Field" power amplifiers. Mr. Carver subsequently visited
>> Santa Fe to try to modify his amplifier so that it matched the sound
>> of an expensive tube design. (This was indeed a Conrad-Johnson
>> monoblock, though that was not reported at the time. I am not aware
>> of the reasons why not as I didn't join Stereophile's staff until
>> May 1986.)

> Fascinating tale.
> I wonder if two production versions of the Conrad Johhnson
> monoblocks would null better than 40 dB?

I'd expect two well-matched, recently tubed samples to null within at least
60 dB, even with loudspeaker-like loads.

Arny Krueger

Dec 27, 2003, 9:55:53 AM
"ScottW" <Scot...@hotmail.com> wrote in message
news:Da2Hb.41709$m83.29230@fed1read01

>
> Let me be clear, I don't want to impose a bunch of requirements that
> make this effort too difficult to implement.

I'd be just fine with Stereophile doing a few comparisons of SS amps that
have similar ratings and are reasonably well made, but that Stereophile
rates vastly differently on their RCL. The results are easy to predict.
Seeing it done would be fun just in terms of watching Atkinson try to worm
his way out of the obvious logical conclusions.

>> But we are talking
>> about Stereophile dealing with the current level of uncertainty that
>> now exists with the current protocols.

The uncertainty mainly comes from the fact that DBTs tend to be science, and
unlike prejudice, science bows to no man.


> ? Stereophile has conducted elaborate DBTs with more rigorous


> protocols than I call for.

The protocols were twisted to favor false positives within the context of
apparently-blind tests. There's other ways to bias a test than just letting
it be sighted.

> Let them establish a protocol
> that is workable for their reviewers to conduct.

As I said before, and as Atkinson has recently sloughed off denying, it's probable
that many of the Stereophile reviewers don't even do proper level matching.
Level-matching takes some technical skills and some fairly good equipment.
I'm not talking multi-deca-kilobuck analyzers, I'm talking hand-held Fluke
meters that cost less than $500. I don't think that the Stereophile
reviewers are all that interested in this kind of stuff. They tend to be
poets, not test bench street fighters. Therefore, they lack the motivation,
expertise, and experience it takes to do this in a professional, timely sort
of way.
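
Verifying level matching with a hand-held meter amounts to a one-line calculation on two voltage readings; a sketch (the 0.1 dB tolerance shown is a commonly cited matching target, not a figure from this post):

```python
import math

def level_mismatch_db(v_a: float, v_b: float) -> float:
    """Level difference in dB between two measured output voltages."""
    return 20 * math.log10(v_a / v_b)

def levels_matched(v_a: float, v_b: float, tol_db: float = 0.1) -> bool:
    """True if the two levels agree within tol_db decibels."""
    return abs(level_mismatch_db(v_a, v_b)) <= tol_db

# Two amps measured at 2.83 V and 2.80 V into the same load:
print(round(level_mismatch_db(2.83, 2.80), 3))   # 0.093
print(levels_matched(2.83, 2.80))                # True
```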

Back in the day of the ABX company, anybody who had a $kilobuck could buy
some really-pretty-good DBT switching equipment for under $1k. But those
days are gone. I think that an ABX test set would cost over $5K if someone
trotted out the blueprints, updated them to cover new technology, and
started turning the production line crank again.


>> I think to do standard DBTs right would be a major
>> pain in the ass for them.

Sound and Vision does some special DBT studies at least once a year.

>> Even the magazines which make a big issue
>> out of such tests don't often actually do such tests and when they
>> do they often do a crap job of it.

I only know about S&V's tests, and while I might do things a bit different,
they don't slough off on the basics.

> If they have a tool which controls switching and tabulates
> results, I really don't see what the problem is.

Most of the blind tests that S&V have done lately are the kind of test that
are exact when done with PCABX-type software. I believe that in one case
they used a PCABX-like tool they got from Microsoft, and in another they
used something that they got some other undisclosed way. There are currently
eight (8) PCABX software products downloadable on the web. I'm sure that any
of them can be used effectively.

> What needs to happen is a level of automation is provided
> to match the skill level of the tester. That wouldn't be that
> difficult.

It's been done at least 10 times by 10 different people or groups.

>>> A tool to allow a single person to conduct and report


>>> statistically valid results (if not independently witnessed)
>>> would be required. After that, conducting the tests would
>>> be relatively easy.

>> Is it ever easy? Look what Howard did with such a tool.

It wasn't nearly as bad as some would like to make out. Note all the ranting
and raving on RAO about the evils of my PCABX tool. More than 20,000 copies
have been downloaded and used with nearly vanishing amounts of negative
comment in the real world.

> I don't believe Howards ABX box provided the level of automation
> I am talking about.

All of the PCABX-type tools tabulate the results, some do the statistical
analysis automatically at the end of the test. Can't get much easier than
that!

Of course it's very fashionable to ignore the vast resources and tremendous
numbers of DBTs that are going on in the real world outside RAO and
Stereophile. In fact both RAO and Stereophile are intellectual deserts when
it comes to DBTs and DBT resources. Middius and Atkinson like it that way!

Arny Krueger

Dec 27, 2003, 10:07:23 AM

> What exactly are those protocols?

Shows how ignorant you are, Mr. Hi-IQ.

> I would say the protocols used by
> those advocating the use of DBTs in other publications were lacking
> and they were being pawned off as definitive proof of universal
> truths.

Shows how ignorant you are, Mr. Hi-IQ.

>That would be counterproductive IMO.

You're so ignorant that you don't have a relevant opinion, Mr. Hi-IQ.

> I don't know what
> protocols Stereophile used in their DBTs but I am willing to bet that
> if they were scientifically valid they were far to burdensome to be
> done before every subjective review published in Stereophile.

By the time Stereophile started publishing articles about their own DBTs the
basics were exceedingly well-known. Atkinson's refinements to the tests
mainly existed to hide some built-in biases towards positive results.
Backing out these gratuitous complexifications took additional statistical
work, but this work was done and showed that the results were the same-old,
same-old random guessing.

By PCABX standards, the actual listening sessions that Atkinson did were
crude and biased towards false negatives. So you have an ironic situation
where Atkinson structured the test for false positives, but the listening
sessions themselves were biased towards false negatives. I suspect there's
some chance that the PCABX version of a comparison of the actual components
he used would have a mildly positive outcome.

> Maybe
> John Atkinson could comment on that. I am speculating.

Ignorantly talking out your butt would be more like it.

> I said
>
>>
>>> When it comes to such protocols I think quality is more important
>>> than quantity.
>
> Scott said


>> Unfortunately this opens a major loophole. Cherry picking the
>> units to be tested such that audible differences are assured.

> I don't follow.
>
> Scott said
>
>> I would like to see DBTs become part of the standard review
>> protocol for select categories of equipment.
>
> Isn't that cherry picking?
>
> Scott said
>
>>
>> Most reviewers like to compare equipment under review to
>> their personal reference systems anyway.

> That in and of itself presents a problem. One cannot always draw
> universal truths about any component based on use in one system. I
> think multiple reviewers for any one piece of equipment would be a
> very good idea but I suspect it isn't practical or financially
> feasible.

It's very easy using the PCABX approach, but PCABX of equipment with analog
I/O requires more pragmatism than many ignorant and semi-ignorant people can
muster. Anybody who thinks that all audio components sound different is a
jillion miles away from the pragmatism that is required if one is to be
comfortable with PCABX tests of equipment with analog I/O. PCABX testing of
equipment with digital I/O and audio software is exact.

> I said
>
>>
>>> If DBTs aren't done well they will not improve the state of reviews
>>> published by Stereophile.

Well dooh!

Scott said
>
>>
>> We differ on how well done they need to be to add
>> credibility to audible difference claims.

> I agree. So in light of our disagreement imagine the difficulty of
> enforcing a standard of rigor and protocol on a group of reviewers
> who mostly review as a hobby. Imagine the expense involved in
> initiating such protocols. I think even if we don't agree on the
> standard we probably agree that there should be a standard.

There is a fairly detailed and rigorous recommendation from a standards
organization - ITU-R Recommendation BS.1116. It's been around for years.

> I said
>
>>
>>> The source of your beef with Stereophile is that it
>>> lacks reliability now is it not?
>
> Scott said
>
>>
>> Not exactly. I find the subjective perceptions
>> portion of some reviews to lack credibility.

Especially considering all the equipment that is perceived to sound
different, but actually doesn't.

> I take them as anecdotes just as I take opinions of other audiophiles
> as anecdotes. Their level of unreliability does not bother me because
> I assume they are unreliable as anecdotes tend to be.

It's fairly easy to make an anecdote very convincing. Just toss a
level-matched, time-synched, bias-controlled listening test into it. Been
there, done that many times.

When are you bozos going to get off your duffs and stop eating my dust?


Arny Krueger

Dec 27, 2003, 10:09:24 AM
"ScottW" <Scot...@hotmail.com> wrote in message
news:T67Hb.41745$m83.31020@fed1read01

>
> I would be interested to see a DBT to support Brian's perception
> that, "as the centerpiece of a digital-only system, running balanced
> from stem to stern, the Burmester 001 is the best digital front-end
> I've ever heard." I'd be interested in seeing if he really could
> identify balanced from single ended.

As a rule, one can only audibly distinguish balanced from unbalanced if the
implementation of both or either is horribly flawed, or if the test
environment includes some outside sources of potentially-audible noise or
other interfering signal.

