*********************************************************************
It is generally accepted that audio frequencies above 20 kHz are
beyond the audible range of humans. Absolute thresholds usually start
to increase sharply above 15 kHz and reach about 80 dB SPL at 20 kHz.
Threshold values at 24 kHz and above are more than 90 dB SPL. Some
humans can perceive tones up to 28 kHz when their level exceeds about
100 dB SPL (Ashihara 2007).
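As a rough illustration only (not from the paper), the threshold points quoted above can be linearly interpolated; note that the 24 kHz figure is a lower bound ("more than 90 dB SPL"), so this curve is a sketch, not data:

```python
import numpy as np

# threshold points quoted above from Ashihara (2007); illustrative only
freqs_hz = np.array([20_000.0, 24_000.0, 28_000.0])
thresholds_db_spl = np.array([80.0, 90.0, 100.0])

def approx_threshold(f_hz):
    """Linearly interpolate the quoted threshold points (illustration only)."""
    return float(np.interp(f_hz, freqs_hz, thresholds_db_spl))

print(approx_threshold(22_000))  # → 85.0, midway between the quoted points
```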
Ultrasound at frequencies up to at least 120 kHz can be perceived via
bone conduction. This requires an ultrasound source (e.g. a ceramic
vibrator) mechanically coupled to the temporal bone (Nishimura et al.
2003).
High-resolution audio formats such as SACD and DVD-A have frequency
response up to 100 kHz and 96 kHz, respectively. Since they have not
found general acceptance in the market, scientific research into
ultrasonic perception seems to be performed mainly for medical
purposes such as treatment of tinnitus, diagnosis of sudden deafness,
Ménière’s disease, noise-induced hearing loss, and hearing aids
(Nishimura et al. 2003).
Some research, however, is specifically performed in the context of
music reproduction, so I thought it might be interesting to look into
what this particular research has found so far. I’m extracting only
the physiological experiments from the various papers, not the
psychological and behavioural. The latter generally show that
listeners experience the sound as more comfortable to the ear, and
that the “comfortable” listening level increases, when high frequency
components are present. Some behavioural experiments were performed with the
participants blindfolded and they were not informed of the purpose of
the experiments.
*********************************************************************
ULTRASONIC HEARING
Muraoka et al. (1981) tried to find out what upper band limit audio
equipment should have. According to their results, 15 kHz was
sufficient, with highly trained listeners being able to discriminate a
20 kHz cutoff.
Ashihara et al. (2001) found that, under conditions in which
experimental artefacts had been adequately eliminated, ultrasounds
were extremely difficult to perceive and may have little influence on
the sound image and its localisation.
Ashihara’s paper has been discussed by Griesinger (2003). Griesinger
also did some experiments himself: “Result: nothing significant is
heard. No difference could be heard with and without the ultrasonics.
When ultrasonics only were played at high levels, intermodulation
products from the input signals were easily heard at levels consistent
with amplifier distortion.”
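Griesinger's observation, that ultrasonic-only input can produce audible intermodulation products, can be sketched numerically. The tone frequencies, levels, and the quadratic distortion coefficient below are arbitrary assumptions for illustration, not values from his experiment:

```python
import numpy as np

fs = 192_000                          # sample rate high enough for ultrasonics
t = np.arange(fs) / fs                # 1 second of signal
f1, f2 = 24_000.0, 26_000.0           # two purely ultrasonic tones (assumed)
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# mild second-order nonlinearity standing in for amplifier/speaker distortion
y = x + 0.05 * x ** 2

spec = np.abs(np.fft.rfft(y)) / len(y)   # 1 Hz bins for a 1 s signal
# the quadratic term creates a difference tone at f2 - f1 = 2 kHz, squarely
# in the audible band, even though both input tones are ultrasonic
print(spec[2000] > 100 * spec[1000])  # → True
```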
Nishiguchi et al. (2003) found no significant difference between sound
stimuli with and without frequency components above 21 kHz.
Hamasaki et al. (2004) compared sound stimuli with and without
frequency components above 22 kHz. They found no significant rate of
correct responses, except for two subjects, who perceived the
difference between the stimuli with and without the band above 22 kHz
only for a longer stimulus with the highest level of very high
frequency components.
In experiments performed by Sugimoto et al. (2005) the subjects judged
various aspects of subjective sound quality, in particular REALITY. It
was found that the presence of high frequency components enhanced the
apparent REALITY. The sound was guided towards the ears by small
tubes.
*********************************************************************
Ashihara “Hearing thresholds for pure tones above 16 kHz”, The Journal
of the Acoustical Society of America 2007, Volume 122, Issue 3, pp.
EL52-EL57
Nishimura et al., “Ultrasonic masker clarifies ultrasonic perception
in man”, Hearing Research 2003, Volume 175, pp.171-177
Muraoka et al., “Examination of audio-bandwidth requirements for
optimum sound signal transmission”, Journal of the Audio Engineering
Society 1981, p.2
Ashihara et al., “Detection threshold for tones above 22 kHz”, 110th
AES convention 2001, preprint no. 5401
D. Griesinger, “Perception of mid frequency and high frequency
intermodulation distortion in loudspeakers and its relationship to
high-definition audio”, 24th International AES Conference 2003, Banff,
Canada
www.davidgriesinger.com/intermod.ppt
Nishiguchi et al., “Perceptual discrimination between music sounds
with and without very high frequency components”, 115th AES convention
2003, preprint no. 5876
Hamasaki et al., “Perceptual Discrimination of Very High Frequency
Components in Musical Sound Recorded with a Newly Developed Wide
Frequency Range Microphone”, 117th AES convention 2004, preprint no.
6298
Sugimoto et al, “Human perception model for ultrasonic difference
tones”, Proceedings of the 24th IASTED International Conference, Feb
16-18, 2005, Innsbruck, Austria
*********************************************************************
HYPERSONIC EFFECT
The phenomenon of increased brain activity in the presence of
inaudible high frequency sounds has been termed “hypersonic effect” by
Oohashi et al. (2000).
There are several papers investigating the hypersonic effect:
Oohashi et al. (1991), “High frequency sound above the audible range
affects brain electric activity and sound perception”, 91st AES
convention, preprint no. 3207
Nakamura et al. (1999), “Analysis of music-brain interaction with
simultaneous measurement of regional cerebral blood flow and
electroencephalogram beta rhythm in human subjects”, Neuroscience
Letters 275, p.222-226
Oohashi et al. (2000), “Inaudible high-frequency sounds affect brain
activity: hypersonic effect”, Journal of Neurophysiology 83, p.
3548-3558
http://www.linearaudio.nl/Documents/high%20freq%20inpact%20on%20brain.pdf
Oohashi et al. (2002), “Multidisciplinary study on the hypersonic
effect”, International Congress Series 1226, pp.27-42
Yagi et al. (2002), “Auditory display for deep brain activation:
hypersonic effect”,
Proceedings of the 2002 International Conference on Auditory Display,
Kyoto, Japan, July 2-5, 2002
http://www.icad.org/websiteV2.0/Conferences/ICAD2002/proceedings/Oohashi.pdf
Yagi et al. (2003), “Modulatory effect of inaudible high-frequency
sounds on human acoustic perception”, Neuroscience Letters 351, p.
191-195
Oohashi et al. (2006), “The role of biological systems other than
auditory air-conduction in the emergence of the hypersonic effect”,
Brain Research 1073-1074, p. 339-347
******************************
In general, a bi-channel sound system was used with cross-over
frequency at 26 kHz (170 dB/octave) or at 22 kHz (80 dB/octave). The
sound stimulus was a 200-400 s extract of traditional gamelan music
from Bali. Test subjects were aged 19-43 years.
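The band-splitting in these setups can be sketched with an idealized brick-wall crossover; the real systems used very steep analog filters, and the sample rate and noise stand-in signal below are assumptions for illustration:

```python
import numpy as np

fs = 96_000                      # sample rate able to carry content > 22 kHz
fc = 22_000                      # crossover frequency from the description
rng = np.random.default_rng(0)
frs = rng.standard_normal(fs)    # 1 s noise stand-in for the full-range sound

# brick-wall split in the frequency domain, an idealization of the very
# steep crossovers (80 or 170 dB/octave) described above
spec = np.fft.rfft(frs)
freqs = np.fft.rfftfreq(len(frs), 1 / fs)
lo, hi = spec.copy(), spec.copy()
lo[freqs >= fc] = 0              # HCS: high-cut sound, LFC only
hi[freqs < fc] = 0               # LCS: low-cut sound, HFC only
lfc = np.fft.irfft(lo, n=len(frs))
hfc = np.fft.irfft(hi, n=len(frs))

# the two branches sum back to the full-range sound exactly
print(np.allclose(lfc + hfc, frs))  # → True
```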
Two types of measurements were made:
EEG (electroencephalogram) at 12 scalp points (10-20 electrode
system), data were determined in alpha (8-13 Hz) and beta (13-30 Hz)
bands.
PET (positron emission tomography) measurement of regional cerebral
blood flow (rCBF)
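The alpha and beta band powers referred to here can be computed from an EEG trace roughly as below; the sampling rate and the noise stand-in data are assumptions, not details from the papers:

```python
import numpy as np

fs = 250                             # an assumed, typical EEG sampling rate
rng = np.random.default_rng(1)
eeg = rng.standard_normal(10 * fs)   # 10 s of noise standing in for one channel

spec = np.abs(np.fft.rfft(eeg)) ** 2   # periodogram (unnormalized)
freqs = np.fft.rfftfreq(len(eeg), 1 / fs)

def band_power(lo_hz, hi_hz):
    """Summed spectral power in [lo_hz, hi_hz) Hz."""
    return spec[(freqs >= lo_hz) & (freqs < hi_hz)].sum()

alpha = band_power(8, 13)            # alpha band, 8-13 Hz as in the papers
beta = band_power(13, 30)            # beta band, 13-30 Hz
```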
Presentation of stimulus:
1) FRS = full-range sound = HFC + LFC (High Frequency Components + Low
Frequency Components)
2) high-cut sound (HCS) = only LFC
3) low-cut sound (LCS) =only HFC
4) baseline = no sound except for ambient noise
Results of the measurements were:
EEG
Alpha
Alpha-EEG was greater during FRS than during LFC, HFC or baseline.
When sound was presented through earphones, no difference was found
between FRS and LFC. When FRS was presented through earphones and LFC
through speakers, alpha-EEG was significantly greater during FRS than
during LFC alone. When head and body surface were insulated from
exposure to HFC, the increase in alpha-EEG during FRS was markedly
suppressed.
Beta
Beta power was significantly higher during music condition than during
rest condition.
PET
Compared with resting, listening to music caused an increase in rCBF
in the temporal regions bilaterally. An increase in rCBF in the
bilateral superior temporal gyri was observed, including the primary
and secondary auditory cortices. Increased rCBF in certain regions
(brainstem and the lateral part of the left thalamus, bilateral
superior temporal gyri, primary and secondary auditory cortices) was
observed with FRS as compared to the other conditions.
When HFC was compared with the baseline, no significant differential
activation was observed anywhere in the brain, and neither the left
thalamus nor the brainstem showed changes in rCBF.
The following conclusions were drawn:
Oohashi et al. (2002):
“Despite the fact that nonstationary HFC was not perceived as a sound
itself, we demonstrated that the presentation of sounds that contained
a considerable amount of nonstationary HFC (i.e. FRS) introduced
various kinds of responses to listeners. In the physiological study,
FRS significantly increased rCBF in the deep-lying brain structures,
including the brainstem and thalamus, and enhanced the power of the
spontaneous EEG activity of alpha range, compared with the same sound
lacking HFC (i.e. HCS).
Although how inaudible HFC produces a physiological effect on brain
activity is still unknown, we need to consider at least two possible
explanations. The first is that HFC might change the response
characteristics of the tympanic membrane in the ears and produce more
realistic acoustic perception, which might increase pleasantness. An
alternative explanation is that HFC might be conveyed through
pathways, distinct from the usual air-conduction auditory pathway,
affecting the central nervous system. It was reported that the
vibratory stimulus of ultrasound modulated by the human voice
activated the primary auditory cortex and was successfully recognized
by people with normal hearing as well as those whose hearing was
totally impaired. Although we cannot conclude that the neural
mechanisms incorporating ultrasound hearing are the systems
responsible for the hypersonic effect, it is notable that the
ultrasound can reach to the central nervous system.”
Yagi et al. (2003):
“Taken together, the results of the present study demonstrate that an
enhanced HFC increased the comfortable listening level and improved
the subjective impression of the sound in association with an increase
in the alpha-EEG. These results further suggest that the inaudible HFC
has a modulatory effect on human sound perception and that such effect
may not linearly increase as the intensity of the HFC increases, but
has some optimum point.”
Oohashi et al. (2006):
“These data indicate that the hypersonic effect was evoked only when
HFC was presented to the head and/or body surface. The point of the
present experimental design is to focus on the fact that the
hypersonic effect does not emerge at the presentation of HFC alone but
it emerges only when HFC and LFC were simultaneously presented. The
fact that absolutely no hypersonic effect was observed under this
condition [LFC and HFC through earphones] demonstrates that the air-
conducting auditory system does not respond to HFC.”
“The finding compares well with the previous report that the
activation of the brainstem and thalamus serve as a neurophysiological
basis of the hypersonic effect. It is reasonable to consider,
therefore, that the hypersonic effect detected by the increase in the
power of the alpha-EEG in the present study reflects the activation of
the deep-lying brain structure, including the brainstem and thalamus.
The experimental findings in this study cannot be explained by the air-
conducting auditory system alone; they can be explained with less
contradiction by assuming the existence of some hitherto unknown
sensing mechanism somewhere on the body surface, even on the head. We
must also consider the possible existence of an unrecognized sensing
mechanism.”
The J. Neurophysiology article has been discussed on Audio Asylum,
and Kal Rubinson, himself a neuroscientist, mentioned some points of
concern:
1. The full-range sound in the Oohashi papers is an acoustic
combination, not an electronic one, so that difference tones are
possible (Kal refers to the paper by Sugimoto et al., 2005).
2. A time-lag between the presentation of the stimulus and the EEG-
response has been observed. A tighter temporal correlation between
signals and response would be desirable. Is the spectral distribution
of the music uniform with time?
3. The mapped statistical data of thalamus and brainstem are
unilateral and asymmetric, whereas the ears are known to project
bilaterally to brainstem, thalamus and auditory cortex, as can be seen
in the PET data for cortex activity. Oohashi indicates, however, that
they found responses in the left thalamus.
http://www.audioasylum.com/audio/general/messages/459943.html
*********************************************************************
WOW!
What a lot of words to say that Music does not consist only of nice,
orderly, pure sine-waves. And that those awful overtones and other
pollutants are necessary. And that we as listeners actually respond to
them even if we cannot consciously distinguish them.
There is a lot of science behind this, of course, which I find
fascinating as a sideline. And this may even start to delve into
legitimate research on how a 'sampling' (digital) medium may differ
from a 'continuous' medium (analog). And how the same observational
techniques (PET/EEG et al.) might show or not show differences in
response. THAT would interest me as that is NOT otherwise obvious even
if intuition suggests that there must be differences. The analogy that
comes to mind is that the flicker effect of films is not readily
apparent when we view them. *Unless* we are subject to certain types
of seizure disorders or have other visual/brain conditions.
But, cutting to the chase, as long as Middle A on a piano sounds
different from Middle A on a harpsichord, "Full Spectrum" recordings
right up into the hypersonics will be more lifelike (for lack of a
better word given that it is attached to 'recording') than such of a
limited spectrum. We now have - at least in part - some science to
support the obvious.
Peter Wieck
Melrose Park, PA
> WOW!
> What a lot of words to say that Music does not consist only of nice,
> orderly, pure sine-waves.
That's not even *close* to what those words were saying. The issue is not
about pure sine waves. It's about effect, or not, of frequencies
above the accepted audible range.
> And that those awful overtones and other
> pollutants are necessary. And that we as listeners actually respond to
> them even if we cannot consciously distinguish them.
Overtones can be within or beyond the audible range. It hasn't
been established that the ones beyond the classical audible range
are 'necessary' for much except perhaps some brain flow effects in
very contrived situations.
> There is a lot of science behind this, of course, which I find
> fascinating as a sideline.
There isn't a whole lot, actually. That's one of the points in
Klaus' presentation.
> And this may even start to delve into
> legitimate research on how a 'sampling' (digital) medium may differ
> from a 'continuous' medium (analog).
Good luck with that...particularly as the output of the
'sampling' is 'analog'.
> And how the same observational
> techniques (PET/EEG et.al) might show or not show differences in
> response. THAT would interest me as that is NOT otherwise obvious even
> if intuition suggests that there must be differences. The analogy that
> comes to mind is that the flicker effect of films is not readily
> apparent when we view them. *Unless* we are subject to certain types
> of seizure disorders or have other visual/brain conditions.
> But, cutting to the chase, as long as Middle A on a piano sounds
> different from Middle A on a harpsichord,
....which it does on hi-rez digital, redbook CD, low-bitrate mp3, tape, and
LP....
> "Full Spectrum" recordings
> right up into the hypersonics will be more lifelike (for lack of a
> better word give that it is attached to 'recording') than such of a
> limited spectrum.
That doesn't follow. In fact, it's what remains controversial.
> We now have - at least in part - some science to
> support the obvious.
It's not 'obvious'. If it were, it would hardly require the
rather herculean efforts Oohashi et al. have put into demonstrating it.
--
-S
I know that most men, including those at ease with problems of the greatest complexity, can
seldom accept the simplest and most obvious truth if it be such as would oblige them to admit
the falsity of conclusions which they have proudly taught to others, and which they have
woven, thread by thread, into the fabrics of their life -- Leo Tolstoy
> That's not even *close* to what those words were saying. The issue is not
> about pure sine waves. It's about effect, or not, of frequencies
> above the accepted audible range.
Yeah, of course it is. And the only reason that this "science" would
even be attempted is *BECAUSE* music is not pure sine waves - it
consists of all sorts of overtones and additional information after
the pure tone. WERE it only pure sine waves, there would be no
overtones or ultrasonics or hypersonics with which to contrive such a
test, would there?
> ....which it does on hi-rez digital, redbook CD, low-bitrate mp3, tape, and LP....
Yeah, sure. But the closeness-to-live just *might* depend on the
fullness of the inclusions of what makes those sounds different. When
you read literally, you miss colorations. *YOU* are reading sine-
waves. Try reading the entire spectrum. A pin on a piece of tin-foil
on the original Edison cylinder machine will likely differentiate
between a piano or a harpsichord. We have evolved sound reproduction
systems since then. The question is have we evolved them past the
point that we can actually hear or not? And I don't mean by double-blind
conscious testing - done to death in this venue over and over in
the past - but by the same sort of science as was called upon for the
tests noted by the OP. I see far more interesting avenues to explore
there than what is intuitively and patently obviously demonstrated by
the noted tests - and lest you miss that, what was demonstrated even
by highly contrived and managed conditions is that "more information
appears to be discernable by the brain over less information - even if
consciously inaudible". Simple enough.
William of Occam demonstrated somewhere around 700 years ago that the
opposite of Black is not White. But, in fact, anything that is (and
simply) NOT Black.
As to herculean efforts - hardly that. Sometimes things that are
pretty intuitively obvious are very hard to demonstrate if only
because they are differentiating items that cannot be easily or
consciously measured. This particular set of tests seems to support
coloration as an important aspect of sound - even up into regions not
typically admitted as significant. How much of the difference between
a Middle A on a harpsichord or a piano is relevant? Apparently a lot.
Makes sense on its face.
Peter Wieck
Melrose Park, PA
Peter Wieck
> What a lot of words to say that Music does not consist
> only of nice, orderly, pure sine-waves.
I'm trying to figure out where this came from. Definitely not the OP.
If the idea is that any well-informed person thinks that music is composed
of just fundamentals, then it's just too ludicrous to believe that anybody
would consider this to be news.
The harmonic structure of musical instruments and music has been widely
taught for most if not all of the past 100 years or more.
Let's see, when was Helmholtz working with this stuff? He actually had some
new ideas about it. Helmholtz, 1821-1897. Yup, more than a century. In 1863
Helmholtz published Die Lehre von den Tonempfindungen als physiologische
Grundlage für die Theorie der Musik (On the Sensations of Tone as a
Physiological Basis for the Theory of Music).
> And that those awful overtones and other pollutants are necessary.
Again so far from being news as to be totally unnecessary to discuss. A straw
man?
> And that we as listeners actually respond to them even if we
> cannot consciously distinguish them.
Again, so widely accepted and taught as to almost be a truism.
> There is a lot of science behind this, of course, which I
> find fascinating as a sideline.
Been there, done that.
> And this may even start
> to delve into legitimate research on how a 'sampling'
> (digital) medium may differ from a 'continuous' medium
> (analog).
That was all settled by Nyquist (1889-1976) and Shannon (1916-2001).
<snip>
Sorry but if you are familiar with Fourier analysis, you may recall
that all physically realizable waveforms can be broken down into a
series of sine waves. Whether or not ultrasonic frequencies are
significant or even perceptible is a very gray area at best. Some
supposedly positive results were later nullified by the presence of
intermodulation distortion that generated spurious frequencies in the
audio range. Without independent verifiable results with accurate
measurements of signal quality, there is little reason to speculate on
the significance of the cited studies. The practical issues with
ultrasonic emissions in general and transducers in particular are also
quite daunting given their extreme directivity.
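The Fourier point above can be made concrete: a tone built from a few sine components decomposes back into exactly those components. The fundamental and the harmonic amplitudes below are made up for the example:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                 # 1 second, so FFT bins are 1 Hz apart
f0 = 440                               # fundamental (concert A), chosen arbitrarily
amps = [1.0, 0.5, 0.25, 0.125]         # made-up amplitudes for 4 harmonics
tone = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k, a in enumerate(amps))

# the FFT recovers exactly the sine components the tone was built from
spec = np.abs(np.fft.rfft(tone)) * 2 / len(tone)
recovered = [round(spec[f0 * (k + 1)], 6) for k in range(len(amps))]
print(recovered)  # → [1.0, 0.5, 0.25, 0.125]
```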
>> That's not even *close* to what those words were saying.
Agreed.
>> The issue is not
>> about pure sine waves. It's about effect, or not, of frequencies
>> above the accepted audible range.
> Yeah, of course it is.
Saying it doesn't make it so. The appropriate approach at this time would be
for people who believe that there is a connection to reproduce the
OP and provide a detailed explanation of how it's relevant to the current
topic. The inverse - showing how it is totally irrelevant - is like proving a
negative hypothesis, much harder for the rest of us to do.
> And the only reason that this
> "science" would even be attempted is *BECAUSE* music is
> not pure sine waves - it consists of all sorts of
> overtones and additional information after the pure tone.
> WERE it only pure sine waves, there would be no overtones
> or ultrasonics or hypersonics with which to contrive such
> a test, would there?
Not news. Trivial information. Your problem - show anybody on this forum who
denies that music is composed of both fundamentals and harmonics. AFAIK,
that person does not exist in the real world. If I thought I wasn't beating
my head against an insensible stone wall, I'd show where I've posted that
there is no doubt in my mind that not only are there harmonics, but there
are harmonics outside the normal 20 Hz - 20 kHz frequency range. The issue is
not whether these harmonics or other sounds exist; the question is
how they relate to the perception of music.
Repeat the tests with classical or rock/pop music and western
subjects; maybe the results would be very different. So far it has
been shown that Japanese subjects show a brain reaction when listening
to Gamelan music, nothing more.
Klaus
Thank you for this summary of information. Did Yagi et al use the same
acoustic setup as Oohashi? This is important because of the caveats
Rubinson has raised. But if not, then it tends to confirm the earlier
Oohashi work, which in a separate phase also measured statistically more
satisfaction with music reproduced with the ultrasonic content present, and
it would help explain why some of us find SACD "more realistic"?
> > That's not even *close* to what those words were saying. The issue is not
> > about pure sine waves. It's about effect, or not, of frequencies
> > above the accepted audible range.
> Yeah, of course it is. And the only reason that this "science" would
> even be attempted is *BECAUSE* music is not pure sine waves - it
> consists of all sorts of overtones and additional information after
> the pure tone. WERE it only pure sine waves, there would be no
> overtones or ultrasonics or hypersonics with which to contrive such a
> test, would there?
Again, overtones can be audible or 'ultrasonic'. The ones that make different instruments
sound different, are audible ones, not 'ultrasonic' or hypersonic.
> > ....which it does on hi-rez digital, redbook CD, low-bitrate mp3, tape, and LP....
> Yeah, sure. But the closeness-to-live just *might* depend on the
> fullness of the inclusions of what makes those sounds different. When
> you read literally, you miss colorations.
I've read the actual papers. Have you?
> Thank you for this summary of information. Did Yagi et al use the same
> acoustic setup as Oohashi? This is important because of the caveats
> Rubinson has raised. But if not, then it tends to confirm the earlier
> Oohashi work, which in a separate phase also measured statistically more
> satisfaction with music reproduced with the ultrasonic content present, and
> it would help explain why some of us find SACD "more realistic"?
the 'et al.' on the Yagi et al paper are... Nishina E, Honda M, and Oohashi T.
All of the confirmatory work on Oohashi et al seems to come from Oohashi et al.
No argument that all sorts of harmonics above 20k are going to be
created. But the amplitude of them is going to be very minimal and
I'm very skeptical that their presence affects the listener. None of
my bones are protruding, yukyuk.
Well, gamelan is Southeast Asian music, not Japanese, so it wouldn't necessarily be
beloved there. (Meanwhile, Western 'classical' has been huge among
Japanese listeners for decades)
It's interesting though that nearly all of the 'hypersonic' publications come
from the same core group of researchers...and even the attempts to replicate, by other
groups, have been Japanese -- it doesn't appear to be something that's
caught much attention elsewhere.
> As you may have noted, the EEG/PET tests were done using Gamelan
> music, which is extremely rich in HF components when compared to other
> music. It may well be that Gamelan music is the favorite music for
> Japanese music lovers, it certainly is not mine and I doubt that it is
> for most occidental folks.
Gamelan is an Indonesian percussion ensemble, with instruments "related" to
the marimba, xylophone, and vibraphone. There is nothing particularly
Japanese about it.
I'd go so far as to say that gamelan music is as foreign to the average
Japanese listener as it is to the average American listener. Maybe more-so.
Gamelans and Indonesian tuned drums were prominent in the music of
contemporary American composer Lou Harrison in the '70s and '80s.
> Repeat the tests with classical or rock/pop music and western
> subjects, maybe the results are very different. So far it has been
> shown that Japanese subjects show brain reaction when listening to
> Gamelan music, not more.
I suspect that the music type is irrelevant although rock music is so
artificial with its heavy electronic effects, that I certainly can't assess
anything using it, but perhaps a rock aficionado could.
> <klausra...@hotmail.com> wrote in message
> news:6sk2bhF...@mid.individual.net...
>> This is a short version of the literature overview, for the extensive
>> version simply drop me a mail.
>>
[excessive quoted text deleted -- deb]
> Thank you for this summary of information. Did Yagi et al use the same
> acoustic setup as Oohashi? This is important because of the caveats
> Rubinson has raised. But if not, then it tends to confirm the earlier
> Oohashi work, which in a separate phase also measured statistically more
> satisfaction with music reproduced with the ultrasonic content present, and
> it would help explain why some of us find SACD "more realistic"?
>
>
I've been doing a little "research" of my own. I have a little Zoom H2 which
I piggyback onto the output of my mixer when doing live recording. The H2
allows me to record at anything from Redbook 16-bit/44.1 kHz all the way up
to 24-bit/96 kHz. While my master is being made at 32-bit
floating-point/96 kHz, I'll simultaneously record the same performance on my
Zoom at 16-bit/44.1 kHz. When I play these recordings back, at home, over my
stereo system, for myself and others, nobody (including yours truly) can
tell the difference between the 16-bit/44.1 kHz recording and the
32-bit/96 kHz version (which is played back at 24-bit). Certainly not real
scientific, but there it is. Of course, I make sure that the levels are
equal (easy to do: I record a 400 Hz tone at the beginning of every session.
I do this to calibrate the meter on the mixer with the meter on the H2 and
also on the audio capture program on my computer, which is usually Audacity).
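The level-matching step with a calibration tone can be sketched as below; the 3 dB offset of the hypothetical second recorder is invented for the example:

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
cal = np.sin(2 * np.pi * 400 * t)       # the 400 Hz calibration tone

def rms_db(x):
    """RMS level in dB (relative to full scale = 1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# a hypothetical second recorder that captured the same tone 3 dB lower
cal_b = cal * 10 ** (-3 / 20)
gain_db = rms_db(cal) - rms_db(cal_b)   # gain needed to match the two levels
print(round(gain_db, 3))  # → 3.0
```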
Either that or the possibility that there is no research funding available
elsewhere because there are neither commercial nor government organizations
sizeable enough, affluent enough, and vested enough in the outcome to fund
such research. These studies aren't cheap.
That's why the research summarized is important and not trivial. There are
now at least two serious research cases published which suggest the
conventional wisdom may be wrong. That deserves to be looked at carefully,
not discarded because it doesn't conform with the conventional wisdom.
Also, let's not forget that they are using playback equipment
specifically designed to deliver flat FR out to 35 kHz or more. Unless
that describes your home rig, this research may not be terribly
relevant to you.
bob
I'll grant you that even those of us who heard it on SACD don't hear it on
everything, or all the time. From my standpoint, it comes across more as a
sense of ease and "naturalness" rather than a perceptible sonic difference
per se.
>> ...I've posted that there is no doubt in my mind
>> that not only are there harmonics, but there are
>> harmonics outside the normal 20-20 KHz frequency range.
>> The issue is not whether these harmonics or other sounds
>> exist, the question relates to how they relate to the
>> perception of music.
> No argument that all sorts of harmonics above 20k are
> going to be created.
> But the amplitude of them is going to be very minimal
Two reasons come to mind pretty quickly:
1. Musical instruments have been historically and intuitively designed to
concentrate the energy they create at frequencies that people hear. There
may be high harmonics, but above 10 kHz they almost universally fall off in
amplitude pretty quickly.
2. There are tremendous losses of high frequencies due to the absorption of
air. The absorption of air at 20 kHz, 68 degrees, and 30% RH (my living
room right now) is 0.5 dB per meter, and doubles at 40 kHz. At 400 Hz it
is 0.002 dB per meter. At 4000 Hz it is 0.05 dB per meter. This has some
effect in the listening room, but is ruinous in a concert hall.
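Taking the quoted per-meter figures at face value, the loss over a given path is just the coefficient times the distance:

```python
# air-absorption coefficients (dB per meter) as quoted above for ~68 F, 30% RH;
# the 40 kHz entry uses the stated "doubles at 40 kHz"
absorption_db_per_m = {400: 0.002, 4000: 0.05, 20_000: 0.5, 40_000: 1.0}

def air_loss_db(freq_hz, distance_m):
    """Attenuation in dB over a straight path, from the quoted coefficients."""
    return absorption_db_per_m[freq_hz] * distance_m

print(air_loss_db(20_000, 3))    # → 1.5  (a 3 m listening room: barely anything)
print(air_loss_db(20_000, 30))   # → 15.0 (a 30 m concert-hall path: severe)
```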
> and I'm very skeptical that their presence affects the listener.
It's quite clear that nobody got used to hearing 40 kHz in the concert hall.
Granted, right now it is research. But it may influence what speaker
manufacturers do in the future.
Interesting. Which "supposedly positive results" were nullified, and by
what research (who did it, when, where were the results published)? I know Oohashi's
work has been challenged here on the web by some in that regard, but insofar
as I know it has been mere speculation, not proven fact, nor based on any
subsequent research. So I'd really like to know.
This is dismissing the studies based more on prejudice than logic. Any
physiological or psychological study attempts to maximize the clarity of the
stimulus in order to have a clear reading of results. The fact that gamelan
music is rich in harmonics is exactly the reason it was chosen for the
first serious attempt to research whether hypersonics had a perceived impact
when part of music, as opposed to being tested as a discrete signal.
Assuming this research stands up, obviously the next step would be to extend
it. And if its pervasiveness is more subtle, it may require more sensitive
testing, both neurophysiologically and psychologically.
More accurate to say the only attempt at confirmatory work stems from
researchers intimately aware of Oohashi's work. The fact that a different
researcher is taking the lead doesn't automatically mean the research is not
valid. Nor does it mean that the same conclusion must be reached. It might
(and probably does) mean, however, that the same type of reproduction system
was used....which if true doesn't necessarily satisfy the skeptics. That's
why I asked the question of the OP.
Until someone replicates the "hypersonic effect" with something other
than gamelan music, the type of music must be presumed to be relevant.
Oo. & Co. chose gamelan specifically for its overtone spectrum.
Whatever is going on here (and heaven only knows what, if anything, is
actually going on), I think even the researchers suspect that other
genres would not produce similar results, or they'd be striving for
more robust findings.
bob
Not only are most home hi fi speakers response challenged > 20KHz
(particularly even slightly off axis) so are most professional microphones.
You'd think that a carefully recorded master tape using good condenser
microphones and decent electronics would reveal this characteristic were it
actually there.
> This is dismissing the studies based more on prejudice
> than logic.
No, it's merely raising proper questions of relevance.
> Any physiological or psychological study
> attempts to maximize the clarity of the stimulus in order
> to have a clear reading of results.
But, stimulus is usually limited to things that are relevant.
> The fact that
> gamelan music is rich in harmonics is exactly the reason
> it was chosen for the first serious attempt to research
> whether hypersonics had a perceived impact when part of
> music, as opposed to being tested as a discrete signal.
Let's say that conclusive results are obtained with gamelan music listened
to at a close distance.
How many people are interested in scrapping the audio system they have now
for something that narrow and irrelevant to how westerners listen to music?
In a sense, this question was already asked, when there was an attempt to
popularize SACD and DVD-A. Music lovers stayed away from it in droves.
Oohashi used a bi-channel system, Yamasaki digital processor, Yamaha
digital recorder, Accuphase power amps; the speaker was X-overed at 26
kHz (170 dB/octave) or at 22 kHz (80 dB/octave), with a Pioneer diamond
diaphragm dome-type super-tweeter. He also used earphones.
Yagi used Action Research SACD-player, Action Research power amps,
Pioneer beryllium ribbon-type super-tweeter, X-overed at 22 kHz (80 dB/
octave).
The fact that some people find SACD more realistic perhaps also due to
the fact that the temporal resolution of CD is worse than that of
human hearing.
Klaus
Agreed. But the first of those papers is from 1991 and all the others
are simply variations on a theme, not really steps forward. And it's
perhaps interesting to note that all of the Yagi papers are co-
authored by Oohashi.
Klaus
> > This is dismissing the studies based more on prejudice
> > than logic.
> No, it's merely raising proper questions of relevance.
> > Any physiological or psychological study
> > attempts to maximize the clarity of the stimulus in order
> > to have a clear reading of results.
> But, stimulus is usually limited to things that are relevant.
> > The fact that
> > gamelan music is rich in harmonics is exactly the reason
> > it was chosen for the first serious attempt to research
> > whether hypersonics had a perceived impact when part of
> > music, as opposed to being tested as a discrete signal.
> Let's say that conclusive results are obtained with gamelan music listened
> to at a close distance.
Oohashi et al. aren't saying it has to do with listening, at all. It's an
effect that isn't transmitted through the ear canals (headphone listening
does not produce it). Only when the 'whole body' is subject to the stimulus
(loudspeaker listening) is the effect produced....or so they are saying.
One wonders if this effect would actually be produced at all by exposure to, say,
live music.
> The fact that some people find SACD more realistic perhaps also due to
> the fact that the temporal resolution of CD is worse than that of
> human hearing.
Unlikely, given that we cannot demonstrate any audible benefit to
increasing the sampling rate.
Much more likely is that such people are NOT doing apples-to-apples
comparisons of CD to SACD.
bob
Probably the most famous example is Rupert Neve's claim that is
mentioned in these two links:
http://mixonline.com/mag/audio_world_above/
The second link is a thread in which Scott Dorsey dismisses both
Neve's observations and results from the Kanagawa Institute.
I've seen that claim go down in flames before:
http://www.hydrogenaudio.org/forums/index.php?showtopic=49043&hl=time+resolution
How so?
If the claim is that the "temporal resolution" of
a CD is on the order of 23 microseconds, i.e., one
sample period, that's simply false: it's MUCH better
than that: the temporal resolution is in fact
determined by the product of the bandwidth and the
dynamic range, just like in any other system.
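That bandwidth-times-dynamic-range point can be put into rough numbers. A sketch, assuming a ~20 kHz bandwidth and treating a 16-bit system's SNR as 2^16 (a simplification; the exact constant depends on the detection model assumed):

```python
import math

def timing_resolution_s(bandwidth_hz, bits):
    """Roughly the smallest detectable time shift of a full-scale,
    band-limited edge: 1 / (2*pi*B*SNR), with SNR taken as 2**bits."""
    return 1.0 / (2 * math.pi * bandwidth_hz * 2 ** bits)

cd = timing_resolution_s(20_000, 16)
print(f"{cd * 1e12:.0f} ps")  # ~121 ps, orders of magnitude below the
                              # ~22.7 us sample period of 44.1 kHz audio
```

Under these assumptions the timing resolution of the CD format comes out around a hundred picoseconds, not one sample period.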
To the degree that responders in the test reported the music as more natural
and more pleasant, one probably can assume so.
Not raising questions...dismissing relevance a priori.
>
>> Any physiological or psychological study
>> attempts to maximize the clarity of the stimulus in order
>> to have a clear reading of results.
>
> But, stimulus is usually limited to things that are relevant.
And who gets to decide in advance what is relevant other than those
undertaking the research?
>
>> The fact that
>> gamelan music is rich in harmonics is exactly the reason
>> it was chosen for the first serious attempt to research
>> whether hypersonics had a perceived impact when part of
>> music, as opposed to being tested as a discrete signal.
>
> Let's say that conclusive results are obtained with gamelan music listened
> to at a close distance.
How close we don't know for sure. We don't know how recorded. We don't
know how far from the speakers the subjects were....we do know that it was
a room with a comfortable chair, a peaceful (if fake) window view, and a
loudspeaker system somewhere in front or slightly to the side of the
subject, and we do know they heard and responded differently when the full
frequency spectrum was used for playback as opposed to only audible
frequencies. As I have said in the past...if you really have concerns about
these things why don't you just write to the researchers?
>
> How many people are interested in scrapping the audio system they have now
> for something that narrow and irrelevant to how westerners listen to
> music?
A priori "relevance" and judgement noted.
> In a sense, this question was already asked, when there was an attempt to
> popularize SACD and DVD-A. Music lovers stayed away from it in droves.
Unfortunately Sony's attempt to popularize SACD was woefully
inadequate...just compare it to what they are doing for Blu-ray. And the
DVD-A consortium never got a promotional campaign off the ground. And
despite that, SACD hangs around as a legitimate technology sought out by
many classical music and jazz lovers.
Thank you Klaus. That settles one issue.
Klaus, Oohashi had a very large team of specialists. It is not
unusual nor particularly telling that another researcher doing research and
perhaps sharing the same hypothesis to be explored would ask the pioneer of
such research to be part of his team in reviewing setup, critiquing and
analyzing research results. Does the article explain what Oohashi's role
was? And Yagi's role in the original project? My understanding is that
they simply were colleagues both intrigued by the same possibility.
This claim led partisans of high-resolution audio to two further
claims:
1) that reproduction of sounds above 20 kHz mattered.
2) that ABX-type listening tests are not capable of demonstrating
this, and should be discarded in favor of other methods.
The 2001 paper by Ashihara (AES preprint 5401) undermines both of
these claims. Ashihara showed that the presence of sounds above 22 kHz
could in fact be detected in ABX-type tests—when the sound was played
from a single loudspeaker (or two in stereo). However, a difference
could not be detected when the signal was divided among several
loudspeakers playing different frequency bands.
Ashihara concluded that, “additions of ultrasounds might affect sound
impression by means of some non-linear interaction that might occur in
the loudspeakers”—in other words, that the cause of the “hypersonic
effect” when detectable might be intermodulation distortion in the
audible band.
I haven’t read any of the post-2001 papers Klaus cited, so I don’t
know if they have anything further to add to this. I gather from one
of Klaus’s follow-up posts that the Oohashi team is still using single
loudspeakers, which suggests that whatever neural responses they are
uncovering cannot be definitively attributed to the subjects’ ability
to detect (somehow) ultrasonic frequencies.
bob
I don't recall Oohashi saying beans about ABX, pro or con. Nor did ABX
enter into the discussions we had about Oohashi's paper. I made the
original reference to that paper here and in other forums, and you, Steven,
and Arny before you even read it claimed it couldn't be true. Then after
reading the paper you grasped at the straw of intermodulation
distortion...which later Kal picked up on. It may be true, or it may not
be, as I've already said. But it is not a proven fact...that is for sure.
> I don't recall Oohashi saying beans about ABX, pro or con. Nor did ABX
> enter into the discussions we had about Oohashi's paper.
Oh, good heavens. Here's a post from early in the thread in which you
first cited the Oohashi article of 2000:
http://groups.google.com/group/rec.audio.high-end/msg/7e5c0c6fe61b99fd?hl=en
And here's an excerpt from that post:
> 3) Is Quick-Switch ABX testing a la Arnie the best way to evaluate music?
> The CCIR protocols oft cited by Arnie as the "bible" for quick-switch ABX
> testing are directly challenged by Oohashi and the nine other authors. They
> flatly suggest that the research conducted in the early eighties and given
> as support for the CD 22khz cutoff were wrong (because both the
> quick-switching and the short excerpts used "masked" the true reaction of
> the subjects by not allowing enough time for the brain to react to any but
> the audible portion of the signal). Oohashi et al devote a summary section
> specifically to this, which I've included as a footnote to this post.(1).
Bashing ABX was part and parcel of your drum-beating for Oohashi's
work from the beginning, Harry.
bob
The 2000 and 2002 articles had SIX authors in common. It's the same
team, with a few new research assistants.
bob
I stand corrected....I guess I did make that reference to it. But Oohashi's
comments were incidental in a prologue to the main thesis....he in no way
"bashed" ABX...he simply noted that prior testing which suggested no effect
may have missed it....primarily due to the short segments, which was a
concern of his in almost all prior tests, not just ABX.
I followed up on that...but I focused more on the short-segments as
advocated by Arnie and you guys in conjunction with ABX testing....whereas
Oohashi was talking about prior audio testing in general, not just ABX.
Still the main objection was to short-segment testing, not ABX per se.
>
> I haven’t read any of the post-2001 papers Klaus cited, so I don’t
> know if they have anything further to add to this. I gather from one
> of Klaus’s follow-up posts that the Oohashi team is still using single
> loudspeakers, which suggests that whatever neural responses they are
> uncovering cannot be definitively attributed to the subjects’ ability
> to detect (somehow) ultrasonic frequencies.
>
> bob
>
>
I will just add that I verified this in the 1970s. The distortion
was actually measurable by comparing digitized signals from
a research-grade microphone with and
without added ultrasonics. I should add that I had a very nice
digital setup with an 18 bit dynamic range / 16 bit resolution digitizer
that went to 40 kHz in 1972.
This was not cheap, nor common, nor was the computer it
was attached to cheap, but Uncle Sam was generous back then
to scientists.
Doug McDonald
I can't find the paper where the figures were mentioned, but I
remember 11 µs for CD and 6 µs for human hearing.
Then there's another paper (Woszczyk, "Physical and perceptual
considerations for high-resolution audio", AES preprint 5931) that
relates sampling rates to transient rise times. Clearly CD is too
"slow".
Be that as it may, what I wanted to indicate is you can't say that
SACD sounds more realistic because of extended bandwidth (ultrasonic
content), because there are other parameters such as higher sampling
rate.
Klaus
Some of the papers give more details about the speaker system:
cone-type woofer, bending-wave tweeter for the audible range, super-
tweeter
twin cone-type woofers, horn-type tweeter, super-tweeter
Oohashi X-overed at 22 and 26 kHz, respectively, Ashihara used 5
speakers for each of the 11th, 13th, 15th, 17th, 19th harmonics of the
2 kHz fundamental, the 3rd - 9th harmonics were played together
through a 6th loudspeaker.
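As an illustrative sketch, the stimulus layout described above could be generated as follows; the sampling rate, duration, and unit amplitudes are assumptions for illustration, not figures from Ashihara's paper:

```python
import math

FS = 192_000       # assumed sampling rate, high enough for the 38 kHz 19th harmonic
F0 = 2_000         # 2 kHz fundamental
N = FS // 10       # 0.1 s of samples

def tone(harmonic):
    """A pure sine at harmonic * F0, unit amplitude."""
    return [math.sin(2 * math.pi * F0 * harmonic * n / FS) for n in range(N)]

# 3rd-9th harmonics summed for the single "audible" loudspeaker...
audible = [sum(vals) for vals in zip(*(tone(k) for k in range(3, 10)))]
# ...and each ultrasonic harmonic routed to its own speaker, so any
# intermodulation products must arise in the air or in the ear,
# not inside a single driver.
ultrasonic = {k: tone(k) for k in (11, 13, 15, 17, 19)}
```

Splitting the ultrasonic components across separate drivers is what lets the design distinguish genuine ultrasonic perception from loudspeaker intermodulation distortion.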
Klaus
> "Arny Krueger" <ar...@hotpop.com> wrote in message
> news:gk7m2...@news3.newsguy.com...
>> "Harry Lavo" <hl...@hotmail.com> wrote in message
>> news:gk6it...@news5.newsguy.com
>>> This is dismissing the studies based more on prejudice
>>> than logic.
>>
>> No, it's merely raising proper questions of relevance.
>
> Not raising questions...dismissing relevance a priori.
>>> Any physiological or psychological study
>>> attempts to maximize the clarity of the stimulus in
>>> order to have a clear reading of results.
>> But, stimulus is usually limited to things that are
>> relevant.
> And who gets to decide in advance what is relevant other
> than those undertaking the research?
Obviously, nobody but perhaps those doing the funding. Do we know who is
funding this work?
>>> The fact that
>>> gamelan music is rich in harmonics is exactly the
>>> reason it was chosen for the first serious attempt to
>>> research whether hypersonics had a perceived impact
>>> when part of music, as opposed to being tested as a
>>> discrete signal.
>> Let's say that conclusive results are obtained with
>> gamelan music listened to at a close distance.
> How close we don't know for sure.
Well if you know anything about how hypersonic sounds are attenuated by the
air, then you have an idea.
>> How many people are interested in scrapping the audio
>> system they have now for something that narrow and
>> irrelevant to how westerners listen to music?
> A priori "relevance" and judgment noted.
There's no secret that I know something about the attenuation of hypersonic
energy by air. One word: LOTS!
Also, I did my homework and helped thousands of others do the same.
Something about bias-controlled testing. ;-)
I do have this preconceived notion that neither the ultrasonic attenuation
of the air nor the ear's insensitivity to ultrasonic waves is going to
be changed very much by advertising campaigns and bad science by advocates
of the "weird science" school of audio design.
>> In a sense, this question was already asked, when there
>> was an attempt to popularize SACD and DVD-A. Music
>> lovers stayed away from it in droves.
> Unfortunately Sony's attempt to popularize SACD was
> woefully inadequate...
It appears that millions were lost and a few careers were umm attenuated by
the SACD/DVD-A debacle.
> just compare it to what they are doing for Blu-ray.
It is true that it appears that SACD and DVD-A are being dumped for Blu Ray.
AFAIK the core technology of SACD, namely DSD has no Blu Ray equivalent at
this time. DVD-A does seem to have a dotted line connection to Blu Ray.
> And the DVD=A consortium never got a
> promotional campaign off the ground.
Just goes to show that it's hard to sell a solution for an imperceptible
problem.
> And despite that, SACD hangs around as a legitimate technology sought out
> by many classical music and jazz lovers.
Die hards squared, it seems.
I wouldn't say that. I would say that music BUYERS "stayed away from it in
droves." And they did so for various reasons. Most people's music tastes
don't lend themselves to noticing any kind of difference (assuming, for a
moment that differences exist) that either SACD or DVD-A brought to the
party. Utilizing the new formats required buying new equipment, which the
ordinary consumer was reluctant to do. Titles in either of the new formats
tended to cost more (at least, initially) than the standard Redbook CD of the
same performances and finally, most consumers simply don't care about quality
beyond some basic level (as witnessed by the incredible popularity of MP3
downloads, most of which sound dreadful).
Blu-Ray video is struggling for the same reason, and believe me, there is a
far larger segment of the home-video market equipped to notice and enjoy the
improvements wrought by the high-definition picture afforded by Blu-Ray than
there are audiophiles who are equipped (or inclined) to enjoy SACD or DVD-A.
And, I think that you are being much too literal here about the importance of
gamelan music. The gamelan was chosen as a test case simply because its rich
harmonic content reaches into the stratosphere, and as such, it serves as a
surrogate for all kinds of music and/or instrumentation. The results are not
confined merely to "gamelan music listened to at a close distance." The
results are applicable to all music that uses any instruments that are rich in
harmonic content.
I wonder where the ongoing stream of cash is coming from to fund this kind
of an effort. There are cancer research projects with far smaller staff
sizes.
>> One wonders if this effect would actually be produced at
>> all by exposure to, say,
>> live music.
>
> To the degree that responders in the test reported the
> music more natural and more pleasant, one probably can
> assume so.
You can say this knowing how badly hypersonic frequencies are attenuated by
air?
- Ministry of Education, Science and Culture
- Japan Science and Technology Agency
- Japan Society for the Promotion of Science
- Nissan Science Foundation
Klaus
> Then there's another paper (Woszczyk, "Physical and
> perceptual considerations for high-resolution audio", AES
> preprint 5931) that relates sampling rates to transient
> rise times. Clearly CD is too "slow".
You've changed the subject and fallen into yet another urban myth.
BTW, 5931 is a conference preprint and has not been peer reviewed. In fact
just about any fairy tale you want to talk about can and has been presented
as a paper at an AES conference.
> Be that as it may, what I wanted to indicate is you can't
> say that SACD sounds more realistic because of extended
> bandwidth (ultrasonic content), because there are other
> parameters such as higher sampling rate.
The fact that a sample rate is higher is a potential cause, not the same as
an audible effect. Until it has been proven to be a generally audible
effect, its just numbers.
In the context of "Japan Inc." that could mean almost anything.
Was there widespread piracy?
AFAIK there was only one way to hear SACD/DVD-A and that was to purchase
it.
> And they did so for various reasons.
I had no problem purchasing SACD and DVD-A software or hardware.
> Most people's music tastes don't lend
> themselves to noticing any kind of difference (assuming,
> for a moment that differences exist) that either SACD or
> DVD-A brought to the party.
How can you generalize that way?
The CD format brought a widely-perceived improvement in sound quality and
sold in droves.
> Utilizing the new formats
> required buying new equipment,
Which was readily available for far lower prices (either current dollars or
constant dollars as compared to the CD).
> which the ordinary
> consumer was reluctant to do.
There was no sonic benefit, and I say that as a purchaser.
> Titles in either of the new
> formats tended to cost more (at least, initially) than
> the standard Redbook CD
As did CDs, which cost far, far more (if memory serves, 4-5 times as much, street
price) than LPs. I only paid a nominal premium (maybe 40% or less) for my
SACDs and DVD-As.
> of the same performances and finally, most consumers simply don't care
> about quality
> beyond some basic level (as witnessed by the incredible
> popularity of MP3 downloads, most of which sound dreadful).
Depends what you call incredible popularity, and dreadful sound. I know that
there is a lot of hysteria among self-professed audiophiles about downloads,
and some downloads (particularly bootlegs) do sound bad, but some don't
sound dreadful. There's another problem with hysteria that could be
addressed by proper blind tests here.
> Blu-Ray video is struggling for the same reason, and
> believe me, there is a far larger segment of the
> home-video market equipped to notice and enjoy the
> improvements wrought by the high-definition picture
> afforded by Blu-Ray than there are audiophiles who are
> equipped (or inclined) to enjoy SACD or DVD-A.
There are a number of reasons why Blu Ray is struggling. One big problem is
availability of titles, both to buy and for rental. Another is the fact that
Blu-Ray players haven't generally slipped below the *magic* $200 mark, below
which a consumer product will enjoy broader acceptance. Another is the fact
that Blu Ray player technology has not stabilized enough - enough people
bought a player a year ago that is now somewhat obsolete. Another problem
is that many Blu Ray titles don't look all that much better than the same
title as an upscaled DVD. The latter is the major problem that Blu Ray and
DVD-a/SACD share, and that is the problem of actually being perceived as
being a better product.
> And, I think that you are being much too literal here
> about the importance of gamelan music.
Gamelan is been-there-done-that, don't-buy, don't-play music in my house.
> The gamelan was
> chosen as a test case simply because its rich harmonic
> content reaches into the stratosphere, and as such, it
> serves as a surrogate for all kinds of music and/or
> instrumentation.
Gamelan is about as relevant to American tastes as a 20 KHz sine wave. It's
probably used for a similar reason - absence of the usual maskers in the
audible range.
> The results are not confined merely to
> "gamelan music listened to at a close distance." The
> results are applicable to all music that uses any
> instruments that are rich in harmonic content
Wrong. Just because music has substantial harmonics > 20 KHz does not mean
that it is richer in harmonics than music that doesn't.
> The fact that some people find SACD more realistic
> perhaps also due to the fact that the temporal resolution
> of CD is worse than that of human hearing.
What, the idea that the CD format can't resolve time differences shorter
than one sample period or 22 uSec? Or maybe half of that?
That's a long-discredited urban myth. The actual number is down in the
picosecond range. In contrast the ear is a goodly number of millions of
times worse.
> I followed up on that...but I focused more on the
> short-segments as advocated by Arnie and you guys in
> conjunction with ABX testing....whereas Oohashi was
> talking about prior audio testing in general, not just
> ABX. Still the main objection was to short-segment
> testing, not ABX per se.
Actually Harry, current scientific evidence about how we hear suggests that
short-segment testing is the only way that we directly hear differences. It
is now known that our brains remember sounds (as opposed to a short list of
extracted information that is mostly musical) for no longer than 30 seconds.
Thus, you can remember which voice, which note, which chord, which
instrument, etc., for vastly longer periods of time than 30 seconds, but not
the actual sound itself.
While the brain is a marvelous computer with a great associative memory, it
sucks as a literal recorder of audible sounds.
Yes, but some of the papers show power spectra and SPL of various
sound stimuli and when you look at those you will see that there's not
much to trigger the hypersonic effect. That's why it would be
interesting to see whether or not small amounts of ultrasonic
components are capable of triggering the effect. If the effect is
present for some selected instruments only that come into play only
occasionally during a piece of music, then why bother?
Klaus
Yes, Arny, despite that, since that is what the subjects of the test recorded
as responses, and what followers of SACD and DVD-A ascribe to those two
high-bandwidth media.
You keep harping on air, Arny, when the mystery of how this gets transmitted
is acknowledged. It might not have anything to do with air....does
magnetism have anything to do with air? Infrared?
Even if the improvement is only occasional, it would still be an
improvement. But I see no evidence yet that there's really an
improvement. So far as I can tell, the Oohashi team can't even show
that whatever effects it is getting are the result of ultrasonic
signal, rather than audible IM distortion or ultrasonic noise.
bob
I don't doubt this at all, and it certainly tallies with my experience. Last
night, for instance, an audiophile friend of mine was over. He brought his
Sony 777ES SACD player and two copies of the same recording. These were
hybrid SACD/CD copies of some female contemporary "folk" singer accompanied
by acoustic guitar. On one 777ES, we had the disc set to play the SACD layer,
and on the other, we had the player set to the Redbook layer. Both players
were fed into adjacent high-level inputs on my trusty SP-11 preamp and we
confirmed with an audio voltmeter and a test CD playing the 400 Hz tone that
the output from both players, one in the SACD mode and the other in the
Redbook mode were identical. With my friend (with his back to me concealing
his facial expressions, body language and most importantly, which switch
position he was selecting) switching between the two 777ES's playing the
same cut and sync'd as well as possible, for the first time, I detected a
difference between SACD and Redbook. What surprised me is that the
differences I heard were not in the guitar (which I would expect, if, indeed,
any differences existed), but rather in the woman's voice. On one of the
two, the woman's voice sounded closed in, and slightly dry compared to the
other player. Since I didn't know which was which, I said nothing and
continued to listen for a while as my accomplice switched back and forth.
Still keeping my observations to myself, we switched places and I operated
the pre-amp source switch while he listened. From my position, I could still
clearly hear the differences and it was then I noticed that when the woman's
voice sounded more open and airy, it was the SACD layer that was playing.
When we compared notes, we both had noticed the same thing. To make sure that
this wasn't a player-related anomaly, we switched modes for each player; that
is to say that the player that was in SACD mode was now switched to CD layer
mode, and vice versa. The same difference was noted. Just to make sure, we
played the SACD layer through both players and switched off between them - no
difference was noted.
I don't assert that this test was either definitive or even particularly
rigorous, but it is the first time that I heard a difference between SACD (or
any so-called hi-res format really) and regular 16-bit, 44.1KHz PCM. Of
course, I've never had the ability to switch between Hi-res and regular
digital sources this quickly because SACD players cannot switch between
layers on the fly. They must be stopped and switched. The Sony needs to
re-read the disc's TOC before starting up in the "other" mode. The result is
more than a 30-second delay between hearing one layer, and then switching to
the other mode and locating the same passage to play. This is in keeping with
what you say, Arny. I would switch and not hear any difference after the
minute or so the switch actually took to complete, ergo, no difference
existed. It looks like I may have been hasty in my assessment. The
differences aren't large, but on this disc, anyway, they were clearly there
and could be recognized every time.
> > Then there's another paper (Woszczyk, "Physical and
> > perceptual considerations for high-resolution audio", AES
> > preprint 5931) that relates sampling rates to transient
> > rise times. Clearly CD is too "slow".
> You've changed the subject and fallen into yet another urban myth.
> BTW, 5931 is a conference preprint and has not been peer reviewed. In fact
> just about any fairy tale you want to talk about can and has been presented
> as a paper at an AES conference.
Which didn't stop Stereophile from highlighting it. Funny that.
However, Dr. Woszczyk is a serious researcher, with a good JAES
publication record (mostly on multichannel sound), as well as numerous convention
presentations going way back, so I'd be interested to
see the work formally published.
--
-S
I know that most men, including those at ease with problems of the greatest complexity, can
seldom accept the simplest and most obvious truth if it be such as would oblige them to admit
the falsity of conclusions which they have proudly taught to others, and which they have
woven, thread by thread, into the fabrics of their life -- Leo Tolstoy
<snip>
Can you confirm that the two mixes are the same? If you have a
reasonably good hardware for digitizing an analog signal, you might
want to digitize the analog SACD output at CD resolution and then see
if you can hear the difference.
Well, preference tests are not unknown in psychoacoustics; I wouldn't
call the use of preference tests nonstandard. What is curious is
the disparate result from ABX vs. preference tests in Oohashi et al.'s
work. They certainly suggest that the preference test protocol used needs
to be examined. See Zielinski et al. in the JAES Jun 2008 issue to read about
some factors that need to be addressed in order to perform
high-quality preference DBTs.
> BTW, 5931 is a conference preprint and has not been peer reviewed. In fact
> just about any fairy tale you want to talk about can and has been presented
> as a paper at an AES conference.
True, but just because research hasn't been published in a peer-
reviewed journal doesn't make it junk science. It ought to be
evaluated as peer reviewers would, rather than merely dismissed.
bob
No, not with any certainty. All I can say is that they SEEMED to be the same.
IOW, the instrumentation sounds like it's coming from the same spots on the
typical pop 3-channel "sound stage." There are only two instruments playing
on the recording; acoustic guitar on the left, singer in the middle, and drum
set on the left. The liner notes DO say that the original sessions were
captured using DSD and the mix was then converted to PCM for the CD layer and
for the CD-only release.
> If you have a
> reasonably good hardware for digitizing an analog signal, you might
> want to digitize the analog SACD output at CD resolution and then see
> if you can hear the difference.
Next time my friend who owns this disc comes over, we'll try that (the music
is not my cup of tea. I mean, I like folk music but these people are much too
modern "pop" oriented in their playing and singing styles for me. IOW, Ian &
Sylvia they ain't).
> Well, preference tests are not unknown in psychoacoustics; I wouldn't
> call the use of preference tests nonstandard. What is curious is
> the disparate result from ABX vs. preference tests in Oohashi et al.'s
> work. They certainly suggest that the preference test protocol used, needs
> to be examined.
Oohashi didn't use a simple preference test. And, interestingly, his
test subjects did not report a consistent preference. I've never seen
a difference test that used anything like what Oohashi did. It's
really not a good way to do it.
bob
According to
Lenhardt, “Ultrasonic hearing in humans: applications for tinnitus
treatment”, Int. Tinnitus J., vol. 9, no. 2, pp. 69-75 (2003),
ultrasonic hearing is possible in humans, but only by bone conduction.
From magnetoencephalography studies it is known that the auditory
cortex is activated by ultrasound:
Hosoi et al., “Activation of the auditory cortex by ultrasound”, The
Lancet, vol. 351, Feb. 14, 1998
Lenhardt: “Three lines of evidence suggest that the resonance of the
brain is critical for an audible ultrasonic experience. Support for a
brain ultrasound demodulation theory stems from spherical models of
brain and psychoacoustic metrics of masking audio frequencies by
ultrasonic noise and by matching the pitch of audible ultrasound with
conventional air conduction sound.”
and
“Ultrasound sets the brain into forced vibration, and it is the brain
oscillation that is detected on the base of the cochlea in normally
hearing individuals. With hearing loss, greater ultrasonic energy is
needed to spread the displacement on the basilar membrane toward the
region of intact hair cells. In the case of complete deafness, the
increased ultrasonic energy likely displaces the otolith organs,
resulting in saccular stimulation.”
From Lenhardt’s paper
“Eyes as fenestrations to the ears: a novel mechanism for high-
frequency and ultrasound hearing”, Int. Tinnitus J. Vol.13, no.1, pp.
3-10 (2007):
“Broadband noise (5-70 kHz) delivered to the skull at the mastoid,
occiput, or forehead excites vibration in the eye. The frequency
response of eye excitation ranged from 25-60 kHz. When that band of
noise was presented as airborne sound directed at the eye, we measured
vibration of the brain and skull at the mastoid, occiput, and forehead
in the same frequency range of 25-60 kHz.”
Lenhardt states that “direct vibration of the brain can be
communicated to the cochlea via intracranial fluid conduction” and
“thus, transmission of airborne ultrasonic frequencies through the eye
to the ear via intracranial fluid conduction helps to explain two
mysteries in the human extended range of hearing”.
And he further states: “A case is made here for a separate airborne
ultrasonic input, but the final pathway is the same because ultrasound
activates the auditory cortex in normal-hearing and deaf listeners.
Clearly, the eye, with its ultrasonic passband of 25-60 kHz, could
transmit energy from instruments with ultrasonic energy (e.g. cymbals)
to the ear and would activate both the auditory thalamus and the other
nuclei in the auditory pathway.”
And Lenhardt concludes: “Consistent with the current findings is that
the eye is the input window into the ear for high-frequency airborne
sound and music. There is no need to postulate an additional unknown
somatosensory route to the ears; nonetheless, the concept of
multisensory coding in music is intriguing.”
Klaus