Audiolense 4.6 and JRiver MC18---Summary of my testing and debugging so far (long, detailed post)


Bob Katz

Jan 24, 2013, 12:12:11 AM1/24/13
to audio...@googlegroups.com
1/23/13

This is an open letter to Bernt, the brilliant developer of Audiolense and to Matt, the equally brilliant developer of the JRiver Media Center and to the members of this forum. Excuse the long letter, but I found it necessary. I did edit it carefully!

I need to tell you that even before implementing Audiolense, my audio mastering room was already acoustically designed, trapped, and has excellent reverberation-time vs. frequency curves, good treatment, symmetrical walls, and a cathedral ceiling. The energy-time curves and the reverberation times at different frequencies have been tweaked and optimized. My equipment includes my own custom-built analog crossover and analog matrix mixer for the 5.1 system. The loudspeakers are all of "audiophile quality," and I think the system's frequency response, headroom, musicality, imaging, and soundstage are impressive. I can work for 8 hours a day producing music without fatigue, and the masters translate to my critical clients. Or I can use this room for pleasure listening or as a home theater.

Although I can get excellent work done with this system as it stands, I knew that there is something even beyond this, i.e. properly-done digital room correction. If I can achieve digital time-alignment of the subwoofers, a digital crossover, and digitally-implemented filters, the sound could potentially be even better, the bass tighter and better-defined. I had no idea that Audiolense and JRiver existed until a few short weeks ago when Mitch Global, a participant in the REW forum and this forum, brought them to my attention. Thank you, Mitch!

So I've been very busy in the past two or three weeks trying to get this system working for me, and I definitely have great enthusiasm for its potential, but I've found that some features and capabilities have to be added to both Audiolense and JRiver to satisfy sound-quality requirements and to use them in a professional capacity from day to day. You know that I can't tell my clients, "excuse me, I have to reboot my speakers so I can get sound." So we have some work to do, and right now I have to retire both Audiolense and JRiver until, I hope soon, both systems can be modified to reach their clear potential, all outlined below.

Let's start with JRiver, because this is the easy part! I'm very impressed by its power as a playback engine, the convenience features that I've seen are amazing, and the JRiver development team is clearly the most dedicated I've ever seen. I venture to say that JRiver even beats Apple's own iTunes for power, sound quality and ergonomics. I could go on about the fabulous playback features and flexibility I've found but we'll save that for another letter.

I. JRiver bugs that need fixing:

A. Live input. I (or any audio professional doing audio mastering or post-production) really need to have live input working with a single ASIO I/O driver if possible, able to accept any digital input between 44.1 kHz and 192 kHz. Instead of selecting the input sample rate in a menu, live input needs to detect the incoming sample rate and change all the DSP parameters automatically. And then this sample rate needs to govern all the filters in the DSP chain. Detecting input sample-rate changes is not always easy or foolproof, but it turns out that good drivers like Lynx or RME take care of the deglitching and provide signals to the host software whenever a sample rate change takes place. It would also be nice if JRiver could switch the interface between external and internal sync when choosing between live input and JRiver's playback engine. I've tried several freeware and shareware solutions for live input, including ConvolverVST, Console, and VST Host, and in all cases the ergonomics are so horrendous or the bugs so large that I had to give up on them. I'm hoping that JRiver can make just a few improvements and then we can "do it all" in the one app. I know that Matt has his priorities and his hands full, but well, we can dream.
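As a thought experiment, here is a minimal Python sketch of the desired behavior: the driver notifies the host of a rate change, and the host rebuilds every rate-dependent element before passing audio again. All names below are invented for illustration; this is not JRiver's or any driver's actual API.

```python
# Hypothetical sketch of auto-following the input sample rate.
# None of these names come from JRiver or a real ASIO driver.

class DspChain:
    """Stand-in for the convolution/EQ chain; everything in it is
    assumed to be recomputed for the sample rate it is built with."""
    def __init__(self, sample_rate):
        self.sample_rate = sample_rate

def on_rate_change(new_rate, holder):
    """Callback a driver (Lynx/RME style) could fire after deglitching
    a detected input-rate change."""
    if holder["chain"].sample_rate != new_rate:
        holder["chain"] = DspChain(new_rate)   # rebuild all DSP atomically

holder = {"chain": DspChain(44100)}
on_rate_change(96000, holder)                  # incoming stream switched rates
assert holder["chain"].sample_rate == 96000
```

The point is simply that one notification rebuilds the whole chain, so every filter downstream always agrees with the incoming rate.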

B. Until the Audiolense issues that I found can be corrected (noted in part II below), I am confident that I can live without Audiolense, and can implement JRiver's simple parametric equalizer and room correction/crossover/time alignment facility. I'm betting even JRiver's existing simple room correction facility can sound better than my current analog crossover. It would be nice if the increments could be in ms. instead of feet, but I could live with the approximation for the time being. But I need live input in order to set it up and test it using an analysis program like Room EQ Wizard (REW) or FuzzMeasure or Spectrafoo or MLSSA (all of which I am quite familiar with).

C. True 24-bit dithering. I discovered that the "dither" check box in the parametric equalizer is just a simple truncation, and true channel-independent, non-correlated dither needs to be implemented. I realize that many people doubt the need for 24-bit dithering, but my experience is that it will very subtly improve the purity of sound and depth of a quality reproduction system. Or, if someone knows of a 64-bit VST plugin that can implement multichannel (I need 8 channels) dither, I would be happy to use it instead of begging JRiver to develop it themselves. I honestly don't know if one exists yet, and I fear Matt has to invent one himself. If I could write plugins, I'd write one; I feel it's that necessary. Think of it as the icing on the cake. JRiver already has the cake!
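For what it's worth, here is a minimal NumPy sketch of the kind of dither I mean: channel-independent TPDF noise of plus or minus one LSB at the 24-bit level, added before rounding. This illustrates the technique only; it is not anyone's shipping code.

```python
import numpy as np

def tpdf_dither_to_24bit(x, rng=None):
    """Quantize float samples in [-1, 1) to 24-bit integers with TPDF dither.
    x is shaped (channels, samples); each channel draws its own noise,
    so the dither is non-correlated between channels."""
    rng = np.random.default_rng() if rng is None else rng
    lsb = 1.0 / (1 << 23)                      # one 24-bit LSB for +/-1.0 full scale
    # Triangular PDF = sum of two independent uniform noises, spanning +/-1 LSB.
    noise = (rng.uniform(-0.5, 0.5, x.shape) +
             rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round((x + noise) / lsb).astype(np.int32)

dithered = tpdf_dither_to_24bit(np.zeros((8, 1024)))   # 8 channels, as needed here
```

Dithered silence toggles only within one LSB, which is exactly the decorrelated behavior truncation fails to provide.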

D. When Audiolense has been updated, JRiver's own convolution engine will need some fixes: it is supposed to change the filters automatically with different sample rates, but I have not been able to make that work. I understand that "The Lion" has found a trick that does this using zones, but I think it should be fairly simple to debug this feature; I even heard that early versions of MC18, or perhaps late versions of MC17, had working filter switching, but I'm not sure about that.

That's it for JRiver from my point of view.


II. Audiolense

This is a fabulously conceived, powerful program with very high audio resolution and unique features that allow us audio engineers to conceive and create a system integrating crossover points, routing, and room correction all in one package. So it has great promise and great utility. The simulation feature is priceless: it allows us to check scenarios and optimize a system "offline".

But I did find one major issue that prevents me from using it and prevents it from being the audiophile-quality application that it promises to be (see section C below). I am sure that Bernt can overcome this issue in a very short time. As soon as I point it out to him (see C below), he will say, "of course, why didn't I think of that?" Please be patient and read through this letter, Section C will come soon enough.


A. Bugs

Don't feel like the Lone Ranger! Nearly every audio measurement program that I've used is buggy, and you have to learn its quirks. It would be nice to fix all the bugs I list below, but I'd say Audiolense is better than average in terms of bug count considering its power, and we're used to working around these things in typical audio measurement/correction programs.

Workaround for nearly every bug mentioned below: many of the following bugs can be worked around by simply exiting the program and then relaunching it. This implies an issue with initialization of variables (not at all uncommon), coupled with Audiolense's convenience feature of remembering the last-used speaker setup, measurement, and target (or most of it), which conflicts with the user's desire to change and save new versions of the same. So, if you encounter any of the following bugs, until they are fixed, I suggest you quit the program and relaunch it, and chances are you will exorcise the bug until next time.

1. Speaker setup section. I had a terrible time creating a new 5.1 setup. The naming section is in two parts and it is not clear which part is which. It tended to refuse my input and stubbornly revert to a 2.0 setup time and time again. When I got desperate and "deleted" all setups except the last one, it kept coming back with the setups I had deleted. I know some of this is my misunderstanding of how this section is supposed to work, and I guess its GUI doesn't match my intuition :-). Anyway, using the trick mentioned above, and with perseverance, I was able to create a setup which "stuck" for the rest of my work. Until I figured out the workaround, EVERY time I loaded a measurement file, the speaker setup reverted to 2.0 when I had just set it to 5.1. It was frustrating indeed, as it happened "behind my back," and the only way I knew was to choose "edit speaker setup" and notice that the tab showed a 2.0 routing instead of 5.1. Once I got it to "stick" by quitting and relaunching, it did stick, and seems to stick from this point on. Though I fear that if I create or modify this speaker setup, the vicious circle will begin again.

The same goes for crossover point. I would change the crossover point, save this to a new Speaker setup, and loading a measurement file would cause the XO point to change back. The "reset setup" button also caused the XO point to revert to the last setting as well. Again, I think that quitting the application is a workaround for this bug.

A related question: If I send Bernt a measurement file, does it include the speaker setup that I have? If so, then this might explain the weird connections between the two files and the buggy operation.

2. Target section. The Chart Editor, which comes up when you right-click on a frequency point, is very useful when adjusting frequency points. Pity you can't delete a frequency point from within the Chart Editor; maybe the feature exists and I can't find it. Anyway, the Chart Editor's display of data points is sometimes out of step with the actual target being edited. Sometimes it does not display the points that you add or delete on the graph. It depends on whether you create a new target or open an existing target file. The workaround is as above: save the target, quit, then open the target, and it should clean up the situation.

3. Minor issue: change the name of a measurement, open the measurement, and the display name in the lower left-hand corner is still the old one (it doesn't update). Perhaps there is some kind of internal name in a measurement file that will never be in sync with the actual file name. Something to fix for version 5 or 6; not that urgent. Similarly, the target name is not always displayed, but I did discover the little pulldown menu on the bottom right that displays the current parameters (nice).

4. Minor issue: I don't think it's a good idea to try to rename a cfg file outside of the application (e.g. in Windows Explorer), because the cfg files point to specifically-named WAV files…. I find the only thing to do is to save the filter to a new cfg file with a different name, and then Audiolense will generate all the files accordingly. HOWEVER, it would be nice someday to have a rename-filter function.

5. Squiggles in the measured high-frequency response near 20 kHz: not exactly a bug. I found this was due to too short a measurement window at the high-frequency end. Increasing it from the default 0.227 ms to about 0.5 ms fixed it with no problem. It's probably related to diffraction issues in the tweeter or cabinet, which are smoothed out with just a little bit of time averaging in the window. And in fact, it was Bernt who alerted me to the fact that anomalies like this are why he chose a slightly longer measurement time than Jim Johnston had recommended in his papers. Not an issue at all once you know what causes it. Careful, Bernt, if you implement this in the default correction: when different sample rates are chosen, the measurement window should not change in milliseconds, for consistency in measurement. Store it as a time value, not a number of samples.
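A tiny illustration of the "store it as time, not samples" point (my sketch, not Audiolense's internals): derive the sample count from the fixed duration at each rate.

```python
# Derive the window length in samples from a fixed duration in ms,
# so the measurement window stays physically identical at every rate.
def window_samples(window_ms, sample_rate):
    return round(window_ms * 1e-3 * sample_rate)

assert window_samples(0.5, 44100) == 22   # ~0.5 ms at 44.1 kHz
assert window_samples(0.5, 96000) == 48   # same duration, more samples, at 96 kHz
```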

6. When enabling partial correction with no correction above 200 Hz (see section C below as to why) I ran into several issues that prevented me from fully evaluating this option:

a. Do NOT use 0 octaves transition. This screws up a number of parameters. If you want a short transition band between corrected and uncorrected, choose 0.1 octaves instead. Probably 0 is a troublesome number in one of Bernt's equations. At least that's what I found, anomalies just below and above the crossover frequency were eliminated when not using 0 octaves.

b. With "no correction above 200 Hz" chosen, even with a 0.1-octave transition, there was a severe dip in the supposedly "corrected" band at 117 Hz (where the measured loudspeakers have an issue). By moving the transition band to 225 Hz, the problem went away. 0.1 octave below 200 Hz should still be above 117 Hz, shouldn't it?

c. It appears that there is some interaction at the extreme low-frequency end of the band, near 20 Hz, when implementing this partial correction near 200 Hz. At least for me. As soon as I implemented the partial correction up to 225 Hz, my low-frequency response at 20 Hz reverted to about 3 dB or more down, when with the same target it had been fine. I had to add some additional points in the target, and it took me an hour of fiddling with points below and above 20 Hz until I could get the simulated 20 Hz to 40 Hz response near flat without causing a severe correction boost below 20 Hz or simulated response down to 10 Hz, which I think is a bad idea. When I had a wideband (not partial) correction, the target was easy to configure and there were no inconsistencies or need to add so many points to keep it flat. I cannot explain why changing to a partial correction circa 225 Hz affected the 20 Hz response, but for me it did.

The most common approach to this is a high pass filter instead. I think a well-implemented linear phase approximately 20 Hz 24 dB/octave filter might be a good idea here, instead of relying on the target shape to fix the issue. Then there would be no phase shift.

d. Setting partial correction up to 225 Hz exposes issues that make it impossible with the current structure to implement this option. First of all, the +/- amplitude control for the uncorrected portion does not work desirably in my opinion. The object is to adjust the uncorrected gain to make a seamless splice to the uncorrected loudspeaker response at the 225 Hz transition point. Adding boost or cut here does not exactly offset the uncorrected section. Instead it appears to interact with the overall attenuation of the corrected section, in a kind of unpredictable way. Finally I was able to make a seamless splice just by trial and error of adjusting the uncorrected amplitude and watching the simulated response curve.

But this was to no avail. Due to slight, natural differences in the naked (uncorrected) amplitude response of the front main loudspeakers, the end result of trying this option resulted in uneven frequency response between the left and right speakers, left-right image shift at different frequencies, etc. Which, weirdly, I do not get with my analog correction system, but we'll let that sleeping dog lie. So it would be necessary if implementing this partial correction, to have separate transition frequencies for each loudspeaker. In fact, this feature really depends on each loudspeaker having nearly perfectly-matched response and level to begin with, at least near the splice point. So it's not a practical option. But anyway, I started this partial correction in order to debug the sound quality issue described below in Section C.

7. TTD bugs. Stubborn dips in the low-frequency response with TTD that were fixed with no problem by moderate frequency-domain correction. With TTD, I tried many different permutations of bass boost limit (or setting bass boost very high) and the checkboxes and the different subwindows, and could not eliminate these artifacts. I know that Bernt has a green thumb for this plant, but in that case he would become a consultant for everyone who buys the software and wants to try TTD. I could send my setup to him to analyze, but anyway, I think TTD is still a work in progress which is not quite usable (see Section B below).
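Returning to the linear-phase high-pass suggested in 6c: as a feasibility sketch (my own, using SciPy's standard windowed-sinc design, not Audiolense's code), a symmetric FIR gives zero phase shift apart from a constant delay. The exact 24 dB/octave slope would need a purpose-built design; this just demonstrates the linear-phase property.

```python
import numpy as np
from scipy.signal import firwin

fs = 44100
# A 20 Hz corner needs a long FIR; odd length keeps the taps symmetric.
taps = firwin(numtaps=8191, cutoff=20.0, pass_zero=False, fs=fs)

assert np.allclose(taps, taps[::-1])       # symmetric => linear phase
H = np.abs(np.fft.rfft(taps, 1 << 16))
assert H[0] < 0.05                         # DC (and subsonics) strongly attenuated
```

Because the taps are symmetric, every frequency is delayed equally, so subsonic garbage is removed without the phase shift an analog or IIR high-pass would add.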

B. Feature requests:

The following notes are for frequency domain correction unless otherwise noted:

1. Relaxed amplitude correction. It seems the default correction is to get to +/- 1 dB. Many of my professional colleagues and I believe that a correction this strong can cause more sonic harm than good; in other words, the cure sounds worse than the disease. However, I was not able to find a way to reduce the severity or strength of the correction, despite trying every permutation of every parameter in the correction section that I could. I tried the maximum boost settings, and I tried permutations of the checkboxes for "no treble boost" and "no bass boost", and still the program brought the results to +/- 1 dB. You'd think I'd be happy with that, but I am concerned it is overcorrection. So in my professional opinion, you need to implement some kind of feature that determines the maximum amount of "flatness" the program will permit.

2. Relaxed number of filter points (poles and zeros) proportional to the octaves as the frequency increases. See Section C below for a full discussion of the urgent reason to need this and what needs to be done. This is the most urgent of all the issues in this letter.

3. Manual attenuation override. This may be helpful instead of relying on the gain controls in the convolution engine. Given the analog gain structure of my system (calibrated volume control), I found at least 6 dB of headroom that I could use to get the SPL I desire at a given analog attenuation. But my workaround is to find some digital gain in the convolver or another DSP element in JRiver, and since it's all floating point, this is a relatively low-priority request.

4. Ability to compare simulation graphs of different scenarios, e.g. different crossover points or different partial corrections. Is it possible in the analysis section to overlay the simulated impulse responses from one correction or target scenario against another? This would be the equivalent of making a correction with one correction approach or target and loading the filter into the convolution engine, then measuring the response with REW for all speakers. Then making a correction with a different setting and taking another REW response, and finally being able to overlay the two graphs and examine the differences. I'm getting out of breath already thinking of not going through that! Still, this is a relatively low-priority request.

5. TTD is a work in progress. I think it's going to be great some day and I can't wait to hear it! But I think it will only get there by somehow integrating TTD below, say, 200 or 500 Hz with frequency-domain correction above that frequency. I imagine this would be very difficult, very complex coding, but maybe there is a way. I think the pre-ringing solution attempts are not sonically acceptable. Basically, TTD has the best bass sound I've ever heard---tight, beautiful, coherent, fat, and seductive! Like having several very effective active bass traps, but even better! But unfortunately, the apparent loss of transient response and lack of transparency in the midrange through the treble with TTD are not acceptable. It could easily be the introduction of echoes instead of the removal of same. I did try the partial-correction approach below 250 Hz with TTD, but it produced a disembodied, hollow effect, indicating the difficulty of splitting a time-domain correction procedure at a particular transition frequency. Another possible approach (and it's easy for me to suggest, since I'm not the person who has to do Bernt's hard work, the coding) might be to limit the time-delay correction to certain reflections, or to limit the correction's amount or strength. Jim Johnston alluded to dealing subtly with just the first reflection in one of his papers. I think a little more could be done than just the first reflection, but the amount of the correction has to be carefully watched or the cure sounds worse than the disease. There's no substitute for a good room to start with, too!
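To illustrate why a split need not be hopeless in principle, here is a sketch under my own assumptions (not Bernt's code): a complementary pair of linear-phase filters sums back to a pure delayed impulse, so the band-splitting itself is transparent. The audible trouble would come from what the TTD band and the frequency-domain band each do afterwards.

```python
import numpy as np
from scipy.signal import firwin

fs = 48000
n = 4095                                  # odd length => symmetric, linear phase
lp = firwin(n, 200.0, fs=fs)              # low band, e.g. where TTD would act
hp = -lp.copy()
hp[n // 2] += 1.0                         # complementary high band: delta - lowpass

# The two bands sum to a pure (delayed) impulse, i.e. perfect
# reconstruction before any per-band correction is applied.
delta = np.zeros(n)
delta[n // 2] = 1.0
assert np.allclose(lp + hp, delta)
```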


C. The real problem with the Audiolense frequency domain algorithm. (Be patient, I'm getting there!).

Let's abbreviate the analog room correction system as ARC, as opposed to DRC for digital room correction.

I have a few sonic standards that I can compare Studio A against. Studio A, with the ARC, is pure-sounding and very transparent. It sets the bar for the sound quality that I would hope to get from a new DRC. I am familiar with frequency-domain DRC systems, having built one in my Studio B (mixing room). It is a very good one, but it is not as transparent as the analog system in Studio A (mastering room). Studio B is a difficult room, the filters I have implemented there are limited in certain respects, and there is also a chip-based ASRC (asynchronous sample rate converter) in the chain to allow changing of incoming sample rates without crashing the DSP. It all takes its toll; Studio B is still very good sounding, but not up to the standards of Studio A.

To enable the changeover in Studio A to the new DRC and digital crossover, I designed some adjustable passive attenuators, some passive switching, some changes in the digital router, and also a new analog cable harness that allows me to switch back and forth between the level-calibrated ARC and the DRC in about 5 minutes time! You should see me scramble behind the power amplifiers :-). So I can "return to zero" and compare the analog to the digital room correction systems very rapidly. The identical DAC is used in both situations, except four channels of DAC are needed for the stereo two-way digital crossover and only two channels of DAC are needed for the stereo analog. The identical analog components are used, but there are fewer of them in the DRC. All active filters and unnecessary active components which were used for the ARC were completely bypassed for the comparison with the DRC. Believe me, that was a difficult thing to set up, but worth it, as I can "return to zero" or switch to DRC mode at will.

LISTENING TEST:

On Monday, my acoustical consultant Mike Chafee and his assistant came over. Mike has 40 years in this business, he's an expert with digital-domain correction systems and knows many of them well. I have 40 years as well, and we're both audiophiles with critical ears but open minds. But we know where all the bodies are buried and between the two of us we don't miss much. Travis, Mike's assistant, also is experienced, loves to listen, and also has audiophile ears.

Note that in this listening test, I have calibrated digital meters on all the DRC outputs and know exactly how close to clipping the DACs get; in no case did a DAC clip in DRC mode, with a safety margin of at least 1 dB below full scale on a sample-reading digital meter.

So, first we had a detailed listening session with the current system with ARC, playing several high-quality musical cuts that we know and noting how they sound. Then, after getting a frequency-domain correction in Audiolense that looked really good in simulation (actually, TOO GOOD, as you will soon see), I switched the system over to DRC mode, using the convolver in JRiver, and played the identical musical selections. The improvement in bass response was instantly obvious; the bass was perfectly even and tight, fat, and beautiful, and we loved it! However, the sound from, say, 200 Hz on upward was unsatisfactory. It sounded grainy and unresolved compared to the ARC, and what was really disappointing was an edge in the sound, particularly exhibited on a vocal in one selection, that was unpleasant to listen to; it sounded like vocal distortion. Mike and I had a theory as to why.

The theory is quite simple: Audiolense is overcorrecting. It's easy to see all the wiggles in the corrected frequency response in the simulation. We see a remarkable plus or minus 1 dB response, but with curvy wiggles that represent what must be 50 to 100 or more filters in the path, in many cases I think with steep slopes and narrow bandwidths. Both Mike and I know from years of experience that narrow filters sound bad, they can sound edgy, and the more of them you use, the harsher it sounds. This could be due to phase shift, time domain effects or other effects. But if you ask well-known authorities like Rupert Neve or George Massenburg, they will tell you the same: gentle-slope filters live! I use digital EQ in my work all the time, including George Massenburg's digital equalizers, and they sound pure and beautiful to me, so I know that digital EQ can work.

Mike has tested a Dirac processor that implements many, many narrow-band filters and measures great, but he reports that it sounds horrid, harsh, and veiled. So we thought we had found the culprit, but we wanted to prove it to ourselves. I wanted to prove it by changing only one variable, so as to nail the answer and be 100% sure of it. I slept on it, and in the morning I came up with a simple single-variable experiment: enable partial correction with a 225 Hz transition. Keep the original (excellent) loudspeaker response above 225 Hz, and below that, correct for the room modes and introduce the digital crossover. The long and the short of it is that this second listening test was very, very successful, and I think I found the culprit! With the partial correction, the purity of tone returned, the harshness disappeared, including on that problem vocal, and the sound depth was somewhat restored, despite the image shifts that I noted above, which are not the fault of Audiolense per se. The image shifts did not keep me from immediately recognizing that the purity of sound had returned with no correction above 225 Hz. And you can see this is nearly a single-variable experiment: eliminate the sharp filters but keep the same DSP chain.

So, the simple solution to the harshness problem is this:

Bernt: Please design a frequency-domain correction algorithm that purposely limits the number of filter points. Please try to draw curves between the extremes of peak and dips in the raw loudspeaker response. Please try to limit the slope of any filter implemented (at least above 225 Hz) to 1/6 octave or larger as much as possible. In places where a 1/6 octave or wider filter does not correct the response, allow a relaxation of the amplitude correction to, say, + or - 2 dB or even higher from the target, which is a perfectly acceptable standard for a loudspeaker system. Please make the amount of maximum amplitude correction user-settable. Please implement this amount differently above and below a certain frequency. All this is in the interest of having fewer filters, with wider slope and with no sonically-undesirable over-correction.
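To make the request concrete, here is one conceivable shape for such an algorithm. This is purely my sketch with invented parameter names, not a prescription for Bernt's implementation: smooth the measured response over a 1/6-octave window, compute the correction against the target, and clamp it to a user-settable limit.

```python
import numpy as np

def relaxed_correction_db(measured_db, target_db, freqs,
                          width_oct=1/6, max_corr_db=2.0):
    """Hypothetical 'relaxed' correction: fractional-octave smoothing
    followed by a hard limit on correction amount (all names invented)."""
    smoothed = np.empty_like(measured_db)
    for i, f in enumerate(freqs):
        band = ((freqs >= f * 2 ** (-width_oct / 2)) &
                (freqs <= f * 2 ** (width_oct / 2)))
        smoothed[i] = measured_db[band].mean()   # 1/6-octave average
    correction = target_db - smoothed            # what the EQ would apply
    return np.clip(correction, -max_corr_db, max_corr_db)

freqs = np.logspace(np.log10(20), np.log10(20000), 512)
measured = np.where((freqs > 900) & (freqs < 1100), 6.0, 0.0)  # narrow 6 dB peak
corr = relaxed_correction_db(measured, np.zeros_like(freqs), freqs)
assert corr.min() >= -2.0 and corr.max() <= 2.0   # never over-corrects
```

The smoothing suppresses narrow, steep features before any filter is fitted, and the clamp enforces the "maximum flatness" limit requested above.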

I feel we will find the sonic results of this algorithm to be a purer sound, the harshness will go away and you will have an audiophile-quality winner! I have a lot of faith in high-resolution DSP, I am not an "analog luddite", I always pick the best of both worlds. I do like your system and approach very much. You just have to set it so it does not overcorrect.

I hope this helps. I'm sending part I of this letter to the JRiver forum and leaving parts I and II on the Audiolense forum. Happy coding, Bernt! I think you can accomplish this in a really short time, and I can't wait to hear it!

I'm not surprised that other listeners who have bought Audiolense have not noted this harshness. You have to have a standard for comparison and possess master-quality original material to compare and reveal the issues. It helps to be able to quickly A/B compare ARC vs. DRC and judge the purity of tone of each approach. It helps to have experience using high-quality analog and digital equalizers in a mastering context. Lastly, the listening position in this room is in a reflection-free zone, so it is very easy to identify sound quality without degrading early reflections masking any issues. A reflection-free zone is defined as one in which there are no early reflections above -15 or -20 dB relative to the direct sound for at least the first 20 ms after the initial impulse.
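For anyone who wants to check this criterion on their own impulse-response measurements, a simple test could look like the following (my sketch; the 1 ms guard around the direct arrival is my own assumption):

```python
import numpy as np

def is_reflection_free(ir, fs, window_ms=20.0, threshold_db=-15.0):
    """True if no reflection within the first window_ms after the direct
    arrival exceeds threshold_db relative to the direct sound's peak."""
    peak = int(np.argmax(np.abs(ir)))
    direct = abs(ir[peak])
    start = peak + int(0.001 * fs)            # skip ~1 ms of the direct arrival
    end = peak + int(window_ms * 1e-3 * fs)
    tail_max = np.max(np.abs(ir[start:end]))
    return 20 * np.log10(tail_max / direct + 1e-12) < threshold_db

fs = 48000
ir = np.zeros(fs)
ir[100] = 1.0                                 # direct sound
ir[100 + 240] = 0.01                          # -40 dB reflection at 5 ms: passes
assert is_reflection_free(ir, fs)
```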

Best wishes, and again, Bernt, thanks for making such a great DRC system with such great potential!



Bob Katz

Bob Katz

Jan 24, 2013, 12:18:53 AM1/24/13
to audio...@googlegroups.com
One more thing, Bernt: channel routing. As you know, in 5.1 there are two competing standards for channel order. ITU is LF, RF, C, LFE, LS, RS; SMPTE is LF, RF, LS, RS, C, LFE. It appears that Bernt (fortunately) selected the ITU standard. But many people use the other standard, and not every convolver or playback system allows change of routing. So I suggest that in the future Bernt include both routing scenarios in his software. Right now he goes by channel names on the routing page...
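Until both orders are supported natively, remapping is easy to do by name, using the two orders exactly as listed above (a sketch, not anyone's actual routing code):

```python
# Remap one 5.1 frame between the two channel orders listed above.
ITU   = ["LF", "RF", "C", "LFE", "LS", "RS"]
SMPTE = ["LF", "RF", "LS", "RS", "C", "LFE"]

def remap(frame, src=SMPTE, dst=ITU):
    """frame holds one sample per channel in src order; returns dst order."""
    by_name = dict(zip(src, frame))
    return [by_name[name] for name in dst]

assert remap([1, 2, 3, 4, 5, 6]) == [1, 2, 5, 6, 3, 4]
```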

I don't think this is an urgent request, but I would note it.

BK

Bob Katz

Jan 24, 2013, 12:48:22 AM1/24/13
to audio...@googlegroups.com
Bernt: It's possible that a few filters as narrow as 0.1 octave might still be acceptable. 1/6 octave is 0.17 octave, and I'm definitely comfortable with that. If you can make your algorithm's limits adjustable by the user, then we users can tell you in practice what our threshold of acceptability is for how narrow a filter we can tolerate.

mojave

Jan 24, 2013, 11:05:37 AM1/24/13
to audio...@googlegroups.com
Bob, I appreciate the time you took to write this all out. I think it will be very helpful. I've only been using Audiolense since October and with the holidays and having four kids I haven't had a lot of chance to "play" with everything yet. Last night I did spend a few hours taking new measurements working on just a 2 channel setup with dual subwoofers. 

Here are some comments:
JRiver
-Delay:  you can add Delay as a filter in the Parametric EQ and it is in ms.
-Parametric EQ:  I've used it now on many systems and the flexibility to enable one to try different things is incredible.
-Automatic switching:  Are you following the correct naming of your config files? It always works for me. Start a thread at JRiver if it isn't working.
-"TheLion" is Walter here on this forum
-24-bit Dither:  I thought Matt added true 24 bit dither at your request. It is in Tools > Options > Audio > Output mode settings and then check "dither bitdepth conversions." Maybe there is a bug where the bitdepth simulator in Parametric EQ isn't using dither even if "dither" is checked there.
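A footnote to the Delay point: converting between the feet Bob mentioned and the milliseconds JRiver accepts is a one-liner, assuming roughly 1130 ft/s for the speed of sound in room-temperature air:

```python
# Distance-to-delay conversion for speaker alignment.
def feet_to_ms(feet, speed_ft_per_s=1130.0):
    return feet / speed_ft_per_s * 1000.0

assert abs(feet_to_ms(1.0) - 0.885) < 0.001    # 1 ft is about 0.885 ms
```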

Audiolense
-Speaker setup section:  This area caused some frustration for me at first until I understood how it works. After understanding it, it works flawlessly for me. First, and I consider it a bug, after saving a setup and exiting you have to open the setup from Setup > Open Speaker Setup for it to actually be loaded and used for measurements.
-Playback Format and Channel Routing Tab:  The speaker setup reverting to 2.0 from 5.1 was frustrating until I understood this section. It actually isn't reverting; both setups are used to create filters. You can also add a 7.1 setup, or you can delete setups and just use a 2.0 setup. Audiolense then creates filters for 2.0, 5.1, and 7.1 playback formats. For each format, you designate how you want the input to be routed to your speakers and subs.
-Crossovers not sticking:  This is resolved by re-opening the speaker setup you just changed and saved as mentioned above.
-The measurement file is in the measurement folder and the speaker setup file is in the setup folder.
-You can rename a config file in Explorer and it will still point to the correct wav file since this info is inside the file.
-Is your simulated result smoothed or unsmoothed? If you change it to unsmoothed, you can see that the amplitude correction performed by Audiolense is more relaxed than it looks with "smoothed." You have to re-generate the correction filter after switching to unsmoothed.
-So far, I've liked a partial correction up to about 300 Hz best, but I really haven't spent a lot of time listening to full vs partial. Also, my speakers are new as of October so I've been trying to get them positioned, etc. where they sound the best.
-You can export Audiolense measurements and import into REW and overlay with REW measurements. Audiolense exports the measurements as a .pcm file. Just rename the extension to .frd and in REW go to File > Import Frequency Response.
-Channel Routing:  You change this in the measurement. Just go to Advanced Settings and check "Output channel override enabled." Now you can manually change the output channel numbers to the left of the speakers. This is the final order for the convolution. When I set up a 7.1 system with two subwoofers, Audiolense puts the subs on channels 7 and 8 in speaker setup. It doesn't matter at all. In the measurement, I just manually change it so my subs are on channels 3 and 4, and I use this order: LF, RF, LSub, RSub, BL, BR, SL, SR. By the way, I've never had any content use SMPTE order.

Walter

Jan 24, 2013, 1:42:19 PM1/24/13
to audio...@googlegroups.com
Bob,

thank you so much for your enlightening and honest report.

Bernt knows my personal opinion about most of the issues you mention. I have discussed them with him "excessively" ;-) 

Partial correction in 4.6 is vastly better than with previous versions and it is the way to go. 

"Overcorrection" is the doom of any automatic EQ/DRC.  There are basically two effective ways to fight it in Audiolense: Using partial correction for as large a freq. range as your speaker in-room response allows. And use very short correction windows. 

I take it TTDC in general (other than the bass response) was not acceptable to you. Have you tried using very short correction windows like 2/2 = 200ms/0.083ms at 10Hz/24kHz with frequency-only correction? The default 5/5 window is way too large for my liking and leads to many very narrow corrections. Combine this with partial correction and the filter activity is quite limited.
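Those window figures are easy to sanity-check: a frequency-dependent window of N cycles at frequency f simply lasts N/f seconds, so the same cycle count means wildly different absolute times across the band. A tiny sketch:

```python
def window_ms(cycles, freq_hz):
    """Length in milliseconds of a frequency-dependent window spanning
    'cycles' periods of a sine at freq_hz."""
    return cycles / freq_hz * 1000.0

# The 2/2 example above: the same 2-cycle window lasts
# 200 ms at 10 Hz but only ~0.083 ms at 24 kHz.
low = window_ms(2, 10)       # 200.0 ms
high = window_ms(2, 24000)   # ~0.0833 ms
```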

I agree that the "+/- amplification of uncorrected freq." doesn't work well. It is relative to the exact amplitude at the "no correction freq." and this is always different between two speakers. What you can do is align the correction filter to the same amplification for all "equal speakers".

I am looking forward to Bernt's comments! 

Thanks again for your great contribution! 

Brad

Jan 24, 2013, 10:38:50 PM1/24/13
to audio...@googlegroups.com
Thanks for the great feedback. Many new reasons to experiment some more.
 
As to "Relaxed number of filter points (poles and zeros) proportional to the octaves as the frequency increases"
This is a great suggestion. Right now you can select fewer filter taps but they may not be distributed optimally if the goal is to correct low frequency performance the most.

Brad

Bernt Ronningsbakk

Jan 25, 2013, 10:04:33 AM1/25/13
to audio...@googlegroups.com

Dear Bob,

 

Everybody who takes on Audiolense has a steep learning curve for the first few weeks. A lot of the issues that you bring up will disappear if you stick with it and get more used to the workflow in Audiolense. Also, people who have a lot of experience with traditional EQ and a lot of vested knowledge in that direction tend to struggle to unlearn what they need to unlearn to understand the pros and cons of Audiolense. A lot of the things that are true for EQ aren't true for FIR-based correction done right.

 

It is just too bad that I can't drop by your studio, do some hands-on filter tweaking and prove my point. I could make a filter that responded the same as your analog solution with regard to the artifacts you were hearing. And I could make a filter that attenuates those "problem" frequencies a tad compared to your default settings. Then you would start to hear how the frequency correction makes a difference.

 

I am confident that the artifacts you were hearing are caused either by a target that doesn't fit the bill or by technical issues in your playback chain. The shape of the target response has a profound influence on the end result, in ways that few realize before they start to fiddle with different targets that are almost but not entirely similar. Digital clipping will sometimes lead to the kind of problem that you were describing, but so could a frequency correction that emphasized a certain part of the frequency spectrum.

 

I expect increased transparency from a frequency correction done well. Worst case for a decent frequency correction is that the perceived transparency stays basically the same while the speaker sounds more "correct" but not necessarily better. But I've only experienced that with not-so-transparent hardware, so I would expect better results in your system.

 

1              About the speaker setup issue: The speaker setup should always be completed before the speakers are measured. Or, to put it another way, if you change the speaker setup in a substantial way you can no longer use your old measurements. Audiolense checks that there is a match between setup and measurement, and if there isn't, one of them is thrown out. It can't be any other way. Based on what you wrote, I got the feeling that you were trying to change the setup from a 2.0 to a 5.1 and still use a measurement that was produced under a 2.0 setup….

 

I don’t know what you did in the end here, but deleting all those speaker setup files made no difference ….

 

When I work with your measurement I can change all the crossover points any way I like and they come out just fine. And those crossover changes do stick even when I load the other measurement that you sent me.  But if I change the speaker configuration, the measurement will be thrown out when I save the setup and go back to the main form, which is exactly how it is supposed to work.

 

2              Target designer:  There is a save-target bug there. If you open a new target and haven't saved the current one, you will be asked if you want to save it. And even though you decline, a save target dialog will appear. And if you think that you're about to open a saved target, you will most likely overwrite the target you plan to open before you open it, because the save file and open file dialogs look almost identical. I'll fix that as soon as I get the time.

 

I do a lot of grabbing and dragging of points when I make targets. Sometimes it doesn’t grab.

 

3              Measurement name: There is a text field in the measurement module where you can enter any name you want. This name will stick even if you use a different name on the file. So this is not a bug.

 

5              The window sizes are stored as time values. But the high frequency window will change frequency because the Nyquist changes for different sample rates.

 

6              I tried the partial here and it works as it should. Please see attached image. But problems could arise with different crossover settings. Audiolense allows the user to do things that can make it difficult to create good crossovers.

A             I used 0 octave width by the way.

 

By the way, the dB adjustment of the no-correction zone doesn't work as intended. I'll have to fix that. But I don't think you need to use it anyway.

 

B             As I’ve written before, a TTD with partial will use at least 0.5 octaves transition for getting the time domain in order.

 

I think this is about the right time for me to try to explain a bit physics here.

 

0.1 octave, and even 0.5 octave is not much when you get down towards 100 Hz. It is only about 50 Hz. Any substantial change in frequency or time domain that happens over 50 Hz will be a very sudden change. We humans perceive sound on a logarithmic scale, and we look at frequency charts that are log scaled. It looks as if the difference between 10,000Hz and 20,000Hz is the same size as the difference between 10Hz and 20Hz. And it sounds like that too. But the physics of sound is not logarithmic. It is linear. And around 100Hz we’re dealing with long wavelengths as well. You have a couple of difficult room reflections around 60Hz. You tried to run crossovers straight through them, and you ask for a transition from TTD correction to no correction - all within a few Hz. That means that you have ordered a lot of tasking DSP inside a span of approx 150 Hz. And since Audiolense operates with strict control in the time domain, and since the time allocated to get the job done is too short, you get artifacts. The underlying mathematics works as they are supposed to work. It’s when the program shortens the filter according to the frequency dependent window settings that the artifacts appear. This is basically ALWAYS the case when artifacts appear in the correction filters or in the simulations. The artifacts are a sign that you’re asking for more correction than what’s achievable inside the TTD window and/or the correction window. So instead of regarding this as a bug in Audiolense I recommend that you try to get around it by changing a few parameters. We basically want to do as much correction as needed in the shortest time possible.
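To put numbers on the log-versus-linear point above: the linear width in Hz of a band spanning w octaves upward from frequency f is f·(2^w − 1). A small sketch showing how the same octave width shrinks dramatically at low frequencies:

```python
def octave_span_hz(freq_hz, octaves):
    """Linear width in Hz of a band spanning 'octaves' upward from freq_hz."""
    return freq_hz * (2 ** octaves - 1)

# The same 0.5-octave transition is about 41 Hz wide at 100 Hz
# but about 4142 Hz wide at 10 kHz; log-scaled charts hide this,
# which is why a transition near 60-100 Hz is so "sudden" in absolute terms.
narrow = octave_span_hz(100, 0.5)     # ~41.4 Hz
wide = octave_span_hz(10000, 0.5)     # ~4142 Hz
```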

 

C             These issues didn’t happen when I tried the same on your speakers. Probably because I used less tasking crossovers.

 

D             A linear phase cutoff filter 20Hz / 24dB will not create a phase shift, that is true. But it will create a LOT of ringing. Slow rise and slow decay. Pre-ringing and post-ringing. Equal amounts of time domain distortion on both sides of the peak - that's how you get linear phase behavior from something that takes a lot of time. And it will also add more complexity to the correction filters.
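The pre-ringing point is easy to demonstrate. The sketch below builds a generic linear-phase FIR highpass at 20 Hz by windowed-sinc design with spectral inversion, a standard textbook method and not Audiolense's actual filter generator. The taps come out symmetric about the center tap, which is exactly why energy appears *before* the main impulse:

```python
import math

def linear_phase_highpass(cutoff_hz, fs, numtaps):
    """Linear-phase FIR highpass via spectral inversion of a Hann-windowed
    sinc lowpass. numtaps must be odd so there is an exact center tap."""
    assert numtaps % 2 == 1
    m = numtaps // 2
    fc = cutoff_hz / fs                       # normalized cutoff (cycles/sample)
    hp = []
    for n in range(numtaps):
        k = n - m
        sinc = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        hann = 0.5 - 0.5 * math.cos(2 * math.pi * n / (numtaps - 1))
        hp.append(-sinc * hann)               # negate the lowpass...
    hp[m] += 1.0                              # ...and add a unit impulse at center
    return hp

taps = linear_phase_highpass(20, 48000, 4001)
# Symmetric taps mean linear phase, but also nonzero response before
# the main tap: that is the pre-ringing Bernt describes.
pre_ringing = sum(abs(t) for t in taps[:2000])
```

The symmetric impulse response trades away phase shift in exchange for equal pre- and post-ringing, which is the trade-off being described here.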

 

7              Again, these are not bugs. You are just asking the program to do more than there is time to get done. I don't get that problem when I make corrections to your speakers, with your measurements. Don't underestimate the significance of how you set your crossovers here…. Frequency-only correction is, by the way, a lot easier, so it takes less time.

 

B – feature requests

1 – Relaxed frequency correction. The relaxation happens with the measurement filtering and by using short correction windows. When I use a moderately short window on your measurement, the smoothed measurement only contains the most basic and fundamental fluctuations. Small changes across several thousand Hz. There is very little left to correct, and it takes very little to correct it with a FIR filter. There isn’t a +/- 1dB regulator in Audiolense. The precision you see in the smoothed simulation is created by the time domain restrictions. If there were none, the simulation would be identical to the target. The time domain restrictions are your best friends with regards to avoiding overcorrection.

 

I understand where you come from and why caution is practiced in the business. From my perspective this is the only proper response to the limitations that come with using traditional EQ. It is the wrong tool for the task, and it is a mystery to me why it hasn't been replaced by FIR-based correction on a rapid scale. The advantages I see with IIR have nothing to do with sound quality. They are inflexible, subject to mathematical instability and operate without control in the time domain. But they are cheap and well known. With Audiolense you have a very different tool in your hands. It is capable of doing a lot more magnitude correction with a lot more precision – and with fewer strings attached – than what you are used to.

 

IIR stands for Infinite Impulse Response. INFINITE. The only way to control the time domain behavior somewhat is to be cautious in the frequency domain. With Audiolense you have steel control over the time domain. Anything substantial that you do inside a short time window, that makes the frequency response look significantly better is usually worthwhile doing.

 

Second, it is basically impossible to do a precise correction with IIR. The IIR filters do not do less correction, but they do less of what you need to get a better magnitude response. They come in certain frequency domain shapes and those shapes are a poor fit with the typical room and speaker problems. Every time you specify a notch filter you do some improvement and some damage to the frequency response. The skilled user ensures that the damage is substantially smaller than the improvement.

 

Third, and this is equally important: I still haven't seen an EQ-based toolkit that produces a good analysis of the unfiltered frequency response. Most of the smoothing techniques used will produce wide-band artifacts somewhere from the upper midrange and onwards, and dips that appear to be deeper than they really are. If you fully correct a dip based on the most commonly used smoothing techniques you will create temporary peaks. Dip lifting has a bad reputation among EQ users because it is not done right. They are creating audible peaks because they work from the wrong frequency charts and with the wrong tools. And when they get the "hollow" sound they blame it on the wrong causes.

 

What I’m trying to say here is that your worries do not apply to Audiolense. Audiolense comes with its own set of worries.

 

2 – When you talk about poles and zeroes and filter points you speak the IIR language. FIR filters are a lot different. The way to reduce the scope of FIR filters in Audiolense is to devise shorter time windows. If you use a measurement and correction window that has 3 cycles in the top, you will use something like 7 samples to correct around 20Hz. For the human ear this is like doing an instant correction. These 7 samples may be involved in dealing with a number of poles and zeroes, but hardly any of them will be completely corrected. Only partially. Only what can be done by a few samples of correction.

 

3 – I am not enthusiastic about enabling manual gain tweaking on the filters. New users often get the wrong impression of the transparency of Audiolense because they create digital clipping during playback. When I worked with your measurement I only saw a potential gain of 2dB, and that was with the +10dB for LFE checked. Customers who look for uncompromised quality should ensure they have enough gain in the analog domain so they don't have to flirt with digital clipping. I don't know how you measured actual gain during playback when you found the 6-8dB of available gain, but there are a lot of methods out there that I don't trust when it comes to these things. You really have to look at every sample after correction to be on the safe side.
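The "look at every sample after correction" advice can be followed literally: convolve the material with the correction filter and inspect the peak. A minimal sketch (direct-form convolution; the signal and gain values are illustrative, not from anyone's measurement):

```python
def convolve(signal, fir):
    """Direct-form convolution; fine for a short illustration."""
    out = [0.0] * (len(signal) + len(fir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(fir):
            out[i + j] += s * h
    return out

def peak_after_correction(signal, fir):
    """Largest absolute sample value after applying a correction filter.
    Anything above 1.0 means this material will clip at the digital output."""
    return max(abs(x) for x in convolve(signal, fir))

# A filter with net boost pushes already-hot material over full scale:
hot = [0.9, -0.9, 0.9, -0.9]          # hot alternating burst near full scale
boost = [1.2]                          # flat gain of about +1.6 dB
peak = peak_after_correction(hot, boost)   # 1.08 -> would clip
```

With a real multi-tap correction filter the peak can exceed the simple sum of filter gain and signal peak at single frequencies, which is why checking every output sample is safer than trusting meter readings alone.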

 

4 – Having several measurements and corrections side by side would be a nice feature. Unfortunately there are users who run Audiolense with such huge systems that this would create memory problems, and we have enough of those already. A simple alternative is to open several instances of Audiolense and have two screens side by side.

 

5 – TTD is usually easy to do on speakers with a behavior such as yours, but it is vital to get the frequency correction nailed down before starting to work towards a TTD correction.

 

A partial TTD correction is a mixed blessing. I really don't think it is a good idea to run a partial TTD correction to 200 Hz as long as the system responds well to a TTD correction that goes higher up. Part of the explanation was given further up. The other part is that TTD correction through the midrange usually sounds substantially better – if you get the target response right. There are speakers that are perhaps too much to handle for a TTD correction through the treble, but your speakers have a very clean pulse.

 

C – the Audiolense frequency correction

 

You launched a very serious criticism against Audiolense, and I have to comment on that. The frequency correction has been literally problem-free since the launch of Audiolense, and it has stood the test of time very well.

 

The frequency correction is, IMO, the best thing in Audiolense and the best thing you can do with DSP on a hifi system to improve the sound quality. And this is also where the biggest upside in moving from EQ to FIR correction lives. If there's nothing seriously wrong with the measurement, it will sound like the target after correction. But if the target is a hair off, the sound quality will suffer. And the target is usually off during the first few trials.

 

I have challenged professional users as well as domestic users on several occasions to test Audiolense for transparency. If you draw a target that follows the smoothed response reasonably closely, the two are likely to sound identical. The transparency has been confirmed by several professional users who had their doubts early on and who had access to first-grade equipment. From a physical and mathematical point of view, there is no reason to believe that it isn't 100% transparent. You can do a similar test by measuring your system with the analog EQ in place through Audiolense, and making a target that is more or less a replica of the frequency response you have with the analog EQ in place. Then you can disable the analog EQ, do a new measurement and make a frequency correction towards the target you made from the first measurement. Then you can compare. If there's no digital clipping and no other crap going on in the digital domain, this will be a good test of the transparency of your analog EQ, but also of the frequency correction of Audiolense.

 

After you start to fully appreciate that Audiolense can do a transparent frequency correction you can get back on working on the frequency correction. And when you get that nailed down you are ready for trying out the TTD correction.

 

This probably sounds like I regard Audiolense as a flawless solution. Well, I don't. But I don't think you have come far enough down the road to appreciate the benefits and recognize the real issues. You still haven't made your first decent-sounding filter from what I can see. Further, I believe you have to challenge some of your EQ-related knowledge and assumptions. If you keep suspecting that the frequency correction filters are fundamentally flawed, if you stick to the same guiding rules as you do with EQ, and if you keep believing that a precise correction of a heavily smoothed measurement is too much, I doubt that you will be able to capitalize on a first-class FIR correction.

 

It also needs to be said that Audiolense, EQ and other DSP devices are just tools. Tools that enable the users to modify the sound quality for better or worse. The skills of the user make a big difference. You obviously have a lot of skill in tuning a system with digital and analog EQ, but you're not an Audiolense expert yet - and that could mean that EQ is the best way for you to do it even though Audiolense is a more capable method in general. By looking at your measurements I believe there is room for improvement. If you decide to dismiss Audiolense you can always use the satisfaction guarantee and get the license fee back. But nothing would please me more than if you stick around and have another go at it later.

 

It was very difficult for me to respond to your summary. I hope it didn’t come out the wrong way.

 

 

Kind regards,

 

Bernt

--
Audiolense User Forum.
http://groups.google.com/group/audiolense?hl=en
To post to this group, send email to audio...@googlegroups.com
To unsubscribe, send email to audiolense+...@googlegroups.com

bob partial.jpg

Walter_TheLion

Jan 25, 2013, 10:31:48 AM1/25/13
to audio...@googlegroups.com
Bernt,

I cannot see/read your post - it seems to be corrupted. 

On Friday, January 25, 2013 at 4:04:33 PM UTC+1, BerntR wrote:
[The message content could not be parsed.]

Walter_TheLion

Jan 25, 2013, 10:35:06 AM1/25/13
to audio...@googlegroups.com
Can you please repost it? Thanks

rlebrette

unread,
Jan 25, 2013, 10:51:37 AM1/25/13
to Audiolense User Forum
Switch to the former Google Groups GUI and you will be able to read
Bernt's answer; it seems that the image file is corrupted, which makes
the engine die.

Walter_TheLion

Jan 25, 2013, 10:57:49 AM1/25/13
to audio...@googlegroups.com
Sorry, but how do I do that?

Walter_TheLion

Jan 25, 2013, 10:59:06 AM1/25/13
to audio...@googlegroups.com
Oops, found it. Thanks!

Bob Katz

Jan 25, 2013, 4:38:12 PM1/25/13
to Audiolense User Forum
Dear Bernt:

I had to go back to the old google groups and the jpeg you sent was
unviewable even there... sorry.

I appreciate the thorough response you gave. To summarize: I
understand and agree with about 90% of what you have to say but still
have certain doubts. I think that some of the other responders who
agreed with me should take heart from your response, as you do cover
some important points. I didn't take your response the wrong way, and
neither did you take my response the wrong way. You were thoroughly
diplomatic, respectful and understanding of what's going on. Clearly,
though, a very thorough FAQ and tips-and-tricks website will be
necessary, as no one using a powerful program such as this could get
through it without your help. If one or more of the other participants
in this forum would volunteer along with me to distill an FAQ, it
would truly help this product. Please keep this thread on file; it
will help many professionals. Pity there is no FAQ or wiki mechanism
built into Google Groups.

Here goes, editing as much of your reply as possible. I hope you don't
take any of my new replies the wrong way, either. It's difficult to
avoid some of the emotions and potential egotism that come from 40
years of experience in the audio field, if you get my drift. I'm more
than willing to learn a new concept, but not until we have proved that
my contentions are wrong.


On Jan 25, 10:04 am, "Bernt Ronningsbakk"
<bernt.ronningsb...@lyse.net> wrote:
> Dear Bob,
>
> Everybody who takes on Audiolense have a steep learning curve for the first
> few weeks.


> I am confident that the artifacts you were hearing are either caused by a
> target that doesn't fit the bill or by technical issues in your playback
> chain.

Let me reply first by saying that if in the end it does turn out to be
target or issues in my playback system that caused the harshness that
I and my two other experts heard I'll buy you a case of Ringnes and
have it shipped overnight to you in Oslo or wherever you live! Fair
bet?

You did not clarify whether the FIR filters you are using when doing
frequency domain correction are linear phase or minimum phase, but to
my ears they sound like minimum phase so I'll make that assumption. To
repeat the long-standing principle of the sound of minimum phase
filtering regardless of whether it is IIR or FIR: "Too many minimum
phase filters with too much correction and too narrow a bandwidth can
cause a harshness in sound quality REGARDLESS OF THE TARGET". You
mention a method below (by reducing the size of the window) of
reducing the number of filters or their widths. So I feel that below
you end up backtracking a bit on your claim that target or some
undefined system issues are causing the harshness. So I choose to
address the number of filters issue and report back first. Let's not
forget that the target that I chose mimics the average curve of the
loudspeakers.

Sadly, for this test I'm going to have to backtrack to a 2-channel
system with no crossover, because I want to listen with 24-bit dither
to eliminate truncation issues from the discussion. Unfortunately,
with every playback engine I have been able to find, I have not been
able to dither the output except in JRiver with WASAPI and it only
seems to perform in stereo for me and only at 44.1 kHz. But at least
this can be a fair listening test comparison. But the listens have to
be about 1 minute apart as I hardwire bypass the analog parametric
equalizer that's in the chain.

Linear phase equalization is an entirely different animal and has its
own issues, so let's stick for the moment with the frequency domain
correction, assume that it is minimum phase FIR. Keep in mind that I
already feel it sounds "transparent". My goal is to remove the
distortions that we hear and to assert that my claim is number of and
slope of filters and not any other, including target. Unless you have
a miracle equalizer that no one else has invented before and which I
have not heard. :-). If I were to list the number of equalizers, both
analog and digital that I have available to me in this mastering room,
the count would approach 100.

> The shape of the target response has a profound influence of the end
> result in ways that few realize before they start to fiddle with different
> targets that are almost but not entirely similar. Digital clipping will
> sometimes lead to the kind of problem that you were describing, but so could
> a frequency correction that emphasized a certain part of the frequency
> spectrum.
>

Did you miss in my reply that I have calibrated digital metering on
all the outputs feeding the monitor DACs, and that I assured you in
the letter that absolutely no clipping, including intersample clipping
after upsampling, was occurring?

You also cannot miss that with only three points in the target (one at
1 kHz, one at 5 kHz, and one at 20 kHz), it would be pretty hard to
create a frequency correction with an emphasis on a certain part of
the frequency spectrum. The goal right now, using the variable window
you designed, is to arrive at a response at 20 kHz about 4-5 dB below
the 1 kHz response. In fact, it would be more likely to cause
harshness when the filters are relaxed than with my current filters!
And remember that when I removed filtering above 225 Hz the harshness
disappeared, so logically speaking, you are blaming the 1 kHz to
20 kHz rolloff created by the simplest target shape known to man
rather than the change of filtering and the number of filters. And the
target mimics the loudspeakers and is centered exactly between hinge
points, each located at the visible center between the peaks and dips
of the smoothed measurement! Therefore I think we have to start
looking elsewhere than at the target as the cause of the harshness I
heard. As a scientist, an engineer and an audiophile, I'm going to
carefully eliminate variables and use a scientific discovery method,
and I hope you are going through the same thought process step by step
as well.
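A simple three-point target like this can be modeled as straight-line interpolation in log-frequency/dB space, the way it appears on a log-scaled chart. A sketch; the anchor values below are hypothetical, chosen only to match the "about 4-5 dB down at 20 kHz relative to 1 kHz" description, and the 5 kHz value is invented:

```python
import math

def target_db(freq_hz, points):
    """Target level at freq_hz by straight-line interpolation in
    (log-frequency, dB) space between sorted (freq_hz, dB) anchors."""
    if freq_hz <= points[0][0]:
        return points[0][1]
    for (f0, d0), (f1, d1) in zip(points, points[1:]):
        if freq_hz <= f1:
            # fraction of the way between anchors on a log frequency axis
            t = (math.log10(freq_hz) - math.log10(f0)) / (math.log10(f1) - math.log10(f0))
            return d0 + t * (d1 - d0)
    return points[-1][1]

# Hypothetical anchors: flat at 1 kHz, sloping to 4.5 dB down at 20 kHz.
anchors = [(1000, 0.0), (5000, -1.5), (20000, -4.5)]
at_10k = target_db(10000, anchors)   # 10 kHz is the log midpoint of 5k and 20k
```

With so few anchors and a gentle overall slope, there is no mechanism in the target itself to emphasize a narrow part of the spectrum, which is the point being argued above.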

1 Spkr setup. Thanks for the explanation of how it works! Try to put
that in the manual :-).

2 Target. Thanks for the explanation of how it currently works.

3 Measurement name. Thanks for the explanation of why it's not the
same as the file!

5 Window sizes saved as time values. The problem with the Nyquist
change I see is that as the sample rate goes up and down, the window
size at the high frequency changes. Somehow it has to be saved as the
minimum window size acceptable at 20 kHz, and not increase as it
apparently does when the sample rate is increased.

> 6              I tried the partial here and it works as it should. Please
> see attached image. But problems could arise with different crossover
> settings. Audiolense allows the user to do things that can make it difficult
> to create good crossovers.

I couldn't see the image. But anyway... I understand the interaction
issues. With the frequency domain correction we were able to make a
gorgeous 55 Hz crossover, by the way, with minimal and acceptable
frequency aberrations. When you wrote me privately saying that the
woofers had about 10 dB headroom above the measurement point, can you
please tell me how you know that?

>
> A             I used 0 octave width by the way.

I could show you the anomalies I found and you could diagnose them,
but we'll let sleeping dogs lie as this is a low priority item.


> B             As I've written before, a TTD with partial will use at least
> 0.5 octaves transition for getting the time domain in order.

We'll approach that in graduate school as right now I'm just trying to
get frequency domain correction to sound good.


>
> I think this is about the right time for me to try to explain a bit physics
> here.
>
> 0.1 octave, and even 0.5 octave is not much when you get down towards 100
> Hz. It is only about 50 Hz. Any substantial change in frequency or time
> domain that happens over 50 Hz will be a very sudden change. We humans
> perceive sound on a logarithmic scale, and we look at frequency charts that
> are log scaled. It looks as if the difference between 10,000Hz and 20,000Hz
> is the same size as the difference between 10Hz and 20Hz. And it sounds like
> that too. But the physics of sound is not logarithmic. It is linear. And
> around 100Hz we're dealing with long wavelengths as well. You have a couple
> of difficult room reflections around 60Hz. You tried to run crossovers
> straight through them,

You haven't seen the change that we made :-). In the past, with the
analog tools, I could not move the subs to the corners or it caused an
extreme time discrepancy and disconnect between the mains and the
subs. For the fully digitally crossed-over system, the woofers are now
in the corners, no longer causing the 55 Hz issue. And Audiolense can
then make the time domain correction of subs to mains. This allows a
smooth transition at 55 Hz with no more trouble. Unfortunately, with
my WASAPI versus ASIO issues in JRiver and JRiver's current
architecture, I cannot have my cake and eat it too, so we're going to
not implement a crossover, just go back to pure 2-channel-in/2-channel-
out, and evaluate the sound of your frequency domain correction
dithered to 24---with, of course, a relaxed frequency correction using
the methods you describe below.

> D             A linear phase cutoff filter 20Hz / 24dB will not create a
> phase shift, that is true. But it will create a LOT of ringing. Slow rise
> and slow decay. Pre-ringing and post-ringing. Equally amounts of time domain
> distortion on both sides of the peak - that's how you get a linear phase
> behavior from something that takes a lot of time. And it will also add more
> complexity to the correction filters.

Complexity an issue? Awww, gee, complexity is not an obstacle any
more with anything you do on a modern computer. By the way, I just
had a talk with Jim Johnston, and he says that a 65,536-tap filter is
for Carnegie Hall and 1.5 seconds of reverberation time. So we can
definitely make shorter filters :-). But anyway, we can go back to a
minimum phase gentle-slope filter, but we do have to try to keep
subsonics out of the woofers to reduce issues. Keep in mind that I am
evaluating original source material whose response often goes down to
the center of the earth, and I have to make judgments on whether to
apply subsonic filters on raw bass drum tracks. Keeps me awake,
anyway!


> 1 - Relaxed frequency correction. The relaxation happens with the
> measurement filtering and by using short correction windows. When I use a
> moderately short window on your measurement, the smoothed measurement only
> contains the most basic and fundamental fluctuations. Small changes across
> several thousand Hz. There is very little left to correct, and it takes very
> little to correct it with a FIR filter. There isn't a +/- 1dB regulator in
> Audiolense. The precision you see in the smoothed simulation is created by
> the time domain restrictions. If there were none, the simulation would be
> identical to the target. The time domain restrictions are your best friends
> with regards to avoiding overcorrection.

I'm sorry that you didn't have a paragraph on this in the manual, or we
wouldn't be having this discussion. Are you trying to tell me that
relaxed correction will solve the harshness issues, or simply saying
that the above is the method you would recommend to achieve it?
Anyway, I'll be performing another single variable experiment this
weekend with relaxed correction and get back to you all on Monday!
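
What a short correction window does can be illustrated with a toy impulse response (my sketch, not Audiolense's algorithm): truncating the measured IR with a short taper before computing the response removes the late reflection, and with it the fine comb-filter ripple, so very little is left to correct.

```python
# Toy illustration of "relaxation" via a short analysis/correction window.
import numpy as np

fs = 48000
ir = np.zeros(fs // 2)
ir[0] = 1.0                   # direct sound
ir[int(0.015 * fs)] = 0.6     # a strong reflection 15 ms later -> comb ripple

def response_db(h, n_fft=1 << 16):
    return 20 * np.log10(np.abs(np.fft.rfft(h, n_fft)) + 1e-12)

# "short window": keep only the first ~5 ms, with a half-hann taper
n = int(0.005 * fs)
win = np.hanning(2 * n)[n:]
smoothed = response_db(ir[:n] * win)
raw = response_db(ir)

# the windowed response is far flatter -- there is far less to "correct"
print(np.std(raw), np.std(smoothed))
```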


> IIR stands for Infinite Impulse Response. INFINITE. The only way to control
> the time domain behavior somewhat is to be cautious in the frequency domain.
> With Audiolense you have tight control over the time domain. Anything
> substantial that you do inside a short time window, that makes the frequency
> response look significantly better is usually worthwhile doing.

So, are you now feeling it is the long time window that's causing the
harshness by causing too much filtering, or are you still feeling it's
a target or "system implementation" issue? In both of the latter
cases I believe I've made a strong case for my position. We shall see
after Monday.


> Third, and this is equally important: I still haven't seen an EQ based
> toolkit that produces a good analysis of the unfiltered frequency response.
> Most of the smoothing techniques used will produce wide-band artifacts
> somewhere from the upper midrange and onwards, and dips that appear to
> be deeper than they really are. If you fully correct a dip based on the most
> commonly used smoothing techniques you will create temporary peaks. Dip
> lifting has a bad reputation among EQ users because it is not done right.
> They are creating audible peaks because they work from the wrong frequency
> charts and with the wrong tools. And when they get the "hollow" sound they
> blame it on the wrong causes.

I was wondering how you were able to correct dips with your toolkit
that I have not been able to. The laws of physics and every book say
that if the room is creating negative energy at frequency X, that no
amount of amplitude restoration will correct it. But at the same time
you are talking about the frequency domain correction procedure, yes?
But you are doing some sort of time-domain fix as well as amplitude
fix to deal with the dips? I'm still a bit confused about how you are
doing it and want to confirm you are talking about
minimum phase FIR filters that also correct time and group delay
issues as much as possible. Of course the time and group delay issues
fix many of the dips even before amplitude correction, but you are not
using any linear phase filters in the frequency correction module,
correct? Forgive me if I use the wrong terminology or the language, I
am still learning about your new techniques.
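
For readers following this exchange: one conventional, textbook way to build a magnitude-only minimum-phase FIR is via the real cepstrum. This is a generic construction, not Audiolense's actual code. A filter built this way is causal, and it also corrects the minimum-phase part of the group delay "for free", which may be why some dips improve without any linear-phase filtering.

```python
# Generic cepstral construction of a minimum-phase correction FIR.
import numpy as np

def min_phase_fir(mag, n=4096):
    """Minimum-phase FIR with the given magnitude response.

    mag: desired magnitude sampled on the full n-point FFT grid
    (real, positive, symmetric: mag[k] == mag[n-k]).
    """
    log_mag = np.log(np.maximum(mag, 1e-9))
    cep = np.fft.ifft(log_mag).real          # real cepstrum of the magnitude
    # fold the cepstrum onto causal quefrencies -> minimum-phase spectrum
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]
    fold[n // 2] = cep[n // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real

# toy correction: lift a gentle 6 dB dip (a smooth, symmetric grid)
n = 4096
k = np.arange(n)
d = np.minimum(k, n - k) / (n / 2)           # 0 at DC, 1 at Nyquist
gain_db = 6.0 * np.exp(-((d - 0.25) / 0.1) ** 2)
h = min_phase_fir(10 ** (gain_db / 20), n)
```

The resulting filter matches the requested magnitude on the FFT grid and concentrates its energy at the start, i.e. no pre-ringing.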


> flirt with digital clipping. I don't know how you measured actual gain
> during playback when you found the 6-8dB of available gain, but there are a
> lot of methods out there that I don't trust when it comes to these things.
> You really have to look at every sample after correction to be on the safe
> side.

I know EXACTLY how I got the 6-8 dB of available gain. By measuring
using sample-accurate digital meters on each digital output. Since I'm
going to make the weekend tests with dither and the old 2.0 system
with no digital crossover (in order to get the dither in JRiver) I'll
report back to you whether the 6-8 dB of additional gain is still
available. If not, then it is simply because the peak amplitude
content of the digital crossover is less in each band than the total
was before crossing over. That would be the simple explanation.
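
Bernt's advice to look at every sample after correction is easy to automate; here is a minimal sketch (the function name is mine) that convolves program material with a correction filter and reports worst-case headroom.

```python
# Sketch: worst-case headroom check of corrected audio, sample by sample.
import numpy as np

def headroom_db(x, h):
    """Headroom, in dB below full scale, of x convolved with filter h.

    Negative values mean the corrected signal would clip.
    """
    peak = np.max(np.abs(np.convolve(x, h)))
    return -20.0 * np.log10(peak)

# example: material peaking at -6 dBFS through a unity filter
x = np.zeros(1000)
x[500] = 0.5
print(headroom_db(x, np.array([1.0])))   # ~6.02 dB of headroom left
```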

>
> 4 - Having several measurements and correction side by side would be a nice
> feature. Unfortunately there are users who run Audiolense with so huge
> systems that this will create memory problems, and we have enough of those
> already. A simple alternative is to open several instances of Audiolense and
> have two screens side by side.

I never thought of that. I definitely will try it! Two instances. Put
it in the manual :-).

> A partial TTD correction is a mixed blessing. I really don't think it is a
> good idea to run a  partial TTD correction to 200 Hz as long as the system
> responds well to a TTD correction that goes higher up.


I have to repeat that the apparent transient response loss of TTD due
to impulse response spreading and "noise" in the decay curve is always
going to be more apparent at mid to higher frequencies to the ear. I
have a lot of experience evaluating several methods of linear-phase
equalization, which also spreads the impulse response and causes
preringing, and I heard much of the same symptoms with Audiolense TTD
that I have heard with linear phase eq. You can't beat the laws of
physics. Instead of trying to beat them, if you have a method in the
back of your mind that could split the system between TTD and
frequency domain, I'll be the first to want to hear it. I believe that
the problems with the bass drum ringing that your friend heard with
TTD and you heard with one particular Eagles music cut are high
frequency anomalies, not bass fundamental ones. As you well know, when
people talk about "fast woofers" they're either talking about harmonic
distortion or upper harmonics.


> Part of the
> explanation was given further up. The other part is that TTD correction
> through the midrange usually sounds substantially better - if you get the
> target response right. There are speakers who are perhaps too much to handle
> for a TTD correction through the treble but your speakers have a very clean
> pulse.

I just did not like the high frequency sound and I tried a number of
different targets. I am convinced it is impulse response degradation
and spreading. Similarly, with linear phase FIR filters I was NEVER
able to get a clear, transparent high end. But it made a good dynamics
compressor :-). Jim Johnston was over the house one day and he
explained to me that linear phase filtering produces a kind of a comb
filter without it being a comb filter. The delays are there but
without the frequency response peaks and dips. I believe that your TTD
is not a linear phase filter set per se, but the symptoms of the
preringing in the impulse response are very similar. I think the key
to good TTD is in JJ's papers (sorry to hark back to them so often)
where he says not to overcorrect too many reflections. This of course
would reduce the artifacts in the impulse response, at least. Anyway,
we'll get back to TTD when (if?) I conquer frequency domain with
Audiolense.


> You can do a similar test by
> measuring your system with the analog eq in place through Audiolense. And
> make a target that is more or less a replica of the frequency response you
> have with the analog eq in place. Then you can disable the analog eq, do a
> new measurement and make a frequency correction towards the target you made
> from the first measurement. Then you can compare. If there's no digital
> clipping and no other crap going on in the digital domain, this will be a
> good test of the transparency of your analog eq, but also of the frequency
> correction of Audiolense.

It sounds like a nice test, but it's real hard to draw a target that
looks exactly like the original analog-eq-corrected system. My
approach is simply to listen to the harshness I hear and if it goes
away with minimal correction, then it will prove to my satisfaction
that it is not the target that is the cause of the issue, but simply
overcorrection. Permit me my obsession until proven otherwise!

> If you keep suspecting that the frequency correction filters
> are fundamentally flawed,

You have misread me, Bernt. I did NOT say that the frequency
correction filters are flawed. I did mention in the letter that I have
many many high-quality digital equalizers and analog equalizers at my
beck and call. The good ones sound quite pure in their sound. And none
of the good ones cause the harshness that we observed until you try to
do EQ with too narrow a bandwidth and especially too many filters. It
is a sound that Massenburg, Neve, Katz, Chafee and many other
authorities know and recognize very well, and we know the
causes of it. I claim that the sound that I heard is NOT the same
sound as would be caused with an overly aggressive or improper Target
shape. We shall see who's right in hopefully a short time. And there
is no digital or analog clipping in my system!

> if you stick to the same guiding rules as you do
> with EQ and if you keep believing that a precise correction of a heavily
> smoothed measurement is too much I doubt that you will be able to capitalize
> on a first class FIR correction.

We shall see how my single-variable experiment comes out. Again, the
target I picked mimics the loudspeaker curve almost exactly. And
therefore to my mind I already performed a single-variable experiment
with the partial correction below 225 Hz. That's one ace in the hole.
This weekend I'll perform another single variable experiment with a
narrower time domain analysis/correction window, fewer filters, and
we'll see if the harshness remains or goes away. Simple as that.


> But nothing would please me more than if you
> stick around and have another go at it later.
>
> It was very difficult for me to respond to your summary. I hope it didn't
> come out the wrong way.

I appreciate that. Please keep an open mind as I am keeping one as
well. Let the tests continue with an open mind.


Best wishes,


Bob

Bob Katz

Jan 25, 2013, 10:13:30 PM
to audio...@googlegroups.com
Update. I am able to get six channels of dither from a line in so I'm going to do a full crossover and optimally place the subwoofer for the Audiolense listen. Here's how I did it:

http://yabb.jriver.com/interact/index.php?topic=76912.msg527090#msg527090     and see my post with the screen shot taken from VST Host.

Bernt Ronningsbakk

Jan 26, 2013, 11:41:27 AM
to audio...@googlegroups.com
Dear Bob,

You're a good sport and a gentleman.

I did read your mail thoroughly and I was more than halfway through a long
and thorough response to you. But I think it is better to keep it short and
take one thing at a time.

> Let me reply first by saying that if in the end it does turn out to be
> target or issues in my playback system that caused the harshness that I and
> my two other experts heard I'll buy you a case of Ringnes and have it
> shipped overnight to you in Oslo or wherever you live! Fair bet?

That is fair and generous. I guess I'll have to match that one.

If you can produce a plausible explanation of how a typical frequency
correction generated with Audiolense, a correction designed to alter the
frequency response towards less linear distortion, can have
harshness-producing side effects that aren't a direct result of the change
in frequency response itself, measurement errors and other usual suspects... in other
words present some sort of distortion mechanism that I'm not aware of ...

If you can do that I'll place an order with Maple Leaf Farms for delivery at
your place. They have some really good tasting duck confit there.

I believe that the harshness you heard in your studio was a consequence of a
change in the frequency response and that alone. And I believe that it can
be negotiated by modifying the target. I don't believe there is a side
effect of a detailed correction in place here that made the difference. But
I will keep an open mind about it. There is always a lesson to be learned.

If I understand you right you think that a full range frequency correction
as executed by Audiolense will have negative time domain effects.

I have attached two screenshots. They are from a full range frequency
measurement of your front left speaker. I disabled the subwoofer so that we
can compare the speaker before and after correction. Measurement and
correction window setting was 3-5. The first chart shows the smoothed
frequency response before and after correction. The second shows the log
impulse response before and after correction. The peaks are aligned in the
second chart to make it easy to compare decay pattern before and after. The
general tendency as far as I can see is that they have practically the same
time domain behavior. This view doesn't show what's going on at lower
frequencies and there will be changes there, but in my judgement it will
be neither better nor worse. Anyway, I believe we're mostly discussing
high frequency correction here.

In these comparisons it is important whether you keep the overall tendency
of the uncorrected speaker intact or raise or lower the first peak by
design, so to speak. Small variations in the target response make a
difference to the relative height of the main spike in the impulse response.
But I figure you're ok with such a target since we are really discussing the
audibility of correcting more narrow-banded frequency deviations.


Bernt




full range bob fr.jpg
full range bob log IR.jpg

Bob Katz

Jan 27, 2013, 6:29:54 AM
to Audiolense User Forum
Dear Bernt: Fair enough! I think the terms have been drawn! Google
groups is misbehaving apparently, perhaps due to something in your
post (the jpeg attachments?). When I clicked on your post I got the
dreaded spinning browser update cursor. I then reverted to the old
Google groups and I was able to read your post, but all I got was
weird characters on screen when I tried to view your screenshots.

Bob Katz

Jan 27, 2013, 6:37:25 AM
to Audiolense User Forum
It's frustrating not to be able to edit my own posts, I hit send
prematurely. I was able to see your jpegs by downloading them, but not
viewing them on screen. Something in the file name of the jpeg perhaps
is making Google choke. I'm back in the old Google groups, which is
going away they say. Anyway, I suggest you take screenshots as png
instead of jpegs. They take up less room and apparently don't kick
Google. In my next post I'm going back to the new Google groups and
I'll post two png images to begin with and then some further images.
It could also be the name of your file with the + sign in it, but I'm
not sure.

Stand by, I'm going to the new google groups,


Bob

Bob Katz

Jan 27, 2013, 6:47:18 AM
to audio...@googlegroups.com
OK, I did my listening, very carefully, all day Saturday in fact. I have my answer and I'm quite confident in it. Single variable test: identical target, two different corrections, one more "relaxed" and one using the more standard frequency correction procedure.

Attached are two sections of images comparing low frequency sections or high frequency sections of two different corrections. I took the screenshots with two instances of Audiolense so the zooms are slightly different although I tried to match them as best as possible. I'll ask this question: Which do you think sounds better, figure 1A or figure 1B? Figure 2A or figure 2B?

And your answers?

Hints: 

1) Psychoacoustics
2) Phase shift?
3) "With great power comes great responsibility."  --- Spiderman.

Duck confit yet?   Not quite. I'm working on a microphone measurement of the system in REW and will post it, hopefully, shortly. But I do have the answer that my ears tell me.
Fig 1 AB.PNG
Fig 2 AB.PNG

Bernt Ronningsbakk

Jan 27, 2013, 7:59:11 AM
to audio...@googlegroups.com
It seems like googlegroups doesn't like my attachments anymore. Does anyone
know the secret here? Are jpg attachments not OK?

I don't want to spend the better part of today troubleshooting googlegroups
so I sent the post directly to Bob so he can see the images. If anyone else
wants them just send me a mail.

Kind regards,

Bernt



Bernt Ronningsbakk

Jan 27, 2013, 8:00:46 AM
to audio...@googlegroups.com
I don't have PNG on the tools I'm using here. What are you using?

Kind regards,

Bernt



Bob Katz

Jan 27, 2013, 8:40:46 AM
to audio...@googlegroups.com
Hi, Bernt. For screenshots I use the Windows snipping tool (start menu, search for snip and you'll find it). And it can export in png format.

I was able to download and view your jpegs, only Google couldn't display them.

Best wishes,


Bob

Bob Katz

Jan 27, 2013, 8:58:11 AM
to audio...@googlegroups.com
OK, here are my sonic reactions. Basically I trust my ears. The aggressive (over)correction sounds veiled and harsh to my ears, with poor depth as well. The relaxed correction sounds pure and open and musical, more like the analog filter but better! In fact, the relaxed correction sounds so good, and so pure, that I think I am ready to start working with it, if I can deal with the sample rate switching issues and VST Host during my daily work. I love the sound with the subwoofers in the corners, but I can't put them there when using the analog correction system because there is too much loss of coherency between the woofers and the mains. But with the time correction provided by Audiolense, the sound is coherent and quite beautiful with the woofers in the corners, using the relaxed correction algorithm.

Viewing the two frequency responses overlaid, it appears the relaxed one is brighter at the top end though the identical target was used, I promise! And anyone here could argue that the relaxed one could sound harsh because it is brighter, so that argument goes out the window. I claim that with identical targets, the relaxed correction will sound better and that averaged frequency responses and target adjustment are not as important as minimizing the number of filter points and amount of correction in the audio. In fact, I heard the improvement in purity of sound and reduction of veiling as soon as I made the relaxed change, long before I settled on the optimal high frequency rolloff. I repeat, independent of the target, I heard the improvement.

Attached for your visual enjoyment are the aggressive versus the relaxed frequency response measurements of the left channel, displayed overlaid in REW. I also took phase and group delay measurements of left versus right channel, aggressive versus relaxed, etc. Impulse response measurements and ETC log are identical, as you noted. If there is a smoking gun here, I think it is phase shift, but it is very hard to judge and all I can do is measure it. All I can do to justify the audible differences is listen and try to point to a smoking gun that I call "overcorrection". Anyone with ears can hear the sonic deterioration of the overcorrection, and I stake my reputation on this.

REW is using 1/6 octave smoothing, and a hann window with 500 ms before and after the impulse. These tests were performed at 44.1 kHz.
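
For anyone reproducing these measurements: fractional-octave smoothing of the kind REW applies can be sketched as a moving average over a constant width in log-frequency. This is my simplified version (the function name is mine, and REW's exact algorithm may differ):

```python
# Sketch of 1/frac-octave smoothing on a log-frequency grid.
import numpy as np

def octave_smooth(freqs, mag_db, frac=6):
    """Average each point over a +/- 1/(2*frac) octave band around it."""
    out = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        lo, hi = f * 2 ** (-0.5 / frac), f * 2 ** (0.5 / frac)
        band = (freqs >= lo) & (freqs <= hi)
        out[i] = mag_db[band].mean()
    return out

# demo: a response with fine 1/12-octave ripple smooths to nearly flat
freqs = np.logspace(np.log10(20), np.log10(20000), 1200)
ripple = np.sin(2 * np.pi * 12 * np.log2(freqs))    # 12 cycles per octave
smooth = octave_smooth(freqs, ripple)
print(np.std(ripple), np.std(smooth))
```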

It's impossible to have a single variable experiment because 1) the frequency responses are radically different, 2) the amplitudes and perceived amplitudes are slightly different but I tried to compensate for that in the listening, however, the partial loudness (in different frequency bands) makes it impossible to exactly compare the two. But the sonic differences are intuitively obvious to the most casual observer. Unfortunately, it can be argued that the relaxed frequency response is louder. See attached REW measurements, taken with a microphone.

It was extremely difficult to take these REW measurements!!!!!  The digital patching was very complex. I had to use two ASIO interfaces, sample-locked to the same time base because neither of my ASIO interfaces is multiclient. I had to set up REW to run on my Lynx card and VST Host to run on my RME card, which fortunately are installed in the same computer! VST Host allows me to play convolved music or test tones "live" digitally from an external source because unfortunately, currently I have not been able to get live input working in JRiver.

By the way, I conquered the multichannel 24-bit dither issue, thanks Mojave. I bought Voxengo Elephant. It works perfectly in JRiver (measurements and tests are textbook perfect) and in VST Host.
left ch aggressive vs relaxed freq resp.png

Bob Katz

Jan 27, 2013, 9:05:41 AM
to audio...@googlegroups.com
One problem with the relaxed correction method in Audiolense is that with frequency correction the measurement window and the correction window are tied together. So only REW was able to reveal the greater extremes of low frequency excursion that the relaxed method produced. The measurement window and the correction window need to be separated in the frequency method as they are in the TTD method. Audiolense does not show as large an excursion of the bass response as REW for this reason. It is so difficult for me to do an REW analysis via microphone, however. Maybe I can bring a second computer down to do it. Still a bitch to do.

It can be argued, viewing the REW analysis, that the bass is now not adequately controlled with the relaxed correction, and I agree, but as I said, the problem does not show itself because the measurement window and the correction window are tied together in the frequency domain correction. It would take me hours and hours to iterate back and forth between Audiolense and REW so that I could get the correction window optimized below about 150 Hz. Currently I have 500 ms. up to 1 kHz, then a few milliseconds (I'd have to look to give you the number) until 20 kHz where it's about 0.3 ms. (again I'd have to look to give you the exact number). So enabling and optimizing relaxed correction is a cumbersome technique right now in Audiolense if you want to do frequency correction. If I get the energy today I'll try the same philosophy with TTD and see how it looks and sounds.
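
The frequency-dependent window idea described here (hundreds of milliseconds in the bass, fractions of a millisecond at 20 kHz) can be sketched as follows. This is my toy version with a made-up function name, not Audiolense's or REW's implementation: at each frequency, only a fixed number of cycles of the impulse response is analyzed.

```python
# Toy frequency-dependent-window (FDW) magnitude estimate.
import numpy as np

fs = 48000
ir = np.zeros(fs // 2)
ir[0] = 1.0                    # direct sound
ir[int(0.020 * fs)] = 0.8      # room reflection 20 ms later

def fdw_mag(ir, fs, f, cycles=5):
    """Magnitude at f using a window ~cycles/f seconds long (half-hann taper)."""
    n = min(len(ir), max(8, int(cycles * fs / f)))
    seg = ir[:n] * np.hanning(2 * n)[n:]        # taper from ~1 down to 0
    t = np.arange(n) / fs
    return abs(np.sum(seg * np.exp(-2j * np.pi * f * t)))   # single-bin DFT

print(fdw_mag(ir, fs, 10000))  # ~1.0: the reflection falls outside the short HF window
print(fdw_mag(ir, fs, 50))     # far from 1.0: the long LF window sees the reflection
```

This is why a fixed 500 ms window in REW shows bass excursions that a short-windowed display hides, and vice versa at high frequencies.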

Bernt Ronningsbakk

Jan 27, 2013, 9:06:24 AM
to audio...@googlegroups.com

Hi Bob,

 

I assume that the LF roundoff is around 20 Hz and the subs are engaged. And that the target starts to downslope between 1kHz and 2kHz. If I’ve got the x scale all wrong I may need to have another look.

 

Provided I’ve got the x axis right….

 

I think 1 is better sounding than 2. I am almost certain that I would prefer 1.

 

How about you?

 

Kind regards,

 

Bernt


Bob Katz

Jan 27, 2013, 9:09:35 AM
to audio...@googlegroups.com
One more note, since REW does not implement a variable window function, the 500 ms. analysis does not correctly show the perceived HF response compared with Audiolense. It also smooths out the differences between relaxed and aggressive in the high frequency region. I'd have to do a composite measurement to show it to you in REW. So please ignore the curve of the HF response in the above attached document and just look at the differences between relaxed and aggressive.

Bernt Ronningsbakk

Jan 27, 2013, 9:09:48 AM
to audio...@googlegroups.com

Thanks,

 

Kind regards,

 

Bernt

Alan Jordan

Jan 27, 2013, 9:16:00 AM
to audio...@googlegroups.com
Hi Bob,

Could you list or show screenshots of the Audiolense Correction Procedure Designer window parameters you used to make both of these filters, and maybe the target designer windows?  I would enjoy trying a similar experiment.

Thanks,
Alan


Bob Katz

Jan 27, 2013, 9:21:39 AM
to audio...@googlegroups.com
Actually, based on the attenuation of the target line in the graphs Fig 1AB and Fig 2AB I uploaded, the relaxed correction uses about 1 dB less overall attenuation than the aggressive. Strange that the more aggressive correction requires more attenuation. But anyway, to be fair to Bernt and make sure it's not "being fooled by loudness," I'll do a level-matched listening comparison, turning up the aggressive correction by 1 dB. I'll do it a little later today.


Best wishes,

Bob

Bob Katz

Jan 27, 2013, 9:40:10 AM
to audio...@googlegroups.com
Hi, Alan. I'd enjoy hearing other people's reactions to my hypothesis  :-). Attached are screenshots of the two procedures here. The relaxed one has a change in the window size at a 1 kHz midrange point. Both procedures enable partial correction stopping at 20 kHz but that is of course optional depending on your room, etc. When comparing, compare the attenuation of the target line and note the attenuation differences between the two types of corrections so you can play back both at equal loudness (at least based on the target, not the frequency partials) to be fair in the listening. I wish there were presets in JRiver's DSP setup (if you are using JRiver). But you could set up two zones, or use a calibrated volume control, or set up two parametrics with just a different gain adjustment in each and check one or the other depending on which convolution filter you choose.

Make sure that neither one clips your interface! Use a good digital meter or hopefully at least the sample meter in your interface and ensure the digital sample level does not exceed -1 dBFS to be safe.

Best wishes,


Bob
Aggressive (standard).PNG
Bobs relaxed part 1.PNG
Bobs relaxed part 2.PNG

Bob Katz

Jan 27, 2013, 9:53:37 AM
to audio...@googlegroups.com
Guys. Sorry about the figure name confusion. I'll try to clear it up here.

On Sunday, January 27, 2013 9:06:24 AM UTC-5, BerntR wrote:

Hi Bob,

 

I assume that the LF roundoff is around 20 Hz and the subs are engaged. And that the target starts to downslope between 1kHz and 2kHz. If I’ve got the x scale all wrong I may need to have another look.

 

Provided I’ve got the x axis right….

 

I think 1 is better sounding than 2. I am almost certain that I would prefer 1.


The X scale in fig. 1 is from 10 Hz (showing the bass rolloff of the woofers at 20 Hz) through about 600 Hz. You can recognize 100 Hz where the first decade of the log scale ends. The X scale in fig. 2 is from about 500 Hz through 20 kHz though it goes off the screen somewhere about 10 kHz. So the first decade ends at 1 kHz, the second decade ends at 10 kHz in fig. 2.

Fig 1AB contains the bass region comparison of the two corrections with the relaxed on top = Fig. 1A. That would be figure 1A = relaxed correction bass response. Figure 1B = aggressive correction bass response. Fig 2A = relaxed correction midrange through treble response. Fig 2B = aggressive correction midrange through treble response.

I'm sorry for not being able to put frequency labels on these graphs. I tried to overlay them using a screen grab and two instances of Audiolense.
 

Bernt Ronningsbakk

Jan 27, 2013, 11:06:37 AM
to audio...@googlegroups.com

Dear Bob,

 

You gave away your verdict before I managed to post my prediction. Anyway….

 

As I said, I would expect method 1 to be the better sounding procedure, but for very different reasons than the ones you have put forward.

 

Neither of the corrections looks like a winner to me. The target itself is designed for coloration. And that’s the main problem here.

 

I am reposting the charts that you posted, Bob, and refer to fig 1.

 

Please observe that the target in figure 1 has a concave shape. There is a little “loudness” effect in the target response in figure 1 between 20 Hz and 1.5 kHz, with a “low point” at around 100 Hz. It is not much, but ¼ of a dB across several octaves is likely to be somewhat audible.

 

I have never been able to produce a good sounding target that has a concave shape anywhere on the curve, and believe me I have tried. When I started with sound correction “everybody” seemed to be using targets with bass lift. And the side effect of this was a concave shape in the lower midrange, so I assumed that it was how it should be. And that made me try to make such a target work for a very, very long time. In the end I was so sensitive to the artifacts associated with this frequency manipulation that I heard it on even the slightest bass lifts. A concave shape will always color the timbre and it will usually create an emphasis somewhere else, because it is very difficult to have something concave in the middle and not a peak elsewhere. I started to get my speakers working really well when I finally realized this, because that also made it a whole lot easier to take care of problems in the upper midrange and treble too….

 

I’ve tried to duplicate your target, Bob. I do hope it is accurate enough to get the message through. Overall very flat between 20 Hz and 1.5 kHz with a tiny depression, and then a downslope of some 4.5 dB from 1.5 kHz to 10 kHz. Refer to the bob target.png.

 

After I drew that target I made a copy that I rotated upwards, so that it is overall flat. The rotated copy will have less meat down low, more air up high and so forth. The timbre quality of an instrument or a female voice will for the most part be very well preserved. I may have exaggerated the tilt somewhat here to get my point through, and you may have to play really loud to produce a decent bass - but this is anyway how I expect your target to work as far as timbre is concerned.
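
This rotated-target diagnostic is easy to reproduce numerically. Below is my sketch with a made-up target shaped roughly per the description above (flat to a 1.5 kHz knee, about 4.5 dB down by 10 kHz): when the curve is tilted so its endpoints are level, the knee shows up as a peak.

```python
# Sketch: the "tilted view" of a downsloping target. The target here is a
# hypothetical stand-in, not the actual measured target.
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 600)
slope = -4.5 / np.log10(10000 / 1500)            # dB per decade above the knee
target = np.where(freqs < 1500, 0.0, slope * np.log10(freqs / 1500))

# rotate the curve so its endpoints are level
total_drop = target[0] - target[-1]
tilt = total_drop / np.log10(freqs[-1] / freqs[0])
rotated = target + tilt * np.log10(freqs / freqs[0])

peak_hz = freqs[np.argmax(rotated)]
print(peak_hz)   # the knee becomes the peak of the tilted curve, near 1.5 kHz
```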

 

Quite a few years have passed since I learned that this tilted display shows how the target tends to affect the timbre. The concave target shape around 1.5 kHz is a peak on a tilted response. It will lead to a substantial emphasis on the 1-2 kHz region and frequencies on both sides will be masked by it. The relative rise towards this frequency is likely to add a somewhat “outdoor” quality to the playback. The ambience will be more outdoor-like. At best it will sound more transparent than what’s actually on the recording, but quite often this will be perceived as better than the real deal. The serious issue is that music material with high energy in the 1-2 kHz region will bite you. And that’s what we’re discussing here.

 

Still on figure 3. Let’s do a mental zoom out from your fig 1 and the 1.5 kHz emphasis. When we observe the whole frequency range we see a concave overall tendency in the target. Such an overall concave tendency will color the perceived sound stage and also the perceived size of anything and anyone making sound on that stage. Things will tend to move forward and to shrink. As I’ve already pointed out, you have the opposite tendency below 2 kHz, Bob. So you will get mixed results here. Something will sound bigger than life and something will sound smaller, and sometimes it will be a mix of both - all depending on the frequency content of whatever is playing. In any case I suspect that a frequency response like this target will substantially ruin the presentation of a recorded ambience above a few hundred Hz.

 

Which brings me to the final criterion in this post, and perhaps the criterion that is most difficult to meet: proper discrimination between various recorded ambiences. The icing on the cake. Even slight colorations to the frequency balance tend to have a substantial influence on how ambience is represented. The more colored the frequency response is, the more different ambiences tend to sound the same.

 

If we ever get to the finish with this, and if you commit yourself to creating a really good correction towards minimum linear distortion from 20 to 20k, you may need some time to get used to it. Because it will immediately sound less transparent than what you’re used to. Because fewer frequency regions will stand out from the crowd. And it will take some time before you get used to it enough to appreciate the things that were somewhat masked before. The target response is all about spatial masking and it is a big deal. Sometimes the ears need to be calibrated too. I have the impression that some of the most dedicated Audiolense users progress towards more and more neutral targets as time goes by.

 

Of course the target response isn’t exactly what you were listening to. I will address that in the next post.

 

 

 

Kind regards,

 

Bernt

 

From: audio...@googlegroups.com [mailto:audio...@googlegroups.com] On Behalf Of Bob Katz


Sent: Sunday, January 27, 2013 2:58 PM
To: audio...@googlegroups.com

--

Fig 1 AB.PNG
Fig 2 AB.PNG
bob target.png

Bob Katz

Jan 27, 2013, 11:58:01 AM1/27/13
to audio...@googlegroups.com
As a workaround I'm excited. I'll use Audiolense on the RME on internal sync, set up for 2.0 analysis, sending just a 2.0 signal to the Lynx interface with VST Host running the convolver to the loudspeakers. Then I can immediately analyze in Audiolense using any window I wish and get a full try-correct-retry cycle until I find a setting in Audiolense that makes me happy with the bottom end while still not having too many filters above a certain frequency.

Brad

Jan 27, 2013, 2:04:57 PM1/27/13
to Audiolense User Forum
On Jan 27, 7:58 am, Bob Katz <bobkatz24...@gmail.com> wrote:
> .....The relaxed correction sounds pure and open and musical, more like
> the analog filter but better! In fact, the relaxed correction sounds so
> good, and so pure I think I am ready to start working with it, if I can
> deal with the sample rate switching issues and VST Host during my daily
> work......

Hi Bob
Thanks for continuing to fight your way through the infinite solution
possibilities that Audiolense presents to us. It's great to have you
share this experience with us. I'll have to re-do my 16 channel system
now.

I want to get some type of verification measurement also and I want a
variable window as you mention. Maybe I can run an Audiolense
measurement through JRiver loopback with the correction filters? Bernt
uses some type of variable window and I would like to verify with the
same measurement method as used for the filter generation.

Matt seems to be working on a JRiver 24-bit dither solution that
will work with the Lynx Aurora ASIO driver.

Brad

Bob Katz

Jan 27, 2013, 2:41:56 PM1/27/13
to audio...@googlegroups.com
Hi Brad...


On Sunday, January 27, 2013 2:04:57 PM UTC-5, Brad wrote:
On Jan 27, 7:58 am, Bob Katz <bobkatz24...@gmail.com> wrote:
> .....The relaxed correction sounds pure and open and musical, more like
> the analog filter but better! In fact, the relaxed correction sounds so
> good, and so pure I think I am ready to start working with it, if I can
> deal with the sample rate switching issues and VST Host during my daily
> work......

Hi Bob
Thanks for continuing to fight your way through the infinite solution
possibilities that Audiolense presents to us. It's great to have you
share this experience with us. I'll have to re-do my 16 channel system
now.

I want to get some type of verification measurement also and I want a
variable window as you mention. Maybe I can run an Audiolense
measurement through JRiver loopback with the correction filters? Bernt
uses some type of variable window and I would like to verify with the
same measurement method as used for the filter generation.



I think it is possible to do a verification through the corrected system by measuring in Audiolense, at least with two interfaces. This is to overcome the current limitation in the frequency correction procedure, which uses the same window for analysis as for correction, so the simulation is inaccurate. Assuming you buy my claim that relaxed correction sounds better, you still shouldn't "relax" the analysis  :-). So by reading back from a corrected system and then using the standard windowing in Audiolense for analysis (which is accurate and not relaxed) you can verify what you have done beyond the simulation, because of course a simulation with a narrow window will not reveal the error of your ways.

Make sure you have an analog-domain volume control and mute button in your system when you try a patch this complex, because feedback could blow your speakers! Proceed with caution, and check and recheck.

 
Matt seems to be working on a JRiver 24-bit dither solution that
will work with the Lynx Aurora ASIO driver.

Brad

I noticed  :-). But I couldn't wait, so I bought Voxengo Elephant. Everything I listen to, whether in JRiver or in VST Host, is properly dithered to 24 bits.

Best wishes,


Bob

Bernt Ronningsbakk

Jan 27, 2013, 2:53:12 PM1/27/13
to audio...@googlegroups.com

Hi Bob,

 

It seems like I’m able to post illustrations on the forum again.

 

I will now explain why I said that I probably would prefer the lesser correction over the detailed correction.

 

First of all, on closer examination of your target I discovered that you had a 1/2 dB depression through the lower midrange, not the 1/4 dB I had assumed. The situation was worse than I first thought.

 

Please have a look at the attached bob target 2 image.

 

The black target is my version of a quick and dirty target towards a natural timbre: a straight line with a couple of dB of down slope and a round-off at both extremes. It is probably not a winner, but usually a good start. Sometimes a great start. My thinking is plain and simple: the correction that gets closest to this target will be the best-sounding correction.

 

I was basically looking for the correction that added as little as possible around 1-2kHz and removed as little as possible around 100-400 Hz. From where I’m sitting you have problems in those two regions both in your speaker and in your target.

 

The attached charts are quite messy but I don’t know how to display this any better.

 

Let’s start with the smoothing in the bob less corr illustration. Observe how the relaxed smoothing ignores a lot of local peaks in the lower midrange. This is a region where your speakers need more energy, not less. These peaks are audible, but since the smoothing ignores them they will not be taken down, and in this region not taking them down will sound better.

 

Contrast that with the smoothing in the bob more corr image and you will see that the smoothing here captures some of the peaks and sets up Audiolense to remove much more energy in the lower midrange. The degree of detail in the smoothing works both ways, so the detailed correction will also do more lifting of narrow dips in the same region. But peaks are much more audible than dips, and a detailed correction towards your target will do more harm than good to the overall frequency response.

 

The detailed correction will add some 4 dB over half an octave centered around 2 kHz. This is a region where your target and your current speaker have too much energy already. This is almost guaranteed to create more harshness. The relaxed correction “only” adds a couple of dB there. I am not sure those 2 dB are any good either.

 

In general, with the target you’ve designed, the detailed correction will do more harm than the relaxed correction in a couple of frequency regions.

 

The real problem is the target itself.

 

If you wish I can prepare a few different corrections for you based on different targets. But I will need a measurement of your current setup that reflects your new subwoofer placement (and a mic calibration file if you use a mic that needs calibration). Or you could start off by making a target that is ruler straight, with a small downslope between 30 Hz and 8 kHz, and make your own correction. Just make sure that you get that 1.5 kHz peak shaved down and that you get enough energy in the 100 Hz-1 kHz region.

 

 

Kind regards,

 

Bernt

bob target 2.png
bob less corr.png
bob more corr.png

Bob Katz

Jan 27, 2013, 3:09:46 PM1/27/13
to audio...@googlegroups.com
Dear Bernt: I really appreciate your experience with targets, but not to disappoint you: even after reading your reactions to a "concave target", I have a flat target between 24 Hz and 1 kHz, which you had difficulty seeing in the previous graphs I posted.

I still feel strongly that the target is not the cause of the harshness and veiling that I hear. I do not hear an inconsistency in the sound with different sources; they all sound veiled and edgy compared with the relaxed correction. I believe the cause to be phase distortion due to excessive filtering, but I cannot swear to it, since I cannot see anything in the phase graphs of REW that is a smoking gun. So I can only theorize from what I'm hearing and from what I know about minimum-phase filter design and the sound of it.

I realize you tried to figure out my target from a really messed-up series of graphs I sent, so I'm attaching a picture of "Bob's actual target", which, as you can see, IS FLAT! Not concave as you claimed. So I beg you to give up on your theory that my target is causing irregular listening issues with different sources. Let me repeat: I heard the veiling and harshness with the aggressive correction regardless of the musical source and regardless of the target.

Attached is a picture of my target with the filtered measured response of the left front speaker overlaid. You will see that all the points are flat between 24 Hz and 1 kHz. The only reason for the extra point at 20 Hz is that in order to arrive at a flat interpolated curve in the target designer down to 20 Hz, I had to add a second anchor point there. As for the high-frequency response, wanting to emulate the speaker curve as much as possible, I wondered if the inherent measured dip in the speakers at 2 kHz adds some necessary "sweetness" that a simple target curve from 1k to 20k could not emulate, so I'm fiddling with that extra point. So far, the sound of this curve, which is 1.5 dB down at 2k and 7 dB down at 20000*, sounds very nice to me. Apply this identical FLAT (not concave, to my mind) curve to the two different correction methods, and one of them sounds veiled and harsh while the other sounds open and pure, on all sources, not just on some or one. On all, to stress the point.


* Actually 19,965 Hz, an artifact of my originally placing the point with the mouse instead of the spreadsheet.
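For reference, the anchor points described above can be sketched numerically. This is only an illustration: the 19,965 Hz end point comes from the footnote, and piecewise-linear interpolation in log-frequency is my assumption about how the target designer joins the points, not a documented Audiolense behavior.

```python
import math

# (frequency in Hz, target gain in dB), from Bob's description
anchors = [(20, 0.0), (24, 0.0), (1000, 0.0), (2000, -1.5), (19965, -7.0)]

def target_db(freq_hz: float) -> float:
    """Piecewise-linear interpolation of the target in log-frequency (assumed)."""
    for (f1, g1), (f2, g2) in zip(anchors, anchors[1:]):
        if f1 <= freq_hz <= f2:
            t = math.log(freq_hz / f1) / math.log(f2 / f1)
            return g1 + t * (g2 - g1)
    raise ValueError("frequency outside target range")

print(target_db(500))    # flat through the midrange: 0.0 dB
print(target_db(2000))   # -1.5 dB at 2 kHz
```

The flat 24 Hz-1 kHz span is what makes this target "flat, not concave": every interpolated point between those anchors sits at 0 dB.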

I also consider myself an expert on timbre and the effects of tilted frequency responses on listener reaction. I spend much of my day in mastering taking advantage of those same factors, so I need a neutral reproduction system so as not to distort the sound for my clients. As such, I constantly poll my clients for feedback on whether I am adding too much bass, too little bass, treble, etc., and I use that feedback to tweak my system if necessary. So my current analog system and its response are the result of a combination of measurement, critical listening and feedback from clients. As you know, measurement is partly art and partly science, and this choice of a variable measurement window is one step closer to the science, but there is still a lot of art going into it. At this point the art is in the target design more than in the measurement. I appreciate your feedback on the target method, but please keep in mind my unique perspective: I am not just a casual listener, I also produce program material.

So I really don't need a lesson on timbre from you, though I'm glad to hear your insights and experiences. I listen to everything you say, and I keep an open mind and question my own decisions every step of the way.

You may not know it, but I have been a recording and mastering engineer since 1970. From the 80's through the early 2000's I was the audiophile recording engineer for the Chesky Records label. Thus I have a unique perspective: I have as references recordings that I recorded and mastered myself. I know how they sounded at the original session, I know how they sound on many, many reproduction systems throughout the globe, and I know how all my masters translate to many different reproduction systems. They can't all be wrong, and yours can't be the only one right!

My reference for purity of tone here continues to be my analog-filtered reproduction system, which sounds as transparent and pure as the best audiophile systems that I have auditioned in many places. And I use as references the finest recordings, many of which I made myself, using minimalist miking, customized converters and preamps, and an absolute minimum of processing between the artist and the reproduction. I know exactly how any of my recordings sound, their transparency, their purity of tone, their timbre and frequency balance, and I intimately know the sound of the artists and vocalists themselves, in person as well as recorded on the day of the session. I know these recordings so intimately and have heard them in so many places that I can recognize when any reproduction system playing them is accurate, inaccurate, distorted, clean, or veiled. And I apply that unique perspective and years of experience to the audition process.

You should take it as a compliment that I said that the relaxed correction in Audiolense sounds so good and pure that I might even begin to use it in my professional monitoring work! Instead of looking toward the target as the cause, I suggest you critically listen to some high-quality recordings using the relaxed versus aggressive correction and the target of your choice, and give us your honest reaction here.

Check it out, let's continue this conversation!


Bob


On Sunday, January 27, 2013 11:06:37 AM UTC-5, BerntR wrote:

Dear Bob,

 

You gave away your verdict before I managed to post my prediction. Anyway….



snip

Bobs real target.PNG

Bob Katz

Jan 27, 2013, 3:38:53 PM1/27/13
to Audiolense User Forum


On Jan 27, 2:53 pm, "Bernt Ronningsbakk" <bernt.ronningsb...@lyse.net>
wrote:
> Hi Bob,
>
> It seems like I'm able to post illustrations on the forum again.


snip


Dear Bernt: Could be Google Groups, because yes, you can post
illustrations and we can see your posts, but I had to go back to
the old Google Groups or all I got was "loading loading loading"
on the last 10 posts, which would not display.

In regard to your new interpretation of my target, our letters
are crossing, because I posted my actual target here just a post
or two back. It is flat; it is not what you said from reading
those messy graphs. Please look at the image "Bob's real
target.png" and give me your reactions based on that and the
measured response of the speakers. From what I see in the
measured response there is no deficiency between 200 Hz and
1 kHz; if anything there is a rise there, as illustrated in that
picture.

I have vaguely considered the idea of a constant tilt from 20 to 20
kHz as endorsed by Mitch, but right now I'm trying to get a "flat to 1
kHz, rolled off above that" target to sound nice, and I'm real
close....

Next step for me is to see if I can manipulate Audiolense's
three-point correction window, changing the time of
analysis/correction and the frequency points so as to smooth
out the bass response without adding more serious correction
wiggles above, say, 200 Hz, and I'll hopefully report on that.
The only way I can do this is to use Audiolense to measure the
actual corrected (convolved) speaker system with a microphone
and evaluate the results using a wider window!

You have not yet convinced me with your arguments, but I
continue to listen with an open mind. I recommend you do the
same, and that your next step be to actually listen critically
to your loudspeakers with a relaxed correction and your
preferred target, and tell us what you think. I am also
bringing over experts whose ears I trust, and hopefully some
other Audiolense users here will try the relaxed correction and
give their verdicts. But this is not a democracy. If in the end
I still feel that relaxed correction is the solution for me,
then that's what it's going to be. (Sorry.)


Best wishes,


Bob Katz

Bernt Ronningsbakk

Jan 27, 2013, 5:49:33 PM1/27/13
to audio...@googlegroups.com

Dear Bob,

 

Your revised target certainly looks a lot better than the one I was examining. Hopefully we are getting closer to common ground.

 

I still think the two target points just past 1 kHz may emphasize the 1-2 kHz region and add some coloration.

 

I know that you have extensive training in listening for distortions that very few pay attention to. And I fully expect you to be familiar with colorations that I have never even noticed. And I trust that what you report hearing is for real. So I am really debating the causes here. And I know how some of these artifacts look on the charts; the problems you were reporting correlated with the frequency responses. It will be a lot easier for me to really get your message when I see a good-looking correction and you still hear the same issues.

 

My hi-fi computer is in the middle of a hardware upgrade at the moment. I will do a critical listening comparison of more versus less frequency correction as soon as the lid is back on and everything is working. If you have a recording that highlights the problem and you can provide detailed listening instructions, it will be easier for me to recognize what you’re hearing.

 

Please find enclosed a chart comparing phase before and after correction. This is without any windowing, which makes it steep towards the treble. I am not sure it has enough resolution to display what you’re looking for, and there are at least a couple of phase-unwrapping errors there, but this is the best before/after comparison I can do in Audiolense with a frequency correction.

 

I am really glad to hear that you’ve found a frequency correction in Audiolense that works for you. It means a lot to me.

 

Kind regards,

 

Bernt

 

From: audio...@googlegroups.com [mailto:audio...@googlegroups.com] On Behalf Of Bob Katz
Sent: Sunday, January 27, 2013 9:10 PM
To: audio...@googlegroups.com
Subject: Re: [audiolense] Re: Audiolense 4.6 and JRiver MC18---Summary of my testing and debugging so far (long, detailed post)

 

Dear Bernt: I really appreciate your experience with targets, but not to disappoint you: even after reading your reactions to a "concave target", I have a flat target between 24 Hz and 1 kHz, which you had difficulty seeing in the previous graphs I posted.

--

phase bob.png

Mitch Global

Jan 27, 2013, 7:52:05 PM1/27/13
to audio...@googlegroups.com
Re: I have vaguely considered the idea of a constant tilt from 20 to 20 kHz as endorsed by Mitch, but right now I'm trying to get a "flat to 1 kHz, rolled off above that" target to sound nice, and I'm real close....
 
Bob, I changed my mind through more listening tests. I have a similar target that is flat out to 1-2 kHz and then rolls off.
Attached is my target/simulation I am listening to now (with TTD).  Sounds pretty good to me.  Does not sound harsh, even though I have mid/high freq drivers/horns.
 
I tried a frequency-only correction, but for some reason could not get it to work, in the sense that the simulation did not follow the target closely at all. Not sure what I am doing wrong there...
 
As a side note, and it could be because I have an old-school horn-loaded system, but with a 1" compression driver, that 1" wavelength translates into a frequency of 13,386 Hz. I used that as the partial-correction transition frequency, and it has provided the most seamless response in my system to date.
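Mitch's transition-frequency arithmetic is just the frequency whose wavelength equals the driver throat diameter. A quick check (the speed-of-sound value is my assumption; his 13,386 Hz figure implies roughly 340 m/s, while the common 343 m/s would give about 13,504 Hz):

```python
SPEED_OF_SOUND = 340.0   # m/s, assumed from Mitch's figure
INCH = 0.0254            # metres per inch

# Frequency whose wavelength is exactly 1 inch: f = c / wavelength
f = SPEED_OF_SOUND / INCH
print(round(f))          # -> 13386
```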
 
I am using a short TTD window. When I look at my impulse response, I have a few peak reflections off the wall/windows directly behind my speakers that are almost 15 dB down (see attached), but after that, not so much. I figure 11 ms of TTD correction @ 10 Hz should be enough. However, I am more than happy to be corrected if this is wrong thinking...
 
Best regards,
 
Mitch
 
Mitch TTD.JPG
impulse response.JPG
Mitch Stereo.JPG

Bob Katz

Jan 27, 2013, 9:02:58 PM1/27/13
to Audiolense User Forum
Hi, Mitch. I'm back in the old Google Groups! Hope they keep it,
because my browser display is more stable there. Let's see: your
simulated frequency response with TTD shows a reasonable
correction and not overcorrection, so maybe TTD is the way to
go. But I fear Bernt may tear his hair out over my complaints
about TTD relating to the impulse response and preecho, which my
ears are well tuned to.

Your impulse response looks great. Can you please post the simulated
impulse response after correction, though?


Thanks,


Bob

Bob Katz

Jan 27, 2013, 9:23:52 PM1/27/13
to audio...@googlegroups.com
Well, I've progressed mightily this Sunday night and it's 9:00 PM and I'm exhausted. Let me start by saying that the correction that I have gotten is the best sounding room correction I've EVER heard, analog OR digital, in my 43 years of professional listening! Which means it is now one of the best-sounding stereo systems I've ever heard!

Problems and notes.

Attached is the relaxed correction procedure I settled on in order to get a reasonable amplitude excursion in the bass but retain minimum correction in the rest of the frequency range. I would like more control over, and knowledge of, where the window is, but I quibble, because the actual measured (not simulated) results from a full microphone loopback are very nice.

[ Hey, Google, that's not nice, there's no attach button that I can find in the old Google... ok, I'm going to discard and reenter the new Google. Google gives and Google takes away. ]

Notice that I chose a mid-frequency change at 250 Hz. This was all by exhaustive trial and error, going back and forth between correcting with a relaxed filter and analyzing the loopback from the corrected (convolved) result, measured back into Audiolense through a microphone and then analyzed with a standard default window width for frequency correction.

So, if I am to endorse relaxed frequency correction in Audiolense, Bernt is going to have to offer separate measurement and correction window choices for frequency-domain correction. In other words, we want to do a RELAXED correction but a CORRECT ("aggressive") analysis, or you will get the impression that there are fewer positive and negative excursions in the simulation than are actually perceived by the ear.

Another issue with this approach is that if you are doing any partial correction and you switch back and forth between a 5.1 setup (for doing digital crossover design, for example) and a 2.0 setup (for measuring a stereo return into a mike from the full convolved result), you will LOSE the partial correction every time you switch. That goes both ways. So be aware of this, and check your partial-correction checkboxes until you get used to the bug. This isn't exactly a fair bug to report; Bernt never anticipated that a user would switch back and forth between 2.0 and 5.1...

It is very satisfying and wonderful to watch the meters during the test sweep and see the low frequencies being directed to the subwoofers and the rest going to the mains. However, listening to the sweep I heard an echo effect in the lower midrange that was deteriorating the quality of the sweep. I judged it to be due to defects in the convolver. I was listening through VSTHost at the time, to get the sweep output from Audiolense into the live input of the convolver.

Then, when listening to one of my most beloved recordings, Rebecca Pidgeon's "Spanish Harlem", the ambience of the low notes of her vocal was strangely resonant in that same frequency range, around 150 to 200 Hz, and I was certain it was an artifact of the convolver. But it could be the correction; who knows? I tried different modes in ConvolverVST, some of which crashed, but of the ones I could use, "measure" and "patient", neither fixed this annoying artifact. I have to decide tonight or tomorrow, with careful listening, whether I'm willing to put up with this artifact for the pleasure of having a system that's so nicely regulated and so nice-sounding in other ways. I then switched to the convolver in JRiver, and it sounds considerably better, but there is still that residual ringing effect at around 180 Hz that's exposed by Rebecca's voice, which I KNOW 100% sure is not there. (See why it's so useful to have been the recording engineer on a recording and actually know what it's supposed to sound like!) Hey, I wonder if it is the extra window change at 250 Hz that's causing this resonance? Uh oh, I have to listen to that sweep again!
new relaxed correction.PNG

Bill Street

Jan 27, 2013, 9:39:31 PM1/27/13
to audio...@googlegroups.com
I'm nowhere near as experienced as Bob and the other posters here, but based on what experience I do have with Audiolense in a 2.0 setup, I would say the "ringing" effect could be a result of the extremely high mid-frequency setting of 62.5 cycles before and after the peak in the posted pic of the measurement and correction window. I've made hundreds of filters and have never approached a setting that high. I would consider a setting of 20 cycles very high. I have found extreme settings can definitely induce an audible ringing/echo effect.

I've really enjoyed these recent posts. I've found them very educational.

Thanks.

Bob Katz

Jan 27, 2013, 9:45:18 PM1/27/13
to audio...@googlegroups.com
Google Groups is becoming a pain. In the new Groups I get "loading loading loading" on the last 10 or so posts, so I can't view or reply to them....

So I chose a random post I could see and I'm replying from here.
I am sorry to report that adding a mid-frequency correction window, as attached in my last post, creates that unnatural echo effect, which is distinctly audible if you sweep a signal INTO the convolver and listen. This resonance is NOT audible if you do a simple two-frequency correction. This is very disappointing. Sorry, Bernt, you have to look into this. It is NOT due to the convolver as I had first suspected.

This means that I think I cannot do a relaxed correction that gives reasonable bass excursion. Jim Johnston himself, I think, recommends 500 ms starting at 20 Hz, but he's unclear about where, when, or how the transition should occur, only that by 20 kHz it should be "near anechoic". I was hoping that adding the mid frequency to define the turnover point would help, but unfortunately this causes a serious sonic artifact.

I might look into TTD now while we're waiting for Bernt to get off the floor and recover, since Mitch's correction looks good. But I'm not optimistic about it, given my previous negative reactions to its impulse response. And please don't tell me "I have to get used to it"..... If the cause is transient-response degradation due to excess preringing, then a high-frequency boost is a bandaid, not a cure. I went through that while flirting with linear-phase equalizers for several years, so I know exactly what preringing sounds like.
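To illustrate the preringing being described (a sketch only, not Audiolense's actual filters): a linear-phase FIR is symmetric about its main tap, so by construction nearly half its energy arrives before the peak. A minimal windowed-sinc lowpass shows this directly:

```python
import math

fs = 48000.0   # sample rate, assumed for the example
fc = 1000.0    # cutoff frequency in Hz
N = 501        # odd tap count puts the peak at the centre tap
mid = N // 2

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Symmetric (linear-phase) windowed-sinc lowpass with a Hamming window.
h = [sinc(2.0 * fc / fs * (n - mid)) *
     (0.54 - 0.46 * math.cos(2.0 * math.pi * n / (N - 1)))
     for n in range(N)]

peak = max(range(N), key=lambda n: abs(h[n]))
pre = sum(v * v for v in h[:peak])   # energy arriving before the main peak
total = sum(v * v for v in h)
print(f"peak at tap {peak}; pre-peak energy fraction = {pre / total:.2f}")
```

The pre-peak fraction comes out just under 0.5: the filter "rings" symmetrically before and after the impulse. A minimum-phase version of the same magnitude response would concentrate its energy at the start, which is why the preringing complaint is specific to linear-phase processing.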


Brad

Jan 27, 2013, 9:45:52 PM1/27/13
to audio...@googlegroups.com
Bob,

Why do you check "minimum delay crossovers"? You may get better results if you use crossovers that are linear phase (unchecked).

Brad

Bob Katz

Jan 27, 2013, 9:48:18 PM1/27/13
to audio...@googlegroups.com


Hmmm! Because I didn't know it...  I read the manual but forgot that detail. Thanks for the advice, Brad. Yes, linear phase crossovers are what I was looking for but somehow I missed that detail. So much to learn...

Thanks very much,


Bob

Brad

Jan 27, 2013, 9:52:52 PM1/27/13
to audio...@googlegroups.com
Bob,

As you suggest, we are in need of a user's guide on that Wiki page. I'll try to help, but I can't promise too much.

Brad

Bob Katz

Jan 27, 2013, 10:58:47 PM1/27/13
to Audiolense User Forum
Dear Bill:

I may have lots of experience, but not in this area, so I'm
definitely treading new ground and you have more experience
than me! Thanks for pointing out the 62 cycles. What is a
cycle, anyway? The number of samples at a given sample rate? If
so, there would be thousands of them, so now I have no idea
what a cycle is! You may have found the smoking gun that's
causing the echo. The thing is, I went by milliseconds, and I
cannot understand why the 500 ms top line shows only 5 cycles
but the 250 ms line shows 62 cycles. Aren't these supposed to
be equivalent at a given sample rate? Do I trust the
milliseconds or the cycles? What's going on here?

I'd like to repost the image for convenience (it's called "new
relaxed correction.png"), but I'm in the old Google Groups and
there is no post button...


Thanks,


Bob

Mitch Global

Jan 28, 2013, 12:08:37 AM1/28/13
to audio...@googlegroups.com
Hi Bob, Cool! 
 
Attached are a bunch of charts. First, your request for the simulated impulse response (unsmoothed). I tried to use the same horizontal scale, capturing the first 37 ms.
 
I like to zoom in on the first 1 ms of the impulse response to look for any HF ringing or anomalies. Posted are both the raw response and the simulated (unsmoothed) response.
 
Last, and I may have this wrong, but I think Bernt mentioned that looking at the simulated log response under the analysis menu, with "simulation with noise reduction" toggled on, will show more detail if there is preringing. I also look at analysis->measurement->log measurement to see where the start of the vertical impulse spike occurs as a reference, and then compare that to the log simulation.
 
Looking at the log analysis of the measured signal, it seems the knee is around -70 dB. If I look at the simulation, the knee is around -65 dB using TTD. I am using a min-phase measurement and a min-phase target.
 
Perhaps Bernt can comment more on the log capabilities...
 
Regards,
 
Mitch

log measure.JPG
raw imp 1ms.JPG
sim imp 1ms unsmooth.JPG
sim imp unsmooth.JPG
sim log response.JPG

Brad

Jan 28, 2013, 1:32:37 AM1/28/13
to audio...@googlegroups.com
On Sunday, January 27, 2013 9:58:47 PM UTC-6, Bob Katz wrote:

What are cycles anyway? Number of samples
at a given samples per second? If so then they would be thousands of
cycles so now I have no idea what a cycle is!

I'll answer for Bill since I'm here at the moment. A cycle is the number of 360 degree acoustic sinusoidal pressure cycles (waves) at the frequency in question. 
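Brad's definition resolves the cycles-versus-milliseconds puzzle from a few posts back: the window is specified per anchor frequency, so the same number of milliseconds corresponds to very different cycle counts at different frequencies. A quick sketch (anchoring the two window lines at 10 Hz and 250 Hz is my assumption from the screenshot under discussion):

```python
def window_cycles(freq_hz: float, window_ms: float) -> float:
    """Full periods of `freq_hz` that fit inside a window of `window_ms`."""
    return freq_hz * window_ms / 1000.0

print(window_cycles(10, 500))    # 500 ms at 10 Hz  -> 5.0 cycles
print(window_cycles(250, 250))   # 250 ms at 250 Hz -> 62.5 cycles
```

So a cycle is not a sample count at all: both readings in the screenshot are consistent, because the millisecond values are converted to cycles at different frequencies.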

Bernt Ronningsbakk

Jan 28, 2013, 2:52:23 AM1/28/13
to audio...@googlegroups.com

Well said, Bill

 

You never know for sure until you listen, but that midrange frequency window seems excessive even for a pure frequency correction.

 

 

Kind regards,

 

Bernt

 

From: audio...@googlegroups.com [mailto:audio...@googlegroups.com] On Behalf Of Bill Street
Sent: Monday, January 28, 2013 3:40 AM
To: audio...@googlegroups.com
Subject: Re: [audiolense] Re: Audiolense 4.6 and JRiver MC18---Summary of my testing and debugging so far (long, detailed post)

 

I'm nowhere near as experienced as Bob and the other posters here, but based on what experience I do have with Audiolense in a 2.0 setup, I would say the "ringing" effect could be a result of the extremely high mid-frequency setting of 62.5 cycles before and after the peak in the posted pic of the measurement and correction window. I've made hundreds of filters and have never approached a setting that high. I would consider a setting of 20 cycles very high. I have found extreme settings can definitely induce an audible ringing/echo effect.

Bernt Ronningsbakk

Jan 28, 2013, 4:27:21 AM1/28/13
to audio...@googlegroups.com

Bob,

 

Keep this sensation and experience for further reference.

 

Well, I've progressed mightily this Sunday night and it's 9:00 PM and I'm exhausted. Let me start by saying that the correction that I have gotten is the best sounding room correction I've EVER heard, analog OR digital, in my 43 years of professional listening! Which means it is now one of the best-sounding stereo systems I've ever heard!

 

In spite of the pre-ringing you reported shortly after, this is a big step in the right direction. You’ve just heard some of what a good frequency correction can do for the sound quality. And you’ve also heard some of the benefits that sometimes (but not always) can be had with a detailed and forceful correction through the lower midrange. I fully expect that you will soon have a correction filter with all the benefits you experienced and none of the problems.

 

Allowing 62.5 cycles at any frequency is asking for insane levels of correction.  You were also doing it in the most troubled frequency region in your room, which means that the filter will find something to do for a very long time.

 

I suggest that you make a minimum delay crossover filter with 8-8-3 cycles window (the second 8 at 250 Hz), 65536 length filter, and  allow for 15dB of correction boost. Stick to that for a while and focus on making the best sounding target you’re capable of. The only thing that can cause problems here is the crossover settings. I can help you with that.

 

After you have made an outstanding sounding frequency correction with this procedure we can direct our attention to window setting, degree of correction and our side bet. But let’s harvest the low hanging & best tasting fruits first:

 

Target is THE key!

 

PS: If you get tired of google groups problems you can sign up for getting all posts directly in your mail box.

 

Kind regards,

 

Bernt

Bob Katz

unread,
Jan 28, 2013, 6:26:26 AM1/28/13
to Audiolense User Forum
Mitch, your TTD impulse graphs look much cleaner in terms of
preringing than mine did when I tested TTD. Do you have any preringing
control engaged?

Thanks,


Bob

Bob Katz

unread,
Jan 28, 2013, 6:43:04 AM1/28/13
to Audiolense User Forum
Bernt:

Yes, we're getting there. But the side bet is still a
possibility :-). I still wager that the harshness I heard was due to
excessive correction, not to frequency response anomalies. We'll get
to that after I reach a target I'm fully happy with in "relaxed
correction" mode and then we'll have a shootout and an analysis by
Bernt.

Anyway,

On Jan 28, 4:27 am, "Bernt Ronningsbakk" <bernt.ronningsb...@lyse.net>
wrote:
> Bob,
>
> Keep this sensation and experience for further reference.
>


> Allowing 62.5 cycles at any frequency is asking for insane levels of
> correction.  You were also doing it in the most troubled frequency region in
> your room, which means that the filter will find something to do for a very
> long time.
I swear there was nothing in the manual about these "cycles," and I had no idea that I also had to enter a number of cycles as well as milliseconds in the window setting. I'm completely puzzled as to the rules and exceptions of this part of the program!

>
> I suggest that you make a minimum delay crossover filter with 8-8-3 cycles
> window (the second 8 at 250 Hz), 65536 length filter, and  allow for 15dB of
> correction boost. Stick to that for a while and focus on making the best
> sounding target you're capable of. The only thing that can cause problems
> here is the crossover settings. I can help you with that.

Thanks for the numbers. By minimum delay I assume you meant the
frequency correction algorithm (as opposed to the TTD algorithm)? I am
always concerned about latency and for the time being I'm tolerating
it, but it is the last issue I need to conquer. I have no idea how you
arrived at the need to increase the filter length to 65536, nor the
consequences of shortening the filter length on Audiolense's
operation! As you say, though, let's harvest the low hanging fruit
first, as I too am working in priority order. Getting a good-sounding
correction is my first priority, of course.

I have no idea how you determined the number of cycles and I need a
graduate level course on that, so by rote I'll plug them into the
settings this morning and listen! Please give us a tutorial. The need
for an FAQ is rapidly arriving, and if we don't get at least 2
volunteers in addition to me I cannot devote any free time to it. I
suggest, Bernt, that you look for a smart intern from the University
to come to your place, who, in exchange for free lessons on FFT-based
coding, will start to compile an FAQ and Wiki for you. The more volume
of posts here on Google that don't get compiled, the harder it will be
to do that FAQ, so I suggest you start on that right away.

Best wishes to you! As for the Google groups, disorganized and
unthreaded emails are a pain to me. But maybe my Thunderbird can
handle that. We shall see! If I can upload images to Google groups
through my Thunderbird, then that's the solution to this Google groups
issue on browsers.


Take care, wish me luck this morning. I'm really hoping that I can get
Audiolense functional enough for me to enjoy the fruits of its sound
and actually work with it, so I don't have to relocate my woofers once
again :-).



Bob

Bill Street

unread,
Jan 28, 2013, 8:59:49 AM1/28/13
to audio...@googlegroups.com
Hi Bob,

My understanding has always been that the cycles and ms show the same thing: the length of the window being used for the corrections. I always thought that the difference between the bands (bass, mids if used, and highs) was based on the length of a cycle at those different frequencies in ms (one bass cycle takes much longer to complete than one mid cycle, which takes much longer than one high-frequency cycle). That's why, if we manually change any of the frequencies themselves, the ms value will change for the same number of cycles. There's also a check box where you can have the number of cycles/window length shown in meters instead of ms (top right, just above the Measurement and Correction Window).

When I enter my values for the windows, I always enter cycles. As an aside, I always assumed it was best to use full cycles for the window size. I've noticed some screenshots being posted where the cycles are .104 or 3.279, just as examples. I'm assuming those values are the result of people entering their values before/after peak in the ms windows, so the cycles are being calculated from the ms entered; in my case, I enter the number of cycles and Audiolense then calculates the ms from that. I don't understand the logic behind having Audiolense base its calculations on fractions of a cycle, unless this is intentional for some reason I'm unaware of.

I'm trying to explain the above in as much detail as possible, which makes it appear overly basic I'm sure, with the intent of Bernt hopefully reading it and clarifying my explanation if it's incorrect. This has really never come up for discussion. If I've been looking at this wrong all this time, it will be good to get it corrected.

Bill

Alan Jordan

unread,
Jan 28, 2013, 9:09:20 AM1/28/13
to audio...@googlegroups.com
People like me, who studied the liberal arts, need overly basic explanations of things math and science.  My ability to make optimal filters only goes as far as the help files go, which unfortunately isn't very far.  I hope that Bernt will eventually update the help files based on the needs shown in these discussions.

Alan

Bill Street

unread,
Jan 28, 2013, 9:40:18 AM1/28/13
to audio...@googlegroups.com
To add to my post above: the numbers we're entering in the Measurement and Correction window represent the length, whether in ms or cycles, before and after the original measured impulse response peak, that Audiolense will use to perform its corrections. If larger values are entered (more cycles or ms), then Audiolense is trying to correct more of that impulse, which really shows the speaker and room response together. Higher values mean Audiolense will likely be getting into the part of the measurement that represents room reflections, while smaller values tend to confine the correction more to direct speaker output before reflections. In Bob's case of 62.5 cycles of correction in the mids, that's a lot of correction in that range for Audiolense to try to do, compared with the much smaller portion of the impulse being corrected in the bass and highs (based on the values entered for cycles/ms).

Again maybe very basic, but as Alan pointed out, it may be good. Hopefully Bernt reads these posts and can clarify.

Bill

--
--
Audiolense User Forum.
http://groups.google.com/group/audiolense?hl=en
To post to this group, send email to audio...@googlegroups.com
To unsubscribe, send email to audiolense+...@googlegroups.com
 
---
You received this message because you are subscribed to the Google Groups "Audiolense User Forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to audiolense+...@googlegroups.com.

For more options, visit https://groups.google.com/groups/opt_out.
 
 

Erik

unread,
Jan 28, 2013, 10:02:58 AM1/28/13
to audio...@googlegroups.com
Hi Bob (and everyone else),

I have also been following this thread with great interest, and it would be awesome to have an FAQ, or better yet a Wiki, containing both shallow-level knowledge/instructions (i.e. a tutorial) and deep-level knowledge (i.e. tweaking the last bit, helping Bernt develop new features, and so on). What is it you need from the two volunteers? 1. Pure administrative help (like drawing up the framework for a wiki and then helping to add content)? 2. Searching for knowledge in the existing Google group? 3. Adding their own knowledge (about Audiolense and/or concepts of acoustics/measuring/digital filters/and so on)? I don't have much knowledge about this, but I might be able to help with 1. I'm studying to become an engineer in computer science, so I should be able to build a wiki (although I'll have to do some reading to get started). I would have loved free lessons from Bernt in exchange for some administrative work - but sadly I reside in Stockholm. Regarding 2, I don't have enough knowledge to filter the information - basically I don't know what I'm looking for. And that pretty much concludes point 3 for me as well :)

Is this the way to build a Wiki though? Isn't the best way to "just have someone" draw up the framework and then everybody can add content (and corrections if needed). Then we'll change the framework as needed? I have a very basic understanding for what I'm doing with Audiolense so there are many aspects that I don't understand (measuring window for instance). I have read some math by now and even a little physics so perhaps that's a good foundation for learning?

What I was thinking was that maybe I could start a new thread, a "Please help me get started Bernt!"-thread :) Actually I've been playing around for about two months and I believe I have pretty good sound by now (I'm gonna try the relaxed filter). But I could start from scratch and then have Bernt help me as we go (and everyone else of course, but Bernt will be regarded as GOD! haha). I (and everybody else) can ask all the stupid questions for a basic understanding first (for newbies like me) and then hopefully get to a deeper understanding and ask better questions (for all advanced users). At the end, or as we go, I/we can make this into a tutorial and adding sections to the wiki and so on. Would this be a good idea? Would it be a problem that I'm not using Audiolense as a cross-over on my primary setup, only as room-correction (maybe that's a good start though?)? And finally the most important question, is Bernt up for it? :)

What do you all think?

lasker 98: Basic is good and good explanation :)

/Erik

Bob Katz

unread,
Jan 28, 2013, 10:47:12 AM1/28/13
to audio...@googlegroups.com
test post

--
Bob Katz 407-831-0233 DIGITAL DOMAIN | "There are two kinds of fools,
Recording, Mastering, Manufacturing  | One says-this is old and therefore good.
Author: Mastering Audio              | The other says-this is new and therefore
Digital Domain Website               | better."

No trees were killed in the sending of this message. However a large number
of electrons were terribly inconvenienced.
No more Plaxo, Linked-In, or any of the other time-suckers. Please contact me by regular email. Yes, we have a facebook page and a You-Tube site!

Mitch Barnett

unread,
Jan 28, 2013, 10:51:38 AM1/28/13
to audio...@googlegroups.com
Bob, no preringing control engaged.  Regards, Mitch

From: Bob Katz
Sent: ‎2013-‎01-‎28 3:26 AM
To: Audiolense User Forum
Subject: [audiolense] Re: Audiolense 4.6 and JRiver MC18---Summary of mytesting and debugging so far (long, detailed post)



Bob Katz

unread,
Jan 28, 2013, 11:00:38 AM1/28/13
to audio...@googlegroups.com
Thanks Bill, Brad and Bernt.

I'll follow Bernt's recommendations on 8-8-3 cycles by rote and see if I get a good sounding result, including measuring and verifying by a loopback-microphone measurement. These are acoustical cycles, not related to number of samples per second, right?

I need to see whether the milliseconds and cycles are linked (which they would be, by the frequency chosen). In that case, how can I adequately control the measurement accuracy? According to authorities whose names I trust, we need 500 ms at the low end, and so I try to "cross it over" to, say, 250 ms at 250 Hz; hence (I think) the large number of cycles.
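If the milliseconds and cycles really are linked by frequency, the arithmetic behind my settings would simply be cycles = ms * frequency / 1000. A back-of-the-envelope sketch (my own arithmetic, not anything from the Audiolense manual):

```python
def cycles_in_window(window_ms, freq_hz):
    """Number of acoustic cycles of freq_hz that fit in a window of window_ms."""
    return window_ms * freq_hz / 1000.0

# A 250 ms window at 250 Hz is 62.5 cycles -- which would explain
# where my (apparently excessive) mid setting came from.
print(cycles_in_window(250, 250))  # 62.5

# A 500 ms window at 20 Hz is only 10 cycles.
print(cycles_in_window(500, 20))   # 10.0
```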

I'll see how it behaves in a moment in the app, but you can see that's what's puzzling me now.


Bob

Bob Katz

unread,
Jan 28, 2013, 11:42:37 AM1/28/13
to audio...@googlegroups.com
Dear Erik:

I started a separate thread on a wiki/FAQ, but ironically I'm on Thunderbird right now (it's more stable than the Google Groups browser interface), so I can't see the thread directly.

Anyway, thanks for possibly offering to volunteer. I really don't have that much math, either, but I do know my logarithms  :-). Seriously, you are correct that we really need a "wiki" template and then everyone can add, modify and contribute to it, in an "open source" manner. That's why I tried to start one using a kind of "project wiki" template at Google sites:

https://sites.google.com/site/audiolensewiki/

Write me privately at bobkatz24bit[at sign]gmail.com and I'll share you as an editor of the site. Please take a look at the site, and see if you think the structure would be useful as a Wiki. Something tells me it is not. So we have to find a generic wiki template and see if it can be imported into Google sites or something similar.

Take care,



Bob

Walter_TheLion

unread,
Jan 28, 2013, 1:55:53 PM1/28/13
to audio...@googlegroups.com, bob...@digido.com
Bob,

it is good to see that you are making progress. 

Cycles are used for frequency-dependent windowing. Therefore, if you use a constant correction window (like 5 cycles @ 10 Hz and 5 cycles @ 24 kHz), the "amount of smoothing" will be about equal in all frequency bands. Mathematically it is a simple relationship: 1000 ms times the correction window in cycles, divided by frequency.

Therefore, setting 8 cycles at 10 Hz results in a 1000*8/10 = 800 ms correction window; 8 cycles at 24 kHz gives 1000*8/24000 = 0.333 ms.
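In code form, the relationship is just this (nothing Audiolense-specific, only the formula above):

```python
def window_ms(cycles, freq_hz):
    """Window length in ms: 1000 ms times cycles, divided by frequency."""
    return 1000.0 * cycles / freq_hz

print(window_ms(8, 10))               # 800.0 (ms)
print(round(window_ms(8, 24000), 3))  # 0.333 (ms)
```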

I am still not sure what you mean by "relaxed correction"?
Using correction windows at about 5 cycles is far from "relaxed correction" - as in less filter activity. Something like 2-3 cycles fits that bill. Using 5-8 cycles is quite heavy correction in my opinion - lots of narrow-bandwidth corrections.

I hope this helps.

Kind regards
Walter

Bernt Ronningsbakk

unread,
Jan 28, 2013, 4:01:29 PM1/28/13
to audio...@googlegroups.com

You explain it very well, Bill.

 

Thank you

 

Bernt

 

From: audio...@googlegroups.com [mailto:audio...@googlegroups.com] On Behalf Of Bill Street


Sent: Monday, January 28, 2013 3:00 PM
To: audio...@googlegroups.com

Bernt Ronningsbakk

unread,
Jan 28, 2013, 5:51:50 PM1/28/13
to audio...@googlegroups.com

The cycles are acoustical and independent of sample rate and samples.

 

Kind regards,

 

Bernt

 

From: audio...@googlegroups.com [mailto:audio...@googlegroups.com] On Behalf Of Bob Katz
Sent: Monday, January 28, 2013 5:01 PM
To: audio...@googlegroups.com
Subject: Re: [audiolense] Re: Audiolense 4.6 and JRiver MC18---Summary of my testing and debugging so far (long, detailed post)

 

Thanks Bill, Brad and Bernt.

Bob Katz

unread,
Jan 29, 2013, 8:11:36 AM1/29/13
to audio...@googlegroups.com
Guys, after a full day yesterday, I have arrived at a relaxed correction and a target design that sounds beautiful, accurate, seductive, marvelous. There is no harshness, the sound is transparent and pure. It is so nice that I listened for hours to large parts of my collection, and marvelled at the sound. I can keep the target at this setting for a while and produce very good sounding masters, I'm certain. As time goes on, I'm going to play with the hinge point between the flat section (currently below 1 kHz) and the hinged section (a diagonal line between 1 kHz and 20 kHz). A 1 kHz hinge point perhaps sounds a bit too "sweet" and may be producing a tiny subjective depression in the 2 kHz range. So I may fiddle with moving the hinge point to 1.2, 1.3 kHz, or even up to 2 kHz to see. I think the 20 kHz point is working well, and that is, in my system, -6 dB at 20k relative to 1k with as perfect a straight line as I can create between the hinge point and 20 kHz.
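For anyone who wants to reproduce the shape I'm describing: the target is flat up to the hinge point, then a straight line on a log-frequency axis down to -6 dB at 20 kHz. Here is a small sketch of that idea (my own illustration of the geometry, not Audiolense's actual interpolation code):

```python
import math

def target_db(freq_hz, hinge_hz=1000.0, top_hz=20000.0, drop_db=6.0):
    """Flat at 0 dB up to hinge_hz, then a straight line on a
    log-frequency axis reaching -drop_db at top_hz."""
    if freq_hz <= hinge_hz:
        return 0.0
    frac = math.log(freq_hz / hinge_hz) / math.log(top_hz / hinge_hz)
    return -drop_db * frac

print(target_db(500))    # 0.0  (the flat section)
print(target_db(20000))  # -6.0 (the 20 kHz point)
```

Moving the hinge point to 1.2 or 2 kHz is then just a change of hinge_hz; the slope above the hinge follows automatically.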



On 1/28/13 1:55 PM, Walter_TheLion wrote:

> I am still not sure what you mean by "relaxed correction"?
Dear Walter:

I moved the topic away from Audiolense Wiki. I'm posting from my Thunderbird, so it will be interesting if Google Groups puts this reply into the right thread.

Thanks for that formula and help. It should go into an FAQ!

What I mean by "relaxed correction" is as if Audiolense had a built-in "tolerance" level: it would tolerate, say, plus or minus 2 dB in the end result rather than forcing so much correction that it ends up within, say, plus or minus 1 dB. Since Audiolense does not work that way (it analyzes in the time domain, which is a good thing), changing the window size and putting a midrange hinge point in the window settings amounts to creating the desired amplitude tolerance in the end. That is, assuming you subscribe to my notion that overcorrection results in a veiling and harshness to the sound.

The current weakness of the frequency method is that it does not have separate analysis and correction windows, as TTD does. So the simulated display ends up looking overcorrected with a "relaxed" analysis window. "Relaxed" is probably not the right word to apply to the analysis window, so let's say "a psychoacoustically-correct analysis window". I believe Audiolense's frequency-correction default is just about perfect as an analysis window, but not as a correction window. My one quibble is at the high-frequency end, where I find that 0.3 ms or slightly above is better than 0.2 ms or below, as it avoids some anomalies in the analysis/correction around 20 kHz, averaging out some tiny deficiencies in the speaker, I imagine.

Hope this helps! Many many thanks to all of you for your help.


Stand by for the fireworks,


Bob



Alan Jordan

unread,
Jan 29, 2013, 9:17:01 AM1/29/13
to audio...@googlegroups.com
Hi Bob,

Would you be so kind as to share the particulars of your target and correction designer parameters?

Thank you,
Alan

Walter_TheLion

unread,
Jan 29, 2013, 2:25:39 PM1/29/13
to audio...@googlegroups.com, bob...@digido.com
Hi Bob,

I would be very interested in the parameters you have used for this result. As I said before: I consider something like 2 cycles across the freq. spectrum a "relaxed correction". Have you done some more experimenting with TTDC versus freq. only correction? Thanks!

Best regards
Walter 

Bob Katz

unread,
Jan 29, 2013, 3:54:26 PM1/29/13
to audio...@googlegroups.com
Here we go, another long post! And answers to you all.

First of all, I'm very excited: once again I have to say I've arrived at a very seductive, musical and accurate (all at the same time) result with Audiolense. I hope Google doesn't choke on all the attachments I'm about to put on this email from my Thunderbird. It's much easier to work in my desktop email client, where I can see the threads and find unread emails much more easily than in the browser.

I know that "seductive and musical" are suspicious adjectives and potentially contradictory to "accurate". The seductive part is the purity of tone, the transparency, the separation and the depth, which I was missing with the "aggressive correction" in the default frequency domain settings of Audiolense. There are those of you who will attribute that to my imagination, or to the subjective coloration of a less-flat presentation, with more excursions in the amplitude domain. There are those who say, "you've never heard a flat system", but I trust my ears and what they tell me is that the artifacts are now gone.

I listened for hours and hours just for listening pleasure yesterday. I rarely make time to do that, but the sound of ALL my great reference recordings sounds really really good. The brightest ones are just on the edge of too bright and the dullest ones just on the edge of too dull. The bottom end is tight and extended and makes you jump off your feet with a good reference recording. I'm thrilled. Not to mention the depth, soundstage, purity of tone, and overall naturalness of the timbre.

So, controversy aside, here's my story:

Let me start with the target, and that's what I'll talk about in this post:

TARGET

What a pain in the ass to manipulate when you want to be perfect and quibble over a 0.25 dB aberration, which can be very important when you have a tilted line over many octaves. I agree 100% with Bernt that a 0.25 dB slope in the wrong place can ruin your day. As a mastering engineer I know what adding 0.1 dB at 10 kHz achieves (or does not achieve); I work with these effects every day. So it is VERY important for me to have an accurate system to make judgments on!

The positions of the squares AND the interpolated curve in the low resolution display of the target designer do NOT tell the whole story, not in the least. If your goal (with frequency domain correction) is to have a ruler flat target from your low frequency limit up to 1 kHz (or whatever hinge point you choose, usually up to a maximum of 2 kHz), and then a pure straight diagonal line tilt from 1 kHz to your selected amplitude at 20 kHz, then achieving that using the present state of Audiolense is like pulling teeth. It is very very easy to end up with the concave shape that is a coloration, or a tiny 0.25 dB aberration from ideal at 10 kHz that will tend to emphasize sibilance (by creating a rise  across many octaves) and drive your ears up a wall.

Then if you (like me) construct targets to accommodate your measurements at the double sample rates, and want to use the supersonic response of your loudspeakers, then you absolutely have to construct different targets for the different sample rates. I did find, for the current extension of my speakers and microphone, that I could construct one target for 88 kHz and upward, which was a great relief. The reason you have to have different targets is that if you turn off partial correction, you can end up with a severe correction boost at the supersonic end as Audiolense tries to overcompensate for the reduced HF response of your loudspeakers. Perhaps I can go back and try reenabling partial correction for the 44.1 kHz sampling case, and use just one target set for 88 kHz and above. But then I would have to make three different correction procedures instead of three different targets (44.1, 48, and 88+), so it's probably six of one and half a dozen of the other. The partial correction algorithm also makes it a little difficult to make a good splice, so for now I prefer full correction with a good target to take care of things. After all, at the low end below your cutoff (nominally 30 or sometimes 20 Hz with good subs), extra correction boost can be managed with a good target shape, so why not at the supersonic end as well?

Watching the correction curve in the main window is an absolute requirement to getting your target right and not overstraining your system or ending up with too much attenuation.

Speaking of too much correction, Bernt's recommendation of 15 dB max boost did not do me any good; all it did was cause more and more attenuation. Sometimes it permitted filling in a hole that should really be fixed by acoustic techniques, and that scared me. For me, 6 dB max boost was more than enough. I'm not exactly sure what the two bass and treble checkboxes do, nor the boost choice. Maybe I'm missing something obvious in the simulated display, but in most cases I just found they affected the amount of overall attenuation, so I played with the three of them until I got the least attenuation and then left them alone.

Target images: the first was my first attempt to make a pure diagonal target with only two points. It's called "10 dB from 1K to 20k in main display", which is the result of "but only 6 dB from 1k to 20k in target designer". So you absolutely have to view the target result in the simulation to see what god hath wrought!

So in order to get that ruler-flat line from 20 to 1k and a ruler flat diagonal from 1k to 20k, you definitely need more points on the target line than just 3.

Attached is an image "Target not flat issues" to illustrate the problem at the low end that results without having additional anchor points at the midband.

In fact, you will need several points at the low end to anchor the left side of that target to the flat line. You will need at least 3 points near the hinge point to anchor both sides of the hinge point without altering either one. And you will need several points at the supersonic point to anchor the right hand side of the diagonal before you begin to alter it. And to evaluate the flatness of these lines you will have to save and leave the target designer. To combat the bug I then recommend you "load" the changed target but that might not be necessary. Then run the correction and inspect the resulting target line (which when the simulation is nicely superimposed it will straddle that line very nicely!). Zoom in on the flat line with less than 1 dB/step resolution in the low, mid and high frequency range, and play with those target dots until that line is straight and has no kinks in it. The example images that I've attached should give you a guide. It took me hours to eliminate any kinks or curves in the sections that are supposed to be straight, and then plug in those settings into my three targets for each sample rate. First I listened to the 44.1 kHz target to get it right, and matched the settings for the other two targets, but this was very difficult because the 20 kHz point has to be slightly different for the double sample rates in order to both not get overcorrection in the supersonic zone AND to have the target end up at the correct point at 20 kHz. It's an art and a science, folks. But work at it.

See the attached image "the problem of the 3 point window.png", which is an actual loopback at 44.1 kHz of my first attempt: the convolved, crossed-over system returning into Audiolense for a remeasurement. This is not a final result! Notice the spike at 20 kHz, because I had not yet dealt with the target above 20 kHz. We have to go through hell to fix that (with no partial correction) AND still yield a perfect diagonal-line high-frequency tilt.

To fix that I ended up with the attached three target settings: "actual target points @ 44.1", "actual target points at 48", and "actual target points at 96" (which also works for 88). Notice that to get the double-sampled target working right I had to add a point at 17 kHz, which I got right only by trial and error and extreme zooming on the target in the main screen, not in the target designer. Also notice, as described above, the points at 900 Hz, 100 Hz and 24 Hz that keep the line straight; otherwise it will stray when you view it zoomed in on the main page.

TARGET TILTING?

Now what do you do if you want to make your target 0.1 dB brighter, or whatever? The points you have at 2 kHz and near 20 kHz to keep the target diagonal perfectly straight have to be manipulated differently, in proportion to the slope of the line! This is a bitch, and I'm already dreading what has to happen if I tweak my HF target, which I'm already thinking of doing by moving the hinge point a little upward from 1 kHz. It would create a double-hinged diagonal line if I don't move the other points accordingly, damn...

My suggestion for dealing with this "tilting" issue, and with all the extra points at the top end, is to add a new tilt function in the target designer. This function would define a "hinge frequency", keep everything at and below it steady, and tilt (or rotate) the points above the hinge point. That would be terrific. I think it would work out; at least conceptually it works.
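
Conceptually, such a tilt is just a rotation of the high-side points around the hinge on a log-frequency axis. A hedged Python sketch of the idea (function name and point format are my own, not Audiolense internals): a point at the top frequency moves by the full tilt amount, and points in between move proportionally to their log-frequency distance from the hinge.

```python
import math

def tilt_above_hinge(points, hinge_hz, delta_db_at_top, top_hz=20000.0):
    """Rotate target (freq, dB) points around hinge_hz: a point at
    top_hz moves by delta_db_at_top dB, intermediate points move in
    proportion to their log-frequency distance from the hinge.
    Points at or below the hinge are untouched."""
    span = math.log2(top_hz / hinge_hz)   # octaves from hinge to top
    tilted = []
    for f, db in points:
        if f <= hinge_hz:
            tilted.append((f, db))
        else:
            frac = math.log2(f / hinge_hz) / span
            tilted.append((f, db + delta_db_at_top * frac))
    return tilted
```

With something like this built in, making the target 0.1 dB brighter would be one operation with `delta_db_at_top=+0.1`, instead of hand-editing every point above 1 kHz.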

By the way, I have tried the tilt check box and it doesn't seem to make a difference: clicking on the arrows tilts the target regardless of the state of the check box.


In part two I'll get into the nitty gritty of the measured freq. response, the loopback measurement and the correction procedure.


Until then,


Bob
10 dB from 1k to 20k in main display.PNG
but only 6 dB from 1k to 20k in target designer.PNG
the problem of the 3 point window.PNG
Actual target points @ 441.PNG
Actual target points @ 48.PNG
Actual target points @ 96.PNG
Target not flat issues.PNG

Bob Katz

unread,
Jan 29, 2013, 4:52:49 PM1/29/13
to audio...@googlegroups.com
Lastly, for a production engineer, we have the issues of inconsistent attenuation and inconsistent frequency response among the four major sample rates. Let me explain. Generally I master with my system at 96 kHz and make careful judgments. At the end I sample rate convert to 44.1 kHz using what I consider to be the world's best-sounding sample rate converter: Weiss Saracon. When I switch and compare the source with the master, I expect the result not to be influenced by minute variations in the room correction system's curves or attenuation.

As noted in part 1, it is very, very difficult to get a consistent target shape at the different sample rates if you want a target that allows supersonic response at the double rates (and I do). I think I have one that's within 0.1 or 0.2 dB of the same level at 10 kHz at all rates, but I'm not sure, and it will always bug me in the back of my mind. So I may redo those targets some day "in my copious free time". The other issue is inconsistent attenuation. There has to be a way in Audiolense for the professional to manually offset the attenuation until the flat part of the target line arrives at the same level for all the measured sample rates.

Because when I compare two supposedly identical presentations, they must be presented at the same loudness, within 0.2 dB or less, or the louder will sound subtly superior. It's a fact of life, and the origin of the loudness war. The first time around I measured the 500 Hz point on the target line to be:

-18.8 dB at 96 kHz
-18.7 dB at 88 kHz
-18.25 dB at 48 kHz
-18.1 dB at 44.1 kHz

This difference is going to influence my listening, and I hope that Bernt will add some kind of offset control or show me a trick to get all the attenuations to be the same.
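
For the record, the extra attenuation each rate would need to match the most-attenuated one is simple arithmetic. A small Python sketch using my measured 500 Hz values (aligning to the lowest level means the offsets only ever reduce gain, so they can never push the output into overload):

```python
# Measured target-line level at 500 Hz for each sample rate (dB).
measured = {"44.1k": -18.1, "48k": -18.25, "88k": -18.7, "96k": -18.8}

# Align everything to the most-attenuated rate (96 kHz, -18.8 dB).
reference = min(measured.values())
offsets = {rate: round(reference - db, 2) for rate, db in measured.items()}

# Linear gain multipliers, if a convolver takes factors instead of dB.
factors = {rate: 10 ** (off / 20) for rate, off in offsets.items()}

for rate in measured:
    print(f"{rate}: apply {offsets[rate]:+.2f} dB  (x{factors[rate]:.4f})")
```

So the 44.1 kHz filter would need an extra -0.7 dB, 48 kHz -0.55 dB, and 88 kHz -0.1 dB to line up with 96 kHz. That is exactly the kind of offset control I am asking for.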

The next issue is SPL calibration. I require a constant output attenuation setting so I can set my SPL calibration to produce 83 dB SPL per channel, slow, C-weighted, at the 0 dB setting of my Avocet monitor controller. Then I can master to a consistent goal. With these different attenuations it will always be a compromise, though within less than a dB. But I can hear it, and I need to find a way to do better. So, Bernt, please find a way to tweak the attenuation. It can be an advanced setting, with a warning that only engineers who can examine the peak levels of the digital output to confirm it is not overloading should "fool with the attenuation".

Maybe I could go into the cfg files and manually make a change, but I don't want to be a programmer. It would be like going back to compiling assembler all over again, and an ergonomic mess when trying different targets, different rates, etc.


----

AUTOMATIC SAMPLE RATE SWITCHING:

The next issue is the naming of output files. I finally discovered why JRiver was not switching filters for the different sample rates. Simple: the cfg files coming out were not named correctly. Somehow they were being named "name of file 5.1 Bob_441.cfg" instead of "name of file 5.1_441.cfg". I haven't been able to trace that down. Bob wins again :-). But it's easy to rename the cfg files in the output folder (the one that contains the cfg and the wavs). The other thing is to ensure that the output filename is always the same when you save the filter; I had been in the habit of inserting the sample rate in the name of each filter file, which of course changed the name for each file and also broke the automatic sample rate switching. I wouldn't put this high on Bernt's list, but it is something that could be addressed some day. Once I figured out the issue, I was able to solve it.
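
Until the naming is fixed at the source, the stray suffix can be stripped in one pass. A hedged Python sketch (the " Bob" suffix and the rename pattern are assumptions based on my case; check what your own output files actually look like before running anything like this):

```python
import os
import re

# Matches "<base> Bob_<rate>.cfg" and captures the parts we keep.
PATTERN = re.compile(r"^(?P<base>.+) Bob(?P<tail>_\d+\.cfg)$")

def fixed_name(name):
    """Return the corrected cfg filename, or None if no fix is needed."""
    m = PATTERN.match(name)
    return m.group("base") + m.group("tail") if m else None

def rename_cfgs(folder):
    """Rename every mis-named cfg in the Audiolense output folder
    (the one that contains the cfg and the wavs)."""
    for name in os.listdir(folder):
        new = fixed_name(name)
        if new:
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, new))
```

This turns "name of file 5.1 Bob_441.cfg" into "name of file 5.1_441.cfg" and leaves correctly named files alone, so JRiver's automatic sample rate switching can find them.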

Gosh, I think that's enough for today. I have to get back to work and stop having so much fun! Isn't tweaking great?


Best wishes,

Bernt Ronningsbakk

unread,
Jan 29, 2013, 6:41:36 PM1/29/13
to audio...@googlegroups.com

Congratulations Bob!

 

I am happy for you and the sound quality you've achieved with Audiolense this far. You have made an impressive effort with Audiolense these last few days, so you've certainly earned every inch of progress you've achieved.

 

It makes me proud to know that Audiolense will play a part in your monitoring system from now on. It feels like a milestone.

 

I wish to pay my tribute to the regular contributors on this forum. The competence you guys share with me personally and with the forum is first-class material. It makes a big difference to the continued development of Audiolense and to getting new users off to a good start. And the experience you guys share with other music lovers outside this forum widens the acceptance of Audiolense. There is no question about it. If new businesses were valued by the quality of their client base, I would be a wealthy man by now. And with Bob's entry the client base has gotten even better. He and many others would never have found their way over here without your contribution. You guys are the best.

 

Too bad it’s only Tuesday. I feel like celebrating.

 

Kind regards,

 

Bernt

--

--
Audiolense User Forum.
http://groups.google.com/group/audiolense?hl=en
To post to this group, send email to audio...@googlegroups.com
To unsubscribe, send email to audiolense+...@googlegroups.com
 
