Dear Bob,

Everybody who takes on Audiolense has a steep learning curve for the first few weeks. A lot of the issues you bring up will disappear if you stick with it and get more used to the workflow in Audiolense. Also, people who have a lot of experience with traditional EQ, and a lot of vested knowledge in that direction, tend to struggle to unlearn what they need to unlearn to understand the pros and cons of Audiolense. A lot of things that are true for EQ aren't true for FIR-based correction done right. It is just too bad that I can't drop by your studio, do some filter tweaking hands on and prove my point. I could make a filter that responded the same as your analog solution with regard to the artifacts you were hearing. And I could make a filter that attenuates those "problem" frequencies a tad compared to your default settings. Then you would start to hear how the frequency correction made a difference.

I am confident that the artifacts you were hearing are caused either by a target that doesn't fit the bill or by technical issues in your playback chain. The shape of the target response has a profound influence on the end result, in ways that few realize before they start to fiddle with different targets that are almost but not entirely similar. Digital clipping will sometimes lead to the kind of problem you were describing, but so could a frequency correction that emphasizes a certain part of the frequency spectrum. I expect increased transparency from a frequency correction done well. Worst case for a decent frequency correction is that the perceived transparency stays basically the same while the speaker sounds more "correct" but not necessarily better. But I've only experienced that with not-so-transparent hardware, so I would expect better results in your system.

1 About the speaker setup issue: The speaker setup should always be completed before the speakers are measured. 
Or, to put it another way: if you change the speaker setup in a substantial way, you can no longer use your old measurements. Audiolense checks that there is a match between setup and measurement, and if there isn't, one of them is thrown out. It can't be any other way. Based on what you wrote I got the feeling that you were trying to change the setup from 2.0 to 5.1 and still use a measurement that was produced under a 2.0 setup. I don't know what you did in the end here, but deleting all those speaker setup files made no difference. When I work with your measurement I can change all the crossover points any way I like and they come out just fine. And those crossover changes do stick, even when I load the other measurement that you sent me. But if I change the speaker configuration, the measurement will be thrown out when I save the setup and go back to the main form, which is exactly how it is supposed to work.

2 Target designer: There is a save-target bug there. If you open a new target and haven't saved the current one, you will be asked if you want to save it. And even though you decline, a save-target dialog will appear. And if you think that you're about to open a saved target, you will most likely overwrite the target you plan to open before you open it, because the save-file and open-file dialogs look almost identical. I'll fix that as soon as I get the time. I do a lot of grabbing and dragging of points when I make targets. Sometimes it doesn't grab.

3 Measurement name: There is a text field in the measurement module where you can enter any name you want. This name will stick even if you use a different name on the file. So this is not a bug.

5 The window sizes are stored as time values. But the high frequency window will change frequency because the Nyquist frequency changes with the sample rate.

6 I tried the partial correction here and it works as it should. Please see the attached image. But problems could arise with different crossover settings. 
Audiolense allows the user to do things that can make it difficult to create good crossovers.

A I used 0 octave width, by the way. Also, the dB adjustment of the no-correction zone doesn't work as intended. I'll have to fix that. But I don't think you need to use it.

B As I've written before, a TTD correction with partial correction will use at least 0.5 octaves of transition to get the time domain in order. I think this is about the right time for me to explain a bit of physics. 0.1 octave, and even 0.5 octave, is not much when you get down towards 100 Hz. It is only about 50 Hz. Any substantial change in the frequency or time domain that happens within 50 Hz will be a very sudden change. We humans perceive sound on a logarithmic scale, and we look at frequency charts that are log scaled. It looks as if the difference between 10,000 Hz and 20,000 Hz is the same size as the difference between 10 Hz and 20 Hz. And it sounds like that too. But the physics of sound is not logarithmic. It is linear. And around 100 Hz we're dealing with long wavelengths as well. You have a couple of difficult room reflections around 60 Hz. You tried to run crossovers straight through them, and you asked for a transition from TTD correction to no correction - all within a few Hz. That means that you have ordered a lot of taxing DSP inside a span of approximately 150 Hz. And since Audiolense operates with strict control in the time domain, and since the time allocated to get the job done is too short, you get artifacts. The underlying mathematics works as it is supposed to. It's when the program shortens the filter according to the frequency dependent window settings that the artifacts appear. This is basically ALWAYS the case when artifacts appear in the correction filters or in the simulations. The artifacts are a sign that you're asking for more correction than what's achievable inside the TTD window and/or the correction window. 
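To put rough numbers on the log-versus-linear point, here is a small stand-alone sketch (plain Python, nothing Audiolense-specific; the function name is mine):

```python
def octave_span_hz(f_low: float, octaves: float) -> float:
    """Linear width, in Hz, of a band starting at f_low and spanning `octaves` octaves."""
    return f_low * (2.0 ** octaves - 1.0)

# Around 100 Hz even half an octave is only ~41 Hz wide, while the
# same half octave starting at 10 kHz spans more than 4 kHz.
for f in (100.0, 10_000.0):
    print(f, octave_span_hz(f, 0.5))
```

So a transition that looks comfortably wide on a log-scaled chart near 100 Hz is, in linear terms, only a few tens of Hz - which is why the DSP gets so little room to work in down there.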
So instead of regarding this as a bug in Audiolense, I recommend that you try to get around it by changing a few parameters. We basically want to do as much correction as needed in the shortest time possible.

C These issues didn't happen when I tried the same on your speakers - probably because I used less taxing crossovers.

D A linear-phase 20 Hz / 24 dB cutoff filter will not create a phase shift, that is true. But it will create a LOT of ringing. Slow rise and slow decay. Pre-ringing and post-ringing. Equal amounts of time domain distortion on both sides of the peak - that's how you get linear-phase behavior from something that takes a lot of time. And it will also add more complexity to the correction filters.

7 Again, these are not bugs. You are just asking the program to do more than there is time to get done. I don't get that problem when I make corrections for your speakers, with your measurements. Don't underestimate the significance of how you set your crossovers here. Frequency correction only is, by the way, a lot easier, so it takes less time.

B - feature requests

1 - Relaxed frequency correction. The relaxation happens with the measurement filtering and by using short correction windows. When I use a moderately short window on your measurement, the smoothed measurement only contains the most basic and fundamental fluctuations: small changes across several thousand Hz. There is very little left to correct, and it takes very little to correct it with a FIR filter. There isn't a +/- 1 dB regulator in Audiolense. The precision you see in the smoothed simulation is created by the time domain restrictions. If there were none, the simulation would be identical to the target. The time domain restrictions are your best friends when it comes to avoiding overcorrection. I understand where you are coming from and why caution is practiced in the business. From my perspective that caution is the only proper response to the limitations that come with traditional EQ. 
EQ is the wrong tool for the task, and it is a mystery to me why it isn't being replaced by FIR-based correction at a rapid pace. The advantages I see with IIR have nothing to do with sound quality. IIR filters are inflexible, subject to mathematical instability, and operate without control in the time domain. But they are cheap and well known. With Audiolense you have a very different tool in your hands. It is capable of doing a lot more magnitude correction with a lot more precision - and with fewer strings attached - than what you are used to.

IIR stands for Infinite Impulse Response. INFINITE. The only way to control the time domain behavior somewhat is to be cautious in the frequency domain. With Audiolense you have tight control over the time domain. Anything substantial that you do inside a short time window, and that makes the frequency response look significantly better, is usually worthwhile doing.

Second, it is basically impossible to do a precise correction with IIR. The IIR filters do not do less correction, but they do less of what you need to get a better magnitude response. They come in certain frequency domain shapes, and those shapes are a poor fit for the typical room and speaker problems. Every time you specify a notch filter you do some improvement and some damage to the frequency response. The skilled user ensures that the damage is substantially smaller than the improvement.

Third, and this is equally important: I still haven't seen an EQ-based toolkit that produces a good analysis of the unfiltered frequency response. Most of the smoothing techniques used will produce wide band artifacts somewhere from the upper midrange and upwards, and dips that appear to be deeper than they really are. If you fully correct a dip based on the most commonly used smoothing techniques, you will create temporary peaks. Dip lifting has a bad reputation among EQ users because it is not done right. 
EQ users are creating audible peaks because they work from the wrong frequency charts and with the wrong tools. And when they get the "hollow" sound, they blame it on the wrong causes. What I'm trying to say here is that your worries do not apply to Audiolense. Audiolense comes with its own set of worries.

2 - When you talk about poles and zeroes and filter points, you speak the IIR language. FIR filters are a lot different. The way to reduce the scope of FIR filters in Audiolense is to use shorter time windows. If you use a measurement and correction window that has 3 cycles at the top, you will use something like 7 samples to correct around 20 Hz. For the human ear this is like doing an instant correction. These 7 samples may be involved in dealing with a number of poles and zeroes, but hardly any of them will be completely corrected. Only partially - only what can be done with a few samples of correction.

3 - I am not enthusiastic about enabling manual gain tweaking on the filters. New users often get the wrong impression of the transparency of Audiolense because they create digital clipping during playback. When I worked with your measurement I only saw a potential gain of 2 dB, and that was with the +10 dB for LFE checked. Customers who are looking for uncompromised quality should make sure to have enough gain in the analog domain that they don't have to flirt with digital clipping. I don't know how you measured actual gain during playback when you found the 6-8 dB of available gain, but there are a lot of methods out there that I don't trust when it comes to these things. You really have to look at every sample after correction to be on the safe side.

4 - Having several measurements and corrections side by side would be a nice feature. Unfortunately there are users who run Audiolense on systems so huge that this would create memory problems, and we have enough of those already. A simple alternative is to open several instances of Audiolense and have two screens side by side. 
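On the "look at every sample" point, a brute-force check along these lines is the kind of thing I would trust (a minimal sketch in plain Python/NumPy, nothing Audiolense-specific; the function name and the crude gain "filter" are made up for illustration):

```python
import numpy as np

def worst_case_dbfs(signal: np.ndarray, fir_taps: np.ndarray) -> float:
    """Convolve the signal with the correction filter and report the
    worst-case sample level of the result, in dB relative to full scale."""
    corrected = np.convolve(signal, fir_taps)
    return 20.0 * np.log10(np.max(np.abs(corrected)))

# Hypothetical check: a full-scale impulse through a 2x (+6.02 dB) gain
# stage lands 6 dB over full scale - guaranteed clipping on conversion.
impulse = np.zeros(1024)
impulse[0] = 1.0
boost = np.array([2.0])
print(worst_case_dbfs(impulse, boost))
```

Anything above 0 dBFS here means at least one clipped sample, no matter what an average-reading meter shows; a real check would of course run the actual correction filters against worst-case program material rather than an impulse.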
5 - TTD is usually easy to do on speakers with behavior such as yours, but it is vital to get the frequency correction nailed down before starting to work towards a TTD correction. A partial TTD correction is a mixed blessing. I really don't think it is a good idea to run a partial TTD correction up to 200 Hz as long as the system responds well to a TTD correction that goes higher up. Part of the explanation was given further up. The other part is that TTD correction through the midrange usually sounds substantially better - if you get the target response right. There are speakers that are perhaps too much to handle for a TTD correction through the treble, but your speakers have a very clean pulse.

C - the Audiolense frequency correction

You launched a very serious criticism against Audiolense, and I have to comment on that. The frequency correction has been practically problem free since the launch of Audiolense, and it has stood the test of time very well. The frequency correction is IMO the best thing in Audiolense and the best thing you can do with DSP on a hifi system to improve the sound quality. And this is also where the biggest upside of moving from EQ to FIR correction lives. If there's nothing seriously wrong with the measurement, it will sound like the target after correction. But if the target is a hair off, the sound quality will suffer. And the target is usually off during the first few trials. I have challenged professional users as well as domestic users on several occasions to test Audiolense for transparency. If you draw a target that follows the smoothed response reasonably closely, the two are likely to sound identical. The transparency has been confirmed by several professional users who had their doubts early on and who had access to first grade equipment. From a physical and mathematical point of view, there is no reason to believe that it isn't 100% transparent. 
You can do a similar test by measuring your system with the analog EQ in place, through Audiolense. Then make a target that is more or less a replica of the frequency response you have with the analog EQ in place. Then you can disable the analog EQ, do a new measurement and make a frequency correction towards the target you made from the first measurement. Then you can compare. If there's no digital clipping and no other crap going on in the digital domain, this will be a good test of the transparency of your analog EQ, but also of the frequency correction of Audiolense. After you start to fully appreciate that Audiolense can do a transparent frequency correction, you can get back to working on the frequency correction. And when you get that nailed down, you are ready to try out the TTD correction.

This probably sounds like I regard Audiolense as a flawless solution. Well, I don't. But I don't think you have come far enough down the road to appreciate the benefits and recognize the real issues. You still haven't made your first decent sounding filter, from what I can see. Further, I believe you have to challenge some of your EQ-related knowledge and assumptions. If you keep suspecting that the frequency correction filters are fundamentally flawed, if you stick to the same guiding rules as you do with EQ, and if you keep believing that a precise correction of a heavily smoothed measurement is too much, I doubt that you will be able to capitalize on a first class FIR correction.

It also needs to be said that Audiolense, EQ and other DSP devices are just tools - tools that enable the user to modify the sound quality for better or worse. The skills of the user make a big difference. You obviously have a lot of skill in tuning a system with digital and analog EQ, but you're not an Audiolense expert yet - and that could mean that EQ is the best way for you to do it, even though Audiolense is a more capable method in general. 
Looking at your measurements, I believe there is room for improvement. If you decide to dismiss Audiolense you can always use the satisfaction guarantee and get the license fee back. But nothing would please me more than if you stick around and have another go at it later. It was very difficult for me to respond to your summary. I hope it didn't come out the wrong way.

Kind regards,
Bernt

From: audiolense@googlegroups.com [mailto:audiolense@googlegroups.com] On Behalf Of Bob Katz
Sent: Thursday, January 24, 2013 6:12 AM
To: audiolense@googlegroups.com
Subject: [audiolense] Audiolense 4.6 and JRiver MC18---Summary of my testing and debugging so far (long, detailed post)

1. Speaker setup section. I had a terrible time creating a new 5.1 setup. The naming section is in two parts and it is not clear which part is which. It tended to refuse my input and stubbornly revert to the 2.0 setup time and time again. When I got desperate and "deleted" all setups except the last one, it kept coming back with the setups I had deleted. I know some of this is my misunderstanding of how this section is supposed to work, and I guess its GUI doesn't match my intuition. :-) Anyway, using the trick mentioned above, and with perseverance, I was able to create a setup which "stuck" for the rest of my work. Until I was able to figure out the workaround, EVERY time I loaded a measurement file, the speaker setup reverted to 2.0 when I had just set it to 5.1. It was frustrating indeed, as it happened "behind my back", and the only way I knew was to choose "edit speaker setup" and notice that in that tab it was just a 2.0 routing instead of 5.1. Once I got it to "stick" by quitting and relaunching, it did stick, and seems to stick from this point on. Though I fear that if I create or modify this speaker setup, the vicious circle will begin again. The same goes for the crossover point. 
I would change the crossover point, save this to a new speaker setup, and loading a measurement file would cause the XO point to change back. The "reset setup" button also caused the XO point to revert to the last setting. Again, I think that quitting the application is a workaround for this bug. A related question: if I send Bernt a measurement file, does it include the speaker setup that I have? If so, then this might explain the weird connections between the two files and the buggy operation.

2. Target section. The Chart Editor comes up when you right click on a frequency point. This is very useful when adjusting frequency points. Pity you can't delete a frequency point from within the chart editor - maybe the feature exists and I can't find it. Anyway, the chart editor display of data points is sometimes out of step with the actual target being edited. Sometimes it does not display the points that you add or delete on the graph. It depends on whether you do a new target or open an existing target file. The workaround is as above: save the target, quit, then open the target, and it should clean up the situation.

3. Minor issue: change the name of a measurement, open the measurement, and the display name in the lower left hand corner is still the old one. (It doesn't update.) Perhaps there is some kind of internal name in a measurement file that will never be in sync with the actual file name. Something to fix for version 5 or 6, not that urgent. Similarly, the target name is not always displayed, but I did discover the little pulldown menu on the bottom right that displays the current parameters (nice).

4. Minor issue: I don't think it's a good idea to try to rename a cfg file outside of the application (e.g. in Windows Explorer), because the cfg files point to specifically-named waves. I find the only thing to do is to save the filter to a new cfg file with a different name, and then Audiolense will generate all the files accordingly. 
HOWEVER, it would be nice someday to have a rename-filter function.

5. Squiggles in the measured high frequency response near 20 kHz: not exactly a bug. I found this was due to too short a measurement window at the high frequency end. Increasing this from the default 0.227 ms to about 0.5 ms fixed it with no problem. It's probably related to diffraction issues in the tweeter or cabinet which are smoothed out with just a little bit of time averaging in the window. And in fact, it was Bernt who alerted me to the fact that anomalies like this are why he chose a slightly longer measurement time than Jim Johnston had recommended in his papers. Not an issue at all once you know what causes it. Careful, Bernt, if you change the default here: I recommend that when different sample rates are chosen, the measurement window should not change in ms, for consistency in measurement. Store it as a time value, not a number of samples.

6. When enabling partial correction with no correction above 200 Hz (see section C below as to why), I ran into several issues that prevented me from fully evaluating this option:

a. Do NOT use 0 octaves transition. This screws up a number of parameters. If you want a short transition band between corrected and uncorrected, choose 0.1 octaves instead. Probably 0 is a troublesome number in one of Bernt's equations. At least that's what I found: anomalies just below and above the crossover frequency were eliminated when not using 0 octaves.

b. With "no correction above 200 Hz" as a choice, even with a 0.1 octave transition, there was a severe dip in the supposedly corrected band at 117 Hz (where the measured loudspeakers have an issue). By moving the transition band to 225 Hz, the problem went away. A 0.1 octave transition below 200 Hz should still be well above 117 Hz, shouldn't it?

c. It appears that there is some interaction at the extreme low frequency part of the band near 20 Hz when implementing this partial correction near 200 Hz. At least for me. 
As soon as I implemented the partial correction up to 225 Hz, my low frequency response at 20 Hz reverted to about 3 or more dB down, when with the same target it had been fine. I had to add some additional points in the target, and it took me an hour of fiddling with points below and above 20 Hz until I could get the simulated 20 Hz to 40 Hz response near to flat without causing a severe correction boost below 20 Hz, or simulated response down to 10 Hz, which I think is a bad idea. When I had a wideband (not partial) correction, the target was easy to configure and there were no inconsistencies or need to add so many points to keep it flat. I cannot explain why changing to a partial correction circa 225 Hz affected the 20 Hz response, but for me it did. The most common approach to this is a high pass filter instead. I think a well-implemented linear phase filter at approximately 20 Hz, 24 dB/octave, might be a good idea here, instead of relying on the target shape to fix the issue. Then there would be no phase shift.

d. Setting partial correction up to 225 Hz exposes issues that make it impossible, with the current structure, to implement this option. First of all, the +/- amplitude control for the uncorrected portion does not work desirably in my opinion. The object is to adjust the uncorrected gain to make a seamless splice to the uncorrected loudspeaker response at the 225 Hz transition point. Adding boost or cut here does not exactly offset the uncorrected section. Instead it appears to interact with the overall attenuation of the corrected section, in a kind of unpredictable way. Finally I was able to make a seamless splice just by trial and error, adjusting the uncorrected amplitude and watching the simulated response curve. But this was to no avail. 
Due to slight, natural differences in the naked (uncorrected) amplitude response of the front main loudspeakers, the end result of trying this option was uneven frequency response between the left and right speakers, left-right image shift at different frequencies, etc. Which, weirdly, I do not get with my analog correction system, but we'll let that sleeping dog lie. So if this partial correction is implemented, it would be necessary to have separate transition frequencies for each loudspeaker. In fact, this feature really depends on each loudspeaker having nearly perfectly-matched response and level to begin with, at least near the splice point. So it's not a practical option. But anyway, I started this partial correction in order to debug the sound quality issue described below in Section C.

7. TTD bugs. Stubborn dips in the low frequency response with TTD that were fixed with no problem with moderate frequency domain correction. With TTD, I tried many different permutations of bass boost limit (or setting bass boost very high) and the checkboxes and the different subwindows, and could not eliminate these artifacts. I know that Bernt has a green thumb for this plant, but in that case he would become a consultant for everyone who buys the software and wants to try TTD. I could send my setup to him to analyze, but anyway, I think TTD is still a work in progress which is not quite usable (see section B below).

B. Feature requests. The following notes are for frequency domain correction unless otherwise noted:

1. Relaxed amplitude correction. It seems the default correction is to get to +/- 1 dB. Many of my professional colleagues and I believe that a correction this strong can cause more sonic harm than good. In other words, the cure sounds worse than the disease. However, I was not able to find a way to reduce the severity or strength of the correction, trying every permutation of every parameter in the correction section that I could. 
I tried maximum boost settings and I tried permutations of the checkboxes for "no treble boost" and "no bass boost". And still the program brought the results to +/- 1 dB. You'd think I'd be happy with that, but I am concerned it is overcorrection. So in my professional opinion I think you need to implement some kind of feature that determines the maximum amount of "flatness" the program will permit.

2. A relaxed number of filter points (poles and zeros), proportional to the octaves as the frequency increases. See Section C below for a full discussion of the urgent reason this is needed and what needs to be done. This is the most urgent of all the issues in this letter.

3. Manual attenuation override. This may be helpful instead of relying on the gain controls in the convolution engine. I found at least 6 dB of headroom that I could use in order to get the SPL I desire at a given analog attenuation in my system (calibrated volume control), given the analog gain structure of my system. But my workaround is to find some digital gain in the convolver or another DSP element in JRiver, and given it's all floating point, this is a relatively low-priority request.

4. Ability to compare simulation graphs of different scenarios, e.g. different crossover points or different partial corrections. Is it possible in the analysis section to overlay the simulated impulse responses from one correction or target scenario against another? This would be the equivalent of making a correction with one correction approach or target and loading the filter into the convolution engine, then measuring the response with REW for all speakers; then making a correction with a different setting and taking another REW response; and finally overlaying the two graphs and examining the differences. I'm getting out of breath just thinking about going through all that! Still, this is a relatively low-priority request.

5. TTD is a work in progress. 
I think it's going to be great some day and I can't wait to hear it! But I think only by finding a way to somehow integrate TTD below, say, 500 or 200 Hz with frequency correction above that frequency. I imagine this would be very difficult, very complex coding, but maybe there is a way. I think the current attempts at pre-ringing solutions are not sonically acceptable. Basically, TTD has the best bass sound I've ever heard - tight, beautiful, coherent, fat and seductive! Like having several very effective active bass traps, but even better! But unfortunately the apparent loss of transient response and lack of transparency in the midrange through the treble with TTD are not acceptable. It could easily be the introduction of echoes instead of the removal of same. I did try the partial correction approach below 250 Hz with TTD, but it produced a disembodied, hollow effect, indicating the difficulty of splitting a time domain correction procedure at a particular transition frequency. But another possible approach (and it's easy for me to suggest, since I'm not the person who has to do Bernt's hard work, the coding) might be to limit the time delay correction to certain reflections, or to limit the time delay correction amount or strength. Jim Johnston alluded to just dealing subtly with the first reflection in one of his papers. I think a little more could be done than just the first reflection, but the amount of the correction has to be carefully watched or the cure sounds worse than the disease. There's no substitute for a good room to start with, too!

C. The real problem with the Audiolense frequency domain algorithm. (Be patient, I'm getting there!)

Let's abbreviate the analog room correction system as ARC, as opposed to DRC for the digital room correction. I have a few sonic standards that I can compare Studio A against. Studio A, with the ARC, is pure-sounding and very transparent. It sets the bar for the sound quality that I would hope to get from a new DRC. 
I am familiar with frequency-domain DRC systems, having built one in my Studio B (mixing room). It is a very good one, but it is not as transparent as the analog system in Studio A (mastering room). Studio B is a difficult room, the filters I have implemented are limited in certain respects, and there is also a chip-based ASRC (asynchronous sample rate converter) in the chain to allow changing of incoming sample rates without crashing the DSP. It all takes its toll; Studio B is still very good sounding, but not up to the standards of Studio A. To enable the changeover in Studio A to the new DRC and digital crossover, I designed some adjustable passive attenuators, some passive switching, some changes in the digital router, and also a new analog cable harness that allows me to switch back and forth between the level-calibrated ARC and the DRC in about 5 minutes' time! You should see me scramble behind the power amplifiers :-). So I can "return to zero" and compare the analog and digital room correction systems very rapidly. The identical DAC is used in both situations, except that four channels of DAC are needed for the stereo two-way digital crossover and only two channels of DAC are needed for the stereo analog. The identical analog components are used, but there are fewer of them in the DRC chain. All active filters and unnecessary active components which were used for the ARC were completely bypassed for the comparison with the DRC. Believe me, that was a difficult thing to set up, but worth it, as I can "return to zero" or switch to DRC mode at will.

LISTENING TEST: On Monday, my acoustical consultant Mike Chafee and his assistant came over. Mike has 40 years in this business; he's an expert with digital-domain correction systems and knows many of them well. I have 40 years as well, and we're both audiophiles with critical ears but open minds. We know where all the bodies are buried, and between the two of us we don't miss much. 
Travis, Mike's assistant, is also experienced, loves to listen, and also has audiophile ears. Note that in this listening test I had calibrated digital meters on all the DRC outputs and knew exactly how far from clipping the DACs were; in no case did a DAC go into clipping in DRC mode, with a safety factor of at least -1 dBFS on a sample-reading digital meter. So, first we had a detailed listening session with the current system with ARC, playing several high-quality musical cuts that we know and noting how they sound. Then, after getting a frequency domain correction in Audiolense that looked really good in simulation (actually, TOO GOOD, as you will soon see), I switched the system over to DRC mode, using the convolver in JRiver, and played the identical musical selections. The improvement in bass response was instantly obvious; the bass was perfectly even and tight, fat, beautiful - we loved it! However, the sound from, say, 200 Hz on upward was unsatisfactory. It sounded grainy and unresolved compared to the ARC, and what was really disappointing was an edge in the sound, particularly exhibited on a vocal on one selection that was unpleasant to listen to; it sounded like vocal distortion. Mike and I had a theory as to why. The theory is quite simple: Audiolense is overcorrecting. It's easy to see all the wiggles in the corrected frequency response in the simulation. We see a remarkable plus or minus 1 dB response, but with curvy wiggles that represent what must be 50 to 100 or more filters in the path, in many cases, I think, with steep slopes and narrow bandwidths. Both Mike and I know from years of experience that narrow filters sound bad - they can sound edgy, and the more of them you use, the harsher it sounds. This could be due to phase shift, time domain effects or other effects. But if you ask well-known authorities like Rupert Neve or George Massenburg, they will tell you the same: gentle-slope filters live! 
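For what it's worth, the time-domain side of the narrow-filter argument is easy to demonstrate outside any correction software. The sketch below (Python with SciPy; the 1 kHz center and the Q values are arbitrary choices of mine, not anything measured from Audiolense or Dirac) checks how long a single notch keeps ringing after an impulse:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 48000  # sample rate, Hz

def ring_length_ms(q: float, f0: float = 1000.0, floor: float = 1e-4) -> float:
    """Milliseconds until a notch filter's impulse response decays below `floor`."""
    b, a = iirnotch(f0, q, fs=FS)
    impulse = np.zeros(FS)          # 1 second of silence...
    impulse[0] = 1.0                # ...with a unit impulse at t=0
    h = lfilter(b, a, impulse)      # the notch's impulse response
    last = np.nonzero(np.abs(h) > floor)[0][-1]
    return 1000.0 * last / FS

narrow = ring_length_ms(q=30.0)  # steep, narrow-bandwidth notch
gentle = ring_length_ms(q=2.0)   # wide, gentle-slope notch
print(narrow, gentle)            # the narrow notch rings far longer
```

Whether that ringing is exactly what we were hearing is still a hypothesis, but it illustrates the trade-off: narrowness in frequency is always paid for with length in time.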
I use digital EQ in my work all the time, including George Massenburg's digital equalizers, and they sound pure and beautiful to me, so I know that digital EQ can work. Mike has tested a Dirac processor that implements many, many narrow-band filters; it measures great, but he reports that it sounds horrid, harsh and veiled. So we thought we had found the culprit, but we wanted to prove it to ourselves. I wanted to prove it by changing only one variable, so as to nail the answer and be 100% sure of it. I slept on it, and in the morning I came up with a simple single-variable experiment: enable partial correction with a 225 Hz transition. Keep the original (excellent) loudspeaker response above 225 Hz, and below that correct for the room modes and introduce the digital crossover.

The long and the short of it is that this second listening test was very, very successful, and I think I found the culprit! With the partial correction, the purity of tone returned, the harshness disappeared, including on that problem vocal, and the sound depth was somewhat restored, despite the image shifts I noted above, which are not the fault of Audiolense per se. The image shifts did not keep me from immediately recognizing that the purity of sound had returned with no correction above 225 Hz. And you can see this is nearly a single-variable experiment: eliminate the sharp filters but keep the same DSP chain.

So, the simple solution to the harshness problem is this: Bernt, please design a frequency-domain correction algorithm that purposely limits the number of filter points. Please try to draw curves between the extremes of the peaks and dips in the raw loudspeaker response. Please try to limit the bandwidth of any filter implemented (at least above 225 Hz) to 1/6 octave or wider as much as possible.
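The kind of resolution- and amplitude-limited correction being asked for here could be sketched roughly as follows. This is an illustrative NumPy toy, not Audiolense's actual algorithm; the function names are invented, and the 1/6-octave, 225 Hz, and ±2 dB figures are simply the ones proposed in this letter:

```python
import numpy as np

def sixth_octave_smooth(freqs, mag_db):
    """Smooth a magnitude response with a 1/6-octave moving average:
    for each frequency f, average all points within the 1/6-octave band
    [f / 2**(1/12), f * 2**(1/12)] centred on f."""
    half = 2.0 ** (1.0 / 12.0)
    out = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        band = (freqs >= f / half) & (freqs <= f * half)
        out[i] = mag_db[band].mean()
    return out

def gentle_correction(freqs, measured_db, target_db,
                      transition_hz=225.0, max_corr_db=2.0):
    """Correction gain (dB) with limited resolution and amplitude.

    Below transition_hz: full correction (room modes, crossover region).
    Above: correct only the 1/6-octave-smoothed response, and clip the
    correction to +/- max_corr_db around the target."""
    smoothed = sixth_octave_smooth(freqs, measured_db)
    corr_full = target_db - measured_db
    corr_gentle = np.clip(target_db - smoothed, -max_corr_db, max_corr_db)
    return np.where(freqs < transition_hz, corr_full, corr_gentle)

# illustrative use: a narrow +6 dB peak at 1 kHz against a flat target
freqs = np.logspace(np.log10(20.0), np.log10(20000.0), 400)
measured = 6.0 * np.exp(-0.5 * (np.log2(freqs / 1000.0) / 0.05) ** 2)
corr = gentle_correction(freqs, measured, np.zeros_like(freqs))
# above 225 Hz the correction never exceeds +/- 2 dB; below, it is exact
```

The point of the sketch is the split: sharp, full-strength correction stays confined to the modal region, while everything above the transition sees only a smoothed, amplitude-limited nudge toward the target.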
In places where a 1/6-octave or wider filter does not correct the response, allow a relaxation of the amplitude correction to, say, plus or minus 2 dB or even more from the target, which is a perfectly acceptable standard for a loudspeaker system. Please make the maximum amount of amplitude correction user-settable, and please allow it to be set differently above and below a certain frequency. All this is in the interest of having fewer filters, with gentler slopes and with no sonically undesirable overcorrection. I feel the sonic result of this algorithm will be a purer sound; the harshness will go away, and you will have an audiophile-quality winner! I have a lot of faith in high-resolution DSP; I am not an "analog luddite", I always pick the best of both worlds. I do like your system and approach very much. You just have to set it up so it does not overcorrect.

I hope this helps. I'm sending part I of this letter to the JRiver forum and leaving parts I and II on the Audiolense forum. Happy coding, Bernt! I think you can accomplish this in a really short time, and I can't wait to hear it!

I'm not surprised that other listeners who have bought Audiolense have not noted this harshness. You have to have a standard for comparison and possess master-quality original material to compare and reveal the issues. It helps to be able to quickly A/B the ARC vs. the DRC and judge the purity of tone of each approach. It helps to have experience using high-quality analog and digital equalizers in a mastering context. Lastly, the listening position in this room is in a reflection-free zone, so it is very easy to judge sound quality without degrading early reflections masking any issues. A reflection-free zone is one in which there are no early reflections higher than -15 or -20 dB relative to the direct sound for at least the first 20 ms after the initial impulse.

Best wishes, and again, Bernt, thanks for making such a great DRC system with such great potential!
Bob Katz -- Audiolense User Forum. http://groups.google.com/group/audiolense?hl=en