I am now running the 30-day trial of DxO PureRaw 3, which will cost me USD 129 if I cannot solve this problem. DxO takes no more than 2 minutes to complete the noise reduction, with a 100% success rate. Needless to say, I do not want to spend USD 129 when I already have an LRC subscription.
P.S. I don't think the Adobe website, and the forums in particular, are easy to navigate, so don't spend a lot of time searching that forum list. Do your best and we'll move the post if it helps you get responses.
Try logging into another Mac user account (you should make a new one); still crashing?
Try starting up in Safe mode (hold down the Shift key when booting); still crashing?
You might want to try running a free utility like Onyx: still crashing?
Also view:
-review-macos.html
-to-reset-a-macs-nvram-pram-and-smc.html
From Adobe:
-troubleshooting.html
Hi thedigitaldog
Thanks for the multiple suggestions, much appreciated.
Unfortunately I have not been able to solve the problem; the outcomes of your suggestions are below.
Logging into another Mac user account: crashed at around 30% of DNG generation.
In Safe mode: Lightroom said my DNG files were not compatible with Denoise.
Onyx would not run on macOS 11.7.6.
I reset the NVRAM/PRAM and SMC: Denoise managed to load the enhanced preview and said DNG generation would take 5 minutes. I pressed Enhance, and 5 seconds later I got a message saying Denoise was not applied: "unknown error"!
So many thanks again and I will wait for further suggestions.
That usually indicates a problem with the graphics driver. On a Mac, the only way to update graphics drivers is by updating macOS -- you're on macOS 11.7.6, for which Apple stopped most updates (except security updates) a year and a half ago.
Further, your graphics hardware, the NVIDIA GeForce GT 750M, is very old; it launched in January 2013. Graphics manufacturers quietly stop fixing driver bugs for old hardware, so the updated drivers in macOS 12 or 13 may not fix the problem.
You may not have much graphics memory on that older hardware -- System Information unfortunately doesn't always report it for Macs, but Lightroom's Preferences > Performance panel can show the amount of graphics memory. LR requires a minimum of 2 GB, but reports here suggest that Denoise is very slow with less than 4 GB, and Adobe recommends 8 GB for best performance.
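If it helps, you can also query this from the command line. Here is a minimal Python sketch that shells out to macOS's system_profiler; the exact VRAM wording in its output varies by GPU and macOS version, so treat the string matching as an assumption:

```python
import subprocess

# Ask macOS for its display hardware report; "SPDisplaysDataType" is the
# System Information section that lists GPUs and (usually) their VRAM.
report = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
).stdout

# VRAM lines typically read "VRAM (Total): 2 GB" or
# "VRAM (Dynamic, Max): 1536 MB", depending on the GPU.
for line in report.splitlines():
    if "Chipset Model" in line or "VRAM" in line:
        print(line.strip())
```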
I have the same problem as Melc88, and I'm using an iMac running Big Sur too.
Question: why is it that I can run DxO, ON1 Denoise AI, Topaz Denoise AI, and Nik Software Dfine all without any crash, freeze, or problem, yet the infamous Adobe LRC cannot even get past 30% of generating the Denoise AI DNG file?
The other applications are ways around the Adobe issue, but why spend more cash on software when I'm already paying Adobe a monthly subscription? Buying a new machine is easier said than done.
Has anyone managed to obtain a working solution to this issue? Would love to hear.
Need for speed. Denoise is by far the most advanced of the three Enhance features and makes very intensive use of the GPU. For best performance, use a GPU with a large amount of memory, ideally at least 8 GB. On macOS, prefer an Apple silicon machine with lots of memory. On Windows, use GPUs with ML acceleration hardware, such as NVIDIA RTX with TensorCores. A faster GPU means faster results.
Thank you for your response.
Yes, I have turned off the GPU in LRC... still the same. It crashes while (I assume) generating the DNG file, after the Enhance preview screen has displayed the denoised image well within a minute.
Thank you. Appreciate everyone's input. So it looks like I'm between a rock and a hard place:
buy a new iMac (that ain't going to happen very soon either), or buy Topaz Denoise AI, which, by my current comparison with manual LRC noise reduction, does a better job and costs way less than a new iMac.
@johnb89718280, we are both locked in the same scenario with macOS Big Sur on our iMacs, and come October this year, when both Apple and Adobe have upgraded their systems, we will not be able to install the new software and receive the benefits of the new technology.
Topaz AI is a set of popular software tools that utilize AI and machine learning to enhance both images and video. On the photo and image side, Topaz offers Gigapixel AI to upscale images, Sharpen AI to sharpen images, and DeNoise AI to remove image noise. For videos, Topaz Video AI can do everything from upscaling, slow motion, deinterlacing, to reducing noise and generally improving video quality.
Since this is the first time we are diving into Topaz AI, we are going to include fairly lengthy sections covering both our test methodology and information on the four hardware platforms and various GPUs we looked at. If you want to skip right to our testing results, feel free to do so!
Each application was given a score based on the simple geometric mean of each batch of tests (based on RAW format). The geometric mean of each of those was then calculated and multiplied by 10 (just to differentiate it from the app-specific scores) in order to generate the Overall Score.
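For anyone curious what that scoring looks like in practice, here is a minimal Python sketch of the math described above; the application names are real, but the per-batch numbers are invented purely for illustration:

```python
from statistics import geometric_mean

# Hypothetical per-batch results for each application (RAW format),
# e.g. a normalized throughput score for three test batches each.
batch_scores = {
    "DeNoise AI": [52.0, 48.5, 50.2],
    "Sharpen AI": [31.0, 29.4, 30.8],
    "Gigapixel AI": [18.2, 17.9, 19.1],
    "Video AI": [12.5, 11.8, 13.0],
}

# Each application's score is the geometric mean of its batches...
app_scores = {app: geometric_mean(s) for app, s in batch_scores.items()}

# ...and the Overall Score is the geometric mean of those app scores,
# multiplied by 10 to distinguish it from the app-specific scores.
overall_score = 10 * geometric_mean(app_scores.values())
print(f"Overall Score: {overall_score:.1f}")
```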
With the results we get in this testing, we hope to be able to fine-tune our testing in the future so that we can focus on the type of hardware that makes the biggest impact on performance in Topaz AI.
Before diving into the analysis of our results, we wanted to include the raw data from our testing. Especially as this is a whole new test set for us, we wanted to be as transparent as possible about what we tested, and the results of those tests.
To start off the analysis of our testing, we are going to look at CPU performance in each Topaz AI application. Note that this is using the GPU for processing in each application, even though we are examining CPU performance. We could switch to CPU mode, which would likely show a greater difference between the CPUs, but that mode is rarely used because GPU processing is so much faster.
Looking at the combined results across all four Topaz AI applications we tested, Intel is the clear winner with its Core 13th Gen processors. The Core i9 13900K ended up scoring about 10% faster overall than the fastest AMD CPU (the Ryzen 7900X) and secured the top spot in every single benchmark.
The Core i7 13700K also did very well, although the AMD Ryzen 7700X did manage to sneak past it by about 1.5% in Topaz DeNoise AI. DeNoise is actually very interesting, as the AMD Ryzen 7000 series tended to do better with the lower-core-count CPUs. We did CPU and GPU load logging during these tests and found that DeNoise is one of the more lightly threaded of these applications, so we may be looking at some sort of cache or turbo limitation that is allowing the 7700X to outperform the 7900X and 7950X.
Overall, it was surprising how little the CPU seems to matter within a single family of products from Intel and AMD. Per-core performance is the name of the game for Topaz AI, which generally means going with the latest-generation consumer-grade CPU if you want the best possible performance. Going with a higher-end model within those families, however, will only give you a marginal increase.
We are speaking in generalities here, and certain applications (Sharpen AI in particular) showed a much larger benefit from a better CPU than the others did. Even in Sharpen AI, however, it is much more about getting the right family of processors (Core 13th Gen) than anything else.
Starting off with the combined geometric mean across all four Topaz AI applications, the results are surprisingly uninteresting outside of the Intel Arc A770. For whatever reason, the A770 GPU consistently failed in Gigapixel AI, causing the application to crash when working with specific .CR2 image files. Because of this, we were unable to generate an overall score for that particular GPU.
Sharpen AI (chart #3), on the other hand, is almost precisely the opposite. The Intel Arc A770 did amazingly here, beating the next-fastest GPU by 55%. We did a lot of double-checking to make sure the exported image was the same between the A770 and the other GPUs, and as far as we could tell, this is a completely valid result. At first, we suspected it had something to do with Intel Hyper Compute (where Topaz AI is specifically listed as being able to use the Arc dGPU in conjunction with the iGPU), but we got nearly identical performance even when we disabled the iGPU.
Last up is Topaz Video AI (chart #5). Here, the Intel Arc A770 performs more as expected, coming in on par with the GeForce RTX 3060. AMD, however, once again does very well, performing on par with the RTX 4080. The RTX 4090 can still give you a bit more performance, but only by a few percent, which is going to be hard to notice in the real world.
Overall, the best GPU for Topaz AI is very difficult to name, as it changes depending on which application you are using. The NVIDIA GeForce 30- and 40-series are consistently solid, but AMD can be on par or even faster in specific applications like Sharpen AI and Video AI. Intel Arc is even more polarized, with crashing issues in Gigapixel AI but head-and-shoulders better performance than NVIDIA and AMD in Topaz Sharpen AI.
On the GPU side, however, things are not as clear-cut. The NVIDIA GeForce 30- and 40-series cards were consistently very good, especially in Gigapixel AI. On the other hand, the AMD Radeon 6900 XT did extremely well in Sharpen AI and Video AI, often matching or beating even more expensive GPUs from NVIDIA.
Since this is the first time we are looking at Topaz AI, we highly encourage you to let us know your thoughts in the comments section below. Especially if these applications are a part of your normal workflow, we want to hear about how we can tweak this testing to make it even more applicable to you!