It happened to me that after an update, booting Garuda would proceed as usual until it asked me to select what to boot; then a full black screen appeared with a prompt for login and password. I managed to log in, but then I couldn't do much.
When I posted my inxi in a past thread I was told I had the proprietary Nvidia drivers, but when I checked my GPU in the hardware settings, it showed the video-linux option enabled for the drivers, and the button to auto-install open-source drivers says I already have them installed.
Oh, right, that's the thread I had seen about driver issues. So, am I just supposed to try those commands after attempting an update? Or can I run a command like 'sudo dkms autoinstall' in my current snapshot to see if it has issues?
Because in the hardware configuration panel I tried to install the package video-nvidia-dkms, but it wasn't marked as open source (it didn't have the check), so I thought those were the proprietary drivers... Did I get it wrong, maybe?
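For reference, the state of any DKMS-managed modules can be inspected before running autoinstall; a minimal sketch (module names and output will vary per system):

```shell
# List every module DKMS knows about and whether it is built/installed
# for each kernel; an empty list means no DKMS drivers are present
dkms status

# Rebuild and install any missing modules for the running kernel
sudo dkms autoinstall
```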
I've tried following your suggestions, but I've found out I'm in a weird situation: I'm no longer able to restore any of my old snapshots. If I try restoring them I just end up in an emergency shell, and I can't run the commands there because I think Wi-Fi is disabled. Is there a way I can connect to the internet from the console? Trying to reset dkms tells me the file doesn't exist, so I don't have those drivers.
This is shown, for example, if you mount the wrong partition onto /mnt/broken, e.g. the EFI partition instead of the system partition (the btrfs-formatted one).
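A sketch of picking the right partition before mounting; /dev/sdX2 is a placeholder for your actual btrfs root partition, not a real device name:

```shell
# Identify the btrfs system partition (FSTYPE=btrfs), not the EFI one (vfat)
lsblk -f

# /dev/sdX2 is a placeholder; substitute your btrfs root partition
sudo mount /dev/sdX2 /mnt/broken

# Sanity check: a system partition contains etc/, usr/, var/ ...;
# an EFI partition only contains an EFI/ directory
ls /mnt/broken
```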
Can you recap the exact commands you tried and the outputs you got?
Hello, for a few weeks I've been having an issue with After Effects (22.5): every time I tab out of AE it completely freezes my PC and restarts my GPU drivers (TDR), causing my monitors to flicker and turn black for a few seconds. When this happens, AE doesn't crash immediately, but the preview viewport is completely white, and sometimes I get "GPU out of VRAM" errors from AE itself or from any of the plugins that use CUDA/OpenGL acceleration.
I have tried uninstalling Windows 11 updates, using older GPU drivers, and downgrading AE versions. Uninstalling all plugins fixes it, but as soon as I install any plugin, whether Sapphire or just Deep Glow (both GPU-accelerated), I get the same "GPU out of memory" issue. I have also tried running AE and all the plugins with GPU acceleration disabled, and it still crashes no matter what plugin I use.
Hard to say much. Since the virtual kernel manager for the GPU crashes, that could indicate a broader issue. Perhaps it's simply running out of resources, perhaps it's a trivial sync/HDMI timing issue with multiple monitors, perhaps it's some weird RTX-related thing. Either way, one can only advise the standard steps: unplug monitors, keep fiddling with the driver settings.
I'm experiencing the same problems since the past few days, also related to nvoglv64.dll and with comparable hardware. Running on Win 10 Pro and, as I'm sure you did, having updated the Nvidia drivers to the latest version (516.93 Studio).
You mentioned Deep Glow; my issues also started after installing it, so it might be what's triggering the crashes, but I believe the underlying problem stems from After Effects' GPU management in version 22.5 and/or the Nvidia drivers.
After testing all 22.X versions, it looks like 22.1.1 is the latest version that still works without systematic crashes (occasional crashes still occur while using Deep Glow and RSMB, even though both effects are updated to their latest versions and fully support MFR).
How is your monitor connected to the system? Are you using a docking station? Can you try removing the plug-ins one by one and using After Effects to check whether that makes any difference? It'll help us narrow down the issue.
I have experimented with setting the Windows Graphics Settings to open TouchDesigner, TouchPlayer and TouchEngine with the UHD 630 GPU, and it actually works and runs fine at 60fps. If I set the Quadro GPU as the preferred GPU for the three applications, I receive the error above.
After diving deep into the Google rabbit hole, I have learned that the issue may have something to do with OEM settings! Apparently the 3D settings are pre-configured and locked, so they cannot be changed. This is an issue for TouchDesigner, as it appears to require these settings to be altered in some way. How, I have yet to determine.
I have passed this information on to Scan.co.uk (builders of fine PCs) to see if they have an answer that may shed some light on the problem. For now, though, the only solution I can see is to get a retail GeForce card and install that.
I went into Device Manager and simply disabled the Nvidia card. Started the application and it works. I will now run performance tests to see if there is any noticeable impact. Naturally I will need to re-enable the Nvidia card for my prototyping work, but this will suffice for now. I still believe the OEM drivers are to blame and would be interested in trying the retail version of the drivers to see if that affects anything.
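For reference, the Device Manager toggle can also be done from an elevated command prompt on recent Windows 10/11 builds; the instance ID below is a placeholder you would copy from the listing, not a real value:

```shell
:: List display-class devices to find the Nvidia card's instance ID
pnputil /enum-devices /class Display

:: <instance-id> is a placeholder; paste the ID from the listing above
pnputil /disable-device "<instance-id>"

:: Re-enable it later for prototyping work
pnputil /enable-device "<instance-id>"
```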
After moving the screen to the correct port, I had no trouble getting Touchdesigner to run on the right GPU. Afterwards I have used the built-in tool to save the screen EDID and load it from file, and I have disconnected the screen so the system is now running headless. I am able to access and maintain the system via RealVNC.
Are you using the default nouveau driver for the GPU or have you installed the nvidia driver from rpmfusion? The nvidia driver usually controls the GPU fan speeds and temps.
The nouveau driver does not fully support the newer nvidia GPUs.
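Aside from inxi, a quick way to confirm which kernel driver is actually bound to the card:

```shell
# The "Kernel driver in use:" line under the VGA/3D entry will read
# either "nouveau" or "nvidia"
lspci -k | grep -EA3 'VGA|3D'
```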
Hi - I see you edited in your system information, and in there it looks like it is actively using the nouveau driver for your NVIDIA card? I have the RPM non-free NVIDIA driver installed, and this is what my inxi -Fzxx shows in the Graphics section:
As noted, this clearly shows the nouveau driver in use.
Please show us the installed nvidia drivers.
dnf list installed *nvidia* will show all needed info if you installed from the rpmfusion repo.
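One detail worth noting: quoting the glob keeps the shell from expanding it against files in the current directory, so dnf itself does the matching:

```shell
# List every installed package whose name contains "nvidia"
dnf list installed '*nvidia*'

# With the rpmfusion driver you would typically see entries such as
# akmod-nvidia and xorg-x11-drv-nvidia (exact names may vary by release)
```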
Glad to see that you now have the nvidia driver installed and functional.
What info do you see in the nvidia-settings app related to fan and temp?
How do you know the fan is not running? Have you looked at it while the system is running, or just going by the reported speed in inxi? The reported speed may or may not be correct.
On mine, a GTX 1050, I do not get a fan speed reported in inxi, but the thermal temp remains near 60 C while running a GPU process from boinc using cuda. I can see the temp with lm_sensors and gkrellm as well as both fan speed and temp within the nvidia-settings app. Note that I leave the fan in auto control and it keeps the temp near where shown in the image below.
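If nvidia-settings is inconvenient (e.g. over SSH), nvidia-smi from the proprietary driver can report the same readings; a sketch:

```shell
# Summary table: fan %, core temperature, memory and GPU utilization
nvidia-smi

# Machine-readable query of just fan speed and core temperature
nvidia-smi --query-gpu=fan.speed,temperature.gpu --format=csv
```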
The only drawback is that the fan is now locked to the speed you select so it may require more tweaking to maintain temps or running the fan at a higher speed than actually required to make sure it does not overheat with load changes.
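For reference, the manual fan control described above can be set from a terminal with nvidia-settings; this assumes Coolbits is enabled in the X configuration, a single GPU at index 0, and a single fan at index 0 (all assumptions that may differ on your system):

```shell
# Take manual control of the fan on GPU 0 (requires Coolbits)
nvidia-settings -a '[gpu:0]/GPUFanControlState=1'

# Lock the fan at 60% (example value; pick one that holds your temps)
nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=60'

# Hand control back to the driver's automatic curve
nvidia-settings -a '[gpu:0]/GPUFanControlState=0'
```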
The thing is that when I'm playing RE4, my game totally freezes, and after a minute I get a BSOD with the error code DPC_WATCHDOG_VIOLATION. I've tried everything: reinstalling the game, uninstalling the AMD drivers with DDU, checking my RAM health, updating the BIOS, updating my chipset drivers, checking my PC for malware, and updating Windows to the latest version. None of it helped.
I get computer freezes while playing this game with my RX 570 8 GB. I never see the BSOD because it freezes so hard that the BSOD doesn't even show up, lol. Try driver 22.11.2 until they acknowledge the issue and care to fix it. That driver works perfectly for this game.
I tried all of the exact same fixes you can imagine, everything that everyone here recommended and more. It would work once, and then when I logged back on it was the same problem all over again. Then I updated my BIOS, and it has worked ever since.
The iCUE icon was shown in the notification area, and clicking it, then clicking "Bring iCUE to Foreground", didn't do anything. I tried everything mentioned in this thread: rebooting, reinstalling, using Task Manager to kill the process and restart it, etc.
Same problem with iCUE not working since the latest update! I just went into Services, changed the iCUE service from "Automatic" to "Manual", and placed a shortcut to iCUE from its directory onto the desktop, and that works fine every time!
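The Services change described above can also be made from an elevated command prompt; the service name below is an assumption, so look it up first:

```shell
:: Find the exact service name (it is an assumption that it contains "corsair")
sc query state= all | findstr /i corsair

:: "CorsairService" is a placeholder; "demand" = Manual startup
:: (note the required space after start=)
sc config "CorsairService" start= demand
```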
I found that GPU drivers can cause a hang-up in the system; iCUE will recognize everything, active and inactive. I had this happen where the drivers for my old Nvidia GPU were still in the system files even though I'm using an AMD GPU now. I used Display Driver Uninstaller (DDU) in Windows safe mode to get rid of all remaining Nvidia software from my PC, and iCUE started working perfectly fine on the latest version.