I have a customizable model with multiple textures for the body (4 to be exact). How would I be able to easily switch between these textures with a driver? If I had 2, I'd be able to use a Mix node and add a driver to the factor slider, but since it only has two inputs I don't know how I could do that.
To be more specific, I want to achieve what is answered in this question, but with more than two inputs: Is there an easy way to create a model with multiple texture inputs and then easily switch between them?
It seems I accidentally found a solution for this. I double-clicked a block instance with a wrong texture map, and the block file came up with the correct one. I saved the file and closed it so that the host file came back to the front, and suddenly all textures were correct again. Very strange; IMO something is a bit flawed with the new way of material handling in Rhino 7.
When driving, the glove textures are awful and blurry, but when I get out of the car and watch the replay they are perfectly fine. I don't remember it always being like that. My replay and graphics settings are the same, **the memory slider is already maxed out**, but real usage is nowhere near the max in Afterburner. Does anyone have a solution for this?
I have a Zotac 1080 Ti AMP Extreme 11GB GDDR5X that I just bought second hand. It works on all the games I've tried and benches fine with FireStorm installed at 2075 MHz; games are butter-smooth at 1440p. This game crashes and restarts my PC. I've checked the power plan, drivers, and clients, verified files, and re-installed the game. I know the GPU isn't failing. Anyone else have this issue? I've tried 1080p and 1440p, and played with graphics settings to no avail.
This might be totally wrong, but could it just be an artifact of the fan rotating so much each frame? (Notice how by frame 24 the blades have rotated 1375°, which works out to roughly 57° per frame, so there seems to be some multiplier applied to the driver.)
Because the fan has rotational symmetry, it looks like the blades are only moving a little, while the actual rotation that the textures are tied to is far greater, similar to those videos where helicopter blades appear to stand still.
OK, got it now, stupid me: the rotation looks slow but it actually isn't. In a single frame it's jumping roughly 60 degrees, so the fan is actually moving very fast, but with zero motion blur it looks like the texture is moving (which it obviously isn't). I had to view the animation frame by frame to realise that.
Sorry for the trouble
This section describes the texture object management functions of the low-level CUDA driver application programming interface. The texture object API is only supported on devices of compute capability 3.0 or higher.
Creates a texture object and returns it in pTexObject. pResDesc describes the data to texture from. pTexDesc describes how the data should be sampled. pResViewDesc is an optional argument that specifies an alternate format for the data described by pResDesc, and also describes the subresource region to restrict access to when texturing. pResViewDesc can only be specified if the type of resource is a CUDA array or a CUDA mipmapped array.
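For illustration, here is a minimal sketch of the call in the driver API, creating a texture object from an existing CUDA array; the array handle is assumed to have been created elsewhere, and error checking is elided:

```c
#include <cuda.h>
#include <string.h>

/* Sketch: build a texture object over an existing CUDA array.
   `hArray` is assumed to be a valid CUarray created elsewhere. */
CUtexObject makeTexObject(CUarray hArray)
{
    CUDA_RESOURCE_DESC resDesc;
    memset(&resDesc, 0, sizeof(resDesc));
    resDesc.resType = CU_RESOURCE_TYPE_ARRAY;   /* pResDesc: data to texture from */
    resDesc.res.array.hArray = hArray;

    CUDA_TEXTURE_DESC texDesc;
    memset(&texDesc, 0, sizeof(texDesc));
    texDesc.addressMode[0] = CU_TR_ADDRESS_MODE_CLAMP;  /* pTexDesc: how to sample */
    texDesc.addressMode[1] = CU_TR_ADDRESS_MODE_CLAMP;
    texDesc.filterMode     = CU_TR_FILTER_MODE_LINEAR;

    CUtexObject tex = 0;
    /* pResViewDesc is NULL here: no alternate format or subresource view. */
    cuTexObjectCreate(&tex, &resDesc, &texDesc, NULL);
    return tex;
}
```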
This is an issue with vanilla Minecraft; any version between at least 1.2 and 1.7 is affected. Minecraft 1.8+ does not have this issue. Fix it by downgrading your graphics driver to this version -graphics-windows-dch-drivers.html or, if you have an NVIDIA graphics card, telling it via the control panel to run java/javaw on that instead.
This led me to try out a simplified workflow where I simply feed Unreal with TouchDesigner textures. It works pretty well and lets me work procedurally. My intent is to use Unreal as a VJ tool, relying heavily on its real-time abilities.
The amount of storage available for resident images/textures may be less than the total storage available for textures. As such, you should try to minimize the time a texture spends being resident. Don't go as far as making textures resident/non-resident every frame, but if you are finished using a texture for some time, make it non-resident.
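For what it's worth, a minimal sketch of that lifecycle with GL_ARB_bindless_texture (assuming a current GL context with the extension, loaded function pointers, and a complete texture object `tex`):

```c
/* Sketch: residency lifecycle for one bindless texture. */
void useTextureForAWhile(GLuint tex)
{
    GLuint64 handle = glGetTextureHandleARB(tex);  /* freezes the texture's state */

    glMakeTextureHandleResidentARB(handle);  /* required before shaders sample it */

    /* ... frames that sample through `handle` ... */

    /* Finished for a while: drop residency so the driver can page the
       texture out. The handle itself stays valid for the texture's lifetime. */
    glMakeTextureHandleNonResidentARB(handle);
}
```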
I am working with large texture data, split into chunks.
I have 16GB dedicated memory on my GPU.
Making textures resident works well until I reach around 1.6 GB of texture data. From that point on, glMakeTextureHandleResidentARB silently fails for the following texture handles:
On Windows, the limit imposed by the Microsoft WDDM on your graphics driver is 4096 maximum allocations. On Linux, this limit is 4 billion. You can query this limit under Vulkan as VkPhysicalDeviceLimits::maxMemoryAllocationCount. See:
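If it helps, the query itself is cheap to do standalone; a minimal sketch in C against the Vulkan loader (assuming the SDK headers are installed):

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    /* Create a throwaway instance just to enumerate physical devices. */
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_0 };
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                .pApplicationInfo = &app };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS)
        return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[8];
    if (count > 8) count = 8;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        printf("%s: maxMemoryAllocationCount = %u\n",
               props.deviceName, props.limits.maxMemoryAllocationCount);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```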
When not using bindless textures, the workflow runs just fine, regardless of the number of textures (as long as it fits into GPU memory). And when using fewer textures, bindless textures work just fine as well.
You should be able to get around this limit by pooling your textures into 2D texture arrays. This should consolidate multiple physical texture block allocations into a single texture block allocation (or at least N, where N is the number of texture levels).
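A minimal sketch of that pooling idea, assuming all the pooled textures share one size and format (the `loadImage` helper is hypothetical):

```c
/* Sketch: pool 256 same-size RGBA8 textures into one 2D texture array so
   they share a single storage allocation (one per mip level at most). */
GLsizei width = 1024, height = 1024, layers = 256, levels = 11; /* 11 = log2(1024)+1 */
GLuint pool;
glGenTextures(1, &pool);
glBindTexture(GL_TEXTURE_2D_ARRAY, pool);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, levels, GL_RGBA8, width, height, layers);

/* Upload each logical texture into its own layer. */
for (GLsizei layer = 0; layer < layers; ++layer) {
    const void *pixels = loadImage(layer);            /* hypothetical loader */
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,           /* mip level 0 */
                    0, 0, layer,                      /* x, y, layer offset */
                    width, height, 1,                 /* one layer deep */
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);

/* One bindless handle now covers all 256 logical textures; shaders pick
   a layer via the third texture coordinate of a sampler2DArray. */
GLuint64 handle = glGetTextureHandleARB(pool);
glMakeTextureHandleResidentARB(handle);
```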
What I am still missing at the moment (besides a proper understanding of the situation) is a way to query or estimate the limit causing the issue.
So I can indeed not rely on maxMemoryAllocationCount to determine the maximum number of simultaneously resident textures. Does that resident-texture limit (of 1639 in my case) ring a bell for anyone?
This also makes some sense. Ordinarily, the driver can and does shuffle textures and buffer objects on and off the GPU based on application usage to conform to internal limits. But when you lock a resource down to be resident and/or to have a specific address, that ties the driver's hands somewhat. In fact, the bindless specs make a point of mentioning that you should limit what you make resident to what you plan to actually use, rather than just making everything resident.
I'm using an Intel Arc A750 LE and after upgrading from the 4887 WHQL driver to 4952, 4953, and 4972, I've noticed that there are occasional texture flickering issues in Counter-Strike 2 (CS2) using DX11. I've used DDU in safe mode for all upgrades and I've cleared the DirectX Shader Cache. The GPU is not overclocked.
I've given the new 31.0.101.5074 driver some more thorough testing with MSAA enabled in Counter-Strike 2 and my conclusion is that the issue has been fully addressed. The issue appears to have been introduced as a regression in driver 4900 or 4952.
This is driver 4972 on an Arc A750 LE using DX11 (DDU'ed, etc). The issue seems to be affected by the anti-aliasing settings. The video above is with MSAA x8, but I've seen the issue with MSAA x2 too. If I use CMAA2 instead, the problem appears to go away. Attached is the SSU report from the system after updating to driver 4972 and reproducing the issue. The issue seems to be a regression, since driver 4887 (WHQL) does not exhibit the same behavior.
I've tried with Vulkan too (specifying -vulkan as a launch option), but there is a whole new level of texture glitches on Vulkan, both with MSAA x8 and CMAA2. Others have reported the same thing, so it does not appear that using Vulkan is a good choice for Counter-Strike 2 just yet.
I've done a quick round of testing on the new 31.0.101.5074 driver and it appears as if the problem has been addressed, not just with MSAA x4 as noted in the release notes, but also with MSAA x2 and x8. I will start using MSAA anti-aliasing in Counter-Strike 2 from now on and make a note of any issues I discover. Give me a few days and I should be able to confirm whether the issue is really gone.
We appreciate your efforts in testing the new driver version. We are glad to hear that the issue seems to have been addressed, and we understand that you would like to keep playing with MSAA anti-aliasing in Counter-Strike 2 while taking note of any problems you may discover. We will wait for you to test the game. Please do not hesitate to let us know if your original issue persists. If you encounter a different problem, we recommend that you submit a new thread; reporting one issue per thread will help us keep the topics organized.
Major issue: Garbage textures on AMD cards whenever sparse is enabled.
Major issue: As of 19.3.1, enabling sparse also causes a driver crash on AMD cards. This wasn't an issue on the previous driver, 19.2.3, where it just caused garbage textures; drivers 19.2.1 and 19.2.2 just caused an entirely black window. So far 19.2.3 seems to behave the best of the bunch that were tested.
Small issue: Gregory (the PR author) read the sparse spec on AMD, and it looks like sparse depth isn't actually supported: the driver reports a compatible sparse format for depth textures, but the texture isn't attachable to a framebuffer. Link to detection here (a rough sketch of the idea is after this list).
Major issue: Memory leak or a similar issue when using sparse color textures. Monitoring GPU-Z doesn't show any abnormal/high VRAM usage, but our plugin reports that it's running out of memory, which is odd and eventually leads to a crash. We noticed the same on previous driver releases, but we thought it was related to the previously fixed issues.
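Since the linked detection code isn't quoted here, the following is only a guess at how such a check could look with GL_ARB_sparse_texture: ask whether the driver reports a sparse page size for a depth format, then attach a small sparse depth texture to an FBO and test completeness. All names are illustrative, not the plugin's actual code.

```c
#include <stdbool.h>

/* Sketch (not the plugin's actual code): does a sparse depth texture
   really work as a framebuffer attachment on this driver?
   Assumes a current GL context with ARB_sparse_texture. */
static bool sparseDepthUsable(void)
{
    GLint numPageSizes = 0;
    glGetInternalformativ(GL_TEXTURE_2D, GL_DEPTH_COMPONENT32F,
                          GL_NUM_VIRTUAL_PAGE_SIZES_ARB, 1, &numPageSizes);
    if (numPageSizes == 0)
        return false;  /* no sparse page size reported for this format */

    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32F, 256, 256);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, tex, 0);

    /* On the AMD drivers described above, this reportedly fails even
       though the format claims sparse support. */
    bool ok = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;

    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);
    return ok;
}
```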