Post Processing Quality


Magdalena Liendo

Aug 3, 2024, 11:48:26 AM
to tistracountwax

I meant that I implemented BRDF, environment lighting, tone mapping, gamma correction, etc. in a shader language, in a way that I think is interchangeable with MeshStandardMaterial. I was trying to speak in general terms; sorry if it was confusing.
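To make the tone mapping and gamma correction steps concrete, here is a plain-JavaScript sketch of the math (the function names are mine, not from any library; a shader would do the same per fragment):

```javascript
// Reinhard tone mapping: compresses unbounded HDR values into [0, 1).
function reinhard(x) {
  return x / (1 + x);
}

// Gamma correction: convert linear light to display (sRGB-approximate) space.
function gammaCorrect(x, gamma = 2.2) {
  return Math.pow(x, 1 / gamma);
}

// Applied per channel, tone mapping first, then gamma:
const hdrColor = [2.0, 0.5, 0.1];   // linear values, may exceed 1.0
const ldrColor = hdrColor.map(c => gammaCorrect(reinhard(c)));
```

Reinhard is just one curve; an ACES-style filmic fit, for example, is more elaborate, but the tone-map-then-gamma order stays the same.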

First up, if you use post-processing then the default antialiasing will stop working, so you need an AA pass. You can achieve this either with a pass such as SMAA/FXAA/TAA, or (preferably) with a multisampled render target, which is equivalent to the normal antialiasing.

Personally, I recommend using the vanruesc/postprocessing library instead of the default three.js post-processing; it will handle the WebGLMultisampleRenderTarget details for you, and I find it generally easier to work with. It also does things like automatically combining effects into fewer passes to improve performance.

Another important consideration is that some of these passes are far more performance-intensive than others. For example, vignette, tone mapping, color grading, and film grain all have only a minor impact on performance, especially if they are combined into a single pass. FXAA has a medium impact. SSAO in particular, I find, can have a big impact, and you will need to tune it carefully (especially the number of samples).

Final note: this all assumes a final display in the standard-dynamic-range sRGB color space. In the future, as HDR displays and wider color spaces come to the web, the tone mapping/gamma correction steps may change slightly. Everything else should stay the same, though.

My problem is that I am working on a project using some other shaders, which reinforce this effect even more. Since this really influences the atmosphere, I would be very grateful for any way to get rid of it.

Probably on low settings, high dynamic range is disabled and you have no eye adaptation. What I usually do is try to approximately match the desired brightness with eye adaptation disabled (this can be tuned in the view settings), so that players who have it disabled still see the scene correctly. This might not be possible if the brightness of the scene changes (caves and outdoors, for example).
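The "approximately match the desired brightness" step boils down to picking a fixed exposure multiplier that maps the scene's average luminance to a chosen target. A plain-JavaScript sketch (the 0.18 mid-grey target and both function names are my assumptions, not from any engine):

```javascript
// Average luminance of a pixel using Rec. 709 weights.
function luminance([r, g, b]) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Fixed exposure: the multiplier that brings the scene's average
// luminance to a target mid-grey, standing in for eye adaptation.
function fixedExposure(pixels, targetLuminance = 0.18) {
  const avg = pixels.reduce((sum, p) => sum + luminance(p), 0) / pixels.length;
  return targetLuminance / avg; // multiply every pixel by this
}
```

With adaptation on, the engine recomputes this multiplier continuously; with it off, you bake in one value per area, which is why very different environments (caves vs. outdoors) break the approach.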

Forcing the post-processing to a specific quality might be an easier approach, and since I am using a cel shader, it should still look good if the post-processing is forced down to medium. I will have to play around with it a bit. It would be great to know how big the performance impact of this would be; is there a way to benchmark it?
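On benchmarking: a crude but workable approach is to time a batch of frames with the effects on and off and compare the averages. A plain-JavaScript sketch (the render callbacks in the usage comment are hypothetical placeholders for your real render calls):

```javascript
// Rough frame-time benchmark: run a batch of frames and return the
// average cost in milliseconds per frame.
function benchmark(renderFrame, frames = 100) {
  const t0 = performance.now();
  for (let i = 0; i < frames; i++) renderFrame();
  return (performance.now() - t0) / frames;
}

// Hypothetical usage: measure once without post-processing, once with it,
// and the difference is roughly what the effects cost per frame.
// const base = benchmark(() => renderWithoutPost());
// const post = benchmark(() => renderWithPost());
// console.log(`post costs ~${(post - base).toFixed(2)} ms/frame`);
```

On a real GPU you would also want to let the pipeline warm up for a few frames before timing, since the first frames include shader compilation.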

Properly establishing color tones and lighting in a game is as necessary to its success as the gameplay itself. A game with stunning graphics will entice players to buy it and make the game a more enjoyable experience overall, with post-processing playing a part in that too.

Game developers strive to create games with visually stunning graphics that bring their creations to life. Several effects grouped under post-processing allow developers to achieve this. These effects are applied in a specific order, starting with SSR and ending with Grain:
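That fixed ordering can be modeled as a pipeline: an ordered list of passes, each a function from image to image. A minimal JavaScript sketch (the pass list is illustrative, not the full chain, and the implementations are trivial stand-ins):

```javascript
// Post effects as an ordered list of image -> image functions.
// Order is what matters: SSR runs first, Grain runs last.
const pipeline = [
  ['SSR',     img => img],
  ['Bloom',   img => img],
  ['ToneMap', img => img.map(c => c / (1 + c))],
  ['Grain',   img => img],
];

function renderPost(image) {
  return pipeline.reduce((img, [, pass]) => pass(img), image);
}
```

The reduce makes the ordering explicit: swapping, say, ToneMap before Bloom changes the output, which is exactly why engines fix the sequence.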

The problem I have is with post processes: those are defined in a GlobalVolume GameObject placed in a scene and are used if the Camera in the scene has Post Processing enabled. Does this mean that there's no reference to the Quality Settings for those, i.e. that there's no way to specify a GlobalVolume per quality level?

I ran into the same thing recently when I was setting up quality settings for my game. My solution was to write a script that looks for camera objects after every scene load and turns post-processing on or off depending on the current quality settings.

A common example I've encountered: say I shoot with an old flash gun that doesn't communicate with my DSLR, so it is really up to me to adjust white balance manually. Am I fine if I shoot a bunch of colour-unbalanced photos (in most cases with a blue cast) and fix them in post-processing? Or should I spend time getting the colour right (or close) on the spot, staying conscious of changing light so I don't need to re-adjust white balance later?

In fact, in RAW, the white balance you set in-camera is nothing but advisory information for the post-processing software. During RAW conversion, a different multiplier is applied to each of the red, green, and blue channels depending on the setting, and if you're doing that conversion from a RAW file, you can always choose to do it differently, unless you destroy the original.
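In other words, white balance in RAW conversion boils down to per-channel gains applied to the same underlying data. A minimal sketch (the gain values are made up for illustration; real converters derive them from the chosen WB setting):

```javascript
// White balance as one gain per channel, applied to linear raw values.
function applyWhiteBalance([r, g, b], [gainR, gainG, gainB]) {
  return [r * gainR, g * gainG, b * gainB];
}

// Two different WB "settings" are just two gain triples; the same raw
// pixel can be rebalanced either way at any time.
const raw = [0.4, 0.5, 0.7];                      // blue-cast scene data
const asSettingA = applyWhiteBalance(raw, [1.9, 1.0, 1.4]);
const asSettingB = applyWhiteBalance(raw, [1.2, 1.0, 2.1]);
```

This is why the in-camera choice is "advisory": nothing is baked into the raw values themselves.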

The only exception is when the lighting is so strongly colored that it affects the metering oddly. If you have the white balance set in-camera, it will apply to the displayed histogram. Some people really are concerned about this, and have invented the idea of "UniWB", a custom white balance designed to balance the three color channels evenly. If you are very meticulous, and if you are trying to make the most of extreme scenes, you may be interested in seeing if that helps. (You probably also want to reduce the default contrast settings, for the same reason.)

Also, see this related question: If shooting RAW, is the white balance selected in camera irrelevant for exposure? I did a simple test, and my conclusion is that even in an extreme situation, the metering isn't thrown off by more than a third of a stop. This is likely to also be the case with the histogram, and therefore, I would recommend not really stressing out about uniwb.
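For reference, the "third of a stop" figure is just a base-2 logarithm of the ratio between two linear meter readings:

```javascript
// Exposure difference in stops between two linear meter readings:
// one stop is a doubling of light.
function stopsDifference(a, b) {
  return Math.log2(b / a);
}

stopsDifference(1, 2);              // exactly 1 stop
stopsDifference(1, Math.cbrt(2));   // one third of a stop
```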

You can easily lose colors if you set your WB incorrectly, so yes, it matters very much how you set it. Although there is a lot of latitude for correcting it in post-processing, it is possible to lose colors.

The first statement of mattdm's answer is simply incorrect. He alludes to it, but doesn't emphasize it: you CAN blow out individual colors if WB is set incorrectly, and not notice it. If you are shooting at night on auto WB, the result will be much too warm, and the histogram will suggest that your WB is correct (showing all RGB histograms in range) when, in fact, you are blowing out blue hues. See this blog post for an example: -raw-file-myth-about-white-balance.html and this one for best practices and why to use them: -balance-settings-at-night.html
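The failure mode described above can be checked numerically: apply the conversion's WB gains and see whether any channel gets pushed past full scale. A hedged sketch (the gains and the 1.0 full-scale threshold are illustrative):

```javascript
// After WB gains are applied, any channel pushed past full scale clips,
// even if the in-camera histogram (computed with the wrong WB) looked fine.
function clippedChannels(rawPixel, gains, fullScale = 1.0) {
  return rawPixel
    .map((value, i) => value * gains[i])
    .map((value, i) => (value > fullScale ? 'RGB'[i] : null))
    .filter(Boolean);
}

// A red channel near full scale survives neutral gains but clips once a
// warm-correcting gain is applied during conversion:
clippedChannels([0.9, 0.5, 0.6], [1.3, 1.0, 1.0]); // ['R']
```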

As a general comment, it is always best to get the exposure right in camera, because it means less post processing. Look at the LCD screen and ask yourself: does this image I just took look like the scenery I am seeing with my eyes? If not, your settings are wrong.

I tried to keep the quality and player settings as close as possible between projects. All materials use Diffuse + Normal maps. All post-processing profiles use only a size-16 LUT for color grading (Stack V2 and URP versions set to Low Definition Range).

The oldest setup, Unity 2018 with Post Processing Stack V1, works best. Switching to Post Processing Stack V2 results in FPS dropping by 10-15 (below 60).
Unity 2020.1 is visibly slower than 2018 with any post-processing. I assume that with a slightly more complex scene it would be visibly slower without post-processing as well.
Unity 2020.1 with URP and the built-in post-processing is the worst.

There is a need for some serious improvements right now with mobile in general, but especially with mobile VR. It is very hard to get post-processing working with mobile VR, and even harder to get it working at a high frame rate. Some people suggest not using any post-processing at all, but it really helps certain visuals.

Similarly, mobile VR has the option to use Multipass, Single Pass, or Single Pass Instanced. However, Single Pass Instanced is nearly completely broken, and Single Pass does not work with most post-processing on mobile. So mobile VR developers are largely stuck with Multipass, which is the least performant option.

However, the people asking for '300 dpi' files most likely have no idea what they are actually talking about. Usually they just thoughtlessly repeat a buzzword they once learned without really understanding what it means. In most cases, they'll be happy if you just deliver files with the EXIF ppi field set to '300', regardless of the actual pixel dimensions.
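For what it's worth, the relationship between pixels, ppi, and print size is a single division, which is why the EXIF ppi field alone says nothing without the pixel dimensions. A quick sketch:

```javascript
// Print size in inches implied by pixel dimensions at a given ppi.
// Changing the EXIF ppi field changes only this metadata, not the pixels.
function printSizeInches([widthPx, heightPx], ppi = 300) {
  return [widthPx / ppi, heightPx / ppi];
}

printSizeInches([6000, 4000]);       // a 24 MP file prints 20 x 13.3 in at 300 ppi
printSizeInches([6000, 4000], 150);  // or 40 x 26.7 in at 150 ppi
```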

Sending just your camera's full-size files (with or without some post-processing applied, as you see fit) should be just fine. And don't worry about the RGB colour space; images out of the camera or out of Lightroom will always be properly tagged, so both sRGB and Adobe RGB (or whatever) should be equally fine.

The term post-processing (or postproc for short) is used in the video and film industry for quality-improvement image processing (specifically digital image processing) methods used in video playback devices, such as stand-alone DVD-Video players; video playing software; and transcoding software. It is also commonly used in real-time 3D rendering (such as in video games) to add additional effects.
