Scanline

Carmen Kalua

Aug 3, 2024, 5:24:33 PM
to vorcherstagtai

Good morning!

It's been years since I worked in 3ds Max, but my current job has gotten me back into it. I was wondering, what should I be doing/where should I be looking to get Arnold quality shadows out of the scanline renderer? I've been messing around with the Radiosity plug-in in Advanced Lighting as well as advanced ray traced shadows on standard lights, but it always comes out looking... eh.

The major shadows look OK, but I'm losing a lot when it comes to the smaller shadows (on the bolts and such). I'm guessing it has something to do with the shadow bias? But as I said before, it's been a long time since I worked in Max (2014 was just coming out when I stopped), so I'm still getting used to this stuff.

Having said that, the more you increase the quality, the slower scanline gets. There is a point, IMO, where it's not worth fighting it, and you should just use another render engine, such as Arnold, V-Ray, or any other.

Box and a sphere, two omni lights, scanline renderer, one standard material for each object. The omni lights are turned on, but everything except the background renders black. If I change the material to self-illuminating then I do see the material render, but right now its diffuse properties aren't picking up the lights.

I'm rendering a 10 second animation @ 24fps. The model is quite large. There are 6 buildings and a large (but not insanely detailed) terrain. There are two large cylinders beyond the terrain with respective sky and treeline textures.

On the first camera/render/video the rendering will stop in the middle of rendering a frame. Sometimes it's frame 4. Sometimes it's others. The first few will render fine with a slight hesitation as the line approaches and crosses the buildings in the distance, but usually around frame 8 it stops completely at the buildings in the distance.

Second. Check your RAM, are you using more than you have? ...if so, maybe try switching to mr. I think you could use mr as a scanline engine, and still have it load and unload geometry per bucket. One of the mr guys here might be able to verify this.

Fourth, it might be corrupt geometry. Divide your model up into layers. Turn on 20% of the layers and hit render. Does it crash? ....if not, try the next 20% and so on until it does. Once you find the set of layers causing the crash begin testing in smaller sets of layers until you identify the layer with the corrupt geometry. Then figure out which piece of geometry it is, and rebuild it from scratch.

Fifth, ...reset your render engine. This works best with mr or V-Ray, but it might work with scanline also. Switch to a different engine, hit render, then switch back to your engine of choice, and redo your settings from scratch, or from presets that are already verified to render correctly.

I still might proceed with your suggestions, but the odd thing is, rendering without the batch actually worked. It still hesitated at the freezing points of the render, but it crunched through to the next frame. I checked all the settings that I can I think to check between the batch setup and the regular setup. They both look the same. Does this shed any light on the issue at hand? Does the batch render option behave differently?

Also yes, I'll start rendering individual frames from now on. It makes sense and seems that it would reduce the potential points for error. Is After Effects the best option for post? I have both AE and premiere.

I've heard a lot of people working on VR talk about scanline racing and that it's supposed to help improve motion-to-photon latency. However, it isn't clear to me how this can be done with OpenGL. Could someone explain how scanline racing works, and how it can be implemented on modern GPUs?

Traditionally, rendering is double-buffered, which means there are two buffers stored in GPU memory: one that is currently being scanned out ("front buffer"), and one that is being rendered to ("back buffer"). Each frame, the two are swapped. The GPU never renders to the same buffer that's being scanned out, which prevents artifacts due to potentially seeing parts of an incomplete frame. However, a side effect of this is increased latency, since each frame may sit around in the buffer for several ms before it starts being scanned out.
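To put rough numbers on that latency cost, here is an illustrative back-of-the-envelope sketch (my own arithmetic, assuming a 60 Hz display and ignoring render time and blanking intervals):

```python
# Rough latency arithmetic: double buffering vs. beam racing (illustrative).
REFRESH_HZ = 60.0
FRAME_MS = 1000.0 / REFRESH_HZ  # ~16.67 ms per refresh period

# Double-buffered with vsync: a pixel rendered right after a swap can wait
# up to a full frame before scanout begins, plus up to a full frame for the
# beam to reach it -- roughly two frames in the worst case.
worst_case_double_buffered = FRAME_MS + FRAME_MS   # ~33.3 ms

# Beam racing: each strip is rendered just ahead of the beam, so latency is
# on the order of one strip's scanout time.
strips = 8
beam_racing_latency = FRAME_MS / strips            # ~2.1 ms per strip
```

The exact figures depend on where in the frame a pixel lands, but the order-of-magnitude gap is why shaving the buffering step matters for VR.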

There are a lot of disadvantages to this approach: it has very stringent performance requirements, it has to be timed very carefully against vsync, and it greatly complicates the rendering process. But in principle it can shave milliseconds off your latency, which is why VR folks are interested in it.

I have come up with exact, microsecond-accurate formulas, expressed as an offset from VSYNC, to predict the position of a tearline. Tearlines during VSYNC OFF are always raster-exact, so you can steer them out of visibility during strip-level "simulated front-buffer rendering" via repeated VSYNC OFF buffer swaps.
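The core of that raster arithmetic can be sketched like this (my own illustration, assuming a fixed refresh rate and glossing over porch/blanking timing details that a real beam-racing implementation must handle):

```python
# Illustrative sketch: predict the current raster scan line from the time
# elapsed since the last VSYNC. A VSYNC OFF buffer swap issued now tears
# at roughly this line, so swapping while the beam is inside a strip you
# have already finished keeps the tearline invisible.

def raster_line(time_since_vsync_us: float,
                refresh_hz: float = 60.0,
                total_lines: int = 1125) -> int:
    """Estimate which scan line is being scanned out right now.

    total_lines includes the vertical blanking interval (e.g. 1125 total
    lines per frame in standard 1080p timing).
    """
    frame_time_us = 1_000_000.0 / refresh_hz          # one refresh period
    frac = (time_since_vsync_us % frame_time_us) / frame_time_us
    return int(frac * total_lines)
```

Real implementations also need the display's actual pixel clock and blanking geometry rather than a plain fraction, but the fraction shows the idea.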

If it's of interest, the Dreamcast had a "racing the beam" rendering mode, whereby it could dedicate a relatively small fraction of memory to framebuffer pixels (e.g. 64 scan lines) and would render rows of 32 in turn, synchronised to the display update. This, however, was only used to save memory; I doubt anyone was generating modified geometry for later parts of the display.

Maybe these could be turned into a shader? Shaders seem to perform better than overlays in addition to being easier to use. Or, if someone knows parameter settings for any of the lightweight scanline shaders to achieve similar results, that would be awesome.

Can someone explain to a noob, in the simplest of terms, how to add shaders and scanlines to games in RetroArch? I have looked and looked, and I know how to apply them, but I can't get them to show up. Any video tutorials out there?

So when you launch a game and press F1, this will load the RA menu and should put you in the Quick Menu by default. If not, the Quick Menu only shows up on the first tab when a game is loaded. From here, go down to Shaders, then go to "Load preset". Choose a category of presets (CG, GLSL, or Slang); I normally choose CG or GLSL. I believe one is for DirectX and the other (GLSL) is for OpenGL, so it depends on the driver you're using and the system. For the most part, though, those two work. From there, it will list category folders. Go into them and search around for something you like. Load the preset and you'll see the Shaders menu change and list all the shader passes.

The more shader passes there are, the more power it may require. That's not always the case, but the more layers a shader adds, the more power it generally needs. Once you've selected it, choose Apply Changes. If you use custom configs, the shader settings will be saved for that config, or you can save it as a core or game preset, which should automatically load for that game/core. If you don't like a shader, reduce the shader passes to 0, then apply changes, and choose another shader to test. When you finally settle on something you like, don't forget to save the core or game preset, or make sure you're loading the right custom config.
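For what it's worth, the preset you load through the menu is just a plain-text file on disk. A minimal single-pass GLSL preset looks roughly like this (key names follow RetroArch's preset format; the shader path here is only an example):

```
shaders = "1"
shader0 = "shaders/crt-aperture.glsl"
filter_linear0 = "false"
scale_type0 = "source"
scale0 = "1.0"
```

Saving a core or game preset from the menu writes a file in this format for you, so you rarely need to edit one by hand.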

Step one: open a game. Games need to be open for the menu to appear. With the game open, press F1 (if you haven't changed the binding), which will bring up the menu. Go to "Quick Menu", then scroll down to Shaders, set shader passes to one or more, then click through and find the shaders you want.


I switch around once in a while depending on my mood. When I want heavier scanlines I like crt-aperture, but also easymode-halation or one of the hyllian shaders. Another one I like and used to use all the time is royale-kurozumi in the \cgp folder. How good it looks may vary with your display resolution and colours, but check it out, because it is a really good shader if you want the high-quality PVM look.

I prefer no scanlines myself, and instead prefer as clean of upscaling as I can get. On 3D systems, generally no shaders if it can scale internal resolution or scale up and have it look good. On 2D systems I generally prefer the Pixellate shader in the Retro folder. I like things to not be rounded or blurry.

I rotate, translate and project my points to get a 2D-space representation of each triangle. Then I take the three triangle points and implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the left and right edges of the triangle, so that I can scan it horizontally, row by row, and fill it with pixels.

This works, except I also have to implement z-buffering. This means that, knowing the rotated and translated z coordinates of the triangle's three vertices, I must interpolate the z coordinate for all the other points my scanline algorithm finds.

And if the current z is closer to the viewer than the previous z at that index, THEN I write the color to the color buffer AND write the new z to the z buffer. (My coordinate system is x: left → right; y: top → bottom; z: your face → computer screen.)
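A minimal sketch of that loop in Python (my own illustration, not the poster's code; vertices are already-projected (x, y, z) tuples, with larger z meaning farther from the viewer, matching the convention above):

```python
# Z-buffered scanline fill of one triangle (illustrative sketch).
# zbuf and colorbuf are flat lists of length width * height; zbuf starts
# filled with a far value (e.g. 1.0 or float('inf')).

def fill_triangle(v0, v1, v2, width, height, zbuf, colorbuf, color):
    # Sort vertices by y so we can walk scan lines top to bottom.
    v0, v1, v2 = sorted((v0, v1, v2), key=lambda v: v[1])

    def edge_x_z(a, b, y):
        """Linearly interpolate x and z along edge a->b at scan line y."""
        t = (y - a[1]) / (b[1] - a[1])
        return a[0] + t * (b[0] - a[0]), a[2] + t * (b[2] - a[2])

    y_start = max(int(v0[1]), 0)
    y_end = min(int(v2[1]), height - 1)
    for y in range(y_start, y_end + 1):
        # The long edge is v0->v2; the short edge depends on the half.
        xa, za = edge_x_z(v0, v2, y)
        if y < v1[1]:
            xb, zb = edge_x_z(v0, v1, y) if v1[1] != v0[1] else (v1[0], v1[2])
        else:
            xb, zb = edge_x_z(v1, v2, y) if v2[1] != v1[1] else (v1[0], v1[2])
        if xa > xb:
            xa, xb, za, zb = xb, xa, zb, za
        span = max(xb - xa, 1e-9)
        for x in range(max(int(xa), 0), min(int(xb), width - 1) + 1):
            t = (x - xa) / span
            z = za + t * (zb - za)          # interpolate depth across the row
            idx = y * width + x
            if z < zbuf[idx]:               # closer to the viewer wins
                zbuf[idx] = z
                colorbuf[idx] = color
```

Note the depth here is interpolated linearly in screen space, which is exactly the approximation the 1/z discussion below is about.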

The problem is, it goes haywire. The project is here and if you select the "Z-Buffered" radio button, you'll see the results... (note that I use the painter's algorithm (-only- to draw the wireframe) in "Z-Buffered" mode for debugging purposes)

PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears there's no change. What am I missing? (Could anyone clarify precisely where you must turn z into 1/z, and where, if at all, to turn it back?)

The problem was that, in an attempt to increase my frame rate, I was drawing 4×4 px boxes at every 4th pixel instead of actual pixels on screen. So I was drawing 16 px per pixel but checking the z buffer for only one of them. I'm such a boob.

TL;DR: The question still stands: how/why/when do you have to use the reciprocal of z (as in 1/z) instead of z? Because right now everything works either way (there's no noticeable difference).

Why? Because for every step in the X' direction you don't move the same amount in the Z direction (in other words, Z is not a linear function of X'). Why? Because the farther right you go, the farther away the segment is from the camera, so one pixel represents a longer distance in space.
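A worked example of that non-linearity (my own numbers, not from the thread): take a 3D segment from (-1, 0, 2) to (1, 0, 6) and project with x' = x/z. At the screen-space midpoint, linear interpolation of z gives the wrong depth, while interpolating 1/z recovers the depth of the actual 3D point that projects there:

```python
# Linear-z vs. reciprocal-z interpolation at the screen-space midpoint.
x0p, z0 = -1 / 2, 2.0      # projected x and depth of P0 = (-1, 0, 2)
x1p, z1 = 1 / 6, 6.0       # projected x and depth of P1 = (1, 0, 6)

z_linear = (z0 + z1) / 2               # naive screen-space lerp: 4.0
z_persp = 1 / ((1/z0 + 1/z1) / 2)      # interpolate 1/z instead: 3.0

# Cross-check: parametrize the 3D segment P(s) = (-1 + 2s, 0, 2 + 4s) and
# solve (-1 + 2s)/(2 + 4s) = (x0p + x1p)/2, which gives s = 0.25 and hence
# z = 3.0 -- agreeing with the 1/z interpolation, not the linear one.
```

This also hints at why "everything works either way" in some scenes: when the depth range across a triangle is small, z_linear and z_persp nearly coincide, and the error only becomes visible with large depth spans per primitive.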
