Odin With Us


Giuliana

Aug 5, 2024, 1:46:25 AM
to chopovija
Yellow tanks suffer from a lot of great purple counters; thank Guinevere for that. Players before 2019 were prone to develop a lot of purple heroes just to counter Guin, and all other viable yellow tanks suffer as a consequence. Heroes like Rigard (some players have at least 2-3 of him maxed and perhaps emblemed), Sabina, Tiburtus (now more so with their respective costumes), and Proteus became a staple of most rosters, on top of very good dark legendaries such as Sartana, Domitia (now more so with their costumes, too), Seshat, Kageburado, Ursena, Panther, etc. And now there are a ton of very good purple heroes, i.e. Onyx, Killhare, Jabberwock, Ametrine, Clarissa and Alfrike, among others.

Odin code is organized into packages, where each package is the set of Odin files contained within a single directory. Any package can be used as the entry point as long as it has a `main` procedure defined.
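As a minimal sketch, a single-file package that can serve as the entry point might look like this (the printed message is mine):

```odin
// Any directory of .odin files forms a package; this one declares
// itself `main` and defines the `main` procedure, so it can be the
// entry point of the program.
package main

import "core:fmt"

main :: proc() {
	fmt.println("running from the entry-point package")
}
```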


The Odin compiler ships with bindings for SDL2, a popular platform-abstraction library. It makes it fairly easy to create a window and start an event loop, and the code should work on all platforms without much modification.
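For illustration, a window plus a minimal event loop using the `vendor:sdl2` bindings might look roughly like this; exact signatures (for example, what `PollEvent` returns) have shifted between Odin releases, so treat it as a sketch rather than copy-paste-ready code:

```odin
package main

import SDL "vendor:sdl2"

main :: proc() {
	SDL.Init(SDL.INIT_VIDEO)
	defer SDL.Quit()

	window := SDL.CreateWindow("demo",
		SDL.WINDOWPOS_CENTERED, SDL.WINDOWPOS_CENTERED,
		800, 600, {})
	defer SDL.DestroyWindow(window)

	// Keep the window alive until the user closes it.
	loop: for {
		event: SDL.Event
		for SDL.PollEvent(&event) {
			#partial switch event.type {
			case .QUIT:
				break loop
			}
		}
	}
}
```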


There are two pieces of data I want to send: One is the screen resolution (more precisely, the dimensions of the window or the rendering target, in pixels). The other is the list of rectangle data with placement and color.
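A plausible CPU-side shape for those two pieces of data (the names `Uniforms`, `Rect`, and `Color` are mine, not from the original code):

```odin
// Hypothetical CPU-side mirror of what the shaders will read.
Uniforms :: struct {
	screen_size: [2]f32, // render-target dimensions, in pixels
}

Rect :: struct {
	pos:  [2]f32, // top-left corner, in pixels
	size: [2]f32, // width and height, in pixels
}

Color :: [4]f32 // RGBA, each component in 0..1
```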


As I mentioned, the way to receive data on the GPU is to add new parameters to the shader functions and annotate them with `[[buffer(N)]]`. But before we see that, we need to consider what the shader code needs to do.


The original shader code just returns a rectangle with hardcoded coordinates, but now we must receive the coordinates from the user code. The user code specifies coordinates in screen space, with the top-left corner being the (0,0) point, the x-axis advancing from left to right, and the y-axis advancing from top to bottom.
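The conversion the shader has to perform, then, is from that screen space into Metal's clip space (origin at the center, y pointing up, both axes running -1..1). A hypothetical helper in the Metal Shading Language:

```metal
// screen_to_clip is a name I made up; it maps pixel coordinates
// (origin top-left, y down) to clip-space coordinates (origin
// center, y up, both axes in -1..1).
float2 screen_to_clip(float2 p, float2 screen_size) {
    float2 ndc = p / screen_size * 2.0 - 1.0;
    ndc.y = -ndc.y; // screen y grows downward, clip y grows upward
    return ndc;
}
```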


If you would like to follow along, but your OS is Windows, I encourage you to learn how to achieve a similar kind of setup in Direct3D. If your OS is Linux, I encourage you to explore doing this setup in Vulkan.


We’ll deal with the rendering pipeline boilerplate as just that: boilerplate. People who build advanced 3D rendering engines will probably customize many aspects of this pipeline, but for us, we’ll just take it as-is.


Inspecting the Activity Monitor, I notice that our little program is consuming about 4% of CPU and 10% of GPU. I’m not really sure what that means. We’re not really doing any computation on the GPU as far as I can tell.


For now I will not bother trying to fix this. However, it does serve as a baseline that we can measure future iterations of our code against. If GPU usage jumps to 50%, then we can reasonably say that our code is taking a big toll on the GPU. If, however, it stays around 10%, then we’ll just assume that everything is fine and dandy.


This might appear to be an Odin compiler error, but it’s not. The building of the demo program is not failing; it’s working. The program runs, loads the shader code, encounters an error, and reports it (line 41), then basically exits (line 42).


I can’t claim that I understand exactly what’s going on here, but from the looks of it, a shader is only applicable to a “render pipeline state”, which the following code sets up. It basically specifies the name of the vertex function, the name of the fragment function, and the pixel format for the “color attachment”.
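Sketched in Odin against the `vendor:darwin/Metal` bindings, that setup is roughly as follows; I'm writing the method names from memory, and `device`, `vertex_function`, and `fragment_function` are assumed to have been obtained earlier, so verify everything against the bindings:

```odin
// Sketch only: a render pipeline state ties together the two shader
// functions and the pixel format of color attachment 0.
desc := MTL.RenderPipelineDescriptor.alloc()->init()
defer desc->release()

desc->setVertexFunction(vertex_function)
desc->setFragmentFunction(fragment_function)
desc->colorAttachments()->object(0)->setPixelFormat(.BGRA8Unorm_sRGB)

pipeline_state, pso_err := device->newRenderPipelineStateWithDescriptor(desc)
```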


Notice we are not sending any data to the GPU. We’re just telling it to draw a triangle strip. The rendering pipeline will invoke our vertex function once for each vertex. Once it gets all the vertex coordinates, it will determine where the resulting triangle strip should be rendered to the screen, and for each pixel, it will call the fragment function.
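The draw call itself is a one-liner along these lines, assuming `render_encoder` holds the current render command encoder (4 vertices, since a quad expressed as a triangle strip needs exactly 4):

```odin
// No vertex buffers are bound; the vertex function fabricates the
// corners itself, so we only name the primitive type and the count.
render_encoder->drawPrimitives(.TriangleStrip, 0, 4)
```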


I want to draw a rectangle in the middle of the screen with some interesting color that is different from the background color we’re using so far. This is to verify that things are working properly.


Let’s take a look at the shader code. It’s going to be different from what we’ve seen so far in Shadertoy. For one thing, this is the Metal Shading Language; it has a different way of specifying inputs and outputs. For another, this includes both a fragment shader and a vertex shader, whereas we only saw the fragment shader in Shadertoy.


There’s a lot to cover here, but I don’t think we need to turn this into a tutorial on the Metal Shading Language. I’m not qualified to give such a tutorial anyway. You’d better consult the official reference manual:


One is the use of special [[tags]] to denote struct fields and function parameters that have a special meaning. You can actually add more parameters to the shader functions. When we pass data from the CPU to the GPU, we’ll use some special tag to denote that.


For our vertex shader, notice how the values for the vertex coordinates are basically hardcoded. They return a rectangle that is half the screen size, positioned in the middle. The official guide I posted above explains the coordinate system. Here’s a relevant screenshot:


The other important thing to notice is that for each vertex we’re not just returning the coordinates (marked by the [[position]] tag), but we can assign any number of float attributes. These attributes will be interpolated for each pixel as they pass to the fragment shader. This interpolation is done by the GPU for us. We don’t need to do anything to make it happen. Here we’ve assigned a color to each vertex, but we can also assign other things.
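Putting those pieces together, a hardcoded-quad shader pair in the Metal Shading Language might look like this; the names `vertex_main`/`fragment_main` are placeholders, and the per-vertex colors are mine, chosen just to make the interpolation visible:

```metal
#include <metal_stdlib>
using namespace metal;

struct VertexOutput {
    float4 position [[position]]; // consumed by the rasterizer
    float4 color;                 // interpolated across the primitive
};

// Four corners of a quad, half the clip-space extent, centered.
vertex VertexOutput vertex_main(uint vid [[vertex_id]]) {
    const float2 corners[4] = {
        {-0.5, -0.5}, {0.5, -0.5}, {-0.5, 0.5}, {0.5, 0.5},
    };
    const float4 colors[4] = {
        {1, 0, 0, 1}, {0, 1, 0, 1}, {0, 0, 1, 1}, {1, 1, 1, 1},
    };
    VertexOutput out;
    out.position = float4(corners[vid], 0.0, 1.0);
    out.color    = colors[vid];
    return out;
}

fragment float4 fragment_main(VertexOutput in [[stage_in]]) {
    return in.color; // already interpolated per pixel by the GPU
}
```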


The basic concept, as far as I could understand it, is rather simple. It’s basically a byte buffer assigned a slot number. There are several ways to manage it from the CPU side, but on the GPU side, as far as I can tell, you receive it by annotating a function parameter with `[[buffer(N)]]`, where N is the slot number.
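In shader code, that would look something like this sketch; the slot numbers and the `Rect` layout are assumptions, and `constant` is the address space used for small read-only data:

```metal
struct Rect {                // must match the CPU-side layout exactly
    float2 pos;              // top-left corner, in pixels
    float2 size;             // width and height, in pixels
};

struct VOut { float4 position [[position]]; };

vertex VOut vertex_main(uint vid                [[vertex_id]],
                        uint iid                [[instance_id]],
                        constant float2 &screen [[buffer(0)]],
                        constant Rect   *rects  [[buffer(1)]]) {
    Rect r = rects[iid];                       // one rect per instance
    float2 corner = float2(vid & 1, vid >> 1); // 4 corners, strip order
    float2 p = r.pos + corner * r.size;        // pixel coordinates
    float2 ndc = p / screen * 2.0 - 1.0;       // to clip space
    ndc.y = -ndc.y;                            // flip the y-axis
    VOut out;
    out.position = float4(ndc, 0.0, 1.0);
    return out;
}
```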


Since it’s just a byte buffer, you have to make sure the layout is interpreted correctly in the shader code. You have to know the size of the data you’re sending. If you send float64 from the CPU but then attempt to read it as float32, you will be in trouble. If you send structs, then padding/alignment might become an issue if you’re not careful.
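As a concrete guard on the Odin side, `[2]f32` fields pack the same way as MSL `float2`, and a compile-time assert can pin the struct size (the `Rect` layout here is my assumption):

```odin
// CPU-side struct intended to match the shader's `Rect` byte for byte.
Rect :: struct {
	pos:  [2]f32, // 8 bytes, same as MSL float2
	size: [2]f32, // 8 bytes
}
#assert(size_of(Rect) == 16) // catch accidental layout drift at compile time

// Using f64 fields instead would double the stride, and the shader,
// still reading 32-bit floats, would see garbage.
```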


I decided to split the rectangle data into separate but parallel arrays. The reason for that is that the rect position data is of interest to the vertex shader, but the color data is useless to the vertex shader; it’s only going to be used by the fragment shader.
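So the CPU side ends up holding something like two parallel arrays, each destined for its own buffer slot (the names and the slot assignment are mine):

```odin
// Parallel arrays: element i of each array describes rectangle i.
rect_positions: [dynamic][4]f32 // x, y, w, h -> buffer read by the vertex shader
rect_colors:    [dynamic][4]f32 // r, g, b, a -> buffer read by the fragment shader
```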


The solution to this is setting the blending mode on the pipeline state descriptor’s color attachment object. As far as I can understand, this achieves the same blending effect as what we did in Shadertoy with the composite function.
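For reference, classic source-over alpha blending on color attachment 0 would be configured roughly like this Odin-style sketch; the method and enum names follow the Metal API from memory, so double-check them against the bindings:

```odin
attachment := desc->colorAttachments()->object(0)
attachment->setBlendingEnabled(true)
// out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
attachment->setSourceRGBBlendFactor(.SourceAlpha)
attachment->setDestinationRGBBlendFactor(.OneMinusSourceAlpha)
attachment->setSourceAlphaBlendFactor(.One)
attachment->setDestinationAlphaBlendFactor(.OneMinusSourceAlpha)
```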


Although a lot of the pipeline setup code appears to be impenetrable magic incantations, I tried to build it up incrementally, so that we at least know what each portion does, even if we don’t grok everything about how or why.


In television production, the term Outside Broadcasting (OB) is used for sports events, music concerts, and other productions not occurring in a studio. OB vans contain all the equipment required for producing these events and sending them back (via satellite) to the network for distribution. Among the many different types of equipment installed in OB vans, an intercom system including wireless capability is mandatory. For larger productions, multiple OB vans are used, and the ability to interconnect the intercom systems becomes important.


In the following example we imagine two trucks, each with its own intercom system, including an ODIN with a single ROAMEO Access Point and a number of Beltpacks. The Inter-Frame Link (IFL) may be used to interconnect the two systems. In so doing, they become one single intercom. See Figure 2.


This option uses two or more analog connectors. ODIN has 16 RJ-45-style connectors for analog signaling, intended for analog keypanels. Of the 8 available pins on an RJ-45 connector, 6 are used. Both keypanel data and analog audio are transmitted using balanced signaling, also known as differential signaling, which is more resilient to induced noise. Two wires are used for data, two for analog audio in one direction, and two in the other. When AIO is used for audio signals between two matrices, keypanel data is not transmitted, so only four wires are used. A special crossover cable must be used, where pins 4 and 5 on each side connect to pins 3 and 6 of the other; see Figure 4.


Note that the number of AIO cables used in Figure 3 is not locked to two. The number of cables to be used is an intercom design decision, determined by the total number of individual (point-to-point) and group (partyline) conversations required between the two intercoms.
