Shaders tutorial


Edgar Bermudez

Feb 10, 2016, 12:28:22 PM
to Bonsai Users
Hi all,

I am starting to play with Bonsai, so I am quite new (although I had some lectures by Gonçalo last summer). I want to display a scene on a projector using shaders, and I am having a bit of trouble finding an example or tutorial about this.

Could you guys tell me where to start?

Cheers,

E

goncaloclopes

Feb 11, 2016, 9:36:27 PM
to bonsai...@googlegroups.com
Hi Edgar and welcome to the forums!

Nice to hear from you again. The reason Shaders is still so sparsely documented is that it is a heavily experimental feature, and because of that it keeps changing very fast (although the rate of change has been slowing down somewhat in the past few months).

There is a LOT to say about Shaders, much more than I could squeeze into a single forum post. Even so, the general idea of the Shaders package is to provide very direct access to running shader routines on the GPU for the purpose of generating fast computer graphics. It is often the case that graphics libraries require a large amount of boilerplate code before you can go ahead and run your first "shader". With the Shaders package, this barrier has mostly been eliminated.

So, how does it work exactly? The best way is to start with an example (also attached):


Granted, it doesn't sound like much at first glance... however, there is a lot going on under the surface. What the UpdateFrame node does is report an event whenever there is a render step in the Shader window. In order to do that, there needs to be a Shader window, so UpdateFrame will create one.


So what is going on in a Shader window? You can specify that by double-clicking on the UpdateFrame node. When you do that for this example, something like this will show up:



This is the shader collection editor. What it does is specify which shaders are running in the window. What are shaders exactly? Shaders are little programs that tell your graphics card what should be drawn on the screen. In the old days, you could write directly to your screen as a matrix of pixels. Nowadays, it is very rare to be able to do this, both because it is excruciatingly slow, and because it is unsafe to give user applications complete and direct control of your screen (think viruses, etc.).


So what you do instead is write little programs that run massively in parallel on your graphics card whenever there is a Draw command. Bonsai uses OpenGL to do this, which runs shader programs written in GLSL. You can read a bit about GLSL shaders here, but there is a whole world of tutorials and discussion forums about them, as they form a huge chunk of game development pipelines; or we can just keep exploring the space of possible examples in Bonsai.
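Just to make the term concrete (a minimal sketch, not one of the attached examples), the smallest useful fragment shader is only a few lines of GLSL; it paints every pixel solid red:

#version 400
out vec4 frag_colour;

void main()
{
  // every fragment gets the same opaque red color
  frag_colour = vec4(1.0, 0.0, 0.0, 1.0);
}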


In this example, there is only one shader, called "SinusoidGrating". If you play the workflow, it should become obvious what it's doing: it's drawing sinewave gratings moving horizontally across the image:



But how exactly is it doing that if there is nothing else in the workflow? This all depends on how the shader was written.

Nowadays there are many stages in a shader pipeline, but the ones most commonly used are vertex shaders and fragment shaders. These two types of program do very different things. In order to understand what each of them does, it is useful to know a couple of facts about modern GPUs. Conceptually, you can think of GPUs as being able to do only one operation: draw a triangle. In order to draw a triangle, two main things need to happen:

1) The triangle's vertices must be specified and positioned in screen space.
2) The surface of the triangle defined by its 3 vertices must be filled with colored pixels.

Where does this data come from? The OpenGL application needs to specify draw calls that pass the vertex data (and other data, like colour) to the shader, so it can draw triangles to get pixels to appear on screen.

The Shaders package provides several ways to streamline this process. When you add a new shader, you can pick from a couple of options. Some of them already create a pre-specified array of vertices to draw common shapes. For example, the TexturedQuad template includes commands for drawing a fullscreen quad, which is what we used for this example. This means that on every frame, the Shader window asks to render the "SinusoidGrating" shader with 4 vertices: top-left (-1,1), top-right (1,1), bottom-left (-1,-1), bottom-right (1,-1). Screen coordinates in OpenGL go from -1 to 1, left to right and bottom to top, so these numbers make sense. Great, so then what happens?

Vertex shaders and fragment shaders allow you to control these two steps of the drawing pipeline:

a) Vertex shaders allow you to control where each vertex of the triangle is positioned.
b) Fragment shaders allow you to control how exactly each triangle will be filled.

You can see in this example the vertex shader and fragment shader already have some code in them. Specifically:

Vertex shader:
#version 400
uniform vec2 scale = vec2(1, 1);
uniform vec2 shift;
in vec2 vp;
in vec2 vt;
out vec2 tex_coord;

void main()
{
  gl_Position = vec4(vp * scale + shift, 0.0, 1.0);
  tex_coord = vt;
}

In the vertex shader, nothing much seems to happen. You can see it takes the position of each vertex (vp) and just scales and shifts it by a fixed amount (in this case nothing changes), and that's it. However, the other thing it does is pass the texture vertex data to this "tex_coord" variable. What does that mean?

Well, it turns out that because we want to control how a whole surface is filled in the fragment shader step, it is useful to parameterize our drawing on a pixel-by-pixel basis. Doing this directly, however, would require a huge number of parameters. To make this easier, the graphics card provides linear interpolators that take the values fixed at each vertex and linearly interpolate them across the whole surface of the triangle. These are called "varying" or "out" (output) variables. When we pass the texture coordinate (vt) to one of these variables, we know that its value will be interpolated across the whole surface.

But what is a texture coordinate? Texture coordinates are values used to map image bitmaps (textures) onto triangles (see here). This is what allows realistic texturing of models in games, simulations, etc. By convention, OpenGL maps textures using UV coordinates, ranging from 0 to 1, left-to-right and bottom-to-top. If we linearly interpolate the texture coordinates across the whole surface, we get a smooth gradient of these UV coordinates across every drawn pixel.
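A quick way to see this interpolation in action (a throwaway sketch, not part of the attached example) is a fragment shader that paints the interpolated UV coordinates directly as color. Run on the fullscreen quad, it produces a gradient with red increasing left-to-right and green increasing bottom-to-top:

#version 400
in vec2 tex_coord;
out vec4 frag_colour;

void main()
{
  // U drives the red channel, V drives the green channel
  frag_colour = vec4(tex_coord.x, tex_coord.y, 0.0, 1.0);
}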

Fragment shader:
#version 400
uniform float time;
uniform float frequency = 100;
uniform float speed = 0.1;
in vec2 tex_coord;
out vec4 frag_colour;

void main()
{
  float index = tex_coord.x + time * speed;
  float value = 0.5 * sin(index * frequency) + 0.5;
  frag_colour = vec4(value, value, value, 1);
}

Now, the fragment shader is where the magic happens. What this program does is specify how each individual "fragment" (you can think of it as a pixel candidate) will be colored. The output is the color of the pixel.

In order to draw the sinewave gratings, we take advantage of the linear interpolation of texture coordinates from the previous step. We know that the X component of the texture coordinate will vary between 0 and 1, from left to right, across the fullscreen quad. We can use this to compute the cyclic intensity of the gratings simply by using the built-in "sin" function.

Notice that colour varies between 0 and 1, but the sin function varies between -1 and 1, so we have to rescale in order to avoid color clipping (0.5 * sin(x) + 0.5 maps [-1,1] back onto [0,1]). After this we have a nice grating across the whole quad, and we can control the frequency by playing with the scale of the input to sin.

Now, how do the gratings move? The trick here is to use a special variable called "time". This uniform variable, if it is declared in your program, will always be filled with the cumulative time elapsed since the simulation began. You can use it to play animations, as in this case. What we do is simply add it (with a scale factor) to the X texture coordinate. Because the sine function is periodic, the pattern wraps around, and we get a continuously repeating, moving sinewave grating with very few lines of code.

Hope this helps in some way. Shaders are a really deep and convoluted topic and there is lots to learn before things really start to make sense, but hopefully this example provides some pointers on how to get started and what the keywords are.

Best,
Shaders.config
simpleshader.bonsai

Edgar Bermudez

Feb 17, 2016, 9:21:51 PM
to Bonsai Users
Great! Thanks a lot for the very detailed response Gonçalo! Nice to hear from you again as well.

With this I am starting to implement my virtual scene to be projected.

Thanks again.

Zack Shortt

Feb 24, 2016, 6:38:50 PM
to Bonsai Users
Hi Goncalo,

I have a few questions about the vertex shader. One of the parts that I am having trouble with is understanding how the input variables work in the vertex shader. In the vertex shader you have this:
in vec2 vp;
in vec2 vt;
I know that you use these variables to build the object in the vertex shader, but the part that I am struggling with is understanding where these variables are coming from. I'm wondering if I could get some help understanding where these variables come from, and whether there are other variable inputs I can use for the vertex shaders?

I am trying to get some shapes like circles and triangles to appear onto the screen as shaders are new to me.

Thanks,
Zack

goncaloclopes

Feb 24, 2016, 10:19:56 PM
to Bonsai Users
Hi Zack and welcome to the forums!

The world of shaders can get really complex and convoluted, so it's best to start easy. As I alluded to in the answer to Edgar, one of the best places I found to start reading is this OpenGL4 tutorial series and also this one. All of the Bonsai shader mechanics map onto GLSL and OpenGL 4.0 concepts pretty much one-to-one.

Anyway, the point of shaders is you have (roughly) the following pipeline:

1) vertex data: the control points used to draw your geometry (e.g. the corners of a cube, points in a sphere or circle, etc). Vertex data can be positions, colors, normals, texture coordinates, or really any other number that you want; they are just arrays of numbers. 

2) vertex shader: this is a program that processes a single vertex from the data and transforms it in some way. The most common operation for 3D rendering is a perspective projection transformation that projects points from 3D space onto the 2D screen plane. Each vertex needs to be assigned a position in the screen plane, specified through the gl_Position variable (see the sketch after this list). This will determine how 3D shapes (triangles) will be mapped onto pixels (see below).

What is interesting about this step is that the GPU will run this program on all the vertices in parallel. You don't have to worry about looping through the array.

3) rasterization: this is the step where 3D shapes are mapped onto pixels. It turns out your graphics card really only knows how to draw triangles. Triangles are surfaces defined by three vertices. Given the positions of the three corners of a triangle in screen space, the GPU knows which pixels need to be filled. For each pixel, the rasterization step will linearly interpolate the vertex values across the triangle surface. For example, in the figure below you have a triangle where Red/Green/Blue has been assigned to vertices 1/2/3, respectively:

Hopefully you can get a feeling for how these colors are interpolated across the surface for each intermediate pixel in the triangle.


4) fragment shader: this is a program very much like the vertex shader, but now mapping the data of every pixel in the triangle to a color. The final color is specified through the output variable frag_colour. This program also runs in parallel over all pixels in the surface.
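To make steps 2) and 4) concrete, here is a minimal vertex/fragment shader pair in the spirit of the attached triangle example (just a sketch; the attached files are the reference). The vertex shader assigns gl_Position and forwards a per-vertex color, the rasterizer interpolates that color across the triangle, and the fragment shader outputs the interpolated value:

Vertex shader:
#version 400
in vec2 vp;       // vertex position
in vec3 vc;       // vertex color
out vec3 color;   // interpolated by the rasterizer

void main()
{
  gl_Position = vec4(vp, 0.0, 1.0);
  color = vc;
}

Fragment shader:
#version 400
in vec3 color;    // per-pixel interpolated value
out vec4 frag_colour;

void main()
{
  frag_colour = vec4(color, 1.0);
}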


Ok, so if all this pipeline makes sense, we can get back to your original questions:

1) where does the vertex data come from in the first place?

2) how does it get mapped in the vertex shader?


The answer to 1) is that it either comes pre-loaded in one of Bonsai's built-in shader configurations, or you have to pass it in explicitly using the UpdateVertexBuffer node. There are currently four built-in shader configurations: ShaderConfiguration, PointSprite, TexturedQuad, and TexturedModel. Of these, only the last two actually pre-load vertex data for you.


The TexturedQuad configuration I've used draws a rectangle (or quad) at coordinates (-1,1), (1,1), (1,-1), (-1,-1). As the name indicates, the quad is also textured (i.e. it will map a bitmap across its surface; useful when you want to display an image, for example), so the vertex data also includes texture coordinates indicating which parts of the bitmap are mapped to which vertex (read more about texture coordinates here).


What this means is that each vertex of TexturedQuad actually has four numbers: positionX, positionY, textureU, textureV. Now we need to answer 2): how do we get these numbers in the vertex shader? That's where the variables you were asking about come in:

in vec2 vp;
in vec2 vt;

Here you are specifying not only the variables you have access to in your shader, but also implicitly the structure of your vertex data. You are literally stating here that your vertex data has four numbers: a 2-vector of positions (vp) and a 2-vector of texture coordinates (vt).

You are also specifying the structure of the output vertex data in the following line:

out vec2 tex_coord;

The difference is that this variable is declared as "out". These are the variables that will be linearly interpolated across the triangle surfaces, as I've described above. Variables that are declared as "out" in the vertex shader can be declared as "in" for the fragment shader.

The last thing to explain is the meaning of those "uniform" variables that show up here and there. These are basically variables that are held constant across all vertices and pixels during a draw call, but not necessarily across frames. You can update them using the UpdateUniform node to create interactive parameters that let you control aspects of the rendering, like global scale and position.

I've attached the simple workflow I used to draw the triangle image, which uses custom generated vertex data. I would suggest you play around with these simple workflows until you get how they work and how to change them.

Shaders require a bit of persistence and patience, as you need to learn the details of how GPUs work. It's quite different from CPUs, but understanding things from a "shader" point of view is very rewarding when you start being able to draw very complicated things extremely fast.

Hope this helps and let me know if anything is not clear.
Shaders.config
triangle.bonsai

Zack Shortt

Feb 25, 2016, 5:22:09 PM
to Bonsai Users
This is very helpful, thanks for the response!

Zack

Edgar Bermudez

Mar 11, 2016, 3:28:21 PM
to Bonsai Users
Hi Goncalo,

Thanks for the explanation about shaders and the tutorial. I have done a bit of reading about shaders.

I have been playing with the triangle workflow and things make more sense now, thanks. However, I was wondering if you could explain a bit more about the workflow, specifically the PythonTransform node after the UpdateFrame node? It seems like that is where you specify the vertex coords?

The last node, I guess, is where you specify which shader (in this case triangle) is called when every vertex is rendered?

Thanks again,

Edgar

goncaloclopes

Mar 11, 2016, 7:07:21 PM
to bonsai...@googlegroups.com
Hi Edgar,

You are correct, the PythonTransform is where all the vertex information is specified. In this case it is not just coordinates but also the color associated with each vertex. You can pass in as much information as you want with each vertex, as long as it conforms to the vertex declaration in the vertex shader. For example, in the triangle example the declaration was:

in vec2 vp;
in vec3 vc;

which means that each vertex has a 2-vector with positions (x,y) and a 3-vector with colors (r,g,b). This means that for each vertex we need to send in 5 numbers (x,y,r,g,b). That's why the array created in the PythonTransform has a total of 15 numbers: 5 numbers for each of the 3 vertices.
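As a sketch of what that array might look like (hypothetical values; the attached triangle.bonsai is the reference), the PythonTransform just builds a flat list of floats:

# One triangle, 5 numbers per vertex: (x, y, r, g, b),
# matching the declaration "in vec2 vp; in vec3 vc;".
vertices = [
     0.0,  0.5,   1.0, 0.0, 0.0,  # top vertex, red
    -0.5, -0.5,   0.0, 1.0, 0.0,  # bottom-left vertex, green
     0.5, -0.5,   0.0, 0.0, 1.0,  # bottom-right vertex, blue
]
assert len(vertices) == 15       # 5 numbers x 3 vertices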

The UpdateVertexBuffer then just takes the array and updates the vertex buffer for the specified shader. If the shader is set to AutoDraw (the default), this is the only thing you need to do in order to draw something. If AutoDraw were set to false, you would need to add a DrawShader node after the UpdateVertexBuffer in order to trigger an individual shader draw command. This is useful when you are using the same shader multiple times with different parameters (e.g. drawing many triangles at different positions).

The DrawMode property in the UpdateVertexBuffer simply specifies how vertices should be linked to form surfaces. Surfaces are the regions of the screen that get filled with "fragments" for the fragment shader stage. The most common types are "Points", "Triangles" and "Quads", but there are other useful primitives that allow drawing multiple surfaces with fewer points, like "TriangleStrip" or "LineStrip". If you google these, you will find many OpenGL articles explaining them.
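For instance (another hypothetical sketch, with position-only vertices matching a plain "in vec2 vp;" declaration), a fullscreen quad needs only 4 vertices in "TriangleStrip" mode, because after the first two vertices every new vertex forms a triangle with the previous two:

# Fullscreen quad as a TriangleStrip: vertices 1,2,3 form the first
# triangle and vertices 2,3,4 form the second.
quad = [
    -1.0,  1.0,  # top-left
    -1.0, -1.0,  # bottom-left
     1.0,  1.0,  # top-right
     1.0, -1.0,  # bottom-right
]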

Hope this helps!
G

Zack Shortt

Mar 21, 2016, 5:19:52 PM
to Bonsai Users
Hi Goncalo,

I have been trying to get the shader to draw a circle.

I am looking at writing a drawCircle function within a PythonTransform node that will pass on information to draw a circle around a point that I specify. I am wondering if the best way to do this is to write a function that takes parameters like radius, center coordinates (X and Y) and colour, and uses trig to write many points to a data structure of some sort, which then gets passed on. If this is the way it should be done, I am wondering what sort of Python data structure I could use so it would be passed out of the PythonTransform and into the UpdateVertexBuffer.


If there is a better way of doing this, it would be great to know what it is.

Any help is appreciated,
Zack



goncaloclopes

Mar 21, 2016, 8:35:04 PM
to Bonsai Users
Hey Zack,

A circle is always an interesting exercise, as it theoretically requires infinite precision to draw ;-)
There are many fun ways to draw one, and you may not even need any geometry at all!

The easiest I find is to just use the following fragment shader:

#version 400
uniform sampler2D tex;

in vec2 tex_coord;
out vec4 frag_colour;

uniform float radius = 1;

void main()
{
  vec2 pixelpoint = tex_coord * 2 - 1;
  float d = length(pixelpoint) < radius ? 1 : 0;
  frag_colour = vec4(d, d, d, 1);
}

Run this on a TexturedQuad and you should see a circle pop out (see attached files).
You can control the size by adjusting the radius uniform.

This is actually a good exercise for testing intuitions about shaders. Remember what's going on in the fragment shader: every single pixel is running independently to decide which color it has. In order to determine its color, it uses pixel-unique information that is interpolated across the surface, such as texture coords. Texture coords go from [0,1] whereas screen coordinates by default go from [-1,1]. You can transform one into the other easily with a scale and shift (the first line of main above).

Then you can basically "draw" a circle by simply saying "every pixel that is less than radius away from the center is white; every other pixel is black".
Quite beautiful, no?

I'll leave it as an exercise to adjust the center of the circle (hint: modify the scale and shift).

If you just want to toggle a single circle on and off, you can use a uniform value to control this as well. If you want multiple dynamic circles, then you probably want to use sprite techniques instead; let me know if this is the case.
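For the on/off case, a minimal sketch of that idea (the "visible" uniform is hypothetical, to be driven from UpdateUniform with 0 or 1) could be:

#version 400
uniform float radius = 1;
uniform float visible = 1;  // set to 0 or 1 via UpdateUniform
in vec2 tex_coord;
out vec4 frag_colour;

void main()
{
  vec2 pixelpoint = tex_coord * 2 - 1;
  float d = length(pixelpoint) < radius ? visible : 0.0;
  frag_colour = vec4(d, d, d, 1);
}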

Cheers,
G
circletut.bonsai
Shaders.config

Zack Shortt

Mar 22, 2016, 7:49:27 PM
to Bonsai Users
This is perfect.

Thanks for the help,
Zack


Zack Shortt

Apr 22, 2016, 1:36:28 PM
to Bonsai Users
One more question: if I want to have a Python node generate the coordinates, how can they get sent to the shader? I want to be able to draw the circle in different places based on values I pick in a Python script.

Any help would be much appreciated,
Thanks,
Zack

goncaloclopes

Apr 22, 2016, 1:59:32 PM
to Bonsai Users
One easy way would be to use a uniform variable in the shader to specify the position, which you then update using the UpdateUniform node.
Uniforms are variables that are constant across vertices for one draw call, but that you can update in between frames.

By the way, I uploaded a major new version of the Shaders module a couple of days ago. This new version introduces a much more flexible configuration system that finally allows all the window properties to be configured in detail, as well as a much more flexible render pipeline, allowing for example the use of hardware instancing, which lets you draw billions of polygons per frame (e.g. for visualization purposes).

However, it did introduce a couple of breaking changes from previous versions. If you are already finishing your protocol specification, you should stick with your current shader version; otherwise, if you are still at the beginning, I would recommend writing it on top of the new version.

Let me know if you would like examples of how the new version works or if you would like me to help with upgrading your current shader scripts. The Shaders module is still experimental, but I think we're finally converging on a design that is flexible and powerful enough for a large number of cases.

Zack Shortt

Apr 22, 2016, 2:33:11 PM
to Bonsai Users
I managed to update to the new shader version. I am wondering if I could get an example of how to use the new version. I am looking at something along the lines of a PythonSource which will give me x and y coordinates for the UpdateUniform, which I can use to draw the circles anywhere I want on the shader.

Thanks for the quick response,
Zack

Gonçalo Lopes

Apr 22, 2016, 4:44:47 PM
to Zack Shortt, Bonsai Users
Sorry for the short reply. I can provide a more detailed example later, but basically if you declare a vec2 uniform, you can update it with any pair of floating point numbers passed into UpdateUniform, using for example Vector2 or simply Zip to create the pair.

The uniform declaration can be something like:

uniform vec2 pos;

And then in the UpdateUniform you also set the uniform name to "pos".
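Putting this together with the circle shader from earlier in the thread, a minimal sketch of a movable circle (the "pos" and "radius" names are just the conventions used here) might be:

#version 400
uniform vec2 pos;            // circle center in [-1,1] screen coordinates
uniform float radius = 0.2;
in vec2 tex_coord;
out vec4 frag_colour;

void main()
{
  vec2 pixelpoint = tex_coord * 2 - 1;
  float d = length(pixelpoint - pos) < radius ? 1 : 0;
  frag_colour = vec4(d, d, d, 1);
}

Every pair you send into UpdateUniform then shifts the circle center on the next frame.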


Zack Shortt

Apr 22, 2016, 5:16:52 PM
to Bonsai Users, short...@gmail.com
Thanks for the response, I will give this a try. I will let you know if I have further questions.

You have been very helpful.
Zack

Zack Shortt

May 2, 2016, 4:43:43 PM
to Bonsai Users, short...@gmail.com
I have gotten the shaders to draw things now, but there is something that I cannot get working. I have an embedded workflow which is running a shader that will draw a circle. Basically, while the embedded workflow is running, a circle is drawn in the shader window, and when it is not running the shader draws nothing, so the window is black. The main problem is that the shader window can have more than one shader, but whether the UpdateFrame node is inside the embedded workflow or outside of it, only the first shader, the circle shader, is drawn.

Is there a way to draw the circle shader while the embedded workflow is still running, and have it draw blankShader outside of the embedded workflow?


Thanks,
Zack

goncaloclopes

May 3, 2016, 4:50:27 AM
to Bonsai Users, short...@gmail.com
Cool, looking good :-)

Are you drawing each circle shader using full-screen quads? In that case, if both are being drawn, one shader may indeed overwrite the other.

Actually, in the latest version of the Shaders module there is a way to precisely control when each shader is drawn. If you don't assign a default MeshName to the shader (i.e. keep it blank), the shader will stop drawing itself automatically. You can then control exactly when the shader is drawn by using the DrawShader node. You basically pass it a ShaderName and a MeshName and it will draw that mesh using that shader on the next frame. Don't forget to keep calling it alongside UpdateFrame for as long as you want it to be displayed.

To summarize, if I understood correctly, it should be possible to get the effect you want by having UpdateFrame both inside and outside the nested workflow and putting a DrawShader after each update. If you set up the shader order correctly, you may even get away with drawing the two shaders at the same time, where the circle overwrites the blank only when the nested workflow is active; but in any case, you should be able to control it however you like with this node.

Let me know if you have more difficulties.

Zack Shortt

May 4, 2016, 4:11:35 PM
to Bonsai Users, short...@gmail.com
Fantastic, thanks.
Zack

Johannes Larsch

Oct 14, 2016, 6:45:12 AM
to Bonsai Users, short...@gmail.com
Hi,
I wanted to run the sine grating example using shaders. I think the files you posted are from an old version of Bonsai? I re-created it in the current version, but I am only getting a static grating. Which setting do I have to change to make it move? I understand that the time variable is used to reset the position of the grating in each frame. I am guessing my shader gets executed only once?

Gonçalo Lopes

Oct 16, 2016, 11:03:44 AM
to Johannes Larsch, Bonsai Users, Zack Shortt
Hi Johannes,

Yes, the files from this example have grown out of date. The biggest reason is that the implicit "time" variable is no longer automatically incremented. In order to animate the grating, you need to update it yourself using the time delta from the UpdateFrame node.

The shader code is executed for every single pixel in every single frame ;-)

I'm attaching updated scripts that reproduce the example in the new version.
Hope this helps!


quad.vert
Shaders.config
sine.frag
sinegratings.bonsai
sinegratings.bonsai.layout