WebGL in Elm


Tinco Andringa

Dec 27, 2012, 8:18:22 AM
to elm-d...@googlegroups.com
Hi,

I've been working on implementing a WebGL rendering system for Elm but I'm a bit unsure how to continue, so I'm doing a bit of a brain dump here hoping it'll clear up things in my head a bit and maybe some suggestions from you guys.

I started out with this because I wanted to learn more about building graphics engines with OpenGL/WebGL, and building a framework for Elm seemed like the right way to do that :P I've built a WebGL game before, but I used Three.js, which hides all the gritty stuff beneath a clean high-level API; this time I wanted to get down and low-level.

So I've done just that, you can see the resulting white triangle on a pongGreen background in my webgl branch at github.com:d-snp/Elm.

This is how the OpenGL interface works:

First we initialize buffers. Buffers are just arrays of ints or floats, usually holding vertex data or texture coordinates. To do something with a buffer later on you need to keep a reference to it. You build the array in JavaScript first, then call a WebGL function to upload it to the graphics card.
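As a concrete sketch of that upload step in JavaScript (assuming a WebGL context `gl` obtained from a canvas; the function name is mine, not from the branch):

```javascript
// Sketch: build a JS array, then upload it to the GPU as a buffer.
// `gl` is assumed to be a WebGL rendering context obtained elsewhere,
// e.g. canvas.getContext('webgl').
function createVertexBuffer(gl, vertices) {
  const buf = gl.createBuffer();            // allocate a buffer handle
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);      // make it the active array buffer
  gl.bufferData(gl.ARRAY_BUFFER,            // upload the data to the card
                new Float32Array(vertices),
                gl.STATIC_DRAW);
  return buf;                               // keep this reference for drawing later
}
```

The returned handle is the reference you hold on to; the raw JS array can be thrown away once the data lives on the card.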

Then we initialize the shaders. Shaders are small programs written in a C derivative called GLSL. You submit the source string to the GL context and call the compileShader function to compile it and upload it to the graphics card. These shaders can have attributes and shared state. You can reference attributes by name and upload values to them from your JavaScript; this is how, for example, animation would work (instead of uploading a whole new positions buffer every frame, you upload translation information and have the graphics card apply the translation to the existing vertex buffer). Shaders can be composed by sharing state and being linked together; the result of linking multiple shaders is called a program.
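A minimal JavaScript sketch of that compile-and-link step (function and variable names are illustrative, not from the branch):

```javascript
// Sketch: compile a vertex and a fragment shader from source strings,
// then link them into a program. `gl` is assumed to be a WebGL context.
function buildProgram(gl, vertexSource, fragmentSource) {
  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);   // submit the source string
    gl.compileShader(shader);          // compile and upload to the card
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
  }
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);             // the linked result is the "program"
  return program;
}
```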

We set a clear color, this is the color the framebuffer is initialised with whenever we call the gl.clear function.

We enable the DEPTH_TEST functionality of the graphics card; this lets us issue draw commands without worrying about depth order. (We could draw something in the foreground first, and then something in the background, and the graphics card will automatically skip pixels that are occluded.)

So now we're ready to actually draw something. The process goes like this:

First we select the shader program we're going to use. In a simple game you would have only one shader chain, but photorealistic or otherwise fancy games might have different shaders for different materials (like water, or fireworks).

Then for each shader attribute we need to provide an input, so we call bindBuffer to select one of our previously created buffers, and then we call vertexAttribPointer to tell the graphics card which attribute this buffer should be bound to. In the case of textures, they are first loaded into a texture unit, and then a sampler is pointed at that unit.

We also upload our single-value attributes, like the translation of a model or the perspective of the camera. Then finally we bindBuffer the vertex positions buffer of the model we want to draw, and call drawElements to draw it.

The drawing process of drawElements is simple: the vertex shader runs for every vertex, and the fragment shader runs for every covered pixel in the viewport. So your shaders have to be written in such a way that, given a vertex buffer, your other attributes, and the various buffers, they return a color and depth for every pixel. At the end of a drawElements call your framebuffer will contain the rendered image of whatever model you gave it. Note that things like perspective are also done by shaders (WebGL has no fixed-function pipeline, so the tutorial I used made me write my own perspective transform; it's not so hard).
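The binding-and-draw sequence described above can be sketched in JavaScript (all names here are illustrative, not from the actual branch; assumes positions are 3-component floats and indices are 16-bit):

```javascript
// Sketch of one draw: select the program, bind the position buffer to its
// attribute, bind the index buffer, then issue the draw call.
function drawModel(gl, program, positionBuffer, positionLoc, indexBuffer, indexCount) {
  gl.useProgram(program);                          // select the shader chain
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);  // select the vertex buffer
  gl.vertexAttribPointer(positionLoc, 3,           // 3 floats per vertex
                         gl.FLOAT, false, 0, 0);
  gl.enableVertexAttribArray(positionLoc);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.drawElements(gl.TRIANGLES, indexCount,        // run the shaders, fill
                  gl.UNSIGNED_SHORT, 0);           // the framebuffer
}
```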

That's it! By default the target of the draw calls is the framebuffer that ends up on your screen. (In practice the browser double-buffers the WebGL canvas, so players never see a half-rendered frame; engines also render to intermediate framebuffers for things like post-processing.)

So that is the process. I have to go socially engage with family a bit now; later I'll follow this up with my suggestion on how to model this process in a functional way. But if you have time, please let me know how you think this could be done!

Kind regards,
Tinco

p.s.: By the way, for Elm in general I think it is a much better idea to just wrap Three.js (if possible); this low-level stuff is of no use to anyone who just wants to make a fun 3D game. No one wants to write their own perspective and material shaders, or fuss with vertex buffers and write their own animation system. Three.js has all of that done for you.

Tinco Andringa

Dec 27, 2012, 9:57:48 AM
to elm-d...@googlegroups.com
My first plan was to completely mirror the way 'collage' works. 

Just passing a list of elements isn't going to cut it, because of the whole ordeal with buffers and their related shaders. I have a feeling this shares properties with the idea of sharing styles between elements of a collage. Reusing styles can currently only be done by defining helper functions that fill in parameters. So for example, if we want to draw a blue pentagon a bunch of times, we could do:

bluePentagon position = filled clearBlue . ngon 5 20 $ position

layers [ collage w h $ map bluePentagon locs
         , plainText "Click to stamp a pentagon." ]

So the reusing of a shader in Elm would also have to be something along those lines.

myShader = compileShader "myShader sourceCode"

And the buffer would be:

myVertices = initVertexBuffer 3 3 [1, 1, 1, 0, 0, 0, 1, 1, 1]

Then to render a model, we would have to apply some translations to it. But when rendering a complex object, maybe we have translations that are applied to a group of models, and those models themselves might have their own translations too. So there needs to be a translation tree. Something like this:

type Vec3 = (Float, Float, Float)
data Action = Translate Vec3 | Rotate Vec3
data Model = Model [Vec3]
data SceneGraph = Node Action [SceneGraph] | Leaf Model

Then rendering a scene would be something like this:

scene 640 480 pongGreen [ myShader [Translate (-1,-1,0) [Rotate (1.0,0.5,0.25) [standardModel]], Translate (0,0,1) [standardModel]]]
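To illustrate how such a tree bottoms out in flat draw instructions, here is a small hypothetical JavaScript sketch (the node shapes are made up for illustration) that flattens a scene graph, accumulating the actions on the path down to each model:

```javascript
// Sketch: flatten a scene graph into a flat list of draw instructions.
// Node shapes are illustrative: { action, children } for inner nodes,
// { model } for leaves. Each instruction carries its accumulated actions,
// which is what would get uploaded as translation/rotation attributes.
function flatten(node, actions) {
  actions = actions || [];
  if (node.model !== undefined) {
    // Leaf: one draw instruction with everything accumulated on the path.
    return [{ model: node.model, actions: actions }];
  }
  const withThis = actions.concat([node.action]);
  return node.children.flatMap(child => flatten(child, withThis));
}
```

This also makes the sharing problem visible: two leaves can name the same model, and the runtime would need to map both to the same vertex buffer.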

What would require some runtime assistance, though, is making sure that two instances of myVertices can refer to the same buffer, because even in a simple game there can be hundreds of instances of the same model at different locations.

Something else: these Translate and Rotate actions are properties of the default shader I have written, but when you're actually writing your own shaders, you will have to define actions that upload the right parameters into the right buffers.

Also, the model here is just vertices, but a real model will also have a texture, and that texture is an input to a shader too.

So if we're to support this low level way of doing WebGL we need a very dynamic system that deals with the inputs of these shaders.

Of course, if instead we go the Three.js route, we can use the built-in shaders, pretend all this stuff doesn't exist, and live happily ever after :)

What do you guys think?

Dobes Vandermeer

Dec 27, 2012, 1:18:03 PM
to elm-d...@googlegroups.com

At some level FRP is writing a functional program that returns a todo list for the engine it runs on. Basically like monadic programming (assuming I understand monadic programming and FRP at all).

In the case of collage the result is, to some degree, a single instruction "draw this element graph".

What I understand, however, is that accelerated 3D programming often requires things to be done in a certain order. In this case, you can't just return a scene graph; you have to return a list of instructions to execute.

You could return a list of WebGL functions to call and their arguments, but that wouldn't be as nice to use as collage is. So perhaps a slightly higher-level system: a mix of commands to change GL settings/modes and collage-style scene graph objects to render.
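That command-list idea can be sketched in JavaScript: the pure program produces a list of command values, and a tiny interpreter executes them against the GL context in order (the command names here are invented for illustration):

```javascript
// Sketch: interpret a list of declarative command values as imperative
// GL calls, executed in order. `gl` is assumed to be a WebGL context.
function runCommands(gl, commands) {
  for (const cmd of commands) {
    switch (cmd.op) {
      case 'clearColor': gl.clearColor(...cmd.rgba); break;
      case 'clear':      gl.clear(gl.COLOR_BUFFER_BIT); break;
      case 'useProgram': gl.useProgram(cmd.program); break;
      default: throw new Error('unknown command: ' + cmd.op);
    }
  }
}
```

The pure Elm side would only build the command list; all the ordering-sensitive imperative work stays inside this one interpreter.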

It definitely seems a bit more tricky ... I'd probably start with something wrapping three.js if I were you, then move to a lower level once you have a good handle on how that API looks.

John Mayer (gmail)

Dec 27, 2012, 1:49:11 PM
to elm-d...@googlegroups.com
You can just return a scene-graph Elm data structure, because the GL context must be repainted each time. So you really just need a rule in the render function to draw an EWebGL, whose payload is a scene graph.

Sent from my iPhone

Evan Czaplicki

Dec 28, 2012, 5:09:28 AM
to elm-d...@googlegroups.com
The more I think about this, the more I feel like it'd be nice to hook it up to a higher-level physics engine or rendering engine. One of my work mates is working on a JS physics engine, so I'll talk to him about how all of these things work. It sounds like they may end up with somewhat declarative interfaces, making it a nice choice for Elm.

I just want to stress again that the hardest part here is API design. My experience with OpenGL taught me the infinite details of creating a cone with proper triangle meshes, surface normals, backface culling, and ambient lighting, but none of those details were in any way important to what I wanted to do, which was to have a camera walk around and have some reference points to look at. The challenge here is to figure out which details matter and how to expose them to the programmer in a beautiful way. Looking at the three.js API, it seems like it's gonna be pretty rough given how many variables there are to think about.

Really, honestly, for real, the easiest mistake to make is to just start coding. It won't turn out nice with something this complicated. Once you have the API, the coding is easy and the result has all of the nice properties you expect.

One thing that could help is using records (coming next release) to hide a bunch of default settings. This would allow the defaults to be out of mind for most cases, but also allows advanced users to update the records if necessary.

Another note on how to map a functional API onto a procedural API, this is the point of the Render.js and Collage.js files which both take a purely functional data structure and figure out how to do efficient imperative updates. I expect basic 3D bindings would work much like John says, where you have a purely functional data structure that you turn into procedural calls. Not sure if John is currently taking a look at the 3D case, but we have discussed it in the past and it seems feasible to do in WebGL (perhaps to the detriment of the Elm RTS though).

Tinco Andringa

Dec 28, 2012, 7:03:55 AM
to elm-d...@googlegroups.com


On Dec 28, 2012 11:09 AM, "Evan Czaplicki" <eva...@gmail.com> wrote:
>
> The more I think about this, the more I feel like it'd be nice to hook it up to a higher-level physics engine or rendering engine. One of my work mates is working on a JS physics engine, so I'll talk to him about how all of these things work. It sounds like they may end up with somewhat declarative interfaces, making it a nice choice for Elm.

Is he doing that for your employer? I have a bunch of experience with the Box2D physics engine and I think it would be relatively easy to wrap, because physics engines are so self-contained. You can just configure them and they're off.

> Really, honestly, for real, the easiest mistake to make is to just start coding.

Which is why we're talking here right?

> One thing that could help is using records (coming next release) to hide a bunch of default settings. This would allow the defaults to be out of mind for most cases, but also allows advanced users to update the records if necessary.

Yes I think that is a good idea but configuration is not the big problem here, you can hide complexity with helper functions too.

> I expect basic 3D bindings would work much like John says, where you have a purely functional data structure that you turn into procedural calls.

Yes, this is what I am trying to do, but the problem lies with the typing of this graph in relation to the shaders. I couldn't make sense of John's mail at all, but it was sent from his phone, so maybe he can shine some light on this later.

> On Dec 27, 2012, at 1:18 PM, Dobes Vandermeer <dob...@gmail.com> wrote:
>
> What I understand, however, is that the accelerated 3d programming often requires things to be done in a certain order.  In this case, you can't just return a scene graph, you have to return a list of instructions to execute.

You are right. I used the term scene graph, but what I was trying to make clear is that this graph is actually an instruction tree. So the scene graph is the instruction list you are describing here.

> You could return a list of WebGL functions to call and their arguments, but that wouldn't be as nice to use as collage is.  So, perhaps a slightly higher level system like a mix of commands to change GL settings / modes and collage style scene graph objects to render.

Sort of, but the gl commands are mostly used to interact with the shader program. I am looking for a way to do this in a statically typed way. Perhaps the only way is to have the shaders themselves be written in an embedded Elm subset.

> It definitely seems a bit more tricky ... I'd probably start with something wrapping three.js if I were you, then move to a lower level once you have a good handle on how that API looks.

Thanks for the feedback so far guys.

John Mayer (gmail)

Dec 28, 2012, 1:06:45 PM
to elm-d...@googlegroups.com
Here's how I envision a WebGL API.

Suppose we have some magic builtin function that takes as input a GLSL string and produces as output a function that takes models and produces a Form3D. The analogy here is using colors to turn a Shape into a Form; a shader is like a color, and models are shapes. The type of the model (a record perhaps) would need to line up with the inputs expected by the shader at runtime, so buffer binding would work properly. This would be easy and practical, but unfulfilling. A GLSL eDSL would be the bee's knees, in my opinion.

I unfortunately have little experience with extensible records, but here goes: our new magic function is like this:

shade :: Shader a -> a -> Form3D

The trick here is that a "Shader a" is some sort of type-checkable construct such that it is tagged by a certain record type "a", and can be built using a small "shader combinator" library that takes care of all of the vertex and fragment shading, makes sure stuff is passed between correctly, and can generate GLSL source.

I feel like this may be the domain of something like Template Haskell, but if it can avoid being a builtin that would be cool. Really comes down to what records are capable of.

Sent from my iPhone

Tinco Andringa

Jan 1, 2013, 7:33:59 PM
to elm-d...@googlegroups.com
That's weird, your e-mail only appeared on the list today (afaict) but it's dated 28-12-12..

Anyway, I couldn't agree more; you are absolutely right. We cannot support custom shaders without some dynamic construct. This does not have to be runtime dynamic, but it has to be at least compile-time dynamic (i.e. macros), like Template Haskell.

So this is where we do *_* at Evan for some nice advanced language features ;) (in the meantime I might work on porting three.js to Elm to see how that works out)

On Friday, December 28, 2012 7:06:45 PM UTC+1, John Mayer wrote:

Evan Czaplicki

Jan 1, 2013, 10:10:50 PM
to elm-d...@googlegroups.com
Yeah, it got stuck in the filters for the group and I didn't get an email about it until today :/ Sorry guys!

Can John or Tinco clarify the relationship between GLSL and the model? Is there type information that needs to match between them? I fear I am not knowledgeable enough on these topics to speak confidently. Maybe there is a good resource on GLSL, or someone can give me a quick overview of the language-level issues we face?

I am in the process of writing up a bigger description of extensible records, so hopefully that will help clarify some things too. They essentially give the flexibility of objects in JS without the this keyword, while adding purity and type-safety. So you can change the types of fields in a record if you want. Not sure if that's the kind of thing needed here though!

Tinco Andringa

Jan 2, 2013, 6:14:42 AM
to elm-d...@googlegroups.com
On Wed, Jan 2, 2013 at 4:10 AM, Evan Czaplicki <eva...@gmail.com> wrote:
> Can John or Tinco clarify the relationship between GLSL and the model? Is
> there type information that needs to match between them? I fear I am not
> knowledgable enough on these topics to speak confidently. Maybe there is a
> good resource on GLSL or can give me a quick overview of the language-level
> issues we face?

The only real issue we face is the attributes of the shader. In GLSL
you define attributes just like regular global variables, with a type.
Then from your JavaScript you can send values to those attributes (in
theory you should also be able to retrieve the values, but I don't
think that is a really important feature).
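On the JavaScript side, sending such a record of attribute values to a shader by name might look like this sketch (assuming, for simplicity, that every value is a single float uploaded as a uniform; the function name is hypothetical):

```javascript
// Sketch: upload a record of named float values to a shader program.
// This is roughly what a record-typed Shader would compile down to:
// each record field name maps to a uniform location looked up by name.
function uploadAttributes(gl, program, attrs) {
  for (const name of Object.keys(attrs)) {
    const loc = gl.getUniformLocation(program, name);
    gl.uniform1f(loc, attrs[name]);   // assumes single floats only
  }
}
```

The type checker's job would then be to guarantee that the record's field names and types match the declarations in the GLSL source.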

Perhaps it could be solved with records:

data Shader a = Shader a String

someShader :: Shader {attr1 :: Float, attr2 :: Float}
someShader = Shader {attr1=0,attr2=0} "... GLSL ..."

Is it possible to have a record type as an algebraic type parameter?

I think maybe the whole problem is solved if we can somehow integrate
the types of the shader attributes as a generic parameter.

Evan Czaplicki

Jan 2, 2013, 1:19:18 PM
to elm-d...@googlegroups.com
Yep, records can be used like that. Ah! That reminds me that I did not get around to adding parsing for records in ADTs! Once I add that in you will even be able to say things like:

data Shader a b = Shader { x :: a, y :: b } String

I am still unclear on shaders and their attributes, and why they are global. I can look into this more, though.

Evan Czaplicki

Jan 2, 2013, 1:46:37 PM
to elm-d...@googlegroups.com
Ok, parser is better now. Will push to github tonight.

Martin Dederer

Jan 19, 2013, 6:52:19 AM
to elm-d...@googlegroups.com
I stumbled upon GPipe some time ago, and as far as I can tell it implements a functional API for OpenGL in Haskell. It might be a good source of inspiration.

http://www.haskell.org/haskellwiki/GPipe