Planar 0.4 and the future


Casey Duncan

Mar 21, 2011, 3:40:50 AM
to grease...@googlegroups.com
Hi Folks,

I just released Planar 0.4, which at least for the foreseeable future
will be the final planar release. If there is any bad news, that is
it, but the good news is that I'm primed to get back into hacking on
Grease itself.

It's been 11 months since Planar 0.1 came out, and as much as I'd love
to say that it's come a long way, it's simply not a tenable approach
for a one-man spare-time team to live up to the standard I've set for
it. Also, it does not exactly fit the needs of Grease as I envision
the project moving forward, though I'm certain code and lessons
learned from it will be handy. I'm not really widely advertising
Planar 0.4 as a result of this, but it's on PyPI for any takers and it
could certainly be useful on its own.

I'm going to do some experimental Grease hacking soon, probably moving
it over to github in the coming days. Rather than make a bunch of
speculative promises here now, I'll report back again once I have
figured out if my ideas will pan out.

-Casey

Jonathan Hartley

Mar 21, 2011, 7:29:53 AM
to grease...@googlegroups.com
Good luck with the future direction Casey - I have the utmost confidence
you'll figure out an optimal approach and follow through masterfully on
the implementation.

--
Jonathan Hartley tar...@tartley.com http://tartley.com
Made of meat. +44 7737 062 225 twitter/skype: tartley


Florent Aide

Mar 21, 2011, 4:33:50 AM
to grease...@googlegroups.com
On Mon, Mar 21, 2011 at 8:40 AM, Casey Duncan <casey....@gmail.com> wrote:
> Hi Folks,

Hi Casey,

> It's been 11 months since Planar 0.1 came out, and as much as I'd love
> to say that it's come a long way, it's simply not a tenable approach
> for a one-man spare-time team to live up to the standard I've set for
> it. Also, it does not exactly fit the needs of Grease as I envision
> the project moving forward, though I'm certain code and lessons
> learned from it will be handy. I'm not really widely advertising
> Planar 0.4 as a result of this, but it's on PyPI for any takers and it
> could certainly be useful on its own.

Just a quick word to tell you that I, for one, appreciate the work you
put into your projects and will continue to follow Planar and Grease
with great attention.

> I'm going to do some experimental Grease hacking soon, probably moving
> it over to github in the coming days. Rather than make a bunch of
> speculative promises here now, I'll report back again once I have
> figured out if my ideas will pan out.

Definitely keep us posted on this!

Florent

Casey Duncan

Mar 21, 2011, 12:52:17 PM
to grease...@googlegroups.com
On Mon, Mar 21, 2011 at 8:33 AM, Florent Aide <floren...@gmail.com> wrote:
> On Mon, Mar 21, 2011 at 8:40 AM, Casey Duncan <casey....@gmail.com> wrote:
>> Hi Folks,
>
> Hi Casey,
>
>> It's been 11 months since Planar 0.1 came out, and as much as I'd love
>> to say that it's come a long way, it's simply not a tenable approach
>> for a one-man spare-time team to live up to the standard I've set for
>> it. Also, it does not exactly fit the needs of Grease as I envision
>> the project moving forward, though I'm certain code and lessons
>> learned from it will be handy. I'm not really widely advertising
>> Planar 0.4 as a result of this, but it's on PyPI for any takers and it
>> could certainly be useful on its own.
>
> Just a quick word to tell you that I, for one, appreciate the work you
> put into your projects and will continue to follow Planar and Grease
> with great attention.

Thanks! I'm excited to give Grease some attention and finally
implement some of the ideas I have stashed away. I was beginning to
worry a bit about Pyglet, but some movement there recently has been
encouraging. So, all in all I'm optimistic for some fun Python game
hacking in the not so distant future.

-Casey

Casey Duncan

Mar 21, 2011, 12:54:00 PM
to grease...@googlegroups.com
On Mon, Mar 21, 2011 at 11:29 AM, Jonathan Hartley <tar...@tartley.com> wrote:
> Good luck with the future direction Casey - I have the utmost confidence
> you'll figure out an optimal approach and follow through masterfully on the
> implementation.

Flattery will get you everywhere ;^)

-Casey

bleppie

Apr 4, 2011, 6:54:10 PM
to Grease Users
> it does not exactly fit the needs of Grease as I envision
> the project moving forward,

Hi Casey,

I wonder if you could elaborate. I've been looking for a good 3D
geometry library for pyglet and have been thinking of writing my own,
using lessons from your and other libs. Why does Planar not fit your
needs for Grease? Also, if you're on a roll, why did you decide to
make all the vectors immutable? I'd love to hear your thoughts on these.

Best,
Brian

Casey Duncan

Apr 4, 2011, 8:57:31 PM
to grease...@googlegroups.com
Hi Brian,

I started off making planar specifically for Grease, but it kinda
acquired a life of its own, and I decided that it was drifting me
further away from actually making games. I really like the API of
planar, and it was reaching a point where a few critical features
would make it really useful, in particular general intersection. But
the problem is that Grease wants to deal with data in bulk
"vertically" in components (i.e., arrays of simple objects like
vectors), whereas Planar is all about individual shape instances;
making specific collections for each was not the API direction I
wanted to take, nor did I have the time to do it. Also, the standard
of the planar code (complete Python + C implementations, 100% test
coverage, and docs) means years of development that is possibly going
in the wrong direction from its original purpose.

Another major problem with Planar is that it is strictly 2D, and
some of the folks interested in Grease, myself included, have an
interest in 3D. Although scope limiting is definitely useful, forever
limiting Grease to 2D doesn't seem like the best way to make the
project interesting in the long term. And allowing others, and myself,
to dabble in 3D with it seems good.

So, long story short I am revisiting the idea of using numpy for
Grease components. I had shied away from it for a few reasons before,
but it is actually an almost perfect fit for the Grease API, which
allows you to create arbitrarily large collections of data in arrays
with custom fields and then query them. Numpy basically gets me 80-90%
there and it will be sequestered away in Grease such that you will not
need to be a numpy aficionado in order to use Grease, but you will be
able to leverage that knowledge if you want to. What's great about
numpy is that it supports n-dimensions out of the box, so it will be
possible to do 11D games in Grease should you feel sufficiently
ambitious. ;^)
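
As an illustration of the kind of component storage described above, here
is a numpy sketch. The field name and sizes are hypothetical, not Grease's
actual schema:

```python
import numpy as np

# Hypothetical component layout: one row per entity. The "xy" field
# name and the entity count are illustrative, not Grease's real schema.
velocity = np.zeros(100, dtype=[("xy", np.float64, 2)])

# Give a few entities some motion.
velocity["xy"][:3] = [(1.0, 0.0), (0.0, 2.0), (1.0, 1.0)]

# Query in batch: which entities are moving?
moving = np.any(velocity["xy"] != 0.0, axis=1)
print(moving.sum())  # 3
```

The custom-field dtype is what makes structured arrays a good fit for
components: each entity attribute gets a named, typed column that can be
queried en masse.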

As for vectors in planar, they are immutable for a few reasons:

- Vectors are basically complex numbers, and mutating a number is
problematic. Shared references to vectors in particular can cause
bugs.
- It makes the API and implementation considerably simpler, and more
efficient, particularly for vector collections (e.g., Vec2Array and
Polygon).
- You can add a lot of caching for derived values (like length)
without worrying about invalidation (which is not as easy as it
seems); implementation and testing are simpler.
- Allocating new vectors is nearly free, so there's basically no
penalty to creating new ones (like Python tuples).
- There are plenty of other mutable vector implementations out there,
such as in pygame, if folks prefer them.
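
To make the caching point concrete, here is a minimal sketch of an
immutable 2D vector (not planar's actual implementation): because the
components can never change, a derived value like length can be computed
once and cached with no invalidation logic at all.

```python
import math

# A minimal immutable 2D vector sketch (not planar's actual code).
class Vec2(tuple):
    def __new__(cls, x, y):
        return super().__new__(cls, (x, y))

    @property
    def length(self):
        # Safe to cache forever: the components can never change.
        cached = getattr(self, "_length", None)
        if cached is None:
            cached = self._length = math.hypot(self[0], self[1])
        return cached

    def __add__(self, other):
        # Operations return new vectors instead of mutating in place.
        return Vec2(self[0] + other[0], self[1] + other[1])

v = Vec2(3.0, 4.0)
print(v.length)            # 5.0
print(v + Vec2(1.0, 1.0))  # (4.0, 5.0)
```

Being a tuple subclass, it is also hashable, so instances work as set
members and dictionary keys.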

So back to Grease, I am working on the data layer now, porting it to
use numpy. It will be n-dimensional, and the vectors there will be
mutable (since that's how numpy works). It will be designed to make
operations on collections of things (numbers, vectors, transforms,
shapes) efficient in batch, and elegant to express in Python without
loops. After releasing that I will be working on a new (2D) game, and
adding whatever geometry primitives it needs. At that point support
for 3D will be limited to basic stuff that is easily extended to
n-dimensions. There probably won't be any 3D-specific renderers yet,
but basic ones would not be too complicated. Even for 2D games, being
able to render 3D models is a must-have eventually.

Ironically enough, I haven't decided yet how to approach geometry in
Grease. I imagine I will do things like Lepton, where 3D (or 4D
x,y,z,w) points are the standard and it is able to "up convert" from
lower dimensional data. This is relatively straightforward to do with
numpy using a simple zero-extension or projection transform.
Primitives would then be expressed in 3D, with early preference given
by me to 2D (i.e. literally planar) shapes. Other folks would be free
to implement 3D geometries of course if they felt compelled.
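
A rough numpy sketch of the zero-extension idea (illustrative only, not
actual Grease code):

```python
import numpy as np

# "Up-converting" 2D points to 3D by zero-extending the z coordinate.
points_2d = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
points_3d = np.zeros((len(points_2d), 3))
points_3d[:, :2] = points_2d  # z stays 0.0 for literally planar shapes
print(points_3d[0])  # [1. 2. 0.]
```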

So, I think our goals may not be too distant. One question that
remains unanswered is how tightly coupled such a geometry library
would be to Grease. Ideally it would not be (like planar), but
practically speaking it needs some common ground. In this case I think
the common ground is numpy: so long as the library can use numpy as a
backing store, with hook points that allow "allocations" of backing
arrays to be done by the "user" (i.e., Grease), it can remain
decoupled from "policies" like floating point precision and be purely
algorithmic. This is something I wanted to do
with Planar, but it was just too complicated to do essentially
"generics" in C, and C++ was right out 8^).

Still, coming up with a general geometry API that is useful for both
bulk and individual operations will be challenging. But I do like a
challenge.

Anyway, that's a longer reply than you probably bargained for.
I'd be interested to hear your ideas on what you need from a geometry
library. If we could collaborate, that'd be pretty cool.

-Casey

Jonathan Hartley

Apr 5, 2011, 4:32:56 AM
to grease...@googlegroups.com
Plus, immutable instances can be used in sets and as dictionary keys!

The book SICP (Structure and Interpretation of Computer Programs) gives
a rigorous tutorial in why immutability in general is a good strategy.
There's a splendid point about two-thirds of the way through the book
when they introduce assignment for the first time, while pointing out
all the ghastly potential for bugs, inefficiency and confusion that it
creates, and readers like me who were previously unfamiliar with
functional programming quietly have their mind blown as they absorb the
idea that everything up to that point was done entirely without mutable
state.


On 05/04/2011 01:57, Casey Duncan wrote:
> - Vectors are basically complex numbers, and mutating a number is
> problematic. Shared references to vectors in particular can cause
> bugs.
> - It makes the API and implementation considerably simpler, and more
> efficient, particularly for vector collections (e.g., Vec2Array and
> Polygon).
> - You can add a lot of caching for derived values (like length)
> without worrying about invalidation (which is not as easy as it
> seems); implementation and testing are simpler.
> - Allocating new vectors is nearly free, so there's basically no
> penalty to creating new ones (like Python tuples).
> - There are plenty of other mutable vector implementations out there,
> such as in pygame, if folks prefer them.
>


bleppie

Apr 16, 2011, 9:54:06 AM
to Grease Users
Casey, Thanks for the very thorough answer! Your thoughts on
immutability in particular are helpful. I use numpy for a lot of my
work as well, and would be curious where you take it in Grease.
Backing a simple vector library with numpy feels like overkill, but
there should be a way to allow for easy interoperability: a purely
Python implementation that uses ctypes, for example.

B

Casey Duncan

Apr 18, 2011, 11:19:03 PM
to grease...@googlegroups.com
The benefit of using numpy is that you get collections and
batch operations for free. And in reality you don't even really need
to create a vector class at all, because numpy basically has the
basics covered, and it's even n-dimensional. A shape consisting of
vectors (as most can be) can then be manipulated via numpy at C speed
with no C coding. Imagine transforming a collection of 200 polygons,
or even a bunch of line segments. Doing all the vector math in Python
is simply too slow for a game with even a moderate number of vectors
in play at a time. I can get away with it in blasteroids because the
vertex count is very low.
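
For a feel of what that looks like, here is a hedged sketch: rotating a
few hundred polygons' worth of vertices in one numpy expression rather
than a Python loop (the shapes and counts are made up):

```python
import numpy as np

# 200 hexagons, flattened into a single (1200, 2) vertex array.
rng = np.random.default_rng(0)
verts = rng.random((200 * 6, 2))

# Rotate every vertex 45 degrees in one C-speed matrix multiply.
angle = np.pi / 4
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
rotated = verts @ rot.T

# Rotation preserves each vertex's distance from the origin.
assert np.allclose(np.linalg.norm(rotated, axis=1),
                   np.linalg.norm(verts, axis=1))
```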

A ctypes layer doesn't interest me much. Sending the data back and
forth across ctypes to do the arithmetic in Python will be very slow.
Grease strives to make looping through things in Python unnecessary as
much as practical. To do that, data in components is stored in numpy
arrays where each element (or group of elements) corresponds to an
attribute value for an entity (e.g., an entity's position, color, or
even a collection of vertices of a shape). The API allows you to
access these values as attributes of the entity, like you would
expect, but more powerfully it lets you perform arithmetic across sets
of entities in batch. For instance, you could update the position of a
set of entities in one Python statement, and under the hood the
arithmetic is all performed by numpy en masse.
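
A toy sketch of that style of batch update (not the real Grease API):

```python
import numpy as np

# Positions and velocities for a handful of entities, one row each.
positions = np.array([[0.0, 0.0], [10.0, 5.0]])
velocities = np.array([[1.0, 0.0], [0.0, -2.0]])

dt = 0.5
positions += velocities * dt  # every entity updated in one statement
print(positions)  # rows are now (0.5, 0.0) and (10.0, 4.0)
```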

So to accomplish that, a bunch of shapes might be stored in a
contiguous array of points in Grease under the hood. They would be
exposed via their entities as individual "Polygon" objects or
whatever, but the data of the shape object would be a view (slice) of
the numpy array. Hence my interest in using numpy as a backing store
for shape data, and allowing outside code to "allocate" that data for
the geometry library.

-Casey

bleppie

Apr 20, 2011, 11:38:04 AM
to Grease Users
Sure, that all makes sense and seems like a smart way to handle
things.

I'm thinking of a standalone, lightweight vector class for things of
which I don't have multiples. A window size, a background color, the
center of a force, for example. Things I'm not going to be
manipulating often, but would like the convenience of packaging into a
class that knows how to handle operators, dot products, lengths, etc.
Do you think for these it still makes sense to use a numpy.array?

Casey Duncan

Apr 20, 2011, 7:05:26 PM
to grease...@googlegroups.com
Here's what I would suggest for something lightweight:

- (Ab)use complex numbers for 2D vectors (only). Very fast arithmetic
and built in to Python. The downside is lack of abstraction.
- Use pyeuclid if ultimate speed isn't an issue, or compiled
extensions are. It supports 3D and has a nice API.
- Use pyeigen if you want fast vectors and don't mind compiling some
C/C++; I don't know how its Python API looks, though.
- Use numpy if you want fast batch operations.
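
The complex-number trick, for the curious, looks like this:

```python
import cmath
import math

a = complex(3, 4)   # the vector (3, 4)
b = complex(1, -2)  # the vector (1, -2)

print(abs(a))    # length: 5.0
print(a + b)     # vector addition: (4+2j)
print(a * 1j)    # rotation by 90 degrees: (-4+3j)
print(math.degrees(cmath.phase(a)))  # angle, about 53.13 degrees
```

As noted above, the arithmetic is fast because it's built into the
interpreter, but there's no abstraction: a complex number doesn't read
like a vector in the code.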

However I think as soon as you start creating shapes from collections
of points/vectors, numpy gets attractive really fast.

-Casey

Jonathan Hartley

Apr 21, 2011, 5:55:59 AM
to grease...@googlegroups.com
A very helpful breakdown, thanks Casey.

As you inferred from our conversation at PyCon, I have been starting to
feel similarly about batch operations and numpy, after having studiously
avoided it for years. I still have questions about how it might be used:

1) Assuming the majority of updates could be done efficiently using
numpy batch operations, how expensive is it for application Python code
to then reach in and tweak individual elements every so often? I am not
smart enough to know whether access to individual items within a numpy
array incurs the same sort of per-use penalty that I believe access to
ctypes arrays does.

2) At the other end of the process, I'd like to send a big array of
static geometry to the GPU, containing the concatenated modelspace
vertices for all objects in my scene. Also, I'd send a smaller dynamic
array of positions and orientations (as matrices for 3D, or as array of
(x, y, angle) for 2D.) Each vertex in the static geometry would be
tagged with an integer attribute ('model_id'), indexing into the
position/orientation array. The vertex shader would then apply the
indexed transform to each vertex position. Unless I misunderstand, this
should enable me to draw many independently positioned and oriented
meshes using a single OpenGL render call. The only substantial overhead
within the rendering would be streaming the array of
positions/orientations to the GPU every frame. This array has one entry
per model, rather than one per vertex, so is hopefully substantially
smaller than streaming geometry to the GPU.
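
A CPU-side numpy sketch of the lookup the vertex shader would perform
(applied here to translations only, for brevity; the names `model_xy` and
`model_id` are just illustrations of the scheme described above):

```python
import numpy as np

# One (x, y) offset per model; a full version would carry angle too.
model_xy = np.array([[0.0, 0.0], [10.0, 0.0]])

# One model_id per vertex, tagging which model each vertex belongs to.
model_id = np.array([0, 0, 0, 1, 1, 1])
verts = np.zeros((6, 2))  # modelspace positions (all at the origin here)

# Fancy indexing broadcasts each model's offset onto its own vertices,
# which is essentially what the indexed lookup in the shader would do.
world = verts + model_xy[model_id]
print(world[:, 0])  # x coordinates: 0, 0, 0, 10, 10, 10
```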

This has been recommended to me for years by people smarter and more
experienced than I, but I have not yet made time to implement it. Has
anyone else been down this path, with any wisdom to offer? In
particular, how hard is it to then add or remove models from the scene?

Jonathan


Jonathan Hartley

Apr 21, 2011, 6:55:52 AM
to grease...@googlegroups.com
Also!

Alex Holkner mentioned to me about six months ago that for his personal
uses, he now prefers (his) vectypes library over (also his) pyeuclid
library:
http://code.google.com/p/vectypes/

I think the advantages over pyeuclid are a GLSL-like vector/matrix API,
and that internally, it is more maintainable due to using code
generation to produce various sizes of vectors and matrices from a
single template.

Jonathan



Casey Duncan

Apr 21, 2011, 12:24:15 PM
to grease...@googlegroups.com
On Thu, Apr 21, 2011 at 3:55 AM, Jonathan Hartley <tar...@tartley.com> wrote:
> A very helpful breakdown, thanks Casey.
>
> As you inferred from our conversation at PyCon, I have been starting to feel
> similarly about batch operations and numpy, after having studiously avoided
> it for years. I still have questions about how it might be used:

It's strange, I've had something of an allergic reaction to numpy for
years, but it really does mesh well with my "model" now.

> 1) Assuming the majority of updates could be done efficiently using numpy
> batch operations, how expensive is it for application Python code to then
> reach in and tweak individual elements every so often? I am not smart enough
> to know whether access to individual items within a numpy array incurs the
> same sort of per-use penalty that I believe access to ctypes arrays does.

Although I haven't measured it, my impression is that changing
individual elements is similar in cost to changing elements in a
python list. Of course, an "element" in numpy can run the gamut of
complexity from scalars to multi-dimensional arrays.

One thing that makes numpy fit in pretty well as a backing store is
the concept of array slices as views. Thus slices are extremely
efficient to create, and mutating them writes back to the backing
array. This allows for some interesting API possibilities.
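
Concretely (a toy illustration, not actual Grease code):

```python
import numpy as np

backing = np.zeros((8, 2))   # pretend this is a component's storage
polygon = backing[2:5]       # one shape's vertices: a view, not a copy

# Mutating the view writes straight through to the backing array.
polygon[:] = [(1.0, 1.0), (2.0, 1.0), (1.5, 2.0)]
print(backing[3])  # [2. 1.] -- the backing store saw the change
```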

> 2) At the other end of the process, I'd like to send a big array of static
> geometry to the GPU, containing the concatenated modelspace vertices for all
> objects in my scene. Also, I'd send a smaller dynamic array of positions and
> orientations (as matrices for 3D, or as array of (x, y, angle) for 2D.)
>  Each vertex in the static geometry would be tagged with an integer
> attribute ('model_id'), indexing into the position/orientation array. The
> vertex shader would then apply the indexed transform to each vertex
> position. Unless I misunderstand, this should enable me to draw many
> independently positioned and oriented meshes using a single OpenGL render
> call. The only substantial overhead within the rendering would be streaming
> the array of positions/orientations to the GPU every frame. This array has
> one entry per model, rather than one per vertex, so is hopefully
> substantially smaller than streaming geometry to the GPU.
>
> This has been recommended to me for years by people smarter and more
> experienced than I, but I have not yet made time to implement it. Has anyone
> else been down this path, with any wisdom to offer? In particular, how hard
> is it to then add or remove models from the scene?

So in general I think this idea is a good one, since then there is a
fixed cost per object and things like LOD optimizations or whatnot can
be done entirely on the GPU. I might suggest that rather than
special-casing 2D, you instead just support matrices of various sizes,
or at least 3x3 and 4x4. You could also optimize away the last row if
you know the transformations are always affine, which is probably
sufficient for many applications.

So in Grease, to solve the problem of removing entities, whose data
probably occurs in the middle of the data arrays, I use a mask that
knows which entities still exist. And deleted entity "slots" are
recycled for new entities. Thus the arrays tend to be only a little
larger than the number of active entities, with some space left over
at the end so that growing the arrays is needed less frequently. Of course
the extra space at the end is no problem to omit when using the array,
by just stopping or slicing to the "highest" entity. In this sense the
arrays act more like a hash, but the hashing itself can be statically
computed into the entity identifier, since it's constant for the life
of the entity.

numpy has some features for making this type of array masking efficient.
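
A rough sketch of the mask-and-recycle scheme (illustrative only):

```python
import numpy as np

positions = np.arange(10.0).reshape(5, 2)           # 5 entity slots
alive = np.array([True, True, False, True, False])  # slots 2, 4 deleted

# Batch operations touch only live entities via the boolean mask.
positions[alive] += 1.0

# A new entity recycles the first free slot instead of growing the array.
slot = int(np.flatnonzero(~alive)[0])
alive[slot] = True
print(slot)  # 2
```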

For rendering, you need to ensure that the elements in the
transformation array correspond to the static geometry data on the
GPU. So if an "entity" corresponding to one object in a scene is
deleted, you still need to send a transformation array for it so the
data lines up. To ensure the object is not drawn, I would suggest a
couple of ideas:

- Have the renderer pass an "off-screen" position for all deleted objects.
- Have the renderer pass a null transform for all deleted objects.

The latter could maybe be special-cased in the shader, if there is a
way to "skip" vertices in GLSL, but I can't remember right now. Lepton
uses the off-screen method for deleted particles by setting a very
large Z value for their position. It seems like a bit of a hack, but
it's very effective. And since the slots will be recycled, they don't
tend to be too numerous.

Of course a problem here is ensuring that when a slot is recycled that
the static vertices corresponding to the slot are still correct. In
Grease that sort of thing is handled by splitting the arrays into
blocks where each block holds data for only a single entity type. The
blocks are actually designed to make querying by type and batch
operations for a single entity type more efficient, but they might be
leveraged for this sort of "cross-domain" CPU/GPU synchronization as
well.

What would probably be more general would be for the renderer to keep
the CPU and GPU synchronized. Since presumably the renderer would load
the static vertex data, it would know how it was organized. It could
then reorganize the entities so that they were drawn with the proper
geometry. Of course if draw order is important then it all kinda goes
out the window. What you want then is for the set of geometry to
reduce itself into triangle soup so that they can be ordered properly
for drawing. Putting as much of that work in the GPU as possible would
be ideal, but that's definitely stretching beyond the limits of my
shader fu.

I could go on and on, of course 8^). I do think your concept has merit
and for complex geometries would be a win. There is, however, a good
deal of complexity in working around the static and linear nature of
the data juxtaposed against the dynamic nature of the application. Using
numpy does, however, give a leg up on manipulating that data in useful
ways.

Note: to really do this right, I think you want geometry shaders, or
at least instancing. Have you played around with those at all?

-Casey

Casey Duncan

Apr 21, 2011, 12:33:38 PM
to grease...@googlegroups.com
Forgot about vectypes, good call!

-Casey

Casey Duncan

Apr 21, 2011, 12:39:10 PM
to grease...@googlegroups.com
Also, this discussion has reminded me a bit why I retreated back to 2D
when I first started Grease. Planar quads ftw!

-Casey

Jonathan Hartley

Apr 21, 2011, 2:02:31 PM
to grease...@googlegroups.com
Much food for thought. I'll be re-reading this again carefully tomorrow.

>> Note to really do this right, I think you want geometry shaders, or at
>> least instancing. Have you played around with those at all?

Aaaagh. I have not. As my colleague once observed, "if only we were
cleverer, then we wouldn't have to think so hard all the time."

Just in case there is anyone else on the list who knows as little about
instancing as myself, I liked this quick intro about it:
http://www.geeks3d.com/20100629/test-opengl-geometry-instancing-geforce-gtx-480-vs-radeon-hd-5870/

Page 3 has a quick description of a few different techniques
Page 2 has a download demonstrating each one


Casey Duncan

Apr 21, 2011, 4:27:25 PM
to grease...@googlegroups.com
I would love to be able to collaborate on a "trisoup" library to
abstract all of the OpenGL kung fu needed to load, modify, and render
a bunch of shapes using these types of techniques. I'm not sure how
practical an idea it is, but it would be fun to prototype.

-Casey

Casey Duncan

Apr 21, 2011, 4:32:28 PM
to grease...@googlegroups.com
Actually I would say "If only we weren't clever, then we wouldn't have
to think so hard all the time" ;^)

I tend to think of cleverness as both a blessing and a curse, many
times more the latter.

-Casey


Jonathan Hartley

Apr 22, 2011, 7:04:49 AM
to grease...@googlegroups.com
Conflicting emotions. I feel like the previously discussed ideas are
within my reach, but I'm intimidated by the trisoup idea. I appreciate
the value of it, no doubt, but it's a couple of steps beyond the level
of any graphics stuff I've done before, so I couldn't promise to be of
any value in building it.

Casey Duncan

Apr 22, 2011, 11:02:45 AM
to grease...@googlegroups.com
Yeah, I don't know that I have a strong enough need to motivate the
level of effort and learning required. That's OK, just throwing crazy
ideas around.

-Casey
