SeExpr internship at Disney Animation

Andrew Selle

Feb 3, 2014, 4:11:13 PM
to seexpr-...@googlegroups.com
I apologize for sending this out on the list if you are not interested, but I thought somebody might be, so here goes:

At Disney, we are currently seeking a summer intern candidate to extend and improve SeExpr significantly. The major focus would be to complete integration of an LLVM backend into the v2 branch of SeExpr. We're looking for somebody who has a working knowledge of compilers (specific LLVM experience is a plus). Moreover, it would be even better to have somebody who knows and cares about SeExpr (hence the post on this list)! The results of the internship will be released into the open source version on GitHub. If you or somebody you know is interested, please visit our formal posting here: https://careers.disneyanimation.com/job_groups/job_description?id=361 (sooner rather than later).

-A

Don

Oct 19, 2015, 3:42:34 AM
to seexpr-discuss
Hi,

Just wondering if the LLVM backend has been completed and whether it will be open sourced? I've been considering extending SeExpr with an LLVM JIT backend and TBB for a little while.

- Don

Andrew Selle

Oct 19, 2015, 3:45:24 AM
to seexpr-...@googlegroups.com
SeExprV2, as we are calling it, is decently far along. We are deploying it on a test basis for one of our internal products, and once that has shown it is reasonably stable we will probably make an early version available publicly on a branch.

Threading implications are still a work in progress. What we have typically done with SeExprV1 is create thread-local copies of the expressions so that each copy can be evaluated in parallel. It would be nice to have a better solution; one thought is that if we change variables to be defined in a variable block, you could encapsulate that as a per-thread stack item. Love to talk about this more...
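Roughly, the thread-local pattern with v1 looks like this (a minimal sketch; I'm writing the v1 variable-reference signatures from memory, so treat them as approximate, and "u" is just a stand-in external variable):

    #include <SeExpression.h>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Each worker thread builds its own copy of the expression, so
    // parsing and evaluation state never cross threads.
    class MyExpr : public SeExpression {
    public:
        // External scalar variable "u" supplied by the host per sample.
        struct UVar : public SeExprScalarVarRef {
            double value;
            void eval(const SeExprVarNode*, SeVec3d& result) { result[0] = value; }
        };
        mutable UVar u;

        MyExpr(const std::string& src) : SeExpression(src) {}

        SeExprVarRef* resolveVar(const std::string& name) const {
            if (name == "u") return &u;
            return 0;
        }
    };

    // Each thread runs this over its own [begin, end) slice; the
    // per-thread MyExpr instance is what makes parallel evaluation safe.
    void evalSlice(const std::string& src, std::size_t begin, std::size_t end,
                   std::vector<SeVec3d>& out)
    {
        MyExpr expr(src);                     // thread-local copy
        if (!expr.isValid()) return;
        for (std::size_t i = begin; i < end; ++i) {
            expr.u.value = double(i);
            out[i] = expr.evaluate();
        }
    }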

-A

Colin Doncaster

Oct 19, 2015, 9:41:22 AM
to seexpr-...@googlegroups.com
This is similar to what we’re doing in Yeti.  

We have the concept of a DataBlock, which holds handles to the different attributes (which become variables, e.g. P) on the current object being processed. Prior to evaluation, all of the expressions (wrapped up in a Parameter class) are first sent through a pre-flight single-state evaluation to ensure each expression is valid. We then spawn worker threads to process the current object; during this stage the Op using the Parameters (with the embedded expressions) can access the parameter values through a ParameterQuery class, which wraps up the expression, the attribute handles, and per-thread indices for each attribute.

There was a lot of extra plumbing to get it all to thread nicely, so having a SeExprDataBlock class would be helpful. It's pretty lightweight in our case: each variable just holds a handle to an attribute and an index, and the different attribute types are handled with templating.
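To give a rough idea of the shape (a simplified, hypothetical sketch rather than our actual Yeti code; all of the names here are made up):

    #include <cstddef>
    #include <vector>

    // Hypothetical attribute storage: one array per named attribute (e.g. P).
    template <typename T>
    struct Attribute {
        std::vector<T> data;
    };

    // Lightweight per-variable binding: a handle to an attribute plus the
    // index of the element the current worker thread is processing.
    template <typename T>
    struct AttributeHandle {
        const Attribute<T>* attr;
        std::size_t index;                   // advanced per thread / per element
        const T& value() const { return attr->data[index]; }
    };

    // A data-block-style bundle of the handles an expression's variables
    // resolve against; each worker thread holds its own copy with its own
    // indices while the parsed expression itself stays shared.
    struct DataBlockView {
        AttributeHandle<double> P[3];        // e.g. position components
    };

Something along those lines as a SeExprDataBlock would cover our use case.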

Not too sure if this is what you were thinking…

Colin

Alexander Kalyuzhnyy

Oct 27, 2015, 3:16:58 AM
to seexpr-discuss
Well, that sounds similar to how we did some simple patches in our in-house SeExpr fork here at Wizart Animation.

Our reasoning for that: we were pretty freaked out by the idea of holding a SeExpression instance for each processed primitive.

So what we did:
1) Windows support patches => proper codegen with bison and flex.
2) TBB parallel_for evaluation without copying => we extended the SeExprNodes API to allow the application writer to pass a void* pointer into evaluate. That way we can resolve external variables on each eval from separate threads.
3) Local variables problem => in the same way, we pass an internal SeExprLocalThreadContext struct on each eval, and resolve and write local variables using that data.

So this way we can parse a SeExpression once and evaluate a single instance from several threads.
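Roughly, the usage pattern looks like this (a simplified sketch: an evaluate() that takes a context is our fork's extension, not the stock SeExpr API, and the type names here are just illustrative):

    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>
    #include <cstddef>
    #include <vector>

    // Per-eval context our fork threads through evaluate(); external
    // variables and local-variable writes go through it instead of
    // through the (shared, read-only) parsed expression.
    struct EvalContext {
        double u;                            // example external variable
        double locals[16];                   // scratch for expression locals
    };

    // Stand-in for the patched expression: parse once, evaluate many times.
    // In the real fork this is a SeExpression whose evaluate() takes a
    // void* context; it is mocked here so the sketch is self-contained.
    struct PatchedExpr {
        double evaluate(EvalContext& ctx) const { return ctx.u * 2.0; }
    };

    void evaluateAll(const PatchedExpr& expr, std::vector<double>& out)
    {
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, out.size()),
            [&](const tbb::blocked_range<std::size_t>& r) {
                EvalContext ctx = {};                    // one per task
                for (std::size_t i = r.begin(); i != r.end(); ++i) {
                    ctx.u = double(i) / double(out.size());
                    out[i] = expr.evaluate(ctx);         // no expression copies
                }
            });
    }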

Not sure if our patches are good enough for a pull request, or even needed if a big refactor is on the roadmap; just wanted to let you know about other approaches.

P.S.
JIT is a must-have feature; the interpreter can still be very slow even with simple expressions.

Alexander
On Monday, October 19, 2015 at 16:41:22 UTC+3, Colin Doncaster wrote: