# Slow Reading / Writing Envelope.Weights.Array


### Eric Thivierge

May 30, 2013, 3:58:04 PM
to soft...@listproc.autodesk.com
Anyone know if there is a way to speed up reading and writing speeds to
the Weights Array for envelopes? It's extremely slow on high point count
/ high deformer count meshes.

I'm using Python but I'm not sure if that is the reason for the
slowness. Anyone else already do some testing or have any findings that
may help?

I'm writing some common tools that many have already done such as
normalizing weights, pruning, symmetrizing, etc.
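
For context, "normalizing" here means scaling each point's weights so they sum to 100 (Softimage stores envelope weights as percentages). A minimal pure-Python sketch of that step; the flat point-major layout and the helper name are illustrative, not the actual tool code:

```python
def normalize_weights(weights, deformer_count, total=100.0):
    """Scale each point's weights so they sum to `total`.

    `weights` is a flat list laid out point-major:
    [P1D1, P1D2, ..., P1Dn, P2D1, ...]
    """
    out = list(weights)
    for p in range(0, len(out), deformer_count):
        point_sum = sum(out[p:p + deformer_count])
        if point_sum == 0:
            continue  # leave unweighted points untouched
        factor = total / point_sum
        for d in range(p, p + deformer_count):
            out[d] *= factor
    return out

# Two points, two deformers: each point's weights end up summing to 100.
print(normalize_weights([30.0, 30.0, 10.0, 40.0], 2))
```

The pass itself is linear in the number of weights; as the thread goes on to show, the real cost is fetching the array in the first place.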

Any experiences confirming this slowness or experiences where it is
exponentially faster in other languages are welcome too.

Thanks,

--

Eric Thivierge
===============
Character TD / RnD
Hybride Technologies

### Jeremie Passerin

May 30, 2013, 4:05:18 PM
to softimage
Writing to the envelope array is usually pretty fast for me... what's taking time (in my case) is doing all the normalization of values...

This is how I read my weights:

```
def getWeights(envelopeOp):
    weightsTuple = envelopeOp.Weights.Array
    return [weightsTuple[j][i] for i in range(len(weightsTuple[0])) for j in range(len(weightsTuple))]
```

This is an example of how I set the weights (average weights):

```
def averageWeights(envelopeOp, points=None):
    '''
    \remarks set the weights of given points to the average weights of given points
    \param envelopeOp Envelope Operator - the envelope operator.
    \param points List of Integer - Index of vertices to average.
    '''
    deformerCount = envelopeOp.Deformers.Count
    weightsTuple = envelopeOp.Weights.Array
    weights = getWeights(envelopeOp)
    if points is None:
        # 'mesh' was undefined here originally; the enveloped object can be
        # reached from the operator itself.
        mesh = envelopeOp.Parent3DObject
        points = range(mesh.ActivePrimitive.Geometry.Points.Count)

    a = [0] * deformerCount
    for pointIndex in points:
        for def_index in range(deformerCount):
            a[def_index] += weightsTuple[def_index][pointIndex]

    for pointIndex in points:
        for def_index in range(deformerCount):
            weights[pointIndex * deformerCount + def_index] = a[def_index] / len(points)

    envelopeOp.Weights.Array = weights
```

### Eric Thivierge

May 30, 2013, 4:21:11 PM
to soft...@listproc.autodesk.com
Thanks Jeremie,

I was referencing your code when I ran into the slowness to see if we are doing anything different and we aren't really.

As a test I'm grabbing the XSI Man Armored, selecting the body mesh, doing a local subdiv refinement with a setting of 2, then freezing modeling. Then I run the following code with the body mesh selected:

```
# Python
# =============================================
from platform import system as OStype
from time import clock

xsi = Application
log = xsi.LogMessage
sel = xsi.Selection

start_time = clock()

weights = [list(x) for x in sel(0).Envelopes(0).Weights.Array]

timeTaken = clock() - start_time

# '==' here, not 'is': identity comparison on strings is unreliable.
units = "seconds" if OStype() == "Windows" else "milliseconds"
msg = "It took " + str(timeTaken) + " " + units + " to process your code."

log(msg)
# =============================================
```

It's taking around 6 seconds for me.

```
Eric Thivierge
===============
Character TD / RnD
Hybride Technologies
```

### Jeremie Passerin

May 30, 2013, 4:38:42 PM
to softimage
just tested your scenario... got the same result here :D
Actually your code is faster than mine

### Eric Thivierge

May 30, 2013, 4:42:23 PM
to soft...@listproc.autodesk.com
Well, all my code is doing is creating a list of lists from the weights array, nothing more.

Should I consider this speed acceptable then? When running various tools that use this call to the weights array it seems extremely slow. Am I just being impatient, or is this to be expected on meshes with this density and number of deformers? Should I accept it or look for other ways to speed it up?

Opinions welcome.

```
Eric Thivierge
===============
Character TD / RnD
Hybride Technologies
```

### Alok Gandhi

May 30, 2013, 4:56:39 PM
to soft...@listproc.autodesk.com

### Eric Thivierge

May 30, 2013, 4:59:53 PM
to soft...@listproc.autodesk.com
Thanks Alok,

I don't think the slowness comes from converting the tuple to lists; it's more that the access to Weights.Array itself is slow, since I see the same slowness (~5 seconds) even when I don't convert it at all.

```
Eric Thivierge
===============
Character TD / RnD
Hybride Technologies
```

### Alan Fregtman

May 30, 2013, 5:02:06 PM
to XSI Mailing List
You can convert it much faster with:

```
weights = map(list, sel(0).Envelopes(0).Weights.Array)
```

On my box here at work it went from 17.81 seconds down to 7.9s with the line above.

### Eric Thivierge

May 30, 2013, 5:14:02 PM
to soft...@listproc.autodesk.com
Same time for me. 6 seconds.

Eric Thivierge
===============
Character TD / RnD
Hybride Technologies

On 30/05/2013 5:02 PM, Alan Fregtman wrote:
> map(list, sel(0).Envelopes(0).Weights.Array)

### Alan Fregtman

May 30, 2013, 5:15:48 PM
to XSI Mailing List
Weird. Maybe it's cause I'm on Linux?

### Jens Lindgren

May 30, 2013, 5:17:46 PM
to soft...@listproc.autodesk.com
I just tried these examples myself.
Got 4.5 seconds with Eric's example and 4.6 seconds with map(). Using Soft 2014 on Windows 8.
It took longer to subdivide the mesh.

/Jens
--
Jens Lindgren
--------------------------

### Alok Gandhi

May 30, 2013, 6:35:36 PM
to soft...@listproc.autodesk.com
Maybe you can still get some optimization using numpy, I think. Throw your weights array directly into numpy. It is worth a try at least.

Sent from my iPhone

### Eric Thivierge

May 30, 2013, 6:44:34 PM
to soft...@listproc.autodesk.com

On Thu, May 30, 2013 at 6:35 PM, Alok Gandhi wrote:
> Maybe you can still get some optimization using numpy, I think. Throw your weights array directly into numpy. It is worth a try at least.

I'll give it a shot tomorrow. :)

--------------------------------------------
Eric Thivierge
http://www.ethivierge.com

### Bartosz Opatowiecki

May 31, 2013, 2:50:38 AM
to soft...@listproc.autodesk.com
Hi Eric,

Why don't you use the cProfile module to find the slowest part of that code?
Also KCacheGrind is cool if you like visual feedback.
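
For what it's worth, cProfile makes it easy to see whether the time goes into the Weights.Array fetch or into the tuple-to-list conversion. A rough sketch (fetch_weights is a dummy stand-in for the real sel(0).Envelopes(0).Weights.Array call; Python 3 syntax here — inside Softimage's Python 2 you'd use StringIO.StringIO):

```python
import cProfile
import io
import pstats

def fetch_weights():
    # Dummy stand-in for sel(0).Envelopes(0).Weights.Array.
    return tuple(tuple(0.0 for _ in range(126)) for _ in range(50))

def convert(weights):
    # The tuple-of-tuples -> list-of-lists step being timed in this thread.
    return [list(row) for row in weights]

profiler = cProfile.Profile()
profiler.enable()
convert(fetch_weights())
profiler.disable()

# Sort by cumulative time to see which step dominates.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```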

Bartek Opatowiecki

On 2013-05-31 00:44, Eric Thivierge wrote:

### Xavier Lapointe

May 31, 2013, 2:53:32 AM
to soft...@listproc.autodesk.com
Or our lovely RunSnakeRun (cProfile like Bartosz mentioned):

### Bartosz Opatowiecki

May 31, 2013, 3:00:19 AM
to soft...@listproc.autodesk.com
Also:

```
def getWeights(envelopeOp):
    weightsTuple = envelopeOp.Weights.Array
    return [weightsTuple[j][i] for i in range(len(weightsTuple[0])) for j in range(len(weightsTuple))]
```

xrange should be more efficient than range....
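
Indeed: in Python 2, range() builds the whole index list in memory while xrange() yields indices lazily. Python 3's range behaves like Python 2's xrange, which makes the difference easy to demonstrate (list() below plays the role of Python 2's range):

```python
import sys

n = 10 ** 6
lazy = range(n)           # like Python 2's xrange: a constant-size object
eager = list(range(n))    # like Python 2's range: materializes every index

print(sys.getsizeof(lazy))   # a few dozen bytes
print(sys.getsizeof(eager))  # several megabytes of pointers alone
```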

On 2013-05-31 00:44, Eric Thivierge wrote:

### olivier jeannel

May 31, 2013, 5:40:31 AM
to soft...@listproc.autodesk.com
Every once in a while I pay a visit to this site
http://emrahgonulkirmaz.com/ because I'm a fan :)

Seems Emrah was involved in Iron Man 3.
I haven't seen the movie, but possibly SI was involved.

Well, if anyone has info on this...

Very inspiring anyway.

### christian

May 31, 2013, 8:02:17 AM
to XSILIST
Well, it says he only did the style frames. According to http://www.artofthetitle.com/title/iron-man-3/ (though the article is offline for whatever reason) the titles were done by Prologue, who seem to be a Maya house judging by their job listings.

### Eric Thivierge

May 31, 2013, 9:29:05 AM
to soft...@listproc.autodesk.com
This takes even longer, ~14 seconds. It really seems the slowness is simply in accessing envelopeOp.Weights.Array.

Any devs paying attention that could confirm that this is just a slow call? Would doing these operations in C++ be the only way to speed them up?

```
# Python
# =============================================
from platform import system as OStype
from time import clock

xsi = Application
log = xsi.LogMessage
sel = xsi.Selection

start_time = clock()

envelopeOp = sel(0).Envelopes(0)
weightsTuple = envelopeOp.Weights.Array
weights = [weightsTuple[j][i] for i in xrange(len(weightsTuple[0])) for j in xrange(len(weightsTuple))]

timeTaken = clock() - start_time

units = "seconds" if OStype() == "Windows" else "milliseconds"
msg = "It took " + str(timeTaken) + " " + units + " to process your code."

log(msg)
# =============================================
```
```
Eric Thivierge
===============
Character TD / RnD
Hybride Technologies
```
On 31/05/2013 3:00 AM, Bartosz Opatowiecki wrote:

### olivier jeannel

May 31, 2013, 9:48:30 AM
to soft...@listproc.autodesk.com
Mmm yes, I can see that.
Anyway, great approach. Seems he made clever use of Eric Mootz's emTools Rotate Vector or something of the kind.
Love the stills.

### Martin

May 31, 2013, 10:50:14 AM
to soft...@listproc.autodesk.com
Have you tried VBScript?
I usually do my envelope related scripts in VBScript because it is way faster than JScript (my main scripting language).

VBScript:
```
selection(0).Envelopes(0).Weights.Array
```

is faster than JScript:
```
new VBArray( selection(0).Envelopes(0).Weights.Array )
```

and way, way faster than JScript:
```
selection(0).envelopes(0).Weights.Array.toArray()
```

I don't use Python, but in my tests, Python has been the slowest of the three when dealing with envelopes, subcomponent arrays and a bunch of loops.

M.Yara

### Bartosz Opatowiecki

May 31, 2013, 1:00:46 PM
to soft...@listproc.autodesk.com
What do you mean by "high point count / high deformer count meshes"?

Bartek Opatowiecki

On 2013-05-31 15:29, Eric Thivierge wrote:

### Alan Fregtman

May 31, 2013, 1:04:26 PM
to XSI Mailing List
Here's another way to read weights. Not sure if it's faster. Probably not.

```
xsi = Application
obj = xsi.Selection(0)
envCls = obj.ActivePrimitive.Geometry.Clusters(0)
envProp = envCls.Properties(0)

# Envelope ICE attrs:
# EnvelopeWeights, EnvelopeWeightsPerDeformer, EnvelopeDeformerIndices, NbDeformers

data = envProp.GetICEAttributeFromName("EnvelopeWeights").DataArray2D
indices = envProp.GetICEAttributeFromName("EnvelopeDeformerIndices").DataArray2D
nbdef = envProp.GetICEAttributeFromName("NbDeformers").DataArray[0]
print "Deformer count: %s" % nbdef

for i, val in enumerate(zip(indices, data)):
    idx, val = val
    print "Point #%s indices:(%s) - values:(%s)" % (i, idx, val)
```

Beware that letting it print everything may be a terrible idea with a lot of points. :p You may wanna put an "if i > 100: break" line in there.

### Steven Caron

May 31, 2013, 1:09:28 PM
to soft...@listproc.autodesk.com
Ya, if performance is key I would use C++ or VB and make a command... but you might run right back into issues passing the data back into Python.

### Eric Thivierge

May 31, 2013, 1:15:09 PM
to soft...@listproc.autodesk.com
Honestly, I don't necessarily need to pass them back to Python, so this is probably the plan for now.

```
Eric Thivierge
===============
Character TD / RnD
Hybride Technologies
```

### Matt Lind

May 31, 2013, 6:24:59 PM
to soft...@listproc.autodesk.com

Have you tried a GridData object? It would abstract the problem away from Python and be flexible enough to pass around between commands written in various languages. I usually use the GridData object because it simplifies coding and has a few methods specifically for working with column data, making it easier to manipulate weight values per deformer. You do give up some speed compared to methods available in the other languages, but in your situation the GridData might perform faster than anything available in Python.

The snippet below runs in JScript in about 4 seconds on my 5-year-old computer. It will probably run faster on yours.

```
// JScript

function main()
{
    var oObject = Selection(0);
    var oEnvelope = oObject.Envelopes(0);

    // Get weight values from envelope
    var oWeightData = XSIFactory.CreateGridData();
    oWeightData.Data = oEnvelope.Weights.Array;

    // Modify the weight values for the first deformer
    var aWeightValues = new Array( oEnvelope.Weights.Count );
    oWeightData.SetColumnValues( 0, aWeightValues );

    // Update the envelope with modified weight values
    oEnvelope.Weights.Array = oWeightData.Data;
}
```

### Raffaele Fragapane

May 31, 2013, 7:59:09 PM
to soft...@listproc.autodesk.com
You should use numpy regardless, because all the operations you'll need to run after you pull the data will be A TON faster. So start from there.
As for the time it takes, what's the size of the table we're talking about? The data fetching stage for the HR Gorgo (the only one I tested when I refactored the weight handling tools) was shy of two seconds. You know the boxes and assets :)

Are you working with heavier meshes and deformer counts than that?
That was a straight fetch and cast to numpy.
--
Our users will know fear and cower before our software! Ship it! Ship it and let them flee like the dogs they are!

### Eric Thivierge

May 31, 2013, 11:15:33 PM
to soft...@listproc.autodesk.com

It's very dense geo, though probably nowhere near the Gorgo. It's just Softimage's default XSI Man Armored: the body mesh, locally subdivided, and fetching from there.

In my previous posts it's simply the data fetch that is taking 6 seconds. The other processes aren't taking too long regardless. I was surprised because I remember the interaction on the dinos with your tools.

### Raffaele Fragapane

May 31, 2013, 11:22:21 PM
to soft...@listproc.autodesk.com
The trick to that is numpy.asarray(aWeights) ;)
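
To illustrate the cast (the values and shapes are invented; Weights.Array really comes back as one tuple per deformer), plus a typical vectorized operation afterwards:

```python
import numpy as np

# Stand-in for envelopeOp.Weights.Array: one row per deformer,
# one column per point (values invented for the example).
aWeights = ((10.0,   0.0, 25.0),
            (30.0, 100.0, 25.0))

w = np.asarray(aWeights)   # single cast, no per-element Python loop
per_point = w.T            # rows are now points, columns deformers

# Bulk operations are then vectorized, e.g. normalizing each point to 100:
sums = per_point.sum(axis=1, keepdims=True)
normalized = np.where(sums > 0.0, per_point * (100.0 / sums), per_point)
print(normalized)
```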

### Bartosz Opatowiecki

Jun 1, 2013, 3:08:20 AM
to soft...@listproc.autodesk.com
On 2013-05-31 15:29, Eric Thivierge wrote:
> It really seems the slowness is simply in accessing envelopeOp.Weights.Array.

Indeed, you are right.

Bartek Opatowiecki

### Pingo van der Brinkloev

Jun 1, 2013, 4:25:35 AM
to soft...@listproc.autodesk.com
Hey list, I have some particles flowing nicely along a curve (with strands). I would like to add turbulence to the particles so they wiggle but still follow the curve(!). How do I do this without the particles going a-walk on me? Whichever way I try it, the turbulence always accumulates.

Feel like there's some memo on turbulence I didn't get.

Cheers!

### Rob Chapman

Jun 1, 2013, 6:52:03 AM
to soft...@listproc.autodesk.com
Turbulences accumulate in the valleys, you say? Try the curl noise framework.

### Pingo van der Brinkloev

Jun 1, 2013, 12:07:30 PM
to soft...@listproc.autodesk.com
Damn, I can't get any of the solutions to work.. this should be a walk in the park. Any other ideas?

P

### pet...@skynet.be

Jun 1, 2013, 1:34:36 PM
to soft...@listproc.autodesk.com
Are you adding the turbulence on the position, like "get position -> turbulence -> set position"? If so, the turbulence will be very "strong" and will mostly override the effect of any forces.

For a flow along a curve, I'd combine three forces / vectors:
- subtract the point position of the closest location on the curve from the particle's position (or the other way around) to pull the particles towards the curve
- the point tangent of the closest location on the curve for the movement along the curve
- turbulence to randomize

You can multiply each of those vectors by a scalar, and by changing the scalars have fine control over the blending of the different forces.
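
The same three-vector blend can be sketched outside ICE in plain Python (the helper names, scalar defaults, and the random-jitter stand-in for turbulence are all invented for illustration):

```python
import random

def add(a, b):   return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def scale(v, s): return (v[0] * s, v[1] * s, v[2] * s)

def flow_force(particle_pos, closest_curve_pos, curve_tangent,
               pull=1.0, flow=1.0, noise=0.3):
    """Blend the three vectors described above:
    pull toward the curve, flow along it, and randomize."""
    to_curve = scale(sub(closest_curve_pos, particle_pos), pull)  # toward curve
    along = scale(curve_tangent, flow)                            # along curve
    jitter = scale(tuple(random.uniform(-1.0, 1.0) for _ in range(3)), noise)
    return add(add(to_curve, along), jitter)

# With noise=0.0 only the pull and flow terms remain.
print(flow_force((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                 pull=1.0, flow=2.0, noise=0.0))
# -> (-1.0, 0.0, 2.0)
```

Tuning pull/flow/noise corresponds to the per-vector scalars mentioned above.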

### Orlando Esponda

Jun 1, 2013, 4:46:06 PM
to softimage
Pingo, here's a tricky compound; it may work or it may not, but just in case you want to give it a go. Just explode the compound and tweak it. A couple of comments will tell you what's doing what... it's a pretty simple compound, so you shouldn't have trouble understanding it. It should go between your "follow curve" nodes and your "strand trails" nodes.

Hope that helps,
Orlando.

image.jpeg
turbulize point position flowing along curve.xsicompound

### Pingo van der Brinkloev

Jun 1, 2013, 4:59:17 PM
to soft...@listproc.autodesk.com
Hey thanks guys for all the input(s). I thought this was gonna be real easy. Orlando, looks like your compound is doing the trick I'll dive into it. Thanks!

P


### jo benayoun

Jun 1, 2013, 10:18:46 PM
to soft...@listproc.autodesk.com
Hey Eric,
I must confess that I am quite confused by this thread as I can't figure out what this is really about...

* performance of the Array property?

```
>>> env_weights = envelope.Weights.Array  # 7.18513235319
```

This is something we don't really have control over... well, seems like we're doomed with that =( ...

* fastest way to turn ((D1P1, ..., D1Pn), ..., (DnP1, ..., DnPn)) into [P1D1, ..., P1Dn, ..., PnD1, ..., PnDn]?

XSI man with local sub refinement (81331 vtx, 126 defs)

Not much better...

```
>>> w1 = [weights[j][i] for i in range(len(weights[0])) for j in range(len(weights))]  # 8.70612062111
>>> w2 = list(itertools.chain.from_iterable(itertools.izip(*weights)))                 # 5.21891196087
>>> assert(w1 == w2)

>>> w3 = itertools.chain.from_iterable(itertools.izip(*weights))                       # 0.0
```

...way better! (Though w3 is just a lazy generator at this point; nothing has actually been computed yet, hence the 0.0.) Actually, timing these kinds of context-free operations is quite boring and useless...
As I don't see the point of those alone:

```
>>> [list(x) for x in weights]  # turning a tuple of tuples into a list of lists?
>>> map(list, weights)
```

Timing the algorithms is much funnier...

* using numpy?

Considering the Array property is half responsible for the slowness, I'm not sure how numpy would help... even though the XSI man example could, I think, be considered a worst-case scenario, as it is not common to see such dense meshes rigged. But even if we meet those cases once in a while, I think a few seconds' wait is a better price than introducing a dependency on an external library such as numpy; it seems quite inappropriate to bring it up in this context (we are not talking about gigabytes of data...).

In 90% of the cases, just refactoring the code will give ya the boost you need! Before considering numpy or other languages, try to refactor the code...

Using the code Jeremie posted above and the XSI man example:

I am getting (Jeremie's):

```
-- all points
envelope.Weights.Array:  7.24001892257   # half of getWeights
get_weights:             14.6793240571
average_weights:         97.3035831834
list(weights):           1.64222230682
total:                   113.65285342
-- xrange(200, 5000) points
envelope.Weights.Array:  7.13410343853   # half of getWeights
get_weights:             16.3542267175
average_weights:         9.32082265267
list(weights):           1.5987448828
total:                   27.322880867
```

refactored:

```
-- all points
envelope.Weights.Array:  7.17315390541
get_weights:             7.17847566392   # no more waste here
average_weights:         12.0701456704
list(weights):           4.16878212485
total:                   23.4396892319   ~5X faster
-- xrange(200, 5000) points
envelope.Weights.Array:  7.03594689191
get_weights:             7.16215152683   # no more waste here
average_weights:         5.33362318853
list(weights):           3.63788132247
total:                   16.1539661472   ~2X faster
```

...with my configuration! this is just to demonstrate my point, not trying to figure out who has the longest... =)

In Jeremie's, the design is quite limited and performance will be worse in a library where the function may be re-used many times within the same process.
You basically get the weights, do the operation and write out (every time).

In the code below, you're only keeping a generator on the stack (no actual data -> less memory) which gets passed along functions (in this case, though, we had to unroll it for computing the averages), and the design lets you apply multiple operations to the weights before even setting the envelope.Weights.Array property back (kind of an ICE pattern), without hurting performance and even improving it...

```
import itertools

def get_weights(envelope):
    """Get envelope weights.
    ((D1P1, ..., D1Pn), ..., (DnP1, ..., DnPn)) -> [P1D1, ..., P1Dn, ..., PnD1, ..., PnDn]
    """
    weights = envelope.Weights.Array
    return itertools.chain.from_iterable(itertools.izip(*weights))

def average_weights(envelope, weights, points=all):
    """Compute average of given points.
    takes and returns:
    [P1D1, ..., P1Dn, ..., PnD1, ..., PnDn]
    """
    deformer_count = envelope.Deformers.Count

    if points is all:
        points_count = envelope.Parent3DObject.ActivePrimitive.Geometry.Points.Count
        points = xrange(points_count)
    else:
        points_count = len(points)
        points = frozenset(points)

    # Compute the average by summing up all weights of points.
    weights = tuple(weights)  # flatten it (we have to in this case)
    averages = [sum(weights[p * deformer_count + d] for p in points) / points_count
                for d in xrange(deformer_count)]

    # groupby pattern
    weights_per_point = itertools.izip(*([iter(weights)] * deformer_count))

    enum = enumerate(weights_per_point)
    it = ((i in points and averages or w) for (i, w) in enum)
    return itertools.chain.from_iterable(it)

print_ = lambda x, *args: Application.LogMessage(str(x).format(*args))

siobject = Application.Selection(0)
envelope = siobject.Envelopes(0)
cls_property = envelope.Weights
weights = cls_property.Array

weights = get_weights(envelope)
weights = average_weights(envelope, weights)
weights = list(weights)

weights_p = get_weights(envelope)
weights_p = average_weights(envelope, weights_p, xrange(200, 5000))
weights_p = list(weights_p)
```

Anyway, just saying that the algorithm and the code matter more than using numpy or a JIT version of Python (which are imho easy answers... hey dood! use C, why bother!), and yes, you're right, envelope.Weights.Array is slow; it may be building the returned array from scratch on every access (just a guess).
--jon

2013/6/1 Bartosz Opatowiecki

### Eric Thivierge

Jun 1, 2013, 11:22:57 PM
to soft...@listproc.autodesk.com
Yes, the performance of the Array property. Interacting with it is abysmally slow on meshes with large point counts and a large number of deformers in the envelope. Thanks for the thorough examples exploring the other aspects. Should be good stuff to take a look at and learn from.

Eric T.

On Sat, Jun 1, 2013 at 10:18 PM, jo benayoun wrote:
> Hey Eric,
> I must confess that I am quite confused by this thread as I can't figure out what this is really about...
>
> * performance of the Array property?

### Jeremie Passerin

Jun 3, 2013, 12:25:50 PM
to softimage
Woow! Pretty cool, Jo! I wish I could do Python like that. That would speed up some of my tools ;-)

### Alan Fregtman

Jun 3, 2013, 2:28:18 PM
to XSI Mailing List

epic response, jo ;)


### Martin

Jun 4, 2013, 1:52:51 AM
to soft...@listproc.autodesk.com
Just a few tests:

XSI_Man_Armored.Body (467696 poly)

VBS:
```
w = selection(0).Envelopes(0).Weights.Array
```
--> 0.504s

JS:
```
w = new VBArray( selection(0).Envelopes(0).Weights.Array )
```
--> 0.981s

Python:
```
w = sel(0).Envelopes(0).Weights.Array
```
--> 4.6s

I hate working with VBS, but sometimes it's worth the pain if the code isn't too long.

MYara


### Matt Lind

Jun 4, 2013, 1:34:38 PM
to soft...@listproc.autodesk.com

You don't have to convert the VBSafeArray to a JScript array unless you're going to modify it in place. You can allocate a JScript array or use a GridData object to hold the modified values. That'll save considerable processing time.

Matt


### David Rivera

Jun 4, 2013, 6:30:36 PM
to soft...@listproc.autodesk.com
Pingo, how did it go? I think it's about time someone nails this kind of compound.
I remember back in SI 7 there was no "flow along curve"; then it was given to the community.
We need a "factory preset" turbulize-on-curve-flow :)

D

From: Pingo van der Brinkloev <xsi...@comxnet.dk>
To: soft...@listproc.autodesk.com
Sent: Saturday, June 1, 2013 3:59 PM
Subject: Re: particle flow along curve with turbulence

### Morten Bartholdy

Jun 10, 2013, 6:22:05 AM
to David Rivera, soft...@listproc.autodesk.com

+1 BIG time. I have tried and failed at making this work properly. Even with Mootz's emFlock this is not terribly simple to do.

It would be great if flow along curve could work as a force alongside other motion control nodes and allow adjusting the degree of effect via a slider. It has a slider, but I just can't get turbulence working with it without ugly jitter, no matter which way I order the different nodes.

Morten

### Pingo van der Brinkloev

Jun 10, 2013, 11:13:26 AM
to soft...@listproc.autodesk.com
Hi, I ended up scripting my way out of it in After Effects and Particular. I've been down your road, Morten. A preset would be nice.

I do understand the problem though, since I'm simulating and not running a "procedural flow".

I guess the way to go is to run the flow along curve compound on one pointcloud, then have a second pointcloud (perhaps not even simulated) get point positions from the first pointcloud, turbulize those, and then add strands to that. But I didn't have time to make it work, and I can't just clone myself.

I'm going to pursue this when I get some time, since Soft can handle a lot more strands than Particular. Hands up for EmRPC by the way!

P

### Richard Perry

Jun 10, 2013, 11:24:15 AM
to soft...@listproc.autodesk.com
Hi

Sorry to Off Topic

I'm in Beijing unexpectedly, would love to meet up with any Softies in this crazy city!

Pez

### Peter Agg

Jun 10, 2013, 11:31:50 AM
to soft...@listproc.autodesk.com
For what it's worth: I find this kind of thing works better if you use the curve to apply a velocity on the particle, then plug the turbulence onto the force. Kinda like pictured.
Screenshot-1.png

### Morten Bartholdy

Jun 11, 2013, 4:36:34 AM
to soft...@listproc.autodesk.com

That's a good one Peter - it works pretty much like I wanted it :)

A couple of things though: the turbulence seems to accumulate along the curve. I would prefer to have turbulent behaviour around a base position and then send that along the curve. It is nice, though, to have the curve apply speed and direction.

Another thing: if I Create Strands from the particles, they flip orientation when moving along the curve and I am not quite sure how to control that.

I have attached the scene if you care to take a look.

Cheers Morten

ICE_FlowAlongCurve_w_Turbulence_01.rar