Syntax for instances "reduce" functions


Joseph Malloch

Jun 25, 2021, 1:23:08 PM
to dot_mapper
Hi all,

I made a quick demo video showing the ability to use map expressions that loop over all active instances of a signal. Any comments or feedback on the proposed expression syntax are welcome!


Thanks,
Joe

Travis West

Jun 25, 2021, 4:22:13 PM
to dot_m...@googlegroups.com
That’s really neat! My immediate thoughts are:

- How does the reduce functionality work with the existing expression syntax? You demonstrated a handful of handy built-in reduction methods; can I write my own? I don't recall there being a way to declare functions or lambda expressions…
- It would be really handy to have similar functionality to reduce convergent maps! You could write, I imagine, xn.mean() or xn.sum(). I can think of a few ways that could go wrong right away (e.g. if the source signals input to the map have drastically different ranges and units), but it could be useful.

Always great to see new stuff happening with the library. 

TW

--
You received this message because you are subscribed to the Google Groups "dot_mapper" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dot_mapper+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dot_mapper/bea6ee0a-a8e5-43a7-9c25-b95f1e878eedn%40googlegroups.com.

Joseph Malloch

Jun 25, 2021, 5:26:49 PM
to dot_m...@googlegroups.com
Thanks for the feedback, Travis! You can't currently write your own reduce functions, but this wouldn't be hard to implement if we agreed on a good syntax.

The syntax gets tricky since we need to distinguish between calculating across vector elements (x[0], x[1], etc.), signal instances, and input signals for convergent maps (e.g. x0, x1, …). Some maps will involve signals that are both vectors and instanced, so we can't simply overload one notation.
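To make that concrete, here is a Python analogy (not libmapper syntax; the nested-list layout and helper names are purely illustrative) of why the axis of a reduce has to be specified:

```python
# Model a convergent, instanced, vector-valued map as nested lists:
# inputs -> instances -> vector elements. Reducing along different
# axes gives different result shapes.

# 2 inputs, each with 2 active instances of a 3-element vector signal
signals = [
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],   # input x0: instances 0 and 1
    [[0.5, 1.5, 2.5], [3.5, 4.5, 5.5]],   # input x1: instances 0 and 1
]

def mean(v):
    return sum(v) / len(v)

# reduce across vector elements (per input, per instance) -> one scalar each
per_instance_means = [[mean(inst) for inst in sig] for sig in signals]

# reduce across instances (per input, element-wise) -> one vector per input
def elementwise_mean(instances):
    return [mean(col) for col in zip(*instances)]

per_input_vectors = [elementwise_mean(sig) for sig in signals]

print(per_instance_means)  # [[2.0, 5.0], [1.5, 4.5]]
print(per_input_vectors)   # [[2.5, 3.5, 4.5], [2.0, 3.0, 4.0]]
```

Both reductions consume the same data but produce different shapes, which is why a plain overloaded mean() would be ambiguous.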

Joe

Edu Meneses

Jun 26, 2021, 12:27:42 PM
to dot_m...@googlegroups.com
Great demo! It shows how powerful libmapper can be. I can also see the usefulness of a video series on using libmapper (similar to Fieldsteel's SC tutorials or Cheetomoskeeto's Pd tutorials). Jumping straight into this functionality can be daunting for a composer used to visual languages, but there should be no problem with a series of increasing difficulty.

Travis's example of convergent-mapping usage gave me ideas right away: for Trouveur (https://youtu.be/lEWevEhnPPg), we made a mapping where guitar position (GuitarAMI) modulates the T-Stick tilt gesture to control a harmonizer. A reduction (possibly applying different weights to the signals) could be interesting in a similar mapping with the capacitive sensor, since the T-Stick can currently detect multitouch.

Cheers,

Edu Meneses
McGill University | Ph.D. Researcher
Input Devices and Music Interaction Laboratory | IDMIL
Centre for Interdisciplinary Research in Music Media and Technology | CIRMMT


Travis West

Jun 28, 2021, 7:41:12 AM
to dot_m...@googlegroups.com
I'm not sure what the implied return value is for `x.instances()` currently, but if it returned a vector whose elements are simply all the instances, then we could just add another similar function (something like `x.inputs()`) that returns a vector whose elements are all the convergent inputs. Then we could allow whatever our reduction functions are to just apply to vectors. No need to distinguish different cases. Would that be feasible to implement? I need to take a look at the source code...

TW

Joseph Malloch

Jun 30, 2021, 5:07:16 PM
to dot_m...@googlegroups.com
The “implied” return value is a vector of active instances with variable length as the instances come and go. Under the hood the instance reduce functions are creating a loop in the expression stack, which can include arbitrary code as shown in the video with the weighted sum example. We could loop over inputs in the same way we currently loop over instances (not currently implemented). I just want to make sure the syntax supports all the edge cases and makes sense to everybody.

The tricky thing is indicating whether to compute some function across vector elements for each instance and keep the instances, or across instances but keep the vector, or perhaps both! The same thing applies to convergent maps if we want to throw inputs into the mix as well. That is the reason for the “instances()” part of the expression, e.g. for vector signal x:

y = x.norm();  // for each active instance, computes the length of vector x and outputs a scalar
y = x.instances().mean();  // computes the vector mean of all active instances of x, outputs a vector
y = x.norm().instances().mean(); // computes the mean of all the instance vector lengths, outputs a scalar
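A Python analogy of the three expressions above (illustrative only; the instances list is a stand-in for the vector of active instances):

```python
import math

instances = [[3.0, 4.0], [6.0, 8.0]]  # two active instances of 2-vector x

# y = x.norm(): per-instance vector length -> one scalar per instance
norms = [math.hypot(*v) for v in instances]                   # [5.0, 10.0]

# y = x.instances().mean(): element-wise mean across instances -> a vector
inst_mean = [sum(col) / len(col) for col in zip(*instances)]  # [4.5, 6.0]

# y = x.norm().instances().mean(): mean of the instance norms -> a scalar
mean_norm = sum(norms) / len(norms)                           # 7.5
```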

I would like to know if others on this list like the “instances()” syntax or whether there are any other proposals.

For convergent maps, Travis's “xn” syntax works for simple combinations, but those are generally easy to type anyway, e.g. “y=xn.sum()” is the same as “y=x0+x1+x2”. Can this syntax be used to specify some computation per-input (the “map” part of map-reduce) before the combining/reducing function? Perhaps…

y = (xn.normalize()).mean(); // normalize each input based on its own range and then take the mean?

This seems confusing to me, but I think that's partly because our existing syntax for labelling inputs to convergent maps (x0, x1, etc.) is also problematic. Perhaps “x” should be indexable in some way that specifies input index rather than vector index [] or time delay {}. Regardless, we need a way to explicitly code that reduction happens across inputs rather than vector elements or signal instances.

weights = [0.5, 0.4, 0.1]; y = (xn * weights[n] for n in inputs).sum(); // weighted mean?
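In Python terms (illustrative only, not a syntax proposal), the weighted combination across inputs would be:

```python
# Weighted sum across three hypothetical scalar inputs x0..x2;
# the weights sum to 1, so this is also a weighted mean.
inputs = [10.0, 20.0, 30.0]
weights = [0.5, 0.4, 0.1]

y = sum(x * w for x, w in zip(inputs, weights))
print(y)  # 16.0
```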

Also any thoughts on syntax for user-defined reduce functions?
y = x.reduce(x, y: x + y); // python style
y = x.reduce(x, y => x + y); // javascript style
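Both proposals amount to a conventional fold over the instance list; in Python the same semantics are available via functools.reduce, which might be a useful reference point for the accumulator/value argument order:

```python
from functools import reduce

# Fold the active instances (here, stand-in scalars) with a
# user-defined two-argument combiner: accumulator first, value second.
instances = [1.0, 2.0, 3.0, 4.0]
y = reduce(lambda acc, x: acc + x, instances)
print(y)  # 10.0
```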

BTW: The library currently supports the following reduce functions (note the differences: norm() applies only to vectors, while count() and size() apply only to instances):
for vectors: all(), any(), center(), max(), mean(), min(), norm(), sum()
for instances: all(), any(), center(), count(), max(), mean(), min(), size(), sum()

Cheers,
Joe

Travis West

Jul 1, 2021, 7:05:18 AM
to dot_m...@googlegroups.com
I would love having functions "map()" and "reduce()" that take user-defined functions. I would suggest that the other reduce functions, like sum, be defined as functions that strictly operate on vectors. It would also be nice to be able to apply them as functions with the vector as an argument, rather than using a dot / postfix / member access style. Both have their merits. The weighted sum would become:

weight = [a, b, c, ... n];
y = xn.map(x, n -> x * weight[n]).reduce(x, sum : sum += x);
// or
y = sum( xn.map(x, n -> x * weight[n]) );
// or maybe
y = sum( map(xn, (x, n) -> {x * weight[n]}) );

And I think using explicit map and reduce would ease the ambiguity regarding whether xn.function() or function(xn) should apply to each input or to the vector of inputs. E.g. xn.norm() is ambiguous, since it depends on what the elements of xn are. If xn's elements are all scalar values, then it should treat them as one vector and return the norm. But what if one of the inputs is a vector? Should it output a vector whose elements are the norm of each input? What if all the inputs are vectors? In contrast, the following are all totally unambiguous (right?):

norm(xn); xn.norm(); // return the norm of the vector whose elements are the inputs. Syntax error if any xn elements are vectors
xn.map(x -> norm(x)).reduce(x, sum: sum += x);
xn.reduce(x, sum -> sum += norm(x));
sum( xn.map(x -> norm(x)) );

Consequently, it appears that functions like norm() and sum() should only work on vectors, and xn or x.instances() should only be treated as a vector if all the inputs or instances can be treated as scalar values. Otherwise you should probably explicitly map/reduce them, especially for a list of inputs with heterogeneous types.
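As a Python analogy (assuming nothing about the actual libmapper implementation), explicit map-then-reduce resolves the heterogeneous-input case by normalizing each input to a scalar before combining:

```python
import math

def norm(x):
    # treat a scalar as a length-1 vector so every input reduces to a scalar
    if isinstance(x, (int, float)):
        return abs(x)
    return math.hypot(*x)  # multi-argument hypot needs Python 3.8+

# mixed scalar/vector inputs: ambiguous for a bare xn.norm(),
# unambiguous once each input is mapped through norm() first
inputs = [3.0, [3.0, 4.0], [1.0, 2.0, 2.0]]
y = sum(norm(x) for x in inputs)  # 3 + 5 + 3
```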

I have few strong feelings about how to specify the anonymous function arguments. Definitely not C++ style. Python and JavaScript are fine. There's also "|"-enclosed arguments as seen in Rust or SuperCollider:

x.instances().map(|x, n| x * weight[n]).sum();

I guess my personal preference is for something resembling an arrow.

All of this begs the question as well: if there can be a norm() function or a normalize() function, how about squared_norm()? How about linear_map(x, input_min, input_max, output_min, output_max)? Maybe a bit off topic, but I've always wondered whether it would be convenient to have such helper functions rather than having to always do the math. Webmapper helps a lot in that regard, but it might still be useful e.g. if you're making maps directly in C.

TW

Joseph Malloch

Sep 14, 2021, 2:19:30 PM
to dot_mapper
Hi all,

I have merged the reduce functionality into the main branch – have a look at the expression syntax documentation for a review and let me know if you see any problems or have any questions!
testparser.c now evaluates 101 examples, including simple and nested reduce on instances, input signals, history and vectors.

Cheers,
Joe

Joseph Malloch

Jan 5, 2022, 8:53:10 PM
to dot_mapper

All of this begs the question as well: if there can be a norm() function or a normalize() function, how about squared_norm()? How about linear_map(x, input_min, input_max, output_min, output_max)? Maybe a bit off topic, but I've always wondered whether it would be convenient to have such helper functions rather than having to always do the math. Webmapper helps a lot in that regard, but it might still be useful e.g. if you're making maps directly in C.

Just wanted to add this to the thread in case anyone is curious about Travis's suggestion above: the libmapper expression syntax includes a macro function:
linear(x, input_min, input_max, output_min, output_max)

The function is automatically expanded to:
sMin=<input_min>;
sMax=<input_max>;
dMin=<output_min>;
dMax=<output_max>;
sRange=sMax-sMin;
m=sRange?((dMax-dMin)/sRange):0;
b=sRange?(dMin*sMax-dMax*sMin)/sRange:dMin;
y=m*x+b;
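For reference, a direct Python transcription of the expansion above (assuming I'm reading the guard conditions correctly) behaves like this:

```python
def linear(x, s_min, s_max, d_min, d_max):
    """Linearly map x from [s_min, s_max] to [d_min, d_max],
    guarding against a zero-width source range as the macro does."""
    s_range = s_max - s_min
    m = (d_max - d_min) / s_range if s_range else 0.0
    b = (d_min * s_max - d_max * s_min) / s_range if s_range else d_min
    return m * x + b

print(linear(5.0, 0.0, 10.0, 0.0, 1.0))  # 0.5
print(linear(0.0, 0.0, 0.0, 2.0, 4.0))   # 2.0 (degenerate source range)
```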


d.andrew STEWART

Jan 15, 2022, 8:07:00 AM
to dot_mapper
Hello. Finally watched the YT demo video. As always... amazing ideas. As always... I'm Joematised. I look forward to seeing how others (with minimal knowledge of writing complex expressions/scripts) can explore this functionality. Also, is anyone able to guide me through the "convergent map" functionality? As an aside, if a session for general users were to occur, I'd try to attend. Thanks!

Joseph Malloch

Jan 18, 2022, 3:14:50 PM
to dot_m...@googlegroups.com
Hi Andrew,

Convergent maps can be created using Webmapper by simply dragging from a signal onto an existing map rather than onto a signal. This will delete the existing map and replace it with a new map that also includes the new signal as a source.

They can also be created with the libmapper API, e.g. in Python:
map = mpr.map([srcSig1, srcSig2], dstSig).push()
# or using a string
map = mpr.map("%y=%x+%x", dstSig, srcSig1, srcSig2)

You can also create convergent maps using OSC by sending messages to the admin bus:
/map srcSigName1 srcSigName2 <etc.> -> dstSigName <optional properties…>

The expression for a convergent map works just like a simple map, but you can refer to different sources using the ‘$’ character, e.g.: “y = x$0 + x$1”

Just be warned that libmapper alphabetizes the source signals so don’t count on the source signal indexes being in the same order as when you created the map. Webmapper labels the sources when a convergent map is selected to make things easier.
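In Python terms (the signal names below are made up for illustration), the caveat is simply that the $ index follows sorted order rather than creation order:

```python
# Hypothetical source signal names, in the order the map was created:
created_order = ['tstick/tilt', 'guitarami/position']

# libmapper alphabetizes the sources, so the $ indices follow this order:
indexed_order = sorted(created_order)
print(indexed_order)  # ['guitarami/position', 'tstick/tilt']
# i.e. x$0 refers to guitarami/position even though it was added second
```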

Cheers,
Joe
