While working on the language design, I noticed a flaw in the backing theory, so I went back and fixed the missing detail:
The C-rel paradigm is based on lambda calculus and graph theory, and graph theory has no native notion of "arc ordering".
Suppose I write a "concept" that receives values from upstream concepts. The handler that receives all the values (called the coalesce function) would be forced to make no distinction between input positions. It might differentiate between the "types" of the values, and a type (a structure, a class, or a whole concept) might include a field for a priority, but that would be extremely cumbersome both in the formalism of the theory and in the practice of a programming language.
In math, it's somewhat easy to hide this detail away: almost any non-commutative operator can be transformed into a commutative one. Even a < b can be rewritten as something like min(a, b) == a and become commutative. In a practical programming language, however, restricting ourselves to functions that carry no information whatsoever about the position of their parameters becomes harder. Consider a function like strtok(string, token). If we remove the positional information (or a decoration like strtok(string=value, token=value)), then the information about which value is the target string and which is the token to search for must be carried by the parameters themselves, and that is harder and clumsier than it looks, at a practical programming level.
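To make the clumsiness concrete, here is a minimal Python sketch of a strtok-like function that may not rely on argument positions; the names `Tagged` and `split_first` are mine, purely illustrative, not part of any real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    role: str    # the positional information, moved into the value itself
    value: str

def split_first(inputs):
    """Position-free version of strtok(string, token): the inputs arrive
    as an unordered collection, so we must search them by role tag."""
    by_role = {t.role: t.value for t in inputs}
    string, token = by_role["string"], by_role["token"]
    return string.split(token, 1)[0]

# The order of the inputs no longer matters...
print(split_first({Tagged("token", ","), Tagged("string", "a,b,c")}))  # a
```

...but every call site and every callee now pays the tagging overhead, which is exactly the cumbersomeness described above.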
Even if the coalesce function of concepts weren't ALL that we have in the language, i.e. we still have lambda calculus and function arity, declaring that, by definition, a concept CANNOT determine the order of its input values unless that order is encoded in their types is extremely limiting. I realised this a couple of weeks ago, while writing the concepts of the language.
This is the reason why I had to go back to the drawing board and add arc (concept input) ordering to the theory. This cascaded into a series of changes that keep arc ordering stable during transformations; for example, when analysing the graph and computing a recursion, arcs must be kept in order.
Also, the fact that all retroaction functions HAD to be the same followed from the fact that arcs could be reordered at will, or rather, that no order could be imposed on the arcs at all. That meant the arcs feeding a coalesce function could not be distinguished as recursive or direct, and so recursive and direct arcs HAD to be handled by the same function.
Now that arcs have an order, you can have a different coalesce function (gathering values from upstream concepts) for direct arcs and for recursive arcs, and then a different retroaction function for each node.
Said simply, the theory now covers the case where you want a function called back once all the direct inputs have been received; that function fires outputs which may, at some point, come back as _different_ (recursive) inputs; once all the recursive inputs have been received, a _different_ function is invoked first to handle them, and then a final function works with both the previously received _direct_ inputs and those that came back through the recursion, after the first broadcast.
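As a rough illustration of this two-phase scheme, here is a minimal Python sketch, assuming ordered arcs; all the names (`Node`, `coalesce_direct`, `coalesce_recursive`, `finalize`) are illustrative and not part of any actual implementation:

```python
class Node:
    """Sketch of a node with ordered direct and recursive arcs."""
    def __init__(self, n_direct, n_recursive):
        self.direct = [None] * n_direct        # ordered direct arcs
        self.recursive = [None] * n_recursive  # ordered recursive arcs
        self.broadcast = None                  # output of the direct-phase coalesce
        self.result = None                     # output of the final function

    def receive_direct(self, pos, value):
        self.direct[pos] = value
        if all(v is not None for v in self.direct):
            # First coalesce: only the direct arcs, in order. Its output
            # is broadcast and may come back as recursive inputs.
            self.broadcast = self.coalesce_direct(self.direct)

    def receive_recursive(self, pos, value):
        self.recursive[pos] = value
        if all(v is not None for v in self.recursive):
            # Second, different coalesce for the recursive arcs...
            rec = self.coalesce_recursive(self.recursive)
            # ...then a final function over both groups of inputs.
            self.result = self.finalize(self.direct, rec)

    # Placeholder behaviours, just to make the sketch runnable:
    def coalesce_direct(self, values):
        return sum(values)

    def coalesce_recursive(self, values):
        return max(values)

    def finalize(self, direct, rec):
        return (tuple(direct), rec)

n = Node(n_direct=2, n_recursive=1)
n.receive_direct(1, 20)                   # arcs are ordered: position matters
n.receive_direct(0, 10)                   # direct phase fires: broadcast = 30
n.receive_recursive(0, n.broadcast + 1)   # a value returns through recursion
print(n.result)                           # ((10, 20), 31)
```

Note that distinguishing `receive_direct` from `receive_recursive` is only possible because each arc has a stable position; with unordered arcs the two groups would be indistinguishable.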
This would not have been covered by the theory; or rather, it would have been somewhat forbidden if the theory postulated by axiom that arcs _must be_ unordered, i.e. that you _must not_ distinguish between them by position.
This also opens the door to parallelism optimization: you can declare coalesce functions that, on receiving parameter N, wait for the preceding N-1 parameters to be received as well, but are invoked multiple times as parallel processors provide them with parameters N+1, N+2 and so on. Without a theory of arc (input) ordering, this was forbidden.
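Here is a small Python sketch of that invocation pattern, assuming ordered inputs; `PrefixCoalesce` is an illustrative name, not an existing API:

```python
class PrefixCoalesce:
    """Buffers out-of-order deliveries and invokes the handler once per
    position, strictly in input order (prefix by prefix)."""
    def __init__(self, arity, handler):
        self.slots = [None] * arity
        self.next = 0          # first position not yet handed to the handler
        self.handler = handler

    def receive(self, pos, value):
        self.slots[pos] = value
        # Invoke the handler for every newly completed prefix position:
        # parameter N is processed only after 0..N-1 have all arrived.
        while self.next < len(self.slots) and self.slots[self.next] is not None:
            self.handler(self.next, self.slots[self.next])
            self.next += 1

seen = []
c = PrefixCoalesce(3, lambda i, v: seen.append((i, v)))
c.receive(2, "late")    # buffered: positions 0 and 1 still missing
c.receive(0, "first")   # handler runs for position 0 only
c.receive(1, "second")  # handler runs for positions 1, then 2
print(seen)             # [(0, 'first'), (1, 'second'), (2, 'late')]
```

Parallel producers may deliver in any order, yet the handler still observes the parameters in their arc order, which is what makes this optimization provable.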
Of course, there was the possibility of saying "the theory is more restrictive, but the language allows more freedom", but that would have #1, weakened our ability to provide proof of the theoretical validity of the programming paradigm and #2, reduced the opportunity for automatic optimizations (for instance, the automatic parallelism that e.g. Haskell provides), which are based on mathematically provable assertions. This is the reason why I really had to backtrack my work all the way up to the theory, remove the axiom that arcs are unordered, and follow this axiomatic change down the rabbit hole, exploring all its consequences.
Ideas and proposals as replies to this message are welcome.