Dear Jerome and all,
thanks for your suggestions. Here I try to address the issue of triadic redundancy.
Standard (traditional) reductionism is based on finding and dealing with elementary units: from persons, to cells, to molecules, to atoms, etc. Triadic reductionism - a label I created ad hoc - is based on triads, whatever the nodes (elements) of a given network are, be they individuals, organizations, cells, etc. Now, let us suppose for a moment that it is always possible to decompose a given network into triads, exactly as reductionists suppose it is possible to decompose a network (or a system) into its nodes (elements). This is anything but new, and anti-reductionists (like me and, I guess, most of you) have known it forever, but they contend that this possibility does not say much about: 1) a network's/system's aggregate properties; 2) its dynamics; 3) its determinants. Further, as soon as a network is even moderately large and dense, say 10,000 nodes and 1% normalized density, the number of triads becomes astronomical, preventing any treatment (or at least making it computationally very hard).
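To put a number on "astronomical", here is a quick back-of-the-envelope sketch in Python; the 10,000-node and 1%-density figures come from the paragraph above, and the rest is elementary combinatorics:

```python
import math

n = 10_000        # nodes, as in the example above
density = 0.01    # normalized density (fraction of possible directed edges)

# Number of node triples a full triadic decomposition must examine: C(n, 3).
triples = math.comb(n, 3)

# Expected number of edges in a directed graph at this density: d * n * (n - 1).
edges = density * n * (n - 1)

print(f"node triples: {triples:,}")     # 166,616,670,000
print(f"expected edges: {edges:,.0f}")  # 999,900
```

So a sparse network of 10,000 nodes already has roughly 1.7 × 10^11 triples to classify, which is what makes exhaustive triadic treatment computationally punishing.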
However, some of the world leaders in network analysis overlook all the previous objections. They promote advanced statistical approaches to network analysis, like ERGMs (Exponential Random Graph Models), and claim that these methods can succeed in understanding a network's generation (its determinants) and predicting its dynamics. On the contrary, I argue that the same criticisms that hold for standard reductionism can be levelled against these methods, and especially the most fundamental one: the possibility of decomposing a network into elementary units does not guarantee that, given its set of triads, you can recompose it, even if you knew the distribution of triads among the 16 possible types. A fortiori, you can understand neither its emergent properties nor its dynamics. In short: decomposition does not guarantee recomposition.
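To make the "16 types" concrete: the standard triad census classifies every node triple of a directed network into 16 isomorphism types. The sketch below is my own simplified version (not the full census): it classifies each triple only by its dyad profile, i.e. how many of its three dyads are mutual (M), asymmetric (A), or null (N); the full 16-type census further splits some of these profiles by orientation. The 4-node directed cycle used as input is an arbitrary toy example of mine:

```python
from itertools import combinations
from collections import Counter

def dyad_type(edges, u, v):
    """Classify the dyad (u, v): M(utual), A(symmetric), or N(ull)."""
    uv, vu = (u, v) in edges, (v, u) in edges
    if uv and vu:
        return "M"
    if uv or vu:
        return "A"
    return "N"

def coarse_triad_census(nodes, edges):
    """Count triads by their sorted dyad profile.

    This coarsens the standard 16-type triad census: some of the 16
    types share a dyad profile and are merged here.
    """
    census = Counter()
    for u, v, w in combinations(nodes, 3):
        profile = "".join(sorted(dyad_type(edges, a, b)
                                 for a, b in ((u, v), (u, w), (v, w))))
        census[profile] += 1
    return census

# Toy example: a directed 4-cycle, 1 -> 2 -> 3 -> 4 -> 1.
nodes = [1, 2, 3, 4]
edges = {(1, 2), (2, 3), (3, 4), (4, 1)}
census = coarse_triad_census(nodes, edges)
print(census)  # Counter({'AAN': 4})
```

Either way, the census is just a handful of counts summarizing C(n, 3) triples. A simple counting argument supports the non-recomposition point: there are at most (C(n, 3) + 1)^16 distinct census vectors but 2^(n(n-1)) labeled digraphs on n nodes, so for large n very many different networks must share the same census.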
Further, it's time to question the earlier supposition that it is always possible to decompose a system. In my talk of July 7th, 2021, to the broad CoR group, I attacked precisely this point, which in fact is a pillar of any kind of reductionism, including triadic reductionism. I argued that the non-decomposability of a system is related to the presence of... nonlinear functions, which in networks means... cycles! Exactly the issue on which I'm insisting. If you try to decompose a network structured with cycles, your decomposition will be severely incomplete, which means that you cannot properly understand the network. And the more cycles there are, the more incomplete the analysis will be. Moreover, approximation breaks down when a phenomenon is nonlinear, as studies of deterministic chaos and recursive system dynamics show very well.
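On approximation breaking down under nonlinearity, a minimal illustration (my own choice of example, not from the discussion) is the logistic map at r = 4, a standard deterministic-chaos testbed: two trajectories starting within 10^-7 of each other soon diverge to macroscopic distance, so an approximate initial condition buys essentially no long-run predictive power:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x), chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two almost identical initial conditions, differing by 1e-7.
x, y = 0.4, 0.4 + 1e-7
max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial error is amplified exponentially at each recursion,
# so within a few dozen steps the two trajectories are macroscopically apart.
print(f"max divergence over 60 steps: {max_gap:.3f}")
```

The same recursive amplification is what cycles introduce into a network's dynamics, which is why a linear(ized) or piecewise treatment of a cyclic network misses exactly the behavior that matters.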
Despite all the arguments against it, and regardless of how solid they may be, scientists never stop chasing the chimera of reductionism and full knowledge! That chimera continually resurfaces in different forms. Triadic reductionism is just its latest camouflage.
As for the other issue - the connection between redundancy and consciousness - it would deserve a dedicated meeting, but the provisional answer is yes: in my view, consciousness and meaning are both emergent properties of large-scale recursive networks, such as the one that in human beings corresponds to the PNEI (Psycho-Neuro-Endocrine-Immune) system.
Best
Lucio