There is a rule of thumb in social systems that says committees should not be too big or they will not achieve much. It was first stated by Parkinson, the same Parkinson who said:
Work expands so as to fill the time available for its completion.
And Parkinson found that the limit on how many people a committee can have, and still be efficient, is somewhere around 20. The reason this is of interest to us is that if Vilfredo could break this limit, and let a bigger committee work efficiently, it would become really useful. While Parkinson's statement was only semi-serious, more serious work has been done recently:
Klimek, Hanel, Thurner: Parkinson’s Law Quantified: Three Investigations on Bureaucratic Inefficiency.
And again the same size comes out. The paper is more complex, and I have not had the time to read it fully. It also considers retirement age and the speed of rising through the corporate ladder, all things we do not consider at all in Vilfredo.
But my question is this: Vilfredo finds a limit somewhere around 20 people, with 30 being generally above the limit. That is the same limit those other studies found. Could it be that the two limits are connected, and maybe expressions of the same hard limit?
When Vilfredo started I expected it to be able to scale. It was only at the Milan event, where we had some 17 people, that I realised it was not going to be as easy as I thought. If people voted at random, the Pareto Front would expand very rapidly, very soon. Instead, I reasoned, since people are fairly similar to each other, the Pareto Front would only grow up to a point. Boy, was I wrong. Yes, people are similar. No, the Pareto Front does not hit any hard limit, but keeps on growing, at least as far as we could observe. Then I thought: maybe this is because people do not pay attention when they vote. So we introduced Key Voters, asking people in key positions to reconsider their vote; if they had voted carelessly, they could now change their vote. We also asked people to comment on why they voted against a proposal. This DID make things a little better, and we managed to handle bigger groups, but not as big as we had hoped. The fact is that while people do not always vote consciously, they are also diverse enough among themselves for the Pareto Front to keep growing. We just slowed that growth.
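To make the mechanics concrete, here is a minimal sketch, not Vilfredo's actual code, of the dominance rule at work: with approval votes, a proposal dominates another if every voter who approves the second also approves the first, and at least one voter approves the first but not the second; the Pareto Front is the set of undominated proposals. The random-voting simulation below (the names and the 1/2 approval probability are my own illustration) shows how quickly the front fills up as voters are added.

```python
import random

def dominates(a, b, votes):
    """Proposal a dominates proposal b if every voter who approves b
    also approves a, and at least one voter approves a but not b."""
    covers = all(a in approved for approved in votes.values() if b in approved)
    strictly = any(a in approved and b not in approved for approved in votes.values())
    return covers and strictly

def pareto_front(proposals, votes):
    """The proposals that no other proposal dominates."""
    return [p for p in proposals
            if not any(dominates(q, p, votes) for q in proposals if q != p)]

# Toy simulation: each voter approves each proposal with probability 1/2.
random.seed(1)
proposals = list(range(10))
for n in (5, 10, 20, 30):
    votes = {v: {p for p in proposals if random.random() < 0.5} for v in range(n)}
    print(f"{n:2d} voters -> {len(pareto_front(proposals, votes))} proposals on the front")
```

With votes drawn at random the front saturates almost immediately; real groups vote in a more correlated way, which slows the growth but, as we observed, does not stop it.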
So is it all lost? I don't think so.
First of all, we might have pushed the limit a little. If we can handle bigger groups than a traditional committee, the work is still valuable.
Second, Vilfredo, by computing the Pareto Front, shows clearly where everybody stands. It might be that this diversity among the various positions is ultimately what makes large committees fail. Handling that diversity in a big group is a lot of work in reality; if a computer can do it, we can have groups of 25-30 people easily. And this is helpful.
And finally, not all committees need to decide unanimously. If we cross the Pareto Front with the popularity information, we can investigate the possible alternatives much more thoroughly. We would still decide by majority, because we cannot find a unanimous alternative, but the whole process would be transparent. And it would be clear, and demonstrable, that the result is the best solution this committee could come up with.
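Continuing the sketch above, "crossing the Pareto Front with the popularity information" could be as simple as ranking the undominated proposals by approval count, giving a shortlist a majority vote can then settle. The ranking rule here is my own illustration of the idea, not necessarily what Vilfredo will do:

```python
def rank_front_by_popularity(proposals, votes):
    """Order the undominated proposals by approval count, most popular first."""
    front = pareto_front(proposals, votes)  # reuses the sketch above
    popularity = {p: sum(p in approved for approved in votes.values()) for p in front}
    return sorted(front, key=popularity.get, reverse=True)
```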
Here are two pictures from the latest experiment. The first one is from a moment when 30 people participated:

This one is from the same question, but from a moment when only 18 people participated:

This can be downloaded from:
http://vilfredo.org/map/3d0c6247198129cb01ab99959b73a68f.svg
Still a complex graph, but a level of complexity we can cope with. The first one can only be handled by a computer, or by a centralised human who imposes some top-down simplifications: the kind of result we don't want.