On 2/23/14, Toby Pereira <tdp...@yahoo.co.uk> wrote:
> You're right that I didn't define proportionality. It's quite tricky to
> define, but it's something we can work towards by adding in extra criteria
> step by step. The basic start is obviously that if people are completely
> separated into factions that each have their own candidates and there's no
> cross-voting, then these factions would elect a number of representatives
> proportional to their size (subject to rounding).
--these are basically the axioms underlying various PR theorems. However,
they are unrealistic, and also many voting systems are "perfect" under
these criteria, even though undoubtedly some are better than others.
So you need other criteria.
> But anyway, it seems that we've got (at least) two distinct sorts of
> proportionality that we're talking about. There's proportionality of what
> we vote for and proportionality of what we are. It's the former that I'm
> looking at. This does not require every human characteristic to be
> represented proportionally. If x% of people have brown hair, it doesn't
> mean that x% of representatives have to have brown hair unless x% of people
--the trouble is (a) what parliaments vote for is unpredictable. For
example, they could hold a vote "YES if you have brown hair, NO
otherwise." Random samples work for any vote, including ones we cannot
predict. (b) you do not know what people vote for & care about. Nor how
much they care. Nor how it interacts with other stuff they also care
about.
> And as for whether children can stand etc., that's also a red herring.
--I disagree. Look, children have specific issues & concerns. Ditto,
say, women.
The problem with children is not the lack of validity or importance of
children's issues, it is that they are too ignorant and easily
manipulated to be good MPs.
However, the same is also true of some adults...
> But maybe I was too hasty to dismiss a Bayesian Regret analysis. I think it
> would be interesting, although obviously the assumptions would be more
> stretched than in a single-winner case, and people would probably dispute
> them more. However, it doesn't have to be one or the other. A
> proportionality analysis would be done as well (subject to more rigorous
> definitions), and it may be simpler to do than a Bayesian Regret analysis.
--it is definitely easier to do a proportionality analysis; indeed you
can construct voting systems that are essentially perfect by that
standard. But this will not satisfy you. Is RRV or STV better? Both are
"perfect" under some PR theorem. So are some other systems.
With BR and a simulator you will be able to explore a lot more
territory and get more realistic. You will also discover new,
surprising phenomena you would not have been able to think of yourself
without the computer pointing them out to you.
On 2/24/14, Toby Pereira <tdp...@yahoo.co.uk> wrote:
> I was thinking about some of the criteria that the "magic best" system
> might have for finding the "most proportional" set of candidates from all
> of those standing. This is not exhaustive and any one of them is open to
> debate and/or further explanation.
>
> 1. Independence of irrelevant alternatives. This is pretty obvious. If a
> slate A is better than a slate B, then it will remain so regardless of
> candidates not in either slate being added to or removed from the election.
>
> This rules out any ranked-ballot method for being guaranteed to find the
> best set (assuming a few basic background assumptions like universal domain
> etc.)
--in other words, Toby is saying Arrow's theorem rules out ranked ballot methods
given his Ind-Irrel-Alt demand.
> 2. Partly based on the above, the system should work on open-ended utility
> ratings. That's not to say I am advocating a voting system that allows
> voters to score candidates arbitrarily high, but just that a simulation to
> find the most proportional candidate set would not have a score limit. The
> same system could still be used for elections but with a score limit.
> 3. Basic proportionality. If a group of voters rate a set of candidates
> each at a certain utility that is at least as high as any other utility
> rating of any voter, and have zero utility for all other candidates, then
> that group of voters should, subject to seat rounding, be able to elect
> the proportion of candidates that equals their proportion of the
> electorate.
> 4. Monotonicity. This should be obvious, but specifically if candidates in
> set A have the same scores from the voters as set B except that at least
> one of the candidates in A has strictly higher scores from one or
> more voters than the replaced candidate(s) and none lower, then set A is
> better than set B. I've brought this up because there are some systems
> where this would reach a limit.
--that is a very specific kind of monotonicity demand. One could consider
other kinds of monotonicity demands which Toby is not demanding.
> 7. Independence of universally rated candidates. If a group of voters all
> rate a particular candidate at the same level, then if this candidate is
> elected, it is ignored when working out the rest of the proportionality
> between these voters. To give an example:
>
> Approval voting, proportional representation, elect 6
>
> 20 voters: A, B, C, D, E, F
> 10 voters: A, B, C, G, H, I
>
> A, B, C, D, E, G would be elected (or at least some combination of two from
> D, E, F and one from G, H, I). A, B and C are universally rated so you
> ignore them and the ratio of elected candidates should be 2:1 between the
> factions. RRV fails this.
--RRV fails this? It seems to me, RRV would elect A,B,C, at each step
downweighting all voters equally. We then would reach a situation

20*w: D,E,F
10*w: G,H,I

where w is some common weight. Now, elect D. Then

20*w/2: E,F
10*w: G,H,I

and then it is a tossup who gets the next seat, but whoever does,
the next seat goes to the other faction.
So I'm not seeing RRV failing in Toby's example; it does what he wanted.
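[For anyone wanting to check this example mechanically, here is a minimal
sketch of sequential RRV on approval ballots, assuming the standard
reweighting (a voter's weight is 1/(1 + number of already-elected candidates
they approve)); ties break alphabetically here. Run it to see which six it
actually seats.]

def rrv_approval(ballots, seats):
    """ballots: list of (count, set of approved candidates)."""
    elected = []
    for _ in range(seats):
        tallies = {}
        for count, approved in ballots:
            # Standard RRV reweighting for approval ballots:
            weight = 1.0 / (1 + len(approved & set(elected)))
            for c in sorted(approved - set(elected)):
                tallies[c] = tallies.get(c, 0.0) + count * weight
        elected.append(max(sorted(tallies), key=tallies.get))
    return elected

ballots = [(20, {"A", "B", "C", "D", "E", "F"}),
           (10, {"A", "B", "C", "G", "H", "I"})]
print(rrv_approval(ballots, 6))  # which six does RRV actually seat here?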
Also, more generally, I object to voting-method criteria which only apply
in situations which in practice will essentially never happen. Toby's
criterion 7 sucks because in practice nobody will ever be truly 100%
universally rated. His criterion 4 also is predicated on assumptions
which in practice will essentially never happen... and his criterion 3
also is attackable in the same sort of sense; for example, if you really
had unrestricted score ranges as he demands in (2), then his criterion 3
would essentially never apply, since some loony voter would give
somebody a score of 96594769985697436599999999999999999999999999.
His demand (2) is anyhow also stupid, since it essentially amounts to
ballot renormalization to a fixed score range, so we might as well make
a fixed range, 0 to 9999 say, to begin with. The advantages or
disadvantages of that are surely not so huge as to be worth demanding as
a foundational axiom.
So I think Toby has some good ideas spiritually speaking, but the
details of how he tried to codify those ideas here were poor.
Next, about "additive utility" (5), be careful what you wish for.
See, the utility of a committee is NOT the sum of individual members.
In fact that totally contradicts the demand Toby also made, for proportionality.
With 51% Democrat voters, the highest "additive" utility is got by
electing 100% Democrats, highly disproportionally.
Oops.
Also, if I (a single voter) tend to like Democrats best, I still might
consider a government which was NOT entirely Democrat to be better,
because I prefer somebody else keep an eye on them to stop unbridled
corruption. So even from my point of view alone, with no other voters
involved, utility is not additive.
So really, utility is not about single candidates, it is about the
decisions they collectively make once elected, which is why I proposed
"two stage BR" outlined in
http://rangevoting.org/BRmulti.html
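[To make the two-stage idea concrete, a toy sketch of one reading of it;
the linked page has the actual proposal, and the binary issues, the
placeholder election rule, and all parameters here are purely illustrative.
Stage 1 elects a body from the ballots; stage 2 has it majority-vote each
issue; regret is charged on the issue outcomes, not on the winners.]

import random

def two_stage_regret(n_voters=99, n_cands=24, seats=5, n_issues=50, seed=1):
    rng = random.Random(seed)
    # Each voter and candidate holds a YES/NO stance on each issue.
    voters = [[rng.random() < 0.5 for _ in range(n_issues)] for _ in range(n_voters)]
    cands = [[rng.random() < 0.5 for _ in range(n_issues)] for _ in range(n_cands)]
    # Stage 1: elect a parliament. Placeholder rule: seat the candidates who
    # agree with voters on the most voter-issue pairs (swap in RRV/STV/etc.).
    agreement = lambda c: sum(v[i] == cands[c][i] for v in voters for i in range(n_issues))
    parliament = sorted(range(n_cands), key=agreement, reverse=True)[:seats]
    # Stage 2: the parliament majority-votes each issue; regret compares each
    # outcome with the voters' own majority view, issue by issue.
    regret = 0
    for i in range(n_issues):
        yes = sum(v[i] for v in voters)
        passed = 2 * sum(cands[c][i] for c in parliament) > seats
        achieved = yes if passed else n_voters - yes
        regret += max(yes, n_voters - yes) - achieved
    return regret

print(two_stage_regret())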
> I think your third variant would pass IIA and
> independence of ratings multiplication, however.
--I think it would fail IIA?
--aha, now I see your point. Think you are right. And further, this
seems to be a good criticism of RRV? Not sure. One might argue that in
the situation

20: A,B,C,D,E,F approved
10: A,B,C,G,H,I approved
elect: A,B,C,D,E,F

then the 10 get represented by A,B,C, and the 20 get represented by
A,B,C,D,E, and/or F. So it is "perfectly proportional."
Whereas, if we elected A,B,C,D,E,G, would this violate perfect
proportionality? Or would it be a "better" kind of proportionality?
What do you think?
Now I think about it, I think I prefer ABCDEG
on the basis of (probably) better Bayesian Regret.
> but systems that are cloneproof (and not just in a really forced
> way) should not be adversely affected by several similar candidates.
> Similarly here, I would want a system that wouldn't penalise smaller
> factions in their other choices for supporting a generally popular
> candidate.
>
> On the unrestricted scores in criterion 2, remember this is only for
> computer simulations in which not everyone would have the same utility
> score for their favourite candidate. Similarly for single-winner BR
> calculations, the best winner could result from one person's utility
> rating of 4579824752043 for one candidate. But as long as a simulation
> was realistic, one voter would not swing it in such a way - either in
> the single-winner case or the PR case. But just to clarify again, I
> would not advocate unrestricted scores in actual elections.
--??
--
Well, let's see. First of all, even the Athenian random-sample method
only sampled rich men. No women. No slaves. No children.
So there has never been a random-sample parliament yet.
--
STV gives the ABCDEG result, so it might be something to do with quota-based systems. That's not to say that non-quota systems *necessarily* fail, however.
--
I'm glad you think the method shows promise, Jameson. The way I imagined it, if it were ever used in an election, it would probably be done sequentially. So the highest-scoring candidate would be elected, then the best set of two that includes the first candidate, and so on. Is that what you mean by a greedy algorithm? I'm not sure you can guarantee finding the best set without testing all possible winning sets of candidates, which might be too much computationally. But I'm not really an expert in such matters.
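[A minimal sketch of the two approaches just described: the sequential
"greedy" build-up versus the exhaustive search. set_score is a hypothetical
stand-in for whichever measure of a candidate set is chosen.]

from itertools import combinations

def greedy_elect(candidates, seats, set_score):
    """Sequential version: fix each winner before choosing the next."""
    chosen = []
    for _ in range(seats):
        # Add whichever remaining candidate yields the best-scoring set.
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: set_score(chosen + [c]))
        chosen.append(best)
    return chosen

def exhaustive_elect(candidates, seats, set_score):
    """The intractable ideal: test every possible winning set."""
    return max(combinations(candidates, seats),
               key=lambda s: set_score(list(s)))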
>>Now I think about it, I think I prefer ABCDEG
>>on the basis of (probably) better Bayesian Regret.
>I see it as a "better" kind of proportionality personally, and I have thought about it a lot before. It's partly because I don't see the 10 voters and the 20 voters as two completely separate factions so the 2:1 ratio doesn't necessarily apply. They agree on certain things and not on others so they can be seen as between one and two factions. ABCDEG also does just seem intuitively more proportional to me, not that that has to be taken seriously. I do think that it's a criterion that a lot of people might disagree with, however.
--In that example, if not all but only, say, 99% of voters approved A,B,C,
then it is less clear what to do.
The measure of a set of candidates is the average squared deviation of the voters' scores from c/v (lower deviation being better).
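[In symbols, one reading of that line, assuming c is the number of candidates elected, v the number of voters, and x_i voter i's share of the elected set:

$$\text{measure} = \frac{1}{v}\sum_{i=1}^{v}\left(x_i - \frac{c}{v}\right)^{2}$$]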
Recently I've been considering what a Condorcet version of PR would look like. I thought up a system and I wonder if it would allow for enough diversity in opinion or if every winner would be pretty moderate. This is how it would work:

1) Start off with all votes equaling 1 vote.
2) Tabulate rankings and get the Condorcet winner (it doesn't matter which method).
3) Reweigh each person's vote for the next seat. This would involve multiplying the current weight of each voter's vote by a scale. The value of this scale would be determined by what position they ranked the winner of the last seat.
4) Return to step 2 if there are more seats to fill.

To calculate the scale:

1) Find the average scale. This would be (number of seats - 1)/(number of seats).
2) Calculate the average ranking of the last winning candidate (can be a decimal).
3) Calculate constant C = (100 - average scale)/(number of candidates - average ranking of winner).
4) Set the scale of each voter, where R = the individual's ranking of the winning candidate: f(R) = [100 - (number of candidates - R)*C] / 100.
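[As written, steps 1 and 3 mix a 0-1 scale with a 0-100 scale; the formulas
come out consistent if the "scale" quantities are read as percentages (then
f(average ranking) equals the average scale). A sketch under that
assumption, with all names illustrative:]

def reweight_factor(R, n_candidates, n_seats, avg_rank_of_winner):
    # Average scale as a percentage, per step 1 above.
    avg_scale = 100.0 * (n_seats - 1) / n_seats
    # Constant C, per step 3.
    C = (100.0 - avg_scale) / (n_candidates - avg_rank_of_winner)
    # Step 4: R is this voter's ranking of the winning candidate.
    return (100.0 - (n_candidates - R) * C) / 100.0

# e.g. 9 candidates, 3 seats, winner's average ranking 2.0: a voter who
# ranked the winner 1st keeps less weight than one who ranked it last.
print(reweight_factor(1, 9, 3, 2.0), reweight_factor(9, 9, 3, 2.0))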
--
On Friday, June 13, 2014 2:40:00 PM UTC-5, Toby Pereira wrote:
> I suppose my main worry is that by having a hybrid of proportionality and
> well-likedness, you can end up having to make arbitrary decisions with the
> weighting system.
Quite so; proportionality and preference are distinct concepts and I thought that should be kept clear. For example:

Elect 1
2 voters: 1.0 for A, 0.5 for B, 0.0 for C

Regardless of which candidate wins, the result is equally proportional. But there are clear differences of preference.

> 1a. My problem with doing it that way is that if a voter gave a score of
> 1/10 to a candidate and everyone else gave 0/10, then that voter would
> have full possession of that candidate.
Pardon, I'm not clear on why that's a problem. Can you elaborate?
> 2a. I'm not convinced by mean absolute deviation. ... I think generally
> squared deviations give more sensible results (with the mean of a set of
> data being the point that minimises squared deviation whereas the median
> minimises absolute deviation).
It's only one character difference to switch the code from L1 to L2, so I'm happy to make that change. (I'm indifferent between use of the mean or the median, but I like to keep the code linear unless it needs to not be.)

> Also, the zero level of proportionality varies with number of voters. One
> person getting all the representation when there are 10 voters is very
> different from when there are 100. I'm not sure how well it works as a
> true zero point.
Right. Having more people opens up a possibility for more severe inequality. It seems to me to be just a fact of life.
> The problem is that whenever there is a tie between two results either
> side of proportionality it would always be pushed in the direction of the
> larger faction when your preference score is added, regardless of what the
> initial method is. I think a better measure to look at is something like
> voter agreement level. So for example:
>
> 2 to elect (approval)
> 10 voters: A, B, C
> 10 voters: A, B, D
>
> You might want to award it to AB over CD not because AB has a higher
> proportion of the available score, but because under this result there is
> 100% agreement between voters. Under the CD result you'd probably measure
> it at 50% or 0.5, because you'd count a voter agreeing with themselves. So
> you would just count up the agreements.
What's the difference, exactly? For this example, agreement on AB is 100% and on CD is probably 50%, and the preference scores are the same (40/40=100% for AB, 20/40=50% for CD).
Ah, I see. You're arguing in favor of a complete, usable system. OK. I was only interested in it as a way to measure the proportionality of different PR methods. That's why, on further examination, I needed to draw a separation between proportionality and preference.
(version 0.2 - it probably has errors still)

(0.) Use scores in the range [0-1]. An honest score given by a voter to a candidate is here defined to be, not a utility, but the degree to which that candidate represents the voter.

(1a.) A voter's degree of possession (DP) of a candidate equals the score the voter gave the candidate divided by the sum of all scores given to the candidate.

(1b.) A voter's DP of a set of candidates is the sum of the voter's DPs of the individual members of the set.

Given a vector V containing the voters' DPs of a set of candidates:

(2a.) The Proportionality Score (PpS) of the set of candidates equals MEAN(V)^2/MEAN(V^2). For example, PpS is 100% if all voters have equal DP, 50% if half the voters have equal positive DP and the rest have zero DP. PpS approaches 0% in the limit as the number of voters rises and only one voter has nonzero DP.

(2b.) The Preference Score (PfS) of a set of candidates equals the sum of scores given to the candidates divided by the maximum sum of scores the candidates could have been given (nVoters*nCandidatesInSet, assuming range [0-1]). It ranges from 100% for unanimous complete support to 0% for unanimous complete opposition.

(3.) The Representativeness (REP) of a set of candidates versus the voters is defined to be the product PpS*PfS.

In the single-winner case, this always selects the same winner (i.e. most representative set) as normal Score Voting.

For C candidates of which E will be elected, the idealized method would check all NCHOOSEK(C,E) sets of E candidates. It's definitely intractable for large C and E, but REP can readily be calculated for the winning set of more practical algorithms.

PfS is a rescaling of Bayesian Regret or Voter Satisfaction Efficiency in the single-winner case and is closely related to them in multiwinner cases. It can be used to make the same comparisons as BR and VSE. PpS measures only how equally the voters' votes contribute to a set of candidates. It can be used to compare how proportional different voting systems are.

When voters give honest score votes, I think REP has a reasonable claim of describing the degree to which a set of candidates represents the voters.
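[Transcribed into code for concreteness: a sketch of definitions (0)-(3)
above, using numpy for brevity and assuming every candidate in the set
received a nonzero score total.]

import numpy as np

def rep_scores(scores, cand_set):
    """scores: (nVoters, nCandidates) array of honest scores in [0, 1].
    cand_set: list of candidate column indices forming the set."""
    S = np.asarray(scores, dtype=float)[:, cand_set]
    dp = (S / S.sum(axis=0)).sum(axis=1)   # (1a)+(1b): each voter's DP of the set
    pps = dp.mean()**2 / (dp**2).mean()    # (2a) Proportionality Score
    pfs = S.sum() / S.size                 # (2b) Preference Score
    return pps, pfs, pps * pfs             # (3) REP = PpS * PfS

# The approval example from earlier in the thread (scores 0/1), elect 6,
# candidates in columns A..I:
ballots = np.array([[1,1,1,1,1,1,0,0,0]]*20 + [[1,1,1,0,0,0,1,1,1]]*10)
print(rep_scores(ballots, [0,1,2,3,4,6]))  # set ABCDEG
print(rep_scores(ballots, [0,1,2,3,4,5]))  # set ABCDEF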
On Saturday, 14 June 2014 01:11:45 UTC+1, Gabriel Bodeen wrote:
> (2a.) The Proportionality Score (PpS) of the set of candidates equals
> MEAN(V)^2/MEAN(V^2).
Is this a standard statistical method? Does it have a name outside PpS? Will a lower mean squared deviation from proportionality always have a higher PpS, making it just a normalised 0 to 1 score for standard deviation?
--
On Saturday, June 14, 2014 8:49:12 AM UTC-5, Toby Pereira wrote:
> I don't think this does always elect the normal score winner in the
> single-winner case. For example:
Yeah, I realized that couldn't be true immediately after hitting "Post". Funny how that works. I'd forgotten that MATLAB code treats matrices' singleton dimensions differently, so the columns showing 100% for "PpS" under a range of circumstances were just a foolish bug.

> A voter counts as agreeing with themselves. If A and B are elected, the
> voters agree 100% with each other on one candidate, and 80% on the other
> (1 minus the difference between the two), so 90%. This would make a total
> agreement level of 95% or 0.95.
Ah, thanks! I wasn't clear on how you were defining it.
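[For reference, a small sketch of that agreement calculation as just
defined: per-candidate agreement between two voters is 1 minus their score
difference, self-pairs count, and everything is averaged.]

from itertools import product

def agreement(scores):
    """scores: one list of scores per voter, over the elected candidates."""
    n = len(scores)
    total = 0.0
    for i, j in product(range(n), repeat=2):   # self-pairs included
        per_cand = [1 - abs(a - b) for a, b in zip(scores[i], scores[j])]
        total += sum(per_cand) / len(per_cand)
    return total / n**2

print(agreement([[1.0, 1.0], [1.0, 0.8]]))  # the two-voter example: 0.95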
On Saturday, June 14, 2014 11:23:43 AM UTC-5, Toby Pereira wrote:
>> (2a.) The Proportionality Score (PpS) of the set of candidates equals
>> MEAN(V)^2/MEAN(V^2).
> Is this a standard statistical method? Does it have a name outside PpS?
> Will a lower mean squared deviation from proportionality always have a
> higher PpS, making it just a normalised 0 to 1 score for standard
> deviation?
Probably; I can't recall what, and my search failed. It's not the variance. ( VAR(X) = E(X^2) - E(X)^2 ; E(X)^2/E(X^2) = ??? )
On Sunday, June 15, 2014 11:19:17 AM UTC-5, Toby Pereira wrote:
> A voter counts as agreeing with themselves. If A and B are elected, the
> voters agree 100% with each other on one candidate, and 80% on the other
> (1 minus the difference between the two), so 90%. This would make a total
> agreement level of 95% or 0.95.
Hm. If we add another voter to your example calculation:

1 voter: A=1, B=0.6
1 voter: A=1, B=0.4
1 voter: A=0, B=0.3

...it's not clear what to do. 1 - 2*MAD(vector of scores given to one candidate) looked at first like the most obvious generalization among several plausible choices. But in reading up on measures of rater agreement, no method stood out as particularly good.
> What made you pick that measure? It seems to give a nice 0 to 1 scale,
> but I'm not really sure where it comes from. Do you know whether higher
> PpS always means lower variance? If it doesn't, then I'm probably less
> convinced by it.
I chose it because it has the desired behavior. Using a vector of voters' possessions for a set of candidates (DPs) as I previously defined it, without voter splitting:

1. Cloning voters doesn't change the measure: e.g. PpS([0 1])=0.5, PpS([0 0 1 1])=0.5, PpS([0 0 0 1 1 1])=0.5, ...
2. If a candidate set was chosen equally by a subset of the voters, the measure is the percent of voters in that subset: e.g. PpS([1 1])=1, PpS([0 0 1])=0.333, PpS([0 1 1])=0.667, PpS([0 0 0 0 1])=0.2, PpS([0 0 0 0 0 1 1 1])=0.375
3. Scaling the DP by a constant factor (i.e. electing a different number of candidates) doesn't change the measure: e.g. PpS([0 10])=0.5, PpS([0 0.5 0.5])=0.667, PpS([0 0 0 0 0 0.667 0.667 0.667])=0.375
4. If the voters have unequal influence in choosing a candidate set, the measure responds in the right direction: e.g. for 0<x<1 and 0<e<<1, PpS([0 x-e 1]) < PpS([0 x 1]) < PpS([0 x+e 1])
5. The measure is more sensitive at large differences in influence than at small ones: e.g. PpS([0.10 1])-PpS([0.09 1]) > PpS([0.91 1])-PpS([0.90 1])

No, higher PpS does not always mean lower variance. Variance has the wrong behavior for points 2 through 5.
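[A quick numerical spot-check of those five properties, with pps simply
transcribing MEAN(V)^2/MEAN(V^2):]

import numpy as np

def pps(v):
    v = np.asarray(v, dtype=float)
    return v.mean()**2 / (v**2).mean()

assert np.isclose(pps([0, 1]), pps([0, 0, 1, 1]))              # 1: cloning voters
assert np.isclose(pps([0, 1, 1]), 2/3)                         # 2: equal subset
assert np.isclose(pps([0, 10]), pps([0, 1]))                   # 3: scaling DP
assert pps([0, 0.4, 1]) < pps([0, 0.5, 1]) < pps([0, 0.6, 1])  # 4: direction
assert pps([0.10, 1]) - pps([0.09, 1]) > pps([0.91, 1]) - pps([0.90, 1])  # 5
print("all five properties check out on these examples")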
Given point 2 and the consistency provided by the other points, PpS can be interpreted very simply as the equivalent percent of equally influential voters, or as the equivalent percent of voters whom a candidate-set represents equally (regardless of how well it does so).

Elect 3 of A B C D
2 voters: 1 1 0 0
2 voters: 0 0 1 1

For the candidate set CD, the voters have DP [0 0 1 1] and PpS=50%. That matches the interpretation in the ordinary way.

Elect 3 of A B C D
1 voter: 1 0 0 0
2 voters: 0 1 0 0
1 voter: 0 0 1 1

For the candidate set BCD, the voters have DP [0 0.5 0.5 2] and PpS=50%. Three quarters of the voters would have at least a little influence in choosing this candidate set, but one of the voters would have 2/3rds of that influence, so the equivalent percent of effective voters is less than 3/4ths. In this case, we can say the distribution of influence is as if the candidates had been chosen equally by 50% of the voters, or as if the candidates represented 50% of the voters equally.

> If you used voter agreement as the criterion here though, then the way you
> measure proportionality would cause problems. If someone gives a candidate
> a score of 0.1 and no-one else gives that candidate anything, then that
> voter would fully possess that candidate (as we've discussed). Using your
> preference score would counteract that, but agreement level wouldn't.
> That's why I'd recommend using my way of calculating proportionality
> where, for a score of e.g. 0.1 out of 1, the voter would "split" into 0.1
> of a voter that gives a score of 1 and 0.9 of a voter that gives a score
> of 0. This also has the advantage of reducing to normal score voting in
> the single-winner case. Then you can combine proportionality with voter
> agreement.
I'm actually fine with it not producing the same winner as Score Voting, because Score Voting is based on choosing the highest-relative-utility candidate as winner. To select multiple winners with that basis, we'd just choose the N candidates with highest total scores, with no regard for proportionality. A voting method that aims at proportionality requires an additional conceptual basis. I suspect something like "representativeness" could be a good hybrid, since it's the percent of voters whom a candidate-set represents ("PpS") multiplied by how well it represents them ("PfS"). REP approaches zero for candidate-sets that the voters dislike and also for candidate-sets that only represent very small groups. For a constant sum PpS+PfS, the max REP occurs at PpS=PfS. So it's similar to the Chiastic method recently discussed.
Consequently the highest-PfS candidate is the same as the Score winner given the same ballots, but the highest REP candidate is likely to be a bit more centrist.
So to summarise, I think between us we've come up with a fairly decent method. We have my original proportional score/approval method, and then Gabriel turned the variance into a 0 to 1 score where intuitively twice as proportional means twice the score on the 0 to 1 scale. We also have my method of voter agreement, which also works on an intuitive 0 to 1 scale, where for a given level of proportionality, twice the score given to the candidates by the voters means twice the score on the 0 to 1 scale (I'm pretty sure of that). So by multiplying the two together, we have a method that combines proportionality and overall voter preference in a non-arbitrary way.
On Sunday, June 15, 2014 6:04:37 PM UTC-5, Toby Pereira wrote:
> Introducing a separate measure skews it towards larger factions.
PpS*PfS is definitely broken this way. A single score of 0 with a single score of 1 together have the same weight as two scores of only 0.25. Doubling a faction's size (and keeping the total of the scores constant) has the same effect as doubling the scores.
One thing I tried is (PpS^(E-1))*PfS, where E is the number of candidates to be elected. It's quite arbitrary, but it patches the large-faction favoritism. It is just Score Voting for single-winner elections since the first term goes to 1. As the number of candidates to be elected increases, proportionality comes to dominate over preferences, without entirely eliminating the latter's tie-breaking use.
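[A quick numerical illustration of the patch, in the single-candidate case
where each voter's DP follows directly from the scores; a sketch only.]

def pps(v): return (sum(v)/len(v))**2 / (sum(x*x for x in v)/len(v))
def pfs(scores): return sum(scores)/len(scores)
def rep_patched(dp, scores, E): return pps(dp)**(E-1) * pfs(scores)

# The broken case above: plain PpS*PfS ties these two score profiles...
print(pps([0, 1])*pfs([0, 1]), pps([0.5, 0.5])*pfs([0.25, 0.25]))  # 0.25 vs 0.25
# ...while the patched score with E=1 reduces to plain Score Voting:
print(rep_patched([0, 1], [0, 1], 1),
      rep_patched([0.5, 0.5], [0.25, 0.25], 1))  # 0.5 vs 0.25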
On Monday, June 16, 2014 5:29:32 PM UTC-5, Toby Pereira wrote:
> The agreement scores aren't as intuitive as the proportionality scores in
> how they behave, but I'm struggling at the moment to think of a better way
> of doing it.
Since your method splits voters, can you omit the unrepresented voter-parts?