Approval Removal systems of PR


Toby Pereira

Dec 28, 2015, 11:53:42 AM
to The Center for Election Science
I've discussed this in a few threads, but I thought I'd put everything together here so that it's easier to keep track of.

Ebert and Phragmen are two systems of PR that use approval voting, but they fail monotonicity. Sometimes approving a candidate can count against that candidate if it causes the voter who approved the candidate to become overrepresented. So I suggested in another thread that approvals that count against a candidate could be removed in some way to preserve monotonicity. For example:

2 to elect

1 voter: AB
1 voter: AC

Ebert/Phragmen would elect BC as more proportional than AB or AC even though it is Pareto dominated. An approval removal system could remove all the A approvals from one of the two voters to make either AB or AC equally proportional to BC. Then they could win on a tie-break based on something like having more overall support. So I originally thought that it might be a good idea to make each candidate set as proportional as it possibly can be (based on Ebert or Phragmen score) through the removal of approvals. But that would seem to go too far in this case:

6 to elect

20 voters: ABCDEF
10 voters: ABCG

I would consider the best result to be ABCDEG. This would be measured by Ebert/Phragmen as fully proportional without the removal of any approvals. But ABCDEF (the Thiele winner) could be made fully proportional. The larger faction could have certain approvals removed making the following ballots:

15: DEF
5: ABC
10: ABCG

That would make ABCDEF fully proportional and would then beat ABCDEG if there was a tie-break based on overall support. But this seems to go too far. The removal of approvals is supposed to be there to prevent extra approvals counting against a candidate. But here we have removed approvals from A, B and C (who were going to be elected anyway) to help F. This was not introduced as some sort of "later no harm" feature for voters so this is an inappropriate use of approval removals. And without removing approvals from A, B or C, F cannot displace G from the winners' circle. So ABCDEG still seems to be the right result.
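(As a check on the "fully proportional" claim: after the removals every one of the six winners has exactly 15 approvals and every ballot approves exactly three of them, so each voter's load is 3/15 = 1/5. That is perfectly uniform, the minimum possible sum of squared loads given that the loads always total 6, one unit per seat: 30*(1/5)^2 = 1.2.)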

So it is clearly not the right thing to do to take a candidate set in isolation and simply maximise its proportionality through the removal of approvals. Ideally it would be possible to find a "correct" way of determining which approvals to remove that doesn't change depending on which candidate set we are looking at. To use one of Warren's examples:

1N: A1, A2, A3...
1N: B1, B2, B3...
1N: A1, A2, A3... B1, B2, B3...
3N: C1, C2, C3...

The correct result would seem to be for the As and Bs to get a quarter of the seats each and the Cs to get a half. But because of the overrepresentation of the AB faction, both Phragmen and Ebert award the C faction more seats. And it would also seem that the correct approval removal method would be for half the AB faction to lose all their A approvals and half to lose all their B approvals. This is not like in the previous case where we took a candidate set and tried to maximise its proportionality by removing approvals. Here we have made a single decision on which approvals to remove which would then be binding on all candidate sets.

Sometimes there would be more than one equally good method. In the first example:

2 to elect

1: AB
1: AC

Removing the A approvals from either the AB or AC faction would be equally good, so AB and AC would tie as the best candidate set.
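For concreteness, here is a rough Python sketch of these comparisons (names are illustrative; electing w places a load of 1/A[w] on each of its A[w] approvers, and a lower sum of squared loads means a more proportional set):

from itertools import combinations

def ebert_score(ballots, winners):
    # ballots: list of (weight, set of approved candidates)
    approvals = {w: sum(wt for wt, b in ballots if w in b) for w in winners}
    total = 0.0
    for wt, b in ballots:
        load = sum(1.0 / approvals[w] for w in b & winners)
        total += wt * load * load
    return total

ballots = [(1, {"A", "B"}), (1, {"A", "C"})]
for pair in combinations("ABC", 2):
    print("".join(pair), ebert_score(ballots, set(pair)))
# AB 2.5, AC 2.5, BC 2.0: the Pareto-dominated BC scores best
# removing the A approval from the AB voter brings AB down to 2.0 too:
print(ebert_score([(1, {"B"}), (1, {"A", "C"})], {"A", "B"}))  # 2.0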

But assuming that there can be one sensible way to do this (as opposed to having to consider different removals when comparing different sets), how would we design an algorithm to do this? In Warren's example, the A removals and B removals have to work together, so it's not a case of simply removing "bad" approvals. And even then, certain approvals might work in a candidate's favour under some results but not others, so a single removal might not work. Obviously some candidate sets will definitely not be elected (there would be ways of trivially removing the vast majority of possibilities) so these would not need to be considered when determining if an approval works in a candidate's favour.

But let's say we reach the point where we now have a definitive proportionality score for each set of candidates. We still have more work to do. Consider this example:

2 to elect

499: ABC
499: ABD
1: C
1: D

Approval removal will not make AB more proportional than CD according to Ebert or Phragmen, and yet we would still want to elect AB as the winning set. This is a case where Thiele seems to act reasonably. If we use Sainte-Lague divisors, then in this example:

3: ABC
3: ABD
1: C
1: D

we have a tie. As an aside, Ebert measures AB as 3/4 as proportional as CD, whereas CD has 2/3 of the approval of AB.

Toby Pereira

Jan 6, 2016, 5:00:28 PM
to The Center for Election Science


On Monday, 28 December 2015 16:53:42 UTC, Toby Pereira wrote:
So I originally thought that it might be a good idea to make each candidate set as proportional as it possibly can be (based on Ebert or Phragmen score) through the removal of approvals. But that would seem to go too far in this case:

6 to elect

20 voters: ABCDEF
10 voters: ABCG

I would consider the best result to be ABCDEG. This would be measured by Ebert/Phragmen as fully proportional without the removal of any approvals. But ABCDEF (the Thiele winner) could be made fully proportional. The larger faction could have certain approvals removed making the following ballots:

15: DEF
5: ABC
10: ABCG

That would make ABCDEF fully proportional and would then beat ABCDEG if there was a tie-break based on overall support. But this seems to go too far. The removal of approvals is supposed to be there to prevent extra approvals counting against a candidate. But here we have removed approvals from A, B and C (who were going to be elected anyway) to help F. This was not introduced as some sort of "later no harm" feature for voters so this is an inappropriate use of approval removals. And without removing approvals from A, B or C, F cannot displace G from the winners' circle. So ABCDEG still seems to be the right result.

I was thinking about this again, and it's not as clear-cut as I originally thought. We could have these ballots:

15: ABCDEF, D1, E1, F1
5: ABCDEF, A1, B1, C1
10: ABCG, A1, B1, C1, G1

So I've essentially combined the honest and the tactical votes into one ballot. And we can clearly see that A1, B1, C1, D1, E1, F1 would be fully proportional under Ebert/Phragmen, and it's difficult to argue that it's a better result than ABCDEF. So maybe it does make sense after all to maximise proportionality using approval removals even if this does destroy strong PR. That's not the end of the story though, because strong PR might be back on with a "positive support" fix. That is to say that in this case:

x voters: ABC
x voters: ABD
1 voter: C
1 voter: D

there must be a value of x where AB beats CD. I have a couple of ideas but nothing concrete enough to post. Also in Warren's example here:

1N: A1, A2, A3...
1N: B1, B2, B3...
1N: A1, A2, A3... B1, B2, B3...
3N: C1, C2, C3...

Although it seems the best result would be a quarter As, a quarter Bs and half Cs, we would also by approval removal be able to achieve proportionality with 1/3 As, 1/6 Bs and 1/2 Cs, or with 1/6 As, 1/3 Bs and 1/2 Cs.

Warren D Smith

Jan 6, 2016, 7:18:40 PM
to electio...@googlegroups.com
"Approval removal" seems computationally very expensive?
I thought it was bad to have computation
growing like exponential(C) where C=#candidates.
But exploring all possible approval-removals would grow
exponential(V*C) where V=#voters, which is truly insane!!
You've got to try to do better than that -- which might be possible,
I'm just saying that is the most naive brute force method's runtime
behavior.


--
Warren D. Smith
http://RangeVoting.org <-- add your endorsement (by clicking
"endorse" as 1st step)

Toby Pereira

Jan 7, 2016, 9:53:26 AM
to The Center for Election Science
I think they probably are pretty expensive. But the one I outlined in the "Holy Grail" thread probably isn't too bad as it only involves checking candidates one at a time. But what I'm trying to develop here probably would be very computationally expensive. However, regardless of its ultimate use, I still think it would be good to develop the "ultimate" system that elects the best set of candidates in all cases (if it can be done). Then we can see if we can either use it, or at least have it as something to aim for in terms of using the system that gets closest to it.

Toby Pereira

Jan 7, 2016, 1:35:53 PM
to The Center for Election Science
I have an idea that could be combined with approval removal that could be promising. It might sound like a bit of a weird idea but go with it. Where a voter approves a candidate, we split the voter into two so that half the voter approves the candidate and half doesn't. If a voter approves two candidates, then 0.25 approve both, 0.25 approve one, 0.25 approve the other and 0.25 approve neither. I haven't proved it generally, but in the cases I've tried, it turns Ebert from Sainte-Lague to D'Hondt. For example:

2 to elect

2 voters: AB
1 voter: C

This method gives a tie between AB and AC. (Working available on request.)
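Here is the working as a rough Python sketch (illustrative only): it applies the halving literally by expanding each ballot over all coin-flip subsets of its approvals. Approvals of non-winners contribute no load, so only approvals of winners need expanding, and after the halving each candidate keeps exactly half of its approvals in the denominator.

from itertools import combinations

def cfa_ebert_score(ballots, winners):
    # ballots: list of (weight, set of approved candidates)
    winners = set(winners)
    halved = {w: 0.5 * sum(wt for wt, b in ballots if w in b) for w in winners}
    total = 0.0
    for wt, b in ballots:
        rel = sorted(b & winners)
        for r in range(len(rel) + 1):
            for kept in combinations(rel, r):
                load = sum(1.0 / halved[w] for w in kept)
                total += wt * 0.5 ** len(rel) * load * load
    return total

ballots = [(2, {"A", "B"}), (1, {"C"})]
print(cfa_ebert_score(ballots, {"A", "B"}))  # 3.0
print(cfa_ebert_score(ballots, {"A", "C"}))  # 3.0, the claimed tie

The two ties below check out the same way.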

3 to elect

3 voters: ABC
1 voter: D

We have a tie between ABC and ABD.

4 to elect

3 voters: ABC
2 voters: DE

We have a tie between ABCD and ABDE.

All of these agree with the D'Hondt result. But that isn't the most interesting part. Take this case:

2 to elect

x voters: ABC
x voters: ABD
1 voter: C
1 voter: D

It gives a tie between AB and CD when x is 3. Anything higher than 3 and AB wins, anything lower and CD wins. Quite a nice round result.
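A closed-form check (a sketch, using the standard fact that for independent fair coin flips E[(SUM_w a_w*X_w)^2] = (1/4)*SUM_w a_w^2 + (1/4)*(SUM_w a_w)^2, where a_w = 1/A[w] and A[w] is the post-halving approval count of winner w):

For AB: A[A] = A[B] = x, so each of the 2x faction voters contributes (1/4)*(2/x^2) + (1/4)*(2/x)^2 = 3/(2x^2), giving Q(AB) = 3/x.
For CD: A[C] = A[D] = (x+1)/2, and each of the 2x+2 voters approves exactly one winner, contributing (1/2)*(2/(x+1))^2 = 2/(x+1)^2, giving Q(CD) = 4/(x+1).
These are equal exactly when 3/x = 4/(x+1), i.e. x = 3.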

Provisionally the method would be - remove approvals as necessary to find the most proportional result (according to Ebert) given that all approvals will be "halved". It generally pushes things in the direction of larger factions, which is why it moved things from Sainte-Lague to D'Hondt in the cases I tried (hopefully it will work generally). I don't have a Sainte-Lague version. I haven't tested it for universally approved candidate examples, but in the following case:

20 voters: ABCDEF
10 voters: ABCG

removing approvals to make ABCDEF equally proportional with ABCDEG probably won't work. Removing approvals may increase proportionality in raw Ebert, but with the half approval thing, taking out approvals becomes more "expensive". ABCDEG is fully proportional in raw Ebert without any removals so is likely to retain a higher level of proportionality in this case too.

Warren D Smith

Jan 7, 2016, 1:58:33 PM
to electio...@googlegroups.com
That's interesting. An A-approval ballot is converted to 2^A ballots
each weighted 2^(-A).
Perhaps "coin-flip approval" (CFA) is a better name.

The CFA transform ruins proportionality in "racist" situations.
4N voters: AB
4N voters: C
ought to elect AC.
After CFA transform:
N: A
N: B
N: AB
3N: nobody
2N: C
which is differently proportioned, 3:2 not 1:1 anymore,
and perhaps ought to elect AB.

Toby Pereira

Jan 7, 2016, 2:57:37 PM
to The Center for Election Science


On Thursday, 7 January 2016 18:58:33 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:
That's interesting.  An A-approval ballot is converted to 2^A ballots
each weighted 2^(-A).
Perhaps "coin-flip approval" (CFA) is a better name.

The CFA transform ruins proportionality in "racist" situations.
4N voters: AB
4N voters: C
ought to elect AC.
After CFA transform:
N: A
N: B
N: AB
3N: nobody
2N: C
which is differently proportioned, 3:2 not 1:1 anymore,
and perhaps ought to elect AB.




I'm not sure it ruins proportionality. It does change the proportions in that case, but it's about whether it produces the correct result. And those ballots, I'm pretty sure, would still elect AC. D'Hondt awards both seats to the larger faction at >2/3 of the vote, Sainte-Lague at >3/4, whereas this is less than both (3/5). Also the Ebert proportionality scores that AB and AC get would be the same if there was a fourth candidate, D, approved by the C faction.

It generally means that larger factions end up with proportionally more "voters" than smaller factions, but this seems to push it from Sainte-Lague to D'Hondt rather than to a violation of proportionality. But as I say, I haven't proved this yet, although it has worked in the cases I've tried. And these cases have been precise tie cases where any shift in either direction would give the wrong result.

Warren D Smith

Jan 7, 2016, 5:03:42 PM
to electio...@googlegroups.com
The CFA transform will alter proportions by up to a factor of 2.

N: ABCD
N: E
which is 1:1 ratio, will become
15:8 which is nearly 2:1 ratio.

It thus makes a system vulnerable to cloning (e.g. if C,B,D were
clones).

So, I'm skeptical. I suppose one could argue a factor 2 is not
hugely serious (not as serious as a factor 100) but it's enough to
make me skeptical CFA will be useful.

I am biased but presently think the most hopeful avenue is to start from my
holy grail scheme, and see if one can cook up a better holy grail scheme
which uses less computing.

Toby Pereira

Jan 7, 2016, 7:22:50 PM
to The Center for Election Science


On Thursday, 7 January 2016 22:03:42 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:
The CFA transform will alter proportions by up to a factor of 2.

N: ABCD
N: E
which is 1:1 ratio, will become
15:8 which is nearly 2:1 ratio.

It thus makes a system vulnerable to cloning (e.g. if C,B,D were
clones).

So, I'm skeptical.  I suppose one could argue a factor 2 is not
hugely serious (not as serious as a factor 100) but it's enough to
make me skeptical CFA will be useful.

I don't think this should cause a problem in the way you think. With those ballots, if there are two to elect it wouldn't be a problem because it's basically AB v AE (which AE would win) and the existence of the extra clones is irrelevant (candidates not in a set make no difference to the score for that set). If there are more than two to elect, then the E faction will lose out, but only because they haven't fielded enough candidates anyway.

 

I am biased but presently think the most hopeful avenue is to start from my
holy grail scheme, and see if one can cook up a better holy grail scheme
which uses less computing.


It's possible that your system is more plausible for real-life elections because of computability, but I think what I'm pursuing is still worthwhile to see where it goes and what results come out of it. 

Toby Pereira

Jan 7, 2016, 7:35:09 PM
to The Center for Election Science
I have a couple more promising results.

3 to elect

2 voters: ABC
1 voter: AD

This gives an exact tie between ABC and ABD as one would expect from D'Hondt with strong PR.

Also this based on one of Warren's examples:

6 to elect

1: A1, A2, A3...
1: B1, B2, B3...
1: A1, A2, A3... B1, B2, B3...

I tried three results. First, approval removal so that the AB faction is cut in half, with half approving the As and half the Bs, giving the result A1, A2, A3, B1, B2, B3. The sum of the squares of the loads was 16.

Then with the AB faction losing all the B votes and with the result A1, A2, A3, A4, B1, B2. This also gave 16.

Finally, no removal and A1, A2, A3, B1, B2, B3. This gave 16.5 (higher being worse).

I suppose I'm a little uncomfortable with 4 As and 2 Bs equalling 3 of each, but at least it didn't beat it, and this might be unavoidable. In a way I was surprised that 4 As and 2 Bs didn't win, because the further you go in that direction, the nearer you get to universal approval. All 3 voters supporting the same 6 candidates gives a score of 14. So I might have to look into this further. So 3:0 beats 2:1 and 1.5:1.5, but those latter two tie.

What would happen if we had e.g.

1: A candidates
1: B candidates
10: A and B candidates
?

I'll look into this later.

Warren D Smith

Jan 8, 2016, 2:45:22 AM
to electio...@googlegroups.com
ok, if it makes you happier, I modify the numbers:

The CFA transform will alter proportions by up to a factor of 2.

6N: ABCD
10N: E

E wins (single winner), but after CFA transform A wins.

Toby Pereira

Jan 8, 2016, 6:53:18 AM
to The Center for Election Science


On Friday, 8 January 2016 07:45:22 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:
ok, if it makes you happier, I modify the numbers:

 The CFA transform will alter proportions by up to a factor of 2.

 6N: ABCD
 10N:  E

E wins (single winner), but after CFA transform A wins.



Your example still doesn't work. You're correct that fewer would approve E (5N) than at least one of ABCD (6N*15/16 = 5.625N) but that doesn't change anything in the A v E single-winner race. The ballots for that are effectively:

3N: A
5N: E
8N: Nobody

3N approve each of ABCD, whereas 5N approve E, so E wins.

The more I think about this, the more I think it could be very useful. This is obviously independent of approval removal, even if I've been joining them together to form a single method.

Toby Pereira

Jan 8, 2016, 10:56:29 AM
to The Center for Election Science
I've tried a few more scenarios.

4 to elect

1: A1, A2...
1: B1, B2...
2: A1, A2... B1, B2...

With appropriate approval removal, A1, A2, A3, B1 is equal to A1, A2, B1, B2.

6 to elect

1: A1, A2...
1: B1, B2...
4: A1, A2... B1, B2...

With appropriate approval removal, A1, A2, A3, A4, A5, B1 is equal to A1, A2, A3, B1, B2, B3.

These election types seem to give equality regardless of the size of the AB faction (unproven though). This does actually seem reasonable. The AB faction are indifferent between As and Bs so from their point of view it doesn't matter whether they get all As, all Bs or a mixture.

I used this example yesterday:

3 to elect:

2: ABC
1: AD

ABC came out equal with ABD as a strong PR D'Hondt method should. However, I've now tried it with approval removal, so with the following ballots:

2: BC
1: A

With result ABC

And:

2: AB
1: D

With result ABD

These came up with exactly the same scores as ABC and ABD with no approval removal. No faction seems to be able to gain an advantage here by withholding approvals. Then we have:

6 to elect

4: ABCDEF
2: ABCG

With no approval removal, ABCDEG has a score of 7.5, beating the 8 of ABCDEF. But we can make ABCDEF more proportional by removing approvals as follows:

3: DEF
3: ABCG

ABCDEF may now be "fully proportional" but it also loses out in this method due to having fewer approvals all round. It ends up with a score of 8 - exactly the same as before the approvals were removed. So ABCDEG still wins, maintaining strong PR.
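Here is a rough sketch that reproduces those scores without expanding sub-ballots, using the closed form for the coin-flip expectation (for independent fair flips, E[(SUM a_w*X_w)^2] = (1/4)*SUM a_w^2 + (1/4)*(SUM a_w)^2 with a_w = 1/A[w]):

def cfa_ebert_q(ballots, winners):
    # ballots: list of (count, set of approved candidates)
    halved = {w: 0.5 * sum(n for n, b in ballots if w in b) for w in winners}
    total = 0.0
    for n, b in ballots:
        a = [1.0 / halved[w] for w in b & winners]
        total += n * 0.25 * (sum(x * x for x in a) + sum(a) ** 2)
    return total

ballots = [(4, set("ABCDEF")), (2, set("ABCG"))]
print(cfa_ebert_q(ballots, set("ABCDEF")))  # 8.0
print(cfa_ebert_q(ballots, set("ABCDEG")))  # 7.5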

In conclusion, while the general criteria haven't been proved yet, this method has withstood empirical testing better than any other method we've discussed on here. It hasn't returned a single bad result. The only problem (computability aside) is that it is just a D'Hondt method at the moment. There is no Sainte-Lague. But I don't see that as fatal. I've often wondered if that might be the case anyway in a Holy Grail method. Sainte-Lague is arguably objectively the most proportional you can get, but if you want to add monotonicity and "positive support" (as I named one of the criteria), you inevitably shift it from pure PR in a "majoritarian" direction. But only pushing it as far as D'Hondt seems a very good compromise.

Computability aside, I would say at the moment that this is probably the best method you're likely to get. And I wonder if it can be used to test a relatively small number of individual candidate sets, if more tractable methods are used to narrow down the possibilities to a shortlist.

Warren D Smith

Jan 8, 2016, 2:46:38 PM
to electio...@googlegroups.com
The score-voting-ballot analog of the CFA transform, would seem to be:
Any score S for a candidate (use score range 0<=S<=1)
transforms via a biased coin flip into
approve with probability S/2
disapprove with probability 1-S/2
A score ballot in a C-candidate election thus is transformed
into 2^C approval ballots. The weight of the ballot corresponding to
a binary integer X (0<=X<2^C; the "1" bits of X tell us the approved candidates)
is
weight = PRODUCT(j=1..C) (S[j]/2)^X[j] * (1-S[j]/2)^(1-X[j])
where X[j] is the jth binary bit of X and S[j] is the ballot's score for candidate j.
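A rough sketch of that expansion (illustrative Python; scores scaled so 0<=S<=1):

from itertools import product

def score_cfa_expand(scores):
    # scores: dict mapping candidate -> S in [0,1]; returns the 2^C
    # (weight, approval set) pairs, candidate j approved with probability S_j/2
    cands = list(scores)
    out = []
    for bits in product([0, 1], repeat=len(cands)):
        w = 1.0
        for c, x in zip(cands, bits):
            p = scores[c] / 2.0
            w *= p if x else 1.0 - p
        out.append((w, {c for c, x in zip(cands, bits) if x}))
    return out

for w, b in score_cfa_expand({"A": 1.0, "B": 0.5}):
    print(round(w, 3), sorted(b))
# 0.375 [], 0.125 ['B'], 0.375 ['A'], 0.125 ['A', 'B']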

It seems to me what you (Toby P) want, re your CFA transform idea, is
to prove a monotonicity theorem.
So let's take a look at that.

Let Q be the Ebert "sum of squared loads" penalty function applied after a CFA
transform. (Choose parliament minimizing Q.)
S[v,c]=score for candidate c from voter v.
T[c]=sum of all scores for c=SUM(voters v) S[v,c].
Delta = a constant "shift" (most simply 0).

Before any CFA transform:
Q = SUM(voters v) ( SUM(winners w) S[v,w] / (T[w]+Delta) )^2

After CFA transform:
Q = SUM(voters v)
SUM(0<=X<2^C)
( SUM(winners w) X[w] / (T[w]+Delta) )^2 *
PRODUCT(winners y) (S[v,y]/2)^X[y] * (1-S[v,y]/2)^(1-X[y])

The QUESTION is:
is the latter Q function monotonic, meaning, increasing some S[u,Z] always
decreases Q for
Z-containing parliaments, while leaving Q unaltered for Z-omitting parliaments?
I doubt it. There are two opposite sign effects going on for the voter v=u
versus for all the other voters' terms, and it seems like you
could make whichever one you want be bigger by designing the election right.

Toby Pereira

Jan 8, 2016, 3:28:38 PM
to The Center for Election Science


On Friday, 8 January 2016 19:46:38 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:
The score-voting-ballot analog of the CFA transform, would seem to be:
Any score S for a candidate (use score range 0<=S<=1)
transforms via a biased coin flip into
      approve with probability S/2
   disapprove with probability 1-S/2
A score ballot in a C-candidate election thus is transformed
into 2^C approval ballots.  The weight of the ballot corresponding to
a binary integer X (0<=X<2^C; the "1" bits of X tell us the approved candidates)
is
   weight = PRODUCT(j=1..C) (S[j]/2)^X[j] * (1-S[j]/2)^(1-X[j])
where X[j] is the jth binary bit of X and S[j] is the ballot's score for candidate j.

I think I'd probably still want to use the Kotze-Pereira transform. Doing it the way you suggest would I think fail scale invariance. When I rediscovered the Ebert method, I initially looked at something similar (without the CFA part) for the score version. So if someone gives a score of 5 out of 10 to 2 candidates, it would be equivalent to 0.25 people approving both, 0.25 approving neither and 0.25 each for one or the other. Contrast that with a score of 10 out of 10 where no fraction of a voter would disagree with another fraction.


 

It seems to me what you (Toby P) want, re your CFA transform idea, is
to prove a monotonicity theorem.
So let's take a look at that.

Let Q be the Ebert "sum of squared loads" penalty function applied after a CFA
transform.  (Choose parliament minimizing Q.)
S[v,c]=score for candidate c from voter v.
T[c]=sum of all scores for c=SUM(voters v) S[v,c].
Delta = a constant "shift" (most simply 0).

Before any CFA transform:
Q = SUM(voters v) ( SUM(winners w)  S[v,w] / (T[w]+Delta) )^2

After CFA transform:
Q = SUM(voters v)
     SUM(0<=X<2^C)
       ( SUM(winners w) X[w] / (T[w]+Delta) )^2 *
          PRODUCT(winners y) (S[v,y]/2)^X[y] * (1-S[v,y]/2)^(1-X[y])

The QUESTION is:
is the latter Q function monotonic, meaning, increasing some S[u,Z] always
decreases Q for
Z-containing parliaments, while leaving Q unaltered for Z-omitting parliaments?
I doubt it.  There are two opposite sign effects going on for the voter v=u
versus for all the other voters' terms, and it seems like you
could make whichever one you want be bigger by designing the election right.

I don't think using CFA with Ebert would itself pass monotonicity, although I think failures would be rarer than with raw Ebert. I think it would still have to be combined with approval removal. But with approval removal, it must be monotonic. Any approval that counts against a candidate would be removed. I know it's an added computational hassle though.
 

Warren D Smith

Jan 8, 2016, 3:56:47 PM
to electio...@googlegroups.com
> I don't think using CFA with Ebert would itself pass monotonicity, although
> I think failures would be rarer than with raw Ebert.

--agree.

> I think it would still
> have to be combined with approval removal. But with approval removal, it
> must be monotonic. Any approval that counts against a candidate would be
> removed. I know it's an added computational hassle though.

--might be only weak monotonicity; the excess approvals will not help but
won't hurt? So what will be the precise definition of approval removal system?
Minimize Q over all parliaments and also over all possible ways
to remove approvals, *and* there is a
Kotze-Pereira transform followed by a CFA transform before start?

If that stuff all worked it could be better than the present Holy
Grail in the sense that
the computational work could be cut from depending on W! to depending
on 2^W or something like that (W=#winners) -- which is better --
simple exponential growth is smaller than factorial growth. This all
is ONLY if it somehow can be done in time NOT exponential in #voters,
which would be a total disaster, completely infeasible in essentially
every election with (say) >200 voters.

Toby Pereira

Jan 8, 2016, 5:59:17 PM
to The Center for Election Science


On Friday, 8 January 2016 20:56:47 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:


--might be only weak monotonicity; the excess approvals will not help but
won't hurt?

Yes, I suppose so. More approvals would win in a tie-break though.

 
So what will be the precise definition of approval removal system?
Minimize Q over all parliaments and also over all possible ways
to remove approvals, *and* there is a
Kotze-Pereira transform followed by a CFA transform before start?

Yes, that would be it. With the removal of approvals, I presume it could be done without checking every possible combination. Approvals would only need to be removed in cases where voters are effectively in two factions and so become too heavily overrepresented. One other complicating thing is that in a case like this:

2: A
2: B
2: AB

you might want to split the AB faction into an A and a B but here:

1: A
1: B
1: AB

you can't do that without splitting a voter in half. But it should be the case that multiplying the number of voters by a constant does not affect the result. So you'd have to be able to split voters into fractional parts. Although with many voters, it's unlikely to make much difference.

 

If that stuff all worked it could be better than the present Holy
Grail in the sense that
the computational work could be cut from depending on W! to depending
to on 2^W or something like that (W=#winners) -- which is better --
simple exponential growth is smaller than factorial growth.  This all
is ONLY if it somehow can be done in time NOT exponential in #voters,
which would be a total disaster, completely infeasible in essentially
every  election with (say) >200 voters.

I don't have that much of an idea about this side of things. I sort of assumed it would be quite bad from a computational point of view with the approval removal, but I don't know. As I said before, I don't think that much approval removal would even be required, but I suppose it's more about how much checking the program has to do rather than the number of approvals that would actually need to be removed.

Toby Pereira

Jan 8, 2016, 6:52:11 PM
to The Center for Election Science
One other thing I need to add about the approval removals is that they have to be consistent with removing the original approval that the voter cast rather than from the CFA or Kotze-Pereira transformed ballots. If a voter approves A and B, this would become:

0.25: AB
0.25: A
0.25: B
0.25: Nobody

You can't just then remove the B from the top fraction to make this:

0.5: A
0.25: B
0.25: Nobody

The removal has to come from the root of the approvals - the voter in their original form.

Toby Pereira

Jan 10, 2016, 12:30:29 PM
to The Center for Election Science
I think I have now proved that with partisan voting, Ebert with CFA is D'Hondt proportional.

To summarise what I understand D'Hondt proportionality to be: if party A has a Droop quotas of votes and party B has b Droop quotas of votes, and there are a+b-1 seats available, then there would be a tie between them winning a and b-1 seats respectively and winning a-1 and b seats respectively. More generally, if party A has ka votes and party B has kb votes, and there are a+b-1 seats that can be won between those parties, then there will be a tie between a, b-1 and a-1, b.

Here is an example of how to calculate the load on a party's voters if they win, say, 4 seats. The party has v voters and the number of seats they win (4 in this particular case) is s. Here is the split in ballots of how many of the 4 candidates they would vote for after the CFA transformation.

4v/16: 1 candidate (4v/16 = (4 choose 1) * v / 2^s)
6v/16: 2 candidates (6v/16 = (4 choose 2) * v / 2^s)
4v/16: 3 candidates (4v/16 = (4 choose 3) * v / 2^s)
v/16: 4 candidates (v/16 = (4 choose 4) * v / 2^s)

That accounts for 15v/16 voters. 1v/16 are unrepresented.

A voter's individual load from each candidate they approve is 1/(0.5v) = 2/v, because 0.5v is the number of (post-CFA) voters approving each candidate. So if a voter approves t of the s candidates, their load is 2t/v and their squared load is 4t^2/v^2.

The number of voters approving t candidates would be (s choose t) * v / 2^s.

So the sum of the squared loads of all the voters would be SUM(t=1..s) (4t^2/v^2) * (s choose t) * v / 2^s. We can cancel down a bit:
SUM(t=1..s) 4t^2 * (s choose t) / (2^s * v)

And then by magic we get that the sum of the voters' squared loads is s*(s+1)/v - http://www.wolframalpha.com/input/?i=sum+%28t+to+s%29+4t%5E2+*+%28s+comb+t%29+%2F+%282%5Es+*+v%29
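A quick numerical check of that identity (illustrative Python):

from math import comb

def summed_squared_loads(s, v):
    return sum(4 * t * t * comb(s, t) / (2 ** s * v) for t in range(1, s + 1))

for s in range(1, 10):
    assert abs(summed_squared_loads(s, 7.0) - s * (s + 1) / 7.0) < 1e-12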

Now we'll take the case where party A has ka votes and party B has kb votes. First, party A wins a seats and party B wins b-1 seats. The sum of the squared loads is:

a*(a+1)/ka + (b-1)*(b)/kb
= (a+1)/k + (b-1)/k
= (a+1+b-1)/k
= (a+b)/k

Now take the case where party A wins a-1 seats and party B wins b seats. The sum of the squared loads is (if it isn't obvious by now):

(a-1)*(a)/ka + b*(b+1)/kb
= (a-1)/k + (b+1)/k
= (a-1+b+1)/k
= (a+b)/k

Both results are the same. So in summary, if party A has ka votes and party B has kb votes, then Ebert with CFA would consider party A winning a seats and party B winning b-1 seats to be equally proportional to party A winning a-1 seats and party B winning b seats. This is what we set out to prove.

Toby Pereira

Jan 10, 2016, 1:06:54 PM
to The Center for Election Science
It is also clear that Ebert with CFA would keep strong PR.

Under normal Ebert, we already know that if result A is better than result B then it will remain so if a universally approved candidate is added. So the sum of the squared loads under result A will remain lower than under result B if a constant is added to the loads of each of the voters. (Adding a constant c to every load adds 2cW + Vc^2 to the sum of squares, the same for any set of the same size W, because the loads always sum to the number of winners.)

Now consider the CFA case. We are effectively adding a constant to the loads of half of the voters. So by the result above, the sums of the squared loads will remain lower in result A than in result B for these voters. For the other half of the voters, their loads remain unchanged. So the result must hold and strong PR remains.

Toby Pereira

Jan 10, 2016, 1:26:11 PM
to The Center for Election Science
To summarise some of the criteria this system (Ebert with CFA, KP transformation and approval removal) would pass:

PR - Yes
Strong PR - Yes
Global function - Yes
Monotonic - Yes (at least weakly, but I don't see that as a problem, especially with a total-approval tie-break)
Reduces to approval/score in single winner case - Yes
Scale invariant - Yes
Independence of irrelevant voters - Yes (they are not considered when scoring a set of candidates)
Sainte-Lague/D'Hondt proportionality - Yes as I originally defined it, in that it should have at least one (D'Hondt in this case). But there is currently no Sainte-Lague version.
Positive support - Yes. AB wins for x>3 in this example:

x voters: ABC
x voters: ABD
1 voter: C
1 voter: D

As for others, I think it probably does pass participation actually but I have no proof.

Then there is factional proportionality as I most recently defined it - If a faction (where all voters are connected to each other through votes and connected to no-one else) wins a certain number of seats, then if some of these ballots are given additional approvals (while still maintaining the boundaries of the faction - so not overlapping with another faction that was previously unconnected), then that faction should not win fewer seats.

I think it must pass this actually. The score of a set with an extra candidate not from the faction could not gain relative to the original winning set (before extra approvals are added). Any approvals that could be harmful to a candidate would be removed. I suppose it could still be possible for a gain in proportionality if one of the faction's candidates is replaced by a different one, and a non-factional candidate then ousts a different factional candidate? In any case this is still a new and arguably doubtful criterion and so it hasn't really been proved for any system.

Toby Pereira

Jan 10, 2016, 2:31:26 PM
to The Center for Election Science
We need to see if there is a sensible way of removing approvals that won't require way too many computational resources.

For a possible candidate set, we could list the voters of each candidate in order of how overrepresented they are. Then we could work down each list to see if removing approvals will improve the overall score of the set, and we stop at the point where no more gains are to be had.

Obviously there are still a few problems with this. The order of the lists could make a difference. And it could be possible that we have to remove more than one approval to get any gain. We might lose and then gain again. Remember this example:

4: ABCDEF
2: ABCG

The score of ABCDEF was the same with no removals as it was with the ballots transformed to this:

3: DEF
1: ABC
2: ABCG

It's not the winning result (ABCDEG is where no removals are required) but there could be a case where something like this was. And where some removals similar to the above meant that ABCDEF became slightly better (rather than the same) after the removals. And although I haven't gone through the working, I suspect that removing these approvals one by one causes a drop before going back up again. Hmm, I don't know though. What we have here is a result that wasn't the best anyway and we've removed some approvals to get it back to where it was (i.e. still not the best), so maybe it's not too much to worry about.

We can do the same with these ballots:

2: AB

We could make it:

1: A
1: B

Which could be some sort of "local maximum", but it wouldn't be as good as what we started with, so I'm tempted to say that we don't need to worry too much about this.

So I would say that the best method would be to have each candidate's voters in order of most overrepresented, and then keep checking the top of each list (rather than doing a whole list at once) and remove the approval that causes the greatest gain in Q each time, until no further gains can be made from the top of any list.

I would be surprised if it wasn't exceedingly rare for the best set of candidates to fail to be elected in this case. Even if it's not entirely perfect and one could concoct a contrived example, this should still be very accurate.
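Here is a rough Python sketch of that loop (illustrative; the rule of never stripping a winner's last approval is an extra guard of mine, not something settled above):

def ebert_q(ballots, winners):
    # returns (sum of squared loads, per-ballot loads)
    appr = {w: sum(1 for b in ballots if w in b) for w in winners}
    loads = [sum(1.0 / appr[w] for w in b & winners) for b in ballots]
    return sum(l * l for l in loads), loads

def greedy_removal(ballots, winners):
    ballots, winners = [set(b) for b in ballots], set(winners)
    best, loads = ebert_q(ballots, winners)
    while True:
        trials = []
        for w in winners:
            idx = [i for i, b in enumerate(ballots) if w in b]
            if len(idx) < 2:
                continue  # keep a winner's last approval
            top = max(idx, key=lambda i: loads[i])  # top of w's list
            trial = [b - {w} if i == top else b for i, b in enumerate(ballots)]
            trials.append(ebert_q(trial, winners) + (trial,))
        gains = [t for t in trials if t[0] < best]
        if not gains:
            return ballots, best
        best, loads, ballots = min(gains, key=lambda t: t[0])

print(greedy_removal([{"A", "B"}, {"A", "C"}], {"A", "B"}))
# ([{'B'}, {'A', 'C'}], 2.0): the AB voter's A approval goes,
# as in the first example of this thread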

Warren D Smith

Jan 10, 2016, 2:42:32 PM
to electio...@googlegroups.com
On 1/10/16, 'Toby Pereira' via The Center for Election Science
<electio...@googlegroups.com> wrote:
> I think I have now proved that with partisan voting, Ebert with CFA is
> D'Hondt proportional.

--this looks very promising. But can the proof be made to work with
arbitrary numbers (not just 2) of parties and each running arbitrarily many
candidates?

Warren D Smith

Jan 10, 2016, 3:12:52 PM
to electio...@googlegroups.com
OK, as an independent check & extension of Toby, let me see if I can
produce my own
proof that CFA+Ebert ==> basic proportionality.

Let party k run C_k candidates and have V_k voters and get W_k winners.
Each voter approves her party's candidates and disapproves all rival
parties' candidates.

The sum of squared loads for the party-k winners and party-k voters is
(without the CFA)
V_k * (W_k/V_k)^2 = W_k^2/V_k.
The reason this yields PR is that if we minimize
SUM_k W_k^2 / V_k
by choice of the W_k subject to SUM_k W_k = W = total #seats being held fixed,
then the answer (by method of Lagrange multipliers) is
W_k / V_k = independent of k
i.e. proportionality.

Side remark:
Toby actually took a more detailed look at the 2-party case finding in a
d'Hondt "perfect tie" scenario that "d'Hondt happens."
But I won't do that here. Toby also remarked he
had no Sainte-Lague version, to which I reply he should consider using "shifted
approval" to define "load." That is, a voter's load gets a contribution of 1/A
if that voter approved some winner who won with A approvals. Instead use
1/(A+shift) where shift is some pre-agreed value of the same order as V/W
where V=#voters and W=#winners. I think what I have been calling
"shift" is what Jameson Quinn has been calling "ghost voters." With shifts used
in this way, other proportionality notions should happen; see which
one best corresponds to Sainte-Lague.

Return to regularly scheduled programming:
The sum of squared loads for the party-k voters is (now with the CFA)
2^(-W_k) * V_k * SUM(0<=j<=W_k) Binomial(W_k,j) * (j/V_k)^2.
The sum may be done in closed form -- type into wolframalpha.com
sum( Binomial(W,j)*j^2, j=0..W )
and, with the prefactor, the result is
(1/4) * W_k * (1+W_k) / V_k.
(This is a quarter of Toby's s*(s+1)/v because the load here is normalized per V_k voters rather than per the post-CFA approval count V_k/2; the minimizing parliament is unaffected.)
Yes!! I agree with Toby that this yields PR, the reason being that
if we minimize
SUM_k (1+W_k)*W_k / V_k
by choice of the W_k subject to SUM_k W_k = W = total #seats being held fixed,
then the answer (by method of Lagrange multipliers) is
(1+2*W_k) / V_k = independent of k
which is a form of proportionality which now asks that (1/2)+W_k
rather than W_k be proportional to V_k.

(I have not checked about perfect ties ala Toby, nor have I looked
into "shifts.")

So anyhow what this proves is:
with any number of parties and any numbers of candidates run by each, we
get using CFA+Ebert ("minimize sum of squared loads" concept)
a guarantee of a kind of proportionality which is "half shifted" versus
the kind you get without the CFA.

Warren D Smith

Jan 10, 2016, 3:41:08 PM
to electio...@googlegroups.com
About "shifted Ebert" using not 1/A but rather 1/(A+shift)
to define "load":
This will yield the following kind of proportionality when we minimize
the sum of
squared loads.

Old way (shift=0):
#winners for party_k = W_k proportional to V_k = #voters for party_k.
New way (with shift):
W_k proportional to (shift+V_k)^2 / V_k.
(Here I am not demanding W_k be integer, I'm allowing real number W_k.)
The right hand side is
V_k + 2*shift + shift^2/V_k
and if the rightmost term is neglected then
W_k proportional to about V_k + 2*shift.
So positive values of the shift (versus zero) tend to help smaller parties in
the same manner Sainte-Lague does (versus d'Hondt).
Use shift=V/(4*W), I think, to most closely duplicate Sainte-Lague?
Where V=#voters, W=#winners. That way each party gets "half a Hare quota"
worth of "free" pseudo-votes before election begins.

With the CFA, you instead will need shift=V/(2*W) I think, giving each
party 1 Hare quota of freebies -- the extra half-quota compensating
for the CFA effect of a half-shift in the other direction.

So this is all nice about the CFA and about shifts.

However, the approval removal idea may be problematic, huge computation.

Also, another side remark: if instead of sum of squared loads we were
using sum of cubed loads, or other powers, then the same proof
technique would still work as long as the power is a fixed integer>1;
Wolfram alpha can do all sums of form
sum( binomial(W,j) * j^u, j=0..W)
in closed form if u>1 is integer. In all such cases the CFA+minimize
sum of u-powers of loads voting technique will yield a provable proportionality
theorem with some suitable shift-esque twiddles adjusting the precise meaning of
the proportionality notion.

Toby Pereira

Jan 10, 2016, 5:44:55 PM
to The Center for Election Science
The number of parties shouldn't make any difference. Although I only used two parties competing over a single seat, the same would apply if there was a third (or fourth and so on) party competing for the seat. Parties' entitlement to the next seat can be calculated by simply looking at each pair of parties head-to-head; and when I talk about the "next seat", we can safely elect sequentially in what is effectively a party-list election.

And as for the number of candidates they field, it wouldn't make any difference unless they don't field enough.

Warren D Smith

Jan 10, 2016, 6:31:37 PM
to electio...@googlegroups.com
On 1/10/16, 'Toby Pereira' via The Center for Election Science
<electio...@googlegroups.com> wrote:
>
>
> On Sunday, 10 January 2016 19:42:32 UTC, Warren D. Smith (CRV cofounder,
> http://RangeVoting.org) wrote:
>>
>> On 1/10/16, 'Toby Pereira' via The Center for Election Science
>> <electio...@googlegroups.com> wrote:
>> > I think I have now proved that with partisan voting, Ebert with CFA is
>> > D'Hondt proportional.
>>
>> --this looks very promising. But can proof be made to work with
>> arbitrary numbers (not just 2) of parties and each running arbitrarily
>> many
>> candidates?
>>
>
> The number of parties shouldn't make any difference. Although I only used
> two parties competing over a single seat, the same would apply if there was
>
> a third (or fourth and so on) party competing for the seat. Parties'
> entitlement to the next seat can be calculated by simply looking at each
> pair of parties head-to-head and when I talk about "next seat" we can
> safely elect sequentially in what is effectively a party-list election.
>
> And as for the number of candidates they field, it wouldn't make any
> difference unless they don't field enough.

--yes, agree with all that, in fact I posted proofs in the meantime.

Toby Pereira

Jan 10, 2016, 6:39:36 PM
to The Center for Election Science


On Sunday, 10 January 2016 20:41:08 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:

However, the approval removal idea may be problematic, huge computation.



What do you make of the idea I posted above about this? For a potential winning set of candidates, list the approvers of each candidate in order of overrepresentation. Then just check the top of each list to see which approval removal has the biggest positive effect on Q. Remove this approval. Another voter will then top this candidate's list. Continue until there are no such approvals. So if there are c winning candidates, there would only be at most c approvals to compare at any one time. Then it seems that we would at least find some equilibrium where it would not be beneficial for any candidate to have an approval removed. I think this would be good enough.

With the CFA transformation, this shouldn't be a problem. Even though voters are "split up" we can just look at the voter fraction that has approved all the candidates that the voter approved. Removing an approval would then mean removing the approval entirely from the voter - not just from a fractional part.

It might be slightly more complex in the score case with the KP transformation. It might be that quite a lot of voters have given a positive score to all of the candidates (even if just a score of 1 for some of them). Then all of these voters would have some fraction of them that is jointly most overrepresented. And if someone has given a score of more than 1 to a candidate, we can't start by removing the approval from this "bottom layer" anyway. You would have to start from the top of a voter's score for a candidate and work downwards.

I suppose one way to do this would be to only look at the voting fractions that are a voter's "top score" for a candidate and list them in order of overrepresentation, and then apply the same procedure as before (even if this still means many at joint top). Of course, this would mean that if a voter has given the max score to just one candidate, then none of this score will ever be removed because the top fraction will only approve the one candidate so cannot be overrepresented.

Another solution would be to do the above, but once that has been exhausted (no more gains are to be made), list them in order of overrepresentation of voters' "top score minus one" and see if any gains can be made by removing the top two points from a voter's score to the candidate. Then go to "top score minus two" and so on. I think this could work.

Warren D Smith

Jan 10, 2016, 9:05:17 PM
to electio...@googlegroups.com
On 1/10/16, 'Toby Pereira' via The Center for Election Science
<electio...@googlegroups.com> wrote:
>
>
> On Sunday, 10 January 2016 20:41:08 UTC, Warren D. Smith (CRV cofounder,
> http://RangeVoting.org) wrote:
>>
>>
>> However, the approval removal idea may be problematic, huge computation.
>>
>>
>>
> What do you make of the idea I posted above about this? For a potential
> winning set of candidates, list the approvers of each candidate in order of
> overrepresentation. Then just check the top of each list to see which
> approval removal has the biggest positive effect on Q. Remove this
> approval. Another voter will then top this candidate's list. Continue until
> there are no such approvals. So if there are c winning candidates, there
> would only be at most c approvals to compare at any one time. Then it seems
> that we would at least find some equilibrium where it would not be
> beneficial for any candidate to have an approval removed. I think this
> would be good enough.

--might be good. It's another greedy scheme, as opposed to optimal scheme...


> With the CFA transformation, this shouldn't be a problem. Even though
> voters are "split up" we can just look at the voter fraction that has
> approved all the candidates that the voter approved. Removing an approval
> would then mean removing the approval entirely from the voter - not just
> from a fractional part.

--I'm confused.

> It might be slightly more complex in the score case with the KP
> transformation. It might be that quite a lot of voters have given a
> positive score to all of the candidates (even if just a score of 1 for some
> of them). Then all of these voters would have some fraction of them that is
> jointly most overrepresented. And if someone has given a score of more than
> 1 to a candidate, we can't start by removing the approval from this "bottom
> layer" anyway. You would have to start from the top of a voter's score for
> a candidate and work downwards.

--more confused.
But anyway, I agree there ought to be some fairly obvious greedy approaches
to removal. But it also might be, that the optimum approval-removal is another
of those "inapproximable" problems (I have no idea right now whether that is so)
in which case greedoids are especially suspect. And I express my
usual prejudice
in favor of optimum methods, because, optimum is nice for selling to
people and for
never having to say you're sorry. So given that, and given that
optimum approval removal is probably (?) totally infeasible, I'd
prefer it if somehow this whole approval-removal idea could be gotten
rid of and replaced with something else.

The CFA seems a nice tool to have in our box,
but does it have any use if we discard approval-removal?
It would then seem to be a hammer looking for a nail.

Warren D Smith

Jan 10, 2016, 9:25:10 PM
to electio...@googlegroups.com
Idea.

Suppose we use the Ebert sum-of-squared-loads penalty function,
except for this. The denominators for computing "load" still are
A[w], the number
of approvals got by a winning candidate w. But the numerators, which used
to be 1 if that voter v approved that candidate c, else 0, are now
real numbers x[v,c] with 0<x[v,c]<=1 if approved, else 0.

The x[v,c] are chosen to obey the constraints that (1) each MP needs at
least some known amount of approval (e.g. a "quota" worth? or the amount he got,
if smaller than a quota?) even for "reduced approval"
got by considering that approval as score x not score 1; and (2) that
the sum-of-squared-loads (as redefined) is minimized.

I don't know if this works to produce a good voting system.
But I do know, that this is computationally efficient.
It's a "convex quadratic program" to find the optimum x's, and
that's doable in polynomial(V,C) timesteps.

So what I'm suggesting here, is a variant of "approval removal" which may
or may not yield a good-performing voting system... but definitely
avoids the exponential work explosion you would get from naive brute force
approval-removal. Also note: if this is monotonic, then it ought to
be strictly so
because those A[w] denominators still are there, unaltered by x-ifying,
and they prefer being bigger.

Toby Pereira

Jan 11, 2016, 12:56:55 PM
to The Center for Election Science
This does look like an interesting idea, but I'm not sure if it would necessarily work because it wouldn't stop overrepresentation. Unless by the Ebert measure everyone has exactly the same amount of representation, some will be overrepresented and some underrepresented. And those that approve several of the winning candidates will be overrepresented, and so probably will help certain candidates by not approving them.

But then I suppose if you're changing the numerator and reducing it for more highly approved candidates, then you can reduce the sum of the raw loads (which is always the same under Ebert), so you might be able to make gains that way. But I'm not sure what numbers you'd use. For example:

2: AB
1: C

This would be a D'Hondt tie between AB and AC. Ebert as a Sainte-Lague method would give it to AC. By changing the numerators, we wouldn't want to go so far as to give an outright win to AB, which would mean going the other side of D'Hondt.

Anyway, if the numerators for the C faction remain at 1, then for this to be a tie, the numerators for the AB faction would have to be sqrt(2/3) = 0.816. That's a fairly nasty-looking number and also quite high. You might naively think you could halve the numerator for the AB faction but this would give an easy win to AB over AC.
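(Checking that: write t for the AB faction's numerator and keep the denominators A[A] = A[B] = 2 and A[C] = 1 unaltered. For AB, each faction voter's load is t/2 + t/2 = t, so Q(AB) = 2t^2. For AC, each faction voter's load is t/2 and the C voter's load is 1, so Q(AC) = 2*(t/2)^2 + 1 = t^2/2 + 1. The tie 2t^2 = t^2/2 + 1 gives t^2 = 2/3, i.e. t = sqrt(2/3) = 0.816.)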

Toby Pereira

Jan 11, 2016, 1:48:36 PM
to The Center for Election Science


On Monday, 11 January 2016 02:05:17 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:
On 1/10/16, 'Toby Pereira' via The Center for Election Science

> With the CFA transformation, this shouldn't be a problem. Even though
> voters are "split up" we can just look at the voter fraction that has
> approved all the candidates that the voter approved. Removing an approval
> would then mean removing the approval entirely from the voter - not just
> from a fractional part.

--I'm confused.

OK, suppose a potential winning set of candidates includes A and B. A particular voter has approved both A and B. But we can't simply look at whether this voter is under or overrepresented after the CFA transformation because they've been split up into:

0.25: AB
0.25: A
0.25: B
0.25: Nobody

So when looking at whether a voter is overrepresented we would look at the 0.25 that has approved all the candidates that the voter actually approved on their ballot. We ignore the other three parts in this case. And then if we decide to remove the approval for, say, A, we would remove it throughout, not just for the 0.25 that approved both, so we would have:

0.5: B
0.5: Nobody

And not:

0.5: B
0.25: A
0.25: Nobody

The latter would give an invalid form of the CFA transformation. It is not a possible ballot.

 

> It might be slightly more complex in the score case with the KP
> transformation. It might be that quite a lot of voters have given a
> positive score to all of the candidates (even if just a score of 1 for some
> of them). Then all of these voters would have some fraction of them that is
> jointly most overrepresented. And if someone has given a score of more than
> 1 to a candidate, we can't start by removing the approval from this "bottom
> layer" anyway. You would have to start from the top of a voter's score for
> a candidate and work downwards.

--more confused.

Score ballots with the KP transformation seems to be more complicated than approval. But let's say a potential winning set is ABCD, and a voter has given them scores (out of 10) of 10, 1, 1, 1 respectively. I'll ignore the CFA transformation for now because it complicates it further, but the two can be combined fairly simply. So this leaves us with:

0.1: ABCD
0.9: A

It's likely that the "bottom" 0.1 is going to be heavily overrepresented, and this will also be the case for many voters. But we cannot remove the A approval from this 0.1 of a voter, because it would give us an invalid form of the KP transformation. So when removing approvals from voters we have to work from the top of their score down. We could reduce their score of A by 1, but this would give:

0.1: ABCD
0.8: A
0.1: Nobody

Not:

0.1: BCD
0.9 A

This would not be a valid ballot.

So when we list a candidate's approvers in order of overrepresentation, we would only consider the 0.1 of each voter that corresponds to the "top" of the score - the part that can be removed first without giving an invalid ballot. So each removal would mean just a reduction in a voter's score for a candidate by 1. But obviously it could then be reduced further with another removal if that voter is still (or later becomes again) the most overrepresented approver of that candidate.

Another problem is that if a voter gave the scores 10, 9, 9, 9 to ABCD respectively, then the ballot would become:

0.9: ABCD
0.1: A

So the "top" part would approve only A and this part would never be overrepresented, so the voter's score for A could never be reduced by the above method. But it might still be beneficial to reduce it (by more than 1). So once no further increases in Q are possible by reducing scores by 1, we look at reducing scores by 2. And to do this we would list the approvers of candidates in order of overrepresentation of second-to-top score. So in this example on the A candidate list, it would be the 0.1 corresponding to a score of 9, which approves ABCD. So if this fraction of this voter is top of candidate A's list of most overrepresented, we would then see if reducing their score of A by 2 would improve Q. If so, we reduce the score by 2. At this point we would actually have then check if it could be reduced by a further 1 before then checking the lists for scores that can be reduced by 2. (All reductions by 1 would have been exhausted before the start of this "round" but with this reduction by 2 it would become a new possibility for a reduction by 1.)

And when reductions by 2 are exhausted, we move onto reductions by 3 and so on. It's a bit complicated to explain, but computationally I don't think it's really much worse than the approval version.

 
But anyway, I agree there ought to be some fairly obvious greedy approaches
to removal.  But it also might be, that the optimum approval-removal is another
of those "inapproximable" problems (I have no idea right now whether that is so)
in which case greedoids are especially suspect.  And I express my
usual prejudice
in favor of optimum methods, because, optimum is nice for selling to
people and for
never having to say you're sorry.  So given that, and given that
optimum approval removal is probably (?) totally infeasible, I'd
prefer it if somehow this whole approval-removal idea could be gotten
rid of and replaced with something else.

The CFA seems a nice tool to have in our box,
but does it have any use if we discard approval-removal?
It would then seem to be a hammer looking for a nail.




As for using greedy algorithms generally, I obviously agree that in an ideal world we wouldn't do it. But I still think this would be better than using an otherwise obviously inferior method (which to me at the moment is all of them) that we could guarantee a global result for.

Approval removal was supposed to be a way of ensuring monotonicity, and even with the greedy algorithm it basically achieves that. It finds an equilibrium where no candidate can gain by having an approval removed or by having a score reduced. And while this isn't exactly a proof, I would be very surprised if the greedy algorithm didn't give the "best" result in almost all cases. And besides, while it's a greedy algorithm, it's not like a sequential method - it's greedy at a much finer level of the process than that. So I don't think it should be considered of the same type.

Also, look at the PR competition. STV is always used as a sequential method. How many Schulze-STV or CPO-STV elections have there been? None that I know of. Yes, it's nice to be fully global, but I don't think any adoption of this method would be hampered by this.

Toby Pereira

unread,
Jan 11, 2016, 1:56:26 PM1/11/16
to The Center for Election Science
Basically, I think that a monotonic method that doesn't rely on approval removal would not have any of the subtleties of this method. It would probably be a much coarser method, and while it might produce "optimum" results on its own terms, I think it's unlikely to produce results that are as good as approval removal if we checked individual results against what we think the results "should" be, as we have been doing for various methods with our nasty examples.

Warren D Smith

unread,
Jan 11, 2016, 2:43:46 PM1/11/16
to electio...@googlegroups.com
I thought about my idea overnight and am now mildly optimistic.
Idea now also slightly altered.

What I am suggesting is a way to do (a form of, or something closely related to) "approval removal" in combination with Ebert-like min-sum-of-squared-loads. It will
1. be doable: the exact optimum removal can be found in polynomial(V,C) timesteps.
2. should yield monotonicity (which Ebert didn't).
3. should still yield basic PR.
4. But I'm worried strong PR may be ruined.

Let me try to explain this.
V=#voters.
C=#candidates.
W=#winners. 0<W<C.
Approval style voting.
A[c] = total number of approvals for candidate c.
X[v,c] is a real number which =0 if voter v disapproves candidate c, but >=0 if v approves. These X[v,c] are required to obey various demands; see below.

The penalty function P is
P = SUM(voters v) { SUM(winners w) X[v,w] / (A[w]+OptionalShift) }^2.

The proposed voting system is: among all possible W-winner parliaments, choose the one minimizing P. Note that we always also choose the X[v,c]'s in such a way, subject to meeting the demands, that P is minimized. This choice of exactly-optimum X's is doable in polynomial(V,C) time (for any particular parliament) by solving a convex quadratic program. Therefore, P is computable for any particular parliament in polynomial time.

The demands we impose on the X[v,c] are:
X[v,c] is a real number which =0 if voter v disapproves candidate c;
but >=0 if approves. For each winner w,
SUM(voters v) X[v,w]=V/W=Hare quota.
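
Computing the optimal X's for a given parliament is a small convex QP. As a hedged illustration, here is a sketch using the cvxpy modelling library (my choice of tool, not anything from the thread; the approval matrix and shift value are invented toy data):

import cvxpy as cp
import numpy as np

# Toy approval matrix: approvals[v, w] = 1 if voter v approves winner w.
approvals = np.array([[1, 1, 0],
                      [1, 0, 1],
                      [0, 1, 1],
                      [0, 0, 1]])
V, W = approvals.shape
A = approvals.sum(axis=0)        # total approvals per winner
shift = 0.0                      # the "OptionalShift" in P

X = cp.Variable((V, W), nonneg=True)
load = X @ (1.0 / (A + shift))   # per-voter load: sum_w X[v,w]/(A[w]+shift)
P = cp.sum_squares(load)
constraints = [
    cp.multiply(X, 1 - approvals) == 0,  # X[v,w]=0 wherever v disapproves w
    cp.sum(X, axis=0) == V / W,          # each winner receives one Hare quota
]
cp.Problem(cp.Minimize(P), constraints).solve()
print(X.value, P.value)

Because the objective is convex and the constraints are linear, the solver finds the exact minimum, which is what makes P computable in polynomial time for each candidate parliament.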

The original Ebert-like system (before I changed it) would instead have had X[v,c]=0 if voter v disapproves candidate c, but =1 (or any fixed positive constant) if v approves. Call that the "ground state". So if 0<X[v,c]<1, that is partially "removing" the approval of c by v, and if X[v,c]=0 it is wholly removing it -- except that the denominators A[w] remain unaltered.

To think about basic PR, imagine a "racist approval" scenario. With the ground state, a PR parliament would be elected, which, if we could ignore integer roundoffs and elect fractional seats, would actually provide exact PR at the P-minimum. Now considering integer roundoffs, we see the ground-state parliament is approximately PR, but we can reduce P further by reducing the X's a bit below 1 for colors which have, say, 3 seats but deserve 3.4 seats. But we cannot drop those X's too far without violating the quota demands. Meanwhile, for a party with, say, 4 seats which deserves only 3.6, our quota demand forces us to raise their X's a bit above 1. These kinds of X-changes are not enough to alter proportionality more than integer roundoffs already do. So I think we still get basic PR with this system.

Now let us think about monotonicity. If we had a nonmonotone situation where some extra approval by v of a winning candidate c actually increased the penalty P, then we could simply use the same X's as in the old situation (before the new approval), including making that X[v,c]=0. That would bring us back to the old P value from before the new approval was added, and would not violate any quota demand. And the resulting X-set would not necessarily be optimal, in which case optimizing it would lower P further. Result: the new approval did not increase P, contrary to our initial complaint that P was increased. So that proves our system is monotone.

Finally, why am I worried about strong PR perhaps being ruined? If there is some universally-approved candidate U, then with the ground system his election just added a constant to every voter's load. But with our modified system, suppose that after U and some Reds and Blues are elected -- but, say, some Greens fail to get elected because they do not have enough supporters to pass some integer roundoff threshold -- all X[v,U] are equal to some fixed value. We can then reduce P by modifying the X[v,U]: increasing them for Green voters v, but lowering them for Red and Blue voters v. My point is: this effect means that U no longer adds a constant to every voter's load "fairly and equally", as it should since U is universally approved. Instead, it adds more load to Greens; and in general it adds more load to voters from under-represented parties and less load to voters from over-represented parties. This makes me suspect that "strong PR" might be damaged. I'm presently unsure whether this damage can be large or whether one can argue that it is always small enough that we don't especially care.

Toby Pereira

unread,
Jan 11, 2016, 4:53:06 PM1/11/16
to The Center for Election Science


On Monday, 11 January 2016 19:43:46 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:
This makes me suspect that "strong PR" might be damaged. I'm presently unsure whether this damage can be large or whether one can argue that it is always small enough that we don't especially care.




Well, I would say that conceding this is at least as bad as accepting a greedy algorithm. In fact, I'd rather approximate a good global optimum than exactly attain a faulty one!

Warren D Smith

unread,
Jan 11, 2016, 8:27:53 PM1/11/16
to electio...@googlegroups.com
On 1/11/16, 'Toby Pereira' via The Center for Election Science
--that criticism by you might be entirely correct -- IF it's a "faulty" one. On the other hand, it might be a good one. That is the top question I currently would like answered. Further: if it is judged faulty, then I would like to know whether this kind of convex-programming trick can be used in some other, better manner. It's clearly a good tool to have in the box -- and also a hammer seeking a nail.

Toby Pereira

unread,
Jan 12, 2016, 7:24:54 PM1/12/16
to The Center for Election Science

On Friday, 8 January 2016 22:59:17 UTC, Toby Pereira wrote:
One other complicating thing is that in a case like this:

2: A
2: B
2: AB

you might want to split the AB faction into an A and a B but here:

1: A
1: B
1: AB

you can't do that without splitting a voter in half. But it should be the case that multiplying the number of voters by a constant does not affect the result. So you'd have to be able to split voters into fractional parts. Although with many voters, it's unlikely to make much difference.

Although it adds complexity, for completeness you could get round this by starting off by cloning each voter several times (perhaps as many times as there are winning candidates). Cloning voters would mean you could get a more nearly optimal proportional result without "splitting" voters in half.
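
A trivial sketch of that cloning step in Python (the replication factor is a free choice; nothing here fixes what it should be):

def clone_ballots(ballots, factor):
    # Replicate every ballot `factor` times so that later per-voter
    # approval removals can approximate fractional splits of the
    # original voters. Scaling all factions equally cannot by itself
    # change the result of a scale-invariant method.
    return [set(b) for b in ballots for _ in range(factor)]

ballots = [{'A'}, {'B'}, {'A', 'B'}]
cloned = clone_ballots(ballots, 2)
# -> [{'A'}, {'A'}, {'B'}, {'B'}, {'A','B'}, {'A','B'}]
# The AB faction can now be "split": one clone loses A, the other loses B.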

Toby Pereira

unread,
Jan 12, 2016, 7:29:33 PM1/12/16
to The Center for Election Science


On Sunday, 10 January 2016 20:41:08 UTC, Warren D. Smith (CRV cofounder, http://RangeVoting.org) wrote:
About "shifted Ebert" using not 1/A but rather 1/(A+shift)
to define "load":
This will yield the following kind of proportionality when we minimize
the sum of
squared loads.

Old way (shift=0):
  #winners for party_k = W_k  proportional to  V_k = #voters for party_k.
New way (with shift):
  W_k  proportional to  (shift+V_k)^2 / V_k.
(Here I am not demanding W_k be integer, I'm allowing real number W_k.)
The right hand side is
   V_k + 2*shift + shift^2/V_k
and if the rightmost term is neglected then
   W_k  proportional to about   V_k + 2*shift.
So positive values of the shift (versus zero) tend to help smaller parties in
the same manner Sainte-Lague does (versus d'Hondt).
Use shift=V/(4*W), I think, to most closely duplicate Sainte-Lague?
Where V=#voters, W=#winners.  That way each party gets "half a Hare quota"
worth of "free" pseudo-votes before election begins.

With the CFA, you instead will need shift=V/(2*W) I think, giving each
party 1 Hare quota of freebies -- the extra half-quota compensating
for the CFA effect of a half-shift in the other direction.

So this is all nice about the CFA and about shifts.



I tried my method with what would be a three-way Sainte-Lague tie, but using Wolfram Alpha I couldn't find any shift or number of "ghost voters" that would make all three results equal.
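
For anyone wanting to sanity-check the shift arithmetic Warren quotes above, here is a quick numeric sketch in Python (the party sizes are invented, and seats are left real-valued, ignoring integer rounding):

# W_k proportional to (shift + V_k)^2 / V_k, versus the approximation
# V_k + 2*shift that neglects the shift^2/V_k term.
party_votes = [60, 30, 10]          # invented party sizes, V = 100
V, W = sum(party_votes), 10         # 10 seats to fill
shift = V / (4 * W)                 # = 2.5, the Sainte-Lague-like choice

raw = [(shift + v) ** 2 / v for v in party_votes]
seats = [W * r / sum(raw) for r in raw]
print(seats)   # ~[5.62, 3.04, 1.35]: kinder to the small party than 6/3/1

approx = [v + 2 * shift for v in party_votes]
print([W * a / sum(approx) for a in approx])   # ~[5.65, 3.04, 1.30]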

Warren D Smith

unread,
Jan 12, 2016, 8:27:27 PM1/12/16
to electio...@googlegroups.com
If you do not even need heaps, and can just use sorted lists whose order never changes, then that is even better, meaning faster and simpler. However, as you mess with the approvals by removing them, if that changes the over/under-representation of other voters, then you might not be able to do that, because the correct list orders could change underfoot. I mean, it alters other voters' loads, right? Does that bother you?

But if you were using my flavor, where the denominator approval-counts are unaffected by removals, then you would be OK on that.

If so then, yes; sounds good on speed.
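
A minimal Python sketch of that fixed-order shortcut (voter_load is a placeholder for however load is computed, and this assumes, per Warren's variant, that removals never reorder the other voters):

import heapq

def build_overrep_heap(approvers, voter_load):
    # Max-heap of a candidate's approvers keyed by load, built once;
    # valid only while removals cannot change the relative order.
    heap = [(-voter_load(v), v) for v in approvers]
    heapq.heapify(heap)
    return heap

def most_overrepresented(heap):
    # Peek at the highest-load approver without removing them.
    return heap[0][1] if heap else None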
On provable near-optimum quality resulting from such a greedy approach, or from some other approach... it might be tricky or hard to figure out such a proof or such an approach. Or not. Sometimes it is easy. There's a lot of books and other material on that area of computer-science algorithm design and proof that you could look at for ideas if you feel stuck.

Look, I have to take a break from this for a bit, I'm too overloaded.
Maybe you should start writing a paper :) if you feel able...
otherwise I will after some delay... or you could just keep on
researching...

Toby Pereira

unread,
Jan 14, 2016, 7:03:20 PM1/14/16
to The Center for Election Science
I might try writing a paper. Maybe it's worth trying to submit it to a "proper" journal in addition to having it on a website.

Warren D Smith

unread,
Jan 18, 2016, 10:27:22 PM1/18/16
to electio...@googlegroups.com
http://rangevoting.org/NonlinQuality.html#coinflip

I added a proof of basic PR for a class of voting methods
and a class of coin-flip transforms.