Consensus-Weighted Range Voting


David Hollander

Jul 13, 2017, 2:32:46 PM
to The Center for Election Science
This is the draft of an article I started for the "consensus-weighted range voting" election method.

http://www.range5.org/cwrv/

It attempts to address the "tyranny of the majority" problem brought up in the "Selecting State District Maps" thread by reducing the expected level of inequality created by the relative utilitarian social welfare function. It may be useful for high-stakes elections in which minimizing inequality is as desirable as maximizing utility.

The consensus factor currently selected is "1 - 2 * MAD(X)" (where MAD is the mean absolute difference), although other consensus factors can be used, depending on the level of weighting one believes is necessary for the election at hand.
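For anyone who wants to experiment, here is a minimal sketch of the tally in Python, under two assumptions of mine: scores are normalized to [0, 1], and MAD is taken over all ordered pairs of ballots (the population mean absolute difference); the article is authoritative on the exact definitions.

def mad(scores):
    # Mean absolute difference over all ordered pairs (self-pairs
    # included), i.e. E|X - Y| for X, Y drawn from the ballots.
    # For scores in [0, 1] this is at most 0.5.
    n = len(scores)
    return sum(abs(a - b) for a in scores for b in scores) / (n * n)

def cwrv_score(scores, k=2.0):
    # Mean score times the consensus factor 1 - k * MAD(scores).
    # With k = 2, a maximally polarized option gets a welfare of 0.
    mean = sum(scores) / len(scores)
    return mean * (1 - k * mad(scores))

# A unanimous 0.5 beats a polarized option with the higher mean:
print(cwrv_score([0.5, 0.5]))  # 0.5 * (1 - 2 * 0.0) = 0.5
print(cwrv_score([1.0, 0.2]))  # 0.6 * (1 - 2 * 0.4) = 0.12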

William Waugh

Jul 13, 2017, 5:22:04 PM
to The Center for Election Science
Does this election method meet the balance constraint, that for every possible vote, there is another possible vote that would exactly cancel the effect of the first vote?

On Thursday, July 13, 2017 at 2:32:46 PM UTC-4, David Hollander wrote https://groups.google.com/forum/?fromgroups#!topic/electionscience/m7YRlY_PBaA

David Hollander

Jul 14, 2017, 2:20:24 PM
to The Center for Election Science
If voter 1 uses their ballot to express a strict ordered ranking of preferences among more than two options, where all options are initially tied and there is a finite range of more than two possible scores, then in some instances I do not believe voter 2 can cast a ballot which will return the election to a tie containing voter 1's least preferred option. I believe this is a beneficial property which only applies to elections with more than two outcomes and more than two possible scores, in which one of the voters uses at least three unique scores to describe their preferences in ranked order.

If the balance constraint specifies that a second voter must be able to return the election to a stalemate containing the first voter's least preferred option when deciding among more than two options, then I would consider it a harmful constraint for election methods which are intended to maximize consensus.

David Hollander

Jul 14, 2017, 3:20:08 PM
to The Center for Election Science
I think what I can say for now, until I have time to look into this further, is that the balance constraint is always satisfied for 2-candidate elections and range-2 elections, but for range 3+ elections involving 3+ candidates, there are ballots which voter 1 can cast which prevent voter 2 from returning the election to a stalemate between all possible candidates.
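Here is a concrete instance, reusing the cwrv_score sketch from my first post (so it assumes the 1 - 2 * MAD factor). With candidates A, B, C on a 0-2 range, the natural cancellation attempt against voter 1's strict ranking fails:

# Voter 1 scores (A, B, C) = (2, 1, 0). Under plain range voting the
# reversed ballot (0, 1, 2) restores equal sums. Under CWRV it cannot
# restore a three-way tie: B keeps a perfect consensus factor while
# A and C are now maximally polarized.
ballots = [(2, 1, 0), (0, 1, 2)]
for i, name in enumerate("ABC"):
    scores = [b[i] / 2 for b in ballots]  # normalize to [0, 1]
    print(name, cwrv_score(scores))
# A: mean 0.5, MAD 0.5 -> weighted 0.0
# B: mean 0.5, MAD 0.0 -> weighted 0.5
# C: mean 0.5, MAD 0.5 -> weighted 0.0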

David Hollander

Jul 14, 2017, 4:23:36 PM
to The Center for Election Science
What are the desirable consequences with which 'balance constraint' fulfillment is associated in other election methods?

If the desirable consequences are related to minimizing individual expected profit from strategic voting, or minimizing the ability of one group to make gains at the expense of another, then I believe the 'consensus factor' has already taken care of those concerns in a more direct manner.

William Waugh

Jul 14, 2017, 6:01:23 PM
to The Center for Election Science
"What are the desirable consequences with which 'balance constraint' fulfillment is associated in other election methods?"

That's an important question.

I think we have grounds to associate balanced voting with resistance to the effects of political money.

To be fair, I am not a mathematician, and Warren D. Smith, Ph.D. is a mathematician, and neither he nor the other math mavens who participate in this forum have signed off on the level of importance that I think balance deserves, and that I think Mark Frohnmayer and his father think it deserves (the father is, to my knowledge, the first person to point out that in FPTP with three or more candidates, those who want to vote against a candidate have less power than those who want to vote for that candidate).

However, Smith does publish http://rangevoting.org/Cash3.html in which he argues (persuasively to me) that in FPTP (which does not meet balance), the expected values that a voter sees induce that voter to seek a money-supported bandwagon so as not to "waste" her vote, and that Range Voting (which does meet balance) changes those expected values so that we should expect that the money wouldn't count for so much anymore in that way. Smith along with Jennings and Shentrup revisit these concepts in a longer article arguing for Range Voting, https://asitoughttobe.com/2010/07/18/score-voting/.

The piece by Smith compares a specific unbalanced voting system (FPTP) to a specific balanced system (Range, a.k.a. Score). So a question he doesn't answer is whether the comparison extends to all unbalanced and balanced systems: could we substitute any other balanced system for Range and any other unbalanced system for FPTP, and would the argument still hold? I don't know that I can prove it, but my suspicion is yes.

So, in brief, resistance to cash, and resistance to two-party dominance (2PD), those are the gains I think would come from using a balanced system in single-winner elections. I do not try to extend this contention to multiwinner cases. I do not argue against eliminating single-winner offices when people advocate doing that (e. g. Swiss-style seven-person presidency). I could tentatively support that. But if we are still going to have single-winner offices, balance seems valuable for those resistances, against cash, and against 2PD.

David Hollander

Jul 14, 2017, 9:54:14 PM
to The Center for Election Science
Suppose there are six candidates which all begin in a tie, and a die roll selects one winner at random. A voter's ballot determines whether the die should be rolled 0 or 1 times. If voter 1 votes "1", and the die is thus required to be rolled once, it is impossible for voter 2 to cast a ballot which returns the election to a six-way tie. They can only decide between rolling the die once or rolling it twice. Since voter 2 cannot "cancel" the effect of voter 1's ballot, this election method violates the balance constraint as stated. However, the election method is still perfectly resistant to cash, as buying more die rolls from voters does not "load the dice" and increase the chances of a specific outcome being selected.

In range voting, if you pay voters to increase their score for candidate X, you have a reliable guarantee that X's chances of winning are increased. In consensus-weighted range voting, if you pay voters to increase their score for candidate X, you have no guarantee that X's chances of winning are increased. This is because paying voters to increase their score for X will actually _decrease_ X's chances of winning in the event that other voters do not also believe the score for X should be increased. Paying voters to increase the score for X weakens the central tendency of scores for X if other voters do not agree with the increase, and when the average distance between scores for candidate X increases, the consensus factor of X decreases, and the output of the social welfare function used to determine the winner is decreased.

Since there is a reliable guarantee in standard range voting that paying voters to increase the score for X will increase X's chances of winning, and because there is no such guarantee in consensus-weighted range voting, standard range voting is susceptible to the influence of cash, whereas consensus-weighted range voting is resistant to the influence of cash.
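A quick numeric check of this, again assuming the 1 - 2 * MAD(X) factor and the cwrv_score sketch from earlier in the thread, with a toy three-voter electorate:

honest = [0.6, 0.6, 0.6]   # full consensus on X
bribed = [1.0, 0.6, 0.6]   # one voter is paid to raise X's score
print(cwrv_score(honest))  # 0.6   * (1 - 2 * 0.0)   = 0.600
print(cwrv_score(bribed))  # 0.733 * (1 - 2 * 0.178) ~ 0.473
# Raising one score increased the mean but lowered the weighted total.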

I do not believe the balanced voting criterion is a convincing or reliable predictor of resistance to cash.

William Waugh

Jul 15, 2017, 8:34:32 AM
to The Center for Election Science
The cash I am talking about is that used to buy advertising, not direct bribes to voters. As Warren D. Smith points out, a big difference between how much advertising some candidates get and how much others get, has its effect via the "wasted vote" calculus. "Nader can't win, so I had better vote for Gore."

Yes, your example system with the dice rolls fails balance and is immune to both kinds of cash (advertising and bribes). However, it basically amounts to sortition, and we can go for more expressivity than that while maintaining resistance to the effect of massive advertising (of course all serious candidates have to make their positions known somehow, but that should be possible without massive spending).

David Hollander

Jul 15, 2017, 4:13:11 PM
to The Center for Election Science
Campaign costs might be correlated with the cost of opinion polling. With single-choice voting, candidates have to pay for opinion polls including the name of every other candidate they might face in order to demonstrate their competitiveness. If a new candidate enters the race, they have to pay for a new poll, because the opinion poll is not a stable estimator with respect to the number of candidates. With cardinal utility voting methods, candidates only have to pay for opinion polls including their own name. The approval rating obtained from the poll is a stable estimator of the candidate's "competitiveness", in the sense that it can be used to demonstrate competitiveness vs. any other candidate which might enter the race. I believe this holds true for any cardinal utility voting method, including approval voting, range voting, and consensus-weighted range voting. As with unweighted range voting, no new data has to be collected to determine a consensus-weighted approval rating for a candidate if the data needed to obtain an approval rating has already been collected, and the poll would not have to be redone if a new candidate enters the race.

The possible violation of the balance constraint with consensus-weighted range voting that I am exploring is related to numeric precision in discrete number ranges rather than vote splitting, and would possibly not exist if we assumed that voters can express an infinite number of possible scores on a ballot.

William Waugh

Jul 16, 2017, 2:18:28 PM
to The Center for Election Science
"The possible violation of the balance constraint with consensus weighted range voting that I am exploring is related to numeric precision in discrete number ranges rather than vote splitting, and would possibly not exist if we assumed that voters can express an infinite number of possible scores on a ballot."

If it's just a matter of precision, then I think it's probably not a problem. Voters can increase precision by using probability.
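For example (a sketch of the generic randomization trick, with hypothetical names; nothing specific to this thread's methods): a voter whose true value falls between two allowed ballot scores can randomize between them so that the ballot is correct in expectation.

import random

def randomized_ballot_score(u, max_score=5):
    # Round u * max_score up or down at random so that the expected
    # ballot score equals u * max_score exactly (u assumed in [0, 1]).
    x = u * max_score
    lo = int(x)
    return lo + (1 if random.random() < x - lo else 0)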

On Saturday, July 15, 2017 at 4:13:11 PM UTC-4, David Hollander wrote https://groups.google.com/d/msg/electionscience/m7YRlY_PBaA/ha4BuA63BAAJ

Steve Cobb

Aug 8, 2017, 9:33:38 AM
to The Center for Election Science
It seems to me that consensus requirements have been terribly overlooked by the CES community. For example, in Aaron’s recent article “What Is a Voting Method?”
http://electology.org/blog/what-voting-method
the decision basis (e.g. plurality, majority, super-majority) was omitted from the Decision section.

Consensus-weighting is a great approach that deserves more attention. Could it be applied to other forms of score voting, e.g. the score-preferential methods STAR and 3-2-1, which were designed to discourage strategic voting? Note that the second tally step of 3-2-1 is also about increasing consensus, even if Jameson did not use the word in his description.

Your example with two candidates and two groups assumed at least a majority requirement, but what if there were more candidates and groups, and the method were a typical score variant like AV with a mere plurality requirement? Even worse.

David Hollander

Aug 10, 2017, 8:55:42 PM
to The Center for Election Science
Thank you for your interest. It might be possible to conduct a comparative analysis of the strength of consensus which different voting methods produce in different scenarios by repeating Warren's "Bayesian Regret" simulations in a manner which produces additional metrics. According to Warren's site, "Bayesian Regret" is measured by subtracting the sum of voter expected utilities for the outcome chosen by the election method from the greatest sum of voter expected utilities among the outcomes under consideration.

http://rangevoting.org/BayRegDum.html

This assumes that the socially ideal outcome is always the one which maximizes the sum of scores, i.e. the relative utilitarian outcome. If a consensus-weighted utilitarian social welfare function is used instead of the standard relative utilitarian social welfare function, then the socially ideal outcome is not always the outcome which maximizes the sum of voter expected utilities. It may also be the outcome which minimizes the variance between scores. This means that in addition to computing the "relative utilitarian Bayesian regret" of an election method, it should also be possible to compute the "consensus-weighted Bayesian regret" of an election method.
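In code, the two regret figures might look something like this (a sketch of mine, not Warren's actual code; utilities[v][o] is a hypothetical structure holding voter v's utility for outcome o, assumed normalized to [0, 1] for the weighted variant, with cwrv_score as sketched earlier in the thread):

def regret(utilities, chosen, welfare):
    # Regret of `chosen` relative to the welfare-optimal outcome, where
    # welfare maps a column of per-voter utilities to a social score.
    options = range(len(utilities[0]))
    column = lambda o: [row[o] for row in utilities]
    best = max(welfare(column(o)) for o in options)
    return best - welfare(column(chosen))

# regret(utilities, winner, sum)         -> relative utilitarian regret
# regret(utilities, winner, cwrv_score)  -> consensus-weighted regret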

An alternate method of comparative analysis might focus on the fact that the utility which voters expect to receive from an outcome prior to it going into effect is different from the utility which voters actually receive from an outcome after it goes into effect. Even if voters are honest, candidates may lie, bills may contain poorly worded subsections, information is imperfectly distributed, and external events may have unforeseen effects on the chosen outcome. The actual utility an outcome will produce is non-deterministic and not known in advance, so selecting outcomes might be more analogous to picking stocks, investing in assets, or selecting a portfolio. If there is low dispersion between the scores which voters assign to an option during voting, then the option is high consensus, low uncertainty, and low risk. If there is high dispersion between the scores which voters assign to an option during voting, then the option is low consensus, high uncertainty, and high risk. Using this analogy would allow us to use mean-variance analysis from modern portfolio theory to compare different election methods, and to recommend different election methods for different public processes in the same way that different investment strategies are recommended to different types of investors.

If we wish to use mean-variance analysis from modern portfolio theory in conjunction with Bayesian regret to analyze election methods, then we can perhaps use the Bayesian regret method to compute two different metrics: the expected return, for how well the election method maximizes expected utility, and the variance, for how well the election method minimizes risk and maximizes consensus. Modern portfolio theory would also allow us to use a covariance-based metric for comparing how well multi-winner election methods reduce risk and increase consensus by diversifying the candidates/outcomes selected, without using ad-hoc principles such as proportionality. Developing new methods of comparative analysis that include a second metric for consensus may increase the popularity of consensus as a criterion.
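To make the two metrics concrete (a sketch of the analogy only): treat each option's ballot scores as a sample of returns, report the mean as expected return and the variance as risk, and let a risk-aversion parameter trade them off, as in a mean-variance objective.

def mean_variance(scores, risk_aversion=1.0):
    # Mean-variance objective: expected return minus a risk penalty,
    # as in Markowitz portfolio selection.
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    return mean - risk_aversion * var

# Two options with equal means but different dispersion:
print(mean_variance([0.5, 0.5]))  # 0.50 (no risk penalty)
print(mean_variance([0.0, 1.0]))  # 0.50 - 0.25 = 0.25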

Steve Cobb

Aug 12, 2017, 3:40:32 PM
to The Center for Election Science

You know that Warren's BR simulation was recreated by Jameson in Python? Instead of BR, we're now trying to refer to Voter Satisfaction Efficiency (BR normalized and with the sign flipped):

https://github.com/electology/vse-sim

He also has results for the newer score-preferential methods STAR/SRV and 3-2-1:

http://rpubs.com/Jameson-Quinn


How hard would it be to add a consensus-weighting knob from 0 to 1.0? 

What would be a reasonable consensus weighting value? Perhaps running the simulation across a range of consensus values would reveal some optimum.


In many cases, incorporating consensus would affect the candidates who get nominated. 


low/high score dispersion -> low/high consensus -> low/high risk: What a cool concept.

Andy Jennings

Aug 17, 2017, 5:50:50 PM
to electionscience
Thanks to Steve Cobb for prodding me to take another look at this.

It is an interesting idea. Adjust each candidate's score down if the scores they were given are polarized. I wonder what the right adjustment is. Some possibilities (sketched in code after the list):

- Multiply by "1 - k * MAD(candidate's scores)"
- Subtract k standard deviations
- Use an order statistic (like the median) but take each candidate's 40th percentile (from the bottom) score instead.
- Lower bound of Wilson score 95% confidence interval for a Bernoulli parameter (as proposed in http://www.evanmiller.org/how-not-to-sort-by-average-rating.html)
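Rough sketches of all four, assuming scores normalized to [0, 1] (for the Wilson bound I treat the mean score as a Bernoulli proportion, which is only one way to apply it):

import math

def adjusted_scores(scores, k=1.0, z=1.96):
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)
    madv = sum(abs(a - b) for a in scores for b in scores) / (n * n)
    p = mean  # mean score treated as a Bernoulli estimate
    wilson_lo = (p + z * z / (2 * n)
                 - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
                 ) / (1 + z * z / n)
    return {
        "mad_factor": mean * (1 - k * madv),
        "minus_k_sd": mean - k * sd,
        "40th_pct": sorted(scores)[int(0.4 * (n - 1))],  # crude, no interpolation
        "wilson_lower": wilson_lo,
    }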

I wonder if there is a "natural" one, or a "statistically correct" one.  Of course, as Steve said, it seems like "seeking consensus" is a tunable parameter.

If the adjustment were sensitive to the number of voters, then it could naturally serve as a quorum rule, too.

~ Andrew



Jameson Quinn

Aug 19, 2017, 7:19:43 AM
to electionsciencefoundation
I think this is an interesting idea. Here are some thoughts:

- It seems to me that this greatly exaggerates the incentives for strategy. In particular, it brings up a kind of single-winner analogue to the common multi-winner strategy of "free-riding": it becomes important for a majority not to give any points to a compromise candidate.

- Have you explored how this relates to Phragmén? It seems similar (though of course this uses MAD, while Phragmén's corresponding measure would probably be SD), but I think that Phragmén's suffers less from the problem above.

My intuition says that Phragmén-like ideas are probably the best you can do in this kind of method, but I'd be interested to see somebody explore this more fully.

Steve Cobb

Aug 21, 2017, 9:40:46 AM
to The Center for Election Science
All I found about Phragmén was this article by Warren:
Phragmén's proportional representation multiwinner voting method

Jameson, how easy would it be to incorporate consensus measuring and weighting into the VSE simulation?

David Hollander

Aug 21, 2017, 2:51:21 PM
to The Center for Election Science
These are some of my current thoughts on the matter:

1) There may be innate benefits to minimizing score dispersion.
2) High dispersion may create rational incentives for minority groups to establish new groups in which they are the majority, in order to maximize utility for their members.
3) Minimizing dispersion might lower the systemic risk of group splits.
4) The risk of a split might be determined by the observed social level of group cohesion, what is at stake in group decisions and the factor by which relative voter utility is converted into unbounded economic utility, and the expected dispersion or lack of consensus which will be produced by participating in formal group decision-making processes.
5) For outcomes with equal average expected value, risk-averse groups should always prefer the outcome with lower dispersion / higher consensus.
6) For outcomes with unequal average expected value, different groups will prefer to trade consensus for utility at different rates.
7) Measures of dispersion other than absolute difference can be used, and the positive effects of consensus weighting are not dependent upon the use of absolute difference.
8) Absolute difference may be a practical measurement of dispersion for use in implementations which seek to eliminate precision errors from rounding.
9) Regardless of the statistical method employed for measuring dispersion, groups should still have the option of normalizing and weighting the dispersion measure so that outcomes which have the maximum possible measure of dispersion are guaranteed to receive a minimum social welfare score of zero and finish in last place.
10) Offering a voting method which organizers can customize to guarantee that maximally polarizing options finish in last place, and will not finish ahead of outcomes which make everyone equally unhappy, can provide subgroups hesitant to participate in democratic methods of dispute resolution with assurances that make them more likely to participate.
11) A multi-winner proportional election method based on this can be derived from the 1952 paper by Harry Markowitz on portfolio selection (see the sketch after this list). Modern portfolio theory minimizes risk and achieves diversification by using a covariance matrix, which would not require holding onto ballots in their original form for reweighting.
12) Voter score combinations can be anonymized by tallying them as a score distribution with respect to each candidate rather than with respect to each voter, before any weighting occurs.
13) While the math for incorporating covariances may be seen as slightly more complicated than the reweighting method in the linked Phragmén article, the math can be reproduced by many different parties using publicly published candidate score distributions without compromising the identity of voters, and the math is already in widespread use in finance.
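As a toy sketch of item 11 (my own rough version; note it works from full ballots, since per-candidate score distributions alone do not pin down the covariances):

from itertools import combinations

def min_variance_slate(ballots, k):
    # ballots[v][c] = voter v's score for candidate c, in [0, 1].
    # Treat each candidate as an asset; an equal-weight slate of k
    # candidates is a portfolio whose per-voter return is the average
    # score. Pick the slate maximizing mean return minus variance.
    n, m = len(ballots), len(ballots[0])
    def objective(slate):
        returns = [sum(b[c] for c in slate) / k for b in ballots]
        mean = sum(returns) / n
        var = sum((r - mean) ** 2 for r in returns) / n
        return mean - var
    return max(combinations(range(m), k), key=objective)

# Two factions plus a lukewarm compromise candidate C:
print(min_variance_slate([[1, 0, 0.6], [0, 1, 0.6]], 2))
# -> (0, 1): the balanced A+B slate beats pairing either with C.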

I will look at the mentioned VSE code when I have the chance; however, the time I have available for election method research over the next month may be somewhat limited. I would also like to update the range5.org website so there is a clearer distinction between the front-end voting interface and the back-end method of winner selection. If the front-end voting interface outputs a public 5-bar score distribution for each candidate, then this intermediate representation of results could perhaps be used with a variety of winner-selection back-ends, so that any variance-weighting math applied to the results is reproducible and can be publicly audited.

David Hollander

Aug 21, 2017, 3:06:47 PM
to The Center for Election Science
I may be misremembering when the covariance matrix needs to be computed for multi-winner elections and probably need to write a separate article on this.

Brian Langstraat

Aug 22, 2017, 7:32:24 PM
to The Center for Election Science
I thought that Consensus-Weighted Range Voting (CWRV) was interesting and have done some experimentation with it in Excel.

Attached is the "SA Graph" that I created from the examples in http://www.range5.org/cwrv/
It shows the Social Welfare Output for each percentage of Group B in the population.
I used Approval-style ballots, i.e. strategic maximum/minimum scores on the range 0 to 1.

There are some weird properties to CWRV.
0% support and 50% support are equal at 0.
1/6 (~17%) support and 2/3 (~67%) support are equal at 0.074.
88% support is needed to beat the 0.5 of SN ("steal from no one", the neutral proposal).
> - Multiply by "1 - k * MAD(candidate's scores)"
> - Subtract k standard deviations
As "k" approaches 0, the graph is linear (same as FPTP).
If "k" is greater than 2 in this scenario, some Social Welfare Outputs are negative.
[Attachment: SA Graph.JPG]

Brian Langstraat

Aug 22, 2017, 8:15:00 PM
to The Center for Election Science

> - Multiply by "1 - k * MAD(candidate's scores)"
> - Subtract k standard deviations

The ideal "k" is probably 1.5 which is shown in attachment "SA Graph_1.5".

It does not have an inflection point and the flattest point is at 1/3 support.
From 20% to 47% support, there is very little change in the Social Welfare Output.
84% support is needed to beat 0.5 for Neutral Proposal.
[Attachment: SA Graph_1.5.JPG]

Clay Shentrup

Aug 26, 2017, 4:09:38 PM
to The Center for Election Science
> It may be useful for high-stakes elections in which minimizing inequality is as desirable as maximizing utility.

A rational entity does not care about equality, only about maximizing its expected welfare. You'd rather have a utility of 6 and be the least happy person than have a utility of 5 in a sad society where you're the most happy person.

Clay Shentrup

Aug 26, 2017, 4:11:01 PM
to The Center for Election Science
> Relative utilitarianism is a social welfare function which represents preferences using bounded cardinal utility scores and selects for outcomes which maximize a utilitarian sum of scores.

Utilities are not bounded. The number of copies a gene can make of itself runs from 0 to infinity.

David Hollander

Aug 26, 2017, 5:33:11 PM
to The Center for Election Science
Utility in the quoted context referred to aggregate utility and the group utilitarian sum. Selecting a choice which maximizes the utilitarian sum for the group does not necessarily maximize utility for the individual. It may actually minimize utility and the expected welfare for individuals who belong to a minority subgroup. It may create a rational incentive for them to secede or create rival groups in order to maximize individual utility, in a manner which introduces systematic risk negatively impacting the level of welfare which is actually achieved.

Because the actual level of welfare which a social choice will achieve is non-deterministic and not known at the time of voting, and because information is imperfectly distributed and the information available to the group is not the same as the information available to each individual, it is also possible for voting methods which reliably succeed at selecting the option which maximizes the sum of individual expected welfare to fail at selecting the option which maximizes the sum of individual welfare which is actually achieved.

David Hollander

Aug 26, 2017, 5:36:05 PM
to The Center for Election Science
Utilities are bounded under 'relative utilitarianism'. This is an important applied assumption on which range voting and bayesian regret are based.

https://en.wikipedia.org/wiki/Relative_utilitarianism

Clay Shentrup

Aug 26, 2017, 5:59:20 PM
to The Center for Election Science
On Saturday, August 26, 2017 at 2:33:11 PM UTC-7, David Hollander wrote:
> Utility in the quoted context referred to aggregate utility and the group utilitarian sum.

The outcome which produces the highest aggregate utility is also the one that produces the greatest utility for any randomly selected member of the group. As noted by Harsanyi, discussed here:

> Selecting a choice which maximizes the utilitarian sum for the group does not necessarily maximize utility for the individual.

It maximizes the utility sum for any given randomly selected individual.

So if you have a choice to be in a universe where your group's net utility is 100 or 120, you want the latter, given you know nothing else. Sure, if you find out your personal wealth in those universes, then you want that one regardless of the net. But in that case, you certainly don't want the "most fair" utility distribution. You want the distribution that's best for you, of course.

> It may actually minimize utility and the expected welfare for individuals who belong to a minority subgroup.

That minority subgroup wants the distribution that's best for them, not the one that's "most fair" per se.
 
> It may create a rational incentive for them to secede or create rival groups in order to maximize individual utility, in a manner which introduces systematic risk negatively impacting the level of welfare which is actually achieved.

Then you're back to caring about net welfare, not equality.

A utility-maximizing voting system already accounts for the fact that a theoretical person in the majority may cause the election of a candidate that he "favors" but whose decisions end up causing political turmoil that makes him worse off.

You're essentially trying to argue here that the system which maximizes net utility actually doesn't maximize net utility, which is an oxymoron.

> Because the actual level of welfare which a social choice will achieve is non-deterministic and not known at the time of voting

Of course it's not known at the time of voting. This is why you have "ignorance factors" in Warren's simulation for example.

If you are trying to argue that your equality-favoring system will ultimately lead to better results, then you are arguing that you are somehow more clairvoyant than voters in a utility-maximizing system would be. They don't see how electing their (apparent) favorite will actually lead to instability that will ultimately make them worse off. Whereas you were much smarter than them, and realized that equality was super important to stability, and so your system actually helps account for their ignorance. Well, that is kind of the definition of hubris.
 
> and because information is imperfectly distributed and the information available to the group is not the same as the information available to each individual, it is also possible for voting methods which reliably succeed at selecting the option which maximizes the sum of individual expected welfare to fail at selecting the option which maximizes the sum of individual welfare which is actually achieved.

Warren's sims measure actual welfare that is strictly achieved. The difference between that and voter expectations is accounted for by ignorance factors. So this is effectively an argument that Warren's ignorance factors weren't correctly chosen. If you want to make that argument, you can. It's a tall order though.

Clay Shentrup

Aug 26, 2017, 6:03:46 PM
to The Center for Election Science
On Saturday, August 26, 2017 at 2:36:05 PM UTC-7, David Hollander wrote:
> Utilities are bounded under 'relative utilitarianism'. This is an important applied assumption on which range voting and bayesian regret are based.
>
> https://en.wikipedia.org/wiki/Relative_utilitarianism

This article clearly shows that it is not the utilitarianism used in Warren's BR sims.

"When interpreted as a `voting rule', it is equivalent to Range voting."

Warren's utilities are not equivalent to Score Voting. They are converted to ballots via normalization. (Not to mention strategic exaggeration for some voters.)

Bayesian Regret can utilize any utility generator you like. There is no requirement for bounded utilities.

Clay Shentrup

Aug 26, 2017, 6:19:12 PM
to The Center for Election Science
> 'Tyranny of the majority' is a problem where social welfare functions and election methods select outcomes which make a minority substantially worse off in order to make a majority better off.

You have this completely backward. Utilitarianism is the complete opposite of "tyranny of the majority". Net utility can be increased by making the majority a little worse off while making the minority substantially better off.

> In the above game, using an election method that was more likely select the compromise outcome SN, 'steal from no one', may have been wise.

No, it wasn't wise. By definition. It had a lower utility. If it was the wise choice, it would have had a greater utility.

David Hollander

Aug 26, 2017, 9:59:21 PM
to The Center for Election Science
On Saturday, August 26, 2017 at 4:59:20 PM UTC-5, Clay Shentrup wrote:
> On Saturday, August 26, 2017 at 2:33:11 PM UTC-7, David Hollander wrote:
>> Utility in the quoted context referred to aggregate utility and the group utilitarian sum.
>
> The outcome which produces the highest aggregate utility is also the one that produces the greatest utility for any randomly selected member of the group. As noted by Harsanyi, discussed here:
>
>> Selecting a choice which maximizes the utilitarian sum for the group does not necessarily maximize utility for the individual.
>
> It maximizes the utility sum for any given randomly selected individual.

Maximizing the average utility which a random individual receives is not the same as maximizing the utility which a given or specific individual receives. The outcomes which an election method selects do not necessarily distribute gains and losses randomly; you are only selecting an individual at random via a contrived example. The outcomes selected by the election method will most likely distribute gains non-randomly, depending upon the specific substructures of the specific group in question. It's quite possible that for a given group with a specific substructure and a limited and finite number of options to choose between, members of a minority subgroup will reliably receive zero utility from the outcomes which a utilitarian election method selects, and members of a controlling subgroup will reliably receive maximum utility from the outcome which a utilitarian election method selects.


> So if you have a choice to be in a universe where your group's net utility is 100 or 120, you want the latter, given you know nothing else.

This completely ignores risk and consensus and other sample statistics which can be obtained from already collected voter scores, such as dispersion. It's quite easy to construct scenarios in which the net-100 option is preferable to the net-120 option. For instance, if there were two voters who unanimously agreed the net-100 option had a score of 50, and there was strong disagreement concerning the net-120 option, where one voter rated it zero and one voter rated it 120, then selecting the net-100 outcome will produce strong agreement whereas selecting the net-120 outcome will produce strong disagreement. If you are risk adverse and value stability and low volatility for a guaranteed return then the net-100 option is certainly preferable. However, if the net-120 option was arrived at via unanimous agreement by both voters to rate it a 60, and the net-100 option had one voter rate it 0 and the other rate it 100, then the net-120 option would clearly be a vastly superior option. This tradeoff can be handled automatically by weighting the sum by the dispersion.



> Then you're back to caring about net welfare, not equality.

I believe weighting for score dispersion will address both. Minimizing variance in voter expected utility scores may maximize the net welfare which is actually achieved, in the same way that repeated low-volatility investment strategies often outperform high risk strategies, and may also help avoid outcomes which create structural inequality between subgroups. Structural inequality may also create systematic risks and catastrophes which are hard for individuals to account for.


> A utility-maximizing voting system already accounts for the fact that a theoretical person in the majority may cause the election of a candidate that he "favors" but whose decisions end up causing political turmoil that makes him worse off.

Existing utility maximizing voting systems only address expected return. They do not address variance, risk, and consensus.


> Warren's sims measure actual welfare that is strictly achieved. The difference between that and voter expectations is accounted for by ignorance factors.

If actual welfare is dependent on consensus, and it is not possible for voters to know whether the score they assign an outcome is similar to the scores which others assign it before these scores have been communicated, then consensus cannot be accounted for by ignorance factors, because it is not ignorance for a voter not to know something at a point in time before it is possible for them to know it.

David Hollander

Aug 26, 2017, 10:17:30 PM
to The Center for Election Science

On Saturday, August 26, 2017 at 5:03:46 PM UTC-5, Clay Shentrup wrote:
> They are converted to ballots via normalization. (Not to mention strategic exaggeration for some voters.)

This normalization process is one-way. It destroys information which may be present in individual mental representations of unbounded utility scores, and election methods cannot recover unbounded utility scores from the bounded scores which are communicated to them. Range voting is a relative utilitarian voting method because it collects bounded relative utility scores for comparison. Using unbounded utilities to test resistance to strategic voting via simulations may be a great idea. However, anything said about this is specific to the simulation and not the voting method. Range voting itself is considered a relative utilitarian game.

David Hollander

Aug 26, 2017, 11:01:14 PM
to The Center for Election Science
On Saturday, August 26, 2017 at 5:19:12 PM UTC-5, Clay Shentrup wrote:
>> 'Tyranny of the majority' is a problem where social welfare functions and election methods select outcomes which make a minority substantially worse off in order to make a majority better off.
>
> You have this completely backward. Utilitarianism is the complete opposite of "tyranny of the majority". Net utility can be increased by making the majority a little worse off while making the minority substantially better off.

With range voting, a 51% trivial majority faction can still unilaterally select the outcome, even if the chosen outcome is given the minimum possible score by the other 49% of voters. If 51% of voters max-score an option and min-score the remaining options, the remaining options are guaranteed not to be chosen regardless of how other voters fill out their ballots. Range voting may have many very desirable properties which make such an outcome less frequent than in other voting methods as a matter of practice, but it does not strictly prevent it as a matter of rule. With consensus-weighted range voting, I believe it's possible to strictly prevent it as a matter of rule. Whether an electorate actually wishes to strictly prevent it as a matter of rule, or prefers to use standard range voting or an intermediate weighting, is probably dependent on the specific matter being voted upon.
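To illustrate with the 0/1 closed form (MAD = 2p(1-p) when a fraction p of voters score an option 1 and the rest 0), plus a unanimous compromise:

# 51% max-score option A and min-score everything else; everyone
# scores compromise C at 0.5.
p = 0.51
mean_A = p                               # plain range: A wins, 0.51 > 0.5
cwrv_A = p * (1 - 2 * 2 * p * (1 - p))
print(mean_A, round(cwrv_A, 4))          # 0.51 0.0002
print(0.5, 0.5)                          # C: mean 0.5, consensus factor 1
# Under CWRV (k = 2), A's polarization erases it and C wins.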

Clay Shentrup

Aug 27, 2017, 1:08:24 AM
to The Center for Election Science
On Saturday, August 26, 2017 at 6:59:21 PM UTC-7, David Hollander wrote:
> The outcomes which an election method selects do not necessarily distribute gains and losses randomly; you are only selecting an individual at random via a contrived example.
> The outcomes selected by the election method will most likely distribute gains non-randomly, depending upon the specific substructures of the specific group in question.

What is contrived is an example where gains are non-randomly distributed. The election system cannot identify voters. It's not as if Score Voting can be better for Christians, or IRV better for women or anything of that nature. A caveat might be that Plurality is better for the proverbial 1%, because it makes the entire system more vulnerable to gamesmanship via moneyed special interests. But aside from that, you'd be hard pressed to design a system that consistently advantages or disadvantages any identifiable group, be they a race, religion, gender, etc.

And even if you could, so what? Your core premise is that you can design a system that is smarter than the voters comprising a controlling bloc. They will naively elect X, who will be so detestable to the minority that the minority will e.g. wage civil war, making things ultimately worse for the controlling bloc. Your theory requires that you are smarter than them and can design a system that acts to preserve their best interest better than they themselves could. I think this is unfounded to put it mildly. It really is the epitome of hubris.

> It's quite possible that for a given group with a specific substructure and a limited and finite number of options to choose between, members of a minority subgroup will reliably receive zero utility from the outcomes which a utilitarian election method selects, and members of a controlling subgroup will reliably receive maximum utility from the outcome which a utilitarian election method selects.

This would only hold in a highly contrived scenario where the majority is consolidated into a highly specific spot in issue space. In reality, voters and candidates are spread over some continuum (albeit they may be in a heavily bimodal distribution), and the minority tugs the center in their direction. So no, they do not have "zero" influence. (Which must be what you meant when you said "zero utility", since that would mean they are all dead.)

>> So if you have a choice to be in a universe where your group's net utility is 100 or 120, you want the latter, given you know nothing else.
>
> This completely ignores risk and consensus and other sample statistics which can be obtained from already collected voter scores, such as dispersion.

"Risk" (probability) is already accounted for in expected utility.

"Consensus" is a vague layman's term which has no precise mathematical meaning that's useful for talking about voting methods. It is defined as:

general agreement.
"a consensus of opinion among judges"
synonyms: agreement, harmony, concurrence, accord, unity, unanimity, solidarity;
 
In voting theory, we have more precise terms, like utility. If Bob thinks X=10, Y=6, and Alice thinks X=11, Y=7, is that "general agreement"? Gee, I don't know. But I have their exact utilities, which is vastly better than that subjective layman speak.

> It's quite easy to construct scenarios in which the net-100 option is preferable to the net-120 option.
> For instance, if there were two voters who unanimously agreed the net-100 option had a score of 50, and there was strong disagreement concerning the net-120 option, where one voter rated it zero and one voter rated it 120, then selecting the net-100 outcome will produce strong agreement whereas selecting the net-120 outcome will produce strong disagreement.

Saying there's "strong disagreement" is just a redundant statement of the differing utilities. It is not new information. The net-120 option already takes those differing utilities into account, and adds up to a greater number.

> If you are risk adverse and value stability and low volatility for a guaranteed return then the net-100 option is certainly preferable.

This is an oxymoron. If you initially like Y more than X, but you think electing Y will lead to instability, then you may take that into account and reassess, deciding that X is actually better, all things considered. Then THAT is the basis for your final utility estimation. You then ultimately DO prefer X to Y. Of course, maybe that is reality but you don't have that wise realization, and so you ignorantly vote for Y.

Additionally, there's no such thing as "risk averse" (averse, not adverse). This is simply a layman's way of describing the fact that money has a non-linear relationship to utility. For instance, someone may decide to take a guaranteed million dollars instead of a 50/50 chance of a billion, claiming he's "risk averse". But if he has a zero net worth, then:

log2(1,000,000) ~ 19.9
Whereas...
log2(1,000,000,000) / 2 ~ 14.9

So the guaranteed million has a higher utility. To speak of it in terms of "avoiding risk" is nonsensical, because taking the million dollars means there's a "50% risk" of failing to win 999,000,000. There's risk either way.

> However, if the net-120 option was arrived at via unanimous agreement by both voters to rate it a 60, and the net-100 option had one voter rate it 0 and the other rate it 100, then the net-120 option would clearly be a vastly superior option.

No, 100 is clearly less than 120.

If you state your precise social welfare function, we can do a reductio ad absurdum example to once again demonstrate that an additive welfare function is the only tenable one. I can't read your definition because you have broken HTML. See attached image.

> This tradeoff can be handled automatically by weighting the sum by the dispersion.

Adding arbitrary error to your social welfare function in order to try to outsmart voters.

>> Then you're back to caring about net welfare, not equality.
>
> I believe weighting for score dispersion will address both.

There's no "both". Welfare is the only thing that ultimately matters. Your argument was that inequality can negatively affect welfare, in ways that voters won't anticipate but your system can nevertheless account for.
 
> Minimizing variance in voter expected utility scores may maximize the net welfare which is actually achieved, in the same way that repeated low-volatility investment strategies often outperform high risk strategies, and may also help avoid outcomes which create structural inequality between subgroups. Structural inequality may also create systematic risks and catastrophes which are hard for individuals to account for.

Again, this is just a verbose way of saying that you think Warren's ignorance factors weren't designed/simulated well.

>> A utility-maximizing voting system already accounts for the fact that a theoretical person in the majority may cause the election of a candidate that he "favors" but whose decisions end up causing political turmoil that makes him worse off.
>
> Existing utility maximizing voting systems only address expected return.

No. Warren's BR calculations calculate actual, not expected, return. You can argue with his ignorance factor design if you care to.

> They do not address variance, risk, and consensus.

I.e. Warren's ignorance factors weren't well designed. Okay, prove it.

> If actual welfare is dependent on consensus, and it is not possible for voters to know whether the score they assign an outcome is similar to the scores which others assign it before these scores have been communicated, then consensus cannot be accounted for by ignorance factors, because it is not ignorance for a voter not to know something at a point in time before it is possible for them to know it.

Ignorance here just means a disparity between what will actually happen and what a voter expects will happen (which forms the basis for his vote). If you don't like that term, you're just arguing semantics.

And voters can absolutely have some idea about how polarizing the candidates are, or how likely a given candidate might be to incite civil unrest. For your system to actually produce better results, you have to make massive assumptions about voters being extremely bad at that, whilst your system will be extremely good at it, enough so to overcome its distortionary effects. You haven't substantiated any of that.


[Attachment: Screen Shot 2017-08-26 at 9.27.30 PM.png]

Clay Shentrup

Aug 27, 2017, 1:18:29 AM
to The Center for Election Science
On Saturday, August 26, 2017 at 8:01:14 PM UTC-7, David Hollander wrote:
> With range voting, a 51% trivial majority faction can still unilaterally select the outcome, even if the chosen outcome is given the minimum possible score by the other 49% of voters.

Only if you assume a contrived example where that 51% is all compactly centered around a point in issue space. With a realistic distribution of voters, the 49% massively affects the outcome.

> With consensus-weighted range voting, I believe it's possible to strictly prevent it as a matter of rule.

By introducing distortion that will produce worse outcomes in a vast fraction of scenarios. I think you'd be unpleasantly surprised by the BR performance of your system.

Steve Cobb

Aug 27, 2017, 2:37:41 AM
to The Center for Election Science
I see some voting theorists often focusing too narrowly on producing the best outcome given a set of assumptions, e.g.:
-fixed electorate
-single decision
-fixed candidate list
-standard political election
There are decisions where people don't have to join the group. There are repeating decisions where a majority can continue to dominate a minority (addressed by Reweighted Score Voting). Different candidates may appear depending on the meta-decision about voting method. And of course there is strategy.

If you have identifiable, fixed factions (e.g. large and small states, or northern and southern states) they can be guaranteed seats or other forms of influence, but what if the potentially abused voters are not identifiable in advance? Consensus-weighting seems like a way to offer everyone, e.g. the loss-averse, some assurance the result will not be too negative for anyone.

Jameson, what do you think about the statistical argument, that greater vote dispersion implies greater uncertainty about the right answer?

Clay Shentrup

Aug 27, 2017, 10:30:29 AM
to The Center for Election Science
> Consensus-weighting seems like a way to offer everyone, e.g. the loss-averse, some assurance the result will not be too negative for anyone.

That is the wrong goal. The rational goal is to maximize your expected utility.

Also, do we have BR values for it?

David Hollander

Aug 27, 2017, 7:32:12 PM
to The Center for Election Science
On Sunday, August 27, 2017 at 12:08:24 AM UTC-5, Clay Shentrup wrote:
> What is contrived is an example where gains are non-randomly distributed. The election system cannot identify voters. It's not as if Score Voting can be better for Christians, or IRV better for women or anything of that nature. A caveat might be that Plurality is better for the proverbial 1%, because it makes the entire system more vulnerable to gamesmanship via moneyed special interests. But aside from that, you'd be hard pressed to design a system that consistently advantages or disadvantages any identifiable group, be they a race, religion, gender, etc.

There are a vast number of scenarios in which gains are non-randomly distributed. A politician could certainly run on a campaign of identifying and then killing or enslaving all Christians. They could run on a platform which advocates the seizure and redistribution of property from a minority religious faction to their political supporters. The election does not directly select the achieved level of welfare. It only preselects the method of selecting who will receive welfare, and politicians and policy proposals certainly do not operate as 'fair' or random dice when distributing gains. If an election was held to decide how an area of land should be partitioned, there would certainly be proposals submitted which distribute gains non-randomly, and it would be rational for factions to submit gerrymandered maps, and members of a majority group may rationally decide to support gerrymandering in order to maximize their gains and power over other groups if doing so satisfies their preferences.

This was actually pointed out to me by Warren, who argued that range voting was 'not good enough' for selecting maps, which is why the split-line algorithm was needed to randomly distribute gains without direct measurement of voter scores. So if range voting performs nearly perfectly on voting metric X, but is still so flawed that a random algorithm which does not even measure voter scores can beat it in certain scenarios, then it is clear that voting metric X is not a universal measure of all possible properties of good methods of group decision making in all contexts.

Concerning what it is possible for a voting system to do, it is certainly possible for election systems which operate upon range 3+ cardinal utility scores to distinguish polarizing outcomes from consensus outcomes by measuring dispersion and the central tendency of scores around the social average. It is possible for a voting system to determine whether a particular outcome is being min-maxed, and whether the average score for a particular outcome is likely to suffer from a large normalization error. With outcomes which are heavily min-maxed, with average ratings which suffer from large normalization error, we really don't know whether the 'true' unbounded utilitarian sum is radically positive or negative. If we are risk averse we will avoid selecting these outcomes in favor of reliable positives. No new information has to be collected from voters to do this; the only information needed is the same information needed to compute an average.


> And even if you could, so what? Your core premise is that you can design a system that is smarter than the voters comprising a controlling bloc. They will naively elect X, who will be so detestable to the minority that the minority will e.g. wage civil war, making things ultimately worse for the controlling bloc. Your theory requires that you are smarter than them and can design a system that acts to preserve their best interest better than they themselves could. I think this is unfounded to put it mildly. It really is the epitome of hubris.

It is hubris to think that any voter can fully account for systematic risk at the time of voting. The systematic risk of a group split may be dependent upon the lack of consensus and degree of dispersion in the reported scores for an outcome. It is not possible for individual voters to account for this in their reported score at the time of voting, because they do not know the scores which others will report, because the score which every other individual will report for the same outcome is dependent upon subjective factors and information which is impossible for the individual voter to know until after these scores have already been collected and communicated. The belief that individual voters can fully account for risk prior to voting and prior to the release of results is hubristic, because it implies an epistemology in which an individual can obtain perfect knowledge of how others will act and what their true internal preferences are before they have actually acted or publicly communicated them.



> This would only hold in a highly contrived scenario where the majority is consolidated into a highly specific spot in issue space. In reality, voters and candidates are spread over some continuum (albeit they may be in a heavily bimodal distribution), and the minority tugs the center in their direction. So no, they do not have "zero" influence. (Which must be what you meant when you said "zero utility", since that would mean they are all dead.)

The spread of voters depends upon the issue being voted upon. Intra-group violence and the risk of voters being killed is certainly a possibility which needs to be accounted for in a political context. There is also a selection bias when observing mixed democratic systems, where democracy is completely avoided in areas where voting is likely to produce bad results. If a voting system can possibly select outcomes involving the murder of a certain % of the population, then there will be a much greater investment in non-democratic efforts to constrain outcomes, and certain decisions will not be made democratically even if the total social cost of holding a vote is lower. Additionally, the only type of utility which voting systems can actually compare is relative utility values which have been explicitly communicated, which is bounded at a minimum value, which can conveniently be described as zero.

"Consensus" is a vague layman's term which has no precise mathematical meaning that's useful for talking about voting methods ... In voting theory, we have more precise terms, like utility.

For purposes of consensus-weighted range voting, I defined it as a measure of central tendency of voter supplied scores to the average. Central tendency is certainly a well understood statistical concept which is frequently used in economic analysis. In finance it is used in mean-variance analysis for portfolio selection and investment decisions and in macroeconomics it is used to compute the Gini coefficient.


> This is an oxymoron. If you initially like Y more than X, but you think electing Y will lead to instability, then you may take that into account and reassess, deciding that X is actually better, all things considered. Then THAT is the basis for your final utility estimation. You then ultimately DO prefer X to Y. Of course, maybe that is reality but you don't have that wise realization, and so you ignorantly vote for Y.

That's logically impossible. You don't know whether Y will lead to instability until the other voters have already revealed their scores, at which point in time you were already forced to submit a final estimate via the ballot.


> Saying there's "strong disagreement" is just a redundant statement of the differing utilities. It is not new information. The net-120 option already takes those differing utilities into account, and adds up to a greater number.

The average or sum of a sample distribution does not take variance into account. The average or sum is not the only information which voter scores provide. Stating a variance is not equivalent to stating an average. They measure different statistical properties.


> No, 100 is clearly less than 120.

I would consider a relative utilitarian sum of 100 with high variance to not be a reliable measurement that the 'true' utilitarian sum of the outcome is 100. For relative utilities containing normalization error, a lower sum could be considered preferable in some circumstances using existing information. The original article does not make assumptions concerning whether 'true' unbounded utility values for individuals actually exist separate from the scores which have been communicated via ballots, and is only concerned with 'relative utilitarianism'. I am explicitly not using a sum of scores to select outcomes, and I am not making any normative moral assumptions which obligate me to.


If you state your precise social welfare function, we can do a reductio ad absurdum example to once again demonstrate that an additive welfare function is the only tenable one. I can't read your definition because you have broken HTML. See attached image.

This appears to have been a recent mistake from when I added a file to the article on the 23rd, which should now be fixed.


Ignorance here just means a disparity between what will actually happen and what a voter expects will happen (which forms the basis for his vote). If you don't like that term, you're just arguing semantics.

This is not a matter of semantics. This is a matter of logic and epistemology. If voters will adjust expectations in reaction to new information, and the information to which they are adjusting their expectations is the expectations of other voters, then it is not possible for them to fully make this adjustment prior to or at the time of voting, and there will be no stable 'true' internal utility value for voters which remains the same after voting.


And voters can absolutely have some idea about how polarizing the candidates are, or how likely a given candidate might be to incite civil unrest.

They can't accurately account for the beliefs of other voters unless those beliefs have been publicly communicated via a trusted medium. The secure, accurate, and reliable way to obtain the preferences of others empirically is to hold a vote.


For your system to actually produce better results, you have to make massive assumptions about voters being extremely bad at that, whilst your system will be extremely good at it, enough so to overcome its distortionary effects.

All voting systems make assumptions and 'distort' results, and any attempt at measuring utility will change it. I would consider options in which voter scores have high variance to be inherently undesirable, and the average score for an option with high variance is not a meaningful measurement of the average theoretical 'true' utility of that outcome. Using consensus weighting as described is simply a precommitment or agreement by voters, prior to voting, that the weight assigned to outcomes with high measured variance will be decreased by some variable factor.

The primary purpose of this voting system was as a thought experiment to explore solutions to the shortcomings of standard range voting mentioned by Warren in the previous thread on direct election of district maps, without having to use extremely high minimum score requirements likely to produce deadlocks. I would agree that further research needs to be completed before such a method of voting can be recommended for actual use. I believe the current primary limitation on the completion of such research is the time I have available to invest, but I expect to chip away at it over the next couple of years.

David Hollander

unread,
Aug 27, 2017, 9:13:59 PM8/27/17
to The Center for Election Science
Utility and goals are both based on individual subjective preferences and values. I don't think it's meaningful to state that one should maximize utility for its own sake. It is unproblematic if individuals hold a variety of other beliefs which they consider axiomatically good or bad, as utility can simply be declared a measure of how well an option assists individuals in maximizing or minimizing whatever their other goals are.

Additionally, it is certainly possible to construct theoretical games in which loss aversion is a successful strategy. Suppose there were an iterated game with imperfect information, where the longer a player stays in the game, the more information they gain, in such a manner that the average utility of future moves will be higher than that of present moves. Suppose the game is lost if the accumulated score drops below a minimum amount. Under such conditions, it is possible for a loss-averse strategy to realize higher gains, if it provides more time for an individual to train and reduce the instrumental error of the processes responsible for translating observations of external material conditions into expected values. However, this is not a claim that such a game is the 'true' model or universal description of all possible games, just a statement that using different models for different games will produce different results.
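A toy simulation of the kind of game I have in mind (all payoff numbers here are illustrative assumptions; whether the loss-averse strategy wins depends entirely on them):

import random

def play(strategy, turns=200, start=10.0, seed=None):
    # One run of an iterated game with a ruin threshold at zero. Payoffs
    # improve with turns survived, standing in for information gained by
    # staying in the game.
    rng = random.Random(seed)
    score = start
    for t in range(turns):
        skill = t / turns
        if strategy(score):
            payoff = rng.gauss(0.3 + skill, 4.0)  # risky: higher mean, high variance
        else:
            payoff = rng.gauss(0.1 + skill, 0.3)  # safe: lower mean, low variance
        score += payoff
        if score < 0:
            return 0.0                            # ruined: out of the game
    return score

ev_maximizer = lambda score: True                 # always take the higher-mean move
loss_averse = lambda score: score > 20            # take risks only with a cushion

for name, strategy in [("EV-maximizer", ev_maximizer), ("loss-averse", loss_averse)]:
    results = [play(strategy, seed=i) for i in range(2000)]
    print(name, round(sum(results) / len(results), 1))

Under these parameters the EV-maximizer is frequently ruined early, before its improving payoffs can compensate, while the loss-averse player survives long enough to collect them.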


Also, do we have BR values for it?


I have not had time to read and modify the source code of these simulations yet. However, it should be identical to range voting when using a consensus weight of zero. With higher consensus weights, I predict it will produce different results when there are a small number of candidates on the ballot and no unanimously good outcomes are present. With a larger number of candidates on the ballot, where a unanimously good outcome is likely to appear in the list of available outcomes under consideration, it should again perform very similarly to standard range voting. Whether outcomes differ is probably highly dependent on the preconditions and ballot access rules which determine the type of outcomes under consideration. I would probably measure it with consensus weights of [0, 0.5, 1.0, 1.5, 2.0]. I don't have any predictions on how this voting system interacts with the various strategic voting tests in these simulations until I better familiarize myself with the source code and find time to perform the requested tests.
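As a sketch of the sweep I have in mind, using the generalized factor avg * (1 - w * MAD), where w = 0 reduces to standard range voting (the ballot profiles are invented for illustration):

def cwr_winner(ballots, w):
    # ballots: dict mapping each outcome to its list of scores in [0, 1].
    def score(xs):
        n = len(xs)
        mad = sum(abs(a - b) for a in xs for b in xs) / n ** 2
        return (sum(xs) / n) * (1 - w * mad)
    return max(ballots, key=lambda c: score(ballots[c]))

ballots = {
    "consensus": [0.6, 0.7, 0.6, 0.5, 0.6],  # everyone mildly positive
    "polarized": [1.0, 1.0, 1.0, 1.0, 0.0],  # a 4-1 min-maxed split
}
for w in [0, 0.5, 1.0, 1.5, 2.0]:
    print(w, cwr_winner(ballots, w))

Here "polarized" wins at w of 0 and 0.5, and the winner flips to "consensus" from w = 1.0 onward.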

Clay Shentrup

unread,
Aug 27, 2017, 10:10:40 PM8/27/17
to The Center for Election Science
On Sunday, August 27, 2017 at 4:32:12 PM UTC-7, David Hollander wrote:
There are a vast number of scenarios in which gains are non-randomly distributed. A politician could certainly run on a campaign of identifying and then killing or enslaving all Christians.

From the point of view of the voting system, "Christians" is "random". A voting system does not know what a Christian is.

In any case, this would make the Christians vote against that politician, and would make their normalized scores for all his rivals essentially maximum, which would help prevent him from winning.

If you think your proposal would do better, add it to Warren's simulator and see what Bayesian Regret it achieves. So far, you just have speculation that it'll do better via the implausible mechanism of being smarter than voters.
 
If an election were held to decide how an area of land should be partitioned, there will certainly be proposals submitted which distribute gains non-randomly, it would be rational for factions to submit gerrymandered maps, and members of a majority group may rationally decide to support gerrymandering in order to maximize their gains and power over other groups if doing so satisfies their preferences.

Just like utility is unevenly distributed in Warren's BR calculations. What is your point? If you think your system would do better, do the calculations and see if that's the case.

This was actually pointed out to me by Warren, who argued that range voting was 'not good enough' for selecting maps, which is why the splitline algorithm was needed to randomly distribute gains without direct measurement of voter scores. So if range voting performs nearly perfectly on voting metric X, but is still so flawed that a random algorithm which does not even measure voter scores can beat it in certain scenarios, then it is clear that voting metric X is not a universal measure of all possible properties of good methods of group decision making in all contexts.

This statement seems to confuse "Bayesian Regret" with "Score Voting". BR *is* the universal measure of quality. Whether Score Voting achieves the optimal BR for some situation is another matter.

Using a voting method at all is difficult for map drawing, because you could literally submit trillions of different districtings. And voters would have an incredibly hard time converting a particular arrangement to an estimate of their personal utility under that regime. This contains the same problem as calculating multi-winner Bayesian Regret: you can't feasibly convert the utilities of individual politicians into a net utility of putting them into a legislature together. This isn't additive. It would be like trying to use Score Voting to decide between 10 basically anonymous people. The problem isn't Score Voting, it's a lack of information.

This isn't applicable to your proposal, because your proposal is a voting method. It is subject to the same analysis via Bayesian Regret.

it is certainly possible for election systems which operate upon range-3+ cardinal utility scores to distinguish polarizing outcomes from consensus outcomes by measuring the dispersion and central tendency of scores around the social average.

So what? You still cannot predict how much unforeseen harm is caused by disagreement. And you certainly haven't made a case that you've designed an algorithm which can predict that harm better than the voting public.

It is possible for a voting system to determine whether a particular outcome is being min-maxed, and whether the average score for a particular outcome is likely to suffer from a large normalization error.

Again, your implied argument is a non-sequitur. (And I'm skeptical about the normalization error—you might be talking about the tactical exaggeration error, not normalization error.)

With outcomes which are heavily min-maxed, whose average ratings suffer from large normalization error, we really don't know whether the 'true' unbounded utilitarian sum is radically positive or negative.

Well, we don't know anything with certainty, as we can't see the future or read human minds. We "know with error bars", that knowledge coming from BR calculations. In your case, you know with the error bars caused by your untested mental model.

If we are risk averse we will avoid selecting these outcomes in favor of reliable positives.

There's no such thing as "risk averse". A rational entity wants maximum expected utility.

It is hubris to think that any voter can fully account for systematic risk at the time of voting.

Whatever hubris it may be, you are clearly displaying an order of magnitude more hubris to think that you can do any better than the "wisdom of the crowds" from those future elections, without even having any knowledge whatsoever of the candidates or context in which they will occur. And further, that you can encode that knowledge into a voting system. And that this "anti-dispersion" effect will prove so valuable so often that it'll more than make up for the lost utility it introduces via the massive outright distortion it will obviously cause.

Again, I would advise you to do some Bayesian Regret calculations if you want to make a case.
 
It is not possible for individual voters to account for this in their reported score at the time of voting, because they do not know the scores which others will report

Okay, so it sounds like you're saying that your system can take advantage of the actual disagreement revealed in the ballots, whereas the voters can only say, "I really think Trump is going to be the next Mussolini and will tear this country apart."

Now, what's a better indicator of the "unforeseen regret" that a candidate will bring about? Polarization of scores, or specific policy statements like, "We're going to bomb North Korea and build a wall between our neighbor to the South."? I can't answer that though I suspect it's more the latter. But I also don't think you can answer it either.

And in order for your proposal to work, it has to do such a phenomenal job of preventing extremely polarizing outcomes (which are already expected to be very rare with Score Voting) so often that it still makes up for the significant loss it introduces. You cannot make this argument without A) Doing the Bayesian Regret calculations and crossing your fingers that they show your system not to perform as poorly as I predict, and B) making the case that these kinds of polarizing outcomes actually are plausibly common with  ordinary low-BR methods like Score/Approval/321/etc.

Your whole proposal here is a thought experiment with really no empirical data.

The belief that individual voters can fully account for risk prior to voting and prior to the release of results is hubristic, because it implies an epistemology in which an individual can obtain perfect knowledge of how others will act and what their true internal preferences are before they have actually acted or publicly communicated them.

No, not perfect knowledge of course. Knowledge like, "I watch the news and know that Californians are probably going to secede if this nut job is elected President."

I maintain that the instability a leader will cause is probably more evident from specific past actions/statements/proposals than from mere polarization of scores on ballots.

Intra-group violence and the risk of voters being killed are certainly possibilities which need to be accounted for in a political context.

Of course. But I trust voters more than I trust your system to account for this, for the reason I just stated. And I don't think this issue will come up nearly enough to account for the significantly worse results your system will produce in normal non-polarized circumstances.

If a voting system can possibly select outcomes involving the murder of a certain % of the population, then there will be a much greater investment in non-democratic efforts to constrain outcomes

This seems farcical, because there are alternatives like Score/Approval/Condorcet that we expect to elect boring relatively competent middle-of-the-road non-extremist types. People like Angela Merkel and Emmanuel Macron don't go ordering the slaughter of minorities.

This is an oxymoron. If you initially like Y more than X, but you think electing Y will lead to instability, then you may take that into account and reassess, deciding that X is actually better, all things considered. Then THAT is the basis for your final utility estimation. You then ultimately DO prefer X to Y. Of course, maybe that is reality but you don't have that wise realization, and so you ignorantly vote for Y.

That's logically impossible. You don't know whether Y will lead to instability until the other voters have already revealed their scores

Utterly false. People were predicting Trump would be the kind of polarizing President he turned out to be, long before he even clinched the nomination. People watch news. People watch speeches. People look at the past actions and statements of the candidates.

I would consider options in which voter scores have high variance to be inherently undesirable

That's an assumption. You need data to see how much harm is caused by disagreement.

Steve Cobb

unread,
Aug 28, 2017, 5:24:39 AM8/28/17
to The Center for Election Science
>ordinary low-BR methods like Score/Approval/321/etc.

Doesn't 321 weight consensus by throwing out the most disliked among liked candidates, even though that disliked candidate could have been the score winner?

Clay Shentrup

unread,
Aug 28, 2017, 10:29:34 AM8/28/17
to The Center for Election Science
On Monday, August 28, 2017 at 2:24:39 AM UTC-7, Steve Cobb wrote:
>ordinary low-BR methods like Score/Approval/321/etc.

Doesn't 321 weight consensus by throwing out the most disliked among liked candidates, even though that disliked candidate could have been the score winner?

No. That punishes a candidate for being disliked, not for being dissimilarly liked.

David Hollander

unread,
Aug 28, 2017, 6:16:12 PM8/28/17
to The Center for Election Science
On Sunday, August 27, 2017 at 9:10:40 PM UTC-5, Clay Shentrup wrote:
From the point of view of the voting system, "Christians" is "random". A voting system does not know what a Christian is.

It doesn't have to. It can simply look at the score distribution on a per-outcome basis and use additional sample metrics beyond the average to identify differences in the distribution. Voting systems are perfectly capable of doing this. What do you think score variance represents? Why do you think this information is not valuable, and that groups should avoid using it at all costs?
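For example, here are two hypothetical outcomes with identical averages, one of which a 70% bloc min-maxes at a 30% bloc's expense. The system never sees the blocs, only the dispersion:

from statistics import mean, pvariance

targeted = [1.0] * 70 + [0.0] * 30   # a majority gains at a minority's expense
compromise = [0.7] * 100             # everyone does moderately well

for name, xs in [("targeted", targeted), ("compromise", compromise)]:
    print(name, round(mean(xs), 2), round(pvariance(xs), 2))
# Both means are 0.7; only the variance (0.21 vs 0.0) tells them apart.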



If you think your proposal would do better, add it to Warren's simulator and see what Bayesian Regret it achieves. So far, you just have speculation that it'll do better via the implausible mechanism of being smarter than voters.

I am not a utilitarian and do not subscribe to the same assumptions as to what constitutes 'better'. I do not think it is always rational for individuals to support systems which seek to maximize utility for the 'average person'. The 'average person' does not actually exist; the 'average person' is a mythical person whom no one actually embodies. The distance between a given individual and the mythical 'average person' can be measured by the absolute difference. The distance between the average individual and the average person can be measured by the mean absolute displacement. It MAY be rational for SOME individuals to support increasing utility for the 'average person' IF they have reason to believe that they are similar to the average person. We can provide assurances to the average person that this is the case using consensus weighting.

I will volunteer software development hours to add consensus weighting to whichever codebase eventually holds the preferred simulations, but there are a large number of other software projects which are much higher priority for me at the moment, so you may be looking at a time frame of months rather than days, as I try to avoid multitasking across multiple codebases. I am still only publicly advocating for standard range voting in actual elections. I consider speculation and the sharing of thought experiments to be valuable.

Consensus-weighted voting is not a dictatorship and has nothing to do with my intelligence. It is a public, objective process which can be implemented by anyone voluntarily, without reliance on an intelligent central authority. I would speculate that some groups might voluntarily prefer consensus-weighted range voting over standard range voting because it provides utility-maximizing individuals an additional rational assurance that by seeking to maximize utility for the average person, they are also likely to be maximizing utility for themselves. Adding such an incentive may be necessary to encourage voluntary group cohesion, so that the need for enlightened central authority figures and leaders to promote group cohesion is substantially reduced.


Using a voting method at all is difficult for map drawing, because you could literally submit trillions of different districtings. And voters would have an incredibly hard time converting a particular arrangement to an estimate of their personal utility under that regime.

I laid out a number of proposals to address this in the previous thread. Ballot access rules are already used in regular elections to define what constitutes a 'legal candidate' and to limit access. Your complaint also applies to representative democracy in general. If we were to develop a more complete model of group decision making, the difference would only be a matter of degree.



you can't feasibly convert the utilities of individual politicians into a net utility of putting them into a legislature together. This isn't additive. It would be like trying to use Score Voting to decide between 10 basically anonymous people. The problem isn't Score Voting, it's a lack of information.

This is trivial to do with consensus-weighted voting. In addition to computing the average utility produced by each combination of candidates, you weight by the score variance. I believe Parker Friedland already started a thread on how to do this, and I have additional notes which I can publish when I have the time. Picking a portfolio of multiple 'assets' is done every day in quantitative finance. You don't need to collect any additional information if you use the variance.
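A rough sketch, under the simplifying assumption that a voter's utility for a committee is the mean of their scores for its members (the ballots and the risk_penalty parameter are invented for illustration):

from itertools import combinations
from statistics import mean, pvariance

ballots = [                          # voter -> candidate -> score in [0, 1]
    {"A": 1.0, "B": 0.9, "C": 0.0},
    {"A": 1.0, "B": 0.8, "C": 0.1},
    {"A": 0.0, "B": 0.7, "C": 1.0},
    {"A": 0.1, "B": 0.8, "C": 0.9},
]

def committee_value(committee, risk_penalty=1.0):
    # Mean-variance objective, as in portfolio selection: reward the average
    # per-voter utility, penalize disagreement between voters.
    per_voter = [mean(b[c] for c in committee) for b in ballots]
    return mean(per_voter) - risk_penalty * pvariance(per_voter)

best = max(combinations(sorted(ballots[0]), 2), key=committee_value)
print(best)  # ('B', 'C'): slightly lower average than ('A', 'B'), far less disagreement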


Now, what's a better indicator of the "unforeseen regret" that a candidate will bring about? Polarization of scores, or specific policy statements like, "We're going to bomb North Korea and build a wall between our neighbor to the South."? I can't answer that though I suspect it's more the latter. But I also don't think you can answer it either.

We cannot assume that voters have access to accurate, unbiased, and trusted sources of information, or that there is a strong, well-functioning public sphere that raises awareness of relevant information prior to a vote, or that the voting system is being used in an existing liberal democracy with a strong fourth estate. This is a chicken-and-egg problem: you are assuming that a functioning democracy already exists, but the point in time at which people are most likely to adopt new voting systems is when there are no longer any strong democratic institutions in place.



And in order for your proposal to work, it has to do such a phenomenal job of preventing extremely polarizing outcomes (which are already expected to be very rare with Score Voting) so often that it still makes up for the significant loss it introduces.

If there are strong candidates on the ballot, there is no loss, and the same winner is selected. If there are only weak candidates on the ballot, it minimizes polarization to prevent a group split. It could only create 'significant loss' if there were no good candidates on the ballot anyway, in which case the central problem might be that you no longer have a democracy, and that there are strong preconditions and ballot access restrictions in place which unfairly restrict outcomes prior to voting. If consensus weighting reduces the potential gains from conspiring to add preconditions and access restrictions that artificially limit the selection of candidates under consideration, then it could stimulate long-run reforms. We would need to develop a more complicated multistage voting model to account for this; however, additional complexity may be necessary to more accurately describe the situation we are attempting to model.


No, not perfect knowledge of course. Knowledge like, "I watch the news and know that Californians are probably going to secede if this nut job is elected President."

What if you lived in a 'democracy' like Turkey, with strong censorship of the press, a country where journalists are jailed, or a country where the news is provided by the state according to political directives issued by whoever won the previous election? We need to be more rigorous about the assumptions we are making about epistemology. There is a large survivorship bias if we only consider what happens in contemporary liberal democracies with strong institutions outside of the public voting process.


This seems farcical, because there are alternatives like Score/Approval/Condorcet that we expect to elect boring relatively competent middle-of-the-road non-extremist types. People like Angela Merkel and Emmanuel Macron don't go ordering the slaughter of minorities.

Liberal democracies have many checks and balances, laws, rules, restrictions, institutions, and accumulated capital useful for the maintenance of a strong public sphere. These factors fall outside of the game-theoretic properties of the voting system.

Clay Shentrup

unread,
Aug 29, 2017, 1:53:38 AM8/29/17
to The Center for Election Science
Scenario 1: Every voter gives X a 0.
CWR score: 0

Scenario 2: Half the voters RAISE their scores to a 1 (i.e. MAX).
CWR score: 0

Scenario 3: That same half of voters then REDUCE their scores to 0.1.
CWR score: 0.045 (average 0.05 times consensus factor 1 - 2 * 0.05 = 0.9)

CWR is non-monotonic.
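In code (scores on [0, 1], MAD taken as the mean absolute difference between ballots, consensus factor 1 - 2 * MAD, per David's formulation):

def cwr(scores):
    n = len(scores)
    avg = sum(scores) / n
    mad = sum(abs(a - b) for a in scores for b in scores) / n ** 2
    return avg * (1 - 2 * mad)

n = 100
print(cwr([0.0] * n))                             # scenario 1: 0.0
print(cwr([1.0] * (n // 2) + [0.0] * (n // 2)))   # scenario 2: 0.0
print(cwr([0.1] * (n // 2) + [0.0] * (n // 2)))   # scenario 3: 0.045

Half the voters LOWERED their scores and the result went up.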

Clay Shentrup

unread,
Aug 29, 2017, 12:53:29 PM8/29/17
to The Center for Election Science
On Monday, August 28, 2017 at 3:16:12 PM UTC-7, David Hollander wrote:
I am not a utilitarian and do not subscribe to the same assumptions as to what constitutes 'better'.

A utilitarian is one who believes the social utility function is just the sum of individual utilities. It has been well established that if you propose any alternative social welfare function, you get logical contradictions, therefore a utilitarian model is the only one that's tenable.

If you'd care to state your social welfare function, I'd be happy to demonstrate this. If it's anything like the voting method you're proposing, it will have the same contradictions, and thus is logically proven not to be correct.

Ergo, saying "I am not a utilitarian" is like saying "I am not someone who believes 2+2=4."

Liberal democracies have many checks and balances, laws, rules, restrictions, institutions, and accumulated capital useful for the maintenance of a strong public sphere. These factors fall outside of the game-theoretic properties of the voting system.

Indeed! And I say this is a symptom of historically bad voting methods. Francis Fukuyama makes the case that e.g. America has "too much democracy" in his book Political Order and Political Decay. He cites better democracies with far fewer checks and balances, but a better selection process for the leader(s) and/or easier ways to remove a bad leader.

Clay Shentrup

unread,
Aug 30, 2017, 1:43:50 AM8/30/17
to The Center for Election Science
Does anyone have thoughts to offer? I'd say this is pretty damning.

Andy Jennings

unread,
Aug 30, 2017, 8:09:45 PM8/30/17
to electionscience
If half the voters give 0 and half the voters give grade X, the average will be X/2.  The mean absolute difference is X/2, so with David's original formulation "multiply by 1 - 2 * MAD", you get:

Final grade = X * (1 - X) / 2

So yes, it looks like his original formulation is not monotonic.
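Numerically, the final grade peaks at X = 1/2 and then falls as the faction raises its grade further:

f = lambda x: x * (1 - x) / 2
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(x, f(x))   # 0.0, 0.09375, 0.125, 0.09375, 0.0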

Subtracting some multiple of MAD or of the standard deviation would be monotonic, wouldn't it?


Clay Shentrup

unread,
Aug 31, 2017, 12:50:35 AM8/31/17
to The Center for Election Science
Not just non-monotonic, but massively distortionary.

David Hollander

unread,
Aug 31, 2017, 4:53:33 PM8/31/17
to electio...@googlegroups.com
The faction which gave the max score to X does not get a chance to reduce their score to 0.1 during the same election cycle without giving the other faction the same opportunity to change their scores as well. A self-effacing strategy of lowering the score for a preferred outcome to increase its rating is risky, as it weakens the maximum possible score which your preferred outcome can achieve versus other outcomes. You would probably only want to use a full weighting of 2 in a zero-sum game where attempting to win the game is associated with negative externalities. I will look into monotonicity vs. non-monotonicity more closely, but if you consider non-monotonicity to be axiomatically problematic then I believe you can just decrease the weight from 2 to 1 or use a discrete scale. In the discrete scenario, I believe it will depend upon the magnitude of the number range. I may have originally made an implicit assumption of range-3 voting using only the scores {0, 0.5, 1}.



David Hollander

unread,
Aug 31, 2017, 5:38:44 PM8/31/17
to electio...@googlegroups.com
On Tue, Aug 29, 2017 at 11:53 AM, Clay Shentrup <cshe...@gmail.com> wrote:
A utilitarian is one who believes the social utility function is just the sum of individual utilities.

Yes, I don't necessarily subscribe to that axiom. I don't believe the most socially useful action for a group to take is always the one which maximizes the sum of known present individual utilities. In an iterated game with imperfect information, if a group always chooses the action which maximizes the sum of known present individual utilities at every decision-making interval, I do not believe this will lead to a universe in which the sum of individual utilities has actually been maximized, in comparison to the other possible universes which could have been achieved by using different decision-making criteria.



Clay Shentrup

unread,
Sep 2, 2017, 2:23:33 AM9/2/17
to The Center for Election Science
We are not talking about "present" utilities. We are talking about actual lifetime utilities. No one is claiming voters know those precisely at the time of voting.

And as I said, there is no viable alternative function. Propose one and I will demonstrate that.

Steve Cobb

unread,
Sep 4, 2017, 8:07:51 AM9/4/17
to The Center for Election Science

>Warren's utilities are not equivalent to Score Voting. They are converted to ballots via normalization. 

>(Not to mention strategic exaggeration for some voters.)

>Bayesian Regret can utilize any utility generator you like. There is no requirement for bounded utilities.


Indeed, as Warren said:

http://rangevoting.org/BayRegExec.html

Even if utilities for different humans are regarded as inherently unknowable, unmeasurable, and inapproximable by any physical/biological process, that would not affect the validity of the Bayesian Regret methodology and its conclusions one iota.


CES generally focuses on voting methods and systems—how a given group should make a given decision— but often we stray outside into issues of constitutions, politics, and even specific political issues. That’s probably OK, but maybe sometimes we need to be reminded of our place. Most people here have a background in mathematics, not political science, economics, sociology, or philosophy. 


>And even if you could, so what? Your core premise is that you can design a system that is smarter than the voters comprising a controlling bloc. 


No! If at the meta-decision level some group decides that they value consensus, or something else in addition to VSE, that’s their business, and it’s our role as consultants to design them a system that meets their requirements. We don’t need to agree now on what those groups and decisions might be. The question is, for situations with consensus requirements, would Consensus-Weighted Score Voting be a useful tool in our toolbox?


>That is the wrong goal. The rational goal is to maximize your expected utility.

>A rational entity does not care about equality, only about maximizing its expected welfare.


Talk about hubris—you claim to know the utility function of every rational entity? Maybe some rational entities’ welfare depends on others’ welfare. My welfare certainly depends on that of my kids.


Rational entities want to maximize their welfare *over time*, and it might mean not participating in the decision, or future decisions, at all. The “civil war” scenario comes in more common, less hyperbolic variations. Most important are votes that would otherwise not take place, because the group would never form. Also, yes—group members could leave and not participate in future collaborations. We need to think more broadly about other electorate types, not just the captive electorate of a political district. Instead of “civil war”, I’d call this scenario “exit” or “non-participation”. 


Do SV votes necessarily reflect all utility? What if the voters, say a group of Quakers, derive utility from consensus? How would they know the consensus candidate in advance of the vote? What if only one faction values consensus? What about the opposite: hyper-partisan Americans who derive utility from their adversaries’ loss and suffering?


>"Consensus" is a vague layman's term which has no precise mathematical meaning that's useful for talking about voting methods.


It is a quite useful term, which, like other seemingly (to the layman) vague terms (e.g. “quality”), can be given precise mathematical meanings. In the case of Choose-One Plurality Voting and Approval Voting, we have consensus measures called the decision basis (plurality, majority, and super-majority). The Analytic Hierarchy Process (AHP) has a consensus metric (Microsoft Project Server even calculates it), and another AHP consensus metric is suggested here:

http://bpmsg.com/ahp-consensus/


CWSV provides a new consensus measure which sounds pretty useful.


>In voting theory, we have more precise terms, like utility. 

>If Bob thinks X=10, Y=6, and Alice thinks X=11, Y=7, is that "general agreement"? 

>Gee, I don't know. But I have their exact utilities, which is vastly better than that subjective layman speak.


Utility is precise? Ask Andy what reaction he got to utility from an economist at a conference last year. How do you get someone’s utilities? I thought we agreed that score votes are only roughly based on the underlying utilities. It’s not clear what the SV votes mean, even if the scale (e.g. the zero point) is defined. A while back I asked if SV voters should vote on an absolute scale (based on reasonable but imaginary worst and best candidates) or a scale relative to the nominated candidates (worst gets lowest rating, best gets highest, and all others fall between). The answer from above was that voters should do whatever they want.

Clay Shentrup

unread,
Sep 5, 2017, 1:53:39 AM9/5/17
to The Center for Election Science
On Monday, September 4, 2017 at 5:07:51 AM UTC-7, Steve Cobb wrote:

Most people here have a background in mathematics, not political science, economics, sociology, or philosophy.


Translation: the math depends on empirical data inputs that we may not be experts in. (I think that's what you're saying.)

OK, if you're trying to make the case that some of the empirical inputs into Warren's figures are wrong, then make a specific argument.

>And even if you could, so what? Your core premise is that you can design a system that is smarter than the voters comprising a controlling bloc. 


No!


Yes! That is literally the argument he appeared to be making: that disagreement will produce externalities like civil war which actually lead to paradoxically worse results. And I am saying, if I think a candidate's views are indicative of extremism or other factors that would make him likely to produce instability, I can already account for that in my assessment. (And, I speculate, probably better than a simplistic algorithm that merely assesses disagreement, partly because normalization exaggerates actual differences.) But I think all of that is pretty minor in significance, because this claim is really just tantamount to saying, "Warren's ignorance model is inaccurate."

If at the meta-decision level some group decides that they value consensus, or something else in addition to VSE


If everyone agrees that X=0, Y=9, and Z=10, do you think they really prefer X to Y or Z because then they get consensus? I say this is irrational, and certainly not what they really want, if given a revealed preference opportunity.

that’s their business, and it’s our role as consultants to design them a system that meets their requirements. We don’t need to agree now on what those groups and decisions might be. The question is, for situations with consensus requirements, would Consensus-Weighed Score Voting be a useful tool in our toolbox?


To me this is like helping someone who doesn't believe in Western medicine pick the best placebo. I don't want to help people make a bad choice. I want to educate them to make a good choice. Maybe there are tradeoffs, like you make money and get notoriety that lends itself to promoting good systems elsewhere in the long run. Still, feels pretty cynical in general.

Talk about hubris—you claim to know the utility function of every rational entity?


The definition of utility essentially requires that it be "the thing you're trying to optimize". By contrast, we could create an artificial intelligence "entity" who would choose X over Y, and Y over Z, and Z over X. That's a decision-making heuristic but it's not based on utilities. Utilities are defined such that if I tell you X=0, Y=3, Z=6, you'd find Y equally preferable to a 50/50 lottery of X or Z.

Living organisms are (to nature's best approximation) utility-based, because natural selection is incentivizing us to maximize the expected number of copies of our genes we make. Utility is just a proxy for making gene copies. Can you have some weird mutation that gives you cyclical preferences? Sure, in theory. In Thinking, Fast and Slow, they even show humans doing things like that. Or look at the Allais Paradox—the human brain just gets confused by the way options are phrased. But there's something deeper going on with organisms, particularly humans. Utility isn't just how we make decisions, it's how we feel. When I say I prefer apples to oranges, I literally mean that the feeling of pleasure I experience is greater. And per this notion of utility, you cannot prefer X to Y to Z. It is intrinsically a scale—you can have more or less.

More practically, if Bob says he'd rather he and Alice both have a utility of 5 rather than him have a 6 and her have a 7, what is he even really saying that's meaningful? This is logically contradictory.

Maybe some rational entities’ welfare depends on others’ welfare.


Suppose Bob is only happy if Alice is unhappy, and Alice is only happy if Bob is happy. This is a logical paradox. It is, at best, impractical to consider this scenario, not to mention it's simply irrational for either of them to have their happiness be a function of the other's.

A workable model is that Bob can be happy or sad about external circumstances which affect Alice's welfare, and vice versa. But to have either of their happiness dependent on the other's is a hall of mirrors.

Instead of “civil war”, I’d call this scenario “exit” or “non-participation”.


If I think having X selected will lead many others to not participate, and I dislike that outcome, then this knowledge will reduce my utility estimate for X and impact my decision about whether to vote for X. I do not think a voting system can be sophisticated enough to let me instead just express my "selfish original utility" and recalculate by inferring from score diversity how much harm will be caused to society by e.g. civil war.

Do SV votes necessarily reflect all utility?


Certainly not. They are distorted by ignorance, normalization, and strategy.

Utility is precise?


100%. There are literally specific numbers of neurotransmitters in various parts of your brain.
 

Ask Andy what reaction he got to utility from an economist at a conference last year.


First show me how that information is relevant.
 

How do you get someone’s utilities?


You don't. You get a lossy transform of them.
 

I thought we agreed that score votes are only roughly based on the underlying utilities.


Absolutely correct.
 

It’s not clear what the SV votes mean, even if the scale (e.g. the zero point) is defined.


There are lots of ways you can translate your utilities to scores. Here's the general best strategy.
 

A while back I asked if SV voters should vote on an absolute scale (based on reasonable but imaginary worst and best candidates) or a scale relative to the nominated candidates (worst gets lowest rating, best gets highest, and all others fall between). The answer from above was that voters should do whatever they want.


I'd say utilities essentially go from zero to infinity, because your expected number of gene copies derived from any action can go from zero to infinity (or whatever the maximum size of the universe can hold).
 

David Hollander

unread,
Sep 6, 2017, 5:27:04 PM9/6/17
to electio...@googlegroups.com
Unbounded utilities are at least as problematic as bounded utilities. Maximizing a utilitarian sum over unbounded subjective preferences suffers from the problem of sadists and Nozickian utility monsters. If you discard subjective preferences and instead assume there exists a single objective measure of wealth from which everyone acquires diminishing returns according to the natural logarithm (very large assumptions to which not everyone will subscribe), then you suffer from the problem that any outcome in which a single member's wealth drops to zero, regardless of the group population, results in a social utility approaching negative infinity. If you declare that wealth can't possibly reach zero, that individuals cannot derive negative utility from remaining in a group, and that values must be normally distributed in a manner which prevents this from happening, then you are not actually using unbounded utilities; you are reverting to bounded utilities. If you use anecdotal examples to justify this, then you are falling victim to availability and survivorship bias. If you are assuming that groups are not permeable and individuals are not free to enter and exit groups at will, then you are assuming the existence of huge barriers and failing to account for the cost of maintaining them.

I think it's rather troubling that you are attempting to equate utilitarian social ethics concerning group decision making with the rational actor model of individual agents, and casting your interpretation of utilitarianism as 'western medicine' and the alternatives as 'placebo'. There are certainly a large number of western schools of philosophy which would not agree with your interpretation, and it is not shared by all consequentialists and economists.

Now, if you wish to appeal to evolutionary game theory and self-replication, then I believe we would need to use an extremely different method of simulation to determine the evolutionary fitness of a voting system.

In order to conduct an evolutionary simulation, I believe what we would need to do is pit virtual groups, each assigned a single voting system, against each other in a competition for human capital and a larger share of a scarce population of individuals. We would need to use a population-dependent function to model the wealth a group produces, in order to reflect gains from specialization and trade. Groups with one member acquire all wealth themselves at each interval of accumulation; groups with more than one member vote on plans to divide wealth between their members. Each individual would have to recompute the expected utility of working for themselves, staying in their current group, and joining a competing group each turn. We could not model the expected utility of an individual remaining in their current group as a random lottery, because the other members of their group are perfectly capable of discrimination and of participating in faction-based strategy. That is to say, if a rational utility-maximizing individual repeatedly lost wealth by participating in a group which used a specific voting system, they would be forced under such a model either to leave to work for themselves or to join a competing group (whichever maximized utility). This would not be an instance of the gambler's fallacy, because we are not discussing random games of chance at all; the randomness is only introduced by those conducting the simulation for purposes of convenience.
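A skeleton of that simulation (the production function, the 90/10 division, and the exit rule are all placeholder assumptions; the crude "a bare majority votes itself 90% of the output" rule stands in for a dispersion-blind voting system):

import random

def production(n):
    # Assumed output function: increasing returns from specialization and trade.
    return n * (1.0 + 0.1 * n)

def equal_split(group, wealth, rng):
    return {a: wealth / len(group) for a in group}

def biased_split(group, wealth, rng):
    # A bare majority faction votes itself 90% of the group's output.
    winners = set(rng.sample(sorted(group), k=len(group) // 2 + 1))
    losers = len(group) - len(winners)
    return {a: wealth * 0.9 / len(winners) if a in winners
               else wealth * 0.1 / losers for a in group}

def simulate(rule, agents=30, turns=50, seed=0):
    rng = random.Random(seed)
    wealth = {a: 0.0 for a in range(agents)}
    group = set(range(agents))       # everyone starts in one group
    solo = production(1)             # payoff of working alone
    for _ in range(turns):
        shares = rule(group, production(len(group)), rng)
        for a in group:
            wealth[a] += shares[a]
        for a in list(group):        # each member re-evaluates staying
            if len(group) > 1 and shares[a] < solo:
                group.remove(a)      # exit: expected utility is higher alone
        for a in set(wealth) - group:
            wealth[a] += solo
    return round(sum(wealth.values()), 1), len(group)

print(simulate(equal_split))   # the group holds together; total output is high
print(simulate(biased_split))  # the losing faction exits each round; output collapses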


David Hollander

unread,
Sep 6, 2017, 10:35:33 PM9/6/17
to electio...@googlegroups.com
An alternate method of analysis, which may preserve most existing assumptions, including the assumption of a closed group and a single voting method used in isolation, would be to develop a metric which measures the expected utility the average voter gains from disenfranchising another voter of their choice.

If a decision rule has no preference between outcomes with score distribution 1) where 1/2 of the population expects to live in a utopia and 1/2 of the population expects to live in a dystopia, and outcomes with score distribution 2) where everyone expects to live in an equal state of mediocrity, then the utility which the average voter expects to gain from participating in a strategy of disenfranchisement of another voter of their choice seems quite large.

If there is a 50-50 split between factions A and B, and a member of group A successfully disenfranchises a member of group B by removing their ballot from consideration by the voting system, this could possibly mean the difference between living in a dystopia vs a utopia. Their expected utility from disenfranchising another voter would be quite high. However, if the decision rule used by the group leads them to believe that there was a much higher chance of an outcome with score distribution 2 being selected rather than an outcome with score distribution 1, then their expected utility from engaging in a strategy of voter disenfranchisement should be much lower. If there is an expected cost which all individuals pay for participating in a strategy of disenfranchisement, perhaps associated with the probability of discovery and severity of punishment, then they may never attempt to cheat if the voting system employed by the group is a stable voting system in which the expected utility for the disenfranchisement of other voters is low.

In addition to creating a metric that measures the expected utility which the average voter will acquire from disenfranchising another voter of their choice, we might also create a metric which measures the expected utility which the average voter will acquire from excluding an outcome from consideration.

With a voting method which has no preference for lower-variance outcomes, the gains which the average individual may acquire from engaging in disenfranchisement and exclusion could be quite large; these antisocial behaviors could ultimately produce deadweight loss and group self-cannibalization which decreases the maximum achievable utilitarian sum, even in a closed-group model where no individuals are allowed to enter or leave.

So there may still be a role for analysis of the score variance, development of supplementary metrics dependent on the score variance, and a role for voting systems which minimize the score variance, in a manner which may be compatible with most of your initial assumptions. However we would now have to assume that the participants in voting games have the ability to 'cheat', and cause small manipulations to the input set of outcomes under consideration and the input set of ballots used to determine outcome desirability.
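A sketch of the first metric (toy ballots invented for illustration; a voter's utility for an outcome is assumed to be their own reported score for it):

from statistics import mean

def winner(ballots, rule):
    # ballots: list of dicts mapping outcome -> score in [0, 1]
    return max(ballots[0], key=lambda c: rule([b[c] for b in ballots]))

def disenfranchisement_gain(ballots, rule):
    # Average over voters of the best gain achievable by deleting one
    # other voter's ballot before the count.
    base = winner(ballots, rule)
    gains = []
    for v, mine in enumerate(ballots):
        best = 0.0
        for u in range(len(ballots)):
            if u != v:
                w = winner(ballots[:u] + ballots[u + 1:], rule)
                best = max(best, mine[w] - mine[base])
        gains.append(best)
    return mean(gains)

range_rule = lambda xs: mean(xs)
def cwr_rule(xs):
    return mean(xs) * (1 - 2 * mean(abs(a - b) for a in xs for b in xs))

# Two factions split 2-2 between utopias A and B, plus a mediocre compromise M.
ballots = ([{"A": 1.0, "B": 0.0, "M": 0.6}] * 2 +
           [{"A": 0.0, "B": 1.0, "M": 0.6}] * 2)
print(disenfranchisement_gain(ballots, range_rule))  # 0.4: one deleted ballot flips the winner
print(disenfranchisement_gain(ballots, cwr_rule))    # 0.0: no single deletion changes anything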

Clay Shentrup

unread,
Sep 7, 2017, 2:20:57 AM9/7/17
to The Center for Election Science
YOU might be the utility monster.

David Hollander

unread,
Sep 7, 2017, 2:27:57 PM9/7/17
to electio...@googlegroups.com
Should I take this as ad-hominem or the beginning of a more compelling argument?

My usage of the term was in reference to Robert Nozick's critique of utilitarianism. If we wish to entertain the possibility that his hypothetical 'utility monster' exists, then we must consider the possibility that his critique is valid. If there is a possibility that his critique is valid (perhaps under a certain set of additional assumptions it is, and under another set it isn't), then it seems reasonable to allow for the discussion and exploration of multiple alternative models. Continued investment in the production of additional models could allow us to assemble a diversified portfolio of tools and metrics for the quantitative measurement of voting system properties.


Clay Shentrup

unread,
Sep 8, 2017, 12:27:02 AM9/8/17
to The Center for Election Science
I'm saying what Harsanyi said: that the outcome that maximizes the net utility of the group also maximizes the expected utility of any random person given that he is unsure of his identity. So my point there is, any given person could be the utility monster.

Of course if you KNOW that you're, say, a 1% member, then you do not want the utility-maximizing outcome, because you may know of another that's bad for society as a whole, but good for you. You still want to maximize your own expected utility, but that just might not be the same as the outcome that maximizes society's utility.

David Hollander

unread,
Sep 8, 2017, 11:32:56 AM9/8/17
to electio...@googlegroups.com
Using an assumption that individual identity does not exist to justify the position that 'anyone can win' seems extremely sophistic. If you ask someone to close their eyes, they are not obligated to believe that there is now an equal chance of it being night or day. Harsanyi's assumption does not seem to be a necessary or useful one to make. In elections, selecting a person at random for purposes of measurement, or creating sample statistics dependent upon the selection of a person at random, does not imply that a random lottery ever took place.

In random lotteries, the outcomes of repeated trials with fair dice are statistically independent; in elections, the outcomes of repeated votes can be highly dependent. In random lotteries, the expected value of the lottery does not change between trials; in elections, the expected value of the election can change dramatically between trials. In lotteries, if an individual begins to lose, they should not necessarily expect to continue losing in the future, if they have perfect knowledge that it is a purely random event; in elections, if an individual begins to lose, they should strongly suspect that they will continue losing in the future. In a random lottery, the outcome is not dependent on the current state of the system; in an election, the outcome is heavily dependent on the current state of the system. With repeated trials, it is possible for the same individuals to win every time and for the same individuals to lose every time. Depending on the nature of what is being voted on, wealth and power can quickly and permanently be concentrated in the hands of the few, and a decision which appears to maximize utility based upon information available to voters in the present is not necessarily the decision which will do so over the long run, once new information is made available or becomes more widely distributed in the future.

I think you are missing the main conclusion of the UM thought experiment: that utilitarianism is inherently anti-egalitarian and can easily lead to unnecessary suffering, depending on the subtleties of the problem. I think a 1% class should certainly be in favor of utilitarianism, as long as they are able to make a small investment to ensure that outcomes which both maximize economic efficiency and minimize economic inequality are kept off of a ballot or out of public discussion, as that would allow outcomes which produce large levels of inequality to be viewed as equally desirable as outcomes which produce low levels of inequality. Gains produced when the unequal outcome is chosen can then be reinvested between trials to ensure that information concerning options with the potential to produce greater net utility is unequally distributed, so that the greatest net utility is never actually achieved. If gains from previous elections are reinvested to keep strong options out of the running, it increases the competitiveness of dismal tradeoffs which produce large inequality, in a manner which ensures the current winners stay winners and the current losers stay losers. If you want a historical perspective on this, I would recommend "The Corruption of Economics: Neo-classical Economics as a Stratagem Against Henry George" by the economist Mason Gaffney.


Clay Shentrup

unread,
Sep 8, 2017, 11:20:19 PM9/8/17
to The Center for Election Science
On Friday, September 8, 2017 at 8:32:56 AM UTC-7, David Hollander wrote:
Using an assumption that individual identity does not exist to justify the position that 'anyone can win' seems extremely sophistic.

No one said identity doesn't exist. The social welfare function just can't know it; it just sees a bunch of utility values. Likewise, a voting method just sees a bunch of ballots. It can't identify yours or mine.

David Hollander

unread,
Sep 9, 2017, 5:33:23 PM9/9/17
to electio...@googlegroups.com
You did in the following quote:

> I'm saying what Harsanyi said: that the outcome that maximizes the net utility of the group also maximizes the expected utility of any random person given that he is unsure of his identity

The pronoun 'he' in 'he is unsure' appears to refer to the individual's perception of themselves, not the group's perception of the individual. I don't see any compelling reason to accept this 'given' and I'm not going to. I've already shared my thoughts on what voting methods can do and why a voting method already has access to substantially more information than only the sum or average.

I don't think continuing this conversation is likely to be mutually profitable at present. However, I can notify you when I upload new notes in the future which may clarify my position, if you wish to continue it.



Clay Shentrup

unread,
Sep 10, 2017, 1:32:22 AM9/10/17
to The Center for Election Science
On Saturday, September 9, 2017 at 2:33:23 PM UTC-7, David Hollander wrote:
The pronoun 'he' in 'he is unsure' appears to refer to the individual's perception of themselves, not the group's perception of the individual.

I didn't say anything about the group's perception of the individual. If the voter looks at two utility distributions, he cannot tell which utility represents him. He is "unsure of his identity". So the only relevant consideration for him is his expected utility, which is effectively equivalent to the sum.
 
I don't see any compelling reason to accept this 'given' and I'm not going to.

For the most part, it's just a reality. A caveat could be something like, Plurality Voting is beneficial for the 1%, and the 1% know who they are. But that's a rare caveat. Most people are ignorant of their identity.

I've already shared my thoughts on what voting methods can do and why a voting method already has access to substantially more information than only the sum or average.

Yes, it does. But that information isn't useful to rational people who are just trying to maximize their expected wellbeing.

Steve Cobb

unread,
Sep 11, 2017, 8:13:35 PM9/11/17
to The Center for Election Science

>>Most people here have a background in mathematics, not political science, economics, sociology, or philosophy.


>Translation: the math depends on empirical data inputs that we may not be experts in.

>(I think that's what you're saying.)


Theory and principles, not just data.

Voting is just a piece of larger systems within systems, with feedback loops. Besides electing people and approving some ballot measures, there are other group-related decisions, e.g. association and constitutions. Without using the term, you referred to John Rawls’ “veil of ignorance”. David Hollander referred to Robert Nozick, whose book Anarchy, State, and Utopia was a response to Rawls. In the past I’ve recommended to you The Calculus of Consent, which won one of the co-authors a Nobel Prize; it also casts doubt on voting. 


In the case of association, have you heard of the book The Big Sort? 

https://www.goodreads.com/book/show/2569072

The author laments Americans’ increasingly living with or near those demographically and ideologically similar to themselves, but this is a logical consequence of increasing politicization. If groups can impose on their members decisions with increased scope, then people should be more careful about the groups that they join. They would find it more important to join groups with members more like themselves, e.g. move to political districts with like-minded neighbors. They thus achieve consensus results without the need for voting mechanisms like consensus weighting or higher decision bases. This is of course more harmful to people on the lower socioeconomic end, who are less able to benefit from interaction with the wealthy.


>To me this is like helping someone who doesn't believe in Western medicine pick the best placebo.


Western doctors sometimes disagree. One hopes that one’s doctor is not excessively confident, especially in controversial areas at the edges of his specialization, bordering on areas where other specialists see things differently.


Anyway, I look forward to someday seeing some simulation results of VSE vs. Consensus Weighting.


Steve Cobb

unread,
Sep 11, 2017, 9:05:20 PM9/11/17
to The Center for Election Science

Voting differs from group-estimation problems, like estimating the weight of a cow:

http://www.npr.org/sections/money/2015/08/07/429720443/17-205-people-guessed-the-weight-of-a-cow-heres-how-they-did

in one key respect: utility. Voters are expected to contribute both neutral information (e.g. which candidate is most qualified to be president, which restaurant has the lowest prices) and interest information (e.g. which candidate will benefit the voter, which is the preferred cuisine). For different decisions, the ratio of neutral to interest information varies widely.

In the case of excellence awards:

https://en.wikipedia.org/wiki/List_of_prizes,_medals_and_awards

we expect the voters/judges to be neutral, but probably they have favorites, and so interest plays some minimal role. For the Webby Awards, Jameson devised a pretty cool system combining elements of the N-interviewer secretary problem, honeybee voting, and maybe the Kalman filter:

https://electology.org/blog/behind-webby-curtain

The large number of voter/judges were fed a stream of sites to evaluate, and couldn't choose which ones. So if there was virtually no personal interest in the outcome, where was the utility?

Jameson could probably comment on the statistical processing he applied to noisy judges. 

Andy Jennings

unread,
Sep 14, 2017, 7:59:24 PM9/14/17
to electionscience
1. I was wrong about the standard deviation.  If N is the number of voters, one person changing their grade by eps will affect the average by eps/N but can affect the standard deviation by something like eps/sqrt(N).  So if the deciding number is "avg - k * std_dev", then k had better be less than 1/sqrt(N).  Not helpful.
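To make that concrete, here is a minimal Python sketch (the grades and k = 1/2 are made up purely for illustration): with six unanimous voters, one voter raising their grade by 1 moves the average up by only 1/6 but moves the standard deviation up by sqrt(5)/6, about 0.37, so the "avg - k * std_dev" score actually drops.

from statistics import mean, pstdev

k = 0.5                        # illustrative multiplier
grades = [5.0] * 6             # six voters, all giving the same grade
bumped = [6.0] + [5.0] * 5     # one voter raises their grade by eps = 1

def score(g):
    # the "avg - k * std_dev" deciding number
    return mean(g) - k * pstdev(g)

print(score(grades))  # 5.0
print(score(bumped))  # ~4.98 -- lower, even though a grade went up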

MAD, on the other hand, can change by at most 2*eps*(N-1)/N^2, while the average still moves by eps/N.  The average's gain therefore dominates whenever k < N/(2*(N-1)), and since that bound is at least 1/2 for every N, making k < 1/2 keeps "avg - k * MAD" monotone with the votes.
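And a quick brute-force spot-check of the MAD version (a sketch, assuming MAD here means the pairwise mean absolute difference, which is what matches the 2*eps*(N-1)/N^2 bound; the random grades are illustrative):

from itertools import combinations
import random

def mad(grades):
    # pairwise mean absolute difference over all N^2 ordered pairs
    n = len(grades)
    return 2 * sum(abs(a - b) for a, b in combinations(grades, 2)) / n**2

def score(grades, k=0.5):
    return sum(grades) / len(grades) - k * mad(grades)

random.seed(0)
for _ in range(10000):
    g = [random.uniform(0, 10) for _ in range(random.randint(2, 8))]
    i = random.randrange(len(g))
    eps = random.uniform(0, 1)
    bumped = g[:i] + [g[i] + eps] + g[i+1:]
    assert score(bumped) >= score(g)  # raising a grade never lowers the score
print("monotone in all 10,000 trials")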

2. I still wonder about making the multiplier, or the formula, vary with N and using it for a quorum rule.

3. I like Steve's idea of a group that decides in advance to value consensus.  The easy scenario is an informal one, for example a group deciding where to go to dinner.  They might pre-decide that they'd rather go somewhere that everyone gives 6/10 over somewhere that seven people love and three people hate, but they may have no idea which actual choices are divisive and which aren't.  So they tell the voting system to handle it for them.  Punish polarity.

The alternative is to take a poll of just first-order preferences ("how much do you like the food at each restaurant?") and then let people see the distribution of ratings for each choice, then ask people for a second-order vote ("how much do you want to go to each restaurant, considering the quality of the food _and_ the feelings of others in the group?").

Perhaps there is a political party that would like to choose its nominee(s) this way.  They'd rather be united on someone with a slightly lower average grade than choose a divisive nominee with a higher average grade.

~ Andy

Clay Shentrup

unread,
Sep 15, 2017, 7:30:25 PM9/15/17
to The Center for Election Science
On Thursday, September 14, 2017 at 7:59:24 PM UTC-4, Andrew Jennings wrote:
3. I like Steve's idea of a group that decides in advance to value consensus.  The easy scenario is an informal one, for example a group deciding where to go to dinner.

I guess a case where something like this could be reasonable is like, suppose I'm vegan (which I am, extremely so) and I know my identity, ergo I know that there's a very high likelihood we'll end up some place where I literally can't eat anything. This only works because there's no veil of ignorance for me.

Though that's not about consensus per se; it's about having some minimum threshold of happiness. That is, {9,8} is better than {7,7}, definitively. But maybe {9,0} is worse than {4,4}?
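(For what it's worth, plugging those pairs into Andy's "avg - k * MAD" rule with k = 1/2, purely as an illustration, agrees on both counts: {9,8} scores 8.5 - 0.5*0.5 = 8.25 versus 7.0 for {7,7}, while {9,0} scores 4.5 - 0.5*4.5 = 2.25 versus 4.0 for {4,4}.)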

Steve Cobb

unread,
Sep 16, 2017, 5:18:42 AM9/16/17
to The Center for Election Science
What if our group contained someone who thinks veganism is silly (there is always one such person, right?) and who thinks you ought to eat meat? He's not exactly sadistic, but he would derive pleasure from seeing you go to a normal restaurant and eat proper food. Interesting--the "pizza candidate" isn't really a compromise solution for vegans. ;)