Did you know that Warren’s BR simulation was recreated by Jameson in Python? Instead of BR, we’re now trying to refer to Voter Satisfaction Efficiency (BR normalized and with the sign flipped):
https://github.com/electology/vse-sim
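(For anyone new to the term, here is my paraphrase of the definition, not Jameson's code: VSE rescales the average achieved utility against a random-winner baseline, so 1.0 means the method always elects the utility-maximizing candidate and 0.0 means it does no better than picking a winner at random.)

    def vse(avg_chosen_utility, avg_random_utility, avg_optimal_utility):
        # Voter Satisfaction Efficiency as paraphrased above; the argument
        # names are mine, not vse-sim's.
        return ((avg_chosen_utility - avg_random_utility) /
                (avg_optimal_utility - avg_random_utility))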
He also has results for the newer score-preferential methods STAR/SRV and 3-2-1:
http://rpubs.com/Jameson-Quinn
How hard would it be to add a consensus-weighting knob from 0 to 1.0?
What would be a reasonable consensus weighting value? Perhaps running the simulation across a range of consensus values would reveal some optimum.
In many cases, incorporating consensus would affect the candidates who get nominated.
Low/high score dispersion -> high/low consensus -> low/high risk: what a cool concept.
--
- Multiply by "1 - k * MAD(candidate's scores)"
- Subtract k standard deviations
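A minimal sketch of those two adjustments, to make the knob concrete. The 0-to-1 score scale, the reading of MAD as median absolute deviation, and subtracting the standard deviations from the mean score are all assumptions on my part; the proposals above leave those details open.

    import statistics

    def consensus_adjusted(scores, k, method="mad"):
        # scores: one candidate's ballot scores, assumed rescaled to 0..1
        # k: the consensus-weighting knob, 0 (plain Score) up to 1
        mean = statistics.mean(scores)
        if method == "mad":
            # Multiply by "1 - k * MAD(candidate's scores)"
            med = statistics.median(scores)
            mad = statistics.median(abs(s - med) for s in scores)
            return mean * (1 - k * mad)
        if method == "stdev":
            # Subtract k standard deviations (here: from the mean score)
            return mean - k * statistics.pstdev(scores)
        raise ValueError(method)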
Utility in the quoted context referred to aggregate utility and the group utilitarian sum.
Selecting a choice which maximizes the utilitarian sum for the group does not necessarily maximize utility for the individual.
It may actually minimize utility and the expected welfare for individuals who belong to a minority subgroup.
It may create a rational incentive for them to secede or create rival groups in order to maximize individual utility, in a manner which introduces systematic risk negatively impacting the level of welfare which is actually achieved.
Because the actual level of welfare which a social choice will achieve is non-deterministic and not known at the time of voting, and because information is imperfectly distributed (the information available to the group is not the same as the information available to each individual), it is also possible for voting methods which reliably succeed at selecting the option which maximizes the sum of individual expected welfare to fail at selecting the option which maximizes the sum of individual welfare actually achieved.
Utilities are bounded under 'relative utilitarianism'. This is an important applied assumption on which range voting and Bayesian Regret are based.
https://en.wikipedia.org/wiki/Relative_utilitarianism
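In code form, the bounding amounts to rescaling each voter's utilities to [0, 1] before summing. This is my reading of the linked article, not a quote from it:

    def normalize(utils):
        # Rescale one voter's utilities so worst -> 0 and best -> 1.
        lo, hi = min(utils), max(utils)
        if hi == lo:
            return [0.5 for _ in utils]  # indifferent voter; 0.5 is just a convention here
        return [(u - lo) / (hi - lo) for u in utils]

    def relative_utilitarian_sums(utilities_by_voter):
        # Candidate totals under relative utilitarianism: sums of normalized utilities.
        normalized = [normalize(row) for row in utilities_by_voter]
        return [sum(col) for col in zip(*normalized)]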
On Saturday, August 26, 2017 at 2:33:11 PM UTC-7, David Hollander wrote:
> Utility in the quoted context referred to aggregate utility and the group utilitarian sum.
The outcome which produces the highest aggregate utility is also the one that produces the greatest utility for any randomly selected member of the group. As noted by Harsanyi, discussed here:
> Selecting a choice which maximizes the utilitarian sum for the group does not necessarily maximize utility for the individual.
It maximizes the expected utility of any given randomly selected individual.
So if you have a choice to be in a universe where your group's net utility is 100 or 120, you want the latter, given you know nothing else.
Then you're back to caring about net welfare, not equality.
A utility-maximizing voting system already accounts for the fact that a theoretical person in the majority may cause the election of a candidate that he "favors" but whose decisions end up causing political turmoil that makes him worse off.
Warren's sims measure actual welfare that is strictly achieved. The difference between that and voter expectations is accounted for by ignorance factors.
Warren's utilities are not equivalent to Score Voting. They are converted to ballots via normalization. (Not to mention strategic exaggeration for some voters.)
> 'Tyranny of the majority' is a problem where social welfare functions and election methods select outcomes which make a minority substantially worse off in order to make a majority better off.
You have this completely backward. Utilitarianism is the complete opposite of "tyranny of the majority". Net utility can be increased by making the majority a little worse off while making the minority substantially better off.
The outcomes which an election method selects do not necessarily distribute gains and losses randomly; you are only selecting an individual at random via a contrived example.
The outcomes selected by the election method will most likely distribute gains non-randomly, depending upon the specific substructures of the specific group in question.
It is quite possible that, for a given group with a specific substructure and a finite number of options to choose between, members of a minority subgroup will reliably receive zero utility from the outcomes which a utilitarian election method selects, while members of a controlling subgroup reliably receive maximum utility from them.
> So if you have a choice to be in a universe where your group's net utility is 100 or 120, you want the latter, given you know nothing else.
This completely ignores risk and consensus and other sample statistics which can be obtained from already collected voter scores, such as dispersion.
Consensus: general agreement ("a consensus of opinion among judges"). Synonyms: agreement, harmony, concurrence, accord, unity, unanimity, solidarity.
It's quite to construct scenarios in which the net-100 option is preferable to the net-120 option.
For instance, if two voters unanimously agreed the net-100 option had a score of 50, while there was strong disagreement concerning the net-120 option, with one voter rating it zero and the other rating it 120, then selecting the net-100 outcome would produce strong agreement whereas selecting the net-120 outcome would produce strong disagreement.
If you are risk averse and value stability and low volatility for a guaranteed return, then the net-100 option is certainly preferable.
However, if the net-120 option was arrived at via unanimous agreement by both voters to rate it 60, and the net-100 option had one voter rate it 0 and the other rate it 100, then the net-120 option would clearly be vastly superior.
This tradeoff can be handled automatically by weighting the sum by the dispersion.
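To put numbers on the two scenarios above, here is one possible dispersion weighting; the MAD-divided-by-mean normalization and k = 0.5 are arbitrary choices I made only to keep the toy example scale-free, not a specific proposal:

    import statistics

    def dispersion_weighted(scores, k=0.5):
        total = sum(scores)
        med = statistics.median(scores)
        mad = statistics.median(abs(s - med) for s in scores)
        rel = mad / statistics.mean(scores) if total else 0
        return total * (1 - k * rel)

    print(dispersion_weighted([50, 50]))   # net-100, unanimous -> 100.0 (wins)
    print(dispersion_weighted([0, 120]))   # net-120, polarized ->  60.0
    print(dispersion_weighted([60, 60]))   # net-120, unanimous -> 120.0 (wins)
    print(dispersion_weighted([0, 100]))   # net-100, polarized ->  50.0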
> Then you're back to caring about net welfare, not equality.
I believe weighting for score dispersion will address both.
Minimizing variance in voters' expected utility scores may maximize the net welfare which is actually achieved, in the same way that repeated low-volatility investment strategies often outperform high-risk strategies. It may also help avoid outcomes which create structural inequality between subgroups; structural inequality may itself create systematic risks and catastrophes which are hard for individuals to account for.
> A utility-maximizing voting system already accounts for the fact that a theoretical person in the majority may cause the election of a candidate that he "favors" but whose decisions end up causing political turmoil that makes him worse off.
Existing utility maximizing voting systems only address expected return.
They do not address variance, risk, and consensus.
If actual welfare depends on consensus, and voters cannot know whether the score they assign an outcome is similar to the scores which others assign before those scores have been communicated, then consensus cannot be accounted for by ignorance factors: it is not ignorance for a voter not to know something at a point in time before it is possible for them to know it.
With range voting, a 51% trivial majority faction can still unilaterally select the outcome, even if the chosen outcome is given the minimum possible score by the other 49% of voters.
With consensus weighted range voting, I believe it's possible to strictly prevent it as a matter of rule.
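One toy illustration of that 51/49 scenario, using the "subtract k standard deviations" form from earlier (the numbers and k = 1.0 are made up; note that the MAD form would not catch this particular case, since a 51% bloc also controls the median):

    import statistics

    def adjusted(scores, k=1.0):
        # "subtract k standard deviations" variant, on a 0..1 score scale
        return statistics.mean(scores) - k * statistics.pstdev(scores)

    a = [1.0] * 51 + [0.0] * 49   # faction candidate, min-maxed by the 51% bloc
    b = [0.5] * 100               # compromise everyone rates in the middle

    print(statistics.mean(a), statistics.mean(b))  # 0.51 vs 0.50 -> a wins plain Score
    print(adjusted(a), adjusted(b))                # ~0.01 vs 0.50 -> b wins when weighted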
If you have identifiable, fixed factions (e.g. large and small states, or northern and southern states) they can be guaranteed seats or other forms of influence, but what if the potentially abused voters are not identifiable in advance? Consensus-weighting seems like a way to offer everyone, e.g. the loss-averse, some assurance the result will not be too negative for anyone.
Jameson, what do you think about the statistical argument, that greater vote dispersion implies greater uncertainty about the right answer?
That is the wrong goal. The rational goal is to maximize your expected utility.
Also, do we have BR values for it?
What is contrived is an example where gains are non-randomly distributed. The election system cannot identify voters. It's not as if Score Voting can be better for Christians, or IRV better for women or anything of that nature. A caveat might be that Plurality is better for the proverbial 1%, because it makes the entire system more vulnerable to gamesmanship via moneyed special interests. But aside from that, you'd be hard pressed to design a system that consistently advantages or disadvantages any identifiable group, be they a race, religion, gender, etc.
And even if you could, so what? Your core premise is that you can design a system that is smarter than the voters comprising a controlling bloc. They will naively elect X, who will be so detestable to the minority that the minority will e.g. wage civil war, making things ultimately worse for the controlling bloc. Your theory requires that you are smarter than them and can design a system that acts to preserve their best interest better than they themselves could. I think this is unfounded to put it mildly. It really is the epitome of hubris.
This would only hold in a highly contrived scenario where the majority is consolidated into a highly specific spot in issue space. In reality, voters and candidates are spread over some continuum (albeit they may be in a heavily bimodal distribution), and the minority tugs the center in their direction. So no, they do not have "zero" influence. (Which must be what you meant when you said "zero utility", since that would mean they are all dead.)
"Consensus" is a vague layman's term which has no precise mathematical meaning that's useful for talking about voting methods ... In voting theory, we have more precise terms, like utility.
This is an oxymoron. If you initially like Y more than X, but you think electing Y will lead to instability, then you may take that into account and reassess, deciding that X is actually better, all things considered. Then THAT is the basis for your final utility estimation. You then ultimately DO prefer X to Y. Of course, maybe that is reality but you don't have that wise realization, and so you ignorantly vote for Y.
Saying there's "strong disagreement" is just a redundant statement of the differing utilities. It is not new information. The net-120 option already takes those differing utilities into account, and adds up to a greater number.
No, 100 is clearly less than 120.
If you state your precise social welfare function, we can do a reductio ad absurdum example to once again demonstrate that an additive welfare function is the only tenable one. I can't read your definition because you have broken HTML. See attached image.
Ignorance here just means a disparity between what will actually happen and what a voter expects will happen (which forms the basis for his vote). If you don't like that term, you're just arguing semantics.
And voters can absolutely have some idea about how polarizing the candidates are, or how likely a given candidate might be to incite civil unrest.
For your system to actually produce better results, you have to make massive assumptions about voters being extremely bad at that, whilst your system will be extremely good at it, enough so to overcome its distortionary effects.
Also, do we have BR values for it?
There are a vast number of scenarios in which gains are non-randomly distributed. A politician could certainly run on a campaign of identifying and then killing or enslaving all Christians.
If an election were held to decide how an area of land should be partitioned, there would certainly be proposals submitted which distribute gains non-randomly. It would be rational for factions to submit gerrymandered maps, and members of a majority group may rationally decide to support gerrymandering in order to maximize their gains and power over other groups, if doing so satisfies their preferences.
This was actually pointed out to me by Warren, who argued that range voting was 'not good enough' for selecting maps, which is why the splitline algorithm was needed to randomly distribute gains without direct measurement of voter scores. So if range voting performs nearly perfectly on voting metric X, but is still so flawed that a random algorithm which does not even measure voter scores can beat it in certain scenarios, then it is clear that voting metric X is not a universal measure of all possible properties of a good method of group decision making in all contexts.
It is certainly possible for election systems which operate upon cardinal utility scores with a range of 3 or more to distinguish polarizing outcomes from consensus outcomes by measuring the dispersion and central tendency of scores relative to the social average.
It is possible for a voting system to determine whether a particular outcome is being min-maxed, and whether the average score for a particular outcome is likely to suffer from a large normalization error.
For outcomes which are heavily min-maxed and whose average ratings suffer from large normalization error, we really don't know whether the 'true' unbounded utilitarian sum is radically positive or negative.
If we are risk averse we will avoid selecting these outcomes in favor of reliable positives.
It is hubris to think that any voter can fully account for systematic risk at the time of voting.
It is not possible for individual voters to account for this in their reported scores at the time of voting, because they do not know the scores which others will report.
The belief that individual voters can fully account for risk prior to voting and prior to the release of results is hubristic, because it implies an epistemology in which an individual can obtain perfect knowledge of how others will act and what their true internal preferences are, before they have actually acted or publicly communicated them.
Intra-group violence and the risk of voters being killed is certainly a possibility which needs to be accounted for in a political context.
If a voting system can possibly select outcomes involving the murder of a certain % of the population, then there will be a much greater investment in non-democratic efforts to constrain outcomes.
> This is an oxymoron. If you initially like Y more than X, but you think electing Y will lead to instability, then you may take that into account and reassess, deciding that X is actually better, all things considered. Then THAT is the basis for your final utility estimation. You then ultimately DO prefer X to Y. Of course, maybe that is reality but you don't have that wise realization, and so you ignorantly vote for Y.
That's logically impossible. You don't know whether Y will lead to instability until the other voters have already revealed their scores.
I would consider options in which voter scores have high variance to be inherently undesirable.
> ordinary low-BR methods like Score/Approval/321/etc.
Doesn't 321 weight consensus by throwing out the most disliked among the liked candidates, even though that disliked candidate could have been the score winner?
From the point of view of the voting system, "Christians" is "random". A voting system does not know what a Christian is.
If you think your proposal would do better, add it to Warren's simulator and see what Bayesian Regret it achieves. So far, you just have speculation that it'll do better via the implausible mechanism of being smarter than voters.
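For what it's worth, a self-contained toy version of that experiment could look like the sketch below. It is not vse-sim's API or Warren's code; it assumes honest voters, i.i.d. normal utilities, per-voter normalized ballots, and a placeholder k = 0.5, all of which are my own modeling choices:

    import random, statistics

    def normalize(row):
        lo, hi = min(row), max(row)
        return [0.5] * len(row) if hi == lo else [(x - lo) / (hi - lo) for x in row]

    def score_winner(u):
        ballots = [normalize(r) for r in u]
        totals = [sum(b[c] for b in ballots) for c in range(len(u[0]))]
        return totals.index(max(totals))

    def consensus_weighted_winner(u, k=0.5):
        ballots = [normalize(r) for r in u]
        adjusted = []
        for c in range(len(u[0])):
            col = [b[c] for b in ballots]
            adjusted.append(statistics.mean(col) - k * statistics.pstdev(col))
        return adjusted.index(max(adjusted))

    def bayesian_regret(method, n_voters=99, n_cands=5, trials=2000):
        # Average shortfall of the winner's total utility vs. the best available candidate.
        total = 0.0
        for _ in range(trials):
            u = [[random.gauss(0, 1) for _ in range(n_cands)] for _ in range(n_voters)]
            sums = [sum(u[v][c] for v in range(n_voters)) for c in range(n_cands)]
            total += max(sums) - sums[method(u)]
        return total / trials

    print(bayesian_regret(score_winner))
    print(bayesian_regret(consensus_weighted_winner))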
Using a voting method at all is difficult for map drawing, because you could literally submit trillions of different districtings. And voters would have an incredibly hard time converting a particular arrangement to an estimate of their personal utility under that regime.
You can't feasibly convert the utilities of individual politicians into a net utility of putting them into a legislature together. This isn't additive. It would be like trying to use Score Voting to decide between 10 basically anonymous people. The problem isn't Score Voting, it's a lack of information.
Now, what's a better indicator of the "unforeseen regret" that a candidate will bring about? Polarization of scores, or specific policy statements like, "We're going to bomb North Korea and build a wall between our neighbor to the South."? I can't answer that though I suspect it's more the latter. But I also don't think you can answer it either.
And in order for your proposal to work, it has to do such a phenomenal job of preventing extremely polarizing outcomes (which are already expected to be very rare with Score Voting) so often that it still makes up for the significant loss it introduces.
No, not perfect knowledge of course. Knowledge like, "I watch the news and know that Californians are probably going to secede if this nut job is elected President."
This seems farcical, because there are alternatives like Score/Approval/Condorcet that we expect to elect boring relatively competent middle-of-the-road non-extremist types. People like Angela Merkel and Emmanuel Macron don't go ordering the slaughter of minorities.
I am not a utilitarian and do not subscribe to the same assumptions as to what constitutes 'better'.
Liberal democracies have many checks and balances, laws, rules, restrictions, institutions, and accumulated capital useful for the maintenance of a strong public sphere. These factors fall outside of the game theoretic properties of the voting system.
--
--
A utilitarian is one who believes the social utility function is just the sum of individual utilities.
And as I said, there is no viable alternative function. Propose one and I will demonstrate that.
>Warren's utilities are not equivalent to Score Voting. They are converted to ballots via normalization.
>(Not to mention strategic exaggeration for some voters.)
>Bayesian Regret can utilize any utility generator you like. There is no requirement for bounded utilities.
Indeed, as Warren said:
http://rangevoting.org/BayRegExec.html
“Even if utilities for different humans are regarded as inherently unknowable, unmeasurable, and inapproximable by any physical/biological process, that would not affect the validity of the Bayesian Regret methodology and its conclusions one iota.”
CES generally focuses on voting methods and systems—how a given group should make a given decision—but often we stray outside into issues of constitutions, politics, and even specific political issues. That’s probably OK, but maybe sometimes we need to be reminded of our place. Most people here have a background in mathematics, not political science, economics, sociology, or philosophy.
>And even if you could, so what? Your core premise is that you can design a system that is smarter than the voters comprising a controlling bloc.
No! If at the meta-decision level some group decides that they value consensus, or something else in addition to VSE, that’s their business, and it’s our role as consultants to design them a system that meets their requirements. We don’t need to agree now on what those groups and decisions might be. The question is, for situations with consensus requirements, would Consensus-Weighted Score Voting be a useful tool in our toolbox?
>That is the wrong goal. The rational goal is to maximize your expected utility.
>A rational entity does not care about equality, only about maximizing its expected welfare.
Talk about hubris—you claim to know the utility function of every rational entity? Maybe some rational entities’ welfare depends on others’ welfare. My welfare certainly depends on that of my kids.
Rational entities want to maximize their welfare *over time*, and it might mean not participating in the decision, or future decisions, at all. The “civil war” scenario comes in more common, less hyperbolic variations. Most important are votes that would otherwise not take place, because the group would never form. Also, yes—group members could leave and not participate in future collaborations. We need to think more broadly about other electorate types, not just the captive electorate of a political district. Instead of “civil war”, I’d call this scenario “exit” or “non-participation”.
Do SV votes necessarily reflect all utility? What if the voters, say a group of Quakers, derive utility from consensus? How would they know the consensus candidate in advance of the vote? What if only one faction values consensus? What about the opposite: hyper-partisan Americans who derive utility from their adversaries’ loss and suffering?
>"Consensus" is a vague layman's term which has no precise mathematical meaning that's useful for talking about voting methods.
It is a quite useful term, which, like other seemingly (to the layman) vague terms (e.g. “quality”), can be given precise mathematical meanings. In the case of Choose-One Plurality Voting and Approval Voting, we have consensus measures called the decision basis (plurality, majority, and super-majority). The Analytic Hierarchy Process (AHP) has a consensus metric (Microsoft Project Server even calculates it), and another AHP consensus metric is suggested here:
http://bpmsg.com/ahp-consensus/
CWSV provides a new consensus measure which sounds pretty useful.
>In voting theory, we have more precise terms, like utility.
>If Bob thinks X=10, Y=6, and Alice thinks X=11, Y=7, is that "general agreement"?
>Gee, I don't know. But I have their exact utilities, which is vastly better than that subjective layman speak.
Utility is precise? Ask Andy what reaction he got to utility from an economist at a conference last year. How do you get someone’s utilities? I thought we agreed that score votes are only roughly based on the underlying utilities. It’s not clear what the SV votes mean, even if the scale (e.g. the zero point) is defined. A while back I asked if SV voters should vote on an absolute scale (based on reasonable but imaginary worst and best candidates) or a scale relative to the nominated candidates (worst gets lowest rating, best gets highest, and all others fall between). The answer from above was that voters should do whatever they want.
--
YOU might be the utility monster.
--
I'm saying what Harsanyi said: that the outcome that maximizes the net utility of the group also maximizes the expected utility of any random person, given that he is unsure of his identity. So my point there is, any given person could be the utility monster. Of course, if you KNOW that you're, say, a 1% member, then you do not want the utility-maximizing outcome, because you may know of another that's bad for society as a whole, but good for you. You still want to maximize your own expected utility, but that just might not be the same as the outcome that maximizes society's utility.
--
Using an assumption that individual identity does not exist to justify the position that 'anyone can win' seems extremely sophistic.
The pronoun 'he' in 'he is unsure' appears to refer to the individual's perception of themselves, not the group's perception of the individual.
I don't see any compelling reason to accept this 'given' and I'm not going to.
I've already shared my thoughts on what voting methods can do and why a voting method already has access to substantially more information than only the sum or average.
>>Most people here have a background in mathematics, not political science, economics, sociology, or philosophy.
>Translation: the math depends on empirical data inputs that we may not be experts in.
>(I think that's what you're saying.)
Theory and principles, not just data.
Voting is just a piece of larger systems within systems, with feedback loops. Besides electing people and approving some ballot measures, there are other group-related decisions, e.g. association and constitutions. Without using the term, you referred to John Rawls’ “veil of ignorance”. David Hollander referred to Robert Nozick, whose book Anarchy, State, and Utopia was a response to Rawls. In the past I’ve recommended to you The Calculus of Consent, which won one of the co-authors a Nobel Prize; it also casts doubt on voting.
In the case of association, have you heard of the book The Big Sort?
https://www.goodreads.com/book/show/2569072
The author laments Americans’ increasingly living with or near those demographically and ideologically similar to themselves, but this is a logical consequence of increasing politicization. If groups can impose on their members decisions with increased scope, then people should be more careful about the groups that they join. They would find it more important to join groups with members more like themselves, e.g. move to political districts with like-minded neighbors. They thus achieve consensus results without the need for voting mechanisms like consensus weighting or higher decision bases. This is of course more harmful to people on the lower socioeconomic end, who are less able to benefit from interaction with the wealthy.
>To me this is like helping someone who doesn't believe in Western medicine pick the best placebo.
Western doctors sometimes disagree. One hopes that one’s doctor is not excessively confident, especially in controversial areas at the edges of his specialization, bordering on areas where other specialists see things differently.
Anyway, I look forward to someday seeing some simulation results of VSE vs. Consensus Weighting.
Voting differs from group-estimation problems, like estimating the weight of a cow, in one key respect: utility. Voters are expected to contribute both neutral information (e.g. which candidate is most qualified to be president, which restaurant has the lowest prices) and interest information (e.g. which candidate will benefit the voter, which is the preferred cuisine). For different decisions, the ratio of neutral to interest information varies widely.
In the case of excellence awards:
https://en.wikipedia.org/wiki/List_of_prizes,_medals_and_awards
we expect the voters/judges to be neutral, but probably they have favorites, and so interest plays some minimal role. For the Webby Awards, Jameson devised a pretty cool system combining elements of the N-interviewer secretary problem, honeybee voting, and maybe the Kalman filter:
https://electology.org/blog/behind-webby-curtain
The large number of voters/judges were fed a stream of sites to evaluate, and couldn’t choose which. So if there was virtually no personal interest in the outcome, where was the utility?
Jameson could probably comment on the statistical processing he applied to noisy judges.
3. I like Steve's idea of a group that decides in advance to value consensus. The easy scenario is an informal one, for example a group deciding where to go to dinner.