[EM] Why proportional elections - Power arguments needed (Czech green party)
Kristofer Munsterhjelm
km-elmet at broadpark.no
Sat May 22 05:02:38 PDT 2010
Peter Zbornik wrote:
>
>
> On Wed, May 19, 2010 at 9:43 PM, Kristofer Munsterhjelm
> <km-elmet at broadpark.no> wrote:
> I may have given the link before, but I think it's a good graph
> showing this tradeoff for a council of two candidates:
> http://munsterhjelm.no/km/elections/multiwinner_tradeoffs/
>
> Scroll down a little to see the results graph.
>
> Yes, your example is cool and gives food for thought.
> However, some explanations might help.
> What is the Social optimum Pareto front? How do you calculate it? In
> what publication is the "Social optimum Pareto front" defined?
Since the voting simulator generates opinion data for every voter, it
can simply go through all possible councils and determine the
proportionality and satisfaction scores for each. Some of these councils
are bad (disproportional *and* low on satisfaction) and some of them
are good.
Let me first define a few terms before I go on, because they're needed
in order to describe how the simulator determines the social optimum front.
The simulator works by going through a large number of rounds. In each
round, opinions are assigned to the voters and candidates, ballots
generated depending on this, and proportionality and satisfaction scores
determined for each method (in the manner I have shown earlier).
What is important is that you can consider any given council to have two
error scores associated with it: the disproportionality "score" and the
regret "score", and for both of these, greater is worse.
When determining the social optimum, the method doesn't just generate
councils by running the generated ballots through methods - it generates
all possible councils of the configuration given by the round. The idea
here is that one of these must clearly be the best, and in a 2D
situation where there's a tradeoff, some of the councils will form a
Pareto front.
Consider a council "dominated" if there exists another council that is
better on either proportionality or regret, while the dominated council
is not better than that council on either measure. (If one is more
proportional than the other, but the other has lower regret, neither is
dominated.)
Then the Pareto front for that round is simply those councils (out of
all possible) that are undominated.
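As a concrete illustration, here is a minimal sketch in Python of that
domination test and front filtering (made-up names, not the simulator's
actual code):

def dominates(a, b):
    # a and b are (disproportionality, regret) pairs; lower is better on both.
    # a dominates b if a is no worse on either score and strictly better on one.
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(councils):
    # The undominated councils form the Pareto front for one round.
    return [c for c in councils
            if not any(dominates(other, c) for other in councils if other != c)]

# (0.2, 0.5) and (0.5, 0.2) trade off, so neither dominates the other;
# (0.6, 0.6) is dominated by both and drops out.
print(pareto_front([(0.2, 0.5), (0.5, 0.2), (0.6, 0.6)]))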
However, the whole point of using multiple rounds is to average out
oscillations - to get a representative sample. Thus, we need some way of
augmenting the first round's Pareto front with the second's, so we get
the Pareto front had all councils from both rounds been considered. The
simulator does this in the following way:
The first round, we record the score pairs for all the councils on the
Pareto front.
At any later round, call the score pairs we've retained (i.e. not of
this round) a_1...a_n, and the score pairs of the Pareto front for this
round, b_1...b_k. Then we generate an estimate:
for all i from 1 to n
    for all j from 1 to k
        c_i_j = a_i + b_j
where Z = X + Y means "Z's disproportionality score is the sum of X's and
Y's disproportionality scores, and Z's regret score is the sum of X's and
Y's regret scores".
Now c contains the Pareto front for the previous rounds and this round
combined. All that remains is to remove the dominated score pairs from c
and move c to a so that the same thing can be done the next round.
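In the same sketch style (Python, hypothetical names), that merge step
could look like this:

def merge_fronts(a, b):
    # a: retained score pairs (cumulative over all earlier rounds),
    # b: this round's Pareto front. Every c_ij = a_i + b_j is a candidate
    # cumulative score pair; pruning the dominated ones gives the front
    # for all rounds seen so far.
    c = [(ai[0] + bj[0], ai[1] + bj[1]) for ai in a for bj in b]
    return pareto_front(c)  # reuses pareto_front() from the sketch above

# Usage across rounds:
#   front = first_round_front
#   for every later round: front = merge_fronts(front, this_rounds_front)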
-
Put more simply: at each round, the simulator combines the Pareto front
for all previous rounds with the Pareto front for the current round,
then removes dominated councils. The number of points on that front
still grows very quickly, though, so it takes a long time and can only
be done for fewer rounds than the actual method testing.
The social optimum Pareto front then gives a barrier below which it's
impossible to go, at least for the particular situation given by the
parameters.
> On the page above, you write "Detailed data: election methods scores
> (Pareto front)" -What is the difference between the election methods
> scores and the Pareto front election data?
Not all methods lie on the Pareto front. For instance, the election
methods scores data includes a deliberately bad method:
Maj[Worst_Borda] 0.61001 0.99782
(last line), which has a normalized disproportionality of 0.61 and a
normalized regret of 0.99782, where 1 is worst possible. STV dominates
it handily:
STV 0.11907 0.09529
and so that deliberately bad method isn't listed on the Pareto front,
nor is it part of the graph.
> What is normalized Bayesian regret for ranked list, how do you calculate
> it? Could you please give a simple example?
As I've said before, the generator gives every candidate and voter a
binary array that defines their positions on a certain number of yes/no
issues. Voters prefer candidates that are close to them: when a voter is
to rank the candidates, each candidate gets one point for each opinion
the candidate and the voter have in common. Finally, some noise (mean
about 1/100th of a point) is added to get around the problem of equal
rank.
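A rough sketch of that scoring in Python (the particular noise model here
is my own assumption; the simulator's may differ):

import random

def rating(voter_opinions, candidate_opinions):
    # One point per yes/no issue on which voter and candidate agree,
    # plus a little noise (mean about 1/100th of a point) to break ties.
    agreement = sum(v == c for v, c in zip(voter_opinions, candidate_opinions))
    return agreement + random.uniform(0.0, 0.02)

voter = [1, 0, 1, 1, 0]
candidates = {"A": [1, 0, 0, 1, 0], "B": [0, 1, 1, 1, 1]}
# Rank the candidates for this voter, highest rating first.
print(sorted(candidates, key=lambda name: rating(voter, candidates[name]),
             reverse=True))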
This grants us both a ranking (higher score ranks first) and a rating
(the number of issues on which the candidate and the voter agree). The
ideal council in terms of Bayesian regret is then the one that consists
of the candidate the voters rated highest, the candidate they rated next
highest, and so on. To get the actual regrets, the simulator subtracts
the ratings sum of the council that was actually elected from the ideal
sum.
Normalization works as follows: for each round, the simulator also keeps
a record of the rating sums for the worst and best councils. If a method
consistently elected bad councils, its sum over all rounds would be
close to the sum for the worst councils. Thus, we can map the sum for
the worst councils to 1, and the sum for the best councils to 0,
normalizing the score to 0...1 no matter how many rounds were run.
I'll give a simple example for both. First, ordinary Bayesian regret.
Say that we have ballots of this sort:
100: A 20 pts, B 13 pts, C 10 pts, D 4 pts.
80: C 20 pts, B 10 pts, A 5 pts, D 4 pts.
20: D 20 pts, B 18 pts, A 13 pts, C 8 pts.
Then the sum ratings (utilities) are:
A: 2660 pts.
B: 2460 pts.
C: 2760 pts.
D: 1120 pts.
or, sorted:
C: 2760 pts.
A: 2660 pts.
B: 2460 pts.
D: 1120 pts.
Let's furthermore say there are two candidates to be elected. Then the
ideal council, Bayesian regret wise, would consist of C and A, for 5420
pts in total. The worst council would consist of B and D for 3580 pts in
total.
Let's continue with an example, and run the ballots through a
(hypothetical) method X, and it turns out that this method elects A and
D. Then the utility/rating score of that council is 2660 (A's) + 1120
(D's) = 3780. The regret is simply maximum minus this, or 5420 - 3780 =
1640.
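The same arithmetic in Python, with the hypothetical method X electing
A and D:

utilities = {"A": 2660, "B": 2460, "C": 2760, "D": 1120}
seats = 2

# Ideal council: the two highest-rated candidates (C and A).
ideal = sum(sorted(utilities.values(), reverse=True)[:seats])   # 5420

elected = {"A", "D"}
elected_utility = sum(utilities[c] for c in elected)            # 3780

print(ideal - elected_utility)                                  # regret: 1640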
That's ordinary regret. Now let's consider normalization over multiple
rounds. I'll list rating sums directly instead of converting them to
regrets, since that's simpler. Let's say we run the method for 5 rounds
and get
the following results:
round 1: worst: 3580, elected by X: 3780, best: 5420
round 2: worst: 1070, elected by X: 2300, best: 3590
round 3: worst: 100, elected by X: 1238, best: 1300
round 4: worst: 3120, elected by X: 4140, best: 5720
round 5: worst: 1667, elected by X: 3840, best: 9620
----------------------------------------------------
TOTAL : worst: 9537, elected by X:15298, best:25650
To normalize, we want to map worst (9537) to 1 and best (25650) to 0,
and get the value for "elected by X". To do this, we use:
norm = (actual - minimum) / (maximum - minimum)
or
norm = (15298 - 25650) / (9537 - 25650). Here the "minimum" is the best
(greatest) rating sum, because the greatest rating sum means the least
regret, and the "maximum" is the worst rating sum.
The normalized Bayesian regret for X then turns out to be about 0.642.
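Or, written out as a small Python function (a sketch, not the simulator's
code):

def normalized_regret(worst_sum, elected_sum, best_sum):
    # Map the best (greatest) rating sum to 0 and the worst to 1.
    return (elected_sum - best_sum) / (worst_sum - best_sum)

print(normalized_regret(9537, 15298, 25650))   # about 0.642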
> Game theory can be applied to single-winner methods, and has been
> with concepts like the uncovered set, minimal covering set,
> independence of Pareto-dominated alternatives and so on. Game theory
> can't really be applied to multiwinner methods because much less is
> known about multiplayer games. Further confounding the issue is the
> fact that voting is not rational in an economic sense; unless in a
> very small committee, any given voter has next to no chance of
> actually altering the outcome.
>
> I guess this is correct if we analyse the decision between voting and
> not voting for a "rational economic man".
> The situation is different, when we assume that a person will vote with
> a ranked ballot (like in the green party council elections or the
> elections in any organization).
> Then we get a "clean" economic problem.
> I cannot think of a simpler and more important example in real life,
> where the preference orderings of the actors are required to be
> explicitly stated.
Well, that's the problem. Game theory deals with the rational economic
man, a creature that will do what benefits him the most. Then the
argument basically goes: since voters decide to vote even though the
effect is vanishingly small, they aren't economic men, hence game theory
will be inaccurate.
If we ignore that and consider voters as having to vote (as they would
in a nation with compulsory voting) and as otherwise rational economic
beings, then that problem vanishes, but the difficulty arising from the
multiwinner election being, well, multiwinner, remains.
> I am personally interested in the possibility of measuring
> utility, is there some (preferably short) literature on social
> welfare, utility and voting theory for proportional elections (I
> know some undergrad maths and statistics)?
>
>
> My idea of proportionality ("proportional") and majoritarian
> satisfaction ("preferred") being two separate dimensions hasn't been
> formally investigated, but it makes sense.
>
> Could you please give a simple example of the calculations of the
> normalized Bayesian regret and majoritarian satisfaction.
I have done so above.
The Sainte-Laguë Index is normalized similarly: keep a running sum of the
worst and best index values, and map these to 1 and 0. For my earlier
tests (of proportionality alone), I normalized per round; that is another
way to do it, but I didn't do that in the graph: since I was using
Bayesian regret, it seemed sensible to normalize both in the same manner.
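For completeness, a sketch of that normalization for the Sainte-Laguë
Index (the index itself being the usual sum of (v - s)^2 / v over vote
and seat shares; the names below are made up):

def sainte_lague_index(vote_shares, seat_shares):
    # Disproportionality: 0 means perfectly proportional, larger is worse.
    return sum((v - s) ** 2 / v
               for v, s in zip(vote_shares, seat_shares) if v > 0)

def normalized_index(worst_total, actual_total, best_total):
    # Running sums over all rounds: map the best total to 0, the worst to 1.
    return (actual_total - best_total) / (worst_total - best_total)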