[EM] General PR question (from Andy Jennings in 2011)
Kristofer Munsterhjelm
km_elmet at t-online.de
Sun Sep 28 00:57:25 PDT 2014
On 09/28/2014 01:28 AM, Toby Pereira wrote:
> I was thinking recently again about Andy Jennings's PR question (below)
> and available here
> http://lists.electorama.com/pipermail/election-methods-electorama.com/2011-July/093278.html,
> which is about the trade-off between proportionality and having
> candidates with strong support. Warren Smith
> (http://lists.electorama.com/pipermail/election-methods-electorama.com/2011-July/126111.html)
> gave the extreme example of a 500-member parliament where two candidates
> each get 50% approval, and the others each get 0.2% approval. Perfect
> proportionality could be achieved by electing 500 candidates with 0.2%
> approval, but in many ways this would seem a perverse result.
>
> But the more I think about it, the more I think there isn't a
> non-arbitrary solution to the problem. What's the exchange rate between
> proportionality and support? There isn't an obvious answer.
This seems to reflect what I found out when doing my proportionality/BR
simulations years ago. There was a tradeoff between proportionality (as
measured by, say, the Sainte-Laguë index) and support (as measured by
total Bayesian Regret, where less is better).
Basically, my program gave each candidate and voter a number of yes/no
opinions and the voters ranked candidates in order of Hamming distance,
lower first. Then total BR is a sort of majoritarian support indicator
(minimized by electing a bunch of candidates everybody likes), and the
SLI (or LHI, Gini, etc.) can be calculated from the deviation of the
proportion of elected candidates holding opinion k from the proportion
of voters who hold it.
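A minimal sketch of that setup, with illustrative names and normalizations (this is an assumption-laden reconstruction, not the original simulation code):

```python
# Illustrative sketch of the simulation described above; function names
# and exact formulas are assumptions, not the author's actual program.
from itertools import combinations

def hamming(a, b):
    # number of yes/no opinions on which a and b disagree
    return sum(x != y for x, y in zip(a, b))

def total_disutility(voters, council):
    # summed Hamming distance from every voter to every elected candidate
    return sum(hamming(v, c) for v in voters for c in council)

def total_regret(voters, candidates, council):
    # total BR analogue: achieved disutility minus that of the best
    # possible council of the same size
    k = len(council)
    best = min(total_disutility(voters, s)
               for s in combinations(candidates, k))
    return total_disutility(voters, council) - best

def sainte_lague_index(voters, council):
    # for each opinion, compare its share among the elected candidates
    # with its share among the voters, Sainte-Lague style
    sli = 0.0
    for k in range(len(voters[0])):
        v = sum(vo[k] for vo in voters) / len(voters)
        s = sum(c[k] for c in council) / len(council)
        if v > 0:
            sli += (v - s) ** 2 / v
    return sli
```

With evenly split voters, a balanced council scores an SLI of zero while a one-sided council of equal total BR does not, which is the tradeoff in miniature.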
And the result becomes something like
http://munsterhjelm.no/km/elections/multiwinner_tradeoffs/
Still, you do get a Pareto front. And methods that are not on that
Pareto front are dominated by methods on it. So some methods are still
better than others. For instance, STV beats SNTV on both axes.
A bad system might elect 500 candidates with 0.1% support whereas a
better system elects 500 candidates with 0.2% support and is on the
Pareto front at the very proportional end of the scale. Or a bad system
might elect 500 left-wingers if the left wing has majority support,
rather than electing 500 BR-minimizing centrists. The latter, better
method would sit at the majority-support end of the front - at least if
you go by SLI vs. Bayesian regret.
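The Pareto-front idea above can be sketched directly: score each method as a point (SLI, total BR), both lower-is-better, and keep the methods no other method beats on both axes. The names and scores below are hypothetical, for illustration only:

```python
# Hedged sketch of the dominance test behind the Pareto front described
# above: each method is a (proportionality, regret) point, lower better
# on both axes.

def pareto_front(scores):
    """scores: dict of method name -> (sli, bayesian_regret)."""
    def dominates(a, b):
        # a dominates b if it is at least as good everywhere and
        # strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and a != b
    return {m for m, s in scores.items()
            if not any(dominates(t, s) for t in scores.values())}
```

In the STV-beats-SNTV example, SNTV is dominated and drops off the front, while methods that trade one axis for the other both remain on it.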
> However, this also assumes that every possible winning
> set of candidates would be looked at and the most proportional one
> found. In practice, the system might be used sequentially. This would
> force through the most popular candidate, and then each subsequent
> candidate would be elected to balance it proportionally. This would
> elect the two most popular candidates in Warren's example, but would
> fail to elect CDE in Andy's example. But given that there may be no
> non-arbitrary solution, electing sequentially may be the simplest and
> least arbitrary way around the problems we have. It is also a solution
> that would likely be forced upon us due to limits on computing power
> when it comes to comparing all possible sets of candidates. Necessity
> may force the pragmatic solution upon us.
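The sequential approach in the quote above can be sketched as a greedy procedure: seat the most broadly popular candidate first, then fill each later seat with whichever candidate best rebalances the council. The balancing rule below (greedily minimizing the opinion-based Sainte-Laguë deviation) is an illustrative assumption; the quoted text does not fix a specific rule:

```python
# Hedged sketch of sequential PR: first seat by overall popularity
# (lowest summed Hamming distance), later seats by greedy
# proportionality repair. All names are illustrative.

def sequential_council(voters, candidates, seats):
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def sli(council):
        # opinion-share deviation of the council from the voters
        total = 0.0
        for k in range(len(voters[0])):
            v = sum(vo[k] for vo in voters) / len(voters)
            s = sum(c[k] for c in council) / len(council)
            if v > 0:
                total += (v - s) ** 2 / v
        return total

    remaining = list(candidates)
    # seat 1: the broadly most popular candidate
    first = min(remaining, key=lambda c: sum(hamming(v, c) for v in voters))
    council = [first]
    remaining.remove(first)
    # later seats: greedily restore proportionality
    while len(council) < seats:
        nxt = min(remaining, key=lambda c: sli(council + [c]))
        council.append(nxt)
        remaining.remove(nxt)
    return council
```

As the quote notes, a greedy sequence like this elects the two popular candidates in Warren's example but can miss the jointly optimal set that an exhaustive search would find.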
I do somewhat suspect that optimal PR has very high complexity, but I
don't have a formal argument for it. Warren had a reduction of
proportional representation to geometric clustering, but I can't find
it. Perhaps one could sketch a similar problem:
Given n points in a square, choose k of them so that the sum, over
every point in the square, of the distance to the nearest chosen point
is minimized.
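A brute-force illustration of that problem, with the square discretized to a grid of sample points standing in for the continuous integral (all names here are mine, for illustration). The point is the cost: it tries every k-subset of the n candidate points, which is exponential in k:

```python
# Hedged, brute-force sketch of the clustering problem above. The
# continuous "every point on the square" is approximated by a grid of
# sample points; exact solutions enumerate all k-subsets, hence the
# suspicion that a PR method solving this exactly would be expensive.
from itertools import combinations, product
from math import dist, inf

def best_k_points(candidates, k, grid=20):
    # grid*grid evenly spaced sample points in the unit square
    voters = [((i + 0.5) / grid, (j + 0.5) / grid)
              for i, j in product(range(grid), repeat=2)]
    best, best_cost = None, inf
    for subset in combinations(candidates, k):
        # each sample point is served by its nearest chosen point
        cost = sum(min(dist(v, c) for c in subset) for v in voters)
        if cost < best_cost:
            best, best_cost = subset, cost
    return best
```

Even this toy version visits all C(n, k) subsets, so it blows up quickly as n and k grow, mirroring the complexity worry in the text.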
Then one can turn that into a PR election problem where the "voters" are
all the points in that square (integrate into regions of same preference
ordering) and the "candidates" are the n candidate points. But finding
the correct k out of n points is very hard (complexity-wise), and since
you can do it if you have a good PR algorithm, PR must also be very hard.
But there may be caveats here. For instance, the problem might be easy
with n or k small, and so "ideal" PR may still be practical for the
number of candidates or the council sizes that one would usually
encounter. Or perhaps cardinal PR elections are required for this to
work and the ordinal approximation is easy.
So it's just a hunch. But it would make sense, given how the complex
methods (like Schulze STV, CPO-STV or my own Sainte-Laguë-based method)
fail to run in polynomial time.
If ideal PR (by whatever definition) is impractical, then finding the
closest feasible approximation would probably be very hard, too, but in
a different sense. And "feasible" might not just be in terms of
complexity. In small elections, Schulze STV (say) works pretty well, but
hardly any ordinary voter can understand what it does, and so it's
not likely to be adopted. On the other hand, some New Zealand elections
use Meek's method, so how hard to understand is too hard to understand?
I don't know.