[EM] Fair and Democratic versus Majority Rules
Kristofer Munsterhjelm
km-elmet at broadpark.no
Mon Dec 13 15:29:05 PST 2010
fsimmons at pcc.edu wrote:
> So far we have established a formal analogy between lotteries (i.e. allocations
> of probability among the alternatives) in stochastic single winner methods and
> allocations of seats to parties in deterministic list PR methods.
In the single-winner case, holding lotteries seems to me to be rather
risky. In a scenario where 10% of the voters support a candidate with
extreme views, those extreme views will come into play 10% of the time.
There is little moderation, because either the candidate wins or he
doesn't; if he wins, he can push all the policies he wants - if not, he
can't push any of them.
The lottery *has* to be risky, at least from the majority's point of
view, in order to encourage a consensus. If the lottery is close to
majority rule, then the majority might be tempted to take the risk
instead of participating in the consensus, and the inclusive nature
would be lost.
However, the same risk means that if nobody swerves, the electorate will
end up with minority views some of the time, and when they do, the
minority has absolute control over that position. If the minority is
sufficiently fringe, then a single period of that control may be enough.
Thus, I would say that the greater the power of the position being
elected, the worse a purely random selection would be, because once
sample error gets you, it really gets you. If one is to use a random
fallback, one either has to be sure the voters will prefer a consensus,
or the position should be heavily checked.
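To make the sample-error point concrete, here is a small Python sketch (the names and numbers are mine, purely illustrative) of a random-favorite lottery with a 10% fringe faction. Over many independent elections, the fringe gets full control in roughly one election out of ten:

```python
import random

def random_favorite(support, rng):
    """Elect by random favorite: win probability proportional to first-preference support."""
    parties = list(support)
    return rng.choices(parties, weights=[support[p] for p in parties], k=1)[0]

rng = random.Random(42)
support = {"moderate": 90, "fringe": 10}
trials = 100_000
fringe_wins = sum(random_favorite(support, rng) == "fringe" for _ in range(trials))
print(fringe_wins / trials)  # close to 0.10: the fringe holds all the power that often
```

The point is that the outcome is all-or-nothing per term: the 10% figure is not 10% influence each term, but total control 10% of the time.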
(That's my reasoning for why those methods aren't that good,
single-winner-wise :-) I may have digressed, but I'll try to get back on
topic.)
> We left off with the promise that Jobst's solutions to the defection problem in
> the single winner lottery setting, when transferred (mutatis mutandis) to the
> multi-winner setting, would (without sacrificing determinism) revolutionize the
> world of list PR methods.
In the PR case, we *can* have partial power - we can have something
between all or nothing. Sample error (or being unlucky) factors much
less into things. So I would tend to think these methods would be better
applied to PR, although there are caveats.
> All of Jobst's recent lottery solutions are based on this fundamental insight:
> We should take the random favorite lottery F as a basic benchmark of democratic
> fairness, and consider any lottery C unanimously preferred (even if only
> "weakly") to F as a frosting-on-the-cake improvement. In the context of the
> example of our previous installment, F is the 60%A+40%B lottery, and C is the
> 100% C lottery. As in this example, so in general; because C is (at least
> weakly) preferred to F by 100% of the voters it is considered a consensus
> compromise.
If the A-party is completely coordinated (i.e. the representatives are
merely present to vote on the party's behalf), then it will prefer a
60% A-40% B outcome to a 100% C outcome: to such a party, 50%+1 of the
seats is as good as 100%.
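A quick Python sketch of that nonlinearity (the ratings below are hypothetical stand-ins, since the previous installment's numbers aren't reproduced here): on sincere ratings, C beats F on every ballot, but once the A party treats any majority share as total control, its effective preference flips:

```python
# Hypothetical ratings standing in for the previous installment's example:
# a 60% A-faction and a 40% B-faction, both rating the compromise C at 70.
ratings = {"A-faction": {"A": 100, "B": 0, "C": 70},
           "B-faction": {"A": 0, "B": 100, "C": 70}}

F = {"A": 0.6, "B": 0.4}   # benchmark lottery / seat allocation
C = {"C": 1.0}             # proposed consensus

def expected(alloc, faction):
    """Sincere expected rating of an allocation."""
    return sum(share * ratings[faction][party] for party, share in alloc.items())

# Sincerely, both factions prefer C to F (70 vs. 60, and 70 vs. 40)...
assert expected(C, "A-faction") > expected(F, "A-faction")
assert expected(C, "B-faction") > expected(F, "B-faction")

# ...but a fully coordinated party values any majority share as total control:
def coordinated_value(alloc, party):
    return 100 if alloc.get(party, 0) > 0.5 else 0

print(coordinated_value(F, "A"), coordinated_value(C, "A"))  # 100 0: A now prefers F
```

In other words, the unanimity that makes C a "consensus" holds for seat-share-linear utilities, but not once a majority bloc's payoff jumps discontinuously at 50%.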
This kind of problem has been brought up in the discussion of PR in
general (and of kingmaker parties in particular), and I think the
distortion of PR by power nonlinearities is usually small enough that
one can just elect "straightforwardly" using Webster or whatnot without
having to reweight the proportions in any way.
However, since the probabilistic-turned-PR method you detail relies
heavily on the threat of B acquiring power in order to get A to
cooperate, the threat is entirely nullified if A knows it has a majority
no matter what. It might also encourage parties to become more
hierarchical, because a coordinated party can accept smaller majorities
than a loose one.
60/40 might be a contrived example, but the distortion would show up in
less contrived cases as well. A minor party or its supporters may want
to block consensus because it knows it'll be "in the balance" and can
thus get power far exceeding its real support. On the other hand, a
relaxed consensus (80%? 90%?) might encourage major parties to support
the consensus to block such strategies, so it could go either way. In
any event, it adds unpredictability.
I suppose that for the method to truly work, one would have to find some
way of removing that nonlinearity, as it were. Perhaps some kind of
system where adding support to legislation is costly, so that power
spreads evenly among the representatives and each gets a turn... but
that could make things even more complex. Or maybe supermajority rules
within the representative body, since we're dealing with a
consensus-focused concept in the first place.
> But, as they say, "The devil is in the details." The technical difficulties
> are all in how (in general) to automatically find a good compromise allocation
> (of probabilities or seats, as the case may be) by a process that is immune to
> manipulation. In the simple example given in our previous post, 100%C is the
> obvious compromise.
>
> In the following example
>
> 50 A 100, C1 90, C2 40, B 0
> 50 B 100, C2 90, C1 40, A 0
>
> it is likewise obvious that the best compromise allocation is 50%C1+50%C2.
>
> How do we find these allocations automatically?
>
> Jobst has proposed many nice ways of doing this. One of the more recent ones is
> this (slightly adapted to the list PR setting to get whole numbers of seats):
>
> (1) Each voter rates the competing parties on a range style ballot.
>
> (2) Each voter (optionally) nominates an allocation of the seats. This could
> well be done by choosing from a published list of such allocations.
>
> (3) Each of the nominated allocations is tested against the fall back allocation
> F on each of the ballots. Whenever the fall back allocation F is strictly
> preferred over a nominated allocation on even one ballot, that nominated
> allocation is eliminated.
>
> (4) If no nomination survives the previous step, then the fall back allocation F
> is used. Else ...
>
> (5) Calculate the compromise allocation C by averaging all of the remaining
> (i.e. uneliminated) nominations together and converting to whole numbers
> according to Jefferson, Webster, or Hamilton (consistent with the fall back
> allocation F).
>
> (6) Pit the compromise allocation C head-to-head against F to make sure the
> conversion to whole numbers has not destroyed the unanimous approval for C. If
> it passes this test, seats are allocated according to C. If not, then the fall
> back allocation F is used.
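Here's a rough Python sketch of steps (3)-(6) as I read them (the function names and the Webster rounding are my own; treat it as one possible reading, not an exact specification), run on your 50/50 example with ten seats and F = 5A+5B:

```python
def webster(shares, seats):
    """Webster/Sainte-Lague: award seats by highest average, divisors 1, 3, 5, ..."""
    alloc = {p: 0 for p in shares}
    for _ in range(seats):
        best = max(alloc, key=lambda p: shares[p] / (2 * alloc[p] + 1))
        alloc[best] += 1
    return alloc

def jobst_compromise(ballots, nominations, F, seats):
    """Steps (3)-(6): ballots are {party: rating} dicts; nominations and F
    are whole-number seat allocations {party: seats}."""
    def value(ballot, alloc):
        # A ballot's rating of an allocation: seat-share-weighted average rating.
        return sum(n * ballot.get(p, 0) for p, n in alloc.items()) / seats

    # (3) Eliminate any nomination that even one ballot strictly ranks below F.
    survivors = [N for N in nominations
                 if all(value(b, N) >= value(b, F) for b in ballots)]

    # (4) If nothing survives, fall back to F.
    if not survivors:
        return F

    # (5) Average the survivors and round back to whole seats (Webster here).
    parties = sorted(set().union(*survivors))
    mean = {p: sum(N.get(p, 0) for N in survivors) / len(survivors) for p in parties}
    C = webster(mean, seats)

    # (6) Final head-to-head check of C against F.
    return C if all(value(b, C) >= value(b, F) for b in ballots) else F

# The 50/50 example above: one ballot per faction suffices to test unanimity.
ballots = [
    {"A": 100, "C1": 90, "C2": 40, "B": 0},
    {"B": 100, "C2": 90, "C1": 40, "A": 0},
]
F = {"A": 5, "B": 5}                 # random-favorite benchmark, 10 seats
nominations = [{"C1": 5, "C2": 5}]   # a nominated compromise allocation
print(jobst_compromise(ballots, nominations, F, 10))  # -> {'C1': 5, 'C2': 5}
```

Each faction rates the nominated compromise at (5*90 + 5*40)/10 = 65 versus 50 for F, so it survives step (3) and passes the final check.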
This feels a bit like the compromise process in an assembly itself -
that is, where a majority pass/fail vote determines whether the law
passes, and discussion continues until an agreement is reached. The
difference is that the ballot asks the voters for a compromise right off
the bat... and the trick is to make it costly to offer a fake
compromise, because going back and forth, while possible in an assembly,
is unwieldy when dealing with the entire electorate.