[EM] methods based on cycle proof conditions
Abd ul-Rahman Lomax
abd at lomaxdesign.com
Thu Jun 3 13:28:30 PDT 2010
At 02:54 PM 6/3/2010, Chris Benham wrote:
> Forest Simmons wrote (1 June 2010):
><snip>
>
> >I. BDR or "Bucklin Done Right:"
> >
> >Use 4 levels, say, zero through three. First eliminate all candidates
> >defeated pairwise with a defeat ratio of 3 to 1. Then collapse the top
> >two levels, and eliminate all candidates that suffer a defeat ratio of
> >2 to 1. If any candidates are left, among these elect the one with the
> >greatest number of positive ratings.
> >
><snip>
>
>This seems to be even more Approvalish than normal Bucklin.
>
>65: A3, B2
>35: B3, A0
>
>(I assume that zero indicates least preferred)
>
>Forest's "BDR" method elects B, failing Majority Favourite.
As will any method that optimizes expressed utility, assuming that
these numbers are a rough expression of utility: here A's total is
65 x 3 = 195 while B's is 65 x 2 + 35 x 3 = 235. Because ratings
like these can be some kind of artifact, I would suggest that a vote
like this calls for a runoff.
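For concreteness, here is a minimal sketch, in Python, of BDR as I
read Forest's description, run on Chris's ballots. The data layout
and helper names are my own illustration, not anything Forest
specified.

# Ballots: list of (count, {candidate: rating in 0..3}) pairs.
def prefers(ballots, x, y, cap=3):
    # Votes rating x above y, after capping ratings at `cap`;
    # cap=2 collapses the top two levels (3 -> 2).
    return sum(n for n, r in ballots if min(r[x], cap) > min(r[y], cap))

def defeated(ballots, y, alive, ratio, cap):
    # True if some live x defeats y pairwise at ratio:1 or worse.
    return any(prefers(ballots, x, y, cap) > 0 and
               prefers(ballots, x, y, cap) >=
               ratio * prefers(ballots, y, x, cap)
               for x in alive if x != y)

def bdr(ballots, candidates):
    alive = set(candidates)
    # Step 1: eliminate candidates defeated at 3:1 on the full ratings.
    alive -= {y for y in alive if defeated(ballots, y, alive, 3, cap=3)}
    # Step 2: collapse levels 3 and 2, then eliminate at 2:1.
    alive -= {y for y in alive if defeated(ballots, y, alive, 2, cap=2)}
    # Step 3: among survivors, elect the most positive (nonzero) ratings.
    return max(alive, key=lambda c: sum(n for n, r in ballots if r[c] > 0))

ballots = [(65, {'A': 3, 'B': 2}), (35, {'A': 0, 'B': 3})]
print(bdr(ballots, ['A', 'B']))  # B

Step 2 is where A falls: once the top two levels are collapsed, B
defeats A 35 to 0, well past 2:1.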
Note, in Bucklin, those approvals would be spread, and, of course, A
wins in the first round (a quick sketch follows below). In real
elections, though, with numbers like this, and no interfering
candidate, if the 65 actually do rate B with a 2, they are expressing
that they don't much care, so the election can legitimately be
decided by the voters who do care. This is normal democratic
practice! If a runoff were held with these primary numbers, I would
expect low turnout and B winning by a landslide.
But if the A faction was somehow duped into voting like that, the
reverse would happen.
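Here is that Bucklin comparison sketched, with the same caveats as
above; the code is my own illustration. Count top-level votes; absent
a majority, add in the next level down, and so on.

def bucklin(ballots, candidates, levels=(3, 2, 1)):
    n_votes = sum(n for n, _ in ballots)
    tallies = {c: 0 for c in candidates}
    for level in levels:
        for n, r in ballots:
            for c in candidates:
                if r[c] == level:
                    tallies[c] += n
        majorities = [c for c in candidates if tallies[c] > n_votes / 2]
        if majorities:
            return max(majorities, key=tallies.get)
    return max(candidates, key=tallies.get)

ballots = [(65, {'A': 3, 'B': 2}), (35, {'A': 0, 'B': 3})]
print(bucklin(ballots, ['A', 'B']))  # A

At the top level A already has 65 of 100 votes, a majority, so the
count never reaches the lower levels.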
Now, who would use BDR with only two candidates? It's like using
Range with only two candidates. Why would you care about "majority
favorite" if you decide to use raw Range? I wonder why the A faction
even bothered to vote with that pattern of utilities ("ratings").
That's what is completely unrealistic about this kind of analysis.
For many, for years, to note that a method failed the Majority
criterion ("Majority favorite") was the same as saying that it was
totally stupid. But real, direct, human decision-making doesn't
satisfy the criterion unless people just want a fast decision and
don't care much. If they don't care very much, and somebody does care
and makes a big fuss, what happens?
From the answer you can then tell what kind of society it is, its
sense of coherence and unity, its ability to negotiate consensus and
thus natural operational efficiency, etc.
If I let you have your way when you care and I don't, then you let me
have my way when I care and you don't. Therefore utility maximization
systems will generally improve outcomes for *everyone*. The overall
game is not a zero-sum game. Single-winner elections appear to be
zero-sum only when they are divorced from that context.
There is something related in comparing Approval and Range.
I found this odd effect, studying absolute expected utility in a
zero-knowledge Range 2 election voted "sincerely" in the presence of
a middle utility candidate. The individual expected utility for the
voter was the same for the approval-style vote vs the Range vote.
(And, of course, it was the same whether you voted for the middle
candidate or not; the situation is symmetrical.) But if the *method*
was approval, the expected utility was lower than if the method was
Range. Compared to Range, Approval was lowering everyone's expected utility!
So if everyone votes Approval style, the expected utility of the
outcome must be lower than if at least one person votes Range style.
(Otherwise Range is the same as Approval, if nobody actually uses the
intermediate rating.)
Nobody has bothered to confirm or disconfirm this result. Warren
validated some of it, but not that part, and to really nail it down
required more math than I could easily do.
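For anyone inclined to check, here is a rough Monte Carlo harness in
Python. The specific setup (three candidates, each voter holding
sincere utilities 2/1/0 over a random permutation, ties broken at
random) is just one way to frame the zero-knowledge scenario, my own
reconstruction rather than the original calculation.

import random

def mean_winner_utility(n_voters=9, trials=50_000, style="range"):
    total = 0.0
    for _ in range(trials):
        utils = []           # each voter's true utilities per candidate
        scores = [0, 0, 0]   # summed ballot scores per candidate
        for _ in range(n_voters):
            u = [2, 1, 0]
            random.shuffle(u)
            utils.append(u)
            if style == "range":
                ballot = u                   # sincere 2/1/0
            else:
                mid = random.choice((0, 2))  # approval style: middle
                ballot = [mid if x == 1 else x for x in u]  # to 0 or 2
            for c in range(3):
                scores[c] += ballot[c]
        winner = max(range(3), key=lambda c: (scores[c], random.random()))
        total += sum(u[winner] for u in utils) / n_voters
    return total / trials

print("all vote Range style   :", mean_winner_utility(style="range"))
print("all vote Approval style:", mean_winner_utility(style="approval"))

If the effect is real, the Range figure should come out higher.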
By insisting on the Majority criterion, we lower the expected
utility for nearly everyone, certainly for most people, when we
consider many elections.