[EM] Does IRV elect "majority winners?"
Abd ul-Rahman Lomax
abd at lomaxdesign.com
Wed Jan 7 12:04:25 PST 2009
At 05:15 AM 1/7/2009, Kristofer Munsterhjelm wrote:
>Dave Ketchum wrote:
>>Condorcet certainly costs more for the system than
>>Plurality. Costs bullet-voters nothing - provides a service to
>>whichever voters like to do more than bullet vote.
>> Actually can be a service to candidates. Clinton and Obama
>> had to try to kill their competitor's campaign for the Democrat
>> nomination they could not share. A similar race in Condorcet
>> would let them both get nominated and have a more civilized fight
>> as to which should be ranked higher than the other on the ballot.
>If people tend to bullet-vote, it may be the case that elections in
>general suffer from vote-splitting - simply because if C splits into
>C1 and C2, people either bullet-vote C1 or C2.
>On the other hand, the mayor election data that was given on this
>list earlier seems to show that people don't bullet-vote as much as
>one would expect (even though one should be careful in deriving
>conclusions from sample sizes of one).
It showed this for *that* election. Burlington is a very unusual
town, and there can be a lot of enthusiasm for preferential methods
at first (there was a lot of enthusiasm for Bucklin at first.) We
should look at the San Francisco data for more evidence. In
Australia, experience with Optional Preferential Voting shows that
bullet voting tends to increase with time, after full ranking becomes
optional, as voters realize that full ranking is mostly a waste of
time. Most voters can simply bullet vote for a frontrunner, in most
elections, and if they do rank lower, it is never even counted. Only
those who support minor party candidates need add additional ranks.
>>Bucklin deserves more thought as a competitor to Condorcet.
>Bucklin doesn't do that well, Yee-wise. It's simple, however; I'll
>grant that. As far as criteria go, it fails independence of clones,
>is not reversal symmetric, and can elect a Condorcet loser (according to WP).
Don't trust Wikipedia for *anything*. It is not to be used as a
source. I say this as an experienced Wikipedia editor, and a strong
supporter of Wikipedia. When we get Flagged Revisions, it *might*
change, but Wikipedia has become highly conservative and very
resistant to the changes that might be needed for it to become,
itself, a reliable source. Wikipedia uses advanced voting systems,
but usually doesn't vote at all; the exceptions are the votes for
members of ArbComm and for the WikiMedia Foundation board.
Delegable proxy was proposed for Wikipedia and the proponent was
promptly indefinitely blocked, and when he was unblocked, he was
watched very closely and was blocked again, twice, on some fairly
thin grounds, for actions that normally wouldn't have resulted in a
block at all. Delegable proxy is what Wikipedia badly needs; it is
the only amalgamation or consensus-estimation method that would fit
with Wikipedia traditions. Its cousin, Asset Voting, would make
perfect sense for elections, and for an Assembly. It's simple, and it
creates a hierarchy of trust; the arguments against it have been
almost entirely based on misconceptions and assumptions.
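The amalgamation idea is simple enough to sketch. The following Python is a hypothetical illustration of delegable-proxy tallying (the names and data layout are mine, not from any Wikipedia implementation): each participant either votes directly or names a proxy, and the chain of proxies is followed until a direct vote, or a cycle, is found.

```python
def resolve_vote(voter, direct_votes, proxies):
    """Follow the proxy chain from `voter` to a direct vote, if any."""
    seen = set()
    while voter not in direct_votes:
        if voter in seen or voter not in proxies:
            return None  # cycle or dangling chain: no effective vote
        seen.add(voter)
        voter = proxies[voter]
    return direct_votes[voter]

def proxy_tally(participants, direct_votes, proxies):
    """Count each participant's effective vote through their proxy chain."""
    tally = {}
    for p in participants:
        v = resolve_vote(p, direct_votes, proxies)
        if v is not None:
            tally[v] = tally.get(v, 0) + 1
    return tally
```

With two delegators chaining to one direct "keep" voter, that voter's position carries three effective votes; a proxy cycle simply contributes nothing.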
For example, delegable proxy was suggested -- and implemented,
actually -- simply to provide a means of *analyzing* what are called
!votes: "not-votes," a Wikipedia conceit, the pretense that comments
in processes like Articles for Deletion aren't votes but merely
individual conclusions and arguments, with decisions supposedly
made on the basis of the arguments and evidence, not by simple votes.
Sure. That's true, when it works. It often doesn't. When there was a
Miscellany for deletion (MfD) debate for the proposal
(http://en.wikipedia.org/Wikipedia:Delegable_proxy), the debate was
closed as Keep (with the proposal marked as Rejected), rather than
the Delete which most !voters were insisting upon. They mostly
argued that Delegable proxy was
voting, which, of course, "we don't do," in spite of what the
proposal actually said. (No change in policy and procedures was being
proposed, only an additional piece of information which the closer of
a debate *might* consider -- or might not, at his or her discretion.)
When the debate closed Keep, these Delete voters screamed that
consensus wasn't being respected, because the !vote was obviously for
Delete. Eventually, the MfD was reopened for technical reasons, and
then quietly closed a week later with the same result and practically
no more !votes.
(That was actually an example of the system working. It usually
works, in fact. Basically, all decisions on Wikipedia -- except the
top-level WMF decisions which are mostly moot as to content, they
stay out of it, and the Arbitration Committee decisions which are by
voting according to established rules -- are made by individuals,
typically administrators, self-selected (whoever notices a debate
first when it's ripe and decides to make a decision), who review
it, reach a conclusion, and implement it. These individuals are
expected to be "neutral." As you can imagine,
sometimes this all becomes controversial, but in the large majority
of situations, it works fine. Wikipedia, though, is suffering badly
from problems of scale, and administrators are burning out, tending
to become erratic and impulsive and sometimes draconian with time.)
I don't trust much of the simulation work that's been done, because
of lack of simulation of truncation, for example. Truncation is
*normal.* With Bucklin elections, maybe two-thirds of the voters
don't add additional preferences.
Bucklin, further, as it was implemented, didn't allow multiple voting
in the first and second rounds. I'd toss that restriction: I see no
need to *force* voters to rank candidates separately. If they have
sufficient preference strength between them, they will rank them; if
not, they may not. This would make Bucklin even more like Approval.
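To make the counting concrete, here is a rough Python sketch of Bucklin with truncation and equal ranking both allowed (the ballot representation is my assumption, not any statutory form): each ballot is a list of ranks, each rank a set of equally ranked candidates, and ranks are added round by round until some candidate reaches a majority. Truncated ballots simply run out of ranks.

```python
def bucklin_winner(ballots, candidates):
    """Each ballot: list of ranks; each rank: set of equally ranked
    candidates (a bullet vote is a one-rank ballot). Add ranks in
    successive rounds until a candidate reaches a majority."""
    majority = len(ballots) // 2 + 1
    tally = {c: 0 for c in candidates}
    max_ranks = max((len(b) for b in ballots), default=0)
    for r in range(max_ranks):
        for b in ballots:
            if r < len(b):
                for c in b[r]:
                    tally[c] += 1
        leaders = [c for c in candidates if tally[c] >= majority]
        if leaders:
            # highest total among those reaching a majority this round
            return max(leaders, key=lambda c: tally[c]), r + 1, tally
    return None, max_ranks, tally  # majority failure
```

Note that heavily truncated ballots make majority failure a live possibility, which is exactly the "electorate hasn't decided" signal discussed below.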
And simulating approval realistically is far more difficult than
simulating ranked methods. Most simulations of approval have made
wild assumptions that voters will, for example, approve any candidate
better than the mean utility. That's preposterous; voters won't vote that way.
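For clarity, the assumption being criticized can be written out explicitly. This tiny Python sketch is the naive rule those simulations use (illustrative only, not a recommendation):

```python
def mean_utility_approvals(utilities):
    """The naive simulation rule: approve every candidate whose
    utility to this voter is strictly above the voter's mean utility.
    utilities: dict mapping candidate -> this voter's utility."""
    mean = sum(utilities.values()) / len(utilities)
    return {c for c, u in utilities.items() if u > mean}
```

Under this rule a voter with utilities 1.0, 0.5, 0.0 approves only the top candidate, regardless of frontrunner status or preference strength, which is the unrealistic part.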
>>How do you count equal ranking in IRV? If I vote X>A=B>Y, A and B
>>become visible to the counters at the same time - what does this do
>>to deciding what candidate is next to mark lost?
>I would assume that if one does A = B > Y and A is eliminated, then
>the ballot becomes B > Y next - "the ballots are transformed as if
>the candidate in question never ran". The difference from A > B > Y
>or B > A > Y would be that both would be counted the first time
>around, either with a full point to each ("whole") or half ("fractional").
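That reading can be sketched directly. The Python below is a hypothetical illustration (not any certified counting rule): a ballot is transformed "as if the candidate never ran," and an equal top rank is counted either "whole" (a full point to each) or "fractional" (split among the set).

```python
def eliminate(ballot, candidate):
    """Remove `candidate` from every rank; drop ranks left empty.
    A ballot is a list of sets of equally ranked candidates."""
    return [r - {candidate} for r in ballot if r - {candidate}]

def top_rank_points(ballot, mode="whole"):
    """Points credited to each top-ranked candidate on this ballot,
    under the 'whole' or 'fractional' treatment of equal ranking."""
    if not ballot:
        return {}
    top = ballot[0]
    weight = 1.0 if mode == "whole" else 1.0 / len(top)
    return {c: weight for c in top}
```

So for X>A=B>Y, eliminating A leaves X>B>Y; eliminating X instead exposes A=B at the top, worth a point each ("whole") or half each ("fractional").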
>>Approval, Plurality and IRV are distractions from need to pick a
>>live destination. I see need to compare, more carefully, Condorcet
>>vs Range vs Bucklin.
>Range reduces to Approval if enough people use strategy. I think
>that any version of cardinal ratings should either be DSV or have
>some sort of Condorcet analysis (like CWP does, or perhaps not that
>far). Those are my opinions, though, and others (like Abd) may disagree.
I think that Condorcet analysis should be done with Range ballots,
but that a Condorcet winner, when different from the Range winner
(unusual) should not automatically win, but there should be a runoff.
In theory -- my theory -- the Range winner is better, but there are
conditions where this isn't accurate, and a runoff tests those.
Generally, whenever the majority gives up its first preference in
favor of a more strongly preferred minority choice, the majority
should consent to this.
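A minimal sketch of that proposal, assuming Range ballots on a common scale (Python, with illustrative names of my own): compute the Range winner, run a pairwise (Condorcet) analysis on the same ratings, and flag a runoff when the two differ.

```python
def range_with_condorcet_check(ballots, candidates):
    """ballots: list of dicts mapping candidate -> rating.
    Returns (Range winner, Condorcet winner or None, runoff flag)."""
    totals = {c: sum(b.get(c, 0) for b in ballots) for c in candidates}
    range_winner = max(totals, key=lambda c: totals[c])

    def beats(x, y):
        # pairwise preference derived from the same Range ballots
        return sum(1 for b in ballots if b.get(x, 0) > b.get(y, 0)) > \
               sum(1 for b in ballots if b.get(y, 0) > b.get(x, 0))

    condorcet = next((c for c in candidates
                      if all(beats(c, d) for d in candidates if d != c)),
                     None)
    runoff = condorcet is not None and condorcet != range_winner
    return range_winner, condorcet, runoff
```

In the classic case, two voters rating A:10, B:9 and one voter rating A:0, B:10 make B the Range winner while a majority pairwise prefers A; the runoff flag is exactly the "majority should consent" test.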
Condorcet analysis, note, would encourage a shift away from pure
Approval Voting. So, too, would running a Range election in rounds.
Oklahoma tried to implement a Range Bucklin method, but erred in
trying to require additional ranking.
Essentially, the approval cutoff, for election purposes, would be
lowered through the ratings until a majority appears; that candidate
would be the Range winner. Condorcet analysis would detect if there
was a Condorcet problem, and if there is an absolute approval cutoff,
majority failure would also be a runoff trigger.
Basically, a method should be able to detect that the electorate
hasn't yet made up its collective mind.
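The descending-cutoff procedure can be put in a few lines of Python (the rating scale and names are my assumptions): lower the approval cutoff step by step, counting a ballot as approving every candidate rated at or above the cutoff, and stop at the first cutoff where a majority appears. If none appears, that is majority failure, the "hasn't made up its mind" signal.

```python
def descending_cutoff_winner(ballots, candidates, top=10):
    """ballots: dicts mapping candidate -> rating on a 0..top scale.
    Returns (winner, cutoff) or (None, 0) on majority failure."""
    majority = len(ballots) // 2 + 1
    for cutoff in range(top, 0, -1):
        # at this cutoff, a ballot approves every candidate rated >= cutoff
        tally = {c: sum(1 for b in ballots if b.get(c, 0) >= cutoff)
                 for c in candidates}
        leaders = [c for c in tally if tally[c] >= majority]
        if leaders:
            return max(leaders, key=lambda c: tally[c]), cutoff
    return None, 0  # majority failure: the electorate hasn't decided
```

This is Bucklin generalized to Range ballots: each one-point drop in the cutoff plays the role of a Bucklin round.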
>I don't know much about the cost of optical scanning machines, but
>presumably getting one with 8 or 10 sensors shouldn't be that more
>expensive than one with 3. They wouldn't have to be specialized,
>either, since optical scanning is used for other things than counting ballots.
It's really silly. A fax machine can adequately scan any
reasonably-designed ballot. Normal scanners can handle it.
Burlington, in fact, used a cobbled-together solution with scanners
and open source software.
I've promoted "public ballot imaging," which simply means that the
raw images that scanners or faxes or whatever produce, become public,
and further implies that election observers can independently image
ballots. Digital cameras would do the job just fine. A single camera,
with current technology, can hold images of all the ballots from a
precinct without running out of storage.
>The ideal solution as far as granularity is concerned would be to
>have a machine that does OCR, and where voters just write a number
>in a box next to the candidate (1 for first place, 20 for
>twentieth). That would be quite a bit more expensive, though, and
>would also need some sort of fallback... or just manual counting.
Use standard mark-sense technology. However, with Bucklin, I see no
need for more than three ranks. It's highly questionable that the
lower rankings, as with the Burlington election, mean much at all. We
may find a technical Condorcet winner, but it's actually meaningless
in terms of how the voters view that candidate, if it's based on
those weak lower rankings.