[EM] Why I think IRV isn't a serious alternative 2

Abd ul-Rahman Lomax abd at lomaxdesign.com
Sun Dec 14 17:58:15 PST 2008

At 08:57 PM 12/13/2008, Kevin Venzke wrote:
>--- En date de : Lun 8.12.08, Abd ul-Rahman 
>Lomax <abd at lomaxdesign.com> a écrit :
> >> What you're talking about here isn't even "playing nice," it's more
> >> like using lower ratings as loose change to toss into an (inadequate)
> >> street musician's hat. I'm not clear on what motivates that either.
> >> I don't think I've ever wanted to communicate to a candidate that they
> >> aren't acceptable (i.e. worse than what I expect out of the election
> >> after considering both frontrunners' odds), but should keep trying.
> >
> >Why did voters vote for Nader in 2000? Were they purely stupid? You may
> >never have voted this way, but other real people do. Why do voters bother
> >to vote for minor parties, ever? Do you think that most of them imagine
> >that candidate could win?
>I would say that they voted for Nader because they wanted him to win.

Mmmm.... Sure, they would *prefer* to see that 
candidate win. But that avoids the issue. Did 
they think the win was a realistic possibility? 
Were they naive? I don't think so. I think they 
knew that their vote would have no effect on the 
outcome. (They did *not* cause Bush to win; if 
they sinned, it was a sin of omission, not of 
commission. If, in Range, they had voted zero for 
Gore, *then* we might say that they caused Bush 
to win, perhaps. But it depends on method details 
and the rest of their votes. If they bullet voted 
for Nader, then their vote would probably have 
been moot, not causing Gore to lose.)

>  It
>is not relevant whether he could or not.

It's relevant to most voters who would otherwise 
support a minor candidate. In fact, were it not 
for the election method, quite a few more voters 
would likely prefer minor candidates. They don't 
even go there, because they don't want to waste 
their time on false hopes. Give them a better 
voting system and they would, indeed, tend to 
become more politically sophisticated. Warren 
Smith is right, to a degree: Range Voting would have an "incubator effect."

>  The phenomenon I'm scratching
>my head over, is where you give a lowish but positive rating to someone
>who isn't good enough to be elected, but good enough to "encourage" in
>a sense.

It's not about the candidate, necessarily, though 
candidates can grow and mature. Giving a small 
but positive rating to a candidate could send a 
message: you've got something. Work on it and 
maybe next time I'd give you a higher rating. 
It's also about the party. Giving some positive 
rating to a minor party could encourage your 
major party to shift in that direction.

But never give an approval rating (if that means 
anything in a method) (in Range it might be above 
50%) to a minor party unless you'd like to see 
that party win. That's my suggestion. The 
exception would be under serious 
lesser-of-two-evils conditions, which, I'd argue, 
would cover U.S. Presidential 2000, definitely 
2004 and probably 2008. Those are just my 
opinions, of course, and don't affect the principles here.

I would certainly have preferred Obama over any 
of the libertarian candidates, including Ron Paul 
(the libertarian "Republican"). But I'd have 
given Ron Paul some serious rating strength, were 
he on the ballot, because I want wider 
consideration of libertarian principles, and 
because I don't think he'd be a disastrous 
President, and thus if it happened that, by some 
rare constellation, he were to win, I'd not have been distressed.

Range Voting allows far more sophisticated 
expression. Many wouldn't use it. That's not a 
problem. It seems that many *would* use it. With 
good voter education, they wouldn't use it 
stupidly. If they care who wins the election, 
they would know to vote high, perhaps max, for at 
least one frontrunner, and low, or even min, for 
the other; and they would know that, in any but 
the rarest and weirdest of circumstances (which 
can safely be neglected), it is not advantageous 
to vote a reversed preference. If you prefer one 
candidate to another, rate the one higher than 
the other, or rate them equally if you don't want 
to waste any vote strength. But don't reverse rate them, thinking this might help you.
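
As a sketch of that advice (the candidates, utilities, and the rescaling rule are my own illustration, not a prescription): anchor the frontrunners at max and min, place everyone else in between, and never reverse an order:

```python
def pragmatic_range_ballot(utils, frontrunners, max_score=10):
    """Rate the preferred frontrunner max and the other min, then place
    everyone else by sincere utility rescaled between those anchors --
    never reversing a preference."""
    best = max(frontrunners, key=lambda c: utils[c])
    worst = min(frontrunners, key=lambda c: utils[c])
    lo, hi = utils[worst], utils[best]
    ballot = {}
    for c, u in utils.items():
        if u >= hi:
            ballot[c] = max_score    # at or above the preferred frontrunner
        elif u <= lo:
            ballot[c] = 0            # at or below the worst frontrunner
        else:
            ballot[c] = round(max_score * (u - lo) / (hi - lo))
    return ballot

# Hypothetical utilities for a 2000-style race:
utils = {"Nader": 1.0, "Gore": 0.6, "Dean": 0.35, "Bush": 0.1}
print(pragmatic_range_ballot(utils, ["Gore", "Bush"]))
# -> {'Nader': 10, 'Gore': 10, 'Dean': 5, 'Bush': 0}
```

Note that Nader never drops below Gore, and the intermediate candidate gets a sincere middle rating at no cost to the voter's frontrunner leverage.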

> >> I'd rather "start" with MCA (two rating levels plus the option to
> >> not rate at all) and stay there, as I think MCA is at least a little
> >> better than Approval.
> >
> >How is it counted?
>There are three slots (the lowest of which can be expressed through
>truncation). If any candidate has top-slot ratings from more than half
>of the voters, then the one of these candidates with the most, wins.
>If there is no such candidate, then elect the candidate who has the
>most top-slot plus middle-slot ratings. (Which is the same as saying:
>Elect the candidate truncated by the fewest voters.)

This is Bucklin-ER with two ranks. I've been 
recommending it. The only difference between it 
and, say, Duluth Bucklin is that the latter had 
three explicit ranks, and overvoting was prohibited in the first two.

I see no reason at all to prohibit overvoting in 
the second rank, and little to prohibit it in the 
first. Duluth Bucklin allowed unlimited approvals in the third rank.
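
Kevin's counting rules translate directly into code; here is a minimal sketch (ballots and candidate names are invented):

```python
def mca_winner(ballots, candidates):
    """MCA / two-rank Bucklin-ER. Each ballot maps candidates to "top"
    or "middle"; an unrated candidate is bottom (truncation)."""
    n = len(ballots)
    top = {c: sum(1 for b in ballots if b.get(c) == "top") for c in candidates}
    # Round 1: if any candidate is top-rated by more than half the
    # voters, the one with the most top ratings wins.
    majority = [c for c in candidates if top[c] * 2 > n]
    if majority:
        return max(majority, key=lambda c: top[c])
    # Round 2: count top plus middle -- equivalently, elect the
    # candidate truncated by the fewest voters.
    rated = {c: sum(1 for b in ballots if c in b) for c in candidates}
    return max(candidates, key=lambda c: rated[c])

ballots = [
    {"A": "top"}, {"A": "top"},              # bullet votes
    {"B": "top", "A": "middle"},
    {"C": "top", "A": "middle"},
    {"C": "top", "B": "middle"},
]
print(mca_winner(ballots, ["A", "B", "C"]))  # no top majority; A is least truncated
```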

>I have criticized this method (and Bucklin and median rating, to which it
>is similar) for not offering any great basis on which to decide whether
>to rate a candidate top or middle. But I do guess that it is more stable
>and more Condorcet efficient (in the abstract sense) than Approval. (In
>my simulations it was definitely more stable, though it was difficult
>to devise the strategic logic for it, so there could have been a flaw.)

I'd start with an assumption that most voters 
would bullet vote. That's what happened in some 
Bucklin elections. We should look at the ballot 
images for IRV in San Francisco; bullet voting 
rates aren't reported, because most bullet votes 
are for the top two, generally, and those ballots are never counted as exhausted.

I expect that 3-rank Bucklin and 3-rank IRV 
voting patterns would be practically identical, 
for most voters. But, of course, strategic voting 
patterns could differ. Most voters won't vote 
strategically, in either system, except that they 
will mostly try to rank a frontrunner, at some 
rank, when they have that information. Otherwise 
they will simply vote "sincerely," which in this 
case means a bullet vote for a favorite, to 
start. Then they would add in votes, ranked down 
if there is significant preference, but not to 
the extent that they rank everyone above some 
mean expectation. And that's why runoffs might 
still be a good idea. Expect majority failure; 
it's normal with IRV when runoffs are needed. 
Majority failure would be slightly less common 
with Bucklin, and the difference would be 
significant in nonpartisan elections. In partisan 
ones, fewer voters would approve both 
frontrunners, so Bucklin would repair majority 
failure less often than in nonpartisan elections. 
IRV, of course, never discovers these hidden votes.
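
As a sketch of how that might play out, here is a toy 3-rank Bucklin count (ballots are invented to show a round-2 majority; with heavier truncation the count would end in majority failure, indicating a runoff on my proposal):

```python
def bucklin_rounds(ballots, candidates):
    """Three-rank Bucklin: fold in each rank's votes round by round;
    stop when some candidate reaches a majority of the voters."""
    n = len(ballots)
    totals = {c: 0 for c in candidates}
    for rnd in (1, 2, 3):
        for b in ballots:
            for c in b.get(rnd, []):   # a list, so equal ranking is allowed
                totals[c] += 1
        leaders = [c for c in candidates if totals[c] * 2 > n]
        if leaders:
            return max(leaders, key=totals.get), rnd
    return None, 3   # majority failure -- on my proposal, hold a runoff

# Mostly bullet votes, as suggested above; second preferences are sparse.
ballots = [
    {1: ["A"]}, {1: ["A"]}, {1: ["A"]},
    {1: ["B"], 2: ["C"]}, {1: ["B"], 2: ["C"]},
    {1: ["C"], 2: ["A"]}, {1: ["C"], 2: ["A"]},
]
print(bucklin_rounds(ballots, ["A", "B", "C"]))  # A reaches a majority in round 2
```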

> >> That's not very generous. I can think of a couple of defenses. One would
> >> be to point out that it is necessitated by the other criteria that IRV
> >> satisfies. All things being equal, I consider LNHarm more desirable than
> >> monotonicity, for instance.
> >
> >I, and certainly some experts, consider LNH to cause serious harm.
> >Absolutely, it's undesirable in deliberative process, someone who insists
> >on not disclosing lower preferences until their first preference has
> >become impossible would be considered a fanatic or selfish. That's a
> >trait I'd like to allow, but not encourage!
>Well, I said "all things being equal." All things being equal I think it
>is a positive thing that by providing more information, you don't have
>to worry that you're worsening the outcome for yourself. Maybe something
>else gets ruined, but then all things are not equal.

You don't add the information if you reasonably 
fear that the damage to your desired outcome 
would be serious. You provide it if you think it 
will increase your expected outcome.

Where I would agree is that it would be ideal if 
a voter could control LNH compliance. It is 
possible. This is equivalent to the voter taking 
a very strong negotiating stance. But I would 
not, myself, want to encourage this unless the 
method tested for majority failure and held a 
runoff when it occurred. And it's a general truth 
that if there is a real runoff, with write-ins 
allowed, total LNH compliance is impossible -- 
unless you truly eliminate the candidate. Never again can an eliminated candidate run!

Basically, so that I can't "harm" my favorite by 
abstaining in one of the pairwise elections 
involving him, the *method* eliminates him! I'd 
rather be responsible for that, thank you very much.

>Again, you seem to describe LNH as though it is synonymous with the IRV
>counting mechanism. MMPO and DSC do not render preferences "impossible"
>thereupon "disclosing" more preferences.

I think this is correct. LNH, however, is 
strongly associated with sequential elimination 
methods. It's possible to reveal lower 
preferences but to not use them in the pairwise 
election with the additional approval. I've not 
studied all the variations; there is enough to 
look at with forms of Approval and Range.

When Bucklin is mentioned to knowledgeable IRV 
proponents -- there are several! -- LNH will be 
immediately mentioned as if it were a fatal flaw. 
But the "harm," as I've noted, is actually not 
harm from the ballot but only the loss of benefit 
under one particular condition: the voter, by 
adding a lower preference, *if* there is majority 
failure in previous rounds, has abstained from 
that particular pairwise election while 
participating in all the rest. It should be 
possible, by the way, to leave the second rank in 
3-rank Bucklin empty, thus insisting on LNH for 
one more round. That shouldn't be considered an 
error, but a legitimate voting pattern.

> >Entirely neglected in Kevin's consideration here is the possibility I've
> >mentioned: that the very fact that voters can express intermediate
> >ratings, and the near certainty that some do so, improves the method
> >performance.
>There is a possibility. But even if voters do provide them, this isn't
>sufficient to say that this would improve method performance, because we
>can't deduce that the intermediate ratings we collect mean the same thing
>as the mind-read utilities we can see in simulations.

Of course not, and we make no such assumption. 
However, they do express sincere preferences, we 
*can* assume. Further, there is the dithering 
effect. More work is needed, but adding even one 
intermediate vote gives every voter an increased 
probability of altering the result, thus 
increasing the expected utility of voting for all 
voters. This took me aback when I discovered it 
in an analysis of absolute voting utilities in 
Range 2. If all voters vote approval style, one 
vote can, at most, change a tie into a win or a 
loss into a tie; if we assume, say, a coin toss 
as a tiebreaker, the most the vote can accomplish 
is one-half the expected utility of a full shift. 
But with the method being Range, it takes only 
one voter casting an intermediate rating -- and 
with many voters we can expect this -- for the 
prevote totals to be non-integers, in which case 
a vote that affects the result does so by 
flipping a loss to a win for the candidate favored with the vote.
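
A toy calculation of the claim (my own illustration, not Smith's): value the election at 1 for a win, 0 for a loss, and one-half for a coin-toss tie, and look at what one added max vote gains as a function of the pre-vote margin:

```python
def outcome(margin):
    """Value to us of the election, given (opponent total - our total):
    1 for a win, 0 for a loss, one-half for a coin-toss tie."""
    if margin < 0:
        return 1.0
    if margin > 0:
        return 0.0
    return 0.5

def gain_from_voting(margin_before):
    """Expected gain from adding one full vote for our candidate."""
    return outcome(margin_before - 1) - outcome(margin_before)

# All-approval electorate: totals are integers, so a pivotal vote
# always involves a tie and is worth half a full shift.
print(gain_from_voting(0))    # tie -> win: 0.5
print(gain_from_voting(1))    # loss -> tie: 0.5
# One voter somewhere cast an intermediate rating (say 0.6), so the
# margin is fractional and an exact tie cannot occur:
print(gain_from_voting(0.4))  # loss -> win: 1.0
```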

That is, the method being Range, even if all but 
one voter don't use it, improves the expectation 
for all voters, that their vote will make a 
greater positive difference. Since nobody has 
explicitly confirmed this result -- nobody has 
denied it, either, and I've made the claim many 
times -- this must be considered unconfirmed.

The "mind-read utilities" are used to judge 
election outcomes, not to determine them (except 
for the technical curiosity, "fully sincere 
Range"). However, rational voting strategy can be 
based on those utilities, in a far more accurate 
and sophisticated way than without them.

The most powerful voting strategies with Range 
involve Approval style voting. However, they 
aren't the safest strategies, from my study. The 
variation is higher, they can produce a better 
result, but, get it wrong, and the voter may 
regret the vote. Another way to look at it is 
that "strategic voting" in Range can optimize 
your personal expectation. "Fully sincere" voting 
-- i.e., attempting to accurately disclose 
preference strengths -- allows the method to 
optimize overall satisfaction. If everyone does 
it, the optimization is perfect. If none do it, 
we get the same results as optimal Approval (i.e., 
assuming everyone in Approval uses a decent 
optimizing strategy -- which for most voters in 
most elections under anything like current 
conditions would be bullet voting, simple). Which 
isn't a bad result. Each voter who votes with 
full sincerity brings the result closer to an 
overall satisfaction, so, by voting that way, we 
might be voting for the principle of maximal 
distributed satisfaction. At small personal cost. 
I'd probably pay that cost, myself. If you would 
not, that's your choice, it's respectable and it 
is not lying. "You pays your vote and you takes your choice."

(The result will usually be the same! Over many 
voters, Approval votes tend to average out to a 
net vote as if the voters had voted Range.)

> >> Warren's approach could be useful when:
> >> 1. they simulate realistic voter profiles (and some of them apparently
> >do,
> >> but again, anyone can argue about whether they really are realistic)
> >
> >I've pointed out that they don't have to be realistic, only unbiased, not
> >warped against one method and for another.
>I don't agree. If certain scenarios are realistic for public elections,
>then those are the profiles we care about.

Yes. However, we do get useful information from 
simpler scenarios. If a method doesn't work 
reasonably with a unidimensional preference 
space, it seems a tad unlikely that it would work 
well with a multidimensional one. Sometimes, in 
some elections, the preference space is 
unidimensional. In others, it's far more complex.

>The idea of scoring each method according to an average of all possible
>election scenarios, is not on its face very promising.

Not if the "average" doesn't consider frequency! 
But the simulations do that. They do not consider 
"all possible election scenarios," but quite a 
limited set of them, those which came up in the 
simulations. Thus a very rare scenario may not 
occur at all. And unusual scenarios occur only 
rarely and so impact average results only a little.

However, the variation should be reported. A rare 
election outcome that is a disaster is of 
interest. Again, the questions that we can answer 
through simulations are "how much, on average." 
And "how often." These are questions that are 
utterly missed through the criterion approach, 
where clever voting systems theorists dream up 
election scenarios that are (1) highly simplified 
and (2) often preposterously rare. Further, if 
the result seems to violate some criterion deemed 
important, the result shown is considered a 
disaster, even if the actual harm is small, compared to possible alternatives.

To understand whether a result is a disaster or 
not, we need to know more than preference 
information, we need to know preference strength. 
So ... if some election scenario violates a major 
criterion, it's possible that the outcome is 
actually an *improvement* over satisfying the 
criterion. What we'd want to do is to find a set 
of voter utilities that would create, with some 
appropriate voting strategy or mix of strategies, 
the problem election scenario, then look at the effect on summed utilities.

It's important that voting systems theorists 
start paying attention to preference strength. 
The habit of simply writing A>B>C and assuming 
that this tells us enough means that a great deal 
of thought and analysis is being wasted. It's 
quite possible that A>B>C tells us less than 
A=B>C, in terms of what is significant to the 
voter. It's possible that the A>B part was 
forced, the voter was actually unable to 
distinguish a preference. So, okay, most experts 
seem to agree that allowing equal ranking is 
better than not. Saari excepted, and he is one 
bizarre holdout, practically incoherent. My claim 
is that giving voters more freedom is *generally* 
desirable, other things being equal. So Range is 
simply allowing voters to not only rank equally, 
but to specify preference strengths, within a 
certain restriction: the sum of preference 
strengths expressed must not exceed one full 
vote. This *forces* a certain kind of strategic 
analysis, if the voter wants to be strategically 
effective, and it happens that this analysis 
involves the generation of von 
Neumann-Morgenstern utilities, which would be 
terrifying if it weren't already instinctive for 
us. We discount irrelevant alternatives; we don't 
spend our vote on them. If Adolf Hitler gets on 
the Range Ballot, so to speak, and Bush is on the 
ballot, I don't waste part of my vote raising the 
rating for Bush to discriminate him from Hitler. 
That's if the goal is an election for office. If 
it is an attempt to figure out the worst figure 
in history, everything gets inverted and I'd put 
some voting strength into that pair. How much? 
I'd have to think about it! And it depends on the alternatives.

> >> 3. they simulate voter strategy that is customized to the method
> >
> >That is relatively easy, and has been done.
>No, this is the hard one! I don't know if Warren has even implemented
>this for Approval and Range. I don't remember, whether the strategic
>voters simply exaggerate, or actually approve above-mean.

Various strategies have been used.

"Above-mean" is an *awful* strategy, unless it's 
defined to mean something other than the mean 
utility for all the candidates. "Exaggerate," with Approval, is meaningless.

That strategy was indeed used: from Smith's 2001 simulation run:

>16. Honest approval (using threshhold=average candidate utility)
>17. Strategic range/approval (average of 2 frontrunner utils as thresh)
>18. Rational range/approval (threshhold=moving average)

Strategy 16 is awful. That's what Saari assumed 
as a strategy when he gave his example in his 
paper, "Is Approval Voting an Unmitigated Evil?" 
How's that for a nice, academically objective 
title? The paper does not disappoint.

Strategy 17 is better. Strategy 18 is not 
described enough that I could figure out what it 
means. 17 is adequate, but better strategies can 
be described, and it's possible to devise a 
zero-knowledge strategy (where the voter doesn't 
actually know the frontrunners) that would work 
better than bullet voting or only approving 
candidates "almost indistinguishable" from the 
favorite. The last strategy, except for the 
possibility of equal-ranking effective clones, 
reduces to Plurality Voting, which isn't a 
terrible system as long as there aren't too many 
candidates and the configurations are those 
common in settled Plurality voting environments. (Hint: a two-party system.)
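
To make the difference between strategies 16 and 17 concrete, here is a sketch for a single voter (the utilities and candidate names are made up by me, not taken from Smith's runs). It also shows the clone vulnerability of above-mean voting discussed further below:

```python
def approve(utils, threshold):
    """Approve every candidate rated strictly above the threshold."""
    return {c for c, u in utils.items() if u > threshold}

# Hypothetical voter with two near-clone favorites; Gore and Bush
# are the frontrunners.
utils = {"Nader": 1.0, "NaderClone": 0.95, "Gore": 0.55, "Bush": 0.1}

mean_all = sum(utils.values()) / len(utils)       # strategy 16: above-mean
mean_front = (utils["Gore"] + utils["Bush"]) / 2  # strategy 17: mean of frontrunners

print(sorted(approve(utils, mean_all)))    # the clones pull the mean above Gore
print(sorted(approve(utils, mean_front)))  # strategy 17 still approves Gore
```

With strategy 16 this voter withholds approval from the preferred frontrunner merely because two clones inflated the mean; strategy 17 is immune to that.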

>For rank ballot methods Warren has implemented the same strategy for all,
>and it is the biggest problem, with the least clear solution.

This doesn't seem to be true. I'm looking at his 
old simulation run, which describes the strategies briefly.


But, absolutely, Warren's simulation approach needs much work.

> >> 4. they simulate pre-election information
> >
> >This is necessary for Approval and Range strategy, for sure, so I believe
> >this has been done.
>I don't believe Warren's simulations do this for any method. All
>strategy is either zero-info, or (for rank ballot methods) based on
>random arbitrary info provided uniformly to all voters.

No. Simulations using "poll strategy" involve, as 
described by Smith, simulated polls answered by 
random voters pulled from the complete voter set. 
That's not "random arbitrary info." It's a simulated poll of the voters.

The most common Approval Voting strategy is to 
vote for the preferred frontrunner, and then for 
any candidate preferred to that candidate. This 
leaves out intermediate candidates: those 
preferred to the worst frontrunner, but with the 
best frontrunner preferred to them. However, 
these votes are mostly moot, unless the election 
is close between such a candidate and a 
frontrunner, which would require something close to a three-way tie.
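
A minimal sketch of that strategy, with hypothetical utilities ("Dean" here stands for the intermediate candidate who is left unapproved):

```python
def frontrunner_strategy(utils, frontrunners):
    """Approve the preferred frontrunner and anyone rated at least as
    high. Intermediate candidates -- above the worst frontrunner but
    below the best -- are left unapproved; those votes are mostly moot."""
    best = max(frontrunners, key=lambda c: utils[c])
    return {c for c, u in utils.items() if u >= utils[best]}

utils = {"Nader": 1.0, "Kucinich": 0.8, "Gore": 0.6, "Dean": 0.4, "Bush": 0.1}
print(sorted(frontrunner_strategy(utils, ["Gore", "Bush"])))
# -> ['Gore', 'Kucinich', 'Nader']; Dean and Bush are left out
```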

In any case, to apply this strategy, the voter 
needs poll data. I've argued that the voter can 
*estimate* this from the voter's own opinion, 
either alone or together with the voter's general 
estimate of where the voter sits in the 
electorate. This is technically zero-knowledge, 
I'd assert, but it uses the voter's own opinions 
as a sample to estimate election probabilities. 
This has to be right more often than not! -- and 
this strategy would knock Saari's silly Approval 
voting scenario upside the head!

It's really crazy to expect that most voters will 
approve above the mean candidate, with no regard 
at all for anything else. That strategy would 
make Approval highly vulnerable to clones, when 
it probably is not. It would make Approval highly 
vulnerable to irrelevant alternatives, when it probably is not.

> >It can actually be done, in the simulations, with perfect strategy,
> >though, obviously, if you take this too far, you could run into loops, so
> >I'd guess that the best strategy used would assume some uncertainty and
> >would only iterate so many times, simulating polls and then shifts in
> >votes as a result, then another poll, etc. The "polls" would solicit how
> >the voter intends to vote, and the model can assume that the voter can't
> >hide the information. After all, just how complicated do we want to make
> >the model?
>Yes, my simulations are based on polls. Polls are a great idea.

Smith used them....

>How complicated do we want to make the model? Sufficiently complicated
>that we can compare methods in realistic situations. Personally I only
>care about public elections.

I care about all of them, but I agree that public 
elections are important. I agree that we should 
make the models as realistic as possible; 
however, we should remember that the models need 
not be fully realistic, as long as they are 
reasonable. The problem would be if the models 
preferentially select voting scenarios that 
improperly favor or denigrate a method. I think 
that this has happened with Plurality, for 
example, where candidates aren't random, they are 
preselected to appeal to large numbers of voters, 
and given cachet from this selection. Likewise, 
the neglect of preferential turnout, and write-in 
possibilities, has made Top Two Runoff look worse than it is.

> >Heavy use of serious strategization is pretty unlikely with ranked
> >methods, in my opinion, most voters will simply do as the method implies,
> >rank in preference order, and they can do this a bit more easily if equal
> >ranking is allowed.

Yes. I agree with both of these comments. The 
problem with ranked methods is that, sometimes, 
they come up with a poor result *from sincere 
rankings,* but this is hugely ameliorated, I 
expect, when equal ranking is allowed. Still, 
there is the problem of the defective 
assumption of equal preference strength for each 
ranking, and that limits the performance of ranked methods.

My opinion is that all the benefit of ranked 
methods can be realized within a Range method, 
with appropriate rules. This is best done with an 
additional round when necessary, and this 
dovetails with runoff voting, then, and the 
desirability of explicit majority approval of any 
result. In other words, voting systems theory, 
the theory of democracy, and the long-standing 
understanding that top two runoff is a major 
election reform, all come together here. It's 
amazing to me that this wasn't being considered 
when I arrived on the EM list, it's not like it's really complicated....

>Warren's implementation would suggest that he strongly disagrees with you
>on this.

No, I don't think so. But Warren is quite quirky 
and sometimes cranky. Further, I don't see that 
he's doing serious continuing work on the 
simulations. He pretty much has said to others 
who criticize his simulations: "Fix it, then! Do 
your own damn simulations, my IEVS engine is 
published, you can use it and tweak it to your heart's content."

He has a point.

> >> Some of this isn't difficult, it's just again a question of how far you
> >> take it. Strategic voter behavior needs to be made less ridiculous.
> >> But what kind of strategy should be allowed, for (let's say) Condorcet
> >> methods? If everyone votes sincerely, then Range will look bad. So
> >> clearly the line has to be drawn somewhere else.
> >
> >No, you'd have to compare sincere Condorcet with some kind of sincere
> >Range.
>Look at it this way: To compare methods fairly we need to know how
>strategic voters attempt to be, in the same situations, under whatever
>methods we want to compare. Why compare strategic Method A with strategic
>Method B, if Method B voters would never vote that way in reality?

Well, maybe you are right. To synthesize this, 
the probability of voters using some voting 
strategy must be included in the simulation. In 
fact, with proper ballot design and voter 
education, I think we would see *more sincere* 
Range voting, than just popping a Range ballot in 
front of them. For starters, voters should be 
encouraged to maintain preference order, where 
it's distinguishable. *How much vote strength* 
they give to this is another matter. They should 
know that voting Range Borda style may not be optimal!

I think that there is room for some creative work 
here, in ballot design and in how Range is presented.

Practically speaking, though, I'm not pushing 
Range immediately. I'm pushing Approval (lowest 
cost, biggest bang for the buck, by far, because 
the cost is almost zero) and Bucklin (low cost; 
probably comparable performance to Approval, or 
better; satisfies the clear voter desire to be 
able to indicate a favorite; maintains Majority 
Criterion compliance, avoiding that political can 
of worms; and several other reasons, including a history of use in the U.S.).

(Why aren't we *outraged* that race and 
red-baiting were used to torpedo STV-PR in New 
York? That Bucklin was extremely popular, widely 
considered fully constitutional, never produced 
bad election results -- compared to Plurality -- 
but was removed or even outlawed, as in 
Minnesota? One of the worst things a politician 
can do is to manipulate voting processes to 
ensure the politician's continued power. Voting 
fraud is obvious, but legal manipulation is just 
as damaging. We've had better systems in the 
past, and we lost them because we did not defend 
them! And now we are losing top-two runoff, a 
system known to favor healthy multiparty systems, 
in favor of IRV, known for the reverse, based on 
propaganda from hucksters, selling it on spurious claims of cost savings?)

>What if, in real life, Condorcet voters just don't use any strategy?
>And what if it's also true, that Range voters in real life turn the
>method into Approval?

Well, the first is reasonably likely. The second 
isn't. Rather, real voters will push Range toward 
Approval, which isn't a bad result! But it won't 
go all the way there, probably not even close.

Under present conditions, rational Range strategy 
is, indeed, practically Approval strategy, 
*except for some minor candidates.* And that's 
where Range makes a big difference over Approval, 
though the effect works to some degree with 
Approval (as it should! -- Approval is a Range method).

Mostly, minor candidates are at the fringes in 
the U.S., so Range strategy, for most voters, 
would be to vote Approval or almost-Approval 
style for the frontrunners and the favorite. But 
it's intermediate candidates that are 
interesting: for the most part, voters are free 
to rate these with full sincerity, with no 
significant loss of expected utility. Thus a 
minor party candidate can rise in the ratings 
without harm; only when the candidate approaches 
parity does something else start to shift, as 
voters need to start taking this candidate into 
account in strategic voting (i.e., the 
two-frontrunner model becomes inadequate, and one needs a three-frontrunner strategy to maximize utility).

There is probably lots of time to prepare voters 
for that, and certainly in such an election there 
would be lots of commentary and suggestions on 
how to vote. Some of it would actually be good 
advice, and, as always, the most important skill 
we might need, politically, is to be able to 
distinguish good advice from bad! Otherwise we are sitting ducks!

>In that case, the only useful comparison to be
>done by the simulations, would happen to be sincere (or strategic, no
>difference) Condorcet vs. strategic Range/Approval. And according to
>Warren's simulations, Range doesn't win, in that case.

Depends. Do you have a page reference? Mostly 
what I've seen doesn't disclose enough details to make that conclusion.


This is the full original paper, written in 2000.

First of all, we should realize that advanced 
voting systems encourage more candidates to run. 
This can cause some systems to experience 
seriously increased regret. So I'm going to look 
at the maximum number of candidates used, five. 
Honest Copeland seems to do the best of the 
Condorcet methods, in the five-candidate 
elections, average regret of 0.14181. (This is 
the run with issue-based utilities, 50 voters).

Sincere Range, by the way, seems to do better 
with more candidates. Given that San Francisco 
sees more than twenty candidates on the ballot in 
some of its elections, this is interesting.

However, here we are comparing with strategic 
Range: 0.23232. Strategic Range does worse with 
many candidates (like Copeland), probably because 
of the oversimplified votes that result. Now, 
that is *fully* strategic Range, i.e., all voters 
vote that way, which is highly unlikely. I think 
I recall seeing that some work has been done with 
mixes. However, I'd assume that real Range Voting 
would fall roughly in between fully sincere, what 
Smith calls "Honest," and fully strategic. Honest 
Range with 5 candidates has regret of 0.05368.

While this may not be accurate, if 50% of the 
voters vote honestly and 50% strategically, we 
might expect the regret for the mix to be about 
the average: roughly 0.14. About the same as Honest Copeland.
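
The arithmetic, as a sketch (the linear mixing of regret figures is my own rough assumption, not something computed in Smith's runs):

```python
# Bayesian regret figures quoted above (Smith's runs, 5 candidates):
HONEST_RANGE = 0.05368
STRATEGIC_RANGE = 0.23232
HONEST_COPELAND = 0.14181

def mixed_regret(p_honest, honest, strategic):
    """Crude linear interpolation between all-honest and all-strategic
    electorates; real mixes need not behave linearly."""
    return p_honest * honest + (1 - p_honest) * strategic

# 50/50 mix: 0.143, close to Honest Copeland's 0.14181.
print(round(mixed_regret(0.5, HONEST_RANGE, STRATEGIC_RANGE), 3))
```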

Now, Smith examined strategic Copeland. The 
strategy was to max rank the preferred 
frontrunner and to min rank the worst 
frontrunner, and to order the rest honestly. This 
is a simple strategy, I don't know how effective 
it is for the voter, but it's certainly easy to 
apply, and I think it does increase the voter's 
expected utility. Some voters will use it, if it 
is reasonably rewarding. (I don't know if that is 
true). (But some voters do tend to think this 
way: they elevate their preferred frontrunner to 
practically a kind of god, and the worst 
frontrunner is equated with the devil. Even if in 
other situations they would think better of that 
devil.) Strategic Copeland came up with 0.5443 as regret.

I'd say, on balance, Range is superior, overall. 
If you make the maximum favorable estimate of 
percentage of strategic voting for Copeland 
(i.e., total sincerity) and the minimum for Range 
(total approval-style voting with frontrunner 
strategy), sure, Copeland looks *a little better*.

It should be realized that these are all low 
regret values compared to Honest Plurality, at 
0.48628. (Except note that strategic Copeland 
was worse than that!) Note that fully strategic 
Range/Approval (0.23232) is quite good, almost 
as good as Honest Copeland, and Approval is 
implemented by simply dropping the no-overvoting 
rule, which makes it the cheapest possible reform.
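Collecting the figures quoted in this message and sorting them makes the comparison easier to see (lower regret is better; these are only the numbers given here, not Smith's full tables):

```python
# Regret values as quoted in this message (Smith's simulations, 5 candidates).
regrets = {
    'Honest Range': 0.05368,
    'Honest Bucklin': 0.22931,
    'Strategic Range/Approval': 0.23232,
    'Honest Plurality': 0.48628,
    'Strategic Copeland': 0.5443,
}

for method, regret in sorted(regrets.items(), key=lambda kv: kv[1]):
    print(f'{method:26s} {regret:.5f}')
```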

Honest Bucklin was 0.22931, slightly better than 
"strategic Approval," and Bucklin probably 
encourages sincere voting; however, I do expect 
high truncation rates with Bucklin *and with a Condorcet method.*

Realize that "Honest Copeland" means no 
truncation. That is *highly unlikely.* Truncation 
is very common with IRV, for example, and is 
reputed to have occurred commonly with Bucklin. 
Truncated votes aren't "insincere," but they 
don't fully disclose preferences, and that 
behavior is, in fact, quite similar to strategic voting in Range.

I'd say that these results do favor Approval, as 
the first reform. And Bucklin may be better than 
Copeland, in practice, because the strategy of 
ranking one frontrunner is quite likely to be common.

> >> I wonder if you have ever been curious to wonder what a "strategic"
> >>voter is, for a rank ballot method.
> >
> >Nah, curiosity killed the cat.
> >
> >I've done a fair amount of reading on this, but who remembers anything?
> >Often not me.
>Actually this question was specifically about the simulations.

Read the paper. Smith describes it explicitly.

> >> Some six months ago I wrote a strategy simulation for a number of
> >> methods. One situation I tested was Approval, given a one-dimensional
> >> spectrum and about five candidates, A B C D E.
> >>
> >> In my simulation, once it was evident that C was likely to win, one of
> >> either B or D's supporters would stop exclusively voting for that
> >> candidate, and would vote also for C.
> >
> >B and D voters are motivated to ensure that C wins if their favorite
> >doesn't. Hence Approval will tend to find a compromise. If B or D are not
> >relevant, can't win, they *may* also vote for B or D, so I'm not sure
> >that the simulation was accurate.
>I'm not sure what you mean by this. Voters that prefer B or D to C have
>no reason to not continue voting for B or D.
>The issue is that when all the D supporters (for example) *also* vote
>for C, then it isn't possible for D to win. And the more that D voters
>"give up" and vote for both, the less sense it makes strategically for
>the remaining D-only voters to not "give up" as well.

I think something has been missed here. The votes 
for C are added not when C becomes a frontrunner, 
but when C becomes a frontrunner preferred to 
another frontrunner. If B or D are frontrunners 
along with C, and the voters prefer B or D to C, 
they won't vote for C under the most common 
strategy. The example is incompletely explained; 
I don't know whether something was missed by me or it just wasn't there.

No. If the D voters prefer D, they won't vote for 
C unless there is another candidate more likely 
to win if they don't vote for C. The only 
situation where this breaks down is a three-way 
close race between their favorite, C, and another 
candidate less preferred than C. In other words, 
if their candidate can win, most voters will not 
add an additional approval for a viable and 
significantly less-preferred rival. They might do 
it with Bucklin, where it's more like insurance and an easier decision.
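The "most common strategy" I have in mind can be sketched as: approve the preferred of the two frontrunners, plus everyone strictly preferred to the other frontrunner. (The utility numbers below are illustrative assumptions, not from any simulation.)

```python
def frontrunner_approval(utils, frontrunners):
    """Approve the preferred of the two frontrunners plus every candidate
    strictly preferred to the other frontrunner.
    utils: dict mapping candidate -> utility for this voter."""
    f1, f2 = sorted(frontrunners, key=lambda c: utils[c], reverse=True)
    return {c for c, u in utils.items() if u > utils[f2]} | {f1}

# A D voter with sincere utilities D > C > B:
d_voter = {'D': 1.0, 'C': 0.6, 'B': 0.0}

# Frontrunners D and C: the voter does NOT approve C.
print(sorted(frontrunner_approval(d_voter, {'D', 'C'})))  # ['D']

# Frontrunners B and C: the voter compromises and approves C.
print(sorted(frontrunner_approval(d_voter, {'B', 'C'})))  # ['C', 'D']
```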

>My simulations involve polls. When the polls find that the winner will
>either be B or C, then it's strategically unwise to not approve one of

That's correct. However, you were talking about B 
or D voters. If it's a B voter, the strategy 
means "don't vote for C." If it's a D voter, it 
means "Vote for C," assuming that C is preferred 
to B. But in that case, the C vote is probably 
harmless to D, who isn't likely to win anyway, with or without the vote.

>At first, the polls report that C will win a lot but (due to bullet
>voters for B and D) there is some chance that B or D will win. Eventually
>the polls (which are subject to some randomness) will produce a prediction
>that D's odds (or B's) are abnormally poor. This causes D voters to stop
>voting only for D, and also vote for C. This almost immediately makes D an
>unviable candidate, and the bullet voters for D disappear.

You mean that they stop indicating in polls that 
this is how they will vote. I don't think that 
real voters will iterate in polls like that, at 
least not with significant differences. Most 
Approval studies of iterative voting start with 
bullet votes; then approval thresholds are 
gradually adjusted. If the bullet voters for D 
disappear, it must mean that the voters have 
concluded that D can't win, hence they go for the 
compromise, C. They will do this only if they 
think that the real pairwise election is not 
between C and D but between B and C, and they prefer C.

But remember, it starts from bullet votes, pure 
favorites. Plurality has a fairly good ability to 
predict what a preferential voting system -- or 
Approval system -- will come up with, and it only 
breaks down under certain conditions. If the D 
voters have a significant preference for D over 
C, they will hold out longer, and some of them 
will never compromise. Remember, not all voters 
will follow frontrunner strategy. They don't with 
Plurality, why should they start with Approval?

To summarize this, the scenario makes sense only 
if B, C, and D are in a near-tie. If both B 
voters and D voters prefer C over the other of B 
and D, then C is, indeed, their compromise 
candidate! It's perfectly rational that the B and 
D voters, iterating over polls, increase their 
support for C, but it will never go all the way.
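A toy version of this iterated-polling dynamic, starting from bullet votes, shows the compromise candidate rising. Group sizes and utilities are illustrative assumptions, and, unlike real electorates (as argued above), *every* simulated voter here follows the frontrunner strategy each round:

```python
def frontrunner_approval(utils, frontrunners):
    """Approve the preferred of the two frontrunners plus every candidate
    strictly preferred to the other frontrunner."""
    f1, f2 = sorted(frontrunners, key=lambda c: utils[c], reverse=True)
    return {c for c, u in utils.items() if u > utils[f2]} | {f1}

# Three groups on a one-dimensional spectrum: B wing, C center, D wing.
voters = ([{'B': 1.0, 'C': 0.6, 'D': 0.0}] * 36 +
          [{'C': 1.0, 'B': 0.4, 'D': 0.3}] * 30 +
          [{'D': 1.0, 'C': 0.6, 'B': 0.0}] * 34)

# Round 0: bullet votes only (everyone approves just their favorite).
tally = {'B': 0, 'C': 0, 'D': 0}
for v in voters:
    tally[max(v, key=v.get)] += 1

# Iterate a few polling rounds; each round uses the previous top two.
for _ in range(3):
    top_two = sorted(tally, key=tally.get, reverse=True)[:2]
    tally = {'B': 0, 'C': 0, 'D': 0}
    for v in voters:
        for c in frontrunner_approval(v, top_two):
            tally[c] += 1

print(tally)  # {'B': 36, 'C': 64, 'D': 34}: the compromise C wins
```

Note that the dynamic settles quickly: once the polls report a B-versus-C race, the wing voters' approvals stabilize and C stays on top.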

The behavior described seems reasonable, proper, 
and is effective for finding a compromise winner. 
Is there some problem with it?

Bucklin allows them to maintain their sincere 
preference but, effectively, vote this way. Some 
might add C in the second rank, some in the 
third, depending on their preference strength. 
But some will always bullet vote, perhaps even 
most. Real voters don't give up as easily as your simulated ones!
