[EM] Why I think IRV isn't a serious alternative 2

Abd ul-Rahman Lomax abd at lomaxdesign.com
Fri Dec 19 13:18:57 PST 2008


At 10:36 PM 12/18/2008, Kevin Venzke wrote:
>Hello,
>
>--- On Tue, 16.12.08, Abd ul-Rahman
>Lomax <abd at lomaxdesign.com> wrote:
> >However, in defense of Venzke, he thinks that the situations where IRV
> >is non-monotonic are rare enough that it's not worth worrying about.
>
>What I think would be rare is that such situations (or the risk of such
>situations) would have any effect on voter behavior.

This is more true than false. However, this 
judgment depends on voter ignorance or laziness 
in situations where strategic voting would 
substantially improve results. Further, I 
predict that eventually the norm will be for 
full ballot images to be available. It's 
happening in San Francisco with limited images. 
(The "images" are ballot data summarized by the 
Opscan equipment, so, for example, legally moot 
detail may have been removed. I don't recall the 
specifics, but certain possibly interesting 
voting patterns have been removed. An example 
would be an overvote in a second rank with no 
third rank expressed; this is indistinguishable, 
to my memory, from the same overvote with one of 
those candidates, or another candidate, voted in 
the third rank. That is a vote where a reasonable 
voter intention could be deciphered, and even if 
the overvote itself were not resolvable, a better 
estimate of voter intentions as to the whole 
election could be made. At the least, the claim 
is made that IRV leads to more ballot spoilage, 
or that it presents more opportunities for voter error.)

So, suppose some big election shows monotonicity 
failure: if some small set of voters had 
abstained, or voted insincerely, they'd have 
gotten a better result. This is different from 
majority criterion failure, which is a remote and 
arguably harmless possibility with approval 
methods. Monotonicity failure would, if 
discovered, create a sense of illegitimacy in the 
election, and there would be, in addition, two 
reasonably likely outcomes: (1) the rejection of 
the voting reform -- and increased suspicion 
regarding all voting reform (in this situation, a 
majority might agree that the result was poor) -- 
and (2) increased use of strategic voting.

The idea that voters won't vote strategically 
misses a huge phenomenon: the use of how-to-vote 
cards in Australia, where voting strategy is decided by 
a political party and the voters are then advised. Many will follow this advice.

And, of course, there is truncation. And there 
will be lots of truncation unless full ranking 
is required, and not only is that unlikely to be 
used in the U.S., it's been found 
unconstitutional in the past. Full ranking was 
required in the Oklahoma application of Bucklin, 
and, contrary to what's been claimed or implied, 
the voting system aspect of it, aside from the 
three-rank ballot, wasn't the issue for the 
Court rejecting the method. The issue was the 
obligatory additional choice votes when there 
were three or more candidates. So voters cast 
first rank votes, *and they weren't counted.*

> >The real bite is with Center Squeeze.
>
>I agree with this.

Thanks. You and me and nearly everyone who understands the issue.

> > Highly speculative. Bucklin probably experiences about the
> > same level of bullet voting due to LNH fears as IRV, not
> > much more, because the "harm" only happens when a
> > majority isn't found in the first round.
>
>If methods typically won't require more than the top rank, then I guess
>neither LNHarm nor monotonicity failures will be much of a problem.

With LNH, the "harm" is that the voter sees a 
second preference candidate elected rather than 
the first preference. In fact, in full-vote 
methods (only Range is different), a single vote 
never purely flips an election result; rather, it 
turns an election into a tie or a tie into an 
election. Voters won't be exercised about a rare 
LNH failure. Most voters will bullet vote in a 
situation where LNH is a real risk.

And, yes, methods in the U.S., at least, will not 
require full ranking, and for very good reasons. 
The Oklahoma case gave them in about 1926, as I 
recall. Dove v. Ogleby. Full ranking forces 
voters to vote for someone, effectively, whom 
they may detest, striking at the heart of the 
freedom of the voter. Democratic process only 
"forces" this when there really is no 
alternative, as agreed upon by a majority of 
voters. They would rather see the office filled 
by the Lizard than go vacant. In real democratic 
process, election failure is always an option 
that a majority can create -- or prevent.

Majority rule. Don't try this at home?

> >In other words, Center Squeeze is a direct consequence of LNH compliance
> >by IRV.
>
>Well, MMPO satisfies LNHarm, and is nearly a Condorcet method.

I'd have to look at it. How does MMPO work? I 
worry about "nearly," but, sure, if the exception 
took extraordinarily rare conditions, and the 
results then were merely suboptimal, not 
disastrous.... I can imagine a method that 
uncovers the votes and uses them to decide other 
pairwise contests, but I'm suspicious of the claim.

> >Interesting, eh? Top three. A Condorcet winner is almost certainly in
> >there!
>
>I think this is doubly likely if you arrange the incentives so that it's
>likely that third place achieved that position better than randomly.
>
>In other words: I want to have a TTR election where candidates risk being
>spoilers if they place worse than third.

That would be a system where the candidate is 
risking damage to the overall benefit of the 
election. Did you mean to write it as you did? A 
spoiler typically will drop the "spoiled" candidacy one rank, not two.

By the way, I'm seeing, now, some work done in 
the last two decades on the Clarke tax or similar 
devices to give voters an "incentive" to vote 
sincerely. However, it seems to me that underlying 
this is an assumption that voters won't naturally vote sincerely.

The trick is to consider that their votes are 
some mixture of sincerity and "strategy," i.e., 
that the votes already take into account, for 
some voters, what the voters sense as a 
reasonable compromise. It seems that some of us 
don't trust that voters are capable of doing this.

The "incentives" may be trying to fix something 
that's not broken. We have *theory* that voters 
will "exaggerate" and thereby cause damage, but 
if we look at the simplest case, Approval, there 
is no Approval vote that makes sense that is 
actually "exaggeration." If two candidates are 
true clones, setting one's approval cutoff 
between them makes no sense, the voter would, by 
definition, be equally satisfied with the 
election of either and should therefore support both.

Sometimes at this point, it will be said, "But 
the voter wants his or her favorite to win!" Of 
course. That's a preference difference. The 
candidates aren't clones, for that voter. The 
voter is attached to one of them. The reason for 
the attachment isn't our business!

I claim that Range is strategically the same as 
Approval, simply with additional opportunities 
for a deeper expression of preference strength. 
We can assume, in Range, even though it may not 
be exactly true, that voters who express some 
preference strength in Range actually have a 
preference. And that when they don't express a 
preference, they do not consider the preference 
significant enough to warrant wasting a vote or fraction of a vote on it.

There is some indication, from a paper in 
progress by Warren Smith, that a mixture of 
"fully sincere" and "strategic" voting in Range 
produces less regret than either "strategy" 
alone. If this is true, the whole concept of 
"strategic voting" as some kind of negative must be examined.

In any case, a good voting system will not 
produce poor results with "strategic voting," but 
only, at most, results that are mildly worse than 
the theoretical optimum. All of the 
"nightmare scenarios" that I've seen presented 
for "vulnerability to strategic voting" in Range 
have been results where the "sincere voters" got a very, very good result!

The exception is Saari's "mediocre" election, 
where the supposed "sincere voters" simply follow 
a totally stupid strategy, a mindless 
approval-above-the-mean strategy. We all know to 
avoid this in real life; we don't use that 
strategy unless it is modified by estimated probabilities.

The *theory* of oscillation or endless regression 
based on feedback between polls and voter 
decisions is just that, a theory. It probably 
doesn't happen much, because there are other, 
stronger forces. Ron Paul might have made a good 
Republican candidate, but .... campaign funding 
would have to be addressed! I think Paul 
supporters could have overcome poll bias; the 
only Range poll that I saw on this (MSNBC Range 
2, with votes of -1, 0, +1, and a default vote of 
0 -- nice method!) showed Paul way ahead, far 
above the other Republicans. Even with 
participation bias, this was impressive. Same 
polls showed, at that time, Obama way ahead of 
Clinton. And, as I recall, McCain was, aside from 
Paul, the most approved Republican. I find that 
really interesting.... I should look again.

Voters aren't going to look at every poll and 
shift their voting decisions; many or most of 
them will only look at a few. Did anyone need a 
poll to know that the U.S. Presidential election 
was between Obama and McCain? Or that, in later 
Democratic primaries, it was between Clinton and Obama?

Sure, that makes us dependent on the media. So 
new? So got any alternative? (I do, we should own 
the media, collectively, and we could do it, 
effectively. Nothing stopping us but inertia. It 
would be a good investment, done right. 
Alternative: we don't own the media, but we are 
well advised as to which media to trust. Same 
difference. I prefer the ownership, because, 
then, there is no conflict of interest, we 
wouldn't prefer bad advice to a small loss in 
share value, except that the loss won't happen; 
the more trustworthy we find the media, the more 
useful it becomes, and thus the greater the value, including economic value.)

>This places part of the election process outside of the election itself,
>but we already do this with Plurality.

Yes. It's normal. We need to remember that voting 
systems are a special solution to a special 
problem: the difficulty of managing full 
deliberative process when the scale is large. As 
such, we should try to imitate deliberative 
process, to the extent practical. Asset Voting is 
a totally new idea that is actually old, but 
which escaped notice, turning any election into a 
deliberative process using chosen representatives 
as distinct from elected ones.

Short of Asset, what Range does is to imitate 
various participation strategies: strong 
preferences tend to lead to strong negotiating 
positions, even a pretense of "gotta have it," 
i.e., bullet voting. Weaker preferences, or a 
desire for overall satisfaction (it is normal 
human behavior to value social satisfaction even 
above one's personal satisfaction, provided the 
personal loss is perceived as small compared to 
the overall social gain), lead to full initial 
disclosure of accurate utilities.

The voting pattern reflects two things:

(1) Preference strength.
(2) How much effort the voter is willing to put 
into being accurate, which is related to (1).

I.e., what we've been calling "strategic voting" 
in Range may be more sincere than we realize! It 
modifies the linear transformation of utilities 
into votes, making it more sensitive to strong 
preferences and less sensitive to weak ones, but 
that is not necessarily a bad result.

And the most that's needed as a protection is a 
majority approval requirement. Hence runoff forms 
of Range. Voting systems theorists almost 
completely missed this in the search for ideal 
methods, because completing in one round was 
considered essential. It's actually a severe and 
unnecessary restriction; it simply has a cost, 
and it's possible to keep that cost low, lower 
than the value of the improved results, when runoffs are needed.

It's possible that a good Range method would so 
rarely need a runoff, and would only choose 
second-best when there is only a small 
difference in social utility between that winner 
and the best, that runoffs wouldn't be worth it. If 
we had runoff Range operating, we could measure 
this with real elections. Until then, we need more and better simulations.

We need simulations that will predict truncation. 
We need simulations that will predict turnout. 
The models don't have to be perfect, some 
modelling is better than none. We know that, in a 
runoff special election, the idea that runoff 
turnout is always poor is false when voters have 
a very strong preference, such as the Lizard v. 
the Wizard or the similar Chirac v. Le Pen runoff 
elections, where final turnout exceeded that of 
the primary. The reverse should also be true: low 
preference strength means low turnout in a runoff 
election, *and that is not a bad thing.*

So many false or weak assumptions, so little time!



> >From the first message:
>
> > "Frontrunner strategy" is a common one that seems
> > to help with ranked methods as well as Range ones. Make sure
> > you cast a maximally effective vote for a frontrunner, and,
> > where "against" matters, against the worst one.
> > Usually there are only two frontrunners, so it's easy.
> > "Expectation" is actually tricky if one
> > doesn't have knowledge of the electorate's general
> > response to the present election situation. How do you
> > determine "expectation." Mean utility of the
> > candidates is totally naive and non-optimal.
>
>Mean utility is supposed to be naive, and it is optimal, if you are
>"naive" about win odds.

I know that this (mean voting strategy in 
Approval) has been proposed, but it's a poor 
model. A voter who is "naive" about win odds is a 
voter who is so out of touch with the real world 
that we must wonder about the depth of the 
voter's judgment of the candidates themselves!

This naive voter has no idea if the voter's own 
preferences are normal, or completely isolated 
from those of other voters. This is far, far from 
a typical voter, and imagining that most voters 
will follow this naive strategy is ... quite a stretch, don't you think?

Instead, most voters will, in fact, assume that 
their own preferences are reasonably normal, and 
this will indicate a far different strategy to 
them than mean utility. They will bullet vote, in 
the presence of significant preference between 
the favorite and other candidates, *and this is 
known to happen*, even when voting systems give 
them other options. The exception will be when 
the preference is low. Making that call can be a 
difficult choice. Did we claim that voters should 
only be presented with easy choices?

Other voters will know that they have unusual or 
idiosyncratic preferences, and they will vote accordingly.

So in Saari's example, the supposed "nutty" voter 
is the only one out of 10,000 voters who votes a 
reasonable strategy! -- he or she approves the 
supposedly "mediocre" candidate. If this voter 
had voted the "I'm normal strategy," there would 
have been a tie between Best and Mediocre -- 
because this is how all of the 9,999 other voters 
voted. Saari should have been so thoroughly 
discredited by that paper, "Is Approval Voting an 
Unmitigated Evil?", that he'd have had trouble getting anything else published.

Smith claims that Saari is right on in many ways, 
and that sometimes he writes much better. Maybe. 
All I know is that in that particular paper, 
which is almost entirely polemic without solid 
foundation, he went way outside academic norms and standards.

>"Better than expectation" is mean *weighted* utility. You weight the
>utilities by the expected odds that each candidate will win. (There is
>an assumption in there about these odds being proportional to the odds
>that your vote can break a tie.)

Sure. That's the correct understanding of "mean 
utility." It means a reasonable expectation of 
the outcome. However, what's incorrect is 
assuming that voters have no idea of the probable votes of others.
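
To make that concrete, here is a small Python sketch (mine, not anything from Kevin's post; the candidates, utilities, and win-odds are made up for illustration) of the "better than expectation" rule: approve every candidate whose utility to you exceeds the probability-weighted expected utility of the outcome.

    # Hypothetical illustration of "better than expectation" Approval strategy.
    def better_than_expectation_ballot(utilities, win_odds):
        """utilities: candidate -> utility to this voter (any scale).
        win_odds: candidate -> estimated chance of winning, treated as
        proportional to the odds that this vote breaks a tie."""
        total = sum(win_odds.values())
        # Probability-weighted expectation of the election outcome.
        expectation = sum(utilities[c] * win_odds[c] for c in utilities) / total
        return {c for c, u in utilities.items() if u > expectation}

    utilities = {"A": 1.0, "B": 0.6, "C": 0.0}
    # Two clear frontrunners: reduces to frontrunner strategy (approve A only).
    print(better_than_expectation_ballot(utilities, {"A": 0.45, "B": 0.45, "C": 0.10}))
    # Zero knowledge (equal odds): reduces to approve-above-the-mean (approve A and B).
    print(better_than_expectation_ballot(utilities, {"A": 1/3, "B": 1/3, "C": 1/3}))

With two frontrunners the expectation lands between them, so only the preferred frontrunner gets approved; with equal odds it collapses to the naive mean-of-the-candidates strategy discussed above.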

Being human, each voter is a sample human, and 
more likely to represent the views of other 
humans than not. This is a far more accurate 
model of human behavior than the assumption that 
candidate preferences are random, which only 
would be true in a simulation that assigns the 
preferences that way. Voters are members of 
society, and not independent in the sense that 
their choices can't be predicted, with some level 
of accuracy, by those of a sample, even a sample as small as one voter.

By this argument, the rational vote, 
zero-knowledge, is the bullet vote. This happens 
to be the vote that has the best probability of 
favorably affecting the outcome (i.e., if the 
voter is the last voter). We've done it 
backwards. The default vote should be a bullet 
vote, and only in the presence of significant 
strategic considerations should the voter deviate from that.

Now, if the voter has low preference between two 
candidates, one of them the favorite, when the 
preference strength is low enough, the voter may 
indeed approve both of them. But this is far from 
Saari's example, where the middle candidate was 
equally placed between the best and worst 
("mediocre"), not "almost as good as the best."

Or if the voter has some sense of the other 
voters that leads the voter to conclude that the 
voter's personal preferences don't reflect the 
overall ones, then the voter will consider strategy to address that situation.

And for the voter's sample to be only one voter 
would require that the voter doesn't discuss the 
election with anyone! Indeed, because birds of a 
feather flock together, voters are quite likely 
to have a biased view of the overall preferences, 
tending toward bullet voting, again.

So most voters, we can think, under current 
conditions, will bullet vote. The fantasy that 
large numbers will approve mediocre candidates 
based on a stupid strategy is just that: a 
fantasy, an example of how naive game theory can 
fall flat on its face. Won't happen. Bullet voting *will* happen.

>"Frontrunner strategy" is just a special case of "better than
>expectation," where only two candidates are expected to have any chance
>of winning.

Sure. There remains the issue of how to rate a 
middle candidate. I think that the "mean 
strategy" overlooks other factors, including what 
might be called "absolute approval." I.e., if I 
absolutely disapprove of a candidate -- never 
mind the other options -- in that I would not 
want it to be in my history that I voted for him 
or her, I won't, no matter what the math tells 
me. I'll listen to my gut instead of the math, 
because it's more likely, in fact, that the math 
is wrong than that the gut is wrong. The "gut" 
was developed over millions of years of 
evolution, where making wrong decisions was life 
or death, or starvation or nutrition, and the 
math is how old? The "gut" does use math, in a 
sense; Warren is right about that. But it's probably VNM utilities that it follows.


> > But it's a complex issue. My point is that "better
> > than expectation" has been taken to mean "average
> > of the candidates," which is poor strategy, any wonder
> > that it comes up with mediocre results?
>
>"Average of the candidates" is the special case of "better than
>expectation," where there is no information on candidates' win odds.

Which is a non-existent situation, unless you 
posit radically artificial conditions. "No idea 
of probable outcomes" is rare in the real world; 
it mostly crops up with gambling, where random 
choice is artificially created. And, indeed, we 
can make bad decisions under those conditions, 
assuming, as would be natural and generally 
correct when dealing with nature, that we can 
improve our performance the more we play the game!

What I'm pointing out is that the voter's 
knowledge of himself or herself is adequate for a 
better default "zero-knowledge" strategy than "mean utility of the candidates."

In Range, i.e., Range N with N>1, I'd rate 
candidates "sincerely," i.e., with reasonable 
accuracy, but perhaps with some bias towards 
approval strategy. I.e., the transform from 
utilities to Range votes might be linear, except 
that the transformation is truncated, possibly at 
the top and bottom. In particular, I'd be 
unlikely to give candidates I'd not like to see 
win the election, purely on their own, any positive rating at all.
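
Here's a small Python sketch of the kind of ballot I'm describing; the cutoffs (no positive rating at or below 0.5, full rating at 0.9 or above) are hypothetical parameters of mine, purely for illustration, not a recommendation.

    # Hypothetical truncated-linear transform from sincere utilities to Range ratings.
    def utilities_to_range_votes(utilities, n=10, floor=0.5, top=0.9):
        """Map utilities in [0, 1] to Range 0..n ratings: linear in the middle,
        truncated to 0 below the approval floor and to n near the favorite."""
        votes = {}
        for cand, u in utilities.items():
            if u <= floor:
                votes[cand] = 0   # no positive rating for unacceptable candidates
            elif u >= top:
                votes[cand] = n   # near-clones of the favorite get full support
            else:
                votes[cand] = round((u - floor) / (top - floor) * n)
        return votes

    print(utilities_to_range_votes({"Favorite": 1.0, "Good": 0.8, "Meh": 0.5, "Awful": 0.1}))
    # -> {'Favorite': 10, 'Good': 8, 'Meh': 0, 'Awful': 0}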

But we overlook, in these analyses, that most 
voters don't know enough to "sincerely rate" all 
the candidates. As Carroll noticed, most voters 
may know their favorite, maybe their favorite's 
main opponents, and that's it. So what do you do, 
how do you vote, when you don't recognize the candidate?

Naively, Warren Smith thinks you might abstain, 
and he wants to see average Range rather than 
sum-of-votes Range. (Sum-of-votes Range usually 
treats an abstention as a bottom rating, though 
it could be, for example, midrange, as it was in the MSNBC polls.)

However, voting only for candidates I recognize, 
approving the best of these and not the 
worst, is a kind of frontrunner strategy, for the 
best-known candidates tend to be the 
frontrunners. I may only know one candidate 
(Carroll's realization), my favorite. Bullet 
voting is my response, as it should be.

What we have done, too often, is to study voting 
systems through their theoretical performance in 
preposterously rare situations. As I've written 
many times, a very common objection to Approval 
Voting, including among experts, is Majority 
Criterion failure. Yet MC failure with Approval 
is, in the vast majority of real political 
elections, highly unlikely, because of the 
preponderance of bullet voting, and when it 
happens, it's hardly a disaster (only if the 
majority were drastically misinformed about 
themselves would it be a mediocre result). By 
definition, the supporters of the frontrunners, 
when there are only two, have no incentive to add 
additional approvals unless they don't mind that 
these votes, by some miracle, elect a minor party 
candidate. So bullet voting can be expected when 
voters are voting for a frontrunner; it would be 
the norm. It's only when voters prefer a minor 
party candidate that we will see an increase in 
the use of additional votes, and this is true for 
optional ranked systems as well. I should look at 
the Australian OPV data, but the reported data 
doesn't show truncation in votes for the top two. 
I do know that ballot exhaustion is *common* with 
OPV, which would mean bullet voters who *don't* vote for a frontrunner.

We have to realize this: many or most voters will 
ordinarily vote for one candidate, and habit 
isn't the only cause; it is probably 
better strategy for the majority of voters than a 
naive "mean of the candidates."

And, of course, we can then see Saari's example 
as the piece of preposterousness that it is. If 
two voters out of the 9,999 vote this very common 
strategy -- under Plurality, where it's costly if 
wrong! -- which we have every reason to think 
will be normal, certainly not rare, then the Best wins.



> > > The big concern is what happens when poll stability
> > can't be achieved.
> >
> > Nah! Most voters won't pay that much attention to
> > polls, they will just vote their gut. Polls will be used by
> > those who are very seriously involved, who want to maximize
> > the power of their vote. I think most of the "big
> > concern" is simply imagination. There won't be big
> > surprises with Approval. Little ones, sure.
>
>I use the term "polls" loosely. It is hard to imagine that under any
>election method, voters in this recent election might not have realized
>that the important contest was McCain vs. Obama.

And I can't think of an exception. Probably the 
closest would have been Ross Perot. Some people 
may have voted for him thinking he could win. But 
I think most realized exactly what they were 
doing. They simply didn't have enough preference 
between Bush Sr. and Clinton to make it 
worthwhile to them to drop the value of the statement of a vote for Perot.

To emphasize this, we have been diverted by the 
idea that "strategic voting" is a bad thing, 
instead of looking at what's underneath the hood: 
preference strength. If you don't have 
significant preference strength involved, you 
don't bother with utility maximization; you don't 
care enough, even if you are *certain* that it 
will be A or B, and you'd rather vote for C for other 
reasons. In a preferential method, *some* of you 
will vote for A or B. Some won't. How many of 
each? *Depends on preference strength.*

The idea that preference strength was useless, 
the easy rejection of it because it wasn't 
"practical," the claim that "voters will 
exaggerate," all this diverted us from this large 
gray pachyderm in our main living space. 
Preference strength drives voter behavior, and 
preference strength is behind voter turnout 
(particularly in special elections), and how voters choose to vote.

>If voters "vote their gut" and don't consciously use any strategy, I'd
>say this will be well beyond the point where polls have already taken
>their toll and removed unviable candidates from the voters'
>consciousness.

But it happens without polls! That a candidate 
isn't "in the voter's consciousness" is the worst 
nightmare of campaign managers. "Bad publicity is better than no publicity."

Consider this: in real IRV elections (nonpartisan 
is important here), vote transfers seem to behave, 
where I've looked, as if the supporters of an 
eliminated candidate are a representative sample 
of all the other voters; i.e., their lower preferences will generally match.

I was astonished to see this; in fact, it was a 
totally unexpected result. But if you think about 
it, it makes a great deal of sense. In 
nonpartisan elections, there isn't some automatic 
means for voters to connect candidates. It's easy 
for a Green Party supporter to assume that a 
Democrat will be a better choice than a 
Republican. Take the party markers away, what's left?

The candidates themselves, and the combination of 
their character as viewed by the public, to some 
degree their policies, and how well they are known.

For whatever reason, the vote transfers tend to 
not alter the social order among the remaining 
candidates, so the first round leader wins the 
election. No exception, so far, in nonpartisan 
IRV elections in the U.S. -- which is the large 
bulk of nonpartisan elections. It's quite 
different with Top Two Runoff, where the 
runner-up wins the runoff in about one-third of 
elections. At this point, the sample size is 
small enough that this could be some statistical 
fluke. But it's actually a known effect in 
Australia. "Comeback elections" remain relatively 
rare, even with partisan elections (which the 
Australian PV elections are), and it is 
practically unknown (totally? as I recall, maybe 
one or two elections in the last century?) for 
the first round third place candidate to win.

Using Approval isn't going to magically increase 
voter awareness of minor candidates! The 
only system I know of that gives these candidates 
a real chance, except under quite unusual 
circumstances, some sea change, is Top Two 
Runoff, which better simulates deliberative 
process. So ... we should be following that clue. 
Require that a majority approve a result, *at least* 
for an election to be decided on the first 
ballot. Use better methods for discovering a 
majority if one can be found in the votes, i.e., 
use Bucklin or a Condorcet method instead of IRV, 
and better methods of picking what happens in the 
runoff. I've been making Range/Condorcet hybrid 
proposals. Asset, though, finesses the whole 
mess. One election to pick the electors, and the 
electors handle the rest, and can use as many 
ballots as they choose. With public voters, the 
whole secret ballot/security/counting expense thing goes away.

>I absolutely want voters to pay attention to polls, because if they don't,
>this is probably the same as the polls being unable to stabilize around
>two frontrunners. And the results of such elections would be rather
>arbitrary, I would guess.

Only when preference strengths are small! Give me 
a large enough preference strength, I don't give 
a fig about polls! And I think that's true of most voters.

So the "oscillation," the lack of stability, will 
only take place when the choice isn't terribly 
important to most voters. Like most voters, I'd 
guess, in this last Presidential election, what 
was most important to me was that a Democrat win. 
I.e., I had *intrinsically* low preference among 
the major Democratic candidates; I'd have 
supported any of them. I came to favor Obama 
early on, for a combination of electability and 
an assessment of him as a person. Clinton had -- 
as the MSNBC polls showed -- too many negatives, 
not so much personally, but as to electability.

Please don't give me an open primary; the IRV 
supporters' suggestion that IRV could replace 
primaries and general elections with a single 
ballot is very, very dangerous. Range might do 
it, but I'd *insist* on a true majority approving the winner.


> > In plurality
> > Approval, strategy based on polls would loom larger. Sure,
> > it could oscillate. But how large would the osciallations
> > be?
>
>The only situation I'm concerned about is where, when the polls report
>that A and B are the frontrunners, this causes voters to shift their
>approvals so that the frontrunners change, and when the polls report
>this, the voters react again, etc., etc.

Of course. Except it's not going to happen. 
Voters will overstate their tendency to bullet 
vote in the polls. Voters will only approve more 
than one candidate when their preference strength 
is below a certain threshold, and even there, 
it's questionable how much they will do it unless 
they really have no significant preference: it's 
hard for them to state a preference between the two, so they approve both.

Further, the results don't shift the way you seem 
to expect. A and B are the frontrunners, a poll 
shows. How do voters respond? One common response 
would be no response. Then there are the 
supporters of C. They get this news, and they now 
plan to add a vote for A or B to their prior bullet vote for C.

There is only one class of voter who will shift 
their vote: those who already preferred a 
frontrunner, but who, in ignorance of this 
situation, already approved both. You have to 
understand that this is an unusual situation, in 
itself. Most voters in early polls will bullet 
vote, unless preference strength is low, and if 
preference strength is low, they aren't likely to 
stop approving both. But voters who did vote like 
this may raise their approval cutoff to reflect 
how they probably should have voted in the first place!

Sure, it could oscillate. But only if most voters 
have low preference between A and B. In which 
case it doesn't matter that much who wins! Sure, 
the choice would then be somewhat arbitrary. This 
is Approval, after all, the terminally simple 
Range 1. It's like a control mechanism with only 
two motor speeds: Off or Full-On. Such systems 
will oscillate under some conditions, 
oversteering. In Range, even Range 2, the 
response is damped. If, in an initial poll, I 
rated Obama 1 and Clinton 1 -- our unusual 
situation, I wouldn't have done that in reality 
-- I'd not drop Clinton to -1 if a poll showed 
them running neck and neck; I'd have dropped her 
to 0.5. Now, in reality, I did, in fact, rate 
Clinton at 0 -- midrange -- in the MSNBC poll. 
And finding out that they were running neck and 
neck, I don't think it would have shifted my 
rating at all. (Remember, I've got other 
candidates rated too, some at 1 or 0.5, some at 
zero, this was a "primary poll," not for the 
election itself.) Allowing intermediate responses 
will reduce oscillation. The idea that everyone 
is going to want to go full-on for their favorite 
against everyone else is just as preposterous as 
the idea that everyone will add additional preferences.

Instead, even with Approval, the results are 
damped, through averaging. People aren't the 
same and will respond differently, so the average 
Approval votes will tend toward Range results.

(It's like an analog-to-digital converter that 
collects a lot of 0s and 1s based on a comparator 
output, where the voltage of interest is compared 
with either a random voltage in some range or a 
swept one. Either way, if the random distribution 
is correct, the sum of the comparator's outputs 
will vary quasi-linearly with the analog voltage 
being studied, even though all the "votes" are binary.)
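
The comparator analogy can be simulated in a few lines. Here's a toy Python sketch (my own, with a made-up uniform spread of approval cutoffs) showing that averaged yes/no approvals track the underlying utility almost linearly:

    import random

    random.seed(1)

    # Each voter approves the candidate if the candidate's utility exceeds that
    # voter's own approval cutoff, drawn uniformly from [0, 1] (the "dither").
    def average_approval(candidate_utility, n_voters=100000):
        approvals = sum(1 for _ in range(n_voters)
                        if candidate_utility > random.random())
        return approvals / n_voters

    for u in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(u, round(average_approval(u), 3))   # averages come out close to u

The binary ballots, averaged over voters with spread-out cutoffs, behave like a Range tally, which is the point of the converter analogy.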

In other words, when we study Approval using 
limited examples that assume large numbers of 
voters voting identically, and switching their 
votes identically, we get a much poorer image of 
the method than some simulation that imitates the 
varying underlying utilities and approval 
cutoffs, the latter being a process of feedback, 
of interaction between absolute utilities and 
their probability-modified VNM versions.

This process is part of how a participating 
electorate seeks and finds compromise. It's much 
better than raw voting system theory might 
predict. It considers and measures preference 
strength, not directly, but through the outputs, the votes.

>  Obviously it wouldn't be as
>neat as that (in my simulation, not everyone is allowed to change their
>vote at the same time; they receive random opportunities). But I guess
>the result is that there would ultimately be more than two frontrunners
>in the voters' consciousness.

Actually, in real elections, there may be only 
one. The incumbent advantage is a real one, and 
difficult to overcome, and it's not even clear 
that it *should* be overcome. I prefer Asset, 
though, because it bypasses the whole can of 
worms. Vote for your favorite, period. Don't 
trust your favorite to carry on in your place? 
Why, then, are you voting for such an 
untrustworthy person? The qualification for 
office *generally* implies qualification for 
delegating responsibility and authority. I.e., 
for choosing who will hold the office. Real 
officeholders, especially major offices, must be 
able to delegate authority; someone very good at 
the office, as to what they *personally* do, but 
who doesn't know whom to trust, can be a 
disaster, vulnerable to unscrupulous staff or 
associates. So the only reason that I can think 
of that one would vote for someone considered 
untrustworthy is a system that doesn't allow 
voting for the true favorite; the "favorite" in 
this case is a lesser evil, not actually trusted. 
And if the voter has *nobody* whom the voter can 
trust, given the vast freedom in a mature Asset 
system, well, there are two responses, and, beyond that, TANSTAAFL.

(1) Adjust medication.
(2) Register as an elector and vote for yourself.

> > And, in the end, the winner is the candidate accepted by
> > the most voters.
>
>But when one (such as myself, and I think also you) portrays Approval as
>a strategy game, under which "sincerity" is a red herring, a statement
>like the above falls very flat. What does it mean to be "accepted" by the
>most voters?

It means that the voters literally "accepted" -- 
which is an *action*, not a sentiment; sincerity 
has practically nothing to do with it -- the candidate.

It's possible to have a Range system where the 
voter specifies a value that is an approval 
cutoff. So the voter could vote with total 
sincerity, accurately representing preferences. 
The approval cutoff is a separate decision, and 
that cutoff is *always* a strategic decision. You 
are offered $159,000 for your house. Do you 
accept? Your answer will depend on what you think 
you can get; you will generally approve what is 
better than your expectation, and not approve 
what is below your expectation. Expectation, not 
"desire." It's pure judgment, or should be!

Approval is *partly* a strategy game, but not 
entirely. The same is true for Range. We may 
assume, with Range, for example, that all 
expressed preferences are sincere. (Exceptions 
would be rare, largely moot).  So "preference 
expression" is sincere in both Approval and 
Range. The "game" aspect has to do with where to 
set the approval cutoff. With Range, and in 
particular with hi-res Range, we can treat a 
rating of 100% as indicating a favorite, or a 
candidate so close to the favorite that there is 
no difference worth considering, because the 
strategic value of voting 100 vs. 99 is so low 
that we might as well forget about it. (And it's 
possible to have preference markers in Range 
where voters can indicate pure preference or 
precedence within a rating. There are actually 
some simple, practical possibilities for doing 
this. I just don't think that those markers are 
necessary once the range resolution is high enough.)

Kevin, you've neglected this: the votes in Range 
and Approval reflect both sincerity and strategy, 
a mix. You can infer certain preferences from 
them, and those inferences will generally be 
accurate. The strategic part has to do with what 
the voter is concealing as to preference, or 
simply not disclosing. We don't know if this is strategic or accurate.

Call it Later No Harm! It's the same idea, 
really. The voter isn't disclosing certain 
preferences, for whatever reason. Do we want 
preference disclosure to be cost-free? I'd 
suggest that this is not a good idea; it 
introduces noise into the system more than it 
increases information. Remember, the big concern 
about Range is supposed to be "vulnerable to 
strategy." Behind this is an assumption that 
"exaggerating" is insincere, since, supposedly, 
it's cost-free. But it isn't cost-free, unless it's moot!

In a three-way election, if you approve A and B 
against C, you've just abstained from the A-B 
contest in favor of defeating C. There was a cost 
to the "exaggeration," if that is what it was. 
There is a cost either way, but it turns out that 
when the number of voters is small, the optimal 
strategy is the bullet vote. The incremental 
utility gets smaller and smaller as the number of 
voters increases (and this is relative utility, 
with the assumption that the vote affects the 
result, so this effect is compounded by the 
increasing rarity of ties and near-ties), but it 
never disappears. That optimal strategy holds 
when the middle candidate has an exact middle 
utility. I've not studied the other cases, but my 
sense is that as the middle candidate's utility 
moves toward A's, the optimal vote moves toward 
double Approval. It has to, I'd think, because if 
the utility gap goes to zero, the optimal vote is 
obviously double Approval, a 100% guarantee of no regret over the vote.
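
Here's a rough Monte Carlo sketch (mine; the model of the other voters, eight of them each casting a random bullet vote, is a crude stand-in, and the numbers are illustrative only) comparing the bullet vote with the double approval in that three-candidate picture. Under these crude assumptions it shows the pattern described above: the bullet vote does better when B sits exactly in the middle, and approving both does better when B is nearly as good as A.

    import random
    from collections import Counter

    def expected_utility(my_ballot, utilities, n_other_voters=8, trials=100000):
        """Average utility of the winner, ties broken at random, when the other
        voters each cast a random bullet vote for A, B, or C."""
        total = 0.0
        for _ in range(trials):
            tally = Counter(my_ballot)
            for _ in range(n_other_voters):
                tally[random.choice("ABC")] += 1
            top = max(tally.values())
            winners = [c for c in "ABC" if tally[c] == top]
            total += sum(utilities[c] for c in winners) / len(winners)
        return total / trials

    random.seed(0)
    for u_middle in (0.5, 0.9):        # B exactly in the middle, then close to A
        utilities = {"A": 1.0, "B": u_middle, "C": 0.0}
        print(u_middle,
              round(expected_utility(["A"], utilities), 3),       # bullet vote
              round(expected_utility(["A", "B"], utilities), 3))  # approve A and B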

Another example, by the way, of how "mean 
candidate" is a bad Approval zero-knowledge 
strategy. It has to be probability modified, and 
the voter's own preference *must* be considered 
to weight the probability, since the voter is a 
member of the electorate, and if all other voters 
are unknown, we still have a net vote weighted, 
by one vote, toward our voter's position.

>If candidates were at least obtaining majority approval, I could be
>content with the statement. But if no one obtains a majority, offering as
>consolation that the most "accepted" candidate won is not much more
>comforting under Approval than under Plurality.

This is an argument for requiring a majority, 
isn't it? Sure. However, suppose there is some 
other threshold than "more than half" of the 
ballots approving. Set this threshold at X.

Whatever X is, that one candidate exceeds it by 
a greater margin is "more comforting," *on 
average,* than that, say, the other candidate be 
chosen. Absolutely, this might not be much 
improvement. Take the California gubernatorial 
election with its bizarre number of candidates. 
Make it Approval (not a bad idea, actually, 
certainly better than what they did!). If the 
winner has 17% of the vote, whereas with 
Plurality it would have been 15%, sure, not very comforting.

Plurality is an anomaly. No business may be 
decided, in deliberative process, with less than 
a majority of those voting in favor of it. 
"Voting for it" is "accepting it." Using Approval 
Voting doesn't change that one bit. So Plurality 
voting has only to do with multiple-choice 
questions, and is only used where it is 
impractical to use repeated balloting and this 
has been specified in the bylaws.

I consider, it should be made clear, requiring a 
majority to be so important that I'd not replace 
Top Two Runoff, with all its defects, with 
Approval Voting. Instead, I'd suggest: use 
Approval or other method, such as Bucklin or 
Range with specified approval cutoff, for the 
primary, reducing the need for runoffs.

And Top Two Runoff is much better than we have 
thought, particularly if write-in votes are 
allowed in the runoff, and it gets even better if 
advanced methods are used in the primary and runoff.

Bucklin runoff with write-ins allowed, two-rank, 
totally cool. Easy to count. Sure, if voters 
bullet vote for the write-in, there could easily 
be majority failure. No limited-ballot system is 
going to be perfect; Range and hybrid methods just get as close as possible.

But who is likely to bullet vote for a write-in 
in a runoff election between the top two 
candidates from the first ballot? One of those 
candidates was, say, the Range winner, and one is 
a Condorcet winner if different -- or there was 
no Condorcet winner beating a Range winner, so 
it's top two Range, nobody having gained a 
majority. Bucklin would allow voters limited LNH 
protection: vote for a favored write-in, then for 
the favored candidate on the ballot. Wouldn't you?

(I.e., if you thought that somehow something went 
wrong with the first election, the best candidate 
got eliminated, which would be vanishingly rare 
with a Range/Condorcet runoff system, or everyone 
got low approval so a new campaign is needed, you 
can still mount a write-in campaign without 
spoiling the election. There haven't *really* 
been any eliminations, only restricted ballot 
position, thus the method gets closer to pure 
deliberative process, where no possible 
compromise is ever ruled out until a final decision has been made.)

> > It's not going to be a terrible result,
> > if Approval falls flat on its face, it elects a mediocre
> > candidate because the voters didn't get the strategy
> > right.
>
>Well, what is a "terrible result" after all? It seems to me you don't
>have to be too picky to find methods that only fail by electing mediocre
>candidates.

When ranked methods fail, they can fail 
spectacularly, and with sincere votes. It gets 
unusual, to be sure, with better ranked methods 
(it may be as high as 10% failure with IRV, under 
nonpartisan conditions, but most of those 
failures will also be of minor effect.)

I really shouldn't have written "mediocre." 
Rather, Approval can elect a "less controversial" 
candidate, which perhaps many or even most of the 
voters would judge a "more mediocre" result than 
the best candidate, were all the preferences 
accurately known. Saari used "mediocre" to refer 
to a candidate with mean utility between the best 
and worst, as seen by the vast majority of 
voters. It's correct: that would be a "mediocre" 
winner. Better than the worst case for ranked methods.

(Or, perhaps I should say, "some ranked methods." 
Borda, for starters, looks like a ranked method 
but is more accurately a ratings method with a 
highly restricted way of expressing the ratings. 
I'm not familiar with *how bad* Condorcet methods 
can fail. Generally, with reasonable 
distributions of candidates, the difference 
between a Condorcet winner and a Range winner is 
small. So I've had in mind a method like IRV, 
where the winner could be opposed by two-thirds 
of the voters, and that could be a maximally 
strong preference -- they will revolt! -- and 
that's with sincere votes. Strategic voting 
could, indeed, improve the results.)

> > What type of voter is bad for Approval? Easy compromiser or
> > tough bullet voter?
>
>The type of voter who is willing to cast a suboptimal vote due to
>principle. It is harmful under Plurality and here is a situation where
>it would be harmful under Approval.

What does that mean?

Here is what I get from it. The Nader voter cast 
a supposedly "suboptimal vote" under Plurality. 
For principle, i.e., the importance of voting for 
the best candidate, in one's opinion.

Is that the meaning? But who are we to say that 
this vote was suboptimal? Remember, the campaign 
rhetoric, by Nader, was that it didn't matter who 
won, Bush or Gore, they were both totally in the 
pocket of the large corporations. So why can't we 
just assume that the voter made an *optimal* 
decision? From the voter's perspective.

Or does this mean the voter who supports Nader, 
but who *does* have a reasonably strong 
preference between Gore and Bush, and decides to vote that?

Note that these situations apply to Approval. 
Both scenarios will happen with Approval just as 
with Plurality. In the first situation, i.e., 
Nader is believed, there is no incentive to add a vote for Gore or Bush.

(We presume that the Nader voter would vote for 
Gore, but if there is no difference, why one over 
the other? And if there is a difference, why in 
the world does the voter prefer Nader, who has 
just tried to feed him some nonsense? A belief 
that all politicians, including Nader, are going 
to lie? That, my friends, is why the American 
electorate voted in large numbers for Bush, when 
they knew he was lying. They all lie, after all, 
so why not vote for the one who tells you the 
lies you want to hear, since he'll perhaps feel 
some need to follow up on *some* of those 
promises, more than the other guy, whose 
promises, if he actually keeps them, you won't like.)

In Approval, the second situation doesn't create 
a big conflict; that's the improvement. With 
Bucklin, the remaining conflict is resolved: the 
voter can vote a first preference and then 
indicate alternate choices. But, still, some 
percentage of voters, including minor party 
supporters, will bullet vote. And almost all 
those who truly prefer a major candidate will bullet vote. 



