[EM] IRV vs Plurality
Abd ul-Rahman Lomax
abd at lomaxdesign.com
Fri Jan 15 21:02:22 PST 2010
At 01:44 AM 1/15/2010, robert bristow-johnson wrote:
>>And it only takes a few people to realize this to start building
>>the structures. It is *not* necessary to convince the masses. That
>>will come later, after they have examples to look at, which is what
>>most people need.
>
>all of the above resonate very closely to what i've been thinking for
>about 10 months.
Thanks, Robert. Stick around :-)
>>So my political recommendations are based on what is already known,
>>what has been already tried, with only minor variations beyond
>>that. Approval voting, one might note the critics state, can
>>default to Plurality if most voters vote for their favorite and
>>leave it at that. *That's fine.*
>
>no it ain't. (Plurality is not fine.)
Always remember that "fine" is a relative adjective. Fine compared to what?
If the existing system is Plurality, and a change costs little or
nothing, then the fact that the new system defaults to Plurality when
most people vote the same way they do now means that the reform Does
No Harm. That's what's "fine."
Further, what we are really talking about is Approval voting, right?
In a situation where voters don't use the facility to add additional
votes, it may indeed be fine, because those sincere Plurality votes
(which with Approval actually will be sincere, because there is no
reason to vote otherwise in Approval) will do the job and select,
quite likely, the ideal winner by any reasonable standard.
We are accustomed to thinking of the situations where plurality
fails, but those situations can be *largely* handled by simply
allowing additional approvals. We forget that Plurality *usually*
works, within the context where it is applied. When the context
changes outside that functional envelope, Plurality obviously has
defects, even serious ones.
We must, also, understand that voting systems don't exist in a
vacuum. There is a great deal of pre-election activity that
influences how people vote, and this activity frequently operates to
make Plurality more functional than straight theory might indicate.
Further, even the pathology of Plurality, such as the spoiler effect,
also has a positive function, and if we don't understand that, we may
indeed try to eliminate it and get rid of the baby with the
bathwater. We need to understand that there is a baby in there, so
that we can protect the baby and only get rid of the dirty water.
>(it's fine and good for us to have different positions. i just
>think, and have for decades, that in a multi-candidate race, the
>problems with FPTP are too well known to revert back to that because
>IRV doesn't cut the mustard.)
So don't revert back to that. Approval, however, is almost the same
as FPTP; there is just one very small rule change, eliminating a rule
that was a Bad Idea in the first place. It's a historical accident
that voting for more than one was *ever* prohibited. The rule makes
*some kind of sense* in a repeated ballot situation, where multiple
voting is less necessary and compromise can happen in other ways. But my
sense and actual experience is that approval is quite efficient in
that situation, where it is also quite possible to do an approval
poll or a set of polls, and then actually vote Yes or No on an
implementation motion. For some reason that I *don't* understand,
Robert's Rules discourages polling, but my guess is that they are
thinking of different kinds of polls than what would be used to
choose from multiple options. They are probably thinking of polls on
Yes/No questions, which are indeed a waste of time: just vote on the
question! If no majority, it doesn't cause damage. If majority, there
is a decision, and anyone who voted for it, on thinking better of it,
may request reconsideration, reopening the question. A poll there is
indeed a waste of time. But for sorting through multiple options,
approval polling is quick and can be done by voice vote or show of
hands, for example. If it's a written ballot, Range voting becomes
more of a possibility and can further increase negotiation efficiency.
We need to back up and look at why we vote. What's the goal? And then
we re-examine the various aspects of voting systems from that
perspective, not from a perspective that incorporates a host of held
assumptions about what's good and what's not.
>>Count All the Votes. And then, I claim, we should use the votes
>>that are counted, and political theory generally says that Approval
>>Voting, which is simply a matter of Counting All the Votes, is
>>quite a good method, superior to plain Plurality, and simply
>>defaulting to Plurality if people just vote for their favorite.
>
>i think the folks on the edges want a way to express a preference for
>their guy that will actually count against their fallback guy if the
>race were to become such that's between the two of them.
Sure. But look where this matters, with, say, Bucklin. Multiple
majorities. If there is a multiple majority, we might back up and
look at the higher preference votes. If the majority winner after
opening up additional approvals differs from the leader before, which
is exactly the situation Robert is concerned about, this will be
detected, and a runoff can be held.
Alternatively, and possibly dependent on margins, we could decide to
follow the traditional most-votes assumption. But it's largely moot.
Two candidates both reaching a majority of approvals is probably very
rare, unless the candidates really are close to each other in broad
perception, in which case it doesn't matter a great deal which one is
chosen. But "a great deal" could be better defined by actual vote patterns.
If the method is Range Bucklin (the fractional-vote equivalent of
Bucklin, with the same effective sliding down through rounds of
approval cutoff), then it becomes more possible to distinguish a
problematic multiple majority from an insignificant one, one where it
would be better to just go ahead and complete the election.
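To make the multiple-majority point concrete, here is a minimal
Python sketch of a three-rank Bucklin count with a check for two
candidates crossing the majority threshold in the same round. The
ballots and numbers are made up for illustration; this is not any
official implementation.

from collections import Counter

def bucklin_tally(ballots, candidates):
    """Add in ranks one at a time until someone reaches a majority."""
    majority = len(ballots) // 2 + 1
    totals = Counter({c: 0 for c in candidates})
    for rank in (1, 2, 3):
        for ballot in ballots:
            choice = ballot.get(rank)
            if choice is not None:
                totals[choice] += 1
        leaders = [c for c in candidates if totals[c] >= majority]
        if leaders:
            # More than one name here is the "multiple majority" case;
            # some rule (or a runoff) must then decide between them,
            # e.g. by falling back to the first-rank totals.
            return rank, dict(totals), leaders
    return None, dict(totals), []   # no majority: runoff, or accept plurality

# Hypothetical profile: 100 voters, three candidates.
ballots = ([{1: "A", 2: "C"}] * 40 +
           [{1: "B", 2: "C"}] * 35 +
           [{1: "C", 2: "A"}] * 25)
print(bucklin_tally(ballots, ["A", "B", "C"]))   # round 2: both A and C pass 51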
Consider this with ordinary Bucklin, Robert: you have a strong
preference for your favorite so you are concerned about adding that
second preference vote.
A. It's runoff Bucklin. Fine. Don't add any more second preferences,
leave that for a runoff. Or follow strategy B for any candidates
where you would prefer to finish the election rather than seeing a runoff.
B. It's deterministic. So skip the second rank and place your vote in
third rank. Maximized LNH protection, while still, in the end,
allowing your strategic approval(s).
It all depends on your preference strength. With Range/Bucklin, you
could directly express the strength, and the method basically does
the negotiating for you. My sense is that with a proper understanding
of strategy, not difficult, you would vote effectively, and it would
be a sincere vote. Sincerity expression is encouraged if it's
runoff/Bucklin, and the effect of that is a somewhat increased
likelihood of runoffs. But we must say that those runoffs happen if
the electorate effectively prefers them. They happen, if the
threshold is a majority, if a majority of the electorate withheld the
compromise votes necessary to find a majority result.
The best details depend on context, there isn't necessarily a
universal answer. When is "good enough" good enough?
> with
>Approval, they still have to strategize "do I vote for both or do I
>vote just for my favorite?" actually (Terry knows about this), in
>Vermont, the State Senate races are sorta weird.
Sure. And with a majority requirement, there is a sensible strategy
behind this, easy to understand and state. In Vermont, with the
gubernatorial election, the strategy question is "Would I prefer to
add an additional approval, or to see this election go to the
legislature for one of the top three to be picked?"
And the answer to that question depends on preference strength! As
well as our trust in the legislature. Trust the legislature a lot,
vote your favorite and you can leave it at that. Your vote is
effectively a vote against all other candidates.
Don't trust the legislature? Then add an additional approval if you'd
prefer that result to letting the legislature choose.
It's much simpler than you think, Robert.
>unlike the
>Representatives that have legislative districts drawn (and have a
>single winner for each district), the State Senate candidates run at
>large for the whole county. being that Burlington is the largest
>city in the state, our county is also the largest, i think. we have
>6 state Senators and the rules are we can vote for up to 6 on a
>single ballot, and the 6 highest vote getters are elected.
Cumulative voting. Not bad, if the voters are organized. Actually
fair if they are.
> usually a
>party puts out 6 candidates and one might think that they could just
>plug the 6 of their party unless they like to cross over for some
>particular candidate they like. but if there are 4 or 5 candidates
>that are "okay" with me, but one or two candidates that i
>particularly like (and i might consider an underdog), i will end up
>"bullet voting" for just that one or two candidates because i want
>them to win badly enough that i don't want to risk having another
>person of the same party displace them in the top 6.
And then the party vote is split. Rather, it would be better for the
party to determine how many candidates it could be likely to
collectively elect, and only put up that number of candidates. It
would use utility-maximizing methods to pick its own candidates, so
that most members would really be willing to vote for them with all
their votes. It would suggest exact voting patterns to its members,
and from experience, it would know how the members respond, so it
could maximize effectiveness.
There are, of course, much better election methods for choosing the
six. Cumulative voting can produce a rough proportional
representation, but only with organized voters, otherwise it's quite
hit-or-miss. Asset Voting would be perfect, but, hey, that's a truly
radical reform, in spite of allowing every voter to vote sincerely,
with a minimum number of votes being wasted, and other benefits.
So there are good PR methods. Reweighted Range Voting. Proportional
Approval Voting. And, of course, STV, which ain't bad for PR, but it
tends to require much more knowledge on the part of voters, or the
use of voter information cards, which gives much more power to party
leaders and probably subtracts power from independent voters.
> but i think a
>ranked ballot would do well for that and maybe STV is as good as we
>can do in a multi-winner case. i dunno.
Well, it's known. We can do better. But STV is good, *in some ways.*
It breaks down as fewer seats are being elected; the last seat is
effectively filled by IRV, but the first seats are almost certainly optimal.
Asset was proposed as a tweak on STV that would result in far fewer
wasted votes, so an STV ballot and counting could be used, but my
judgement is that Asset is so powerful a technique that a simple
Approval ballot would be fine, which, you will note, fully empowers
voters who simply vote for their favorite and leave it at that. I
would only even allow additional votes because of some very slight
theoretical improvement in some situations: it can simplify
certain voter decisions, and counting "overvotes" reduces ballot
spoilage, with likely higher effective expression of voter intent.
> because i can see that it's
>possible (but it doesn't always happen) for Condorcet to order
>candidates from the top (the Condorcet winner) to the bottom (the
>Condorcet loser), maybe picking the top 6 using Condorcet ordering
>would be best, but i dunno.
The Condorcet criterion doesn't really apply to multiwinner, though I
think there is some way to extend it. Always remember an important
thing: if we want fair representation, if a quota of votes elect a
winner, those votes are fully represented and should be considered as
spent. So if you are going to use the Condorcet criterion, you'd need
to look in sequence: devalue the ballots which have been used to
elect a Condorcet winner, then look at lower preferences, which are then
fractionally counted. I'd say you should look at how multiwinner STV
works, in a decent application.
> my political licks have just been about
>IRV vs. Condorcet vs. Plurality (or the old 40%+ rule) in the
>single- winner case. i'll fight the multi-winner battle some other time, and
>i just don't know yet what side i'm on. i might become STV. sure,
>it's elegant to have the same theory for both the single-winner and
>multi-winner case (and IRV is STV for single-winner), but i think
>that IRV has enough problems that i just cannot support it over
>Condorcet, if given the choice.
I'm suggesting that if you look at Bucklin, including looking at the
history, you would realize that it's truly a powerful method with
much more implementation history in the U.S. than IRV. It is far
simpler to count than IRV or even Condorcet methods.
I don't think it's been adequately studied in the simulations, which
tend to be oversimplified in how they set up voter strategy.
And if there is a majority requirement, I'm pretty sure that hasn't
been studied yet. Bucklin, however, works; we know that because it
worked, and the only thing it didn't do was find majorities when too
many voters didn't add additional preferences, which is an *intrinsic
problem*, not avoidable unless voters are coerced, which is
unconstitutional. It would still outperform Plurality and IRV; it
resolves the spoiler effect, is probably not subject, in itself, to
Center Squeeze, etc.
And it's really easy to vote, strategy isn't difficult.
Bucklin is really instant runoff Approval and can truly be voted that way.
With IRV, you add an additional lower ranked vote if you want your
vote to be counted if your candidate is eliminated.
With Bucklin methods, you add an additional vote if you want your
additional votes (additional approvals) to be counted if your higher
preferences are not going to win with a majority. The difference
between "eliminated" and "not going to win by a majority without more
votes from other voters" is important.
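A compact worked sketch of that difference, with a hypothetical
profile (40 ballots A>B, 35 bullet votes for B, 25 ballots C>B; the
majority threshold is 51 of 100):

majority = 51

# Round 1 is identical under both methods: A 40, B 35, C 25 -- no majority.

# Bucklin round 2 adds in *every* ballot's second choice, including the
# 40 A-voters', because their higher preference has not reached a majority.
bucklin_round2 = {"A": 40, "B": 35 + 40 + 25, "C": 25}

# IRV instead eliminates C and reads only the C-voters' next choices; the
# 40 A>B second preferences are never examined, since A is never eliminated.
irv_round2 = {"A": 40, "B": 35 + 25}

print("Bucklin round 2:", bucklin_round2)
print("IRV round 2:    ", irv_round2)

Both counts elect B here, but by way of different votes being read,
which is the distinction that matters above.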
Sure, when you allow your additional approvals to be expressed, you
then create a possibility that your lower ranked vote will allow your
lower preference to beat your favorite. You have not, with Bucklin,
voted for the lower preference over your first preference and you
have in fact abstained from that particular pairwise election (after
it's clear that nobody is going to win with a majority). But what you
and your candidate gain is that other voters may push your favorite
over the victory margin. You lose a little, and you might gain a lot.
As to losing a little, suppose it was a situation where you would, in
fact, be losing a lot. I.e., if it happens that your second rank vote
causes your second favorite to win, would you regret it? *How much*
would you regret it? Do remember that if your vote has that effect,
the regret has to be considered at half value, because we are talking
about the difference between a tie and a decision. Perhaps I should
do the game theory analysis....
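Here is a small worked version of that "half value" point, in Python,
with utilities I am making up for illustration. Your added
second-rank vote only changes the outcome on the knife edge, where it
turns what would have been a tie between your favorite and your
second choice into an outright win for the second choice.

u_favorite, u_second = 1.0, 0.7    # hypothetical utilities for this voter

# Without your added vote: an exact tie, decided by lot.
expected_without = 0.5 * u_favorite + 0.5 * u_second     # 0.85
# With your added vote: your second choice wins outright.
expected_with = u_second                                 # 0.70

print("regret in the pivotal case:", expected_without - expected_with)   # 0.15
print("full preference gap:       ", u_favorite - u_second)              # 0.30

The regret in the pivotal case is half the full preference gap, which
is why I say it should be counted at half value; the loss-to-tie case
works out the same way by symmetry.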
If, considering all this, you believe you would regret your second
rank vote, that your preference strength is too strong to allow you
to feel comfortable with this, then not adding the lower ranked vote
is quite an appropriate strategy. With a majority required, it's quite safe.
With Range/Bucklin, which I don't see as a politically practical
first step, you'd essentially control your approvals using
specification of preference strength. That's what Range/Bucklin does,
and the maximally effective strategy there is quite likely the most
accurate expression of your actual preference strengths among the
major candidates. But more study is needed, for sure. Nobody has
studied this method, to my knowledge.
Then, under some circumstances, the Range information could be used
to avoid runoffs, or to resolve multiple majorities, which, if you
think about it, is the situation where the voting problem you mention
actually becomes important. (If a majority is required.)
>Abd ul, my position has always been consistent in the last 10
>months. i fully support the *goals* of IRV because i think they are
>the same *goals* that we have with Condorcet or any of the other
>ranked-ballot methods. those goals were, for me, boiled down to 4
>salient principles that i outlined in my paper that i have plugged
>here at least a few times. those principles are (i'm repeating 3,
>but hey, bits are cheap):
Thanks. Specifying the basis for your opinions is very helpful.
>___________________________
>
>1. If a majority (not just a mere plurality) of voters agree that
>candidate A is
>better than candidate B, then candidate B should not be elected.
And I've shown that this is not a basic principle; there are
situations where it yields an obviously suboptimal result, even a
seriously suboptimal one. It sounds good because we don't ordinarily
have the information in elections to notice the problem with it; we
would if we used Range methods or did good Range polling.
>2. The relative merit of candidates A and B is not affected by the
>presence of a
>third candidate C. If a majority (not just a mere plurality) of
>voters agree that
>candidate A is better than B, whether candidate C enters the race or
>not,
>indeed whether candidate C is better (in the minds of voters) than
>either
>candidates A or B (or both or neither), it does not reverse the
>preference of
>candidate A over candidate B. If that relative preference of
>candidate is not
>affected among voters, then the relative outcome of the election
>should not
>be affected (candidate B winning over candidate A). In the converse,
>this
>means that by removing any loser from the race and from all ballots,
>that
>this should not alter who the winner is.
This is a wordy version of IIA, and the vulnerability of a system to
IIA violation is a matter of great controversy. Range isn't
vulnerable to IIA if we just remove a candidate from the votes, nor
is Approval, for example. But voters might choose a different
strategy, making it vulnerable in that sense. Basically if the
removal of a candidate causes *any* change in how voters make their
decisions, it will cause an effect from the removal of what is called
an irrelevant candidate. In practical reality, we must recognize that
no method is completely invulnerable to IIA, because of the effects
of voter perception. Let me provide the reductio ad absurdum:
The voters all very strongly prefer A, except for a few, so even if
the voters have the option of ranking or rating other preferences,
they mostly don't bother. But a few voters for whatever reason don't
like A, or like A little enough that they add additional approvals.
(Let's say the method is approval). Easily, if A is eliminated, the A
voters will take a stronger look at the remaining alternatives, and
how they will pick from among them cannot be predicted as a general
rule. So it can *easily* happen that the elimination of A can shift
the result from B to C, for example.
IIA is considered by many experts to be the weakest of the Arrovian
criteria. But I'll agree that a good method won't be vulnerable to
IIA if all you do is remove the eliminated candidate from the ballots
as cast, and don't hold a separate election. And Range, for example,
satisfies that.
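A minimal sketch of that narrow sense of IIA, in Python, with
hypothetical ballots: holding the cast Range ballots fixed, deleting
a losing candidate's scores cannot change the winner. It says
nothing, of course, about voters re-strategizing in a fresh election,
which is the vulnerability I described above.

def range_winner(ballots):
    totals = {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
    return max(totals, key=totals.get), totals

ballots = [
    {"A": 100, "B": 60, "C": 0},
    {"A": 90,  "B": 70, "C": 10},
    {"A": 0,   "B": 80, "C": 100},
]
print("with C on the ballot:", range_winner(ballots))

# Remove the losing candidate C from every ballot, scores left as cast.
reduced = [{c: s for c, s in b.items() if c != "C"} for b in ballots]
print("with C removed:      ", range_winner(reduced))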
>3. Voters should not be called upon to do "strategic voting". Voters
>should feel
>free to simply vote their conscience and vote for the candidates they
>like
>best, without worrying about whom that they think is most electable.
>Voters
>should be able to vote for the candidate of their choosing (e.g.
>Perot in 1992
>or Nader in 2000) without risk of contributing to the election of the
>candidate
>they least prefer (perhaps Clinton in 1992 or Bush in 2000). They should
>not have to sacrifice their vote for their ideal choice because they are
>concerned about "wasting" their vote and helping elect the candidate
>they
>dislike the most. As an ancillary principle, a candidate should not
>have to
>worry about electing his/her least desirable opponent by choosing to run
>against another opponent that may be more desirable.
Now, consider this from a basic perspective. Voters don't have votes
in their heads as some kind of immediate "sincere" vote; voters don't
even have relative opinions about candidates, except that usually they
will have a favorite. What you want, I'd think, is a system that will
allow a sincere expression of a first preference, or maybe even a
full expression of preferences, all the way down. There are such
systems. But this is the problem: say, with Range, you accurately
express your relevant preferences, normalizing your vote so that it
has maximum impact. This means that you give at least one candidate
the maximum vote (100%) and at least one the minimum vote
(0%). Suppose, however, there is an irrelevant candidate! Somehow,
Satan got on the ballot. Satan doesn't have a prayer of winning. For
sure -- he's Satan!
So, do you vote Satan 0% and everyone else 99% or 100% by comparison?
Only if you think there is a good chance of Satan winning! There is
nothing really wrong with your sincere vote, but it doesn't allow you
much voting power between the real candidates. Okay, so we use a
Condorcet system. But this system can't express preference strength,
so it can really screw up.
But people are quite accustomed to this problem. They don't alter
their votes in Plurality through the presence of irrelevant
alternatives, not in the meaningful sense. And they will vote in
Range and Approval quite like that. They will disapprove of Satan and
vote in the rest of the election quite as if Satan weren't on the
ballot. They are exerting full voting strength against Satan, and in
a runoff system, they couldn't do more to prevent Satan from being
elected. The issue is how they vote for other candidates. Quite by
the same argument, they won't waste positive voting strength in
irrelevant pairwise elections. So, with Range, they might prefer
Nader to Gore, but a vote of Nader 100%, Gore 50%, even if that is
"sincere," would have wasted half their vote in 2000. So they would be
more likely to vote, say, Nader 100%, Gore 99%. Because Gore is a
*relevant* candidate, a frontrunner, and the real race is likely to
be between Gore and Bush.
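A minimal Python sketch of the normalization point, with utilities I
am inventing (Satan at a deliberately extreme negative value).
Normalizing sincerely over the whole field leaves almost no
resolution among the real candidates; normalizing over the candidates
the voter treats as relevant restores it. This is my own
illustration, not a prescribed ballot instruction.

def normalize(utilities, over):
    """Scale the chosen candidates onto 0..100; clamp everyone else."""
    lo = min(utilities[c] for c in over)
    hi = max(utilities[c] for c in over)
    span = (hi - lo) or 1
    return {c: round(100 * max(0.0, min(1.0, (u - lo) / span)))
            for c, u in utilities.items()}

utilities = {"Nader": 10.0, "Gore": 9.0, "Bush": 2.0, "Satan": -1000.0}

print(normalize(utilities, over=utilities))                    # Satan 0, rest 99-100
print(normalize(utilities, over=["Nader", "Gore", "Bush"]))    # full spread in the real race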
Bucklin allows full strength voting in the relevant race while still
allowing clear preference expression. Range/Bucklin would work in a
similar way. You wouldn't waste any of your vote by adding a lower
relevant preference and a higher no-hope preference.
>4. Election policy that decreases convenience for voters will
>decrease voter
>participation. Having to vote once for your preferred candidate, and
>then
>being called on to return to the polls at a later date and vote again
>for your
>preferred candidate (if he/she makes it to the run-off) is decidedly
>less
>convenient and we must expect that significantly fewer voters will
>show up
>for the run-off. Or, if your most-preferred candidate did not make it
>to the
>run-off, the motivation to return to the polls to vote for a somewhat
>less
>preferred candidate (or to vote against a much disliked candidate) is
>reduced and fewer voters show up. Electing candidates with decreased
>legitimate voter participation cannot be considered as democratic or as
>indicative of the will of the people, as electing candidates with
>higher voter
>participation.
Is "fewer voters" a bad thing. Doesn't it depend on what kind of
sample it is? It is standard practice in democracy that decisions are
made by the members of an organization or society who show up to
vote. Why? Repeated balloting in a standard direct democracy can be a
pain in the neck, very inconvenient. But that very inconvenience has
a function.
Here, though, I'm addressing the idea that runoff elections are
harmful. Obviously they have a cost and obviously either they or the
primary are likely inconvenient. Is that a good thing or a bad thing?
I've mentioned that some here have repeated arguments that are
essentially the familiar FairVote arguments used to sell a
fish bicycle to Eskimos. (How's that for a hybrid, eh?)
Very obviously as well, many jurisdictions were willing to put up
with the cost and inconvenience in order to find majorities, and
there really isn't any way to guarantee a majority, there are merely
ways to make a majority more likely. (The guaranteed majority by
preventing any more than two candidates in a runoff election, no
write-ins allowed, is a faux majority, essentially coerced. A real
majority is a voluntary approval by a majority of voters of an
outcome, when they were free to express themselves otherwise.)
But here is the new information that should be digested. It seems to
be new because I haven't come across it elsewhere, even though I
consider it obvious. It's like a lot of the stuff that I've come up
with: it takes a few years of mention before people start saying ...
hmmm ... maybe there is something to this after all.
Robert doesn't like the consideration of preference strength; he
really hasn't said why. But the only reasonably objective standard
that anyone has come up with for judging voting system quality has
been to postulate absolute voter utilities and predict behavior from
them as simulated voters encounter a voting system in a series of
simulated elections. Voting system criteria address the limits of a
system's behavior but not its normal behavior. Lots of assertions and
claims are made about strategic behavior that aren't based on solid
ground, only on speculation and possibility.
In any case, if we postulate absolute utilities, never mind that
there is no clear way to extract them from voters, and then we make
reasonable predictions about how voters will behave, and we vary this
behavior according to, in one approach, varying degrees of "sincere"
and "strategic" voting behavior, we can see how a voting system
behaves under varying conditions, across many elections. In order to
compare voting systems, we need to be able to compare outcomes, and
when we take the "criterion satisfaction" approach, except for a
criterion that doesn't have an accepted name yet, we really are
reduced to saying "My satisfied criteria are more important than
yours! No, mine are more important!"
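To be concrete about what such a simulation looks like, here is a toy
Python sketch: postulate random voter utilities, derive ballots by
simple rules, and compare methods by summed utility. The voter model
and the strategy rules here are my own minimal assumptions, not any
published simulator, so take the numbers as illustrative only.

import random

def simulate(n_voters=1001, candidates=("A", "B", "C"), trials=200):
    results = {"plurality": 0.0, "approval": 0.0, "range": 0.0, "ideal": 0.0}
    for _ in range(trials):
        utils = [{c: random.random() for c in candidates} for _ in range(n_voters)]

        def social(candidate):
            return sum(u[candidate] for u in utils)

        # Plurality: everyone votes their favorite.
        plur = {c: 0 for c in candidates}
        for u in utils:
            plur[max(u, key=u.get)] += 1

        # Approval: approve every candidate above your own mean utility.
        appr = {c: 0 for c in candidates}
        for u in utils:
            mean = sum(u.values()) / len(u)
            for c in candidates:
                if u[c] > mean:
                    appr[c] += 1

        # Range: sincere normalized scores.
        rng = {c: 0.0 for c in candidates}
        for u in utils:
            lo, hi = min(u.values()), max(u.values())
            span = (hi - lo) or 1.0
            for c in candidates:
                rng[c] += (u[c] - lo) / span

        for name, tallies in (("plurality", plur), ("approval", appr), ("range", rng)):
            results[name] += social(max(tallies, key=tallies.get))
        results["ideal"] += social(max(candidates, key=social))

    return {k: round(v / trials, 1) for k, v in results.items()}

print(simulate())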
In order to consider voting system quality we have to back up and
consider the purpose of elections. If we define different purposes,
we may come up with different methods that most satisfy the purpose we choose!
For example, consider a possible application for voting. A society
exists, say, the members of which can be classified into factions
defined by a strong leader. Without some means of making decisions
that would reflect what will happen if it boils down to which faction
has the strongest army, the factions will in fact resort to this
approach, and the faction with the strongest army will prevail, in
general. So what if a vote is taken where every citizen can vote, and
the faction with the most members can be expected, on average, to be
able to field the strongest army, other things being equal? Presto:
Plurality voting. Add coalitions of factions that decide to
cooperate, and those coalitions make decisions internally by the same
method, generally raising the quality and stability of the plurality
votes. That leads to a two-party system, in which, of course,
nothing more complicated than Plurality is necessary.
But there are better models and better goals. Long ago, we came to
the concept of a majority faction, and we decided to respect the will
of the majority, when it was expressed. We also allowed the subset of
people who actually voted to represent the rest of society. Note
something: people who don't vote, on average, don't care about the
outcome as much as people who vote. Low voter turnout is, more than
anything, a rational response by voters to a situation where they
don't believe that their vote will make an important difference. That
view is commonly derided, but it's probably more true in many
situations than not. This disinterest in the outcome may be a sign of
serious discontent coupled with a belief in the impossibility of
changing the system, but that has its greatest impact in what
candidates arise and how the public supports them. Or doesn't. But,
alternatively, and this is a reality in many places, and quite clear,
it reflects a contentment with the decisions made by the more highly
informed and interested voters who do actually vote. As long as the
group of interested voters doesn't stray too far from the interests
of the entire electorate, this situation can continue. The voters
aren't a representative sample of the electorate, exactly, because
they are, in general, more informed than the disinterested
non-voters. They are not necessarily
more informed than some of those who don't vote because they dislike
all the options.
If voting is totally convenient and quick, no pain at all, will this
improve results? Quite likely not! My sense is that, absent certain
precautions, the results would get worse. Large numbers of uninformed
and disinterested voters are easily manipulated by media masters; all
that has to be done is to understand voter psychology and press the
right buttons with enough skill. It is much more difficult to pull
the wool over the eyes of people who put in much more time and have
more interest.
The parliamentarians who edit Robert's Rules of Order are quite aware
of the inconvenience of repeated balloting. Yet they make a majority
result an absolute requirement, and recommend that it *never* be
waived. Not even for mail elections. (FairVote has issued some
propaganda about this that could have initially been excused as a
simple error. Yet, when the error was pointed out, Rob Richie and
others took one of two approaches: in some cases they modified their
claims so that the claims weren't exactly false, but remained just as
misleading; in other cases they simply asserted I was being
deceptive, implying that what I showed as the plain meaning of the
words in the manual was preposterous, because "so many organizations
don't do that.") But many organizations abandon democratic process
because the leaders decide that democracy is too much work, too
expensive, and, besides, we know better than the general membership.
The situation where the active voters decide that they can make
decisions better than the general membership, which can be true
at a point, and is reflected by the lack of participation of less
interested members, turns into something else when the refraining
from participation becomes a fixed barrier, or something maintained
by differential obstacles. For example, I've seen ostensibly
democratic organizations where any member can vote at the annual
meeting, but ... the annual meeting is held in Podunk, Idaho, where the
founder and the members of the board live. And, of course, proxy
voting is not allowed. That's differential access, enforcing and
maintaining oligarchical control. And it's self-reinforcing, because
the oligarchy can see itself as being those who really care. Sure. But.
Where access to voting is equitable, differential turnout functions
as a kind of range voting, by excluding votes based on low preference
strength, while including votes where the voter cares about the
outcome. My sense is that runoffs improve the outcome in about one
out of three runoff elections, where top-two runoff, with all of its
flaws, is used. (This is with nonpartisan elections; I have no
matching information about partisan elections.) In the other two out
of three, accepting the plurality winner would give the same result
as the runoff.
Now, a good voting system is likely to reduce the need for runoffs.
But there is no voting system that can match the power of
deliberative election, in terms of the intelligence of the result.
The trade-off is with efficiency, for deliberative methods have been
considered impossible as the scale gets large. And indeed they are,
if more advanced deliberative methods are not introduced.
Bottom line: we need to understand that repeated balloting, vote for
one, with a majority required is such a powerful method that
improvements can only be in the direction of reducing the number of
ballots necessary to gain a majority. Then, we might come to a point
where the improvement from holding an indefinite series of runoffs is
negligible, assuming we use good methods for primary and runoff, say.
Some Range advocates believe that Range is quite good enough for a
deterministic single-ballot system. I disagree, and for reasons that
I believe can be well-explained. But I don't have quantitative data:
the basic difficulty with a single ballot, irreducible and unavoidable,
is that voters have less focus in a single-ballot election and are not as likely
to understand the issues and the necessary compromises if they don't
have the information from the first poll. And this is precisely what
Robert's Rules of Order points out with respect to Preferential
Voting in general (not just IRV). With IRV, it also points out center
squeeze, which, of course, afflicts ordinary top-two runoff as well.
However, using an advanced method with top-two runoff, both in
primary and runoff, would, in my opinion, so well simulate the process
of repeated election with a majority required that (1) fewer runoffs would
be needed, maybe many fewer, and (2) the improvement in quality of
result from extending the series would be negligible, and would be
outweighed by the damage from delay in resolution.
But we still need a method of assessing the quality of an outcome.
And my contention is that there is really only one way to do that:
test it with sets of postulated internal utilities, on a presumed
absolute scale, and, with two caveats, the best result is the one
which maximizes overall utility. "Absolute utilities" means utilities
on a scale that is commensurable between voters and is summable
across them linearly.
So if I encounter some election scenario and it is alleged that
system A performs better than system B, given such and such a set of
preferences, I always want to know what pattern of absolute utilities
underlies the voting pattern.
When we are talking about motivated voters, we may be able to use
normalized utilities. This is equivalent, in a sense, to one-person,
one-vote; it assumes that we will attempt to serve voters equally.
Maximum satisfaction for one voter is equivalent to maximum
satisfaction for another, and the same for the other side of the
scale, maximum dissatisfaction. However, that assumption clearly
isn't accurate. So for a deeper understanding that includes
differential turnout, we need to know what motivates voters to vote,
and the normal assumption I follow is that absolute preference
strength motivates it. If voters with low absolute preference
strength were to "sincerely vote," it could be argued, they might
vote like this in Range 100:
A 51, B 50, C 49. Weak votes. If they did so, the overall utility
of the result is enhanced over the same voter normalizing and voting
A 100, B 50, C 0, even though both scenarios are quite accurately
expressed by A>B>C and each of those ranked preferences is of equal
strength. (If everyone was like this with respect to their
preferences, Borda would work fine! -- but we still don't understand
the whole picture until we factor in turnout.)
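A small worked illustration of that, in Python, with numbers I am
choosing to make the point visible: the same ranking A>B>C with equal
gaps, voted weakly versus normalized, moves the Range result away
from the candidate who maximizes the summed absolute utility.

weak_voters = 2                                  # two voters with mild preferences
strong_sincere = {"A": 0, "B": 10, "C": 100}     # one voter who cares a great deal

weak_sincere = {"A": 51, "B": 50, "C": 49}       # un-normalized, as described above
weak_normalized = {"A": 100, "B": 50, "C": 0}    # same ranking, normalized

def totals(weak_ballot):
    return {c: weak_voters * weak_ballot[c] + strong_sincere[c]
            for c in ("A", "B", "C")}

print("weak votes left as-is:", totals(weak_sincere))     # C has the highest sum
print("weak votes normalized:", totals(weak_normalized))  # A wins instead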
There are voting system proposals that test or measure absolute
preference strength, most notably a Clarke tax. But inconvenience of
voting is, in fact, a kind of test of sincere preference strength.
And this isn't of little consequence. If not for it, my sense is that
top-two runoff wouldn't have so many comeback elections!
>so Abd ul, Plurality can and has violated all 4 of those principles
>(in a multi-candidate context) and we've known that for a long time.
>for the average politically-savvy voter in a multi-party context
>(which i consider myself one of), those were the main reasons we
>supported IRV in the first place (over Plurality).
Except that, of course, Plurality and a two-party system are married.
Rather, in a two-party system, the accommodation for divergent ideas
is within the existing parties. The party system functions as part of
the voting system, and by creating separate parties that don't
cooperate with one of the majors (as a faction within it), one is
actually bucking the political system and can cause it to break down
and produce an extreme result.
A great deal could be written about how a healthy two-party system
works; the parties appropriate the center and overlap across it. If
they were strictly divided into left of center and right of center,
the system would be dangerous and, indeed, we have seen what happens
when a strong faction within a party, more extreme than the center of
the party rather than more moderate, is able to dominate for a time.
It produces stronger polarization and therefore a greater swing
between election results, about which my general comment is that it's
highly inefficient and can cause crashes, like any oversteering.
Notice this characteristic of Robert's approach: multiparty context.
But where has IRV been most often implemented? Where is top-two
runoff most often used? With nonpartisan elections. And, please get
this: with nonpartisan elections, IRV is astonishingly faithful to
plurality results. If we are going to understand voting systems, we
need to understand this fact. And if we are going to design and
recommend systems for specific applications, we'd better understand
that the optimal system may vary with the context.
If people just vote for their favorite, in a nonpartisan election, we
can predict that (unless it's very close) IRV will produce the same
result as Plurality. Now, if you think that IRV is a decent method,
and if you research and discover that this assertion is true about
nonpartisan elections, surely the conclusion is inescapable:
plurality is a decent method! Where does it break down? With partisan
elections!
When the systems in use in the U.S. were first designed, the
political party system had not arisen. All elections were nonpartisan!
So plurality isn't as stupid as we might think; but it was unable to
handle the rise of political parties, just as those parties corrupted
the electoral college which was a beautiful conception that turned
into something quite different because the Constitutional Convention
punted and did not address the election of the electors, but left it
to the state legislatures, which easily became tools of the political
party that happened to dominate there.
Well, done is done. What can we do? I have some suggestions:
1. Preserve top-two runoff where it exists and improve it, don't dump
it. Reduce the need for runoffs by using better methods of finding a
majority; possibly use algorithms shown to confidently predict runoff
results from voting patterns in the primary, which could again reduce
runoffs. (Is a majority requirement too tight? The simple-minded 40%
threshold neglects the possibility of, say, 40%, 39%, 21%. That's not
predictable, generally. It might be predictable from ballot data in
Range/Bucklin, because better preference strength information could
become available. But the proof is in the pudding: start collecting
the data, start with a majority requirement, and accumulate
experience to see if it's possible to lower the threshold. What's
really important is the lead; 40%, 21%, 20%, 19% is probably quite
predictable from plurality ballots, more so with better data.)
2. If you can't do anything else, at least stop tossing overvotes.
Count All the Votes. This, all by itself, turns almost any voting
system into a better one, and, at the very least, it causes fewer
spoiled ballots. If you vote for more than one, well, you have voted
for more than one. If it was an error, it should be counted just as
your vote is counted if you vote for the wrong candidate! But this
*allows* additional approvals, which turns Plurality into Approval
Voting, which is one of the top contenders among experts for best
voting system! Indeed, with repeated balloting, that might be true.
(Range/Bucklin amounts to the same thing with a more thorough
disclosure on the ballot, but the complexity would probably be
overkill. The really cool thing about repeated balloting with
approval is that you can accumulate personal understanding of the
necessary compromises through the series of ballots. That happens
with repeated plurality voting too, but approval simply makes it more
efficient, requiring fewer ballots in order to find a majority result.)
3. Don't even think about using IRV for nonpartisan elections. It's
an expensive waste of time and money. It's quite likely that most of
the votes cast aren't ever counted (except through systems that
report ballot images). Think about it: normally there are two major
candidates that lead the others, and those candidates aren't
eliminated until one of them finally is in the last round. Together,
the top two are normally a majority of voters. Now, have those voters
added extra ranked votes? If they did, that justifies the comment;
and, realize, any lower preference votes for candidates eliminated
while a higher preference on the same ballot was still viable are
also never counted. This is why some people think that IRV
violates equal voting principles: two people may each cast a second
preference vote, but only one of those votes counts; the other
doesn't, because the candidate was eliminated first. Not all votes are equal in IRV.
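Here is a minimal Python sketch of the point in suggestion 3, with
hypothetical ballots and simplified rules (no tie handling). An IRV
count only ever reads a ranking when every candidate ranked above it
has been eliminated, so most of the lower preferences on the
frontrunners' ballots are never examined.

from collections import Counter

def irv(ballots):
    eliminated = set()
    read = [set() for _ in ballots]      # ballot positions ever actually counted
    while True:
        tallies = Counter()
        for i, ballot in enumerate(ballots):
            for pos, cand in enumerate(ballot):
                if cand not in eliminated:
                    tallies[cand] += 1
                    read[i].add(pos)
                    break
        leader, top = tallies.most_common(1)[0]
        if top * 2 > sum(tallies.values()):
            marks = sum(len(b) for b in ballots)
            examined = sum(len(s) for s in read)
            return leader, examined, marks
        eliminated.add(min(tallies, key=tallies.get))

# 100 hypothetical ballots: two frontrunners A and B, one minor candidate C.
ballots = [["A", "B"]] * 45 + [["B", "A"]] * 40 + [["C", "A"]] * 15
print(irv(ballots))    # A wins; only 115 of 200 rankings were ever read

The 85 second-choice marks on the A and B ballots never enter the
count at all, which is exactly the inequality complained about.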
> and i had hoped
>that it would be very rare indeed that IRV would be consistent with
>those goals (as it would if it agreed with Condorcet, as best as i
>can tell). it succeeded in 2006 in Burlington and *failed* in 2009
>(regarding Principles 1, 2, & 3, it succeeds with Principle 4).
>that's 1 for 2. not great odds.
>
>but that is why it might sound like i'm with the IRV proponents,
>because those principles are important to me and *sorta* consistent
>with what IRVers like FairVote.org want.
Basically, you've understood half the problem....
>anyway, it is because IRV so clearly *failed* to accomplish the very
>goals that we had for it when we adopted it is what has motivated me
>to learn a little bit about the whole election theory thing.
What a concept! Here we are trying to get jurisdictions to spend
millions of dollars, but without actually studying all the reasonable
alternatives. Nor with any study of the history and what's known
about voting systems. Without inviting experts to testify and then
making sure that what they have stated is critically examined and understood.
>>But my concern is the deceptive arguments that have been advanced
>>by FairVote, including their arguments against other voting
>>systems, and it's very important to expose these.
>
>me too. but i still value those 4 principles and Plurality does
>worse (than IRV which, evidently, does worse than Condorcet).
Plurality obviously fails in certain contexts. But the method isn't
necessarily to blame for that. Plurality works quite well with
nonpartisan elections, and it's very simple to vote. It can be
improved, but, quite simply, it's not as bad as it has been made out
to be. It fails badly in certain partisan conditions. It also fails
in nonpartisan elections with very many candidates, but, then again,
IRV fails spectacularly there as well. No voting system performs
really well when presented, all in one step, with a hundred candidates.
>[large section of my prior comment I have removed.]
>i really agree with all that above, Abd ul.
Please understand how much work it was to discover all that. Nobody
was challenging FairVote on the Robert's Rules of Order claim. Nobody
noticed that the voter information pamphlet in San Francisco was
deceptive on "the winner will still be required to get a majority of votes."
(That is not a "requirement" on a winner under the IRV method used in
San Francisco; the proposition actually removed the majority
requirement from the election code. The faux IRV "majority" is not a
majority of votes, but a majority of votes for the top two, so it's a
tautology, not some standard that the winner must meet. What was said
is true for the Robert's Rules of Order process, not what FairVote promotes.)
And, now, I'll point out again that nobody, to my knowledge, has
noticed the likely effect of runoff "inconvenience" on election
quality. Except for me and those who read what I've written on this.
What I'm trying to do is encourage people to step back and try to
understand the foundations of voting, as well as to look at how
voting systems actually function. This requires much more study and
thought than a knee-jerk identification of problems. One of the
dangers of reform is that sometimes reformers don't understand what
they are tearing down, and they lose the baby with the bathwater:
they lose the positive function of some process or system that they
don't understand, only seeing the down side.
Communists missed the function of speculators in regulating and
buffering markets, and only saw greed. In theory, a scientific
study of supply and demand combined with the best predictive sciences
could outperform the ad-hoc and chaotic function of speculation, but,
it turns out, it ain't so easy to do that while avoiding corruption
as well as simple incompetence in bureaucracies. A speculator makes a
mistake, he loses his shirt. It's self-regulatory, within limits.
(Hence the best general process so far is to regulate and tax the
profits of speculation in ways that still allow the positive
function, while protecting the public against the possible extreme
negative impact, under some conditions. And I'm not saying that it
can't get even better than that, I'm saying that we should be careful
when monkeying with long-standing traditions and processes. They
might exist for reasons we don't yet understand.)
>>Focus on pure winning makes sense in the heat of a gladiatorial
>>contest, but, note, the gladiators served a very unhealthy system,
>>at the expense of themselves, they were pawns, sacrificed for
>>entertainment, fighting each other to the death, which, rather
>>obviously, wasn't good for gladiators. Sooner or later someone else
>>is faster or stronger or one slips.
>>
>>> > about this, Kathy, i don't believe your veracity at all. since
>>>March of
>>> > 2009 (when Burlington IRV failed to elect the Condorcet winner
>>>and all sorts
>>
>>Kathy may make mistakes, but I'd be astonished to find her lying.
>
>she's pretty partisan (as am i), now i don't even remember what she
>said that i found so hard to believe.
Probably a good idea to completely forget about it, then, because,
while Kathy makes mistakes, so do you, and perhaps what you found
hard to believe, then, was actually true. She's not partisan as to
political party, I think. She's a voting integrity expert and is
really concerned about that, and is only starting to look at voting
systems themselves in terms of performance.
>>>In my own imagination, I **do** support the Condorcet method,
>>>although
>>>I don't know how to solve the Condorcet cycles or how often, if ever,
>>>they might occur.
>>
>>There are Condorcet-compliant methods, and the first-order
>>intuition of most of us who start studying voting systems is that a
>>Condorcet winner should always win the election. Turns out, no. Not
>>necessarily.
>
>i haven't yet (despite Terry trying) been persuaded of that.
Well, Terry is leading you largely down the wrong path, and he didn't
propose a sound method of assessing election quality. Basically, if
you don't understand how the Condorcet winner can be a *lousy*
choice, and clearly so, uncontroversially so, you've missed the whole
topic of social utility as a method of considering election quality.
But it can be done with simple examples. Ask.
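Since it can be done with simple examples, here is one, with
utilities I am simply making up (a three-voter, two-choice case in
the spirit of the pizza election). A is the Condorcet and majority
winner; B has the higher summed utility, and a majority that saw
these numbers might well choose B for the group.

utilities = [
    {"A": 10, "B": 9},    # voter 1: mild preference for A
    {"A": 10, "B": 9},    # voter 2: mild preference for A
    {"A": 0,  "B": 10},   # voter 3: strong preference for B
]

prefer_A = sum(1 for u in utilities if u["A"] > u["B"])
print("prefer A head-to-head:", prefer_A, "of", len(utilities))   # 2 of 3
print("summed utility, A:", sum(u["A"] for u in utilities))       # 20
print("summed utility, B:", sum(u["B"] for u in utilities))       # 28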
> i
>*still* believe that electing the non-CW (assuming there is a CW) is
>fundamentally less democratic (reflective of the will of the
>electorate) than electing the CW. if the CW exists and your
>candidate is not that person, the CW beat that candidate when the
>electorate is asked to choose between the two. it's my Principle #1.
Okay, let me suggest an approach. Should a good election method work
regardless of the number of voters? We don't use voting, generally,
in very small groups, because we don't see it as necessary.
I won't describe the "pizza election," I've done that so many times,
maybe someone else will. But if we consider three people trying to
make a common choice, and they use a Condorcet method, and two of
them favor one choice, that will be the Condorcet winner; but the
situation can be such that, once the preferences and preference
strengths of all the voters are known, *the majority will revise its
position and consider that, for the group, the best choice is
actually a different one.*
Majority rule, based on information, Robert. It's how we really make
decisions in small groups when the relationships are functional. We
can't understand large-scale decision-making if we don't understand
small-scale decision-making!
When the scale becomes large, certain approaches become impractical,
or may require more sophisticated technology or procedures, but that
does not make them undesirable.
And it's possible to come up with many examples where the condorcet
winner is clearly suboptimal. Consider the
placement-of-the-state-capitol election used as an example on
Wikipedia. How can we judge the quality of the result, aside from the
method? I.e., what is the optimal placement of the capitol? Most
methods only consider numbers of voters in each category and their
preferences, but it's pretty obvious that the ideal result, other
things being equal, would be the placement that minimizes average
distance to the capitol. And this isn't necessarily the Condorcet
winner. If the voters were to vote sincere votes based on mileage,
they would actually, with a voting method, pick the optimal placement.
But we can transcend voting systems almost entirely. Consensus is
powerful, and it's much more possible than most people think.
Absolute and complete consensus on a large scale is probably
unreachable, there are practical limits, but that doesn't make
consensus undesirable! And when the goal is consensus, the
differences between voting systems become far less important, but
what becomes of interest is collecting the best information, at a
point in time, as to voter preferences, and it's obvious that this
should include preference strengths, because otherwise we have a
mouse looking like a monster and vice-versa.
>>But the exceptions are probably relatively rare, and, in order to
>>understand it, you need to have a deeper understanding of the
>>science of public choice than is possible with only consideration
>>of pure ranking.
>
>but the problem with considering *more* than pure ranking (Range) is
>that it requires too much information from the voter. and the
>problem with *less* (Approval or FPTP) is that it obtains too little
>information from the voter.
And this is an argument for runoff voting! Use Bucklin in the
primary. The voter can vote Bucklin with bullet votes for the
favorite (safe and easy). Look, you are assuming that the system will
*require* complicated expression. Rather, the point is to *allow*
expression. Each voter still has and will normally exercise one full vote.
This is something that should be realized: if voters just vote
sincere normalized preferences, they can vote Range quite the same as
a ranked ballot. Basically, vote Range as Borda count. And then if it
looks wrong, *nudge it*! That's a sincere Range vote. Now, can you
vote a more powerful vote?
Sure. But that requires strategic consideration. What if you don't do
that? You won't be harmed, actually, though you may not get totally
*maximized* results. If you want to optimize your vote fully, you
have to do some work! TANSTAAFL.
Any voter can vote Range as Approval, and it's a powerful vote. My
own study showed that the expected utility for a voter in a
simplified Range 2 election with three candidates and a midrange
candidate for the voter was the same if the voter voted pure approval
or if the voter voted the midrange. But the idea that voting Range is
difficult is based on some idea that you have to get it exactly
right. You don't.
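For what it's worth, here is a rough Monte Carlo sketch in Python of
that claim, under one arbitrary model of the other voters
(independent, near-tied random totals). It is not the study itself,
just an illustration that rating the exactly-midrange candidate 0, 1,
or 2 makes essentially no difference to this voter's expected utility.

import random

def expected_utility(my_ballot, trials=200_000):
    my_utility = {"A": 1.0, "B": 0.5, "C": 0.0}    # B is exactly midrange for me
    total = 0.0
    for _ in range(trials):
        # Hypothetical model: the rest of the electorate is roughly tied.
        totals = {c: random.gauss(100, 3) + my_ballot[c] for c in my_ballot}
        total += my_utility[max(totals, key=totals.get)]
    return total / trials

for b_score in (0, 1, 2):
    print("rate B at", b_score, "->",
          round(expected_utility({"A": 2, "B": b_score, "C": 0}), 3))

With B exactly at the midpoint, the gain from helping B beat C
cancels the loss from helping B beat A, so the three lines come out
nearly identical (up to sampling noise); move B's utility off the
midpoint and the best rating shifts accordingly.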
I would start in voting Range by ranking the candidates. That's
because ranking is easy. But if I found it difficult to rank two
candidates, I'd lump them together. If I don't want to do strategy at
all, note, I may not maximize my own personal outcome, but I will
maximize that of the overall election result! All I have to do is
sincerely express what the election of each candidate means to me.
I'd put my favorite at the top, and the worst at the bottom. Now,
what do I do with the rest? Some of them I won't know, perhaps. Where
do I put them? I can simply not rank them; different Range methods
handle that differently. I wouldn't try to do this, if there were
many candidates, in the voting booth! I'd do it at home. I'd spread
out the names of the candidates on a range scale, favorite top, worst
bottom, then I'd try to arrange them in a way that made sense in
terms of their value to me. I find it more helpful to think of a
positive-negative scale rather than the zero to positive number scale
normally used with Range, because with the top end, I might think of
how much I'd be willing to pay for the election of a candidate, to
try to get a quantity. And on the negative half, how much I'd pay to
avoid the election of this candidate.
Looking at each election pair, candidates which are more like each
other would be placed together. Candidates which are more unalike (in
terms of my like/dislike of them) I'd put further apart. There is a
formal process for this which would truly maximize the accuracy. But,
really, it's only an election and I'm only casting one vote! How much
trouble is it worth?
Realize that with a majority requirement, a bullet vote for
the favorite is just fine, and the rest is just a method of being
more expressive if one wants to be so, because with a majority
requirement, a bullet vote is a vote against all other candidates.
Now, is that what you really want? If there is another decent
candidate you'd be happy to see elected, do you really want to vote
against that one?
Bucklin makes it really easy. You rank them. The basic rule for
adding lower preference votes in Bucklin with runoff would be that if
you would prefer the election of a candidate to a runoff,
add a vote for that candidate. Where you add that vote, in what rank,
depends on your preference strengths. Bucklin essentially votes for
you in a series of approval elections in which you lower your approval
threshold, which is exactly what happens in repeated balloting. You
make compromises in order to find a majority for some result.
Honestly, Robert, when this sinks in, you'll wonder why you didn't
notice it all along. It is not rocket science. But we aren't used to
thinking in these ways, we've accepted certain conditions as normal
and proper without really looking at the foundations.
I'll quit here; it's late and this is already insanely long. But I
intend to come back with the rest.