[EM] Why I think IRV isn't a serious alternative 2
Abd ul-Rahman Lomax
abd at lomaxdesign.com
Mon Dec 8 18:40:02 PST 2008
At 04:33 PM 12/6/2008, Kevin Venzke wrote:
>So, to try to summarize. You can argue for Range in two ways. On the
>one hand, if voters really do vote similarly to how they behave under
>the simulations, then Range is the ideal method according to utility.
>On the other hand, if Range doesn't work out that way, no one claims
>it will be any worse than Approval, which many people feel is not too
>bad.
Right. In reality, Range will improve results. A little. What we
don't know is how much. But Range, unless perhaps it is afflicted by
poor ballot design, improper suggestions to voters, or bad voter
education, isn't going to make things worse. Range is nothing other
than Approval with fractional votes allowed. Just as Approval is
nothing other than Plurality with voting on more than one candidate
allowed. (I.e., it is quite analogous to common practice with
multiple conflicting initiatives, especially if there is a majority
requirement, where the analogy is practically exact. The vote on each
candidate is Yes/No. If two get a majority, the one with the most
votes wins. In initiative practice, if none get a majority, they all
fail. There is no runoff, but they could be proposed all over again....)
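To make the analogy concrete, here is a minimal sketch in Python of that initiative-style rule; the candidate names and ballots are made up purely for illustration:

# Initiative-style Approval: each ballot is a set of Yes votes. A candidate
# needs a majority of ballots; among those with a majority, most votes wins.
# If none gets a majority, they all fail (no runoff). Hypothetical ballots.

def initiative_style_approval(ballots, candidates):
    n = len(ballots)
    totals = {c: sum(1 for b in ballots if c in b) for c in candidates}
    with_majority = [c for c in candidates if totals[c] > n / 2]
    if not with_majority:
        return None, totals      # they all fail
    return max(with_majority, key=totals.get), totals

ballots = [{"A"}, {"A", "B"}, {"B"}, {"A", "B"}, {"B", "C"}]
print(initiative_style_approval(ballots, ["A", "B", "C"]))
# ('B', {'A': 3, 'B': 4, 'C': 1}) -- A and B both have a majority; B has more votes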
The procedure should be described as "voting," not "rating," just as
preferential ballots I've seen for RCV in the U.S. only call the
votes "choices." They do not use the word preference, and voting
doesn't make a statement about preference. But, nevertheless, expressing preference is what most people will do, almost all the time. My guess is that most voters don't go far down the allowed ranks and that a majority of ballots are truncated, but I don't know; it's not apparent from the results, because most of the truncations would be for frontrunners in first choice. It would, however, be visible in the ballot images available from San Francisco.
>So you can argue Range vs. Approval. For me this is a tough fight for
>Range in the absence of a way to show that voters would/should play along
>with it. On the other hand one can always point out that Range won't be
>any worse. But on Approval's side, you can say that it's displeasing
>for the method's results to disfavor those who play along with it.
>If the method is going to degrade towards Approval, it would be nice if
>the degradation were neutral in effect.
This repeats the misconception -- or mistaken emphasis -- that I've
been struggling against. Range does not "disfavor" those who "play
along with it." Any "harm" to them is small; as I've written, an
almost-ideal result instead of the fully-ideal one. I haven't seen studies on the variability, i.e., on *how much* an impaired result affects the sincere voters. There is no particular reason to expect that sincere voters will be especially concentrated among those who prefer one option, with maximizers concentrated among those preferring another, and that is what it would take to have a larger impact on the outcome; otherwise the maximizers cancel or average each other out.
However, it's very important to recognize that I'm not proposing
Range for immediate use in political elections. In another post
today, I list three immediate priorities and I'll add a little here.
(1) Act to prevent the replacement of Top Two Runoff by Instant Runoff Voting, particularly for nonpartisan elections, but also for partisan ones. This is a step backwards, typically justified by cost concerns -- probably a misrepresentation, or at least exaggerated -- at the cost of better results. One can argue with a straight face that IRV is an improvement over Plurality for partisan elections. But there are better options that are cheaper and that perform better under the contingency that a third party actually benefits from the improvement and rises in popularity, bringing IRV to Center Squeeze, which is generally considered a serious problem.
(2) Suggest Approval or Bucklin or another advanced method for use in Top Two Runoff primaries, thus addressing the major known problem with TTR, which is also Center Squeeze.
(3) Make it known that Approval is a cost-free reform, a drastic
improvement over Plurality. It really ought to be a no-brainer, if
the choice is Plurality or Approval.
(4) Make it known that Bucklin is "instant runoff Approval." It answers the major objection raised to Approval, the inability to express an exclusive preference for a favorite. It was widely used in the U.S. at one time, and it is a bit of a mystery why it disappeared, but there are political forces here that would act against any preferential voting system. It doesn't technically satisfy Later-No-Harm, but its violation of that criterion is mild. It does not suffer from Center Squeeze, because it is an Approval method, and it probably encourages broader use of additional preferences. As a primary method for TTR, it becomes even better. (Bucklin has low counting cost as well, and, once a decision is made to count a rank, all the votes at that rank are counted, so it falls under my Count All the Votes campaign. I prefer that all the votes be counted even if they are not necessary to determine a winner; in the long run this encourages more use of additional preferences, and it is less work to count the votes than the public put into casting them. I consider it rude not to count them. A sketch of the Bucklin count follows this list.)
(5) Let Range's optimality be known, but I would not put much effort
into public implementations yet. Approval is a Range method, and
would educate voters as to how to vote effectively in Range. Then
adding additional flexibility allows more accurate voting, even
though many voters won't use it. The additional cost is small to nothing.
(6) Among voting systems theorists and those similarly interested,
encourage and develop the use of utility analysis and simulations to
compare voting systems, improving the models used. Work to develop
consensus on the application of these methods. Compare simulation
results with real elections where practical, or, working backwards,
infer or constrain models from real voting patterns.
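As promised under item (4), here is a minimal sketch of the Bucklin count in Python, as I understand the classic rule; the ballots are hypothetical, ties are not handled, and what to do on majority failure (my preference: a runoff) is left to the rule-maker.

# Bucklin ("instant runoff Approval"), sketched. Each ballot is a list of
# ranks; each rank is a set of candidates (classic Bucklin allowed more than
# one choice at lower ranks). Ranks are added in until someone has a majority.

def bucklin(ballots, candidates):
    n = len(ballots)
    totals = {c: 0 for c in candidates}
    for r in range(max(len(b) for b in ballots)):
        # Once a rank is opened, every vote at that rank is counted
        # ("Count All the Votes").
        for b in ballots:
            if r < len(b):
                for c in b[r]:
                    totals[c] += 1
        leaders = [c for c in candidates if totals[c] > n / 2]
        if leaders:
            return max(leaders, key=totals.get), totals   # ties not handled here
    return None, totals   # majority failure; my preference would be a runoff

ballots = [
    [{"A"}, {"B"}],
    [{"A"}],             # truncated ballot
    [{"B"}, {"C"}],
    [{"C"}, {"B"}],
    [{"C"}, {"B"}],
]
print(bucklin(ballots, ["A", "B", "C"]))
# ('B', {'A': 2, 'B': 4, 'C': 3}) -- no majority at the first rank; B reaches one at the second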
>Other than that I guess you have to argue Approval vs. other methods.
>That's difficult too.
In the U.S. situation, which is what mostly concerns me, the priorities I state above are paramount. Condorcet methods are certainly respectable, but it is easier to implement Approval or Bucklin or, for that matter, Range. I find that it's easy enough to use Condorcet analysis on a Range ballot, particularly if we are only going to test whether any candidate beats the Range winner in pairwise analysis, using that information to determine the need for a runoff (a sketch of such a check follows the list below). So, I could see this sequence:
1. Top Two Runoff. If it's there, keep it!
2. Bucklin primary. (Probably skip Approval, though Approval first, with Bucklin following, is an option.)
3. Fractional vote options. [This could run into constitutional objections.]
4. Condorcet analysis on ballot data, as an additional trigger for a runoff.
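A rough sketch of item 4, in Python: find the Range winner from score ballots, then check whether any candidate beats that winner pairwise; if so, that signals a runoff. The ballot format and the numbers are hypothetical.

# Condorcet check on Range (score) ballots, used as a runoff trigger.
# A ballot is a dict of candidate -> score, higher meaning better.

def range_winner_with_pairwise_check(ballots, candidates):
    totals = {c: sum(b.get(c, 0) for b in ballots) for c in candidates}
    winner = max(candidates, key=totals.get)
    challengers = []
    for c in candidates:
        if c == winner:
            continue
        prefer_c = sum(1 for b in ballots if b.get(c, 0) > b.get(winner, 0))
        prefer_w = sum(1 for b in ballots if b.get(winner, 0) > b.get(c, 0))
        if prefer_c > prefer_w:
            challengers.append(c)       # c beats the Range winner pairwise
    return winner, challengers, bool(challengers)   # last value: runoff needed?

ballots = [
    {"A": 10, "B": 0, "C": 0},
    {"A": 10, "B": 0, "C": 0},
    {"A": 6, "B": 7, "C": 0},
    {"A": 6, "B": 7, "C": 0},
    {"A": 6, "B": 7, "C": 0},
]
print(range_winner_with_pairwise_check(ballots, ["A", "B", "C"]))
# ('A', ['B'], True): A is the Range winner, but B beats A pairwise -> runoff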
An ideal ballot would be full-on Range, but I've suggested elsewhere
that the votes could be phased in, similar to Bucklin, and a voter
might even be able to prevent the uncovering of a lower preference
until it's clear that the first preference would be eliminated if
bottom elimination were used. This would not be full LNH
compatibility, as it is usually stated, but it would be in substance.
Which is more "harm to the favorite," being eliminated because second
preference votes from other voters, possibly *many*, can't be
considered, or not being eliminated but not winning because the voter
added another preference? Contrary to what is usually said about this, with respect to Approval or Bucklin, when the second preference is counted and added in, the voter's ballot doesn't actually count *against* the favorite; it merely turns into an effective abstention in that pairwise race. I find it weird to call this "harm."
>I wonder if you have ever been curious to wonder what a "strategic" voter
>is, for a rank ballot method.
Nah, curiosity killed the cat.
I've done a fair amount of reading on this, but who remembers
anything? Often not me.
http://condorcet.org/rp/strategy.shtml has this:
>Strategic voting occurs when a voter does not vote his or her true
>preference, in the hopes that a false one will get a better result.
>All the methods that people suggest adopting (including plurality)
>have some element of strategy. Ranked Pairs is no exception.
Note that the definition does not apply to Approval or Range. The
first part implies a "true preference" which is not voted, and that
can happen with these methods, but the second part gives a motive for this (which is often a part of the definition): "in the hopes that a false one will get a better result." It seems that some might want to
change this to "in the hopes that not expressing it will get a better
result." That is actually quite a different statement, in some
senses. The meaning that the writer had in mind is clear: the
expression of a false preference, not the nonexpression of a true one.
http://www.allacademic.com/meta/p_mla_apa_research_citation/1/3/7/3/4/p137347_index.html
has this:
>A strategic vote is generally considered a vote for a second-best
>alternative that has a greater chance of winning than a preferred
>alternative. In this study, rates of strategic voting and
>misrepresentation of preferences are estimated in a model ...
The author, again, has "misrepresentation of preferences" in mind.
The definition is worded such that it could apply to multiple votes
in Open Voting.
I looked at a lot of papers, and the usage of "strategic voting" was, first of all, mostly with reference to Plurality, and then to preference reversal. Now, I didn't include "Approval Voting" in my search terms. What interests me here is how the term came to be used in a highly pejorative sense with application to Approval voting.
The best place to see this may be Saari's weird paper, "Is Approval Voting an Unmitigated Evil? A Response to Brams, Fishburn, and Merrill." There happens to be a copy at
http://rangevoting.org/UnmitEvil.pdf
He claims to show that "AV is one of the most susceptible systems to
manipulation by small groups of voters (e.g., the outcome could be
determined by small, maverick groups.)"
"Manipulation" carries with it the smell of strategic voting. These
are "small" groups, and should small groups of voters be able to
determine an election outcome? They certainly can if the rest of the
electorate is essentially tied! And "maverick"? What does that have to do with it? Let's see what he does. As I proceed with reading the paper, I see that he begins by repeating the claim, without evidence, offering simply variations of it:
"The more negative features of AV." "AV gets bad grades." "Why should
we consider a voting system with defects so serious that they violate
the purpose of an election?" All this, before he's actually stated
the defects, other than to give one of them a name: indeterminacy. I
agree, though, with Smith: Saari is incoherent. This paper was way below the standard expected of peer-reviewed material.
So what kind of example does he have in mind? First of all, it's pretty frustrating to read this paper. It is mostly repetition of the claim that Approval is a terrible system because it is indeterminate and this can produce awful results, so why should we use such a horrible method? Saari's paper is worse than a badly-written rant on this list. It's even worse than my writing! Yet there it was in Public Choice, 1988.
Out of 10,000 voters ranking the candidates A, B, C, 9,999 believe
that A will do an excellent job, that B is quite mediocre but much
preferred to C, and that C is an absolute disaster. The last voter
prefers C but believes that B is much better than A. Using "BFM's
recommended strategy of mean utility," each voter votes for his or
her top two choices. The AV tally for the first 9,999 voters is a tie
vote between A and B. This tie is broken between A and B when the
last voter votes. So excellence is the clear choice of these voters,
but AV selects mediocrity. And then he refers to small groups of
mavericks altering the outcome.
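The tally he describes, reproduced trivially in Python, just to have the numbers in front of us:

# Saari's scenario: 9,999 voters approve {A, B}; one voter approves {B, C}.
approvals = {"A": 9999, "B": 9999 + 1, "C": 1}
print(max(approvals, key=approvals.get))   # B wins by a single vote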
This is really appalling. Is he pulling our leg? Apparently not. He
believes what he's writing, it seems. "Excellence" was not the "clear
choice" of those voters. Saari imagined that they had a preference
profile that would imply excellence as the choice, but they
*certainly* did not choose to actually *make* that choice. Rather,
they indicated indifference, all 9,999 of them. How does Saari
justify this bizarre voting pattern?
By the way, the C voter is rational. I'd vote that way if I were that voter, because B is clearly an improvement over the expected outcome, if the voter has any clue as to the context. The only way the other 9,999 votes are rational is if none of those voters have any clue about how the others might vote. It's zero knowledge. "Mean utility" as a strategy would only be used with zero knowledge, as one option, and I've discussed elsewhere in this series of posts that voting mean utility as an approval cutoff is probably a Bad Idea; there is a better way of estimating zero-knowledge election utilities, which is to assume that the average voter is like you. That's true more often than not.
Real voters, absent a rough assessment of probabilities, are likely
to bullet vote unless their preference strength is small. Saari's example is preposterous: it posited absolutely uniform decisions by 9,999 voters, when, in fact, voters are not identical; they are spread out, and they will make different decisions. In a real election like this, the voters would differ in their decisions of where to place the approval cutoff, and, it must be noted, if even one of them doesn't vote for B, it's a tie, and if two -- still a tiny, tiny percentage -- don't vote for B, excellence wins.
The problem here is that the voters do *not* vote with any real strategy, in the sense in which we have been discussing strategic voting; they vote with a mindless rule that Saari seems to think was recommended. I agree
that Brams and Fishburn may have overestimated the degree to which
voters will add additional approvals. Results with Bucklin in primary
elections seem to have shown that only about 11 percent of voters did
so, and Bucklin makes it easier to add approvals, because they are
not counted at first. Majority failure was apparently common, as one
might expect. Bucklin doesn't fix all problems! The key would be to
hold a runoff when there is majority failure, and, in fact, IRV had
similar problems when used for primaries in the U.S., I understand;
and IRV was replaced with top two runoff as a reform.
Dhillon and Mertens get it right: they think that Approval voters
will vote VNM utilities, not raw utilities with some mindless mean as
a cutoff. If 9,999 voters express indifference between A and B, *of
course* a single voter can make the choice! Those weren't voters,
they were figments.
I was looking for usages of "strategic voting," but came across this
Saari paper again. Sorry.
>Where does truncation fit in? Surely truncation was seen as a strategy
>concern, considering how old STV is.
Truncation as a strategy is problematic. It's simply equal-ranking
bottom. It does not express preference, and if there are meaningful
preferences there, we could consider the vote inaccurate. But it is
not insincere. Truncation does two things: the expressed votes are effectively votes against all the nonranked candidates, while expressing indifference among them, and truncation may cause majority failure if a majority is required.
And if a majority is required, truncation is a sincere strategy that
clearly can improve the outcome. It's sincere because it really does
state that one prefers all the ranked candidates to the ones not
ranked, which is what is necessary for the effect to be an improvement,
and because if none of the "approved candidates" -- those receiving
votes -- are elected, the remaining effect of the vote is to cause a
runoff or further process where one of these or even one better may
win. And the voter can decide later if preference strengths make it
worth voting in the runoff.
Indeed, with IRV, with truncation allowed and a majority required (the Robert's Rules version), truncation even after the first preference can be a reasonable strategy. To me, the issue is whether the voter prefers to rank another candidate or to cause majority failure. If I'd rather see majority failure, that is where I stop ranking. *And this improves outcomes, not just for me, but for the overall result, and how much depends on the method.*
> > An inaccurate vote with Range isn't necessarily
> > insincere, at all. The voter has decided not to put more
> > effort into determining relative utilities. The voter simply
> > has not considered other than two frontrunners. The voter
> > has simply decided that full-on Yes and No are adequate
> > expressions.
>
>Sure, but that is insincere *enough* to say that it's not what would
>be expected by the simulations for a sincere voter.
That's because "sincere voter" in Range is specially defined to
include an accurate representation of relative utilities, generally.
That's "sincere Range" in the simulations. I don't think that
inaccuracy was considered. Naturally, better models would do so.
But these aren't going to change relative results much, I'm pretty confident.
>With Approval it's difficult to find "obvious best winners." The
>notion of a candidate "representing" voters is difficult considering
>that voters, especially in Approval, are mostly just making strategic
>decisions when they vote.
Well, no. They are combining utilities and preference order information with probabilities. The former are most important; the votes are meaningless without them. The vote will vary between bullet voting and antiplurality. The Saari example was actually antiplurality, utterly the opposite of what I'd expect in real Approval elections. In a deliberative environment, if Approval is used to speed things up, the expectation would be that you would not add additional approvals unless the preference strength is weak enough that you'd rather see the process terminate earlier than later. With significant preferences, you would start with a bullet vote; then, with each new ballot, you would lower the approval cutoff until a majority is finally found. Those who resist adding additional approvals have higher absolute preference strength; it is practically necessary.
And that's how it actually works in a real environment where a
majority is required. Quite often, depending on how many candidates
there are, a majority is found in the first ballot. But with many
candidates each with many supporters, majority failure becomes the
norm, especially in the first round.
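Here is a small sketch, in Python, of that repeated-balloting pattern under my assumptions: each voter approves candidates whose utility (relative to the voter's favorite) meets a cutoff, and the cutoff drops on each new ballot until some candidate reaches a majority. The utilities and the cutoff schedule are invented for illustration.

# Repeated Approval balloting with a descending approval cutoff, as in a
# deliberative body requiring a majority. Voter utilities and the cutoff
# schedule are hypothetical.

def repeated_approval(voter_utils, candidates, cutoffs):
    n = len(voter_utils)
    totals = {}
    for rnd, cutoff in enumerate(cutoffs, start=1):
        totals = {c: sum(1 for u in voter_utils if u[c] >= cutoff * max(u.values()))
                  for c in candidates}
        leaders = [c for c in candidates if totals[c] > n / 2]
        if leaders:
            return rnd, max(leaders, key=totals.get), totals
    return None, None, totals     # still no majority; keep deliberating

voter_utils = (
    [{"A": 1.0, "B": 0.6, "C": 0.0}] * 4 +   # A supporters, B tolerable
    [{"B": 1.0, "A": 0.5, "C": 0.4}] * 3 +   # B supporters
    [{"C": 1.0, "B": 0.7, "A": 0.0}] * 3     # C supporters, B tolerable
)
print(repeated_approval(voter_utils, ["A", "B", "C"], cutoffs=[1.0, 0.75, 0.5]))
# (3, 'B', {'A': 7, 'B': 10, 'C': 3}) -- bullet votes at first, a compromise found later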
>If we expect two frontrunners under Approval, I would be very surprised
>(and extremely put off) if Approval ended up failing to produce
>majorities. This would go most of the way to convincing me that the
>incentives are broken.
Well, maybe, but you would be foolish to expect Approval to be *much* better than Plurality at finding a majority. IRV does a little better (I mean a real majority, of course). Bucklin is probably a little better at that. Approval will probably not be as good as Bucklin at finding majorities.
Approval, one has to understand, is the no-cost reform, not the ideal
one. It's a great place to start, because it sets up the idea of
voting independently on each candidate. That's necessary for SWF (social welfare function) maximization.
> > > If two candidates obtain majority approval, most
> > likely one of them was
> > > not a frontrunner, but a compromise choice.
> >
> > That's correct, if "frontrunner" refers only
> > to first preference. If it refers to overall popularity,
> > predicted approval votes, it's a different matter, and
> > one that I did not consider above.
>
>This is actually interesting now that I think about it.
Right. However, it's addressed pretty effectively by polling for expected votes given the voting system. If the voters know that the real race is between A and B, not A and C, they will tend to place the approval cutoff between A and B, thus discriminating between two good candidates, perhaps. If the significantly worse candidate, C, is in range of winning, voters will shift their approval cutoff so as to make the important choice, between the {A, B} set and C.
>Some six months ago I wrote a strategy simulation for a number of
>methods. One situation I tested was Approval, given a one-dimensional
>spectrum and about five candidates, A B C D E.
>
>In my simulation, once it was evident that C was likely to win, one of
>either B or D's supporters would stop exclusively voting for that
>candidate, and would vote also for C.
B and D voters are motivated to ensure that C wins if their favorite doesn't. Hence Approval will tend to find a compromise. If B or D are not relevant -- can't win -- they *may* also vote for B or D, so I'm not sure that the simulation was accurate. The vote would depend on preference strength in the pair involving C and the voter's favorite. If that preference is weak, the voter is more likely to also approve; but, note, by definition this doesn't affect the outcome unless they have bad information.
In a real situation, the likelihood of bullet voting varies with utility distance, i.e., preference strength. If we look at a voter equidistant between B and C, the voter may actually have minimal preference between B and C and even have difficulty deciding which to vote for as a first preference. For voters in some region between B and C, voting for both in Approval will be common.
As C becomes a frontrunner, and unless B is a frontrunner also, the
probability that the voter, considering this, will also vote for C
increases. It's simply VNM utilities. You place the vote strength
where your limited investment can make a difference. In Approval and
Range you have a full vote to "spend." If we lay the candidates out
in a spectrum, accurately in preference order, then in a true zero-knowledge vote where we place the vote depends only on absolute preference strength. The real vote will depend both on the preference strength and on the probability of the vote making a difference. If there is no such probability, the preference order remains the same, but no vote strength is invested in that election pair; Approval allows you to place the full vote in only one pair, but it then operates on all pairs straddling that pair. It's a choice. The power of the choice improves with better knowledge. Big surprise, eh?
Approval rewards having better knowledge; I think that is a good effect. But even those with zero knowledge of the probabilities (how could that happen with a voter who was at the same time informed about the candidates?) can contribute something: that's the vote that Saari claims was "recommended," or, better, the bullet vote -- just vote for your favorite. It is as sincere a vote as is possible in Approval; it expresses exclusive preference, this candidate over all others, which is the kind of preference people are accustomed to expressing. In the sorry Saari scenario, that simple vote, from two out of the 9,999 voters who were not under the evil influence of Brams, Fishburn, and Merrill's plot to teach voters how to elect mediocre candidates, would have saved the day. Saari posited stupid voters, then proposes, elsewhere, that Approval is a system for unsophisticated voters.
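One common way to formalize the "place your vote where it can make a difference" idea, sketched in Python: approve every candidate whose utility to you exceeds your probability-weighted expectation of the winner's utility. (A fuller treatment would use pivot probabilities rather than raw win probabilities; the utilities and probability estimates here are hypothetical.) With equal probabilities it reduces to the mean-utility cutoff; with better information it concentrates the vote on the pair that matters.

# "Better than expectation" Approval strategy: approve any candidate whose
# utility exceeds the expected utility of the winner. Numbers are made up.

def approval_set(utilities, win_probs):
    expected = sum(win_probs[c] * utilities[c] for c in utilities)
    return {c for c in utilities if utilities[c] > expected}

utilities = {"A": 1.0, "B": 0.6, "C": 0.0}

# Zero knowledge (equal probabilities): this is the mean-utility cutoff.
print(approval_set(utilities, {"A": 1/3, "B": 1/3, "C": 1/3}))     # {'A', 'B'}

# The real race is between B and C: approve B as well as the favorite.
print(approval_set(utilities, {"A": 0.02, "B": 0.49, "C": 0.49}))  # {'A', 'B'}

# The real race is between A and B: bullet vote for A.
print(approval_set(utilities, {"A": 0.49, "B": 0.49, "C": 0.02}))  # {'A'}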
>So then, the frontrunners really were either B and C, or C and D. (And
>C would almost always win.)
>
>If Approval managed to behave like that, I would consider it pretty good.
>I wonder if it can be tweaked to make this scenario more likely to occur.
But that is how Approval would tend to work. You add additional
approvals when your preference strength between the candidates is
weak. You don't when it is strong. In the scenario you posited, there
may have been, initially, five frontrunners, based on first
preference, which was based on issue distance. But if you look carefully, between each pair of adjacent candidates there is a region where preference strength is weak, even if the candidates are spread apart; the voters involved are between their positions. Those voters will add additional preferences. But majority
failure is still likely to occur in a real election. Beware of
setting up an election scenario that is exquisitely balanced; with
five candidates it is very, very rare. Three in balance is rare enough.
Your simulation allows a determination of relative preferences, which translate into "sincere Range votes" if you want to study that. It occurs to me to add a responsiveness factor. Some voters have little overall preference strength across the entire candidate set, and others have strong preferences. If this were a special election (like a runoff, as an example), the absolute preference strength would affect turnout. Low absolute preference -- a small range of possible utilities between the best outcome and any of the others -- means lower motivation to turn out and vote, but also a higher likelihood of adding additional approvals.
> > If more than one candidate receives a vote from a majority
> > of voters, there will also be a runoff election between the
> > top two.
>
>I don't think it is viable to have a runoff election between the top
>two Approval candidates. I know you have hinted that you're not concerned
>about Clone-Loser failures here.
Approval is basically tweaked Plurality. I'm simply considering a double majority as a kind of majority failure: the majority has failed to clearly indicate a preference. If A and B both gain a majority, and A has more votes, we do not know that a majority favor A over B. The number with that preference might be as small as the difference in votes, or, in fact, as would be asserted by the Majority Criterion failure argument, it might be as large as the full vote for A. A runoff tests this. Yes, I'm not so concerned about strategic nomination to try to get two candidates into the runoff. That strikes me as very foolish politically; the likely result is that neither would make it.
There is still some level of vote-splitting in Approval, particularly
if both are frontrunners, plus there is the problem of trying to get
name recognition for two candidates instead of just one. Not
efficient. Pick the best candidate and put all the effort into that
one. Approval is just a minor tweak.
The point of using Approval in a TTR primary is that it is more
likely to detect a compromise winner in the primary. Bucklin,
probably more. Condorcet methods, certainly; but they will need an
explicit approval cutoff. Not hard to do. (It's like Range; Range bears a close relationship to fully-ranked methods if it has sufficient resolution.)
> > In order to do it, we need a method of *measurement* of
> > election performance. Enter, stage right, Bayesian regret.
> > Got any other alternative? Has any other alternative been
> > seriously proposed?
>
>I don't think so. We just called it "social utility."
Right. Bayesian regret, for onlookers, is simply the difference in social utility between the ideal result -- the social utility maximizer -- and the actual election victor. It's calculated as an average, in the simulations, over many elections with varying assignments of candidates to positions and derived voter preferences. Less regret is better. If the perfect choice could be made every time, that would be zero regret. No known method achieves that, unless we make very unrealistic assumptions about the voters.
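For onlookers who want to see the shape of the calculation, here is a toy version in Python. The voter model (independent uniform random utilities, "sincere Range" as normalized scores, Plurality as vote-for-favorite) is deliberately crude and is not the model used in the published simulations; it only illustrates how regret is measured and averaged.

# Toy Bayesian regret: average (best achievable social utility) minus
# (social utility of the method's winner) over many random elections.
import random

def bayesian_regret(method, n_voters=50, n_cands=5, trials=2000):
    total = 0.0
    for _ in range(trials):
        utils = [[random.random() for _ in range(n_cands)] for _ in range(n_voters)]
        su = [sum(v[c] for v in utils) for c in range(n_cands)]
        ideal = max(range(n_cands), key=lambda c: su[c])
        winner = method(utils)
        total += su[ideal] - su[winner]
    return total / trials

def plurality(utils):
    # Each voter votes only for their favorite.
    counts = [0] * len(utils[0])
    for v in utils:
        counts[max(range(len(v)), key=lambda c: v[c])] += 1
    return max(range(len(counts)), key=lambda c: counts[c])

def sincere_range(utils):
    # Each voter scores candidates with utilities normalized to [0, 1].
    scores = [0.0] * len(utils[0])
    for v in utils:
        lo, hi = min(v), max(v)
        for c, u in enumerate(v):
            scores[c] += (u - lo) / (hi - lo) if hi > lo else 0.5
    return max(range(len(scores)), key=lambda c: scores[c])

print("Plurality regret:", bayesian_regret(plurality))
print("Sincere Range regret:", bayesian_regret(sincere_range))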
However, there has been no attempt to study, say, Asset Voting with
regard to Bayesian regret. It could be argued that Asset Voting
creates an electoral college with zero deviation from ideal, since
every voter is represented there, full strength, by their ideal
choice, without opposition. The translation from that into an
assembly can be quite close, but there will be deviations, if small,
from full representation.
> > > (It works when we
> > > > can see it and test it, why would it stop working
> > when we
> > > > can't?)
> > >
> > > Because real people will be involved
> >
> > That doesn't actually answer the question.
>
>Well, if it doesn't work when we can't see and test it, the reason likely
>has something to do with real people being involved.
There is no question that real results can differ when the
circumstances change! It's just that we learn as children that when someone walking passes behind some obstacle, they *often* appear on the other side at a time correlated with the distance and how fast they were walking. We learn to assume that they continue walking, as a first estimate, when they pass out of sight. It's an assumption that is more often correct than false.
If we know that a method works when utilities can be measured, there
is no particular reason to suspect that it will stop working when
they cannot be. Preference strength is real, it motivates people, and
it can sometimes be measured. But people like Arrow assumed it was meaningless, because there is no specific general metric for it.
Dhillon and Mertens finesse this: they define the preference strength
by what a voter expresses! And this is pretty much the conclusion I
came to. Expressing a strong preference takes, at least, a *kind* of
strong preference. That the voting is a lottery, and that, in Range Voting, one cannot simultaneously bet the same vote on all pairs, but only spread and allocate it between pairs, total strength one vote, is a restriction on the vote that encourages a kind of sincerity. It is simply that this is modified by probabilities; economists are accustomed to this kind of choice, and that's VNM utility.
> > What questions remain? I see substantial agreement in
> > certain areas. What do you think?
>
>It is possible that we are getting closer.
That's the goal of discussion, isn't it? Discussion to win is rather
boring and usually goes nowhere, at least here.
>My original concern is to try to reconcile the ideas that simulations
>support Range and yet real voters should not be expected to mimic their
>mind-read votes from the simulations.
Yes, of course. But real voters will follow something similar. It's unpredictable; Saari was right that Approval is indeterminate, but Brams and Fishburn were right that this is a feature, not a bug. It gets harder, not easier, to manipulate other voters when their votes are indeterminate. Saari fell into the trap of imagining that indeterminacy would allow a group of "maverick voters" to manipulate results, but his example showed something very different: a lone voter (maverick?) who votes sensibly and who determines the outcome, since everyone else votes in a totally predictable and uniform way, hence failing to express an important preference and ending up with a mediocre election outcome. That wasn't really a mediocre outcome; it was a poor outcome, with, pretty much, half the SU of the ideal. That's big regret, produced by stupid strategy, applied by 9,999 out of 10,000 voters who were totally clueless.
>It strikes me that ultimately you don't really need the simulations,
>because you can argue in any case that Range will be at worst as good
>as Approval. If you can sell Approval you can start selling Range,
>basically.
Actually, that's my conclusion, politically. Start with Approval.
Count All the Votes.
Like, Duh! What took us so long to figure this out? It's not like it
was never done before.
However, the simulations are important because they lay a framework for studying voting system performance that isn't so thoroughly subjective as the criteria turned out to be. The measure of performance works when the outcomes can actually be assessed, as with, for example, the problem of placing a capital, based on a population of users who would all like to be at a minimum distance from the capital. For each voter, we could determine the exact travel distance, and we could assume that we want to minimize total distance travelled, assuming, say, one trip to the capital each year. So, as you did, we can use issue distance to estimate preference strengths in simulations. We can theorize at length on why this or that system is better, but performance in simulations should generally be stronger as evidence.
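A tiny worked version of that capital-placement example, with made-up positions and populations: total travel distance is directly measurable, so the ideal winner is simply the site that minimizes it, and any voting method's pick can be scored against that.

# Capital-placement toy: voters live at positions on a line, candidate sites
# are also positions; a voter's cost for a site is the travel distance.
# The measurable ideal winner minimizes total distance travelled.
# Positions and populations are made up.

voters = [0.0] * 40 + [3.0] * 35 + [10.0] * 25       # voter positions
sites = {"West": 0.0, "Center": 3.5, "East": 10.0}   # candidate capital sites

total_distance = {name: sum(abs(v - x) for v in voters) for name, x in sites.items()}
ideal = min(total_distance, key=total_distance.get)

# Plurality comparison: each voter votes for the nearest site.
votes = {name: 0 for name in sites}
for v in voters:
    votes[min(sites, key=lambda name: abs(v - sites[name]))] += 1
plurality_winner = max(votes, key=votes.get)

print(total_distance, "ideal:", ideal)              # Center minimizes total distance
print(votes, "plurality winner:", plurality_winner) # Plurality picks West instead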