[EM] Meta-criteria 8 of 9: Heuristic: fairness/honesty. (long - includes strategy typology)

Jameson Quinn jameson.quinn at gmail.com
Thu May 6 14:19:57 PDT 2010


The last heuristic is fairness. Many people might, at first, deny that it's
a heuristic at all. Fairness is a basic value, they'd say. But ask them to
define it, and pretty soon they'll be tripping themselves up with
contradictions.

When you recognize that it's not an axiomatic value, just a heuristic, these
contradictions are not a problem. Because, in the end, fairness is only a
perception - though not an arbitrary one. It's based on a combination of just a
few aspects - I'd say three. Overall, these aspects, while not perfect or
even entirely consistent, provide a remarkably good heuristic. A "fairer"
system is likely to have better utility, better expressiveness, and better
legitimacy. (It may cost more, but that's the least of the values.)

The most important of the three aspects of fairness is honesty - the absence
of strategy. In fact, for the voting systems we tend to consider, it's the
only relevant aspect. So let's get the other two aspects out of the way
first. We have to go to a favorite straw man - Random Ballot - to find the
other two aspects: random chance; and asymmetry, or domination of a majority
by a minority. (Thus the contradictory nature of "fairness" - the very
system which has the least strategy is simultaneously most unfair in the
other two ways.)

Why is strategy unfair? Let me count the ways. It leads to uncertainty - the
same voter preferences can have different results. It leads to "voter's
regret" - the knowledge that if some group of people had voted differently,
we could have attained a result we'd all prefer. (That's not the same as
"citizen's regret", the simple feeling that the result was not optimal). It
makes voting more difficult - you need to know the sizes and preferences of
voter groups and then do some sophisticated game-theory analysis in order
to vote most effectively. It means some degree of minority rule: a
strategically-sophisticated minority can swing the result in spite of an
honest majority. All of these factors reduce both utility and legitimacy.
Also, strategic dishonesty directly reduces expressiveness, because it's
impossible to know whether the motivation for some ballots was honest or
strategic.

Note that there are 5 different arguments there, and none of them correspond
to the straw man that "strategy is just bad, because you should always tell
the truth." All of them reference underlying values, values I hope we can
all agree on, even if we disagree about their relative importance.

Yet, as we all know, no voting system is perfect. Any system is unfair in
some way, and, Random Ballot aside, almost any system is subject to some
kind of strategy. So it's worthwhile to consider the different types of
strategy, and how "unfair" each one is. (While I'll discuss types of
strategy below, I could just as well conversely focus on types of
equilibria; for any relevant X, an "X equilibrium" just means the
unavailability of any "X strategies".)

The most narrowly-defined and broadly-effective kind of strategy is what I
call "dominant" strategy. This is a strategy which can't possibly backfire,
and by which a determined strategic minority of sufficient size could impose
their will - even a candidate who was clearly the worst option overall - on
an unstrategic majority. Basically, I'm talking about exaggeration strategy
in Range, Borda, and similar systems. An outcome-oriented rational voter
will always use such a strategy.

Of course, there's an important distinction between the dominant strategy in
Range, which is at least semi-honest, and the corresponding strategy in
Borda, which is fully dishonest. The dishonest strategy is subject to
pathologies such as DH3, with devastating consequences for utility. The
semi-honest one, on the other hand, has been shown in Warren's simulations
to have only a moderate impact on utility. These simulations considered
fully strategic voters and partial strategy with no strategic bias between
groups; they did not consider the worst case, which is one-sided strategy.
However, for the purpose of focusing on utility, this may be an adequate
approximation.
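
To make the distinction concrete, here's a rough Python sketch of the two
kinds of exaggeration - the semi-honest Range min/max strategy and the
dishonest Borda burial. The candidates, the utilities, the max score of 99,
and the frontrunner-threshold rule are all just illustrative assumptions of
mine; this is a sketch of the idea, not a claim about optimal play.

    # Sketch: exaggeration strategies. A voter has cardinal utilities and a
    # guess about which two candidates are the frontrunners.

    def range_exaggerate(utils, frontrunners, max_score=99):
        # Semi-honest Range strategy: every candidate gets the max or the min
        # score, using the voter's average utility for the frontrunners as the
        # threshold. No pairwise preference is reversed, only collapsed.
        threshold = sum(utils[c] for c in frontrunners) / len(frontrunners)
        return {c: (max_score if utils[c] >= threshold else 0) for c in utils}

    def borda_bury(honest_ranking, rival):
        # Dishonest Borda strategy: keep the favorite on top, but bury the
        # main rival at the very bottom, reversing real preferences.
        return [c for c in honest_ranking if c != rival] + [rival]

    utils = {"A": 100, "B": 70, "C": 40, "D": 0}
    print(range_exaggerate(utils, frontrunners=["B", "C"]))
    # {'A': 99, 'B': 99, 'C': 0, 'D': 0}
    print(borda_bury(["A", "B", "C", "D"], rival="B"))
    # ['A', 'C', 'D', 'B']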

Still, any dominant strategy - semi-honest or dishonest - will have an
impact on expressivity and legitimacy. Since it's impossible to be certain
whether a vote like 100, 98, 2, 0 reflects an honest assessment of utility
or a moderate strategic exaggeration, it's impossible to know exactly what
it expresses. And it's also too-often possible for the loser to claim that,
with honest votes, they would have been the legitimate winner. Perhaps in
the long run, strategy would fall into some stable equilibrium; but in the
short run, I fear it could be a source for endless accusations of
illegitimacy and for acrimonious within-group recriminations about who did
or didn't use which strategy.

A somewhat less-exacting definition for a less-effective strategy is what
Peter de Blanc calls "cabal strategy". This refers to a situation where, by
voting dishonestly (that is, sacrificing true expressivity), a group of
voters can attain a result that all of them view as superior. Although this
definition includes several unrealistic assumptions - perfect information,
perfect coordination, and infinitesimal motivation threshold - it functions
as a good "strongest case" for strategy. Any dominant strategy is a cabal
strategy.
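
Here's a minimal sketch of that definition, taking the perfect information
and coordination as given, the way the definition does. The generic "method"
argument, the Range tallier, and the data layout are my own illustrative
choices, not part of the definition.

    # Sketch: checking whether a coordinated ballot change is a cabal strategy.
    # "method" is any function from a list of ballots to a single winner;
    # "utilities[v][x]" is voter v's true utility for candidate x.

    def is_cabal_strategy(method, honest_ballots, utilities,
                          coalition, strategic_ballots):
        # True if, when the coalition swaps in its strategic ballots,
        # every coalition member strictly prefers the new winner to the old one.
        honest_winner = method(honest_ballots)
        altered = list(honest_ballots)
        for voter, ballot in zip(coalition, strategic_ballots):
            altered[voter] = ballot
        new_winner = method(altered)
        return all(utilities[v][new_winner] > utilities[v][honest_winner]
                   for v in coalition)

    def range_winner(ballots):
        # Illustrative method: Range voting, i.e. highest total score wins.
        totals = {}
        for ballot in ballots:
            for cand, score in ballot.items():
                totals[cand] = totals.get(cand, 0) + score
        return max(totals, key=totals.get)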

There are several aspects of cabal strategies worth noting. If they're not
semi-honest and dominant, they are dishonest and ordinal - that is,
invariant over non-ordinal changes in the electorate. That means that if
there's a strategy for ABC voters, it's available whether their true
utilities for A, B, C are 100, 99, 0; whether they're 100, 1, 0; or whether
they're 51, 50, 49. This is an important fact, because it means that, just
as the best systems for honest outcome utility (which is cardinal) tend to
be cardinal systems such as Range, the best systems for avoiding cabal
strategies (which are ordinal) tend to be ordinal systems such as Condorcet
variants.

Even so, there is no system which meets the majority criterion and avoids
all cabal strategies. If there is a Condorcet tie - a Smith set with more
than one member - then there are cabal strategies. Whoever wins, there is
some other candidate whom a majority prefers to the winner; if that majority
works together and ranks that candidate first, the majority criterion says
they must be able to elect that preferred candidate, and that constitutes a
cabal strategy.
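
For a concrete (and entirely invented) illustration of that argument, here's
a small sketch with a three-candidate cycle. Whichever of the three a
majority-criterion method elects, one of the pairwise majorities could
coordinate and overturn it:

    # Sketch: a Condorcet cycle. The vote counts are invented.
    from itertools import permutations

    profile = {("A", "B", "C"): 35,   # 35 voters rank A > B > C
               ("B", "C", "A"): 33,
               ("C", "A", "B"): 32}

    def pairwise_margin(profile, x, y):
        # Voters ranking x over y, minus voters ranking y over x.
        x_over_y = sum(n for r, n in profile.items() if r.index(x) < r.index(y))
        return x_over_y - (sum(profile.values()) - x_over_y)

    for x, y in permutations("ABC", 2):
        if pairwise_margin(profile, x, y) > 0:
            print(f"a majority prefers {x} over {y}")
    # Prints A over B, B over C, C over A: whoever wins, some majority
    # could rank its preferred candidate first and elect it instead.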

As with semi-honest strategies, the impact on outcome utility of such
unavoidable cabal strategies is probably relatively minor. Any member of the
Smith set probably has good utility. Yet that does not make these strategies
innocuous: they still affect expressivity and, above all, legitimacy.

To make the analysis of cabal strategies more sophisticated, one can
consider the factors which make them more or less likely: motivation,
implied dishonesty, and minimum participation. Take the example of a cabal
of ABC voters who can elect A instead of B by voting ACB. Strategy is more
practical, and thus by human nature more likely, if the strategic result is
strongly, not weakly, preferred - that is, the true preferences are 100-50-0
rather than 51-50-0; if the cardinal dishonesty required to vote
strategically is minimal, and thus so is the downside if strategy were to
backfire - that is, if the true preferences are 100-50-49 rather than
100-50-0; and if only a small portion of the voters with strategy available
must use it to get it to work. Since motivation and dishonesty naturally
come in units of utility, those two aspects can be combined into a single
natural metric which should relate monotonically to the likelihood of real
humans using strategy in a given situation.
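
I won't try to pin down the exact combined metric here; but purely as an
illustration, here's one candidate - the ratio of motivation to dishonesty -
applied to the ABC-voters-voting-ACB example above. The ratio itself is just
my guess at a "natural" combination, not anything established.

    # Sketch: motivation and dishonesty, in units of utility, for a cabal of
    # A>B>C voters who can elect A instead of B by voting A>C>B.
    # The ratio is just one plausible way to combine them into a single number.

    def strategy_pressure(u_a, u_b, u_c):
        motivation = u_a - u_b   # gain if the strategy works (A wins, not B)
        dishonesty = u_b - u_c   # preference reversed; loss if burial backfires (C wins)
        return motivation / dishonesty

    print(strategy_pressure(100, 50, 0))    # 1.0  : strong motive, real downside
    print(strategy_pressure(100, 50, 49))   # 50.0 : near-costless strategy, very tempting
    print(strategy_pressure(51, 50, 49))    # 1.0  : weak motive, weak downside
    print(strategy_pressure(51, 50, 0))     # 0.02 : weak motive, large downside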

There are further definitions of strategies which are broader than cabal
strategies. For instance, there are various kinds of "symbiotic strategies",
in which two different groups are using strategy to different ends. Thus,
there's implicitly a non-zero-sum game matrix of 8 values; my notation is
that, for instance, u2(y,n) denotes the expected utility for group 2 if
group 1 uses strategy (y) and group 2 does not (n). From the perspective of
group 2, here are some possible kinds of strategy (a short code sketch
encoding these conditions follows the typology):

Mutualist: u1(y,y) >= u1(*,*); u2(y,y) >= u2(*,*). The only reason that this
is not a cabal strategy for the union of the two groups is that it's a
probabilistic game of imperfect knowledge. Each group is hoping for a
different outcome, but the strategies combine to make both desired outcomes
more probable. For an example, see the CRV page on the DH3 pathology in
Condorcet systems <http://rangevoting.org/WinningVotes.html#DH3> . A and B
voters each bury the natural winner C under a dark horse D in hopes of
making their own candidate win, and without caring if their strategy ends up
helping the other group win. Personally I find this argument to be a
stretch, for several reasons. First, if A and B voters were indeed to
cooperate on this massive scale, I can't imagine that the association would
not move their honest votes towards their allied candidate - and then the
whole strategy would become unnecessary. Second, for the case where the
logic is extended to all 3 groups and thus the full DH3 pathology comes out:
in the models I find more compelling, each voter would realize that it's
more probable that some common external factor (say, the performance of the
national soccer team the night before the vote) would bias ALL voters
towards strategy, and thus make strategy lead directly to D winning; than
that all the various individual factors would line up to make strategy "work
out". Thus, I tend to discount the real importance of mutualist non-cabal
strategies.

Commensalist: u2(y,y) > u2(y,n); u2(n,y) <= u2(n,n); u1(y,y) = u1(y,n) >
u1(n,n). Group 2 is taking advantage of Group 1's strategy, without harming
group 1. I include this for completeness, but the fact that group 1 would be
perfectly indifferent to a result which pleases group 2 makes it highly
unlikely.

Parasitic: u2(y,y) > u2(y,n); u2(n,y) <= u2(n,n); u1(y,y) < u1(y,n) >
u1(n,n). This can be further subdivided by the following factor:

Defensive: u2(y,n) < u2(n,n); u1(y,y) < u1(n,y). The point of a strategy
like this is to declare openly that you'll be using it.

Spiteful: Defensive and u2(y,y) < u2(n,y). Group 2 is willing to hurt itself
to get revenge if Group 1 is strategic. A truly rational voter will try to
bluff that she will use a spiteful strategy, but not actually follow
through. So if other voters know her to be rational, the bluff won't work;
she has to feign irrationality. (A non-spiteful defensive strategy is
commensalist or, more likely, parasitic).

Extortionist: u2(y,y) > u2(n,n) >= u2(n,y); u1(y,y) > u1(n,y) <= u1(n,n).
Group 2 says "I'm using strategy to force you to". For instance, they could
declare that they're going to bullet vote for A instead of also favoring B,
to force the larger group of B voters not to bullet vote for B. Like a
spiteful strategy, this is a game of chicken in which it pays to be seen as
irrational. A situation like this could actually lead to a DH3-like
pathology in Range. Personally, I doubt that "honest ally" groups who
behaved this way with each other would still be honest allies by the time of
the election, so I tend to discount these strategies as well.
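
Since conditions like these are easy to misread in prose, here's a minimal
Python sketch encoding the typology directly over the 8-value matrix. The
dictionary layout, and my reading of the '*' in the mutualist condition as
"any combination of choices", are my own; the inequalities are just the ones
listed above.

    # Sketch: classifying a two-group strategic interaction from its payoffs.
    # u1 and u2 map (group 1's choice, group 2's choice) -> expected utility,
    # where each choice is 'y' (use strategy) or 'n' (vote honestly).

    CELLS = [('y', 'y'), ('y', 'n'), ('n', 'y'), ('n', 'n')]

    def mutualist(u1, u2):
        # Both groups do at least as well under joint strategy as in any cell.
        return (u1['y', 'y'] >= max(u1[c] for c in CELLS) and
                u2['y', 'y'] >= max(u2[c] for c in CELLS))

    def commensalist(u1, u2):
        return (u2['y', 'y'] > u2['y', 'n'] and u2['n', 'y'] <= u2['n', 'n']
                and u1['y', 'y'] == u1['y', 'n'] > u1['n', 'n'])

    def parasitic(u1, u2):
        return (u2['y', 'y'] > u2['y', 'n'] and u2['n', 'y'] <= u2['n', 'n']
                and u1['y', 'y'] < u1['y', 'n'] > u1['n', 'n'])

    def defensive(u1, u2):
        return u2['y', 'n'] < u2['n', 'n'] and u1['y', 'y'] < u1['n', 'y']

    def spiteful(u1, u2):
        return defensive(u1, u2) and u2['y', 'y'] < u2['n', 'y']

    def extortionist(u1, u2):
        return (u2['y', 'y'] > u2['n', 'n'] >= u2['n', 'y']
                and u1['y', 'y'] > u1['n', 'y'] <= u1['n', 'n'])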

I'll leave it to the reader to work out the probable impact on the social
utility of the outcome of the above symbiotic strategies; it ranges from
minimal to serious. I would like to point out, however, that in all cases
they would damage legitimacy and, to a more variable extent, expressivity.

To sum up: strategy's biggest effects are not on outcome social utility, but
on legitimacy and expressivity. I believe that these effects are serious and
worth avoiding. Cabal strategy seems to me the best model for strategy
analysis, but if it isn't going to get hung up on all Condorcet ties being
strategic, it needs to include factors which affect the likelihood of
strategy actually being used. These factors include motivation, implied
dishonesty, and necessary participation.