# [EM] Strategy-resistant monotone methods

Kristofer Munsterhjelm km_elmet at t-online.de
Sun Feb 7 06:03:29 PST 2016

(I've tried to write this post many times, but each time, I've bogged
myself down in details. So let's try again...)

Lately (the last few months) I've been running a lot of simulations on a
class of methods to try to find a method that is Condorcet, monotone,
and about as resistant to strategy as James Green-Armytage found
Smith//IRV to be. (The tests I've used check susceptibility to
strategic voting when the initial ballots are produced according to
impartial culture or a 4D Gaussian model.)

I don't like nonmonotonicity :-)

I have found two three-candidate methods, one simple (linear) and one
not so simple (nonlinear), that seem to meet the criteria. The linear
method is monotone, passes mono-add-top, but is not reversal symmetric.
The nonlinear method is monotone, but not reversal symmetric either. I
don't know if it passes mono-add-top, but it might.

(Furthermore, I've found out that if you want both monotonicity and
reversal symmetry, it seems you have to pay for it by the method
becoming a lot more vulnerable to strategy.
The best nonlinear monotone and reversal symmetric method I found had
about 65% susceptibility and a lot of ties on IC; now compare that to
the stats below.
I don't know why monotonicity and reversal symmetry give such
susceptibility to strategy, however. There's of course also the
possibility that there's a magic monotone reversal-symmetric method out
there but that my tests haven't found it.)

The general class of three-candidate methods goes like this.

Suppose there are three candidates and you want one winner. Then there
are three possibilities:

1. There's a Condorcet winner, in which case he wins.
2. There's a tie, either two-way with a Condorcet loser, or three-way;
in which case the tie is preserved (the tying candidates come first).
3. There's a cycle.

If there's a cycle, then for each candidate, we can consider without
loss of generality that candidate to be A, the one he beats pairwise to
be B, and the one who beats him pairwise to be C. Then we can let that
candidate's score be f(ABC, ACB, BAC, BCA, CAB, CBA) for some function
f, where the variables give the number of voters who voted that way when
the candidates have been thus relabeled. The candidate with the greatest
score then wins.
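As a sketch of that scheme in Python (the names `pairwise` and `resolve` are mine, and I'm assuming ballots are given as full strict rankings with voter counts):

```python
from itertools import permutations

def pairwise(ballots):
    """Pairwise counts: d[(x, y)] = number of voters ranking x over y.
    ballots maps a full ranking like 'ABC' to its voter count."""
    d = {(x, y): 0 for x in 'ABC' for y in 'ABC' if x != y}
    for order, n in ballots.items():
        for i, x in enumerate(order):
            for y in order[i + 1:]:
                d[(x, y)] += n
    return d

def resolve(ballots, f):
    """Cases 1-3 above: a Condorcet winner wins outright; ties are
    preserved (the tying candidates come first); in a cycle each
    candidate is relabeled as A (B is whom he beats, C is who beats
    him) and scored by f(ABC, ACB, BAC, BCA, CAB, CBA)."""
    d = pairwise(ballots)
    beats = {x: [y for y in 'ABC' if y != x and d[(x, y)] > d[(y, x)]]
             for x in 'ABC'}
    if any(len(v) == 2 for v in beats.values()):      # case 1: CW
        return [x for x in 'ABC' if len(beats[x]) == 2]
    if any(len(v) != 1 for v in beats.values()):      # case 2: tie
        best = max(len(v) for v in beats.values())
        return [x for x in 'ABC' if len(beats[x]) == best]
    scores = {}                                       # case 3: cycle
    for a in 'ABC':
        b = beats[a][0]                               # whom a beats
        c = next(x for x in 'ABC' if x not in (a, b)) # who beats a
        relabel = {a: 'A', b: 'B', c: 'C'}
        counts = dict.fromkeys(map(''.join, permutations('ABC')), 0)
        for order, n in ballots.items():
            counts[''.join(relabel[x] for x in order)] += n
        scores[a] = f(counts['ABC'], counts['ACB'], counts['BAC'],
                      counts['BCA'], counts['CAB'], counts['CBA'])
    top = max(scores.values())
    return [x for x in 'ABC' if scores[x] == top]
```

For instance, with f = ABC + ACB - CAB - CBA (i.e. fpA - fpC), the cyclic profile {'ABC': 4, 'BCA': 3, 'CAB': 2} resolves to A alone.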

For a linear method, I let f = x_1 * ABC + x_2 * ACB + x_3 * BAC + x_4 *
BCA + x_5 * CAB + x_6 * CBA, where the x values are chosen from {-2, -1,
0, 1, 2}. That gives 5^6 = 15625 different methods in all; I used a
best-arm multi-armed bandit search to find the top methods.

For a nonlinear method, f is defined by a mapping from integers to a
stack machine program with several primitive functions (like max, min,
addition, division, etc., as well as the six variables above). There are
as many of those as can fit in computer memory; my program found about a
million unique ones and winnowed them down to 20-30k monotone methods.

-

The best linear method my search found was:

f = fpA - fpC

i.e. a candidate's score is the number of first preferences he has,
minus the number of first preferences for whoever is beating him pairwise.

Suppose the cycle is A>B>C>A, then

A's score = fpA - fpC
B's score = fpB - fpA
C's score = fpC - fpB

If you raise A to first place, then A's score will increase and
whoever was first will have his score decrease; so raising A can't
hurt A. Raising A from last to second-to-last won't do much in terms of
the score, since that doesn't alter anybody's first preferences;
however, it can break a cycle, so again, that doesn't hurt A and may
help A. E.g. if C>A is very weak, a few voters raising BCA to BAC may
make A the CW.
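Concretely (a Python sketch; `fpa_fpc_scores` is a name I made up), in an A>B>C>A cycle the scores come straight off the first-preference tallies:

```python
def fpa_fpc_scores(fp, cycle='ABC'):
    """Score each candidate by fpX - fp(beater of X).  cycle='ABC'
    encodes the pairwise cycle A>B>C>A, so cycle[i-1] beats cycle[i]."""
    return {c: fp[c] - fp[cycle[i - 1]] for i, c in enumerate(cycle)}

# 4 ABC, 3 BCA, 2 CAB voters give the cycle A>B>C>A with
# first preferences fpA=4, fpB=3, fpC=2:
print(fpa_fpc_scores({'A': 4, 'B': 3, 'C': 2}))
# A: 4-2 = 2, B: 3-4 = -1, C: 2-3 = -1, so A wins.
```

Note that moving A up to first on some ballot can only add to fpA and subtract from another candidate's first preferences, never touching fpC's role in A's score, which is the monotonicity argument above in miniature.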

The monotonicity check for the linear methods is somewhat strict and
might exclude methods that are monotone, so this might not be the very
best. The program also found one that was very slightly better (about
0.01% less susceptible), but it was a lot more complex, so I haven't
given that method here.

The method's susceptibility to strategy for 37-voter elections is:

IC: Susceptible 17045 times out of 100000 (17.0%); 0 ties
Gaussian: Susceptible 4142 times out of 100000 (4.1%); 0 ties

In comparison, the figures for Condorcet,IRV are:

IC: Susceptible 16385 times out of 100000 (16.4%); 0 ties
Gaussian: Susceptible 3980 times out of 100000 (4.0%); 0 ties

and for Schulze:

IC: Susceptible 75230 times out of 100000 (75.2%); 4873 ties
Gaussian: Susceptible 20497 times out of 100000 (20.5%); 348 ties

-

The linear method is not quite at Condorcet,IRV's level, but the
nonlinear methods can do much better. The best one I found there was:

f = A>B * min(C>A, A>B)/fpC

so in an ABCA cycle:

A's score: A>B * min(C>A, A>B)/fpC
B's score: B>C * min(A>B, B>C)/fpA
C's score: C>A * min(B>C, C>A)/fpB

Break infinities by adding an epsilon to the denominators and let
epsilon approach zero. In practice, there will only be one infinity
anyway because otherwise two candidates would have no first preferences,
in which case the third has unanimous support and hence there'd be no cycle.

My program has only checked the monotonicity property of this method
statistically, so I can't say for sure it is monotone; however, it does
seem to be so. (If you can see any counterexample, please show it, as
that would either mean there's a bug or the tests weren't thorough enough.)

It's somewhat hard to check mathematically because the first preference
terms pull down while the minimum term pulls up unevenly.

The stats for this one are:

IC: Susceptible 12244 times out of 100000 (12.2%); 381 ties
Gaussian: Susceptible 3379 times out of 100000 (3.4%); 11 ties

which is better than Condorcet,IRV.

-

In theory, it would be possible to do the same kind of brute-force
search for four candidates, but it gets unwieldy pretty quickly. A
linear four-candidate method would go:

Either:
1. there's a CW				(Smith set of 1)
2. there's a tie
3. there's a maximal three-cycle	(Smith set of 3)
4. there's a maximal four-cycle		(Smith set of 4)

Then, in case 3, without loss of generality let the candidate be A and
the cycle ABCA; then use f, as above, but with D (the loser) not
participating;

and in case 4, let the candidate be A and the cycle ABCDA; then use
g(ABCD, ACBD, ..., DCBA).

A linear four-candidate method would have 30 constants (4! + 3! = 24 +
6 = 30). 5^30 ~= 1e21, which is completely impractical, even for a bandit
search. Granted, only a few of those will be monotone, and only a few of
*those* will be cloneproof, but still...

(I suspect that the fpA-fpC method would generalize to something using
solid coalitions rather than something using plurality scores. But I
have nothing solid to base that suspicion on.)

It might be better to try to expand the three-candidate methods in a way
analogous to Ranked Pairs. E.g. A's score relative to B is a function of
all the three-candidate elections where A wins, compared to all the
three-candidate elections where B wins, restricted to elections that
involve both A and B.

It's also possible, I think, to get LIIA out of the three-candidate
methods above, but I haven't experimented with that. If there's an ABCA
cycle and A gets the highest score, then to pass LIIA, let the method
follow the cycle, i.e. rank B second and C last. Then, if C is
eliminated, the two-candidate election is A>B and A still wins.
Conversely, if A is eliminated, B>C stays and C is still last.
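That ordering rule is short in code (a sketch; `liia_ranking` is my name for it): start at the top scorer and read the rest of the ranking off the cycle.

```python
def liia_ranking(scores, cycle='ABC'):
    """Rank the candidates by following the cycle from the highest
    scorer: if A scores highest in an A>B>C>A cycle, return A > B > C."""
    i = max(range(3), key=lambda j: scores[cycle[j]])
    return [cycle[(i + k) % 3] for k in range(3)]
```

With the fpA - fpC scores from before ({'A': 2, 'B': -1, 'C': -1}), this yields the ranking A > B > C; dropping C leaves A>B with A still first, and dropping A leaves B>C with C still last, as the LIIA argument requires.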

Another nice thing to do would be to try to construct a multiobjective
top-k bandit search and let the other axis be Bayesian Regret to find
the Pareto front. With even higher dimensionality one could make the
monotone and nonmonotone methods automatically sort themselves into
different categories; just let one axis be 0 if the method is monotone
and 1 if not; or let it be the proportion of ballot sets that exhibit
monotonicity failure. But that would probably take a lot of time and
work, and I have no idea how I'd go about making a multiobjective
pure-exploration bandit algorithm.