[EM] Poking about Heaviside methods, again

Kevin Venzke stepjak at yahoo.fr
Fri Jan 6 17:03:47 PST 2023


Hi Kristofer,

A few thoughts. I'll assume everything is just "max", because I don't see how using the
sum could lead to a viable method no matter how good it looks in a simulation.

On Thursday, January 5, 2023 at 11:24:29 UTC−6, Kristofer Munsterhjelm <km_elmet at t-online.de> wrote:
> The results show a big nope: every such method (with integer values of x
> between 2 and -2 inclusive) seems to be 100% manipulable in the
> limit. However, it did come up with this, which seemed to converge less
> quickly, and might be an interesting idea on its own:
> 
> f(a) = sum (or max) over B:
>     A>B * H(fpA + fpB - fpC - fpD)

This is interesting. I like how Majority Favorite is ensured. The winning gross score (the
votes-for) can come either from A:B or A:C or, depending on the sizes, A:D or B:C. There is
a bias towards having more FPs, but also a step in the approval-ish direction of using gross
score.
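
Just to check that I'm reading the formula the way you intend, here's a rough Python sketch
of the max version for four candidates (fp and pw are my own names for the first-preference
counts and the pairwise votes-for tallies, and I'm treating the step function as 0 at zero):

    def heaviside_max_score(cands, fp, pw):
        # f(A) = max over B of (A>B) * H(fpA + fpB - fpC - fpD),
        # where C and D are the two remaining candidates.
        def H(x):
            return 1 if x > 0 else 0   # assuming H(0) = 0
        scores = {}
        for a in cands:
            best = 0
            for b in cands:
                if b == a:
                    continue
                others = [c for c in cands if c not in (a, b)]
                gate = H(fp[a] + fp[b] - sum(fp[c] for c in others))
                best = max(best, pw[a][b] * gate)
            scores[a] = best
        return scores

If I have that right, the Majority Favorite guarantee is easy to see: with a majority
favorite A, no pair excluding A can pass the gate, every pair including A does, and A's
votes-for exceed the opponent's in each of those pairs.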

> - There's a strange pattern where every method (if not augmented by
> Condorcet) seems to have almost all its strategy vulnerability as
> compromising. More later.

Do you know how many of those methods satisfy Majority Favorite? Maybe it's not that
surprising.

If we compare to methods that people actually propose, it seems true that non-Condorcet
methods don't tend to have huge burial issues. Main exceptions would be Borda, MMPO, and
iterated DSV methods that are nearly Condorcet-efficient. Methods like DSC and Chain Runoff
do have burial incentive, but their compromise incentive is probably a much bigger concern.

> - An interesting pattern seems to be: nonmonotone CV generalizations
> work by "If A and B are at the top by first preferences, then credit A
> with A>B and B with B>A". Monotone ones are about absolute comparisons
> instead: "if A and B combined are above some quota, then...".

That's interesting but I have to imagine the latter approach breaks at some point (i.e.
ceases to provide monotonicity), perhaps with more than four candidates.

> - For monotone "CW if there is one, otherwise method X", the best
> impartial culture generalization for four candidates goes like this:
>     Let A and B be two candidates. Then if for every other candidate C, fpA
> + fpB - 2 fpC > 0, let A's score in the (A,B) comparison be A>B and B's
> score be B>A. If they're all < 0, it's zero for both.
>     Let every candidate A's score be the maximum of any comparison involving A.
>     Highest score wins unless there's a CW. If there is, the CW wins.

So this seems the same as the December method, except you allow each candidate C to serve
as D as well. (This should make it a bit harder for some A:B to qualify.) And also you're
doing a Condorcet check here.
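
In code, my reading of this one is roughly the following (same fp/pw names as before, cands
a list of the four candidates; I'm treating the unspecified mixed-sign case as simply not
qualifying):

    def december_variant_winner(cands, fp, pw):
        # The CW wins if there is one.
        for a in cands:
            if all(pw[a][b] > pw[b][a] for b in cands if b != a):
                return a
        # Otherwise: the (A,B) comparison credits A with A>B and B with B>A
        # only when fpA + fpB - 2*fpC > 0 for every other candidate C.
        # Each candidate's score is the max over their comparisons.
        scores = {c: 0 for c in cands}
        for i, a in enumerate(cands):
            for b in cands[i+1:]:
                others = [c for c in cands if c not in (a, b)]
                if all(fp[a] + fp[b] - 2 * fp[c] > 0 for c in others):
                    scores[a] = max(scores[a], pw[a][b])
                    scores[b] = max(scores[b], pw[b][a])
        return max(scores, key=scores.get)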

> - Including terms involving first preferences after elimination can
> improve the results quite a bit, but the resulting methods are
> completely incomprehensible. Here's an example for four candidates, that
> *should* be monotone although I haven't checked:
>     - Let A and B be two candidates. Then if, for every pair {C, D} of
> other candidates, if fpA\C + fpB\C - fpD\C -fpD > 0, A's score is (A>B)
> and B's is (B>A). Otherwise if all are <0, it's zero for both. fpX\Y
> here is my terminology for "X's first preference count once Y has been
> eliminated".
>     - The rest is as above (including the use of the max operator).

The monotonicity of this method is an interesting question.

When you eliminate a candidate before checking the FP count, you arguably are weeding out
noise and trying to get to a "true" FP count after, essentially, accounting for compromise
incentive. It seems again like a somewhat arbitrary balance between FP count and the
approval-ish gross scores.

It's a little inexplicable why the comparison uses D and D\C rather than twice D\C, but that
may just be the balance that happens to be optimal in your setting. I'm not sure you'll
really be able to explain that.
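
For reference, here's how I read the gate, in the same sketch style. fp_after is my name for
fpX\Y, computed from complete rankings, and I'm guessing that "every pair {C, D}" means both
assignments of the remaining two candidates to the C and D roles:

    from itertools import permutations

    def fp_after(ballots, eliminated):
        # fpX\Y: first-preference counts once candidate Y is eliminated.
        # ballots maps a ranking (tuple of candidates) to a count.
        counts = {}
        for ranking, weight in ballots.items():
            top = next(c for c in ranking if c != eliminated)
            counts[top] = counts.get(top, 0) + weight
        return counts

    def pair_qualifies(a, b, cands, ballots, fp):
        # Gate: for every assignment (C, D) of the other two candidates,
        # fpA\C + fpB\C - fpD\C - fpD > 0.  fp holds the overall FP counts.
        others = [c for c in cands if c not in (a, b)]
        for c, d in permutations(others, 2):
            fpe = fp_after(ballots, c)
            if fpe.get(a, 0) + fpe.get(b, 0) - fpe.get(d, 0) - fp.get(d, 0) <= 0:
                return False
        return True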

If you had a five-candidate framework I think all of your results would end up very
different. For example an expression like H(fpA + fpB - fpC - fpD) doesn't ensure that the
FP-winner (and possible majority favorite) is even included.

> So lots of strange things. I'm kinda understanding the limitations of
> not having a theory; the simulator can come up with some generalization
> that seems to work, but I have no idea why it works. Leaving elections
> to a black box isn't really a good idea, either :-) What's necessary
> here, I think, is some kind of explanation for just why these are
> resistant to strategy and others are not, and I don't know what that
> would look like.

Well, a FP emphasis is bad for compromise but can reduce burial, whereas a gross score
emphasis might reduce both but is bad for truncation incentive. I'm not sure if you have a
truncation concept though. I guess that your vulnerability metric leads you to identify
methods with some balance.

> The surprising part is that sometimes, you can include pairwise counts
> and it's still all compromise all the time. E.g. the contingent vote:

If, like me, you're used to trying to define methods mainly from a graph of defeats, and to
caring about LNH criteria, it's apparent that changing the state of the X:Y contest must
not affect (in some way) the win status of an unrelated candidate Z. That's a severe
restriction. But with the contingent vote you select the decisive contest without ever
looking at the graph, and no candidate outside that contest has any odds of winning at all.
So the issue isn't with pairwise comparisons inherently.
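
For contrast, a contingent vote sketch (assuming complete strict rankings, which makes the
usual majority-favorite shortcut redundant):

    def contingent_vote(ballots, cands):
        # Keep the top two by first preferences, then decide that one
        # contest pairwise; the defeat graph is never consulted.
        fp = {c: 0 for c in cands}
        for ranking, weight in ballots.items():
            fp[ranking[0]] += weight
        a, b = sorted(cands, key=fp.get, reverse=True)[:2]
        a_over_b = sum(w for r, w in ballots.items() if r.index(a) < r.index(b))
        b_over_a = sum(w for r, w in ballots.items() if r.index(b) < r.index(a))
        return a if a_over_b >= b_over_a else b   # FP leader wins a tie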

> Even the weird method
>     f(A) = max over B: A>B
> 
> seems to have a very low (though nonzero) rate of burial. Why? You'd
> imagine that if A is tying C, and A>B is A's max pairwise victory over
> someone else, then the buriers can benefit by burying A under B; and
> that that would happen pretty frequently.

This is an LNHelp method with huge truncation incentive. It must depend on how you look at
it. C has incentive to decrease the A>B tally (if they had been contributing to it), but not
to increase the B>A tally.
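
A hypothetical profile (the numbers are mine, purely to show the mechanism) makes the
incentive concrete:

    def max_pairwise_scores(ballots, cands):
        # f(A) = max over B of the A>B tally.  ballots maps a ranking
        # (a tuple, possibly truncated) to a count; candidates left off
        # a ballot are treated as ranked below everyone listed.
        pw = {a: {b: 0 for b in cands if b != a} for a in cands}
        for ranking, weight in ballots.items():
            for i, a in enumerate(ranking):
                for b in cands:
                    if b != a and (b not in ranking or ranking.index(b) > i):
                        pw[a][b] += weight
        return {a: max(pw[a].values()) for a in cands}

    cands = ["A", "B", "C"]
    sincere   = {("A", "B", "C"): 5, ("B", "C", "A"): 4, ("C", "A", "B"): 6}
    truncated = {("A", "B", "C"): 5, ("B", "C", "A"): 4, ("C",): 6}
    print(max_pairwise_scores(sincere, cands))    # A: 11, B: 9, C: 10 -> A wins
    print(max_pairwise_scores(truncated, cands))  # A: 5,  B: 9, C: 10 -> C wins

The C>A>B voters were feeding A's winning A>B tally; bullet voting for C removes those votes
without adding anything to B, and C wins instead.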

> combinations of pairwise victories. How about if we only allow a single
> victory inside the monotone transformation, something like
> 
>     f(A) = sum over B: (A>B)^x
> 
> with x>1? Is that also only compromise?

Surely it's the same. When you add an A preference, you're just giving some positive number
of points to A. You would never wish to give points to a candidate that you don't want to
win.
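
In sketch form (same pw naming), the point is that any ballot change adding an A preference
can only raise some A>B tally, and therefore A's score:

    def power_sum_scores(cands, pw, x=2):
        # f(A) = sum over B of (A>B)^x, x > 1.  A monotone transform of the
        # votes-for tallies, so raising A on a ballot can only help A.
        return {a: sum(pw[a][b] ** x for b in cands if b != a) for a in cands}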

Myself, I have returned to experimenting with a 4-candidate 4-bloc DNA scheme. It has 512
digits for 512 scenarios. An interesting thing I suppose is that the rate of failure of
some criterion could be found precisely. Maybe it's possible to hunt for a method that is
"as good as possible" given some other constraints.

My explicability problem is a little different from yours as I can't even say what my
generated methods are. But I've been making some progress in trying to define criteria
related to explicability, so that a generated method must be definable when limited to some
terms. If two scenarios can't be differentiated then they have to give the same winner.

Kevin
votingmethods.net


