[EM] Poking about Heaviside methods, again

Kristofer Munsterhjelm km_elmet at t-online.de
Tue Jan 10 05:49:37 PST 2023


On 1/7/23 02:03, Kevin Venzke wrote:

> On Thursday, January 5, 2023 at 11:24:29 UTC−6, Kristofer Munsterhjelm <km_elmet at t-online.de> wrote:
>> The results show a big nope: every such method (with the values of x
>> integer between 2 and -2 inclusive) seem to be 100% manipulable in the
>> limit. However, it did come up with this, which seemed to converge less
>> quickly, and might be an interesting idea on its own:
>>   
>> f(a) = sum (or max) over B:
>>       A>B * H(fpA + fpB - fpC - fpD)
> 
> This is interesting. I like how Majority Favorite is ensured. The winning gross score (the
> votes-for) can come either from A:B or A:C or, depending on the sizes, A:D or B:C. There is
> a bias towards having more FPs, but also a step in the approval-ish direction of using gross
> score.

A's score can come from A>B, A>C, or A>D, depending. But if no candidate 
has a majority and B has a very low first preference count, B has no chance.

>> - There's a strange pattern where every method (if not augmented by
>> Condorcet) seems to have almost all its strategy vulnerability as
>> compromising. More later.
> 
> Do you know how many of those methods satisfy Majority Favorite? Maybe it's not that
> surprising.
> 
> If we compare to methods that people actually propose, it seems true that non-Condorcet
> methods don't tend to have huge burial issues. Main exceptions would be Borda, MMPO, and
> iterated DSV methods that are nearly Condorcet-efficient. Methods like DSC and Chain Runoff
> do have burial incentive, but their compromise incentive is probably a much bigger concern.

Some LNHelp methods have burial problems, though it's usually easier to 
just truncate instead of bury.

My general direction of thought was that it would be useful to identify 
just what contributes to low burial incentive, because all the 
strategy-resistant methods I know of have low burial incentive in common.

Then amending them with Condorcet, e.g. going from IRV to Benham, seems 
to turn some of the compromising incentive into burial instead. But it 
doesn't greatly increase the strategic susceptibility as a whole, and 
that seems like a good trade for avoiding center squeeze and the like.

So if that's true, then we should find some method that has very little 
burial incentive and that is strategy-resistant (e.g. not Plurality, 
whose compromise incentive goes to 100% very quickly).

I doubt I have the mathematical chops to do this, though. For some 
methods, simple differentiation under burial with a sort of KKT strategy 
will work, e.g. minmax:
	f(A) = min over X != A: A>X
and
	f(B) = min over X != B: B>X

then f(B) - f(A) is increased by changing a ballot from BAC to BCA if 
A's minimum is A>C: burying A under C lowers A>C, and with it f(A), 
while f(B) is unaffected.
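
To make that concrete, here's a small Python sketch (my own illustration 
with a made-up three-candidate profile, not code from my simulator):

def pairwise(ballots):
    """Pairwise matrix d[(x, y)] = number of ballots ranking x over y."""
    cands = sorted({c for b in ballots for c in b})
    d = {(x, y): sum(1 for b in ballots if b.index(x) < b.index(y))
         for x in cands for y in cands if x != y}
    return d, cands

def minmax_score(d, cands, a):
    """f(A) = min over X != A of A>X."""
    return min(d[(a, x)] for x in cands if x != a)

# Made-up profile where A's weakest pairwise win is A>C.
ballots = ([('A','B','C')]*4 + [('B','A','C')]*2 +
           [('B','C','A')]*2 + [('C','A','B')]*3)

d, cands = pairwise(ballots)
print({c: minmax_score(d, cands, c) for c in cands})   # A: 6, B: 4, C: 3

# One B>A>C voter buries A under C (BAC -> BCA): A>C falls, so f(A)
# falls, while B's minimum is untouched.
ballots[4] = ('B', 'C', 'A')
d, cands = pairwise(ballots)
print({c: minmax_score(d, cands, c) for c in cands})   # A: 5, B: 4, C: 3

Here f(A) drops from 6 to 5 while f(B) stays at 4, which is exactly the 
differential the check above picks up.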

But doing so is way too nitty-gritty for larger methods and I haven't 
had much luck abstracting it. So I was hoping there was *something* 
about combinations of H() and A>B. But I don't see it at the moment.

> 
>> - An interesting pattern seems to be: nonmonotone CV generalizations
>> work by "If A and B are at the top by first preferences, then credit A
>> with A>B and B with B>A". Monotone ones are about absolute comparisons
>> instead: "if A and B combined are above some quota, then...".
> 
> That's interesting but I have to imagine the latter approach breaks at some point (i.e.
> ceases to provide monotonicity), perhaps with more than four candidates.

As I mentioned in a private mail, it should retain monotonicity because 
A and B share the pool that's compared against the quota. A will get A>B 
from any B where A's and B's first preferences combined exceed the 
quota, and raising A to top on a B ballot just moves one first 
preference from B to A: fpA + fpB is unchanged, so the quota test for 
that pair is unaffected, and A>B can only stay the same or grow. So 
raising A to top on a B ballot can't harm A.
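
A quick spot-check of that claim, using the sum form f(A) = sum over B: 
A>B * H(fpA + fpB - fpC - fpD) from above and taking H(0) = 0 (just an 
illustrative sketch, not my simulator's code):

import random

def fp(ballots):
    """First-preference counts."""
    counts = {}
    for b in ballots:
        counts[b[0]] = counts.get(b[0], 0) + 1
    return counts

def score(ballots, cands, a):
    """f(A) = sum over B != A of A>B * H(fpA + fpB - fpC - fpD),
    crediting A with the gross A>B whenever A's and B's first
    preferences together beat the other two combined."""
    f = fp(ballots)
    total = 0
    for b in cands:
        if b == a:
            continue
        others = sum(f.get(x, 0) for x in cands if x not in (a, b))
        if f.get(a, 0) + f.get(b, 0) - others > 0:
            total += sum(1 for v in ballots if v.index(a) < v.index(b))
    return total

cands = ['A', 'B', 'C', 'D']
random.seed(1)
for _ in range(200):
    ballots = [tuple(random.sample(cands, 4)) for _ in range(15)]
    before = score(ballots, cands, 'A')
    b_top = [v for v in ballots if v[0] == 'B']
    if not b_top:
        continue
    i = ballots.index(b_top[0])
    # Raise A to the top of one B-first ballot; fpA + fpB stays the same.
    ballots[i] = ('A',) + tuple(c for c in b_top[0] if c != 'A')
    assert score(ballots, cands, 'A') >= before
print("raising A on a B ballot never lowered A's score in these trials")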

> If you had a five-candidate framework I think all of your results would end up very
> different. For example an expression like H(fpA + fpB - fpC - fpD) doesn't ensure that the
> FP-winner (and possible majority favorite) is even included.

Strangely, that's what I get for the optimum: it's still fpA + fpB - 2 
fpC. But I did a check with IRV and it seems that this is just an 
artifact of the H(...) * A>B methods hitting a wall.

With five candidates and 97 voters, impartial culture (no truncation), 
the results are:

Smith,(fpA + fpB - 2 fpC) is manipulable 92% of the time.
Smith,Contingent vote is manipulable 86% of the time.
Smith,IRV is manipulable 46% of the time.

(For four candidates, same number of voters:
Condorcet//(fpA + fpB - 2 fpC) is manipulable 66% of the time
Condorcet//Contingent is manipulable 60% of the time
Condorcet//(fpA\C + fpB\C - fpD\C - fpD) is manipulable 56% of the time[1]
Smith,IRV is manipulable 33% of the time.)


So my intuition is that as the number of candidates increases, first 
preferences become increasingly useless. IRV gets around this by 
reducing down to a three-candidate election using steps that themselves 
pass both LNHs.

But if it's necessarily true that you can only have burial resistance 
with elimination, then a direct consequence is that we can't have burial 
resistance and summability. The best we can do is to somehow string 
together k-candidate elections to get summability O(n^k), where the 
k-candidate election is itself strategically resistant for k candidates. 
E.g. something like
	f(A) = min over other candidates B,C: Friendly(A, B, C)
where Friendly(A, B, C) is the fpA-fpC method (Friendly Voting) applied 
after everybody but A, B, and C has been eliminated.
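
Very roughly, and only as an illustration of the summability shape (the 
friendly() helper below just guesses that fpA-fpC means A's restricted 
first preferences minus those of whichever of the other two beats A 
pairwise, which may not match Friendly Voting exactly):

from itertools import combinations

def tallies(ballots, cands):
    """Per-precinct summable data: the pairwise matrix (O(n^2) numbers) and
    restricted first preferences for every 3-candidate subset (O(n^3))."""
    pair = {(x, y): 0 for x in cands for y in cands if x != y}
    rfp = {}  # (frozenset of 3 candidates, candidate) -> count
    for b in ballots:
        for x, y in pair:
            if b.index(x) < b.index(y):
                pair[(x, y)] += 1
        for trio in combinations(cands, 3):
            top = min(trio, key=b.index)  # first preference within the trio
            rfp[(frozenset(trio), top)] = rfp.get((frozenset(trio), top), 0) + 1
    return pair, rfp

def friendly(a, b, c, pair, rfp):
    """Guessed fpA-fpC score for a restricted to {a, b, c}: a's restricted
    first preferences minus those of a candidate who beats a pairwise
    (the larger such count if both do, zero if neither does)."""
    trio = frozenset((a, b, c))
    rf = lambda x: rfp.get((trio, x), 0)
    beats_a = [x for x in (b, c) if pair[(x, a)] > pair[(a, x)]]
    return rf(a) - max((rf(x) for x in beats_a), default=0)

def score(a, cands, pair, rfp):
    """f(A) = min over unordered pairs {B, C} of the other candidates."""
    others = [x for x in cands if x != a]
    return min(friendly(a, b, c, pair, rfp) for b, c in combinations(others, 2))

cands = ['A', 'B', 'C', 'D']
ballots = ([('A','B','C','D')]*5 + [('B','C','A','D')]*4 +
           [('C','A','B','D')]*3 + [('D','C','B','A')]*2)
pair, rfp = tallies(ballots, cands)
print({x: score(x, cands, pair, rfp) for x in cands})

The point is only the shape of the data: everything the friendly() call 
needs can be accumulated per precinct in O(n^3) counters.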

Then the pragmatic answer would be "we aren't going to see 10-candidate 
Smith sets, so strategy resistance for k<10 candidates is enough, 
summability is worth more".

However, I haven't proven *any* of this. The simulation really could use 
the other strategy-resistant summable method I found, Plurality Benham, 
to see if it does any better than Smith,Contingent. But I haven't 
programmed Pb in my simulator yet.

For that matter, I haven't even proven such a summability restriction. 
There might still be some magical summable method; all I've done is 
given evidence that it's not of the H(...) * A>B form.

(IRV itself would expand, for four candidates, to something like...
	f(A) =
		sum over all B, C, D not equal to A:
			[product over X != D: H(fpX - fpD)]	<-- D is the Plurality loser
			* H(fpA - fpC\D) * H(fpB - fpC\D)	<-- then C is eliminated
			* A>B )
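
And here's a quick sketch of evaluating that expansion directly, 
assuming H(x) = 1 for x > 0 and 0 otherwise, and ignoring ties:

from itertools import permutations

def H(x):
    return 1 if x > 0 else 0   # strict step; ties are not handled here

def fp(ballots, excluded=()):
    """First preferences with some candidates eliminated."""
    counts = {}
    for b in ballots:
        top = next(c for c in b if c not in excluded)
        counts[top] = counts.get(top, 0) + 1
    return counts

def gross(ballots, x, y):
    """Votes ranking x over y."""
    return sum(1 for b in ballots if b.index(x) < b.index(y))

def f(ballots, cands, a):
    """The H()-product expansion of four-candidate IRV for candidate a."""
    fp0 = fp(ballots)
    total = 0
    for b, c, d in permutations([x for x in cands if x != a]):
        # D is the Plurality loser:
        term = 1
        for x in cands:
            if x != d:
                term *= H(fp0.get(x, 0) - fp0.get(d, 0))
        # then C is eliminated (fpC\D = C's first preferences once D is out):
        fp1 = fp(ballots, excluded=(d,))
        term *= H(fp1.get(a, 0) - fp1.get(c, 0)) * H(fp1.get(b, 0) - fp1.get(c, 0))
        # leaving A>B for the final pair.
        total += term * gross(ballots, a, b)
    return total

cands = ['A', 'B', 'C', 'D']
ballots = ([('A','B','C','D')]*8 + [('B','A','C','D')]*6 +
           [('C','B','A','D')]*5 + [('D','C','B','A')]*4)
print({x: f(ballots, cands, x) for x in cands})
# For this profile the maximum is A's, which matches the IRV winner.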

> Myself, I have returned to experimenting with a 4-candidate 4-bloc DNA scheme. It has 512
> digits for 512 scenarios. An interesting thing I suppose is that the rate of failure of
> some criterion could be found precisely. Maybe it's possible to hunt for a method that is
> "as good as possible" given some other constraints.
> 
> My explicability problem is a little different from yours as I can't even say what my
> generated methods are. But I've been making some progress in trying to define criteria
> related to explicability, so that a generated method must be definable when limited to some
> terms. If two scenarios can't be differentiated then they have to give the same winner.

I got to thinking that perhaps the next step of method brute-forcing 
would be to go beyond vector enumeration (try every x for H(x) * A>B) to 
abstract syntax trees. Then in theory you could iterate through them 
until you find a good fit to your method, or until you find something 
with high strategy resistance.

But in practice, it's pretty hard to construct a bijection between trees 
and integers (with the nodes corresponding to functions that each take a 
fixed number of parameters). Even harder to prove desirable things like 
monotonicity in generality. And probably it would take forever to try to 
find a good fit to a given DNA.
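
For what it's worth, enumerating trees by node count (rather than 
through a true bijection with the integers) is one way to sidestep that; 
a rough sketch with a made-up operator and leaf set:

from itertools import product

# Hypothetical operator set: name -> arity. The arity-0 entries are the
# primitive terms a method definition could use.
OPS = {'fpA': 0, 'fpB': 0, 'fpC': 0, 'A>B': 0,
       '+': 2, '-': 2, '*': 2, 'H': 1}

def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` positive ints."""
    if parts == 1:
        if total >= 1:
            yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def trees(size):
    """Yield every expression tree with exactly `size` nodes as nested tuples."""
    for op, arity in OPS.items():
        if arity == 0:
            if size == 1:
                yield (op,)
            continue
        # Split the remaining size-1 nodes among the children.
        for split in compositions(size - 1, arity):
            for children in product(*(list(trees(s)) for s in split)):
                yield (op,) + children

for n in range(1, 5):
    print(n, sum(1 for _ in trees(n)))

It's not a bijection, and it doesn't help with proving things like 
monotonicity, but it would at least make the search space enumerable.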

-km

[1] Note that there may be other more resistant methods of this form; I 
haven't done a search through all combinations of fpX\Y terms.

