[EM] Manipulation Resistant Voting

Kristofer Munsterhjelm km_elmet at t-online.de
Sun Jul 18 01:43:26 PDT 2021


On 7/17/21 11:12 PM, Susan Simmons wrote:
> I'm sure that Gibbard considers clone dependence a vulnerability to 
> manipulation. In this case, that vulnerability is limited to the 
> possibility of lying on the questionnaires eliciting the information for 
> the construction of the decision tree. If the decision tree is generated 
> randomly, then we are leaving determinism behind.

IIRC, Gibbard's framework doesn't consider clone dependence to be a 
vulnerability to strategy; the alternatives are given, and then the 
voters either submit a ranking (G-S) or interact with the method in one 
way or another (Gibbard's later theorem).

> The Gibbard–Satterthwaite theorem states roughly that every 
> deterministic voting rule is manipulable, except possibly in two cases: 
> if there is a distinguished voter who has a dictatorial power, or if the 
> rule limits the possible outcomes to two options only.
> 
> Once the decision tree has been constructed all decisions are of this 
> binary type.
> 
> It may be true that Gibbard-Satterthwaite applies more or less to any 
> deterministic democratic process that could be used to generate a 
> decision tree.

Gibbard-Satterthwaite probably wouldn't apply, because the tree 
construction would fail universal domain. However, there is a more 
general theorem that doesn't require universal domain: 
https://en.wikipedia.org/wiki/Gibbard%27s_theorem

And I think not just the tree construction, but the method as a whole, 
would fail this.

Consider a mix of exhaustive runoff and agenda, where the agenda is set 
beforehand, and then there are (n-1) one-on-one elections after which 
the candidate standing at the end is the winner.

Now suppose there's a typical Condorcet cycle A>B>C>A and the agenda 
ordering is A>B>C; so the method proceeds by matching B and C, and then 
matching the winner with A, and then the outcome of this is the final 
winner.

Under honesty, B beats C pairwise and then A beats B pairwise. But the 
voters whose honest ranking is B>C>A have an incentive to misreport 
their B vs C preference in the first round so that C beats B pairwise 
and then goes on to beat A pairwise. Even though every round is a duple 
(only one of two outcomes is possible), the composite rule is not, and 
so it fails Gibbard.
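The agenda example above can be checked with a toy simulation. The 
faction sizes below are invented for illustration; all that matters is 
that they produce the cycle A>B>C>A:

```python
# Toy simulation of the agenda method: with a Condorcet cycle A>B>C>A,
# the B>C>A faction gains by misreporting its B vs C preference in the
# first round. Faction sizes are hypothetical.

def pairwise_winner(x, y, ballots):
    # ballots: list of (ranking, weight); earlier in the ranking = preferred.
    x_votes = sum(w for r, w in ballots if r.index(x) < r.index(y))
    y_votes = sum(w for r, w in ballots if r.index(y) < r.index(x))
    return x if x_votes > y_votes else y

def agenda_winner(ballots):
    # Agenda A>B>C: first match B against C, then the survivor faces A.
    survivor = pairwise_winner('B', 'C', ballots)
    return pairwise_winner('A', survivor, ballots)

honest = [(('A', 'B', 'C'), 35),
          (('B', 'C', 'A'), 33),
          (('C', 'A', 'B'), 32)]

# The B>C>A faction falsifies its first-round B vs C preference.
strategic = [(('A', 'B', 'C'), 35),
             (('C', 'B', 'A'), 33),   # falsified ballot
             (('C', 'A', 'B'), 32)]

print(agenda_winner(honest))     # A wins under honesty
print(agenda_winner(strategic))  # C wins; the B>C>A voters prefer C to A
```

Under honesty, B survives the first round and A wins; with the 
falsified ballots, C survives and goes on to beat A, which the B>C>A 
faction prefers.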

I would suspect your method is also vulnerable to strategy for the same 
reason.

In reference to an earlier post of mine, suppose we define honesty as 
what a Random Ballot type method would return. In your method, this 
would be a trace down the tree that ends at the candidate whom the 
voter prefers most among the candidates at the bottom level of the 
tree. Now suppose, say, that the tree is

        A
      /   \
     B     C
    / \   / \
   D   E F   G

There are some voters whose honest vote is G>C>A>B>D>E>F, so their 
honest trace would go down the C branch and end at G. However, they know 
that if the C branch is chosen, then a vast majority will go for F. Thus 
they falsify their ballots to indicate a preference for the B branch 
instead (a compromising strategy) because they prefer both D and E to F.
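The honest-trace notion can be sketched in a few lines: at each internal 
node, the voter picks the branch containing their highest-ranked 
remaining leaf. The tree and the ranking below are the ones from the 
example; the function name is mine:

```python
# Sketch of the honest trace under Random-Ballot-style honesty:
# at each node, follow the branch whose best reachable leaf the
# voter ranks highest.

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}

def leaves(node):
    # Leaf candidates reachable from this node.
    if node not in tree:
        return [node]
    return [l for child in tree[node] for l in leaves(child)]

def honest_trace(ranking, node='A'):
    # ranking: list of candidates, earlier = preferred.
    trace = [node]
    while node in tree:
        node = min(tree[node],
                   key=lambda c: min(ranking.index(l) for l in leaves(c)))
        trace.append(node)
    return trace

ranking = ['G', 'C', 'A', 'B', 'D', 'E', 'F']
print(honest_trace(ranking))  # ['A', 'C', 'G']: the honest trace ends at G
```

If such a voter expects the C branch to resolve to F (their last 
choice), then picking the B branch at the root instead yields D or E, 
both of which they prefer to F - the compromising incentive described 
above.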

> Suppose as a first approximation we ask the candidates themselves to 
> estimate the distances between the various candidates on the various 
> issues. They might be tempted to distort the truth to influence the 
> structure of the decision tree to their advantage.
> 
> Not to worry ...instead of taking their estimates at face value by 
> averaging them together ... suppose we pay more attention to how similar 
> they are in their responses.
> 
> Then it doesn't matter so much if they try to distort the truth to their 
> advantage, the closer they are to each other in candidate space, the 
> more similar their responses, whether sincere or feigned.

I imagine you could end up with "false issues" that don't really matter, 
but that the candidates agree to play up. Imagine a country that admits 
about a hundred immigrants a year. Both major parties are seeking to 
weaken the rule of law, but to distinguish themselves, they make a big 
fuss about whether even fewer immigrants should be admitted to the 
country or not - even though the number, as is, is insignificant in the 
grand scheme of things.

It would be much harder to do so in a ranked voting setup because of the 
(relatively) easier entry by newcomers. However, it's possible that 
everybody will have an incentive to accept an extra dimension of 
differentiation, for the purpose of strategically engineering the tree 
structure.

> So the distance function is not based per se on their distance 
> estimates, but rather on the distance between their patterns of answers.
> 
> Their answer patterns are recorded as arrays of numbers called "data 
> vectors." There are many possible "norms" for measuring the "magnitude" 
> of a data vector.
> 
> The distance between two candidates is taken to be the magnitude of the 
> difference between their pattern-of-response vectors.
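As a minimal sketch of this distance idea: each candidate's 
questionnaire answers form a data vector, and the distance between two 
candidates is the norm of the difference. The answer vectors below are 
invented, and the Euclidean norm is just one choice among the many norms 
mentioned:

```python
# Distance between two candidates as the magnitude (here, Euclidean
# norm) of the difference between their pattern-of-response vectors.
import math

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

cand_x = [1.0, 4.0, 2.0]   # hypothetical answer patterns
cand_y = [1.0, 1.0, 2.0]
print(distance(cand_x, cand_y))  # 3.0
```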
> 
> This is the kind of psychometrics used by the NSA and other data miners 
> to map out your precise location in political/consumer space without 
> even listening in on your phone conversations.
> 
> The TV series Bull is based on this kind of psychometric analysis of 
> jurors for finding "mirror jurors" that are very close in psychological 
> distance from the actual jurors. Bull is a trial scientist who can 
> predict the reaction of the actual jurors on the basis of the respective 
> mirror juror reactions.
> 
> Do you find this to be interesting?

Possibly, but if we're going in a jury direction, I think a method like 
sortition or even a mixed process like the Venetian Doges' would be more 
appropriate. Maybe all this finding flaws in methods has turned me into 
a "nah, that won't work" kind of guy :-)

