[EM] Quick and Clean Burial Resistant Smith, compromise

Daniel Carrera dcarrera at gmail.com
Wed Jan 12 11:47:10 PST 2022


On Wed, Jan 12, 2022 at 5:01 AM Kristofer Munsterhjelm <km_elmet at t-online.de>
wrote:

> Yeah, you would think so, but in practice it seems to work pretty well.
> Perhaps this is an indication that if an election is susceptible to
> strategy, the strategy usually is not very exotic?
>


That certainly seems to be the case. Yesterday I also added a "simple
strategy" check where the ballot puts c_k at the top, w_A at the bottom,
and the other candidates are randomized. So in my loop I first check
whether that strategy works; if it doesn't, I apply JGA's method, and I
keep track of which one succeeded. I noticed that the simple strategy
always worked, but I paid it no heed because at the time I was running
tests with C=3 and the impartial model. Today I implemented the spatial
model and ran several tests with more candidates, and it really looks like
the simple strategy works *almost* every time that JGA's method does:

Spatial model + Benham
V=29, C=3 --> 0.1233-0.1365 (95% c.i.), simple=1.00, majority=0.73
V=29, C=4 --> 0.2811-0.2989 (95% c.i.), simple=0.97, majority=0.52
V=29, C=5 --> 0.4186-0.4384 (95% c.i.), simple=0.94, majority=0.37
V=29, C=6 --> 0.5388-0.5575 (95% c.i.), simple=0.92, majority=0.26

So the 'simple' value is the fraction of the JGA-susceptible elections
where the simple strategy worked. Incidentally, the 'majority' column is
the fraction of the JGA-resilient elections that were resilient because
there was a majority winner. At some point I want to do the non-JGA
version of these tests, where each voter in the coalition has a different
ballot. It would be interesting to measure susceptibility in terms of how
often the strategy is simple vs. complicated. The simplest strategy that a
group of conspirators can have is "put c_k on top, w_A at the bottom, and
rank the rest honestly", so I want to test that strategy.
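For concreteness, the simple-strategy ballot can be sketched like this (the function name and ballot representation are my own; ballots are lists ordered best-to-worst):

```python
import random

def simple_strategy_ballot(candidates, c_k, w_A, honest_order=None):
    """Build a 'simple strategy' ballot: c_k first, w_A last.

    If honest_order (a list ranking all candidates) is given, the
    middle candidates keep their honest relative order -- the
    "rank the rest honestly" variant. Otherwise they are shuffled,
    matching the randomized variant described above.
    """
    middle = [c for c in candidates if c not in (c_k, w_A)]
    if honest_order is not None:
        middle.sort(key=honest_order.index)  # keep honest relative order
    else:
        random.shuffle(middle)               # randomized variant
    return [c_k] + middle + [w_A]
```

The same helper covers both variants, so the coalition loop only has to decide whether to pass an honest ranking.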




> I have noticed a dropoff with >10 candidates, though.
>
> Other (obvious?) optimizations include:
>         - If there's a majority winner and the method passes the majority
> criterion, give up directly: the election is unmanipulable.
>         - Trying the naive exaggeration strategy (if A was the winner,
> make B>A
> voters vote B first and A last); requires just one check per other
> candidate.
>

Yeah. Those are the two I came up with. The numbers above give you an
indication of how often they'll speed up the program (i.e. quite often,
actually).
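The majority-winner shortcut amounts to a single count of first preferences before any strategy search. A minimal sketch, again assuming ballots are lists ordered best-to-worst:

```python
from collections import Counter

def majority_winner(ballots):
    """Return the candidate ranked first on a strict majority of
    ballots, or None if there is no majority winner.

    For a method passing the majority criterion, a non-None result
    means the election is unmanipulable and the search can stop.
    """
    firsts = Counter(b[0] for b in ballots)
    cand, count = firsts.most_common(1)[0]
    return cand if 2 * count > len(ballots) else None
```

With the 'majority' fractions above, this one check alone short-circuits a sizable share of the resilient elections.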



> In case it's useful, I can also mention that I'm using the Jeffreys
> interval for binomial c.i. to avoid problems with the Gaussian when the
> expected value is very close to 0 or 1. Though reading the Wikipedia
> article, it seems that the consensus is that the Wilson interval is better.
>

Yeah, I don't trust those a whole lot because I'm never quite sure what the
assumptions are or whether they apply. I compute my c.i. with a bootstrap;
it's easy to implement. In the main loop, instead of incrementing SS and
SF, I use an array to keep track of the boolean result of each trial (i.e.
success vs. failure). From that array I can compute SS and SF, but more
importantly, I can do 500 bootstrap resamples (i.e. sampling with
replacement) and basically brute-force the c.i. with no assumptions about
the distribution. This is expensive compared to an analytic approach, but
in this context, "expensive" means that it adds 0.025s to the runtime.
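The percentile bootstrap described above can be sketched in a few lines (the function name and the 500-resample default follow the description; nothing else is assumed):

```python
import random

def bootstrap_ci(results, n_resamples=500, alpha=0.05):
    """Percentile-bootstrap c.i. for a success probability.

    results: list of booleans, one per trial (success vs. failure).
    Resamples with replacement and reads the interval straight off
    the sorted resample rates -- no distributional assumptions.
    """
    n = len(results)
    rates = sorted(
        sum(random.choices(results, k=n)) / n  # success rate of one resample
        for _ in range(n_resamples)
    )
    lo = rates[int((alpha / 2) * n_resamples)]
    hi = rates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Since each resample is just one pass over a boolean array, 500 of them is cheap next to the strategy search itself, which is why the added runtime is negligible.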

Cheers,
-- 
Dr. Daniel Carrera
Postdoctoral Research Associate
Iowa State University