[EM] The easiest method to 'tolerate'

Kevin Venzke stepjak at yahoo.fr
Thu Jun 30 15:44:32 PDT 2016


Hi Kristofer,

________________________________
From: Kristofer Munsterhjelm <km_elmet at t-online.de>
To: Kevin Venzke <stepjak at yahoo.fr>; steve bosworth <stevebosworth at hotmail.com>; EM list <election-methods at electorama.com>
Sent: Wednesday, June 29, 2016, 6:19 AM
Subject: Re: [EM] The easiest method to 'tolerate'

> 
> See also, from a Range perspective: http://rangevoting.org/MedianVrange.html
>

Unrelated: That link says Chris Benham invented MCA. I'm pretty sure that's wrong, and that it was either Forest, or else it didn't have a clear inventor. But I am struggling to find any archives for this list older than 2015. Hopefully we have not managed to lose nearly 20 years of posts...?

I'm trimming the text below because I can't figure out how to make Yahoo quote it:

> A rational voter would still always strategize because there's no actual
> harm to doing so, and if enough voters aligned with his candidate think
> the way he does, he'll benefit. So it's a chance of getting something
> better with no risk, and a rational voter would take that.
> 
> However, it seems unintuitive to me that a real voter would do so.
> Instead, it seems more that voters value expressing their true
> preference, and as long as the benefit to strategy is less than what
> they gain by expressing their preference, honesty wins.
> 
> I have no hard proof that this is the case, and as I mentioned in
> another post, there might very well be cultural differences. An
> electorate accustomed to FPTP and the pervasive strategy there might be
> more inclined to be strategic by default. I also seem to recall that the
> parties in New York under STV almost immediately tried to game STV
> through vote management, whereas according to Schulze's paper on vote
> management, other countries' STV elections seem to be relatively free of
> strategy.


If we are reading the same paper, that's not my impression of what it said. I thought it said that vote management in particular was widespread.

I don't disagree with your statements on median rating, but I don't quite see the point of messing around with a method like that, whose benefits are undone if human nature isn't what we think it is. Even if you manage to adopt it successfully for, say, a school board election, who's to say it will continue to work properly as the political stakes increase (e.g., moving on to adopt the method for a gubernatorial election)? It seems hard to advocate.

Generally I don't think that much is based on culture when it comes to comparative politics. If something is at stake, one should look and keep looking for the practical reasons why actors do what they do.

Certainly I have an ulterior motive for thinking this way: If we can introduce roles for culture, or the inherent sincerity of voters, then it becomes even harder to create a meaningful simulation to try to forecast methods' properties.

> It's always advantageous if the method also makes sense based on how it
> works, but I'd rather have an opaque method with good compliance than
> vice versa. Borda is very simple, but its extreme teaming incentive

> makes it of little use in competitive elections.

I feel like the value comes not just from understanding how the method works but from being comfortable with how it behaves as an agent for the voter. I've been wanting to write a post for several months on a few topics, but I haven't found an excuse or even an overarching idea. This is one piece of it, though.

It's common to think of the method as an agent representing the entire electorate, one which figures out the best possible result for everybody. But we could also try to imagine a given method's rules in terms of how we would define an agent representing an individual voter, such that the interactions of the agents produce the method's results. Think of the agents as legislators talking among themselves before casting a final vote.

In this framework I find IRV pleasing and natural. Your agent advocates for a single best candidate and will abandon that candidate given the quite logical trigger that he's the weakest of whatever candidates remain. Everything seems great except, say, that the ultimate winner may lack a full majority and yet suffer a majority pairwise defeat that never gets explored in the process.
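The IRV "agent" picture can be sketched in code. This is only a toy illustration under my own assumptions (the function name and the `(count, preference list)` ballot format are mine, not anything from the thread): each agent backs its top surviving candidate and abandons it only on the trigger that it is the weakest of whatever candidates remain.

```python
from collections import Counter

def irv_winner(ballots):
    """ballots: list of (count, [candidates in preference order])."""
    remaining = {c for _, prefs in ballots for c in prefs}
    while True:
        # Start every surviving candidate at zero so a candidate with no
        # first preferences can still be identified as the weakest.
        tallies = Counter({c: 0 for c in remaining})
        for count, prefs in ballots:
            # Each 'agent' advocates for its highest-ranked surviving candidate.
            for c in prefs:
                if c in remaining:
                    tallies[c] += count
                    break
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(remaining) == 1:
            return leader
        # Abandon the weakest remaining candidate and transfer its ballots.
        remaining.remove(min(tallies, key=tallies.get))
```

On the cycle given further down (40 A>C, 35 B>A, 25 C>B), this elects B after C is eliminated, even though C beats B pairwise 65-35: the majority pairwise defeat never gets explored.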

I feel like MinMax (etc.) doesn't compare so well. Say that the agents do a full set of round-robin voting to make the pairwise matrix. The potential for mischief and accidents related to LNHarm and LNHelp failures immediately occurs to me. Not only would some agents (ideally) not want to give up certain preference information prematurely, but the agents should also be suspicious of the information they receive from this process. Certain candidates will have a "superficial viability" and the lower preferences of their supporters are less likely to be something we want to look at.

Consider this standard cycle:
40 A>C
35 B>A
25 C>B

MinMax drops B>A, and people like me have historically said that if those A>C voters are lying, then it's the B>A voters' responsibility and option to not give a preference for A, given that A is probably known ahead of time to be viable and to be B's primary competitor. But maybe our agents should somehow be noticing the viability of A and telling themselves that the lower preference for C is suspect as long as A is in the running. Maybe it's C>B that should be thrown out. I haven't explored the implications of this idea, though, or even fully defined it.
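The claim that MinMax drops B>A here can be checked mechanically. This is a toy sketch of MinMax in its winning-votes flavor (the function names and ballot format are my own, and unranked candidates are treated as ranked last):

```python
from itertools import permutations

def pairwise(ballots, candidates):
    """wins[x][y] = number of voters ranking x above y (unranked = last)."""
    wins = {x: {y: 0 for y in candidates if y != x} for x in candidates}
    for count, prefs in ballots:
        rank = {c: prefs.index(c) if c in prefs else len(prefs)
                for c in candidates}
        for x, y in permutations(candidates, 2):
            if rank[x] < rank[y]:
                wins[x][y] += count
    return wins

def minmax_winner(ballots, candidates):
    wins = pairwise(ballots, candidates)
    # Each candidate's score is their worst (largest) pairwise defeat;
    # the winner is the candidate whose worst defeat is smallest.
    worst = {x: max((wins[y][x] for y in candidates
                     if y != x and wins[y][x] > wins[x][y]), default=0)
             for x in candidates}
    return min(worst, key=worst.get), wins
```

On the profile above, the defeats are B>A at 60, C>B at 65, and A>C at 75; the weakest defeat, B>A, is the one that goes unenforced, and A wins.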

There are obvious reasons why we wouldn't want to just use first preferences to tell us about apparent viability. But the first preferences are the ones we can definitely trust. If we were to accept that as important (even if just for sake of argument) I wonder if it would lead somewhere.

Just a tangent.

Kevin

