[EM] Manipulability stats for more poll methods (fixed footnotes)

Michael Ossipoff email9648742 at gmail.com
Sat May 4 21:14:42 PDT 2024


Or approve everyone on the better side of the widest gap between
successive candidates’ merits.

Even if you don’t have a feel for cardinal-merits, you likely know where
the biggest gap is.
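
A sketch of that widest-gap rule, in plain Python (illustrative only; the
merit numbers are the example utilities from downthread):

    def widest_gap_approvals(merits):
        # Sort candidates from best to worst perceived merit.
        ranked = sorted(merits, key=merits.get, reverse=True)
        # Find the largest drop between successive candidates.
        gaps = [merits[ranked[i]] - merits[ranked[i + 1]]
                for i in range(len(ranked) - 1)]
        # Approve everyone on the better side of that gap.
        return ranked[:gaps.index(max(gaps)) + 1]

    print(widest_gap_approvals({"A": 0.57, "B": 0.32, "C": 0.23, "D": 0.08}))
    # -> ['A']  (the widest gap, 0.25, is between A & B)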

On Sat, May 4, 2024 at 21:10 Michael Ossipoff <email9648742 at gmail.com>
wrote:

> Of course, in Approval, if there aren’t other perceived reasons for
> choosing whom to approve, you could approve everyone above the mean… if
> you have a feel for what’s average among the candidates. I guess that’s
> the usual assumption for simulations.
>
> If the candidate-lineup is so good that you do above-mean voting, then
> you’re indeed fortunate.
>
> If you’d do that, but you don’t have a feel for the average, & don’t
> perceive cardinal-merits, then of course you could just approve the best
> half of the candidates.
>
> Maybe that was the assumed Approval strategy to which you were referring.
>
> Approval is perfectly matched to an election with unacceptable
> candidates:
>
> Just approve (only) all of the Acceptables.
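>
> In code, those fallback rules are tiny (an illustrative Python sketch;
> the function names are mine):
>
>     def above_mean(merits):
>         # Approve everyone above the mean perceived merit.
>         mean = sum(merits.values()) / len(merits)
>         return [c for c, m in merits.items() if m > mean]
>
>     def best_half(merits):
>         # No feel for the average: just approve the best half.
>         ranked = sorted(merits, key=merits.get, reverse=True)
>         return ranked[:len(ranked) // 2]
>
>     def acceptables(merits, acceptable):
>         # Approve (only) the acceptable candidates, whatever their merits.
>         return [c for c in merits if c in acceptable]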
>
> On Sat, May 4, 2024 at 19:27 Michael Ossipoff <email9648742 at gmail.com>
> wrote:
>
>> There’s no reason for the renormalization. Among values A, B, C & D (in
>> increasing order), if B is at the mean, then, with the A=0 & D=1
>> renormalization, B’s renormalized value is the mean of all of the
>> renormalized values.
>>
>> The position of the mean among the candidates doesn’t change with
>> renormalization.
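>>
>> For instance (a quick numeric check; the renormalization is affine, &
>> the mean commutes with affine maps; the numbers are made up):
>>
>>     vals = {"A": 0.2, "B": 0.5, "C": 0.4, "D": 0.9}  # B is at the mean, 0.5
>>     lo, hi = min(vals.values()), max(vals.values())
>>     renorm = {c: (v - lo) / (hi - lo) for c, v in vals.items()}  # A=0, D=1
>>     mean_after = sum(renorm.values()) / len(renorm)
>>     print(renorm["B"], mean_after)  # both 0.42857...: B is still at the mean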
>>
>>
>>
>> On Sat, May 4, 2024 at 15:25 Michael Ossipoff <email9648742 at gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Sat, May 4, 2024 at 14:45 Kristofer Munsterhjelm <
>>> km_elmet at t-online.de> wrote:
>>>
>>>>
>>>> Yes, that's right. But consider a voter with the following utilities:
>>>>
>>>> A: 0.57
>>>> B: 0.32
>>>> C: 0.23
>>>> D: 0.08
>>>>
>>>> Normalization to two steps fixes the highest value (0.57) to 1 and the
>>>> lowest value (0.08) to 0 and rounds off the intermediate values after
>>>> linearly scaling them.
>>>
>>>
>>> Yes. So far, so good. But…
>>>
>>> This in essence says that a value is rounded off
>>>> to 1 if it's greater than or equal to 0.325 (the midpoint between 0.08
>>>> and 0.57)
>>>
>>>
>>> What? You didn’t average the normalized values. You averaged two of the
>>> values before normalization. The midrange isn’t usually the same as the
>>> mean. You used the midrange as the mean.
>>>
>>> If you call the top value 1, & the bottom value 0, then a rating’s new
>>> value is the number that’s the same % of the way from 0 to 1 as the old
>>> value is from .08 to .57.
>>>
>>> Average of those new values: about .449.
>>>
>>> You still approve the best two.
>>>
>>>
>>>
>>>> so the 0-1 normalized ballot is
>>>>
>>>> A: 1, B: 0, C: 0, D: 0
>>>>
>>>> On the other hand, the mean utility is 0.3. So the mean utility
>>>> approval ballot is
>>>>
>>>> A: 1, B: 1, C: 0, D: 0.
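>>>>
>>>> (As a quick Python illustration of both renditions:)
>>>>
>>>>     util = {"A": 0.57, "B": 0.32, "C": 0.23, "D": 0.08}
>>>>     lo, hi = min(util.values()), max(util.values())
>>>>
>>>>     # Two-step: scale to [0, 1] and round, so anything at or above
>>>>     # the midpoint (lo + hi) / 2 = 0.325 becomes a 1.
>>>>     two_step = {c: round((u - lo) / (hi - lo)) for c, u in util.items()}
>>>>
>>>>     # Mean-utility approval: approve everything above the mean, 0.3.
>>>>     mean = sum(util.values()) / len(util)
>>>>     by_mean = {c: int(u > mean) for c, u in util.items()}
>>>>
>>>>     print(two_step)  # {'A': 1, 'B': 0, 'C': 0, 'D': 0}
>>>>     print(by_mean)   # {'A': 1, 'B': 1, 'C': 0, 'D': 0}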
>>>>
>>>> > 4 "dimensions" sounds like a lot.  What are the "strategy attempts"?
>>>> > How much and what information do the strategists have?  Are the
>>>> > strategists confined to just trying to get their favourites elected,
>>>> > or any candidate they prefer to the initial winner?
>>>>
>>>> The method works pretty much like this, for generating and testing a
>>>> single election. (I've simplified the exact order that strategies are
>>>> called upon, but this is in effect what happens.)
>>>>
>>>> ==== (Algorithm start) =====
>>>>
>>>> Draw candidate positions for each candidate (in this case, each is a
>>>> point on a 4D normal distribution with mean 0 and variance 1).
>>>> Draw voter positions for each voter, and create their honest ballots
>>>> based on the distances between the voter and candidates.
>>>> Pass the resulting ballots through the method to establish the honest
>>>> outcome.
>>>> If there's a tie, skip (because deciding what a strict improvement is
>>>> when there's an honest tie is ambiguous). Otherwise let the winner be W.
>>>>
>>>> For each candidate X who is not the winner W:
>>>>         For i = 0 to (number of strategy attempts / number of candidates) - 1
>>>>                 Set the strategic ballots to the honest ballots.
>>>>
>>>>                 For every voter who prefers X to W:
>>>>                         Replace that voter's strategic ballot with a
>>>>                         ballot according to a strategy that depends on
>>>>                         i.
>>>>
>>>>                 Pass the modified strategic ballots through the method.
>>>>                 If X is now a winner, the method is manipulable in
>>>>                         this election. Return success.
>>>>
>>>> If we reach this point without any success, return failure; the method
>>>> is (probably) not manipulable in this election.
>>>>
>>>> ==== (Algorithm end) =====
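>>>>
>>>> Here's a condensed Python sketch of the above (not the simulator's
>>>> actual code: the method is a stand-in Borda count, the constants are
>>>> mine, and the coalitional branch is sketched separately below):
>>>>
>>>>     import random
>>>>
>>>>     NUM_CANDS, NUM_VOTERS, DIMS = 4, 99, 4
>>>>     ATTEMPTS_PER_CAND = 3  # just the three deterministic strategies
>>>>
>>>>     def spatial_election():
>>>>         # Candidate & voter positions: points from a 4D normal(0, 1).
>>>>         point = lambda: [random.gauss(0, 1) for _ in range(DIMS)]
>>>>         cands = [point() for _ in range(NUM_CANDS)]
>>>>         sqdist = lambda v, c: sum((a - b) ** 2 for a, b in zip(v, c))
>>>>         # Honest ballots rank candidates by increasing distance.
>>>>         return [sorted(range(NUM_CANDS), key=lambda c: sqdist(v, cands[c]))
>>>>                 for v in (point() for _ in range(NUM_VOTERS))]
>>>>
>>>>     def method(ballots):
>>>>         # Stand-in: Borda. Any ballots -> winner function would do.
>>>>         score = [0] * NUM_CANDS
>>>>         for b in ballots:
>>>>             for rank, c in enumerate(b):
>>>>                 score[c] += NUM_CANDS - 1 - rank
>>>>         top = [c for c in range(NUM_CANDS) if score[c] == max(score)]
>>>>         return top[0] if len(top) == 1 else None  # None = honest tie
>>>>
>>>>     def strategize(b, x, w, i):
>>>>         if i == 0:  # compromising: raise X to unique top
>>>>             return [x] + [c for c in b if c != x]
>>>>         if i == 1:  # burial: lower W to unique bottom
>>>>             return [c for c in b if c != w] + [w]
>>>>         # i == 2: two-sided, both at once
>>>>         return [x] + [c for c in b if c not in (x, w)] + [w]
>>>>
>>>>     def test_one_election():
>>>>         ballots = spatial_election()
>>>>         w = method(ballots)
>>>>         if w is None:
>>>>             return "tie"
>>>>         for x in range(NUM_CANDS):
>>>>             if x == w:
>>>>                 continue
>>>>             for i in range(ATTEMPTS_PER_CAND):
>>>>                 # Only voters who prefer X to W change their ballots.
>>>>                 trial = [strategize(b, x, w, i)
>>>>                          if b.index(x) < b.index(w) else b
>>>>                          for b in ballots]
>>>>                 if method(trial) == x:
>>>>                     return "success"
>>>>         return "failure"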
>>>>
>>>> The indexed strategies are
>>>>         i=0: Compromising (raise X to unique top)
>>>>         i=1: Burial (lower W to unique bottom)
>>>>         i=2: Two-sided (do both at once)
>>>>         i>2: Coalitional strategy
>>>>
>>>> The compromising, burial, and two-sided strategies modify the voters'
>>>> otherwise honest ballots - for instance, compromising changes a
>>>> strategist's ballot so that X is at unique top and the rest of the
>>>> ballot is unchanged.
>>>>
>>>> The first time the coalitional strategy is called for a particular
>>>> election, candidate to strategize for, and value of i, it chooses a
>>>> random number of strategic ballots (between 1 and 3 inclusive). Each
>>>> strategic voter then picks one of these ballots at random. This
>>>> simulates strategies where every strategist ballot is equal, as well as
>>>> ones where there are a few groups each with their own ballot type, thus
>>>> covering more than JGA's simulations without becoming *too*
>>>> computationally expensive.
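>>>>
>>>> (Continuing the sketch above, the coalitional branch might look like
>>>> this, with the pool drawn once per (election, X, i) triple:)
>>>>
>>>>     def coalitional(ballots, x, w):
>>>>         # 1 to 3 shared ballot types (random rankings here); each
>>>>         # strategist then picks one of them at random.
>>>>         pool = [random.sample(range(NUM_CANDS), NUM_CANDS)
>>>>                 for _ in range(random.randint(1, 3))]
>>>>         return [random.choice(pool) if b.index(x) < b.index(w) else b
>>>>                 for b in ballots]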
>>>>
>>>> So with the setup for the stats that I gave, the full procedure for a
>>>> single method is like this:
>>>>
>>>> for j = 1 to 500k
>>>>         Run the algorithm detailed above.
>>>>         It returns one of three states: honest tie, success, or failure.
>>>>         Increment the corresponding counter: TIES, SUCCESSES or FAILURES.
>>>>
>>>> manipulability = SUCCESSES/(500k - TIES)
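>>>>
>>>> (Or, continuing the Python sketch:)
>>>>
>>>>     RUNS = 500_000
>>>>     counts = {"tie": 0, "success": 0, "failure": 0}
>>>>     for _ in range(RUNS):
>>>>         counts[test_one_election()] += 1
>>>>     manipulability = counts["success"] / (RUNS - counts["tie"])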
>>>>
>>>>
>>>> So to answer your questions:
>>>>
>>>> The strategists don't adapt their strategy to the information available
>>>> to them, even though they strictly speaking have full information.
>>>> However, they get to try over and over again until they win. If there is
>>>> a full-information strategy with not too many distinct ballots, then this
>>>> random sampling will eventually find it, given a high enough
>>>> strategy-attempts value.
>>>>
>>>> For each non-winner X, everybody who prefers X to the current winner
>>>> gets to have a go. So not just their favorites: anybody they all prefer
>>>> to the current winner.
>>>>
>>>> >
>>>> >>
>>>> >> [2] The detailed stats suggest that pushover is a problem with
>>>> >> Smith//DAC
>>>> >
>>>> > You don't have enough candidates for a sub-cycle, and so the method
>>>> > can't fail mono-raise.  How can it have a Pushover problem?
>>>>
>>>> I did a bit more checking, and the full preference version doesn't have
>>>> this high an "other strategy" count. Since I think it's unlikely that
>>>> the version with truncation would have more pushover than the fully
>>>> ranked one, I'm going to retract this; most likely it's just an artifact
>>>> of the simulator's ballot reduction process that falsely attributes the
>>>> strategy to the "other" category for cardinal methods.
>>>>
>>>> >
>>>> >> - Margins-Sorted Approval, because I'm not sure how it works
>>>> >
>>>> > (I struggle to take this at face value.  Probably my promotion of MSA
>>>> > has convinced you that it is the best method and you were concerned
>>>> > that your simulation wouldn't do it justice.
>>>>
>>>> I'd like to believe both that I have enough scientific integrity not to
>>>> do that, and that people know I have, too :-)
>>>>
>>>> Actually, I was planning on putting MSA at the same level as the other
>>>> "I don't know enough about these or their dynamics" methods (double
>>>> defeat Hare, MSMLV, and Max Strength Transitive Beatpath).
>>>>
>>>> > But our expert doing the
>>>> > simulation claiming he can't understand the method isn't a good look
>>>> > for its proposability.)
>>>> >
>>>> > Why didn't you simply ask me to explain it to you?
>>>>
>>>> I think it's the sorting phase that does it. My vague idea of how it
>>>> works is that you essentially run a sorting algorithm on intermediate
>>>> values, and that seems a little too complex to me. But I might just have
>>>> got it wrong and then the initial impression of it as an intimidating
>>>> method stuck.
>>>>
>>>> Ted Stern pointed me at the Electowiki article for MSA, which in turn
>>>> led me to his Python implementation. I might port it if I have time, but
>>>> I feel a bit exhausted after gathering all these stats. We'll see :-)
>>>>
>>>> > What happened to separate entries for BTR,  Woodall and Benham?
>>>>
>>>> They're in the other post. I didn't want to add them all to the post
>>>> that was intended to focus on the new results. That's why I said "some
>>>> for comparison" - the others are here:
>>>>
>>>>
>>>> http://lists.electorama.com/pipermail/election-methods-electorama.com/2024-April/006029.html
>>>>
>>>> I could post all the stats - ordinal and cardinal methods' - in a
>>>> summary post if you or other EM members would like.
>>>>
>>>> -km
>>>> ----
>>>> Election-Methods mailing list - see https://electorama.com/em for list
>>>> info
>>>>
>>>