<div dir="ltr"><div dir="ltr"><div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">So perhaps that should be replaced with a category listing, although it<br>would be interesting to see the actual IIA failure rate for Range with<br>automatic normalization (or above-mean approval strategy). Then again,<br>filling in numbers based on simulations from EM might be considered OR;<br>I don't know what the burden of proof/reliability rules of Wikipedia<br>would say.<br></blockquote><div>It's not OR if you put it on Arxiv first. ;)</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">IIA compliance might be less informative than you'd think, because every<br>Condorcet method has the same frequency of IIA failures: you can remove<br>a subset of non-winners to change the winner iff there is no Condorcet<br>winner. Non-Condorcet ranked methods are worse: they fail when there's a<br>cycle, and also whenever they fail to elect the CW.<br></blockquote><div>Ooh, that's true. How about a spoiler index instead, then, going by the winner's change in position? So, for example, if the runner-up is elected after adding/removing a spoiler, that contributes 1 point; 2 points if it's the third-place finisher, etc.</div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Mar 28, 2024 at 3:08 PM Kristofer Munsterhjelm <<a href="mailto:km_elmet@t-online.de">km_elmet@t-online.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 2024-03-22 02:49, Closed Limelike Curves wrote:<br>
> > These are great suggestions, thank you :)
> >
> > For organizing the criteria, my proposal is to replace the current table
> > with maybe 5 numbers:
> > 1. Condorcet efficiency
> > 2. Social utility efficiency
> > 3. Spoiler resistance (IIA compliance)
> > 4. Participation satisfaction
> > 5. Monotonicity satisfaction
> >
> > (Is Jameson Quinn on this email list? I know he had some relevant
> > simulations.)
>
> IIA compliance might be less informative than you'd think, because every
> Condorcet method has the same frequency of IIA failures: you can remove
> a subset of non-winners to change the winner iff there is no Condorcet
> winner. Non-Condorcet ranked methods are worse: they fail when there's a
> cycle, and also whenever they fail to elect the CW.
>
> So perhaps that should be replaced with a category listing, although it
> would be interesting to see the actual IIA failure rate for Range with
> automatic normalization (or above-mean approval strategy). Then again,
> filling in numbers based on simulations from EM might be considered OR;
> I don't know what the burden of proof/reliability rules of Wikipedia
> would say.
>
> Possible categories could be:
> - passes IIA (e.g. cardinal with an absolute scale, random pair)
> - LIIA (Ranked pairs, River)
> - ISDA (Smith//IRV)
> - Condorcet (Minmax)
> - none of the above (Plurality)
>
> Alternatively, having clone independence instead of IIA would be better
> at differentiating between the methods. I would also suggest a strategy
> resistance number, by James Green-Armytage's definition, and summability.
>
> Although if you're doing clone independence, JGA's strategic exit/entry
> might give a better idea of strategic nomination resistance, since some
> nominally clone-independent methods still give candidates an incentive
> to enter or exit - IRV in particular.
>
> Jameson used to be on the list, but he hasn't posted since 2018.
>
> -km
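
P.S. Here's a rough Python sketch of that spoiler index. Everything in it
is hypothetical: it assumes a method(ballots) function that returns the
full finishing order (best first) for whatever rule is under test, and it
only models the removal direction; candidate entry would be the mirror
image.

def spoiler_penalty(method, ballots, spoiler):
    """Points contributed by removing `spoiler` from every ballot:
    0 if the winner is unchanged, 1 if the old runner-up now wins,
    2 if the old third-place finisher wins, and so on."""
    before = method(ballots)                        # original finishing order
    reduced = [[c for c in b if c != spoiler] for b in ballots]
    after = method(reduced)                         # order without the spoiler
    return before.index(after[0])

def spoiler_index(method, elections):
    """Average penalty over simulated elections, trying each
    non-winner in turn as the removed candidate."""
    total, count = 0, 0
    for ballots in elections:
        for spoiler in method(ballots)[1:]:
            total += spoiler_penalty(method, ballots, spoiler)
            count += 1
    return total / count if count else 0.0

Averaging gives one number per method, which is what a summary table would
want; you could also report the full distribution of penalties instead.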
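
P.P.S. On the normalized-Range question: here's a toy example (hypothetical
utilities, not simulation output) showing a single IIA failure when each
voter rescales so their favorite on the current slate gets 1 and their
least favorite gets 0.

def normalize(utils):
    lo, hi = min(utils.values()), max(utils.values())
    return {c: (u - lo) / (hi - lo) if hi > lo else 1.0
            for c, u in utils.items()}

def range_winner(voters, slate):
    totals = {c: 0.0 for c in slate}
    for utils in voters:
        scores = normalize({c: utils[c] for c in slate})
        for c in slate:
            totals[c] += scores[c]
    return max(totals, key=totals.get)

voters = [
    {'A': 10, 'B': 9, 'C': 0},
    {'A': 10, 'B': 9, 'C': 0},
    {'B': 10, 'C': 9, 'A': 0},
]
print(range_winner(voters, ['A', 'B', 'C']))  # B wins, 2.8 to A's 2.0
print(range_winner(voters, ['A', 'B']))       # drop loser C: A wins, 2.0 to 1.0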