2011/12/25 <fsimmons@pcc.edu>:
> Jameson asked for "thoughts."
>
> My first thought is that this kind of analysis is exactly what we need.
>
> My second thought is that so far SODA has held up well under all the probes for
> weakness that anybody has come up with. SODA seems to be a very robust method.
>
> My third thought is that I have never seen this kind of soul searching or
> probing directed at IRV by IRV enthusiasts.

Thanks, that's very flattering... but not quite how I'd put it. I don't think I was
doing a hard-nosed search for SODA's weaknesses in this case. By picking a set of
cases which are at the absolute limit of what any method could handle on a good
day, I was more scraping the bottom of the barrel for yet another outstanding
strength to ascribe to SODA.

And here's what I found:

- Preferential methods can't be sensitive to utility (duh).
- Honest Range can (duh; but remember, Range has some of the strongest strategy
  incentives of any method, so honest Range may be rare).
- Honest MJ and Approval can, as long as voters have some randomness/spread.
  (And they have significantly less strategy incentive than Range.)
- With strategy, these rated systems can fail the given scenarios badly
  (chicken dilemma).
- Honest, rational SODA cannot be sensitive to utility.
- However, it at least avoids the chicken dilemma (and thus has effectively no
  strategy incentive at all).
- SODA may be able to be sensitive to utilities if candidates act meta-rationally
  (in ways that experiments such as the ultimatum game show that people
  sometimes do).

I'd say that this shows that SODA handles the set of scenarios as well as any
method and better than most. But I can't really say that SODA shines as the
clearly best method in these cases.

Jameson

> Date: Sun, 25 Dec 2011 11:28:24 -0600
> From: Jameson Quinn
> To: EM
> Subject: [EM] Fwd: SODA, negotiation, and weak CWs
>
> I'm resubmitting this in a text-friendly format, at Forest's request. I'll also
> take the opportunity to add one paragraph about how rated methods can fail to
> find the highest-utility candidates in scenarios like this. Added text is
> marked ADDED.
>
> ---------- Forwarded message ----------
> From: Jameson Quinn
> Date: 2011/12/25
> Subject: SODA, negotiation, and weak CWs
> To: EM
>
</div><div class="im">> In order to have optimum Bayesian Regret, a voting system should<br>
> be able to<br>
> not elect a Weak Condorcet Winner (WCW), that is, a CW whose<br>
> utility is<br>
> lower than the other candidates. Consider the following payout<br>
> matrices:Group Size Candidate Utilities<br>
> Scenario 1 (zero sum) A B C<br>
> a 4 4 1 0<br>
> b 2 0 3 2<br>
> c 3 0 2 4<br>
> Total utility 16 16 16<br>
><br>
> Scenario 2 (pos. sum) A B C<br>
> a 4 3 1 0<br>
> b 2 0 3 1.5<br>
> c 3 0 2 3<br>
> Total utility 12 16 12<br>
><br>
> Scenario 3 (neg. sum) A B C<br>
> a 4 4 0.5 0<br>
> b 2 0 3 2<br>
> c 3 0 1 4<br>
> Total utility 16 11 16<br>
><br>
><br>
> All three scenarios consist of 3 groups of voters: groups a, b, and c, with 4,
> 2, and 3 voters respectively, for a total of 9 voters. All scenarios have 3
> candidates: A, B, and C, who favor their respective groups. And in all three
> scenarios, candidate B is the CW, because the preference profile is always
>
> 4: A>B
> 2: B>C
> 3: C>B
>
> But in scenario 1, the utilities of the three candidates are balanced; in
> scenario 2, B has the highest utility; and in scenario 3, A and C have the
> highest utilities.
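
To make the CW claim and the utility spread concrete, here is a minimal Python
sketch (my own illustration; it completes group a's ballot as A>B>C, which is
consistent with the payoffs) that checks the pairwise contests and the
per-scenario utility totals:

from itertools import combinations

# Common ranked profile: (number of voters, ranking best-to-worst).
# Group a's ballot is completed as A>B>C, consistent with its payoffs.
profile = [(4, ['A', 'B', 'C']),
           (2, ['B', 'C', 'A']),
           (3, ['C', 'B', 'A'])]

def margin(x, y):
    """Net number of voters ranking x above y."""
    return sum(n if r.index(x) < r.index(y) else -n for n, r in profile)

for x, y in combinations('ABC', 2):
    print(f"{x} vs {y}: {margin(x, y):+d}")
# B beats A 5-4 and beats C 6-3, so B is the Condorcet winner in every scenario.

# Per-member utilities (A, B, C) by group, and group sizes, per the tables above.
sizes = {'a': 4, 'b': 2, 'c': 3}
scenarios = {
    1: {'a': (4, 1, 0),   'b': (0, 3, 2),   'c': (0, 2, 4)},
    2: {'a': (3, 1, 0),   'b': (0, 3, 1.5), 'c': (0, 2, 3)},
    3: {'a': (4, 0.5, 0), 'b': (0, 3, 2),   'c': (0, 1, 4)},
}
for s, utils in scenarios.items():
    totals = [sum(sizes[g] * utils[g][i] for g in sizes) for i in range(3)]
    print(f"Scenario {s}: total utilities (A, B, C) = {totals}")
# Scenario 1: balanced; scenario 2: B highest; scenario 3: A and C highest.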
>
> Obviously, any purely preferential system will tend to give the same result in
> all three scenarios. This might not be 100% true if strategy propensity
> depended on the utility payoff of a strategy; but the strategic possibilities
> would have to be just right for a method to "get it right" for this reason.
>
> It's easy to see how Range could "get it right" in scenarios 2 and 3. With just
> a bit of strategy, it's also easy to see how it could successfully find the CW
> in scenario 1.
>
> You can also construct plausible stories of how Approval or MJ could "get it
> right" in all 3 scenarios, although it probably involves adding some random
> noise to voting patterns rather than assuming pure "honest" votes.
>
> ADDED: Of course, Range, Approval, and MJ can all get these scenarios "wrong"
> too. Because the scenarios present a classic chicken dilemma between B and C,
> these rated systems could all end up electing A, regardless of utility.
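
One toy way to model the "random noise" story (my sketch; the uniform approval
threshold is an assumption I'm adding, not something from the scenarios
themselves): each voter approves every candidate whose utility exceeds a
threshold drawn uniformly between that voter's lowest and highest utility.
Simulating that, and then the strategic chicken-dilemma case, looks like this:

import random
from collections import Counter

sizes = {'a': 4, 'b': 2, 'c': 3}
scenarios = {
    1: {'a': (4, 1, 0),   'b': (0, 3, 2),   'c': (0, 2, 4)},
    2: {'a': (3, 1, 0),   'b': (0, 3, 1.5), 'c': (0, 2, 3)},
    3: {'a': (4, 0.5, 0), 'b': (0, 3, 2),   'c': (0, 1, 4)},
}
CANDS = 'ABC'

def noisy_approval_winner(utils, rng):
    """Each voter approves candidates above a random personal threshold."""
    approvals = Counter()
    for group, size in sizes.items():
        u = utils[group]
        for _ in range(size):
            threshold = rng.uniform(min(u), max(u))
            for cand, x in zip(CANDS, u):
                if x > threshold:
                    approvals[cand] += 1
    return max(CANDS, key=lambda c: approvals[c])

rng = random.Random(0)
for s, utils in scenarios.items():
    wins = Counter(noisy_approval_winner(utils, rng) for _ in range(2000))
    print(f"Scenario {s}: noisy-approval win counts {dict(wins)}")

# The chicken-dilemma failure mode is deterministic: if groups b and c each
# approve only their own favorite, the approval totals are A=4, B=2, C=3,
# and A wins in every scenario -- regardless of utility.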
>
> But what about SODA? As a primarily preferential system, it seems that it
> should give the same result in all three scenarios. If candidates all
> rationally pursue the interests of their primary constituency, then A will
> approve B to prevent B from having to approve C (which would elect C, group a's
> last choice, by 5 delegated votes to 4), leaving a win for B.
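
The delegation arithmetic behind that claim, as a quick sketch (this is not a
full SODA implementation, just the two branches described above):

# Delegated totals: each candidate controls their own group's ballots.
delegated = {'A': 4, 'B': 2, 'C': 3}

def winner(approval_totals):
    return max(approval_totals, key=approval_totals.get)

# Branch 1: A also approves B, so A's 4 ballots count for both A and B.
branch1 = {'A': 4, 'B': delegated['B'] + delegated['A'], 'C': 3}

# Branch 2: A stands pat; B, preferring C to a win by A, approves C.
branch2 = {'A': 4, 'B': 2, 'C': delegated['C'] + delegated['B']}

print("If A approves B:", winner(branch1))   # B wins, 6 approvals to 4 and 3
print("If A stands pat:", winner(branch2))   # C wins, 5 approvals to 4 and 2

# Group a prefers B to C in every scenario (1 vs 0, or 0.5 vs 0), so a
# candidate A acting for group a approves B, and B is elected.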
>
> But if candidate A decides to make an ultimatum, things could go differently.
> A says to B: "Make some promise that transfers 0.5 point of utility to each
> member of group a, or I will not approve you." Assume that B can make a promise
> to transfer utility from one group to another at 80% efficiency, and that such
> promises are not strictly enforceable. Thus, if A gets too greedy, B can simply
> promise the moon and not keep the promise; but if A asks for something
> reasonable, B will see honesty as worth it.
>
> B could promise to transfer 0.5 point of utility per member from groups b and c
> to group a. Since utility transfers are assumed to be only 80% efficient, that
> transfer of 2.5 utility points would result in a net loss of 0.5. So the
> payoffs would be:
>
> Scenario 1a (zero sum)
> Group   Size    A     B     C
> a       4       4     1.5   0
> b       2       0     2.5   2
> c       3       0     1.5   4
> Total utility   16    15.5  16
>
> Scenario 1b (zero sum)
> Group   Size    A     B     C
> a       4       4     1.5   0
> b       2       0     3     2
> c       3       0     1.1   4
> Total utility   16    15.3  16
>
> Scenario 2a (pos. sum)
> Group   Size    A     B     C
> a       4       3     1.5   0
> b       2       0     2.5   1.5
> c       3       0     1.5   3
> Total utility   12    15.5  12
>
> Scenario 3a (neg. sum)
> Group   Size    A     B     C
> a       4       4     1     0
> b       2       0     2.5   2
> c       3       0     0.5   4
> Total utility   16    10.5  16
>
> Scenario 3b (neg. sum)
> Group   Size    A     B     C
> a       4       4     1     0
> b       2       0     3     2
> c       3       0     0.1   4
> Total utility   16    10.3  16
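
The "Xa" rows above follow mechanically from the original tables under that rule
(0.5 point per member taken from b and c, with 80% of the 2.5 points delivered
to group a's four members). A quick check, as a sketch:

sizes = {'a': 4, 'b': 2, 'c': 3}
# Utility of candidate B for each group in the original scenarios.
b_utility = {
    '1': {'a': 1,   'b': 3, 'c': 2},
    '2': {'a': 1,   'b': 3, 'c': 2},
    '3': {'a': 0.5, 'b': 3, 'c': 1},
}

EFFICIENCY = 0.8
take_per_member = 0.5
taken = take_per_member * (sizes['b'] + sizes['c'])   # 2.5 points taken
gain_per_a = taken * EFFICIENCY / sizes['a']          # 0.5 point per a-member

for s, u in b_utility.items():
    after = {'a': u['a'] + gain_per_a,
             'b': u['b'] - take_per_member,
             'c': u['c'] - take_per_member}
    total = sum(sizes[g] * after[g] for g in sizes)
    print(f"Scenario {s}a: B's per-group utilities {after}, total {total}")
# Reproduces 15.5, 15.5 and 10.5 (scenarios 1a, 2a, 3a).  The 1b/3b rows,
# where the whole transfer falls on group c, use a different split.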
>
> Note that in scenarios 1a and 2a, this utility transfer has left B giving the
> same utility to groups a and c, while in scenario 3a, B has switched from
> favoring group c over group a, to favoring group a over group c. Also, note
> that in scenario 2a, group b still gets a full point of advantage with
> candidate B versus what they would get with candidate C, whereas in the other
> two "Xa" scenarios (1a and 3a), group b only gets half a point of advantage
> there. If group b demands a full point of advantage, then B could only meet the
> ultimatum in scenarios 1 and 3 by taking the entire transfer from group c, as
> in scenarios 1b and 3b. Again, this would leave c with less utility than a.
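
The "advantage" arithmetic in that paragraph can be checked directly from the
tables (again just a small sketch of mine):

# Group b's per-member utility for B and for C in each variant (from the tables).
group_b = {'1a': (2.5, 2), '2a': (2.5, 1.5), '3a': (2.5, 2),
           '1b': (3, 2),   '3b': (3, 2)}

for variant, (u_B, u_C) in group_b.items():
    print(f"{variant}: group b's advantage of B over C = {u_B - u_C}")
# 2a keeps the full 1-point advantage; 1a and 3a drop to 0.5; in 1b and 3b,
# where group c absorbs the whole transfer, group b again keeps 1 point.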
>
> I believe that these factors tend to make it more likely that B would meet the
> ultimatum in scenario 2 than in the other scenarios (because they'd be
> reluctant to anger group c by "unfairly" favoring group a). Of course, A could
> realize this, and simply not attempt to make the ultimatum in scenarios 1 and
> 3; and then B would still win. But A's utility payouts show that they honestly
> have no preference between groups b and c, so I think that it is not
> unreasonable to imagine that they'd make the ultimatum in all three cases.
>
> The upshot is, there is a plausible (though perhaps not too likely) mechanism
> for SODA to avoid electing a CW specifically in cases where that CW is
> intrinsically weak. And that's with perfect information; I'd argue that the
> mechanism would work even better in cases where B's strength were illusory;
> that is, where groups a and c were overestimating their payoff from candidate B
> because of the A/C rivalry. Candidate A, realizing that they were choosing
> between B and C, would be more careful about assessing the relative payoffs
> between those candidates than group a, distracted by the A/C rivalry, had been.
>
> Thoughts?
>
> Jameson
> ----
> Election-Methods mailing list - see http://electorama.com/em for list info