On Mon, Apr 25, 2022, 4:25 AM, Kristofer Munsterhjelm <km_elmet@t-online.de> wrote:

> Some time ago I was considering cardinal methods that might work using
> only lottery information (e.g. "I consider an x% chance of A, 100-x%
> chance of C to be just as good as getting B for sure"). Lottery
> information provides scaled utilities, i.e. variables of the form
>
> u_A = a * (absolute utility of getting A elected) + b
>
> with a > 0
>
> for candidate A. The problem with using absolute Range is that we have
> no idea what a and b are, and it's very hard to get all of society to
> agree on a common a and b to calibrate the ratings so they can be
> meaningfully compared.
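Concretely (toy numbers of my own, not from your post): if the voter is indifferent at x = 60%, then on a scale that pins C at 0 and A at 1 we get u_B = 0.6, but any other a > 0 and b would serve equally well. A minimal sketch in Python:

    # Toy sketch of lottery elicitation.  If the voter is indifferent
    # between B for sure and a lottery giving A with probability x and
    # C otherwise, then u_B = x*u_A + (1-x)*u_C on whatever scale we
    # pin u_A and u_C to.

    def scaled_utility(x, u_A=1.0, u_C=0.0):
        # The (u_A, u_C) anchors are arbitrary: replacing u by a*u + b
        # with a > 0 gives an equally valid scale, which is the problem.
        return x * u_A + (1.0 - x) * u_C

    u_B = scaled_utility(0.60)  # 0.6 on the scale anchored at C=0, A=1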
>
> But earlier today I was adding some information about Myerson-Weber
> strategy into Electowiki. And then I noticed that it seems like
> determining optimal strategy according to the M-W model makes the a and
> b terms disappear!
>
> If so, SARVO-Range based on Myerson-Weber strategy would do the job; and
> we have (sort of) a principled method to do cardinal voting with lottery
> utilities (von Neumann-Morgenstern ones).
>
> Let's check.
>
> Suppose that voter v wants to decide what rating v_i to give to
> candidate i. Define the "public rated utility" u_i for candidate i as
>
> u_i = a * up_i + b
>
> where up_i is the absolute utility (according to some absolute scale
> that we don't know). Then we want to show that determining the optimal
> Myerson-Weber strategic ratings v_i based on u_i produces the same
> result as doing so for up_i. That would mean that the strategic ratings
> produced by using only the lottery information are the same as the
> ratings that would be produced if we were fortunate enough to know up_i
> for all voters directly.
>
> So the M-W strategy is: let
>
> v_i be the strategic rating we want to find,
> u_i be the public utility of candidate i,
> p_ij be the voter's perceived probability that i and j will be tied.

I could be wrong, but I think it should be "tied for winning."

It is interesting that this strategy can actually result in non-solid approval coalitions on ballots: sometimes it requires you to approve X while leaving unapproved some candidate Y rated above X on the same ballot, i.e. insincere strategy.

Furthermore, you could only get away with those insincere gaps in the approval order if estimates of both the utilities u_i and u_j and of the probabilities p_ij in question were known with a high degree of precision.

These facts reflect the fragility (anti-robustness) of the strategy based on winning-tie probabilities.

Nevertheless, your result is highly relevant, because it shows that on a fundamental level there is a meaningful, experimental way of defining individual utilities that are just as good as the theoretical utilities invoked as a basis for Approval strategy.

The same is equally true for the less sensitive strategy of approving each candidate k whose utility is above expectation:

u_k > SUM_i P_i * u_i,

based on estimates of the (non-tie-based) winning probabilities P_i, which are still sketchy because of rampant misinformation, not to mention intentional disinformation.
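For concreteness, a minimal sketch of that above-expectation rule (Python; the numbers are mine, for illustration only):

    # Above-expectation Approval strategy: approve exactly those
    # candidates whose utility exceeds the expected utility of the
    # election, computed from estimated winning probabilities P_i.

    def above_expectation_approvals(u, P):
        expectation = sum(P_i * u_i for P_i, u_i in zip(P, u))
        return [u_k > expectation for u_k in u]

    # Toy example: utilities for L, C, R and rough winning-probability
    # estimates.  Expected utility is 0.51, so L and C are approved.
    u = [1.0, 0.6, 0.0]
    P = [0.45, 0.10, 0.45]
    print(above_expectation_approvals(u, P))  # [True, True, False]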
>
> The prospective ratings are:
> R_i = SUM j != i: p_ij * (u_i - u_j)
>
> and then we choose the v vector so that
> SUM i = 1..n: v_i * R_i
> is maximized.
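On a [0, 1] ratings scale, maximizing SUM v_i * R_i subject to 0 <= v_i <= 1 just means setting v_i = 1 whenever R_i > 0 and v_i = 0 otherwise. A minimal sketch (Python; symmetric p_ij and toy numbers are my assumptions), which also reproduces the non-solid coalitions I mentioned above:

    # Myerson-Weber strategy for Range on a [0, 1] scale: compute the
    # prospective ratings R_i and give the top rating exactly to the
    # candidates with R_i > 0.

    def mw_strategy(u, p):
        # u: utilities; p: matrix of perceived tie-for-winning
        # probabilities p[i][j].  Returns the optimal ratings v.
        n = len(u)
        R = [sum(p[i][j] * (u[i] - u[j]) for j in range(n) if j != i)
             for i in range(n)]
        return [1.0 if R_i > 0 else 0.0 for R_i in R]

    # Toy example: B is rated above C, but ties are only expected
    # within {A, B} and within {C, D}, so the optimum approves A and C
    # while skipping the higher-rated B -- an insincere gap.
    u = [1.0, 0.8, 0.3, 0.0]                 # A, B, C, D
    p = [[0.0, 0.9, 0.0, 0.0],
         [0.9, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.9],
         [0.0, 0.0, 0.9, 0.0]]
    print(mw_strategy(u, p))                 # [1.0, 0.0, 1.0, 0.0]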
>
> So expand the prospective ratings:
> R_i = SUM j != i: p_ij * (a*up_i + b - a*up_j - b)
>     = SUM j != i: p_ij * a * (up_i - up_j)
>
> and for the sake of convenience, let Q_i be the alternate-universe R_i
> (where we know the up values directly):
> Q_i = SUM j != i: p_ij * (up_i - up_j),
>
> which means that R_i = a * Q_i.
>
> If we only have the u_i values, then we want to choose the v vector to
> maximize
> f(v) = SUM i = 1..n: v_i * R_i
> and in the alternate world,
> g(v) = SUM i = 1..n: v_i * Q_i
>
> But f(v) = a * g(v), and since a is a positive constant, it can't
> change the location of the optimum. Hence the M-W strategy depends
> neither on a nor on b, as desired.
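A quick numeric check of that invariance, reusing the mw_strategy sketch from above (toy numbers of mine):

    # Affine rescaling u = a*up + b with a > 0 gives R_i = a * Q_i, so
    # the signs -- and hence the optimal ratings -- are unchanged.
    up = [0.9, 0.5, 0.1]
    a, b = 7.0, -3.0
    u = [a * x + b for x in up]
    p = [[0.0, 0.2, 0.3],
         [0.2, 0.0, 0.5],
         [0.3, 0.5, 0.0]]
    assert mw_strategy(u, p) == mw_strategy(up, p)  # both [1.0, 1.0, 0.0]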
>
> So SARVO-Range with M-W strategy accomplishes what we want.
>
> Strictly speaking, I would have to show that it's not just a ranked
> method - that it elects the good centrist in the LCR (Left-Center-Right)
> example. But I *think* that's true.
>
> As proposed by Warren, SARVO-Range also considers one voter at a time.
> If we want homogeneity (scale invariance), it should consider
> infinitesimal slices of voters at a time instead. But I don't know how
> to calculate that, and the naive approach is clearly out of the
> question. I need someone with calculus-of-variations skills!
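For what it's worth, here is a toy discrete version of the one-voter-at-a-time count, again reusing mw_strategy from above. The tie-probability model (ties judged likelier between candidates whose running totals are close) is purely my placeholder, not Warren's actual definition, and this is the per-voter loop rather than the infinitesimal-slice version you're after:

    import math, random

    def sarvo_range_sketch(voters_up, seed=0):
        # voters_up: one utility list per voter.  Voters are shuffled
        # and processed one at a time; each applies M-W strategy
        # against tie probabilities guessed from the running totals.
        n = len(voters_up[0])
        totals = [0.0] * n
        order = list(range(len(voters_up)))
        random.Random(seed).shuffle(order)
        for v in order:
            # Placeholder tie model (my assumption): probability decays
            # with the gap between the candidates' current totals.
            p = [[math.exp(-abs(totals[i] - totals[j])) if i != j else 0.0
                  for j in range(n)] for i in range(n)]
            ballot = mw_strategy(voters_up[v], p)
            totals = [t + r for t, r in zip(totals, ballot)]
        return totals  # winner = argmax over candidates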
>
> -km
> ----
> Election-Methods mailing list - see https://electorama.com/em for list info