[EM] Better cardinal methods?

Andy Jennings elections at jenningsstory.com
Mon Oct 4 20:07:39 PDT 2021


Kristofer,

Thank you for thinking this over.

I do think the concept of an absolute zero (things I would like to
experience vs things I wouldn't) has some merit. Maybe if we were deciding
where to go to dinner, there would be some places I would just not go. The
company of my friends wouldn't be worth the pain of the venue. But in a
political election, it's harder to imagine where the absolute zero would
fall.

Here are some thoughts about facilitating the market:

Suppose there are N_A voters who want A > B > C and N_C voters who want C >
B > A. Assume all voters also submitted an evaluation of B (a real number
in [0,1]).

Find all the integer ordered pairs (a, c) in [1, N_A] x [1, N_C] such that
there is an a-sized subset of the A>B>C voters and a c-sized subset of the
C>B>A voters that are willing to, collectively, trade their chances of A
and C winning for chances of B winning. The subsets will not be difficult
to find. They will be the subsets of the A>B>C and C>B>A voters with the
highest opinion of B.

For each (a,c), we can simply ask whether both of the following are true:

a / (a + c) <= a-th highest evaluation of B by A>B>C voters
c / (a + c) <= c-th highest evaluation of B by C>B>A voters

We can solve the first inequality for the minimum feasible c as a function
of a (c >= a*(1 - e_a)/e_a, where e_a is the a-th highest evaluation of B
among the A>B>C voters), and the second for the minimum feasible a as a
function of c, and see where they overlap. Both bounds are increasing, so
whenever any trade is feasible there is a maximal pair (a, c).
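Here is a quick Python sketch of that search (the function and variable
names are mine, just for illustration; it brute-forces all pairs rather
than exploiting the monotonicity):

    def max_trade(evals_a, evals_c):
        # evals_a: evaluations of B by the A>B>C voters.
        # evals_c: evaluations of B by the C>B>A voters.
        # A pair (a, c) is feasible when
        #   a/(a+c) <= a-th highest evaluation in evals_a, and
        #   c/(a+c) <= c-th highest evaluation in evals_c.
        # Returns the feasible pair maximizing a + c, or (0, 0) if none.
        evals_a = sorted(evals_a, reverse=True)
        evals_c = sorted(evals_c, reverse=True)
        best = (0, 0)
        for a in range(1, len(evals_a) + 1):
            for c in range(1, len(evals_c) + 1):
                if (a / (a + c) <= evals_a[a - 1]
                        and c / (a + c) <= evals_c[c - 1]
                        and a + c > sum(best)):
                    best = (a, c)
        return best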


Here is an example:

Suppose there are 13 A>B>C voters with the following evaluations of B:
0.1, 0.25, 0.3, 0.3, 0.4, 0.45, 0.5, 0.5, 0.57, 0.571, 0.9, 0.9, 0.95

And 7 C>B>A voters with the following evaluations of B:
0.1, 0.1, 0.4, 0.5, 0.5, 0.65, 0.95


The 1 most-willing A>B>C voter (0.95) would trade with (0.05/0.95) = 0.053
(or more) C>B>A voters.
The 2 most-willing A>B>C voters (0.9+) would trade with 2*(0.1/0.9) =
0.222+ C>B>A voters.
The 3 most-willing A>B>C voters (0.9+) would trade with 3*(0.1/0.9) =
0.333+ C>B>A voters.
The 4 most-willing A>B>C voters (0.571+) would trade with 4*(0.429/0.571) =
3.005+ C>B>A voters.
The 5 most-willing A>B>C voters (0.57+) would trade with 5*(0.43/0.57) =
3.772+ C>B>A voters.
The 6 most-willing A>B>C voters (0.5+) would trade with 6*(0.5/0.5) = 6+
C>B>A voters.
The 7 most-willing A>B>C voters (0.5+) would trade with 7*(0.5/0.5) = 7+
C>B>A voters.
The 8 most-willing A>B>C voters (0.45+) would trade with 8*(0.55/0.45) =
9.778+ C>B>A voters.
The 9 most-willing A>B>C voters (0.4+) would trade with 9*(0.6/0.4) = 13.5+
C>B>A voters.
The 10 most-willing A>B>C voters (0.3+) would trade with 10*(0.7/0.3) =
23.333+ C>B>A voters.
The 11 most-willing A>B>C voters (0.3+) would trade with 11*(0.7/0.3) =
25.667+ C>B>A voters.
The 12 most-willing A>B>C voters (0.25+) would trade with 12*(0.75/0.25) =
36+ C>B>A voters.
All 13 A>B>C voters (0.1+) would trade with 13*(0.9/0.1) = 117+ C>B>A
voters.

The 1 most-willing C>B>A voter (0.95) would trade with (0.05/0.95) = 0.053+
A>B>C voters.
The 2 most-willing C>B>A voters (0.65+) would trade with 2*(0.35/0.65) =
1.077+ A>B>C voters.
The 3 most-willing C>B>A voters (0.5+) would trade with 3*(0.5/0.5) = 3+
A>B>C voters.
The 4 most-willing C>B>A voters (0.5+) would trade with 4*(0.5/0.5) = 4+
A>B>C voters.
The 5 most-willing C>B>A voters (0.4+) would trade with 5*(0.6/0.4) = 7.5+
A>B>C voters.
The 6 most-willing C>B>A voters (0.1+) would trade with 6*(0.9/0.1) = 54+
A>B>C voters.
All 7 C>B>A voters (0.1+) would trade with 7*(0.9/0.1) = 63+ A>B>C voters.


The (a, c) ordered pairs of successful trades are:
(1,1)
(2,1)
(3,1)
(2,2)
(3,2)
(3,3)
(4,4)
(5,4)

So we implement the maximal trade: 5 chances of A winning and 4 chances of
C winning turn into 9 chances of B winning. Our lottery goes from:
13/20 A + 7/20 C
to:
8/20 A + 9/20 B + 3/20 C
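
As a standalone check, enumerating the feasible pairs in Python reproduces
exactly the eight pairs above and picks (5, 4) as maximal:

    evals_a = sorted([0.1, 0.25, 0.3, 0.3, 0.4, 0.45, 0.5, 0.5,
                      0.57, 0.571, 0.9, 0.9, 0.95], reverse=True)
    evals_c = sorted([0.1, 0.1, 0.4, 0.5, 0.5, 0.65, 0.95], reverse=True)
    feasible = [(a, c)
                for a in range(1, len(evals_a) + 1)
                for c in range(1, len(evals_c) + 1)
                if a / (a + c) <= evals_a[a - 1]
                and c / (a + c) <= evals_c[c - 1]]
    print(feasible)                # the same eight pairs as above
    print(max(feasible, key=sum))  # (5, 4)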

Is there a better definition of "market-clearing price"?

However, this does not lower the entropy like I expected; the trade
actually raises it (from about 0.93 bits to about 1.46 bits; quick check
below). Entropy must not be what I'm looking for.
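
(The quick check, taking Shannon entropy in bits as the measure:)

    from math import log2

    def entropy(probs):
        # Shannon entropy of a lottery, in bits.
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([13/20, 7/20]))       # ~0.93 bits
    print(entropy([8/20, 9/20, 3/20]))  # ~1.46 bits: higher, not lower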


The thing I like about this (similar to the strategy-proof aggregation
functions like the median or chiastic median) is that what ended up
mattering is that there were 5 A voters who graded B at or above 5/9
(~0.556) and 4 C voters who graded B at or above 4/9 (~0.444). The actual
grades didn't matter
beyond that, so small adjustments by most voters won't change the outcome
at all. Yes, there are a few voters at the margin who could affect the deal
by going above or below the threshold, but in general, this system should
have some good strategy-resistance properties.

Of course, with perfect knowledge, one A voter might notice that a little
dishonesty (real utility=0.57, professed utility=0.55) will eliminate the
(5,4) trade and leave the (4,4) trade. They still get the same number of C
voters (4) converted to B voters, but it only costs them 4 A voters instead
of 5. They could have captured some of the surplus utility and kept it for
themselves.

This method is vulnerable because it's not making all trades at the
individual level. It doesn't really care that every trade at the margin is
net-positive-utility. It seems to use the surplus utility of the
most-willing-to-trade voters to get more people in on the deal at the
margin.

On the other hand, if one of the C voters thought they might benefit from
dishonesty and professed utility 0.44 (real utility=0.5), they would
scuttle both the (5,4) trade and the (4,4) trade, leaving (3,3) as the best
trade. This effectively causes 2 A votes and 1 C vote to NOT get traded for
3 B votes, a net-negative utility for them.
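
Both manipulation scenarios are easy to verify with the max_trade sketch
from earlier in this message:

    # One A voter professes 0.55 instead of 0.57: the (5,4) trade needs
    # the 5th highest A-side evaluation to be at least 5/9 (~0.556), so
    # it dies and (4,4) becomes the maximal trade.
    evals_a = [0.1, 0.25, 0.3, 0.3, 0.4, 0.45, 0.5, 0.5, 0.55, 0.571,
               0.9, 0.9, 0.95]
    evals_c = [0.1, 0.1, 0.4, 0.5, 0.5, 0.65, 0.95]
    print(max_trade(evals_a, evals_c))   # (4, 4)

    # One C voter professes 0.44 instead of 0.5: both (5,4) and (4,4)
    # need the 4th highest C-side evaluation to be at least 4/9 (~0.444),
    # so only (3,3) survives as the maximal trade.
    evals_a = [0.1, 0.25, 0.3, 0.3, 0.4, 0.45, 0.5, 0.5, 0.57, 0.571,
               0.9, 0.9, 0.95]
    evals_c = [0.1, 0.1, 0.4, 0.44, 0.5, 0.65, 0.95]
    print(max_trade(evals_a, evals_c))   # (3, 3)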

So hopefully it is only possible for a few voters (right at the margin) to
affect the outcome at all and hopefully it will be impossible to tell,
before the election, if being dishonest will help you or hurt you.

Thoughts?

~ Andy

On Sun, Oct 3, 2021 at 4:39 PM Kristofer Munsterhjelm <km_elmet at t-online.de>
wrote:

> On 9/27/21 8:43 AM, Andy Jennings wrote:
> > Kristofer,
> >
> > I, too, find myself going back to the risk-neutral lottery-based
> > definition of utility. I feel like it goes so naturally with "random
> > ballot".
> >
> > Suppose V1 has a ranking of A > B > C and V2 has a ranking of C > B > A.
> > In random ballot, V1's vote becomes a lottery ticket that causes A to
> > win and V2's vote becomes a lottery ticket that causes C to win. Let us
> > ask when would V1 and V2 both agree to trade their "one chance of A
> > winning and one chance of C winning" for "two chances of B winning".
>
> There are two threads to this that I think it's useful to keep separate.
> The first is that since we're dealing with lotteries (over candidate
> alternatives), it makes sense to consider *actual* lotteries
> (probabilities of winning) based on this information. The second is
> that, while interpersonal comparisons of utility are very hard (if not
> impossible, e.g. qualia problems), we can access risk neutral lottery
> information.
>
> I was investigating the second aspect, because if Range and Approval's
> failings come from asking more than the voter can provide (namely,
> asking for utilities on an interval scale, and sort of just throwing its
> hands in the air and saying "then just normalize"), then it's natural to
> ask, well, what kind of cardinal utility information *can* we get? And
> how can a (deterministic) method that's honest about the limits to its
> information be constructed?
>
> The two threads or lines of investigation may be related, e.g. one such
> method may be "determinize a random method by electing the candidate
> with the greatest probability of victory", justified by arguing that
> this will generally be a good candidate due to the linearity of
> expectation. However, I would imagine that a method specifically
> designed for the deterministic case would be better.
>
> While it's also possible to e.g. take lottery information and "just
> normalize", I have the impression that doing so would add information
> that doesn't exist: it in a sense pretends that what isn't on an
> interval scale actually is. It needs additional justification, e.g. that
> OMOV means that each voter's power should be the same, and that a voter
> with extreme preferences should have all his other preferences
> diminished rather than the scale being clamped.
>
> > We know we can say little about the "absolute" or "interpersonal"
> > utilities of how U_V1(A), U_V1(B), and U_V1(C) compare to U_V2(A),
> > U_V2(B), and U_V2(C).
>
> Ordinary incommensurability would mean that you can only get an affine
> scaling of the utilities (defined by the lotteries). But I was thinking:
> at least for sensory (hedonic) types of utilitarianism, couldn't you
> define a zero point by saying everything with utility less than zero is
> something you'd prefer not to experience (i.e. you wouldn't prefer a
> presence of this to an absence of this), while everything with utility
> greater than zero is something you would?
>
> > But asking each voter to quantify exactly where B lies on the spectrum
> > between A and C, as a number between 0 and 1, is completely meaningful
> > (in the risk-neutral lottery paradigm).
> >
> > Let b_1 = (U_V1(B) - U_V1(C)) / (U_V1(A) - U_V1(C))
> >
> > Let b_2 = (U_V2(B) - U_V2(A)) / (U_V2(C) - U_V2(A))
> >
> > (In other words, rescale each voter's utility so their favorite
> > candidate is at 1.0 and their least favorite is at 0.0 and examine their
> > utility estimations of B.)
>
> Yes. I used a four-tuple because I was thinking "what if A and C are the
> same utility value, then you'd get a division by zero". But if we also
> gather honest rank information, that problem more or less goes away,
> because it's clear who the voter's favorite and least favorite are
> (excepting the degenerate case where the voter equal-ranks everybody).
>
> There may still be numerical imprecision problems, but let's leave that
> for now; no need to make it any more complex than it needs to be.
>
> So the sufficient data would seem to be: an ordering of c candidates,
> plus (c-2) scale values. There are c-2 of these because in a
> two-candidate election, there's no meaningful ratio between the favorite
> and the least favorite, as the affine (or even linear) scalings make
> their utilities completely ambiguous. All we know is that the favorite
> is better than the least favorite.
>
> But that this is sufficient suggests (at least at first glance) that
> cyclical preferences may occur. And perhaps this is a natural
> consequence of being limited to affine/linear scalings of utilities. I'd
> have to think more about what such a cycle "means": I would *guess* it's
> something like that any candidate in the cycle can win depending on what
> the affine constants are (as opposed to say, a Pareto-domination
> situation where everybody ranks A>B>C so that whatever the voters'
> constants are, A is a better candidate than B).
>
> > If b_1 = 0.5 and b_2 = 0.5, we propose they trade "one chance of A
> > winning and one chance of C winning" for "two chances of B winning". The
> > voters are actually completely neutral toward this trade, though as an
> > outsider I much prefer the lowered entropy. b_1 and b_2 would have to be
> > strictly greater than 0.5 for both voters to be excited about the
> > transaction.
> >
> > It works for other fractions, too. If b_1 = 0.6 and b_2 = 0.4, the
> > utility-neutral trade is "one chance of A winning and one chance of C
> > winning" for "1.666 chances of B winning and 0.333 chance of C winning".
> > (If b_1 > 0.6 and b_2 > 0.4, then the trade is positive-sum.)
> >
> >
> >
> > Can we actually set up this market (declared-strategy style), let all
> > voters submit their three-candidate ranking and a utility (between 0 and
> > 1) for their middle candidate, then we simulate all the trades and come
> > up with a final, optimal lottery?
> >
> > A > B > C voters and C > B > A voters would trade with each other. A > C
> >  > B voters and B > C > A voters would trade with each other. B > A > C
> > voters and C > A > B voters would trade with each other.
> >
> > It seems obvious to me that in an election where 50% of the voters want
> > A > B > C and 50% want C > B > A, if you can get a number from each
> > voter on where B is on their scale from 0 to 1, that information is
> > useful AND meaningful. I mean, if all the voters say 0.9 then clearly we
> > should just elect B as the compromise candidate. And if all the voters
> > say 0.1, then giving them a 50/50 lottery between A and C is probably
> > the best we can do. Why should we decline to collect and use this
> > "utility of the middle candidate" information?
> >
> > How can we simulate those trades? Line up all the A > B > C voters in
> > order of decreasing "B" utility and line up all the C > B > A voters in
> > order of increasing "B" utility and match up the two lines somehow? What
> > about the mismatch in length?
>
> I think the best approach in such a case would be to consider the method
> as an optimization problem: then trading should result in much less path
> dependence because the solver can consider the problem globally.
>
> This optimization problem would contain a penalty on entropy, but I'm
> not sure what more. The obvious choice would be to maximize social
> utility, but since we only have affine transformations (or linear,
> depending on whether the natural zero idea is tenable), we can't extract
> social utility from the ballots, so I'm not sure how to do that.
>
> (Well, you could fit a function of the ballots to maximize VSE under
> say, a spatial model. But I'm not sure what such a function would look
> like and if it would be generalizable. It sounds a bit ugly an approach.)
>
> I'd think the trade idea would produce constraints on the allowed
> solutions that we're optimizing over. Suppose that an A>B>C voter trades
> with a C>B>A voter to decrease both the chance of A and C winning and
> increase the chance of B winning. Then there exists a point where the
> A>B>C voter has given up enough probability that any further increase in
> B's chance of winning only lowers that voter's expected utility. At that
> point, the voter is indifferent between an epsilon more of B winning,
> and e/2 more of A and C winning. So that's a marginal constraint.
>
> I'm not sure how the optimization method should find out who each voter
> would consider trading with, though, and how to handle more complex
> trades. In a market, that's usually handled through some kind of money,
> but there's no money here. Someone who's better than me at
> microeconomics could probably figure that out.
>
> (But the good news, I think, is that the marginal constraints only need
> the lottery information, because they're about indifference between two
> lotteries.)
>
> > One problem I see is that whenever a transaction is perfectly fair, it
> > is utility-neutral, and the two parties are indifferent to whether the
> > trade actually happens. A trade that is positive-sum, on the other hand,
> > has some surplus utility and we could be unfair about which voter
> > captures it.
> >
> > If b_1 = b_2 = 0.6, then trading "one chance of A and one chance of C"
> > for any of the following would be utility-neutral or -positive for both
> > voters:
> >
> > - 1.666 chances of B winning and 0.333 chance of C winning
> > - 2 chances of B winning
> > - 1.666 chances of B winning and 0.333 chance of A winning
> >
> > Obviously, as neutral election administrators, we should choose the
> > middle option. But I think this illustrates the opportunity for
> > strategic voting in this system. If you, as a voter, have perfect
> > information about the other voters, maybe your utility for B is 0.6 and
> > you see that you can decrease your declared utility for B to 0.400001
> > and still get a trade. It will be a trade the other person barely agrees
> > to, and you'll maximize your utility, capturing all the surplus from the
> > transaction.
>
> Yes, all of the above remarks are in the context of an honest system. The
> deterministic system that I'm interested in would be subject to
> Gibbard's earlier theorem, and the lottery method would probably be
> subject to the later one.
>
> I'm not sure if accounting for strategy can be done as easily through
> the optimization framework. I think there are fields of study on this,
> from an economic perspective, on how to add constraints that make
> certain types of strategy pointless (at the expense of producing some
> lower utility solution), but I don't know anything about them.
>
> > Is there something else we could do as election administrators to make
> > dishonesty less profitable? Does it depend on the way we line up and
> > match up the opposing voters? If we always try to make sure that we
> > match up voters with a "sum of compromise utility" that is greater than
> > one but as small as possible, does that help somehow?
> >
> > Perhaps in a large election, it will be difficult to know enough
> > information about the other voters and the benefits will be small enough
> > that voters will just be honest?
> >
> > Or maybe we just discard the concept of matching up individual voters,
> > look at all the data, and come up with a "market-clearing price" for
> > turning A and C chances into B chances? Does that fix anything, or just
> > leave a lot of positive-sum transactions unfulfilled?
>
> There's the generalized SARVO approach: arrange the voters in some
> random order. The first voter goes first, the second voter optimizes his
> vote given the first's ballot; the third optimizes wrt the two who went
> before, and so on. Suppose the inner method returns a lottery. Choose
> the lottery that is the expectation of all these lotteries. More complex
> methods could try to use the minmax game AI algorithm for optimizing the
> ballots. (The idea for both is related to the concept of "averaging over
> clairvoyance", which is used in imperfect information game AI when there
> are no information-gathering moves.)
>
> But your idea might be both simpler to program and more comprehensible.
> For a deterministic method, if you can get enough voters to compromise,
> it doesn't matter if the other voters strategize away from the
> compromise, because the compromise candidate will win anyway. So first
> choosing voters who have the most to gain by compromising but not so
> much that they could've misrepresented their ballots (by strategy) and
> got a better result, might work.
>
> Other ideas: perhaps there is some kind of Condorcet analog, i.e.
> treating the ranks plus (n-2) factors as biasing the preferences. If so,
> we could then use standard Condorcet methods on the result and get
> something that's "majoritarian by utility" -- although it wouldn't
> exactly be by utility, since the factors don't set utility. But perhaps
> one could prove say, that if there's a lottery that involves only some
> candidates, and everybody (or some large enough fraction by strength of
> preference) prefers every lottery containing only candidates in that set
> to lotteries containing everybody, then that set (an analog of the Smith
> set) should win.
>