[EM] More still on utilitarian methods
Kristofer Munsterhjelm
km_elmet at t-online.de
Thu Oct 14 05:39:14 PDT 2021
Suppose we have a two-candidate election. Assuming no second-order
effects (e.g. voters deliberately introducing randomness to keep
candidates on their toes), every voter who maximizes his personal
utility would submit a preference of one of the following types:
(1, 0): the preferred lottery is one where A wins with certainty
(0, 1): prefers B to win with certainty
(1/2, 1/2): indifferent to the outcome (though such a voter probably
wouldn't show up)
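To make that concrete, here's a tiny Python sketch (the function name
is mine) of how the preferred lottery follows from the sign of the
utility difference:

    def lottery_preference(u_a, u_b):
        # Preferred lottery (p_A, p_B) for a voter with utilities u_a, u_b.
        if u_a > u_b:
            return (1.0, 0.0)   # A with certainty
        if u_b > u_a:
            return (0.0, 1.0)   # B with certainty
        return (0.5, 0.5)       # indifferent: every lottery is equally good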
Then considering my categorization of cardinal methods: if we could
somehow extract the utilities directly, and were okay with dictatorship
of the strongest feelings, then each voter would rate the candidates
according to utility on some common scale, and the candidate who
maximizes total utility wins. But if that's not possible or desirable, and
we're limited to affine scaling, then:
A method that's of type two (enforcing OMOV all the time at the cost of
utility) would reduce to majority rule here (after the indifferent
voters are removed).
A method that's of type three is like Range: in a two-candidate
(Mushroom, Pepperoni) election, the meat-eating majority chooses to give
up some power by rating Mushroom not at 0 but at, say, 0.8.
This, I *think*, requires more than just lottery information; it requires
a sort of common standard so that the majority knows that 1 is very
tasty and 0.8 is sufficiently tasty[1]. In any case, strategic voters
would not give away some voting power this way unless they're strategic
in a much broader sense.
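To put some (entirely made-up) numbers on the pizza example, here's a
quick Python sketch of how the majority rating Mushroom at 0.8 instead
of 0 hands it the win:

    groups = [
        (60, {"Pepperoni": 1.0, "Mushroom": 0.8}),  # majority gives up some power
        (40, {"Mushroom": 1.0, "Pepperoni": 0.0}),
    ]

    totals = {}
    for count, ballot in groups:
        for candidate, rating in ballot.items():
            totals[candidate] = totals.get(candidate, 0.0) + count * rating

    # Pepperoni gets 60, Mushroom gets about 88, so Mushroom wins;
    # had the majority rated Mushroom at 0, Pepperoni would have won 60-40.
    print(max(totals, key=totals.get))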
We could generalize Range to any p-norm; call it Lp-cumulative voting.
In the "Range-ish" version, there is a maximum permitted p-norm, and any
vote whose norm exceeds this threshold is invalid. This is a type three method.
Then there's the automatically renormalized version, where the voter's
ballot (whatever its type) is normalized so that its p-norm is exactly
the threshold. That's a type two.
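A rough sketch of both variants, assuming (my choice, not essential)
ratings in [0, 1] and a norm threshold of 1:

    def p_norm(ballot, p):
        # Lp norm of a list of ratings.
        return sum(abs(x) ** p for x in ballot) ** (1.0 / p)

    def accepts(ballot, p, threshold=1.0):
        # "Range-ish" (type three): a ballot is only valid if its Lp norm
        # doesn't exceed the threshold; nothing forces the voter to use it all.
        return p_norm(ballot, p) <= threshold

    def renormalize(ballot, p, threshold=1.0):
        # Automatically renormalized (type two): rescale the ballot so its
        # Lp norm is exactly the threshold.
        norm = p_norm(ballot, p)
        if norm == 0:
            return list(ballot)  # an all-zero (indifferent) ballot can't be rescaled
        return [x * threshold / norm for x in ballot]

    # p = 1 is ordinary cumulative voting; large p approaches Range's max norm.
    print(accepts([0.6, 0.3, 0.1], p=1))      # True: the L1 norm is 1
    print(renormalize([2.0, 1.0, 0.0], p=2))  # rescaled so the L2 norm is 1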
For Range, the type-three method is Range itself. Since the norm for
Range is the max norm, the automatically normalizing version adjusts the
ratings so they exactly fit the scale (e.g. a 0 - 10 Range variant would
have at least one candidate rated 10). Just how it does that isn't given
- it could clamp values or scale linearly. If it's DSV, it would scale
in such a way as to maximize the voter's personal utility.
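For concreteness, here are two of the possible adjustments on a 0-10
scale: clamping, and min-max linear rescaling (the DSV version would
instead pick whatever rescaling maximizes the voter's expected utility,
which needs information not shown here). The helper names are mine:

    def clamp(ballot, lo=0.0, hi=10.0):
        # Clamp each rating into [lo, hi]; the endpoints need not be used.
        return [min(hi, max(lo, x)) for x in ballot]

    def rescale(ballot, lo=0.0, hi=10.0):
        # Linear rescaling: the voter's lowest rating becomes lo and the
        # highest becomes hi, so at least one candidate ends up rated 10.
        b_min, b_max = min(ballot), max(ballot)
        if b_min == b_max:
            return [(lo + hi) / 2.0 for _ in ballot]  # indifferent ballot: pick a convention
        return [lo + (hi - lo) * (x - b_min) / (b_max - b_min) for x in ballot]

    print(rescale([3.0, 7.0, 5.0]))  # [0.0, 10.0, 5.0]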
As for Condorcet methods: since they reduce to majority rule in the
two-candidate case, most cardinal Condorcet variants would be type two.
It would be possible to make a type three Condorcet method by saying
that if a voter rates A at 1 and B at 0.8, then that voter only
adds 0.2 of a vote to the pairwise contest of A vs B. But that again
requires more than just lottery information.
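As a sketch of that idea (my own formalization of it), build the
pairwise matrix from rating differences instead of whole votes:

    from itertools import combinations

    def weighted_pairwise(ballots, candidates):
        # Each ballot is a dict of ratings. A voter who rates X above Y adds
        # only rating(X) - rating(Y) of a vote to the X vs Y contest, rather
        # than a full vote as in ordinary Condorcet.
        matrix = {(x, y): 0.0 for x in candidates for y in candidates if x != y}
        for ballot in ballots:
            for x, y in combinations(candidates, 2):
                diff = ballot[x] - ballot[y]
                if diff > 0:
                    matrix[(x, y)] += diff
                elif diff < 0:
                    matrix[(y, x)] -= diff
        return matrix

    # The voter above: A at 1, B at 0.8, contributing about 0.2 to A vs B
    # and nothing to B vs A.
    print(weighted_pairwise([{"A": 1.0, "B": 0.8}], ["A", "B"]))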
Perhaps, then, three candidates is the minimum where type two cardinal
methods start to differ from ordinal ones; at least if based on a
Condorcet logic. But how? I would imagine there's a tension between
strategy resistance (possibly modeled by Nash equilibria) and honest
results. Suppose honesty. Then what lottery information is available?
Each voter who is not indifferent to the outcome has some candidate
where he's indifferent between electing that candidate with certainty
and a nontrivial lottery of the other two. Then what?
I'm kinda just tinkering, but one possible way is to use Lp-cumulative
voting restricted to this tuple. Suppose we want to reconstruct a Range
ballot for the three candidates, and suppose a particular voter's
preference is A>B>C so that B is the candidate whose certain election is
equivalent to a lottery of A and C. By normalization, we can set the
rating of A to 1 and that of C to 0. Then B's rating is p if he's
indifferent between the lotteries (p, 0, 1-p) and (0, 1, 0). Then for
any given three-tuple {A,B,C}, the winner is the Range winner of the
thus normalized ballots.
If there exists a candidate W such that he wins every three-tuple he's in,
then he's the "3-Condorcet" winner and wins outright. (Similar notions
would be possible for the Smith set.) Honesty might not be the best
policy here, but it would mitigate >3-candidate Burr dilemmas. It
wouldn't fix three-candidate ones, though... so more thinking may be
needed.[2]
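Here's a rough end-to-end sketch of this. Purely to generate the
lottery indifference points, I'm assuming access to each voter's
underlying utilities; in a real method only the indifference points
themselves would be elicited, and all the names below are mine:

    from itertools import combinations

    def triple_ballot(utilities, triple):
        # Normalized ballot over one triple: best of the triple gets 1,
        # worst gets 0, and the middle candidate gets the probability p at
        # which the voter is indifferent between electing him for sure and
        # the lottery p * best + (1 - p) * worst.
        top, mid, bottom = sorted(triple, key=lambda c: utilities[c], reverse=True)
        span = utilities[top] - utilities[bottom]
        if span == 0:
            return {c: 0.5 for c in triple}  # indifferent over this triple
        p = (utilities[mid] - utilities[bottom]) / span
        return {top: 1.0, mid: p, bottom: 0.0}

    def triple_winner(voter_utilities, triple):
        # Range winner of the reconstructed ballots, restricted to this triple.
        totals = {c: 0.0 for c in triple}
        for utilities in voter_utilities:
            ballot = triple_ballot(utilities, triple)
            for c in triple:
                totals[c] += ballot[c]
        return max(totals, key=totals.get)

    def three_condorcet_winner(voter_utilities, candidates):
        # A candidate who wins every three-tuple he's in, if any;
        # assumes at least three candidates.
        for w in candidates:
            others = [c for c in candidates if c != w]
            if all(triple_winner(voter_utilities, (w,) + pair) == w
                   for pair in combinations(others, 2)):
                return w
        return None

With only three candidates, three_condorcet_winner just returns the
Range winner of the single reconstructed triple, which is where the
concern in [2] comes in.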
Still, I think I've at least determined a bit more about the shape of
this particular elephant :-)
[1] The drawback of such a common standard, even if the voters are
completely honest, is that later experiences may recalibrate it. E.g.
suppose someone has only eaten mass-produced frozen pizzas and rates them
1 because they're the best he knows; then later he visits Italy and gets
a proper pizza. Now he'll have to readjust what 1 means. The voters may
even be confused about which standard others are using, which is the
problem of incommensurability.
[2] I'm thinking that the way to solve three-candidate Burr dilemmas
would have to rely on a plain Condorcet back-up - i.e. that the STAR
proponents have got at least that right. But then the {A,B,C} outcome
can't simply be the Range winner of the three with reconstructed
ballots. But we wouldn't want to simply reduce entirely to normal
ordinal Condorcet either. What's the nature of this tradeoff?