[EM] Strategy and Bayesian Regret

Andy Jennings elections at jenningsstory.com
Fri Oct 28 07:24:16 PDT 2011


On Fri, Oct 28, 2011 at 2:43 AM, Jameson Quinn <jameson.quinn at gmail.com> wrote:

> What makes a single-winner election method good? The primary consideration
> is that it gives good results. The clearest way to measure the quality of
> results is simulated voter utility, otherwise known as Bayesian Regret (BR).
>
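For concreteness, the honest-voter BR calculation can be sketched as a small Monte Carlo simulation. The following is only an illustration, under assumed conditions: voter utilities drawn i.i.d. uniform on [0, 1], and honest Range voters who linearly rescale their utilities onto the full ballot range.

```python
import random

random.seed(42)

def bayesian_regret(n_elections=500, n_voters=99, n_cands=5):
    """Estimate Bayesian Regret for honest Range voting by Monte Carlo.

    Illustrative assumptions: utilities are i.i.d. uniform on [0, 1];
    honest voters rescale utilities so their favorite scores 1 and their
    least favorite scores 0. Regret is the societal-utility gap between
    the best candidate and the actual winner, averaged over elections.
    """
    total = 0.0
    for _ in range(n_elections):
        utils = [[random.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        scores = [0.0] * n_cands
        for u in utils:
            lo, hi = min(u), max(u)
            for c in range(n_cands):
                scores[c] += (u[c] - lo) / (hi - lo)  # honest Range ballot
        winner = scores.index(max(scores))            # highest total wins
        social = [sum(u[c] for u in utils) for c in range(n_cands)]
        total += max(social) - social[winner]         # regret this election
    return total / n_elections
```

A lower average means better results; the same loop, with a different winner-selection rule plugged in, gives a BR estimate for any other method.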
> This is not the only consideration. But for this message, we'll discount
> the others, including:
>
>    - Simplicity/voter comprehension of the system itself
>    - Ballot simplicity
>    - Strategic simplicity
>    - Perceived fairness
>    - Candidate/campaign strategy incentives
>
>
> Calculating BR for honest voters is relatively simple, and it's clear that
> Range voting is best. But how do you deal with strategy? Figuring out what
> strategies are sensible is the relatively easy part: whether it's
> first-order rational strategies (as James Green-Armytage has worked
> out <http://www.econ.ucsb.edu/~armytage/svn2010.pdf>)
> or n-order strategies under uncertainty (as Kevin Venzke does) or even just
> simple rules of thumb justified by some handwaving (as in Warren Smith's
> original BR work over 10 years ago), we know how to get this far. But once
> you've done that, you still have to make some assumptions about how many
> voters will use strategy. There are several ways to go about this. In order
> of increasing realism, these are:
>
>    1. Assume that voters are inherently strategic or honest and do not
>    respond to strategic incentives. Thus, the number of voters using strategy
>    will be the same across different systems. Warren Smith's original BR work
>    with IEVS seems to have shown that Range is still robustly best under these
>    conditions. Although I am not 100% convinced that his definition of strategy
>    was good enough, the results are probably robust enough that they'd hold up
>    under different definitions.
>    2. Avoid the question, and just look at strategic worst cases. I count
>    this as more realistic than the above, even though it's just a special case
>    of 100% strategy, because it doesn't give unrealistically precise numbers.
>    But of course, if I say that method X has a BR score somewhere between Y and
>    Z, and method A has a BR between B and C, then if Y < B < Z < C I cannot
>    conclude that X is better than A. So you lose the ability to answer the
>    important question, "which method is better?"
>    3. Try to use some rational or cognitive model of voters to figure out
>    how much strategy real people will use under each method. This is hard work
>    and involves a lot of assumptions, but it's probably the best we can do
>    today.
>    4. Try to get real data about how people would behave in high-stakes
>    elections. This is extremely hard, especially because low-stakes polls may
>    not be a valid proxy for high-stakes elections.
>
> As you might have guessed, I'm arguing here for approach 3. Kevin Venzke has
> done work in this direction, but his assumptions --- that voters will look
> for first-order strategies in an environment of highly volatile polling data
> --- while very useful for making a computable model, are still obviously
> unrealistic.
>
> What kind of voter strategy model would be better? That is, what factors
> probably affect a voter's decision about whether to be strategic? I can
> think of several. I'll give them in order from easiest explanation to
> hardest; the order below has nothing to do with the relative importance.
>
> First, there's the cognitive difficulty of strategizing versus voting
> honestly. In a system like SODA, an honest bullet vote is much simpler than
> a strategic explicit truncation, so we can expect that this factor would
> lead to less strategy. In a ranked system, it is arguably easier to
> strategically exaggerate the perceived frontrunners (Warren's "naive
> exaggeration strategy" or NES) than to honestly rank all the candidates, so
> we might expect this factor to increase strategizing. Note that the
> cognitive burden for strategy is reduced if defensive and offensive
> strategies are the same. For instance, under Range, exaggeration is always a
> good idea, whether it is used offensively or defensively.
>
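To see how cognitively cheap exaggeration can be, here is one simple formalization of approval-style exaggeration on a Range ballot. The midpoint threshold between the two perceived frontrunners is an illustrative assumption on my part, not Warren's exact definition of NES.

```python
def exaggerated_ballot(util, f1, f2):
    """Approval-style exaggeration of a Range ballot (a crude NES stand-in).

    util: the voter's true utilities, one per candidate.
    f1, f2: indices of the two perceived frontrunners, assumed known
    from polling. Candidates strictly above the frontrunner-utility
    midpoint get the max score; everyone else gets the min score.
    """
    threshold = (util[f1] + util[f2]) / 2
    return [1.0 if u > threshold else 0.0 for u in util]
```

For example, with true utilities [0.9, 0.1, 0.5, 0.7] and frontrunners 0 and 1, the threshold is 0.5 and the ballot comes out [1.0, 0.0, 0.0, 1.0]. Note how little computation this takes compared to placing every candidate honestly on the scale.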
> (Note: This overall cognitive factor is probably most important for "lazy
> voters", and such "lazy voters" are also probably open to strategic and/or
> honest advice from peers, so the cognitive factor is perhaps not too
> important overall.)
>
> Second, there's offensive strategy. The more likely it is that strategy
> will be advantageous against honest opponents, and the more advantageous it
> is likely to be, the more strategy people will use. The first question has
> been addressed by the Green-Armytage paper <http://www.econ.ucsb.edu/~armytage/svn2010.pdf>;
> it appears that IRV is relatively strategy-resistant, Condorcet is middling,
> and Range and Approval are likely to be subject to strategy. But remember,
> the whole point of this discussion is that strategy is not so much a problem
> in itself, as an input to the model for determining BR. If Approval gives
> better results under 100% strategy than IRV does with 0%, then Approval is
> still a better system.
>
> Third, there's defensive strategy. Basically, this means looking at the
> probability that the result will be subject to strategy from some other
> group, and seeing if you can defend against that.
>
> Fourth, there's peer pressure. If you feel that everyone else is
> strategizing, you are more likely to do so yourself. This raises the
> possibility of positive feedback and multiple equilibria.
>
> It is crucially important to understand that defensive strategy is not like
> offensive strategy in terms of peer pressure. If you think that your allies
> are unlikely to back you up on your offensive strategy, you may decide it's
> pointless to attempt it. But some people will use defensive strategy merely
> as insurance. Thus, there is more likely to be a "floor" for defensive
> strategy, a certain number of people who use it even if nobody else is. But
> it is also true that the more people use strategy, the more people will
> worry about defensive strategy. Thus, a method where defensive strategies
> are likely to be possible is more likely to be driven to a high-strategy
> equilibrium than one where only offensive strategies are an issue.
>
> So, what does all this mean for BR calculations? Well, first, we should try
> to characterize the different systems in terms of the first three factors
> above. For the cognitive factor, can we develop some objective measure of
> how cognitively difficult it is to work out a good strategy under different
> systems? For the offensive strategy factor, we can thank Green-Armytage for
> making a good first step in giving the *probability* of strategic
> vulnerability, but we should follow up by working out the *amount* of
> strategic advantage a voter could expect. For the defensive strategy factor,
> Kevin Venzke's work gives some interesting clues, but more work is needed to
> isolate defensive factors.
>
> But even once we have all that nailed down, we need a voter model to turn
> it into a BR measure for each system. Of course, any such model will be open
> to accusations of bias, as it will include varying amounts of strategy under
> different voting methods. Range voting advocates in particular might be
> motivated to assume that strategy percentage will be the same under
> different systems. But it's important to understand that no assumption here
> is unbiased; without real-world data, assuming equal strategy is at least as
> biased as a model which accounts for the factors above.
>
> So in the end, I'm inclined to bite the bullet, and make arbitrary
> assumptions for now. I'd guess that a method where the factors above favor
> strategy --- for instance, Range voting, where all three of the
> a-priori-quantifiable factors favor strategy --- would lead to a high degree
> of strategy, something around 75%. Meanwhile, a method where the factors
> favor honesty --- such as, I'd argue, SODA --- might have a low amount of
> strategy, something under 25%. Something like Condorcet or Majority Judgment
> has the factors somewhere in the middle, but it would be harder to guess
> what that would mean in practice; I'd guess that peer pressure feedback
> would mean that either <25% or ~75% would be more likely than an unstable
> middle value like 50%.
>
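Those arbitrary percentages can be plugged straight into a simulation. The sketch below compares average regret across strategy fractions for Range voting; the choice of candidates 0 and 1 as frontrunners and the midpoint exaggeration rule are modeling assumptions of mine, not anything established.

```python
import random

random.seed(7)

def mean_regret(frac_strategic, trials=200, n_voters=99, n_cands=5):
    """Average regret for Range voting with a given fraction of strategists.

    Illustrative assumptions: utilities are i.i.d. uniform on [0, 1];
    strategists exaggerate against candidates 0 and 1 (a stand-in for
    poll-derived frontrunners); honest voters cast linearly rescaled
    utilities as their ballots.
    """
    total = 0.0
    for _ in range(trials):
        utils = [[random.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        scores = [0.0] * n_cands
        for i, u in enumerate(utils):
            if i < frac_strategic * n_voters:
                t = (u[0] + u[1]) / 2                 # frontrunner midpoint
                ballot = [1.0 if x > t else 0.0 for x in u]
            else:
                lo, hi = min(u), max(u)
                ballot = [(x - lo) / (hi - lo) for x in u]
            for c in range(n_cands):
                scores[c] += ballot[c]
        winner = scores.index(max(scores))
        social = [sum(u[c] for u in utils) for c in range(n_cands)]
        total += max(social) - social[winner]
    return total / trials
```

Comparing, say, mean_regret(0.75) for Range against mean_regret(0.25) for a method whose factors favor honesty is exactly the kind of apples-to-apples number this argument calls for.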
> Again, the end "quality" number is not strategy percentage, but the
> resulting BR. So even if range does lead to more strategy than any other
> system, it could still end up being the best system. I'd like to see real
> numbers on this. Any assumptions will be biased, but that doesn't make the
> numbers useless.
>

> Jameson
>

I agree with this analysis, and would also like to see real numbers on this.

~ Andy