[EM] [CES #9004] Before Voting Methods and Criteria: Outcome Design Goals (long)

Abd ul-Rahman Lomax abd at lomaxdesign.com
Mon Jul 1 13:56:46 PDT 2013


At 11:03 AM 7/1/2013, Jameson Quinn wrote:
>Benjamin:
>
>You are right to point out that we should have some discussion of 
>basic principles to underly our discussion of specific systems. Here 
>are my own views:
>
>1. There is no single easy philosophical answer to these questions. 
>There will always be those who, like Clay, would rather grab the 
>quick self-consistent certainty of interpersonally-summable 
>utilities; and it is true that this point of view offers many 
>advantages (for instance, immediate immunity to probabilistic or 
>Arrovian money-pump arguments); but it also has serious 
>philosophical critiques. There is a continuum of possible 
>self-consistent answers to the breadth-versus-depth question that 
>runs from maximin to summed-utility to maximax, and if you allow 
>certain kinds of status-quo bias, the possibilities are even broader.

My take on this is that the hypothesis of interpersonally-summable 
utilities is useful but not "the truth." What the Bayesian Regret 
studies do is to show how a voting system performs *if* there are 
summable utilities. With proper design, those simulated utilities can 
be quite reasonable.

A voting system that performs poorly with *known utilities* is not 
likely to perform well with unknown ones. So BR studies are the best 
measure we have, so far, for assessing voting system performance.
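
As a rough illustration of how such a study works (a hypothetical 
sketch, not Warren Smith's actual simulator; the uniform utility model 
and the two rules are my own simplifications), Bayesian Regret is just 
the average shortfall between the elected winner's summed utility and 
the best achievable:

```python
import random

def bayesian_regret(rule, n_voters=99, n_cands=5, trials=2000):
    """Average social-utility shortfall of `rule` versus the ideal winner.

    Utilities are drawn i.i.d. uniform on [0, 1] -- the simplest
    possible utility model, chosen only for illustration.
    """
    total = 0.0
    for _ in range(trials):
        utils = [[random.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        social = [sum(v[c] for v in utils) for c in range(n_cands)]
        best = max(social)            # summed utility of the ideal winner
        winner = rule(utils)          # the rule sees the true utilities
        total += best - social[winner]
    return total / trials

def range_rule(utils):
    """Honest Range: each voter normalizes utilities onto [0, 1];
    the highest vote total wins."""
    n_cands = len(utils[0])
    scores = [0.0] * n_cands
    for v in utils:
        lo, hi = min(v), max(v)
        span = (hi - lo) or 1.0
        for c in range(n_cands):
            scores[c] += (v[c] - lo) / span
    return scores.index(max(scores))

def plurality_rule(utils):
    """Plurality: each voter votes only for their favorite."""
    n_cands = len(utils[0])
    counts = [0] * n_cands
    for v in utils:
        counts[v.index(max(v))] += 1
    return counts.index(max(counts))
```

Under this toy model, honest normalized Range shows much lower regret 
than Plurality, consistent with the pattern the published BR studies 
report.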

Given this, Range voting would seem to be an ideal voting system, 
because, on its face, it sums those utilities. In fact, there is a 
translation process between internal utilities and actual votes that 
introduces distortion into the system. The first step is "normalization," 
where the realistic options are mapped onto the full vote scale. If 
there is simple normalization only, then voters would essentially 
disempower themselves when they have a very strong preference, a *must 
have* or *must defeat.* So there is what I've called "magnification," 
where voters stretch or compress the voted preference 
strengths (the differences between two voted values) in order to 
match assessments of relative value as adjusted for election probabilities.

All this means that the actual range vote sum is not necessarily the 
actual social utility maximizer. In particular, if the voters don't 
have good data on election probabilities, and, more than that, have 
*incorrect information*, they may vote foolishly. This problem is 
handled if the system uses repeated elections, because, after the 
first election, they should have much better information about what 
is probable, if the voting system does allow the disclosure of that 
information.

>2. However, for practical terms, we are not likely to get a better 
>metric for voting systems than total utility. A maximin metric would 
>philosophically give veto power to a single voter; an intermediate 
>metric would probably be equivalent to summed-utility if you rescale 
>utility by some monotic function; and any metric involving status 
>quo bias extrinsic to the voters themselves, is a horrible compass 
>for system design. So while I don't share Clay's easy certainty 
>about the impeccable solidity of the philosophical foundations here, 
>I do agree with him that this is the best single measure of outcome quality.

And I agree on that as well. This leaves two issues:

1. The practicality of implementation.
2. Majority consent.

The latter has often been seriously neglected. IRV was sold on a 
claim that it would find majorities. Essentially, FairVote lied, or 
allowed people, in some cases, to be deceived by naive expectations. 
We don't know that FairVote *actually corrupted the committee that 
wrote the voter information booklet* that led voters to approve 
"Ranked Choice Voting" in San Francisco, but we can be quite sure 
that FairVote took no steps to correct it. What was said there, which 
was just plain wrong, has been said by them in many places, though 
they gradually became more careful. They now state this "majority" 
claim in such a way that naive voters won't notice the difference, 
but if you call them on it, what they say is defensible. It's just 
misleading, not directly a lie.

The biggest problem in the way of implementing Range is lack of any 
test for majority consent. Range (Score) is a *plurality method.* 
With all the anti-Plurality hype, that's totally overlooked. All that 
has happened with Range is that fractional votes are allowed.

While, in theory, it could take an endless series of repeated elections 
to find a majority, the probability of that is vanishingly low. 
People *do* make the necessary adjustments, unless the majority *does 
not want to complete the process*, in which case that is the majority 
decision. Who is to say that this is wrong?

However, my sense is that a two-round system with intelligent choice 
of nominations for the second round can find a *true majority* almost 
all the time. And when it fails, the result would be close enough that 
the value of continuing the process would be less than the cost.
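
That kind of two-round test could look like this (a hypothetical 
sketch; actual proposals differ on how runoff nominees are chosen and 
on what counts as consent -- here, a majority rating the score winner 
above midrange):

```python
def range_with_majority_test(ballots, max_score=4):
    """First round of a two-round Range election.

    `ballots` is a list of dicts mapping candidate -> score.  Sum the
    scores; if a majority of voters rate the score winner above
    midrange, treat that as majority consent and elect.  Otherwise
    nominate the top two score-getters for a second round.
    """
    cands = list(ballots[0])
    totals = {c: sum(b[c] for b in ballots) for c in cands}
    ranked = sorted(cands, key=totals.get, reverse=True)
    winner = ranked[0]
    consents = sum(1 for b in ballots if b[winner] > max_score / 2)
    if consents * 2 > len(ballots):
        return winner, None                  # elected with majority consent
    return None, (ranked[0], ranked[1])      # hold a runoff between these two
```

The point of the structure is that the score sum alone never elects; 
it only nominates, unless an explicit majority has consented.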

>3. That doesn't make things easy, though. For instance, for a given 
>set of ballots, the Score result is usually (though not always!) the 
>best way to use the information given to try to maximize the 
>underlying utilities. But since voters will almost certainly vote 
>differently under different systems, that does not mean that Score 
>is the best way to ensure the best result from a pre-balloting point 
>of view. Or another instance: system A could give better results, 
>but system B might be more likely to be implemented and thus offer 
>more expected value over status quo C.

Score produces the socially optimal winner when the voters vote 
"sincerely," though "sincere" has never been adequately defined. If 
voters voted *absolute utilities*, this would be a tautology. But voters 
*will not vote absolute utilities* in any realistic election of the 
kinds we have been considering. Much more study needs to be done on 
how voting strategy affects system performance. Many of the studies 
that have been done used defective models of voting strategy that 
don't correspond to likely real voter behavior.

The BR study of Bucklin, if I'm correct, treated Bucklin as a ranked 
method, so simulated voters placed their first and second preferences 
in the first and second ranks. That is not how real Bucklin would be 
voted, nor how it historically was voted, and a mature electorate 
especially would not vote that way. Ranks can be skipped, and voters 
can also bullet vote. (I think about a third of voters in the early 
Bucklin elections bullet voted. That's quite low!)
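
For reference, Bucklin's round-by-round count can be sketched like 
this; representing each ballot as a list of sets (one per rank) allows 
skipped ranks, equal rankings in a rank, and bullet votes:

```python
def bucklin_winner(ballots, n_ranks=3):
    """Add in one rank at a time; the first candidate to reach a
    majority of ballots wins.

    Each ballot is a list of sets of candidates, one set per rank,
    e.g. [{'A'}, set(), {'B', 'C'}] -- an empty set is a skipped
    rank, and [{'A'}] with nothing after it is a bullet vote.
    Returns (winner, rank_reached), or (None, n_ranks) if no
    majority is ever found.
    """
    majority = len(ballots) // 2 + 1
    counts = {}
    for rank in range(n_ranks):
        for b in ballots:
            if rank < len(b):
                for cand in b[rank]:
                    counts[cand] = counts.get(cand, 0) + 1
        leaders = [c for c, n in counts.items() if n >= majority]
        if leaders:
            return max(leaders, key=counts.get), rank + 1
    return None, n_ranks
```

With five ballots -- two bullet votes for A, two "B then A", and one 
"C then B" -- nobody has a majority at rank one, but adding rank two 
gives A four votes and elects.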

The recent EMAV proposal makes this explicit: the ballot is a full Range 
4 ballot. Sincere normalized utilities make sense. Not betraying the 
favorite makes sense. I've seen no clear evidence that voters 
*who have adequate information* would just bullet vote, unless they 
have a strong preference, in which case the bullet vote *is an 
expression of that preference.*

So this method would start to collect full Range data, for the first 
time. We cannot force voters to disclose preferences, but they will, 
where those preferences are not too strong to allow expression. The 
method makes the expression *reasonably safe.* 




More information about the Election-Methods mailing list