[EM] [CES #4437] Re: Looking at Condorcet

Dave Ketchum davek at clarityconnect.com
Fri Feb 3 18:42:48 PST 2012


On Feb 3, 2012, at 12:31 AM, Clay Shentrup wrote:

> As far as I can tell, no amount of evidence will change DaveK's  
> mind. But it's worth pointing out that Score Voting is superior to  
> Condorcet in essentially every way.
>
> * Lower Bayesian Regret with any number of strategic or honest voters
> NOTE: Some would argue that maybe people are more honest with  
> Condorcet, but if you look at this graph, the difference would have  
> to be pretty enormous in order for Condorcet to outperform Score  
> Voting (http://scorevoting.net/BayRegsFig.html). And there's some  
> evidence it's actually the opposite — i.e. Score Voting inspires  
> more honesty.

No comments on BR for now.
>
>
> * Is simpler for voters.
>   1) Ranked ballots tend to result in about 7 times as many spoiled  
> ballots, whereas Score Voting REDUCES ballot spoilage.

Huh!  Was there bias by the measurers?  Both have voters use numbers
for voting.  ANY number valid for Score could also be valid for
Condorcet, which needs no more than to read the numbers a voter
assigns to A and B and decide whether they say A>B, A=B, or A<B.
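
To illustrate, here is a rough Python sketch with made-up ballots -
the numbers on a score ballot already carry the A>B, A=B, or A<B
information a Condorcet count needs:

  # Rough sketch, hypothetical score ballot (0-9 scale assumed).
  def pairwise_preference(ballot, a, b):
      """Read one score ballot and say whether it means a>b, a=b, or a<b."""
      if ballot[a] > ballot[b]:
          return a + ">" + b
      if ballot[a] < ballot[b]:
          return a + "<" + b
      return a + "=" + b

  ballot = {"A": 9, "B": 5, "C": 5}
  print(pairwise_preference(ballot, "A", "B"))  # A>B
  print(pairwise_preference(ballot, "B", "C"))  # B=C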

IRV, by prohibiting equal ranks, creates more opportunity for spoilage.

>   2) Even voters who can cast a valid ranked ballot will typically  
> have no understanding of how the system works. E.g. we use Instant  
> Runoff Voting in San Francisco, and experiments (plus my own  
> experience asking around) have shown that the vast majority of people  
> cannot correctly describe how the system works, or correctly pick  
> the winner given a simplified hypothetical set of ballots. They  
> generally assume it uses the counting rules of Borda ("you get more  
> points the better your ranking is, and the most points wins"). So in  
> reality, the same thing would happen with Condorcet. Whereas the  
> principle behind Score Voting happens to match people's intuitive  
> expectations, so it is simpler in that they will tend to just  
> inherently understand it, the same way people understand restaurant  
> ratings on Yelp.

Discussing IRV is not especially helpful here since it is somewhat  
more complex than Condorcet.

For Condorcet the basic task is simply saying which candidates are
liked best via assigned ranks - if more voters vote A>B than vote B>A,
then B cannot be the CW (the candidate liked better than each other
candidate).

Score ratings say a bit more - how much better A is liked than B.
Rating gets tricky when deciding how much: when a voter wants to say
A>B>C, decreasing the rating for B increases A's margin over B, but it
also shrinks the room left for B>C, and thus increases the possibility
of C beating B (if other voters rate C about equal to B, this change
could make C win the race).
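
A small made-up example under plain score summation (numbers invented
purely for illustration) shows the risk:

  # Hypothetical 3-voter Score election, 0-9 scale.  Our voter wants A>B>C
  # and considers lowering B's rating from an honest 7 to a strategic 1.
  others = [{"A": 2, "B": 6, "C": 9},
            {"A": 5, "B": 6, "C": 8}]

  def totals(my_ballot):
      return {c: my_ballot[c] + sum(b[c] for b in others) for c in my_ballot}

  print(totals({"A": 9, "B": 7, "C": 0}))  # {'A': 16, 'B': 19, 'C': 17} - B wins
  print(totals({"A": 9, "B": 1, "C": 0}))  # {'A': 16, 'B': 13, 'C': 17} - C wins

Lowering B does help A against B, but here it hands the win to C, the
voter's least-liked candidate.
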
>
> * Is MASSIVELY simpler for election officials.

Condorcet is not especially complex - read the ranks from each ballot,
count which candidate is preferred in each pair of candidates, and then
note which one is the CW.
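
A rough Python sketch of that procedure, with hypothetical ranked
ballots (lower number = better rank; equal ranks allowed):

  from itertools import combinations

  ballots = [{"A": 1, "B": 2, "C": 3},
             {"A": 1, "B": 2, "C": 3},
             {"B": 1, "C": 2, "A": 3},
             {"C": 1, "A": 2, "B": 3},
             {"B": 1, "A": 2, "C": 3},
             {"A": 1, "B": 1, "C": 2}]
  candidates = ["A", "B", "C"]

  # For each ordered pair, count how many voters rank the first over the second.
  wins = {(x, y): 0 for x in candidates for y in candidates if x != y}
  for b in ballots:
      for x, y in combinations(candidates, 2):
          if b[x] < b[y]:
              wins[(x, y)] += 1
          elif b[y] < b[x]:
              wins[(y, x)] += 1

  # The CW, if one exists, beats every other candidate pairwise.
  cw = [x for x in candidates
        if all(wins[(x, y)] > wins[(y, x)] for y in candidates if y != x)]
  print(cw)  # ['A'] for these ballots; empty if there is a cycle
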
>
> * Is more expressive, which is valuable for the 10% or more of  
> voters who will choose to be expressive rather than tactical.

Score ratings are numbers meant to show how much each candidate is
liked - it is questionable how accurately they match true liking.

Condorcet ranking only asks which candidate is better liked in each  
pair - a simpler question.

> Condorcet systems fundamentally try to maximize the wrong thing.  
> They try to maximize the odds of electing the Condorcet winner, even  
> though it's a proven mathematical fact that the Condorcet winner is  
> not necessarily the option whom the electorate prefers.

Trouble is that the ballots ARE the voters' statements as to which
candidate IS the CW.  The above paragraph seems to be based on the
ballots sometimes not truly representing the thoughts of the voters
who cast them.





