# [EM] Request for criticism

Martin Harper mcnh2 at cam.ac.uk
Tue Mar 6 20:40:49 PST 2001

Rob LeGrand wrote:

> Howdy all,
>
> I've developed a program that simulates voters and elections to compare
> methods, and I'd like some feedback on my approach.

Same here - though I'm not ready for feedback yet (still!)

> For now I'm only
> trying to measure how well a method reflects the true preferences of
> sincere voters, so the results will not reflect the manipulability of
> the methods.

That's a reasonable starting point, but you should be aware it is a HUGE
simplification. Ignoring manipulability, Cardinal Ratings is a perfect
system.
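(To spell that out: with sincere voters, Cardinal Ratings just sums each candidate's ratings and elects the argmax. A quick sketch, with made-up numbers of my own:)

```python
# Cardinal Ratings with sincere voters: sum each candidate's ratings
# across all ballots and elect the candidate with the highest total.
# Toy numbers of my own, not Rob's simulation data.
ratings = [
    [4, 2, 0, 5, 2],   # one ballot: candidates A..E rated 0-5
    [5, 1, 3, 2, 0],
    [3, 3, 1, 4, 5],
]
totals = [sum(ballot[c] for ballot in ratings) for c in range(5)]
winner = totals.index(max(totals))
print(totals, winner)
```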

> (At this point I'm only interested in comparing methods
> that have been shown to meet desirable criteria.)

Heh. I started off with the simplest possible methods to reduce work...
My first method just elected a random candidate. The second was Random
Ballot... Needless to say, they don't comply with many criteria...
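For the record, those two baselines are about as simple as methods get - something like this (function names are mine):

```python
import random

def random_candidate(candidates, ballots):
    # Elect a uniformly random candidate, ignoring the ballots entirely.
    return random.choice(candidates)

def random_ballot(candidates, ballots):
    # Random Ballot: draw one ballot at random, elect its first choice.
    return random.choice(ballots)[0]
```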

Which criteria are these, btw? From the methods you've picked, I'd
assume the Condorcet Criterion - yet you include Borda, which fails it,
and Copeland - odd.

> The numbers followed
> by (*) in the description below are values I picked; let me know if you
> think different values would be better.
>

> Since I'm assuming sincere votes, Ratings is the method by which the
> others are judged.  Each of the 10000(*) "voters" rates the 25(*)
> candidates individually

Ideally, this should be variable. It may well be, for instance, that
some methods are better for smaller or larger numbers of candidates. Eg,
IRV is actually pretty good with only two candidates... ;-)

> with a number from 0 to 5(*), generated
> randomly.

Obviously, real electorates are not random, so this is a major
simplification, and an important one. With 25 candidates, you'll get
many more Condorcet paradoxes with a random electorate than with one
based on a more realistic model.
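You could measure the effect directly: compare the paradox rate under uniform-random ratings against, say, a simple 1-D spatial model. A toy sketch (both models, and all the parameters, are my own invention, not from your simulation):

```python
import random

def has_condorcet_winner(ratings, n_cand):
    # ratings: one rating vector per voter; i beats j pairwise if more
    # voters rate i strictly above j than the reverse.
    for i in range(n_cand):
        if all(sum(r[i] > r[j] for r in ratings) >
               sum(r[j] > r[i] for r in ratings)
               for j in range(n_cand) if j != i):
            return True
    return False

def paradox_rate(model, trials=200, voters=99, n_cand=5):
    # Fraction of simulated elections with no Condorcet winner.
    misses = 0
    for _ in range(trials):
        electorate = [model(n_cand) for _ in range(voters)]
        if not has_condorcet_winner(electorate, n_cand):
            misses += 1
    return misses / trials

def random_model(n):
    # Impartial-culture stand-in: every rating uniform in 0..5.
    return [random.randint(0, 5) for _ in range(n)]

def spatial_model(n):
    # Toy 1-D spatial model: candidates evenly spaced on [0, 1], each
    # voter at a random point, rating falling off with distance.
    v = random.random()
    return [max(0, 5 - round(10 * abs(v - c / (n - 1)))) for c in range(n)]
```

With the spatial model preferences come out (roughly) single-peaked, so a Condorcet winner almost always exists; with fully random ratings the paradox rate is much higher.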

> These are added up across the voters to determine the
> candidate that "should" win the election.  I picked 5 as my estimate of
> how fine people's preferences really are.

I don't think this is the case. I know that winning a million pounds
would be a million times better than winning a pound - yet according to
your 0-5 rating system my preferences aren't that fine. Seems odd.

> Each voter's candidate ratings are used to construct his sincere ranked
> ballot.  For example, if a voter rated A 4, B 2, C 0, D 5 and E 2, his
> ballot would be D>A>B=E>C.  The ranked ballots are compiled into a
> pairwise matrix, P, which each method can use to determine a winner.
> Pij is the number of voters who ranked candidate i strictly higher than
> candidate j.  P is used to calculate the pairwise matrices M, W and T:
>
> Mij = Pij - Pji   if Pij > Pji, and
>       0           otherwise;
> Wij = Pij         if Pij > Pji, and
>       0           otherwise;
> Tij = Pij         if Pij >= Pji, and
>       0           otherwise.
>
> Each of three procedures (beatpath, Tideman, Condorcet) are applied to
> each of the four matrices, giving the following 11 methods (with what I
> believe to be their equivalences to the right):
>
> beatpath(M)    Blake Cretney's Path Voting
> beatpath(W)    Mike Ossipoff's Cloneproof SSD
> beatpath(T)    Schulze's Method
> beatpath(P)    Norm Petry's interpretation of Schulze
> Tideman(M)     Tideman's Ranked Pairs
> Tideman(W)     Steve Eppley's Majoritarian Tideman
> Tideman(T)
> Condorcet(M)   Blake's Minmax
> Condorcet(W)   "Plain Condorcet"
> Condorcet(T)
> Condorcet(P)
>
> (Note that Tideman(P) is left out; it's equivalent to Tideman(T).)
>
> I also use P to find the Copeland and Borda winners.

Where's Approval? ;-)

I'd have chosen a more varied range of methods, myself - I'm not sure
how big the differences between the various Condorcet completions are -
or whether they're big enough to make a difference.
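For what it's worth, the ballot-to-matrix bookkeeping you describe is only a few lines - here's a sketch of my reading of it (not your actual code, obviously). The sincere ranked ballots fall straight out of the ratings, as in your D>A>B=E>C example, so P can be built from the ratings directly:

```python
def pairwise_matrices(ratings, n):
    # P[i][j]: number of voters rating candidate i strictly above j.
    P = [[sum(r[i] > r[j] for r in ratings) for j in range(n)]
         for i in range(n)]
    # M: margins of victory; W: winning votes; T: ties count as wins.
    M = [[P[i][j] - P[j][i] if P[i][j] > P[j][i] else 0
          for j in range(n)] for i in range(n)]
    W = [[P[i][j] if P[i][j] > P[j][i] else 0
          for j in range(n)] for i in range(n)]
    T = [[P[i][j] if P[i][j] >= P[j][i] else 0
          for j in range(n)] for i in range(n)]
    return P, M, W, T
```

Note that T only differs from W in pairings that are exact pairwise ties.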

> Overall I simulate
> 9999(*) elections

You could speed this up by first checking whether there is a Condorcet
winner and, if there is, skipping the Condorcet methods - they all
elect that candidate. That'd get you a significant speed-up...
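The check itself is cheap - a sketch (my function name, operating on the P matrix from your post):

```python
def condorcet_winner(P, n):
    # Return the candidate who beats every other head-to-head, or None.
    for i in range(n):
        if all(P[i][j] > P[j][i] for j in range(n) if j != i):
            return i
    return None
```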

> and record how many times the 14 methods match each
> other's results in a 14x14 array. The Ratings column reports how "well"
> each method did at giving the public what it wants
