# [EM] Unranked-IRV, Cumulative, and Normalized Ratings

Richard Moore rmoore4 at home.com
Mon Apr 9 19:27:25 PDT 2001

```
Forest Simmons wrote:

> On Sat, 7 Apr 2001, Richard Moore wrote (in part):
>
> >
> > I would be more interested in finding a way to reward sincere voting in
> > a CR system. But it seems unlikely that such a method exists.
> >
>
> As you said, normalization with the L2 norm does yield near sincere voting
> in the case of near zero information.
>
> Otherwise, as you say, it is unlikely that any such method exists.

...unless the voters let the system do the strategizing for them.

I think that is what methods like Dyadic Approval, Voter's Choice, and
Cranor's Method (mentioned previously by Mike, but did I remember
the name and spelling correctly?) are attempting to do, each at different
levels of complexity.

Suppose the voters give their cardinal ratings to the system, and the
system determines the zero-info approval strategy (for starters) for each
voter and counts the approval votes that would result. It then uses this
information to revise each voter's strategy, recasts the votes, and
repeats until it converges on a winner. Would the voters then have any
reason to give insincere ratings (assuming they understand and trust the
system)?

I think the above description is similar to Cranor's, but I don't know
all the details of that method.

Aside from the enormous amount of computing power this method would
require for a large number of voters, what's the catch?

Could it be that such a system can't be guaranteed to converge in all cases?
I suspect it can't.

But if it converges in most cases, then you can assume the voters still
have a strong incentive to give sincere ratings (assuming failure to
converge cannot be predicted). So use the convergent winner if one is
found; if not, find the winner using the Normalized Ratings (L2 norm, of
course).

> Once it became apparent that no such method exists, I started searching
> for a method that allows degrees of Approval (without strategic collapses)
> and still satisfies the Favorite Betrayal Criterion.  That led me straight

Which I think is a lot easier to implement than the method I just described.
Simplicity is good!

Richard

PS -- Maybe the computing power wouldn't be that great. The amount of
storage for a U.S.-sized voting population and 5 candidates would be on
the order of 1 GB (and there are computers with that much RAM, so
swapping won't be necessary). Each iteration would take on the order of
10 billion operations for strategizing each of the ballots. There would
be more operations required in each pass for the statistical and
probability calculations to prepare for the next pass, but this would
probably not exceed the 10 billion, so let's say 20 billion operations
per pass. With a 500 MIPS machine -- and efficient coding -- each pass
might take about a minute, and 10000 passes would take about 200 hours.
Distribute this between about 40 machines and it could be done in 5
hours. Want to cut it down some more, or use fewer machines? Then allow
the strategizing, counting, and analysis of each iteration (except the
last one or two passes) to be done on a random sample of, say, 5% of the
ballots.

Could become a real CPU hog if there are 10 or more candidates, though.

```
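The iterative procedure described in the message can be sketched roughly as follows. This is a minimal illustration, not Cranor's actual method: the function names, the approval-share win-probability model, and the three-round stability test are all assumptions made here for concreteness. Round 0 uses the zero-info strategy (approve every candidate rated above your own mean); later rounds approve candidates rated above the voter's expected utility of the election, with win probabilities crudely estimated from the previous tally. The L2-normalized fallback from the message is included as well.

```python
from math import sqrt

def declared_strategy_winner(ratings, max_iters=100, stable_rounds=3):
    """Iteratively apply approval strategy to cardinal ratings.

    ratings: list of ballots, each a list of cardinal ratings per
    candidate.  Returns (winner_index, converged_flag).
    """
    n_cands = len(ratings[0])
    # Zero-info thresholds: each voter's own mean rating.
    thresholds = [sum(r) / n_cands for r in ratings]
    history = []
    winner = 0
    for _ in range(max_iters):
        # Cast approval ballots: approve everyone above the threshold.
        tally = [0] * n_cands
        for r, t in zip(ratings, thresholds):
            for c in range(n_cands):
                if r[c] > t:
                    tally[c] += 1
        winner = tally.index(max(tally))
        history.append(winner)
        # Call it converged if the same winner holds for several rounds.
        if len(history) >= stable_rounds and len(set(history[-stable_rounds:])) == 1:
            return winner, True
        # Placeholder win-probability model: share of total approvals.
        total = sum(tally) or 1
        probs = [x / total for x in tally]
        # Revised threshold: the voter's expected utility of the election.
        thresholds = [sum(p * u for p, u in zip(probs, r)) for r in ratings]
    return winner, False

def normalized_ratings_winner(ratings):
    """Fallback for non-convergence: sum ballots scaled to unit L2 norm."""
    n_cands = len(ratings[0])
    totals = [0.0] * n_cands
    for r in ratings:
        norm = sqrt(sum(x * x for x in r)) or 1.0
        for c in range(n_cands):
            totals[c] += r[c] / norm
    return totals.index(max(totals))
```

A real implementation would need a far more careful probability model (pivot probabilities rather than approval shares), which is where the statistical work per pass in the PS comes from.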
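The arithmetic in the PS can be checked directly. Every figure below is the email's own rough assumption (the 100-million electorate and 2 bytes per rating are inferred from the stated 1 GB, not given explicitly):

```python
# Storage: U.S.-sized electorate, 5 candidates, ~2 bytes per rating.
storage_gb = 100e6 * 5 * 2 / 1e9         # 1.0 GB, as stated

# Time per pass: 20 billion ops on a 500 MIPS machine.
seconds_per_pass = 20e9 / 500e6          # 40 s -- "about a minute"

# 10000 passes at a minute each, then split across 40 machines.
total_hours = 10_000 * 60 / 3600         # ~167 h, order of the stated 200
hours_on_40 = total_hours / 40           # ~4.2 h, order of the stated 5
```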