[EM] MELLS (was Cooperation and Entropy in Lotteries)

fsimmons at pcc.edu
Fri Nov 28 11:22:33 PST 2008


I would like to give a name to my entropy method and post another example of this method in action.

The name is Minimal Expected Lack of Log Satisfaction, or MELLS for short.

Here is the example (with ratings in brackets, and with v+w+x+y=100%):

v: A1>C1[v/(v+w)]>D[v]
w: A2>C1[w/(v+w)]>D[w]
x: B1>C2[x/(x+y)]>D[x]
y: B2>C2[y/(x+y)]>D[y]

In this example there are three lotteries tied for Minimal Expected Lack of Log Satisfaction.  They are

(1) The Random Ballot lottery with respective probabilities v, w, x, y for A1, A2, B1, B2.

(2) The consensus D "lottery."

(3) The other compromise lottery, which elects C1 or C2 with respective probabilities (v+w) and (x+y).

The common value of the MELLS for these lotteries is

   -(v log v + w log w + x log x + y log y).

I'll spare you the gory details.
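
For anyone who does want them, here is a quick Python sketch that runs the check.  It is just an illustration, not part of the method: the particular values of v, w, x, y are arbitrary, the top choice on each ballot is assumed to be rated 1, and natural logs are used (any fixed base gives the same tie).  It builds each lottery as a circling function and prints its score next to the common value above.

import math
from collections import defaultdict

# Illustrative faction sizes with v + w + x + y = 1 (arbitrary choice of numbers).
v, w, x, y = 0.10, 0.30, 0.25, 0.35
weights = {'v': v, 'w': w, 'x': x, 'y': y}

# Ratings from the bracketed example; the top choice on each ballot is taken to be 1.
ratings = {
    'v': {'A1': 1.0, 'C1': v / (v + w), 'D': v},
    'w': {'A2': 1.0, 'C1': w / (v + w), 'D': w},
    'x': {'B1': 1.0, 'C2': x / (x + y), 'D': x},
    'y': {'B2': 1.0, 'C2': y / (x + y), 'D': y},
}

def mells(circle):
    """Entropy of the winner distribution minus the expected log of the winner's
    rating on the randomly drawn ballot, for the lottery that elects circle[f]
    whenever faction f's ballot is drawn."""
    win_prob = defaultdict(float)
    for f, p in weights.items():
        win_prob[circle[f]] += p
    entropy = -sum(p * math.log(p) for p in win_prob.values() if p > 0)
    expected_log_rating = sum(p * math.log(ratings[f][circle[f]])
                              for f, p in weights.items())
    return entropy - expected_log_rating

lotteries = {
    'Random Ballot':    {'v': 'A1', 'w': 'A2', 'x': 'B1', 'y': 'B2'},
    'Consensus D':      {'v': 'D',  'w': 'D',  'x': 'D',  'y': 'D'},
    'C1/C2 compromise': {'v': 'C1', 'w': 'C1', 'x': 'C2', 'y': 'C2'},
}

common_value = -sum(p * math.log(p) for p in weights.values())
for name, circle in lotteries.items():
    print(name, round(mells(circle), 10), round(common_value, 10))

All three lotteries come out to the same number.
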

By the way, I've decided to remove the "feasibility" requirement, but note that in this example it is met 
anyway.  I'm afraid the kind of feasibility that I was requiring could hurt the method's chances of being 
monotone.

FWS



> My previous missive under this heading suggested using the entropy of a
> lottery to judge how far it deviates from the ideal of full cooperation.
> 
> Since then, a little reflection and experimentation has led to a refinement
> of this idea that is more appropriate:
> 
> Instead of selecting (from among the feasible lotteries) the one that
> minimizes the entropy, we should minimize the difference of the entropy and
> the expected log of the winner's rating (on the randomly drawn ballot that
> determines the winner).  (This log must have the same base as the log in the
> entropy calculation.)
> 
> For those just joining in, the "lottery" methods that we have in mind select
> the circled candidate on a randomly drawn ballot.  Different lottery methods
> have different ways of defining the circling function c from the set of
> ballots to the set of candidates.  They all have the property that the rating
> of c(b) is positive on ballot b.  A feasible lottery L has the property that
> for every ballot b, the rating of the circled choice c(b) is at least as
> large as the expected rating of the winner on ballot b under the lottery L.
> The Random Ballot lottery that circles the top rated candidate on each ballot
> is (in general) feasible.
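
In code, that feasibility condition can be read as follows.  This is only a small Python sketch with a toy two-faction profile of my own; unlisted candidates are rated 0, and the dictionary names are just for illustration.

def is_feasible(weights, ratings, circle):
    """Check that on every ballot b, the rating of the circled choice c(b) is
    at least the expected rating of the winner under the lottery."""
    for b in weights:
        expected = sum(weights[g] * ratings[b].get(circle[g], 0.0)
                       for g in weights)
        if ratings[b].get(circle[b], 0.0) < expected:
            return False
    return True

# Toy profile: two equal factions, each rating its favorite 1 and a common
# compromise C at 0.6.
weights = {'a': 0.5, 'b': 0.5}
ratings = {'a': {'A': 1.0, 'C': 0.6}, 'b': {'B': 1.0, 'C': 0.6}}

print(is_feasible(weights, ratings, {'a': 'A', 'b': 'B'}))  # Random Ballot: True
print(is_feasible(weights, ratings, {'a': 'C', 'b': 'C'}))  # always C: True (0.6 >= 0.6)
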
> 
> The subtrahend in the difference mentioned above is the reciprocal of the
> number of ballots multiplied by the
> 
> sum of log(b(c(b))) over all ballots b,
> 
> where b(c) is the rating given choice c by ballot b.
> 
> Example 1:
> 
> 25 A>C
> 25 B>C
> 25 A>D
> 25 B>D
> 
> For simplicity, let's assume that both C and D are rated at some constant
> R < 1 on all of the ballots that give them positive ratings.
> 
> Then both the {A,B} and the {C,D} lotteries are feasible and both have
> minimal entropy, which simplifies to log(2).  But when we subtract the other
> term, we get log(2)-log(1) and log(2)-log(R) in the respective cases.  Since
> 1 > R, the first difference is smaller, so the first lottery is preferred.
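
As a quick numeric check of Example 1 (my own sketch, with natural logs and an arbitrary R < 1):

import math

R = 0.7  # any compromise rating with 0 < R < 1 will do

# {A,B} lottery: two equally likely winners, each rated 1 on the drawn ballot.
score_AB = math.log(2) - math.log(1.0)
# {C,D} lottery: two equally likely winners, each rated R on the drawn ballot.
score_CD = math.log(2) - math.log(R)

print(score_AB, score_CD, score_AB < score_CD)  # True: the {A,B} lottery is preferred
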
> 
> Example 2:
> 
> 20 A>F
> 20 B>F
> 20 C>F
> 20 D>F
> 20 E>F
> 
> Suppose that every ballot rates F at R > 1/5.
> 
> Let's compare the Random Ballot lottery with the consensus lottery that
> elects F with 100% probability:
> 
> The respective entropies are log(5) and log(1).  When we subtract the
> expected logs of the winner's rating we get log(5)-log(1)=log(5) and
> log(1)-log(R)=log(1/R).
> 
> Since R > 1/5, we have log(1/R) < log(5), so this time the consensus lottery
> is preferred.
> 
> Nifty?
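
And the corresponding quick check for Example 2 (again my own sketch, with an arbitrary R > 1/5):

import math

R = 0.3  # any rating for F with R > 1/5 will do

score_random    = math.log(5) - math.log(1.0)  # Random Ballot lottery
score_consensus = math.log(1) - math.log(R)    # 100% F lottery, i.e. log(1/R)

print(score_random, score_consensus, score_consensus < score_random)  # True when R > 1/5
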


