# [EM] Re: Approval Strategy A - Question for Rob LeGrand

Forest Simmons fsimmons at pcc.edu
Tue Nov 25 19:00:14 PST 2003

```
On Mon, 24 Nov 2003, Rob LeGrand wrote:

>
> > What is the function of "last own vote?"
>
> Nothing important, really, but it's used to implement what I call strategy
> B (the Poll Assumption in section 7.4 of Approval Voting by Steven Brams)
> and strategy I (adjust your previous Approval strategy just enough to
> differentiate between the latest top two candidates).
>

The more I think about it, the more I like the ballot-by-ballot approach.

If you have read my recent postings, you know that I have come around to
the opinion that methods based on cardinal or ordinal ballots (ratings or
rankings, respectively) require some element of probability to avoid, or
at least greatly ameliorate, the temptation to rate only at the extremes
or otherwise severely distort the ratings away from normalized utilities,
especially when polls show that there is no Condorcet Winner.

The "random ballot" method [where one ballot is chosen at random and the
top ranked or rated candidate on that ballot is declared winner] does away
with the temptation to vote insincerely, but gives the voters low average
voting power.

How can we increase expected voting power while retaining the
non-manipulability of the random ballot method?

Rob LeGrand's ballot-by-ballot method (after having randomized the order
of the ballots) seems to me like one possible answer to this question.

Rob's favorite version of that is based on strategy A, where the approval
cutoff for the k_th ballot is placed next to the current approval
frontrunner, on the side of the current approval runner-up.

Rob recently asked for other strategies.  I would like to propose the two
following strategies in the ballot-by-ballot context: (1) based on CR
ballots, and (2) based on mere ranked ballots.

(1) Place the approval cutoff on the k_th ballot at the current empirical
mean, i.e. the mean based on the current empirical winning probabilities
of the candidates, n1/k, n2/k, etc., where n3 (for example) is the number
of times that candidate 3 has been in the lead in the first k steps.

(2) Use Joe Weinstein's "weighted median" idea for the approval cutoff on
the k_th ranked ballot based on the current empirical probabilities.

Specifically, conceptually replicate each candidate on the ranked ballot
as many times as that candidate has been in the lead so far.  Then set
the approval cutoff halfway down the list of replicated candidates.

To make this work smoothly we must consider all candidates as tied for the
lead before the first step; otherwise a candidate that had never been in
the lead would be left out of the list of replicated candidates.

Here's an example of one step in method two to help clarify the
ambiguities of my description:

Let's say that the current ballot is just  C>A>B>D, and that candidates C,
A, B, and D have been in the lead 5, 1, 3, and 7 times, respectively, so
far.

Then the list with replications becomes

CCCCCABB|BDDDDDDD,

with the weighted median marked by a vertical bar.

Since there are more B's ranked before the bar than after it, we include B
among the approved. So this ballot contributes one each to the approval of
C, A, and B.

If there were one more C and one more B, the list with replications would
look like CCCCCCABB|BBDDDDDDD. This time there are the same number of B's
on both sides of the halfway mark.

I suggest that we go with Kevin's suggestion and give B exactly half
approval in this case.  But, since we are already into the spirit of
randomization, would it hurt anybody's feelings to just flip a coin to
decide whether or not to add one to the approval count for B?

Forest

```
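To make the methods above concrete, here are a few Python sketches. These are my own illustrations of the email's descriptions, not code from Rob or Forest. First, the "random ballot" method: one ballot is drawn uniformly at random, and its top choice wins.

```python
import random

def random_ballot_winner(ballots, rng=None):
    # Each ballot lists candidates in order of preference, best first.
    # Draw one ballot uniformly at random; its top choice is the winner.
    rng = rng or random.Random()
    return rng.choice(ballots)[0]
```

Since only one ballot decides the outcome, no ballot gains from misrepresenting its top preference, which is the non-manipulability the email mentions; the cost is the low average voting power also noted there.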
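Next, a minimal sketch of the ballot-by-ballot method under strategy A as described above: each ballot, seeing the running approval totals, approves every candidate it ranks above the current frontrunner, plus the frontrunner itself when the ballot prefers it to the current runner-up. The function names and the tie handling (ties in the totals broken by the ballot's own preference order) are assumptions of this sketch.

```python
import random
from collections import defaultdict

def strategy_a_approvals(ranking, totals):
    # ranking: candidates best-first on this ballot;
    # totals: current approval counts for all candidates.
    # Find the current frontrunner and runner-up (ties broken by this
    # ballot's own preference order -- an assumption of this sketch).
    by_total = sorted(ranking, key=lambda c: -totals[c])
    front, runner = by_total[0], by_total[1]
    pos = {c: i for i, c in enumerate(ranking)}
    # Approve everyone ranked above the frontrunner...
    approved = {c for c in ranking if pos[c] < pos[front]}
    # ...plus the frontrunner itself if it is preferred to the runner-up.
    if pos[front] < pos[runner]:
        approved.add(front)
    return approved

def ballot_by_ballot(ballots, seed=0):
    # Randomize the ballot order, then let each ballot respond to the
    # running totals with strategy A.
    ballots = list(ballots)
    random.Random(seed).shuffle(ballots)
    totals = defaultdict(int)
    for ranking in ballots:
        for c in strategy_a_approvals(ranking, totals):
            totals[c] += 1
    return dict(totals)
```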
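Strategy (1), placing the cutoff at the mean rating weighted by the empirical winning probabilities n_i/k, might be sketched as follows. The ratings in the example are hypothetical, and the handling of a rating exactly equal to the mean is left out.

```python
def mean_cutoff_approvals(ratings, lead_counts, k):
    # ratings: this ballot's cardinal ratings, candidate -> rating;
    # lead_counts[c] = n_c, the number of times c has been in the lead
    # during the first k steps, so n_c / k is its empirical probability.
    mean = sum(lead_counts[c] * ratings[c] for c in ratings) / k
    return {c for c in ratings if ratings[c] > mean}

# Hypothetical ratings combined with the lead counts 5, 1, 3, 7
# from the email's example (k = 16):
ratings = {"C": 0.2, "A": 1.0, "B": 0.6, "D": 0.0}
lead_counts = {"C": 5, "A": 1, "B": 3, "D": 7}
# weighted mean = (5*0.2 + 1*1.0 + 3*0.6 + 7*0.0) / 16 = 3.8/16 = 0.2375,
# so this ballot approves A and B.
```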
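Finally, a sketch of one step of strategy (2), the weighted-median rule, including Kevin's half-approval suggestion for a candidate split evenly across the bar (the coin-flip alternative is omitted). It reproduces both examples from the email.

```python
from fractions import Fraction

def weighted_median_approvals(ranking, lead_counts):
    # Conceptually replicate each candidate by its lead count and place the
    # cutoff bar halfway down the replicated list.  A candidate with more
    # copies before the bar than after gets approval 1; an even split gets
    # 1/2 (Kevin's suggestion); otherwise 0.
    total = sum(lead_counts[c] for c in ranking)
    half = Fraction(total, 2)
    approvals, seen = {}, 0
    for c in ranking:
        n = lead_counts[c]
        before = min(n, max(0, half - seen))  # copies of c before the bar
        if before > n - before:
            approvals[c] = 1
        elif n > 0 and before == n - before:
            approvals[c] = Fraction(1, 2)
        else:
            approvals[c] = 0
        seen += n
    return approvals

# First example: ballot C>A>B>D with lead counts 5, 1, 3, 7.
print(weighted_median_approvals("CABD", {"C": 5, "A": 1, "B": 3, "D": 7}))
# -> {'C': 1, 'A': 1, 'B': 1, 'D': 0}
```

With one more C and one more B (counts 6, 1, 4, 7), B's copies split evenly around the bar and B receives exactly half approval, matching the second example.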