[EM] Ideas for reducing computational burden in very large elections
Forest Simmons
fsimmons at pcc.edu
Mon Dec 14 19:29:05 PST 2015
Warren has been working on a PR election method for Canada based on range
ballots that do not make use of party affiliations.
He has pointed out that by using an idea of Toby Pereira it is possible to
convert the voted range ballots into a collection of weighted approval
ballots that give the same average scores to the respective candidates as
the original range ballots. Since, as he further pointed out, there are
only 2^k (or 2**k for those who remember Fortran) possible approval
ballots when there are k candidates, this may well reduce the number of
ballots that you have to handle, if you are willing to make the
transformation.
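For concreteness, here is a sketch (in Python) of one decomposition that has
the properties just described: a single range ballot, with scores scaled to
[0, 1], becomes at most k weighted approval ballots whose weighted combination
reproduces the original scores exactly. I am guessing at a "nested" form in
which each distinct score level becomes one approval ballot weighted by the
gap down to the next level; the function name and this reading are mine, and
Toby's actual transformation may differ in detail.

def range_to_weighted_approvals(scores):
    """scores: dict mapping candidate -> score in [0, 1].
    Returns a list of (approved_set, weight) pairs, at most one pair per
    distinct nonzero score level, hence at most k pairs."""
    pieces = []
    prev_level = 0.0
    # Walk up through the distinct score levels; the candidates at or above
    # each level form one approval ballot, weighted by the gap between that
    # level and the one just below it.
    for level in sorted(set(scores.values())):
        if level > 0.0:
            approved = frozenset(c for c, s in scores.items() if s >= level)
            pieces.append((approved, level - prev_level))
            prev_level = level
    return pieces

# Example: the ballot A=1.0, B=0.6, C=0.2 splits into {A,B,C} with weight 0.2,
# {A,B} with weight 0.4, and {A} with weight 0.4; B's total approval weight is
# 0.2 + 0.4 = 0.6, matching its original score.
print(range_to_weighted_approvals({"A": 1.0, "B": 0.6, "C": 0.2}))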
I'll get around to describing Toby's transformation in context below.
But now I want to make a couple of other suggestions:
(1) Before doing the Toby transformation, for each candidate X let b_X be
the average of all of the range ballots that rate X ahead of all other
candidates. In case a ballot rates candidates X, Y, and Z top equal,
average that ballot into b_X, b_Y, and b_Z with weights that are the
respective random ballot probabilities for deciding among X, Y, and Z.
Think of each of these averages as a weighted ballot, and apply the Toby
Transformation to each of them. That will be at most k transforms, each of
which yields at most k approval ballots (with their weights). So even if
there is no duplication, the total number of approval ballots will be at
most k^2.
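Here is a sketch of that amalgamation step, with one simplification flagged:
where the description above calls for random ballot probabilities to split a
top-equal ballot among its favorites, the sketch just splits it evenly among
them, which is only a stand-in for those probabilities.

from collections import defaultdict

def amalgamate_by_favorite(ballots):
    """ballots: list of dicts, each mapping candidate -> score in [0, 1].
    Returns a dict favorite X -> (average ballot b_X, weight of the group),
    so at most k weighted ballots come out."""
    sums = defaultdict(lambda: defaultdict(float))  # X -> candidate -> score sum
    weights = defaultdict(float)                    # X -> weight routed to X
    for ballot in ballots:
        top = max(ballot.values())
        favorites = [c for c, s in ballot.items() if s == top]
        share = 1.0 / len(favorites)  # even split of top-equal ballots (simplification)
        for x in favorites:
            weights[x] += share
            for c, s in ballot.items():
                sums[x][c] += share * s
    return {x: ({c: sums[x][c] / weights[x] for c in sums[x]}, weights[x])
            for x in weights}

# Example: the middle voter rates A and B top-equal, so that ballot is split
# half into b_A and half into b_B.
ballots = [{"A": 1.0, "B": 0.4, "C": 0.0},
           {"A": 1.0, "B": 1.0, "C": 0.2},
           {"A": 0.3, "B": 0.0, "C": 1.0}]
for x, (b_x, w) in sorted(amalgamate_by_favorite(ballots).items()):
    print(x, w, b_x)

Each b_X can then be fed to the decomposition sketched earlier, which is how
the k^2 bound above comes about.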
Can we do better than this? Yes, if you are willing to accept another
transformation between the amalgamation step and the Toby Transforms.
Which brings us to the second idea ...
(2) We will make use of the idea of "sincere approval strategy." This
strategy takes a range ballot B and briefly considers (but rejects, in favor
of a better idea that becomes apparent along the way) using a random number
generator to convert B into an approval ballot.
In this thought experiment suppose that you are trying to decide whether or
not to approve candidate i. If B(i) is zero or one, the decision is easy,
but if the score r = B(i) is strictly between zero and one, we ask for a
random number between zero and one. If that number is less than r, then we
approve candidate i, otherwise not. This random idea has the
merit that, if many voters use it, then statistically speaking, the average
of the converted ballots will be very close to their prior average (before
they were converted). A disadvantage is that the variance of those averages
is greater than it would be without the randomization.
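Just to make the thought experiment concrete (and to set up the one statistic
we take from it below), here is what that randomized conversion would look
like; it is only an illustration of the idea we are about to set aside.

import random

def randomized_approval(ballot, rng=random.random):
    """ballot: dict mapping candidate -> score in [0, 1].
    Approve each candidate independently with probability equal to its score."""
    return {c for c, s in ballot.items() if rng() < s}

# Over many trials the average number of approvals tends to the sum of the
# scores, here 1.0 + 0.6 + 0.2 = 1.8.
ballot = {"A": 1.0, "B": 0.6, "C": 0.2}
trials = 100000
average = sum(len(randomized_approval(ballot)) for _ in range(trials)) / trials
print(average, "vs", sum(ballot.values()))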
But I remind you we are not going to implement this idea. We are only
going to use one statistic from it: the expected number of candidates that
will be approved when a ballot is subjected to this procedure. Since
candidate i is approved with probability B(i), that expectation is simply the
sum of the range scores on the ballot in question.
At this point a light dawns: instead of deciding approvals with the random
number experiment, just take the expected number of approvals and approve
that many candidates, starting with the candidates rated most highly by the
ballot and working down.
If the expected number E is not a whole number, then there will be a
remainder candidate with a range value strictly between 0 and 1. So we
have failed to convert our ballot into an approval ballot, but it now has
at most two levels. When we apply the Toby transformation at this stage,
we get only two approval ballots, one of which approves a subset of the
other.
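Here is a sketch of that deterministic conversion. One assumption I am making
explicit: the remainder candidate keeps the fractional part of E as its new
score, so that the ballot's total still equals E. The resulting two-level
ballot is then split directly into its two nested approval ballots, which is
what the decomposition sketched earlier reduces to in this case.

import math

def sincere_approval_pieces(ballot):
    """ballot: dict mapping candidate -> score in [0, 1].
    Returns at most two (approved_set, weight) pairs."""
    ranked = sorted(ballot, key=ballot.get, reverse=True)
    e = sum(ballot.values())          # expected number of approvals
    whole = int(math.floor(e))
    frac = e - whole                  # assumed score of the remainder candidate
    top = frozenset(ranked[:whole])   # fully approved candidates
    if frac == 0.0 or whole == len(ranked):
        return [(top, 1.0)]
    with_remainder = top | {ranked[whole]}
    return [(with_remainder, frac), (top, 1.0 - frac)]

# Example: A=1.0, B=0.6, C=0.2 has E = 1.8, so we get {A, B} with weight 0.8
# and {A} with weight 0.2, the second approving a subset of the first.
print(sincere_approval_pieces({"A": 1.0, "B": 0.6, "C": 0.2}))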
If we apply this process to each of the ballots {b_X | X is a candidate}, then we have
at most 2k approval ballots.
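Putting the last two sketches together (this snippet assumes the helper
functions defined above are in scope), a toy run of the whole pipeline looks
like this; with k candidates it can never produce more than 2k weighted
approval ballots.

# Assumes amalgamate_by_favorite and sincere_approval_pieces from the sketches
# above.  Each of the at most k amalgamated ballots b_X contributes at most
# two weighted approval ballots.
ballots = [{"A": 1.0, "B": 0.4, "C": 0.0},
           {"A": 1.0, "B": 1.0, "C": 0.2},
           {"A": 0.3, "B": 0.0, "C": 1.0}]
summary = []
for x, (b_x, group_weight) in amalgamate_by_favorite(ballots).items():
    for approved, piece_weight in sincere_approval_pieces(b_x):
        summary.append((approved, group_weight * piece_weight))
for approved, weight in summary:
    print(sorted(approved), round(weight, 3))
print(len(summary), "weighted approval ballots summarize", len(ballots), "voters")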
I'll finish this next time.