[EM] Ideas for reducing computational burden in very large elections
Forest Simmons
fsimmons at pcc.edu
Thu Dec 17 13:37:28 PST 2015
Hey Brian,
I like it a lot.
What if you used the L_1 norm instead of the L_2 norm to normalize the
ballot vectors? In other words, divide by the sum of the absolute values
of the ratings instead of the root sum squares.
It seems to me that this might work even better, since then each voter
contributes the same total amount to the candidate sums, making the result
inherently proportional even with one pass if the voters coordinate a little.
The other passes can then be thought of as automatic coordination adjustments.
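To make the contrast concrete, here is a minimal sketch (mine, not from the thread) of the two normalizations. Under L_1 each ballot's ratings sum to 1, so every voter contributes the same total weight; under L_2 each ballot is a unit vector instead.

```python
def normalize(ballot, norm="l1"):
    """Scale a ratings ballot to unit norm; returns it unchanged if all zero."""
    if norm == "l1":
        scale = sum(abs(r) for r in ballot)          # sum of absolute values
    else:
        scale = sum(r * r for r in ballot) ** 0.5    # root sum of squares
    return [r / scale for r in ballot] if scale else list(ballot)

l1 = normalize([10, 5, 0, 5], "l1")   # ratings now sum to 1.0
l2 = normalize([10, 5, 0, 5], "l2")   # squared ratings now sum to 1.0
```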
On Thu, Dec 17, 2015 at 9:29 AM, Brian Olson <bql at bolson.org> wrote:
> A method I proposed a few years ago requires order (C) passes through the
> vote data for C candidates and achieves Proportional Representation.
>
> First, the single winner simple version
>
> Instant Runoff Normalized Ratings (IRNR)
> 1. Voters cast ratings ballots (e.g. 0..10).
> 2. Each vote is normalized to a magnitude-1 vector (divide by the square
> root of the sum of squares of the ratings on that ballot).
> 3. Normalized votes are summed.
> 4. The choice with the lowest sum is disabled.
> 5. Votes are re-normalized without disabled choices present so that each
> voter still has a magnitude-1 vote.
> 6. Repeat steps 3, 4, 5 (summing, disabling, and renormalizing) until there
> are two choices left; the one with the larger sum is the winner.
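The six steps above might be sketched roughly as follows. This is my own illustration, assuming nonnegative ratings and breaking elimination ties by lowest index; it is not Brian's actual C/Java/Go implementation.

```python
def irnr(ballots):
    """Instant Runoff Normalized Ratings, single-winner sketch.
    ballots: list of rating lists, one nonnegative rating per candidate.
    Returns the index of the winning candidate."""
    c = len(ballots[0])
    active = set(range(c))
    while len(active) > 1:
        sums = [0.0] * c
        for b in ballots:
            # L2-normalize over the still-active candidates only
            mag = sum(b[i] ** 2 for i in active) ** 0.5
            if mag == 0:
                continue  # ballot rates no active candidate; it is exhausted
            for i in active:
                sums[i] += b[i] / mag
        loser = min(active, key=lambda i: sums[i])  # disable the lowest sum
        active.discard(loser)
    return active.pop()

irnr([[10, 5, 0], [0, 5, 10], [10, 0, 5]])  # candidate 0 wins here
```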
>
> Instant Runoff Normalized Ratings, Proportional-mode (IRNRp)
> 1. Voters cast ratings ballots (e.g. 0..10).
> 2. Each vote is normalized to a magnitude-1 vector (divide by the square
> root of the sum of squares of the ratings on that ballot).
> 3. Normalized votes are summed.
> 3.1. Choices with a sum over the ‘quota’ (total vote / (seats to elect +
> 1)) get a pre-scalar so that their sum would come out the minimum
> measurable amount above the quota.
> 3.2. Pre-scale then normalize then sum all the votes.
> 3.3. Repeat 3.1 and 3.2 until the system ‘settles’ (squishing vote out of
> one choice can cause another to overflow), or until some limit (10, 100,
> etc.) of iterations is reached, to avoid infinite loops.
> 3.4. If (seats to elect) choices have sums over the quota, those are
> elected, exit
> 4. The choice with the lowest sum is disabled.
> 5. Votes are re-normalized without disabled choices present so that each
> voter still has a magnitude-1 vote.
> 6. Repeat steps 3, 4, 5 (summing, disabling, and renormalizing) until there
> are (seats to elect) choices left; those are the winners.
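The quota pre-scaling loop of step 3 (3.1 to 3.3) could look roughly like this. Two assumptions of mine: the quota is taken as (number of ballots) / (seats + 1), and the "minimum measurable amount above the quota" is simplified to scaling straight down to the quota. Not Brian's exact code.

```python
def settle(ballots, active, seats, max_iter=100):
    """One settling pass for IRNRp. ballots: list of rating lists;
    active: set of still-live candidate indices. Returns the settled sums."""
    quota = len(ballots) / (seats + 1)   # assumes each ballot has weight 1
    c = len(ballots[0])
    scal = [1.0] * c                     # pre-scalars for over-quota choices
    sums = [0.0] * c
    for _ in range(max_iter):            # iteration cap avoids infinite loops
        sums = [0.0] * c
        for b in ballots:
            v = [b[i] * scal[i] if i in active else 0.0 for i in range(c)]
            mag = sum(x * x for x in v) ** 0.5
            if mag:
                for i in active:
                    sums[i] += v[i] / mag
        over = [i for i in active if sums[i] > quota]
        if not over:
            break                        # the system has settled
        for i in over:
            scal[i] *= quota / sums[i]   # squish surplus back toward the quota
    return sums
```

Note that squishing vote out of one over-quota choice redistributes it through renormalization, which is exactly why another choice can overflow and the loop must repeat.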
>
> I have coded this up in C and Java and Go
> https://bitbucket.org/bodhisnarkva/voteutil/src
>
> The bulk of the memory footprint is just holding all the votes in memory.
> Doing simple numeric computations on a big block of numbers is what
> computers are really really good at. I could probably even GPU accelerate
> this algorithm. But it’s not necessary because for any size of data I’ve
> thrown at it, it is done in a few seconds at most.
>
> If we were especially lucky and got 300,000,000 people to vote on a field
> of 10 choices, and store each rating as an 8-bit unsigned integer 0..255,
> that’s only 3 gigabytes of data. Pretty soon we’ll have phones that can run
> that. But I think being able to run a national election on a $1000 computer
> is good enough.
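A quick check of that arithmetic:

```python
voters = 300_000_000
candidates = 10
bytes_total = voters * candidates   # one uint8 rating per candidate per voter
gigabytes = bytes_total / 10**9     # 3.0, matching the estimate above
```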
>
> I’ve recently been writing up a new implementation of STV (in Go, to add
> to my betterpolls.com site) and STV is a complexity nightmare. I’m
> running into a huge number of fiddly corner cases to deal with to get the
> accounting right. STV is kinda simple to explain in principle, but the
> devil is in the details. IRNR winds up being much simpler in implementation.
>
>
> > On Dec 14, 2015, at 10:29 PM, Forest Simmons <fsimmons at pcc.edu> wrote:
> >
> > Warren has been working on a PR election method for Canada based on
> range ballots that do not make use of party affiliations.
> >
> > He has pointed out that by using an idea of Toby Pereira it is possible
> to convert the voted range ballots into a collection of weighted approval
> ballots that give the same average scores to the respective candidates as
> the original range ballots. Since, as he further pointed out, there are
> only 2^k (or 2**k for those who remember Fortran) possible approval
> ballots when there are k candidates, this may well reduce the number of
> ballots that you have to handle, if you are willing to make the
> transformation.
> >
> > I'll get around to describing Toby's transformation in context below.
> >
> > But now I want to make a couple of other suggestions:
> >
> > (1) Before doing the Toby transformation, for each candidate X let b_X
> be the average of all of the range ballots that rate X ahead of all other
> candidates. In case a ballot rates candidates X, Y, and Z top equal,
> average that ballot into b_X, b_Y, and b_Z with weights that are the
> respective random ballot probabilities for deciding among X, Y, and Z.
> >
> > Think of each of these averages as a weighted ballot, and apply the Toby
> Transformation to each of them. That will be at most k transforms, each of
> which yields at most k approval ballots (with their weights). So even if
> there is no duplication, the total number of Approval ballots will be at
> most k^2 .
> >
> > Can we do better than this? Yes, if you are willing to accept another
> transformation between the amalgamation step and the Toby Transforms.
> Which brings us to the second idea ...
> >
> > (2) We will make use of the idea of "sincere approval strategy." This
> strategy takes a range ballot and briefly considers (but rejects in favor
> of a better idea that becomes apparent only at that stage) the idea of
> using a random number generator to assist in converting a range ballot B
> into an approval ballot.
> >
> > In this thought experiment suppose that you are trying to decide whether
> or not to approve candidate i. If B(i) is zero or one, the decision is
> easy, but if the score r is strictly between zero and one, we ask for a
> random number between zero and one. If the number is less than the score
> r, then we approve candidate i, otherwise not. This random idea has the
> merit that, if many voters use it, then statistically speaking, the average
> of the converted ballots will be very close to their prior average (before
> they were converted). A disadvantage is that the variance of the averages
> is greater.
> >
> > But I remind you we are not going to implement this idea. We are only
> going to use one statistic from it: the expected number of candidates that
> will be approved when a ballot is subjected to this procedure. That
> expectation is the sum of the range scores of the ballot in question.
> >
> > At this point a light dawns; instead of deciding approval on the basis
> of the random numbers experiments, just take the expected number of
> approvals, and approve that many candidates, starting with the candidates
> rated highest on the ballot and working down.
> >
> > If the expected number E is not a whole number, then there will be a
> remainder candidate with a range value strictly between 0 and 1. So we
> have failed at converting our ballot to an approval ballot. But it now has
> at most two levels. When we apply the Toby transformation at this stage,
> we get only two approval ballots, one of which approves a subset of the
> other.
> >
> > If we do this process to the ballots {b_X| X is a candidate}, then we
> have at most 2k approval ballots.
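Putting the last few paragraphs together, here is a rough sketch (my reading, assuming the range scores are already rescaled to [0, 1]) of converting one range ballot into at most two weighted approval ballots: E, the sum of the scores, is the expected number of approvals under the randomized scheme; the top floor(E) candidates are approved outright, and the remainder candidate carries the fractional part, so the two-level result decomposes into two approval ballots, one approving a subset of the other.

```python
import math

def sincere_approval(ballot):
    """ballot: dict mapping candidate -> score in [0, 1].
    Returns a list of (weight, approved_set) pairs."""
    e = sum(ballot.values())               # expected number of approvals
    order = sorted(ballot, key=ballot.get, reverse=True)
    whole = math.floor(e)
    frac = e - whole
    top = frozenset(order[:whole])
    if frac < 1e-12:                       # E is whole: a single approval ballot
        return [(1.0, top)]
    if frac > 1 - 1e-12:                   # guard against floating-point roundup
        return [(1.0, frozenset(order[:whole + 1]))]
    # Two-level case: the remainder candidate is approved with weight frac,
    # giving two ballots, one of which approves a subset of the other.
    return [(1.0 - frac, top), (frac, top | {order[whole]})]

sincere_approval({"A": 0.75, "B": 0.75, "C": 0.5})  # E = 2.0: just {A, B}
```

The weighted average of the two output ballots reproduces the two-level ballot (1 for the top set, frac for the remainder candidate), which is how I read the Toby transformation applied at this stage.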
> >
> > I 'll finish this next time.
> >
> > ----
> > Election-Methods mailing list - see http://electorama.com/em for list
> info
>
>