[EM] Ideas for reducing computational burden in very large elections

Forest Simmons fsimmons at pcc.edu
Fri Dec 18 14:13:18 PST 2015


Well, one thing that comes to mind is that single-winner range voting with
L_2 normalization tends to minimize the incentive for strategic voting; in
particular, under zero information the optimal strategy for maximizing
expected utility is to submit a ballot that is just your L_2-normalized
utility vector (with the average utility subtracted from each component).
Geometrically this follows from the fact that the L_2 unit ball doesn't have
any bulges.  Note that the L_k unit ball for k<2 bulges towards the
coordinate axes, which skews optimal strategy towards "plumping" or "bullet
voting."  When k>2 the bulges are in the directions where all of the
coordinates have the same absolute value, which tends to give ballots
whose components all have nearly the same absolute value: the "approved"
candidates get positive ratings and the disapproved ones negative.

Of course, we know that with Yes/No ballots it makes little difference
whether "no" is represented by zero or by negative one.

The three cases are typified by L_1 (strategically equivalent to
plurality), L_2, and L_infinity (approval in disguise).
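To see the two extreme cases with the same made-up utilities (assuming,
as in the zero-information argument above, that the expected gain from a
ballot is roughly linear in its components, so the best ballot inside
each unit ball just maximizes the dot product with the centered
utilities):

import numpy as np

u = np.array([10.0, 7.0, 5.0, 2.0, 0.0])
c = u - u.mean()                # centered utilities, as above

# Best ballot inside the L_1 unit ball: put all the weight on the
# candidate with the largest |centered utility| -- i.e. "plump".
l1_ballot = np.zeros_like(c)
top = np.argmax(np.abs(c))
l1_ballot[top] = np.sign(c[top])

# Best ballot inside the L_infinity unit ball: push every component to
# +1 or -1 -- i.e. an approval ballot in disguise.
linf_ballot = np.sign(c)

print(l1_ballot)    # [1. 0. 0. 0. 0.]
print(linf_ballot)  # [ 1.  1.  1. -1. -1.]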

Thinking about these things has led me to a maximization method for
Proportional Representation that uses L_2 normalization and that would tend
to assign a high value of the quality score Q (defined below) to the subset
of candidates chosen by Brian's PR method that we have been talking about,
namely Instant Runoff Normalized Ratings, Proportional-mode (IRNRp).

Suppose that we are to elect a subset of 17 candidates from a total of K
candidates. Ballots are range-style vectors that give non-negative ratings
to the candidates. Let B be the set of these ballot vectors. Let S be the
set of all candidate vectors (i.e. with K components) such that exactly 17
of the components are ones, and the rest are zeroes.

The winning subset is the one given by the member s of S that maximizes

Q = Sum over b in B of (dot(b,s)^2/dot(b,b)) .

Each of the summands in the Sum to be maximized is the square of the scalar
projection of the vector s onto the vector b.  The closer s comes to the
direction of b, the larger this value.

At this point I am not sure which power of the scalar projections should be
used.  The second power is the simplest, since that allows us to use
dot(b,b) in the denominator without the square root.
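Here is a rough brute-force sketch of the maximization, just to pin the
definition down; the function name, the toy ballots, and the exposed
power parameter are only for illustration (with power=2 it is exactly
the Q above):

import numpy as np
from itertools import combinations

def best_subset(ballots, num_seats, power=2):
    """Brute-force search for the subset s maximizing
    Q = Sum over b of dot(b,s)**power / dot(b,b)**(power/2)."""
    ballots = np.asarray(ballots, dtype=float)
    sq_norms = np.einsum('ij,ij->i', ballots, ballots)   # dot(b,b) per ballot
    ballots, sq_norms = ballots[sq_norms > 0], sq_norms[sq_norms > 0]
    K = ballots.shape[1]
    best_q, best_seats = -np.inf, None
    for seats in combinations(range(K), num_seats):
        s = np.zeros(K)
        s[list(seats)] = 1.0
        # scalar projection of s onto each ballot b, raised to `power`
        q = np.sum((ballots @ s) ** power / sq_norms ** (power / 2.0))
        if q > best_q:
            best_q, best_seats = q, seats
    return best_seats, best_q

# Toy example: 4 candidates, elect 2.
B = [[5, 4, 0, 0],
     [0, 0, 5, 3],
     [5, 0, 0, 4]]
print(best_subset(B, num_seats=2))

Of course, enumerating all K-choose-17 subsets is hopeless in a very
large election, which is exactly the computational burden this thread is
about; the sketch is only meant to make Q unambiguous, not to be used at
scale.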

Best Wishes,

Forest

On Thu, Dec 17, 2015 at 4:03 PM, Brian Olson <bql at bolson.org> wrote:

> This might come down to how we define the “one vote” part of
> one-person-one-vote.
> Should the sum of the vote be 1 or should the magnitude of the vote vector
> be 1?
> Which ‘distortion’ best preserves voter intent?
> I’m not 100% sure which form of normalization is better.
> It might just be mathematical intuition that makes me prefer ‘magnitude’.
> I feel like I had a better reason for this a while ago and forgot the
> reason I chose vector magnitude L2 normalization.
> If I haven’t already done it and forgotten where the results are, I should
> run my random election simulator on IRNR normalized by both modes and see
> which one produces better outcomes.
>
> > On Dec 17, 2015, at 4:37 PM, Forest Simmons <fsimmons at pcc.edu> wrote:
> >
> > What if you used the L_1 norm instead of the L_2 norm to normalize the
> ballot vectors?  In other words, divide by the sum of the absolute values
> of the ratings instead of the root sum squares.
>
>