<div dir="ltr"><div>Proportional Approval Voting</div>
<div><a href="http://www.nationmaster.com/encyclopedia/Proportional-approval-voting">http://www.nationmaster.com/encyclopedia/Proportional-approval-voting</a></div>
<div>Brief summary of this method:</div>
<div>there are exponentially many "pseudocandidates": one for every possible combination of candidates, so on the order of 2^c of them.</div>
<div>Let's say we have a voter named Alice and a three person pseudocandidate composed of real candidates X,Y, and Z.</div>
<div>If Alice approves of one of them, the score for XYZ += 1</div>
<div>" two " , += (1 + 1/2)</div>
<div>" three/all " , += (1 + 1/2 + 1/3)</div>
<div> </div>
<div>This way, Alice approving of X and Bob approving of X is worth 2 pts, whereas Alice approving of X and Y and Bob approving of neither is only worth 1.5 pts. The procedure isn't iterative, so the failure of RRV</div>
<div><a href="http://rangevoting.org/RRV.html">http://rangevoting.org/RRV.html</a></div>
<div>to satisfy the multimember equivalent of the participation criterion is sidestepped. In other words, voting for a candidate cannot hurt you because PAV does not use an elect-candidate-then-punish-supporters iteration to achieve its result.</div>
<div> </div>
<div>However great PAV may be, its exponential time complexity (on the order of 2^c * c * v: every combination of candidates must be scored, at a cost of up to c candidates for each of v voters) is enough to make me think twice before seriously considering it.</div>
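<div>For the curious, here's what that brute force looks like, reusing the pav_score sketch from above (the committee size "seats" is a parameter I'm assuming; this is only meant to show where the combinatorial blowup comes from):</div>
<pre>
from itertools import combinations

def pav_winners(candidates, ballots, seats):
    # Try every possible committee of the given size and keep the best-scoring one.
    # The number of committees grows combinatorially with the number of candidates,
    # which is what makes brute-force PAV impractical for large elections.
    best_set, best_score = None, float("-inf")
    for committee in combinations(candidates, seats):
        s = pav_score(set(committee), ballots)
        if s > best_score:
            best_set, best_score = set(committee), s
    return best_set, best_score
</pre>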
<div> </div>
<div>Multiwinner Method Yardstick</div>
<div> </div>
<div>PAV can serve as the basis of a multiwinner analogue of Bayesian regret. Think of it this way:</div>
<div>PAV gives us a nice formula for dealing with range values.</div>
<div>Let's use the previous example of Alice and XYZ</div>
<div>Let's pretend Alice votes X = 99, Y = 12, Z = 35</div>
<div> </div>
<div>With PAV, the nth approved candidate in the set is worth 1/n, so a voter who approves n of them contributes (1 + 1/2 + 1/3 + ... + 1/n).</div>
<div>To extend this to range values, think of it as sorting the voter's scores for that pseudocandidate from highest to lowest and THEN applying the coefficients (1, 1/2, 1/3, ..., 1/n) to it.</div>
<div>in the previous example, if Alice approved X and Z, her scores for X, Y, Z are (1,0,1)</div>
<div>we sort the list</div>
<div>(1,1,0)</div>
<div>then multiply by the coefficients</div>
<div>(1*1,1*1/2,0*1/3)</div>
<div>and add</div>
<div>1.5</div>
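<div>As a sketch, here is that sort-then-multiply step in Python (again my own code, just to pin down the arithmetic):</div>
<pre>
def sorted_harmonic_score(ratings):
    # ratings: one voter's scores for each member of the pseudocandidate set.
    # Sort from highest to lowest, weight by 1, 1/2, 1/3, ..., then add up.
    ordered = sorted(ratings, reverse=True)
    return sum(r / (i + 1) for i, r in enumerate(ordered))

print(sorted_harmonic_score([1, 0, 1]))  # 1.5 (Alice approving X and Z)
</pre>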
<div> </div>
<div>apply the same thing to the range example above</div>
<div> </div>
<div>99,12,35 ==> 99,35,12</div>
<div> </div>
<div>and multiply...</div>
<div> </div>
<div>99*1,35*1/2,12*1/3</div>
<div> </div>
<div>and add...</div>
<div> </div>
<div>120.5</div>
<div> </div>
<div>there, the score for XYZ from Alice is 120.5</div>
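<div>Running the same sorted_harmonic_score sketch on the range ballot reproduces that number:</div>
<pre>
print(sorted_harmonic_score([99, 12, 35]))  # 99 + 35/2 + 12/3 = 120.5
</pre>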
<div> </div>
<div>Thus the procedure for evaluating various multiwinner methods is simple:</div>
<div> </div>
<div>create some fake voters (give each one a score for every candidate, between 0 and some maximum n, distributed however you like)</div>
<div>I'd recommend NOT using negative numbers because I have no idea how they will interact with the sorting and tabulating procedure.</div>
<div> </div>
<div>In fact, it isn't even necessary to calculate the BEST result in order to compare multiwinner voting methods.</div>
<div> </div>
<div>Just compute the winning set under each multiwinner method you are testing, and then add the (sorted-harmonic) score of that winning set to that method's tally. In the end, each method will have a score equal to the utility it generated each round, summed up. This gives you a great starting point for comparing the multiwinner methods.</div>
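<div>Here is roughly how I picture the test harness, as a sketch only (the method functions are placeholders for STV, CPO-STV, or whatever else gets plugged in, and sorted_harmonic_score is the function from above):</div>
<pre>
import random

def make_voters(num_voters, candidates, max_score=99):
    # Fake voters: each voter gives every candidate a random utility in [0, max_score].
    return [{c: random.uniform(0, max_score) for c in candidates}
            for _ in range(num_voters)]

def utility_of(committee, voters):
    # Sum of every voter's sorted-harmonic score for the winning set.
    return sum(sorted_harmonic_score([v[c] for c in committee]) for v in voters)

def compare(methods, candidates, seats, rounds=1000, num_voters=100):
    # methods: a dict like {"STV": run_stv, ...}, where each function takes
    # (voters, candidates, seats) and returns a winning set of candidates.
    tally = {name: 0.0 for name in methods}
    for _ in range(rounds):
        voters = make_voters(num_voters, candidates)
        for name, method in methods.items():
            winners = method(voters, candidates, seats)
            tally[name] += utility_of(winners, voters)
    return tally
</pre>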
<div> </div>
<div>I'm in the process of programming something to actually test this. If anyone has a program for STV, CPO-STV, or some other multiwinner method, I would really appreciate it.</div>
<div> </div>
<div>Even if it's just a description of a method, it's better than nothing. (No party-based or asset-voting-related methods, please.)</div>
<div> </div>
<div>If anyone notices a glaring error of some sort, please tell me; I'm just a high school student. </div></div>