On Tue, Jul 10, 2012 at 6:17 AM, Kristofer Munsterhjelm <km_elmet@lavabit.com> wrote:

> On 07/09/2012 06:33 AM, Michael Ossipoff wrote:
>
>> What about finding, by trial and error, the allocation that minimizes
>> the calculated correlation measure. Say, the Pearson correlation, for
>> example. Find by trial and error the allocation with the lowest Pearson
>> correlation between q and s/q.
>>
>> For the goal of getting the best allocation each time (as opposed to
>> overall time-averaged equality of s/q), might that correlation
>> optimization be best?
>
> Sure, you could empirically optimize the method. If you want
> population-pair monotonicity, then your task becomes much easier: only
> divisor methods can have it,

If unbias in each allocation is all-important, then can anything else be as good as trial-and-error minimization of the measured correlation between q and s/q, for each allocation?
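(To pin down the measure I keep referring to, here is a minimal sketch in Python; the function name and the three-state numbers are mine and purely illustrative. q is a state's exact quota and s its whole number of seats, and the statistic is the Pearson correlation between q and s/q across the states.)

import numpy as np

def pearson_bias(quotas, seats):
    """Pearson correlation between q and s/q for one allocation.
    quotas: each state's exact quota q (population / standard divisor);
    seats:  the whole number of seats s given to each state.
    Positive output reads as large-state favoritism, negative as small-state."""
    q = np.asarray(quotas, dtype=float)
    s = np.asarray(seats, dtype=float)
    return np.corrcoef(q, s / q)[0, 1]

# Made-up three-state example: quotas summing to 8 and one candidate 8-seat allocation.
print(pearson_bias([4.6, 2.3, 1.1], [5, 2, 1]))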
> so you just have to find the right parameter for the generalized divisor
> method:
>
> f(x,g) = floor(x + g(x))
>
> where g(x) is within [0...1] for all x, and one then finds a divisor so
> that x_1 = voter share for state 1 / divisor, so that sum over all states
> is equal to the number of seats.

[unquote]
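(A rough, self-contained sketch of that generalized divisor method, in Python; the function names and the bracket choices for the divisor search are mine, not anything from the thread. Each state gets floor(x + g(x)) seats with x = population/divisor, and the divisor is found by bisection so the seats sum to the house size.)

import math

def apportion(populations, house_size, g, iterations=200):
    """Generalized divisor method: seats_i = floor(x_i + g(x_i)), with
    x_i = populations[i] / divisor, and the divisor found by bisection so
    that the seats sum to house_size (assumes house_size >= number of states)."""
    def seats_for(divisor):
        return [math.floor(p / divisor + g(p / divisor)) for p in populations]
    lo = sum(populations) / (house_size + len(populations) + 1)  # small enough to over-allocate
    hi = sum(populations) * 2.0                                  # large enough to under-allocate
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if sum(seats_for(mid)) > house_size:
            lo = mid     # too many seats: divisor too small
        else:
            hi = mid     # right number or too few: divisor large enough
    return seats_for(hi)

# Made-up populations; g(x) = 0.5 throughout gives Webster/Sainte-Lague rounding.
print(apportion([5_300_000, 2_400_000, 1_100_000], 9, lambda x: 0.5))

With a constant g(x) = p, this is the "somewhat generalized" f(x, p) = floor(x + p) form you give next; g(x) = 0 is Jefferson/D'Hondt and g(x) = 1 is Adams.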
Yes, that's a divisor method, and its unbias depends on whether or not the probability-density approximation on which it's based is accurate. For Webster, that approximation is known to be a simplification. For Weighted-Webster (WW), it's known to be only a guess.
You said:

We may further restrict ourselves to a "somewhat" generalized divisor method:

f(x, p) = floor(x + p).

For Webster, p = 0.5. Warren said p = 0.495 or so would optimize in the US (and it might, I haven't read his reasoning in detail).

[endquote]

Yes, Warren said that if the probability distribution is exponential, then that results in a constant p in your formula. He used one exponential function for the whole range of states and their populations, determined based on the total numbers of states and seats. But that's a detail that isn't important unless you've actually decided to use WW, and to use Warren's one overall exponential distribution.
After I'd proposed WW, Warren suggested the one exponential probability distribution for the whole range of state populations, and that was his version of WW.

You said:

Also, I think that the bias is monotone with respect to p. At one end you have

f(x) = floor(x + 0) = floor(x)

which is Jefferson's method (D'Hondt) and greatly favors large states. At the other, you have

f(x) = floor(x + 1) = ceil(x)

which is Adams's method and greatly favors small states.

If f(x, p) is monotone with respect to bias as p is varied, then you could use any number of root-finding algorithms to find the p that sets bias to zero, assuming your bias measure is continuous. Even if it's not continuous, you could find p so that decreasing p just a little leads your bias measure to report large-state favoritism and increasing p just a little leads your bias measure to report small-state favoritism.
[endquote]

You're referring to trial-and-error algorithms. You mean find, by trial and error, the p that will always give the lowest correlation between q and s/q? For there to be such a constant p, you have to already know that the probability distribution is exponential (because, it seems to me, that was the assumption that Warren said results in a constant p for an unbiased formula). If you know that it's exponential, you could find p without trial and error, by analytically finding the rounding point for which the expected s/q is the same in each interval between two consecutive integers, given the assumed (exponential) distribution.
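(For concreteness, here is a rough sketch of the kind of trial-and-error search you describe, in Python; it is self-contained, the populations are made up, the names are mine, and it uses the correlation between q and s/q as the bias measure. It simply assumes that the measure swings from large-state to small-state favoritism as p goes from 0 to 1, as your argument suggests.)

import math
import numpy as np

def apportion_p(populations, house_size, p, iterations=200):
    """Stationary divisor method f(x, p) = floor(x + p); divisor by bisection."""
    def seats_for(d):
        return [math.floor(pop / d + p) for pop in populations]
    lo = sum(populations) / (house_size + len(populations) + 1)   # over-allocates
    hi = sum(populations) * 2.0                                   # under-allocates
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if sum(seats_for(mid)) > house_size:
            lo = mid     # too many seats: divisor too small
        else:
            hi = mid
    return seats_for(hi)

def measured_bias(populations, house_size, p):
    """Correlation between q and s/q; positive is read as large-state favoritism."""
    q = np.array(populations, dtype=float) * house_size / sum(populations)
    s = np.array(apportion_p(populations, house_size, p), dtype=float)
    return np.corrcoef(q, s / q)[0, 1]

def find_p(populations, house_size, steps=40):
    """Bisect p in [0, 1] (Jefferson at 0, Adams at 1) on the sign of the bias."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if measured_bias(populations, house_size, mid) > 0:
            lo = mid     # still favoring large states: raise p
        else:
            hi = mid     # favoring small states (or unbiased): lower p
    return (lo + hi) / 2

# Made-up state populations, for illustration only.
rng = np.random.default_rng(0)
pops = list(rng.exponential(scale=700_000, size=50).astype(int) + 50_000)
print(find_p(pops, 435))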
As I was saying before, it's solvable if the distribution-approximating function is analytically antidifferentiable, as is the case for an exponential or polynomial approximating function.
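To spell out one way of writing the condition I mean (my notation, not necessarily Warren's): let rho(q) be the assumed density of quotas and R_n the rounding point in the interval [n, n+1], so that s = n for q below R_n and s = n+1 above it. The expected s/q over that interval is then

E_n = [ n*(F(R_n) - F(n)) + (n+1)*(F(n+1) - F(R_n)) ] / [ P(n+1) - P(n) ]

where F is an antiderivative of rho(q)/q and P is an antiderivative of rho(q), and the unbias condition is that E_n come out the same for every n. Note that R_n enters only through the single term F(R_n).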
You might say that it could turn out that solving for R, the rounding point, requires a trial-and-error equation-solving algorithm. I don't think it would, because R only occurs at one place in the expression. We had analytical solutions.
But, as I was saying, you only know that WW is unbiased to the extent that you know that your distribution-approximating function is accurate.

I felt that interpolation through a few data points of the cumulative state count as a function of population, or a least-squares fit through more of them, would be better. Warren preferred finding one exponential function to cover the entire range of state populations, based on the total numbers of states and seats. I guess that trying all three ways would show which can give the lowest correlations between q and s/q.
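(For the least-squares variant, a small sketch in Python; everything in it is illustrative, and it assumes the exponential form but fits it to the data rather than deriving it from the totals. For an exponential density, the log of the fraction of states with quota above x is linear in x, so an ordinary least-squares line through those points estimates the rate.)

import numpy as np

def fit_exponential_rate(quotas):
    """Least-squares fit of an exponential density rho(q) ~ exp(-rate*q) to
    observed quotas: for an exponential, log P(Q > q) is linear in q with
    slope -rate, so fit a line through the empirical survival fractions."""
    q = np.sort(np.asarray(quotas, dtype=float))
    n = len(q)
    survival = (n - np.arange(1, n + 1) + 0.5) / n   # +0.5 keeps log() finite
    slope, _ = np.polyfit(q, np.log(survival), 1)
    return -slope

# Made-up quotas (mean 435/50 = 8.7), for illustration only.
rng = np.random.default_rng(1)
print(fit_exponential_rate(rng.exponential(scale=8.7, size=50)))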
Mike Ossipoff