[EM] The Sainte-Lague index and proportionality
Michael Ossipoff
email9648742 at gmail.com
Wed Jul 11 11:16:25 PDT 2012
On Tue, Jul 10, 2012 at 6:17 AM, Kristofer Munsterhjelm <
km_elmet at lavabit.com> wrote:
> On 07/09/2012 06:33 AM, Michael Ossipoff wrote:
>
>> What about finding, by trial and error, the
>> allocation that minimizes the calculated correlation measure. Say, the
>> Pearson correlation, for example. Find by trial and error the allocation
>> with the lowest Pearson correlation between q and s/q.
>>
>> For the goal of getting the best allocation each time (as opposed to
>> overall time-averaged equality of s/q), might that correlation
>> optimization be best?
>>
> Sure, you could empirically optimize the method. If you want
> population-pair monotonicity, then your task becomes much easier: only
> divisor methods can have it
If unbias in each allocation is all-important, then can anything else be as
good as trial-and-error minimization of the measured correlation between q
and s/q, for each allocation?
> so you just have to find the right parameter for the generalized divisor
> method:
>
> f(x,g) = floor(x + g(x))
>
> where g(x) is within [0...1] for all x, and one then finds a divisor so
> that x_1 = voter share for state 1 / divisor, so that sum over all states
> is equal to the number of seats.
>
[unquote]
Yes, that's a divisor method, and its unbias depends on whether the
probability-density-distribution approximation on which it's based is
accurate. For Webster, it's known to be a simplification. For
Weighted-Webster (WW), it's known to be only a guess.
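A minimal sketch of the generalized divisor method quoted above, assuming the constant case g(x) = 0.5 (Webster/Sainte-Lague) as the default; the bisection over the divisor and the example populations are mine:

```python
# seats_i = floor(pop_i / d + g(pop_i / d)), with the divisor d found
# (here by bisection) so that the seats sum to the house size.
import math

def divisor_apportion(populations, n_seats, g=lambda x: 0.5):
    def seats(d):
        return [math.floor(p / d + g(p / d)) for p in populations]
    lo, hi = 1e-9, float(sum(populations))  # lo gives too many seats, hi too few
    for _ in range(200):                    # bisect on the divisor
        mid = (lo + hi) / 2
        if sum(seats(mid)) > n_seats:
            lo = mid                        # divisor too small
        else:
            hi = mid
    return seats(hi)

print(divisor_apportion([5300, 2900, 1800], 10))  # Webster rounding
```

(With the constant g(x) = 0.5 this reproduces ordinary Webster; any g within [0...1] plugs into the same divisor search.)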
You said:
We may further restrict ourselves to a "somewhat" generalized divisor
method:
f(x, p) = floor(x + p).
For Webster, p = 0.5. Warren said p = 0.495 or so would optimize in the US
(and it might, I haven't read his reasoning in detail).
[endquote]
Yes, Warren said that if the probability distribution is exponential, then
that results in a constant p in your formula. He used one exponential
function for the whole range of states and their populations, determined
based on the total numbers of states and seats. But that's a detail that
isn't important unless you've actually decided to use WW, and to use
Warren's one overall exponential distribution.
After I'd proposed WW, Warren suggested the one exponential probability
distribution for the whole range of state populations, and that was his
version of WW.
You said:
Also, I think that the bias is monotone with respect to p. At one end you
have
f(x) = floor(x + 0) = floor(x)
which is Jefferson's method (D'Hondt) and greatly favors large states. At
the other, you have
f(x) = floor(x + 1) = ceil(x)
which is Adams's method and greatly favors small states.
If f(x, p) is monotone with respect to bias as p is varied, then you could
use any number of root-finding algorithms to find the p that sets bias to
zero, assuming your bias measure is continuous. Even if it's not
continuous, you could find p so that decreasing p just a little leads your
bias measure to report large-state favoritism and increasing p just a
little leads your bias measure to report small-state favoritism.
[endquote]
You're referring to trial-and-error algorithms. You mean find, by trial and
error, the p that will always give the lowest correlation between q and
s/q? For there to be such a constant p, you have to already know that the
probability distribution is exponential (because, it seems to me, that was
the assumption that Warren said results in a constant p for an unbiased
formula). If you know that it's exponential, you could find p without
trial and error, by analytically finding the rounding point for which the
expected s/q is the same in each interval between two consecutive integers,
given the assumed exponential distribution.
As I was saying before, it's solvable if the distribution-approximating
function is analytically antidifferentiable, as is the case for an
exponential or polynomial approximating function.
You might say that it could turn out that solving for R, the rounding
point, requires a trial-and-error equation-solving algorithm. I don't
think it would, because R occurs in only one place in the expression. We
had analytical solutions.
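As a numeric illustration of solving for R, under an assumed exponential density f(q) ~ exp(-q/c) (a Warren-style assumption; c, n, and all names here are mine): one simple unbias condition is to pick the rounding point R in [n, n+1] so that the expected s/q over that interval equals 1.

```python
# Sketch: find the unbiased rounding point R for one interval [n, n+1]
# under an assumed exponential density.  Requires n >= 1 (s/q misbehaves
# near q = 0).  Quadrature and root-finding are plain midpoint/bisection.
import math

def integrate(fn, a, b, steps=4000):
    """Plain midpoint-rule quadrature."""
    h = (b - a) / steps
    return h * sum(fn(a + (i + 0.5) * h) for i in range(steps))

def expected_ratio(n, R, c):
    """E[s/q] over q in [n, n+1]: q < R rounds down to n, else up to n + 1."""
    f = lambda q: math.exp(-q / c)
    num = (integrate(lambda q: (n / q) * f(q), n, R)
           + integrate(lambda q: ((n + 1) / q) * f(q), R, n + 1))
    return num / integrate(f, n, n + 1)

def rounding_point(n, c):
    lo, hi = n, n + 1
    for _ in range(60):  # E[s/q] falls as R grows, so bisect on R
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if expected_ratio(n, mid, c) > 1 else (lo, mid)
    return (lo + hi) / 2

print(rounding_point(3, 10.0))
```

With an analytically antidifferentiable density, the same condition can of course be solved in closed form, as the text says; the quadrature here just stands in for that.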
But, as I was saying, you only know that WW is unbiased to the extent that
you know that your distribution-approximating function is accurate.
I felt that interpolation with a few cumulative-state-number (population)
data points, or least-squares with more data points, would be better.
Warren preferred finding one exponential function to cover the entire range
of state populations, based on the total numbers of states and seats. I
guess that trying all three ways would show which can give the lowest
correlations between q and s/q.
Mike Ossipoff