[EM] more on apportionment - which member of potpourri "morally best"? - about AR

Warren Smith wds at math.temple.edu
Thu Jan 18 13:28:32 PST 2007


I updated the paper (and/or web page)
  http://rangevoting.org/NewAppo.html
with some more of the potpourri of apportionment methods
you can get with different probabilistic models and different goals.

I'm now coming to the opinion that alternative method #2
in the potpourri there is the "morally best" of the lot.

I might call this the "Ossipoff-Smith" method (at least were Ossipoff in the right mood for
that, which he apparently is not), because the underlying theoretical attack is exactly
the one suggested by Mike Ossipoff for his "bias-free Webster" method, except
that the underlying probabilistic model is now an exponential distribution,
not a uniform "distribution."  (I use the quotes because Ossipoff has in various ways
ignored the requirements of probability theory, e.g. in his recent attack on the idea
that probability distributions need to be normalizable; I will say that, generally, when
such an attack is required, it is a symptom of rot in one's underlying model which it
would be better to fix.)  We get a more general formula than Ossipoff's that
reduces to his in the limit K-->0, where K = #states/#seats = 50/435.
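
To illustrate the kind of computation involved, here is a minimal numerical
sketch (my own notation, not necessarily that of NewAppo.html: quotients q
in [a, a+1] are rounded down below a cutpoint c and up above it, "bias" is
the expected signed seat-change per capita E[(round(q)-q)/q], and the
exponential model is the truncated density proportional to exp(-K*q)).
In the K-->0 limit the computed cutpoint should reproduce Ossipoff's
bias-free formula (a+1)^(a+1) / (a^a * e):

import math
from scipy.integrate import quad
from scipy.optimize import brentq

def bias(c, a, K):
    # density proportional to exp(-K*q) on [a, a+1]; K-->0 is uniform
    f = lambda q: math.exp(-K * q)
    Z, _ = quad(f, a, a + 1)                                  # normalizer
    down, _ = quad(lambda q: (a - q) / q * f(q), a, c)        # round down
    up, _ = quad(lambda q: (a + 1 - q) / q * f(q), c, a + 1)  # round up
    return (down + up) / Z

def cutpoint(a, K):
    # bias is positive near c=a (everything rounds up) and negative
    # near c=a+1 (everything rounds down), so a root lies in between
    return brentq(lambda c: bias(c, a, K), a + 1e-9, a + 1 - 1e-9)

for a in (1, 2, 5):
    bias_free = (a + 1) ** (a + 1) / (a ** a * math.e)
    print(a, cutpoint(a, 1e-9), cutpoint(a, 50 / 435), bias_free)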

QUANTIFIED MORALITY
That method aims to null out the "bias," which (for that method) is defined
as the expected (signed) additive change in the number of seats a state has
(caused by the rounding to integer), divided by state population.
Why is this "morally right"?  Well, it seems that the unfairness, for an
individual person in that state, of having X times more congressmen than he ideally
should, is
  |X-1|/StatePopulation.
At first I thought that X=1/10 should be exactly as unfair as X=10, and hence was
considering unfairness formulas involving |log(X)|.
However, I later decided that X=10, X=100, etc. keeps getting more and more unfair,
roughly in proportion to X, while decreasing X to 0.1, 0.01, etc. does not get much
more unfair: X=0 is essentially the same badness as X=0.1, whereas
X=100 really is about 10 times worse than X=10.

Now we have to weight this per-person unfairness by the number of people in the
state, which means we multiply by StatePopulation, thus cancelling the denominator
to get |X-1|.

But that is exactly what the derivation of the method nulled out, because
|X-1|   is  proportional to additive change in #seats divided by state population.
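
A trivial arithmetic illustration of that asymmetry (nothing
method-specific; just tabulating the two candidate unfairness measures):

import math
for X in (0.0, 0.01, 0.1, 1.0, 10.0, 100.0):
    # |log X| treats X=10 and X=1/10 as equally bad; |X-1| does not
    logpart = "inf" if X == 0 else f"{abs(math.log(X)):.2f}"
    print(f"X={X:<6}  |X-1|={abs(X - 1):<6.2f}  |log X|={logpart}")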

OPEN QUESTIONS ABOUT THIS (AND THE OTHER METHODS IN  http://rangevoting.org/NewAppo.html)
These include whether we can devise some pairwise-optimality or global-optimality theorem
that these methods obey (see   http://rangevoting.org/Apportion.html   for such theorems
concerning the classic apportionment methods, as well as an introduction to them generally).

------------------

Concerning Ossipoff's latest "AR" ("adjusted rounding") apportionment method,
explained in his post titled
  "Detailed (but obvious) instructions for Adjusted-Rounding":
it sounds interesting.

To take a more abstract view of this:  it seems to me that what you can accomplish with
the idea of treating each "cycle" on its own is to avoid having to use ANY probabilistic
model.  That's because the probability distribution within one "cycle" is just the data itself.
We can now round the elements of that cycle in such a way as to minimize our favorite
bias measure.
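
To make that concrete, here is a minimal sketch of rounding one cycle
against its own empirical distribution (the function and the particular
per-capita bias measure are my illustration, not necessarily Ossipoff's
exact AR rule).  All quotients q_i in the cycle lie in [n, n+1]; we pick
the threshold that makes the empirical bias, sum_i (round(q_i) - q_i)/q_i,
as close to zero as possible (since q_i = population_i/divisor, dividing
by q_i is, up to a constant, dividing by state population):

def best_threshold(quotients, n):
    # quotients: this cycle's quotients, all lying in [n, n+1]
    qs = sorted(quotients)
    # only thresholds between consecutive data points matter, since
    # sweeping t across a gap changes no rounding decision
    cuts = [n] + [(qs[i] + qs[i + 1]) / 2 for i in range(len(qs) - 1)] + [n + 1]
    best = None
    for t in cuts:
        rounded = [n if q < t else n + 1 for q in qs]
        bias = sum((r - q) / q for r, q in zip(rounded, qs))
        if best is None or abs(bias) < abs(best[0]):
            best = (bias, t, sum(rounded))
    return best   # (achieved bias, threshold, seats awarded to the cycle)

print(best_threshold([3.2, 3.4, 3.7, 3.9], 3))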

The sets that constitute cycles, however, depend on the global "divisor," which we have
to choose to cause the total number of seats to come out right.  It is not obvious (and requires
proof) that, for any given bias-measure, the resulting method actually works, i.e. that a
suitable divisor actually must exist.  Also, it is not obvious (and requires proof)
that two DIFFERENT divisors do not both exist that yield different apportionments.
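
For contrast, here is the monotone divisor search that settles both
questions for a classical divisor method (a sketch with Webster rounding
and made-up toy populations).  The total number of seats is a nonincreasing
step function of the divisor, so bisection finds a suitable one; for AR,
the rounding rule inside seats() would itself depend on the cycle
structure, which is exactly why existence and uniqueness need fresh proof:

def seats(pops, divisor):
    return sum(int(p / divisor + 0.5) for p in pops)   # Webster rounding

def find_divisor(pops, house_size):
    lo, hi = 1e-9, float(sum(pops))    # seats(lo) huge, seats(hi) tiny
    for _ in range(200):               # bisect on the divisor
        mid = (lo + hi) / 2
        if seats(pops, mid) > house_size:
            lo = mid                   # too many seats: raise the divisor
        else:
            hi = mid
    return hi

pops = [9061, 7179, 5259, 3319, 1182]   # toy populations
d = find_divisor(pops, 26)
print(d, [int(p / d + 0.5) for p in pops])   # apportionment sums to 26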

It seems to me that such proofs would follow from an appropriate general-purpose lemma
which says: as you smoothly scale up the data within a "cycle," you get more seats one
at a time (never fewer, and never 2-hops), and if
you add a new datapoint at the left endpoint n of the cycle, that increases the
number of seats for that cycle by exactly n (or, if added at the right endpoint, by n+1).
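
The first claim can at least be tested numerically before attempting a
proof.  A sanity-check sketch, using ordinary Webster rounding as a
stand-in for the cycle's actual rounding rule (the real AR rule would have
to be substituted to test anything about AR itself):

def total(data, scale):
    return sum(int(x * scale + 0.5) for x in data)

data = [3.2, 3.4, 3.7, 3.9]    # one cycle's data, made up
prev = total(data, 1.0)
for i in range(1, 5001):
    s = 1.0 + i * 1e-4         # smoothly scale the data up to 1.5x
    cur = total(data, s)
    assert cur - prev in (0, 1), (s, prev, cur)   # never fewer, never 2-hops
    prev = cur
print("seat total only ever moved up one at a time; final:", prev)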

Further, this whole idea is not a "divisor method" in the sense of Balinski & Young et al,
and hence ought to exhibit "monotonicity failures."

Subjectively speaking, I do not see why the advantages we can gain from going
to this sort of method are worth the cost of losing monotonicity (because
such a loss seems, based on the historical evidence, to make a method politically unacceptable).

Warren D Smith
http://rangevoting.org




