[EM] Proposal

wclark at xoom.org
Fri Jun 25 08:46:07 PDT 2004


wclark at xoom.org wrote:

>> ...but why 100 iterations?

Forest Simmons wrote:

> It is arbitrary.

If it's arbitrary -- but still has a significant impact on the result of
the election (in the deterministic case, at least) -- I suspect that many
people would have a problem with any particular value used.

While I may not be able to reasonably predict the exact results from
polls, I can probably figure out enough to realize that the lower the
value of J, the better the chances for my non-CW favorite to win.
Opponents of the CW (if they are able to identify themselves as such prior
to the election) may argue for a lower value of J.  Since they may make up
a majority of the electorate, this presents a real problem.

The refinements you mention later may make this point entirely moot,
however.  Discounting early rounds or utilizing some sort of low pass
filter may mean that no matter WHAT value of J is used, my non-CW favorite
doesn't have any consistent advantage.

> If there is a CW, the method seems to always converge to it fairly
> quickly, but suppose that the set is

> 48 A(rank=1)  B(rank=3)  C(rank=4)
> 2  B(rank=1)  A(rank=2)  C(rank=4)
> 3  B(rank=1)  C(rank=2)  A(rank=4)
> 47 C(rank=1)  B(rank=50) A(rank=51) .

> It will take about fifty iterations for the approval cutoff to work its
> way down under rank 50 in the C faction ballots.  From that point on B is
> the winner.  So the marble method gives A and B about equal chances of
> winning, while the deterministic method makes the CW the sure winner.
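
(Quick sanity check on that profile, using a throwaway Python sketch of my
own -- nothing to do with your actual procedure: tallying the pairwise
contests confirms B beats A 52-48 and beats C 53-47, so B really is the CW,
while A and C tie 50-50.)

factions = [  # (count, ranking from most to least preferred)
    (48, ['A', 'B', 'C']),
    ( 2, ['B', 'A', 'C']),
    ( 3, ['B', 'C', 'A']),
    (47, ['C', 'B', 'A']),
]

def pairwise(x, y):
    # number of voters ranking x above y
    return sum(n for n, r in factions if r.index(x) < r.index(y))

for x, y in [('B', 'A'), ('B', 'C'), ('A', 'C')]:
    print(x, y, pairwise(x, y), pairwise(y, x))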

> (1) Could anybody have predicted this ahead of time based on a random
> sample of the ballots?  If not, then there would be little incentive to
> falsify preferences.

In the deterministic case, I think perhaps somebody COULD have predicted
this ahead of time based on a random sample of the ballots.

My intuition is that (practical) unpredictability only comes into play in
the deterministic case when there is an eventual cycle in the outcome (so
that the value of J essentially picks one member of that cycle at random).

Since in your example the system eventually settles down to just returning
B after every round, and does so well before the J=100 cutoff, I think it
might be possible to predict that.  With input sets that eventually result
in (say) A, B, A, B, A, B, etc. cycles, I suspect the unpredictability
results from not knowing WHICH of A or B wins for sufficiently large J.
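
For instance (a toy sketch only, and I'm assuming the deterministic variant
simply reports the winner of round J, which may not be exactly what you
intend): once the per-round winners settle into an A, B, A, B, ... cycle,
the choice of J picks between A and B purely by parity.

def winner_at(round_winners, j):
    # round_winners: observed per-round winners, ending in a period-2 tail
    if j < len(round_winners):
        return round_winners[j]
    tail_start = len(round_winners) - 2
    return round_winners[tail_start + (j - tail_start) % 2]

rounds = ['C', 'C', 'A', 'B', 'A', 'B']  # hypothetical run that falls into an A/B cycle
print(winner_at(rounds, 99), winner_at(rounds, 100))  # B A -- parity of J decides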

> We start giving all candidates equal weight.  If they cannot get even one
> win (in an hundred tries) with that crutch, then they don't have enough
> support to deserve any further chance.

This ties back to what I was saying about Procedural vs. Substantive
fairness.  Consider an extremely polarized electorate with just two
parties:

51 A(rank=1) B(rank=x)
49 B(rank=1) A(rank=y)

Candidate A wins every time.  While that may seem procedurally fair, given
that A has an absolute majority of support, think about what happens in
multiple elections of the same type (either different districts voting for
a multi-member body, or multiple terms spread over time for a
single-member body).  The candidates from B's party will NEVER be elected
-- despite the fact that they have the support of nearly half the
population!
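
To put rough numbers on it (my own toy simulation, not tied to your
proposal): with 100 such districts, deterministic majority rule seats A's
party 100 times out of 100, while a randomized method like random ballot
seats each party in roughly its share of districts.

import random

random.seed(0)
ballots = ['A'] * 51 + ['B'] * 49          # one hypothetical 51/49 district
seats = {'majority': {'A': 0, 'B': 0}, 'random ballot': {'A': 0, 'B': 0}}

for _ in range(100):                       # 100 districts (or 100 terms)
    # Deterministic majority rule: A wins every single time.
    seats['majority']['A' if ballots.count('A') > ballots.count('B') else 'B'] += 1
    # Random ballot: draw one ballot at random and elect its first choice.
    seats['random ballot'][random.choice(ballots)] += 1

print(seats['majority'])       # {'A': 100, 'B': 0}
print(seats['random ballot'])  # roughly 51/49 (depends on the seed)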

This is a very real problem in the Southeastern US, where ethnic minorities
have historically been under-represented in office.  In fact, this
principle is a big part of the logic behind district gerrymandering.
Each individual election may be "fair" in the sense that it's procedurally
fair, but the overall results are hardly fair in actual substance
(outcome).

> But ultimately, if a candidate not in the equilibrium set has a chance of
> winning, then there will be some incentive for distorting preferences (it
> seems to me).

Random ballot (a.k.a. "random dictator") gives every candidate with any
first-place support at all a chance of winning, yet there is no incentive
whatsoever for distorting preferences.  So it's definitely possible to avoid
that incentive (though "reasonable" methods may still fit your intuition --
I'm not sure).
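
Concretely (this is just the standard definition of random ballot, nothing
specific to your method): a candidate's probability of winning is its share
of first-place votes, so a voter can only hurt their favorite by listing
someone else first.  Applied to the first choices in your 48/2/3/47 example:

from collections import Counter

def random_ballot_probabilities(first_choices):
    # each candidate's win probability is its share of first-place votes
    counts = Counter(first_choices)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

profile = ['A'] * 48 + ['B'] * 5 + ['C'] * 47   # first choices from your example
print(random_ballot_probabilities(profile))     # {'A': 0.48, 'B': 0.05, 'C': 0.47}

Note the CW only wins 5% of the time there, so I'm not holding random ballot
up as a good method -- just as proof that giving non-equilibrium candidates a
positive chance doesn't by itself create an incentive to misreport.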

> Do the above comments clarify these questions for you?

Yes, thanks.

> [Detailed explanation snipped]

> If you don't like non-standard analysis, you can translate the whole
> thing into epsilons and deltas, at the expense of a lot of clutter, not
> to mention loss of intuitiveness.

That's okay, I'm (mostly) comfortable with non-standard analysis...
although the notion of a procedure with a non-standard number of steps
makes my head hurt a little bit (it's a bit too close to a "completed
infinity" for my liking... though I don't think it introduces any sort of
real complications in this case).

One concern I had initially (though I'm not entirely sure it applies) is
that there might be more than one probability vector P that has the
required properties.  What do you think about the possibility of more than
one equilibrium set?  What if using initial weights of (say) 2 resulted in
the system converging on a different candidate or set of candidates?

I'm not sure such a thing is possible in the system you describe... but
multiple distinct local minima/maxima are par for the course in other
similar systems (training of Artificial Neural Networks comes to mind).
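
Just to illustrate the kind of thing I have in mind (a toy "rich get richer"
update of my own invention, definitely NOT your actual procedure): the same
update rule can have two stable equilibria, and whether a candidate's weight
starts at 1 or at 2 determines which equilibrium the iteration converges to.

def final_share(weight_a, weight_b, rounds=60):
    # iterate a "rich get richer" update on A's share of the total weight
    share = weight_a / (weight_a + weight_b)
    for _ in range(rounds):
        share = share**2 / (share**2 + (1 - share)**2)
    return share   # 1.0 means A ends up with all the weight, 0.0 means B does

print(final_share(1, 1.5))   # ~0.0 -- converges to B's equilibrium
print(final_share(2, 1.5))   # ~1.0 -- converges to A's equilibrium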

-wclark

-- 
Protest the 2-Party Duopoly:
http://vote.3rd.party.xoom.org/


