[EM] Proposal
Forest Simmons
fsimmons at pcc.edu
Thu Jun 24 15:16:02 PDT 2004
On Thu, 24 Jun 2004, wclark at xoom.org wrote:
> Forest Simmons wrote:
>
> > To convert this to a fully deterministic but pseudo random method,
> > eliminate steps (4) and (5) and simply take the approval winner from the
> > last pass through the For loop.
>
> ...but why 100 iterations? Why not 50, or 400? Using 100 makes
> everything work out for nice round percentages, but other than that it
> seems arbitrary to me.
It is arbitrary. Ten would be too crude, and a million would be too
expensive without changing any candidate's percentage of the marbles
significantly (assuming a reasonable number of candidates, and ranking
patterns that are not too extreme; see the first example below for extreme
rankings where 100 is not large enough to yield essential equivalence
between the deterministic and marble methods).
>
> What type of behavior does this system actually produce? I'd guess that
> we'd see convergence to a small (singleton?) set very quickly, so that
> additional iterations past a certain point only serve to bias the election
> in favor of the members of this set.
>
If there is a CW, the method seems to converge to it fairly quickly,
but suppose that the ballot set is

48 A(rank=1) B(rank=3) C(rank=4)
 2 B(rank=1) A(rank=2) C(rank=4)
 3 B(rank=1) C(rank=2) A(rank=4)
47 C(rank=1) B(rank=50) A(rank=51).
It will take about fifty iterations for the approval cutoff to work its
way down past the rank-50 position on the C-faction ballots. From that
point on, B is the winner. So the marble method gives A and B about equal
chances of winning, while the deterministic method makes the CW (namely B)
the sure winner.
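Here is a rough Python sketch of the deterministic version run on this
example. Two assumptions are my own reading of the proposal rather than
anything stated in this message, so treat them as illustrative: each
voter's approval cutoff is the expected rank of the lottery winner on that
voter's ballot (current weights playing the role of probabilities), and a
voter approves every candidate ranked strictly better than that cutoff.
The subtract-1/100 step from the original proposal is omitted, and approval
ties are broken in favor of the earlier-listed candidate.

ballots = [          # (faction size, rank of each candidate; rank 1 = best)
    (48, {'A': 1, 'B': 3, 'C': 4}),
    (2,  {'B': 1, 'A': 2, 'C': 4}),
    (3,  {'B': 1, 'C': 2, 'A': 4}),
    (47, {'C': 1, 'B': 50, 'A': 51}),
]
candidates = ['A', 'B', 'C']
weights = {c: 1.0 for c in candidates}   # nominal starting weights (all ones)
marbles = {c: 0 for c in candidates}

for j in range(100):
    total = sum(weights.values())
    approval = {c: 0 for c in candidates}
    for size, rank in ballots:
        # assumed cutoff: expected rank of the lottery winner on this ballot
        cutoff = sum(weights[c] * rank[c] for c in candidates) / total
        for c in candidates:
            if rank[c] < cutoff:         # strictly better than the cutoff
                approval[c] += size
    winner = max(candidates, key=lambda c: approval[c])
    weights[winner] += 1.0               # one more marble for the pass winner
    marbles[winner] += 1

print(marbles)   # under these assumptions A takes the early passes...
print(winner)    # ...and B wins from about pass fifty on

Under these assumptions A piles up marbles for roughly the first fifty
passes; after that the C-faction cutoff has drifted past rank 50, the C
faction starts approving B, and B wins every remaining pass, including the
hundredth, so the deterministic winner is the CW.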
(1) Could anybody have predicted this ahead of time based on a random
sample of the ballots? If not, then there would be little incentive to
falsify preferences.
(2) In an extreme example like this, the marble method and the
deterministic method would have much better agreement with a million
iterations.
> In other words, each iteration results in a winner, and this list of
> winners will eventually converge to a single winner repeated over and over
> (or perhaps cycle between multiple winners).
If there is a Condorcet cycle, then the winners may cycle. The more
iterations, the better the estimate of the percentage of the time each
member of the cycle will have the upper hand in the limit of infinitely
many iterations.
>
> If that convergence is what's desired, why even bother keeping track of
> the initial rounds at all? Why not wait until the system has settled down
> before we start adding marbles to the bag (breaking the relationship
> between marbles and weights in the process)?
That would be a natural refinement. But I want to keep the basic method
simple. Another approach would be to pass the winning weights through a
low-pass filter so that the original nominal weights (all ones) and other
early effects would eventually lose their influence.
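For concreteness, here is one way (of many) to read that low-pass-filter
idea: replace marble accumulation with an exponential moving average of
the pass winners, so the uniform starting weights and the earliest passes
decay geometrically. The smoothing factor alpha and the helper
round_winner are hypothetical, just for illustration.

def filtered_weights(round_winner, candidates, passes=100, alpha=0.05):
    """round_winner(weights) -> the approval winner of one pass.

    alpha is an assumed smoothing factor; smaller alpha = longer memory."""
    weights = {c: 1.0 / len(candidates) for c in candidates}
    for _ in range(passes):
        w = round_winner(weights)
        for c in candidates:
            target = 1.0 if c == w else 0.0
            # exponential moving average: early passes lose influence
            weights[c] = (1 - alpha) * weights[c] + alpha * target
    return weights

Since each update is a convex combination, the weights stay normalized
(they always sum to one), so they can be read directly as probabilities.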
>
> > In fact, the method was designed to estimate the approval equilibrium
> > candidate winning probabilities: the candidate weights after the last pass
> > estimate those probabilities as percentages, i.e. the number of each
> > candidate's marbles divided by one hundred, estimates that candidate's
> > equilibrium approval winning probability.
>
> I assume it's related to your work described in this post:
>
>
> http://lists.electorama.com/pipermail/election-methods-electorama.com/2003-December/011372.html
Yes, closely related, approximating the same kind of equilibrium
probabilities by a different approach.
>
> This really sounds VERY much like the types of algorithms used at search
> engines, particularly the type of ranking systems used by Google and
> Teoma. They use various algorithmic tricks to vastly simplify their
> calculations, so perhaps you might be able to adapt something to your
> purposes. A search for "Google PageRank" will turn up plenty of details,
> much of it as mathematical as you'd like to get, noise-floors and
> eigenvectors and all. The Wikipedia, as always, is a good place to start:
>
> http://en.wikipedia.org/wiki/PageRank
Thanks for the reference. I'll look it up soon.
>
> > Observe that in both versions a candidate has to win at least one
> > approval round before having any chance of being the ultimate winner.
>
> I like your earlier goal of making sure any candidate with any approval at
> all would have a positive chance of winning. Why did you reject it?
>
We start by giving all candidates equal weight. If they cannot get even one
win (in a hundred tries) with that crutch, then they don't have enough
support to deserve any further chance.
But ultimately, if a candidate not in the equilibrium set has a chance of
winning, then there will be some incentive for distorting preferences (it
seems to me).
> > Note that the deterministic version fails participation in one sense:
> > adding ballots favorable to the winner could change the value of J for
> > which this winner wins to J=99, and then some other candidate wins
> > on the 100th pass.
>
> Under what circumstances do you see this happening? I'm still not clear
> on the behavior of this system. Do you have any simulation results you
> could share? Or even just some intuitions you might have?
>
In an example where the winners cycle among A, B, and C, suppose that A is
the winner on the hundredth iteration. Adding ballots favorable to A may
disturb the phase of the cycle so that C is the winner of the hundredth
iteration, even though A wins more frequently and C less frequently than
before.
> I'm confused as to why you're talking about "the value of J for which this
> winner wins" rather than who wins in round J. Your way makes it sound as
> if the number of wins for each candidate is fixed, and I don't see why
> that would be the case.
>
> > However, the total weight of the winner would not be decreased by the
> > added favorable ballots, so his winning probability (in the
> > non-deterministic version) would not decrease, and hence the prior
> > probability of winning in the deterministic case would not decrease
> > either.
>
> I don't follow your reasoning here. If the winner originally won in both
> the 99th pass as well as the 100th pass, but because of some ballot
> changes in her favor she no longer wins in the 100th pass... doesn't she
> get one less marble?
>
> Or are you implying that additional favorable ballots are only capable of
> shifting around the order in which candidates win each iteration round,
> and not the actual number of rounds won by particular candidates?
>
Do the above comments clarify these questions for you?
> > So the spirit of Participation is met: you don't decrease the prior
> > probability of your candidate's winning by participating.
>
> Everything else aside, we're still left with the issue of how to interpret
> probabilities for unique events. A strict Frequentist might accuse you of
> using too Bayesian an interpretation, in the deterministic (or perhaps
> even non-deterministic) case.
>
> If my candidate loses after I vote -- but provably would have won had I
> not voted -- I may or may not be willing to buy the Bayesian line about
> the relevance of prediction and prior knowledge and such.
>
One way to resolve the philosophical problem is as I suggested in the
posting you referred to above:
"Instead of enforcing the probabilities with a drawing, we just interpret
the non-zero probabilities as saying that in statistically similar
populations of voters, these other candidates have significant chances of
winning."
[meaning chances measured by the percentages in question]
When I embarked on this new approach I had in mind Edward Nelson's
non-standard approach to probability in his book, "Radically Elementary
Probability Theory," an excellent book available in English, Russian,
French, German, and (probably) Swahili.
Suppose that we have a standard number K of candidates, a standard number
N of ballots, and that no voter uses non-standard ranks on his ballot.
Let L be a non-standard whole number, i.e. an integer greater than any
standard integer.
Change the For loop upper limit from 100 to L, and subtract 1/L instead of
1/100 in the last step of the loop (or just eliminate this step entirely).
Suppose that candidate A wins m times out of L. Then we interpret the
standard part of (m/L) as A's equilibrium winning probability.
If m is only a standard number, then (m/L) will be infinitesimal, and the
standard part of (m/L) will be zero, so no candidate that wins only a
standard number of times has a positive equilibrium winning probability.
Now here's why we can call these probabilities equilibrium probabilities:
First let's form a vector P whose components are these probabilities, so
that the component of P corresponding to candidate A is st(m/L), the
standard part of (m/L).
If we take a round ball of infinitesimal diameter centered on this
probability vector P, and color the points of the ball according to which
candidates win when those points are used as weight vectors, then the
standard parts of the proportions of volumes of points of the respective
colors will equal the respective components of P.
In other words, a random infinitesimal perturbation P' of P will (when
used as a weight vector for approval cutoff calculations) yield candidate
A as winner with a probability infinitely close to (m/L), which is
infinitely close to A's component of P as well as infinitely close to A's
component of P', since the perturbation was infinitesimal.
(no need for Bayes in this approach)
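In symbols (my compact restatement, with m_A denoting A's number of wins
out of the L passes, and P' any weight vector within an infinitesimal
distance of P):

  P_A = \operatorname{st}\!\left(\frac{m_A}{L}\right), \qquad
  \lVert P' - P \rVert \approx 0 \;\Longrightarrow\;
  \Pr[\,\text{A wins under weight vector } P'\,] \simeq P_A,

where \simeq means "differs by at most an infinitesimal."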
If you don't like non-standard analysis, you can translate the whole thing
into epsilons and deltas, at the expense of a lot of clutter, not to
mention loss of intuitiveness.
I say "you can" because I don't have the stomach for it.
Forest