[EM] PLEASE consider: More non-deterministic methods

Jobst Heitzig heitzig-j at web.de
Wed Jun 23 18:01:16 PDT 2004


Dear Bill and Forest!

Thanks for replying to my suggestions!

Forest wrote:
> The other solution is to use pseudo random methods.  Pseudo random methods
> are absolutely deterministic and the results are completely reproducible
> given the same set of anonymous ballots, yet in spirit they are
> probabilistic.

This is a good point when it comes to practical implementation! I fear,
though, that it will be difficult to introduce methods with a
considerable amount of randomness to the general public... What I
intended was only a first theoretical discussion of such methods,
since I had the impression that randomness as an ingredient of methods
has simply not been considered enough.

As for "Random Ballot": This is a different and far more extreme usage
of randomness than the one I have in mind. We should try to find out how
to introduce as few randomness as possible, and only use randomness when
there is no Condorcet winner. Therefore I suggested "randomizing"
existing methods in order to get a "continuum" of methods with more or
less randomness.

Forest continued:
> Here's another probabilistic method that is easily converted into a
> deterministic method that is (for all intents and purposes) equivalent to
> it:
> 
> (1) Voters rank the candidates.  They may duplicate ranks and they may
> skip several ranks to reflect their feelings about gaps in worthiness for
> office.  Unranked candidates are considered equally ranked one rank below
> the lowest ranked candidate.
> 
> (2) After the ballots have been submitted, each candidate's weight is
> initialized at the value one.
> 
> (3) For J from one to 100 ...
> 
>      (3a)  On each ballot using the current candidate weights
>            find the weighted average of the ranks on that ballot.
> 
>      (3b)  Use this weighted average rank as an approval cutoff rank.
> 
>      (3c)  Tally over all ballots the candidate approvals using the
>            current approval cutoffs.
> 
>      (3d)  Increment the weight of the candidate with the greatest
>            approval tally.
> 
>     Next J
> 
> (4) Put marbles into a bag so that each candidate corresponds to a
> different color of marble, and the number of marbles for a candidate is
> that candidate's final weight.
> 
> (5) Draw a marble at random from the bag.  The candidate to whom that
> marble belongs is the winner.
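
To make the quoted steps concrete, here is a rough Python sketch of them.
The ballot layout (one list of integer ranks per ballot, indexed by
candidate, with 1 = best and unranked candidates already assigned the
bottom rank) and the handling of cutoff and approval ties are my own
assumptions, not part of Forest's description.

import random

def weighted_approval_lottery(ballots, num_candidates, rounds=100, seed=None):
    weights = [1] * num_candidates                     # step (2)
    for _ in range(rounds):                            # step (3)
        approvals = [0] * num_candidates
        for ranks in ballots:
            # (3a) weighted average rank on this ballot
            cutoff = sum(w * r for w, r in zip(weights, ranks)) / sum(weights)
            # (3b)+(3c) approve candidates at or above the cutoff
            # (smaller rank number = better; ties with the cutoff count)
            for c, r in enumerate(ranks):
                if r <= cutoff:
                    approvals[c] += 1
        # (3d) increment the weight of the most approved candidate
        # (the first such candidate in case of a tie -- an assumption)
        weights[approvals.index(max(approvals))] += 1
    # (4)+(5) draw one "marble" with probability proportional to weight
    return random.Random(seed).choices(range(num_candidates),
                                       weights=weights, k=1)[0]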

This is interesting indeed. It seems to be monotonic, but I think it
might not elect the Condorcet winner.
	However, I suggest considering the median rank instead of the mean
rank: the approval score of A is then just the number of voters who
prefer the deterministic choice A to the probabilistic choice defined by
the current weights, in the sense that for such a voter A has a greater
probability of being preferred to the probabilistic outcome than the
probabilistic outcome has of being preferred to A!
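
In code terms (reusing the layout assumed in the sketch above, with
integer weights), only the cutoff computation would change; roughly:

import statistics

def median_cutoff(ranks, weights):
    # Weighted median of the ranks on one ballot: a drop-in replacement
    # for the weighted-average cutoff in steps (3a)/(3b) above.
    # Repeating each rank weights[c] times is a simple, if inefficient,
    # way to take a weighted median with integer weights.
    expanded = [r for c, r in enumerate(ranks) for _ in range(weights[c])]
    return statistics.median(expanded)
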
	Perhaps one could also introduce some mechanism to reduce weights so
that not all options will have a positive probability of winning. For
example, you could transfer some proportion of the weights from less
approved to more approved options, perhaps using a Markov-like
transition matrix?
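
Purely as an illustration of what I mean (the mechanism is deliberately
left open above), such a transfer could look like this: after each round,
every option passes a fixed fraction of its weight to the options with
strictly higher approval.

def transfer_weights(weights, approvals, fraction=0.1):
    # Illustrative only: move 'fraction' of each option's weight to the
    # options with strictly higher approval, split in proportion to their
    # approval scores.  Options that are not out-approved keep their
    # weight, so repeated application drives weak options toward zero.
    new_weights = list(weights)
    for c, w in enumerate(weights):
        better = [d for d in range(len(weights)) if approvals[d] > approvals[c]]
        if not better:
            continue
        moved = fraction * w
        new_weights[c] -= moved
        total = sum(approvals[d] for d in better)
        for d in better:
            new_weights[d] += moved * approvals[d] / total
    return new_weights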

By the way, here's another method along the lines of the Copeland-based
one I introduced in my last mail:
	The Copeland-based method was: Pick an option B uniformly at random.
Then choose the winner uniformly at random from the set of options that
have the smallest sum of magnitudes of defeats against them, among all
options that have the largest Copeland value, among all options defeating B.
	MINMAX-BASED METHOD: Pick an option B uniformly at random. Then choose
the winner uniformly at random from the set of options that have the
smallest maximal magnitude of defeats against them, among all options
defeating B.
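
For concreteness, here is a sketch of the MinMax-based method, assuming a
matrix in which defeat[a][b] holds the magnitude of a's defeat of b (0 if
a does not defeat b); the exact measure of magnitude is left open, as
above, and I assume B itself wins when nothing defeats it.

import random

def minmax_based_random(defeat, seed=None):
    n = len(defeat)
    rng = random.Random(seed)
    b = rng.randrange(n)                    # pick the option B at random
    # options defeating B; if none (B is unbeaten), let B itself win
    pool = [a for a in range(n) if defeat[a][b] > 0] or [b]
    # maximal magnitude of the defeats against an option
    def worst_defeat(a):
        return max((defeat[x][a] for x in range(n) if defeat[x][a] > 0),
                   default=0)
    best = min(worst_defeat(a) for a in pool)
    return rng.choice([a for a in pool if worst_defeat(a) == best])

The Copeland-based method differs only in the selection step (largest
Copeland value first, then smallest sum of defeat magnitudes).
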
	Advantage: Cloning some option increases the Copeland value of all
options defeating the clones and can thus lift the probability of such
an option from zero to positive in the Copeland-based method. In the
MinMax-based method, by contrast, the maximal magnitude of defeats
against each non-clone remains unchanged, so cloning can only increase
the probability of an option that already has a positive probability.
	Remaining disadvantage: Cloning can still considerably increase the
probability of an option that defeats the clones.
	Therefore I am currently searching for a probability distribution P
that remains unchanged when an option is cloned, in the sense that the
clones' probabilities sum up to the original probability of the cloned
option. This distribution P could then be used instead of "Pick an
option B uniformly at random" in the above methods, so that cloning
would not change the probabilities.
	In fact, one such P is this: Attach random weights to all defeats in
such a way that each component ("set of clones") of the defeat graph
remains a component of the weighted defeat graph; then compute the
"intermediate winner" (= the B for the above methods) by some
composition-consistent method such as river, beatpath, or Tideman, using
the random weights as magnitudes.
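
Purely as an illustration (the i.i.d. uniform weights below are only a
placeholder and do not by themselves guarantee the component condition
just stated), drawing B from such a P could look like this with beatpath
as the composition-consistent method, using the defeat-matrix layout from
the sketch above:

import random

def random_weight_seed(defeat, seed=None):
    # Attach random magnitudes to the existing defeats and take the
    # beatpath winner under those magnitudes as the seed option B.
    n = len(defeat)
    rng = random.Random(seed)
    w = [[rng.random() if defeat[a][b] > 0 else 0.0 for b in range(n)]
         for a in range(n)]
    # strongest beatpaths (Floyd-Warshall on path strength)
    p = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if i != j:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    # beatpath winner(s): no other option reaches them by a stronger path
    winners = [a for a in range(n)
               if all(p[a][b] >= p[b][a] for b in range(n) if b != a)]
    return rng.choice(winners)
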
	Problem with this P: Although the resulting MinMax-based methods become
clone-consistent this way, they might lose their monotonicity unless the
random weights can be chosen so as to ensure monotonicity...

Jobst



