[EM] Defeat Strength

Richard Lung voting at ukscientists.com
Sun Sep 4 10:45:38 PDT 2022


I've not been receiving messages for a month or so.

JFS Ross has a chapter, in Elections and Electors, on the Borda method, including some of the names you mention. His main point is that Laplace adjudicated between Borda and Condorcet, and chose Borda. The reason given is that greater and lesser preferences should not be treated as of equal importance, as Condorcet pairing treats them.
Laplace gave a proof, in Analytic Theory of Probabilities, which my computer refused to access from the Internet Archive.
A Laplace proof is a big deal. He would often say, in Celestial Mechanics, that a result was too obvious to need proof. When his American translator saw that, he said, he knew he was in for a hard night's work. Laplace is rated one of history's half-dozen greatest mathematicians.

It does not really matter, though. The Laplace position was illuminated by SS Stevens, on the scales of measurement, in Science, in the early 1940s. The advance of the Borda method over Condorcet pairing is the progression to an interval scale of measurement.
Standard statistical practice has a name for the Borda count: weighting in arithmetic progression. Statisticians use this when they assume that an arithmetic series adequately weights the successive importance of a series of data classes, in order to find a representative average. It is thus an assumed interval scale.
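The arithmetic-progression weighting described above can be sketched in a few lines (a minimal illustration, assuming complete rankings; the function name is chosen here, not taken from any source):

```python
# Minimal sketch: the Borda count as weighting in arithmetic progression.
# A candidate in position i of an m-candidate ranking (best first) receives
# m-1-i points, i.e. the arithmetic series of weights m-1, m-2, ..., 0.

def borda_count(ballots):
    """Return total Borda scores, given ballots as rankings (best first)."""
    scores = {}
    for ranking in ballots:
        m = len(ranking)
        for i, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (m - 1 - i)
    return scores

ballots = [("A", "B", "C"), ("A", "C", "B"), ("B", "A", "C")]
print(borda_count(ballots))  # A: 2+2+1=5, B: 1+0+2=3, C: 0+1+0=1
```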
The Gregory method of transferable voting is a real interval scale, achieved by the standard technique known as weighting in arithmetic proportion, used when the real values of the class intervals in the data are known -- a preferable, more accurate calculation. (The interval scale is next to the ratio scale in accuracy; election methods use both assumed and real ratio scales, by way of election quotas.)
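A hedged sketch of the real weighting involved: under the Gregory method, when an elected candidate exceeds the quota, each of that candidate's ballots passes onward at a real fractional transfer value, surplus divided by the candidate's total, rather than at an assumed rank weight. The figures below are illustrative assumptions only, not any statute's exact counting rules.

```python
# Illustrative sketch of a Gregory-style surplus transfer: ballots for an
# elected candidate continue at a reduced, real-valued weight, so real
# fractional values replace assumed rank weights.

def gregory_transfer_value(votes_for_winner, quota):
    """Fraction of each ballot's weight that transfers onward."""
    surplus = votes_for_winner - quota
    return surplus / votes_for_winner

# Example: 150 first preferences against a quota of 100 leave a surplus
# of 50, so each ballot continues at weight 50/150 = 1/3.
print(gregory_transfer_value(150, 100))
```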
Condorcet pairing stages binary counts, associated with the weakest scale, the categorical scale. Though in both instances, Condorcet and Borda, the ranked choice is on the ordinal scale.


Regards,
Richard Lung.




On 4 Sep 2022, at 4:22 pm, Colin Champion <colin.champion at routemaster.app> wrote:

Forest – this isn’t an answer, but a dissertation on a related topic.
   I recently read Condorcet's Essai, and then commentaries by Todhunter, Black and Young. I concluded that Condorcet's probability theory was all at sea, and that his commentators were too kind to him. In my view you cannot make sense of voting using pure probability theory (even turning a blind eye to the faults of a jury model); you need to fall back on statistics. This makes it possible to get away from Condorcet's untenable independence assumption and also to correct another fatal error in his method. He assumes that when a voter votes A>B>C, he is no likelier to be right in placing A above C than in placing A above B. I believe that this is impossible if erroneous rankings are treated as random errors.
   But if you try to find the relation between the likelihoods of one-place errors and two-place errors using pure probability theory, you just can't get started. You have to use a statistical model - e.g. assume that true candidate merits are Gaussianly distributed, that estimated merits are contaminated by Gaussian noise, and that a voter ranks candidates in decreasing order of estimated merit. If you do this, you can find the relative chances of one-place and two-place errors (subject to suitable distributional assumptions).
   Having done this, you can say: "I will give candidate A one point more than candidate B whenever he comes one place higher in a ballot, and x points more whenever he comes two places higher"; x=1 gives the Condorcet criterion, x=2 gives the Borda count. By my calculation, the optimal x is almost exactly 2 (and almost independent of distributional assumptions), and the Borda count is therefore almost optimal under a jury model with 3 candidates. Condorcet thought his jury theorem showed his own criterion to be optimal in the same case. 
   So I would say that you have to recast your question: let x and y be the candidate merits, distributed as N(0, 1); let the noise be distributed as N(0, σ²); let σ² be governed by some fairly diffuse prior (maybe 1/σ²). What is the probability that x > y when each of 100 noisy estimates of x is larger than the corresponding noisy estimate of y?
   If I have time, I might attempt the calculation.
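A Monte Carlo sketch of the model described above (under the stated assumptions: merits drawn from N(0,1), additive Gaussian noise, voter ranks by noisy estimate; three candidates) can at least confirm that two-place reversals are much rarer than one-place reversals. All names here are illustrative.

```python
import random

# Estimate, per ballot, how often the voter reverses a pair of candidates
# that is one place apart vs two places apart in the true merit order.

def simulate(trials=100_000, sigma=1.0, seed=1):
    rng = random.Random(seed)
    one_place = two_place = 0
    for _ in range(trials):
        merits = [rng.gauss(0, 1) for _ in range(3)]            # true merits
        noisy = [m + rng.gauss(0, sigma) for m in merits]       # voter's estimates
        true_order = sorted(range(3), key=lambda i: -merits[i])  # best first
        est_rank = {c: r for r, c in
                    enumerate(sorted(range(3), key=lambda i: -noisy[i]))}
        # one-place pairs in the true order: (0,1) and (1,2); two-place: (0,2)
        for a, b, gap in ((0, 1, "one"), (1, 2, "one"), (0, 2, "two")):
            if est_rank[true_order[a]] > est_rank[true_order[b]]:
                if gap == "one":
                    one_place += 1
                else:
                    two_place += 1
    # two one-place pairs per ballot, one two-place pair
    return one_place / (2 * trials), two_place / trials

p1, p2 = simulate()
print(p1, p2)  # two-place reversals come out much rarer than one-place ones
```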
      CJC

> On 04/09/2022 04:43, Forest Simmons wrote:
> Actually, max likelihood analysis would say that the A>B defeat was less likely to be reversed because max likelihood estimation would estimate Prob(B>A)=0.
> 
> But Bayesian estimation would start with prior probabilities Prob(B>A) = Prob(A>B) > 0 < Prob(A=B), and then adjust, using Bayes' law, to posterior probabilities in the order Prob(A=B) > Prob(A>B) > Prob(B>A) > 0
> 
> and Prob(B>A) > Prob(G>F), so according to Bayes, the F>G defeat is stronger (less likely to be reversed in a parallel universe) than the A>B defeat.
> 
> Somebody should do the precise Bayesian calculations to verify (or refute) my statistical intuition.
> 
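A hedged sketch of the Bayesian calculation suggested above, simplified to a binary model (dropping the Prob(A=B) tie case): with a uniform Beta(1, 1) prior on p = Prob(a random voter prefers X to Y), observing x votes for X over Y and y against gives a Beta(1+x, 1+y) posterior, and the chance the defeat would reverse "in a parallel universe" can be read as the posterior mass below 1/2, estimated here by sampling.

```python
import random

# Posterior probability that a pairwise defeat would reverse, under a
# uniform Beta(1,1) prior on the underlying preference probability p.

def reversal_probability(x, y, samples=200_000, seed=1):
    """Estimate Pr(p < 1/2 | x votes for X>Y, y votes for Y>X)."""
    rng = random.Random(seed)
    hits = sum(rng.betavariate(1 + x, 1 + y) < 0.5 for _ in range(samples))
    return hits / samples

# A 60-40 defeat is far less likely to reverse than a 51-49 defeat.
print(reversal_probability(60, 40))  # small
print(reversal_probability(51, 49))  # much larger, though still below 1/2
```

Exact values would use the regularized incomplete beta function instead of sampling; the Monte Carlo form keeps the sketch to the standard library.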

----
Election-Methods mailing list - see https://electorama.com/em for list info