[EM] Scoring (was Re: OpenSTV 2.1.0 released)

Juho Laatu juho4880 at yahoo.co.uk
Tue Sep 18 23:46:06 PDT 2012


On 18.9.2012, at 18.03, Kristofer Munsterhjelm wrote:

> On 09/16/2012 02:35 PM, Juho Laatu wrote:
>> On 16.9.2012, at 9.57, Kristofer Munsterhjelm wrote:
>> 
> 
>>> 
>>> (More precisely, the relative scores (number of plumpers required)
>>> become terms of type score_x - score_(x+1), which, along with SUM
>>> x=1..n score_x (just the number of voters), can be used to solve
>>> for the unknowns score_1...score_n. These scores are then
>>> normalized on 0..1.)
>>> 
>>> It seems to work, but I'm not using it outside of the fitness
>>> function because I have no assurance that, say, even for a monotone
>>> method, raising A won't decrease A's score relative to the others.
>>> It might be the case that A's score will decrease even if A's rank
>>> doesn't change. Obviously, it won't work for methods that fail
>>> mono-add-plump.
>> 
>> What should "candidate's score" indicate in single-winner methods? In
>> single-winner methods the ranking of other candidates than the winner
>> is voluntary. You could in principle pick any measure that you want
>> ("distance to victory" or "quality of the candidate" or something
>> else). But of course most methods do provide also a ranking as a
>> byproduct (in addition to naming the winner). That ranking tends to
>> follow the same philosophy as the philosophy in selecting the winner.
>> As already noted, the mono-add-plump philosophy is close to the
>> minmax(margins) philosophy, also with respect to ranking the other
>> candidates.
> 
> What should "candidate's score" indicate? Insofar as the method's winner is the one the method considers best according to the input given, and the social ordering is a list of alternatives in order of suitability (according to the logic of the method), a score should be a finer gradation of the social ordering. That is, the winner tells you which candidate is the best choice, the social ordering tells you which candidates are closer to being winners, and the rating or score tells you by how much.

Here "suitability" is close to what I called "quality of the candidate".

I note that we must have a suitable interpretation of "social ordering" because of the well-known paradoxes of social ordering. Maybe we should talk about "scoring" the candidates (transitively, numerically). That would make it "social scoring" or something like that.

I also note that if we talk about a "list of alternatives" in the sense that we want to know who should be elected in case the first winner cannot be elected, then there may be different interpretations. We may want to know, e.g., who should be elected if the winner had not participated in the election, or who should be elected in the case that the winner participates but cannot be elected (i.e., the question is whether we measure losses to the winner).

One more note: the term "closer to being winners" refers to being close in quality / scores. It does not refer to being close to winning, e.g., in the number of voters that could change the result. (I earlier used the term "distance to victory" for that.)
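Kristofer's score-recovery idea quoted at the top (known differences score_x - score_(x+1) plus the known sum of all scores) can be solved in closed form. A minimal sketch, assuming nothing about the underlying method; the function name and the example numbers are mine:

```python
def recover_scores(diffs, total):
    """Recover scores s_1..s_n from consecutive differences
    d_i = s_i - s_(i+1) and the known sum of all scores,
    then normalize onto [0, 1]."""
    n = len(diffs) + 1
    # s_i = s_n + d_i + ... + d_(n-1), so summing over i gives
    # total = n * s_n + sum_j (j * d_j); solve for s_n first.
    s_n = (total - sum(j * d for j, d in enumerate(diffs, start=1))) / n
    scores = [s_n]
    for d in reversed(diffs):          # rebuild s_(n-1), ..., s_1
        scores.append(scores[-1] + d)
    scores.reverse()
    lo, hi = min(scores), max(scores)
    if hi == lo:                       # all candidates tied
        return [0.0] * n
    return [(s - lo) / (hi - lo) for s in scores]
```

For example, with differences [2, 1] and a total of 12 the raw scores are 17/3, 11/3, 8/3, which normalize to 1, 1/3, 0.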

> 
> If the method aims to satisfy certain criteria while finding good winners, it should do so with respect to finding the winner, and also with respect to the ranking and the score. A method that is monotone should have scores that respond monotonically to the raising of candidates, too.
> 
>> I note that some methods like Kemeny seem to produce the winner as a
>> byproduct of finding the optimal ranking. Also, the expression "breaking a
>> loop" refers to an interest in making the potentially cyclic societal
>> preferences linear by force. In principle that is of course
>> unnecessary. The opinions are cyclic, and could be left as they are.
>> That does not, however, rule out the option of giving the candidates
>> scores that indicate some order of preference (one that may not be the
>> preference order of the society).
> 
> I think most methods can be made to produce a social ranking. Some methods do this on their own, like Kemeny. For others, you just extend the logic by which the method in question determines the winner. For instance, disregarding ties, in Schulze the winner is the candidate whom nobody indirectly beats. The second-place finisher would then be the candidate only indirectly beaten by the winner, and so on.

I guess in Schulze we can have the two options that I mentioned above: we can consider the ballots/matrix with or without the first winner when determining the second winner.

In real life these cases could be compared, e.g., to the situation where the winning candidate has died or has just decided to go into opposition instead of taking office. The ideal winner may be different depending on what kind of opposition he/she will have. This difference is obvious, e.g., in the case of a loop of three vs. a majority decision between two candidates.
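The "determine the second winner without the first winner" option can be sketched by recomputing the Schulze beatpaths on a reduced pairwise matrix. A hypothetical illustration, disregarding ties; the pairwise matrix in the example is invented:

```python
def schulze_ranking(d, cands):
    """Rank candidates by repeatedly taking the one whom nobody still
    in the race indirectly beats, then removing him/her from the
    pairwise matrix d and repeating.
    d[i][j] = number of voters preferring cands[i] to cands[j]."""
    if not cands:
        return []
    n = len(cands)
    # strongest-path (beatpath) strengths via a Floyd-Warshall pass
    p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if i != j and i != k and j != k:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    # winner: the candidate nobody indirectly beats
    w = next(i for i in range(n)
             if all(p[j][i] <= p[i][j] for j in range(n) if j != i))
    # drop the winner's row and column and recurse
    rest = [[d[i][j] for j in range(n) if j != w]
            for i in range(n) if i != w]
    return [cands[w]] + schulze_ranking(rest, cands[:w] + cands[w + 1:])
```

Removing the winner and recomputing implements the "winner did not participate" reading; the other reading (measuring losses to the winner) would instead keep his/her row and column in the matrix while marking him/her ineligible.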

> 
>>> 
>>> Turning rankings into ratings the "proper" way highly depends on
>>> the method in question, and can get very complex. Just look at this
>>> variant of Schulze: http://arxiv.org/abs/0912.2190 .
>> 
>> They seem to aim at respecting multiple criteria. Many such criteria
>> could maybe be used as a basis for scoring the candidates. Already
>> their first key criterion, the Condorcet-Smith principle, is in
>> conflict with the mono-add-plump score (there can be candidates with
>> a low mono-add-plump score outside the Smith set).
>> 
>> My favourite approach to scoring and picking the winner is not to
>> have a discrete set of criteria (that we try to respect, and violate
>> some other criteria when doing so) but to pick one philosophy that
>> determines who is the best winner, and also how good winners the
>> other candidates would be. The chosen philosophy determines also the
>> primary scoring approach, but does not exclude having also other
>> scorings for other purposes (e.g. if the "ease of winning" differs
>> from the "quality of the candidate").
> 
> If you do that, you get into a problem when comparing methods, however. Every method can be connected to an optimality measure that it optimizes. That measure might be simple or it might be very complex, but still, there's a relation between the method and something it attempts to optimize. Discussing methods could then easily end up at cross purposes, where one person says "but I think minmax is the obvious natural thing to optimize", another says "but I think mean score is the obvious natural thing to optimize", and nobody gets anywhere.

I think that might get us somewhere. For example, it makes sense to me to discuss what kind of winner we should have. Should the winner be one that is not hated by anyone? Or should the winner be one so strong that he/she can beat any other candidate that tries to oppose him/her (alone, without teaming up with other candidates)? Or maybe we should count the number of voters who can say "the winner would have lost to someone better".

After deciding who the best winner would be (this function may well be different in different elections), one could study whether there are strategic problems with the method that implements the ideal "score function". (I note that many of the well-known criteria talk about strategic vulnerabilities, not so often about which candidate is good.) If there are too many problems, we may be forced to use some other method than the one that gives the best outcome with sincere votes. So in this second phase I already see a compromise between "who should be elected" and "how to fix the strategic problems without making the method too bad".
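The "worst loss to someone better" idea is essentially the minmax(margins) score: rate each candidate by the margin of the worst pairwise defeat against him/her. A small illustrative sketch; the function name and the example matrix are invented:

```python
def minmax_margin_scores(d, cands):
    """minmax(margins) quality score: each candidate gets the negated
    margin of his/her worst pairwise defeat, so the best candidate is
    the one whose worst loss is smallest.
    d[i][j] = number of voters preferring cands[i] to cands[j]."""
    n = len(cands)
    return {cands[i]: -max(d[j][i] - d[i][j] for j in range(n) if j != i)
            for i in range(n)}
```

Unlike an on/off criterion, this gives every candidate a score, so it ranks all the non-winners too.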

> 
> At least with criteria, we have some way of comparing methods. We can say that this method behaves weirdly in that if some people increase the ranking of a candidate, the candidate might lose, whereas that method does not; or that this method can deprive candidates of a win if completely alike candidates appear, whereas that method does not.

For me the "ideal winner" / "ideal score function" and "small enough strategy problems" are the targets. Different criteria are technical tools that we can use for exact discussion. Different methods may meet different criteria more or less well. I prefer not to talk about on/off (strategy-related) criteria but rather about meeting them as well as needed.

For example, FBC is an important criterion, but I can accept methods that do not meet it yet are good enough in the sense that they allow voters to always rank their favourite first as a safe enough rule of thumb. I don't like methods that fail FBC in the sense that voters often have to betray their favourite, or have to decide whether to betray or not based on some complex analysis. In the same way many other criteria can be met "well enough".

> 
> Or perhaps it's more appropriate to say that if we want to compare methods by some optimization function or philosophy, we should have some way of anchoring that in reality. One may say "I think Borda count is the obvious natural thing to optimize", but if we could somehow find out how good the candidates elected by optimizing for Borda would be, that would let us compare the philosophies. Yet to do so, we'd either have to have lots of counterfactual elections or a very good idea of what kind of candidates exist and how they'd be ranked/rated so that it could be simulated, because we can't easily determine how good society would be "if" we had picked candidate X instead of Y.

It is hard for me to find a choice/election where a society should follow the Borda philosophy. Certainly such choices exist, but I think Borda is not a "general-purpose optimization criterion for typical decisions". On the other hand, it is easy to imagine situations where, e.g., Range would yield ideal winners. Maybe in the Olympics, Range is a good philosophy for many events. Many election-method experts think that the Condorcet criterion is a good approach to making decisions in the typically very competitive political environment. Certainly we can also fine-tune that discussion and discuss which candidate is the ideal winner in the case that there is no Condorcet winner.

And if we also take strategies into account, the answer could be at some balance point where we don't always pick the best winner, but we come close, and add some resistance to strategies.

> (Well, we might in very limited situations: for example, one could take the pairwise matrix for a chess tournament once it's halfway through and use that to find the winner, then determine how often each election method gets it right. However, it's not obvious if "accuracy at predicting chess champions" is related to "being able to pick good presidents", say.)

In elections we in some sense have complete information. I have seen many discussions where people also make guesses about what the voters who didn't vote might think. Often this appears in discussions about truncated votes. Did the voter truncate because he thought that all the other candidates are tied last, or was he just lazy? Or maybe he thought that not ranking some candidates means they are unacceptable. I guess the main rule is that we can assume the information we have is complete (or that we can ignore those who didn't give us their opinion).
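The different readings of a truncated ballot lead to different pairwise matrices. A hypothetical sketch contrasting "others tied last" with "no opinion given"; the function name and flag are mine:

```python
from itertools import combinations

def pairwise_matrix(ballots, cands, unranked_tied_last=True):
    """Build d[i][j] = number of voters preferring cands[i] to cands[j].
    Each ballot lists some candidates in preference order and may be
    truncated.  Under the 'tied last' reading, every ranked candidate
    is counted as beating every unranked one; under the 'no opinion'
    reading, pairs involving an unranked candidate contribute nothing."""
    idx = {c: i for i, c in enumerate(cands)}
    d = [[0] * len(cands) for _ in cands]
    for b in ballots:
        for hi, lo in combinations(b, 2):        # ranked vs. ranked
            d[idx[hi]][idx[lo]] += 1
        if unranked_tied_last:                   # ranked vs. unranked
            for r in b:
                for u in set(cands) - set(b):
                    d[idx[r]][idx[u]] += 1
    return d
```

A ballot "A > B" with C unranked thus counts as A>C and B>C under the first reading, and says nothing about C under the second.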

Juho

> 


