[EM] High Resolution Inferred Approval version of ASM

C.Benham cbenham at adam.com.au
Sun Jun 23 09:50:05 PDT 2019


On 22/06/2019 9:15 am, John wrote:

> The great purported benefit of score systems is that more voters can 
> rank A over B, yet due to the scores, score voting can elect B:
>
John,

Is every method that uses score ballots a "score system"? My suggested 
VIASME method meets the Smith criterion and therefore avoids the 
"benefit" you refer to.

> Wrapping it in a better system and using that information to make 
> auxiliary decisions is still incorporating bad data.  Bad data is 
> worse than no data.

As it relates to VIASME, I'm afraid you've lost me. A few years ago 
James Green-Armytage proposed a Condorcet method that asked the voters 
both to rank the candidates (with equal ranking and truncation allowed) 
and to give each of them a high-resolution score, with the ranking and 
the scoring required to be consistent with each other. If there was a 
Condorcet winner, the scoring was ignored.

Well, it seems to me that the ranking is a redundant extra chore for the 
voter, because it can be inferred from the scoring (a minimal sketch of 
that inference is below). That is what I propose for VIASME. The 
Green-Armytage method was called Cardinal-Weighted Pairwise and was 
designed to try to resist the Burial strategy. He had a simpler-ballot 
version called Approval-Weighted Pairwise. One of the reasons I don't 
much like it is that it can elect a candidate that is pairwise-beaten by 
a more approved candidate.

https://electowiki.org/wiki/Cardinal_pairwise
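
To make that inference concrete, here is a minimal Python sketch (the 
function name and the 0-99 scale are only illustrative, not part of any 
official spec): candidates are sorted by score, equal scores become equal 
ranking, and an unscored candidate defaults to zero, i.e. truncation.

    def infer_ranking(scores, candidates):
        # scores: dict candidate -> numeric score; unscored candidates default to 0.
        # Returns groups of equally-ranked candidates, most preferred first.
        by_score = {}
        for c in candidates:
            by_score.setdefault(scores.get(c, 0), []).append(c)
        return [group for _, group in sorted(by_score.items(), reverse=True)]

    # A ballot scored A:99, B:70, D:70 (C left blank) reads as A > B=D > C.
    print(infer_ranking({'A': 99, 'B': 70, 'D': 70}, ['A', 'B', 'C', 'D']))
    # [['A'], ['B', 'D'], ['C']]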

On 22/06/2019 8:57 am, Felix Sargent wrote:

> That's not even going into what happens when a voter ranks an ordinal 
> ballot strategically, placing "guaranteed losers" in 2nd and 3rd 
> places in order to improve the chances of their first-choice candidate 
> (in IRV at least). 

Felix, the Burial strategy you describe doesn't work in IRV, because your 
2nd and 3rd place preferences won't be counted while your first-choice 
candidate is still alive.
It is methods that fail Later-no-Help (such as all the Condorcet 
methods) that are vulnerable to that, some more than others.
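
To see why, here is a toy IRV count in Python. It is only my own sketch, 
not anyone's official rules, but it shows the relevant point: in each 
round a ballot is tallied only for its highest-ranked surviving candidate, 
so its later preferences are never examined while that candidate is alive.

    from collections import Counter

    def irv_winner(ballots):
        # ballots: list of rankings, each a list of candidates, most preferred first
        remaining = {c for b in ballots for c in b}
        while True:
            # Each ballot counts only for its top *remaining* choice; lower
            # preferences are ignored while that choice is still in the count.
            tally = Counter()
            for b in ballots:
                for c in b:
                    if c in remaining:
                        tally[c] += 1
                        break
            for c in remaining:
                tally.setdefault(c, 0)
            leader, votes = tally.most_common(1)[0]
            if 2 * votes > sum(tally.values()) or len(remaining) == 1:
                return leader
            remaining.discard(min(tally, key=tally.get))

    # The A faction ranks the no-hopers X and Y 2nd and 3rd, but those
    # preferences are never looked at because A is never eliminated.
    ballots = 4 * [['A', 'X', 'Y']] + 3 * [['B', 'A']] + 2 * [['C', 'A']]
    print(irv_winner(ballots))   # 'A'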

Chris Benham

On 22/06/2019 9:15 am, John wrote:
> The error comes when you make inferences.
>
> The great purported benefit of score systems is that more voters can 
> rank A over B, yet due to the scores, score voting can elect B:
>
> A:1.0 B:0.9 C:0.1
> C:1.0 A:0.5 B:0.4
> B:1.0 A:0.2 C:0.1
>
> A=1.7, B=2.3, C=1.2
>
> B defeats A on total score, despite A pairwise-defeating both B and C.
>
> And if the first voter had scored B at 0.2 instead of 0.9, A would win.
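
(For anyone checking John's arithmetic, here is a throwaway Python tally 
of those three ballots as I read them; it sums the scores and counts 
pairwise wins from the implied rankings.)

    from itertools import combinations

    # The three score ballots from the example above.
    ballots = [{'A': 1.0, 'B': 0.9, 'C': 0.1},
               {'C': 1.0, 'A': 0.5, 'B': 0.4},
               {'B': 1.0, 'A': 0.2, 'C': 0.1}]

    totals = {c: round(sum(b[c] for b in ballots), 2) for c in 'ABC'}
    print(totals)   # {'A': 1.7, 'B': 2.3, 'C': 1.2} -- the score totals elect B

    for x, y in combinations('ABC', 2):
        wins = sum(b[x] > b[y] for b in ballots)
        print(x, "beats", y, "on", wins, "of", len(ballots), "ballots")
    # A beats both B and C pairwise (2 of 3), so A is the Condorcet winner.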
>
> Whenever a system attempts to use score or its low-resolution Approval 
> variant, it is relying on this information.
>
> So why does this matter?
>
> The voters are 100% certain and precise that these are their votes:
>
> A>B>C
> C>A>B
> B>A>C
>
> We know A defeats B, A defeats C, and B defeats C.  A is the Condorcet 
> winner.
>
> For score votes, 1.0 is always 1.0.  It's the first rank, the 
> measure.  This is of course another source of information distortion 
> in cardinal systems: how is the information meaningful as a comparison 
> between two voters?
>
> How do you know 10 voters scoring A at 1.0 aren't half as 
> invested in A as 6 voters scoring B at 1.0, making the totals really A=5, B=6?
>
> Ten of us prefer strawberry to peanut butter.
>
> Six of us WILL DIE IF YOU OPEN A JAR OF PEANUT BUTTER HERE.
>
> Score systems claim to represent this and capture this information, 
> but they can't.
>
> (Notice I used the negative: that 1.0 vote is an expression of the 
> damage of their 0.0-scored alternative.)
>
> Even setting that aside, however, you have a problem where an 
> individual might put down 0.7 or 0.9 or 0.5 for the SAME candidate in 
> the SAME election, solely based on how bad they are at creating a 
> cardinal comparison.  Humans are universally bad at cardinal comparison.
>
> So now you can actually elect A, B, or C based on how well-rested 
> people are, how hungry they are, or anything else that impacts their 
> mood and thus the sharpness or softness by which they critically 
> compare candidates.
>
> It's a sort of random number generator.
>
> Wrapping it in a better system and using that information to make 
> auxiliary decisions is still incorporating bad data.  Bad data is 
> worse than no data.
>
> On Fri, Jun 21, 2019, 7:27 PM Felix Sargent <felix.sargent at gmail.com> wrote:
>
>     I don't know how you can think that blurrier data would end up
>     with a more precise result.
>     No matter how you cut it, if you rank ABCD then it translates into
>     a score of
>     A: 1.0
>     B: 0.75
>     C: 0.5
>     D: 0.25
>
>     There's no way of describing differences between candidates beyond
>     a straight line between first place and last place.
>     Even if the voter is imprecise about the difference between A and B,
>     they will never make the error of rating B above A, whereas the
>     error between a voter's actual preferences and the preferences
>     recorded on an ordinal ballot can be massive. Consider: I like A and
>     B but HATE C. The ranking A>B>C does not tell you that.
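
(Felix's straight-line conversion, spelled out as a quick Python sketch; 
the helper name is mine, purely for illustration.)

    def rank_to_scores(ranking):
        # Evenly spaced scores from 1.0 down to 1/n -- the only spacing an
        # ordinal ballot lets you assume.
        n = len(ranking)
        return {c: (n - i) / n for i, c in enumerate(ranking)}

    print(rank_to_scores(['A', 'B', 'C', 'D']))
    # {'A': 1.0, 'B': 0.75, 'C': 0.5, 'D': 0.25}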
>     That's not even going into what happens when a voter ranks an
>     ordinal ballot strategically, placing "guaranteed losers" in 2nd
>     and 3rd places in order to improve the chances of their first-choice
>     candidate (in IRV at least).
>
>     Your analysis depends on the question of how intelligent you
>     believe the average voter to be.
>     If voters can use Amazon and Yelp star ratings, they can do score
>     voting.
>
>     Felix Sargent <https://felixsargent.com>
>
>
>
>     On Fri, Jun 21, 2019 at 2:14 PM John <john.r.moser at gmail.com> wrote:
>
>         Cardinal voting collects higher-resolution data, but not
>         necessarily precise data.
>
>         Let's say you score candidates:
>
>         A: 1.0
>         B: 0.5
>         C: 0.25
>         D: 0.1
>
>         In reality, B is 90% as favored as A. C is 70% as favored as
>         B.  The real numbers would be:
>
>         A: 1.0
>         B: 0.9
>         C: 0.63
>         D: etc.
>
>         How would this happen?
>
>         Cardinal: I approve of B 90% as much as A.
>
>         Natural and honest: I prefer A to win, and I am not just as
>         happy with B winning, or close to it.  I feel maybe half as
>         good about that?  B is between C and D and I don't like C, but
>         I like D less.
>
>         Strategic: even voting 0.5 for B means possibly helping B beat
>         A, but what if C wins...
>
>         The strategic nightmare is inherent to score and approval
>         systems.  When approvals aren't used to elect but only for
>         data, people are not naturally inclined to analyze a score
>         representing their actual approval.
>
>         Why?
>
>         Because people decide by simulation. Simulation of ordinal
>         preference is easy: I like A over B.  Even then, sometimes you
>         can't seem to decide who is better.
>
>         Working out precisely how much I approve of A versus B is
>         harder.  It takes a lot of effort and the basic simulation
>         approach responds heavily to how good you feel about A losing
>         to B, not about how much B satisfies you on a scale of 0 to A.
>
>         Score and approval voting source a high-error, low-confidence
>         sample.  It's like recording climate data by licking your
>         finger and holding it in the wind each day, then writing down
>         what you think is the temperature.  Someone will say, "it's
>         more data than warmer/colder trends!" while ignoring that you
>         are not mercury in a graduated cylinder.
>
>
>         On Fri, Jun 21, 2019, 3:10 PM Felix Sargent <felix.sargent at gmail.com> wrote:
>
>             Valuation can be ordinal, in that you can know that 3 is
>             more than 2.
>             There are two questions before us: Which voting method
>             collects more data? Which tabulation method picks the best
>             winner from that data?
>
>             Which voting method collects more data?
>             Cardinal voting collects higher resolution data than
>             ordinal voting. Consider this thought experiment. If I
>             give you a rating of A:5 B:2 C:1 D:3 E:5 F:2 you should
>             create an ordered list from that -- AEDFBC. If I gave you
>             AEDFBC you couldn't convert that back into its cardinal data.
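
(That one-way conversion is easy to see in a couple of lines of Python: 
sorting recovers an order, but nothing recovers the gaps.)

    ratings = {'A': 5, 'B': 2, 'C': 1, 'D': 3, 'E': 5, 'F': 2}

    # Cardinal -> ordinal is a one-liner...
    order = sorted(ratings, key=ratings.get, reverse=True)
    print(''.join(order))   # 'AEDBFC' -- Felix's AEDFBC differs only in the B/F tie

    # ...but the ordinal list alone cannot tell you whether A:5 E:5 D:3 or
    # A:9 E:8 D:2 produced it; the relative gaps are gone.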
>
>             Which tabulation picks a better winner from the data?
>             Both Score and Approval voting pick the candidate with the
>             highest total.
>             Summing ordinal data, on the other hand, is very
>             complicated, since you have to avoid loops. Methods like
>             Condorcet or IRV have been proposed to deal with those, but
>             ultimately they're hacks for dealing with incomplete
>             information.
>
>             Felix Sargent <https://felixsargent.com>
>
>
>
>             On Fri, Jun 21, 2019 at 5:23 AM John <john.r.moser at gmail.com> wrote:
>
>                 Voters can't readily provide meaningful information as
>                 score votes. It's highly strategic, and the comparison
>                 of cardinal values is not natural.
>
>                 All valuation is ordinal. Prices are based on cost,
>                 but what people WILL pay, given no option to pay less,
>                 is based on ordinal comparison.
>
>                 Is X worth 2 Y?
>
>                 For the $1,000 iPhone I could have a OnePlus 6t and a
>                 Chromebook. The 6t...I can get a cheaper smartphone,
>                 but I prefer the 6t to that phone plus whatever else I
>                 buy.
>
>                 I have a higher-paying job, so each dollar is worth
>                 fewer hours, so the ordinal value of a dollar to me is
>                 lower. $600 of my dollars is fewer hours than $600 of
>                 minimum-wage dollars.  I have access to my
>                 most-preferred purchases and can buy way down into my
>                 less-preferred purchases.
>
>                 Information about this is difficult to pin down per
>                 voter.  Prices in the stock market are set by a constant,
>                 public auction among millions of buyers and sellers.
>                 A single buyer can hardly price one stock against
>                 another, and instead prices against what they think
>                 their gains will be relative to the current price.
>
>                 When pricing candidates, you'll see something a lot like
>                 Mohs hardness: 2 is 200, 3 is 500, 4 is 1,500; but we
>                 label things that are 250 or 450 as 2.5, and likewise
>                 anything between 500 and 1,500 as 3.5. Being between X
>                 and Y is always, most intuitively, read as exactly
>                 HALFWAY between X and Y.
>
>                 The rated system sucks even before you factor in
>                 strategic concerns (which only matter if actually
>                 using a score-driven method).
>
>                 Approval is just low-resolution (1 bit) score voting.
>
>                 On Fri, Jun 21, 2019, 12:01 AM C.Benham <cbenham at adam.com.au> wrote:
>
>                     Forest,
>
>                     With paper-and-pencil ballots and the voters only
>                     writing in their numerical scores, it probably
>                     isn't very practical for the Australian Electoral
>                     Commission's hand vote-counters.
>
>                     But if it isn't compulsory to mark each candidate
>                     and the default score is zero, I'm sure the voters
>                     could quickly adapt.
>
>                     In the US I gather that there is at least one
>                     reform proposal to use this type of ballot. One of
>                     these, "Score Voting" aka "Range Voting", proposes
>                     to just use Average Ratings, with (I gather) the
>                     default score being "no opinion" rather than zero,
>                     and some tweak to prevent an unknown candidate
>                     from winning.
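
(A bare-bones sketch of that Average Ratings rule: "no opinion" ballots 
are simply skipped for a candidate, and the anti-unknown-candidate tweak 
is stood in for by a made-up minimum-ballots quorum, since the real 
proposals differ on the details.)

    def average_ratings(ballots, candidates, min_ballots=1):
        # ballots: list of dicts candidate -> score; a missing key means "no opinion".
        # min_ballots is an illustrative quorum knob, not any proposal's actual rule.
        results = {}
        for c in candidates:
            scores = [b[c] for b in ballots if c in b]
            results[c] = sum(scores) / len(scores) if len(scores) >= min_ballots else None
        return results

    ballots = [{'A': 9, 'B': 4}, {'A': 7}, {'B': 8, 'C': 9}]
    print(average_ratings(ballots, ['A', 'B', 'C'], min_ballots=2))
    # {'A': 8.0, 'B': 6.0, 'C': None} -- C was scored on too few ballots to count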
>
>                     So it struck me that if we can collect such a
>                     large amount of detailed information from the
>                     voters then we could do a lot more with it, and if we
>                     want something that meets the Condorcet criterion
>                     this is my suggestion.
>
>                     Chris Benham
>
>                     https://rangevoting.org/
>
>>                     *How score voting works:*
>>
>>                      1. Each vote
>>                         <https://rangevoting.org/MeaningOfVote.html>
>>                         consists of a numerical score within some range
>>                         (say 0 to 99 <https://rangevoting.org/Why99.html>)
>>                         for each candidate. Simpler is 0 to 9
>>                         ("single digit score voting").
>>
>
>                     On 21/06/2019 5:33 am, Forest Simmons wrote:
>>                     Chris, I like it, especially the part about naive
>>                     voters voting sincerely being at no appreciable
>>                     disadvantage while resisting burial and complying
>>                     with the CD criterion.
>>
>>                     From your experience in Australia where full
>>                     rankings are required (as I understand it) what
>>                     do you think about the practicality of rating on
>>                     a scale of zero to 99, as compared with ranking a
>>                     long list of candidates?  Is it a big obstacle?
>
>
>                 ----
>                 Election-Methods mailing list - see
>                 https://electorama.com/em for list info
>



