[EM] Conjecture: PR gauge should only depend on the distribution of the ratings averages of the cardinal ratings ballots

Forest Simmons fsimmons at pcc.edu
Sat Dec 3 13:19:51 PST 2016


Continuing ...

If my conjecture is right (and I think it is), there are profound
consequences:

Perhaps the most surprising one is that once you have the distribution F,
you no longer need to know the size k of the candidate slate K to finish
gauging the representativeness of K.

Another amazing consequence is that a candidate set K can be treated
exactly like a single candidate, and a single candidate (even without
multiple personality disorder) can be treated like a set of candidates.

A common mistake progressives make when calling for election reform is to
suggest electing the president by Proportional Representation, as though we
were selecting a committee instead of an individual.

Well, each presidential candidate has a slate of attributes, so think of
electing a slate of attributes.  By a process of weighted averaging, each
voter comes up with a score between zero and 100% for each slate of
attributes, i.e. for each candidate.  Then each candidate (i.e. each slate
of attributes) has a distribution F of attribute averages, i.e. F is hir
distribution of range scores.   Since F is sufficient for judging the
representativeness of the candidate's slate of attributes, F is sufficient
for judging the representativeness of the candidate hirself.

So each PR method based on this idea gives us a new single winner method
based on range scores.  The slates of attributes are external to the
method.  Nobody needs to know if the voters are coming up with their
ratings this way or not.

Since these methods are (in general) Pareto compliant, they will always
elect the Approval winner when the ballots are all voted approval style.

How cool is that?
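To make that single-winner reading concrete, here is a minimal Python
sketch; the names candidate_gauge and scores_for are mine, and the gauge
used is just the example integral from the quoted message below:

    import numpy as np

    def candidate_gauge(scores):
        # A lone candidate is a one-element slate, so F is simply that
        # candidate's distribution of normalized range scores.  Evaluate the
        # example gauge G = Integral (1 - F(x))^(1/(1-x)) dx by a crude
        # left-endpoint Riemann sum; the grid and the cutoff short of x = 1
        # are arbitrary implementation choices.
        s = np.asarray(scores, dtype=float)            # scores in [0, 1]
        xs = np.linspace(0.0, 1.0, 1001)[:-1]          # stop short of x = 1
        F = np.array([np.mean(s <= x) for x in xs])    # empirical F(x)
        dx = xs[1] - xs[0]
        return float(np.sum((1.0 - F) ** (1.0 / (1.0 - xs))) * dx)

    # elect the candidate whose distribution of scores gauges best, e.g.:
    # winner = max(candidates, key=lambda c: candidate_gauge(scores_for[c]))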

On Sat, Dec 3, 2016 at 12:32 PM, Forest Simmons <fsimmons at pcc.edu> wrote:

> It now seems to me that all of the information needed in gauging the
> suitability of a set of candidates is contained in the distribution of
> ballot averages.
>
> Let beta be a multi-set of range ballots that are normalized so that max
> range is 100%.
>
> Let K be a subset of the candidates, and let beta' be the multi-set of
> ballots from beta restricted to (but not re-normalized to) the candidates
> in K.
>
> Let alpha be the multi-set of averages of ballot ratings from beta', so
> that for each B in beta' there is an average A in alpha obtained by adding
> up the ratings on ballot B and dividing by how many there are, namely the
> cardinality of K.
>
> Let F be the distribution function for the multi-set alpha.  In other
> words, for x between zero and one, F(x) is the percentage of numbers
> in alpha that are less than or equal to x.
>
> So loosely speaking, F(x) is the percentage of ballots in beta' whose
> averages are no greater than x.
>
> I believe that there is enough information in this function F to gauge the
> relative representativeness of the set K.
>
> For example, one possible gauge is
>
> G = Integral(over x from zero to 100%) of (1 - F(x))^(1/(1-x)) dx.
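A minimal Python sketch of this construction, assuming each ballot is a
dict mapping candidate to a rating normalized to [0, 1]; the name
slate_gauge and the discretization are implementation choices of mine, not
part of the definition:

    import numpy as np

    def slate_gauge(ballots, K):
        # alpha: each ballot's average rating over the slate K
        alpha = np.array([sum(b[c] for c in K) / len(K) for b in ballots])
        xs = np.linspace(0.0, 1.0, 1001)[:-1]             # stop short of x = 1
        F = np.array([np.mean(alpha <= x) for x in xs])   # F(x) = share of averages <= x
        # example gauge G = Integral (1 - F(x))^(1/(1-x)) dx, crude Riemann sum
        dx = xs[1] - xs[0]
        return float(np.sum((1.0 - F) ** (1.0 / (1.0 - xs))) * dx)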
>
> What is the optimum gauge in this context?
>
> Is there a better gauge outside of this context?
>
> I don't think so.  But I could be wrong.
>
> Forest
>
>
> On Thu, Nov 24, 2016 at 8:26 AM, Warren D Smith <warren.wds at gmail.com>
> wrote:
>
>> New(?) "philosophy" for optimizing proportional representation
>> =============Warren D. Smith, 24 Nov 2016=====================
>>
>> Here's an idea with a lot of possible fruit.
>>
>> Regard the parliament as a miniature model of the electorate.  But to
>> make this precise, regard it as a PREDICTIVE model, and use typical
>> statistical ideas to measure the quality of the predictive model, such
>> as entropy-based measures or (what is probably the most useful)
>> max-log-likelihood.
>> The result is a "quality (or penalty) function" for possible parliaments.
>>
>> Note: Any decent instantiation of this will automatically yield PR,
>> but not necessarily "strong PR" -- though you can force that too if
>> you design the model right.
>>
>> A problem with this philosophy is that it is not the whole picture.
>> We do not just want a good predictive model; we also
>> want more approval.  But we already have techniques aiming for the latter.
>> Maybe they could be added to the pot.  Hybridize them in somehow.
>>
>> There are many possible ways to skin the resulting cat.
>> You should probably invent some.
>>
>> Idea #1:
>> With each candidate, associate a "prototype" approval-ballot
>> (each such ballot consists of C bits, if it is a C-candidate election)
>> constrained by the 1-bit demands that each candidate's prototype ballot
>> must approve that candidate.
>>
>> Now, divvy up the V-voter electorate into chunks (each chunk has V/W
>> voters, where there are to be W winners, 0<W<C).  Associate exactly 1
>> chunk with each winner.  The penalty for each chunk is the sum, over
>> each of the C bit positions, of the log of the probability that a
>> random voter in that chunk differs in that bit from that winner's
>> prototype.  [To prevent log(0), adjoin 1 artificial approve-none and 1
>> approve-all "voter" to each chunk.]
>> The total penalty is the sum, over all chunks, of the chunk penalty.
>>
>> This penalty is to be minimized over all possible chunkings, all
>> possible W-winner parliaments, and all possible choices of the
>> prototype ballots.  (Note, the best possible chunking is findable
>> rapidly by a "greedy algorithm" given that the parliament and
>> prototypes are known.  But a big search still seems needed over the
>> latter two.)
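A minimal Python sketch of the chunk penalty described in Idea #1, with
approval ballots represented as lists of C bits; the function name is mine,
and the search over chunkings, parliaments, and prototypes is left out:

    import math

    def chunk_penalty(chunk, prototype):
        # chunk: list of approval ballots (each a list of C bits, 0 or 1)
        # prototype: the chunk's winner's prototype ballot (C bits)
        C = len(prototype)
        padded = chunk + [[0] * C, [1] * C]   # artificial approve-none / approve-all voters
        total = 0.0
        for j in range(C):
            # log of the probability that a random voter in the chunk
            # differs from the prototype in bit j
            differs = sum(1 for b in padded if b[j] != prototype[j])
            total += math.log(differs / len(padded))
        return total

    # total penalty of one proposed parliament = sum of its chunk penalties,
    # minimized over chunkings, parliaments, and prototype choices (not shown)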
>>
>> Critique #1:
>> As defined, a parliament's penalty is affected by what the ballots
>> said about nonwinning candidates.  That's good as far as our
>> modeling philosophy is concerned, but bad from other viewpoints like
>> monotonicity demands.  (We could redefine the penalty to ignore all
>> bits on all ballots that are for losing candidates, but it would then
>> still suffer monotonicity problems.)
>> Also, it should be better if a voter also approves some winner who is
>> not the winner for her chunk.  But as defined, it isn't.
>>
>> Idea #2:
>> Idea #1 above does not yield strong PR, only plain PR.  That can be
>> fixed as follows.  Add special parliamentarians to the picture.  The
>> special ones have no prototypes and no chunks.  They are simply
>> approved by various fractions F of each chunk, and log(F) is added to
>> the penalty function of that chunk.  The nonspecial MPs ignore the
>> ballot bits associated with the special ones.
>>
>> Then the whole minimization also needs to minimize the penalty over
>> the numbers of special and nonspecial MPs, and over which ones are
>> special.
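A literal, hypothetical transcription of the extra Idea #2 term as a Python
helper; the name is mine, the artificial voters are reused to keep the
fraction strictly between 0 and 1, and the sign of log(F) is kept exactly
as written above:

    import math

    def special_mp_term(approvals):
        # approvals: one 0/1 bit per real voter in the chunk, saying whether
        # that voter approves the special MP in question
        F = (sum(approvals) + 1) / (len(approvals) + 2)  # artificial voters keep 0 < F < 1
        return math.log(F)                               # added to that chunk's penalty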
>>
>> Idea #3:
>> The above ideas fail monotonicity.  This might be fixable as follows.
>> Instead of counting all bit-deviations from prototypes when assessing
>> penalties, merely count non-approvals that deviate from a prototype
>> "approve" bit.  I.e. it is still a predictive model, but it predicts
>> fewer bits in the ballots than it used to, ignoring the rest.  (Add a
>> bonus per bit predicted to reward models that try to predict more.)
>>
>> I think that might work to force monotonicity too.
>> If so, this is a new "holy grail" PR system.
>> But this whole philosophy has a lot of flexibility you can use to try
>> to accomplish more goals too, like maybe Toby's CPAI goal.  So it has
>> a lot of potential.
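A sketch of how the Idea #3 penalty might look in Python; restricting the
prediction to the prototype's "approve" bits follows the text, while the
per-bit bonus value and the way it enters (subtracted, so predicting more
bits is rewarded under minimization) are my guesses:

    import math

    def chunk_penalty_v3(chunk, prototype, bonus=1.0):
        # chunk: list of approval ballots (each a list of C bits, 0 or 1)
        # prototype: the chunk's winner's prototype ballot (C bits)
        C = len(prototype)
        padded = chunk + [[0] * C, [1] * C]      # artificial voters, as in Idea #1
        penalty, predicted = 0.0, 0
        for j in range(C):
            if prototype[j] == 1:                # only "approve" bits are predicted
                predicted += 1
                deviations = sum(1 for b in padded if b[j] == 0)
                penalty += math.log(deviations / len(padded))
        return penalty - bonus * predicted       # bonus per predicted bit (assumed sign)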
>>
>> I've been a bit vague, but I hope it is clear that
>> this twist causes the method to exhibit "strong PR."
>>
>> --
>> Warren D. Smith
>> http://RangeVoting.org  <-- add your endorsement (by clicking
>> "endorse" as 1st step)
>>
>
>