[EM] Better cardinal methods?

Richard Lung voting at ukscientists.com
Tue Oct 5 16:38:39 PDT 2021



Dear All,

But elections are a statistic -- a sum of contingent choices. Votes do not stand in a logical relation to each other such that there is some determinable right answer to who should be elected. Demanding an axiomatic deduction of a deterministic result is why the Impossibility theorem finds an impossibility -- it rests on a misconception of the nature of elections.

We can make probabilistic determinations of electing candidates, ranging from practically certain to completely indecisive. The most representative results depend on most representatively averaging the preference data. This avoids the usual social choice theory objections, which assume elections are analytic rather than synthetic.

Regards,
Richard Lung.



On 5 Oct 2021, at 10:46 pm, Kristofer Munsterhjelm <km_elmet at t-online.de> wrote:

On 05.10.2021 04:54, Andy Jennings wrote:
> Forest,
> 
> Thanks for your thoughts.
> 
> I agree that there are many good ways to get cardinal information from
> voters on a valid interval scale, assuming that we don't try to compare
> intervals between voters. It seems that the cardinal information must be
> meaningful and it's a shame to throw it away (though I agree that the
> method should be invariant to affine transformations).
> 
> Speaking of lottery methods, it's interesting that there is so much
> reluctance (including my gut reaction) to actually recommend a
> lottery-based method for use in real political elections. We want our
> elections to be deterministic, not influenced by chance in any way. But
> certainly there is chance in the process. Weather can influence turnout,
> as can traffic. There may be some voters that actually flip a coin in
> the voting booth. Cosmic rays have affected vote counts in the past
> (https://youtu.be/AaZ_RSt0KP8?t=44).
> Websites like FiveThirtyEight report on the whole election season with
> probabilities. And sitting there watching outcomes on election night can
> definitely feel like watching a game of chance and skill, like the Olympics.
> 
> So maybe we should just embrace it and try to convince people to use
> lottery methods.

If lotteries are on the table, then perhaps we should just dissolve the
electoral problem and go right to sortition. It certainly has appealing
corruption resistance properties :-)

But the problem with lotteries, I think, is that there's too much left to
chance. While every election leaves something to chance (as you've
correctly pointed out), it's usually not enough to destabilize the process.
On the other hand, if you have the simple random favorite lottery and 10%
of the voters vote for a dictator, then there's a 10% chance of getting a
dictatorship.

One way of considering single-winner methods, I think, is that they try
to find the best outcome under the constraint of zero entropy.
Proportional representation methods might also need to compromise to
satisfy their seat limits. Perhaps it would be possible to create a
tunable entropy method where we set a maximum allowed entropy (or
variance), and it attempts to find the best outcome lottery subject to
this constraint. Such a constraint would help with the reluctance, I
think, as long as the threshold is set sufficiently low that there's no
chance of extreme upsets (like a dictator winning).
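
To make that concrete, here's a rough Python sketch of one possible
reading (my construction; the idea above doesn't pin down a definition):
take the lottery method's ideal distribution q -- say, random favorite,
where q_i is candidate i's share of first preferences -- and sharpen it,
keeping its shape, until its Shannon entropy falls under the cap.
Tempering p_i proportional to q_i^beta with beta >= 1 lowers entropy
monotonically, so bisecting on beta finds the gentlest sharpening that
satisfies the constraint:

import math

def temper(q, beta):
    # Work in log space so large beta doesn't underflow to all-zero weights.
    logw = [beta * math.log(qi) if qi > 0 else float("-inf") for qi in q]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    z = sum(w)
    return [x / z for x in w]

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def entropy_capped(q, h_max, hi=1e6, iters=100):
    """Sharpen lottery q until its entropy (in nats) is at most h_max."""
    if entropy(q) <= h_max:
        return q                      # cap already satisfied: use q as-is
    lo = 1.0                          # assumes a unique favorite, so the
    for _ in range(iters):            # entropy can be driven arbitrarily low
        mid = (lo + hi) / 2
        if entropy(temper(q, mid)) > h_max:
            lo = mid                  # still too random: sharpen more
        else:
            hi = mid
    return temper(q, hi)

# E.g. the dictator scenario: 10% of first preferences, entropy capped
# at 0.5 nats. The dictator's winning chance shrinks well below 10%.
print(entropy_capped([0.55, 0.35, 0.10], 0.5))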

> Even if we trust the math that generates the lottery, maybe we just
> can't bring ourselves to believe that the final draw will not be rigged.
> I'm sure there are cryptographic methods for securely generating a
> random number between 0 and 1, but will the public trust them?

There was a thread about this on Reddit a while ago. There's a protocol
that goes like this:

Somehow pick a number of participants (they may be the whole electorate
or randomly chosen members of the public, or the representatives of the
previous term).

Each participant creates a sufficiently long random secret string and
computes its cryptographic hash.

The participants (or the election officials) publish these hashes.

Once they're all published, each participant reveals their random string.
If the strings match their respective hashes, they are combined, and the
result is used as a seed for a CSPRNG.

This protocol works by forcing the participants to commit to their input
strings before they have any knowledge of the other strings. Thus they
can't adapt the inputs to fix a particular output.
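
A toy version of this protocol in Python (SHA-256 for the commitments
and for the combining step is my choice; the thread doesn't specify
the primitives):

import hashlib
import secrets

def commit(secret: bytes) -> str:
    return hashlib.sha256(secret).hexdigest()

# Phase 1: each participant picks a long random secret and publishes
# only its hash.
secret_strings = [secrets.token_bytes(32) for _ in range(5)]
commitments = [commit(s) for s in secret_strings]

# Phase 2: once every commitment is public, the secrets are revealed
# and checked against the published hashes.
assert all(commit(s) == c for s, c in zip(secret_strings, commitments))

# Phase 3: combine the revealed secrets into a seed. As long as one
# participant's secret was honestly random, the seed is unpredictable.
seed = hashlib.sha256(b"".join(secret_strings)).digest()

# Illustrative draw only: a real deployment would feed the seed to a
# proper CSPRNG and also handle modulo bias.
num_candidates = 5
print(int.from_bytes(seed, "big") % num_candidates)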

Suppose there's a conspiracy to fix the output by bribing or coercing
the participants into selecting predetermined random strings. Then if
even a single member defects from the conspiracy, the chaotic nature of
the secure hash function makes the attack fail. If the combination
function is secure (e.g. a secure hash), then a conspiracy would in any
case have to use brute force to find a suitable set of strings. The
difficulty of this brute-forcing would depend on the entropy -- e.g.
packing a majority of a sortition assembly of 100 would be prohibitive,
but changing the outcome of an election lottery with a few candidates
would be easier.
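
To illustrate that last point, here's a toy grinding attack under the
worst-case assumption that the conspiracy controls every participant's
secret: it regrinds one string until the combined seed picks its
preferred winner. With c equally likely outcomes that takes about c
attempts on average, which is why a lottery over a handful of
candidates is much softer than a 100-seat assembly:

import hashlib
import secrets

def winner_from(secret_strings, num_candidates):
    seed = hashlib.sha256(b"".join(secret_strings)).digest()
    return int.from_bytes(seed, "big") % num_candidates

colluding = [secrets.token_bytes(32) for _ in range(4)]  # fixed inputs
target, tries = 3, 0
while True:
    tries += 1
    ground = secrets.token_bytes(32)   # keep regenerating the last secret
    if winner_from(colluding + [ground], 5) == target:
        break
print(f"rigged a 1-in-5 lottery after {tries} attempts")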

> Is the NIST randomness beacon trustworthy?
> 
> In a small enough election, you could agree to use randomness from the
> next block mined on the bitcoin blockchain, but that runs into problems
> at the scale of a national election.

There have been proposals to use public data as entropy sources, e.g.
this one for financial data:
https://www.usenix.org/legacy/event/evtwote10/tech/full_papers/Clark.pdf

If multiple countries were to provide signed public randomness beacons,
they could be used as part of the protocol above; every country would
have to collude to force the output. Numbers stations would
*almost* work, except they aren't signed.
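
A rough sketch of how signed beacons could feed the protocol (the keys
and beacon values below are local stand-ins, and Ed25519 via the Python
'cryptography' package is my choice of signature scheme): verify each
country's signature, then hash the concatenation, so the result is
unpredictable unless every beacon colludes.

import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def combine_beacons(beacons):
    """beacons: list of (public_key, signature, value) tuples."""
    for pubkey, sig, value in beacons:
        pubkey.verify(sig, value)  # raises InvalidSignature on forgery
    # Hash the concatenated values: unpredictable unless all collude.
    return hashlib.sha256(b"".join(v for _, _, v in beacons)).digest()

# Demo with freshly generated keys standing in for national beacons:
keys = [ed25519.Ed25519PrivateKey.generate() for _ in range(3)]
values = [bytes([i]) * 32 for i in range(3)]  # stand-in beacon outputs
beacons = [(k.public_key(), k.sign(v), v) for k, v in zip(keys, values)]
print(combine_beacons(beacons).hex())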

-km
----
Election-Methods mailing list - see https://electorama.com/em for list info

