[EM] Condorcet Jury Theorem

Kristofer Munsterhjelm km_elmet at lavabit.com
Sun Jul 3 04:51:25 PDT 2011


Greg Nisbet wrote:
> my premise, poorly articulated, but my premise nonetheless is that an 
> "adaptive" voting method that takes into account voters' previous 
> behavior may be able to outperform OMOV in the long run on average.

That sounds somewhat like a prediction market, only discrete instead of 
continuous. Say that each voter has some degree of trust associated with 
him, and that when he gets an answer right, his trust increases in 
proportion to how confident he was in that answer. Then you have "bets" 
(the confidence the voter puts in his answer being right) and payoffs 
(the increase in trust when he does get it right).
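
To make the analogy concrete, here is a rough sketch in Python of how 
such a bet/payoff scheme could be wired up. The multiplicative update 
and the 0.1 rate are placeholders of mine, not a worked-out mechanism:

def weighted_decision(votes, trust):
    """votes: voter -> (answer, confidence in [0, 1]);
    trust: voter -> current trust weight.
    Returns the answer with the greatest trust-weighted confidence."""
    totals = {}
    for voter, (answer, confidence) in votes.items():
        totals[answer] = totals.get(answer, 0.0) + trust[voter] * confidence
    return max(totals, key=totals.get)

def pay_off_bets(votes, trust, correct_answer, rate=0.1):
    """Trust rises with the confidence staked when the voter was right
    and falls with it when he was wrong (placeholder update rule)."""
    for voter, (answer, confidence) in votes.items():
        if answer == correct_answer:
            trust[voter] *= 1 + rate * confidence
        else:
            trust[voter] *= 1 - rate * confidence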

Such schemes can be better than OMOV, but they can also have problems 
that OMOV doesn't. When the issue is subjective or can be influenced after 
the fact, the feedback itself can contaminate the process: for instance, 
someone who thought a certain economic policy was good could ask others 
to say that they like the economic development that followed from the 
government taking the advice of the mechanism.

It could also have accumulation problems: say that you have a voter who 
knows a lot about computer science but little about engineering. Then he 
might use the trust he acquired from answering the CS questions 
correctly to pretend that he is knowledgeable within the domain of 
engineering and so influence the decisions there. Negative feedback would 
remove the trust afterwards, but by then, the vote has already been 
conducted. A similar accumulation problem could occur in which someone 
who has gained a lot of trust could "afford" to spend more of it on 
later estimates, giving his estimates too great a weight for a while.

If the method is only designed as an oracle (ask a question and get an 
answer), there's no optimization on top of it (the answers aren't used 
for decision-making that affects those who answer), and the verification 
is done objectively (by actually determining the right answer), then it 
could work.

> P=NP is only meant to evoke the relevant properties of objective truth 
> i.e. that it is true or false and that people don't know for certain 
> what it is. It is also meant to illustrate how people are NOT Condorcet 
> jurors themselves. We are NOT objective truth with some noise thrown in. 
> In fact, even in the P=NP problem, we would only distrust putting it to 
> a public vote because we have so much additional information about the 
> problem. In retrospect, using it as an example was a mistake. A system 
> of the ilk I am proposing doesn't know anything about the "content" of 
> the issues, simply what different people believe.

I'm not sure it would be generalizable even when we are uninfluenced by 
others. If you were to roll a die, hide it with your hand, and ask the 
people whether it came up less than four, then the people would have no 
idea and there would be only noise. This noise would push the result in 
a random direction, and you couldn't tell whether the majority decision 
or its negation was the correct one.

Now say that everybody had a fixed bias to assume the die would come up 
less than four. Then the majority decision would amplify that bias to 
the point where the outcome would be "yes, it's less than four". 
However, you couldn't just negate the answer either, because it is still 
possible (a 50% chance) that the die really did come up less than four.
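
A quick simulation shows the difference between the two cases; the 0.7 
bias figure is only an assumption for illustration:

import random

def majority_says_low(n_voters, p_say_low):
    # Each voter independently answers "less than four" with probability
    # p_say_low; return True if a majority of them say so.
    low_votes = sum(random.random() < p_say_low for _ in range(n_voters))
    return low_votes > n_voters / 2

trials = 10000
pure_noise = sum(majority_says_low(101, 0.5) for _ in range(trials)) / trials
fixed_bias = sum(majority_says_low(101, 0.7) for _ in range(trials)) / trials
print(pure_noise)  # about 0.5: the majority verdict is a coin flip
print(fixed_bias)  # close to 1.0: the shared bias, not the die, decides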

So I suppose that means the noise is not related to the truth but to 
some fixed reference frame. That would technically mean that we're not 
Condorcet jurors, because the probability of getting it right isn't 
uniformly > .5 or uniformly < .5; it depends on which "side" of the 
frame the true answer falls on. So you're right that we wouldn't be 
Condorcet jurors, but the reason doesn't have to involve what other 
people believe; the bias can be pre-set.

> Nevertheless, I believe that we can simulate a Condorcet jury by 
> weighting people differently based on past behavior. This would make the 
> resulting voting methods adaptive rather than memory-less. The current 
> methods that I believe have been proposed thus far are all memory-less. 
> The result of the n+1st election can't depend on the nth election, 
> indeed the results of any elections are independent of the order in 
> which they are conducted.
> 
> However, I would argue that this ignores important information that we 
> have in real life. We know something about the structure of 
> non-randomness in people's opinions and can account for it. Assuming 
> people are honest, I believe it is possible for an adaptive voting 
> method to outperform methods that enforce OMOV for the very limited goal 
> I set forward in my first post… to attempt to determine the truth of 
> propositions, not to make any type of normative decision.

What is the non-randomness in CS theorists' opinions with respect to 
the P = NP question? These people tend to say that P != NP 
(http://www.cs.umd.edu/~gasarch/papers/poll.pdf), and the hunch seems to 
be based on the fact that they've found no algorithms that come close 
(for the difficult phase-transition regions). That may be entirely valid 
- if P != NP, then there won't be any algorithms that solve NP-complete 
problems in polytime - but it may also be a hasty conclusion based on 
insufficient data. Who knows?

For hard questions like P != NP, you won't be able to reduce people to 
Condorcet jurors as the uncertainty is too great. For easier ones, you 
could reduce the bias, but you probably couldn't eliminate it altogether.
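
For completeness, the jury theorem arithmetic behind that "> .5" 
condition can be sketched like this; treat it as an illustration of the 
amplification effect, not a claim about any real electorate:

from math import comb

def majority_correct(n, p):
    # Probability that a majority of n independent jurors, each correct
    # with probability p, reaches the right answer (n odd).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_correct(101, 0.55))  # roughly 0.84: competence is amplified
print(majority_correct(101, 0.45))  # roughly 0.16: and so is incompetence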



