[EM] Richard, re: Nash equilibrium for voting systems
MIKE OSSIPOFF
nkklrp at hotmail.com
Fri Jul 19 13:13:46 PDT 2002
I hope this posts better than it appears on this screen, with
the lines messed up.
I'd said:
> I know that Richard questioned the meaningfulness of that
> equilibrium, because it sounds like bloc voting.
Richard replied:
[...]
[Where F is the number of factions, and each faction is treated as a player
because each faction votes unanimously]

I have no objection to the definition that produces exactly F players, but
simply want to emphasize that it is only one of a multitude of ways the set
of players can be defined. I don't hold that it isn't a valid approach for
certain theoretical explorations, especially given its simplicity. I would
only object to the idea that this definition is useful in real-life public
elections, where many if not most voters will strategize independently from
the rest of their factions.
I reply:
The objection stated in that last sentence is the one that I'm talking
about. Saying that many voters won't vote exactly the same as others in
their faction misses the point here. My point is that if, with a certain
votes configuration and outcome, some voters can improve the outcome for
themselves by changing their vote, then that existing votes configuration
and outcome is obviously unstable.
You can say that calling it an equilibrium, if no same-voting,
same-sincere-ranking group can improve on it for themselves, is
overlenient. But that doesn't change the fact that, with the margins
methods, there are situations (configurations of candidates and voters'
sincere rankings) where, even with that lenient definition, the only
equilibria are ones in which defensive order-reversal is used. Margins
advocates can't finesse their way out of this.
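
To make that lenient definition concrete, here's a rough sketch of how
you'd test it by brute force (the code and all its names are mine, not
anything from Blake or Richard): group the voters by (current ballot,
sincere ranking), and ask whether any such group, switching as a bloc to
some other ranking, can elect someone the whole group sincerely prefers to
the current winner. The election rule is left pluggable; the plurality rule
below is only a placeholder.

from itertools import permutations

def prefers(sincere, x, y):
    # True if x is ranked above y in the sincere ranking.
    return sincere.index(x) < sincere.index(y)

def plurality(ballots):
    # Placeholder rule: count first choices only; break ties alphabetically.
    tally = {}
    for b in ballots:
        tally[b[0]] = tally.get(b[0], 0) + 1
    return max(sorted(tally), key=lambda c: tally[c])

def is_group_equilibrium(ballots, sincere_rankings, rule, candidates):
    # "Lenient" test: treat each group of voters who cast the same ballot
    # AND share the same sincere ranking as one bloc, and see whether any
    # such bloc can switch to another ranking and get a winner the bloc
    # sincerely prefers to the current winner.
    current_winner = rule(ballots)
    groups = {}
    for i, (b, s) in enumerate(zip(ballots, sincere_rankings)):
        groups.setdefault((tuple(b), tuple(s)), []).append(i)
    for (ballot, sincere), members in groups.items():
        for new_ballot in permutations(candidates):
            if new_ballot == ballot:
                continue
            trial = list(ballots)
            for i in members:
                trial[i] = list(new_ballot)
            new_winner = rule(trial)
            if new_winner != current_winner and prefers(sincere, new_winner, current_winner):
                return False    # that bloc can do better for itself: not stable
    return True

candidates = ["A", "B", "C"]
sincere = [("A","B","C")]*4 + [("B","C","A")]*3 + [("C","B","A")]*2
ballots = [list(s) for s in sincere]        # everyone votes sincerely
print(is_group_equilibrium(ballots, sincere, plurality, candidates))

With that 4 A>B>C, 3 B>C>A, 2 C>B>A profile and sincere voting, the check
prints False under the placeholder plurality rule, because the B>C>A bloc
can elect C by ranking C first, which is exactly the kind of instability
meant above. Plugging in wv, margins, or Approval counts in place of the
plurality rule (and widening the ballot space accordingly for Approval) is
where the comparison in the next paragraph comes in.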
Besides, even with Blake's least lenient equilibrium definition,
Condorcet(wv) and Approval, when there's a CW (Condorcet winner), always
have equilibria in which the CW wins and no one order-reverses. So,
whichever equilibrium definition you use, margins is the method that fails
by sometimes having no equilibria in which defensive order-reversal isn't
used.
By the way, I disagree when Richard calls Blake's definition a definition
of Nash equilibrium. Blake's definition may have uses, but it certainly
isn't a definition of Nash equilibrium. It violates the intent of Nash
equilibrium. Nash's definition speaks of one player changing his strategy,
which here means one person changing his vote. If Nash had meant any set of
people with different sincere rankings and different current strategies,
then he would have said that. You could say we're overextending Nash's
definition by letting more than one voter change their votes, but we change
it beyond recognition if we have different-voting voters with different
sincere rankings all change their votes.
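
(For reference, Nash's condition in standard notation: a strategy profile
$s^*$ is a Nash equilibrium if, for every individual player $i$ and every
alternative strategy $s_i$, $u_i(s_i^*, s_{-i}^*) \ge u_i(s_i, s_{-i}^*)$.
One player deviates at a time, with everyone else's strategy held fixed.)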
The part of Richard's message quoted above is the part that
I'm replying to. Though I don't reply to the preceding part of
that message, I'm copying it below:
Mike Ossipoff
I don't remember what I wrote about this. I might have questioned its
practical meaning, but certainly in a theoretical context, like Alex's
Small Voting Machine, it is useful (with the suggested modification by
Alex, because utilities are only needed for probabilistic strategies).

But there are many ways you could define an election as a game, depending
on who you consider the players to be. And practically, the players are
defined by how well groups are able to coordinate their strategies.

If V is the number of voters, and F is the number of distinct factions
(per Alex's definition), then you could have many possibilities for the
number of independent players, P:

If the voters are unable to coordinate their strategies at all, then you
have P = V.

(It's been pointed out that there would be a heck of a lot of Nash
equilibria with this many players, since in almost any circumstance except
a near tie, no single voter can change the outcome. However, you could
focus on those Nash equilibria that are the most stable, in the sense of
how large a factor would be needed, if you exaggerated the power of a
given voter by that factor, to allow the voter to influence the outcome by
acting alone. I don't have a formal definition for this measure of
stability, and haven't really thought it through, but hopefully you get
the general idea.)

If each faction is 100% coordinated (100% of its members are informed of
the strategy and can be relied on to participate in that strategy), then
you have P = F (each faction is a player).

If several subgroups within each faction are internally coordinated, but
there is no coordination between the subgroups, then you have F < P < V.

If two or more factions with similar but not identical rankings can
coordinate a strategy between themselves that is better for each than each
faction can accomplish alone, those factions would form a single player.
In such a case you could have P < F.

If I understand Mike's paraphrase of Blake's definition, it would apply to
any possible combination. For example, two of the subsets of faction A
(A1 and A2) will team with subset B1 of faction B to form a strategy,
while group A3 teams with B2, and at the same time each subset of faction
C (C1, C2, and C3) forms a different strategy. This would lead to a valid
definition of a Nash equilibrium for that configuration in that election.
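
As a small illustration of the bookkeeping above (the code and the numbers
are mine, not Richard's): V counts individual voters, F counts factions
with identical sincere rankings, and P counts whichever coordinated groups
you decide to treat as players.

from collections import Counter

sincere = [("A","B","C")]*4 + [("B","C","A")]*3 + [("C","B","A")]*2

V = len(sincere)                    # 9 individual voters
F = len(Counter(sincere))           # 3 factions (distinct sincere rankings)

# One possible coordination structure: the first faction splits into two
# independently coordinated subgroups, the other two factions stay whole.
players = [[0, 1], [2, 3], [4, 5, 6], [7, 8]]
P = len(players)                    # 4 players

print(V, F, P)                      # prints 9 3 4, i.e. F < P < V here

No coordination at all would give P = V, fully coordinated factions would
give P = F, and merging whole factions into larger coordinated blocs would
give P < F, matching the cases listed above.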