[EM] Finding SociallyBest. Is it impossible?

Peter de Blanc peter at spaceandgames.com
Sat Apr 7 23:26:48 PDT 2007


On Sat, 2007-04-07 at 23:44 -0500, Paul Kislanko wrote:
> Actually, Isaac Asimov speculated along these lines back in his early 1950s
> short story "Franchise", where the super-computer Multivac selected one
> voter, asked him a lot of seemingly irrelevant questions, and from the
> answers determined how everyone else would vote in every election. 

A similar example is SIAI's Coherent Extrapolated Volition:
http://sl4.org/wiki/CoherentExtrapolatedVolition

> >>Zero BR is impossible with strategic voters; that would mean electing the
> candidate that maximizes aggregate utility. But if that's what you're doing,
> then voters will be motivated to lie about their utility functions. It
> doesn't matter what sort of contortions you use in designing the method.<<
> 
> In the short story, and surely in practice, the AI engine knows enough about
> the voter that it would account for the voter's tendency to lie. The voter

This sort of thing may be possible with humans, but it's a problem that
goes beyond voting theory, or any sort of abstract decision theory, and
requires special knowledge of human psychology.

My point about honesty is this: If I know ahead of time that the AI is
going to somehow figure out what would maximize aggregate utility, then
I can replace my utility function with a new one in such a way that my
original utility function will be better satisfied.

For example, suppose we have two voters: A and B, and four outcomes W,
X, Y, and Z. Here are their utility functions:

A(W) = 0, A(X) = 0, A(Y) = 2, A(Z) = 3
B(W) = 0, B(X) = 3, B(Y) = 2, B(Z) = 1.5

B might reason as follows: "I know the AI is going to figure out the
socially-best outcome, which will be outcome Z (summed utility 4.5). But
that's a poor outcome for me personally (utility 1.5). If I rewrite my
goal system, assigning a utility of 3 to outcome Y, then the AI will
select outcome Y instead. From the perspective of my current goal system,
that outcome is superior to outcome Z. So I will rewrite my goal system
in order to influence the election outcome."

This may be less unrealistic than it seems. People change their
preferences for strategic reasons all the time (for instance, to signal
group membership).

> >>With honest, perfectly introspective voters, you could just ask everyone
> to report their utility functions and sum them up. But such voters are a
> fantasy.<<
> 
> Utility functions are a fantasy. But it's true that asking voters to define
> theirs wouldn't work. Can you define yours? I can't even get a good
> definition for what that means.

This is true, but honest voters are another fantasy on top of that
fantasy. I think the fantasy of utility functions is more useful than
the fantasy of honest voters. There are good reasons why a selfish human
would want to become a utility maximizer, but I can't think of why a
selfish human would want to become honest.

> >>The difficulty with evolving a voting method is that you don't know what 
> strategic voting would look like. Maybe you could evolve the voting 
> strategies too, but I expect you'd have pretty major issues with local 
> optima.<<
> 
> This is only true if you're basing the hypothetical AI engine on the
> principle of summing individual utilities. The meta-considerations I
> mentioned above include that you have to start with some axioms as
> underpinnings.

I don't see what axioms you could start with such that you don't have to
take into account voter strategies.

> It might turn out that minimizing BR isn't the same as determining what is
> "Socially Best".

This is a good point. I actually think that adding utility functions is
nonsense. A utility function describes an agent in the sense that it
tells us what decisions that agent would make; any two utility functions
that would always produce the same decisions should be considered
equivalent (so a utility function is only determined up to a positive
affine transformation). It follows that adding two utility functions
doesn't produce a unique result.
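As a concrete illustration (again my own sketch, reusing the A and B
numbers from earlier in this message, with invented names): a positive
affine rescaling of B's utilities leaves every decision B would make
unchanged, yet it changes which outcome maximizes the sum.

OUTCOMES = ["W", "X", "Y", "Z"]
A = {"W": 0, "X": 0, "Y": 2, "Z": 3}
B = {"W": 0, "X": 3, "Y": 2, "Z": 1.5}

def affine(u, a, b):
    """u -> a*u + b with a > 0: describes exactly the same agent."""
    return {o: a * v + b for o, v in u.items()}

def socially_best(*reported):
    """Pick the outcome that maximizes the sum of reported utilities."""
    return max(OUTCOMES, key=lambda o: sum(u[o] for u in reported))

print(socially_best(A, B))                 # Z (sums 0, 3, 4, 4.5)
print(socially_best(A, affine(B, 10, 0)))  # X (sums 0, 30, 22, 18)

The sum-maximizing outcome depends on an arbitrary choice of scale for
each voter, which is the sense in which "aggregate utility" is not
uniquely defined.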

So far, I don't see any sensible way to talk about social utility.
Society isn't an agent. It doesn't have a utility function.

Still, everything else I've seen seems even more ad-hoc than BR.

Peter de Blanc



