[EM] Comments on Heitzig's utility essay
Abd ul-Rahman Lomax
abd at lomaxdesign.com
Thu Feb 22 11:16:30 PST 2007
At 12:52 PM 2/22/2007, Warren Smith wrote:
>--WDS: OK, sure. The car trip at 20 km/hr with flashing lights might
>cost you, not 1 cent, but in fact $20 extra per trip, i.e. 2000 cents.
No, the cost of that mode of travel could be far higher than that.
What is the cost of the delay? That depends on how far the trip is, how
valuable one's time is, and how urgent the trip is. What if you are on
the way to the hospital?
>But nevertheless you evidently consider your $20 worth more than
>p*(your child's life), where p is the chance of a fatal car crash if
>you had driven with normal caution levels. In fact we know how frequent
>car crashes are, so we can estimate p, and we find p = 1/5,000,000
>roughly. OK, now scaling by 2000, we find that you, by your behavior,
>have evidently considered 1 cent worth more than p*(your child's life)
>where p = 10^(-10). I was only trying to prove it for p = 10^(-20), so
>in fact, even if you do not buy the validity of scaling by 2000, it
>still seems pretty clear I have a valid argument here, since I have
>10^10 worth of headroom. Heitzig in fact was arguing not only that I'm
>not right about p = 10^(-20), but I'm also not right about p = 10^(-100)
>or p = 10^(-1000), or any p > 0 whatever.
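For anyone following the numbers, here is that scaling step written out
as a small Python sketch. The $20 figure and the 1-in-5,000,000 crash
probability are Warren's; the snippet only restates his arithmetic.

    # Warren's revealed-preference scaling, restated (his figures, not mine).
    trip_cost_cents = 2000           # the $20 extra per trip, in cents
    p_fatal_crash = 1.0 / 5000000    # rough chance of a fatal crash per trip

    # By driving normally, you treat 2000 cents as worth more than
    # p_fatal_crash * (value of the child's life). Dividing both sides by 2000:
    p_per_cent = p_fatal_crash / trip_cost_cents
    print(p_per_cent)                # 1e-10, i.e. 1 cent versus a 10^-10 risk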
And Heitzig is obviously incorrect about that. But my point was that
your counterexamples had an obvious defect. To show a contradiction, you
needed an example more closely comparable to what Jobst had proposed.
His whole point was that we neglect utilities under some circumstances,
so to show him that he is "lying" -- a bad choice of words, Warren --
you'd have to show an example where the costs are similar.
The real question is *how much* would you increase risk to your
child's life for one penny of average benefit. It's obvious that you
would accept *some* level of risk. Would you, watching your child,
*ever* stoop to pick up a penny? If you would do it at *any* time,
then you might do it under those conditions, and it cannot be denied
that a moment's inattention watching a child can have serious consequences.
But *how often* will it have these consequences? Warren is right: the
probability is not irrelevant, though Jobst seemed to be claiming it
was. But I'm not sure that was his exact claim.
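To put my question in numbers, here is one way to frame the break-even
risk. The dollar value assigned to a life below is purely a placeholder
of my own, not a figure anyone in this thread has proposed.

    # Hypothetical break-even: at what added risk p does a one-cent gain
    # stop being worth it? The value-of-life figure is illustrative only.
    penny_cents = 1
    value_of_life_cents = 500000000    # $5,000,000, a placeholder value
    p_breakeven = float(penny_cents) / value_of_life_cents
    print(p_breakeven)                 # 2e-09: below this added risk, the penny "wins"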
>Concerning the meteor shield, ok, fine. Do you, or do you not, toss
>1-cent pieces out of your pocket all day over the head of your child,
>in an effort to possibly deflect (or diminish the force of) incoming
>meteor fragments and stray bullets?
Hey, you never can tell. :-)
>The point is, people risk their lives and their children's lives in tiny ways
>all the time in order to gain money or convenience or other stuff
>exchangeable for money,
>and if you had the gall to try to prevent them, they would be furious.
Yes, and I think I was explicit about this.
>What we have here - in an effort to deny that obvious fact and to
>disparage all of utility theory - is a massive departure from a
>rational assessment of reality (which appears to occur in certain
>people's heads whenever the word "child" is mentioned). Social choice
>theorists cannot afford to be irrational to anywhere near that extent
>if they wish to *be* social choice theorists. Also, I point out that if
>you are that irrational, then you also are an extremely poor parent!
Not necessarily. Parents will spend money to create a potential
benefit or protection for their child when the benefit or reduction
of risk is out of proportion to the cost, because they are, shall we
say, programmed to do so. That is, when something *comes to their
attention*, they may act in this way; it is a manifestation of how
important the welfare of the child is to them. So suppose someone comes
to me and literally asks me: "I'm running a lottery, and you can have a
cent if you agree to participate. The 'prize' is the terrible
consequence mentioned, but, hey, it's quite unlikely to happen."
Jobst is right. I would not enter the lottery. Indeed, I wouldn't
enter it for quite a bit more money than a penny! Now, if it is for
enough money to make my child's life significantly better, I might
consider it. And, in fact, I take risks like that routinely. I drive
my child to ballet classes, for example. Quite clearly, there is a
risk to her safety that I am accepting for a not-so-large benefit.
But I think that the risk is small enough, and the gain large enough,
to justify it, even though, should I lose this lottery, I'll severely
regret the outcome.
Quite simply, social utility theory works much better when utilities
are amalgamated across a large population, where small expenses or
risks (or matching benefits) can add up to something large enough to matter.
And when we are talking about Range Voting, we *are* considering such
amalgamation. Now, to aggregate these benefits and risks, we must
consider the individual value, which is small enough that the
individuals wouldn't necessarily make the so-called "rational" choice.
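Here is a toy illustration of what I mean by amalgamation; the
population and per-voter figures are invented, chosen only to show the
scale effect.

    # Per-voter stakes that are individually negligible can sum to
    # something society cannot ignore. The numbers are invented.
    n_voters = 100000000             # hypothetical electorate of 100 million
    per_voter_benefit_cents = 1      # one cent of expected benefit each
    total_dollars = n_voters * per_voter_benefit_cents / 100.0
    print(total_dollars)             # 1000000.0 dollars in aggregate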
Factored into this must be that people will, quite rationally,
mistrust the judgement that the risk is small. If a stranger comes up
to me and offers me this terrible lottery, I will certainly suspect
that the risk has been understated, that the stranger is really a
sadist and has a gun ready to use. So if I took the social-utility
penny, I might then be faced with terrible consequences, ones that
could have been predicted had my knowledge been better.
I think our instinctive responses factor this in. In other words,
they are *more sophisticated* than simple social utility theory. By
waving a magic wand, we can claim that all this is factored into
individual utilities.
Understand, Warren, I fully support the application of utility theory
to Range Voting. It's the best we've got. The attempts to poke holes in
it aren't, generally, suggesting better alternatives. It is like people
criticizing probability theory on the grounds that, hey, it doesn't
predict the outcome. Okay, it doesn't, not exactly, but have you got
something better on which to base decisions?
>I have a pretty decent idea what utility is, although getting to
>that point is not trivial as
>we see.
I don't know what good art is, but I know it when I see it. Something
like that.
We don't have to be able to precisely define utility in order to use
the general concept. For mathematical analysis, yes, we need *some*
kind of applicable definition. And it is almost certain that any
model we come up with is going to be short of precisely ideal. So
because we don't have an ideal car, we shouldn't go anywhere? And
because we don't have an exact definition of utility, we shouldn't
attempt to maximize it?
The fact is that we know it, under some conditions, when we see it. I
have often given the pizza example. Nobody has claimed that
preference strength *in that example* is irrelevant. This is because
we know it when we see it. Trying to define it under all
circumstances is a different matter, and may require great rigor, I'd
suggest. We might not be ready for it.
For what we are really talking about is how human beings make
decisions, or judge decisions, and this is highly complex. Something
like Range Voting is involved, I'm pretty sure, but that's only part of
it, for the human decision-making process, except in emergencies, more
resembles a deliberative process, with a back-and-forth dialectic
extended over time. In emergencies, I expect, we use pure Range, based
on what is already in place.
>If you want to debate the meaning of or importance of "fairness"
>(which I gather you believe to be a distinct notion from "utility"?),
>then by all means write another essay about whatever you think it
>might be.
I think that Jobst has done us a service by raising the issues,
whether he is "right" or not. We need some better discussions of
utility, clearer and more generally accessible. Ideally, if
possible, we should find a definition or general method of
application of utility that is broadly acceptable.