[EM] Spatial models -- Polytopes vs Sampling

Kristofer Munsterhjelm km_elmet at t-online.de
Fri Feb 4 13:38:17 PST 2022


On 04.02.2022 20:24, Daniel Carrera wrote:
> 
> On Fri, Feb 4, 2022 at 5:05 AM Colin Champion
> <colin.champion at routemaster.app> wrote:
> 
>>     I haven't followed this discussion - sorry if I'm missing
>>     something. I quite like Jameson Quinn's model of an infinite
>>     number of dimensions of progressively diminishing importance. On
>>     the other hand, if 'n dimensions' is understood as meaning n
>>     dimensions of equal importance, then it seems to me intuitively
>>     unattractive. As a first approximation I might describe politics
>>     on a left/right axis; as a second I might distinguish between
>>     economic and social liberalism but expect them to be correlated
>>     (leading to a cigar-shaped 2D Gaussian) etc. (This doesn't help
>>     Daniel who wants an upper limit.)
> 
> 
> 
> That's my intuition as well --- dimensions of decreasing importance. If
> you wanted to make a cigar-shaped Gaussian, do you have any idea of how
> elongated it should be? Like... should each dimension have half the
> variance of the one before? Something else?

I found a (very rough draft of a) paper that argues that political
positions are hierarchical down to infinity: if you ask very specific
questions, different voters will have different opinions about them,
but those opinions can be grouped into larger categories that do make
sense in a lower-dimensional space.

https://garymarks.web.unc.edu/wp-content/uploads/sites/13018/2016/09/rovny-and-marks.-issues-and-dimensions.pdf
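
To make the cigar question concrete, here's a minimal sketch (mine, not
the model from Quinn's vse-sim page or the paper above) of a spatial
model whose dimensions have geometrically diminishing importance. The
decay factor of 0.5, each dimension having half the variance of the one
before, is just the illustrative value Daniel suggested; the voter,
candidate and dimension counts are arbitrary too.

import numpy as np

def sample_positions(n, n_dims=8, decay=0.5, rng=None):
    """Draw n positions from a Gaussian whose variance shrinks by
    'decay' per dimension (decay=0.5 halves it each time)."""
    rng = np.random.default_rng() if rng is None else rng
    stds = np.sqrt(decay ** np.arange(n_dims))  # sigma_k = decay^(k/2)
    return rng.normal(0.0, 1.0, size=(n, n_dims)) * stds

def utilities(voters, cands):
    """Utility = negative Euclidean distance in the weighted issue space."""
    diff = voters[:, None, :] - cands[None, :, :]
    return -np.linalg.norm(diff, axis=2)  # shape (n_voters, n_candidates)

rng = np.random.default_rng(1)
voters = sample_positions(1000, rng=rng)
cands = sample_positions(5, rng=rng)
u = utilities(voters, cands)  # feed this into whatever method you're testing

Under the hierarchical reading, the decay factor stands in for how
quickly sub-issues stop mattering relative to their parent issue.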

If the hierarchical picture is true, then there should be some
effective limit on the number of dimensions, set by how much voters
care to coordinate on and learn the issues, and by how strong a
low-pass filter (so to speak) the political mechanism provides. PR
would allow for more distinctions than FPTP.

In such a model, I would imagine that any sort of single-member
district would have a definite attenuating effect on variety, but that
Condorcet would reproduce the electorate better, given the unavoidable
level of quantization, than would, say, IRV or FPTP; i.e. that it's
better at finding consensus candidates.

If you want to directly represent an area of opinion space that 10% of
the voters care about in every district, then you would pretty much need
PR. IRV would either magnify or squash this 10% support based on its
"strongest wing of strongest wing" recursive logic, whereas Condorcet
would pull the consensus candidate somewhat in its direction.
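
Here's a toy profile (my construction, just to illustrate the squashing
case): a 10% centrist bloc whose candidate is everyone's second choice
gets eliminated in the first IRV round, while that same candidate is
the Condorcet winner.

from collections import Counter

# Ballots as (ranking from most to least preferred, share of the vote in %).
profile = [
    (("A", "C", "B"), 45),  # left wing
    (("B", "C", "A"), 45),  # right wing
    (("C", "A", "B"), 10),  # the 10% centrist bloc
]
candidates = ["A", "B", "C"]

def condorcet_winner(profile, candidates):
    """Return the candidate who beats every other one pairwise, if any."""
    for x in candidates:
        if all(sum(w for r, w in profile if r.index(x) < r.index(y)) >
               sum(w for r, w in profile if r.index(y) < r.index(x))
               for y in candidates if y != x):
            return x
    return None

def irv_winner(profile, candidates):
    """Repeatedly eliminate the candidate with the fewest first preferences."""
    remaining = set(candidates)
    while True:
        firsts = Counter({c: 0 for c in remaining})
        for ranking, w in profile:
            top = next(c for c in ranking if c in remaining)
            firsts[top] += w
        leader, votes = firsts.most_common(1)[0]
        if votes * 2 > sum(firsts.values()) or len(remaining) == 1:
            return leader
        remaining.remove(min(firsts, key=firsts.get))

print(condorcet_winner(profile, candidates))  # C, the consensus candidate
print(irv_winner(profile, candidates))        # A; C is eliminated first

With a different profile, where the 10% bloc is the strongest wing of
the strongest wing, IRV can instead magnify it; the point is just that
IRV's outcome tracks first-preference strength rather than pairwise
consensus.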

So SMD would give you some degree of bundling, PR with thresholds would
give you a lesser degree, and PR without thresholds an even lesser degree.

I'd imagine sortition would (on average) produce a very low degree of
bundling, since a representative sample should reflect the population
accurately, down to the sampling variance determined by the size of the
assembly itself. It would also bypass whatever natural limit arises
from the nature of campaigning and party organization.
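
For a rough sense of that sampling variance (back-of-envelope numbers
of my own, with an arbitrary assembly size): a bloc with population
share p ends up with a seat share of about p plus or minus
sqrt(p*(1-p)/n) in an assembly of n randomly drawn members.

from math import sqrt

p, n = 0.10, 500                # bloc share and assembly size (illustrative)
se = sqrt(p * (1 - p) / n)      # standard error of the sampled seat share
print(f"{p:.0%} bloc, {n} seats: {p:.0%} +/- {se:.1%} (one sigma)")
# -> roughly 10% +/- 1.3%, so bundling is limited only by sampling noise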

I don't know what this natural level would be, though, for something
like Condorcet. I guess it would depend on the degree to which the
dimensions follow geographical distinctions: if six districts have
voters who care exclusively about economic issues, and four districts
have voters who care exclusively about unitary-state vs. devolution
issues, that gives a different composition of the issue space than if
the unitary/devolution voters make up 40% of every district and the
economic voters the other 60%.

(If the model is accurate, that is. Perhaps a properly designed poll
could determine whether it is; such a poll would ask various questions
within subcategories of subcategories of main categories and see if the
results are consistent with the hierarchical model.)

> Let me also respond to Kristofer's comment about not taking the current
> bundling of issues as a given. One could argue that the pragmatic
> approach is to ask how a change in the electoral system would affect the
> fortunes of minor parties that already exist, or how it might encourage
> a new party to form. In the former case, you only need to model the
> sub-space spanned by parties that already exist, and in the latter you
> only need 1 more dimension than that.

Yes. But if we're not careful, that's the kind of reasoning that leads
to IRV-type reform. Suppose that a method is stable for k parties and
then something bad happens at k+1. Once we're stuck at k parties, we
may ask for a reform that extends the well-behaved regime up to k+1.
But then we'll face the same problem all over again at k+2.

It would be better, I think, to take into account scenarios all the way
up to n parties for some large n, so that there's plenty of room to grow
-- if we have that luxury as mechanism designers, of course.


The metaphor isn't completely fair to Condorcet, because Condorcet
passes IIA whenever there is an honest CW and too few strategic voters
to disturb the presence of that CW. That could theoretically happen at
any number of dimensions. A Condorcet method can be stable in any
dimension (best case) and unstable in any dimension above one (worst
case), so it's much harder to tell how many parties it can support.

> 
>  
> 
>     Quinn's model is on his vse page:
>     http://electionscience.github.io/vse-sim/VSE/
> 
> 
> I've read this page before and I dismissed it because I couldn't figure
> out what the model actually was. Maybe it's obvious and I just don't see
> it, but what is the actual formula for the VSE?

The concept of VSE is simply this:

Suppose every voter assigns an absolute utility to each candidate. Then
each candidate provides the electorate as a whole with some amount of
total utility, were he elected.

Now suppose there's a magic method that reads the voters' minds and
always elects the best candidate. That method has 100% VSE by
definition. A method that picks candidates at random is defined to have
0% VSE. The VSE of a method is its expected utility performance mapped
onto that scale:

  VSE = (E[U_chosen] - E[U_random]) / (E[U_best] - E[U_random]),

where the expectations, taken over simulated elections, are the total
utility of the candidate the method actually elects, of a randomly
picked candidate, and of the utility-maximizing candidate,
respectively. Most methods are above 0%, but it's possible for a truly
bad method to be less than 0%.
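
A minimal sketch of that computation (my own illustration, not Quinn's
vse-sim code), assuming you already have each voter's utility for each
candidate and the winner each method picked in each simulated election:

import numpy as np

def vse(utilities, winners):
    """utilities: (n_elections, n_voters, n_candidates) sincere utilities;
    winners: the method's winning candidate index per election."""
    totals = utilities.sum(axis=1)          # total utility of each candidate
    chosen = totals[np.arange(len(winners)), winners].mean()
    best = totals.max(axis=1).mean()        # the mind-reading magic method
    random = totals.mean(axis=1).mean()     # the pick-at-random baseline
    return (chosen - random) / (best - random)

The utilities themselves could come from a spatial model like the
sketch earlier in this post.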

More info on wikipedia:
https://en.wikipedia.org/wiki/Social_utility_efficiency

Jameson summarizes his results here:
http://electionscience.github.io/vse-sim/VSEbasic/
Other people have also done VSE calculations. Here's John Huang's:
http://votesim.usa4r.org/summary-report.html

-km

