<div dir="auto">This reminds me of the n-body problem. For n not too large, the Parker-Sochacki method is the best numerical solution technique that I know of so far.<div dir="auto"><br></div><div dir="auto">But for many years they have been using statistical mechanics in the many-body problem ... from n = 3 on up to n = infinity ... thermodynamics, magneto-fluid dynamics, etc.</div><div dir="auto"><br></div><div dir="auto">Those guys know how to do it.</div><div dir="auto"><br></div><div dir="auto">When I did Minuteman missile simulations back in the early 1970s we had a spherical harmonics gravitational model with hundreds of terms, which was necessary to get a ballistic missile to within a few feet of an enemy missile silo halfway around the world.</div><div dir="auto"><br></div><div dir="auto">The guys that came up with it used data from satellite tracking observations to get a spherical harmonics mass density model of the earth. Once they had that, it was a simple matter to express the gravitational potential in spherical coordinates, and from that, the gravitational force via the gradient of the potential.<br></div><div dir="auto"><br></div><div dir="auto">We need spherical harmonics expansions of the voter density function on the surface of a sphere of the right dimension.</div><div dir="auto"><br></div><div dir="auto">Now I know why a spherical distribution was suggested ... you can have a uniform probability density on a compact manifold like a sphere or a Cartesian product of circles, but not on an open manifold like an entire plane ... for that, the best (closest to smooth & uniform) you can do is a Gaussian.</div><div dir="auto"><br></div><div dir="auto">[Somebody should start doing spherical Yee diagrams.]</div><div dir="auto"><br></div><div dir="auto">We need to get one of those geodesists from the 1970s who solved this problem without supercomputers ... they could tell us how to do it ... 
just like the astronomers of the nineteenth century could use the "method of orbital elements", a clever perturbation technique, to do all the numerical integration for the high-precision n-body celestial mechanics problems of the solar system (planets, moons, comets, asteroids, etc.) ... with nothing beyond mechanical adding machines and tables of trig and log functions.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jan 23, 2022 at 12:26 PM Daniel Carrera <<a href="mailto:dcarrera@gmail.com">dcarrera@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:trebuchet ms,sans-serif;font-size:small"><br></div></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jan 21, 2022 at 9:02 AM Kristofer Munsterhjelm <<a href="mailto:km_elmet@t-online.de" target="_blank" rel="noreferrer">km_elmet@t-online.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It's easy to approximately find such probabilities for spatial models as<br>
well by just creating an instance and randomly placing voters on it to<br>
determine what they vote, and hence in the limit of number of trials<br>
going to infinity, get the probability.<br>
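The sampling procedure described here can be sketched in a few lines of Python. This is my own minimal sketch, not code from the thread: candidates are fixed, voters are drawn uniformly from the unit d-cube, and each voter's ballot is the candidates sorted by distance; ranking frequencies then approximate the true probabilities as the number of trials grows.

```python
# Monte Carlo estimate of ranking probabilities in a spatial model.
import random

def ranking(voter, candidates):
    """Indices of candidates sorted by squared distance to the voter."""
    def sqdist(c):
        return sum((v - x) ** 2 for v, x in zip(voter, c))
    return tuple(sorted(range(len(candidates)), key=lambda i: sqdist(candidates[i])))

def ranking_probabilities(candidates, d, trials=100_000, seed=1):
    """Frequency of each strict ranking over `trials` uniform voters."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        voter = [rng.random() for _ in range(d)]
        r = ranking(voter, candidates)
        counts[r] = counts.get(r, 0) + 1
    return {r: n / trials for r, n in counts.items()}

# Three candidates placed in the unit square (d = 2):
cands = [(0.2, 0.3), (0.8, 0.5), (0.5, 0.9)]
probs = ranking_probabilities(cands, d=2)
assert abs(sum(probs.values()) - 1.0) < 1e-9  # frequencies sum to 1
```

The error of each estimated probability shrinks like 1/sqrt(trials), which is exactly the slow convergence the rest of the thread is trying to avoid.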
<br>
But I was wondering, is it possible to do so exactly? Let's say that in<br>
a spatial model, first c candidates are placed uniformly at random in a<br>
unit d-cube. Then voters are also placed uniformly at random in this<br>
cube and they prefer candidates closer to them to candidates further away.<br>
<br>
Now I would suppose that the probability that a voter votes A first is<br>
equal to the volume of the Voronoi cell that belongs to A. (And the<br>
probability that a voter will vote A>B>C is the volume of the<br>
intersection of the closest-neighbor Voronoi cell for A with the<br>
second-closest-neighbor for B and the third-closest-neighbor for C.)<br>
<br>
Well, that eliminates one source of randomness -- assuming I can exactly<br>
calculate the vertices of these regions. But there's still the second<br>
source in the randomness of the candidates. Do you know of any calculus<br>
tricks to get the probabilities over every possible candidate position,<br>
or is this simply too hairy?<br></blockquote></div><div><br></div><div><br></div><div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">I have been thinking about this. I don't have a clever solution for you, but I have some thoughts that could be a stepping stone toward a solution. What you are trying to do is compute an integral in a high-dimensional space. Let's say that for a given set of candidates you have a function f() that outputs a real value. That might be a utility, or a probability, or whatever. I just want it to be a real number and not binary so that we can hope that f() is smooth in the space spanned by the candidates.</div></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">The set of candidates can be thought of as a point in a C*D-dimensional space where C is the number of candidates and D is the dimension of your D-cube. So you have a vector:</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">x_jk = position of candidate j on issue k</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">x = [ x11, x12, ... , x1D, x21, ... 
, xCD ]</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">When you ask:</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">"<span style="font-family:Arial,Helvetica,sans-serif">Do you know of any calculus </span><span style="font-family:Arial,Helvetica,sans-serif">tricks to get the probabilities over every possible candidate position, </span><span style="font-family:Arial,Helvetica,sans-serif">or is this simply too hairy?</span><span class="gmail_default">"</span></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">You are asking, "how do I integrate f()"? Your default plan of drawing random candidates and computing f() for each is integration by Monte Carlo sampling. Now, I don't know how to integrate f() analytically, but realizing that it is an integral allows us to consider other forms of numerical integration (hence my request that f() be a smooth function). One such example is Laplace's method:</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><a href="https://en.wikipedia.org/wiki/Laplace%27s_method" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Laplace%27s_method</a><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">Let us imagine that f() looks like a hill. 
If you can compute the gradient of f(), you can climb the hill and get to the peak. Up there, if you can compute the Hessian of f(), you can approximate f() with the Gaussian function with the same height and Hessian. This method is approximate, but it is very fast because you do not have to sample across the entire space. Laplace's method is often used in high-dimensional problems, where the cost of random sampling is prohibitive.</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">How do you compute the gradient and Hessian? Most often this is done with an autodiff package. Alternatively, you can use finite differences, but at the cost of reduced accuracy. What if f() isn't one hill but many? You can integrate each hill separately and add them up.</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">Anyway, I don't know whether Laplace's method is a realistic solution, but if it isn't, perhaps there is a different numerical integration tool available to you that is more efficient than Monte Carlo integration. For example, importance sampling doesn't require derivatives. There is a long list of numerical integrators that have been devised over the ages.</div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small"><br></div><div class="gmail_default" style="font-family:"trebuchet ms",sans-serif;font-size:small">Cheers,</div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><font face="trebuchet ms, sans-serif">Dr. 
Daniel Carrera</font></div><div dir="ltr"><font face="trebuchet ms, sans-serif">Postdoctoral Research Associate</font></div><div><font face="trebuchet ms, sans-serif">Iowa State University</font></div></div></div></div></div></div></div></div></div></div></div>
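As a toy illustration of the Laplace's method idea discussed above (find the peak, match a Gaussian to the height and curvature there, integrate the Gaussian): approximating n! = integral of exp(n ln x - x) over x > 0 this way recovers Stirling's formula. The exponent g(x) = n ln x - x peaks at x0 = n with g''(x0) = -1/n, so the Gaussian approximation gives exp(g(x0)) * sqrt(2*pi/|g''(x0)|) = (n/e)^n * sqrt(2*pi*n). A quick sketch of the arithmetic:

```python
# Laplace's method applied to n! = ∫₀^∞ exp(n·ln(x) - x) dx.
import math

def laplace_factorial(n):
    x0 = n                          # peak of g(x) = n*ln(x) - x
    g_peak = n * math.log(n) - n    # g(x0)
    hessian = -n / x0 ** 2          # g''(x0) = -1/n
    # Integral of the matching Gaussian: exp(g_peak) * sqrt(2*pi / |g''|).
    return math.exp(g_peak) * math.sqrt(2 * math.pi / abs(hessian))

for n in (10, 20):
    approx = laplace_factorial(n)
    exact = math.factorial(n)
    # Relative error shrinks as n grows; already under 1% at n = 10.
    assert abs(approx - exact) / exact < 0.01
```

The appeal in the C*D-dimensional candidate space is the same: one gradient climb and one Hessian evaluation replace an exponential number of samples, provided f() really is dominated by a few smooth hills.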
----<br>
Election-Methods mailing list - see <a href="https://electorama.com/em" rel="noreferrer noreferrer" target="_blank">https://electorama.com/em</a> for list info<br>
</blockquote></div>