[EM] Exact spatial model probabilities?

Ted Stern dodecatheon at gmail.com
Mon Jan 24 14:46:09 PST 2022


Hi Forest,

To digress from the topic a bit, there is an O(N) method for N-body
problems which has been around since the 1980s. It is called the Fast
Multipole Method (FMM).

The essence of the method is that the effect of a group of objects can be
approximated (outside a certain radius) as the effect of a single composite
object. These composite objects can further be combined into larger scale
composite objects. This is similar to Isaac Newton's result that the
gravitational effect of a spherically symmetric body can be treated as
that of a point mass at its center.

If you just stop there, this reduces the order of operations to O(N log N).
That first stage of approximation is the effect of collections of masses on
everything else in the Universe.  N-body simulations using this idea are
called Barnes-Hut methods.

The key to FMM is that you can do the same thing in the other direction,
finding the effect of everything *outside* a given radius on the objects
inside. That can also be described as a composite object -- a multipole.
Then there is a similar set of outer-inner nested collections to convey
information from the rest of the Universe to each individual particle.
This brings the number of operations to O(N).

https://en.wikipedia.org/wiki/Fast_multipole_method
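The composite-object idea above can be sketched in a few lines. This is a toy
1-D illustration, not a real FMM (there is no tree and no multipole expansion);
the flat grid of cells, the constants, and all names are invented for the
example:

```python
# Toy 1-D gravity: bin particles into cells, then compute the force on a
# particle by summing nearby cells particle-by-particle and treating each
# far cell as a single composite point mass at its centre of mass.
import random

random.seed(1)
N, NCELLS, G = 200, 10, 1.0
pos = [random.random() for _ in range(N)]   # positions in [0, 1)
mass = [1.0] * N

# Precompute each cell's member list, total mass, and centre of mass.
cells = [[] for _ in range(NCELLS)]
for i, x in enumerate(pos):
    cells[min(int(x * NCELLS), NCELLS - 1)].append(i)
cell_mass = [sum(mass[i] for i in c) for c in cells]
cell_com = [sum(pos[i] * mass[i] for i in c) / m if m else 0.0
            for c, m in zip(cells, cell_mass)]

def pair_force(m1, x1, m2, x2):
    """Signed 1-D inverse-square attraction of (m2, x2) on (m1, x1)."""
    dx = x2 - x1
    return G * m1 * m2 * (1.0 if dx > 0 else -1.0) / dx ** 2

def force_approx(i):
    """Exact near-field sum; composite-mass approximation in the far field."""
    my_cell = min(int(pos[i] * NCELLS), NCELLS - 1)
    f = 0.0
    for c in range(NCELLS):
        if abs(c - my_cell) <= 1:       # near: sum over individual particles
            f += sum(pair_force(mass[i], pos[i], mass[j], pos[j])
                     for j in cells[c] if j != i)
        elif cell_mass[c] > 0:          # far: one composite point mass
            f += pair_force(mass[i], pos[i], cell_mass[c], cell_com[c])
    return f

def force_exact(i):
    """Brute-force O(N) sum over every other particle, for comparison."""
    return sum(pair_force(mass[i], pos[i], mass[j], pos[j])
               for j in range(N) if j != i)

f_fast = force_approx(0)
f_true = force_exact(0)
```

A real Barnes-Hut code replaces the flat grid with an adaptive tree and an
opening-angle criterion for deciding when a cell is "far enough"; FMM then adds
the inward pass of local expansions described above to reach O(N).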



On Sun, Jan 23, 2022 at 3:06 PM Forest Simmons <forest.simmons21 at gmail.com>
wrote:

> This reminds me of the n-body problem. For n not too large, the
> Parker-Sochacki method is the best numerical solution technique that I know
> of so far.
>
> But for many years they have been using statistical mechanics in the
> many-body problem ... from n = 3 on up to n = infinity ... thermodynamics,
> magneto-fluid dynamics, etc.
>
> Those guys know how to do it.
>
> When I did Minuteman missile simulations back in the early 1970s, we had a
> spherical harmonics gravitational model with hundreds of terms, which was
> necessary to get a ballistic missile to within a few feet of an enemy
> missile silo halfway around the world.
>
> The guys that came up with it used data from satellite tracking
> observations to get a spherical harmonics mass density model of the earth.
> Once they had that, it was a simple matter to express the gravitational
> potential in spherical coordinates, and from that, gravitational force via
> the gradient of the potential.
>
> We need spherical harmonics expansions of the voter density function on
> the surface of a sphere of the right dimension.
>
> Now I know why the suggestion of a spherical distribution ... you can
> have a uniform probability density on a compact manifold like a sphere or a
> Cartesian product of circles, but not on an open manifold like an entire
> plane ... for that, the best (closest to smooth & uniform) you can do is a
> Gaussian.
>
> [Somebody should start doing spherical Yee diagrams.]
>
> We need to get one of those geodetic scientists from the 1970s who
> solved this problem without supercomputers ... they could tell us how to
> do it ... just like the astronomers of the nineteenth century could use the
> "method of orbital elements", a clever perturbation technique, to do all of
> the numerical integration of all the high-precision n-body celestial
> mechanics problems of the solar system (planets, moons, comets, asteroids,
> etc.) ... with nothing beyond mechanical adding machines and tables of trig
> and log functions.
>
> On Sun, Jan 23, 2022 at 12:26 PM Daniel Carrera <
> dcarrera at gmail.com> wrote:
>
>>
>> On Fri, Jan 21, 2022 at 9:02 AM Kristofer Munsterhjelm <
>> km_elmet at t-online.de> wrote:
>>
>>> It's easy to approximately find such probabilities for spatial models as
>>> well by just creating an instance and randomly placing voters on it to
>>> determine what they vote, and hence in the limit of number of trials
>>> going to infinity, get the probability.
>>>
>>> But I was wondering, is it possible to do so exactly? Let's say that in
>>> a spatial model, first c candidates are placed uniformly at random in an
>>> unit d-cube. Then voters are also placed uniformly at random in this
>>> cube and they prefer candidates closer to them to candidates further
>>> away.
>>>
>>> Now I would suppose that the probability that a voter votes A first is
>>> equal to the volume of the Voronoi cell that belongs to A. (And the
>>> probability that a voter will vote A>B>C is the volume of the
>>> intersection of the closest-neighbor Voronoi cell for A with the
>>> second-closest-neighbor for B and the third-closest-neighbor for C.)
>>>
>>> Well, that eliminates one source of randomness -- assuming I can exactly
>>> calculate the vertices of these regions. But there's still the second
>>> source in the randomness of the candidates. Do you know of any calculus
>>> tricks to get the probabilities over every possible candidate position,
>>> or is this simply too hairy?
>>>
>>
>>
>> I have been thinking about this. I don't have a clever solution for you,
>> but I have some thoughts that could be a stepping stone toward a solution.
>> What you are trying to do is compute an integral in a high-dimensional
>> space. Let's say that for a given set of candidates you have a function f()
>> that outputs a real value. That might be a utility, or a probability, or
>> whatever. I just want it to be a real number and not binary so that we can
>> hope that f() is smooth in the space spanned by the candidates.
>>
>> The set of candidates can be thought of as a point in a C*D dimensional
>> space where C is the number of candidates and D is the dimension of your
>> D-cube. So you have a vector:
>>
>> x_jk = position of candidate j on issue k
>>
>> x = [ x11, x12, ... , x1D, x21, ... , xCD ]
>>
>> When you ask:
>>
>> "Do you know of any calculus tricks to get the probabilities over every
>> possible candidate position, or is this simply too hairy?"
>>
>> You are asking, "How do I integrate f()?" Your default plan of drawing
>> random candidates and computing f() for each is integration by Monte Carlo
>> sampling. Now, I don't know how to integrate f() analytically, but
>> realizing that it is an integral allows us to consider other forms of
>> numerical integration (hence my request that f() be a smooth function). One
>> such example is Laplace's method:
>>
>> https://en.wikipedia.org/wiki/Laplace%27s_method
>>
>> Let us imagine that f() looks like a hill. If you can compute the
>> gradient of f(), you can climb the hill and get to the peak. Up there, if
>> you can compute the Hessian of f(), you can approximate f() with the
>> Gaussian function with the same height and Hessian. This method is
>> approximate, but it is very fast because you do not have to sample across
>> the entire space. Laplace's method is often used in high-dimensional
>> problems, where the cost of random sampling is prohibitive.
>>
>> How do you compute the derivative and Hessian? Most often this is used
>> with an autodiff package. Alternatively, you can use finite differences, but
>> at the cost of reduced accuracy. What if f() isn't one hill but many? You
>> can integrate each hill separately and add them up.
>>
>> Anyway, I don't know whether Laplace's method is a realistic solution,
>> but if it isn't, perhaps there is a different form of numerical integration
>> tool that is available to you that is more efficient than Monte Carlo
>> integration. For example, importance sampling doesn't require derivatives.
>> There is a long list of numerical integrators that have been devised over
>> the ages.
>>
>> Cheers,
>> --
>> Dr. Daniel Carrera
>> Postdoctoral Research Associate
>> Iowa State University
>> ----
>> Election-Methods mailing list - see https://electorama.com/em for list
>> info
>>
> ----
> Election-Methods mailing list - see https://electorama.com/em for list
> info
>
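The Voronoi observation quoted above (a voter's first preference is determined
by which candidate's Voronoi cell the voter lands in, and a full ranking by the
intersection of k-th-nearest regions) is easy to sanity-check numerically. A
minimal Monte Carlo sketch, with candidate count, dimension, and sample size
all chosen arbitrarily for the example:

```python
# Estimate, by sampling voters, the probability of each strict ranking of
# C candidates placed uniformly at random in the unit d-cube. Each ranking's
# frequency approximates the volume of the corresponding intersection of
# k-th-nearest Voronoi regions.
import random
from collections import Counter

random.seed(42)
C, D, TRIALS = 3, 2, 100_000
cands = [tuple(random.random() for _ in range(D)) for _ in range(C)]

def ranking(voter):
    """Candidate indices sorted from nearest to farthest from the voter."""
    return tuple(sorted(range(C),
                        key=lambda j: sum((v - c) ** 2
                                          for v, c in zip(voter, cands[j]))))

counts = Counter(ranking(tuple(random.random() for _ in range(D)))
                 for _ in range(TRIALS))
probs = {r: n / TRIALS for r, n in counts.items()}

# First-preference shares: the estimated Voronoi cell volumes.
first = Counter()
for r, p in probs.items():
    first[r[0]] += p
```

The remaining difficulty discussed in the thread is the outer integral over
candidate positions; this sketch only handles the inner (voter) randomness for
one fixed candidate configuration.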