On Wed, Jan 26, 2022, 3:12 AM Kristofer Munsterhjelm <km_elmet@t-online.de> wrote:
> On 26.01.2022 04:23, Forest Simmons wrote:
>
> > The Dirac delta is the convolution identity distribution ... convolving
> > it with another distribution leaves it unchanged with cliffs and sharp
> > corners intact.
> >
> > But if you convolve with a smooth approximation of Dirac, like a
> > Gaussian with tiny variance, you get an infinitely differentiable
> > approximation of the "horrible" function.
>
> Right. I once wrote a fully differentiable genetic algorithm (I was
> intending to use it for hyperparameter tuning). There were two problems.
> First, local optima. Second, suppose that you have a plain statement like:
>
>     if (x>y) {
>         return z;
>     } else {
>         return f(z);
>     }
>
> When smoothing, this turns into something like
>
>     return z * sig(x-y, k) + f(z) * sig(y-x, k)
>
> where sig(x, k) is a sigmoid function that returns 1 if x>>0, 0 if x<<0,
> and k is a steepness parameter controlling sig'(x) around 0.
>
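For concreteness, a minimal Python sketch of that smoothed branch, assuming a
logistic sigmoid (the function names and the default k below are only
illustrative):

import math

def sig(x, k):
    # Logistic sigmoid written via tanh: ~1 for x >> 0, ~0 for x << 0;
    # k sets the steepness of the transition at 0.
    return 0.5 * (1.0 + math.tanh(0.5 * k * x))

def hard_branch(x, y, z, f):
    # The original, non-differentiable conditional.
    return z if x > y else f(z)

def smooth_branch(x, y, z, f, k=10.0):
    # Both branches are evaluated and blended, so the result is
    # differentiable in x and y.
    return z * sig(x - y, k) + f(z) * sig(y - x, k)

Since sig(x-y, k) + sig(y-x, k) = 1, this just interpolates between the two
branch values; as k grows it approaches the hard branch, but the slope away
from x = y flattens out, which is the vanishing-gradient problem described
next.
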
> If k is too high, then gradient descent fails because there's no
> noticeable slope down to the optimum; it's flat almost everywhere and
> then goes to zero exactly at the optimum (vanishing gradient problem).
>
> So we need some smoothness, which means that both branch values come
> into play. But this makes the function slow to evaluate as the program
must "go both ways" on every conditional.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"></blockquote></div></div><div dir="auto">Here's a better way to get a smooth potential function V(x,y,z,) in the present context:</div><div dir="auto">First the potential V(x,y,z)for all voters concentrated at one point (X,Y,Z):</div><div dir="auto"><br></div><div dir="auto">V(r)=||r-R||, where r=(x,y,z) and R=(X,Y,Z).</div><div dir="auto"><br></div><div dir="auto">To smooth, replace this with</div><div dir="auto"><br></div><div dir="auto">V(r)=(epsilon+||r-R||^2)^(1/2)</div><div dir="auto"><br></div><div dir="auto">This is more realistic anyway because you cannot truly have all voters concentrated at one point. The epsilon smear factor would be a function of the variance of the voters clustered around the point R.</div><div dir="auto"><br></div><div dir="auto">That variance could be given exactly for a uniform distribution on a ball of radius delta centered at R, for example.</div><div dir="auto"><br></div><div dir="auto">Then for multiple factions...</div><div dir="auto"><br></div><div dir="auto">V(r)=Sum(over R in Omega) of</div><div dir="auto">f(R)*sqrt(epsilon(R)+||R-r||^.5),</div><div dir="auto"><br></div><div dir="auto">where Omega is the set of faction centers, and f(R) is the fraction of voters clustered near R, i.e. in a delta(R) [related to epsilon(R)] neighborhood of R.</div><div dir="auto"><br></div><div dir="auto">If we have a smooth (i.e. pre-smeared) pdf f(R), we can write the "electo-potential" V(r) as an integral</div><div dir="auto">V(r)=Integral(over R in Omega)of</div><div dir="auto"> ||r-R||f(r)dxdydz </div><div dir="auto"><br></div><div dir="auto">The hamiltonian for the wandering voter particle is the total energy, kinetic + potential, E=T+V, where T =.5 ||dR/dt||^2.</div><div dir="auto"><br></div><div dir="auto">The system (in vector form) of ODE's for the motion of the particle is</div><div dir="auto"><br></div><div dir="auto">(d/dr)^2=-grad V(r)</div><div dir="auto"><br></div><div dir="auto">I need to check the HMC link that Daniel gave me to see what notation they are using.</div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>
> > Electrical engineers have a vast library of standard test patterns to
> > use as input signals in designing and testing circuits.
> >
> > We need a similar library of test distributions for use in designing and
> > testing election methods.
> >
> > Election methods could even be profiled by their responses to these test
> > patterns.
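Purely as an illustration of what a couple of entries in such a library might
look like (the names, parameters, and the 3-D issue space are assumptions of
this sketch):

import numpy as np

rng = np.random.default_rng(0)

def single_cluster(n, center=(0.0, 0.0, 0.0), spread=1.0):
    # One Gaussian cluster of voters around a single point in issue space.
    return rng.normal(loc=center, scale=spread, size=(n, 3))

def two_factions(n, split=0.6, gap=4.0, spread=0.5):
    # A split electorate: two clusters separated by `gap` along the first axis.
    n1 = int(round(split * n))
    a = rng.normal(loc=(-gap / 2, 0.0, 0.0), scale=spread, size=(n1, 3))
    b = rng.normal(loc=(+gap / 2, 0.0, 0.0), scale=spread, size=(n - n1, 3))
    return np.vstack([a, b])

Profiling a method would then amount to recording its behavior on ballots
generated from samples of each such distribution.
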
>
> This, I do agree with. There have been a lot of voting method proposals
> lately, and we need some way to easily determine:
>
> - what is its VSE (under what models)
> - what is its voter-strategy susceptibility
> - what is its candidate-strategy susceptibility (cloning)
> - what criteria does it definitely fail
>
> and it would be nice to also know what criteria it definitely passes,
> though that requires formal verification, which is much harder than just
> testing a bunch of cases.
>
> (I remember using REDLOG to come up with BTV once, but I don't remember
> the details. Perhaps I should look into the current state of the art for
> provers, like Z3... so many things to do and so little time in which to
> do them!)
>
> -km