[EM] it's pleocracy, not democracy

Abd ul-Rahman Lomax abd at lomaxdesign.com
Fri Mar 2 21:53:00 PST 2007


At 05:40 AM 3/2/2007, Jobst Heitzig wrote:

>some clarification because in recent posts democracy and majority rule
>were confused quite often...

Well, I don't think I personally confuse them, but I might use 
language loosely sometimes.

>In a dictatorial system, almost all people have no power.

I talk about oligarchical systems, which include, as a limit, 
dictatorship. I don't extend "oligarchical" to rule by the majority, 
but I do generally assume majority rule within a context that does 
not fix membership in the majority. That is, the majority shifts, so 
any given person, on any given issue, may or may not be in the majority.

>In a majoritarian system, up to half of the people have no power.
>In a democratic system, ALL people HAVE some power, that is, "the 
>people rule".

However, something which Jobst seems to neglect is the process by 
which the people rule. If the system does not allow majority rule, my 
experience as well as theory indicate that the result is not 
democracy, but oligarchy, whenever the status quo favors a minority.

"Majority rule" does not refer to a specific group of people, the 
"majority" who rule over others who have no power. It simply refers 
to any given decision, that the majority have the *right* and *power* 
to make a decision, in spite of opposition by a minority. Systems may 
limit this, to protect minority rights, but no democratic system of 
which I am aware limits it absolutely, that is, a majority, and 
especially a distributed majority, can unconditionally make decisions.

That it can do this does not make it wise to do so.

>Hence, majoritarian systems in which a majority of 50% + 1 voter can
>make all decisions are NOT democratic. The greeks called them "pleocratic".

Without caring what the Greeks called them (I'm not Greek, I never 
granted the Greeks authority over the English language, and words 
mean what they mean in current usage, not what they meant, in some 
cognate form, to some people thousands of years ago), I would disagree.

With some cautions and provisions.

I know of no stable, long-established democratic system which does 
not allow some kind of routine majority-controlled decision-making 
power. There are attempts aplenty to institute consensus rule, or to 
require supermajority for routine decisions. I've had substantial 
experience with them. They both work and don't work.

Often those involved in them are quite enthusiastic about them, 
because the power and energy in discovering consensus can be 
invigorating. *However*, in the long run, the work involved can be 
debilitating; those who continue to support these systems don't seem 
to notice, or perhaps don't care, that the organization bleeds 
members who find that the intensive and often long meetings needed 
to work out consensus solutions demand more time than they can spare. And then, 
eventually, the organization runs into a problem, an entrenched 
minority which is favored by the status quo. They can, with the 
rules, block changes desired by the majority.

This is why I claim that supermajority requirements eventually lead 
to minority rule. Yet, in fact, obtaining supermajority consent, even 
universal consent (i.e., consensus), is highly desirable. It is 
making it into a rigid rule that is the problem. Thus, I've 
concluded, the majority has the *right* of decision, but it is quite 
proper that the rules force the majority to make decisions cautiously 
and deliberately, with full awareness of the possible damage. 
Whenever a majority runs roughshod over a minority, it weakens the 
organization. So, I'd suggest, there better be a good reason.

And thus my preference for election methods that look for more than 
majority consent to outcomes. Yet, at the same time, my consideration 
that it can be desirable to provide a means for the majority to 
withhold consent and to insist on its preference. But consciously, 
not as an outcome of a Majority-Criterion-satisfying election system, 
which goes ahead and implements the first preference of a majority 
without qualification.

Social utility analysis of the most basic kind shows that the 
Majority Criterion conflicts with maximizing social utility, and it 
does this in situations where the maximum utility is completely 
clear. I use the pizza example because it is so blatant, and I use 
the civil-war-trigger examples because they show that this is 
something that can be, under some circumstances, crucially important.
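
To make the pizza example concrete, here is a minimal sketch in 
Python. The utilities are invented purely for illustration (three 
friends mildly prefer pepperoni, one cannot eat pepperoni at all); 
the point is only that the Majority Criterion winner and the 
utility-maximizing winner can come apart even when the better choice 
is obvious.

    # Hypothetical utilities on a 0-10 scale; the numbers are invented
    # for illustration, not taken from any real ballots.
    utilities = {
        "Alice": {"pepperoni": 10, "mushroom": 9},
        "Bob":   {"pepperoni": 10, "mushroom": 9},
        "Carol": {"pepperoni": 10, "mushroom": 9},
        "Dave":  {"pepperoni": 0,  "mushroom": 9},  # Dave cannot eat pepperoni
    }
    options = ["pepperoni", "mushroom"]

    # Majority Criterion winner: the first preference of a majority.
    first_prefs = [max(options, key=lambda o: u[o]) for u in utilities.values()]
    majority_winner = max(options, key=first_prefs.count)

    # Range-style winner: maximize total utility across all voters.
    utility_winner = max(options, key=lambda o: sum(u[o] for u in utilities.values()))

    print(majority_winner)  # "pepperoni": first choice of 3 of 4 voters
    print(utility_winner)   # "mushroom": total utility 36 vs. 30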

I really would like to see Range systems that require majority 
consent to the outcome, and it is *impossible* to incorporate that in 
the first stage, though Approval and Approval-cutoff Range may 
attempt it. The problem is that what I will accept as a compromise 
depends upon information about what others prefer and their 
preference strength. If I don't realize how seriously some of my 
friends will suffer if the majority choice of pepperoni is 
implemented, I may insist upon it; after all, don't I have the same 
rights as they do?

But if I *do* realize this, I would certainly be churlish to insist 
upon pepperoni, if there is some other option that is at least 
reasonably satisfying.
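
To make the "majority consent to the outcome" idea above concrete, 
here is a minimal sketch, assuming Range ballots with a fixed 
approval cutoff; the cutoff value and the "refer the matter onward" 
step are my assumptions for illustration, not a method specified in 
this post.

    # Illustrative Approval-cutoff Range check.
    ballots = [
        {"A": 10, "B": 4},
        {"A": 10, "B": 4},
        {"A": 10, "B": 4},
        {"A": 0,  "B": 10},
        {"A": 0,  "B": 10},
    ]
    CUTOFF = 5  # a rating at or above this counts as approval

    range_winner = max(ballots[0], key=lambda o: sum(b[o] for b in ballots))
    approvals = sum(1 for b in ballots if b[range_winner] >= CUTOFF)

    if 2 * approvals > len(ballots):
        print(range_winner, "wins with majority consent")
    else:
        # here "B" leads on total score (32 vs. 30) but only 2 of 5 approve it
        print(range_winner, "leads on score but lacks majority consent")

With these invented ballots the score leader lacks majority consent, 
which is exactly the situation where, as argued above, the majority 
should be consciously asked rather than simply overridden or obeyed.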

When circumstances allow full individual choice without inflicting 
harm upon others, I certainly can and should have my first 
preference. But when a decision must be made on behalf of all, then 
this freedom can and must be limited. And it should be limited 
voluntarily, because attempting to limit it by regulation is quite 
dangerous: regulation cannot anticipate all the possible configurations 
and circumstances. Majority rule has the capacity to be intelligent, 
far more intelligent than any set of rules.

And, again, this is why I consider election methods necessary in some 
circumstances, but, if they are purely aggregative methods, 
dangerous. Deliberative decision-making methods are far safer; 
indeed, the power of deliberation is precisely the power of 
intelligence, and is the reason why humans dominate over animals 
which operate purely by instinct, i.e., by fixed rules. (And, hopefully, 
we will dominate with care and consideration, not blind immediate 
self-interest. Once again, we can see the dangers of implementing raw 
majority preference....)

>Can a system be democratic?
>Can it even be democratic without using significant randomization?

I certainly hope so, because decision-making with randomization, 
while it will give power to a minority, also gives power to chaos. 
Absolutely, if you and I are at odds, and we make a decision by 
tossing a coin, we each have power. But when people are at odds, it 
is generally because consensus has not been found, not that it is 
impossible to find it. Tossing a coin makes sense -- some sense -- 
only when two conditions obtain: first, there is insufficient time 
available to make a deeper and better decision, and, second, the 
situation is balanced, that is, the sides of the polarization are equal or 
close to equal. Otherwise we can assume that the decision favored by 
the majority is most *likely* to be the best decision. Again, this is 
based on an understanding of how decision-making works.
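
The claim that an informed majority is more likely than a coin toss, 
or than a single randomly drawn voter, to pick the better option is 
essentially the jury-theorem intuition. A rough Monte Carlo sketch, 
under assumptions that are mine and purely illustrative (25 
independent voters, each right with probability 0.6 on a two-option 
question):

    import random

    VOTERS, P_RIGHT, TRIALS = 25, 0.6, 20000
    random.seed(1)

    majority_right = random_voter_right = coin_right = 0
    for _ in range(TRIALS):
        votes = [random.random() < P_RIGHT for _ in range(VOTERS)]
        majority_right += 2 * sum(votes) > VOTERS   # majority picks the better option
        random_voter_right += random.choice(votes)  # a single randomly drawn voter
        coin_right += random.random() < 0.5         # a plain coin toss

    print("majority vote:", majority_right / TRIALS)      # roughly 0.85
    print("random voter: ", random_voter_right / TRIALS)  # roughly 0.6
    print("coin toss:    ", coin_right / TRIALS)          # roughly 0.5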

Social decision-making resembles individual decision-making, and, in 
fact, each of us is a society, a society of cells that learned, long 
ago, to coordinate and cooperate. How do *we* make decisions? We use 
different methods under different situations, but, generally, there 
is a kind of majority rule. When we are close to balance, when we 
recognize that there is no internal consensus, those of us who are 
sane will postpone making final decisions until the matter becomes 
clear, but when there is no time for this (the tiger is chasing us), 
we make decisions by effectively taking an immediate vote and going 
with the plurality victor. Again, those of us who are sane. We make 
our best shot at it. And when there is balance, sometimes, we will 
quite properly toss a coin.

If this process is thought of in terms of winners and losers, as if 
the neurons on one side lose if the neurons on the other side win, 
then we start to get a warped view. The decision-making theory is 
based on benefit to the whole organization or organism. It is that 
the decision (of an informed majority) is most likely to be of 
maximum benefit or minimum loss. That this is true, in general, is 
the reason why democracies are demonstrating that they can 
out-compete oligarchies, even though present democracies are quite 
defective in their democracy.

The trick, I'd suggest, if we want to move beyond these limitations, 
is to empower the people in a way that brings the *best* to the top, 
where the people's consent is maximized and they trust those who are 
specially empowered. Where the ability of the people to continually 
monitor and change the assignments of trust is clearly established, 
we should have this situation. Again, elections and offices with 
fixed terms clearly conflict with this. A parliamentary system where 
representation *continually* represents the people, and does so 
accurately, and where officers are servants hired by the assembly at 
will, does not conflict with it.

Hence, again, my interest in Asset Voting, which approaches delegable 
proxy in representative power, if the assembly is large enough, and, 
if we must have fixed terms, maximizes flexibility in choosing who 
serves as representative, and minimizes the implicit coercion when we 
are represented contrary to our choice.

>If we are faced with a whole sequence of decisions instead of only one,
>we could distribute the power over all decisions in the sequence:

What Jobst is doing is placing "pleocracy" above intelligent 
decision-making. If this were a computer system we were designing, we 
would immediately recognize the randomization introduced as noise, 
something generally to be minimized. There are certain conditions 
under which one would want some level of randomization, but they are 
not the general conditions. Introducing noise into general 
decision-making is guaranteed to reduce intelligence.

The flaw in the thinking is an assumption that the minority is some 
fixed and disempowered entity. It is not. While it can happen that an 
individual is always in the minority, this is statistically unlikely 
unless there is something defective in that person's mental process. 
(Jobst does realize this, but he then examines percentage of 
satisfaction, below.)

In fact, the *vast* majority of people agree on most issues; we don't 
even think of this agreement as such because we are so universal in 
our acceptance of the consensus. It is only where we disagree that we 
imagine there are winners and losers.

Again, where the society routinely imposes its decisions on 
minorities, it is weakened, but the solution is not to give 
decision-making power to minorities, for this would merely spread 
around the coercion and, indeed, it would increase it (in the sense 
that the number of those coerced would increase). The solution is to 
find, where possible, non-coercive solutions, solutions that allow 
the minority to retain maximum freedom.

Thus we have civil rights, and we set up safeguards against those 
rights being trampled upon by a rash decision of the majority. 
However, the majority, in democratic systems -- such as Robert's 
Rules -- retains the right to change the rules at any time. 
Supermajority requirements for bylaw changes under Robert's Rules 
only apply to majorities that are not absolute, and an absolute majority 
can essentially do what it pleases. But it will only do this, 
hopefully, with a consciousness of the risks.

The nuclear option in the Senate was called that because of an 
awareness of the seriousness of changing the rules to suit the whim 
of the majority. There better be a good reason, something essential, 
not merely desirable to the majority.

(If there is a reader who does not know what the "nuclear option" 
was, it was a solution proposed by some Republicans to the impasse 
caused by filibuster or threatened filibuster by Democrats to block 
approval of Supreme Court Justices. These partisans realized that 
they could essentially rule the filibuster out of order, even though 
by the rules it would not be. As I recall, the claim would have been 
that the Senate was constitutionally required to approve or not 
approve, therefore such blocking was causing the Senate to fail to 
perform its duty, and thus the special interpretation of the rules 
was justified. In any case, the presiding officer of the Senate is 
the Vice-President of the U.S. Presumably he would have ruled the 
filibuster out of order and would have ordered a vote to proceed. 
This would presumably have been appealed by a Democrat. Under Senate 
rules, if I'm correct, and this is also true of Robert's Rules, 
appeals are decided by majority vote of members present and voting. 
So, in practice, a simple majority actually can do what it pleases, 
but it does so in flagrant disregard of the rule of law and 
precedent. Yet it has the power to do this. That it does not exercise 
this power is a measure of voluntary restraint. The nuclear option in 
the Senate was never exercised because certain Republicans and 
Democrats formed a coalition that had swing power, and agreed on the 
conditions under which filibuster would be allowed to continue, and, 
with this coalition, the normal cloture rule (60%) would function to 
close debate. That cloture rule, by the way, was changed from an 
earlier two-thirds, which is more standard in deliberative bodies....)

>Naive solution: assign each decision to a (different) single voter so
>that each voter decides something in turn and hence all people have some
>power. Obviously, there are many deep problems with this.

Indeed.


>More sophisticated solution:
>
>Remember for each voter in what fraction of the decisions so far the
>voter's then-favourite option has been elected; call this that voter's
>"actual success rate".
>
>Also remember for each voter the average (over all decisions so far)
>fraction of voters that had the same then-favourite as the voter at
>hand; call this that voter's "to-be-expected success rate".
>
>Now, in each decision, elect that option which minimizes the sum of
>squared errors between the voters' current to-be-expected success rate
>(including the current decision) and the voters' resulting actual
>success rate if that option were elected. In the long run, this sum of
>squared errors should converge to zero (remains to be proven), so this
>method can be called "asymptotically" democratic.
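
For what it is worth, here is a rough sketch of that rule as I read 
the description; the data representation (each past decision recorded 
as a favourites map plus the elected winner), the restriction to 
options that are someone's current favourite, and the tie-breaking 
are my simplifications, not part of the proposal.

    def decide(history, favourites):
        # history: list of (favourites, winner) pairs from past decisions
        # favourites: dict mapping each voter to their current favourite
        voters = list(favourites)
        n, t = len(voters), len(history) + 1   # t includes the current decision

        def share(favs, option):
            # fraction of voters whose then-favourite was `option`
            return sum(1 for v in voters if favs[v] == option) / n

        def error_if(winner):
            total = 0.0
            for v in voters:
                actual = (sum(favs[v] == w for favs, w in history)
                          + (favourites[v] == winner)) / t
                expected = (sum(share(favs, favs[v]) for favs, _ in history)
                            + share(favourites, favourites[v])) / t
                total += (actual - expected) ** 2
            return total

        return min(set(favourites.values()), key=error_if)

    # With no history this simply elects the majority favourite:
    print(decide([], {"a": "X", "b": "X", "c": "Y"}))   # "X"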

It's still the introduction of noise, simply not quite as much noise. 
Noise can have a zero sum (indeed, true noise always does; we call it 
'bias' when the sum is nonzero).

The cost would be an enormously cumbersome system that reduces the 
intelligence of outcomes. To what end? To satisfy a particular 
definition of "democracy," one which is clearly not the manner in 
which the word is currently used? What is the argument that this is desirable?

*If I get my way because I won the lottery, I'm probably wrong,* that 
is, it is more likely that I'm wrong than that I'm correct, on 
average. I'd prefer to get my way when I'm right, which is more 
likely to be the case when I'm with the majority.

And what I *really* want is communication. When society decides 
differently than I wish about a matter of sufficient concern to me, I 
want to know *why*. If I'm wrong, I want to learn how, so I don't 
repeat the error and remain dissatisfied, as would be the case if 
society continues to make correct decisions! And if I'm right, I want 
to be heard. I want my ideas and understandings to be available to 
those making decisions. And, under some conditions, I might be one of 
those people.

Thus delegable proxy. All roads lead to this, for me. I'm utterly 
unsurprised that it's been invented multiple times around the world 
over the last decade or so. Where I'm relatively unique (and no 
longer so) is in realizing that this is far more than an election 
method, it is an organizational technique, a communications system 
that connects all members of a society, and potentially a very large 
society, allowing direct democracy to function while freeing the 
deliberation that is crucial from the noise that otherwise overwhelms it.

Town Meeting governments do not devolve into mayor-council government 
because people are incapable of making their own decisions; they 
devolve when the size becomes sufficient that meetings as normally 
constructed break down, becoming impossibly tedious and eventually 
impractical. Delegable proxy is a solution to this problem that does 
not involve majoritarian representation. Proportional representation 
systems, in theory and probably in practice, approach DP in 
representational efficiency but not in flexibility and not in 
communication power.
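
For readers who want the mechanics rather than the philosophy, one 
common formalization of delegable proxy can be sketched very simply; 
the representation below (a single named proxy per member, with 
weight flowing up chains of absent members) is my illustration, not a 
design taken from this post.

    def proxy_weights(proxy_of, participants):
        # proxy_of: dict member -> chosen proxy (or None)
        # participants: set of members taking part in this decision
        def resolve(member, seen=()):
            if member in participants:
                return member
            nxt = proxy_of.get(member)
            if nxt is None or nxt in seen:   # no proxy, or a cycle: vote not cast
                return None
            return resolve(nxt, seen + (member,))

        weights = {p: 0 for p in participants}
        for member in proxy_of:
            rep = resolve(member)
            if rep is not None:
                weights[rep] += 1
        return weights

    proxy_of = {"ann": None, "bob": "ann", "cat": "bob", "dan": "cat", "eve": None}
    print(proxy_weights(proxy_of, {"ann", "eve"}))   # {'ann': 4, 'eve': 1}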

Among other things, DP would make campaigning for office about as 
useful as a bicycle is to a fish. That is, one who campaigns, more than indicating 
availability and offering information about qualifications, would 
come to be seen as somewhat ludicrous and not to be trusted. 
Political campaigns would still exist, but over issues, and even that 
would likely be taken out of the realm where significant money is spent.

Jobst, again, is aware of the limitation of the randomization he suggests:

>However, both methods have another problem: They do not easily support
>cooperation between voters since it is either optimal to vote for the
>favourite or for the strongest competitor, while there is no incentive
>to vote for compromise options. Therefore, the results are "just" but
>not particularly "efficient" with respect to utility.

What kind of justice is it that results in average loss? Is there 
injustice in society not following the preference of an individual? 
My claim is that there is no injustice in this, *if* the individual 
has been heard. What is unjust is that the best ideas may be buried 
in the noise.

With DP, I expect, the best ideas will percolate to the top, because 
*all* ideas will be heard by someone with the power to take them up. 
Defective ideas will be stopped at a certain level (or at the top, if 
the defect is subtle -- or if the idea's time simply has not come), 
but they will be stopped in the context of full deliberation, where 
the reasons for rejection are not only explainable but *will* be 
explained, back down the hierarchy, by a proxy who was chosen for 
trustworthiness by the client.

Let me put it this way: if we had DP, we would get DP in short order.... :-)

It's the bootstrap problem. If DP is a bad idea, by what method am I 
to discover and understand this?

I came to DP through a realization that we needed methods for vetting 
ideas. There is a reason for being conservative with ideas, but that 
reason only applies to implementation, not to discussion and 
understanding. Where a new idea seems dangerous, even if only 
intuitively and not with clear understanding of why, an intelligent 
organization will attempt to *test* the idea instead of diving 
headfirst into it. That is, change, quite properly, involves caution, 
and some of the great tragedies of the last century came when 
"brilliant" ideas were implemented without having been tested and 
shown to function well. Ah, the hubris of intellectuals!

But this doesn't mean that intellectuals are not intelligent! And we 
lose a great deal when we don't listen.

>The method D2MAC aims to improve upon this. It is: Draw two ballots at
>random; the winner is the most approved option of those approved on
>both ballots, if such an option exists, or else the top option on the
>first ballot.
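
A minimal sketch of D2MAC as quoted. A ballot here is a favourite 
plus a set of approved options, and the two ballots are drawn with 
replacement; both of those details are my assumptions about the 
description, not part of the quote.

    import random

    def d2mac(ballots):
        # each ballot is a (favourite, approved_set) pair
        b1, b2 = random.choice(ballots), random.choice(ballots)
        common = b1[1] & b2[1]          # options approved on both drawn ballots
        if common:
            # most approved (over all ballots) among the commonly approved
            return max(common, key=lambda o: sum(o in b[1] for b in ballots))
        return b1[0]                    # otherwise the first ballot's favourite

    ballots = [("A", {"A", "C"}), ("A", {"A", "C"}), ("B", {"B", "C"})]
    print(d2mac(ballots))   # always "C" here: every pair of drawn ballots approves C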

If you must make a decision, extrapolate this method to the maximum 
condition. Hold an Approval election, and the most approved option on 
all ballots is implemented. Unless no option has majority approval. 
Why would we expect the outcome to be better if we limit it to two ballots?

Only if we buy this argument that distributing decision-making in 
this random fashion is more just. And I don't buy it at all. It's *unjust*.

Not to mention dangerous. "Random choices" aren't always random. 
(Though there are solutions to this problem.) 



