
Ever since, Congress has continually debated and tinkered with the rules of apportionment. Jefferson’s rule was eventually dropped in 1841 in favour of one proposed by Senator Daniel Webster, which does use reallocation. It also violates quota, but very rarely; and it was, like Hamilton’s rule, deemed to be impartial between states.

A decade later, Webster’s rule was in turn dropped in favour of Hamilton’s. The latter’s supporters now believed that the principle of representative government was fully implemented, and perhaps hoped that this would be the end of the apportionment problem. But they were to be disappointed. It was soon causing more controversy than ever, because Hamilton’s rule, despite its impartiality and proportionality, began to make allocations that seemed outrageously perverse. For instance, it was susceptible to what came to be called the population paradox: a state whose population has increased since the last census can lose a seat to one whose population has decreased.
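To see the mechanics, here is a minimal sketch of Hamilton’s rule (the ‘largest remainder’ method): each state receives the whole-number part of its exact proportional quota, and the seats left over go to the states with the largest fractional remainders. The census figures below are invented purely to exhibit the paradox; run with them, a state that grew loses a seat to one that shrank.

```python
from math import floor

def hamilton(populations, house_size):
    """Hamilton's rule (largest remainders): floor each state's exact
    quota, then hand the leftover seats to the largest remainders."""
    total = sum(populations)
    quotas = [house_size * p / total for p in populations]
    seats = [floor(q) for q in quotas]
    leftovers = house_size - sum(seats)
    by_remainder = sorted(range(len(populations)),
                          key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in by_remainder[:leftovers]:
        seats[i] += 1
    return seats

# Invented censuses for four states A, B, C, D, with 100 seats to fill:
before = [1026, 2025, 3525, 3424]
after  = [1030, 2020, 3390, 3380]   # A grew; B, C and D shrank
print(hamilton(before, 100))        # [11, 20, 35, 34]
print(hamilton(after, 100))         # [10, 21, 35, 34] - A loses a seat to B
```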

So, ‘why didn’t they just’ create new seats and assign them to states that lose out under a population paradox? They did so. But unfortunately that can bring the allocation outside quota. It can also introduce another historically important apportionment paradox: the Alabama paradox. That happens when increasing the total number of seats in the House results in some state losing a seat.
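The same hamilton function sketched above exhibits the Alabama paradox with suitably chosen (again invented) populations: enlarging the House from ten seats to eleven costs the smallest state a seat.

```python
# Reusing hamilton() from the previous sketch, with invented populations
# for three states A, B, C:
states = [4280, 4270, 1450]
print(hamilton(states, 10))   # [4, 4, 2]
print(hamilton(states, 11))   # [5, 5, 1] - C loses a seat as the House grows
```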

And there were other paradoxes. These were not necessarily unfair in the sense of being biased or disproportionate. They are called ‘paradoxes’ because an apparently reasonable rule makes apparently unreasonable changes between one apportionment and the next. Such changes are effectively random, being due to the vagaries of rounding errors, not to any bias, and in the long run they cancel out. But impartiality in the long run does not achieve the intended purpose of representative government. Perfect ‘fairness in the long run’ could be achieved even without elections, by selecting the legislature randomly from the electorate as a whole. But, just as a coin tossed randomly one hundred times is unlikely to produce exactly fifty heads and fifty tails, so a randomly chosen legislature of 435 would in practice never be representative on any one occasion: statistically, the typical deviation from representativeness would be about eight seats. There would also be large fluctuations in how those seats were distributed among states. The apportionment paradoxes that I have described have similar effects.

The number of seats involved is usually small, but that does not make it unimportant. Politicians worry about this because votes in the House of Representatives are often very close. Bills quite often pass or fail by one vote, and political deals often depend on whether individual representatives join one faction or another. So, whenever apportionment paradoxes have caused political discord, people have tried to invent an apportionment rule that is mathematically incapable of causing that particular paradox. Particular paradoxes always make it look as though everything would be fine if only ‘they’ made some simple change or other. Yet the paradoxes as a whole have the infuriating property that, no matter how firmly they are kicked out of the front door, they instantly come in again at the back.

After Hamilton’s rule was adopted, in 1851, Webster’s still enjoyed substantial support. So Congress tried, on at least two occasions, a trick that seemed to provide a judicious compromise: adjust the number of seats in the House until the two rules agree. Surely that would please everyone! Yet the upshot was that in 1871 some states considered the result to be so unfair, and the ensuing compromise legislation was so chaotic, that it was unclear what allocation rule, if any, had been decided upon. The apportionment that was implemented – which included the last-minute creation of several additional seats for no apparent reason – satisfied neither Hamilton’s rule nor Webster’s. Many considered it unconstitutional.

For the next few decades after 1871, every census saw either the adoption of a new apportionment rule or a change in the number of seats, designed to compromise between different rules. In 1921 no apportionment was made at all: they kept the old one (a course of action that may well have been unconstitutional again), because Congress could not agree on a rule.

The apportionment issue has been referred several times to eminent mathematicians, including twice to the National Academy of Sciences, and on each occasion these authorities have made different recommendations. Yet none of them ever accused their predecessors of making errors in mathematics. This ought to have warned everyone that this problem is not really about mathematics. And on each occasion, when the experts’ recommendations were implemented, paradoxes and disputes kept on happening.

In 1901 the Census Bureau published a table showing what the apportionments would be for every number of seats between 350 and 400 using Hamilton’s rule. By a quirk of arithmetic of a kind that is common in apportionment, Colorado would get three seats for each of these numbers except 357, when it would get only two seats. The chairman of the House Committee on Apportionment (who was from Illinois: I do not know whether he had anything against Colorado) proposed that the number of seats be changed to 357 and that Hamilton’s rule be used. This proposal was regarded with suspicion, and Congress eventually rejected it, adopting a 386-member apportionment and Webster’s rule, which also gave Colorado its ‘rightful’ three seats. But was that apportionment really any more rightful than Hamilton’s rule with 357 seats? By what criterion? Majority voting among apportionment rules?
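Reproducing such a table is straightforward in principle: run the rule at every house size and look for dips in a state’s allocation. The sketch below reuses the hamilton function from earlier; the actual 1900 census figures would be needed to reproduce the Colorado anomaly itself.

```python
def seat_dips(populations, state, low, high):
    """Return the house sizes at which `state` gets fewer seats than it
    does at both neighbouring sizes - the quirk the 1901 table exposed."""
    alloc = {h: hamilton(populations, h)[state] for h in range(low, high + 1)}
    return [h for h in range(low + 1, high)
            if alloc[h] < alloc[h - 1] and alloc[h] < alloc[h + 1]]
```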

What exactly would be wrong with working out what a large number of rival apportionment rules would do, and then allocating to each state the number of representatives that the majority of the schemes would allocate? The main thing is that that is itself an apportionment rule. Similarly, combining Hamilton’s and Webster’s schemes as they tried to do in 1871 just constituted adopting a third scheme. And what does such a scheme have going for it? Each of its constituent schemes was presumably designed to have some desirable properties. A combined scheme that was not designed to have those properties will not have them, except by coincidence. So it will not necessarily inherit the good features of its constituents. It will inherit some good ones and some bad ones, and have additional good and bad features of its own – but if it was not designed to be good, why should it be?

A devil’s advocate might now ask: if majority voting among apportionment rules is such a bad idea, why is majority voting among voters a good idea? It would be disastrous to use it in, say, science. There are more astrologers than astronomers, and believers in ‘paranormal’ phenomena often point out that purported witnesses of such phenomena outnumber the witnesses of most scientific experiments by a large factor. So they demand proportionate credence. Yet science refuses to judge evidence in that way: it sticks with the criterion of good explanation. So if it would be wrong for science to adopt that ‘democratic’ principle, why is it right for politics? Is it just because, as Churchill put it, ‘Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.’ That would indeed be a sufficient reason. But there are cogent positive reasons as well, and they too are about explanation, as I shall explain.

Sometimes politicians have been so perplexed by the sheer perverseness of apportionment paradoxes that they have been reduced to denouncing mathematics itself. Representative Roger Q. Mills of Texas complained in 1882, ‘I thought . . . that mathematics was a divine science. I thought that mathematics was the only science that spoke to inspiration and was infallible in its utterances [but] here is a new system of mathematics that demonstrates the truth to be false.’ In 1901 Representative John E. Littlefield, whose own seat in Maine was under threat from the Alabama paradox, said, ‘God help the State of Maine when mathematics reach for her and undertake to strike her down.’

As a matter of fact, there is no such thing as mathematical ‘inspiration’ (mathematical knowledge coming from an infallible source, traditionally God): as I explained in Chapter 8, our knowledge of mathematics is not infallible. But if Representative Mills meant that mathematicians are, or somehow ought to be, society’s best judges of fairness, then he was simply mistaken.* Indeed, if Representative Mills intended his complaint ironically – if he really meant that mathematics alone could not possibly be causing injustice and that mathematics alone could not cure it – then he was right.

* The National Academy of Sciences panel that reported to Congress in 1948 included the mathematician and physicist John von Neumann. It decided that a rule invented by the statistician Joseph Adna Hill (which is the one in use today) is the most impartial between states. But the mathematicians Michel Balinski and Peyton Young have since concluded that it favours smaller states. This illustrates again that different criteria of ‘impartiality’ favour different apportionment rules, and which of them is the right criterion cannot be determined by mathematics.
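For reference, Hill’s rule, known as the ‘method of equal proportions’ (Huntington–Hill) and used for House apportionment since 1941, can be sketched as follows: every state starts with one seat (the constitutional minimum), and each remaining seat goes to whichever state has the highest priority value p/√(n(n+1)), where p is its population and n its current number of seats.

```python
from math import sqrt

def huntington_hill(populations, house_size):
    """Method of equal proportions: give each state one seat, then award
    each remaining seat by the priority value p / sqrt(n * (n + 1))."""
    seats = [1] * len(populations)
    for _ in range(house_size - len(populations)):
        priorities = [p / sqrt(n * (n + 1)) for p, n in zip(populations, seats)]
        seats[priorities.index(max(priorities))] += 1
    return seats
```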

However, there is a mathematical discovery that has changed for ever the nature of the apportionment debate: we now know that the quest for an apportionment rule that is both proportional and free from paradoxes can never succeed. Balinski and Young proved this in 1975.

Balinski and Young’s Theorem
Every apportionment rule that stays within the quota suffers from the population paradox.

This powerful ‘no-go’ theorem explains the long string of historical failures to solve the apportionment problem. Never mind the various other conditions that may seem essential for an apportionment to be fair: no apportionment rule can meet even the bare-bones requirements of proportionality and the avoidance of the population paradox. Balinski and Young also proved no-go theorems involving other classic paradoxes.
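In symbols (a formalization sketch; Balinski and Young’s own statement is more general): with populations p_i summing to P and house size h, state i’s exact quota is q_i = hp_i/P. The theorem says that, once there are enough states and seats for the question to be non-trivial, no rule can satisfy both of the following conditions:

```latex
% Staying within quota: every state's seat count a_i is within one
% of its exact proportional share.
\lfloor q_i \rfloor \le a_i \le \lceil q_i \rceil,
\qquad q_i = \frac{h\,p_i}{\sum_j p_j}.

% Freedom from the population paradox: if state i's population grows
% at least as fast as state j's between censuses (primes denote the
% new census), then i must not lose a seat while j gains one.
\frac{p_i'}{p_i} \ge \frac{p_j'}{p_j}
\;\Longrightarrow\;
\neg\bigl(a_i' < a_i \,\wedge\, a_j' > a_j\bigr).
```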

This work had a much broader context than the apportionment problem. During the twentieth century, and especially following the Second World War, a consensus had emerged among most major political movements that the future welfare of humankind would depend on an increase in society-wide (preferably worldwide) planning and decision-making. The Western consensus differed from its totalitarian counterparts in that it expected the object of the exercise to be the satisfaction of individual citizens’ preferences. So Western advocates of society-wide planning were forced to address a fundamental question that totalitarians do not encounter: when society as a whole faces a choice, and citizens differ in their preferences among the options, which option is it best for society to choose? If people are unanimous, there is no problem – but no need for a planner either. If they are not, which option can be rationally defended as being ‘the will of the people’ – the option that society ‘wants’? And that raises a second question: how should society organize its decision-making so that it does indeed choose the options that it ‘wants’? These two questions had been present, at least implicitly, from the beginning of modern democracy. For instance, the US Declaration of Independence and the US Constitution both speak of the right of ‘the people’ to do certain things such as remove governments. Now they became the central questions of a branch of mathematical game theory known as social-choice theory.

Thus game theory – formerly an obscure and somewhat whimsical branch of mathematics – was suddenly thrust to the centre of human affairs, just as rocketry and nuclear physics had been. Many of the world’s finest mathematical minds, including von Neumann, rose to the challenge of developing the theory to support the needs of the countless institutions of collective decision-making that were being set up. They would create new mathematical tools which, given what all the individuals in a society want or need, or prefer, would distil what that society ‘wants’ to do, thus implementing the aspiration of ‘the will of the people’. They would also determine what systems of voting and legislating would give society what it wants.

Some interesting mathematics was discovered. But little, if any, of it ever met those aspirations. On the contrary, time and again the assumptions behind social-choice theory were proved to be incoherent or inconsistent by ‘no-go’ theorems like that of Balinski and Young.

Thus it turned out that the apportionment problem, which had absorbed so much legislative time, effort and passion, was the tip of an iceberg. The problem is much less parochial than it looks. For instance, rounding errors are proportionately smaller with a larger legislature. So why don’t they just make the legislature very big – say, ten thousand members – so that all the rounding errors would be trivial? One reason is that such a legislature would have to organize itself internally to make any decisions. The factions within the legislature would themselves have to choose leaders, policies, strategies, and so on. Consequently, all the problems of social choice would arise within the little ‘society’ of a party’s contingent in the legislature. So it is not really about rounding errors. Also, it is not only about people’s top preferences: once we are considering the details of decision-making in large groups – how legislatures and parties and factions within parties organize themselves to contribute their wishes to ‘society’s wishes’ – we have to take into account their second and third choices, because people still have the right to contribute to decision-making if they cannot persuade a majority to agree to their first choice. Yet electoral systems designed to take such factors into account invariably introduce more paradoxes and no-go theorems.

One of the first of the no-go theorems was proved in 1951 by the economist Kenneth Arrow, and it contributed to his winning the Nobel prize for economics in 1972. Arrow’s theorem appears to deny the very existence of social choice – and to strike at the principle of representative government, and apportionment, and democracy itself, and a lot more besides.
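A flavour of what Arrow’s theorem formalizes can be seen in Condorcet’s much older voting paradox: with three voters and three options, majority preference can run in a circle, so that ‘what society wants’ is simply undefined. A minimal sketch, with hypothetical voters and options:

```python
# Each voter ranks the options best-to-worst (Condorcet's classic cycle).
ballots = [('A', 'B', 'C'), ('B', 'C', 'A'), ('C', 'A', 'B')]

def majority_prefers(x, y):
    """True if a majority of voters rank option x above option y."""
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

for x, y in [('A', 'B'), ('B', 'C'), ('C', 'A')]:
    print(f'majority prefers {x} to {y}:', majority_prefers(x, y))
# All three print True: A beats B, B beats C, and C beats A - a cycle.
```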
