A visual guide through the argument that no government, leader, or ideology has ever successfully steered society as planned — and why that's structurally inevitable. Now reinforced with Appendix One, which rebuts technophile objections including control theory, machine intelligence, and the electronic philosopher-king.
This chapter builds a single, sweeping thesis through six interconnected parts: no person, government, or institution can steer the long-term development of a complex society. The argument draws on two thousand years of history, chaos theory, economics, and logic to show that the gap between intention and outcome is not accidental — it is structural.
The central thesis: plans enter a complex society and are refracted by chaos, competing wills, and structural forces — producing outcomes nobody intended.
The chapter opens with over 2,000 years of attempts to rationally guide society — and shows that in every case, reality deviated dramatically from the plan. Even when immediate goals were met, long-term consequences were unexpected and often destructive.
Rome enacted laws to curb decadence. They failed. Sulla then seized power and purged the opposition — but inadvertently destroyed the Senate's integrity, accelerating the Republic's collapse.
Solon abolished serfdom in Attica — a success. But the resulting labor shortage drove Athens to become a full-scale slave society, and paved the way for a populist dictatorship.
Kings issued laws against aristocratic oppression. The laws proved futile. Aristocratic power continued to grow unchecked.
Simón Bolívar liberated Spanish America but could not build stable government. He wrote bitterly that he had "plowed the sea." His prediction of tyranny proved roughly true for 150 years.
Bismarck unified Germany, won wars, industrialized the nation, and blocked democratization. Yet he died embittered. His creation eventually led to World War I and the rise of Hitler.
Prohibition succeeded in cutting alcohol consumption by 60–70%. But it created massive organized-crime empires and widespread corruption, and was repealed after 14 years.
Johnson's social reforms assumed better housing would solve crime, poverty, and drug abuse. Moving families to new apartments changed almost nothing about the underlying problems.
New farming technologies increased harvests — but devastated land, poisoned water, caused cancer clusters, and ruined farmers' livelihoods in the Punjab and elsewhere.
Nuclear technology was shared globally under a non-proliferation treaty. The result: weapons proliferation and unsolved radioactive waste. The text calls this the worst example of all.
The connected world was supposed to foster understanding and peace. Instead it helped create a "post-truth" society, became a tool for terrorists, and a weapon for demagogues.
There is no discernible progress over the centuries in humanity's ability to guide the development of its societies. Modern attempts are no more successful than ancient ones.
Part II shifts from history to theory. The failures are not due to stupidity or lack of effort — they are structurally inevitable because modern societies are complex systems exhibiting chaotic behavior.
The US economy alone would require solving ~60 trillion simultaneous equations for rational price-planning — ignoring all psychological, social, and political factors.
Edward Lorenz showed that the tiniest inaccuracy in initial data can totally invalidate predictions about complex systems. The "butterfly effect" takes its name from his famous question: can the flap of a butterfly's wings in Brazil set off a tornado in Texas?
Even simple systems can behave chaotically: extending prediction range requires exponential improvement in data accuracy. There is an impassable "horizon of predictability."
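This exponential trade-off can be made concrete with a standard toy chaotic system that is not from the text: the logistic map x → 4x(1−x). The sketch below (my illustration; the starting point, tolerance, and error sizes are arbitrary choices) measures how many steps two trajectories starting a distance `delta` apart stay within a fixed tolerance of each other. Each thousandfold improvement in initial accuracy buys only a roughly constant handful of extra steps of prediction.

```python
# Sketch: the "horizon of predictability" for the chaotic logistic map.
# Two trajectories start `delta` apart; we count steps until they
# diverge past `tolerance`. Because errors grow exponentially, the
# horizon grows only logarithmically as the initial error shrinks.

def horizon(delta, x0=0.3, tolerance=0.1, max_steps=10_000):
    """Steps until two runs starting `delta` apart diverge past `tolerance`."""
    x, y = x0, x0 + delta
    for step in range(max_steps):
        if abs(x - y) > tolerance:
            return step
        x = 4 * x * (1 - x)  # logistic map with r = 4 (fully chaotic)
        y = 4 * y * (1 - y)
    return max_steps

for delta in (1e-3, 1e-6, 1e-9, 1e-12):
    print(f"initial error {delta:.0e} -> horizon {horizon(delta)} steps")
```

A billionfold improvement in measurement accuracy (1e-3 to 1e-12) extends the horizon by only a few dozen steps, which is the point of the "impassable horizon" claim.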
The Uncertainty Principle sets an absolute physical limit on data precision. For chaotic systems, there is a point beyond which the horizon of predictability can never be extended — even in principle.
A hallmark of chaotic systems: nearly identical starting conditions lead to wildly different outcomes. This makes long-term prediction impossible, whether for weather or for societies.
Even with unlimited computing power, a society cannot fully predict its own behavior. The act of making a prediction changes the system, potentially invalidating it — a paradox reminiscent of Russell's Paradox in set theory. Furthermore, a society's computing devices are part of the society: as computing power grows, the society's complexity grows with it.
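The self-prediction paradox can be shown with a toy diagonalization argument (my sketch, not the text's): a society containing even one actor who reacts to published forecasts cannot be correctly forecast, no matter how powerful the forecaster, because publishing the prediction changes the system. The agent and forecasting rules below are invented for illustration.

```python
# Sketch: no published forecast about a system can be correct if the
# system contains an agent that reads the forecast and does the opposite.
# This mirrors the diagonal move in Russell's paradox and the halting problem.

def contrarian_agent(published_prediction):
    """An actor inside the system: always does the opposite of the forecast."""
    return "stay" if published_prediction == "move" else "move"

def forecaster(predict):
    """Apply any forecasting rule, publish its forecast, observe the outcome."""
    announcement = predict()
    outcome = contrarian_agent(announcement)
    return announcement, outcome

for rule in (lambda: "move", lambda: "stay"):
    announced, actual = forecaster(rule)
    print(f"forecast {announced!r} -> actual {actual!r}")
    assert announced != actual  # every published forecast is self-defeating
```

No matter how `predict` is implemented, the announced forecast and the actual outcome always differ, so the prediction invalidates itself.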
Technophiles may invoke modern control theory — which can keep complex systems on a fixed course even when only short-term effects are predictable. But the "complex systems" of control theory (power plants, oil refineries, air traffic) are extremely simple compared to an entire society. Control theory requires a precise mathematical model of the system, and for a society that means modeling the behavior of every significant individual — political leaders, military officers, executives — whose actions continuously interact with and shape the whole. Statistical models (e.g., "what percentage of consumers will buy X") are insufficient; you need individual-level precision.
Even granting a perfect model, the computing power needed for the trillions upon trillions of simultaneous equations would be unavailable — and even if it existed, the data collection required would be impracticable. And even granting all of that: who decides the objectives? Who imposes the control system on a society that won't accept it voluntarily? Any faction powerful enough to impose it would be riven by internal power-struggles thereafter.
"The horizon of predictability is an impassable barrier… prediction may be limited to short stretches… with frequent recourse to observation and experiment."
— Encyclopaedia Britannica, on chaos theory

An imagined objection: even if we can't predict the long term, maybe we can steer society like a driver navigating a rough hillside — making small corrections. Part III dismantles this from multiple angles.
There is never agreement on what constitutes a "good society." As Engels wrote, the actions of countless conflicting individual wills produce outcomes that no one wanted. Even near-universal agreement is undermined by the "tragedy of the commons."
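The tragedy of the commons has a simple numeric core, sketched below with invented payoff numbers (the herder count, benefit, and damage values are my illustrative assumptions, not from the text): each herder keeps the full benefit of one extra cow but shares the grazing damage with everyone, so adding is individually rational even though universal adding leaves everyone worse off.

```python
# Sketch: why individually rational choices can wreck a shared resource.

N = 10            # herders sharing the pasture
BENEFIT = 1.0     # private gain from one extra cow
DAMAGE = 3.0      # total damage one extra cow does to the shared pasture

def payoff(my_extra_cows, total_extra_cows):
    """One herder's payoff: full private benefit minus an equal share of damage."""
    return BENEFIT * my_extra_cows - DAMAGE * total_extra_cows / N

# Whatever the others do, adding a cow raises my own payoff...
print(f"I alone add a cow: {payoff(1, 1):+.2f} vs restraint {payoff(0, 0):+.2f}")

# ...yet if everyone reasons this way, everyone ends up worse off.
print(f"Everyone adds a cow: {payoff(1, N):+.2f} each")
```

Since the damage per cow (3.0) exceeds the private benefit (1.0) but is split ten ways, defection always pays at the margin, which is why even near-universal agreement on goals is not enough.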
From Chinese emperors to American presidents, the most powerful leaders find their real power dramatically limited. FDR compared changing the Navy to "punching a feather bed." Lincoln said "events have controlled me." Mandela discovered he could rule "only through patient persuasion."
Economic reality, not greed, usually compels ruthless business behavior. In the 1840s, Massachusetts textile manufacturers were forced by market competition to cut wages and increase hours — not by choice. Franco's Spain could not run its economy by decree. Castro's Cuba could not escape sugar dependence despite total ideological commitment.
Across eras and systems, the gap between formal authority and real control is enormous. Leaders spend most of their time simply persuading others to cooperate.
Wang Anshi's reforms in Song Dynasty China were destroyed by factional opposition despite imperial backing. Louis XIV could only "steer" his realm within narrow limits. Joseph II of Austria tried progressive modernization and died deeply disappointed after being forced to reverse his own reforms. Stalin's Terror spiraled out of his own control, crippling the Soviet army. Hitler could not purge disloyal generals without destroying his military, and survived assassination attempts only by extraordinary luck. In Castro's Cuba, total charismatic authority still failed against bureaucracy, racism, economic dependency, and peasant resistance.
Part IV introduces the most critical factor: within any complex society, self-propagating systems — businesses, political movements, networks of corruption — evolve through a process analogous to biological natural selection.
Like organisms filling ecological niches, self-propagating systems invade every corner of society and circumvent all attempts to suppress them, competing for power in ways that render long-term rational planning futile.
Part V grants — purely for argument — every previous objection: suppose we could overcome complexity, chaos, and competing wills. Even then, the idea of a wise "philosopher-king" steering society collapses under its own contradictions.
Given vast disagreements in any large society, any real philosopher-king would be a bland compromise or a ruthless faction leader. The citizen would not get to choose.
Even if one king is wise, ensuring an unbroken line of equally competent successors with matching values has never been achieved. Not in Rome, not anywhere.
By Part V, the text has assumed away complexity, chaos, resistance, economics, evolution, and factions. The argument has entered pure fantasy.
Technophiles may propose an immortal supercomputer hardwired to adhere forever to a fixed system of values. This eliminates the succession problem but introduces new, equally fatal ones:
If values are rigid and precise — they will produce many decisions that practically everyone would regard as unreasonable. This is demonstrated by the history of American constitutional law: rigid rules always produce unjust outcomes, which is why courts use vague "balancing tests" and discretionary "factors."
If values are vague and flexible — then the stability that hardwiring was supposed to ensure is lost. Where principles are vague, "one can usually find a way to justify almost anything." Two decisions both consistent with the same vague values can have radically different practical consequences.
In either case, the electronic philosopher-king fails. And the original questions remain unanswered: who decides what values to hardwire, and how do they gain the power to impose their choice on society?
The notion of a wise dictator is not purely theoretical. During the Great Depression, many mainstream Americans advocated for rule by a dictator or "supercouncil." Many Britons admired Hitler's Germany. Lloyd George said: "If only we had a man of his supreme quality in England today." The chapter warns similar sentiments could resurface.
"The series of assumptions we've had to make is so wildly improbable that for practical purposes we can safely assume the development of societies will forever remain beyond rational human control."
The final part confronts a paradox: if all of this is so well-established, why do intelligent people keep proposing elaborate schemes to "redesign" society?
The worldview of most intellectuals depends deeply on the existence of a functioning large-scale society. Accepting that this society is beyond rational control — and that collapse may be the only exit from its trajectory — is psychologically devastating. So they cling to unrealistic plans rather than face the implications.
"It is always easier to deny reality than to watch your worldview get shattered."
— Naomi Klein (quoted in the text, though the author notes the irony)

The text critiques technophiles who claim humanity will "take charge of its own evolution," environmentalists proposing to "reconstruct" the global economy, and authors offering "unified, transdisciplinary" solutions — all oblivious to the fact that such schemes have never worked. Even sincere proposals are indistinguishable from fantasy once measured against the historical record.
The expected technophile answer to Chapter One is: "Technology will solve all those problems! Humans will be replaced by superintelligent machines or cyborgs, and they will guide society rationally." But this misses the point entirely.
With one exception (noted below), none of Chapter One's arguments depend on human limitations. The problems of complexity, chaos, data impossibility, conflicting wills, the evolution of self-propagating systems, and the logical paradoxes of self-prediction apply to any entity — human, cyborg, or machine — attempting to steer a complex society. Replacing the steering agent with a smarter one does not solve the problem.
The one exception: a human philosopher-king changes over time, so his values drift. Technophiles propose a computer hardwired to fixed values instead. But this creates its own insoluble problems (rigid values produce unreasonable outcomes; flexible values lose their stability). And the meta-questions remain: who decides what values to hardwire, and how do they gain the power to impose their choice?
It is always risky to dismiss ideas about future technology merely because they seem implausible. In the 1950s, most people — probably including most computer scientists — would have dismissed as fantasy the suggestion that, fifty years later, every person would hold in their lap more computing power than a room full of million-dollar 1950s machinery.
Futuristic proposals need to be examined critically and dismissed only when good reasons for dismissal have been found, not waved away on vague intuitive grounds. The Appendix argues that such reasons exist here: the mathematical, logical, and political barriers to rational societal control hold regardless of the intelligence level of the controlling agent. "Whatever technological miracles the future may have in store, there are excellent reasons for dismissing as science fiction the notion that the development of a society will ever be subject to rational guidance."
Each part builds on the previous one. History provides the evidence; complexity theory explains why; and the final parts eliminate the last escape routes.
From ancient Rome to modern America, from chaos theory to the tragedy of the commons, from the limits of presidential power to the evolutionary dynamics of institutions — the evidence converges: the development of complex societies will forever remain beyond rational human control.