A critical breakdown of Ted Kaczynski's essay dismantling transhumanist utopias — why the dream of technological immortality is, according to the logic of natural selection, doomed to fail.
Kaczynski coins the term "the techies" to describe a subset of technophiles who have drifted from science into science fiction — treating highly speculative ideas about technology's future as near-certainties, and confidently predicting a technological utopia within decades.
Notable examples include Ray Kurzweil (who envisions human intelligence re-engineering all matter in the universe) and Kevin Kelly (who writes of technology "filling" an otherwise-empty universe). Their claims, Kaczynski argues, are often grandiose, vague, and unsupported by evidence.
The techies occupy a position far closer to speculative fiction than empirical science, yet present their visions as near-certainties.
Most techie utopias include immortality as a central promise. Kaczynski identifies three distinct forms this takes:
Form I: the indefinite preservation of the living human body exactly as it exists today. Medical or biological immortality without alteration.
Form II: merging humans with machines. The resulting cyborg survives indefinitely by staying competitive with pure machines through constant upgrades.
Form III: uploading a human mind from the brain into a robot or computer, after which the digital consciousness lives on inside the machine forever.
All three paths converge on the same destination — and, Kaczynski argues, all three fail for the same underlying reason.
Even granting that all three forms become technically feasible, the argument is not about technical possibility. It is about systemic incentives: whether the dominant forces in society will actually bother to keep any particular human alive.
Kaczynski's central concept is the "self-propagating system" — any entity (a government, corporation, military, ideology) whose primary drive is its own survival and propagation in competition with other such systems.
These systems do not serve human values. They serve their own continued existence. They take care of humans only insofar as doing so is advantageous to the system.
Systems keep humans alive only while it is advantageous. When that calculus changes, the outcome changes too.
It is already technically feasible to feed, clothe, shelter, and provide medical care for every person on Earth — yet billions still suffer from poverty and violence. Feasibility alone does not guarantee action.
Natural selection guarantees that competing self-propagating systems act primarily for their own survival, not for philanthropic goals. They help humans only when it is in their interest to do so.
Keeping 7+ billion people alive forever demands enormous, sustained resource allocation. No self-propagating system will commit to this unless it benefits the system — which it may not.
Even if technically possible, sustained immortality would be available only to whoever the dominant systems find it advantageous to preserve — a minute fraction of humanity.
Even techies who acknowledge the limitations sometimes quietly assume they personally will be among the select few preserved. Kaczynski turns this around with a sharp observation:
The key insight: even if you make it into the elite, systems keep you alive only as long as you are more useful than any non-human substitute. Humans are expensive — they need food, sleep, housing, healthcare, and entertainment. Machines do not. Once machines outperform humans in the decisions that matter to the system, the cost-benefit calculation tips decisively against keeping humans alive at all.
Kaczynski dismantles each of the three immortality forms systematically:
Form II (man-machine hybrids): as better artificial components become available, the biological remnants are progressively stripped away. The "human" in the hybrid gradually vanishes.
Likewise, love, compassion, ethical feeling, aesthetic appreciation, and the desire for freedom are weaknesses from the system's perspective. They will be engineered out. What survives is no longer meaningfully human.
Form III (mind uploading): uploaded minds face the same pressure. They will be tolerated only as long as they remain more useful than non-human substitutes, and will be transformed until they share nothing with the original human mind.
Form I (body preservation): machines do not need to surpass humans in everything, only in making the technical decisions that promote the short-term survival of the dominant systems. Art, empathy, literature: all become irrelevant if humans are to be eliminated anyway.
Kaczynski uses the techies' own arguments against them. Kurzweil and others predict exponential acceleration of technological progress — a near-explosive "Singularity." But acceleration cuts both ways:
As technological progress accelerates, so does the pace of competition and elimination. Faster change means shorter survival windows for any given entity, human or otherwise.
Biological species can sometimes survive for millions of years in stable environments. But when rapid environmental change occurs, extinction rates spike sharply. An exponentially accelerating technological environment is the opposite of stable — it means competition becomes more intense, more rapid, and more ruthless with each passing year.
The 700-year or 1,000-year life-span that some techies aspire to? A pipe dream. The faster technology accelerates (per their own predictions), the shorter the survival window for any individual entity — human or human-derived — in the competitive ecosystem.
Having dismantled the logical case for tech-utopia, Kaczynski asks: Why do intelligent people believe this anyway? His answer: it is a religious phenomenon, not a rational one. He names it "Technianity."
Technianity shares the hallmarks of millenarian and apocalyptic cults throughout history, and the parallels are striking.
Kaczynski notes that millenarian cults historically emerge at "times of great social change or crisis." This suggests the techies' beliefs reflect not genuine confidence in technology but anxiety about the future, a consoling mythology created to escape that anxiety.
Kaczynski's essay is ultimately a single coherent argument that can be compressed into this chain:
The techies expect a technological utopia, including personal immortality, within decades, on the basis of highly speculative ideas with no real-world social evidence.
Societies, corporations, and governments maintain humans only insofar as it serves their own competitive survival. Technical feasibility guarantees nothing about social implementation.
Machines don't need to surpass humans in everything — just in the decisions that matter to systems. Once that happens, the cost of maintaining humans exceeds their benefit.
Body preservation, hybrids, and mind uploading all depend on the system finding it worthwhile to maintain you. When that calculus changes, you are eliminated or transformed beyond recognition.
The persistence of these beliefs in the face of their logical problems is best explained as a quasi-religious response to social anxiety — an apocalyptic faith dressed in the language of science.