In 1999, Hugo Chavez took office in Venezuela promising to lift up the poor. The oil was flowing. The programs were generous. The rhetoric was hopeful. Western intellectuals made pilgrimages to Caracas and returned with glowing reports. Here, finally, was socialism with a human face. Here was redistribution that worked.
Seventeen years later, Venezuelans were eating zoo animals. The Caracas zoo lost its tapirs, its buffalo, its Vietnamese pigs. People broke in at night and slaughtered whatever they could find. By 2019, the average Venezuelan had lost twenty-four pounds. Not through diet. Through hunger.
The trajectory wasn't accidental. It was structural. And the structure reveals something we'd rather not see about the long game of systems.
There's a pattern that plays out whenever centralized control replaces distributed decision-making. The early years often look promising. Sometimes they look better than the alternative.
Consider South Korea in 1960. The country was poorer than most of sub-Saharan Africa. It had just emerged from a devastating war, had few natural resources, and faced an existential threat from the North. The messy process of market development was slow, unequal, chaotic. Meanwhile, North Korea, with Soviet backing, was industrializing rapidly. The planned economy was building factories, schools, hospitals. By some measures, North Korea was outperforming the South.
This lasted about fifteen years. Then the curves crossed.
South Korea's GDP per capita is now roughly forty times North Korea's. The divergence didn't happen suddenly. It compounded. Each year, the distributed system got a little better at allocating resources, a little better at incorporating feedback, a little better at adapting to change. Each year, the centralized system got a little worse at all of it. The gap that was invisible at year five became undeniable at year twenty and catastrophic at year forty.
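To see why the timing works this way, a purely illustrative calculation helps (the growth rates below are assumptions chosen for arithmetic convenience, not actual Korean figures). If two economies start at the same income level and grow at rates $g_A$ and $g_B$, their ratio after $t$ years is

$$\frac{y_A(t)}{y_B(t)} = \left(\frac{1+g_A}{1+g_B}\right)^{t}.$$

Give one side a 7 percent annual edge over a stagnant rival and the ratio is roughly 1.4 at year five, 3.9 at year twenty, and 15 at year forty. Nothing dramatic happens in any single year; the gap simply accumulates until it can no longer be explained away.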
The same pattern appears in Germany (East versus West), in China (Mao versus post-reform), in every natural experiment history has provided. Early performance tells you almost nothing about long-term trajectory. The systems have different time signatures.
Why does this happen?
Hayek identified part of the answer. Central planners cannot access the knowledge they would need to plan effectively. That knowledge is dispersed across millions of actors, encoded in their local circumstances, their tacit understanding, their sense of what's actually happening on the ground. A price system aggregates this knowledge without anyone needing to collect it. A planning bureau can only work with what it can see, and what it can see is always a simplified map of a complex territory.
But the knowledge problem is only the beginning. The deeper issue is feedback.
Markets are noisy, unequal, often cruel. But they process feedback constantly. A product that doesn't work fails. A company that misallocates resources loses to competitors. A price that's wrong gets arbitraged. The feedback isn't always fast, and it's never fair, but it's relentless. The system learns because it cannot stop learning. Failure is information.
Centralized systems suppress feedback. They have to. The plan is the plan. If local actors could simply ignore directives that don't match reality, there would be no plan at all, just distributed decision-making with extra steps. So the system demands compliance. And compliance means filtering information.
The factory manager in Soviet Russia learned quickly: report good news up, hide bad news, meet the quota on paper even if reality diverges. The local official in Maoist China learned the same lesson. So did the agricultural ministry in Venezuela. The information that reaches decision-makers is pre-filtered for palatability. The feedback loop is broken at every level.
Thomas Sowell called this the conflict between constrained and unconstrained visions. The constrained vision assumes human knowledge is limited and human nature is fixed, so you design systems that work despite these limitations. The unconstrained vision assumes sufficiently enlightened planners can transcend the limitations, so you design systems that depend on getting the planning right.
The unconstrained vision looks better in the short term because it can mobilize resources toward visible goals. Build the dam. Construct the factory. Distribute the food. The constrained vision looks messier because it doesn't mobilize toward anything. It just creates conditions and lets outcomes emerge.
But the constrained vision compounds. The unconstrained vision decays.
Venezuela's collapse followed the script precisely.
Chavez nationalized industries, imposed price controls, expanded social programs funded by oil revenue. For a while, it worked. Poverty rates fell. Literacy improved. The Bolivarian Revolution had receipts.
But the nationalized industries couldn't adapt. The price controls created shortages. The social programs required ever-increasing oil revenue while the nationalized oil company, PDVSA, hemorrhaged competence. (When you fire technical experts for political loyalty and replace them with party members, output tends to suffer.) The feedback mechanisms that might have corrected course were systematically dismantled. Criticize the revolution and you faced consequences. Report bad news and you were the problem.
By the time the problems became undeniable, the system had no capacity to respond. The information it needed to course-correct had been suppressed for years. The institutions that might have forced accountability had been captured or destroyed. The political structure rewarded loyalty over competence at every level.
And then, the familiar move: as the economy contracted, political control tightened. This isn't coincidence. It's causal. A failing centralized system cannot admit failure without undermining the legitimacy of central control. So it doubles down. Blame external enemies. Suppress dissent. Tighten the grip. The surveillance state isn't a bug in the socialist program. It's where the program goes when it starts to fail.
Maduro inherited this trap and accelerated it. The economic collapse and the political repression aren't separate phenomena. They're the same phenomenon in two registers.
The uncomfortable part of this analysis is what it implies about short-term assessment.
Western intellectuals praising Chavez in 2005 weren't lying about what they saw. The clinics were real. The literacy programs were real. The reduction in poverty statistics was real. If you evaluated the system at that moment, you could make a reasonable case for it.
Lincoln Steffens visited Soviet Russia in 1919 and declared: "I have seen the future and it works." He wasn't hallucinating. The revolutionary energy was real. The mobilization was real. The sense of historic purpose was real. Barely two years in, the Soviet experiment looked like it might succeed.
It's easy to dismiss these observers as naive or ideologically captured. Some were. But the harder truth is that their observations were accurate for the time horizons they were using. The systems did work, in the sense of achieving their stated short-term goals. What the observers couldn't see, because it hadn't happened yet, was the decay function.
This is the epistemological trap. The case for centralized control always looks strongest at the beginning. The visible achievements are front-loaded. The costs are back-loaded. The factory gets built today; the innovation that never happened is invisible. The program distributes food today; the agricultural decline takes a decade to compound. The literacy campaign succeeds today; the suppression of independent thought takes a generation to hollow out the culture.
If you're evaluating systems on time horizons of five or ten years, centralized planning will often look competitive. It can mobilize. It can build. It can achieve legible goals.
But societies don't exist on five-year time horizons. They exist across generations. And on generational time scales, the feedback question becomes decisive. Can the system learn? Can it incorporate information it doesn't want to hear? Can it adapt to changes no one predicted? Can it allow failures that reveal necessary corrections?
Markets answer yes, imperfectly and unfairly, but persistently. Central planning answers no, because the structure makes learning dangerous.
I think about this when I watch people argue about economic systems in the present tense.
The argument for central coordination often takes this form: look at this problem the market isn't solving. Look at this inequality it's producing. Look at this short-term thinking. Wouldn't it be better if someone rational were in charge?
The argument isn't wrong about the problems. Markets produce inequality. They reward short-term thinking in many contexts. They fail to solve problems where the benefits are diffuse and the costs are concentrated. The critique is accurate.
But "someone rational in charge" isn't a system. It's a wish. The question isn't whether a wise planner could theoretically do better. The question is what happens when you build institutions that concentrate planning authority. What information flows do you create? What feedback do you suppress? What happens in year twenty when the founders are gone and the system has been captured by whatever interests learned to navigate it?
Venezuela didn't fail because Chavez was uniquely bad. (He was charismatic, energetic, probably sincere in his early goals.) It failed because the system he built couldn't learn. North Korea didn't fall behind because its founders were stupider than South Korea's. It fell behind because the system filtered out the information it needed to adapt.
The seduction of control is that it offers visible results against invisible alternatives. The dam gets built. What would have emerged from a thousand uncoordinated decisions is never seen, because it never happens. The comparison feels asymmetric: something versus nothing. But the something is front-loaded, and the nothing would have compounded.
None of this makes markets sacred or central planning always wrong. Emergency mobilization sometimes requires coordination that markets can't provide. Public goods need collective provision. The state has legitimate functions that dispersed systems can't perform.
The point is about default settings and time horizons. Where should the burden of proof lie? When we design systems, what decay functions are we building in?
A system that filters feedback will degrade. A system that suppresses local knowledge will make worse decisions over time. A system that punishes failure will learn to hide failure rather than correct it. These aren't ideological claims. They're structural ones. They apply to corporations, governments, religions, families. Any system that cannot hear what it needs to hear will eventually stop working.
The messy, unfair, inefficient alternative isn't inspiring. It doesn't mobilize. It doesn't achieve legible goals on predictable timelines. It just compounds, slowly and unevenly, learning from its failures because it cannot prevent them.
South Korea in 1965 didn't look like the future. Venezuela in 2005 did.
Time tells.



