A Big Lack of Trust.

            The rise of big business closely followed the Civil War.  Railroads, coal mines, steel mills, oil, telegraphs, and banks all grew in size and wealth as America industrialized.  Big companies integrated horizontally and vertically.  They formed “trusts” to cut up the national market and set prices without reference to market forces.  Companies played a rough game with each other, with their workers, and with their customers.[1]  The aggrieved fought back in a variety of ways, none of them very effective.  State regulation of railroads, national anti-trust legislation, and guns and dynamite made headlines without braking the advance of big business. 

            Louis Brandeis, lawyer and then Justice of the Supreme Court, advanced a compelling theory of anti-bigness.[2]  Brandeis argued that there could be neither competition nor bargaining in sectors where one actor dominated the market for goods, services, or employment.  Moreover, a dominant company, or rather the handful of men who owned or controlled it, could impose its will in other areas thanks to the wealth it accumulated.[3]  His views came to dominate legal and government approaches to the growth of big business from the New Deal to the Eighties. 

            If a criticism might be offered, it is that the approach is subjective, moralistic, and essentially aesthetic.  It didn’t try to measure whether customers were economically better or worse off under any particular company size or degree of market domination.  It believed that competition should not be carried to its logical conclusion, victory for one competitor.  It could cite many instances of bad behavior by companies without demonstrating the representativeness of those anecdotes.  Fundamentally, it reflected a view that, when pushed too far, inequalities of wealth and power are unseemly. 

            This view finally sparked an effective response in the Reagan Era.  In 1978, Yale law professor Robert Bork published The Antitrust Paradox.  The “paradox” identified by Bork lay in the raising of consumer prices and the limiting of competition through anti-trust laws that effectively protected established competitors.  Bork argued that “consumer welfare” should be the standard for deciding whether some merger should be allowed.  The price and variety of goods offered to the consumer could be measured objectively.  Bork’s view gained dominance in the courts. 

            If a criticism of this approach might be offered, it is that it views humans too narrowly.  How much stuff people can buy and at what price isn’t the only measure of human happiness or welfare.  For example, trust in the larger social, political, and economic systems to give people what they believe to be a fair shake in life also is essential.  That confidence often is based in emotion and intuition, rather than cold logic.  It is subject to manipulation.[4]  It’s real.  It’s vital. 

            Now a new phase in anti-trust has opened.  The current approach has been labeled “neo-Brandeisian.”  Its face is Lina Khan, the new chair of the Federal Trade Commission. 


[1] See Glenn Porter, The Rise of Big Business, 1860-1920 (1992) for a concise summary of the scholarly literature.  See Raymond Chandler, The Long Goodbye (1953) for a mid-century popular evaluation: “There ain’t no clean way to make a hundred million bucks…. Somewhere along the line guys got pushed to the wall, nice little businesses got the ground cut out from under them… Decent people lost their jobs…. Big money is big power and big power gets used wrong. It’s the system.” 

[2] Greg Ip, “Latest Antitrust Approach Has Its Own Risks,” WSJ, 8 July 2021. 

[3] It’s probably hard to regulate anything effectively when one party can hire all the best lawyers. 

[4] American media is the last great industry largely free from government regulation.  Long may it so remain. 

Lovers' Quarrel.

Historians have often examined the tempestuous relationship between Germany and the Soviet Union.[1]  The broad outlines of the story are well known.  They alternate between amity and enmity.  Long before Germany had become a “nation,” the region exerted a powerful cultural influence on Russia.  Russia, Prussia, and the Austrian Empire battled over the “bloodlands” between them in the 18th Century.  Otto von Bismarck, the Chancellor of a united Germany, built his foreign policy on managing the conflict between Russia and the Austrian Empire to avoid war.  After his fall from power in 1890, German leaders succumbed to the “spell of power.”  Their plan for war, the Schlieffen Plan, aimed to destroy Russia and France as major European powers.  German war aims against Russia in the First World War culminated in the 1918 Treaty of Brest-Litovsk, which largely accomplished the pre-war ambitions.  These gains were lost when the Western powers defeated Germany on the battlefield later in the year.  The Allies imposed a harsh, if just, peace on Germany.  It became an outcast, whose chief visible aim lay in restoring respectability.  Meanwhile, the Bolshevik seizure of power, their abandonment of their allies in the separate peace at Brest-Litovsk, their repudiation of pre-war debts, and their attempts to export revolution to other countries made the Soviet Union a pariah country. 

            The two outcasts found a community of interest in evading international restrictions in order to revive their power.  From 1922 to 1932, the German military and the Soviets cooperated on weapons development and military training.  The democratic Weimar Republic chose not to know about this relationship.  Initially, the German aims were short-term.  Many military leaders fantasized that it would be possible to renew the lost war within a few years.[2]  To this end, they encouraged right-wing paramilitary groups like the Nazis. 

            The renewed war in the West did not come, but—in the crisis of the Depression—the Nazis arrived in power.  Adolf Hitler, the anti-Soviet German dictator, ruptured relations with the Soviet Union.  Increasingly, the two countries were at daggers drawn.  In 1935, the Soviet Union formed an alliance with France; in 1936 Germany formed the Anti-Comintern Pact with Italy and Japan; from 1936 through 1938, Germany and the Soviet Union waged a proxy war in Spain.  Some Westerners hoped for a deeper engagement with the Soviet Union against Nazi Germany.  Others hoped that Hitler’s ferocious hostility to the Soviet Union would lead him into a bloody war of exhaustion in the East that would remove the need for the West to fight. 

            Suddenly, in August 1939, Germany and the Soviet Union signed a treaty that left them free to carve up Eastern Europe.  Hitler later chose to attack Western Europe and then, in June 1941, the Soviet Union.  Even in 1941, many Germans regretted this rejection of Russia.   


[1] John W. Wheeler-Bennett, Brest-Litovsk: The Forgotten Peace, March 1918 (1938), and “Twenty Years of Russo-German Relations: 1919-1939,” Foreign Affairs, Vol. 25, #1, pp. 23-43; Hans W. Gatzke, “Russo-German Military Collaboration during the Weimar Republic,” American Historical Review, Vol. 63, #3 (1958), pp. 565-597; Walter Laqueur, Russia and Germany (1965); Gerhard Weinberg, Germany and the Soviet Union 1939-1941 (1972); Barbara Jelavich, St. Petersburg and Moscow: Tsarist and Soviet Foreign Policy, 1814-1974 (1974); Harvey L. Dyck, Weimar Germany and Soviet Russia, 1926-1933 (1966); Geoffrey Roberts, The Soviet Union and the Origins of the Second World War: Russo-German Relations and the Road to War, 1933-41 (1995); Aleksandr M. Nekrich, Pariahs, Partners, Predators: German-Soviet Relations, 1922-1941 (1997); Ian Johnson, Faustian Bargain: The Soviet-German Partnership and the Origins of the Second World War (2015).

[2] From 1890 to 1945 Germany’s leaders repeatedly failed to adjust aspirations to resources.  Disasters followed. 

The Asian Century 21.

            Even a stopped clock is right twice a day.  The descendant of Albanian immigrants to Italy, the son of a father disgraced and imprisoned for embezzlement, born with a body that betrayed him early in life and killed him in middle age, and a resident of one of Italy’s many impoverished areas, Antonio Gramsci (1891-1937) grew up just about as hard as was possible.  On the other hand, he showed himself a brilliant student with wide-ranging interests.  In full revolt against a God who had condemned him to personal misery and a society that condemned the poor to social misery, Gramsci became a revolutionary socialist and then, after the First World War, a Communist.  The Italian elites preferred a Fascist policeman to a Communist revolution on the horrifying Russian model.  Once Benito Mussolini took power, the Italian left received a savage hammering.  Gramsci spent the last eleven years of his life in prison. 

At least it gave him time to think and write.  He filled thirty notebooks with his thoughts on a wide range of issues.  Gramsci’s ideas continue to exert influence today.  One of the most influential is the idea of what he called “cultural hegemony.”  Traditionally, Marxists portrayed the bourgeoisie as retaining power through force.  Gramsci creatively extended this explanation by arguing that the bourgeoisie also retained power by propagandizing their values and culture to the rest of society as normal.  The schools, the church(es), the newspapers and book publishers, and, today he would add, movie studios, television networks, and hip-hop music all propagate the values of the “hegemonic culture” of the established order.  In short, any alternative to the established system comes to seem illegitimate.[1] 

Why do his ideas matter today?  Well, look at the contemporary People’s Republic of China (PRC).[2]  Since taking power in 2012, Xi Jinping has worked in a sustained way to entrench what has been called “Neo-Maoism” as the only thinkable way forward for China.  Partly this involves cultivating an image of Xi as a beneficent ruler who is promoting prosperity, making government more responsive to citizen needs, and rooting out endemic corruption.    

He also has launched a brutal repression of anyone who challenges the government or the “status quo” it represents.  Civil rights activists, defenders of the rule of law, aspiring union organizers, dissident intellectuals and artists, Christians and Muslims have all been persecuted.  Rigged trials, judicial and punitive torture, and administrative imprisonment have been reported.    

Most significantly, however, Xi has worked to limit and shape what Chinese people can know and believe.  He has banned serious study of the era of Mao Zedong as “historical nihilism.”  The schools teach a “George Washington chopped down the cherry tree” version of recent Chinese history.  The government has been creating a “social credit” system to grade (i.e. reward or punish) individuals for how well they conform to government-defined social norms.[3]  The internet is closely watched and increasingly tightly controlled.  “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era” has become the official, “normal,” and hegemonic thought.  Gramsci’s ghost is probably saying, “See, I told you so.” 


[1] I can’t recall a good biography of Gramsci.  I read some of his stuff in graduate school and rejected it out of hand.  More recently, I have come to take a different view.   Same goes for Noam Chomsky and Herbert Marcuse.  My reading of them, I mean, not their attitude toward Gramsci.  Not that I want Elizabeth Warren writing the tax laws. 

[2] Andrew Nathan, “An Anxious 100th Birthday for China’s Communist Party,” WSJ, 26-27 June 2021. 

[3] See: https://en.wikipedia.org/wiki/Social_Credit_System 

The Meaning of Murders in Mexico.

            Steven Pinker is a big believer that things have been getting better for humanity in many ways for a long time.  At the dawn of the Twenty-First Century, you could look at Latin America for signs of progress.[1]  At the start of the century, Mexico’s Institutional Revolutionary Party (PRI) finally yielded its monopoly on political power in favor of multi-party democracy. 

From 1929 to 2000, the PRI deployed patronage to hold power.  Along the way, as in any other one-party state, corruption became endemic.  Obviously, in retrospect, one of the most important tasks of post-PRI government would be to build up honest and competent public administration right from the base to the peak of government.  It was going to take time. 

Mexico turned out not to have any time.  At the same time that Mexico moved toward multi-party democracy, another improvement took place.  Colombia won its long war against drug cartels.  Mexican crime gangs that had served as conduits for Colombian drugs now took over production as well.  Then they fought each other—and any interlopers—for control of the trade.  Along the way, policemen, prosecutors, and judges “on the pad”[2] became a valuable resource.  This happened just as Mexico tried to abandon the PRI’s policies.  Now a “vacuum of corruption” sent public officials in search of new patrons.  

            The drug cartels appeared invulnerable to the normal justice system.  The “narcos” even began to become celebrated public figures.[3]  In 2006, the Michoacan cartel let loose a carnival of highly public, grisly killings.  Also in 2006, Felipe Calderon squeaked through a close election to become president of Mexico.  Calderon decided to fight the drug cartels as hard as possible.  Knowing that the local police and courts were in the pockets of the cartels (and that they were incapable from long habit in any case), Calderon opted for a response from the national level.  Resources were diverted from local government to the military, which had the firepower to shoot it out with the gangs.  The government targeted the cartels’ leaders. 

            It worked—up to a point.  Cartels were decapitated over and over again.  Factions formed and succession battles blazed in the streets.  However, the younger and wilder new drug lords led smaller gangs than had the older cartel chiefs.  They had less cash piled up; they had fewer connections with cops and judges; their connections to suppliers and distribution networks were thinner.  Many of them got pushed out of the business.  These losers in the Jurassic Park of Mexican drug dealing branched out into other forms of violent crime.  Kidnappings for ransom, armed robberies, and extortion all rose sharply.  This pushed the war between drug gangs and between the gangs and the government into the lives of ordinary civilians. 

            All across Mexico the government is losing not just the war against crime, but the war for its own survival.  Popular revulsion against the corruption and ineffectiveness of the government is leading to gangs becoming the effective government in many places.  Or it is leading to private self-defense initiatives (militias, security contractors, lynchings) that ask nothing of the state. 

A failing state on the southern border should deeply concern citizens of the United States. 


[1] Max Fisher and Amanda Taub, “Mexico’s Record Violence Is a Crisis 20 Years in the Making,” NYT, 29 October 2017. 

[2] Old NYPD parlance for crooked cops.  See: Peter Maas, Serpico (1973). 

[3] See for example, https://en.wikipedia.org/wiki/Narcocorrido   

Where we are with Iran.

            Uranium can be “enriched,” by the use of special centrifuges, to raise the concentration of its fissile isotope U-235.[1]  Enriched to low levels (3.67 percent), uranium can fuel nuclear power plants.  Enriched to very high levels (90 percent), it can become the basis for a nuclear weapon.  Enrichment is a slow business in the early stages, but each successive step goes much faster once higher levels of purity have been reached.  According to one expert, it might take a month to enrich uranium from 20 percent to 60 percent, then a week to go from 60 percent to 90 percent.  However, more centrifuges are required to achieve each higher level of purity.[2] 
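The speed-up in the later stages can be made concrete with the standard separative-work formula used in the enrichment industry.  The sketch below is illustrative only; the 0.711 percent natural assay and 0.3 percent tails assay are standard textbook assumptions, not figures from the article.

```python
import math

def value(x):
    """Separative potential of uranium with U-235 fraction x."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(x_p, x_f=0.00711, x_w=0.003):
    """Separative work units (SWU) to make 1 kg of product at assay x_p
    from feed at assay x_f, leaving tails at assay x_w."""
    feed = (x_p - x_w) / (x_f - x_w)   # kg of feed per kg of product
    tails = feed - 1.0                 # kg of tails per kg of product
    return value(x_p) + tails * value(x_w) - feed * value(x_f)

# Work needed per kilogram of product, starting from natural uranium
for level in (0.0367, 0.20, 0.60, 0.90):
    print(f"{level:.0%} product: {swu_per_kg_product(level):7.1f} SWU/kg")

# Starting from 20%-enriched feed, the final leap to 90% is cheap
print(f"20% -> 90%: {swu_per_kg_product(0.90, x_f=0.20):.1f} SWU/kg")
```

Under these assumptions, roughly nine-tenths of the separative work behind a kilogram of 90 percent material has already been done by the time the uranium reaches 20 percent, which is why the expert quoted above can describe the final steps in weeks rather than months.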

            The development of nuclear material is one step.  The development of the technology of making an actual weapon, and the development of ballistic missiles are additional steps.  There is nothing to say that these steps have to be done sequentially, rather than in parallel.    

            Iran had developed a large infrastructure of uranium-enriching centrifuges, along with other elements of nuclear weapons development.  Alarmed, the international community imposed increasingly severe economic sanctions on Iran.  Eventually, the Iranian government agreed to negotiate. 

            The 2015 international agreement limited Iran to possessing 660 pounds of U-235 enriched to 3.67 percent and required the shut-down of many of its centrifuges.  In return, Iran won removal of some—but not all—of the international economic sanctions.  Many other issues regarding Iran’s foreign and military policy were set aside for further negotiations.  Many economic sanctions were retained as leverage for these proposed future talks. 

            President Donald Trump soon abandoned the 2015 agreement and plastered Iran with sanctions.  Iran then began moving away from compliance with the 2015 agreement.[3]  Iran increased its supply of uranium enriched to 3.67 percent; enriched some of its uranium to 20 percent; restarted some of its centrifuges; and blocked international inspectors from some of their agreed work.  According to a February 2021 report from the International Atomic Energy Agency, Iran now possesses ten times the amount of enriched uranium allowed under the agreement.  If processed into weapons-grade material, that would be enough for three nuclear weapons.  In addition, Iran has “largely ignored” an agreement on missiles and has allowed an agreement to expire that permitted security cameras to monitor Iran’s nuclear fuel.[4] 

            There are several ways of interpreting the series of measures taken by Iran.  One way is to see it as slicing the salami, seeing exactly what it can get away with without provoking an attack.  Another way is to see it as a slow ratcheting up of pressure to both force a revival of the 2015 agreement and to improve Iran’s position in negotiations. 

            In the nature of the production process, holding down both the amount of enriched uranium and the number of centrifuges is key.  In mid-April 2021, Israel caused a major “mishap” at the centrifuge facility at Natanz.  Perhaps several thousand centrifuges were destroyed. 


[1] Rick Gladstone, William J. Broad, and Michael Crowley, “Iran Says It Won’t Make Bombs, But It May Be Inching Closer,” NYT, 18 April 2021. 

[2] Thus it would take 500 centrifuges to move from 20 percent enrichment to 60 percent enrichment, and 600 centrifuges to move from 60 percent to 90 percent enrichment. 

[3] As American bombing in Vietnam showed, this latter strategy doesn’t always work.

[4] David E. Sanger, “On Iran, Biden Walks a Tightrope Between Force and Diplomacy,” NYT, 29 June 2021. 

The Iran Problem.

            For decades, Shi’ite Iran pursued nuclear weapons, developed ballistic missiles, and supported terrorists around the Middle East as proxies in its war with Sunni Muslims.  With the American people clearly wary of any new war in the Middle East, President Barack Obama’s administration negotiated a multi-national agreement with Iran on some of these issues.  In return for relief from some of the painful international economic sanctions, Iran agreed to limits on its nuclear weapons development program for a limited time.[1]  President Donald Trump unilaterally abandoned the agreement.[2]  Both Iran and the Democrats bitterly criticized Trump’s action.  The election of President Joe Biden, then, seemed to promise a ready return to the agreement by both parties.  Nevertheless, difficulties arose in completing this restoration.[3] 

            For one thing, Iran’s government now wants more than it got from the Obama administration.  It wants more sanctions relief to allow it access to international financial services.  It wants to keep the nuclear-fuel production capacity it built up after President Trump abandoned the agreement.  To increase pressure on the Americans, it announced that it would enrich uranium to 60 percent, far above the agreement’s 3.67 percent cap, cutting the time needed to produce nuclear weapons if talks broke down. 

            For another thing, the United States government now wants more than it got from the Obama administration.  It wants immediate agreement to limits on Iran’s ballistic missiles and its support for proxy terrorism.  Furthermore, the United States wants to push out the duration of the agreement to prevent Iran from building a weapon for much longer than the original agreement.[4] 

            For yet another thing, Israel sees Iran’s government as a deadly enemy.  It sees the nuclear weapons program, the ballistic missiles, and the regime’s constant denunciations of Israel as warnings of a new Holocaust.  Israel has done everything it can—short of a bombing campaign conducted in co-operation with a nearly-as-skittish Saudi Arabia—to slow down Iran’s weapons programs.  Israeli intelligence purports to believe that Iran is much closer to making a weapon than do Americans.  The Israelis disliked the original deal, will really dislike any softer deal, and may see a no-deal as lighting a fuse. 

            The Iranian regime that negotiated the agreement with the Obama administration[5] has passed its sell-by date.  The Biden administration’s negotiations took place under the shadow of a looming Iranian election likely to be won by “hard-liners”[6] who had criticized the original agreement.  In fact, this is what happened.  In contrast, the recent Israeli elections changed nothing except the prime minister. 


[1] I supported the agreement then and support it now.  That doesn’t mean that the critics of the agreement didn’t have valid points.  It’s just a case of “half a loaf is better than none” when the alternative is to start bombing. 

[2] His administration either re-imposed or created new sanctions for a total of 1,500. 

[3] Steven Erlanger and David E. Sanger, “Two Nations Divided By a Common Goal,” NYT, 10 May 2021. 

[4] Since these seem to have been the major Republican complaints about the original agreement, it would appear that we are actually experiencing Donald Trump’s second term, just without the egregious personal behavior.  See also: China policy, North Korea policy, Afghanistan policy, illegal immigration policy. 

[5] President Hassan Rouhani and Foreign Minister Mohammad Javad Zarif. 

[6] “Hard liners” is a term from the Soviet-American Cold War.  American observers often conjectured that a struggle took place within the Kremlin between “hard-liners” and “soft-liners” or “moderates.”  For a time, British diplomats applied the same sort of analysis to understanding the pre-war Nazi regime.  At least in the latter case, the distinction between “hard-liners” and “moderates” was purely wishful thinking.  Probably an example of projection. 

What is not said.

            Wealth and income inequality have become a much-discussed issue in American politics.  Democrats emphasize the injustice and aesthetics of recent income and wealth inequality.  Narrowing that gap has become a primary concern for many Democrats.  One way to achieve this is to increase what Europeans call “social provision” of services and money to lower income groups.  The “human infrastructure” component of their plans calls for government spending on child-care, universal pre-K, free community college, and expanded spending on health-care.  Once created, such entitlements do not go away.  They only expand. 

A second way is to substantially raise taxes on both the income and the underlying assets of the wealthy.  The maximalist version of the current infrastructure bill nicely illustrates this double policy.  As one New York Times reporter puts it: they “see a rare opportunity to harness the political popularity of infrastructure spending to achieve their long-held policy of raising taxes on the rich.”  Senator Ron Wyden (D-Oregon), chairman of the Finance Committee, puts it succinctly: “What we’re doing is generating revenue, but we are also making a major area of American government more fair, so people don’t feel they’ve been played while the rich person gets off scot-free.”[1]  In sum, the Democrats have long desired to tax the rich and they think that they have finally found a winning justification. 

The tax proposals chiefly target corporations, the fossil fuels industry, and wealthy individuals.  First, the tax on corporations would rise from the current 21 percent, the “small business” tax break for certain partnerships and limited liability companies (LLCs) would end, and so would a provision that taxes the fees of private equity firms as capital gains rather than as ordinary income.  The tax breaks currently allowed to fossil fuel companies would be redirected to benefit “clean energy” companies.  The top tax rate on individuals would rise from 37 percent to 39.6 percent, and the tax on capital gains for those earning more than $1 million a year would rise from 20 percent to 39.6 percent.  It is hoped that such measures would generate $2.5 trillion in revenue over ten years. 

One thing that is not discussed is alternative uses for the increased tax revenue.  Over the last 20 years, the Gross Domestic Product (GDP) of the United States has slightly more than doubled.[2]  Over the same period, the national debt has risen from about $6 trillion to about $27 trillion.[3]  That is, it has more than quadrupled.  From 2016 to 2020, the national debt increased from 106.6 percent of GDP to 127 percent of GDP.  This has taken place in an environment of near-zero interest rates.  If interest rates had to rise to counter either a serious inflationary surge or speculative bubbles, then the cost of that debt would rise as a share of the budget. 
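The interest-rate risk in that arithmetic can be checked in a few lines.  The debt figures come from the paragraph above; the 2 and 5 percent average rates below are assumed for illustration, not drawn from any official projection.

```python
# National debt figures from the text (trillions of dollars)
debt_2000, debt_2020 = 6.0, 27.0

growth = debt_2020 / debt_2000
print(f"Debt grew {growth:.1f}x")   # more than quadrupled

# Annual interest cost at illustrative average rates on the 2020 debt
for rate in (0.02, 0.05):
    cost = debt_2020 * rate
    print(f"At {rate:.0%}: ${cost:.2f} trillion per year in interest")
```

Even a three-point rise in average rates adds roughly $0.8 trillion a year to debt service, which is the sense in which near-zero rates have made the debt look deceptively cheap.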

Suggesting that the increased revenue go to reducing the debt to a more manageable size would be met with hoots of derision.  No one is going to want to pay for benefits that their predecessors received, but for which they refused to pay.  Herein lies the lesson. 


[1] Both quotes are from Jonathan Weisman, “Bipartisan Infrastructure Talks Collide With Democrats’ Goal to Tax Rich,” NYT, 21 June 2021.  Conservatives would contest the “scot-free” designation.  In their view, the top 10 percent of income earners paid 71.7 percent of federal income tax on adjusted gross income, while the bottom 50 percent of income earners paid 2.94 percent of the federal income tax on adjusted gross income.  See “Letters to the Editor,” WSJ, 26-27 June 2021.  As my beloved sister-in-law said when asked to define “fair”: “more, a lot more.” 

[2] See: https://www.statista.com/statistics/188105/annual-gdp-of-the-united-states-since-1990/ 

[3] See: https://fiscaldata.treasury.gov/datasets/historical-debt-outstanding/historical-debt-outstanding

Durable Dictatorship.

            One way of understanding why the 1930s and 1940s were so terrible is to look at the 1920s.  In the aftermath of the First World War, two European governments fell to revolutionary regimes.  The Tsarist, and then the Provisional governments fell to revolution from the left, Bolshevism.  The liberal constitutional Italian government fell to revolution from the right, Fascism.  In both cases, however, the revolutionary movements were stopped short of their radical hopes.  Powerful constituencies were willing to tolerate some change, but rejected anything that harmed their own interests. 

In the case of Russia, the peasantry formed the main stumbling block.  They controlled the food supply, they formed the majority of the population, and they had gained possession of both their own land and that of the aristocracy.  Communism threatened private property, their private property.  So Lenin settled for the “New Economic Policy”: private property in land, private commerce in food, and government control of urban industry and international trade.  There things stood until the arrival of Stalin.   

In the case of Italy, multiple “old elites” formed the stumbling block.  The aristocracy dominated the military and the bureaucracy, the monarchy remained an important focus of loyalty, and big business and big agriculture controlled the economy.  They wanted the Socialist and Communist left and the unions destroyed, but they wouldn’t tolerate anything that threatened their power.  So Mussolini settled for the trappings of dictatorial power for himself and jobs for his followers. 

In the 1930s Stalin and Hitler exploited changed conditions to carry through real revolutions.  For Stalin, it was the death of Lenin and the disputed succession that followed, coupled with the legacy of debates on the best path forward to an actually Communist Russia.  This allowed him to play off factions within Bolshevism while mobilizing the intense enthusiasm of younger Communists.  For Hitler, it was the immense shock of the Great Depression to the society and politics of the Weimar Republic, followed by the commanding needs of mobilization for war.  In both cases, all the old barriers to sweeping change were destroyed. 

These examples may have value in understanding why some authoritarian regimes survive while others fail.[1]  One theory holds that dictatorships born out of revolution endure because the revolution destroys the old institutions, eliminating both enemies and anyone who could provide an alternative; and because the revolutionary movement packs the institutions of power with fanatics committed to maintaining the new order.  This theory may explain why Communist Cuba, Communist North Korea, Communist China, and Islamist Iran all remain standing many decades after their creation.  

One thing not sufficiently emphasized by this analysis is the role of terror.  Right to the end of their lives, Hitler and Stalin commanded police forces that had deeply penetrated the nightmares of their subject people.  Fear compelled compliance. 

Why then did these supreme examples perish?  Hitler lost a war Germany couldn’t win.  The Soviet Union’s rulers lost their nerve at a critical moment in 1989.  Those lessons may have been lost on Western observers.  They aren’t likely to have been lost on current dictators. 


[1] Max Fisher, “How Iran’s Government Has Endured in the Face of Instability,” NYT, 21 June 2021. 

Too Much of A Good Thing.

            The Great Depression (1929-1939) rocked capitalism.  Conventional economic thought offered no useful response.  Indeed, following its dictates only made things worse.  It held that governments should not intervene excessively in the natural workings of the capitalist economy.  However, if the “natural workings” of the capitalist economy put 25 percent of the labor force out of work for a long stretch, then something new needs to be done.  Some countries—the United States, Nazi Germany, Imperial Japan—fumbled towards an effective solution.  Government deficit spending would have to fill the gap between what the economy was producing and what it needed to produce to maintain employment and living standards.  This actually worked, especially under the conditions of total war.[1] 

            It took a while, but after the Second World War these ideas were legitimized as more than just ad-hoc emergency measures.  Labeled “Keynesianism,” the new doctrine legitimized government intervention, especially deficit spending, as a response to serious downturns in the business cycle.  Along with expanded provision of government services, the new approach seemed to be validated by the “Great Boom” of the post-war decades.  Health, education, and living standards all improved markedly during this time.  British Conservative Prime Minister Harold Macmillan crowed that “most of our people have never had it so good.” 

            Inevitably, a fly lit in the ointment.  The fly took the form of consequences that were unintended, unanticipated, and frankly undesired.[2]  Success at dealing with major down-turns tempted democratic political systems to apply the solution to less grave down-turns and then to enhancing up-turns.[3]  This seemed to reflect the invalidation of yet another old nostrum: that deficits financed by just printing money eventually undermine confidence in the currency, eventually stoke inflation to unacceptable levels, and eventually force central banks to raise interest rates.  “Eventually” never put in an appearance, so governments began to get the idea that free money actually is free.[4] 

            Now (or maybe “again” would be more accurate) storm flags are going up.[5]  A “perfect storm” is a rare one that combines several meteorological events.[6]  Cassandras discern something similar developing in economic policies. 

First, printing money raises asset values (stocks and bonds, houses).[7]  That’s great, except that just under half of Americans own no stocks and bonds, and ownership of most stocks and bonds is highly concentrated.[8]  Although much played up by the left, economic inequality really has risen dramatically.  That threatens to de-legitimize capitalism in the eyes of ordinary citizens. 

Second, easy money and government bail-outs do more than curb the negative effects of a recession or depression.  They also curb the positive effects.  Governments step in rather than allowing “creative destruction” to re-allocate economic resources and rewards.  Inefficient, non-performing firms don’t get driven under.  These “zombie”[9] companies continue to tie down capital and labor that would be better directed to new firms that are more competitive, efficient, and innovative.[10]  The share of “zombies” among publicly-traded companies rose from about 2 percent in 2002 to 19 percent by 2019.  Pouring money into “zombie” firms doesn’t increase production or productivity by any significant amount.  “Creative destruction” lies at the heart of functional capitalism.  So, let her rip. 
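The “zombie” test mentioned above is a simple arithmetic criterion: a firm whose profits cannot cover the interest owed on its debts.  A minimal sketch of that test, using hypothetical firm names and figures (none drawn from the sources cited here), might look like this:

```python
# Illustrative sketch of the "zombie firm" test: a company is a zombie
# when its operating profit falls short of the interest due on its debt.
# The firms and figures below are hypothetical examples, not real data.

def is_zombie(operating_profit: float, interest_expense: float) -> bool:
    """True if the firm cannot cover its interest payments from profit."""
    return operating_profit < interest_expense

firms = {
    "Firm A": (120.0, 40.0),  # profit comfortably covers interest owed
    "Firm B": (15.0, 60.0),   # profit falls well short of interest owed
}

for name, (profit, interest) in firms.items():
    status = "zombie" if is_zombie(profit, interest) else "solvent"
    print(f"{name}: {status}")
```

In practice, analysts often apply the test over several consecutive years before labeling a firm a zombie, to avoid flagging companies with one bad year.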

Third, easy money and bail-outs are a part (not the whole) of the forces that have created an economy dominated by big companies.  Big firms can borrow the money needed to get bigger still.  They can buy start-ups before they grow into real competitors; they can hoover up talented workers; and they can hire the armies of lawyers needed to cope with the immense and complicated regulations generated by active governments.  Huge rewards flow toward a company that can achieve market dominance.  Stomping on ants may come to seem preferable to thinking about uncomfortable innovations. 

In essence, capitalism is being smothered by the efforts to shield people from risk and adversity.[11]  In theory, it is not necessary to throw out the original Keynesian baby with the bathwater to solve this problem.  Governments must still save the economy in cases of serious down-turns.  The problems lie, first, in what governments do after a crisis and, second, in what they do when times are good.  The schoolroom solution is that governments should exercise some self-discipline.  After a crisis they should reel back in the debt they issued to revive the economy.  When times are good, they should refrain from trying to make them even better by printing more money. 

The solution rests on a hope that voters or interest groups will reward politicians who follow this path.  That hope, in turn, rests on a belief in civic virtue and a sense of self-restraint.  Are voters and interest groups virtuous and capable of self-restraint?  Or are they habituated to government stimulus?[12]  Maybe it will take a big smash-up to change minds.[13] 

That opens up a discussion about “culture” (values, beliefs, behaviors) that is bound to be difficult.  It would be easy to give into cultural pessimism here.  Still, there’s always been “a lot of ruin in the Republic.” 


[1] On all this, see: Charles P. Kindleberger, The World in Depression, 1929-1939 (1973); and Alan S. Milward, War, Economy, and Society, 1939-1945 (1979). 

[2] See Gregor Samsa. 

[3] In the United States, it isn’t possible to blame only the “tax-spend-elect” Democrats for this.  Purely for electoral reasons, Republicans eventually responded with “tax-cut-spend-elect.”  Their latest tax cut came in 2017, when the recession of 2008 hardly even appeared in the rear-view mirror.    

[4] Two “oil shocks” in the 1970s led to a severe inflation.  After central banks defeated this inflation in the early 1980s, they pushed down interest rates to low levels.  Asian “sovereign wealth funds” soaked up a lot of the US Treasury paper thus generated.  See: Jacques Rueff, The Monetary Sin of the West (1972) for its criticism of the US for inflating the whole world’s economy.  More to the point, Rueff’s views influenced French president Charles de Gaulle to attack the value of the dollar in 1965.  See: https://en.wikipedia.org/wiki/Exorbitant_privilege 

[5] Ruchir Sharma, “The Rescues Ruining Capitalism,” WSJ, 25-26 July 2020. 

[6] See: Sebastian Junger, The Perfect Storm: A True Story of Men Against the Sea (1997). 

[7] After Congress took flight from Keynesianism from 2008 on, the Federal Reserve Bank stepped in with “quantitative easing”: buying privately-owned financial assets to pump up their value. 

[8] Just over half of Americans own some stocks and bonds, but most stocks and bonds are owned by a few people. 

[9] Companies that don’t earn enough profit to pay even the interest on their debts. 

[10] On the very real problems in Asia, see: https://en.wikipedia.org/wiki/Zombie_company 

[11] There may be a larger argument to be made about the other unintended effects of the post-war reforms on a broader range of citizens.  For example, Zachary Karabell quotes Alexander Brown, founder of the investment bank Brown Brothers (later Brown Brothers Harriman): “Don’t deal with people about whose character there is a question.  It keeps your mind uneasy.  It is far better to lose the business.”  Karabell, “The Capitalist Culture That Built America,” WSJ, 15-16 May 2021.  Reflecting on some of the comments I have read in my local township Facebook page, I think that you shouldn’t say things that would make Jimmy Stewart or Lee Marvin believe that you should have your mouth washed out with soap. 

[12] A question at the heart of the work of the “Concord Coalition.”  https://en.wikipedia.org/wiki/Concord_Coalition 

[13] William E. Leuchtenberg, Franklin D. Roosevelt and the New Deal, 1932-1940 (1963). 

Cosmopolitanism and the Nation State 1.

            Nationalism is the idea that all people who share a common language and a common culture should be organized in independent, self-governing states.  Once upon a time, this posed a revolutionary threat to established boundaries.[1]  “Germany” and “Italy” were geographical expressions equivalent to saying “the Mid-West.”  History had fractured each into multiple independent states.  At the same time, the Russian Empire, the Austro-Hungarian Empire, and the Ottoman Empire were multi-lingual, multi-religious, and multi-cultural conglomerations.  In many places, different “national” groups were mingled together.  Beginning in the later 19th Century, Nationalism spread into all of these areas, leaving havoc and nation-states in its wake. 

            During the First World War, Britain and France agreed on how to partition the Ottoman Empire after victory had been won.[2]  After the war, the peacemakers in Paris tried to craft national boundaries that would gather as large a share of any national groups as possible into a coherent state.  The best will in the world could not disentangle all of the groups, so national minorities grumbled in many parts of Europe.[3] 

            Between the two world wars, predatory states fed on the grievances of national minorities, their own or those of others, which created hostilities that could be exploited.  So German minorities in Austria, Czechoslovakia, and Poland; the lands across the Adriatic that had been promised to Italy, but given to the Artist Formerly Known as Yugoslavia; Hungarians in Rumania; Poles in Czechoslovakia; Croatians and Slovenians in Yugoslavia; and all the lands lost by Russia in 1918.  After the Second World War, the peacemakers drew the lines on maps, then shoved people where they wanted them.  The problem of national minorities was solved. 

            The peacemakers also tried to freeze their lines in place for all coming time.  In 1945 the newly-created United Nations outlawed “the threat or use of force against the territorial integrity or political independence of any state.”[4]  In essence, countries that existed had a right to continue existing in their original boundaries.[5]  The break-up of the Western colonial empires soon added many new nations to the world and to the rolls of those who accepted the United Nations’ prior decisions as their price of admission. 

            Now changed flows of power erode the established order.  Vladimir Putin decided not to wait on plebiscites that the United States would never allow.  He took back the Crimea and staged a limited invasion of two predominantly Russian “oblasts” of Ukraine.  He has claimed that Belarus, Ukraine, and Russia “are one people.”  Xi Jinping’s flouting of China’s agreement on the status of Hong Kong may be a preface to retaking Taiwan.  In 2020, Chinese publications sent up trial balloons referring to parts of Tajikistan and Kazakhstan as once part of Imperial China’s domains.  Water runs downhill, so lesser powers may soon start dusting off their claims. 

            Should this be stopped?  Can this be stopped?  Will this be stopped? 


[1] The American Revolution can easily be portrayed as the first war of national liberation.  The Dutch will object. 

[2] The Sykes-Picot Agreement (1916) gave France Syria and Lebanon; while Britain got Iraq, Trans-Jordan, and Palestine.  The British made a number of other commitments that did not accord well with reality, notably promising much of Turkey to Italy and Greece, an Arab state to the rulers of the Hejaz, and a “national home for the Jewish people” in Palestine. 

[3] “Sub-Carpatho Ukraine, land that we love.”  I stole that from Alan Furst, Kingdom of Shadows (2000). 

[4] Quoted in Yaroslav Trofimov, “The Dangers in A New Era of Territorial Grabs,” WSJ, 19-20 September 2020. 

[5] This amounted to a return to an established principle of 18th and early 19th Century diplomacy.