My Weekly Reader 14 June 2021.

            “We had won,” Winston Churchill later wrote of American entry into full belligerency after Pearl Harbor in December 1941.  A “long and hard road” still had to be travelled to victory.  By August 1942, the Japanese had conquered the Philippines, the Dutch East Indies, British Malaya, and Burma, and stood on the frontiers of India.  By the same point, German armies were advancing on Stalingrad in Russia and on Alexandria in Egypt. 

            The war in the Mediterranean linked the two great theaters of operations.  Gibraltar controlled the western entrance to the sea, while also serving as a key naval base protecting Atlantic Ocean convoys.  Suez controlled the eastern entrance, while also anchoring the British position in the Middle East.  In 1942, the Germans and Italians mounted a deadly threat to Suez from the Italian colony of Libya.  The Afrika Korps joined with the Italian army to advance deep into Egypt.  In one sense, holding onto the British position in the Middle East came down to a question of merchant shipping.  The Italian army in North Africa and the Afrika Korps had to be supplied by the short sea route from Sicily to the Libyan port of Benghazi.  The British in the Middle East had to be supplied by the long sea route around the Cape of Good Hope.  Under these unequal circumstances, the British fought hard to disrupt the Axis supply line. 

            In this effort, the little island of Malta played an outsized role.  Located at the choke point between the western and eastern parts of the Mediterranean, it also stood astride the Sicily-Libya supply route.  Initially judging Malta to be indefensible in modern war, the Royal Navy had largely withdrawn to the Egyptian port of Alexandria at the outbreak of war.  However, the war in North Africa made it essential to hold Malta as a base for submarine attacks on the Italian supply line.  Holding Malta meant running convoys through Italian and German air and naval attacks without much British air cover. 

            In 1940 and 1941, the Royal Navy generally got the better of the Italian “Regia Marina.”[1]  The situation changed dramatically in 1942.  In December 1941, Italian frogmen disabled two British battleships in Alexandria Harbor.  The Japanese assault required the dispatch of other ships to the Indian Ocean.  The German “Luftwaffe” also began to play a larger role.  Britain lost naval supremacy in the eastern Mediterranean.  Three convoys in March and June 1942 suffered heavy losses or were turned back entirely.[2]  Yet Malta had to be held.  Courage and skill would have to make good the deficiency in ships. 

            In August 1942, the British sent off a do-or-die convoy code-named “Pedestal.”[3]  Running eastward past Gibraltar under heavy escort, the convoy came under sustained naval and air attack.  Three days of pure Hell followed. The British lost a carrier, two light cruisers, a destroyer, and nine merchant ships to bombs and torpedoes.  Yet the essential supplies—including a tanker full of fuel oil for submarines and aviation gasoline—got through.  Malta not only hung on; attacks on the Axis supply line also revived. 

            By the end of 1942, the tide of battle had turned against the Axis.  El Alamein, Guadalcanal, and Stalingrad are well-remembered milestones.  So, too, should be “Pedestal.” 


[1] Look up the Battle of Cape Matapan and the attack on the Italian fleet anchorage at Taranto. 

[2] A fictionalized and propagandistic account of the experience of one British cruiser, HMS “Artemis,” on one of these convoys is given by C.S. Forester, The Ship (1943).  Fine stuff. 

[3] Max Hastings, Operation Pedestal (2021), reviewed by Jonathan W. Jordan, WSJ, 12-13 June 2021. 

My Weekly Reader 16 January 2021.

            Between 1750 and 1914, what the British historian Eric Hobsbawm called the “Dual Revolution”[1] gave the West a sudden and enormous advantage over the rest of the world.  Taking advantage of this shift in the balance of world power, Western countries returned with new energy to the policy of imperialism.  By 1914, the Indian sub-continent and South East Asia had been subdued and Africa partitioned, while the rotting Ottoman and Chinese empires were next on the list.  Political control went hand-in-hand with a determined effort at economic and social Westernization.  Christian missions, banks, schools, railroads and steamship lines, army posts and naval bases, mines, tropical medicine institutes, plantations, newspapers, tax collectors, and courts sprang up everywhere.[2] 

            Half a century later, those empires were gone.  How did that happen?  Many factors played a role.  The Second World War left Europe in ruins, while elevating two anti-colonial “Superpowers.”  Relatively few Westerners had gone out to run the empires.  Their sway over non-Western subjects depended heavily upon prestige, the sense that Westerners really were superior.  The Japanese victories over Westerners in British Malaya, the Dutch East Indies, French Indochina, and the American Philippines showed that non-Westerners could defeat Westerners.  After the war, European countries were preoccupied with using their limited resources for economic reconstruction and social reform.  Finally, the war had been fought by the Westerners for the cause of individual liberty, human rights, and democracy.  Faced with colonial independence movements after the war, they couldn’t say “For us, but not for you.”  

            Yet the Western collapse tells, at most, half the story.  More important is the rise of support for independence movements.  The colonial peoples had been no happier to be subjugated to foreign rule than African-Americans had been to be subjugated to slavery.  How to respond to the Western challenge had long divided non-Westerners—from the American Plains to Central Africa to the Ottoman Empire to East Asia.  One answer was to turn Western achievements against Western rule.  This ran from wholesale imitation of the sources of power (Japan) to the exploitation of Western political thought, like the idea of nationalism, against those who claimed to represent it (India).  Then the wars incidentally created a base of nationalists.  They did so by accelerating the creation of a middle class and a cadre of experienced military leaders.  Both groups were strongly nationalist and eager to rise. 

            Finally, there were the committed revolutionaries.[3]  Their numbers continually winnowed by the colonial police, they printed illegal newspapers and handbills, organized demonstrations and strikes, travelled within their homelands and between different empires using false documents, and sometimes led armed uprisings.  These were the men who often would take office at the hand-over of power.  They would try—with uneven success—to build new states. 


[1] In politics this meant the emergence of strong centralized nation-states ever-more based upon the support of the governed.  In economics this meant the Agricultural and Industrial Revolution, which generated immense wealth.  The political revolution began in France, the economic revolution began in England.  With the passage of time, both revolutions spread everywhere, simultaneously creating and destroying. 

[2] For a compelling view of the British Empire at its height, see James (Jan) Morris, Pax Britannica: Climax of an Empire (1968). 

[3] Tim Harper, Underground Asia: Global Revolutionaries and the Assault on Empire (2021).  See the perceptive review by Walter Russell Mead, WSJ, 13 January 2021. 

My Weekly Reader 7 January 2021.

            The Enlightenment had a good year in 1776.  The year witnessed the publication of “The Declaration of Independence,” Edward Gibbon’s History of the Decline and Fall of the Roman Empire, and Adam Smith’s The Wealth of Nations.  Smith attacked the prevailing “mercantilist” economic policies of the time, arguing that tariffs serve only politically-connected special interests at the expense of the larger community. 

            Broadly, for much of their history, Americans rejected free trade as the best engine of prosperity.[1]  While James Madison advocated a “very free system of commerce” in the early days of the Republic, Alexander Hamilton preferred a mercantilist/protectionist line.  Tariff policy veered toward the Hamiltonian line once industrialization began, to the great distress of Southern cotton exporters.  After the Civil War, high tariffs became an article of faith among Republicans.  It is by no means clear that tariffs actually contributed much to American economic development in the “Gilded Age.”  Abundant natural resources combined with a scarcity of labor that put a premium on technological innovation probably did much more than tariffs.  Still, they didn’t hurt.  High tariffs as a protection against “unfair” foreign competition became a totem.[2] 

            Making a totem out of high tariffs came back to bite Republicans when passage of the Smoot-Hawley Tariff Act (1930) coincided with the plunge into the Great Depression.  Even though the Federal Reserve’s tight money policy during the 1920s played a far larger role, the high tariffs and falling trade explanation was ready to hand.[3] 

After the Great Depression drove many countries toward high tariff walls and autarky, and after the Second World War wrecked most world economies, Republicans and Democrats converged on a new orthodoxy of free trade.  The United States played the leading role in designing the new world order of the Bretton Woods System.[4]  Americans continued this drive through the 1990s, with successive “rounds” of multilateral tariff reductions and the North American Free Trade Agreement (NAFTA). 

Some of the economic and social dislocations of recent decades loosened the post-war consensus.  Republicans still clung to free trade as tightly as they once clung to high tariffs, while Democrats lost the enthusiasm for free trade that inspired them from Franklin D. Roosevelt through John F. Kennedy.  More recently, populist uprisings in both parties have disrupted the march toward a still more integrated world economy.  Senator Bernie Sanders attacked free trade in general and the Trans-Pacific Partnership (TPP) in particular during his run for the Democratic presidential nomination in 2016.  Rival Hillary Clinton soon moved from being a leading proponent of the TPP to having her doubts to opposing it.  Donald Trump seized the Republican nomination in part by dint of his scalding criticism of NAFTA and Chinese trade practices. 

Will policy now snap back to normal under Joe Biden or are we at the dawn of a new era of managed trade?  The ability to formulate policies that help those displaced may hold the key.         


[1] Douglas A. Irwin, Clashing Over Commerce: A History of US Trade Policy (2017).  Reviewed by George Melloan, WSJ, 29 November 2017. 

[2] Treating tax cuts as the solution to every problem has become a similar totem for Republicans since the Reagan presidency. 

[3] See: https://en.wikipedia.org/wiki/Availability_heuristic 

[4] The General Agreement on Tariffs and Trade (GATT), the World Bank and the International Monetary Fund (IMF), the Marshall Plan and support for European integration all were vital early contributions. 

My Weekly Reader 15 November 2020.

            The Covid-19 pandemic has sent people streaming to History in search of previous events to provide some guidance for the present.[1]  Applying to America the understanding of the impact of epidemic disease formulated by his Yale colleague Frank Snowden, law professor John Fabian Witt argues that “New germs help make new laws and institutions, yet old ways of doing things shape the course of epidemics and the ways in which we respond to them.” 

            Witt discerns two trends in the American government response to disease, beginning with the smallpox epidemic that coincided with the Revolutionary War.  One is the development of preventive measures.  These measures include things like draining marshes and bogs to rob mosquitoes carrying malaria, yellow fever, and dengue of their habitat; providing city populations with clean water to drink and to clean the filth off the streets in order to avoid cholera; and screening populations to prevent the transmission of disease.[2]  Government, what Witt calls the “Sanitationist State,” grew in power in response to the need to prevent disease.  At the same time, science and medicine advanced rapidly in their ability to provide government with the needed tools.  All of these efforts Witt sees as expressing liberal values of a free society. 

            In contrast, there are the coercive or authoritarian measures of a “Quarantinist State.”  Governments caught up in a desperate emergency may impose an “authoritarian and discriminatory control over people of color, the poor, and immigrant newcomers.”  Here it is hard not to think that Witt may be using epidemic disease chiefly as a metaphor to criticize other forms of expanded government power.  “America’s record on infectious diseases is filled with discrimination and authoritarianism….Each new infection presents a risk of entrenching existing inequities.”  The same might be said of any national security emergency.  Witt may be extending an earlier argument against John Yoo’s interpretation of the Constitution in the aftermath of the undoubted emergency created by 9/11.[3]  However, one could just as easily point to the USA Patriot Act and the revelations of Edward Snowden for further examples of what can happen under an “emergency” that never seems to end. 

            Witt raises vital issues.  A democracy is rule by laws, not by men.  A democracy’s laws define the operations of government during normal times.  An emergency is a departure from what is normal.[4]  What becomes of the rule of law during an emergency?  Can the courts grant broad discretion to government officials to deal with an emergency?  When should government officials surrender such discretionary power?[5]  Is it fair to judge the quality of a democracy by what it does in an emergency, rather than by what it does in normal times? 

            Happily, American presidents have always pulled back or been pulled back from the brink in previous emergencies.  Those were decisions taken by individual men.  We know less about the behavior of the career bureaucrats who operate the machinery of government.


[1] See, for example, John Fabian Witt, American Contagions: Epidemics and the Law from Smallpox to Covid-19 (2020), reviewed by Adam J. White, WSJ, 10 November 2020. 

[2] The case of “Typhoid Mary” in early 20th Century New York City offers a revealing example. 

[3] See: https://harvardlawreview.org/wp-content/uploads/pdfs/witt.pdf 

[4] War, rebellion, natural disasters, and epidemic or pandemic outbreaks of disease are common examples of conditions which may justify declaring a “state of emergency.”

[5] Declaring a “state of emergency” or a “state of siege” is a common feature of anti-democratic coups. 

My Weekly Reader 30 October 2020.

            The Twentieth Century might well be called “The Century of Monsters.”  Adolf Hitler, Josef Stalin, and Mao Zedong wielded absolute power over great states.[1]  They used that power to murderous ends from a combination of ideological fervor and personal pathology.  Hitler and now Stalin have been the subjects of abundant biographies, each one seeking to understand what they did and why they did it. 

            Ronald Suny, an experienced and admired historian of the Soviet Union, has added a first installment of his own biography of Stalin.[2]  It covers the years from Stalin’s 1878 birth in a remote backwater of the Tsarist Empire to the outbreak of the Russian Revolution in 1917.  The isolated, inhuman, psychopathic dictator is hard to recognize in his greener days.  Yes, he had a drunken, violent father.  He also had a loving mother.  Yes, he grew up in poverty and a society where the central government disdained his peripheral culture.  So did many Europeans. 

            In another time and place, perhaps he would have been something different.[3]  But he was born into a Russian Empire facing grave difficulties under bad leadership.  The Tsar-Liberator Alexander II had ended serfdom on terms disadvantageous to the freed people; he had sought to reform the law courts; he had begun the process of teaching Russians how to govern themselves at the lowest levels.  For all of these reforms he had been much hated and finally murdered.  His successors had embarked on a rapid industrialization that filled cities with unhappy toilers and a growing middle class.  However, the rulers had clamped down on reforms while mercilessly hunting dissenters and fostering anti-Semitism.  Defeat by Japan in 1905 wrenched political concessions from Tsar Nicholas II.  He soon repented this weakness.  

            Stalin came of age politically in this seething cauldron of unrest.  He encountered Marxism during a brief passage through a seminary run on much the same lines as the Russian state.  He encountered Lenin in books well before he met the man who led the extreme faction of Russia’s fragmented Marxist movement.  For Lenin, Stalin organized strikes (which often turned violent), robbed banks, and did time in Siberian prison camps.  For himself, Stalin schemed against other Bolsheviks closer to the center of power.[4]  It became a life-long trait. 

            The First World War created a final crisis for the Tsarist regime.  Calling up millions of peasants for military service (along with their draft animals) created a terrible food crisis in 1915 and 1916.  Incompetent management of both the war and the economic mobilization to support it cost the government the last shreds of credibility with the mass of Russians. 

            Stalin played only a mid-rank role in the Revolution that followed.  Food riots broke out in the capital city, Petrograd (formerly Saint Petersburg), in February 1917.  These triggered a revolt against the whole regime that flashed across the empire.  The first victors were the established political parties: conventional bourgeois liberal parties and the moderate wing of the Social Democratic party.  The Bolsheviks found their real base of power for the subsequent October Revolution in the industrial workers.  Only then would Lenin—and Stalin—be on the road to dictatorship. 


[1] Vladimir Lenin and Benito Mussolini sought absolute power, but resistance from powerful forces in their own countries clapped a stopper on their tricks before they could reach the heights of their successors. 

[2] Ronald G. Suny, Stalin: Passage to Revolution (2020).  Reviewed by Joshua Rubinstein, WSJ, 29 October 2020. 

[3] Although it is hard to say what else he might have been.  A book reviewer?  “Eugene Onegin.  BAM!  BAM!”

[4] There has long been a suspicion that he worked as a police agent to thin out the competition. 

My Weekly Reader 21 October 2020.

            The Constitution reared up from a foundation of compromises.  Among these compromises was the toleration of slavery by states where it had little to no importance.  Article IV, Section 2, Clause 3 of the Constitution required that any fugitive from “service or labor” who had fled into another state be returned to her/his master.  In sum, the Union mattered more than did slavery or the enslaved people.  A law, the Fugitive Slave Act of 1793, defined the legal mechanisms for returning fugitives.  However, as opposition to slavery increased in the North, local governments and private citizens often refused to co-operate or even obstructed the slave-catchers operating among them.  Therefore, another compromise, the Compromise of 1850, introduced a much more rigorous Fugitive Slave Act.  The new Act further inflamed Northern opinion.  

            Northern opinion divided more than this brief sketch suggests.  Anti-Black racism ran neck and neck with abolitionism in many places.  Many parts of the North valued their economic connections to the South and to slavery.[1]  Competition between political parties sometimes diverged from principled stands on issues.  All these forces came together in New York City before the Civil War.[2] 

            The city’s government dangled as a puppet of Tammany Hall, the Democratic Party organization.  Tammany pols played on the hostility to Blacks felt by the (predominantly Irish) immigrants they were organizing to vote early and often.  Judges and prosecutors had often crawled out of the same swamp.  New York City policemen sometimes moonlighted as slave-catchers.  Businessmen who wanted to accommodate Southern customers turned a blind eye to it all. 

            Slave-owners would pay rewards for the return of run-aways, so Blacks in New York—people of color in an overwhelmingly White city–were deer in the jacklights[3] of slave-catchers.  This hunt only intensified with the passage of the Fugitive Slave Act of 1850, which offered handsome fees to both slave-catchers and the judges who approved their transfer Southward.  Lured by the money, the slave-catchers sometimes kidnapped—and judges regularly approved the transfer of—free Blacks who were knowingly misidentified as fugitives.  Applying the term not just to New York, but to the whole of the North, one historian has labeled this the “Reverse Underground Railroad.”[4] 

            Highly publicized stories of free Blacks kidnapped into slavery appalled a growing audience of Northern Whites.  Five Black boys were kidnapped from Philadelphia in 1825, then four survivors providentially returned to tell their story of the Black “Trail of Tears” that ran from the Upper South to the new cotton lands of the Southwest.  In 1853, Solomon Northup wrote of his “Twelve Years a Slave.”  Not for nothing has Elizabeth Varon called her book on the Union troops Armies of Deliverance. 


[1] Banks financed the cotton trade and its spendthrift planters; insurers and ship-owners profited from the massive cotton exports. 

[2] Jonathan Daniel Wells, The Kidnapping Club: Wall Street, Slavery, and Resistance on the Eve of the Civil War (2020), reviewed by Harold Holzer, WSJ, 19 October 2020. 

[3] See: https://en.wikipedia.org/wiki/Spotlighting 

[4] Richard Bell, Stolen: Five Free Boys Kidnapped into Slavery and Their Astonishing Odyssey (2019), reviewed by David S. Reynolds, WSJ, 17 October 2019. 

My Weekly Reader 23 July 2019.

During and immediately following the American Revolution, the Articles of Confederation had provided a framework for governing the country.  That framework proved unsatisfactory.  The current Constitution replaced it.  While the authors of the Constitution were experienced and practical men, it remained a theoretical system.  Would it work any better than had the Articles of Confederation?  Would it be able to foster a strong sense of national identity as well as provide effective government?  Could it overcome the distrust of the many Anti-Federalists who had opposed its adoption?  Carol Berkin has argued that four crises in the 1790s worked in various ways to legitimize the new system.[1]

The Whiskey Rebellion (1791-1794).  The new federal government needed revenue, both to operate the government and to pay off the national debt.  Congress passed a tax on distilled spirits.  Farmers living on the then-Western frontier of Pennsylvania and Kentucky often distilled rye and corn into whiskey.  That whiskey could then be traded for goods to merchants who shipped the whiskey east for thirsty consumers.  Both the farmers and the distillers resisted the tax, often violently.  Talking to them didn’t work, so President Washington finally led an army of 13,000 eastern militiamen.  The army cowed the rebels and asserted federal authority (although it didn’t stop moonshining).

The Genet Affair (1793-1794).  The French monarchy had provided vital aid to the American Republic during the War for Independence.  In 1793, the French Republic wanted American aid in its war with Britain and Spain.  Many Americans took sides for or against the French Revolution.  Ambassador Edmond Genet arrived in search of aid.  Before presenting his credentials to the American government and in defiance of a recent Neutrality Proclamation, he commissioned privateers to raid enemy shipping and recruited volunteers for an invasion of Spanish Florida.  Talking to Genet didn’t work.  Washington, supported by both Hamilton and Jefferson, demanded France recall its ambassador.  Which they did, planning to guillotine him.

The XYZ Affair (1797-1798).  Recalling Genet did nothing to solve the growing Franco-American conflict.  President John Adams sent off a delegation to negotiate with the French.  Upon arrival, various French diplomats demanded bribes before negotiations could begin.  Most of the Americans went home in a huff.  The Adams administration then published the reports of the delegation, with the names of French diplomats replaced by the letters X, Y, and Z.  Many Americans became yet more hostile to France and the Adams Administration pushed through more military spending.  A naval “Quasi War” with France began.  However, Thomas Jefferson’s Democratic-Republicans continued to favor the French Revolution and equated the Federalists with the old order.

The Alien and Sedition Acts (1798-1800).  The very divisive responses to the French Revolution and to relations with France embittered political debate.  The Adams Administration pushed through four Alien and Sedition Acts.  These extended the time to earn citizenship from 5 years to 14 years, allowed the government to expel “dangerous” non-citizens, and allowed prosecution of those who made false statements that were critical of the government.  Under the guise of national security, the Federalists used the new laws in overtly political ways by prosecuting Democratic-Republican journalists, and by what amounted to preemptive voter suppression.  (Many immigrants supported Jefferson’s party.)  Democrats attacked the Sedition law by invoking the First Amendment.  The reaction against the Alien and Sedition Acts helped spark the election of Jefferson as President in 1800.

[1] Carol Berkin, A Sovereign People: The Crises of the 1790s and the Birth of American Nationalism (2017).

My Weekly Reader 3 June 2019.

After the British surrender at Yorktown in October 1781, the Revolutionary War finally ended.  It had been a long war and a hard war.  The weary nation returned to peace.

Actually, that’s not what happened.  After Yorktown, war continued in the South and on the frontier.  The war on the frontier is particularly poorly understood.[1]  Now, however, the war in the South can be better understood thanks to John Buchanan.[2]

Buchanan takes up his story well before Yorktown.  Horatio Gates, the “hero of Saratoga,” led the Army in the South to disaster at Camden (August 1780).  George Washington sent Nathanael Greene[3] to clean up the mess.  Washington gave Greene a free hand and the assistance of Daniel Morgan.  Greene and Morgan rallied what troops they could—a core of “Continentals,” a fluctuating number of state militia, and a swarm of irregulars—and began a war of attrition.  Worn down by small defeats and Pyrrhic victories, the British commander Lord Cornwallis made fatal errors.  In April 1781, he divided his forces and led one element north toward Virginia.  The rest stayed in the South to try to hold what the British had won.

Rather than follow Cornwallis northward, Greene targeted the smaller force left behind.  Between May 1781 and December 1782, Greene carried on his earlier approach to fighting the British.  He achieved much the same result.  Small defeats and Pyrrhic victories wore down the British forces.  In the end, their main forces fell back on the heavily fortified ports of Savannah and Charleston.  Here they held out until July and December 1782 respectively.

The Royal Navy had controlled the seas since the beginning of the Revolution, with the sole—catastrophic—exception of the period around the siege at Yorktown.  Had the British won the “Battle of the Capes” against the French (September 1781), then Cornwallis could have been reinforced and re-supplied.  The British would have controlled New York, the Chesapeake, Charleston, and Savannah.  Those positions could not have been taken by siege.  The bargaining for a peace treaty might have been less favorable for the Americans.

With the British confined to coastal enclaves, the main effort of the war in the South became a gory combination of civil war and race war.  Patriots and Tories fought each other with a ferocity not limited to the battlefield.  Pro-British Indians raided the frontier and the Patriots struck back in their accustomed manner by burning villages, storehouses of food, and crops in the field in order to drive their enemy far away.  African-American slaves fled to the British lines, even though savagely punished when captured in flight.  One Patriot commander later recalled the Revolution in the South Carolina Upcountry: “in no part of the South was the war fought with such asperity as in this quarter. It often sank into barbarity.”

None of this was decisive.  Yorktown had led to the opening of peace negotiations.

Over the longer term, the civil war and race war in the South may have contributed to that culture of violence that long marked the South.[4]

[1] Still, see Glenn Williams, The Year of the Hangman: George Washington’s Campaign Against the Iroquois (2006), for a skillful introduction.  See also: “Oliver Wiswell” https://waroftheworldblog.com/2017/07/27/oliver-wiswell/

[2] John Buchanan, The Road to Charleston (2019).

[3] Greene was a 38-year-old quick learner.  His political sympathies had led him to abandon Quakerism for war.  Between 1775 and 1777, the British had helped along his learning with a bunch of hard lessons.  He profited greatly from them.

[4] See: Fox Butterfield, All God’s Children (1996).

My Weekly Reader 16 May 2019.

If you want to think about “God” in simple evolution-of-ideas terms, then the stages would run something like the following.  At first, humans believed all Nature was alive and that all living creatures possessed an “anima” (spirit, soul).[1] Later, seeking to appease these powerful natural forces, people “personified” them as “gods.”  There were many things that could go wrong or right in life, so there were many gods.  Build temples, offer sacrifices, and hope for the best.[2]  Then they refined this polytheism into each city having one particular patron god or goddess, along with the others.  That deity lived in a temple in the particular city that s/he protected.  Participation in religious rites figured as an important duty, rather than as a choice.  The deity didn’t move around.  Greek and Roman religion were merely stems from this stock, though Greek and Roman thinkers also elaborated non-religious ethical systems of great power.[3]  Animism yielded to Polytheism.

After a while, what became Western civilization diverged from this broad cultural pattern.   The Hebrews developed “ethical monotheism.”  That is, they believed that only one real God existed; all the others were false gods.  That God existed everywhere in the world, rather than being bound to the confines of some runty city-state.  That God had made a “covenant” with His “chosen people.”  He would protect them if they worshipped only Him.  He didn’t settle for mere rites and offerings.  He also required adherence to a moral code of action in this world.  Then Christianity emerged from Judaism by opening the “covenant” to anyone who would profess the faith and by extending the “covenant” to include a promise of life after death.

If you want to go all sociological-psychological, then you might argue that Christianity amounted to a generational revolt by young men against the old men who ran Judaism.  Alternatively, you could argue that God now wanted all of His Creation to share in the benefits and strictures of the faith he had granted first to the Jews.  Polytheism yielded to Monotheism.

Then, in the 7th Century AD, another monotheistic faith arose: Islam.  This, too, is an example of ethical monotheism.  If you want to go all sociological-psychological, then you could argue that the Prophet Muhammad borrowed much from Judaism and Christianity, and then preached his new faith to the polytheist Arabs at a critical moment in their history.  Alternatively, you could argue that God had gotten fed-up with the inability of Jews and Christians to follow His instructions.  He had sent Muhammad to call back the whole world to the benefits and strictures of the faith he had granted first to the Hebrews.

Since then, Judaism and Christianity have divided between growing secular majorities and shrinking “fundamentalist” minorities.  Islam, however, has not followed the same path.  The Koran remains the unalterable Word of Allah.

“Every schoolboy knows” the term “a willing suspension of disbelief” when approaching a work of fiction.  What might make understanding between faiths easier would be a “willing suspension of his belief” on the part of the individual.[4]

[1] For a serious, accessible, and sympathetic portrayal of this belief system, see Brian Moore, Black Robe (1985).

[2] I suppose one could think of this as either bribery by the people or extortion by the gods.  Now that we live in a more secular age, it appears that politicians have become the new source of manna.  Reading the New York Times and the Wall Street Journal in parallel each day, I conjecture that Democrats believe in the bribery interpretation and Republicans believe in the extortion interpretation.  But what do I know?

[3] If not of universal compliance.  That’s one of the things that makes Ancient History so much fun.

[4] I stole this from Eric Ormsby, “Allah: A Biography,” WSJ, 17 January 2019.