A fine kettle of fish.

Wage increases haven’t kept pace with inflation for at least a decade. In real terms, American families generally earn less than they did in 1999. A host of factors lie behind this depressing trend. There is intensifying competition from overseas (globalization); there is the difficulty workers face in adapting to technological changes that wipe out lower-skill/lower-wage jobs while creating higher-skill/higher-wage jobs; and there is a government that is managing the past more than helping create the future. Still, a couple of factors capture the attention.

First, America has been suffering from slow economic growth for quite a while. Why have we suffered slow growth? One answer is that high energy costs exert a drag on the economy. Beginning with the oil shocks of the 1970s, energy costs rose until the 1990s. They dropped for most of that decade, but have returned to the post-1970s “normal” in this century. Energy costs work like a regressive tax: everybody drives, so everybody pays the same gas tax; high energy costs for employers drive them to hold down other costs, like wages, or to pass them on to consumers. Another answer is that American workers used to have an enormous education advantage over most foreign workers. Now other countries have moved forward, while Americans have remained stuck in neutral. This affects productivity in a competitive economy.

Second, what growth has occurred has flowed toward those already at the top of the pyramid. Health care costs reduce real incomes: either employers resist wage increases in order to provide health insurance, or employees without employer-provided health insurance have to pay their own costs. The long rise in health-care costs cut into the rise in pay for most people. It took a proportionately smaller share from the incomes of the well-off, who plowed the difference back into investments.

Are there any grounds for even modest optimism? Yes. First, “fracking” has greatly increased the supply of cheaper energy in America. Second, the incessant talk about the importance of education for getting a decent job has increased the number of high-school and college graduates. In 2000, 29.1 percent of 25-to-29-year-olds had a college BA; in 2008, 30.8 percent did; and in 2013, 33.6 percent did. Third, for reasons that are much debated, health-care costs have stopped rising over the last few years. This should allow pay to rise as well.

None of this means that we’re home free. The way forward is shrouded in fog. Short-term results haven’t been very satisfying. American voters swing back and forth between “Hope” and the “Tea Party.” The partisan gridlock in Washington may be less a cause of our troubles than a symptom of those troubles.[1]

This analysis raises several questions.

First, how do we improve the educational preparation of American workers? Shove 50 percent or more of Americans through college? Create a trades-oriented alternative to college?

Second, how do we get health-care costs down? Western Europe and Japan spend about two-thirds the share of GDP on health care that the US does and get better results, so it can be done.

Third, where do we stand on the cheap energy versus the environment issue? Global warming argues for alternatives to burning carbon; jobs and economic growth argue for continuing to burn it.

Fourth, what is a government supposed to do in a highly complex society and economy? After the “London whale” and the Chrysler recalls, the “regulatory state” has a black eye. That’s hardly a reason to believe in the pure rationality of the market economy.

[1] David Leonhardt, “The Great Wage Slowdown Of the 21st Century,” NYT, 7 October 2014.


The Secret History of Columbus Day.

The vast majority of the early settlers of British North America were Protestants. They brought with them a folk memory of how English Catholics had been seen—often correctly—as disloyal to the British government and in the service of foreign princes who wished to establish absolute monarchies and force people to abandon their own faith to become Catholics. Protestants and Catholics each regarded the other’s faith as defective rather than legitimate. From the late 18th Century on, the Catholic Church sided with autocratic governments and systematic ignorance. The Church opposed everything desired by progressive people of the day: representative government, elections, freedom of speech, freedom of opinion, freedom of the press, individual civil rights, and modern science. The Church maintained an Inquisition to repress heresy (wrong belief) and an Index of Banned Books that no Catholic should read. Occasionally, the Church kidnapped Jewish children who had been secretly baptized by Christians and raised them as Catholics.[1] Moreover, in theory, Catholics owed their first loyalty to the Pope, rather than to the government of whatever country they happened to live in. Protestants in all countries despised Catholics as a primitive people enslaved to the orders of their priests.

Catholic immigrants—from Ireland, Italy, and Germany—got a hostile reception from Protestant America. To make matters worse, the Irish and Italians were poor country people, usually illiterate and generally without technical skills. Hence, they took the lowest-paying and least-regarded jobs when they first arrived in America. Their desperation for work dragged down wages for the native-born population. During the 1830s and 1840s, anti-Catholic sentiment boiled over in brawls, riots, press campaigns, and “Nativist” political parties.

The problem for Catholics lay in how to make themselves acceptable in a hostile foreign society. One solution came through associating themselves with the history of America from its earliest times. Italian-Americans first celebrated Columbus Day in New York City’s “Little Italy” in 1866. In 1882 Catholic Americans led by an Irish-American priest founded the “Knights of Columbus” as a device to help impoverished immigrants and promote Catholic education. The organization grew like wildfire among Irish and Italian immigrants and their descendants. It emphasized the union of Americanism and Catholicism.

In 1892 President Benjamin Harrison proposed that Americans celebrate the 400th anniversary of the arrival of Christopher Columbus in the New World. Various dignitaries and un-dignitaries used the occasion to laud such ideals as patriotism and social progress. Schoolchildren recited the “Pledge of Allegiance” for the first time as part of the celebration.

Angelo Noce, an Italian immigrant who had become a citizen and who lived in Denver, Colorado, took it into his head to press to make Columbus Day a state holiday. In 1905 the governor of Colorado decreed 12 October a state holiday.

In 1934, the Knights of Columbus and an Italian-American leader in New York City named Generoso Pope (publisher of the Italian-language daily Il Progresso; his son later founded the National Enquirer) got President Franklin D. Roosevelt to proclaim Columbus Day as a national holiday. Roosevelt needed the Italian vote, so he agreed.

Now “progressive” people want to use the date to honor the long-neglected Native Americans. Why not? Catholics are now fully integrated into American society; they no longer need the holiday. And it isn’t as much fun as Saint Patrick’s Day. Still, that leaves Asian-Americans.

[1] See David Kertzer, The Kidnapping of Edgardo Mortara (New York: Random House, 1997) for one example that attracted much attention.

Bomb ’em till the mullahs bounce.

Iran has spent thirty years and $100 billion pursuing atomic weapons. Iran is deeply hostile to the West in general and to the United States and its allies in particular. So, that’s a problem. What to do?

Either we attack Iran’s nuclear resources to forestall the development of weapons or we accept Iran as a nuclear power and then seek to contain it. The choice will be shaped by how outsiders, the Americans in particular, perceive the Iranian leadership. If it is a rational, dispassionate leadership pursuing national security, rather than expanded power, then containment might well work. If it is an irrational, hatred-driven leadership seeking to expand Iranian power by toppling the established regional order, then an attack may be the only solution.

Kenneth Pollack[1] has concluded that Iran is driven either by “the Iranian leadership’s pathological perceptions of the United States or its own aggressive ambitions.” Nevertheless, he favors containment over the short to mid-term. Over the longer term, he argues, it would be better to engineer a change of regime through keeping the economic sanctions on Iran, reducing the diplomatic support it receives from Russia and China, and supporting dissidents within the country. Anybody, he thinks, would be better than the current rulers, both for America and for the Iranians themselves.

Matthew Kroenig[2] shares the conviction of Pollack and every other informed observer that Iran is pursuing nuclear weapons, not a peaceful nuclear program. He bolsters the standard arguments by noting that Iran is also developing Intercontinental Ballistic Missiles (ICBMs), the standard delivery vehicle for nuclear warheads. Kroenig derides the “containment” of a nuclear Iran. If the United States won’t fight a pre-nuclear Iran today, why would it risk fighting a nuclear Iran in the future? He also doubts that Pollack’s dream of regime change will become a reality. He sees the government in Tehran as too deeply entrenched and too ruthless in crushing its opponents, as it did with the so-called “Green Revolution” in 2009.[3]


Either containment or attack will leave the future uncertain. Might a “contained” nuclear Iran later tip toward expansionism when conditions become favorable? Would a successful attack stop Iran’s pursuit of nuclear weapons in its tracks for all time or would it just lead Iran to renew the effort after the dust had settled? Destroying a few key sites would still leave the country with scientists, engineers, and oil revenues—the real building blocks of a nuclear effort.

A creeping, largely unspoken fear is that the religious fundamentalists in Tehran share a basic mindset with the religious fundamentalist suicide bombers of Al Qaeda and ISIS: death is to be welcomed in the service of a higher cause. That makes it hard to believe that Mutual Assured Destruction would dissuade Iran from waging nuclear war.

Finally, can the United States coerce Iran while seeking its support against ISIS? Or will the United States have to send troops to Iraq and Syria to defeat ISIS if it wants to coerce Iran?

If the United States agonizes too long, will Israel attack to degrade, even if it cannot destroy, the Iranian nuclear program?

[1] Kenneth Pollack, Unthinkable: Iran, the Bomb, and American Strategy (New York: Simon and Schuster, 2013).

[2] Matthew Kroenig, A Time to Attack: The Looming Iranian Nuclear Threat (New York: Palgrave Macmillan, 2014).

[3] The defeat of both the “Green Revolution” in Iran and the Tahrir Square movement in Egypt suggest the staying power of authoritarian governments in the Middle East.

Shi’a pets.

The Prophet Muhammad died in 632 AD. Who should succeed him as “caliph,” the leader of the Faithful? Should the succession be “elective” in the sense of someone chosen from among Muhammad’s chief followers? If so, then the leading candidate was Abu Bakr, Muhammad’s father-in-law and a powerful prop of Islam. Or should the succession be “hereditary” in the sense of someone chosen from among Muhammad’s sons-in-law so that the blood of the Prophet would run in the veins of future caliphs? If so, the leading candidate was Ali, the favored son-in-law. The majority supported the “elective” solution: Abu Bakr became the caliph. Ali and his followers sulked and schemed. Eventually Ali seized power as the fourth caliph, only to be assassinated. Since the debate over the succession, Islam has been split between a majority which sprang out of the supporters of Abu Bakr, the Sunni, and a minority that sprang from the “party of Ali,” the Shi’a[t Ali].[1]

Eventually, the caliphate passed to the Ottoman sultan. The majority of Ottoman subjects were Sunni Muslims, with Shi’ites a minority located in what would become Syria and what would become Iraq. The great majority of Shi’ites were found in Persia/Iran.

Events in the 1980s turned up the flame under this conflict. The Iranian Revolution led to the creation of a revolutionary theocratic republic. Saddam Hussein’s attack on Iran led to a long war in which other Sunni states supported Iraq. Iran largely created the Hezbollah movement in Lebanon.

At the start of the Twenty-First Century, Syria under the Assad dictatorship offered a mirror-image to Iraq under the Hussein dictatorship. In the former, a Shi’a minority ruled a Sunni majority; in the latter, a Sunni minority ruled a Shi’a majority.[2] The overthrow of these regimes then opened the door for the oppressed majorities to seek revenge.[3] Since the beginning of the Syrian civil war in March 2011, the Assad government has seen half the country secede from its control. In Iraq, the Maliki government got right to business as soon as it had waved good-bye to the all-too-willing Americans in 2011.

Both sides in the Syrian civil war have found supporters among their co-religionists abroad. Shi’ite Iran and the Shi’ite government of Iraq have aided the Shi’ite Assad government. Sunni Qatar, Sunni Saudi Arabia, and Sunni foreign fighters have supported the Sunni Islamists who are doing most of the heavy lifting against the Assad government in Syria and who have attacked the Shi’ite government in Iraq.[4] (See: “A Dog in This Fight?”)

“The Sunni-Shi’ite War,” The Week, 1 November 2013, p. 9.

[1] Wait. They’re fighting a gory war over something that happened 1400 years ago? Well, not exactly. During the 1400 years the two sects developed different religious practices which divide them. They also developed a history of conflict, oppression, and resistance linked to these two different faith traditions. So, they’re fighting a gory war over stuff that began 1400 years ago and continued—in widely varying degrees of intensity—down to the present. It probably isn’t helpful to try to analogize it to history-based conflicts in Western culture, like Protestant versus Catholic in Northern Ireland or the struggle for African-American civil rights.

[2] Do minorities create dictatorships as a defensive response to past or potential threats from the majority? That’s a political science question, rather than a historical question.

[3] While effete Italians assert that “revenge is a dish best tasted cold,” Arabs appear to prefer take-out.

[4] Is it possible to compare the Syrian Civil War to the Spanish Civil War? Or aren’t young Muslims entitled to a romantic commitment to an idealistic cause that subsequently turns out to be soiled by Great Power scheming?


Your mind is in the Qatar.

Qatar is about the size of Connecticut, but has a lot more going for it than insurance companies and casinos on Indian Reservations. Once an impoverished sandlot that lived from the pearl fisheries, Qatar now earns an immense amount of money from the sale of natural gas.

The ruling sheikh, Hamad bin Khalifah Al Thani (b. 1952, r. 1995-2013), set out to make Qatar “important” to other people. On the one hand, he wanted Qatar to be important to Americans in case the neighbors (either Saudi Arabia or Iran) took it into their heads to do his country some nastiness. What Iraq had tried to do to Kuwait in 1990, some other power might do to Qatar. He got the Americans to build a local command center for Central Command (which runs American military operations in the Middle East and Southwest Asia) at Doha. He enhanced Qatar’s importance to the world energy market by building a huge natural-gas condensing plant to facilitate exports and earnings.

On the other hand, the sheikh wanted to be a player in the Middle East. In 1996 he created the “Al Jazeera” news network to promote an Islamist message. Beginning in 2011, Qatar has been financing upheaval in the Middle East. It has funded both the “Arab Spring” uprisings (which Westerners like to think of as “liberal” and “modernizing”) and Islamist groups (which Westerners think of as “illiberal” and “anti-modern”). Money flowed to Egypt’s Muslim Brotherhood, to Hamas, and to the Al Nusra Front fighting the Assad government in Syria.

Blaming Qatar for pursuing a two-faced policy by seeking close ties to America while funding Islamist groups misses the point. The Middle East is torn in its attitudes toward “modernization” and “Westernization.” Islamism is one face of that controversy, and its rise threatens the established order in the Middle East. People with an interest in history will note the radical difference between American policy in Europe after the Second World War and contemporary American policy. Then, the Americans had a better solution than their opponents and favored dramatic change to solve problems. Now, the United States doesn’t appear to have any positive alternative to offer and isn’t comfortable with change.

Qatar falls into a larger pattern. Qatar’s ruler may believe that you can’t get anywhere by pandering to the Americans. You’ll just end up living in Los Angeles and selling rugs at craft fairs. The military government in Egypt and the moderate Islamist government in Turkey also have both bridled at American policy of late. Egypt and the United Arab Emirates combined to bomb rebels in Libya without bothering to inform the United States first. Turkey refuses to have its army fight ISIS until the Americans agree to overthrow the Assad government in Syria.

Qatar also seeks to influence American opinion through “Al Jazeera America” and donations to the Brookings Institution. For American conservatives, this is an illegitimate international influence on American policy. For them, it falls into the same category as Islamist illegals entering the US through our porous border with Mexico. There is another way of looking at it, however. American journalism no longer invests many resources in foreign reporting. American journalists rarely have the language skills or the cultural competence to get outside of a restricted safe zone, either physically or intellectually. (It’s hard to understand the exaggerated importance assigned to the demonstrators in Cairo’s Tahrir Square otherwise.) Qatar seeks to enrich the information and perspectives offered to Americans to help them better understand events in the Middle East. Maybe people should spend more time watching an alternative news source? You don’t have to believe what you see and hear. It’s a free country.

“The tiny nation that roared,” The Week, 27 September 2013, p. 9.

Garrison States.

Governments have always oppressed and killed elements of their populations. However, the technological and organizational breakthroughs of the 19th century gave states unprecedented capacities. Telegraph, radio, telephone, railroads, automobiles, and airplanes vastly improved communications and transportation, while centralized bureaucracies extended the reach of central government in other ways. Chemistry and machine-tools combined to provide killers with new means to deal out mass death. These trends converged to make the 20th century one of unmatched destructiveness.

The best estimate is that between 1900 and 1987 governments killed about 170 million people outside of combat operations between military forces. In comparison, battlefield deaths numbered “only” 34.4 million for the same period. This trend continued to the end of the 20th Century. In the 1980s about 650,000 people were killed in inter-state conflicts; in the 1990s that death toll fell to 220,000 people killed in international conflicts. On the other hand, about 3.5 million people were killed in civil wars during the 1990s.
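The scale of the disproportion in the figures above is worth making explicit. A quick arithmetic check (using only the estimates quoted in the text):

```python
# Ratio of "democide" deaths to battlefield deaths, 1900-1987,
# using the estimates quoted above.
democide_deaths = 170e6   # killed by governments outside combat
battle_deaths = 34.4e6    # battlefield deaths, same period

ratio = democide_deaths / battle_deaths
print(f"Non-combat state killings per battlefield death: {ratio:.1f}")  # about 4.9
```

In other words, on these estimates, twentieth-century governments killed roughly five people outside combat for every one soldier killed in battle.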

Unsurprisingly, the phenomenon of state-sponsored mass murder has attracted the interest of thoughtful people. A political scientist named R. J. Rummel was one of the scholars who became interested in this phenomenon. His curiosity yielded one new word and two books. The word is “democide” (meaning the intentional killing of citizens by their government); the books are Death by Government (1994) and Statistics of Democide (1997).

In 1998 the CIA commissioned Professor Barbara Harff (Political Science, USNA)[1] to explore the possibility of predicting future “democides.” Harff found that statistical modeling of social, economic, and political factors produced a list of countries “at risk” of genocide. Some of these countries were places with long-running and already savage wars underway (Algeria, Sierra Leone, Afghanistan). The others clustered in northeastern (Ethiopia, Somalia) and central (Congo, Rwanda, Burundi, Uganda) Africa. Last, but not least, there was Iraq, where Saddam Hussein had already slaughtered about one and a quarter percent of the country’s people. (The total population was 24 million.)

Another factor should not be neglected, however. Twentieth-century “democide” has generally been the child of attempts to create totalitarian social utopias. Democratic governments have virtually never engaged in “democide” in the Twentieth Century. (Admittedly, this isn’t going to make the Indians of the Americas feel any better.) Adolf Hitler, Josef Stalin, and Mao Tse-tung killed millions of people attempting to eliminate racial or class enemies. Their forerunners (the Young Turks, Lenin) and imitators (Pol Pot) killed millions more.

How can we explain the proliferation of destructive utopias in modern times? Did the organizational and technological means available to madmen become much better developed than in earlier times? Did some accident of political, social, and economic conditions bring madmen to power in a single historical period? Is it possible to forestall catastrophe in the future?


“Human Development Report 2002,” Atlantic, October 2002, pp. 42, 44.

Bruce Falconer, “The World in Numbers: Murder by the State,” Atlantic, November 2003, pp. 56-57.


[1] Curiously, both Rummel and Harff were graduates in Political Science of Northwestern University.

Pop. 2050.

People from Thomas Malthus to Paul Ehrlich used to fear that population growth would outrun resources. These fears proved groundless by the end of the 20th century. Projecting from current trends, the United Nations foresees a world population of 9.3 billion by 2050, with growth slowing to stability at 11 billion by 2200. Other reliable estimates set the “carrying capacity” of the earth (its resource base) at something better than ten billion people. Many estimates hold that the earth could support 11 to 14 billion people. In short, a huge crush on resources seems unlikely to imperil human survival.

Instead, by the start of the 21st Century it was being predicted that “the most important changes in world population over the next fifty years are less likely to be in the total number of people than in their age and geographic distribution.”

For example, the anticipated overall slowing of population growth means that populations will age. In 2002 the median age of the world’s population was 26.5 years; by 2050 it will be something like 36.5 years. In the more-developed regions, long life-spans combined with a previous drop in the number of children below replacement level (2.2 children/family) will create very distinctly aged population patterns. The absolute and relative size of the working populations will shrink. Fewer working people will have to support more elderly dependents, but fewer children. Unless there is substantial immigration from non-European areas, Europe’s 2050 population will be smaller than its 2000 population, and only 57 percent will be of working age (15-65). Italy may be regarded as an extreme case: by 2050 the Italian population will have shrunk by 25 percent, and only three Italians will be of working age for every two over 65. In both Russia and the former Soviet-bloc territories, population is plunging as people have fewer children, many die younger than one would expect, and others emigrate.

Other areas of the world still face surging population growth: in China the birth rate is double the death rate; in India and Nigeria the birth rate is almost triple the death rate; in Pakistan the birth rate is more than triple the death rate. In general, almost all of Africa, the Arab world, and South Asia can anticipate population growth by 2050 ranging from at least 50 to over 100 percent. Eight of the ten fastest-growing countries are Islamic-majority countries. Afghan women bear on average 6.8 children, while the population of the Gaza Strip is projected to quadruple by 2050. But it is not just Islam that reports rapid population growth: sixteen million more Indians were born than died in 2002 (20 percent of the world’s population growth), and the population of Africa is projected to increase by 150 percent between 2000 and 2050. This is in spite of the AIDS epidemic, which reduced life expectancy in Africa from 60 years (early 1990s) to 36 years (2002).
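These projections can be put on a common footing with a bit of compound-growth arithmetic. A back-of-the-envelope sketch (the 150 percent figure is from the text; the illustrative birth and death rates are assumptions, not data from the source):

```python
# Back-of-the-envelope check on the African growth projection cited above.
# A 150 percent increase over 2000-2050 means the population multiplies by 2.5.
growth_multiple = 2.5   # 100% (the base) + 150% increase
years = 50

# Implied average annual growth rate under compound growth
annual_rate = growth_multiple ** (1 / years) - 1
print(f"Implied annual growth rate: {annual_rate:.2%}")  # about 1.85% per year

# A birth rate "triple the death rate" does not by itself fix the growth
# rate; the absolute rates matter. Illustrative (assumed) numbers:
births_per_1000, deaths_per_1000 = 30, 10
net_growth = (births_per_1000 - deaths_per_1000) / 1000
print(f"Net growth at 30 births / 10 deaths per 1,000: {net_growth:.1%}")  # 2.0%
```

A steady rate of just under 2 percent a year, sustained for fifty years, is all it takes to produce the 150 percent increase the projections foresee.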

In contrast to the developed Western countries (and Japan), the less-developed regions’ continuing comparatively high number of children will create distinctly youthful population patterns. The absolute and relative size of the working populations will grow. More working people will have to support more children, but not as many aged people. (Retirement homes and elementary schools may become the key institutions in two different societies.)

More importantly, it is difficult to see how “developed” societies are going to do without a large influx of workers from “developing” countries. What school-teachers call “cultural competencies” are going to start to count more and more. “Controlling the border” will take on a different meaning.


Don Peck, “The World in Numbers: Population 2050,” Atlantic, October 2002, pp. 40-41.

Climate of Fear IV

Of all the water on the earth, 97.5 percent is salt water; only the remaining 2.5 percent is fresh. The polar ice caps and the glaciers hold about 68 percent of this fresh water. Another 31 percent of it is not readily accessible because it is buried deep underground.

As with oil, the problem of adequate water supply can be addressed by a combination of greater efficiency in consumption and the opening of new sources to expand supply. For example, between 1980 and 1995 increased efficiency of use in the United States reduced both total consumption of water (10 percent) and per capita consumption (20 percent). Agricultural irrigation is very inefficient, and better irrigation methods are available for those who want to use them.

Or you could move water from surplus areas to deficit areas. In a reversion to ancient governmental practice, the Chinese are building three huge canals to carry fresh water from the Yangtze River to northern China. The canals will end up being more than 700 miles long and will carry 12.7 trillion gallons of water per year.

Only about one-third of total annual run-off water is “caught” by reservoirs and dams; therefore, more dams and reservoirs could catch a lot more water for human use.

Deep drilling for water could tap into the 31 percent of total freshwater that is currently unavailable for human use (as compared to the 1 percent of fresh water that is available).

A much more serious problem is the availability of safe drinking water. About forty percent of the world’s population (most of them peasants in developing countries, 1.5 billion in India and China alone) lack access to modern sanitation systems. What this means in real terms is that people and animals shit upstream from where they get the water in which they bathe, with which they cook, and which they drink. What this means, in turn, is that about 2 million children under the age of five in developing countries die each year from waterborne diseases. As many as 76 million people are going to die of water-borne diseases by 2020, according to one projection. This is because 1.1 billion people don’t have a regular supply of safe water for drinking and 2.4 billion people have no access to sanitation systems. As a result, there are about 4 billion cases of diarrhea per year.

How to control this source of illness and how to treat the illnesses it causes are well understood. (Developed countries have been doing this for more than a century.) The real sticking point is that it is expensive to build sewage systems, water treatment plants, and hospitals. In theory, “These nations don’t have a shortage of water; they have a shortage of money.” In practice, a decade of economic growth since this statement was made has generated a lot of national wealth for China and India. Of course the problem is how to get at it. Taxing rich people in developing countries is as difficult as drilling for oil deep offshore or drilling for deeply buried water.

Still, if you want to ask “what is the good” in environmental crisis, the answer is that it is good for American engineering companies. They have the skills to build sanitation and water-treatment facilities. They have the skills for all kinds of deep drilling. Maybe they could capture melting polar ice at the source.

Or you could open a marina on Baffin Island.


Jen Joynt and Marshall Poe, “The World in Numbers: Waterworld,” Atlantic, July/August 2003, pp. 42-43; “Dirty Water: Estimated Deaths from Water-Related Diseases,” Atlantic, November 2002, pp. 46-47.

Climate of Fear III

People tend to fixate on oil as a key natural resource. How much oil is there in the world? Have we passed “peak oil” or is there a lot still to be discovered? (See: “The Blood of Victory.”) They should also give some thought to water. Water was a key natural resource long before oil and it will be a key resource long after oil has ceased to be the chief fuel source. We need it for drinking and for crop irrigation at a minimum.

Of all the water on the earth, 97.5 percent is salt water. Unless one goes through a very costly desalinization process ($2.50-$16/gallon, compared to $0.50-$2.00/gallon for conventional fresh water), this water is not available for use. This leaves 2.5 percent of the world’s water as usable fresh water.

This sounds scary. In theory, there is about 1.5 billion gallons for each person currently living on earth. However, only a small portion of that water is readily available for human use. The polar ice caps and the glaciers hold about 68 percent of this fresh water. Another 31 percent of it is not readily accessible because it is buried deep underground. Thus, 99 percent of the 2.5 percent is not available for human use (at this time).

Even so, there is a huge amount of fresh water on the earth. Readily available fresh water surface run-off averages 524,151 gallons per person. That sounds reassuring.

The 6.3 billion people now living on earth use about 54 percent of that readily available water, so it looks like we have a comfortable margin. It is estimated that world population will rise to 7.8 billion people by 2025 and that use of readily available water will increase to 70 percent of the total. That sounds scary.
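The chain of percentages above hangs together, and checking it takes only a few lines of arithmetic. A sketch using the figures quoted in the text:

```python
# Arithmetic check on the freshwater figures quoted above.
fresh_fraction = 0.025    # 2.5% of all water on earth is fresh
locked_in_ice = 0.68      # share of fresh water in ice caps and glaciers
deep_underground = 0.31   # share of fresh water buried deep underground

unavailable = locked_in_ice + deep_underground
print(f"Fresh water not readily available: {unavailable:.0%}")  # 99%

# Accessible fresh water as a share of ALL water on earth
accessible = fresh_fraction * (1 - unavailable)
print(f"Accessible fresh water, share of all water: {accessible:.4%}")  # 0.0250%

# Does the projected rise in use track population growth alone?
use_now, pop_now, pop_2025 = 0.54, 6.3e9, 7.8e9
projected_use = use_now * pop_2025 / pop_now
print(f"2025 use if per-capita consumption holds steady: {projected_use:.0%}")  # 67%
```

The last figure is worth noting: population growth alone pushes use from 54 to about 67 percent; the projected 70 percent implies some growth in per-capita consumption as well.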


That small amount is unevenly distributed, just like most other resources. The UN (God bless its pointy little head) has worked out a scale of measurement for water supply per capita.

“Water abundance”: >19,000 cubic meters/person. Canada, Russia, the Congo basin, almost all of South America.

“Water surplus”: 3,400-18,999 cubic meters/person. United States, Mexico, France, Ireland, the Balkans, Turkey, Southeast Asia, Kazakhstan.

“Water sufficiency”: 1,700-3,399 cubic meters/person. Most of Europe, Iraq, northern Iran, Afghanistan, most of India, southern and western China, Japan.

“Water stress”: 1,000-1,699 cubic meters/person. Northern Pakistan, South Africa and Zimbabwe, Syria, Czech Republic, Poland.

“Water scarcity”: <1,000 cubic meters/person. North Africa, Middle East, Saudi Arabia, southern Iran, southern Pakistan, northern China, southern India.

See: Jen Joynt and Marshall Poe, “The World in Numbers: Waterworld,” Atlantic, July/August 2003, pp. 42-43.
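The UN scale above amounts to a simple threshold lookup. A minimal sketch, using the cutoffs from the list (the function name is mine, and the boundary at exactly 19,000 falls in a gap the published ranges leave open):

```python
def water_status(m3_per_person: float) -> str:
    """Classify per-capita water supply using the UN thresholds cited above.

    Thresholds come from the Joynt/Poe table; treating >=19,000 as
    "abundance" is my reading of the ">19,000" boundary.
    """
    if m3_per_person >= 19000:
        return "abundance"
    if m3_per_person >= 3400:
        return "surplus"
    if m3_per_person >= 1700:
        return "sufficiency"
    if m3_per_person >= 1000:
        return "stress"
    return "scarcity"

# Examples drawn from the categories in the text:
print(water_status(90000))   # abundance (e.g., Canada)
print(water_status(1200))    # stress (e.g., Syria)
```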

It seems likely that water shortages will start to weigh on both domestic and international politics. The pressure will come from the bottom, from those countries already facing “water stress” and “water scarcity.” One issue will be a campaign for international sharing. Here the experience of the American West is likely to be useful. Western states have been sharing water resources for decades. It hasn’t always been easy or painless. It’s better than starting from zero.

A second issue will be migration, first internal and then international, by “water refugees.” People will try to ignore this problem for as long as possible. They will describe it as a domestic problem in water-deficient countries. It will not stay contained, any more than climate change has.

Climate of Fear II

Recently, the New York Times has published pieces by economists arguing that the costs of limiting climate change may be much lower than people have feared.

The Cornell economist Robert Frank has made a series of arguments in favor of vigorous action in responding to climate change. Some of them are more persuasive than others.

First, the same people who argue that climate change isn’t certain still go to the dentist once a year. Why? Because fillings are cheaper than root canals: we routinely pay a modest, certain cost now to head off an uncertain but larger harm later. The same reasoning applies to the uncertain effects of an uncertain degree of climate change.

Second, the same people who want to protect capitalism from excessive regulation forget that the market works really well. Raise the costs of pollution to producers and consumers and they will find lower-cost alternatives. Carbon taxes and cap-and-trade policies can cut pollution without pushing up overall prices.

Third, we restrict the right of individuals to exercise their “individual liberty” when it would harm others. Same thing goes for discharging greenhouse gases.

Some of his arguments seem to come from cloud-cuckoo-land.

First, capitalism is “creative destruction.” If carbon-based industries get destroyed by prices that reflect their real costs to the environment, then investors will plow money into alternatives. What Frank fails to understand is what Catherine the Great tried to explain to Denis Diderot: “You write your reforms on paper; I must write them in human flesh.” Coal miners don’t easily convert to baristas. Look at what happened to British coal miners after the Thatcher government decided to close many inefficient coal mines. Boozing away their dole in the local.

Similarly, there are only a relatively small number of convicted felons or people discharged from mental asylums who want to obtain a permit to carry a concealed weapon, but lots of people drive cars. It is easy to restrict the rights of the former, but it will be hard to restrict the rights of the latter.

Second, what you lose on the swings you make up on the merry-go-round. That is, high taxes on pollutants would generate huge revenues that would allow other taxes to fall. What Frank fails to notice is that American taxation is highly progressive. The top one percent of taxpayers provide over a third of all income-tax revenue, while the bottom fifty percent pay less than five percent. Raising gas taxes, for example, would penalize the vast majority of Americans, while offsetting tax cuts would benefit the “one percent.” Good luck getting that through Congress.

To be fair, proponents of pairing a carbon-tax increase with cuts in other taxes frankly acknowledge that the two have to run together to keep the overall tax effect neutral. If the carbon tax is increased without an offsetting reduction in other taxes, then it really is a significant additional cost for the economy.
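The arithmetic behind the distributional objection can be sketched with illustrative numbers. The dollar amounts below are made up for the sake of the example; only the income-tax shares come from the text:

```python
# Illustrative numbers only: a revenue-neutral gas-tax / income-tax swap
# and who receives the offsetting cut.
gas_tax_raised = 100.0               # hypothetical billions from a higher gas tax
income_tax_cut = gas_tax_raised      # offsetting cut keeps the swap revenue neutral

net_change = gas_tax_raised - income_tax_cut
print(net_change)                    # 0.0 -> no net tax increase overall

# Shares of income-tax revenue actually paid (figures cited in the text,
# "over a third" taken here as 0.35 for illustration).
income_tax_shares = {"top 1%": 0.35, "bottom 50%": 0.05}

# If the cut flows back in proportion to income taxes paid, it is skewed
# upward, while the gas tax itself falls on nearly everyone who drives.
for group, share in income_tax_shares.items():
    print(f"{group} receives about {share * income_tax_cut:.0f} of the {income_tax_cut:.0f} cut")
```

Revenue neutral in the aggregate, in other words, is not neutral household by household, which is the political problem.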

Third, American leadership would give us the moral high ground, while the threat of tariffs could be used to lever the Chinese and the Indians into following our lead. I suppose we could ask Vladimir Putin what he thinks of America’s moral high ground—and of economic sanctions.

In short, there are some interesting ideas on offer. However, the political bugs haven’t yet been worked out of the system.

Robert Frank, “Shattering Myths to Help the Climate,” New York Times, 3 August 2014.

Eduardo Porter, “The Benefits of Easing Climate Change,” New York Times, September 2014.