The Muslim Civil War.

With the “Arab Spring” of 2011, the “corrupt and dysfunctional Arab autocracies that had stood for half a century in places like Egypt, Syria, Iraq, Yemen, and Libya lost credibility because they had failed to meet the needs of the citizens.”[1]

Well, no. The “Arab Spring” counted not at all compared to American interventions. The corrupt and dysfunctional autocracies of Iraq and Libya were overthrown only by American attack. The corrupt and dysfunctional autocracy in Egypt quickly reasserted itself after a moment of panic induced by America’s own panic. The corrupt and dysfunctional autocracy in Syria has retained the loyalty of many of its citizens, and the Obama administration has tacitly abandoned its intemperate demand that Bashar al-Assad leave power.

Now, “an array of local players and regional powers are fighting skirmishes across the region as they vie to shape the new order, or at least enlarge their share of it.”

Well, no. We’re witnessing the outbreak of a Muslim civil war.[2] Sunni Saudi Arabia never got around to sending air or ground forces to battle the radical Sunnis fighting against the Shi’ite-dominated government of Iraq, but it has now intervened in the fighting against the Shi’ite Houthi rebels in Yemen. Shi’ite Iran is the principal supporter of the Shi’ite government in Baghdad, of the Houthi rebels in Yemen, and of the Alawite government in Damascus.

The Obama administration has claimed that there are “moderate” forces with which it can work to create stable states, if only people will get with the program.

Well, no. The Shi’ite-dominated government of Iraq began persecuting the Sunnis the minute the Americans were out the door. The Syrian “moderates” were virtually non-existent and unwilling to fight. Yemen is a primitive tribal society which a thin shellac of Western government titles could not disguise. Now Iranian forces have been introduced into Iraq’s fight against ISIS.

The administration claims to discern a difference between “moderate” and “hard line” forces in Iran. It hopes to strike a deal with the moderates over Iran’s nuclear program. The American drive to get a deal with Iran has most publicly angered Israel’s Prime Minister Benjamin Netanyahu. However, Saudi Arabia and Egypt are just as concerned as is Israel that the United States has started to tilt back toward Tehran as its chief partner in the Middle East.

Iran is trying to obtain nuclear weapons to shift the balance of power in the Persian Gulf region. Saudi Arabia doesn’t want Iran to get nuclear weapons. Israel doesn’t want Iran to get nuclear weapons. Neither country places much trust in the fair words and promises of a distant United States. Both have modern American-supplied air forces and airborne control systems. Aside from American objections, the chief impediments to an Israeli pre-emptive strike against Iranian nuclear facilities have been that the Israelis don’t have enough planes and that they would have to overfly Saudi Arabia. You do the math. (While you’re at it, remember that Israel has nuclear weapons.)

If a “Muslim Civil War” does burst into flames, what course should the United States pursue? Intervene or stay neutral? Intervene against the country that already hates us (Iran)? Intervene on the side of those most likely to win in the short run (Saudi Arabia if backed by Israel)? Do a lot of off-shore drilling and tell the Middle East to solve its own problems? Head it off? There’s no clear guide here, but there is the need to choose.

[1] Mark Mazzetti and David D. Kirkpatrick, “Policy Puzzle in the Middle East,” NYT, 27 March 2015.

[2] Or perhaps just a renewal of the long wars between the Shi’ite Safavid Empire of Persia and the Sunni Ottoman Empire.

Which Sides Are You On?

Americans are ambivalent about public unions. In early industrial capitalism, all the power lay with employers. There were always more people seeking work than there were jobs, while state and local governments were there for the buying. As a result, wages were low, hours were long, working conditions were abominable, and job security was non-existent. Only unions offered any chance at improving the lives of workers. Union-organizing, however, proved to be hard and dangerous work. Employers resisted with every means possible and often did not stop at the edge of legality. Moreover, the very idea of a union clashed with the individualistic values upheld by most Americans. Only with the Depression and the New Deal did mass unionization sweep over heavy industry.

Public-sector unionization did not amount to much for a very long time. For one thing, the large American state is a fairly recent creation. More importantly, most people distinguished between public and private unions. On the one hand, public employment seemed far more secure than did private-sector work and often seemed subject to various kinds of patronage. On the other hand, government provided services for which there was no alternative. While breaking a police strike in Boston, Calvin Coolidge declared that “there is no right to strike against the public safety.” Most people agreed with the sentiment for half a century. However, in 1962 President John Kennedy issued an executive order allowing many federal employees to unionize. The movement then spread to the state and local levels. Membership in public-sector unions now exceeds membership in private-sector unions. Because the courts have upheld the right of unions to collect fees from all of the employees they represent, unions have deep pockets for political action.[1]

Amity Shlaes argues that there is an important emotional component to public attitudes toward unions. People have a positive view of Franklin D. Roosevelt, and Roosevelt’s New Deal promoted mass unionization. Most people wouldn’t run into a burning building, or pull over a car on a dark night, or try to wrangle a room full of 14-year-olds, so they admire those who will do those things. So, public-sector unions are approved on an emotional level.[2]

While the national media are interested in labor’s role in national politics, the unions actually focus most of their efforts lower down the food chain. Local government elections often run in the “off” years between national elections. Turnout is about a third lower in the local elections. When unions can turn out voters and supply campaign funds, they can have a disproportionate impact on the governments with which unions will then negotiate contracts.

Since they depend on union support in elections, Democrats tend to fold up under pressure. Since Americans don’t want to pay more taxes, local governments find their way out of the immediate dilemma by granting generous pension benefits that someone else in the years ahead will have to figure out how to pay. We can see the consequences in the balance sheets of some American cities. Dallas, a non-union town if ever I saw one, pays $74 a ton for garbage collection and disposal. Chicago, the union city par excellence now that Detroit has cratered, pays $231 a ton. Speaking of Detroit, in 2013 the city sank under more than $18 billion in long-term debt. Half of that debt was for pension and health-care benefits for employees that could not be supported from the shrinking tax base.

Exasperated Republicans just want to cut government services to get rid of the burden of the unions. It’s difficult to see this as anything except a different kind of “strike against the public safety.” As with many things in contemporary America, some fresh thinking is needed.

[1] Daniel DiSalvo, Government Against Itself: Public Union Power and Its Consequences (2015).

[2] Her own sentimental attachments lie elsewhere. See: Amity Shlaes, Coolidge (2013).

Tales of the South Atlantic 1.

While a great deal of attention has focused on the “Mayflower Compact” as a foundational text in American government, historians have paid much less attention to the many pirate compacts.[1] In the first half of the 18th Century, there were an estimated 2,500 pirates at work in the Atlantic and Caribbean at any given time. Most were single men in their twenties who had “run” from a conventional merchant ship or the Royal Navy.[2] At the beginning of any voyage, the pirates drew up agreed terms of service. These defined who had what authority, how the profits of a voyage would be divided, and how discipline would be enforced. As piracy became more dangerous and less profitable over the course of the 18th Century, it seems likely that many men drifted back into the conventional merchant marine. The seaports of British North America—Boston, New York, Philadelphia, and Charleston—were filled with sailors who resented hierarchy and hated the “press gangs” of the Royal Navy. Did the experience of some of these men with drafting agreements for an egalitarian management of a “wooden world”[3] filter into the rhetoric of shore-bound pamphleteers and tavern table-pounders?

People trying to escape oppression are easy to understand. It’s a little more difficult to comprehend those who find themselves hunted by liberty. Nevertheless, such people do exist. His beliefs made Zephaniah Kingsley, Sr. an outcast in his adopted land, America.[4] A merchant who had migrated from England to Charleston, South Carolina, Kingsley was both a Quaker and a Tory. When the American Revolution ended in British defeat, Kingsley and his family rebuilt their lives in Canada. Eventually, his son, Zephaniah Kingsley, Jr. (1765-1843) took command of the family merchant ship trading to the Caribbean. In 1802 the experienced merchant captain embarked on the slave trade. This turned out to be a very dodgy decision. In addition to the perils of disease to be encountered on the African coast, Europe was at war. French or Spanish navy ships or privateers savaged the British merchant navy. Slaves were a precious cargo, for they might be sold as readily in Haiti or Cuba as in Jamaica. Once the Napoleonic Wars had ended, British reformers began to press for an end to the slave trade. Kingsley took refuge in Spanish Florida, where both slavery and the slave trade remained legal.

Along the way, Kingsley bought an attractive Senegalese slave named Anna Jai, freed her, and made her his common-law wife. Kingsley recognized her intelligence and ability, so she became his business partner as well as life partner. They added plantations to their other trade and prospered.

However, in 1821 Spain transferred Florida to the United States. As a Tory refugee turned Spanish Catholic, Kingsley didn’t like his prospects. American laws would not recognize his children’s rights of inheritance. Moreover, Kingsley, while a slave trader and slave owner, was not a racist. He criticized segregation laws for imposing “degradation on account of complexion.” In the 1830s he founded a colony in Haiti, the only free black country in the Americas and a source of terror to American slave-owners. He sent manumitted slaves to start the colony and employed indentured free workers.

Like many another thing in Haitian history, Kingsley’s colony came to a bad end. He died before it had taken root. His son died at sea. The Civil War ended slavery.

[1] Marcus Rediker, Outlaws of the Atlantic: Slaves, Pirates, and Motley Crews in the Age of Sail (2014).

[2] See B.R. Burg, Sodomy and the Perception of Evil in the 17th Century Caribbean (1983).

[3] I stole the phrase from N.A.M. Rodger, The Wooden World: An Anatomy of the Georgian Navy (1986).

[4] Daniel L. Schafer, Zephaniah Kingsley Jr. and the Atlantic World: Slave Trader, Plantation Owner, Emancipator (2014).

Inequality 4.

By and large, in recent years the upper income groups have collected most of the profits from economic growth while everyone else has lived with stagnant incomes. How much, in monetary terms, has that monopolization of growth cost? According to one calculation, if the top one percent still received the same share of income that they received in 1979, then every other family could have received a check for $7,105.[1]

However, compare this with another form of inequality. If incomes have stagnated for most people, so has educational attainment. In 1900, about 11 percent of Americans aged 14 to 17 attended high school. By 1950, 75 percent of that age group attended high school. That was about double the European rate. The G.I. Bill (1944) carried the American lead forward into higher education by financing college for veterans (among other things). Then something started to go wrong in the 1970s. Male graduation rates for four-year colleges began to decline. Essentially, women have taken up the slack in educational attainment. Unfortunately, this coincided with the decline of the heavy industry that paid good wages for people without a college education.

The educational differential both is and isn’t generational. Of Americans born between 1950 and 1959, 42 percent have a college degree. Of Americans born between 1980 and 1989, 44 percent have a college degree. However, only 30 percent of Americans reach a higher level of education than did their parents. Among 25-34 year-olds, 20 percent of men and 27 percent of women have made the big jump from parents who didn’t finish high school to having a college degree.

The differential is linked to social class. Between the mid-1970s and the mid-1990s, college graduation rates for those in the top 25 percent of income groups rose from 36 percent to 54 percent; rates for those in the bottom 25 percent rose only from 5 percent to 9 percent. Between the early 1980s and the early 2000s, college attendance rates for people from the top 25 percent of income groups rose to be 15 to 25 percent higher than for those in the bottom 25 percent.

Why do these figures matter? They matter because, on average, Americans with a college degree are paid 74 percent more than those with only a high school degree. Between 1979 and 2012, the difference between the incomes of families headed by college graduates and families headed by high-school graduates grew by $30,000.

Education isn’t working as a vehicle for social mobility. It is starting to do the opposite.

The causes of this stagnation are complex. For one thing, middle-class students go to much better schools than do lower-class students. They come out better prepared for college, usually markedly so. For another thing, college costs more in the United States than in most other places, and cuts in already inadequate support for public colleges have thrown even more of the burden on families.

If you think that a BA or more makes for a highly skilled work force, then expanding the percentage of Americans who are college graduates is vital for improving the quality of the American work force. If you think that international competitiveness in a globalized economy is vital for American prosperity, then improving the quality of the American labor force is essential.

Which of these two forms of inequality is worse for the country? This isn’t an attempt to divert attention from one form of inequality on behalf of the “one-percent.” It is an effort to get people to pay attention to complex fundamental problems.

[1] Eduardo Porter, “Equation Is Simple: Education = Income,” NYT, 11 September 2014.

The Iran Dilemma.

Tom Friedman’s opinion on Middle Eastern matters must command respect. Friedman has remarkable access to American government sources. The Obama administration often appears to voice its views through his column.

Since the Revolution of 1979 overthrew the Shah, the United States and Iran have been at odds. At the same time, Sunni Saudi Arabia and Shi’ite Iran have been at odds. So, an alliance of convenience formed between the United States and Saudi Arabia. Recently, the upheavals in the Middle East have consolidated the grip on power of Iranian clients in Iraq, Syria, Lebanon, and Yemen. Over the longer term, however, Iran’s long pursuit of nuclear weapons has been profoundly destabilizing to the region. (See: Bomb ’em ’til the mullahs bounce.)

Friedman’s recent column on the negotiations with Iran over its nuclear program lays out some essential issues, even if it does not fully explore them.[1]

First, the Obama Administration hopes that a nuclear deal with Iran will be “transformational.” If sanctions are lifted, Iran can be drawn into the larger world. Contact with more liberal societies may—eventually—turn Iran into a “normal,” non-revolutionary state.

Second, the Obama administration sees Iran as a legitimate counter-weight to the Wahhabist version of Islam sponsored by America’s nominal “ally,” Saudi Arabia. Iran has competitive (if not “free”) elections; respect for women beyond the norm in the Muslim world; and real military power that it is willing to use. In contrast, Saudi Arabia is an absolutist monarchy that sponsors the spread of the extremist Wahhabism that can easily turn into Islamic radicalism, but will not use its powerful military for more than air shows.

Third, “America’s interests lie not with either the Saudis or the Iranian ideologues winning, but rather with balancing the two against each other until they get exhausted enough to stop prosecuting their ancient Shi’ite-Sunni, Persian-Arab feud.”

Fourth, “managing the decline of the Arab state system is not a problem [the United States] should own. We’ve amply proved we don’t know how.”

Points worth discussing.

What caused the collapse of the Soviet Union, contact with the West or the inherent stupidity of Communism? Is expanded contact with the West eroding the power of the Chinese Communist Party? These examples go to the “transformational” aspect of the issue.

Is the Obama administration hoping for a Nixon-Kissinger style “opening” (as to China) that will remake the politics of the Middle East? If so, is the game worth the candle? What American interests will be advanced by such an opening? Iran will fight ISIS and Saudi Arabia will back opponents of the Shi’ite government in Baghdad regardless of such a change.

Does the Obama administration accept that we are witnessing the undoing of the Sykes-Picot borders? If so, which borders are likely to be redrawn? Iraq, Syria, and Libya are failed states. What about Saudi Arabia (home to most of the foreign fighters in ISIS) or Egypt?

Finally, Friedman argues that “if one assumes that Iran already has the know-how and tools to build a nuclear weapon, changing the character of the regime is the only way it becomes less threatening.” First, he accepts the thrust of the piece by Broad and Sanger, that Iran knows how to make a nuclear weapon. (See: A note of caution in Iran.) Second, he argues that changing attitudes is the “only” way to deal with the danger. Really? Soldiers usually plan for an enemy’s capabilities, not his intentions—which can be hard to discern.

[1] Thomas L. Friedman, “Looking Before Leaping,” NYT, 25 March 2015.

Climate of Fear XVI.

Coal is an important source of fuel: 38.7 percent of America’s electricity comes from 600 coal-fired generators.[1]

The trouble is that coal is bad for you and other living things. Coal burning for power generation in the United States gives off about 1.575 billion tons of carbon dioxide a year. That feeds the greenhouse gases responsible for global warming. Burning coal is worse than burning other fossil fuels: all the gasoline-powered vehicles in the United States give off about a billion tons, while burning natural gas gives off about half as much carbon dioxide as burning coal.

No one is talking about having passed “peak coal”: there is a lot of coal still in the ground. People concerned about global warming want it to stay there. As the former Secretary of Energy Steven Chu memorably phrased it, “there’s enough carbon in the ground to really cook us.”

However, the coal industry has looked to be in decline for the same reason that gasoline prices have fallen recently: hydraulic fracturing (“fracking”) has succeeded. Natural gas prices have fallen by 74 percent over the last ten years. Natural gas, which emits half as much carbon dioxide as coal, is now price-competitive with coal. Thus, a shift from coal to natural gas would achieve a substantial reduction in emissions without harming anyone—except the coal producers, of course. The economics certainly tilt in that direction: 150 of the less efficient coal-fired generation plants have shut down already.

For these reasons, it may have looked like an opportune time to push for a reduction in coal-burning. The Obama Administration is pushing hard to cut carbon dioxide emissions by 30 percent from the 2005 level by 2030. In June 2014, the Environmental Protection Agency (EPA) announced a Clean Power Plan to limit coal burning in the United States. Each state would be required to reduce its carbon emissions. The logical thing to do would be to switch to other forms of energy generation ranging from nuclear to natural gas to “renewables” (solar, wind).

The EPA plan has elicited hard push-back from coal-mining states. The efficiency of coal-mining techniques has increased with the introduction of “open cast” mining (knock off the top of a mountain and excavate the coal with machinery). Coal miners will be thrown out of work[2] and coal mine owners will see their investments destroyed. Senate Majority Leader Mitch McConnell (R-Kentucky) has denounced the president’s “War on Coal.”[3] A dozen states have sued the EPA, claiming that it has exceeded its authority.

One way to smooth the path away from coal would be to invest more in research into “clean coal” technology. So far, research has shown the process to be expensive and difficult. An experimental “clean coal” plant in Kemper, Mississippi, cost five billion dollars. However, a workable technology could both pacify the coal interest and find an international market.

The industrialization of countries like India and China is powered by coal. An estimated 82 percent of global coal reserves are still in the ground. China, which recently promised to reach peak carbon-burning by 2029, plans to build 363 new coal-fired plants before then. India is planning to build more than 450 coal-fired generating plants in years to come. The carbon dioxide emissions from these plants will overwhelm any reductions in the United States. Finding a way to “clean coal” might be one way to avert disaster.

[1] “The end of coal?” The Week, 27 March 2015, p. 11.

[2] Although employment in coal mining in Kentucky has fallen from 38,000 in 1983 to 17,000 in 2012.

[3] Bearing in mind the importance of both tobacco and coal for the state’s economy, maybe they could find a new slogan for Kentucky license plates: “Kentucky is for Respirators.”

Our Kids.

A new book by the Harvard political scientist Robert D. Putnam has instantly attracted attention (and criticism from the left), so it will be much in the news for a while.[1] Combining research in the scholarly literature with many interviews, Putnam explores the disintegration of America into polarized communities of rich and poor that threaten to become hereditary castes.

Broadly, “rich” kids have parents who finished college; grow up in two-parent families; get a lot more attention from their parents while young; get better quality day-care when their mothers get fed up and go back to work; get dragged to church on Sunday[2]; attend better schools and have more access to developmental extra-curricular activities; eat dinner as a family; are much more likely than are “poor” kids to graduate from college (and to attend better colleges at that).

Broadly, “poor” kids have parents who went no farther than high school; “are increasingly entering the world as an unplanned surprise”; grow up in increasing numbers in broken homes; get about a third less time from their parent; get lower quality day care when their stressed-out mothers have to go back to work; skip church in favor of watching cartoons; don’t eat dinner as a family; are much less likely to attend college (and to attend lesser colleges when they go).

Jason DeParle concludes that Putnam’s “research is prodigious. His spirit is generous. His judgments are thoughtful and fair.”[3] Nevertheless, Putnam’s approach frustrates DeParle. “What [Putnam] omits… is a discussion of the political and economic forces driving the changes he laments.” Doing what Putnam left undone, DeParle argues that income inequality has grown “radically”; that the wealthy exert great influence in politics in defense of their interests; that inequality “gives those at the top [the power] to pull up the ladder”; and that Putnam “overlooks the extent to which it’s … a story about interests and power.” How can it be that, “though Putnam is a political scientist, his account is politics-free”? Doesn’t Putnam read the Times, where all of these things are highlighted?

What DeParle fails to acknowledge is that some element of success or failure is volitional or behavioral. People drop out of high school or out of community college; people don’t use contraceptives[4]; and people reject the life structures pursued by the successful. How are more progressive taxation or broader government programs going to counter these behaviors?

Tellingly (but perhaps without having thought through the implications), DeParle remarks that “for most [of the poor kids] the troubles seem to date back generations.” That is, long before economic inequality became a grave issue. Probably the same is true for most of the rich kids: the advantages date back generations.

Perhaps we need to ask follow-on questions. What changed to make long-standing individual failings and family dysfunction into a social disaster? What changed to make conventional bourgeois behavior into such a life advantage? Here we might look for answers in the long-term evolution of the American economy away from heavy industry and toward an economy that disproportionately rewards education. We might also look at the white flight from cities in response to both disorder and integration. Or we can stick with conspiracy theories.

[1] Robert D. Putnam, Our Kids: The American Dream in Crisis (New York: Simon and Schuster, 2015).

[2] Or to synagogue on Saturday, or to the mosque on Friday. Faith doesn’t matter. Apparently what matters is making your kids endure difficulty.

[3] Jason DeParle, “No Way Up,” NYT Book Review, 8 March 2015.

[4] $19.99 for a pack of 20 Durex at Walgreens, so don’t start with the cost of birth-control pills.

Libya.

In 2011, during the “Arab Spring,” an American-led coalition overthrew the dictatorship of Muammar Gaddafi. Libya under Gaddafi had been a society with several potential conflicts kept under control by the dictatorship. People of Arab descent clashed with people of Berber or Turkish descent. The American attack took the lid off this cauldron. Many tribes and towns raised “brigades” of troops to help topple the hated regime. Few of those militias disbanded once victory had been won. Instead, Libya found itself fragmented even while it sought a path to national reunification. The groups quarreled over power and shares of oil revenue.

Things got worse over the next several years. By August 2014, Libyan towns and tribes were choosing sides in a looming civil war.[1] Thus, the mountain town of Zintan recruited many former Gaddafi troops to its militia and declared against radical Islamism, while the coastal town of Misurata allied with the Islamists. As an object lesson to the rest of the country, order had broken down in the capital city of Tripoli: fighting had ravaged the city, electrical power was often interrupted, gasoline was often unavailable, and municipal services had collapsed.[2]

In 2012, one Islamist group, Ansar al-Shariah, participated in the attack on the American mission in Benghazi. Two years later, the group had grown more powerful. Bombings and assassinations had demonstrated its power. Other militias forged alliances with the Islamists.

In May 2014, a former general named Khalifa Hifter managed to gather some forces. He declared war on the Islamists. General Hifter didn’t bother to distinguish between “moderates” and “radicals.” His attacks around Benghazi tightened the bonds between Ansar al-Shariah and the other Islamist groups. Hifter’s attacks added to the polarization of the country between those who opposed Ansar al-Shariah and those who supported the radical Islamists. That polarization had the potential to spread the fighting in Benghazi to the rest of the country.

Among his other acts, General Hifter had closed the existing parliament and ordered new elections. The new parliament was to convene in Tobruk, an eastern city close to the Egyptian border and within Hifter’s territory. It will surprise no one that the Islamists, who had been well represented in the old parliament, declined to go to Tobruk. Instead, they announced that the old parliament would meet in the western city of Tripoli (close to the Tunisian border and within the territory controlled by Misurata). Rival parliaments in a country full of armed men are bad news.

Saudi Arabia and Egypt have both grown alarmed over the Islamists-next-door in Yemen and Libya. The United Arab Emirates, an ally of Saudi Arabia, plays host to a satellite network that broadcasts anti-Islamist material to Libya. Qatar, which has supported Islamist causes elsewhere in the Middle East (See: Your Mind Is In the Qatar), runs a rival network broadcasting to the Islamists. At some point, the Egyptian Army may have to choose between intervention and just trying to seal off the almost 700-mile-long border with Libya.

Back in August 2014, things looked to be sliding out of control. Observers foresaw a likely choice between the restoration of a dictator and letting the place slide into a cauldron of Islamist extremism. Especially in the latter case, Libya’s fate would have wide repercussions in North Africa and the Middle East. The recent Islamist attack on a museum in Tunisia and the nominal adherence of the Libyan Islamists to ISIS add to the urgency.

Neither Saudi Arabia nor Egypt is likely to feel grateful to the United States for having caused this problem in the first place.

[1] David D. Kirkpatrick, “Strife in Libya Could Presage Long Civil War,” NYT, 25 August 2014.

[2] In a curiosity unexplained by the author, “bicycles, once unheard of, are increasingly common.” Unnoticed by the rest of the world, someone is importing bicycles into Libya.

Koch Brothers.

In 1967, Charles (b. 1935) and David (b. 1940) Koch took over the small-time, Kansas-based oil refinery company built from nothing by their father.[1] Since then they have massively expanded the company into a petroleum and related products industrial conglomerate. Each man is now estimated to be worth $42 billion. This gives them a lot of money to play with. Like a lot of other successful Americans, they decided to “give back” by donating to good causes.

What has caused controversy is that their idea of “good causes” isn’t the same as that of Bill and Melinda Gates.[2] The Koch brothers are libertarians who favor a smaller, less intrusive government. They favor legalizing gay marriage (where President Obama’s opinion has evolved to match their own long-standing position) and marijuana (where President Obama’s position has not yet evolved). They also oppose a minimum wage law, food stamps, the Affordable Care Act (ACA), and environmental legislation. If they had a potato farm in Vermont and sent out a monthly Xeroxed newsletter, that would be OK. However, they are fabulously wealthy and have a range of contacts with other fabulously wealthy people who think in the same fashion. So they can raise a ton of money for campaign contributions and political advocacy. Their various funds supported “Tea Party” candidates in 2010, then spent $400 million on the 2012 election and about $300 million on the 2014 elections, and are hoping to spend about $889 million in 2016.

Nominally, Democrats are outraged because of the flaws that this spending reveals in American electoral law. Supreme Court decisions have gravely weakened efforts at campaign finance reform introduced back in the 1970s. The “Citizens United” case is a particular “bête noire.” The chief funding arm of the Koch brothers is “Freedom Partners.” Because it is classified as a social welfare organization engaged chiefly in education on public issues, the donors to “Freedom Partners” are allowed to remain anonymous.[3]

Is it permissible to wonder if the source of the Democrats’ rage—and the complacency of Republicans—is that the Koch brothers’ money is going to Republican candidates? Democrats don’t vocally complain about the money from George Soros or Tom Steyer that flows into the coffers of Democratic candidates or liberal causes. For example, Steyer donated $74 million to Democratic candidates who supported his environmental policies in the 2014 elections.

One puzzle about this spending is whether it actually has any impact. The electorate is pretty much as divided as it was for many decades before the appearance of the Koch brothers.[4] Over the last thirty years the successful presidential candidate has captured an average of 49.74 percent of the popular vote. The best any candidate has done was George H.W. Bush in 1988, who won 53.37 percent. So, at the presidential level, the Kochs seem to be spending an awful lot of money to move a small number of votes. Economists would question the efficiency of this expenditure. At least four of the last seven presidential elections have been won by Democrats.[5]
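The 49.74 percent figure can be checked against the published popular-vote shares of the winning candidates. A minimal sketch (the vote shares below come from standard election returns, not from the original text, and are included here only to illustrate the arithmetic):

```python
# Popular-vote share (%) of the winning presidential candidate,
# 1988-2012 -- the seven elections in the "last thirty years" as of 2015.
winners = {
    1988: 53.37,  # G.H.W. Bush
    1992: 43.01,  # Clinton (three-way race with Perot)
    1996: 49.23,  # Clinton
    2000: 47.87,  # G.W. Bush (who lost the popular vote)
    2004: 50.73,  # G.W. Bush
    2008: 52.93,  # Obama
    2012: 51.06,  # Obama
}

# Average winning share across the seven elections.
average = sum(winners.values()) / len(winners)
print(f"{average:.2f}")  # -> 49.74

# Best single performance in the period.
best_year = max(winners, key=winners.get)
print(best_year, winners[best_year])  # -> 1988 53.37
```

The average works out to 49.74 percent, matching the figure in the text, and 1988 remains the high-water mark of the period.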

It is rare to encounter someone who says that “I was a Democrat until I saw those ads the Koch brothers were running.” People commit to political parties for complex reasons related to life experience, fundamental beliefs, and economic interests. Perhaps the Koch brothers’ money has its greatest impact on the bottom lines of media outlets and political consultancies.

[1] “The Koch brothers’ agenda,” The Week, 13 March 2015, p. 11.

[2] You never hear people getting furious about the Gates Foundation giving too much money to fighting malaria.

[3] Why individual voters should be allowed to remain anonymous behind the curtain of a voting booth, but campaign donors should be compelled to reveal themselves is a question not much addressed.

[4] See: http://en.wikipedia.org/wiki/List_of_United_States_presidential_elections_by_popular_vote_margin

[5] There’s no point in going into the whole Gore v. Bush episode.

A note of caution regarding Iran.

In 2003 American intelligence discovered that Iran was conducting a massive nuclear program. International monitoring of Iran’s program focused on fuel development because fuel facilities create a large footprint that can be tracked through satellite imagery and import records. Meanwhile, a whole series of increasingly severe international sanctions followed. Eventually, in August 2013, Iran was forced to begin negotiations with six major powers.[1] Currently, the six powers want Iran to greatly reduce its uranium and plutonium production for an extended period. This is intended to block an Iranian “breakout” to possession of a nuclear weapon. Those negotiations are supposed to conclude at the end of March 2015.

Under these conditions, it is useful to consider a recent report in the New York Times.[2] Producing potentially weapons-grade material is one thing. Actually turning that material into a weapon is something else. So, does Iran know how to build a nuclear weapon?

The International Atomic Energy Agency (I.A.E.A.), a UN agency, has accumulated a large amount of material showing that Iran has been working hard on warhead design. Iran has dismissed this evidence as forgeries by the Americans and the Israelis. The I.A.E.A. claims to have confirmed the American and Israeli material through other sources.

Knowledgeable people assign priority to the nuclear “fuel” over the “knowledge” factor for a good reason. The fuel is the hardest problem to solve, and knowing how to build a bomb without the means to make one doesn’t constitute much of a threat. However, the Times correspondents point out that there are both bad actors (North Korea) which possess nuclear fuel that they might be willing to transfer, and a black market.[3] Between 2007 and 2009, I.A.E.A. inspectors tried to discover what was happening inside certain laboratories. The Iranians stonewalled the inspectors. Since the beginning of negotiations in 2013, the Iranians have continued to rebuff inspectors interested in the “military dimension” of the issue.

The I.A.E.A. has published a list of a dozen critical technologies for building a warhead. Some of them are dual-use technologies that can apply to legitimate civilian purposes. The I.A.E.A.’s file of secret material on Iran’s nuclear program alleges that the Iranians have pursued work on all twelve. Of the twelve, only the first is actually under discussion; the second and third have been raised but never addressed by the Iranians; the remaining nine have never been discussed at all. The twelve are: (1) electrical detonators, which the Iranians claim were used for civilian purposes, like mining; (2) “explosive lenses”; (3) computer modeling and calculations of a bomb’s release of subatomic particles; (4) a “neutron initiator,” a sort of spark-plug; (5) the technology for a long-distance test-firing; (6) a Uranium-235 metal core for a bomb; (7) the system for fusing, arming, and firing the weapon when it reaches its target; (8) a re-entry vehicle, that is, a capsule that protects the weapon during re-entry into the earth’s atmosphere; (9) a fuel-compression test run on a mock core; (10) a complex program-management organization; (11) procurement activities, in this case run through “front” companies; and (12) the covert acquisition of bomb fuel.

None of these allegations can tell us how far the Iranians may have moved toward being able to build a weapon. The Iranian rejection of transparency creates a terrible dilemma. Keep the sanctions in place and wait? Strike a deal and hope for the best? Bomb them now?

[1] Britain, France, Germany, China, Russia, and the United States.

[2] William J. Broad and David E. Sanger, “What Iran Won’t Say About the Bomb,” NYT, 8 March 2015.

[3] Both some of the former states of the Soviet Union and Pakistan are at least conceivable sources.