Love and Marriage.

Twenty years ago, about 21 percent of married men and 7.5 percent of married women would admit to having had an extra-marital relationship.[1] Today, the rate for men has stayed the same, but the share of married women admitting to an extra-marital relationship has climbed to 14.7 percent.[2] I suppose that counts as some kind of victory for feminism.[3] At the same time, a slightly larger share of Republicans (67 percent) than Democrats (60 percent) report being “very happy” in their marriages.[4]

Scholars have commented on both issues. On the one hand, some suppose that the growing equality of women in the work-place has made married women more financially independent and less fearful of the consequences of discovery. That is, of getting tossed out on their ear and losing their children, their late-model used car, and their Kohl’s charge card. On the other hand, some scholars have suggested that there is more social support for marriage in conservative areas. Religion, family values, and blah-blah-blah. However, a 7 percent difference doesn’t seem that significant.

If we conjecture that the continuing economic inequality between men and women (women earn about two-thirds of what men earn) means that the same share of women as men are unhappy in their marriages, but that only two-thirds of them are able to enter the infidelity market-place, then 21 percent of women are also unhappy in their marriages. If 21 percent of husbands have trespassed beyond the bounds of Holy Deadlock, and 21 percent of wives either have or would like to, then 21 percent of Americans are in unhappy marriages.[5] If you average the Republican and Democratic “very happy in marriage” rates, you end up with about 63 percent. If 63 percent of Americans are “very happy” and 21 percent are very unhappy, then 16 percent are in the middle. (Or they “don’t know” whether they are very happy or unhappy. Probably in the first couple of years of marriage, when such disorientation is common, what with discussions of thread-count versus Sawzalls, how to allocate time between families during the holidays, and whether tuna noodle casserole is better with or without crushed potato chips.)
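For the arithmetic-minded, the back-of-the-envelope can be run in a few lines. This is just a sketch: the only inputs are the survey figures cited above, and the two-thirds earnings ratio is the conjecture doing all the work.

```python
# Back-of-the-envelope check of the marriage arithmetic above (a sketch).
# Inputs are the cited survey figures; the 2/3 earnings ratio is the conjecture.
men_unfaithful = 0.21      # share of married men admitting infidelity
women_unfaithful = 0.147   # share of married women admitting infidelity
earnings_ratio = 2 / 3     # women's earnings relative to men's (the conjecture)

# If only two-thirds of unhappy wives can enter the infidelity market-place,
# the underlying share of unhappy wives is the admitted share divided by 2/3.
women_unhappy = women_unfaithful / earnings_ratio
print(f"Implied unhappy wives: {women_unhappy:.1%}")  # ~22%, close to the men's 21%

very_happy = (0.67 + 0.60) / 2            # average of Republican and Democratic rates
middle = 1 - very_happy - men_unfaithful  # whatever is left over
print(f"Very happy: {very_happy:.1%}; middle: {middle:.1%}")  # ~63.5% and ~15.5%
```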

So, broadly speaking, either you get marriage right or you mess it up.[6] There isn’t much of a middle ground. Generally, almost two-thirds of people get it right and one-fifth get it badly wrong. Some of those go on to get it right the second time. How does this match up against law school admission or the stock market or going to the dog track? I dunno. It’d be worth finding out.

[1] This was long before the whole “Ashley Madison” thing. See: https://en.wikipedia.org/wiki/Ashley_Madison You don’t get points for boasting in an anonymous survey, so either all respondents were being honest in return for a promise of anonymity, or a bunch of people still decided that telling your secrets is a dumb idea. If anything, then, the number of unhappy marriages can only have been equal to or higher than the number reported.

[2] “Noted,” The Week, 19 July 2013, p. 14.

[3] Kind of like poor Al Gore’s people going “We would have won Florida if convicted felons had the right to vote,” until somebody told them to shut up.

[4] “Noted,” The Week, 28 August 2015, p. 14.

[5] Obviously, gay men and lesbians off-set each other in this calculation.

[6] Thus, marriage is “protopathic” (all or nothing), rather than “epicritic” (recognizing fine distinctions). See: Pat Barker, Regeneration (1991). No, really, go read it. Easily one of the finest novels of the 20th Century. The movie is not as good, in my judgment.

Climate of Fear XVIII.

Environmental record-keeping is really pretty new. It’s a function of the rise of both Science and the State during the 19th Century. In the case of California, systematic record-keeping only began in 1895. Beyond the formal records, policy-makers are forced to rely upon scientific studies and interpretations. Some geological studies indicate that “megadroughts” lasting a decade or more occur in the Western United States every 400 to 600 years.[1] We may be at the beginning of such a megadrought. It may last thirty years or more.

Since 2000, the whole of the West has been suffering from drought. This has created many different kinds of problems, from crop failures to massive wildfires.[2] California offers a study of a particularly acute case. During the winters of 2012-2013 and 2013-2014, a large high-pressure area prevented winter storms in the Pacific from blowing in-land to the great Sierra Nevada mountains along California’s eastern edge.[3] Three-quarters of California’s water comes from the snowfall in the Sierra Nevada. Come Spring each year, the snow melts and runs down streams and rivers into the rest of the state. Much of the water also seeps down into the aquifer of the great Central Valley. The high-pressure area cut rainfall in California to 42 to 75 percent of normal.[4] In addition, the absence of the cooling effect of on-shore breezes and storms helped bake California, evaporating much of the water that did reach the ground.

By September 2014, 82 percent of California had been designated as in either an “extreme” or an “exceptional” drought. How to respond? Well, California is both a cluster of major cities and suburbs and a major agricultural state: it produces about 70 percent of the nation’s top 25 fruits, nuts, and vegetables. Accordingly, 80 percent of California’s water goes to irrigating its farmers’ crops. To cut water to farmers is to cut the legs off a major industry. Instead of hitting agriculture, governments tried to limit non-agricultural water use. To begin with, the California Water Resources Control Board began fining people who watered their lawns or washed their cars without a water-saving nozzle on the hose; Los Angeles—harking back to the oil shock of 1973—limited people to watering their yards on alternate days. That didn’t have much effect: huge numbers of urban consumers pushed back against such restrictions.

Faced with limits on taking water from rivers, farmers turned to drilling into the aquifer. That’s a short-term—and destructive—response. There is a limited amount of water in the aquifer. A “water rush” equivalent to the Oklahoma “land rush” will privilege those with the most money for drilling operations and force smaller farmers to the wall.

In 2014 the state legislature passed a bill to regulate the use of groundwater (i.e., the drilling). This enraged farmers, who saw the groundwater as their own property. The basic question is whether a lot of people who collectively use relatively little water (at most 20 percent of the total) should suffer hardships for the sake of relatively few people who use a great deal of water in order to produce valuable products. What if it were a question of electricity use, where urban areas consume far more than rural areas do? These questions aren’t just about California. They go to how we think about the environment and the economy in general.

[1] This cuts across the argument of supporters of anthropogenic climate change, without invalidating it.

[2] Disclaimer: my son is a National Forest Service wildlands fire fighter. Actually, it isn’t a disclaimer. I just want everyone to know that I’m proud of my boy for doing a hard, demanding, and dangerous job when most kids want careers with Wall Street or Disney World or the US Gummint.

[3] “California’s epic drought,” The Week, 26 September 2014, p. 9.

[4] Apparently, no one attributes the high-pressure ridge to global warming. Inevitably, people make do with claims that “global warming” is intensifying the effects of the drought. Arguably, this is what climate-change denial on one side elicits from the other side: potential overstatement.

Query.

Why does my post called “White flight from Baltimore” draw so many hits/visitors? Is it circulating on some kind of subterranean network? No one comments. No one “likes.” But it keeps popping up on my list of views, right after “Archives.” So, I’m puzzled.

The Roosevelts versus Ronald Reagan.

Back at the start of the Twentieth Century, Theodore Roosevelt posited that big business and a foreseeably big labor movement would require a big government to balance their power and solve complex new problems. For a long time, it appeared that “the Republican Roosevelt”[1] had been prescient. The New Deal, launched by his cousin, the Democrat Franklin D. Roosevelt, greatly expanded the government’s role in the economy. That trajectory continued until the election of Ronald Reagan in 1980. Since then, Republicans have inveighed against the expansion of state power (unless national security can be invoked). What do Americans think about this issue in the early Twenty-First Century? A January 2014 opinion poll captured a fundamental division of opinion.[2] A majority (57 percent) agreed with the statement that “we need a strong government to handle today’s complex economic problems.” However, a very substantial minority (41 percent) rejected that idea in favor of letting a free market operate without “the government being involved.” To belabor the obvious, 57 + 41 = 98 percent of Americans. There is virtually no uncertainty in the minds of Americans about this issue, no mushy middle ground on which compromise is possible. Two tribes confront each other. In Europe, on the other hand, there is a broad consensus on the role of government in the economy.

This has important implications for the economically battered ordinary American. In 2010, the median wage was $26,364. After adjusting for inflation, this was the lowest real median wage since 1999.[3] In 2014, American median net worth per adult hit $44,900. Japan, Canada, Australia, and many Western European countries ranked ahead of the United States, which came in 19th.[4] Apparently, if Americans are offered a choice between earning another $20,000 a year and getting another month of vacation, they would take the pay.[5] One could interpret this as Americans being workaholics. One could also interpret it as a sign of the economic stress under which many Americans are operating.

The question is what to do about this pathetic performance. The opposing positions generally pit redistribution through taxation policies (i.e. “strong government”) against pro-growth and social mobility policies (i.e. “let the market operate”).

If you combine federal, state, and local taxes, Americans are among the lowest-taxed people in the developed world. Here the US ranks 31st, trailing most of the countries with higher median net worth.[6] Where does American federal spending go? Almost two-thirds of it (65 percent) goes to three categories: Social Security (24 percent); Medicare/Medicaid/CHIP (22 percent); and defense (19 percent).[7]

None of this settles the question of which group is correct. Perhaps neither one is entirely correct: Europeans, for all their strong governments, are laboring under an “austerity” that would never be tolerated in the US. Still, it does suggest that there is a core dispute that is more powerful—and important—than the “culture wars” that obsess the media and Democratic activists. Hence, Bernie Sanders.

[1] As Yale historian John Morton Blum called one of his books.

[2] “Poll Watch,” The Week, 17 January 2014, p. 17.

[3] “Noted,” The Week, 4 November 2011, p. 18.

[4] “The bottom line,” The Week, 20 June 2014, p. 34.

[5] “Poll Watch,” The Week, 24 July 2015, p. 15.

[6] “Noted,” The Week, 25 April 2014, p. 16.

[7] “Noted,” The Week, 25 April 2014, p. 16. It is worth pointing out that most countries don’t spend anything like the share of their budgets on defense that the US does. Instead, they rely on the US in an emergency. That frees up a lot of resources for social programs. Then, too, the federal nature of American government means that much spending is done by state and local authorities. Some European countries, in particular, have a more centralized system.

Command Crisis.

When George C. Marshall became Chief of Staff of the United States Army in 1939, he perceived a striking mismatch between the officer corps and the grim international situation. The Army had been reduced to a small size after the First World War; America had been at peace for twenty years; promotion had been glacially slow; and the upper ranks of command were clogged with elderly men who lacked energy and imagination. America would be endangered, at the very least, by the looming European war, and might well be drawn into the fighting. To revitalize the Army, Marshall ruthlessly purged the officer corps. Six hundred senior officers were removed from command or nudged into retirement before Pearl Harbor. Since the world crisis led to a dramatic expansion of the armed forces, many more than six hundred younger men rapidly rose to high command. (The most dramatic example is Dwight Eisenhower, who went from colonel to lieutenant-general in just over a year.) Marshall demanded more than energy and imagination, however. He also demanded ruthless effectiveness. During the Second World War, sixteen division commanders and five corps commanders were relieved of command when they failed to perform up to standard. The rise, fall, and resurrection of George Patton might be offered as a book-end to that of Eisenhower.[1]

Thus, it can be argued that one determined man took advantage of a grave crisis to re-make a hide-bound bureaucratic institution.[2] The failures in Vietnam and in the second Iraq War suggest that something went awry after Marshall and his ruthless followers had faded away. Slowly and in stages,[3] the Army reverted to a cautious bureaucracy, self-protective rather than self-critical. One sign of this change is the reluctance to remove failed commanders. Relief is taken as a sign of institutional failure because it suggests to critics that senior commanders made a poor choice in the first place. Short command tours reinforce this trend: a duff leader will cycle out in a couple of years anyway, so why rock the boat? Now it takes really egregious personal misconduct, rather than professional incompetence, to bring relief.[4] Will it take another existential crisis to bring new life to the Army?

This analysis strikes a chord with many observers.[5] What it ignores is the malign effects of civilian political meddling and incompetence. Army Chief of Staff Eric Shinseki didn’t under-estimate the number of troops needed to occupy a defeated Iraq; Secretary of Defense Donald Rumsfeld did. Tommy Franks didn’t disband the army of Iraq and order the purge of Baath Party members from public institutions; Paul Bremer did. A critical examination of the failings of the military can’t stand alone in the effort to better defend America. We have to be equally honest and critical in examining the political institutions to which the military is subordinate. Nor should the examination be a partisan witch-hunt: President Obama prolonged a war in Afghanistan in which he plainly did not believe. There is a lot of blame to go around.

[1] See: Forrest Pogue, George C. Marshall, 4 vols. (New York: Viking, 1963-1987); Stephen Ambrose, Eisenhower, vol. 1 (New York: Simon and Schuster, 1983); Carlo D’Este, Patton: A Genius for War (New York: HarperCollins, 1995).

[2] Thomas E. Ricks, The Generals: American Military Command From World War II to Today (New York: Penguin, 2012). See also: Anton Myrer, Once An Eagle (1968).

[3] Except for Vietnam, from the end of the Korean War to the first Iraq War, the Army was “at peace.” Even Korea, Vietnam, and Iraq did not amount to existential struggles on a par with the Civil War or the Second World War.

[4] See, for example, Stanley McChrystal’s ill-considered statements in front of a reporter.

[5] See, for example, Max Boot, “Bureaucrats in Uniform,” NYT Book Review, 9 December 2012; Thanassis Cambanis, review of Fred Kaplan, The Insurgents, NYT Book Review, 27 January 2013.

Annals of the Great Recession XI.

I saw the Iraq War as an obvious act of stupidity even before we attacked in Spring 2003. So, in 2008, I voted for the candidate who had opposed it, Barack Obama. I voted for him in spite of his obvious weaknesses: he was as green as grass in politics, he had never run anything, he didn’t know many people in Washington, and he had some dopey ideas. My assessment of President Obama’s failings is amply borne out by Ron Suskind’s scathing account of how the President and his advisers made economic policy in the first two years after he reached the White House.[1]

Undoubtedly, Obama inherited an economic disaster from the George W. Bush Administration. However, his background and range of contacts left him ill-positioned to deal with the immense problems on his plate. First, the president believed in the power of rhetoric; he almost seemed to believe that talk and action were identical. Supporters have argued that he’s the first president in a while to speak in full sentences and paragraphs, and that this doesn’t mesh well with sound-bites. In reality, the trouble was that much of his discourse seemed to have been picked up in Chicago rec-league basketball. He disses people who disagree with him.[2]

Second, the president turned out to be a poor judge of people and had few close advisers to keep him from going into the ditch at the first opportunity. Rahm Emanuel, who served as his first chief of staff (and who recently squeaked through to re-election as mayor of Chicago), and Lawrence Summers, who headed his National Economic Council (after a stint as President of Harvard that ended when he vexed the faculty), were imperious, abrasive men who rubbed people the wrong way as a first order of business in any meeting. Tim Geithner, his first Secretary of the Treasury, was consistently suspected of mouthing the Wall Street view.

Third, unlike his immediate predecessor, President Obama could not pull the trigger on any issue. Instead of deciding, he sought consensus. Endless debates went on, but the President refused to choose one option and then to say “it’s my way or the highway.” Who ever crossed Richard Nixon without landing on the sidewalk with his suit in tatters? It’s a short list.

Many of his own subordinates saw through him from the start. Famously, Summers told another official: “We’re home alone. There’s no adult in charge. Clinton would never have made these mistakes.” Geithner has been accused of out-right insubordination, yet stayed at Treasury as long as he chose.

The “friendly opposition” within the Democratic Party would argue that, after the rough ride of his first two years, Obama began to understand how things should operate. He got rid of his early hires and started to make decisions. So they say. With a year and change to run on his second term, it isn’t clear that much has changed.

Still, what was the bigger disaster for America: Obama’s mismanagement of the economy or the Iraq War? Somebody in Washington needed to get drilled for the Iraq War, not just the men and women who fought there. John McCain and Hillary Clinton had to pay a price at the voting booth. What are we supposed to do? Let bygones be bygones after each new train-wreck engineered by the usual suspects who populate American politics?

Finally, has Obama learned anything? The answer to that question goes to the credibility of the Iran deal.

[1] Ron Suskind, Confidence Men: Wall Street, Washington, and the Education of a President (New York: HarperCollins, 2011).

[2] See: “Stuff my president says.”

Annals of the Great Recession X.

Jeff Madrick, once a New York Times economics columnist, argues that America’s economic decline can be traced to the 1970s.[1] Beginning with the Great Depression, governments had imposed tight controls on American financial markets. These controls had made banking boring. That was a good thing for anyone who had ever lived through a financial panic (or watched that scene in “It’s a Wonderful Life” where Jimmy Stewart abandons his honey-moon to save the savings-and-loan when the town bank collapses). However, the controls also restricted the opportunities for profit in one sector of the economy. Economists at the University of Chicago, inspired by the writings of Milton Friedman, pushed an “extreme free-market ideology.” Embraced by greedy financial industry leaders, then by the Republican Party in the era of Ronald Reagan, and later by Democrats as well, these ideas led to the de-regulation of the American financial industry. “And Hell followed with him.”[2]

Reckless lending became ever more deeply entrenched among bankers. Successive crises of ever greater severity sprang from these practices: the Latin American lending binge; wild real estate speculation in the 1980s; the “dot-com” bubble; and then the nightmarishly complicated real estate investments that ended in the financial crisis of 2007-2008. The government repeatedly had to step in to bail out reckless bankers in order to avert an even worse disaster for the whole economy. Thus, profits were privatized while losses were socialized. Not exactly what Milton Friedman had in mind.

To a historian, some of Madrick’s argument appears kind of rickety. He, among others, appears to believe that the American prosperity and global economic domination of the period from 1945 to 1975 were somehow “normal.” However, it is at least equally possible to regard this situation as “abnormal” and bound to end. The financial industry (or just “greed”) can hardly be blamed alone for the complex changes that have undermined America’s position.

Then, for purposes of analysis and argument, he separates out free-market thinkers and financial industry leaders. However, in real life they existed within and responded to an evolving context of beliefs and influences. Thus, the inflation of the Vietnam years intersected with government regulations on the interest banks could pay depositors. People wouldn’t keep money in banks unless they could get a higher rate of interest. So, the interest rate regulations had to be relaxed.

Then, he appears to believe that “greed” drives the elite, but that the same behaviors by people lower on the income pyramid are unexceptionable. In 1970, 381 major strikes hit American companies as workers drove for higher pay and benefits at a time when foreign competition had begun to exert heavy pressure.[3] Why is one act greed, the other not?

Then, a lot of the American economy was deregulated from the 1970s on. Take airlines as an example. Between 1980 and 2009, inflation-adjusted air fares fell by fifty percent.[4] Air travel increased, but crashes did not. (I’ll grant you that the air travel experience now reminds me of a trip I once took on a Mexican bus.)

So, the part is not the whole. Blanket statements about regulation don’t get us very far.

[1] Jeff Madrick, Age of Greed: The Triumph of Finance and the Decline of America, 1970 to the Present (New York: Alfred A. Knopf, 2011).

[2] Revelation, 6:8.

[3] See: “American union, stay away from me uh.” March 2015.

[4] See: http://www.theatlantic.com/business/archive/2013/02/how-airline-ticket-prices-fell-50-in-30-years-and-why-nobody-noticed/273506/

An Israeli Dilemma.

In 1958, Leon Uris wrote Exodus. While portraying the birth of the state of Israel, he imagined an Israel-yet-to-be: a secular, socialist-inspired Jewish state living on terms of amity with the Arabs. Today, that vision seems far-fetched.

In the Six-Day War of 1967, Israel over-ran the Gaza Strip, the Sinai Peninsula, the West Bank, and the Golan Heights.

What to do with the conquered territories? The Sinai was traded away in exchange for peace with Egypt. The Syrians lacked the strength to take back the Golan, even before the current massive uprising against the Assad regime. Gaza and the West Bank, however, were chock-full of Palestinian refugees from the creation of Israel in 1948. One of the founding illusions of Zionism had been that Palestine was “a land without a people for a people without a land.” In 1948, many Muslims had fled the fighting, or had answered an appeal from Arab leaders to clear the path for Arab armies, or had been driven out by Israelis by means of exemplary massacres. They had never been allowed to return. Now Israel had over-run the places where the Palestinians had taken refuge.

What course would Israel follow? One option would have been to create a Palestinian state consisting of the West Bank and the Gaza Strip.[1] Another option would be to extend Israel’s territory into the newly-conquered lands; the loudest exponents of this policy were to be found among ultra-Orthodox religious zealots. The latter was the course Israel pursued, through the creation of settlements. Settlements—both “legal” and illegal under Israeli law—began to proliferate. Successive governments turned a blind eye to the settlements and to their disastrous impact on Israel’s international situation. While Israel has not—yet—annexed the West Bank and has withdrawn from Gaza, both the settlements and the inferior legal status of the Palestinians living under what is effectively Israeli rule give the country something of the appearance of Prussia on the Jordan.

Why? Leftist critics argue that the country has come to be dominated by right-wing voters who pander to religious parties and are deeply hostile to the Arabs, both Israeli Arabs and Palestinian Arabs, and that the officer corps leans ever more heavily toward Orthodox Jews with a right-wing political bent.[2] Implicitly, a return to Israel’s leftist roots would facilitate a solution to the problems facing the country.

Attractive though it is, this interpretation ignores some realities. A return by Israel to its roots will not undo the radicalization of opinion among many Muslims, Palestinian or not. Hamas displaced the elected Palestinian Authority from control of Gaza, then turned the enclave into a base for attacks on Israel.[3] Hamas does not accept Israel’s right to exist, regardless of where its borders are drawn. In the future, Hamas could come to dominate the West Bank, or at least be tolerated there by a sovereign Palestinian government. In Lebanon, Hezbollah is an Iranian client. Egypt is teetering on the brink of a civil war between Islamists and authoritarians. It isn’t entirely clear what kind of inroads ISIS could make in Jordan. It would be hard for any military adviser to argue that Israel should surrender strategic depth in return for promises of future peace. Not all problems have solutions.

[1] Between 1948 and 1967, this had been an option available to Egypt and Jordan. However, it appears that Egypt and Jordan were more interested in maintaining the Palestinians in misery as a stick with which to beat Israel in the square of international opinion than they were in actually creating a Palestinian homeland. Now the ball was in Israel’s court.

[2] Gershom Gorenberg, The Unmaking of Israel (New York: HarperCollins, 2011).

[3] Last summer’s war began over rockets fired into Israel, but Israel’s response soon uncovered a network of tunnels driven into Israel for what could only be offensive purposes.

American Opinion and the Confederate Battle Flag.

In the 1950s and 1960s the Civil Rights movement reached one of its peaks. American public opinion turned against segregation, overt racism, and the violent defense of white dominance. This peak also coincided with the centennial of the Civil War. I haven’t seen much scholarly work (but maybe I haven’t looked hard enough) on how white Southerners sought to commemorate the “American Iliad.”[1] Were little Confederate flags placed on the graves of veterans in cemeteries? Were there speeches on “Confederate Memorial Day”? Were more streets and highways named for Confederate generals? In any event, I conjecture that a Civil War Movement arose to counter the Civil Rights Movement. One aspect of that countermovement appeared in laws incorporating the Confederate battle flag into the state flags of some Southern states, or in the display of the flag on government buildings.

Fifty years later, much had changed. In late June and early July 2015, a large majority of Americans (64 percent) opposed having the Confederate flag fly over public buildings, while 21 percent thought that it should be allowed to fly; the remainder weren’t sure.[2] Most of those who favored flying the flag lived in Southern states. Two weeks later, a majority of Americans (57 percent) accepted that the Confederate battle flag is a symbolic expression of “Southern pride” rather than a racist affirmation. However, a majority of Americans still supported hauling down the flag on public property. Among the majority viewing the flag as a symbol of Southern pride were 75 percent of Southern whites. By contrast, 75 percent of Southern blacks saw it as chiefly a racist statement. Deep divisions exist in the South over the Confederate flag.[3] Yet lots of Southern whites appear to recognize that what is a symbol of pride to them is also deeply offensive to African-Americans. (See the statement by South Carolina governor Nikki Haley.) This might suggest an important, but hard to define, psychological shift among Southern whites. Still, opinion polls don’t always dig too deep. What did the other 25 percent of Southern whites believe about the flag? That it was a racist affirmation? If so, did they like that or did they hate it?

Why does “Southern” appear to mean “Southern and white”? Is there a regional culture shared by whites and blacks? Looking at Farm Security Administration photographs from the Thirties and Forties might lead you to think so. See the remarkable on-line exhibition at: http://www.loc.gov/exhibits/bound-for-glory/ So might the history of Zydeco.[4] See: https://www.youtube.com/watch?v=fa8vyTfugcI Shooting people in church might fall beyond the pale in such a shared culture. Or perhaps it awakens memories of the bombed church in Birmingham, Alabama, many years ago.

There is no question of the Confederate flag flying over federal buildings, but each state has the right to choose what flags fly on state government grounds. Another problem left to later generations by the Founding Fathers. What did they expect us to do, figure it out for ourselves?

[1] Charles Roland, An American Iliad: The Story of the Civil War, 2nd edition (Lexington: University Press of Kentucky, 2004).

[2] “Poll Watch,” The Week, 3 July 2015, p. 17.

[3] “Poll Watch,” The Week, 17 July 2015, p. 17.

[4] See: https://en.wikipedia.org/wiki/Zydeco

Power Surge.

In 2003 the United States attacked Iraq. Swift defeat of Iraq’s conventional forces then gave way to misstep after misstep. An insurgency arose among the minority Sunnis, deposed from their long dominance by the American invasion. The Shi’ite majority demanded that the Americans leave as soon as possible so that they could get down to the business of governing the country and settling scores. Al Qaeda in Iraq sought to foment a civil war that would make Iraq ungovernable and force an American evacuation. Foreign fighters poured in to serve with Al Qaeda. By 2007 a disaster of epic proportions loomed before the Americans.

Then things began to turn around. Lower-level American commanders began buying off Sunni insurgents in their areas of operation. Many of the Sunni insurgents got fed up with the Al Qaeda fanatics. Together, these forces led to the “Sunni Awakening” that markedly reduced the level of violence from early 2007 onward. American special operations troops focused their efforts on killing the Al Qaeda fanatics. Civilian and military casualties began to fall. If only in comparison to the chaos visited on the country between 2003 and 2007, Iraq began to move toward something like a functional state.

Then and later, considerable energy has gone into myth-making about the turn-around. The commonly accepted—because commonly told—narrative is that the Bush administration belatedly developed a coherent and workable plan for victory in Iraq; General David Petraeus helped develop and then implemented an effective counter-insurgency strategy; and the “surge” of troops greatly enhanced security so that some form of national reconciliation could take place.

In fact, the Bush White House’s “National Strategy for Victory in Iraq” had neither substance nor application. “Victory” had been redefined to mean bringing down the level of violence to a point where responsibility could be handed off to the government of Iraq with something approaching a straight face.

In fact, the administration and the Pentagon were in search of a “hero” to revive American morale. They found that hero in General Petraeus, adept at both war and image-management. The buying-off of Sunni insurgents and the increasingly effective work of the special operations forces were well underway before General Petraeus arrived in Iraq. He endorsed and broadly applied the methods already developed.

In fact, although Chelsea Manning has been sentenced to 35 years in prison for giving classified documents to WikiLeaks and Edward Snowden has been hunted across the globe for having revealed NSA spying on Americans, officials in the Defense Department appear to have provided favored authors with a trove of classified documents.

In fact, what the United States achieved in Iraq was not victory, but avoiding defeat. By avoiding defeat the United States has also avoided any honest reckoning with the causes and consequences of a disastrous adventure that spanned two different presidential administrations. “The fraud is that a 20 year military effort to determine the fate of Iraq yielded something approximating a positive outcome.”

So says Andrew Bacevich, a professor of political science at Boston University, a combat veteran of Vietnam, a retired Army colonel, the grieving father of a son killed in action in Iraq, and a bitter and clear-eyed critic of recent American foreign and military policies.[1]

[1] Andrew Bacevich, “Avoiding Defeat,” New York Times Book Review, 10 February 2013, pp. 20-21. Professor Bacevich reviewed Michael R. Gordon and Bernard Trainor, The Endgame: The Inside Story of the Struggle for Iraq, From George W. Bush to Barack Obama (New York: Pantheon, 2012); and Stanley McChrystal, My Share of the Task: A Memoir (New York: Penguin, 2013).