Nothing to CLAP about.

There is an exam called the Collegiate Learning Assessment Plus.[1] The exam measures how much college students gain between the freshman and senior years. It assesses communication skills (reading, writing), analytical reasoning, and critical thinking. Thus, it is applicable across disciplines and measures the “transferable skills” that have long been touted as the real value of a college education.

The results of the CLA+ for 2013-2014 give cause for both hope and fear.[2] Of freshmen who took the test, 63 percent scored below the Proficient level and 37 percent scored Proficient or higher. Of seniors who took the test, 40 percent scored below the Proficient level and 60 percent scored Proficient or higher. Thirty-one percent of freshmen enter college at the Below Basic level, but by the senior year this share has been reduced to 14 percent. Similarly, 32 percent of freshmen score at the Basic level; by the senior year this share has been reduced to 26 percent, even as 17 percent have moved up from Below Basic to at least Basic.

So, the good news is that colleges take the 37 percent who are already proficient and make them more proficient, and they take another 23 percent who are not proficient and raise them to proficiency. In sum, 60 percent of college students demonstrably benefit from attending college.[3]
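
The cohort arithmetic above can be checked with a quick back-of-envelope calculation (the score-band percentages are those reported for the test):

```python
# CLA+ score-band shares (percent), as reported for 2013-2014
fresh = {"below_basic": 31, "basic": 32, "proficient_plus": 37}
senior = {"below_basic": 14, "basic": 26, "proficient_plus": 60}

# Share who climbed out of Below Basic by the senior year
moved_past_below_basic = fresh["below_basic"] - senior["below_basic"]    # 17

# Share raised to proficiency during college
newly_proficient = senior["proficient_plus"] - fresh["proficient_plus"]  # 23

# Share still below Proficient at graduation
still_deficient = senior["below_basic"] + senior["basic"]                # 40
```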

What’s the bad news? Well, 14 percent of seniors graduate with a Below Basic score and another 26 percent graduate with a Basic, but below Proficient, score. That’s 40 percent who come out of college deficient in the intellectual skills assessed by the CLA+ exam. That is a huge waste of resources. Of late, much attention has focused on graduation rates and time-to-graduation. Here, the United States has lost its world-leading position and has fallen behind some other countries. The results of the CLA+ exam suggest that the problem is actually worse than it appears, because 40 percent of college graduates don’t actually function at a BA level.

There’s a part I don’t understand, but which I will report. Test scores fall in a range between 400 and 1600. The average freshman score is 1039; the average senior score is 1128. The average improvement is thus 89 points. If, for the sake of argument, you subtract the 400 points you get for being able to sign your own name, then the average freshman score is 639 and the average senior score is 728. An 89-point increase on that base amounts to just under 14 percent.
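
For what it is worth, the arithmetic works out as reported; a minimal check:

```python
# CLA+ scores range from 400 to 1600; averages as reported in the text
FLOOR = 400
freshman_avg, senior_avg = 1039, 1128

gain = senior_avg - freshman_avg         # 89 points
adj_freshman = freshman_avg - FLOOR      # 639
adj_senior = senior_avg - FLOOR          # 728
pct_gain = 100 * gain / adj_freshman     # about 13.9 percent
```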

Still, these reports raise several questions. Why do almost two-thirds of freshmen start college below the level of proficiency for their group? Furthermore, many students do not go on to college at all. This suggests that K-12 education is failing many students. It also suggests that an increasingly remedial function is being forced on colleges. (At the same time, colleges are being criticized for loading students and parents with debt and for not graduating students in a timely fashion.)

Is a 14 percent average improvement enough to justify the cost of four years of college? Does the 14 percent improvement push students over some undefined threshold between incompetence and competence? If it does, then the money probably is well spent.

It’s just my opinion, but professors are the least qualified to understand the nature of the problem. Their children grow up with books, pictures on the walls, many kinds of music playing, trips to cultural events rather than Disney World, experiences valued over possessions, and parents who work all the time. So, their children are usually successful in school and in life.

[1] This is abbreviated as CLA+ so that anxious parents will not be overheard asking other parents “So, how did your kid do with the CLAP?”

[2] Douglas Belkin, “Skills Gap Found in College Students,” WSJ, 17-18 January 2015.

[3] Maybe all of them do, without that showing up in the test scores. Maybe they are marginally more attuned to key skills without quite getting out of the bottom category.

Legacies of the Violent Decades.

The 1970s and 1980s were violent decades.[1] The rate for all violent crime rose from about 500/100,000 people in 1975 to almost 800 in 1991. The robbery rate rose from about 200/100,000 people in 1975 to about 270 in 1991. The rate for aggravated assaults rose from about 230/100,000 people to about 450 in 1992. From 1975 through 1991 the murder rate bounced around between 8 and 10/100,000 people. In 1990 there were 2,245 homicides in New York City (more than six a day) and 474 homicides in Washington, DC (more than one a day).

State and federal governments lashed out against this spike in crime with the weapons at hand. The federal government directed billions of dollars to the states to increase the number of police and to build prisons to house the people the police caught. Sentences were lengthened for some crimes and mandatory minimums were imposed to limit the freedom of judges. Between the early 1970s and 2009 the number of people in state or federal prisons quadrupled to about 1.5 million people.

Then the rates of violent crime began to drop. The rate for all violent crime fell by 51 percent, to a level 25 percent below the 1975 rate. The rate for aggravated assault fell from its 1992 peak by 48 percent, roughly back to where it had been in 1975. The rate for robbery fell from its 1991 peak by 60 percent, to a level 51 percent below the 1975 rate. The murder rate fell from its 1992 peak by 41 percent, to a level slightly below its 1975 rate. In 2014 there were 328 homicides in New York City (less than one a day) and 104 homicides in Washington, DC (two a week).
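
The per-day and per-week homicide figures can be verified directly from the city totals cited above:

```python
# Homicide totals cited for New York City and Washington, DC
nyc_1990, nyc_2014 = 2245, 328
dc_1990, dc_2014 = 474, 104

nyc_1990_per_day = nyc_1990 / 365   # about 6.2 a day
nyc_2014_per_day = nyc_2014 / 365   # about 0.9, i.e. less than one a day
dc_1990_per_day = dc_1990 / 365     # about 1.3, more than one a day
dc_2014_per_week = dc_2014 / 52     # 2.0, i.e. two a week
```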

This remarkable change has begun to spark debate, just as did the remarkable spike in violence in America before 1990. One question is what has happened since 1990 to bring down the rate of violent crime. Experts are not entirely sure how to answer this question, but they do agree on some things. First, targeted policing is a big part of the answer. New York City Police Commissioner William J. Bratton introduced the use of computer data and crime mapping (“CompStat”) to identify targets for police efforts. Police began to concentrate their efforts on these identifiable trouble spots. Drugs used to be sold right out on the street; aggressive policing pushed the sales indoors. That didn’t do much to cut down on drug use, but it did make drive-by shootings a lot less lethal. The “broken windows” strategy came to be widely adopted. Second, tougher sentencing and mass incarceration played a lesser role than advocates expected.[2]

A second question is what to do going forward. On the one hand, what is to be done with the large numbers of people still locked up from the previous decades? If they are released, will they just return to their old ways? Can people convicted of non-violent crimes be safely released and better served with drug-treatment programs? Should the length of sentences be reduced?

On the other hand, should the aggressive policing that accompanied the reduction in crime be scaled back? When crime rates are high and people are afraid, they are willing to tolerate aggressive forms of policing that they will not tolerate when crime rates are low and people feel secure. “Stop and frisk” has come under heavy fire. It has been argued that this kind of policing—which may have created the situation in which Eric Garner died—has begun to alienate law-abiding people in the communities on which the police focus. Can the police operate in an environment in which they are widely viewed as the enemy?

See: “The Senator from San Quentin”; “Military Police”; “Death Wish.”

[1] Erik Eckholm, “With Crime Down, U.S. Faces Legacy of a Violent Age,” NYT, 14 January 2015.

[2] Which is not the same as saying that they played no role.

The other land of liberty and opportunity.

The terrible events in Paris in early January 2015 have inspired all sorts of questions. What are the limits of “free speech”? Why did the security services fail to discern the threat? Perhaps most importantly, why do some French Muslims become radicalized?

During the 19th Century the French population grew at a pace (40 percent) much below that of the rest of Europe (100+ percent). This population gap began to have an effect on the supply of workers. In the late 19th and early 20th Centuries the French began to make up the difference by encouraging immigration from countries like Italy, Poland, and Spain. By the eve of the Great Depression, immigrants had increased from 1 percent of the population to 3 percent. The Depression caused the French to seek to reduce the number of immigrants in the country. In the aftermath of the Second World War, however, France made it national policy to encourage the immigration of guest workers from its colonial empire. The collapse of the French position in Algeria in the early Sixties then brought a flood of refugees (both Algerians of European descent and Algerian Muslims who had been loyal to France in the Algerian war). This population movement totaled well over a million people in the space of a few years.

From this point onward the question of immigration became politicized and tense. For one thing, the “pied noir” refugees from Algeria and the “harkis” competed for the same jobs at the bottom of the French economy, spawning a bitter hostility. For another, the great economic slump of the Seventies intensified the competition for jobs. France put a stop to immigration in 1974, but the immigrants already in the country put down roots rather than going “home.” They sent for their families before French laws could prohibit this. Consequently, the immigrant population actually increased in size at a time when France sought to limit it. For a third thing, the French accepted the sociological theory of a “threshold of tolerance,” beyond which the number of unassimilated immigrants worked to disintegrate society. This theory had a particular resonance because of the “French social model.”

That model holds that there is a single French national culture and that everyone has to assimilate to it to be French. Anyone who is not French is “foreign” (étranger). Formally, “étranger” refers to anyone without French citizenship, but informally it includes anyone who refuses to become “French.” The French reject the Anglo-American model of multi-culturalism. They carry this to the point of refusing to gather statistical data on the ethnic or national origins of French citizens. Rough estimates, done on the basis of the number of “étrangers” and their descendants living in France, put the number of non-French within the Hexagon at 14 million, or 25 percent of the population. Of these, an estimated 5-6 million are Muslims.

It is open to question whether the Muslim immigrants have assimilated to French culture. On the one hand, they undoubtedly have: they eat pork, smoke, drink, and have premarital sex, just like ordinary “French” people of their generation. On the other hand, they are walled off in ethnic ghettoes on the outskirts of the major cities (especially Paris). These areas are marked by very high unemployment (40-50 percent), crime, and drug use. At the same time, one can wonder whether the French have made much of an effort to assimilate the immigrants. The inhabitants of these ghettoes are often third-generation residents of France with little knowledge of, or interest in, their “homelands.” There is a good deal of evidence that French employers prefer to hire people with lighter skins and French-sounding names. And former President Nicolas Sarkozy may have been expressing a common sentiment when he referred to the rioters at the end of 2005 as “racaille” (scum). See: The Week, 2 December 2005, p. 15.

Can’t buy me love–or happiness.

Does money buy happiness? Yes—up to a point.[1] All sorts of other factors also come into play, but nothing is as important as national income in determining responses to “life satisfaction” surveys. A decade of surveys organized by a Dutch social scientist has found that “most people worldwide say they are fairly happy” and that people in more developed countries are happier than people in less developed countries (i.e., more development would increase happiness). However, once a country reaches the $20,000 per capita income level, advances in national income cease to produce much gain in life satisfaction or happiness. Thus, “happiness” or “life satisfaction” has not increased in the United States since the mid-Fifties, although there has been an 85 percent increase in the real value of family incomes (from $24K in 1953 to $51K in 2001). About 53 percent of Americans described themselves as “very happy” in 1957; about 47 percent did so in 2000. Curiously, the material ambitions of Americans seem to have skyrocketed in recent years. In 1987 surveyed adults estimated that an income of $50K/year would be enough to “fulfill all your dreams”; by 1994 that figure had shot up to $102K, although prices had not doubled. (NB: All of a sudden Americans wanted things that were really expensive? Or college tuition sticker-shock had hit?)

What is “happiness”? One Yale political scientist (Robert Lane) argues that “happiness is derived largely from two sources—material comfort, and social and familial intimacy…” These needs tend to be out of whack. In “less developed countries…social ties are often strong and money is scarce…” People have social intimacy, but no material comfort. “Economic development increases material comfort, but it systematically weakens social and familial ties by encouraging mobility, commercializing relationships, and attenuating the bonds of both the extended and the nuclear family.” Initially, “the gains in material comfort more than outweigh the slight declines in social connectedness.” At some point the competing needs for comfort and intimacy balance, leaving people at their maximum point of “life satisfaction” or “happiness.” Western culture has a deeply entrenched need to produce and consume, to generate prosperity. It is what made the West the leader in economic development, and it continues to hold sway long after the real need to produce has passed. Eventually, therefore, “the balance tips and the happiness-reducing effects of reduced social stability begin to outweigh the happiness-increasing effects of material gain.”

Still, there are places that are poor and unhappy, less poor and happy, and rich and happy, but there are no places that are rich and unhappy. The places that were poor and unhappy ten years ago were Ukraine, Russia, Belarus, Armenia, Azerbaijan, Bulgaria, and Latvia; Estonia and Lithuania came pretty close to falling into this category. In short, people were really miserable in the ruins of the old Soviet empire. Conversely, people who lived in the old American empire (the US, Canada, Western Europe, Japan, Australia) tended to be pretty happy. (Hence the outcome of the Cold War.) The highest levels of “life satisfaction” seemed to be found in politically insignificant countries with per capita incomes between $17,000 and $25,000, located in more northern climates (Finland, Sweden, Denmark, Iceland, Switzerland, Netherlands, Luxembourg, Ireland, Canada). However, that doesn’t prove that moderate income and moderate social stability are the real key to happiness. Perhaps the cold climate just keeps people indoors all the time and they make love a lot. For lack of anything better to do.

[1] Don Peck and Ross Douthat, “The World in Numbers: Does Money Buy Happiness?” Atlantic, January-February 2003, pp. 42-43.

Death Wish.

As anyone knows who has ever watched the “Death Wish” movies starring Charles Bronson, New York City is full of crazy people. Recognition of that truth helps us to understand the current conflict between Mayor Bill de Blasio and the NYPD.

First of all, in spite of the concatenation of questionable police killings nation-wide in the past year and in spite of Mayor de Blasio’s warning to his son, NYPD officers shot to death three people during 2014. That is down from eight in 2013 (and 91 in 1971). Police department shootings fell by more than half in the later 1970s, then trended downward to one-sixth of the 1971 level through the first decade of the 21st Century. New York is a less violent city than in the past, and the NYPD is less inclined to use lethal force.[1]

Second, it is dangerous to be a police officer, but much less dangerous than it used to be. In the “Bloody Seventies,” an average of 127 law enforcement officers a year were killed in the line of duty nationwide. Then the death toll began to fall. In 2013, 32 police officers were shot to death in the line of duty; in 2014 the number rose to 50.[2]

Third, Eric Garner was not an “unarmed black man” who died from an illegal choke-hold. He was a 6’3”, 350-pound career petty criminal[3] who suffered from asthma, heart disease, and obesity. When police attempted to arrest him for the minor crime of allegedly selling untaxed cigarettes on 17 July 2014, Garner resisted arrest. Officer Daniel Pantaleo put his arm around Garner’s neck and dragged him backward to the ground. Garner fell hard. However, the medical examiner found no damage to either Garner’s windpipe or neck-bones. So, he wasn’t killed by the “chokehold.” He may have died of either a heart attack or a severe asthma attack brought on by the arm around his neck, a high level of stress, and the slamming to the ground of a fat man with a bad pump. After Garner hit the ground, the police did nothing to assist him beyond calling for an ambulance. Garner died in the ambulance on the way to the hospital.

Fourth, there is absolutely nothing to connect the liberal posturing of the mayor to the murder of the two New York police officers Rafael Ramos and Wenjian Liu. Their murderer, Ismaaiyl Brinsley, was a lifelong failure and malcontent who shot the officers after having shot and wounded the girlfriend who had dumped him. It is obvious that he seized upon the Garner death as a way to go out in a blaze of gunfire that would make his otherwise forgettable life memorable.

Fifth, the hostility to Mayor de Blasio arises from two sources. On the one hand, the unions representing NYPD officers are engaged in contract talks with the city. Anything that gives the unions the moral bulge on the city is fine with the unions. On the other hand, Mayor de Blasio is a fool—as a recent in-depth story by the New York Times makes clear.[4] He’s a racist and a classist. He ignored the reality of shared values and shared experiences among cops and assumed that a “more diverse” police force would naturally agree with him. Worse, he dumped off responsibility for his own errors onto cops in his security detail, blaming them for speeding by the mayoral entourage and for his late arrival at a ceremony when he had in fact overslept. Well, the demonstrations by the cops may be seen as a wake-up call.

[1] http://reason.com/blog/2014/12/15/the-nypd-shoots-and-kills-fewer-people-t

[2] The Week, 16 January 2015, p. 16.

[3] Garner’s arrests included assault, grand larceny, and—most often—the selling of black-market untaxed cigarettes.

[4] Article summarized in Leon Neyfakh, “Bill de Blasio’s Bad Bet,” http://www.slate.com/articles/news_and_politics/crime/2015/01/nypd_and_bill_de_blasio_why_new_york_s_mayor_was_wrong_to_count_on_police.html

Annals of the Great Recession II.

The Great Recession that began in 2007 ended in 2009.[1] Five years afterward, New York Times reporters asked how the recession had altered the American economy.[2] How was the economy different from before the recession began?

There were 1.5 million fewer construction jobs. Construction isn’t likely to fully revive. Even now, almost 20 percent of homeowners with mortgages owe more on their houses than the current market value of those homes. On average, these jobs had paid $55K a year.

There were 1.7 million fewer manufacturing jobs. Before the recession there were about 14 million Americans employed in manufacturing. That means that manufacturing employment fell by about 12 percent.

However, pre-recession manufacturing amounted to only about 10 percent of the labor force. Manufacturing jobs have been deserting America for decades. The loss of manufacturing jobs has been much more pronounced than the loss of manufacturing production. Manufacturers have used technology to increase production while cutting their work force. Overall, the loss of these high-wage jobs is probably permanent. On average, these jobs had paid $51K a year.
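
The 12 percent figure follows directly from the job counts; a one-line check, using the pre-recession base of about 14 million:

```python
jobs_lost = 1.7        # million manufacturing jobs lost in the recession
pre_recession = 14.0   # million manufacturing jobs before the recession

drop_pct = 100 * jobs_lost / pre_recession   # about 12 percent
```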

There were 1.5 million more health services jobs than before the recession.

The recession didn’t even dent technology companies. Technology companies both generated wealth for owners and created well-paying jobs. For example, the number of smartphones shipped rose from 20.1 million in 2007 to 136.6 million in 2013.

“Fracking” has boosted employment and income in states where it has been developed. It also has lowered energy costs. Chemicals and plastics consume a lot of energy, so cheap natural gas has given these industries a leg up. It also has improved the trade balance by reducing energy imports.

Health-related fields, energy, and technology appear to be the central pillars of the American economic future.

What did unemployment show? Population growth since the recession meant that the economy needed to add at least seven million more jobs than it did just to absorb the new workers. In Summer 2014, four million people were still counted as long-term unemployed, and six million more weren’t counted because they had given up looking for work after many rebuffs. Hence, unemployment could stay high even as the economy grew. The slack labor market, in turn, held down real wages, which could not rise the way they would in a tight labor market, even as inflation remained low.

To get work, many people took lower-paying jobs than they had held before the recession. Eighty percent of Americans earn less than they did before the recession. Indeed, the median income fell from $55,627 in 2007 to $51,017 in June 2014. Many people also supplemented their income with government benefits like food stamps. In 2007, 26.3 million people received food stamps; in 2009, 47.6 million people did.
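
A quick computation of the implied changes, using the figures in the paragraph above:

```python
# Median income, as cited
median_2007, median_2014 = 55_627, 51_017
income_drop_pct = 100 * (median_2007 - median_2014) / median_2007  # about 8.3 percent

# Food-stamp recipients, in millions
snap_2007, snap_2009 = 26.3, 47.6
snap_growth_pct = 100 * (snap_2009 - snap_2007) / snap_2007        # about 81 percent
```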

Continuing high unemployment and stagnant wages meant that it felt to any normal person that the recession remained in force. It’s hard to see the dawn when it is still so dark.

[1] The National Bureau of Economic Research has technical definitions for “recession” and “depression.” These have nothing to do with the popular perception of economic conditions. Therefore they become the butt of out-of-office politicians, late-night comedy-show hosts, and other people of that ilk.

[2] Shaila Dewan, Nelson D. Schwartz, and Alicia Parlapiano, “How the Recession Reshaped the Economy,” NYT, 15 June 2014, pp. 6-7.

Annals of the Great Recession I.

In a column in the New York Times, Neil Irwin takes up an important, under-analyzed topic.[1] He doesn’t take it very far, but it’s better than a poke in the eye with a sharp stick.

He begins by reporting on a new book on the “Great Recession” by the University of California-Berkeley economic historian Barry Eichengreen.[2] The book is 500 pages long and came out in early January 2015, so I don’t think that Irwin has fully digested what Professor Eichengreen has to say. (I sure haven’t.) All the same, he brings out several important points.

First, the Great Depression of the 1930s and—more importantly—the Second World War legitimized Keynesian counter-cyclical spending to moderate the economy. Thus, when the American economy began to plummet in early 2008, Republican President George W. Bush and the Democrat-led Congress agreed on a roughly $150 billion stimulus of tax rebates; later that year came the $700 billion TARP to shore up the credit markets. Within a year, however, Republicans had turned against big deficits. When the newly-elected President Barack Obama called for a $787 billion stimulus bill, scarcely any Republicans could be found to vote for it. For the next several years the Republicans kept up their assault on deficit spending. By 2011 Democrats also were running away from deficits.

Second, in the absence of spending action by Congress, responsibility for countering the recession fell to the Federal Reserve (the Fed). Here again, Eichengreen finds too little effort made too late. The Fed’s stimulation mostly came a day late and a dollar short. “Quantitative Easing” through bond-buying pumped money into the economy. It just never pumped in enough money until the ambitious program that began in September 2012 (and which has now come to an end).

The result of these hobbled policies was an excessively long recession that ground the spirit out of many Americans. Irwin is good at pointing out the events. He’s less good at explaining them. Why did Republicans turn against Keynesianism?[3] Why did Democrats, of all people, turn against the legacy of Franklin D. Roosevelt?

What is missing in Irwin’s explanation is any analysis of the rise of the “Tea Party.”[4] Not everyone responded well to the stimulus bill. A Seattle-area blogger named Keli Carender organized a protest against the bill in February 2009, then talked it up on-line. Soon, protests took place in other cities. Then Rick Santelli’s on-air rant against bail-outs went viral. Then Fox News pushed the cause. April 15, 2009 provided a forum for lots of protest rallies. ObamaCare added fuel to the fire.

This “Tea Party” movement was largely made up of previously apolitical ordinary citizens who had been energized by their economic concerns. Underneath this concern lay a feeling that they had “lost” their country to “elites.” At the same time, many Tea Party people held “social conservative” views (opposition to gay marriage, support for the Second Amendment, hostility to the expansion of government authority by the courts). Illegal immigration formed another concern. Finally, some of the Tea Party supporters were just nuts: “birthers” and “Obama = Hitler” types. The specific targets of the “Tea Party” were the rapidly expanding federal deficit, the growth of “big government,” and taxes. The agenda of the movement appeared to be unrealistic and impossible to achieve: lower taxes combined with a balanced budget.

The “Tea Party” pressured Republicans. The Democratic abdication is harder to figure. Nobel Prize economist Paul Krugman argued that the Obama Administration’s stimulus program was half the size it needed to be, was spread over two years instead of front-loaded into one year, and contained a lot of tax cuts that were a waste. However, Irwin (and apparently Eichengreen) still trots out the tired excuse that the administration under-estimated the scope of the problem. More importantly, perhaps, President Obama felt no commitment to stimulus. Bob Woodward has quoted him as saying, “Look, I get the Keynesian argument, but the American people just aren’t there.” But why didn’t he use the “bully pulpit” to get them there? And why didn’t Democratic leaders tell him—as Al Gore once told Bill Clinton—to “get with the program”? There is a lot of blood on the floor from this unnecessary disaster and a lot of blame to go around.

[1] Neil Irwin, “The Depression’s Unheeded Lessons,” NYT, 11 January 2015.

[2] Barry Eichengreen, Hall of Mirrors: The Great Depression, the Great Recession, and the Uses and Misuses of History (New York: Oxford University Press, 2015).

[3] Richard Nixon once remarked that “We are all Keynesians now.” Republicans may yet come to rue the day they tossed over the ideas of Nixon for those of Ronald Reagan. Nixon also had proposed national health care, only to have it sunk by the petty personal jealousy of Teddy Kennedy.

[4] “The Rise of the Tea Party,” The Week, 19 February 2010, p. 13.


Euro Muslims.

Ten years ago, almost to the day, Ross Douthat made the following observations.[1] At that time, about 4 percent of the population of Europe was Muslim.[2] This seemed likely to change. Demographers projected that the low and falling birthrate among Christian Europeans would reduce the European population from 728 million people to about 630 million by 2050. Moreover, it would be an aged population dependent upon young workers from somewhere else to finance its pensions and medical care. Already about 900,000 immigrants entered Europe each year, about enough to offset native European population decline and keep the population at its 1995 level. However, in 2000 a UN study projected that the countries of the European Union would require over 13 million immigrant workers EACH YEAR to preserve the 1995 ratio of workers to retirees. Thus Europeans might be compelled to organize a huge increase in immigration over the coming decades, and much of that immigration would come from nearby Muslim countries. In addition, Muslims in Europe have a younger demographic profile and a birth rate triple that of non-Muslims. As a result of these trends alone, and disregarding immigration, demographers anticipated that the Muslim share of the European population would reach eight percent by 2015. Moreover, many Europeans are not so much Christian as non-Muslim. If a future renewal of European religious enthusiasm is possible, why should Christianity be the beneficiary? Might not wide conversions to Islam take place?

Already in 2005 Europeans were being forced to consider the possibility that some Muslims within Europe uphold values that are hard to reconcile with currently prevailing norms: the French expelled an imam who insisted that the Koran authorized wife-beating (and what if the imam was correct?); in the Netherlands, the politician Pim Fortuyn (2002) and the filmmaker Theo van Gogh (2004), both of whom had warned against the danger posed by Muslim immigration, were assassinated (van Gogh’s killer was an Islamic militant, though Fortuyn’s was not); and the Madrid (2004) and London (2005) train bombings reminded Europeans of the dangers of Islamic radicalism.

Ten years on, Reuel Marc Gerecht makes a number of important points.[3] First, jihad has become charismatic for some EuroMuslims. ISIS is conquering territory, not just blowing up things. Hundreds, perhaps thousands, of young men have gone to fight under the black banners. They come from Europe as well as the Arab countries. To my mind, it is their Spanish Civil War.

Second, European Union states are not likely to sit still for selective targeting of their Muslim citizens by American immigration officers. Hence, American security against European radical Islamists depends on the French and British domestic security services. Both governments have robust security services, while the Americans have little human intelligence from among European Muslims. If the British MI-5 and the French DCRI fail in their efforts to track Euro jihadis and thwart their plots, then the United States is in for a bad time.

EuroIslam may succeed where EuroDisney failed. After the fall of Constantinople to the Turks in 1453, the cathedral of Hagia Sophia became a mosque. Will we live to see the day when American tourists line up to enter the great mosque on the Île de la Cité, and when the nearby statue of Charlemagne (grandson of Charles Martel, who defeated an Arab army at Tours) seems like a monument to Muslim persistence?[4]

[1] Ross Douthat, “The World in Numbers: A Muslim Europe?” Atlantic Monthly, January-February 2005, pp. 58-59.

[2] The admission of Turkey to the European Union would raise the share of Muslims in the population to 15 percent. There is no way that will happen now, given the Islamist tilt of the current Turkish government.

[3] Reuel Marc Gerecht, “France and the New Charismatic Jihad,” WSJ, 8 January 2015, A11.

[4] OK, that’s alarmist. My imagination got the better of me.

Man of Steel.

In the first volume of his new biography of Joseph Stalin,[1] Princeton historian Stephen Kotkin makes clear the disastrous effects of Bolshevik victory in the Russian Revolution. “The Russian Revolution—against the tyranny, corruption, and, not least, incompetence of tsarism—sparked soaring hopes for a new world of abundance, social justice and peace.” Garry Trudeau mockingly entitled a collection of his “Doonesbury” cartoons from the Vietnam War era But This War Had Such Promise. The same might be said of the Russian Revolution. Twenty years on, the country was a vast police state; millions were dead from famine or execution, while millions more were imprisoned in the Gulag archipelago; Russian agriculture had fallen below its prewar levels, offsetting the increase in industrial production; and a murderous psychopath ruled the country without any check on his actions. Fifty years on from that sad state, Communism finally collapsed under the weight of its own failings.[2]

Historians have argued that circumstances forced Stalin into certain courses of action, rather than his having chosen them. Thus, surrounded by a hostile world, Russia had to modernize its economy and be always on watch against subversion. The forced collectivization of agriculture offered the only means to rapidly modernize Russian peasant-based farming. Modernization of farming under State control held the only means to obtain the resources for rapid industrialization. The opposition from the peasants, then the wavering among Bolsheviks over the high human costs of collectivization, forced the adoption of harsh measures.

Kotkin rejects such views. He argues that Stalin fulfilled, rather than diverged from, Lenin’s intentions. “If only Lenin had lived,” the same things would have happened to Russia and the world. From the first, the Bolsheviks “unwittingly, yet relentlessly reproduced the pathologies and predations of the old regime state in new forms.” Between 1918 and 1928, tyranny, corruption, and incompetence became the hallmarks of the Soviet state. Stalin rose to power in this environment thanks to his own ruthlessness, his adroit skill at maneuver, and the spectacular incompetence of rivals like Trotsky, Bukharin, and Kamenev.[3] (Indeed, the incompetence of these men at conspiratorial politics offers a clue to how Lenin himself rose to leadership of the Bolshevik party before the Revolution.) However, the Russian Revolution had never been carried through to completion. The resistance from the peasantry and many other people had forced the strategic retreat called the “New Economic Policy” (N.E.P.). While Lenin claimed to hold the “commanding heights” of the economy (foreign trade, finance, heavy industry), agriculture and commerce remained in private hands. The path to traditional capitalist economic development remained open. With that, Bolshevism would become irrelevant.

By 1928, when the first volume of Kotkin’s gigantic work ends, Stalin had gained control of the main levers of power in the Soviet Union. He set out to complete the Revolution. Heads would roll.

[1] Stephen Kotkin, Stalin, Volume I: Paradoxes of Power, 1878-1928 (New York: Penguin, 2014). See the review by Serge Schmemann, NYT, 9 January 2015, C29.

[2] Rather than from any actions of the Reagan Administration. American triumphalism in the wake of the collapse of Communism showed the first signs of that generational change in self-concept that was to prove so disastrous under the Bush II administration.

[3] See: Robert Conquest, The Great Terror: Stalin’s Purge of the Thirties (New York: Macmillan, 1968), pp. 3-122.

What We Learned from the Report of the 9/11 Commission XXIX.

The tricky issue of Personality and Culture.

The foreign intelligence community did a pretty good job of centralizing information and analysis on threats to American interests abroad and of coordinating a response. However, there was no centralization of information and analysis on domestic threats, no coordination of response, and no adequate communication between foreign and domestic intelligence. No one seems to have realized that the domestic agencies had no formal plans or procedures for how to respond to terrorism; no one told the agencies to develop such plans and procedures. (pp. 378-379.) There was no central coordination of intelligence analysis or threat assessment. “The mosaic of threat intelligence came from the [CIA’s] Counterterrorist Center, which collected only abroad. Its reports were not supplemented by reports from the FBI.” (p. 294.)

“Beneath the acknowledgement that Bin Laden and al Qaeda presented serious dangers, there was uncertainty among senior officials about whether this was just a new and especially venomous version of the ordinary terrorist threat America had lived with for decades, or was radically new, posing a threat beyond any yet experienced.” (p. 491.) Richard Clarke failed in repeated efforts to get the Clinton administration to recognize al Qaeda as a first order threat, and he was still trying to get a decision on this from the new Bush administration in early September 2001. However, no one—even Richard Clarke—ever forced an open debate on the issue. (p. 491.)

NB: A point worth considering. The above analyses fairly frequently point out the deficiencies of the FBI, the CIA, and the State Department because all three of them privilege the local commanders (so to speak) over central authority. Local offices tend to have autonomy about what they do and how they do it within the broad outlines of general policy defined from the center. However, at the start of Chapter Five, “Al Qaeda Aims at the American Homeland,” there appears the following remark. “Bin Laden and his chief of operations,…, occupied undisputed leadership positions atop al Qaeda’s organizational structure. Within this structure, al Qaeda’s worldwide terrorist operations relied heavily upon the ideas and work of enterprising and strong-willed field commanders who enjoyed considerable autonomy.” (p. 210.) How could the same system work FOR al Qaeda and AGAINST the United States?

President Clinton apparently grew impatient with the inability of the United States government to make Bin Laden just go away. President Clinton once remarked to JCS Chairman (and Green Beret and former commander of all Special Forces) Hugh Shelton that “You know, it would scare the shit out of al-Qaeda if suddenly a bunch of black ninjas rappelled out of helicopters into the middle of their camp.” Shelton subsequently declared that he didn’t remember Clinton making the statement, and former Secretary of Defense William Cohen said that he thought the President might have been making a hypothetical statement; however, Clinton has repeatedly stated that he said this. (p. 272.) NB: It’s like listening to my 13 year-old—when he was younger.

“According to Clarke, [National Security Adviser Sandy] Berger upbraided DCI [George] Tenet so sharply after the Cole attack—repeatedly demanding to know why the United States had to put up with such attacks—that Tenet walked out of a meeting of the principals.” (p. 278.) In Summer 2001, Tenet engaged in a lot of hand-wringing about ordering a lethal attack on Bin Laden. “Are America’s leaders comfortable with the CIA doing this, going outside of normal military command and control? Charlie Allen told us that when these questions were discussed at the CIA, he and the Agency’s executive director, A.B. “Buzzy” Krongard, had said that either one of them would be happy to pull the trigger, but Tenet was appalled, telling them that they had no authority to do it, nor did he.” (reported, p. 305.) NB: What would Dulles, or Helms, or Colby have said?