We’ve been testing “intelligence” for about a century. What does an IQ test “test”? It tests what is called “abstract intelligence,” basically solving logic problems. There is a correlation—not a cause-and-effect linkage—between high IQ scores and both good grades and good job performance. In contrast, many of the standardized tests (SAT, GRE, MCAT, LSAT, GMAT) that hold the keys to opportunity in life measure verbal and quantitative ability. Apparently the two types of tests measure different things because IQ scores have been rising steadily for decades, while measures like the SAT have stalled or even fallen. (Then there’s the third kind of test called “personal judgment” used by employers, teachers, and voters. I don’t see scholars doing much work on what I conjecture to be a key factor in individual success.)

Raw IQ scores rose steadily in all developed countries throughout the 20th Century and continue to do so today. Three points per decade is the normal increase. That means that my older son should have an IQ about 20 points higher than my father's and about 12 points higher than my own.[1] Scores are rising across all groups tested, rather than in just part of the population. The dumb are getting smart and the smart are getting smarterer.
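The arithmetic behind that footnote is just the three-points-per-decade trend applied to birth-year gaps. A minimal sketch in Python (the birth years are hypothetical, chosen only to reproduce the 20- and 12-point gaps in the text):

```python
# Flynn effect: raw IQ scores rise about 3 points per decade.
RATE_PER_DECADE = 3.0

def flynn_gap(birth_year_a, birth_year_b):
    """Expected raw-score gap between two people under the 3-points-per-decade trend."""
    return (birth_year_b - birth_year_a) / 10 * RATE_PER_DECADE

# Hypothetical birth years: grandfather, father (the author), son.
grandfather, father, son = 1923, 1950, 1990

print(round(flynn_gap(grandfather, son), 1))  # → 20.1
print(round(flynn_gap(father, son), 1))       # → 12.0
```

On this trend, any two generations separated by roughly forty years should differ by about a dozen raw points.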

It’s hard to tell why scores are rising. Improved nutrition and health may play a role by allowing most children to develop more rapidly. IQ tests are usually given to captive school- or draft-age populations, so a fast first step in life might contribute to rising scores. Also, more education became available to lower income groups in the course of the 20th Century. This could move the “left tail” of a standard distribution to the right. On the other end of the spectrum, smart kids increasingly hang with smart kids. A “social multiplier effect” may explain why the IQ scores of this group continue to rise when improved nutrition and education cannot be factors. Then there is one more intriguing possibility. Both computer games and popular television shows have become increasingly complex. The narratives in each medium stress complicated plots and lots of characters. Perhaps this fosters a constant analysis of abstract relationships.

Then there are the Torrance Tests of Creative Thinking. These aren’t discussed much, if at all, in the media. One analysis of scores going back to 1968 shows that the Torrance scores of American children have been dropping since 1990.

While something is causing IQ to rise, it probably doesn’t have anything to do with what is going on in the classroom. Better nutrition, more years in school, watching “Lost,” and playing “Call of Duty” might be the most reasonable explanations. In contrast, the stagnation (at best) in the SATs and the decline in the Torrance Test scores might be related to what is happening in classrooms since they test things that schools claim to teach and foster.

There are implications for social policy. First, improving both childhood nutrition and education would help poor children. That could get tangled up with questions about the quality of parenting supplied by poor people. FDA guidelines on nutrition and good advice on stimulating the intellectual development of children have been available for decades. Not everyone uses them. Second, schools and colleges might want to think about incorporating complex television series and electronic games into their tool-kit. If William Shakespeare were alive today, would he prefer to write “Kinky Boots” or “Breaking Bad”? Third, mixing higher IQ students with lower IQ students in the classroom may be good for the lower IQ students. It won’t be good for higher IQ students. I can hear the Republican outcry against Democrats’ “redistribution of grey cells” already. I might be part of it. (“Smarter than ever,” The Week, 16 September 2011, p. 11.)

[1] This may be an argument for having children as early in life as possible. It cuts down on the IQ gap between yourself and your children. Otherwise the little bastards will be just insufferable.

Save the Pagan Babies!

Poor countries cannot run what contemporary Americans would regard as “adequate” orphanages. They don’t have the surplus economic resources to provide robust social welfare institutions. Furthermore, as political scientists say, their state institutions lack the capacity to achieve their goals. At best, the orphanages are something out of Dickens. At worst, they’re warehouses in Hell. This is probably going to have some kind of long-term psychological impact.

Long wars, especially civil wars, fought under barbarous conditions produce lots of orphans. The process of getting orphaned may involve something like watching your father have his arms chopped off with a machete. This, too, may have a lasting impact.

One report states that in Azerbaijan, “Many children are abandoned due to extreme poverty and harsh living conditions. Family members or neighbors may raise some of these children but the majority live in crowded orphanages until the age of fifteen when they are sent into the community to make a living for themselves.”

Finally, as in America not all that long ago, people use mental institutions and orphanages as receptacles for family members who are permanently disabled in some way. (One problem with tenement living was that you lacked an attic in which to confine Great-Aunt Grace, who spent all her time talking about Charlotte Perkins Gilman’s “The Yellow Wallpaper.” Putting her in the storage locker in the basement just got the neighbors talking.)

Promoting international adoption can be one way of reducing the burden on taxpayers.

Still, there can be problems.

“Child laundering.” No, really, that’s what it’s called. Basically, “gringos” and “farangs” spend so much time with their cell phones that the radiation fries their little swimmers. So, no kids. So, they come to some developing country to buy a kid from an orphanage or some helpful soul who knows a starving child and would like to set him/her up in an American suburban home with a swing set in the backyard and 999 television channels. They’re rich, so there’s money to be made if you have a spare kid to sell. What if you do not have such a kid? Well, that’s what shopping malls are for in the United States. In developing countries you probably have to snatch them in a market-place or on their way home from school. Then, sell to “gringo” or “farang.” It helps if you know a “poor, corrupt policeman” who can help you with fake identity papers. (The US government has been prosecuting an American woman for her part in the fraudulent adoption of 800 Cambodian children.)

UNICEF estimates that there are 700,000 orphans in Russia. The number increases by over 100,000 a year. The striking thing is that these are “social orphans”: they have at least one living parent. The parent feels unable to care for the child and so abandons the child to the care of someone else. Most go to other relatives or to foster homes. About a third are in the care of the state. The same thing is true in Haiti, where poor parents “hoped to increase their children’s opportunities by sending them to orphanages.” After the Haitian earthquake, the number of orphans sky-rocketed (although so did the number of suddenly-childless adults). American aid agencies descended on Haiti. One impulse was to promote the adoption of children from the orphanages into American homes. The obvious problem was that the Americans completely misunderstood the nature of Haitian orphanages. (On the other hand, they perfectly understood the motives of Haitian government officials who objected to the adoptions: they hadn’t got their cut.)

Little of this kind of “news” makes the headlines.

Origins of the War on Drugs

You used to be able to get cocaine eye-drops off the shelf in a drug-store, and the Sears, Roebuck catalogue offered cocaine and a syringe for $1.50. Doctors regularly recommended opium to patients suffering from “female complaints.” Cramps? Get your head up.

Then domestic and international influences came together to launch a “war on drugs.” On the one hand, opium was legal in Asia. Chinese immigrants brought opium-smoking to America and the United States conquered the Philippines, where opium was legal. In 1901, Charles Brent, an American missionary in the Philippines, began to campaign for the international control of addictive drugs. President Theodore Roosevelt helped create an International Opium Commission (1906), then appointed Dr. Hamilton Wright as the first U.S. Opium Commissioner. The International Opium Convention (1912) tried to regulate the trade.

On the other hand, Americans began to associate drugs with both crime and race. African-Americans and Chinese immigrants became centers of concern, as did the white women who supposedly fell prey to non-white men as a result of drug use. A 1914 law limited the sale of narcotics and cocaine to those with a doctor’s prescription. A 1922 law regulated the import of narcotics. A 1924 law outlawed heroin. A 1935 law assigned enforcement responsibility to the states. A 1937 law banned marijuana. Between 1914 and 1945 the number of addicts in the United States reportedly fell from 1 in 400 people to 1 in 4,000. Things cooked along quietly for the next two decades. Charlie Parker and Robert Mitchum, people like that, used drugs.

Then recreational drug use began to spread as part of the counter-cultural strife of the Sixties and Seventies, along with long hair, peasant dresses, pre-marital sex, draft-dodging, and really great music. The “Up With People” crowd pushed back hard. President Nixon seized on the issue of drugs in 1969. The Comprehensive Drug Abuse Prevention and Control Act (1970) created the current system of classes of drugs. President Nixon announced a “war on drugs” (1971). The National Commission on Marihuana and Drug Abuse (1972) reported that marijuana was not addictive, did not pose any serious threat to society or its users, and recommended de-criminalization. “Shut up,” President Nixon explained. A presidential order (1973) created the Drug Enforcement Administration (DEA) to co-ordinate and lead efforts to halt drug smuggling into the United States and to suppress the illegal black market for drugs within the United States.

The current “war on drugs” has both international and domestic fronts.

On the international front, the United States seeks to attack the foreign sources and supply lines that feed the American market. The principal growing sites for opium poppies (the source of heroin) are highland Burma (the “Golden Triangle”), Afghanistan, and Mexico. Of these, Afghanistan is by far the most important, with 93 percent of opiates now (well, 2007) coming from Afghanistan.[1] Interdiction of drug traffic can involve support for local police; aerial spraying of defoliants; interception of ground, sea, and air shipments; and discovery of drug factories where the raw materials are turned into a finished product for sale.

On the domestic front, the chief anti-drug measure has been a sharp increase in arrests. During the 1970s drug arrests scarcely increased, in spite of Nixon’s call for a “war on drugs.” Only during the 1980s did policy change. While arrests for all crimes rose by 28 percent during that decade, arrests for drugs rose by 126 percent. Between 1980 and 2010 the share of Americans imprisoned quadrupled. Half a million people a year go to prison for drugs.

[1] Revenues from sales in Western countries provide Afghan traffickers with over $60 billion of “foreign aid” each year. In comparison, the United States provided the Afghan government with over $50 billion of aid over ten years. Since much of the trade is controlled by the Taliban, we are paying more money to our enemy than to our ally.

The Last Helicopter from Baghdad.

As we embark on an attempt to salvage Iraq from both the misdeeds of its post-Saddam Hussein/post-American occupation government and from the claws of ISIS, here’s a cold, hard lesson from History.

After his election as president in November 1964, Lyndon Johnson increased American troop strength in Vietnam to a maximum of 540,000 men. In January 1968 the North Vietnamese Army (NVA) and the Viet Cong (VC) launched a massive offensive to coincide with the Tet lunar New Year celebration. The Americans and the South Vietnamese managed to defeat the Tet offensive on the ground, but not in the eyes of American voters. Up until Tet, Americans had tended to believe their leaders’ assurances that progress was being made in Vietnam. Tet changed that. Now a majority began to doubt that victory was possible and that American leaders were telling them the truth about the war. In March 1968 President Johnson announced a halt in the bombing of North Vietnam, solicited peace talks, and declared that he would not run for re-election.

Peace talks began in Paris in May 1968. When they failed to make progress, President Johnson resumed bombing until the North Vietnamese came to their senses in October 1968. However, Republican presidential candidate Richard Nixon encouraged the South Vietnamese to block further talks until after the November 1968 elections.

Nixon narrowly defeated Hubert Humphrey in the November 1968 election. Nixon’s goal was to extricate American forces from Vietnam without the whole house of cards coming down immediately. As his foreign policy adviser Henry Kissinger put it, “We’ve got to find some formula that holds things together for a year or two [i.e. until late 1970 or 1971].” That formula appeared to be “Vietnamization”: shifting the chief burden to the Army of the Republic of Vietnam (ARVN). While negotiations with North Vietnam continued, Nixon began to draw down American forces. By late 1971 the total number of American troops had fallen from 540,000 under Johnson to 157,000 under Nixon. Unsurprisingly, the negotiations went nowhere since the US was obviously withdrawing and the North Vietnamese could anticipate swift victory once the Americans were gone. In March 1972 Nixon unleashed a massive air attack on North Vietnam. The North Vietnamese gave in, negotiations resumed, and a cease-fire was declared in January 1973. Most of the remaining American troops were withdrawn by March 1973.

The Republic of South Vietnam survived until early 1975. Then the North Vietnamese attacked. The ARVN collapsed, and huge numbers of refugees-in-the-making converged on Saigon in hopes of being evacuated by the Americans. Many (6,200) were, but most were not. Saigon fell on 30 April 1975.[1]

What are the parallels, if any, between South Vietnam then and Iraq now? Neither government enjoyed much legitimacy in the eyes of at least a large minority of its people. Both governments were up against ruthless and competent enemies. There are limits to what can be accomplished by airpower. The American administrations that had to clean up the mess weren’t the ones that had caused it.

Perhaps the differences are more important. Having escaped the Indochina disaster, Americans refused to recommit when a new crisis arose. The world did not end.

[1] “Leaving Vietnam,” The Week, 9 February 2007, p. 11.


Value for Money in College Education

A Pew Research Center report from 2011 made two interesting points. First, “less than half of members of the general public agrees [that students should pay for their own college education], with a majority saying either the federal or state government, private donors, or a combination of those should pick up the largest share of a student’s college tab.” Second, “nearly 60 percent of Americans say the U.S. higher education system is not providing students with a good value.” These attitudes put average Americans sharply at odds with college presidents and faculty, who feel themselves beset by Yahoos.

It’s time for some plain speaking. First, college does cost more than most families care to afford. Second, most colleges don’t give good value for what they charge, at least not in educational terms. Third, it is the same general public that complains about low value for a high price that is the cause of these problems. An examination of the historical record makes this clear.

One part of the explanation comes from demography.  The Baby Boom (from 1946 to 1964 approximately) went through American society like a mouse through the rattlesnake my college room-mate used to keep.  In the Forties and Fifties a tsunami of students hit the schools.  In the Sixties and Seventies the same tsunami hit the colleges.  The result of massive demand was a huge increase in the size of colleges and college faculties.

Then the Baby Boom gave way to the Baby Bust.  This brought a decline in the number of 18 year-olds in 1982 and for years to come. The number of students no longer matched up with the size of colleges and faculties.  What to do?  In business, of course, lots of places would just have gone under, like nail or tanning salons. Supply would have returned to balance with demand.  Not in higher education however.  Colleges fought for survival. First of all, they molted into country-clubs attached to classrooms.  Sports facilities, luxury dorms, and improved food services became the hall-marks of a good college. Second, adult education and degree-completion programs multiplied. Third, they played to the American reverence for diplomas, if not for learning as an abstract concept.  Everyone emphasized the economic value of more education.  Everyone celebrated a liberal arts education for all as a form of democratization.  Graduate programs to confer credentials sprang up like mushrooms.

The end result was that not enough colleges were down-sized.  Instead, they passed the rising costs along to others: to parents (through tuition increases), to students (through larger student loan debt), and to taxpayers (through government aid to higher education).

A second part of the explanation is cultural.  On the one hand, we are living with the consequences of a regulatory society created to pursue well-intended, but ill-defined goals like “justice” and “well-being” for citizens.  The outcome of this has been the growth of a massive apparatus of administrative staff at every college.  If you compare a college phone directory of twenty years ago with one of today, you will be able to measure the scale of the growth of administrators, new offices, assistants, and secretaries.  These people largely respond to mandates imposed by the federal and state governments, and accrediting agencies.  The costs of those mandates, however, are carried by the colleges and passed on to the consumer.

On the other hand, we are living with the consequences of the “de-bourgeoisification” of the American middle class.  Being bourgeois used to mean valuing hard work, self-restraint, living on less than you earn in order to have savings and–in old age–to be able to leave “an estate” to one’s children to help them get started in life.  It did not mean being happy or “fulfilled.”  Even so, bourgeois used to have a positive association.  Since the 1960s being bourgeois has gone the way of fedoras and torpedo bras.  Increasingly, the cultural emphasis has been on individual fulfillment and happiness.  There isn’t much that is fulfilling or happy about hard work, so it is de-valued.

The average American home now has five books in it.  The average home also has a big screen TV and a huge range of channels on its cable package.  You can’t get literacy or analytical skills from reality shows or video games.

Furthermore, in 1950 about 40 percent of students never finished high-school.  They didn’t need much education to drive a truck or work a drill-press or dig a ditch.  THEN high school and college were for people willing to do the work and to respect authority in the form of unreasonable teachers and parents angry about report cards.  NOW the schools have shifted toward keeping kids in school regardless of the cost to the quality of education.  That quality has suffered because it isn’t fashionable to do the work required for learning and because it is almost impossible to coerce kids with threats of flunking out.  Parental authority also has declined.  (You try involuntarily institutionalizing somebody over the age of 14 in any state except Utah.)

The outcome of all this is that many students come to college without the intellectual or cultural or psychological capital that their predecessors brought.  They struggle in their classes, or don’t bother to.  They require remedial course work and second chances.  The survival imperative driving many colleges leads to a dilution of course work and grading standards.  The colleges need the tuition, so they need the students.

For many students, college is a rite of passage, not an education.  They get to live away from their parents for the first time.  They’re semi-adults on their way to being minor-league adults on their way to being full-scale adults on their way to being safely dead, where nothing can go wrong, so they’re Winners!  (The movie “Trainspotting” may have been repellent, but it wasn’t wrong.)  The country-club-with-classrooms environment reinforces this feeling.

Public attention has focused on the real-estate bubble and all the evidence of corporate misbehavior.  Much less noticed was the explosion in ill-considered consumer debt and the use of home equity loans to finance consumption.  Basically, most people don’t save a ton of money to pay for their kids’ college education.  The attitudes reflected in the Pew survey are unrealistic on several grounds.  First, it would be one thing to publicly finance the higher education of some sort of elite.  In fact, most students in college are not part of some intellectual elite.  Second, the money just isn’t there.  The federal deficit is going to be cut through some combination of tax increases on most people and spending cuts for all.  How we would expand public aid to everyone seeking a college education in that environment is beyond me.  Certainly, Princeton could buy the moon if it were for sale.  However, most colleges do not have large endowments to provide additional income.  Public colleges live off direct state aid and tuition.  Many private colleges are in the same leaky boat.  That means that the “someone” who will pay for college if parents and students don’t pay will be—parents and students in their capacity as taxpayers and tuition-payers.

Is there a solution to this problem? Sure. Shape up. Turn off the television. Get rid of the Xbox. Take the kid to the library once a week. Ground the kid if the grade report isn’t good. Paint the house during your summer vacation or drive out to Gettysburg, but forget about going to Disney World or down the Shore. I hate having to quote Chris Christie, but “why are you mad at the first person who told you the truth?”


The Tax Wars.

Should the rich pay their “fair share”? In 1992 there were three tax brackets: 15%, 28%, and 31%. In 1993 the Democrats created two additional tax brackets on higher incomes: 36% and 39.6%. Thus, the Democrats imposed higher tax rates on high incomes.[1]

In 2001 the Republicans cut federal income taxes on all Americans.[2] Single tax-payers with taxable income up to $6,000, heads of households with taxable income up to $10,000, and people filing jointly with taxable incomes up to $12,000 had their tax rate reduced from 15% to 10%. Those remaining in the 15% bracket had its lower threshold indexed to the new 10% bracket. The tax rate on the next bracket was reduced from 28% to 25% by 2006. The rate on the next bracket was lowered from 31% to 28% by 2006. The rate on the next was reduced from 36% to 33% by 2006. The rate on the highest bracket was reduced from 39.6% to 35% by 2006. The biggest percentage cuts in the tax rates came at the bottom end of the tax brackets, the smallest at the high end. The two highest brackets were still taxed at a higher rate than in 1992.

These rates continued through 2012, when the 2001 cuts on the two top brackets were allowed to expire, while the rates on the other brackets were made permanent. To illustrate, the rate for single filers on income up to $8,925 is 10%; from $8,925 to $36,250, 15%; from $36,250 to $87,850, 25%; from $87,850 to $183,250, 28%; from $183,250 to $398,350, 33%; from $398,350 to $400,000, 35%; and above $400,000, 39.6%. So, most Americans live under the Bush Administration tax cuts, while the wealthiest Americans live under the Clinton Administration tax increases.
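Those seven brackets form a standard marginal-rate schedule: each rate applies only to the slice of income inside its bracket, not to the whole income. A small sketch of the computation, using the single-filer figures quoted above (illustrative only; it ignores deductions, exemptions, and credits):

```python
# Single-filer brackets from the text: (upper bound of bracket, marginal rate).
BRACKETS = [
    (8_925, 0.10),
    (36_250, 0.15),
    (87_850, 0.25),
    (183_250, 0.28),
    (398_350, 0.33),
    (400_000, 0.35),
    (float("inf"), 0.396),
]

def tax_owed(taxable_income):
    """Apply each marginal rate only to the slice of income inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

print(round(tax_owed(50_000)))  # → 8429
```

A single filer with $50,000 of taxable income thus pays an effective rate of about 17 percent, well below the 25 percent marginal rate of the top bracket reached.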

Under these systems, what do different income groups pay as shares of federal income taxes?[3] In 1991, before the Clinton tax increases on high incomes, the top one percent of income earners paid 24.82% of the income tax bill; the bottom 50% paid 5.48%. In 2000, before the Bush tax cuts, the top 1% of income earners paid 37.42% of the income tax bill; the bottom 50% paid 3.91%. In 2011, under the Bush tax cuts, the top 1% of tax-payers paid 35.1%; the bottom 50% of tax-payers paid 2.89%. (The top 50% paid 97.1%; the top 25% paid 85.6%; and the top 10% paid 68.3%.)

Across three very different administrations and under very different economic situations, the tax burden has been continually shifted from the bottom 50 percent of taxpayers onto the top one percent of tax payers. The Democratic mantra that the Bush tax cuts “favored the rich” is absolutely untrue. (In all likelihood, the Republican mantra that tax cuts will stimulate economic growth is equally untrue. That needs to be the subject of a different jeremiad.)

If tax rates favor the bottom 50%, income distribution favors the top 50%.

The “hard times” experienced by many Americans don’t have anything to do with tax-dodging by the rich. They are more likely to be the product of big shifts in the American economy within a globalized world economy since the 1970s. Fighting over shares of a shrinking pie isn’t going to fix the problem. We need broadly shared economic growth.

[1] For the sake of comparison, in Canada the highest rate of national taxation—on incomes over $132,000—is 29%.


[2] Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA).


[3] Kyle Pomerleau, “Summary of Latest Federal Tax Data,” Table 6.

On the Border.

Sometimes it is useful to look backward to have some idea about contemporary issues.

Hispanic-Mexican immigration is a political problem in the United States. In 1986 the US offered an amnesty to those Mexicans in America illegally, combined with the promise of a crack-down on future illegal immigration. The illegal immigrants got amnestied, but the crack-down was slow in coming. In 1994 the US did crack down on immigrants openly flouting the law along US highways. As a result, illegal immigrants concentrated on crossing the Sonoran Desert into Arizona. In 2004, 1.3 million Mexicans got snagged by the Border Patrol trying to cross into the United States, 500,000 of them in Arizona alone, more than were arrested in any other American state; and that count ignores the many others who got through. One estimate held that about 485,000 illegal immigrants successfully entered the country each year.

By April 2007 there were about 20 million people from Mexico working in the United States. The goods they produced exceeded in value the GNP produced by all the Mexicans who stayed home. The money they sent home ($20 billion a year) trailed only oil exports in Mexico’s foreign earnings, leading both tourism and direct foreign investment. These remittances amount to a form of foreign aid paid by the United States to Mexico. Same as money for drugs.

Why do all these Hispanic-Mexicans come to the United States? In some places, going to work in the United States has become a basic rite of passage for young men. The cost can run $20,000. The financing of this resembles American student loans. Illegal immigrants basically “charge” the cost of their passage, then spend years paying it off. The debt collector then becomes a regular figure in the emigrant community. Then there is the awful state of the Mexican economy and the many injustices of Mexican society. Mexican elites export their surplus population to the United States to avoid having to pay decent wages or provide decent public services in their own country. More money for them.

So, it’s good for Mexicans and for Mexico. However, a majority of Americans regard it as a Mexican invasion. Working-class voters see Mexican immigration as a threat to their livelihood. Probably a lot of middle-class people see the flood of Mexican immigrants as a threat to raise taxes for services and as a threat to Anglo culture. You may not like that, but it’s a democratic country where citizens have a right to express their feelings—and where the feelings of non-citizens don’t count. In 2005 the—Democratic—governors of New Mexico and Arizona declared “states of emergency” in their states because of illegal immigration. They complained that the federal government had failed to address the problem. For example, while most Mexican immigrants are immediately returned to Mexico, most non-Mexican immigrants (120,000 of them) are released on their own recognizance by federal courts. It should surprise no one that they usually fail to appear for trial.

However, “American” politicians dissent from the majority view. Some people suspect that Republicans answer to powerful business interests, who see real advantages in having a low-cost labor force available for marginal enterprises; Democrats see potential voters if the “immigration reform” issue can be spun the right way. In both cases, the narrow interests of the political parties trump the desires of American voters. That can’t be good for democracy.

Ross Douthat and Jenny Dodson, “The Border,” The Atlantic, Jan.-Feb. 2006, pp. 54-55.

Matthew Quirk, “The Mexican Connection,” The Atlantic, April 2007, pp. 26-27.

Clueless in Gaza.

Israel captured the Gaza Strip from Egypt in the 1967 “Six Days War.” In 2005 Israel ended its military occupation of the Strip, handing over government to the Palestinian Authority. In 2006 Hamas won Palestinian legislative elections (although its support was concentrated in the Strip rather than among all Palestinians), then in 2007 followed up electoral victory by seizing control of the government in Gaza from the Palestinian Authority. Israel saw this development as a grave danger. Hamas does not recognize the right of Israel to exist. Hamas militants backed up words with deeds by firing rockets into Israel. Israel responded by imposing a tight blockade on Gaza. All sorts of things–from computers to food–were barred from entry, and most Palestinians were barred from leaving Gaza.

The blockade wrecked the economy of Gaza. In early Summer 2014, there were 1.8 million people living in the Gaza Strip; 40 percent of them were unemployed; almost half of them received food aid from the United Nations; and 80 percent of them lived below the level defined by the UN as poverty. At the same time, Hamas circumvented the blockade by digging many tunnels into Egypt which allowed the import of all sorts of goods. (It is difficult to believe that there wasn’t also a large “black” economy that never figured into UN calculations of living standards.) So long as the Egyptians tolerated the Hamas tunnels, Israel’s blockade could not have full effect as a form of non-military coercion. However, Hamas had begun as an extension of the Egyptian Muslim Brotherhood. When the Egyptian military overthrew the government of Mohammad Morsi, the new government cut off the Hamas tunnels. The people of the Gaza Strip suddenly began to suffer a great deal more than before.

In April 2014 Hamas went so far as to form a unity government with its old rival Fatah, which rules the West Bank (after a fashion). This got Hamas nowhere. Israel sank the peace-talks being pushed by the United States rather than deal with Hamas.

Hoping to force an end to the blockade, Hamas went on the offensive in Summer 2014. Hamas could not hope to coerce Israel directly. It could hope to provoke a humanitarian crisis that would lead to international pressure on Israel to ease or end the blockade. Hamas had imported a large stock of rockets through the tunnel system before the coup that put Morsi in prison. Now these rockets began to rain down on Israel. The Israelis struck back with air attacks, artillery fire, and a ground incursion. In the process, the Israelis discovered many tunnels that ran not into Egypt for smuggling, but into Israel. Between 2001 and 2005 Palestinian suicide bombers had killed 800 people in Israel until the Israelis walled themselves off from the Palestinians. Finding this defense penetrated by the tunnels, the Israelis went wild.

Israel’s air and ground offensive against Hamas certainly provoked a huge humanitarian crisis. It killed about 1,900 people; destroyed 10,000 homes; and forced the emergency relocation of perhaps 400,000 people within the confines of the tiny area. Criticism of Israel’s actions came from all around the world. Israel has been pushed back toward the situation of April 2014 in the sense that it will now negotiate through the Fatah-led Palestinian Authority. On the other hand, Hamas also came in for much criticism for using Palestinian civilians as human shields by firing its rockets from the midst of civilian areas. Much of this criticism, little noticed in the West, came from other Arab governments. Moreover, Israel demands the effectively-supervised disarmament of Gaza as a prerequisite to ending the blockade. Fatah sees a chance to make gains against its rival, Hamas. Hard to make a deal when no one wants a deal.

“Misery in Gaza,” The Week, 22 August 2014, p. 11.

Peachy and Danny

Soon after the death of the Prophet Muhammad in the Seventh Century AD, Arab Muslim tribes burst out of the Arabian peninsula to begin a wave of conquest that ran on for centuries. Muhammad’s successors as leaders of the Muslims took the title of Caliph (“Successor” to the Prophet). The single large Arab empire soon fragmented into multiple kingdoms. Sometimes the rulers claimed the title of Caliph. The last important ruler to claim the title was the Ottoman emperor. The title went unclaimed after the fall of that empire at the end of the First World War. As more and more of the Muslim world fell under direct or indirect control of non-Muslims, especially of European states, nostalgia grew for the days of Muslim power and unity.

Ibrahim Awad Ibrahim Ali al-Badri al-Samarrai was born in Samarra, Iraq in 1971. He claims to be a descendant of Muhammad’s own Quraysh tribe. He earned a doctorate in religious law and set up as a preacher. Salafism was all the rage among Sunni Muslims at the time, and he found himself attracted to it.

When the United States invaded Iraq in 2003, al-Badri joined the resistance. In particular, he joined not the Sunni Iraqi tribesmen fighting against the Americans and the Shi’ite majority, but Al Qaeda in Iraq. This franchise of al Qaeda sought to foster war between Sunni and Shi’a in order to make the Iraq occupation a disaster for the United States. In 2005 American troops arrested him during a raid on a resistance group. He spent some time locked up in Camp Bucca. In the United States, prison sometimes serves as a sort of advanced education in crime and as an anti-social networking site. That seems to have been the case with Camp Bucca as well. Al-Badri got to know a lot of people with views similar to his own. As part of their effort to disengage from the war in Iraq, the Americans turned over many of their prisoners to the new Iraqi government. As part of its effort to mend fences with former opponents, the Iraqi government let many of them go. Badri was among those released.

He went back to the struggle against the Americans and the government they had created. During his time in prison, much had changed. Sunni tribesmen had grown weary of both the bloodshed and the strict Islamic fundamentalism pushed by al Qaeda. The “Awakening” movement among Sunnis combined with the American “surge” to put al Qaeda on the ropes. The survivors were rethinking the whole strategy of fighting Shi’ites instead of just the Americans, who were plainly eager to get out of Iraq in the near future. The newly-released Badri must have had a Rip van Winkle moment. He argued for sticking to the old course. When he saw that he wasn’t winning the argument, he started making his own contacts with the rich men in the Persian Gulf states who had funded al Qaeda. This gave him an independent source of money. He adopted the name Abu Bakr al-Baghdadi, Abu Bakr having been the father-in-law of Muhammad and the first Caliph.

Thereafter, Badri/Abu Bakr went on a rampage. He gathered fighters drawn to his ideas and his oratory. He moved his operations into the eastern parts of war-torn Syria, where a vacuum of power existed. Syria itself was full of jihadist enthusiasts, either Syrian ones or foreigners drawn to the struggle. Many of these fighters shifted their loyalty to what Abu Bakr now called the Islamic State in Iraq and Syria (ISIS). He emptied banks, exported oil from wells ISIS had seized, sold plundered antiquities, and taxed local people. With a huge war-chest and 10,000 enthusiastic followers, Abu Bakr set out to recreate the Caliphate.

History is unlikely to repeat itself, but there’s only one way to find out.

“The man who would be caliph,” The Week, 19 September 2014, p. 11.


Just like imam used to make.

Trying to help foreigners understand America, the Gummint pays for some of them to study in the US as Fulbright scholars. Nasser al-Awlaki and his wife came from Yemen in 1971 on a Fulbright to study agricultural economics. He got an MA from UNM, then a Ph.D. from Nebraska, then taught at Meenasotta for a couple of years. Almost immediately, he and the missus had a child. They named the sprout Anwar al-Awlaki. Having been born in the USA, Anwar was an American citizen. In 1978 the family returned to Yemen.

To be perfectly honest, the goat pizza available in Yemen paled in comparison to what could be had in the States. In 1990, Anwar al-Awlaki started in at Colorado State University. One summer he spent the break fighting in Afghanistan. (Must have made for interesting conversation in the dorm that Fall. “So, Bill, what did you do this summer? I picked lettuce on my uncle’s farm. Hoo-whee, we had some wild times on Saturday night. How ‘bout you Anne-War? Well, I ambushed opposing mujahedeen, then walked around shooting the wounded in the head.”) Anyway, by 1994 he got a B.S. in Civil Engineering and was a part-time imam in a mosque in Denver.

In 1996 he landed a job as an imam in San Diego. Here he got an M.A. in Educational Leadership from SDSU. However, Shaitan (in the form of babes in bikinis) beset him: he got hauled in for soliciting prostitutes a couple of times. In 1999 the EffaBeeEye came around, wanting to know if he had any ties to the “blind sheikh” who had organized the 1993 WTC truck bombing or to the then-munchkin terrorist Osama bin Laden. He said “no” and that was good enough for them. Meanwhile, a couple of the future 9/11 guys were attending services at his mosque. In 2000 he landed a job as an imam at a big mosque in northern Virginia. (We can’t even keep alcoholic child-molesters from becoming school bus drivers, so why blame a mosque for hiring an imam who can’t keep his pants on?) During 2001 he worked toward a doctorate in Human Resources Development at George Washington University’s Education School. (He actually didn’t have much in the way of Islamic scholarly credentials, so his charismatic appeal to ignorant fanatics seems to have arisen from what he picked up in American Ed. Schools.)

Then 9/11 came along. “The US was at war with al-Qaeda, not with Islam.” So, Awlaki got invited to a lunch at what was left of the Pentagon. They probably served pork chops or crab cakes, because the next thing you know (March 2002), he was on a plane to Yemen. From 2002 to 2004 he bounced between Yemen, the US, and the UK as an advocate of jihad.

After a variety of adventures, Awlaki settled down as a long-distance recruiter and inciter of jihadis. His fluent English and knowledge of American society, his charismatic personality, and his ease in using modern media made him a prominent figure despite hiding out in a remote area of one of the most backward places on Earth. The London subway bombers, the Times Square bomber, the Fort Hood shooter, and many others all had his sermons on their computers or had exchanged messages with him. Since he has said that “jihad against America is binding upon myself, just as it is binding upon every other able Muslim,” he probably wasn’t trying to calm them down. When the Underwear Bomber said that Awlaki helped train him for his mission, the government got fed up and decided to kill him. On 9/30/11 they did.

Can the United States execute an American citizen without trial, without even producing the evidence upon which the decision to kill him is based? Would you really want to establish that legal precedent? Talk about “death panels”! So, civil libertarians opposed the execution. On the other hand, some of them say the US can’t “execute” anyone anywhere without trial. It will be hard to fight a war on terrorism with those handcuffs in place. What to do?