What did we learn from the Report of the 9/11 Commission? II

Westernized elites (lawyers, bureaucrats, soldiers) provided the leadership for the successful nationalist movements in the Middle East after the Second World War. The initial economic situation of the new states did not appear unpromising: “The established commercial, financial, and industrial sectors…, supported by an entrepreneurial spirit and widespread understanding of free enterprise, augured well.” (p. 79.) However, the secular regimes of the new states failed to deliver on the extravagant promises made in the early period of independence. The governments of many new states followed policies that slowly stifled all economic progress.

In the Arab world the oil shocks of the 1970s inflicted grave damage in the guise of a great blessing. The enormous profits proved transient, but the governments used them for efforts to transform Arab society that had long-term consequences. Governments spent heavily on “huge infrastructure projects, vastly expanded education, and…subsidized social welfare programs.” Cronyism meant that lots of money stuck to members of the ruling elites as well.

Modern medical care led to a soaring birthrate all across the Muslim world. This large, young population needed jobs created at a rapid rate, but the stagnant economies of the Muslim states could not provide them. The result was the proliferation of angry, frustrated, aggrieved, half-educated or mis-educated young men. (p. 80.) Rather than yield power or turn to new policies, the ruling elites settled for repressing dissent.

When a sharp rise in population intersected with precipitously declining oil revenues in the 1990s, governments had to reduce spending sharply. The generous programs of the early 1980s “established a wide-spread feeling of entitlement without a corresponding sense of social obligation.” The later effort to cut spending “created enormous resentment among recipients who had come to see government largesse as their right.” (p. 79.)

Many people turned to religion. As is the case with Christianity, Islam has been subject to periodic reform movements that could be called “fundamentalist” or “revivalist.” One exponent of reform was the 14th century scholar Ibn Taimiyyah, who “condemned both corrupt rulers and the clerics who failed to criticize them. He urged Muslims to read the Qur’an and the Hadith for themselves, not to depend solely on learned interpreters like himself but to hold one another to account for the quality of their observance.” (p. 75.) NB: In short, Calvin’s Geneva.

In the 1940s, Sayyid Qutb, an Egyptian scholar, visited the United States at the behest of his government and returned to Egypt deeply estranged from everything Western. (pp. 75-76.) Qutb espoused a Manichaean worldview in which pervasive, corrosive “unbelief” (jahiliyya) among non-Muslims and Muslims alike threatened to overwhelm true belief. True believers had to fight the unbelievers by all means and to the death. (pp. 76-77.) “The extreme Islamist version of history blames the decline from Islam’s golden age on the rulers and people who turned away from the true path of their religion, thereby leaving Islam vulnerable to encroaching foreign powers eager to steal their land, wealth, and even their souls.” (p. 75.)

By the late Seventies and early Eighties there had arisen a powerful religious movement among young men in the Muslim world. Osama Bin Laden was inspired by a preacher in the late Seventies. Khalid Sheikh Mohammed became attracted to “jihadism” in the early Eighties. In the early Eighties “Hambali” became attracted to Islamist preaching in Malaysia. Young jihadis went to fight in Afghanistan in the 1980s and in Bosnia in the 1990s.

Thomas H. Kean and Lee H. Hamilton, The 9/11 Report: The National Commission on Terrorist Attacks Upon the United States (New York: St. Martin’s Press, 2004).

What did we learn from the Report of the 9/11 Commission? I

By the end of the 20th century the CIA was “an organization capable of attracting extraordinarily motivated people, but institutionally averse to risk, with its capacity for covert action atrophied, predisposed to restrict the distribution of information, having difficulty assimilating new types of personnel, and accustomed to presenting descriptive reportage of the latest intelligence.” (p. 137.)

How had this situation come into being?

First, “although covert actions represent a very small fraction of the [CIA’s] entire budget, these operations have at times been controversial and over time have dominated the public’s perception of the CIA.” (p. 126.) Furthermore, whenever covert actions turned into highly public exploding cigars, the Presidents who ordered them left CIA officers to carry the can. The CIA became very reluctant to engage in them. (p. 132.) Eisenhower’s initiation and JFK’s approval of the CIA’s Bay of Pigs scheme offered an important early example of this behavior. Allen Dulles lost his job as head of the CIA and Dick Bissell got fired. It would not be the last time. The Global War on Terror involved “extraordinary rendition,” “secret prisons,” and torture, all under presidential order. Now there is a public shaming of the CIA officers who acted on those orders.

Second, Counter-Intelligence chief James J. Angleton’s long obsession with a Soviet “mole” in the CIA, then the Aldrich Ames case in 1994, left the Agency security-conscious almost to the point of paralysis. The CIA disliked everything that it heard about the then-new Internet communications, and it established almost impossible barriers to the recruitment of agents who could be used against foreign terrorist groups. (pp. 134-135.)

Third, intelligence agency budgets were sharply reduced from 1990 to 1996, then kept flat from 1996 to 2000. Policy-makers insisted upon ever more robust technological capabilities in intelligence gathering without providing additional funds to procure them, so intelligence agencies cannibalized both human intelligence and analysis to get the money. (p. 136.)

In the Clandestine Service the budget cuts of the Nineties meant the loss of many experienced officers and the closure of facilities abroad. The CIA adapted to this by relying heavily upon foreign intelligence service liaison, and by “surging” (running around putting out brushfires instead of covering regions with experts).

After the end of the Cold War, the Directorate of Intelligence’s “university culture with its version of books and articles was giving way to the culture of the newsroom.” (p. 133.) That is, analysts began churning out descriptive reports on more subjects, based on a shallower understanding than earlier reports had reflected.

People recognized that a problem existed at the CIA. In 1997 George Tenet was appointed DCI with the mission of rebuilding the agency. In 1998 and 1999 two panels (the second chaired by Donald Rumsfeld) that evaluated the CIA warned of “the dispersal of effort on too many priorities, the declining attention to the craft of strategic analysis, and security rules that prevented adequate sharing of information.” (p. 134.) Tenet obtained expanded budgets for all aspects of the CIA. (pp. 512-513.) In 1998 Tenet persuaded both Congress and the Clinton administration to begin rebuilding the Clandestine Service, but the 5-7 years of training needed to bring a new officer up to full speed meant that it would be 2005 or 2006 before the first recruits were of any real use to anyone. (p. 133.)

Thomas H. Kean and Lee H. Hamilton, The 9/11 Report: The National Commission on Terrorist Attacks Upon the United States (New York: St. Martin’s Press, 2004).

Zarqawi.

Ahmad Fadeel al-Nazal al-Khalayleh (30 October 1966-7 June 2006) was born in Zarqa, Jordan. He sprang from a Bedouin family which had settled down in Jordan’s one factory town. Something went wrong early in life. He drank a lot and had a great deal of “contact” with the police. At some point, he got religion and shaped up his life. A passport photo shows him clean-shaven, with a white shirt and tie—and a sad, mean look. At some point, he took the alias “Abu Musab al-Zarqawi,” which means “the father of Musab” and “from Zarqa.”

In 1989 he followed the well-worn Young Islamist pathway to Afghanistan. Here he met Osama bin Laden, may have received basic military training in one of the numerous camps, and wrote some stuff for an Islamist newsletter. By 1992 he was back in Jordan conspiring to overthrow the monarchy, for which he did five years in prison (1994-1999). In prison he came under the influence of the Jordanian Islamist writer Abu Muhammad al-Maqdisi. No sooner did he get out than he tried to blow up a tourist hotel in Amman (1999). This didn’t work out any better than his earlier plot. From 1999 to 2002 he moved to Afghanistan (where OBL fronted him $200,000 to start a Jordanian franchise of Al Qaeda and the Americans almost killed him in a bombing), then went to Iraq by way of Iran. He may have been recovering from an injury in Baghdad for a while. In summer 2002 he moved into northern Iraq, where he joined an Islamist group that was waging jihad by cutting pictures of women off ads.

More serious work tugged at him. He helped plot the assassination of an American diplomat in Jordan (October 2002); organized the bombing of the UN’s HQ in Baghdad (August 2003); organized attacks on Shi’ite shrines in Karbala and Baghdad (March 2004); planned a huge abortive chemical weapons attack on the offices of the prime minister and the intelligence service of Jordan and on the American embassy (April 2004); beheaded a captured American civilian (May 2004), then posted the film on the internet; sent terrorists on an abortive attack on a NATO meeting in Turkey (June 2004); beheaded another captured American civilian (September 2004), then posted the film on the internet; organized the bombing of three hotels in Amman (November 2005); and organized the attack on the Al Askari mosque in Samarra (February 2006). These attacks are only the most spectacular of his operations.

Having been organizing in Iraq from before the Second Gulf War, he had the weapons and explosives, the local contacts, the hideouts, and the local knowledge for insurgent war. What he needed were fighters. These began to flow to him in the form of the many Islamist foreign fighters who entered the country from 2003 on. The newcomers lacked local contacts, so Zarqawi became their controller. He probably organized many of the hundreds of suicide bombings that battered Iraq from 2003 to 2006.

Zarqawi had been on American and Jordanian “Most Wanted” lists since early 2002. In January 2003, the CIA had proposed killing Zarqawi at a camp they had identified in Kurdistan. The proposal was rejected, possibly out of fear that an attack would release toxic clouds from chemicals stored in the camp. Once the US invaded Iraq, Special Forces groups hunted Zarqawi with mounting intensity. Several of these raids came close to capturing him, but always fell short. (One time they found eggs cooking, but not yet burning, on the stove of his empty hide-out.) However, the raids did capture some of his associates. One of these was interrogated—humanely—by an Air Force interrogator who uses the pseudonym “Matthew Alexander.” Zarqawi had a great many hiding places, but “Alexander” learned the location of one in a village near Baqubah. It took six weeks of watching before Zarqawi himself came in sight. On the night of 7 June 2006, two precision-guided bombs destroyed the house, Zarqawi, and his wife and his child, Musab.

Oil for the Lamps of China.

Half of the world’s easily available oil is in Iran, Iraq, and Saudi Arabia. That oil has powered the great Western economic surge since the Second World War. In 1973 and 1979 “oil shocks”—sudden rises in the price of oil and restrictions in supply—badly damaged the world’s economy in multiple ways. In 1979 the Soviet Union invaded Afghanistan, on the border of Iran, just as Iran was caught up in the turmoil of its revolution. Visions of Red Army tanks reaching the northern shores of the Persian Gulf danced through the heads of many people. In 1980 President Jimmy Carter announced that “Any attempt by an outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States.”

Actually, the American concern went beyond combatting an “outside force [seeking] to gain control.” The American concern encompassed any Middle Eastern state seeking to dominate so much of the region’s oil production that it could move the world market price for oil. What the Americans wanted was a stable world market in oil. President George H. W. Bush showed just how seriously the United States took both Carter’s declaration and the larger interest in price stability when he gathered a broad international coalition to cream Iraq in 1990-1991 after it occupied Kuwait.[1]

The spread of the Industrial Revolution into Asia has created a vastly more complicated situation. The collapse of the Communist experiment in the Soviet Union led the People’s Republic of China (PRC) and then other one-time believers in a planned economy to turn toward a market economy. A head-long rush to industrialization in the non-Western world followed. Oil came into ever-greater demand. Thus, no sooner had Saddam Hussein’s invasion of Kuwait been defeated than the PRC entered international oil markets. By 2003 China had passed Japan as the world’s second largest oil consumer.

The Chinese strategy began with two components. First, China recycles part of the profits from exporting low-cost manufactured goods to the West into buying up oil and gas drilling rights in developing countries. These export earnings leave China with deep pockets, so the Chinese often just out-bid their Western competitors. More than thirty countries have received Chinese investments in oil production. They include Algeria, Libya, Egypt, Sudan, Chad, Nigeria, Iran, and Indonesia. All Persian Gulf countries sell oil to China.

Second, China went where Western countries would not go. In particular, China began to court Sudan and Iran. By 2005, China had invested $15 billion in Sudan’s oil drilling and production. China chose to ignore the outcry in the West over the government of Sudan’s brutal war against its own people in the western and southern parts of the country. In Iran, China began trading modern weapons for oil to a state under a Western arms embargo. Cash investments soon followed. People in rich countries often forget that a delicate conscience is a luxury.

The Chinese demand for oil destabilizes the world oil market. Fighting China won’t be like fighting Iraq. So, perhaps people will strike a deal?

On all aspects of energy: http://www.eia.gov/countries/index.cfm?view=consumption

Matthew Yeoman, “The World in Numbers: Crude Politics,” Atlantic, April 2005, pp. 48-49.

[1] The Great Depression of the 1930s had brought Hitler to power in Germany and had paralyzed the Western democracies. Reasoning backward from their own youthful experiences, many people in the West thought that if you hadn’t liked the Second World War and the Holocaust, then you should try to avoid a new world economic crisis. So, regardless of what Western liberals and Middle Eastern conspiracy theorists believe, “war for oil” isn’t the same thing as “war for oil companies.” It’s the same thing as “war for peace and prosperity.”


Shuffle the Deck and Deal.

The “recent unpleasantness” of the housing bubble and collapse has disguised a larger and longer-term movement. As economists never tire of pointing out, education is linked to prosperity—for both the individual and the community. In 1970, 11 percent of the population aged over twenty-five years had at least a BA. These people were spread around the country fairly evenly: half of America’s cities had concentrations of BA-holders running between 9 and 13 percent.

By 2004, things were very different in two respects. First, 27 percent of the population aged over twenty-five years had at least a BA. So, Americans appeared to be much better educated. Second, educated Americans now clustered together in a few cities. The densest concentrations are around Seattle, San Francisco, up toward Lake Tahoe on California’s border with Nevada, Los Angeles, San Diego, Phoenix, Denver, Salt Lake City, Austin, the Northeast Corridor from Washington to Boston, and in college towns scattered across the map.


Why this sorting?

Part of the explanation is a reciprocal relationship between educated people and prosperity. Businesses in science, health, engineering, computers, and education need to be where there are a lot of educated people; people who want to work in these industries need to be where they can get rewarding jobs. Part of the explanation is that some cities tolerate, or even foster, a high degree of diversity. All sorts of people who move toward these cities find a ready welcome and at least some other people like themselves. It’s easy to fit in. It’s easy to find people with whom to share ideas and projects. A further part of the explanation is that some cities got there first. Like early-birds at a yard-sale, they snapped up all the best things. Seattle, for example, had Boeing (lots of engineers), a big and more-or-less respectable university, a lot of racial diversity (and not just the White-Black kind that most Easterners mean), and a spectacular physical location. It’s easy to see why Microsoft settled there. Others flocked there for the same reasons.


What are the effects?

The more that talent concentrates, the greater the synergies that spin off innovations—and economic growth. The more that prosperous people concentrate, the greater the demand for all sorts of other services and amenities.

The production train used to run from innovation to design to manufacturing to distribution to sales to service. In this system, virtually all the different stages and skill-levels would be located in the same area. Detroit and cars or Pittsburgh and steel offer good examples. Today, much of the lesser-skilled work can be either automated or out-sourced to low-wage foreign suppliers. So, great prosperity can co-exist with economic decline.

But not for long. High income earners bid up the price of housing. It is common to find people without BAs being forced to re-locate away from the areas of tech prosperity. A long commute is one of the badges of un-success in contemporary America.

Steel and cars are waning as major American industries. The “knowledge economy” is central to future American prosperity. The transition has costs and problems that we don’t yet know how to resolve.

Richard Florida, “The Nation in Numbers: Where the Brains Are,” Atlantic, October 2006, pp. 34-35.

All Quiet on the Western Front.

Carl Laemmle (1867-1939) was a German Jew who migrated to the US in 1884. He worked as a book-keeper, but got interested in movies when they were a new thing. So did a lot of other people. In 1912 Laemmle and some of the others merged their companies into Universal Films, and then moved to Hollywood. Universal Films turned out to be very successful in the Twenties and early Thirties. However, in 1928 Carl Laemmle made the mistake of bringing his son, Carl, Jr. (1908-1979), into the business as head of production. Carl, Sr. had been a book-keeper, so he paid attention to what stuff cost. Carl, Jr. had been a rich kid, so he never paid attention to what stuff cost. This could work out OK if the spending produced a huge hit, so Carl, Jr. and Universal were always on the look-out for a potential huge hit.

Erich Maria Remarque (1898-1970) grew up in a working class family in Germany, but had some hopes of becoming a writer. He was drafted into the German Army in 1916. After his training, he served six weeks on the Western Front before he was wounded. He spent the rest of the war in hospital. After the war he took a swing at teaching, then wandered between different types of jobs. He still wanted to be a writer. In a burst of creativity in 1927, he wrote All Quiet on the Western Front. It became a hit when it came out in 1929.[1] Universal bought the rights.

First, Universal needed a screen-writer to adapt the novel into a movie. They turned to Maxwell Anderson (1888-1959), whose career is a novel in itself: he was a poor kid and son of an itinerant minister; a school teacher[2] and newspaper writer (fired many times in both careers, usually for not toeing the company line); and then a successful playwright who turned to writing movie screenplays on occasion. In 1924 his realistic war-play “What Price Glory?” had been a hit on Broadway. Carl, Jr. hired Anderson to adapt the novel.

Second, they needed a director. Lieb Milstein (1895-1980) grew up poor and Jewish in Kishinev, a city in pre-Revolutionary Russia. Kishinev wasn’t a good place to be either poor or Jewish, so Milstein did what everyone else who didn’t have rocks in their head did: he migrated to the United States. Upon arrival he changed his name to Lewis Milestone. He had been in the US for five years when America entered the First World War. Milestone enlisted in the Army; the Army taught him the film business as part of its propaganda and training work; and he moved to Hollywood after the war. He soon became a director, with a Best Director Oscar in 1928. At the top of his profession, he was much in demand for big pictures. Carl, Jr. hired him to direct “All Quiet on the Western Front.”

Third, they needed a bunch of actors. The “extras” weren’t hard to find. Oddly, there were several thousand German war veterans living around Los Angeles. Carl Jr. hired a lot of them. For the lead role of Paul Baumer, they hired Lew Ayres (1908-1996). Ayres didn’t have much acting experience (and he wasn’t really much of an actor). He was young and innocent and impressionable looking, which was the whole point.

The movie cost $1.2 million to make and earned $1.5 million at the box-office. That was enough profit to tempt Carl Jr. into more big-budget movies. Most didn’t do so well. In 1936 he and Carl Sr. got shoved out of Universal.

Lewis Milestone won the Oscar for Best Director. He got black-listed in the Fifties, then went into television work. Ayres became a conscientious objector/medic in World War II.

[1] Remarque wrote ten more novels, but his first remains his most famous.

[2] You notice that both Remarque and Anderson were school teachers? So was William Clark Quantrill. On the one hand, it didn’t used to be a respectable profession, so all sorts of flakes tried their hand at it. On the other hand, anybody with some brains can learn how to do it.

The Secret History of Veterans Day.

Fighting in the First World War stopped at 11:00 AM on 11 November 1918. In 1919, President Woodrow Wilson proclaimed 11 November of that year to be a national holiday, “Armistice Day.” It was supposed to be a one-off. The next year, Wilson proclaimed the Sunday nearest 11 November to be Armistice Sunday so that churches could devote a day to recalling the lost and pondering the difficulties of peace. In 1921 Congress declared a national holiday on 11 November to coincide with the dedication of the Tomb of the Unknown Soldier at Arlington National Cemetery. Thereafter most states made 11 November a state holiday.

The American Legion campaigned for additional payments to military veterans on the grounds that wartime inflation had eroded the value of their pay. Civilian employees of the federal government had received pay adjustments, so veterans should receive them as well to “restore the faith of men sorely tried by what they feel to be National ingratitude and injustice.” There were a lot of veterans: 3,662,374 of them. All were voters, so Congress took up adjusted compensation legislation in 1921 that promised immediate payments to veterans. These would have amounted to about $2.24 billion. That was a lot of money, especially since Congress didn’t propose a means to pay for it. President Warren Harding initially opposed the legislation unless it was paired with new revenue, then came to favor a pension system. Harding managed to block the legislation in 1921 and again in 1922. President Calvin Coolidge vetoed a new bill in 1924, saying that “patriotism…bought and paid for is not patriotism.” Congress over-rode the veto.

The World War Adjusted Compensation Act, also known as the Bonus Act, applied to veterans who had served between 5 April 1917 and 1 July 1919. They would receive $1.00 for each day served in the United States and $1.25 for each day served outside the United States. The maximum pay-out was capped at $625. The ultimate payment date was set for the recipient’s birthday in 1945. Thus, it functioned as a deferred savings or insurance plan. However, a provision of the law allowed veterans to borrow against their eventual payment.
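To make the arithmetic concrete (my own illustration, not an example from the Act): a veteran credited with 300 days of service in the United States would be owed 300 × $1.00 = $300, while one credited with 500 days overseas would hit the cap exactly, 500 × $1.25 = $625, payable on his birthday in 1945.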

In 1926 Congress urged the President to issue a proclamation each year on the commemoration of Armistice Day. It also ordered creation of a new and grander Tomb of the Unknown Soldier.

In 1929 the Great Depression began. Veterans suffered just like everyone else. Many of them began to borrow against the deferred compensation. By the middle of 1932, 2.5 million veterans had borrowed $1.369 billion.
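(That works out to an average loan of a bit under $550 per borrower: $1.369 billion divided among 2.5 million veterans. My arithmetic.)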

In April 1932 the new Tomb of the Unknown Soldier at Arlington was completed. In Spring and Summer 1932 about 17,000 veterans gathered in Washington, DC, to demand immediate payment of their compensation. Accompanied by thousands of family members, they camped out in shacks on Anacostia Flats. The papers called them the “Bonus Army.” In mid-June 1932, the House of Representatives passed a bill for immediate repayment, but the Senate rejected it. At the end of July 1932 the Washington police tried to evict the “Bonus Marchers,” but failed. President Herbert Hoover then had the Army toss them out.

In 1936 the Democratic majorities in Congress passed a bill to allow immediate payment of the veterans’ compensation, over-riding President Franklin D. Roosevelt’s veto. A bunch of rich-kid jokers at Princeton soon formed the “Veterans of Future Wars” to demand immediate payment of a bonus to them since they were likely to get killed in the next war, before they had a chance to spend a post-war bonus.

In May 1938 Congress passed a law making 11 November an annual holiday for federal employees. In 1954 Congress changed the name to Veterans Day.

Climate of Fear VI.

Burning carbon emits carbon dioxide and other greenhouse gases into the atmosphere. Greenhouse gases then trap heat in the atmosphere, preventing it from escaping out into space. This effect is responsible for global warming. Since the late 18th Century, burning carbon has fueled the Industrial Revolution. In the 1980s and 1990s, the surface temperature of the Earth rose by 1.2 degrees. This rise caused substantial melting of the polar ice caps and extreme weather events.

How much worse, then, would be the effects of the spread of industrialization into the non-Western world in the 21st Century? This spread has greatly increased the burning of carbon. Between 2000 and 2010, 110 billion tons of carbon dioxide were released into the atmosphere. This amounts to an estimated one-fourth of all the greenhouse gases ever emitted. At this rate, the concentration of carbon dioxide in the atmosphere will reach double its pre-industrial level by 2050. In 2007 the UN’s Intergovernmental Panel on Climate Change (IPCC) predicted that such a doubling could lead to a temperature rise of 5.4 degrees, with increases of 0.2 degrees Celsius per decade. (Which I think, but I’m a dumb American, works out to be 0.36 degrees Fahrenheit per decade.) So, the temperature of the Earth should be rising even faster than before.
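(A quick check of that conversion: a temperature change, unlike a temperature reading, skips the +32 offset, so a rise in Fahrenheit is just the rise in Celsius times 9/5, and 0.2 × 9/5 = 0.36. The arithmetic holds.)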

It isn’t. Since 1998 the surface temperature of the Earth has risen by 0.2 degrees. However, this is much less of a rise than climate scientists had projected by extrapolating the temperature increases that were recorded in the 1980s and 1990s. (I think that we should be about 0.5 degrees warmer, but see my earlier disclaimer.) “Baby, Baby, where did the heat go?”

Some climate change skeptics love this: “There is no problem with global warming. It stopped in 1998.” OK, but why did it stop? Will it restart? Another stripe of skeptics takes issue with the accuracy of the models used to estimate the effects of greenhouse gas emissions. They argue that the climate is not as sensitive to increases in greenhouse gases as many models assume. If so, we have more time to adapt, and at a lower cost, than “alarmists” predict.

Climate scientists offer a number of possible explanations for the “missing heat.”

The deep seas absorbed the extra heat, the way they did the “Titanic.” While surface sea temperatures have remained stable, temperatures below 2,300 feet have been rising since 2000.

The rhythms in the heat radiated by the Sun are responsible. The highs and lows of this rhythm are called solar maximums and solar minimums. One solar maximum ended in 2000 and we are in the midst of a solar minimum.

The pollution emitted by major carbon-burners like China actually reflects away some of the Sun’s heat before it becomes trapped in the atmosphere. (You can see how this answer would alarm proponents of responding to climate change. “The real problem with air pollution is that we don’t have enough of it.”)

Climate scientists have also scaled back their predictions from a possible 5.4 degree rise in surface temperatures to projections of between 1.6 and 3.6 degrees. These less-warm decades will then be followed by the roof falling in. The sun will move toward the next solar maximum; the heat trapped in the deep sea will rise toward the surface to boost temperatures; and the “pollution umbrella” will thin, letting the atmosphere go back to trapping heat. We’ll fry like eggs. Or perhaps just get poached. Depends on which scientists you believe.

“The missing heat,” The Week, 30 August 2013, p. 11.

Judith Curry, “The Global Warming Statistical Meltdown,” Wall Street Journal, 10 October 2014.

The Senator from San Quentin.

During the 1980s violent crime rose to new peaks. The murder rate in 1991 reached 9.8/100,000, about four times the rate in, say, France. A criminologist named George Kelling argued that the toleration of all sorts of little crimes or acts of indecency—even broken windows or vandalism or those homeless goofs at intersections trying to extort pocket change for cleaning your windows—created an atmosphere of disrespect for the law. From little things, people went on to feel less restrained about bigger things. Kelling sold this idea to New York City Police Commissioner William Bratton. New York cops started pushing the homeless into shelters, clearing the intersections of squeegee men, and stopping kids from hanging out on street corners.

However, Bratton also embraced the idea that a lot of crime is committed by a few people, and a little crime is committed by a lot of people. You want a big drop in crime? Concentrate on the few career criminals and put them away for a long time. Bratton concentrated on a statistical analysis of crime in each police precinct, then drove his precinct captains to find and arrest habitual criminals. This seemed to work, so lots of police departments adopted the New York approach. Bratton’s approach coincided with a get-tough policy adopted by legislatures in the Nineties. Mandatory minimum sentences and three-strikes-and-you’re-out sentencing kept criminals in prison for longer. The war on drugs, especially the crack cocaine epidemic, sent a lot more people to prison. Guys who are locked up can’t commit crimes, at least not against ordinary citizens. (Fellow prisoners or guards? That’s another story.)

Inevitably, there is a down-side. First, the United States has one-twentieth of the world’s population, but one-fourth of the world’s prisoners. (That count includes both Russia and China.) There are more people currently in prison in the United States (2.3 million) than there are residents in any one of the fifteen least-populated states, and more than in the four least-populated states put together. The rate of imprisonment in the United States is the highest in the world.

Second, black communities have been particularly hard hit by both crime and punishment. One in nine black men between the ages of 20 and 34 is in jail. (The overall ratio of imprisoned to paroled/probationed is about 1:3, so that would suggest that another three in nine black men are under some other form of judicial supervision.) Since felons lose the right to vote, large numbers of blacks have been disfranchised in what one law professor has labeled “the new Jim Crow.” Since most prisons are located in rural areas, and prisoners are counted where they are confined rather than where they come from, imprisonment also shifts representation toward rural areas unsympathetic to city problems.

Third, keeping huge numbers of prisoners locked up is really expensive. Americans don’t like to pay taxes, so prison budgets have been held down for decades. The result is massive over-crowding. Courts have repeatedly held this over-crowding to amount to cruel and unusual punishment.

Fourth, imprisonment doesn’t seem to do anything to change behavior. Says one criminologist, “two-thirds of those who leave prison will be back within three years.”

What have changed are the crime rates. Between 1991 and 2009, the number of murders fell by 45 percent. From its peak of 9.8/100,000 in 1991, the murder rate fell to 5.0/100,000 in 2009. The same decline has been found in most other categories of crime over the same period. At least for now.

Prisoners are so numerous that, if grouped together and represented in the Congress, they would be a formidable voting bloc.

“The prison nation,” The Week, 13 February 2009, p. 13; “The mystery of falling crime rates,” The Week, 16 July 2010, p. 13.

Eye in the Sky.

Some time ago the courts decided that no one has a right to privacy when they are on the streets or in public places. Initially, this applied, in part, to the many surveillance cameras installed by banks and stores and apartment buildings. Then the development of digital cameras made surveillance video available to watchers in real time and it made it simple to transfer the images between widely separated computers. Then computer geeks developed face-recognition software and programs that detected “anomalous behavior.” All of these were great crime-fighting tools, at least according to the police who sing the non-specific praises of the cameras as deterrents and crime-solving aids.

With this doorway open, since 9-11 the Department of Homeland Security has been making grants to cities to fund the installation of security cameras targeting public places. These cameras supplement the already existing security cameras installed by banks, stores, and office buildings. Madison, Wisconsin—a bastion of Mid-Western liberalism—is putting in 32 cameras; Chicago and Baltimore—hotbeds of urban crime which actually don’t give a rip about Islamic terrorism—are installing thousands of cameras and are linking them to the existing systems of private cameras. The most elaborate system is that of the Lower Manhattan Security Initiative: by 2010, 3,000 cameras will be in place throughout Wall Street and the World Trade Center area. In addition, the system includes license plate readers connected to computers that cross-reference the numbers of suspect vehicles and share images with the Department of Homeland Security and the EffaBeeEye.

Now there is a new layer of observation: police, government, and private drones. The police are hot to use drones. In the 1980s the Supreme Court held that the police don’t need a warrant to observe private property from public airspace. [NB: What is “public airspace”? So far as I can tell, anything at a height of 500 feet or above is clearly public airspace; anything 83 feet or below is private airspace; and what is in-between is a little murky. Are you allowed to shoot drones under 83 feet like skeet?] Drones can be fitted with high-resolution cameras, infra-red sensors, license plate-readers, and directional microphones. They are quieter and smaller than helicopters, reducing the chance that people will know that they are being observed without a warrant. If you keep your shades pulled down, can they “assume” you’re running a grow house?

Are there problems with this program? In the eyes of individual rights advocates on the left and right, the answer is definitely yes. While government agencies will watch millions of people in public places in hopes of catching a few terrorists before an attack, it is more likely that they will only be able to figure out what happened after the attack. Will people just become habituated to being watched in public places? In a generation, will they accept the possibility of being watched in semi-public places? What happens when surveillance images leak from the government agency to the public sphere? See: http://www.youtube.com/watch?v=8zYRYh6cQ2g The clip is fun to watch, except that it is a public traffic camera with the film leaked to provide private entertainment. What if a mini-drone lands on your bathroom window sill one morning and catches you in the shower? Some Peeping Tom at home or cops finding a fun use for the technology paid for by the DEA or property seizures from teen-age druggies driving their Dad’s BMW? In the eyes of most Americans, however, more surveillance cameras are just fine. (“The drone over your backyard,” The Week, 15 June 2012, p. 11.)