Climate of Fear VIII.

In September 2014 the New York Times published a hard-headed essay by Robert Stavins, one of the authors of multiple reports by the UN’s Intergovernmental Panel on Climate Change.[1] He made some important points.

First, Americans became sufficiently concerned about the environment to take action in the late 1960s, when air and water pollution had become too obvious to ignore. Their attention turned to the issue of climate change during the 1980s and 1990s. Joining in a movement with other advanced economies, the United States signed a series of international agreements to reduce the emission of greenhouse gases. As a result of those agreements, emissions by these countries were held flat or even reduced.

Second, developing nations (China, India, South Korea, Mexico, Brazil, South Africa) refused to join such agreements because their own industrialization is both carbon-fueled and essential to raising the living standards of their people. China leads this process: it already produces 29 percent of the world’s annual carbon emissions and will pass the United States as the world’s leading cumulative carbon emitter before mid-century. None of the developing countries wants to restrain its own emissions for fear of restraining economic growth. Their preferred solution is for the advanced countries to restrict their emissions even more than they already have, to make room for the emissions of the developing countries.

Third, unlike economists such as Robert Frank and Eduardo Porter (see: Climate of Fear II), Stavins doesn’t try to sugar-coat the costs of limiting emissions. The UN wants to hold the temperature rise to two degrees Celsius above the pre-industrial temperature. That would entail reducing carbon emissions by 40 to 70 percent by 2050. Stavins argues that this would reduce economic growth by 0.06 percent per year from now to the end of the 21st century. Compounded, that cumulates to a level of annual economic activity about 5 percent lower than it would otherwise be.
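Stavins’s figures are easy to check: a growth rate 0.06 percent lower per year, compounded from now to 2100, leaves annual output roughly 5 percent below its baseline. A minimal back-of-the-envelope sketch (the compound-interest treatment of his figures is my assumption):

```python
# Check Stavins's arithmetic: 0.06 percent slower growth per year,
# compounded over the rest of the 21st century.
years = 2100 - 2014          # roughly "now to the end of the century"
annual_drag = 0.0006         # 0.06 percent per year

# Output relative to the no-mitigation baseline after compounding.
relative_output = (1 - annual_drag) ** years
reduction = 1 - relative_output

print(f"After {years} years, annual output is about "
      f"{reduction:.1%} below the baseline.")
```

Run it and the cumulative reduction comes out near the 5 percent Stavins cites, which is the point: a drag too small to notice in any one year still adds up.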

Furthermore, even those predictions depend to some extent upon the rapid development of cheap alternative energy sources and technologies to limit emissions. Absent such cheap new technologies, the cost estimates more than double. Stavins appears skeptical that such breakthroughs will happen. Cutting carbon emissions will also require large-scale use of nuclear energy and a world-wide carbon tax.

Fourth, the politics of meeting popular expectations raise a huge barrier to action. This isn’t a “democracy versus autocracy” issue. The rulers of China and India are sensitive to the economic aspirations of their people, even if their countries aren’t full democracies, or democracies at all. Greenhouse gases are invisible, and their impact shows itself slowly rather than dramatically. So what if the people of the Seychelles have to take to the boats? Imposing costs now to avoid something bad in the future (or something bad for someone else in the present) isn’t going to be popular anywhere. Moreover, the UN is only trying to limit the future rise in temperature, not to roll back the 0.8 degrees Celsius rise that has already taken place. “If you make big sacrifices, then things will stay the way that they are now or get a little worse.” Try putting that on a bumper sticker, then run for office.

A couple of things are worth thinking about. On the one hand, is the best we can hope for a patchwork of wavering national efforts to limit emissions through administrative action? On the other hand, is there a way to make higher energy prices and more nuclear reactors palatable to voters? Or do we just adapt by drilling for deep water and moving back from the coasts?

[1] Robert N. Stavins, “Climate Realities,” NYT, 21 September 2014.

 

Climate of Fear VII.

Back in 1793, the Marquis de Condorcet was in hiding from some French Revolutionaries who wanted to cut off his head. To while away the time in a garret, he wrote an essay predicting the continual improvement of the human situation. Science would tell us more about the world, while education would make that knowledge widely understood and the emancipation of women would enrich the stock of human capital. Soon afterward he was dead, but his philosophical essay continued to inspire optimists. In 1798, Thomas Malthus approached the issue of human progress from the hard-headed perspective of mathematics. Human population would always tend to run ahead of food supply. Most people would find their standard of living forced down to the bare subsistence level. Two intelligent people approaching the same question from two different perspectives arrived at radically different answers.[1]

Accept that global warming is real. What’s the worst that could happen? As was the case with Condorcet and Malthus, the answer depends on who is doing the imagining. Diane Ackerman, The Human Age: The World Shaped by Us (New York: Norton, 2014) is “enormously hopeful.” For one thing, humans have been remodeling the planet almost since they climbed down out of the trees. It has been one long Lowe’s project: dams, dikes, canals, logging, and moving life-forms (bacteria, plants, animals, people) from continent to continent. All of this even before the Industrial Age began. Human beings do stupid things, or smart things that turn out to have awkward, unforeseen consequences. However, human beings are also endlessly inventive when solving problems. Florida may become uninhabitable as the seas rise, but Florida only became habitable for large numbers of people in the first place through the invention of air-conditioning and insecticide. People will adapt to a changing environment; new technologies will emerge to deal with new demands.[2]

Both Naomi Oreskes and Erik Conway, The Collapse of Western Civilization: A View From the Future (New York: Columbia UP, 2014) and Naomi Klein, This Changes Everything: Capitalism vs. the Climate (New York: Simon and Schuster, 2014) are less sanguine.[3] Klein argues that “we have not done the things that are necessary to lower emissions because those things fundamentally conflict with deregulated capitalism, the reigning ideology for the entire period we have been struggling to find a way out of this crisis.” Where will this lead? http://www.youtube.com/watch?v=5BmEGm-mraE Naomi Klein, at least, urges a “Great Transition” away from capitalism, one that would resolve not merely the climate crisis but a host of other social ills as well.

Nathaniel Rich, “Books: Feeling Our Rising Temperature,” NYT, 23 September 2014, D5.

 

So who is correct? Goldilocks. It’s likely to be worse than Ackerman expects, especially if you live in one of the fragile zones of the Earth. Human adaptability will deal with the changes better than Klein, Oreskes, and Conway fear.

What is the most prudent response? Do what we can to limit the changes that will come, while creating an environment to stimulate adaptive responses and new technologies. Carbon taxes would be a good place to start. Raise the price of carbon. Let consumers and entrepreneurs—not governments—figure out how to respond.

[1] Julian Simon and Paul Ehrlich continued this debate in the late 20th Century.

[2] My own hope is to grow rich by building a marina and resort on Baffin Island.

[3] I suppose you could call them “Naomi-sayers.” Ha! Is joke.

 

The 9/11 Commission Report.

The 9/11 attacks took place in 2001. The National Commission on Terrorist Attacks Upon the United States, commonly called the 9/11 Commission, issued its report in 2004. Ten years on seems like a useful point at which to look back on the Report.

The historical “lessons” of the 9/11 Report have entered into the understanding of that “informed public” beloved of college professors and newspaper editors. They shape much American policy in the world. They are worth examining if only on those grounds.

The Report also identified serious problems in American government and politics. It defined a broad agenda for reform. In this it resembles earlier American manifestos, like the journalism of the “muckrakers” in the Progressive Era and the reports on crime, violence, and race relations that came out in the 1960s. It is fair to ask, ten years on, how far we have come in addressing those problems.

I thought that I would spend some time revisiting what we learned from the Report of the 9/11 Commission. Comments are always welcome.

What did we learn from the Report of the 9/11 Commission? III

Osama bin Laden seems to have encountered Sayyid Qutb’s philosophy through the tape recordings of a Palestinian evangelist named Abdullah Azzam, while attending Saudi Arabia’s Abdul Aziz University in the late Seventies. (p. 82.) Bin Laden adopted this worldview; in it, only the conversion of everyone, everywhere, to his version of Islam would end his war with the unbelievers. (pp. 76-77.)

Then the Soviets invaded Afghanistan in 1979 to support a threatened Communist regime. The Afghans fought back, and devout Muslims from all over the world came to participate in the “jihad” against the Soviets. While the CIA channeled immense amounts of American aid to the “mujahideen” through the Pakistani intelligence service (ISI), a parallel private network—the so-called “Golden Chain”—also raised money in Saudi Arabia and recruited fighters for Afghanistan. Osama Bin Laden and Abdullah Azzam played an important part in this latter effort.

At some point Bin Laden developed “a vision of himself as head of an international jihad confederation.” (p. 86.) When, in April 1988, the Soviets cried uncle and announced their plans to leave Afghanistan by the end of the year, Bin Laden and Azzam cast around for a new enemy to attack. Azzam argued for struggling to create a pure Islamic state in Afghanistan, then attacking Israel; Bin Laden argued for a global war. (p. 84.)

In fall 1989 Hassan al Turabi, an important Islamic fundamentalist leader in Sudan, invited Bin Laden to use Sudan as a base of operations. Turabi had a vision of Sunni and Shi’a putting aside their religious differences to make common cause against Israel and the United States. (p. 90.) Did Azzam oppose this move? On 24 November 1989 Azzam died in a car bombing. The bombing went unsolved at the time, but it now looks suspiciously like Bin Laden settling the debate.

Bin Laden then accepted al Turabi’s invitation. He sent men to begin buying property, while he himself returned to Saudi Arabia. Soon afterward, Iraq invaded Kuwait and threatened Saudi Arabia. A broad international coalition formed, led by the United States, to oppose a move that threatened the stability of the world oil market. Between August 1990 and April 1991 Bin Laden made himself deeply unpopular with the Saudi government by bitterly criticizing its decision to ally with the United States rather than calling on Islamic volunteers to oppose the invasion of Kuwait. By this time he was already profoundly anti-American. (p. 87.)

In April 1991 he escaped from Saudi Arabia and established himself in the Sudan. For the next few years Bin Laden worked hard at building covert international networks for finance and operations. He called his group al Qaeda. In this effort he seems to have had the strong support of Hassan al Turabi. The Sudanese leader created a “Popular Arab and Islamic Conference” as a forum for “violent Islamist extremists” who came to confer in the Sudan. Most of these groups forged links to al Qaeda. (p. 90.) Sudan also provided a safe haven for other terrorists who would attack surrounding Arab countries.

 

Thomas H. Kean and Lee H. Hamilton, The 9/11 Report: The National Commission on Terrorist Attacks Upon the United States (New York: St. Martin’s Press, 2004).

What did we learn from the Report of the 9/11 Commission? II

Westernized elites (lawyers, bureaucrats, soldiers) provided the leadership for the successful nationalist movements in the Middle East after the Second World War. The initial economic situation of the new states did not appear unpromising: “The established commercial, financial, and industrial sectors…, supported by an entrepreneurial spirit and widespread understanding of free enterprise, augured well.” (p. 79.) However, the secular governments of the new states failed to deliver on the extravagant promises made in the early period of independence. The governments of many new states followed policies that slowly stifled all economic progress.

In the Arab world the oil shocks of the 1970s inflicted grave damage disguised as a great blessing. The enormous profits proved transient, but the governments used them for efforts to transform Arab society that had long-term consequences. Governments spent heavily on “huge infrastructure projects, vastly expanded education, and…subsidized social welfare programs.” Cronyism meant that lots of money stuck to members of the ruling elites, as well.

Modern medical care led to a soaring birthrate all across the Muslim world. This large, young population needed jobs created at a rapid rate, but the stagnant economies of the Muslim states failed at the task. The result was the proliferation of angry, frustrated, aggrieved, half-educated or mis-educated young men. (p. 80.) Rather than yield power or turn to new policies, the ruling elites settled for repressing dissent.

When a sharp rise in population intersected precipitously declining oil revenues in the 1990s, governments had to reduce spending sharply. The generous programs of the early 1980s “established a wide-spread feeling of entitlement without a corresponding sense of social obligation.” The later effort to cut spending “created enormous resentment among recipients who had come to see government largesse as their right.” (p. 79.)

Many people turned to religion. As is the case with Christianity, Islam has been subject to periodic reform movements that could be called “fundamentalist” or “revivalist.” One exponent of reform was the 14th century scholar Ibn Taimiyyah, who “condemned both corrupt rulers and the clerics who failed to criticize them. He urged Muslims to read the Qur’an and the Hadith for themselves, not to depend solely on learned interpreters like himself but to hold one another to account for the quality of their observance.” (p. 75.) NB: In short, Calvin’s Geneva.

In the 1940s, Sayyid Qutb, an Egyptian scholar, visited the United States at the behest of his government and returned to Egypt deeply estranged from everything Western. (pp. 75-76.) Qutb espoused a Manichaean worldview in which pervasive, corrosive “unbelief” (jahiliyya) among non-Muslims and Muslims alike threatened to overwhelm true belief. True believers had to fight the unbelievers by all means and to the death. (pp. 76-77.) “The extreme Islamist version of history blames the decline from Islam’s golden age on the rulers and people who turned away from the true path of their religion, thereby leaving Islam vulnerable to encroaching foreign powers eager to steal their land, wealth, and even their souls.” (p. 75.)

By the late Seventies and early Eighties there had arisen a powerful religious movement among young men in the Muslim world. Osama Bin Laden was inspired by a preacher in the late Seventies. Khalid Sheikh Mohammed became attracted to “jihadism” in the early Eighties, as did “Hambali,” drawn to Islamist preaching in Malaysia. Young jihadis went to fight in Afghanistan (1980s), in Bosnia (1990s), and elsewhere.

Thomas H. Kean and Lee H. Hamilton, The 9/11 Report: The National Commission on Terrorist Attacks Upon the United States (New York: St. Martin’s Press, 2004).

What did we learn from the Report of the 9/11 Commission? I

By the end of the 20th century the CIA was “an organization capable of attracting extraordinarily motivated people, but institutionally averse to risk, with its capacity for covert action atrophied, predisposed to restrict the distribution of information, having difficulty assimilating new types of personnel, and accustomed to presenting descriptive reportage of the latest intelligence.” (p. 137.)

How had this situation come into being?

First, “although covert actions represent a very small fraction of the [CIA’s] entire budget, these operations have at times been controversial and over time have dominated the public’s perception of the CIA.” (p. 126.) Furthermore, whenever covert actions turned into highly public exploding cigars, the Presidents who ordered them have left CIA officers to carry the can. The CIA became very reluctant to engage in them. (p. 132.) Eisenhower’s initiation, and JFK’s approval, of the CIA’s Bay of Pigs scheme offered an important early example of this behavior. Allen Dulles lost his job as head of the CIA and Dick Bissell got fired. It would not be the last time. The Global War on Terror involved “extraordinary rendition,” “secret prisons,” and torture, all under presidential order. Now there is a public shaming of the CIA officers who acted on those orders.

Second, Counter-Intelligence chief James J. Angleton’s long obsession with a Soviet “mole” in the CIA, then the Aldrich Ames case in 1994, left the Agency security-conscious almost to the point of paralysis. The CIA disliked everything that it heard about the then-new Internet communications, and it established almost impossible barriers to the recruitment of agents who could be used against foreign terrorist groups. (pp. 134-135.)

Third, intelligence agency budgets were sharply reduced from 1990 to 1996, then kept flat from 1996 to 2000. Policy-makers insisted upon ever more-robust technological capabilities in intelligence gathering, without providing additional funds to procure them, so intelligence agencies cannibalized both human intelligence and analysis to get the money. (p. 136.)

In the Clandestine Service the budget cuts of the Nineties meant the loss of many experienced officers and the closure of facilities abroad. The CIA adapted to this by relying heavily upon foreign intelligence service liaison, and by “surging” (running around putting out brushfires instead of covering regions with experts).

After the end of the Cold War, the Directorate of Intelligence’s “university culture with its version of books and articles was giving way to the culture of the newsroom.” (p. 133.) That is, analysts began churning out descriptive reports on more subjects, based on a shallower understanding than previous reports had shown.

People recognized that a problem existed at CIA. In 1997 George Tenet was appointed DCI with the mission of rebuilding the agency. In 1998 and 1999 two panels (the second chaired by Donald Rumsfeld) that evaluated the CIA warned of “the dispersal of effort on too many priorities, the declining attention to the craft of strategic analysis, and security rules that prevented adequate sharing of information.” (p. 134.) Tenet obtained expanded budgets for all aspects of the CIA. (pp. 512-513.) In 1998 Tenet persuaded both Congress and the Clinton administration to begin rebuilding the Clandestine Service, but the 5-7 years of training needed to bring a new officer up to full speed meant that it would be 2005 or 2006 before the first recruits were of any real use to anyone. (p. 133.)

Thomas H. Kean and Lee H. Hamilton, The 9/11 Report: The National Commission on Terrorist Attacks Upon the United States (New York: St. Martin’s Press, 2004).

Zarqawi.

Ahmad Fadeel al-Nazal al-Khalayleh (30 October 1966-7 June 2006) was born in Zarqa, Jordan. He sprang from a Bedouin family which had settled down in Jordan’s one factory town. Something went wrong early in life. He drank a lot and had a great deal of “contact” with the police. At some point, he got religion and shaped up his life. A passport photo shows him clean-shaven, with a white shirt and tie—and a sad, mean look. At some point, he took the alias “Abu Musab al-Zarqawi,” which means “the father of Musab” and “from Zarqa.”

In 1989 he followed the well-worn Young Islamist pathway to Afghanistan. Here he met Osama bin Laden, may have received basic military training in one of the numerous camps, and wrote some stuff for an Islamist newsletter. By 1992 he was back in Jordan conspiring to overthrow the monarchy, for which he did five years in prison (1994-1999). In prison he came under the influence of the Jordanian Islamist writer Abu Muhammad al-Maqdisi. No sooner did he get out than he tried to blow up a tourist hotel in Amman (1999). This didn’t work out any better than his earlier plot. From 1999 to 2002 he moved to Afghanistan (where OBL fronted him $200,000 to start a Jordanian franchise of Al Qaeda and the Americans almost killed him in a bombing), then went to Iraq by way of Iran. He may have been recovering from an injury in Baghdad for a while. In summer 2002 he moved into northern Iraq, where he joined an Islamist group that was waging jihad by cutting pictures of women off ads.

More serious work tugged at him. He helped plot the assassination of an American diplomat in Jordan (October 2002); organized the bombing of the UN’s HQ in Baghdad (August 2003); organized attacks on Shi’ite shrines in Karbala and Baghdad (March 2004); planned a huge abortive chemical weapons attack on the offices of the prime minister and the intelligence service of Jordan and on the American embassy (April 2004); beheaded a captured American civilian (May 2004), then posted the film on the internet; sent terrorists on an abortive attack on a NATO meeting in Turkey (June 2004); beheaded another captured American civilian (September 2004), then posted the film on the internet; organized the bombing of three hotels in Amman (November 2005); and organized the attack on the Al Askari mosque in Samarra (February 2006). These attacks are only the most spectacular of his operations.

Having been organizing in Iraq from before the Second Gulf War, he had the weapons and explosives, the local contacts, the hideouts, and the local knowledge for insurgent war. What he needed were fighters. These began to flow to him in the form of the many Islamist foreign fighters who entered the country from 2003 on. The newcomers lacked local contacts, so Zarqawi became their controller. He probably organized many of the hundreds of suicide bombings that battered Iraq from 2003 to 2006.

Zarqawi had been on American and Jordanian “Most Wanted” lists since early 2002. In January 2003, the CIA had proposed killing Zarqawi at a camp they had identified in Kurdistan. The proposal was rejected, possibly out of fear that an attack would release toxic clouds from chemicals stored in the camp. Once the US invaded Iraq, Special Forces groups hunted Zarqawi with mounting intensity. Several of these raids came close to capturing him, but always fell short. (One time they found eggs cooking, but not yet burning, on the stove of his empty hide-out.) However, the raids did capture some of his associates. One of these was interrogated—humanely—by an Air Force interrogator who uses the pseudonym “Matthew Alexander.” Zarqawi had a great many hiding places, but “Alexander” learned the location of one in a village near Baqubah. It took six weeks of watching before Zarqawi came in sight. On the night of 7 June 2006, two precision-guided bombs destroyed the house, Zarqawi, his wife, and his child, Musab.

Oil for the Lamps of China.

Half of the world’s easily available oil is in Iran, Iraq, and Saudi Arabia. That oil powered the great Western economic surge after the Second World War. In 1973 and 1979 “oil shocks”—sudden rises in the price of oil and restrictions in supply—badly damaged the world’s economy in multiple ways. In 1979 the Soviet Union invaded Afghanistan, on the border of Iran, just as Iran was caught up in the turmoil of its revolution. Visions of Red Army tanks reaching the northern shores of the Persian Gulf danced through the heads of many people. In 1980 President Jimmy Carter announced that “Any attempt by an outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States.”

Actually, the American concern went beyond combatting an “outside force [seeking] to gain control.” The American concern encompassed any Middle Eastern state seeking to dominate so much of the region’s oil production that it could move the world market price for oil. What the Americans wanted was a stable world market in oil. President George H. W. Bush showed just how seriously the United States took both Carter’s declaration and the larger interest in price stability when he gathered a broad international coalition to cream Iraq in 1990-1991 after it occupied Kuwait.[1]

The spread of the Industrial Revolution into Asia has created a vastly more complicated situation. The collapse of the Communist experiment in the Soviet Union led the Peoples’ Republic of China (PRC) and then other one-time believers in a planned economy to turn toward a market economy. A head-long rush to industrialization in the non-Western world followed. Demand for oil grew ever greater. Thus, no sooner had Saddam Hussein’s invasion of Kuwait been defeated than the PRC entered international oil markets. By 2003 China had passed Japan as the world’s second largest oil consumer.

The Chinese strategy began with two components. First, China recycles part of the profits from exporting low-cost manufactured goods to the West into buying up oil and gas drilling rights in developing countries. These export earnings leave China with deep pockets, so the Chinese often simply out-bid their Western competitors. More than thirty countries have received Chinese investments in oil production. They include Algeria, Libya, Egypt, Sudan, Chad, Nigeria, Iran, and Indonesia. All Persian Gulf countries sell oil to China.

Second, China went where Western countries would not go. In particular, China began to court Sudan and Iran. By 2005, China had invested $15 billion in Sudan’s oil drilling and production. China chose to ignore the outcry in the West over the government of Sudan’s brutal war against its own people in the western and southern parts of the country. In Iran, China began trading modern weapons for oil to a state under a Western arms embargo. Cash investments soon followed. People in rich countries often forget that a delicate conscience is a luxury.

The Chinese demand for oil destabilizes the world oil market. Fighting China won’t be like fighting Iraq. So, perhaps people will strike a deal?

On all aspects of energy: http://www.eia.gov/countries/index.cfm?view=consumption

Matthew Yeomans, “The World in Numbers: Crude Politics,” Atlantic, April 2005, pp. 48-49.

[1] The Great Depression of the 1930s had brought Hitler to power in Germany and had paralyzed the Western democracies. Reasoning backward from their own youthful experiences, many people in the West thought that if you hadn’t liked the Second World War and the Holocaust, then you should try to avoid a new world economic crisis. So, regardless of what Western liberals and Middle Eastern conspiracy theorists believe, “war for oil” isn’t the same thing as “war for oil companies.” It’s the same thing as “war for peace and prosperity.”

 

Shuffle the Deck and Deal.

The “recent unpleasantness” of the housing bubble and collapse has disguised a larger and more long-term movement. As economists never tire of pointing out, education is linked to prosperity—for both the individual and the community. In 1970, 11 percent of the population aged twenty-five and over had at least a BA. These people were spread around the country fairly evenly: half of America’s cities had concentrations of BA-holders running between 9 and 13 percent.

By 2004, things were very different in two respects. First, 27 percent of the population aged twenty-five and over had at least a BA. So, Americans appeared to be much better educated. Second, educated Americans now clustered together in a few cities. The densest concentrations are around Seattle, San Francisco, up toward Lake Tahoe on California’s border with Nevada, Los Angeles, San Diego, Phoenix, Denver, Salt Lake City, Austin, the Northeast Corridor from Washington to Boston, and in college towns scattered across the map.

 

Why this sorting?

Part of the explanation is a reciprocal relationship between educated people and prosperity. Businesses in science, health, engineering, computers, and education need to be where there are a lot of educated people; people who want to work in these industries need to be where they can get rewarding jobs. Part of the explanation is that some cities tolerate, or even foster, a high degree of diversity. All sorts of people who move toward these cities find a ready welcome and at least some other people like themselves. It’s easy to fit in. It’s easy to find people with whom to share ideas and projects. Seen from these two vantage points, another part of the explanation is that some cities got there first. Like early-birds at a yard-sale, they snapped up all the best things. Seattle, for example, had Boeing (lots of engineers), a big and more-or-less respectable university, a lot of racial diversity (and not just the White-Black kind that most Easterners mean), and a spectacular physical location. It’s easy to see why Microsoft stayed where it started. Others flocked there for the same reasons.

 

What are the effects?

The more that talent concentrates, the greater the synergies that spin off innovations—and economic growth. The more that prosperous people concentrate, the greater the demand for all sorts of other services and amenities.

The production train used to run from innovation to design to manufacturing to distribution to sales to service. In this system, virtually all the different stages and skill-levels would be located in the same area. Detroit and cars or Pittsburgh and steel offer good examples. Today, much of the lesser-skilled work can be either automated or out-sourced to low-wage foreign suppliers. So, great prosperity can co-exist with economic decline.

But not for long. High income earners bid up the price of housing. It is common to find people without BAs forced to relocate away from the areas of tech prosperity. A long commute is one of the badges of un-success in contemporary America.

Steel and cars are waning as major American industries. The “knowledge economy” is central to future American prosperity. The transition has costs and problems that we don’t yet know how to resolve.

Richard Florida, “The Nation in Numbers: Where the Brains Are,” Atlantic, October 2006, pp. 34-35.

All Quiet on the Western Front.

Carl Laemmle (1867-1939) was a German Jew who migrated to the US in 1884. He worked as a book-keeper, but got interested in movies when they were a new thing. So did a lot of other people. In 1912 Laemmle and some of the others merged their companies into Universal Films, and then moved to Hollywood. Universal Films turned out to be very successful in the Twenties and early Thirties. However, in 1928 Carl Laemmle made the mistake of bringing his son, Carl, Jr. (1908-1979), into the business as head of production. Carl, Sr. had been a book-keeper, so he paid attention to what stuff cost. Carl, Jr. had been a rich kid, so he never paid attention to what stuff cost. This could work out OK if the spending produced a huge hit, so Carl, Jr. and Universal were always on the look-out for a potential huge hit.

Erich Maria Remarque (1898-1970) grew up in a working class family in Germany, but had some hopes of becoming a writer. He was drafted into the German Army in 1916. After his training, he served six weeks on the Western Front before he was wounded. He spent the rest of the war in hospital. After the war he took a swing at teaching, then wandered between different types of jobs. He still wanted to be a writer. In a burst of creativity in 1927, he wrote All Quiet on the Western Front. It became a hit when it came out in 1929.[1] Universal bought the rights.

First, Universal needed a screenwriter to adapt the novel into a movie. Carl, Jr. hired Maxwell Anderson (1888-1959), whose career is a novel in itself: he was a poor kid and son of an itinerant minister; a school teacher[2] and newspaper writer (fired many times in both careers, usually for not toeing the company line); and then a successful playwright who turned to doing movie screenplays on occasion. In 1924 his realistic war play “What Price Glory?” had been a hit on Broadway.

Second, they needed a director. Lieb Milstein (1895-1980) grew up poor and Jewish in Kishinev, a city in pre-Revolutionary Russia. Kishinev wasn’t a good place to be either poor or Jewish, so Milstein did what everyone else who didn’t have rocks in their head did: he migrated to the United States. Upon arrival he changed his name to Lewis Milestone. He had been in the US for five years when America entered the First World War. Milstein enlisted in the Army; the Army taught him the film business as part of its propaganda and training work; and Milstein moved to Hollywood after the war. He soon became a director, with a Best Director Oscar in 1928. At the top of his profession, he was much in demand for big pictures. Carl Jr. hired him to direct “All Quiet on the Western Front.”

Third, they needed a bunch of actors. The “extras” weren’t hard to find. Oddly, there were several thousand German war veterans living around Los Angeles. Carl Jr. hired a lot of them. For the lead role of Paul Baumer, they hired Lew Ayres (1908-1996). Ayres didn’t have much acting experience (and he wasn’t really much of an actor). He was young and innocent and impressionable looking, which was the whole point.

The movie cost $1.2 million to make and earned $1.5 million at the box-office. That was enough profit to tempt Carl Jr. into more big-budget movies. Most didn’t do so well. In 1936 he and Carl Sr. got shoved out of Universal.

Lewis Milestone won the Oscar for Best Director. He got black-listed in the Fifties, then went into television work. Ayres became a conscientious objector/medic in World War II.

[1] Remarque wrote ten more novels, but his first remains his most famous.

[2] You notice that both Remarque and Anderson were school teachers? So was William Clark Quantrill. On the one hand, it didn’t used to be a respectable profession, so all sorts of flakes tried their hand at it. On the other hand, anybody with some brains can learn how to do it.