Shuffle the Deck and Deal.

The “recent unpleasantness” of the housing bubble and collapse has disguised a larger and more long-term movement. As economists never tire of pointing out, education is linked to prosperity—for both the individual and the community. In 1970, 11 percent of the population aged twenty-five and over had at least a BA. These people were spread around the country fairly evenly: half of America’s cities had concentrations of BA-holders running between 9 and 13 percent.

By 2004, things were very different in two respects. First, 27 percent of the population aged twenty-five and over had at least a BA. So, Americans appeared to be much better educated. Second, educated Americans now clustered together in a few cities. The densest concentrations were around Seattle, San Francisco, up toward Lake Tahoe on California’s border with Nevada, Los Angeles, San Diego, Phoenix, Denver, Salt Lake City, Austin, the Northeast Corridor from Washington to Boston, and in college towns scattered across the map.

 

Why this sorting?

Part of the explanation is a reciprocal relationship between educated people and prosperity. Businesses in science, health, engineering, computers, and education need to be where there are a lot of educated people; people who want to work in these industries need to be where they can get rewarding jobs. Part of the explanation is that some cities tolerate, or even foster, a high degree of diversity. All sorts of people who move toward these cities find a ready welcome and at least some other people like themselves. It’s easy to fit in. It’s easy to find people with whom to share ideas and projects. Seen from these two vantage points, another part of the explanation is that some cities got there first. Like early-birds at a yard-sale, they snapped up all the best things. Seattle, for example, had Boeing (lots of engineers), a big and more-or-less respectable university, a lot of racial diversity (and not just the White-Black kind that most Easterners mean), and a spectacular physical location. It’s easy to see why Microsoft stayed where it started. Others flocked there for the same reasons.

 

What are the effects?

The more that talent concentrates, the greater are the synergies that spin off innovations—and economic growth. The more that prosperous people concentrate, the greater is the demand for all sorts of other services and amenities.

The production train used to run from innovation to design to manufacturing to distribution to sales to service. In this system, virtually all the different stages and skill-levels would be located in the same area. Detroit and cars or Pittsburgh and steel offer good examples. Today, much of the lesser-skilled work can be either automated or out-sourced to low-wage foreign suppliers. So, great prosperity can co-exist with economic decline.

But not for long. High income earners bid up the price of housing. It is common to find people without BAs being forced to re-locate away from the areas of tech prosperity. A long commute is one of the badges of un-success in contemporary America.

Steel and cars are waning as major American industries. The “knowledge economy” is central to future American prosperity. The transition has costs and problems that we don’t yet know how to resolve.

Richard Florida, “The Nation in Numbers: Where the Brains Are,” Atlantic, October 2006, pp. 34-35.

All Quiet on the Western Front.

Carl Laemmle (1867-1939) was a German Jew who migrated to the US in 1884. He worked as a book-keeper, but got interested in movies when they were a new thing. So did a lot of other people. In 1912 Laemmle and some of the others merged their companies into Universal Films, and then moved to Hollywood. Universal Films turned out to be very successful in the Twenties and early Thirties. However, in 1928 Carl Laemmle made the mistake of bringing his son, Carl, Jr. (1908-1979), into the business as head of production. Carl, Sr. had been a book-keeper, so he paid attention to what stuff cost. Carl, Jr. had been a rich kid, so he never paid attention to what stuff cost. This could work out OK if the spending produced a huge hit, so Carl, Jr. and Universal were always on the look-out for a potential huge hit.

Erich Maria Remarque (1898-1970) grew up in a working class family in Germany, but had some hopes of becoming a writer. He was drafted into the German Army in 1916. After his training, he served six weeks on the Western Front before he was wounded. He spent the rest of the war in hospital. After the war he took a swing at teaching, then wandered between different types of jobs. He still wanted to be a writer. In a burst of creativity in 1927, he wrote All Quiet on the Western Front. It became a hit when it came out in 1929.[1] Universal bought the rights.

First, Universal needed a screen-writer to adapt the novel into a movie. They hired Maxwell Anderson (1888-1959), whose career is a novel in itself: he was a poor kid and son of an itinerant minister; a school teacher[2] and newspaper writer (fired many times in both careers, usually for not toeing the company line); and then a successful playwright who turned to writing movie screenplays on occasion. In 1924 his realistic war-play “What Price Glory?” had been a hit on Broadway. Carl, Jr. hired Anderson to adapt the novel.

Second, they needed a director. Lieb Milstein (1895-1980) grew up poor and Jewish in Kishinev, a city in pre-Revolutionary Russia. Kishinev wasn’t a good place to be either poor or Jewish, so Milstein did what everyone else who didn’t have rocks in their head did: he migrated to the United States. Upon arrival he changed his name to Lewis Milestone. He had been in the US for five years when America entered the First World War. Milestone enlisted in the Army; the Army taught him the film business as part of its propaganda and training work; and Milestone moved to Hollywood after the war. He soon became a director, with a Best Director Oscar in 1928. At the top of his profession, he was much in demand for big pictures. Carl, Jr. hired him to direct “All Quiet on the Western Front.”

Third, they needed a bunch of actors. The “extras” weren’t hard to find. Oddly, there were several thousand German war veterans living around Los Angeles. Carl Jr. hired a lot of them. For the lead role of Paul Baumer, they hired Lew Ayres (1908-1996). Ayres didn’t have much acting experience (and he wasn’t really much of an actor). He was young and innocent and impressionable looking, which was the whole point.

The movie cost $1.2 million to make and earned $1.5 million at the box-office. That was enough profit to tempt Carl Jr. into more big-budget movies. Most didn’t do so well. In 1936 he and Carl Sr. got shoved out of Universal.

Lewis Milestone won the Oscar for Best Director. He got black-listed in the Fifties, then went into television work. Ayres became a conscientious objector/medic in World War II.

[1] Remarque wrote ten more novels, but his first remains his most famous.

[2] You notice that both Remarque and Anderson were school teachers? So was William Clark Quantrill. On the one hand, it didn’t used to be a respectable profession, so all sorts of flakes tried their hand at it. On the other hand, anybody with some brains can learn how to do it.

The Secret History of Veterans Day.

Fighting in the First World War stopped at 11:00 AM on 11 November 1918. In 1919, President Woodrow Wilson proclaimed 11 November of that year to be a national holiday, “Armistice Day.” It was supposed to be a one-off. The next year, Wilson proclaimed the Sunday nearest 11 November to be Armistice Sunday so that churches could devote a day to recalling the lost and pondering the difficulties of peace. In 1921 Congress declared a national holiday on 11 November to coincide with the dedication of the Tomb of the Unknown Soldier at Arlington National Cemetery. Thereafter most states made 11 November a state holiday.

The American Legion campaigned for additional payments to military veterans on the grounds that wartime inflation had eroded the value of their pay. Civilian employees of the federal government had received pay adjustments, so veterans should receive them as well to “restore the faith of men sorely tried by what they feel to be National ingratitude and injustice.” There were a lot of veterans: 3,662,374 of them. All were voters, so in 1921 Congress took up adjusted compensation legislation that promised immediate payments to veterans. This would have amounted to about $2.24 billion. That was a lot of money, especially since Congress didn’t propose a means to pay for it. President Warren Harding initially opposed the legislation unless it was paired with new revenue, then came to favor a pension system instead. Harding managed to block the legislation in 1921 and again in 1922. President Calvin Coolidge vetoed a new bill in 1924, saying that “patriotism…bought and paid for is not patriotism.” Congress over-rode the veto.

The World War Adjusted Compensation Act, also known as the Bonus Act, applied to veterans who had served between 5 April 1917 and 1 July 1919. They would receive $1.00 for each day served in the United States and $1.25 for each day served outside the United States. The maximum pay-out was capped at $625. The ultimate payment date was set for the recipient’s birthday in 1945. Thus, it functioned as a deferred savings or insurance plan. However, a provision of the law allowed veterans to borrow against their eventual payment.
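For anyone who wants to see the arithmetic, here is a minimal sketch of the credit formula as described above; the service records in the example are hypothetical, not drawn from the source.

```python
# Sketch of the adjusted compensation credit as described above:
# $1.00 per day served in the United States, $1.25 per day served overseas, capped at $625.
def adjusted_compensation(days_in_us: int, days_overseas: int) -> float:
    credit = 1.00 * days_in_us + 1.25 * days_overseas
    return min(credit, 625.00)

# Hypothetical examples:
print(adjusted_compensation(200, 300))  # 200*1.00 + 300*1.25 = 575.0, under the cap
print(adjusted_compensation(100, 600))  # 100 + 750 = 850, so the $625 cap applies -> 625.0
```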

In 1926 Congress urged the President to issue a proclamation each year calling for the observance of Armistice Day. It also ordered creation of a new and grander Tomb of the Unknown Soldier.

In 1929 the Great Depression began. Veterans suffered just like everyone else. Many of them began to borrow against the deferred compensation. By the middle of 1932, 2.5 million veterans had borrowed $1.369 billion.

In April 1932 the new Tomb of the Unknown Soldier at Arlington was completed. In Spring and Summer 1932 about 17,000 veterans gathered in Washington, DC, to demand immediate payment of their compensation. Accompanied by thousands of family members, they camped out in shacks on Anacostia Flats. The papers called them the “Bonus Army.” In mid-June 1932, the House of Representatives passed a bill for immediate repayment, but the Senate rejected it. At the end of July 1932 the Washington police tried to evict the “Bonus Marchers,” but failed. President Herbert Hoover then had the Army toss them out.

In 1936 the Democratic majorities in Congress passed a bill to allow immediate payment of the veterans’ compensation, over-riding President Franklin D. Roosevelt’s veto. A bunch of rich-kid jokers at Princeton soon formed the “Veterans of Future Wars” to demand immediate payment of a bonus to them since they were likely to get killed in the next war, before they had a chance to spend a post-war bonus.

In May 1938 Congress passed a law making 11 November an annual holiday for federal employees. In 1954 Congress changed the name to Veterans Day.

Climate of Fear VI.

Burning carbon emits carbon dioxide and other greenhouse gases into the atmosphere. Greenhouse gases then trap heat in the atmosphere, preventing it from escaping out into space. This effect is responsible for global warming. Since the late 18th Century, burning carbon has fueled the Industrial Revolution. In the 1980s and 1990s, the surface temperature of the Earth rose by 1.2 degrees. This rise then caused substantial melting of the polar ice caps and extreme weather events.

How much worse, then, would be the effects of the spread of industrialization into the non-Western world in the 21st Century? This has greatly increased the burning of carbon. Between 2000 and 2010, 110 billion tons of carbon dioxide were released into the atmosphere. This amounts to an estimated one-fourth of all the greenhouse gases ever emitted. At this rate, the concentration of carbon dioxide in the atmosphere will reach double its pre-industrial level by 2050. In 2007 the UN’s Intergovernmental Panel on Climate Change (IPCC) predicted that such a doubling could lead to a temperature rise of 5.4 degrees, with increases of 0.2 degrees Celsius per decade. (Which I think, but I’m a dumb American, works out to be 0.36 degrees Fahrenheit.) So, the temperature of the Earth should be rising even faster than before.

It isn’t. Since 1998 the surface temperature of the Earth has risen by 0.2 degrees. However, this is much less of a rise than climate scientists had projected by extrapolating the temperature increases that were recorded in the 1980s and 1990s. (I think that we should be about 0.5 degrees warmer, but see my earlier disclaimer.) “Baby, Baby, where did the heat go?”
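For what it’s worth, the parenthetical conversions can be checked in a couple of lines; a minimal sketch, using only the figures quoted above and assuming (my assumption, the article doesn’t say) that the 5.4-degree projection is in Fahrenheit:

```python
# Checking the temperature-change conversions quoted above.
def c_to_f_change(delta_celsius: float) -> float:
    """Convert a temperature *change* (not an absolute reading) from Celsius to Fahrenheit degrees."""
    return delta_celsius * 9.0 / 5.0

print(f"{c_to_f_change(0.2):.2f} F per decade")  # 0.36 F per decade, as in the parenthetical above
# Read the other way: if the projected 5.4-degree rise is Fahrenheit, it is a 3.0 C rise.
print(f"{5.4 * 5.0 / 9.0:.1f} C")                # 3.0 C
```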

Some climate change skeptics love this: “There is no problem with global warming. It stopped in 1998.” OK, but why did it stop? Will it restart? Another stripe of skeptics takes issue with the accuracy of the models used to estimate the effects of greenhouse gas emissions. They argue that the climate is not as sensitive to increases in greenhouse gases as many models assume. We have more time to adapt, and at a lower cost, than the “alarmists” predict.

Climate scientists offer a number of possible explanations for the “missing heat.”

The deep seas absorbed the extra heat, the way they did the “Titanic.” While surface sea temperatures have remained stable, temperatures below 2,300 feet have been rising since 2000.

The rhythms in the heat radiated by the Sun are responsible. The highs and lows of this rhythm are called solar maximums and solar minimums. One solar maximum ended in 2000 and we are in the midst of a solar minimum.

The pollution emitted by major carbon-burners like China actually reflects away some of the Sun’s heat before it becomes trapped in the atmosphere. (You can see how this answer would alarm proponents of responding to climate change. “The real problem with air pollution is that we don’t have enough of it.”)

Climate scientists have also scaled back their predictions from a possible 5.4 degree rise in surface temperatures to projections between 1.6 and 3.6 degrees. These less-warm decades will then be followed by the roof falling in. The sun will move toward the next solar maximum; the heat trapped in the deep sea will rise toward the surface to boost temperatures; and the “pollution umbrella” will thin out, so that the atmosphere goes back to trapping heat. We’ll fry like eggs. Or perhaps just get poached. Depends on which scientists you believe.

“The missing heat,” The Week, 30 August 2013, p. 11.

Judith Curry, “The Global Warming Statistical Meltdown,” Wall Street Journal, 10 October 2014.

The Senator from San Quentin.

During the 1980s violent crime rose to new peaks. The murder rate in 1991 reached 9.8/100,000, about four times the rate in, say, France. A criminologist named George Kelling argued that the toleration of all sorts of little crimes or acts of indecency—even broken windows or vandalism or those homeless goofs at intersections trying to extort pocket change for cleaning your windows—created an atmosphere of disrespect for the law. From little things, people went on to feel less restrained about bigger things. Kelling sold this idea to New York City Police Commissioner William Bratton. New York cops started pushing the homeless into shelters, clearing the intersections of squeegee men, and stopping kids from hanging out on street corners.

However, Bratton also embraced the idea that a lot of crime is committed by a few people, and a little crime is committed by a lot of people. You want a big drop in crime? Concentrate on the few career criminals and put them away for a long time. Bratton concentrated on a statistical analysis of crime in each police precinct, then drove his precinct captains to find and arrest habitual criminals. This seemed to work, so lots of police departments adopted the New York approach. Bratton’s approach coincided with a get-tough policy adopted by legislatures in the Nineties. Mandatory minimum sentences and three-strikes-and-you’re-out sentencing kept criminals in prison for longer. The war on drugs, especially the crack cocaine epidemic, sent a lot more people to prison. Guys who are locked up can’t commit crimes, at least not against ordinary citizens. (Fellow prisoners or guards? That’s another story.)

Inevitably, there is a down-side. First, the United States has one-twentieth of the world’s population, but one-fourth of the world’s prison population. That world total includes both Russia and China. There are more people currently in prison in the United States (2.3 million) than the entire population of any one of fifteen states, and more than that of the four least-populated states put together. The rate of imprisonment in the United States is the highest in the world.

Second, black communities have been particularly hard hit by both crime and punishment. One in nine black men between the ages of 20 and 34 is in jail. (The overall ratio of imprisoned to paroled/probationed is about 1:3, so that would suggest that another three in nine black men are under some other form of judicial supervision.) Since felons lose the right to vote, large numbers of blacks have been disenfranchised in what one law professor has labeled “the new Jim Crow.” Since most prisons are located in rural areas, and prisoners are counted there rather than at home for purposes of apportionment, this leads to the over-representation of areas unsympathetic to city problems.

Third, keeping huge numbers of prisoners locked up is really expensive. Americans don’t like to pay taxes, so prison budgets have been held down for decades. The result is massive over-crowding. Courts have repeatedly held this over-crowding to amount to cruel and unusual punishment.

Fourth, imprisonment doesn’t seem to do anything to change behavior. Says one criminologist, “two-thirds of those who leave prison will be back within three years.”

What have changed are the crime rates. Between 1991 and 2009, the number of murders fell by 45 percent. From its peak of 9.8/100,000 in 1991, the murder rate fell to 5.0/100,000 in 2009. The same decline has been found in most other categories of crime over the same period. At least for now.

Prisoners are so numerous that, if grouped together and represented in the Congress, they would be a formidable voting bloc.

“The prison nation,” The Week, 13 February 2009, p. 13; “The mystery of falling crime rates,” The Week, 16 July 2010, p. 13.

Eye in the Sky.

Some time ago the courts decided that no one has a right to privacy when they are on the streets or in public places. Initially, this applied, in part, to the many surveillance cameras installed by banks and stores and apartment buildings. Then the development of digital cameras made surveillance video available to watchers in real time and made it simple to transfer the images between widely separated computers. Then computer geeks developed face-recognition software and programs that detected “anomalous behavior.” All of these were great crime-fighting tools, at least according to the police, who sing the non-specific praises of the cameras as deterrents and crime-solving aids.

With this doorway open, since 9-11 the Department of Homeland Security has been making grants to cities to fund the installation of security cameras targeting public places. These cameras supplement the already existing security cameras installed by banks, stores, and office buildings. Madison, Wisconsin—a bastion of Mid-Western liberalism—is putting in 32 cameras; Chicago and Baltimore—hotbeds of urban crime which actually don’t give a rip about Islamic terrorism—are installing thousands of cameras and linking them to the existing systems of private cameras. The most elaborate system is that of the Lower Manhattan Security Initiative, which called for 3,000 cameras to be in place throughout the Wall Street and World Trade Center area by 2010. In addition, the system includes license plate readers connected to computers that cross-reference the numbers of suspect vehicles and which share images with the Department of Homeland Security and the EffaBeeEye.

Now there is a new layer of observation: police, government, and private drones. The police are hot to use drones. In the 1980s the Supreme Court held that the police don’t need a warrant to observe private property from public airspace. [NB: What is “public airspace”? So far as I can tell, anything at a height of 500 feet or above is clearly public airspace; anything 83 feet or below is private airspace; and what is in-between is a little murky. Are you allowed to shoot drones under 83 feet like skeet?] Drones can be fitted with high-resolution cameras, infra-red sensors, license plate-readers, and directional microphones. They are quieter and smaller than helicopters, reducing the chance that people will know that they are being observed without a warrant. If you keep your shades pulled down, can they “assume” you’re running a grow house?

Are there problems with this program? In the eyes of individual rights advocates on the left and right, the answer is definitely yes. While government agencies will watch millions of people in public places in hopes of catching a few terrorists before an attack, it is more likely that they will only be able to figure out what happened after the attack. Will people just become habituated to being watched in public places? In a generation, will they accept the possibility of being watched in semi-public places? What happens when surveillance images leak from the government agency to the public sphere? See: http://www.youtube.com/watch?v=8zYRYh6cQ2g. The clip is fun to watch, except that it is a public traffic camera with the film leaked to provide private entertainment. What if a mini-drone lands on your bathroom window sill one morning and catches you in the shower? Some Peeping Tom at home, or cops finding a fun use for technology paid for by the DEA or by property seizures from teen-age druggies driving their Dad’s BMW? In the eyes of most Americans, however, more surveillance cameras are just fine. (“The drone over your backyard,” The Week, 15 June 2012, p. 11.)

Runnin’ all ’round my brain.

Cocaine prices per gram in selected American cities, 1999 and 2005.

City             1999          2005          Change in base price

Seattle          $80-100       $30-100       -62%
Denver           $100-125      $100-125        0%
Los Angeles      $50-100       $30-100       -40%
Dallas           $90-125       $50-80        -44%
Chicago          $75-100       $75-100         0%
Detroit          $75-100       $50-120       -33%
Atlanta          $100          $80-100       -20%
Miami            $40-60        $20-110       -50%
New York         $21-40        $20-25         -0%
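The “change in base price” column seems to track the low end of each price range; that is my reading of the table, not something the source spells out. A quick check:

```python
# Percent change in the low end ("base") of each price range -- my reading of the table above.
low_1999 = {"Seattle": 80, "Los Angeles": 50, "Dallas": 90, "Detroit": 75,
            "Atlanta": 100, "Miami": 40, "New York": 21}
low_2005 = {"Seattle": 30, "Los Angeles": 30, "Dallas": 50, "Detroit": 50,
            "Atlanta": 80, "Miami": 20, "New York": 20}

for city in low_1999:
    change = (low_2005[city] - low_1999[city]) / low_1999[city] * 100
    print(f"{city}: {change:.0f}%")

# Seattle -62%, Los Angeles -40%, Dallas -44%, Detroit -33%, Atlanta -20%, Miami -50%;
# New York works out to about -5%, which the table lists as -0%.
```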

 

There are a bunch of ways of cutting up this data, so to speak.

First, in 1999, cocaine was a glut on the market in New York, Miami, and Los Angeles. These were major cities with a large over-all market, ports of entry, and centers of a counter-culture. In contrast, it was hard to come by in Atlanta, Denver, Dallas, and Seattle. These were chief cities of “the provinces,” as the Romans would have put it. Six years later Seattle had joined New York, Miami, and Los Angeles as the capital cities of cocaine. This probably has something to do with the explosion of the computer and software industries in Seattle. Maybe writing software allows for blow in a way that designing airplanes for Boeing does not. Still, the “cocaine revolution” hadn’t reached Denver, Atlanta, and Chicago. These cities remained the ones with the highest priced (and thus least available) cocaine.

Second, even in two of the original core cities of cocaine consumption, Miami and Los Angeles, prices fell sharply. New York began with the lowest price and pretty much stayed there. Perhaps $20 a gram was the rock-bottom price for cocaine. Lots of people hustling on a big, but limited, market, all of them competing to deliver the best product to the most people at the lowest price. Adam Smith take note. Labor costs driven down to the subsistence minimum. David Ricardo take note.

Third, prices fell while the Drug Enforcement Administration was spending billions of dollars to drive up the price (and thus reduce consumption) through interdiction and eradication. Why didn’t this effort produce better results?

One reason is that cocaine producers in Colombia dispersed their coca-growing operations into more remote areas and spread into Peru and Bolivia as well. These are outside the range of US-sponsored eradication efforts. Production went up, not down.

Another reason is that, since the North American Free Trade Agreement (NAFTA) took effect in 1994, there has been a huge increase in trans-border truck and vehicle traffic between Mexico and the United States. This made it much easier to move cocaine into the United States. One government policy warred with another government policy. The thing is that people trying to make money won in both cases. What’s more American than that?

Final thing to think about: 88 percent of the cocaine bound for the United States moved through Mexico. Eventually, the Mexican intermediaries for the Colombians wanted a better deal. Much violence followed. (See: Narcostate with a State.)

 

Ken Dermota, “The World in Numbers: Snow Fall,” Atlantic, July/August 2007, pp. 24-25.

Opium War.

Opium was a familiar plague in Asia before the 20th Century. Chinese efforts to ban the import of opium from British India led to the Opium Wars, which China lost. For the Chinese, conquering opium thus became bound up with recovering national sovereignty. When the Chinese Communists won the civil war in 1949, they launched a campaign against drug use and against opium production within China. Chinese producers fled to Laos and Burma (today’s Myanmar). Anti-drug campaigns in other Middle Eastern and Asian countries pushed the heart of production into increasingly remote areas: Burma, Laos, and most of all, Afghanistan. Once the long war against the Soviet Union and its Afghan puppets (1979-1989) wrecked traditional wheat and grape farming, Afghan peasants moved into growing opium poppies.

Since the Iranian Revolution (1979), the Iranian government has tried to end drug abuse and drug production at home, along with the country’s role as a transit corridor for Afghan opium. Afghan producers shifted their routes to the successor states created by the collapse of the Soviet Union (1991). The hall-marks of these successor states were poverty, corruption, and badly secured nuclear stockpiles left over from the Soviet Union. For criminals—or for Islamists—conditions were perfect. (There’s a movie in this, if only Hollywood will listen.)

The Taliban, like the Iranian regime, tried hard to suppress the opium trade and opium use in Afghanistan after they came to power. In 2000 the Taliban ordered an end to poppy farming and to the opium trade. Partly, they wanted to end a social evil; partly they wanted to destroy the financial base of the regional warlords who opposed them. Whatever their motive, opium production came to a near halt. The American invasion in 2001 toppled the Taliban, freed the warlords to pursue their traditional actions, and caused the Taliban itself to turn to opium dealing as a way of financing its war to return to power. Within a few years of the American invasion, almost 90 percent of the world’s opium again came from Afghanistan. Myanmar and Laos came in distant second and third places.

Afghanistan is hardly the only weak state that is caught up in the international narcotics trade. In 1998 the North Korean dictator Kim Jong-Il launched his government into the opium trade, producing it on collective farms and transporting the product through North Korea’s embassies. Nigerian drug dealers have set up business in Bangkok to buy Pakistani and Iranian heroin for re-sale everywhere the Nigerian diaspora has settled. (There’s a movie in this, if only Hollywood will listen.) The cocaine cartels fighting against the Colombian government broadened their own product-line to include opium poppies and then heroin.

In the eyes of American officials, putting a stop once again to the opium trade appeared to be essential to building a viable Afghan state by taming both the warlords and the Taliban. A viable state, in turn, formed a prerequisite to an American escape from Afghanistan. In early 2005 the Americans and the Afghan government launched “Plan Afghanistan,” which was modeled on the “Plan Colombia” anti-cocaine campaign begun in 1999.[1] The plan combined assistance to farmers to help them shift to other crops with efforts to eradicate opium poppies and interrupt the movement of opium out of the country. So far, neither “Plan” appears to have made a serious dent in the trade. Drugs give weak states a kind of strength, just not the kind we want.

Matthew Quirk, “The World in Numbers: The New Opium War,” Atlantic, March 2005, pp. 52-53.

[1] This offers an interesting example of analogical thinking as a guide to action. See: Yuen Foong Khong, Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965 (Princeton UP, 1992); and Richard Neustadt and Ernest May, Thinking In Time: The Uses of History for Decision Makers (Free Press, 1988).

Fries with that?

What do we talk about when we talk about “Americanization”? Are we talking about the spread of the American model through compulsion or seduction? Or are we talking about the Americans getting someplace first when everyone wants to go there? The global spread of obesity offers an example. In the United States the daily per capita consumption of calories has increased by 600 calories since 1980. Correspondingly, the share of overweight adults in the population has increased from 47 percent in 1980 to 64 percent in 2003. Since 1980 Americans have taken an increasing share of their meals from restaurants and take-out food. These meals tend to have about twice as many calories as does the typical meal Mom puts on the table. Nutritionists estimate that this accounts for about two-thirds of the additional weight gained by Americans in the last quarter century. As late as 1991, generalized obesity was narrowly restricted geographically: only Michigan, West Virginia, Mississippi, and Louisiana had 15-20 percent of their populations classified as obese. By 1995, 24 states had 15-20 percent of their adult populations classified as obese. By 2000, 22 states had at least 20 percent of their populations classified as obese, and every other state except Colorado had at least 15 percent of its population classified as obese. “Obesity is often discussed as an American cultural phenomenon, closely intertwined with a taste for fast food, soft drinks, television, and video games.” This is probably what “Americanization” means in the eyes of Frenchmen and Islamist jihadis.

There have always been more underweight people than overweight people in the world. That gap has closed over time, however, and in 2000 it ceased to be true. Falling food prices for consumers, the shift from rural to city life for many people, the substitution of desk-work for field-work, and the purchase of processed foods are world-wide trends. Obesity has emerged as a social characteristic in developing countries: 15 percent of adult Kenyan women are overweight compared to 12 percent who are underweight; 26 percent of adult Zimbabwean women are overweight compared to 5 percent who are underweight; 71 percent of adult Egyptian women are overweight compared to 1 percent who are underweight; and 29 percent of children in urban areas of China are obese.

This is largely attributable to the end of the other “oil crisis”: in recent years cheap, high-quality cooking oil has become available in developing countries for the first time. The oil contains dietary fat that has raised the caloric intake of individuals by 400 calories per day since 1980. But the increased use of cooking oil also reflects the increasing availability of meat as incomes rise around the world. For example, the ordinary Chinese diet used to rely very heavily on starchy roots, rice, and salted vegetables. Since 1980 the Chinese diet has added a lot of meat fried in oil. Most people now consume at least 2,500 calories per day.

There are still some places with “old” nutrition problems: 6 percent of the adult women in Cambodia are overweight compared to 21 percent who are underweight; 4 percent of adult Bangladeshi women are overweight compared to 45 percent who are underweight.

This has some large implications for public health. Excess weight has been associated with illnesses like diabetes and heart disease. Poor countries lack the medical systems to deal with these sorts of problems, which are new to them. An obesity epidemic is on the way. Does that mean that Weight Watchers will become an international phenomenon?

Don Peck, “The World in Numbers: The Weight of the World,” Atlantic, June 2003, pp. 38-39.

Freedom from Farmers.

Back in the 1920s and 1930s almost half of Americans lived in communities of fewer than 2,000 people, and a full quarter of them lived on farms. Massive over-production of basic crops led to an agricultural depression long before the onset of the Great Depression. The larger collapse of the American economy in 1929 eventually led to an effort to address the agricultural problems. The New Deal’s Agricultural Adjustment Act tried to push up farm incomes. The Act linked desirable prices to their highest recorded level, then combined subsidies with payments not to grow crops as a way to meet desirable incomes for farmers. Generally, it worked. The program had been intended as a temporary “emergency” measure, but Congress made it permanent in 1949.

Since then the program has grown while the number of farmers has been reduced. Until recently, the government made direct payments to farmers and picked up almost two-thirds of the cost of insurance against weather-related problems. All farmers, great and small, have benefitted from this program: the average farmer made $87,000 a year in 2011, largely thanks to federal welfare, compared to the national average income of $67,000. At the same time, the “family farm” has become largely imaginary. American farming has become concentrated in the hands of a few giant “agribusinesses.” Since most of the beneficiaries of these programs are in a minority of “Red” states, Republicans bought off the Democrats by including the food-stamp program in the Farm Bill.[1] Probably not what Thomas Jefferson had in mind. Or maybe it was.

In 1973 and again in 1979 oil supplies from the Middle East were interrupted and gasoline prices soared. People eager to insulate the American economy from such price shocks urged the development of alternative fuels. One of the most prominent alternatives was ethanol—alcohol derived from plants. In particular, Middle Western farm states pushed for the conversion of corn into ethanol. However, other adaptations provided a first response. Not until 1995 did the United States government begin to subsidize the production of corn-based ethanol. The program grew tremendously over the next decade as Congress repeatedly expanded it. In 2007 the United States produced about 5 billion gallons of ethanol from corn. It seems likely to grow even larger: in 2007 Barack Obama told an Iowa audience that he favored raising ethanol production to 65 billion gallons by 2030.

So far, so good. Is there a down-side to this pursuit of ethanol as an alternative fuel? Yes, there are several.

First of all, corn-based ethanol is incredibly inefficient compared to other forms of fuel. The “energy balance” of any fuel is the ratio between the amount of energy produced and the energy consumed to produce it. Gasoline produces five times the amount of energy needed to produce it. Sugar cane-based ethanol yields eight times as much energy as is needed to produce it. Corn-based ethanol, however, produces only about 1.3 times as much energy as is needed to produce it. In short, you get virtually no benefit for the energy expenditure.

Second, ethanol absorbs water. As a result, it cannot be shipped by existing gasoline pipelines, and it cannot be mixed with gasoline at more than a 1:9 ratio because it would corrode engine parts. In turn, this means that ethanol has to be shipped by less energy-efficient tanker trucks and that it can only reduce oil-based gasoline consumption by 10 percent.

Third, because the energy balance of corn-based ethanol is so low, it takes huge amounts of corn to produce much ethanol. About one-fifth of the existing corn crop is devoted to ethanol. (To reach President Obama’s goal of 65 billion gallons of ethanol by 2030 would require using thirteen times as much corn as is used currently, or about 250 percent of current total corn production. Since corn is used for many other things, the 80 percent of the crop now devoted to those purposes would have to remain in cultivation. This means that the real level of corn production would have to go well above triple the present level.) Devoting corn to ethanol drives up the price of all other corn-derived products: Mexican tortillas, corn-fed beef, anything sweetened with corn-syrup or fried in corn-oil. Shifting land from producing something else to producing subsidized corn then drives up the price of other goods.
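A rough check of the corn arithmetic in the paragraph above, using only the figures already quoted (about 5 billion gallons today from roughly one-fifth of the corn crop, and a 65-billion-gallon target) and assuming the ethanol yield per bushel of corn stays the same:

```python
# Back-of-envelope check of the corn-for-ethanol arithmetic above.
current_ethanol_gallons = 5e9     # roughly today's production, per the text
target_ethanol_gallons = 65e9     # the 2030 target quoted above
corn_share_for_ethanol = 0.20     # about one-fifth of today's corn crop goes to ethanol

scale_up = target_ethanol_gallons / current_ethanol_gallons          # 13x as much corn for ethanol
corn_for_ethanol = scale_up * corn_share_for_ethanol                 # measured against today's total crop
total_corn_needed = corn_for_ethanol + (1 - corn_share_for_ethanol)  # keep the other 80% in place too

print(f"{scale_up:.0f}x more corn for ethanol")            # 13x
print(f"{corn_for_ethanol:.1f}x today's total corn crop")  # 2.6x -> the "about 250 percent" above
print(f"{total_corn_needed:.1f}x today's total corn crop") # 3.4x -> "well above triple the present level"
```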

If the energy balance of ethanol is poor, that of campaign contributions is not. One agribusiness giant made $3 million in campaign contributions between 2000 and 2013, but received subsidies for producing ethanol worth $10 billion.

“A welfare program for agribusiness,” The Week, 16-23 August 2013, p. 13.

[1] Although, in 2013, in one of those fits of insanity that have become their hall-mark, Republicans decided to shred the food-stamp program. President Obama threatened to veto any bill that didn’t fund food-stamps.