Colleges Bobbing for French Fries.

Education has always been a commodity like any other. Sellers set the price at what the market will bear. Calling colleges and universities “not-for-profit” hides this reality. The only difference between Chrysler and a college is that colleges have no shareholders or proprietors.[1] Therefore, increased revenue goes directly to the employees. The reverse is also true. In a period of revenue constraint, the costs are taken out of the hide of the employees.

Suzanne Mettler has argued that political gridlock in Washington has kept federal aid, like Pell grants, from rising enough to keep an increasing share of the tuition burden from falling on ordinary families. At the state level, the requirement to balance budgets and a widespread hostility to taxes have intersected with rising costs for Medicaid and prisons to force cuts in state aid to public institutions.[2] Access to college is becoming a privilege of wealth instead of a motor of American prosperity.

Barton Swaim isn’t buying it.[3] First, he sees a huge expansion in the scale and activities of colleges and universities since the mid-1980s. “Departments and schools have multiplied, lavishly expensive student facilities and high-tech research centers have gone up even during recessions, well-paid administrators have multiplied like locusts, and federal grant-money has poured in at ever-increasing rates.” Why has this happened? “When government pays the bills, prices always go up.” Sellers charge what the market will bear. Second, Swaim argues that the supposed recent “cuts” in state funding for education are usually presented in terms of a falling share of state budgets, rather than as inflation-adjusted real dollars. (Swaim himself doesn’t bother to give any figures to support his alternative interpretation.) Implicitly, what is needed is some market discipline. Third, Swaim’s interpretation fits into the narrative of the unforeseen (and disastrous) consequences of liberal good intentions. Mettler, he says, “is right that American higher education is no longer the force of equality and opportunity that predominantly liberal policy makers intended it to be. What she misses is that those policy makers are to blame.”

What does Swaim get right and what does he get wrong? First, he’s right about the fact of the huge expansion in activities since the mid-1980s. He’s just wrong about its cause. Simply put, there are too many colleges and universities relative to the demand for them. They compete by multiplying academic programs to reflect the latest fad, degrading academic standards, engaging in an amenities arms race, and multiplying recruitment and support staffs (i.e., administrators). We need a shake-out.

Second, he’s wrong on the cuts-in-state-financing-causing-tuition-increases issue. Tuition at public schools has spiked much more than has tuition at private ones. This is the product of cuts in state aid. (See: “College costs: the old eat the young,” 27 September 2014.)

Third, he misses (or dodges) the chance to talk about the equivalent unforeseen (and disastrous) consequences of conservative good intentions. The war on drugs and the conversion of tax cuts from a rational policy choice into a primitive fetish (of the religious, rather than the sexual sort[4]) have been just as much exploding cigars as anything liberals have advocated.

[1] On the other hand, when is the last time you heard of a student recall? Jus sayin.

[2] Suzanne Mettler, Degrees of Inequality (Basic Books, 2014).

[3] Swaim, review of Mettler, Degrees of Inequality, WSJ, 14 March 2014.

[4] Although I suppose that someone could work up a funny patter on the parallels with BDSM. If that’s how you roll.

The International Trade in Jobs and Workers

It is an article of faith among most economists and businessmen that barriers to trade between nations create inefficiencies and lower standards of living.[1] What kinds of barriers to trade exist? Tariffs are taxes on imported goods that raise the sales price to a level that makes the import uncompetitive with a domestic product. Government subsidies (payments) to domestic producers of some goods allow them to hold down prices compared to imports. Government regulations and standards for goods that vary from one country to another can force adaptation costs onto foreign producers, raising the price of their goods to the point where it isn’t worth the trouble to sell in a foreign market. The effect of these barriers is to reduce competition, efficiency, and specialization, while raising the cost of living for consumers.
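To make the mechanism concrete, here is a sketch of the tariff arithmetic with made-up numbers (the prices and the 20 percent rate are illustrative assumptions, not figures from the text):

```python
# Illustrative tariff arithmetic (hypothetical prices, not from the source).
# A tariff raises the landed price of an import above the competing
# domestic price, making the import uncompetitive.

domestic_price = 100.00   # price of the domestically produced good
import_price = 90.00      # foreign producer's price before the tariff
tariff_rate = 0.20        # a 20 percent tax on the imported good

landed_price = round(import_price * (1 + tariff_rate), 2)

print(landed_price)                    # 108.0
print(landed_price > domestic_price)   # True: the import no longer undercuts
```

A subsidy works the same comparison in reverse: instead of raising the import’s landed price, a government payment lets the domestic producer hold its own price below the import’s.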

So, trade barriers are bad. In 1993 businessmen won passage of the North American Free Trade Agreement (NAFTA), an international treaty that took effect in 1994. This treaty abolished tariffs and other barriers to trade on 70 percent of the goods produced and consumed in Mexico, the United States, and Canada. What is the up-side of this agreement? Trade between Mexico and the US tripled during the decade and a half after passage of the treaty; Canadian exports also tripled. What is the down-side of the agreement? Wages haven’t gone up in either Mexico or the US.

In the United States the response to NAFTA is ambivalent. The normal line of development in an advanced economy is that low-wage foreign competitors in low-skill sectors take jobs from the advanced economy, while the advanced economy creates jobs in high-skill and high-wage sectors. That is one of the things that seem to be happening in the United States. By 2008, three million American manufacturing jobs had been lost since the passage of NAFTA. This doesn’t count the many more jobs lost during the “Great Recession.” On the other hand, more jobs were created in those years than in the fourteen years before passage of the treaty. Similarly, highly-mechanized North American farming is far more productive and cheaper than is much Mexican farming, so agricultural exports to Mexico have also greatly increased. However, neither American politicians nor American media have been very good about pointing out the realities of the situation. Job loss and displacement normally get a lot more media attention than does job creation. “If it bleeds, it leads.” Those three million manufacturing jobs that went up in smoke since 1994? Mostly they went to China and India, not to Mexico.

In Mexico the response has been profoundly hostile. Mexicans dislike NAFTA by about two-to-one. Why is that? About forty percent of Mexicans still live in poverty. Small and inefficient Mexican farms have been unable to compete with low-cost imports from North America, so many Mexican farmers have been driven to the wall. There has been a huge increase in illegal immigration to the United States, until the “Great Recession” hit. Eight million of the twelve million Mexican illegal immigrants in the United States have come since the passage of NAFTA. Is NAFTA solely or even principally to blame for the flood of illegal immigrants? Not necessarily. One Mexican observer argues that the upper classes have creamed off all the rewards of expanded trade. This has kept the benefits of increased trade from flowing downward in society through higher taxes on the well-off, better services for ordinary people, and higher wages for most workers.

This raises the possibility that the Mexican upper-class is intentionally exporting much of its population to the United States in order to defend an inequitable social order at home.

[1] “Coming to terms with NAFTA,” The Week, 30 May 2008, p. 13.

Sore Winners and Sore Losers from Obamacare.

Medicare provides health insurance for 98 percent of Americans aged 65 and over.

Who lacked/lacks health insurance before/since the Affordable Care Act (ACA)?

Group                                   Before ACA      Today          Difference
All Americans under 65                  16.4 percent    11.3 percent   -31 percent
Hispanic-Mexicans                       26.2 percent    16.5 percent   -37 percent
Blacks                                  24.1 percent    16.1 percent   -33 percent
Whites                                  14.1 percent    10.0 percent   -29 percent
Asians                                  13.6 percent     9.7 percent   -29 percent
Aged 18 to 34                           21.6 percent    14.2 percent   -34 percent[1]
Aged 35 to 44                           16.4 percent    11.2 percent   -32 percent
Aged 45 to 54                           15.0 percent    10.6 percent   -29 percent
Aged 55 to 64                           12.7 percent     9.1 percent   -28 percent
Poorest 20 percent of neighborhoods     26.4 percent    17.5 percent   -36 percent
Next poorest 20 percent                 21.6 percent    14.3 percent   -34 percent
Middle 20 percent                       17.6 percent    11.9 percent   -33 percent
Next highest 20 percent                 13.4 percent     9.4 percent   -30 percent
Richest 20 percent                       6.5 percent     6.5 percent   no change

Overall and within almost all groups, the ACA has reduced the uninsured by about one-third. Still, two-thirds of those who were uninsured before the ACA remain uninsured today.
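Note that the “Difference” column is the relative change in the uninsured rate (today minus before, divided by before), not a difference in percentage points. A quick check of the first row:

```python
# The "Difference" column is the relative change in the uninsured rate.
# Checking the first row of the table ("All Americans under 65").

before = 16.4   # percent uninsured before the ACA
today = 11.3    # percent uninsured today

change = (today - before) / before * 100
print(round(change))   # -31, matching the table's -31 percent
```

The same arithmetic reproduces the other rows, e.g. (16.5 − 26.2) / 26.2 ≈ −37 percent for the second row.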

Why hasn’t a plan intended to provide almost all Americans with health insurance come anywhere near to achieving that goal? In large measure, the failures of this part of the ACA go back to its design. The ACA originally sought to coerce the states into expanding Medicaid to cover many of those who are uninsured today. In 2012, the Supreme Court rejected that component of the plan. States were left free to expand or not expand Medicaid. So far, twenty-seven states have chosen to expand Medicaid, while twenty-three have rejected it.

Why did many states reject Medicaid expansion? One answer would be Republican wrecking tactics directed against the center-piece of President Obama’s agenda. However, not all Republican-led states rejected expansion and not all Democratic-led states accepted it.

It is possible that rational calculation played a role. The states that rejected expansion had an average uninsured rate of 18.2 percent before the ACA, while those that accepted expansion had an average uninsured rate of 14.9 percent. Federal subsidies for expanded Medicaid are scheduled to be reduced in a few years, so states will have to increase their share of the expanded costs. Many of the states that rejected Medicaid expansion pursue a low-tax strategy to attract business. Other parts of the ACA were not completely thought through either. Perhaps the failure to make the full federal subsidy permanent is another such “glitch.” It will take a Democratic House, Senate, and White House to fix it.

Even in states that expanded Medicaid, 9.2 percent of people remain without insurance. Why? Ignorance? A libertarian resistance to coercive good intentions? Most Republicans have an ideological opposition to an “entitlement” that was forced on them by Democrats. Unlike in post-war Europe, there is no consensus on this issue.

Kevin Quealy and Margot Sanger-Katz, “Obama’s Health Law: Who Was Helped Most,” NYT, 29 October 2014.

[1] This figure understates the gain because it doesn’t include the three million people who are allowed to remain on their parents’ insurance.

Shuffle the Deck and Deal.

The “recent unpleasantness” of the housing bubble and collapse has disguised a larger and more long-term movement. As economists never tire of pointing out, education is linked to prosperity—for both the individual and the community. In 1970, 11 percent of the population aged over twenty-five years had at least a BA. These people were spread around the country fairly evenly: half of America’s cities had concentrations of BA-holders running between 9 and 13 percent.

By 2004, things were very different in two respects. First, 27 percent of the population aged over twenty-five years had at least a BA. So, Americans appeared to be much better educated. Second, educated Americans now clustered together in a few cities. The densest concentrations are around Seattle, San Francisco, up toward Lake Tahoe on California’s border with Nevada, Los Angeles, San Diego, Phoenix, Denver, Salt Lake City, Austin, the Northeast Corridor from Washington to Boston, and in college towns scattered across the map.


Why this sorting?

Part of the explanation is a reciprocal relationship between educated people and prosperity. Businesses in science, health, engineering, computers, and education need to be where there are a lot of educated people; people who want to work in these industries need to be where they can get rewarding jobs. Part of the explanation is that some cities tolerate, or even foster, a high degree of diversity. All sorts of people who move toward these cities find a ready welcome and at least some other people like themselves. It’s easy to fit in. It’s easy to find people with whom to share ideas and projects. Seen from these two vantage points, another part of the explanation is that some cities got there first. Like early-birds at a yard-sale, they snapped up all the best things. Seattle, for example, had Boeing (lots of engineers), a big and more-or-less respectable university, a lot of racial diversity (and not just the White-Black kind that most Easterners mean), and a spectacular physical location. It’s easy to see why Microsoft stayed where it started. Others flocked there for the same reasons.


What are the effects?

The more that talent concentrates, the greater the synergies that spin off innovations and economic growth. The more that prosperous people concentrate, the greater the demand for all sorts of other services and amenities.

The production train used to run from innovation to design to manufacturing to distribution to sales to service. In this system, virtually all the different stages and skill-levels would be located in the same area. Detroit and cars or Pittsburgh and steel offer good examples. Today, much of the lesser-skilled work can be either automated or out-sourced to low-wage foreign suppliers. So, great prosperity can co-exist with economic decline.

But not for long. High income earners bid up the price of housing. It is common to find people without BAs being forced to re-locate away from the areas of tech prosperity. A long commute is one of the badges of un-success in contemporary America.

Steel and cars are waning as major American industries. The “knowledge economy” is central to future American prosperity. The transition has costs and problems that we don’t yet know how to resolve.

Richard Florida, “The Nation in Numbers: Where the Brains Are,” Atlantic, October 2006, pp. 34-35.

The Secret History of Veterans Day.

Fighting in the First World War stopped at 11:00 AM on 11 November 1918. In 1919, President Woodrow Wilson proclaimed 11 November of that year to be a national holiday, “Armistice Day.” It was supposed to be a one-off. The next year, Wilson proclaimed the Sunday nearest 11 November to be Armistice Sunday so that churches could devote a day to recalling the lost and pondering the difficulties of peace. In 1921 Congress declared a national holiday on 11 November to coincide with the dedication of the Tomb of the Unknown Soldier at Arlington National Cemetery. Thereafter most states made 11 November a state holiday.

The American Legion campaigned for additional payments to military veterans on the grounds that wartime inflation had eroded the value of their pay. Civilian employees of the federal government had received pay adjustments, so veterans should receive them as well, to “restore the faith of men sorely tried by what they feel to be National ingratitude and injustice.” There were a lot of veterans: 3,662,374 of them. All were voters, so Congress took up an Adjusted Compensation Act in 1921 that promised immediate payments to veterans, amounting to about $2.24 billion. That was a lot of money, especially since Congress didn’t propose a means to pay for it. President Warren Harding initially opposed the Act unless it was paired with new revenue, then came to favor a pension system. Harding managed to block the legislation in 1921 and again in 1922. President Calvin Coolidge vetoed a new bill in 1924, saying that “patriotism…bought and paid for is not patriotism.” Congress over-rode the veto.

The World War Adjusted Compensation Act, also known as the Bonus Act, applied to veterans who had served between 5 April 1917 and 1 July 1919. They would receive $1.00 for each day served in the United States and $1.25 for each day served outside the United States. The maximum pay-out was capped at $625. The ultimate payment date was set for the recipient’s birthday in 1945. Thus, it functioned as a deferred savings or insurance plan. However, a provision of the law allowed veterans to borrow against their eventual payment.
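The payment rule can be sketched as a small function. This is a simplification built only on the figures given above; the historical statute had additional wrinkles not reflected here:

```python
# Sketch of the Bonus Act payment rule as described in the text:
# $1.00 per day served in the United States, $1.25 per day served abroad,
# capped at $625. (A simplification of the actual statute.)

def adjusted_compensation(days_home: int, days_overseas: int) -> float:
    base = 1.00 * days_home + 1.25 * days_overseas
    return min(base, 625.00)

# A veteran with 100 days at home and 400 days overseas:
print(adjusted_compensation(100, 400))   # 600.0
# Long service hits the cap:
print(adjusted_compensation(300, 300))   # 625.0
```

The cap explains why the benefit behaved like a deferred savings plan: for most long-serving veterans the face value was fixed, and the only question was when (and whether) it would be paid.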

In 1926 Congress urged the President to issue a proclamation each year on the commemoration of Armistice Day. It also ordered creation of a new and grander Tomb of the Unknown Soldier.

In 1929 the Great Depression began. Veterans suffered just like everyone else. Many of them began to borrow against the deferred compensation. By the middle of 1932, 2.5 million veterans had borrowed $1.369 billion.

In April 1932 the new Tomb of the Unknown Soldier at Arlington was completed. In Spring and Summer 1932 about 17,000 veterans gathered in Washington, DC, to demand immediate payment of their compensation. Accompanied by thousands of family members, they camped out in shacks on Anacostia Flats. The papers called them the “Bonus Army.” In mid-June 1932, the House of Representatives passed a bill for immediate repayment, but the Senate rejected it. At the end of July 1932 the Washington police tried to evict the “Bonus Marchers,” but failed. President Herbert Hoover then had the Army toss them out.

In 1936 the Democratic majorities in Congress passed a bill to allow immediate payment of the veterans’ compensation, over-riding President Franklin D. Roosevelt’s veto. A bunch of rich-kid jokers at Princeton soon formed the “Veterans of Future Wars” to demand immediate payment of a bonus to them since they were likely to get killed in the next war, before they had a chance to spend a post-war bonus.

In May 1938 Congress passed a law making 11 November an annual holiday for federal employees. In 1954 Congress changed the name to Veterans Day.

The Senator from San Quentin.

During the 1980s violent crime rose to new peaks. The murder rate in 1991 reached 9.8/100,000, about four times the rate in, say, France. A criminologist named George Kelling argued that the toleration of all sorts of little crimes or acts of indecency—even broken windows or vandalism or those homeless goofs at intersections trying to extort pocket change for cleaning your windows—created an atmosphere of disrespect for the law. From little things, people went on to feel less restrained about bigger things. Kelling sold this idea to New York City Police Commissioner William Bratton. New York cops started pushing the homeless into shelters, clearing the intersections of squeegee men, and stopping kids from hanging out on street corners.

However, Bratton also embraced the idea that a lot of crime is committed by a few people, and a little crime is committed by a lot of people. You want a big drop in crime? Concentrate on the few career criminals and put them away for a long time. Bratton concentrated on a statistical analysis of crime in each police precinct, then drove his precinct captains to find and arrest habitual criminals. This seemed to work, so lots of police departments adopted the New York approach. Bratton’s approach coincided with a get-tough policy adopted by legislatures in the Nineties. Mandatory minimum sentences and three-strikes-and-you’re-out sentencing kept criminals in prison for longer. The war on drugs, especially the crack cocaine epidemic, sent a lot more people to prison. Guys who are locked up can’t commit crimes, at least not against ordinary citizens. (Fellow prisoners or guards? That’s another story.)

Inevitably, there is a down-side. First, the United States has one-twentieth of the world’s population, but one-fourth of the world’s prisoners (a comparison that includes both Russia and China). There are more people in prison in the United States (2.3 million) than there are residents in any one of fifteen states, and more than in the four least-populated states put together. The rate of imprisonment in the United States is the highest in the world.

Second, black communities have been particularly hard hit by both crime and punishment. One in nine black men between the ages of 20 and 34 is in jail. (The overall ratio of imprisoned to paroled/probationed is about 1:3, which suggests that another three in nine black men are under some other form of judicial supervision.) Since felons lose the right to vote, large numbers of blacks have been disenfranchised in what one law professor has labeled “the new Jim Crow.” And since most prisons are located in rural areas, where the census counts prisoners as residents, imprisonment leads to the over-representation of areas unsympathetic to city problems.

Third, keeping huge numbers of prisoners locked up is really expensive. Americans don’t like to pay taxes, so prison budgets have been held down for decades. The result is massive over-crowding. Courts have repeatedly held this over-crowding to amount to cruel and unusual punishment.

Fourth, imprisonment doesn’t seem to do anything to change behavior. Says one criminologist, “two-thirds of those who leave prison will be back within three years.”

What have changed are the crime rates. Between 1991 and 2009, the number of murders fell by 45 percent. From its peak of 9.8/100,000 in 1991, the murder rate fell to 5.0/100,000 in 2009. The same decline has been found in most other categories of crime over the same period. At least for now.

Prisoners are so numerous that, if grouped together and represented in the Congress, they would be a formidable voting bloc.

“The prison nation,” The Week, 13 February 2009, p. 13; “The mystery of falling crime rates,” The Week, 16 July 2010, p. 13.

Eye in the Sky.

Some time ago the courts decided that no one has a right to privacy when they are on the streets or in public places. Initially, this applied, in part, to the many surveillance cameras installed by banks and stores and apartment buildings. Then the development of digital cameras made surveillance video available to watchers in real time and it made it simple to transfer the images between widely separated computers. Then computer geeks developed face-recognition software and programs that detected “anomalous behavior.” All of these were great crime-fighting tools, at least according to the police who sing the non-specific praises of the cameras as deterrents and crime-solving aids.

With this doorway open, the Department of Homeland Security has, since 9/11, been making grants to cities to fund the installation of security cameras targeting public places. These cameras supplement the already existing security cameras installed by banks, stores, and office buildings. Madison, Wisconsin (a bastion of Mid-Western liberalism) is putting in 32 cameras; Chicago and Baltimore (hotbeds of urban crime which actually don’t give a rip about Islamic terrorism) are installing thousands of cameras and linking them to the existing systems of private cameras. The most elaborate system is that of the Lower Manhattan Security Initiative: by 2010, 3,000 cameras will be in place throughout Wall Street and the World Trade Center area. In addition, the system includes license-plate readers connected to computers that cross-reference the numbers of suspect vehicles and share images with the Department of Homeland Security and the EffaBeeEye.

Now there is a new layer of observation: police, government, and private drones. The police are hot to use drones. In the 1980s the Supreme Court held that the police don’t need a warrant to observe private property from public airspace. [NB: What is “public airspace”? So far as I can tell, anything at a height of 500 feet or above is clearly public airspace; anything 83 feet or below is private airspace; and what is in-between is a little murky. Are you allowed to shoot drones under 83 feet like skeet?] Drones can be fitted with high-resolution cameras, infra-red sensors, license plate-readers, and directional microphones. They are quieter and smaller than helicopters, reducing the chance that people will know that they are being observed without a warrant. If you keep your shades pulled down, can they “assume” you’re running a grow house?

Are there problems with this program? In the eyes of individual-rights advocates on the left and right, the answer is definitely yes. While government agencies will watch millions of people in public places in hopes of catching a few terrorists before an attack, it is more likely that they will only be able to figure out what happened after the attack. Will people just become habituated to being watched in public places? In a generation, will they accept the possibility of being watched in semi-public places? What happens when surveillance images leak from the government agency into the public sphere? See: http://www.youtube.com/watch?v=8zYRYh6cQ2g. The clip is fun to watch, except that it is footage from a public traffic camera leaked to provide private entertainment. What if a mini-drone lands on your bathroom window sill one morning and catches you in the shower? Some Peeping Tom at home, or cops finding a fun use for technology paid for by the DEA or property seizures from teen-age druggies driving their Dad’s BMW? In the eyes of most Americans, however, more surveillance cameras are just fine. (“The drone over your backyard,” The Week, 15 June 2012, p. 11.)

Fries with that?

What do we talk about when we talk about “Americanization”? Are we talking about the spread of the American model through compulsion or seduction? Or are we talking about the Americans getting someplace first when everyone wants to go there? The global spread of obesity offers an example. In the United States the daily per capita consumption of calories has increased by 600 calories since 1980. Correspondingly, the share of overweight adults in the population has increased from 47 percent in 1980 to 64 percent in 2003. Since 1980 Americans have taken an increasing share of their meals from restaurants and take-out food. These meals tend to have about twice as many calories as does the typical meal Mom puts on the table. Nutritionists estimate that this accounts for about two-thirds of the additional weight gained by Americans in the last quarter-century. As late as 1991, generalized obesity was narrowly restricted geographically: only Michigan, West Virginia, Mississippi, and Louisiana had 15-20 percent of their populations classified as obese. By 1995, 24 states had 15-20 percent of their adult populations classed as obese. By 2000, 22 states had at least 20 percent of their populations classed as obese, and every other state except Colorado had at least 15 percent. “Obesity is often discussed as an American cultural phenomenon, closely intertwined with a taste for fast food, soft drinks, television, and video games.” This is probably what “Americanization” means in the eyes of Frenchmen and Islamist jihadis.

There have always been more underweight people than overweight people in the world. That gap has closed over time, however, and in 2000 it ceased to be true. Falling food prices for consumers, the shift from rural to city life for many people, the substitution of desk-work for field-work, and the purchase of processed foods are world-wide trends. Obesity has emerged as a social characteristic in developing countries: 15 percent of adult Kenyan women are overweight compared to 12 percent who are underweight; 26 percent of adult Zimbabwean women are overweight compared to 5 percent who are underweight; 71 percent of adult Egyptian women are overweight compared to 1 percent who are underweight; and 29 percent of children in urban areas of China are obese.

This is largely attributable to the end of the other “oil crisis”: in recent years cheap, high-quality cooking oil has become available in developing countries for the first time. The oil contains dietary fat that has raised the caloric intake of individuals by 400 calories per day since 1980. But the increased use of cooking oil also reflects the increasing availability of meat as incomes rise around the world. For example, the ordinary Chinese diet used to rely very heavily on starchy roots, rice, and salted vegetables. Since 1980 the Chinese diet has added a lot of meat fried in oil. Most people now consume at least 2,500 calories per day.

There are still some places with “old” nutrition problems: 6 percent of the adult women in Cambodia are overweight compared to 21 percent who are underweight; 4 percent of adult Bangladeshi women are overweight compared to 45 percent who are underweight.

This has some large implications for public health. Excess weight has been associated with illnesses like diabetes and heart disease. Poor countries lack the medical systems to deal with these sorts of problems, which are new to them. An obesity epidemic is on the way. Does that mean that Weight Watchers will become an international phenomenon?

Don Peck, “The World in Numbers: The Weight of the World,” Atlantic, June 2003, pp. 38-39.

Freedom from Farmers.

Back in the 1920s and 1930s almost half of Americans lived in communities of fewer than 2,000 people, and a full quarter of them lived in rural areas. Massive over-production of basic crops led to an agricultural depression long before the onset of the Great Depression. The larger collapse of the American economy in 1929 eventually led to an effort to address the agricultural problems. The New Deal’s Agricultural Adjustment Act tried to push up farm incomes. The Act pegged desirable prices to their highest recorded levels, then combined subsidies with payments not to grow crops as a way to deliver those incomes to farmers. Generally, it worked. The program had been intended as a temporary “emergency” measure, but Congress made it permanent in 1949.

Since then the program has grown while the number of farmers has been reduced. Until recently, the government made direct payments to farmers and picked up almost two-thirds of the cost of insurance against weather-related problems. All farmers, great and small, have benefitted from this program: the average farmer made $87,000 a year in 2011, largely thanks to federal welfare, compared to the national average income of $67,000. At the same time, the “family farm” has become largely imaginary. American farming has become concentrated in the hands of a few giant “agribusinesses.” Since most of the beneficiaries of these programs are in a minority of “Red” states, Republicans bought off the Democrats by including the food-stamp program in the Farm Bill.[1] Probably not what Thomas Jefferson had in mind. Or maybe it was.

In 1973 and again in 1979 oil supplies from the Middle East were interrupted and gasoline prices soared. People eager to insulate the American economy from such price shocks urged the development of alternative fuels. One of the most prominent alternatives was ethanol—alcohol derived from plants. In particular, Middle Western farm states pushed for the conversion of corn into ethanol. However, other adaptations provided a first response. Not until 1995 did the United States government begin to subsidize the production of corn-based ethanol. The program grew tremendously over the next decade as Congress expanded its support. In 2007 the United States produced about 5 billion gallons of ethanol from corn. It seems likely to grow even larger: in 2007 Barack Obama told an Iowa audience that he favored raising ethanol production to 65 billion gallons by 2030.

So far, so good. Is there a down-side to this pursuit of ethanol as an alternative fuel? Yes, there are several. First of all, corn-based ethanol is incredibly inefficient compared to other fuels. The “energy balance” of any fuel is the ratio between the amount of energy produced and the energy consumed to produce it. Gasoline produces five times the energy needed to produce it. Sugar cane-based ethanol yields eight times as much energy as is needed to produce it. Corn-based ethanol, however, yields only about 1.3 times as much energy as is needed to produce it. In short, you get virtually no benefit for the energy expenditure. Second, ethanol absorbs water. As a result, it cannot be shipped through existing gasoline pipelines, and it cannot be blended with gasoline at more than a 1:9 ratio without corroding engine parts. In turn, this means that ethanol has to be shipped by less energy-efficient tanker trucks and that it can displace at most 10 percent of oil-based gasoline consumption. Third, because the energy balance of corn-based ethanol is so low, it takes huge amounts of corn to produce much ethanol. About one-fifth of the existing corn crop is devoted to ethanol. (Reaching President Obama’s goal of 65 billion gallons of ethanol by 2030 would require using thirteen times as much corn as is used currently—about 260 percent of the current total corn crop. Since corn is used for many other things, the 80 percent of the crop now devoted to those purposes would have to remain in cultivation. The real level of corn production would therefore have to rise well above triple the present level.) Devoting corn to ethanol drives up the price of all other corn-derived products: Mexican tortillas, corn-fed beef, anything sweetened with corn syrup or fried in corn oil. Shifting land from producing something else to producing subsidized corn then drives up the prices of those other goods as well.
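The arithmetic inside the parenthesis can be checked with a quick back-of-the-envelope calculation. Here is a sketch in Python, using only the figures given in the text (5 billion gallons today, one-fifth of the corn crop, a 65-billion-gallon target):

```python
# Back-of-the-envelope check of the ethanol arithmetic.
# Inputs taken from the text: roughly 5 billion gallons of corn ethanol
# in 2007, consuming about one-fifth of the corn crop; a proposed
# target of 65 billion gallons by 2030.
current_gallons = 5e9            # corn-ethanol output, 2007
corn_share = 0.20                # fraction of the corn crop it consumes
target_gallons = 65e9            # proposed 2030 target

scale_up = target_gallons / current_gallons   # how many times more ethanol
corn_needed = scale_up * corn_share           # corn required, as a multiple of today's crop
total_crop = corn_needed + (1 - corn_share)   # plus the corn still grown for everything else

print(f"scale-up factor: {scale_up:.0f}x")                     # 13x
print(f"corn for ethanol: {corn_needed:.1f}x today's crop")    # 2.6x, i.e. about 260 percent
print(f"total crop required: {total_crop:.1f}x today's crop")  # 3.4x
```

The 3.4 multiple is what “well above triple the present level” amounts to: 2.6 times today’s crop for ethanol, plus the 0.8 of today’s crop still needed for food, feed, and everything else.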

If the energy balance of ethanol is poor, that of campaign contributions is not. One agribusiness giant made $3 million in campaign contributions between 2000 and 2013, but received subsidies for producing ethanol worth $10 billion.

[1] Although, in 2013, in one of those fits of insanity that have become their hallmark, Republicans decided to shred the food-stamp program. President Obama threatened to veto any bill that didn’t fund food stamps. “A welfare program for agribusiness,” The Week, 16-23 August 2013, p. 13.

 

A fine kettle of fish.

Wage increases haven’t kept pace with inflation for at least a decade. Generally, American families earn less than they did in 1999. A host of factors lies behind this depressing trend: intensifying competition from overseas (globalization); the difficulty workers face in adapting to technological changes that wipe out lower-skill/lower-wage jobs while creating higher-skill/higher-wage jobs; and a government that is managing the past more than helping to create the future. Still, a couple of factors capture attention.

First, America has been suffering from slow economic growth for quite a while. Why? One answer is that high energy costs exert a drag on the economy. Beginning with the oil shocks of the 1970s, energy costs rose until the 1990s. They dropped for most of that decade, but have returned to the post-1970s “normal” in this century. Energy costs work like a regressive tax: everybody drives, so high gasoline prices take a bigger bite out of smaller incomes; high energy costs also push employers to hold down other costs, like wages, or to pass those costs on to consumers. Another answer is that American workers used to have an enormous educational advantage over most foreign workers. Now other countries have moved forward, while Americans have remained stuck in neutral. In a competitive economy, that hurts productivity.

Second, what growth has occurred has flowed toward those already at the top of the pyramid. Health-care costs reduce real incomes: either employers resist wage increases in order to keep providing health insurance, or employees without employer-provided insurance have to pay their own costs. The long rise in health-care costs thus cut into the pay raises of most people, while taking a proportionately smaller share from the incomes of the well-off, who plowed the difference back into investments.

Are there any grounds for even modest optimism? Yes. First, “fracking” has greatly increased the supply of cheaper energy in America. Second, the incessant talk about the importance of education for getting a decent job has increased the number of high-school and college graduates. In 2000, 29.1 percent of 25-to-29-year-olds held a college BA; in 2008, 30.8 percent did; and in 2013, 33.6 percent did. Third, for reasons that are much debated, health-care costs have stopped rising in the last few years. This should allow pay to rise as well.

None of this means that we’re home free. The way forward is shrouded in fog. Short-term results haven’t been very satisfying. American voters swing back and forth between “Hope” and the “Tea Party.” The partisan gridlock in Washington may be less a cause of our troubles than a symptom of them.[1]

This analysis raises a couple of questions.

First, how do we improve the educational preparation of American workers? Shove 50 percent or more of Americans through college? Create a trades-oriented alternative to college?

Second, how do we get health-care costs down? Western Europe and Japan spend two-thirds of the share of GDP on health care that the US does and get better results, so it can be done.

Third, where do we stand on cheap energy versus the environment? Global warming argues for alternatives to burning carbon; jobs and economic growth argue for cheap carbon-based energy.

Fourth, what is a government supposed to do in a highly complex society and economy? After the “London whale” and the Chrysler recalls, the “regulatory state” has a black eye. That’s hardly a reason, though, to believe in the pure rationality of the market economy.

[1] David Leonhardt, “The Great Wage Slowdown Of the 21st Century,” NYT, 7 October 2014.