What we learned from Seymour Hersh 1.

Seymour Hersh (1937- ) is an investigative journalist and—on occasion—a Holland Tunnel of an ass-hole in the eyes of American government officials. His parents were Lithuanian Jews who got to the United States before the Holocaust. He got a BA in History at the University of Chicago, then drifted into reporting. His politics leaned left and he was hard to corral.[1] His first big break came with his exposure of the My Lai massacre (1969). Then he worked in the Washington bureau of the New York Times during the Watergate years (after which he wrote a highly critical book about Henry Kissinger). More books critical of American foreign policy followed. Hersh became controversial not only for his sharp stabs at alleged government wrong-doing, but for his use of anonymous sources. Richard Perle called Hersh the “closest thing American journalism has to a terrorist.” Hersh has won five George Polk Awards for investigative journalism and a Pulitzer Prize. In 2004, Hersh published a book on the enormities arising from the intersection of intelligence and policy-making in the run-up to the Second Gulf War.[2] What did we learn?

After the fall of the Soviet Union and the end of the Cold War, the CIA had gone into a steep decline. One factor in this decline had been the change in the nature of the target. CIA case officers were overwhelmingly European-language speakers used to suborning treason on the part of Soviet bloc officials while operating under diplomatic cover. The implosion of the Soviet target and the liberation of the eastern European satellites had rendered most of these men redundant. Emerging dangers in the post-Cold War scene were difficult to identify with certainty; it was even more difficult to create new cadres of officers to deal with these dangers. These factors led to a considerable decline in the over-all number of Operations Directorate officers, rather than a shift of human resources to new targets. Instead, there took place a shift of resources from gathering human intelligence to gathering signals intelligence and remote observation. To compensate for the loss of case officers, the Directorate of Operations shifted to relying upon liaison relationships with foreign intelligence services. (pp. 76-77.)

Later, in 1995, the public revelation that the CIA had employed a Guatemalan involved with the death squads as an informant led to an order that “assets” who might be considered to have criminal or human rights problems in their records could only be recruited with prior approval of CIA headquarters in Langley. Hundreds of existing agents all around the globe were simply dumped and new ones rarely recruited. (pp. 79-81.) One case officer of the time fumed to Hersh: “Look, we recruited assholes. I handled bad guys. But we don’t recruit people from the Little Sisters of the Poor—they don’t know anything.” Bob Baer recalled that “It did make the workday a lot easier. I just watched CNN.” (Hersh, p. 81.)

By 9/11 the CIA lacked the personnel to respond effectively. In summer 2001—before the 9/11 attacks—former Middle Eastern case officer Reuel Marc Gerecht warned of the dangers in an article in The Atlantic Monthly. He quoted officers saying things like “For Christ’s sake, most case officers live in the suburbs of Virginia….Operations that include diarrhea as a way of life don’t happen.” (Quoted, p. 77.) As one now-retired clandestine service officer put it to Hersh, the decision-making was dominated by people who “wouldn’t drive to a D.C. restaurant at night because they were afraid of the crime problem.” (Quoted, p. 81.) So, that’s concerning.

[1] D’uh: the highly-educated child of Jewish immigrant parents living in Chicago. Gene McCarthy’s press secretary in 1968.

[2] Seymour Hersh, Chain of Command: The Road from 9/11 to Abu Ghraib (HarperCollins, 2004) based on a series of New Yorker pieces.

Flip-flops on the ground in Iraq.

Iraq’s war with Iran (1980-1988) proved longer and costlier than Saddam Hussein had ever imagined.[1] At the end of the war Saddam Hussein found himself ruling a country that had exhausted its once-huge financial reserves, that had become loaded with debt, and that badly needed reconstruction. Much of Iraq’s debt was owed to the Sunni Arab Gulf states. To finance the war he had presented himself to the other Gulf states as their shield against radical Shi’a Iran and had asked for money. Apparently Kuwait, the United Arab Emirates, and Saudi Arabia had seen it in the same light, because they loaned Iraq $40 billion.

The post-war negotiations with Iraq’s creditors were mismanaged on both sides. Iraq asked for too much: forgiveness of the $40 billion debt, plus $30 billion in new money to pay for reconstruction. Since the Iranian danger had been blunted over the course of the Eighties, Iraq’s creditors were not much inclined to give the country easy terms or, for that matter, anything at all. Both Saddam Hussein’s request for loan cancellation and his request for an additional $30 billion loan (which was just as unlikely to be repaid as the original $40 billion) fell on deaf ears. If Iraq could not get loan cancellation and additional loans, then it would have to pay its own way through oil sales. The falling price of oil put a severe crimp in what Iraq could earn. In these negotiations the Emir of Kuwait took a particularly strong stand for the sanctity of international economic agreements by insisting upon repayment of the existing debt at the same moment that he was exceeding his country’s oil quota, which helped drive the price of oil down in the first place.

In July 1990 Saddam Hussein sent Foreign Minister Tariq Aziz to put his case to the Arab League. The Iraqis made the same argument to the Arab League that the French and British once had made to the Americans after the First World War: We spent blood in the common cause while you gave only money, so you should cancel the money debt in exchange for us cancelling the “blood debt.” The Americans had not bought that line in 1919 and the Gulf states didn’t buy it in 1990.

On 17 July 1990 Saddam Hussein gave a belligerent speech that seemed to threaten action. That same day he sent the Kuwaiti government a letter in which he demanded a halt to the slide in oil prices, cancellation of Iraq’s debt to Kuwait, and an Arab package of aid to Iraq. Failing this, said Saddam Hussein, “we will have no choice but to resort to effective action to get things right and ensure the restitution of our rights.”[2]

To give meaning to this communication, Saddam Hussein ordered 30,000 troops massed close to the Iraq-Kuwait border. This threat, which Kuwait shared with Saudi Arabia and—undoubtedly—with the Americans, led the Saudi government to attempt to mediate. On 25 July 1990 Saddam Hussein had an interview with the American ambassador, April Glaspie, in which he gave her an ambiguous threat and she gave him an ambiguous warning. A week later, on 2 August 1990, the Iraqi army rolled into Kuwait.

The role of Saudi Arabia and its Gulf State clients in the coming of the First Iraq War is not much discussed these days in the American media. This role included financing Iraq in its long, predatory war against revolutionary Iran. It included pursuing a foolishly selfish policy on Iraq’s war-debts. It should surprise no one that, if it will take “boots on the ground” to defeat the Sunni fanatics of ISIS in their war against the pro-Iranian governments in Baghdad and Damascus, there will not be Saudi feet in them. Nor, probably, American feet. That just leaves the Iranians. Or the partition of Iraq.

[1] John Keegan, The Iraq War (2005).

[2] Quoted in Keegan, The Iraq War, p. 75.

White Flight from Baltimore.

Racism is widely deprecated. People of virtually all political stripes decry racism. Some Democrats deploy accusations of racism against their opponents in the sort of public shaming campaigns that other Democrats deplore when applied to other cases. However, one truth not much acknowledged in politics, the media, or scholarship is that—under most circumstances—racism isn’t illegal.[1]

The city of Baltimore offers an example of this inconvenient truth. After the Second World War, the other City by the Bay lost population, jobs, and the economic base needed to make the place run effectively. One important part of the problem arose from accelerating “white flight” from the city to the suburbs. Between 1950 and 1960, Baltimore’s population fell from 950,000 people to 939,000 people. From 1960 to 1970, Baltimore’s population fell from 939,000 people to 906,000 people. So, from 1950 to 1970 Baltimore lost 4.6 percent of its population.

Then came the riots of April 1968. Over a thousand businesses were looted, damaged, or burned down. The damage totaled about $79 million in today’s dollars. Virtually all of the businesses were owned by whites. One activist later reflected that “the riots really weren’t personal: They were against the system, not individual white people. There was only property loss.” However, property belongs to individuals. White flight accelerated, businessmen took their insurance money and moved to suburban locations, and landlords backed even farther off from maintaining property in a city where two-thirds of African-Americans rent their homes.

To make matters worse, Baltimore’s economic base declined. The Bethlehem Steel Company’s Sparrows Point complex of steel mill and shipyard provided high-wage jobs to a huge number of people in the area. During the 1970s and 1980s, Bethlehem Steel encountered all sorts of problems that it failed to master. Repeated rounds of retrenchment led to huge losses of jobs. Moreover, both the steel mill and the shipyard formed the center of networks of local suppliers of goods and services. Job losses at Sparrows Point rippled outward through the community. The decline of Sparrows Point and the attendant job loss cost the city an ever-growing amount of revenue.[2] From 1970 to 1980, Baltimore’s population fell from 906,000 people to 787,000 people. The decline continued until the population reached 651,000 people in 2000. All told, Baltimore’s population fell by 28 percent between 1970 and 2000. Most of the emigrants were whites. As a result, the “non-white” population in Baltimore rose from 24 percent in 1950 to 44 percent in 1970 to 65 percent by 2000.[3]

If 76 percent of Baltimore’s population was white in 1950, that would mean that about 725,000 white people lived in the city. By 2000, whites constituted 35 percent of the population. That would amount to about 228,000 people. Almost half a million whites left the city, and they took their tax payments with them: property taxes pay for schools, while business and operations taxes and licensing fees pay for city government functions like police and fire departments, and trash collection.

If the population was 950,000 in 1950 and half a million people left, then the city’s population should be about 450,000 people. However, the city’s 2000 population was actually 651,000 people. In all likelihood, the extra 200,000 people were African-Americans from farther South who moved North in hopes of finding greater opportunity. Need grew as resources shrank. Bitter must be their tears.
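A minimal back-of-the-envelope check of the arithmetic in the two preceding paragraphs (the population totals and the 76 percent and 35 percent white shares are the figures cited above; the rounding to “half a million” is the text’s own):

```python
# Back-of-the-envelope check of the Baltimore white-flight arithmetic.
# The census totals and the 76% / 35% white shares are the figures cited above.
pop_1950, pop_2000 = 950_000, 651_000
white_share_1950, white_share_2000 = 0.76, 0.35

white_1950 = pop_1950 * white_share_1950         # roughly 722,000
white_2000 = pop_2000 * white_share_2000         # roughly 228,000
white_loss = white_1950 - white_2000             # roughly 494,000 ("almost half a million")

# If the city had simply lost ~500,000 whites and gained no one, its 2000
# population would have been about 450,000. The gap between that figure and
# the actual 651,000 is the implied net in-migration discussed above.
expected_2000 = pop_1950 - 500_000               # about 450,000
implied_in_migration = pop_2000 - expected_2000  # about 200,000

print(f"White residents, 1950: {white_1950:,.0f}")
print(f"White residents, 2000: {white_2000:,.0f}")
print(f"Net white loss:        {white_loss:,.0f}")
print(f"Implied in-migration:  {implied_in_migration:,.0f}")
```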

[1] Some racist actions are illegal. Racist belief, however, is not illegal and many racist actions are not illegal.

[2] See Mark Reutter, Making Steel: Sparrows Point and the Rise and Ruin of American Industrial Might (2005).

[3] http://www.baltimoremagazine.net/2007/5/1/100-years-the-riots-of-1968?p=2007/5/100-years-the-riots-of-1968

Disruption.

Clayton Christensen, the Harvard Business School professor whose theory of “disruption” is all the rage, once used the decline of the American steel industry as an example.[1] Leaders obsessed with profit ratios surrendered the less profitable segments of their businesses to alligators willing to accept a smaller profit in order to take over that segment. The newcomers then expanded their profit margins by investing in modern technology and pursuing efficiencies. Eventually, Big Steel found itself devoured by the alligators. From 2000 to 2013, Dan DiMicco ran Nucor Steel, one of the alligators and now the second largest steel-maker in the United States. Since leaving Nucor, DiMicco has been pondering the state of the American economy—and of the society that the economy supports. What has he concluded?[2]

First of all, he thinks that the federal government botched the 2009 stimulus bill. He thinks that the almost $800 billion stimulus could have revived the economy if it hadn’t been piddled away on subsidies to “green technology” companies, grants to limit the lay-offs caused by the balanced-budget requirements of states squeezed by falling revenues, and tax cuts.[3] The failed stimulus and the Republicans’ obsession with cutting the deficit have left the economy laboring along in first gear, if not in neutral.

Second, he thinks that the United States needs to create an awful lot of jobs in a Hell of a big hurry. On the one hand, there is the normal population growth that pumps out new would-be workers onto the labor market. On the other hand, the Great Recession has left a lot of people working part-time or out of the labor market entirely. He figures that the economy will have to add at least 30 million new jobs over the next decade to soak up those who want to work. The post-Great Recession economy doesn’t seem up to this task. Instead, DiMicco argues for heavy investment in a ten year plan for infrastructure as part of the basis for reviving industry.

Third, he thinks that capital-intensive manufacturing jobs are better than labor-intensive service jobs. Capital makes for high productivity; high productivity allows both high wages and high profits. In contrast, labor-intensive jobs require employers to hold down wages in order to make even a razor-thin profit. We’re never going to get strong consumer demand from an overwhelmingly service-based economy. Nucor invests in training workers for their jobs (rather than shoving the task off on colleges), so it never suffered from a supposed “skills gap.”

Fourth, he thinks that Americans—leaders and followers alike—are living in La-La Land about America’s place in the world economy. The Second World War developed the American economy while devastating those of every other major country. For thirty years, American business and labor faced no serious challenge from foreign competition. At the same time, the United States promoted an open world economy because that would benefit the American economy of the Forties through the Sixties. The trouble was that the American economy did not stay “lean and mean,” while the reviving economies of Germany and Japan, and more recently China, became highly competitive. Moreover, the governments of those countries depreciated their currencies to make their countries’ goods more competitive with American ones. Free Trade has become a loser’s game for the United States.

There’s a lot to like in DiMicco’s bracing book.

[1] See: Larissa MacFarquhar, “When Giants Fail,” New Yorker, 14 May 2012.

[2] Dan DiMicco, American Made: Why Making Things Will Return Us to Greatness (Palgrave Macmillan, 2015).

[3] Here DiMicco is to some extent at odds with Paul Krugman. The Princeton economist wanted a stimulus bill that was twice as big, although he too derided the impact of the tax cuts.

Character Test.

Eduardo Porter has argued that Americans have been guided by a shared disdain for collective solutions and a belief in individual responsibility. The conservative argument offered by Charles Murray and others is that the welfare state has undermined the character of its beneficiaries. The liberal argument offered by Eduardo Porter and others is that America has relied on continuing prosperity instead of a real welfare state. When long-term economic troubles hit, many Americans plunged through the cob-web of a “safety net.”[1]

On the right, in line with the moral corruption argument made by Murray, Republicans propose to repeal the Affordable Care Act and cut a bevy of other programs for the poor. This, they argue, will end the culture of dependency that many conservatives blame for the creeping social pathologies that came to light after the recent Baltimore riots that followed the arresting-to-death of Freddie Gray. The Republican budget plans seem like a dead-end. For one thing, they target relatively low-cost programs aimed at the poorest Americans. In reality, defense, Medicare/Medicaid, and Social Security are the big drivers of government spending. As Willie Sutton explained when asked why he robbed banks, “That’s where the money is.”

For another thing, these categories of spending are widely popular with the American middle class. Once again, as with opposition to gay marriage and to immigration reform, Republicans are picking the losing side of an argument. Take Social Security as an example. As the Baby Boom retires, it places a mounting pressure on the system. When current revenue through withholding is inadequate to meet obligations, the System draws on the Social Security trust-fund (built up from revenue surpluses in the past). At the moment, the trust-fund is expected to be exhausted by 2033. After that happens, retiree benefits will be reduced to perhaps 75 percent of expected benefits.[2] Senators Elizabeth Warren and Bernie Sanders favor raising or removing the cap on Social Security withholding to greatly increase revenue for the supplemental retirement income system. However, they favor going beyond stabilizing the finances of the present system to create an expanded national pension system.[3]
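A toy model may make the mechanism clearer. Every number below (the revenue, the obligations, their growth rate, and the starting trust-fund balance) is an assumption chosen for illustration rather than an actuarial figure; the point is only that once the fund is drawn down, benefits fall to whatever share current withholding can cover.

```python
# Toy sketch of the trust-fund mechanism described above.
# All figures are illustrative assumptions, not Social Security actuarial data.
revenue = 900.0       # hypothetical payroll-tax revenue, billions per year (held flat)
obligations = 950.0   # hypothetical benefit obligations, billions in the first year
growth = 0.013        # hypothetical annual growth of obligations as the Baby Boom retires
fund = 2_800.0        # hypothetical trust-fund balance, billions

for year in range(2015, 2060):
    shortfall = obligations - revenue   # withholding no longer covers benefits...
    fund -= shortfall                   # ...so the gap is drawn from the trust fund
    if fund <= 0:
        print(f"Fund exhausted around {year}; thereafter current revenue covers "
              f"only about {revenue / obligations:.0%} of promised benefits.")
        break
    obligations *= 1 + growth
```

With these made-up inputs the fund runs dry in the early 2030s and current revenue covers roughly three-quarters of promised benefits afterward, which is the shape of the projection cited above; different assumptions would move the date and the percentage.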

This seems likely to emerge as a powerful issue in future elections. In 2005, 26 percent of still-working Americans expected “to rely on Social Security as a major source of income” in retirement. In 2015, 36 percent of still-working Americans “expect to rely on Social Security as a major source of income” in retirement. Among currently retired people, 73 percent are receiving reduced benefits because they retired early.

There are several possible explanations for the growing place of Social Security in the retirement income of Americans. One explanation could be that the Great Recession devastated both the savings and the income of ordinary Americans. Another explanation could be that a decade of aging forced many Baby Boomers to confront their own lack of thrift over the course of a lifetime. Similarly, the huge number of people who took early retirement could be explained by either the moral corruption argument or by the ravages of globalization over the last 25 years.

If conservatives want to sustain the moral corruption argument, they will have to openly apply it to middle class entitlements. Of course, cannibalizing the Affordable Care Act could provide some of the revenues to shore up middle class entitlements. However, this would require the middle class to turn its back on the poor. So, a test of character.

[1] Eduardo Porter, “Income Inequality Is Costing The Nation on Social Issues,” NYT, 29 April 2015.

[2] “Social Security worries mount,” The Week, 22 May 2015, p. 32.

[3] This strikes me as equivalent to the sort of defined-benefit system that American companies found to be unsustainable and abandoned in favor of the defined-contribution systems. Perhaps I’m wrong.

Terrorists in Palestine.

In the 1930s, which country posed the greater danger to the Jewish people? Was it Nazi Germany, which seemed bent on making the lives of Jews miserable in order to prompt their emigration? Or was it Britain, which seemed bent on blocking Jewish immigration to Palestine? In retrospect, with our knowledge of the Holocaust, the answer is obvious. At the time, however, some Zionists regarded Britain as the greater danger and more proximate enemy. In 1932 some of them founded the Irgun to drive the British out of Palestine by force. When the Second World War broke out and, in Summer 1940, German victories left the British standing alone, most Zionists saw Germany as the greater enemy. Most decided to support Britain in what amounted to an alliance-of-necessity. That included most of the members of the Irgun.

Most isn’t all: in August 1940 a small group under Avraham Stern splintered off and formed a terrorist group called Lehi.[1] Stern tried publishing a newspaper, but his men also robbed banks to fund the organization. One of Stern’s chief subordinates was Yaakov Banai (1920-2009), who had recently arrived from Poland by way of Turkey. Banai took charge of the fighting organization. In January 1942, one of these bank robberies led to a shoot-out in which Jewish civilians were killed. Later that month, Lehi used a bomb to kill three policemen. This put the British police over the edge. In February 1942, British police killed Stern. Yitzhak Shamir (1915-2012) took over as leader of Lehi, then rebuilt it.

By early 1944 the Second World War appeared to be turning decisively against Nazi Germany, while news of the Holocaust had filtered out to the Jews in Palestine. The alliance-of-necessity with Britain began to be contested once again among the Zionists. Irgun decided to join Lehi in armed struggle against the British. Irgun’s early actions were aimed at property rather than people: it bombed government buildings when they were empty and seized weapons from police stations.

Lehi pursued a different course. Eliyahu Hakim (1925-1945) was born in Beirut, Lebanon, then under French rule. In 1932 his family moved south along the coast to Haifa, Palestine, then under British rule. In early 1943, Banai recruited Hakim. Soon, the organization ordered him to enlist in the British Army. After training, Hakim was posted to Egypt. He quickly deserted and went into hiding. On 8 August 1944, he formed part of a Lehi group that tried to kill Harold MacMichael, the High Commissioner for Palestine. On 29 September 1944 Lehi caught up with one of the policemen blamed for the death of Stern. Two gunmen shot him eleven times. In October 1944 the British began deporting hundreds of captured Irgun and Lehi men to camps in Eritrea. In November 1944, Lehi paired Hakim with Eliyahu Bet-Zuri (1922-1945) to kill Lord Moyne, the British Minister of State in the Middle East. The two young men shot Moyne on 6 November 1944.[2] The gunmen were captured, tried, and hanged in 1945.

Hard pressure from the British fell on all the Jews in Palestine. In response, the Jewish Agency quietly co-operated with the British, but also launched its own “hunting season” that targeted members of Irgun and Lehi. The “hunting season” warded off British action against the Jewish Agency, but it also thinned the ranks of the agency’s chief political rival. The “hunting season” came to an end in early 1945 and the Second World War in Europe ended soon afterward. All the Zionists began to focus their energies on the struggle to create the state of Israel. Quarrels of the past and of the future were put aside.

[1] Bruce Hoffman, Anonymous Soldiers: The Struggle for Israel, 1917-1947 (Knopf, 2015).

[2] One of the pistols used to kill the policeman was also used in the Moyne shooting, so it is possible that one of the gunmen had participated in more than one attack. Or Lehi just had a small arsenal that had to be reused.

Inequality 5.

The community in which a person grows up exerts a big influence on his/her life-course. D’uh. Only now we have a big social science study to validate this common belief.[1] Growing up in a low-income black area reduces one’s chances of rising into the middle class, even if the person is white.

Some areas are dead-ends for low-income people. The old cotton South, Southern California, and much of the Rust Belt are bad places to be stuck.

Where are the places with the biggest positive impact on the earnings of low-income people? Places with lots of Scandinavians or Mormons: southern Minnesota, northern Iowa, Utah, adjacent parts of Wyoming, and southeastern Idaho. What distinguishes areas favorable to social mobility from places unfavorable to social mobility? The quality of the public schools, the share of two-parent families, the degree of social engagement by the community (functioning civic and religious groups), and the integration of different income groups in a single community.

Of course, the study may actually reveal the character of the people who go, as much as the character of the places to which those people go. Again, d’uh. Children who moved from a low-income area to a higher-income area were, later in life, less likely to become single parents, more likely to go to college, and likely to earn more money. Moreover, the places where poor people cluster are full of poor people. The schools are poor, there is a lot of pathological behavior, and it isn’t very safe. Parents who move from a lower-income place to a better-income place do their children an immense service. Still, they have to pay a cost.

However, the study revealed several disparities. One is between older and younger siblings in the same families. The sooner a kid gets out, the better for their life prospects; the later a kid gets out, the worse for their life prospects. Getting a kid out before age 9 or 10 offers the best hope. Chances decline rapidly after that age. A second disparity is between the sexes. Low-income women who grow up in higher income areas earn about 25 percent more than low-income women who grow up in low-income areas. Low-income men who grow up in a higher-income area earn about 30 percent more than men who grow up in a low-income area. What was not reported was the comparative chances of being employed or unemployed.

The same study found that “commuting time has emerged as the single strongest factor in the odds of escaping poverty.”[2] The longer the commute, the lower the chance of improving one’s life. Basically, there aren’t any jobs in the places where poor people live. To get a job, someone has to travel. One of the big problems is that public transportation is not equally distributed across communities. In a lot of middle-class places, everyone has a car so no one cares about public transportation. If someone who is poor wants to live in one of these communities, they need to get a car. Aye, there’s the rub.

Still, what causes higher-income areas to be better than low-income areas? Sure, “they have more money.” Why do they have more money? Because they’ve always had more money, so they have better schools, two-parent families, and kids who go to college? Or because there is a culture that values marriage, family, education, and civic engagement? Which of these factors can be addressed by public policy? Which are matters of “personal responsibility”?

[1] David Leonhardt, Amanda Cox, and Claire Cain Miller, “Change of Address Offers A Pathway Out of Poverty,” NYT, 4 May 2015.

[2] Mikayla Bouchard, “Transportation Emerges As Key to Escaping Poverty,” NYT, 7 May 2015.

Days of Rage.

The Civil Rights movement in the South encountered a lot of violent resistance. (Birmingham, Alabama became known in some quarters as “Bombingham.”) The United States began to escalate its military commitment to South Vietnam. JFK, RFK, and MLK all were assassinated. Nothing in conventional politics seemed able to stop the momentum. In response, in Summer 1969, things began to boil over on the American Left. Outside the South, the Black Panthers (founded in Oakland in 1966) were on the rise. Some people began to contemplate the “propaganda of the deed,” as the pre-revolutionary Russian dissidents had called bombings and assassinations. Perhaps 100,000 young people had signed up with the Students for a Democratic Society (SDS) by 1968. A radical fringe broke away from SDS over its rejection of violence. They called themselves The Weathermen. When the Weathermen called for supporters to stage so-called “Days of Rage” in Chicago in October 1969, only about 200 people showed up. The disappointed Weathermen resolved on a terror campaign. Independently of the Weathermen, Sam Melville planted dynamite at a disused United Fruit warehouse in New York. Soon afterward, the Weathermen went underground themselves.

There was a great deal of savagery as well as a great deal of foolishness in the campaign that followed.[1] “Protests and marches don’t do it. Revolutionary violence is the only way,” said Bernardine Dohrn. “We could do [non-fatal fire-bombings] until we were blue in the face, and the government wouldn’t really care,” recalled one Weatherperson years later.[2] So, they opted for something more dramatic. Bombings followed in Berkeley, Detroit, Cleveland, and New York City. In February 1969, a secretary at Pomona College was wounded by a bomb. In August 1969, one of Sam Melville’s bombs wounded twenty people in New York. In March 1970 one plan went wrong when a Weather Underground bomb factory in Greenwich Village blew up, killing three members of the group. The Weather Underground announced that it would shift back to non-lethal bombings. Apparently it was safer (for them) that way. In August 1970 a bomb at the University of Wisconsin killed a researcher named Robert Fassnacht. Between May 1971 and January 1972, a “Black Liberation Army” (BLA) killed five policemen around the country and badly wounded two others. In February 1974, the Symbionese Liberation Army kidnapped Patty Hearst. In May 1974, the Los Angeles police caught up with most of the group. The SLA got shot to bits on live television. In January 1975 the FALN, a terrorist group advocating Puerto Rican independence, launched a campaign that would run for eight years and set off 130 bombs. Finally, in October 1981, the BLA tried to rob a Brink’s armored car outside New York City. In the robbery and in a confrontation with the police afterward, a Brink’s guard and two police officers were killed.

Bryan Burrough charitably describes the Weathermen, the Symbionese Liberation Army, the Black Liberation Army, and a group of Puerto Rican nationalists as “young people who fatally misjudged America’s political winds and found themselves trapped in an unwinnable struggle they were too proud or too stubborn to give up.” That could be. In “The Searchers” (1956, dir. John Ford), the character played by John Wayne explained why he could not take an oath as a Texas Ranger: “I figure a man’s only good for one oath at a time and I took mine to the Confederate States of America.”

[1] Bryan Burrough, Days of Rage: America’s Radical Underground, the FBI, and the Forgotten Age of Revolutionary Violence (Penguin, 2015).

[2] It strikes me as odd to complain that a government one accuses of putting property rights ahead of human rights doesn’t really care about property, but does care about harm to humans. I’m probably missing something.

Toward the cliff.

In brief compass, the “supply side” theories of the Reagan administration de-stabilized the traditional budget by cutting taxes without cutting expenditures.[1] Deficits expanded. However, observers were more concerned about the budget deficits that would be driven by the cost of entitlements—Medicare, Medicaid, and Social Security—for Baby Boomers. If one takes as a given that government can only account for some fixed share of GDP, then the growth of entitlements will crowd out spending on other areas: defense and the wide range of government functions labeled as “discretionary spending.”[2] These entitlements are so popular and the mythology surrounding them so powerful that the elected representatives in a democratic polity have been unwilling to address them. The problem festered.

Then came the Great Recession. In 2009, the government’s deficit peaked at over 10 percent of Gross Domestic Product (GDP). By 2013 the Congressional Budget Office (CBO) projected that the deficit would fall to 2.1 percent of GDP. Moreover, in September 2013 the CBO projected that short-term government deficits would shrink, thanks to the economy’s recovery from the Great Recession and the cuts enforced by “zee zequester.” So, the deficit has been mastered. We’re good, right?

Well, no. The deficit arising from the Great Recession has been mastered. However, that was a matter of course. Counter-cyclical deficit spending has been the normal response to recessions for half a century. Economic activity revives, spending falls, and tax revenues increase, so the deficit goes away.[3]

However, the deficit arising from entitlement programs has not been mastered. Or addressed. Or even acknowledged. Social Security, Medicare, and Medicaid are about to rise sharply in total cost as the fabled Baby Boomers begin to retire in droves. From 1973 to 2013, spending on Medicare, Medicaid, and Social Security averaged 7 percent of GDP per year; the CBO projects that it will rise to 14 percent of GDP by 2038. The deficit is projected to widen again as well: to 3.5 percent of GDP by 2023 and to 6.5 percent by 2038. The trouble is that the economy will not grow as fast as government spending does, and federal revenues are projected to rise by only 2 percent of GDP over the same period. Hence, the accumulating deficits will drive up federal debt held by the public from the 38 percent of GDP that formed the average from 1968 to 2008 to 100 percent of GDP in 2038.
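A rough sketch of how deficits of that size cumulate into debt may help. In the toy projection below, the starting debt ratio (roughly 73 percent of GDP) and the 4 percent nominal growth rate are illustrative assumptions of mine, and the deficit path is a simple straight-line interpolation; only the end-points (3.5 percent of GDP in 2023, 6.5 percent in 2038) come from the figures cited above.

```python
# Toy projection: cumulate an assumed deficit path into debt as a share of GDP.
# The starting debt ratio, the growth rate, and the shape of the path are
# illustrative assumptions; only the 2023 and 2038 deficit figures come from the text.
debt = 0.73      # assumed debt held by the public, as a share of GDP, in 2013
growth = 0.04    # assumed nominal GDP growth per year

def deficit(year):
    """Assumed deficit as a share of GDP: narrowing through 2015, then widening."""
    if year <= 2015:
        return 0.021 + 0.010 * (2015 - year)
    if year <= 2023:
        return 0.021 + (0.035 - 0.021) * (year - 2015) / 8
    return 0.035 + (0.065 - 0.035) * (year - 2023) / 15

for year in range(2014, 2039):
    # Next year's ratio: last year's debt plus this year's deficit, divided by
    # a GDP that has grown by 4 percent.
    debt = (debt + deficit(year)) / (1 + growth)

print(f"Debt in 2038: roughly {debt:.0%} of GDP (the CBO projection is 100%).")
```

Even this crude accounting lands in the neighborhood of the CBO’s 100 percent figure; a real projection would also model interest costs and growth feedbacks explicitly.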

Neither Republicans nor Democrats have shown any willingness during the Obama Administration to address this important long-term problem. The administration has concentrated its efforts on raising taxes on the higher income groups, rather than on trying to contain or reduce costs. The Republicans have concentrated on trying to repeal the Affordable Care Act, rather than on trying to address the ballooning costs of entitlement programs.

Nor is it likely to emerge as a major issue in the 2016 presidential election. Older people vote in larger percentages than do younger people. No one has yet formulated a way to deliver the same quality of medical care or retirement income at a much lower cost. No one yet has formulated a way to raise substantially larger tax revenues from all Americans.

[1] Jackie Calmes, “Budget Office Warns That Deficits Will Rise Again Because Cuts Are Misdirected,” NYT, 18 September 2013.

[2] Obviously, one does not have to agree that some fixed share of the economy should be devoted to government spending. Certainly Senator Sanders does not.

[3] See Paul Krugman’s terse, withering evaluation of President Obama’s performance in this regard in NYT, 8 May 2015.

Rise of the Machine Minders.

Back in August 2014, Tyler Cowen, an economics professor at George Mason University, got the idea that students returning to college for Fall Term needed to hear his advice on the keys to success.[1] He wrote a column for the New York Times. Here, in a nut-shell, is what he said.[2]

He took as his point of departure the proliferation of “thinking” machines in the economy. Industrial robots threaten factory workers; self-driving vehicles threaten truck drivers, if not other motorists; drones threaten airline pilots, did they but know it; e-discovery research software threatens lawyers and college professors. What are we non-mechanicals supposed to do for jobs in this dawning era? What are the keys to success?

Being conscientious is one key. The opportunity to do something and actually doing it are two different things.[3] If students or workers need something more corporeal than a gnawing anxiety to keep them relentlessly on task, they’re in for a bad time.

Listening to the machines is a second key. GPS-based driving instructions delivered by an Oxbridge-educated woman of a certain age are just the beginning.[4] Pretty soon our smart phones will be tuning-up our decision-making in many areas. Deciding, like Doc Boone in “Stagecoach,” to wave off the warning and have another drink will have a cost.

Remembering that Price reflects the ratio between Supply and Demand is a third key. Stuff that is rare will command a higher price than stuff that is common. In human terms, people need to work on how they present themselves and how they interact with other people.

The Return of Calvinism is a fourth key. Either people are internally motivated or they are externally motivated. If an employer wants to keep down labor costs, then an effective “Atta boy” or “You slime” can replace a bonus as a motivator. People who know which incentive to choose will thrive in the new economy.

Developing a Thick Hide is a fifth key. Computer programs will be able to measure productivity and some other aspects of employee performance.[5] (Not all aspects, just some other aspects.) Workers at every level will get turned into a mathematical formula.[6] You need to be able to learn from a harsh performance review, get up, and move on.

Remembering Harold Lamb is a sixth key. One of his characters was a 17th Century Cossack who bought a pair of expensive leather boots to demonstrate that he had money and then spilled tar on them to demonstrate that he didn’t care about money. Lots of young workers are libertarian-subversive in this way.

Recognizing that machines under-cut the price advantage of cheap (Asian) human labor is a seventh key. A higher class of Nineteenth Century “machine minders” is on the horizon. That will be good for the right American workers.

Your quick-conceiving discontent will have noticed that none of this has anything to do with which major a student chooses. It is all about what kind of person chooses that major.

[1] My guess would be that he was fed up with the stuff that his teaching assistants had been telling him and with the results of the Generals Examinations of the graduate students.

[2] Tyler Cowen, “Who Will Prosper in the New World,” NYT, 1 September 2014.

[3] See: Dorothy Parker.

[4] You ever wonder if people into BDSM choose some German dominatrix’s voice? Just asking.

[5] The SEC wants to track executive compensation against company performance. The first rule for plumbers is that shit runs downhill. The second is to not bite your fingernails.

[6] This will lead to its own disasters. Both life experience and literature indicate that there are non-performers who are vital to the functioning of an organization. So, judgment and experience will be vital.