Bob Marley meets Adolf Hitler: Reggae.

When the Germans overran western Europe in summer 1940, the Americans took over the defense of a bunch of British possessions in the Americas: in the Bahamas, Guyana, the Leeward Islands, and southern Jamaica. Lonely American kids far from home brought American music with them. Later on, radio stations of the Armed Forces Network played American music all over the world. People with radios listened to that music. It wasn’t Lawrence Welk. In the Caribbean, people could hear both jazz and rhythm and blues.

Traditionally, Jamaicans had listened to live music, usually in dance-halls. After the Second World War, radio DJs and music promoters began running what were called “sound systems.” These were flat-bed trucks with generators, big speakers, and turntables. The idea was to roll into some poor neighborhood, begin blasting music, draw a crowd, and sell goat-jerky and white lightning to the audience. This innovative approach to marketing soon won large audiences. The trouble was that it threatened to put the owners of dance-halls out of business. How to regain market share? They started hiring “rude boys” to go cause a ruckus at “sound system” street dances.  Cut somebody’s face with a straight-razor, stuff like that.  This made the dance halls seem somehow…safer.  How were the “sound system” promoters going to regain market share? They hired their own “rude boys.” Rinse and repeat.  So, Jamaican music started to acquire a certain association in some minds—those of the venue-owners, the performers, the audience, the “rude boys” themselves—with violence.

Then the Americans moved on to rock and roll, leaving R and B behind.  Jamaicans didn’t like rock and roll as much as they liked R and B, so the “sound systems” started paying local musicians to record original music for them to play.  “Duke” Reid opened the first Jamaican studio in 1959.

There is an interesting progression in Jamaican music.  Mento is a kind of Jamaican folk music that developed in the 1940s and became very popular in the 1950s.  It sounds like calypso to my ignorant ear, but purists insist there is a difference.  Mento emerged as dance-hall music that was popular with poor people.  Hence, Marcus Brewer’s analysis of rap music applies to mento.[1]  Ska then added influences from American jazz and rhythm and blues in the late 1950s. It replaced mento (without destroying it).  It also spread to Britain, where it became popular with “mods” and later with “skinheads.”  Rocksteady then emerged about 1966 as a slowed-down version of ska.  By mid-1968 music had moved on yet again to reggae, partly under the growing influence of Memphis and Detroit “soul” music. (See: Motown, Stax, and Atlantic Records.)

What’s distinctive about reggae? It’s the mood as much as anything else (I would argue).  Partly, this comes from the emphasis on suffering and hardship in the wake of the failed hopes of the early Sixties. Partly, the simple chord progressions—or, according to my technical advisor, smoking a lot of ganja—encourage a meditative mood.  Partly, many of its musicians and followers were Rastafarians whose faith explained past suffering and promised future redemption.

How did reggae get to the United States?  The character “Pussy Galore” in Ian Fleming’s James Bond novel Goldfinger was modeled on Blanche Lindo.  Her son, Chris Blackwell (1937- ), founded Island Records in Jamaica in 1959, then moved it to Britain in 1962. There he promoted Jamaican music for a British audience.  He produced the movie “The Harder They Come” (1972) and the first Bob Marley and the Wailers album released outside Jamaica, “Catch a Fire” (1973).  Eric Clapton covered “I Shot the Sheriff” (1974), introducing Marley to a huge American audience. The Wailers toured the US as the opening act for Sly and the Family Stone (1973), but got fired for being much more popular.

[1] “They’re pretty angry most of the time, but sometimes they just want to have sex.”–“About A Boy” (2002).

Big Pharma

What we think of as medicine is a fairly new development. Doctors used to be able to set broken bones, sew up cuts, lop off limbs, and give you an emetic. This changed in the later 19th Century, thanks to the addition of chemistry to medicine. Anesthesia and disinfectants made invasive surgery possible. No screaming, no gangrene. Then insulin (1921) and penicillin (1928) were discovered. Direct chemical treatment of disorders became possible. After the Second World War scientific research was applied in a systematic way to expanding knowledge of biology and techniques for producing drugs improved. The results of this combination appeared in a flood of new drugs and the growth of huge pharmaceutical companies. The new products included oral contraceptives, blood-pressure medicines, and psychiatric drugs. Cancer drugs began to come on-line in the 1970s. More recently, there have been drugs to treat cholesterol, acid-reflux, and asthma, as well as Viagra–and anti-depressants for when that doesn’t work. Then there is the terrible plague of male pattern baldness.

There have been several important developments in my life-time.

First, rules for medical trials became more elaborate and restrictive. Between 1957 and 1961 doctors prescribed a new tranquilizer to pregnant women to counter morning-sickness. Unfortunately, thalidomide caused terrible birth defects. In 1962 Congress amended the law governing the Food and Drug Administration to require that new pharmaceuticals prove not only safety, but also “efficacy”: the ability to produce a specific desired effect (and not some other effect or no effect) before a drug could be released. In 1964 the World Medical Association established rules requiring testing before the release of any new drug.  Again, pharmaceutical companies were required to prove “efficacy.” These reforms greatly extended the time and cost invested before drugs were released. In recent years, medical crises—like heart-disease and AIDS—have created a countervailing pressure for accelerated testing and approval.

Second, the pharmaceutical business became highly concentrated and vertically integrated. (These are business terms, but it is a business. You should learn what they mean—although I didn’t when I was your age.) As pharmaceutical research sought treatments for complicated illnesses, research and development became more expensive. As research and development became more expensive, companies faced a greater risk that they would not be able to cover their costs before any patent ran out. Therefore, during the 1970s many countries passed laws strengthening patent protection and extending the terms of the patents issued to pharmaceutical companies. These were intended to prevent generic producers from just figuring out the chemical basis of a drug, then producing it without having to bear the high costs of research. During the 1980s a wave of “buy-outs” of small bio-tech firms by big pharmaceutical companies took place. Today most pharmaceutical research, production, and sales are concentrated in fewer than twenty large companies. These companies are based in the United States, Britain, France, Germany, and Switzerland, although they operate internationally. This is called “concentration.” Each of these companies researches, develops, manufactures, and markets its products. This is called “vertical integration.” Critics refer to this complex as “Big Pharma.”

Most prescription drug use takes place in a few rich countries (US, EU, Japan). Don’t get sick somewhere else. (My son’s room-mate got bit by a rabid dog while in Bolivia one summer. He had to fly home to get the injections to save his life. What if he had been Bolivian?) However, China, Russia, and South Korea expanded sales by 81 percent in 2006. Pharmaceuticals are already the most profitable business in America. Now a big money harvest looms in the developing world.

The GWOT if Israel were in charge.

What if Israel ran the Global War on Terror (GWOT)?

On the wall of his office Meir Dagan had an old black-and-white photograph of his grandfather about to be shot by a German in Russia during the Second World War. Must be some German soldier’s snap-shot, something he could keep as a trophy or send home to his girlfriend. I don’t know where Dagan got it. Probably did a lot of looking through the picture collection at Yad Vashem. This may not be psychologically healthy. Perhaps he should have considered grief counseling. On the other hand, Dagan was the head of the Israeli foreign intelligence service, the Mossad. He could look at it anytime he wanted during the day while he tried to figure out how to deal with Israel’s enemies.

One of the units under Dagan’s command was called “Kidon.” That’s the Hebrew word for bayonet. (Actually, it probably means “dagger” or “six inches of honed bronze” because Hebrew is a language from the many days ago before Bayonne even existed.) You go to Barnes and Noble, you’ll find a bunch of books about American snipers with 500 “kills” or sumshit like that. Kind of FPSy if you ask me. I don’t think I’ve run across books about sticking a blade in somebody, feeling it grate on a rib, inhaling the coppery smell of blood, hearing the guy gasping for breath like it’s sex. Nothing FPS about that. Kidon typifies Israel’s response to terrorism.

After the 1972 Munich Olympics, Kidon launched “Operation Wrath of God.” (See: “Munich.”) The Israelis killed eleven PLO terrorists believed to have been involved in the attack. It took seven years. Apparently, they’re tenacious and patient.

At least once, in Lillehammer, Norway, they killed a complete innocent. In front of his pregnant wife. Apparently, they don’t get thrown off-track by remorse over errors.

After the Oslo Accords of 1993, Hamas sent many suicide bombers into Israel. The Israelis didn’t take this lying down. In 1996 they palmed off a “burner” filled with explosives on Yahya Ayyash, the really talented chief bomb maker for Hamas; in 1997 they tried to kill Khaled Meshal, a Hamas leader, by injecting poison into his ear; in 2004 they killed the founder of Hamas, Sheikh Ahmed Yassin, with an Apache gunship; in 2008 they put a bomb in the headrest of a Hamas leader’s car in Damascus. In January 2010 they suffocated the chief contact between Hamas and Iran in his luxury hotel room in Dubai. Apparently, they focus on the enemy leadership. Just keep mowing the lawn.

When Hamas took full control of Gaza in 2007, it fired thousands of rockets into Israel. Israel responded by blockading Gaza: it will not allow in cement, steel, cars, computers, and lots of ordinary food; its navy will not let fishing boats proceed more than three miles from shore; it will not allow any Palestinians out of Gaza. From December 2008 to January 2009 Israeli forces bombarded the Gaza Strip. Anything big (police stations, factories, government buildings, schools, hospitals) got blown up; 1,300 people got killed; tens of thousands got “dishoused”—as the RAF used to describe the result of the area bombing of German cities. Apparently, they don’t care much about making a bad impression on world opinion.

At the same time, Israeli leaders began to talk about doing a deal with Syria for the return of the Golan Heights. Syria is the chief supporter of Hamas. Probably, the price of the Golan for Syria would include helping eliminate the ability of Hamas to engage in attacks on Israel—before the Syrians get back the Golan. (See: “Michael Collins.”) Apparently, they adapt to changing circumstances and will talk to their enemies.

So, tenacity, patience, focus, a thick hide to criticism, and adaptability are key traits. The enemy hasn’t gone away, but neither have the Israelis. They live with a long struggle.

What we learned from the report of the 911 Commission XII

On 12 October 2000, an al Qaeda team staged a suicide bombing against the American warship USS Cole while it was at anchor in the Yemeni port of Aden. The attack killed 17 American sailors.

Although the CIA “described initial Yemeni support after the Cole [bombing] as ‘slow and inadequate,’…the Yemenis provided strong evidence connecting the Cole attack to al Qaeda during the second half of November, identifying individual operatives whom the United States knew were part of al Qaeda. During December the United States was able to corroborate this evidence. But the United States did not have evidence about Bin Laden’s personal involvement in the attacks until Nashiri[1] and Khallad[2] were captured in 2002 and 2003.” (p. 278.)

The Yemenis arrested two of the surviving members of the Cole team; extracted from them the names and descriptions of Nashiri, their immediate commander, and Khallad, the liaison who came from Afghanistan; and suggested to the Americans (correctly) that Khallad was actually Tawfiq bin Attash. (p. 277.) Both Nashiri and Khallad were known to the Americans to have been involved in the 1998 embassy bombings, for which al Qaeda had claimed credit, and to be linked to al Qaeda. (p. 278.) An FBI special agent participating in the investigation recognized the name Khallad as someone described by an al Qaeda source as Bin Laden’s “run boy.” In mid-December 2000 the Americans’ al Qaeda source identified a photograph of Khallad obtained from the Yemenis as Bin Laden’s agent. (pp. 277-278.)

Moreover, the 12 October 2000 “attack on the USS Cole galvanized al Qaeda’s recruitment efforts.” [OBL ordered production of a propaganda video that highlighted the attack on the Cole.] “Al Qaeda’s image was very important to Bin Laden, and the video was widely disseminated… and caused many extremists to travel to Afghanistan for training and jihad. Al Qaeda members considered the video an effective tool in their struggle for pre-eminence among other Islamist and jihadist movements.” (p. 276.) [NB: Al Qaeda appeared to be claiming responsibility for the attack. How could the CIA still waver over identifying OBL as the originator of the attack on the Cole?]

In mid-November 2000 Sandy Berger asked Hugh Shelton to review plans for military action against Bin Laden. On 25 November 2000 Berger and Clarke wrote to President Clinton to inform him that the investigation would soon show that the Cole attack had been launched by a terrorist cell whose leaders belonged to al Qaeda and whose members had trained in al Qaeda facilities; the memo also sketched out a “final ultimatum” to the Taliban being pushed by Clarke. (pp. 280-281.)


[1] Abd al-Rahim al-Nashiri (1965- ). Saudi Arabian. One of the “Arab Afghans” who fought the Soviet Union in Afghanistan. Eventually aligned with Osama bin Laden. Captured by the CIA in 2002. Reportedly “waterboarded” during interrogation. Currently being held at Guantanamo.

[2] Walid Muhammad Salih bin Roshayed bin Attash (1979- ).  Yemeni immigrant to Saudi Arabia.  Another “Arab Afghan.”  Became very close to Osama bin Laden.  Captured in 2003.

What we learned from the report of the 911 Commission XI

Post-Crisis Reflection: Agenda for 2000.

In January, February, and March 2000 the NSC and others reviewed what lessons might be learned from the “millennium crisis.” They concluded that any effort at disrupting al Qaeda operations had to be undertaken in a more determined way henceforth and that domestic security had already been penetrated by “sleeper cells.” Action to deal with these problems was approved in a general way. (pp. 262-263.)

Various American delegations (including one by President Clinton which the security-conscious Secret Service loudly opposed) went to Pakistan in January, March, May, June, and September. The trouble is that the US had nothing to offer the Pakistanis as a reward for their co-operation: Congressionally-imposed sanctions prevented the government from offering anything of substance [and apparently the Clinton Administration did not want to brave the wrath of Congress by requesting a revision of relations with Pakistan]. (pp. 263-265.)

Richard Clarke seems to have been so focused on al Qaeda that he could not see the need for CIA assets to deal with other forms of terrorism, still less for a robust general intelligence capability. This led to bitter disputes between Clarke and the CIA leaders, who may have played the terrorism card as a budget ploy without fully appreciating how grave a danger America faced. (pp. 265-266.)

The executive branch didn’t get very far trying to tighten up border security, especially with regard to Canada.

By the end of 1999 or the start of 2000 the leader of the Northern Alliance, Ahmed Shah Massoud, wanted the US to line up as his ally in the struggle to overthrow the Taliban. Both Cofer Black and Richard Clarke wanted to do then what the US did anyway after 9/11. At the minimum, this would allow the CIA to put its agents into Afghanistan on a long-term basis, rather than relying on hearsay from the Northern Alliance and the “tribals.” The Clinton administration declined to forge such an alliance: the Tajik-dominated Northern Alliance represented the minority within Afghanistan and many of its people had very shady pasts. (p. 271.)

Meanwhile, CIA agents in Malaysia took the group of suspects identified by the NSA intercepts under surveillance, but failed to communicate departure information in a timely fashion when some of the men moved on to Bangkok, Thailand. CIA agents in Bangkok not only failed to arrive at the airport in time to tail the arriving suspects, they failed to learn that two of the suspects had left for the United States on 15 January 2000 until March 2000. CIA’s Counterterrorist Center did not inform anyone else—neither the State Department nor the FBI—of the arrival of the two suspects in the United States until January 2001, after the bombing of the U.S.S. Cole. (pp. 261-262.) As a result, the first two members of the 9/11 team arrived in Los Angeles on 15 January 2000, at the height of the “millennium crisis.” Although both were Arabs and neither spoke any English, they attracted no recorded attention from Customs.

“Heading South” (2005, dir. Laurent Cantet)

 

International Tourism.

Romans used to go to Greece to acquire some “polish.”  English noblemen used to send their sons on the “Grand Tour” for the same purpose.  Between the wars Americans used to visit European war cemeteries to see where their “gallant Willy fell.” Today, tourism is big business: in 2010 there were 940 million international tourist “arrivals” someplace and the industry earned $919 billion. The USA earned over $100 billion from foreign tourists that year. Airlines, hotels, taxis, restaurants, tour-guides, museums, and sellers of hand-woven guitars all profited. (Unless you’re willing to rough-it: learn to recognize foreign traffic signs, pick up some phrases from a guide book, and eat what you ordered by mistake even though it turned out to be a psychotropic carcinogen, the way the missus and I do. See: Mark Twain, The Innocents Abroad.)

 

International sex tourism.

Once upon a time, guys went to “the big city” for these purposes (see: Patricia Cohen, The Murder of Helen Jewett) or Mexico (see: JFK). Now, air travel allows people to zoom all over the earth for the same purpose. Try getting through the streets around the Rijksmuseum in Amsterdam to see the porcelain violins when a ferry-load of Brits show up on a cheap-beer-and-expensive-sex outing, and start ogling the girls in lingerie sitting in the shop windows—many of whom are petting a cat in a bit of symbolic advertising.

Of course, most people aren’t beer-sodden British soccer hooligans, so there is a market in other parts of the world. Most of them are naturally hot and sweaty places: Tunisia—where the “recent unpleasantness” has left a whole class of service workers on the beach in Speedos—Gambia, Kenya, Bali in Indonesia, Thailand, Brazil, and the Caribbean.

 

Female sex tourism.

Women started traveling for “romance” in the mid-19th century. (See: the novels of Henry James and E. M. Forster). In the first half of the 20th Century there are some pretty interesting stories of women charting their own course, although this often involves highly repressed Northern women falling for highly unrepressed (to put it mildly) Southern men. There’s probably some kind of message about life there. You never see books or movies about some Greek having an epiphany and deciding to pay his bills or go to work on time.

So, skip ahead to the aftermath of the Feminist and Sexual Revolutions of the 1960s and 1970s. Women got careers outside that of “homemaker”; women had a difficult time finding men who would accept them in their new roles (or do the dishes); marriages broke down at high rates or never got formed; and women had money. This meant that some women had one sort of success, but no significant other in their life. Result: female sex tourism blossomed (although hardly to the scale of male sex tourism). Anyway, that’s the belief. It is hard to find women who will own up to this. This makes me think that there may be a certain prurient motive behind the “exposés.” Like that “I can’t believe it’s not butter” guy on the cover of romance novels.

There are a few scholarly studies in The Annals of Tourism Research and a UC-Santa Barbara Ph.D. dissertation by April Gorry. Popular culture books and movies dealing with this supposed phenomenon include: the movie “Shirley Valentine” (1989); the novel by Terry McMillan and the movie made from it, “How Stella Got Her Groove Back” (1998); creepy Michel Houellebecq’s novel Platform (2002); and the movie “Heading South” (2005).

In voodoo, Legba is the “master of the crossroads” who controls access to the spirits.

Big discounts at the Organ Loft!

Popular culture side-swipes reality when it comes to organ-theft. Organ theft is a “trope” (a recurring motif, AKA cliché) in many Japanese anime and manga, and in American comic books, video games, and television (C.S.I., Law and Order, Justified, Futurama). Examples:

Robin Cook, Coma (1977). The recent development of successful transplantation techniques suddenly creates an imbalance between the supply of and demand for organs, so a black-market arises. A deranged doctor in a Boston hospital induces comas in healthy patients undergoing minor procedures, then harvests the organs.

“Coma” (1979). The movie version of the book, directed by Michael Crichton.

1989: a Turk came to Britain, sold a kidney, got stiffed on the payment, and lied to the police that he had been robbed of a kidney. This is the origin of the urban legend about “I woke up in a bath tub full of ice…”

“Death Warrant” (1990). No one cares what happens to the inmates in maximum security prisons. An evil warden, corrupt guards, and a greedy doctor kill inmates to harvest organs for sale on the black market. The very institutions that guard us are actually criminal.

“The Harvest” (1993). Writer goes to Mexico, gets robbed of a kidney, tries to find the people responsible, partially succeeds, and then finds out that his boss has just had a transplant.

Christopher Moore, Island of the Sequined Love Nun (1997). Predatory missionaries.

“Dirty Pretty Things” (2002). Hard-pressed illegal immigrants in Britain sell organs.

“Sympathy for Mr. Vengeance” (2002). Hard-pressed South Korean factory worker sells a kidney to save his sister’s life, gets cheated, she dies, and he wreaks a bloody vengeance.

“Shichinin no Tomurai (The Innocent Seven)” (2005). Seven groups of abusive parents get an offer from a mysterious figure. They’re likely to either kill their kids or lose them to the child welfare people. Why not make a different kind of “killing” by selling the children so that their organs can be harvested? A week at a mountain vacation camp will close the deal. This may reflect Japanese discomfort with transplants, plus the Aum Shinrikyo terrorist cult.

Kazuo Ishiguro, Never Let Me Go (2005). Test tube babies + cloning = human spare parts for when you come down with some life-threatening disease. Your liver goes? Just pop one out of the “donor” you paid to have created many years ago. Now everyone can live to be 100! In the meantime, the future donors are raised in ignorance of their intended function.

“Turistas” (2006). The developed world has exploited the developing world for centuries. (See: Andre Gunder Frank.) Now it is time for reparations. A deranged doctor abducts gringo tourists who visit a remote beach resort. He harvests their organs, which are donated to the poor in a Brazilian hospital.

“Repo! The Genetic Opera” (2008). In the sinister future a big corporation supplies organs for transplant on credit. Transplant technology has progressed so far that you can get replacement intestines and spines. If you fall behind on your payments, however, the company sends around some guys to re-possess your implanted organ, just like your car or washing machine. The consequences aren’t the same as having your car or washing machine repoed, however. You die. The movie is a musical.

Eric Garcia, Repossession Mambo (2009). Uses the same sinister future/big corporation/buy on credit/get repoed premise as above. Adds bio-mechanical organs/people hiding from their creditors and being hunted by repo men twists for product differentiation.

“Repo Men” (2010). The chop-socky movie version of Garcia’s novel.

“Never Let Me Go” (2010). The excellent movie version of Ishiguro’s novel.

Give my knees to the needy.

Organ transplantation.

In the 7th Century BC,[1] a Chinese physician named Bian Que tried transplanting the heart of a strong-willed commoner into the body of a weak-willed emperor.

During the late 19th Century surgeons finally developed the technical ability to conduct operations (knowledge of how the body functioned, anesthesia, antiseptics) and this made transplants possible. However, it took much longer to develop the ability to prevent rejection of the implanted organ by the body’s immune system. Thus, the transplanted “Hands of Orlac” (1924) weren’t. Lung (1963), liver (1967), and heart (1967-1968) transplants were “successful” in the sense that the patients lived for weeks to months after the operation. The development of the immuno-suppressive drug cyclosporine during the 1970s finally permitted routinely successful transplantation. Since then transplants have become common: hearts, lungs, kidneys, livers, pancreases, hands, facial tissue, and bones have all been transplanted. No brains, yet.

The mismatch between donors and recipients.

Generally, there are more sick people in need of an organ than there are dead people with healthy organs for “harvesting.” While the growth of organ transplantation has extended many lives, people often die waiting for an available organ. National medical systems have developed ways of determining who gets priority.

However, there are two issues to bear in mind. First, national boundaries create barriers between donors and recipients. Second, as we have seen in so many other areas, great differences of wealth and income between different parts of the world lets buyers in rich countries get what they want in poor countries. People with money who want to jump the line can seek organ transplants abroad. One outcome of globalization has been to create a market in organs for transplant.

The global trade in organs.

Some Asian countries used to have a legal market in organs: India (until 1994), the Philippines (until 2008), and China (to this day) all allowed the legal sale of organs. Sometimes governments participate in this trade. An estimated 90 percent of the organs from China are taken from criminals executed in prisons. (They used to shoot them in train stations.)

There is also a thriving black-market in organs. The average price paid to a donor for a kidney is $5,000, while the average cost to the recipient is $150,000. When the Indian Ocean tsunami wrecked many fishing villages, about 100 villagers—almost all of them women—sold kidneys. According to one report, 40-50 percent of the people in some Pakistani villages have only one kidney. “It’s a poverty thing. You wouldn’t understand.”

Both the desire to circumvent the laws at home and the need to be close-by when an organ becomes “available” have stimulated “medical tourism.”

Finally, there is the alleged problem of “organ theft.” Given a shortage of voluntary donors, it has been suggested that some middle-men may turn to theft or murder. This is a common theme in horror movies and urban legend. It doesn’t have much truth behind it. Which isn’t the same as saying it doesn’t happen at all. “Hey buddy, can you give me a hand?”

[1] I can just see the Three Wise Men—one of them played by Buscemi—impatiently flipping through the calendar in 1 BC, marking off the days until Jesus would be born, trying to get a cheap flight, then getting told that Bethlehem’s inns are all booked solid: “Zoro-H-Aster! What are we supposed to do, stay in a manger?”

Climate of Fear X November 2014.

For twenty years China has been driving hard for industrialization. About 70 percent of all Chinese energy comes from coal. Chinese industry burns coal for fuel and Chinese apartment buildings are heated by coal-burning generators. China burns about as much coal as every other country in the world combined. The newly-affluent Chinese middle-class buys cars. There are already 120 million cars and as many other motor vehicles spewing out exhaust.

Of the twenty most-polluted cities in the world, sixteen are in China. All sorts of ludicrous examples of the “How bad is…?” variety can be cited. During one recent bout of smog in Beijing, for example, a factory caught on fire and burned for three hours before anyone noticed the flames. This is at least as bad as that time the river that runs through Cleveland caught fire.

The health effects are awful. Over the last thirty years, Chinese lung cancer rates have risen by 465 percent. Thousands of people stream into hospitals complaining of breathing problems whenever air pollution becomes particularly bad.

The Chinese government turned a blind eye to this problem for a long time. Recently, they have found it much harder to pretend that killer smogs are just “heavy fog.” For one thing, foreigners don’t want to visit China if it just means that they’re going to feel like they’ve been working through two packs of Camels a day for twenty years. Tourism has fallen off and foreign businessmen don’t want to base themselves in China. For another thing, ordinary Chinese people are starting to complain. Since Tiananmen Square back in 1989 most Chinese have been cautious about demonstrating for democratic government. However, the environmental problems are pushing people into the streets for reasons other than a stroll in the park. One count estimates that there are 30,000 to 50,000 protests a year over clean air, clean water, and clean food.

The pollution problems have become so severe, and have generated a measure of public unrest, that the government began preparing for a shift to nuclear power and renewable energy sources. Looking down-range fifteen to twenty years, they seem to have concluded that they would have to continue expanding the generation of electricity through carbon-burning while preparing for a transition to other forms of energy. Hence China’s commitment in November 2014 to reach peak carbon burning by 2030 and to draw 20 percent of its energy from non-carbon sources merely formalized its existing policy.

Still, this commitment leaves a bunch of stuff—aside from ash particles—up in the air. How much energy will China require in 2030? Are they close to meeting their projected needs already? If so, then reaching peak could be a simple matter. What if they’re only at their half-way mark? Is there any quantitative value assigned as the Chinese peak? Or do the Chinese just get to expand carbon burning as fast as they can until 2030, while also expanding non-carbon energy sources to 20 percent of whatever is the total peak? Will China be building nuclear power plants and solar collectors at a rapid pace for decades to come? If the Chinese government is responding now to public unhappiness with pollution, how will it respond in the future to public unhappiness with either slowing economic growth or trying to transition away from a major industry?

 

“The face-mask nation,” The Week, 15 November 2013, p. 9.

Henry Fountain and John Schwartz, “Climate Pact by U.S. and China Relies on Policies Now Largely in Place,” NYT, 13 November 2014.

Climate of Fear IX November 2014.

India is bound to be a big loser from global climate change. The air pollution in Delhi is worse than that in Beijing; sea-level rise could forcibly displace 37 million Indians by 2050; and water for farmers could be affected by accelerated melting of glaciers in the Himalayas or disruption of the monsoons. So, India has a deep interest in limiting climate change. However, India is also one of the principal forces causing climate change.[1]

Burning coal for generating electricity is central to India’s strategy for economic development. The country has huge coal deposits (the fifth largest in the world), but little oil or natural gas. Consequently, India launched a ten-year plan for building coal-burning generating plants back in 2009. Generating capacity has already expanded by 73 percent. In 2013 India burned 565 million tons of coal. Most Indian coal has a high ash-content, so it pollutes more than do some other commonly used types of coal. All this burning makes India the third-largest emitter of greenhouse gases. By 2019 the government plans to burn more than a billion tons a year. “India’s development imperatives cannot be sacrificed at the altar of potential climate changes many years in the future,” the government’s Minister of Power has asserted.
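It is worth pausing on what those numbers imply. Going from 565 million tons in 2013 to more than a billion by 2019 is roughly a doubling in six years. A quick back-of-the-envelope sketch (figures taken from the paragraph above; the 2019 target is rounded down to an even billion tons):

```python
# Back-of-the-envelope check of India's planned coal expansion.
# Assumed figures (from the text): 565 million tons burned in 2013,
# "more than a billion tons a year" targeted by 2019.
coal_2013 = 565           # million tons
coal_2019_target = 1000   # million tons (rounded)
years = 2019 - 2013

# Compound annual growth rate implied by the plan
cagr = (coal_2019_target / coal_2013) ** (1 / years) - 1
print(f"Implied annual growth in coal burning: {cagr:.1%}")
```

That works out to roughly ten percent compound growth in coal burning every year, which gives a sense of how fast the emissions curve is bending upward.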

It will be difficult to argue that India should postpone its plans for development. Three hundred million Indians have no electricity at all, and many more have it only in fits and starts. On a per-capita basis, Indians consume only one-fourteenth as much electricity as do Americans. In a country with hundreds of millions of people living in grotesque poverty, making do with less isn’t much of an option. Electricity powers industry and industry raises incomes.

India’s coal-fired industrialization effort alarms environmentalists elsewhere. “If India goes deeper and deeper into coal, we’re all doomed,” said one climate scientist at the Scripps Institution of Oceanography. There isn’t much ground for expecting push-back by Indian environmentalists. For the most part, Indians seem to accept both air pollution and the physical displacement of populations in the countryside to make space for more coal mines. The environmental movement in China seems to have more support behind it and, therefore, more influence with the government than is the case in India.

Nuclear power and solar generation offer alternative energy sources. A lot of Western India is cloudless for much of the year, so a lot of solar energy reaches the ground. The government of Narendra Modi has said that it will launch a program of constructing solar-energy plants. Whether this can be carried forward fast enough and on a large enough scale to replace India’s reliance on coal is hard to tell.

So, that’s a problem. Still, China currently burns as much coal as every other country in the world combined. Can India’s coal-burning really pose more of a problem than does China’s?[2] The recent agreement between the United States and China called for China to cap its greenhouse gas emissions before 2030. The Chinese may continue to shovel on the coal until then, but they also might begin to shift from a reliance on coal to other energy sources. If that comes to pass, it will be a lot more significant for the climate than is India’s continuing development of coal. If the rest of the world moves in one direction, then India might find a way to follow. There are a couple of big “ifs” there. Still, the prospects look better than they did a little while ago.

[1] Gardiner Harris, “Coal Rush in India Could Tip Balance of Climate Change,” NYT, 18 November 2014.

 

[2] China produces 46 percent of the world’s coal and imports more; India produces 7.7 percent of the world’s coal, but has been developing its own reserves because of the cost of imports. See: “Climate of Fear IX.”