This is how effective energy policy will happen

James Conca recently wrote Does Our Military Know Something We Don’t About Global Warming?. In it, Jim referenced a Eugene Skolnikoff Foreign Policy essay I had not seen before, “The Policy Gridlock on Global Warming“. It is an excellent survey of why it has so far proven impossible to assemble political support for non-trivial energy policy changes. It was written in 1990 (!) but remains true today. And my personal priors were nicely confirmed by Jim’s excerpt:

“The central problem is that outside the security sector, policy processes confronting issues with substantial uncertainty do not normally yield policy that has high economic or political costs. This is especially true when the uncertainty extends not only to the issues themselves, but also to the measures to avert them or deal with their consequences.”

“The climate change issue illustrates – in fact exaggerates – all the elements of this central problem. Indeed, no major action is likely to be taken until those uncertainties are substantially reduced, and probably not before evidence of warming and its effects are actually visible. Unfortunately, any increase in temperature will be irreversible by the time the danger becomes obvious enough to permit political action.”

I wonder whether a country run more by engineers than by lawyers will be able to act sooner than the Western democracies. It’s encouraging that China is making a big investment in advanced nuclear while building Gen III plants about as fast as it can.

Source: James Conca, Does Our Military Know Something We Don’t About Global Warming? There are many reasons to read the Conca essay. For example, did you know this bit of history about Reagan, Bush 41, and Thatcher?

At a time when Presidents Ronald Reagan and Bush 41, and even British Prime Minister Margaret Thatcher, called for binding international protocols to control greenhouse gas emissions, the U.S. Military was seriously studying global warming in order to determine what actions they could take to prepare for the change in threats that our military will face in the future.

Nuclear Has Scaled Far More Rapidly Than Renewables – The Clean Energy Transition Needs the Atom


Anyone interested in rapidly increasing the production of clean energy should look to nuclear. The most ambitious renewables plan — Germany’s Energiewende — has brought far less zero-carbon energy far less quickly than similar efforts focused on nuclear. Being cool, profitable and popular is fine, but irrelevant. We need a reliable technology that delivers deep energy emission cuts and we need it fast. — Geoff Russell

Please bookmark Geoff Russell’s essay on The Breakthrough. In very few words Geoff makes it completely clear that nuclear is an essential part of any sane strategy for slashing carbon emissions. The anti-nuclear activists are the problem.

How do the rollout speeds of renewables and nuclear power compare?

Let’s compare the speed of per capita electricity generation growth in a few countries for renewables and nuclear. I’m guessing nobody will object if we use the German wunderkind as a top-performing renewables example. We’ll plot the last 11 years of wind and solar growth, starting 12 months after the beginning of their feed-in tariff scheme. We’ll also throw in the last 11 years of Chinese per capita electricity growth from all sources, just to put their coal/wind/nuclear/solar/hydro build in proper per capita context.

All of our comparison cases, except one, are historical. They aren’t plans, they are achievements. Anti-nuclear campaigners are fond of finding particular nuclear power stations with time or cost overruns to ‘prove’ how slow or expensive nuclear electricity is to roll out. Cherry picking examples is a time-honored strategy when objective argument fails.

(…snip…)

Being cool, profitable, and popular is fine, but irrelevant. We need a reliable technology that delivers deep energy emission cuts and we need it fast.

It’s rapidly becoming crystal clear that the biggest enemy we face in preventing ongoing climate destabilisation is the anti-nuclear movement. They have cost the planet two decades which could otherwise have seen many more countries with clean electricity, and now they are running a distracting strategy promoting technologies which are intrinsically slow to roll out. They have, in effect, created an energy growth vacuum being filled by coal seam gas, which is quick to build but which won’t prevent further climate destabilisation.
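Russell’s comparison boils down to simple arithmetic: zero-carbon generation added per year, divided by population. Here is a minimal sketch of that calculation, using made-up generation and population figures rather than Russell’s data:

```python
# Minimal sketch of a per capita rollout-speed comparison
# (illustrative numbers only, not Russell's dataset).
def added_kwh_per_person_per_year(gen_start_twh, gen_end_twh, years, population_millions):
    """Average annual added generation, expressed in kWh per person per year."""
    added_twh_per_year = (gen_end_twh - gen_start_twh) / years
    return added_twh_per_year * 1e9 / (population_millions * 1e6)

# Hypothetical example: a country adds 60 TWh of wind + solar over 11 years,
# population 80 million -> roughly 68 kWh per person per year of new output.
print(round(added_kwh_per_person_per_year(10, 70, 11, 80)))
```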

Germany renewables vs. demand

Cyril R. sums up German energy policy in a three-sentence comment on John Morgan’s wonderful Catch-22 of Energy Storage:

Capacity factor of solar PV in Germany is 10%. Wind in Germany is around 16%.

Electricity demand in Germany peaks in winter, when the capacity factor of solar ranges from 0% to 3%.

These energy sources aren’t there most of the time, and certainly not when they’re needed most, which is in the evening and in winter.
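For readers new to the term, capacity factor is just average output divided by nameplate capacity. A minimal sketch of what those percentages imply, with a hypothetical installed-capacity figure:

```python
# Capacity factor = average delivered power / nameplate capacity.
# The 10% (annual solar) and 0-3% (winter solar) figures are from the comment
# above; the 38 GW fleet size is a hypothetical, German-scale number.
def average_output_gw(installed_gw, capacity_factor):
    return installed_gw * capacity_factor

installed_pv_gw = 38
print(average_output_gw(installed_pv_gw, 0.10))  # ~3.8 GW average over the year
print(average_output_gw(installed_pv_gw, 0.03))  # ~1.1 GW in a poor winter month
```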

UCB’s Per Peterson on China’s advanced nuclear program

In this essential Breakthrough interview Per Peterson summarizes China’s advanced nuclear development, including the US-China collaboration. I think this collaboration is the one global effort that could have a material impact on climate change. US support for the cooperation seems to be hidden from the usual political shout-fest; at least, if there is anyone in the executive taking credit for even allowing the cooperation, I’ve not heard of it. Imagine what could be accomplished if there were enthusiastic, high-level backing and 10x as much funding. This is just a fragment of the interview focused on China:

What are China’s plans for advanced molten salt nuclear reactors?

China has a huge nuclear program and is building almost every kind of reactor possible, including a number of experimental advanced reactors. Two years ago the Chinese Academy of Sciences decided to pursue a thorium liquid-fueled molten salt reactor, but first decided to build an intermediate reactor that uses a solid fuel with salt as coolant. (The choice to build a solid fuel reactor reduces the licensing risk without heavily compromising performance.) In 2015, China will be starting the construction of the 10 MW solid-fueled thorium molten salt test reactor. By 2017 they hope to have this reactor operating. And by 2022, they hope to have commissioned a 100 MW thorium molten salt commercial prototype reactor. Alongside this effort, the Chinese will be developing a 2 MW liquid-fueled reactor that will enter the final stages of testing in 2017.

Are you collaborating with the Chinese on this effort?

There is an ongoing formal collaboration between the Chinese Academy of Sciences (CAS) and the US Department of Energy (DOE). The DOE has a memorandum of understanding with the CAS. Under this formal umbrella, our research group has an informal relationship with the Shanghai Institute of Physics. There is also a cooperative research agreement being developed between China and Oak Ridge National Laboratory in Tennessee, which would provide funding for China’s thorium molten salt research effort.

Tell us more about US involvement in the Chinese effort to commercialize advanced nuclear technologies.

The US DOE has been reviewing the Chinese effort to build a molten salt reactor. The Chinese program has been using US expertise in reactor safety, and US experts have reviewed the early test reactor design and remain engaged. So far, China’s nuclear regulatory policy has been to adopt and follow the safety and licensing regulation of the exporting country. Russian-built reactors in China have adopted a regulatory approach similar to that of Russia. Likewise, licensing for the Westinghouse AP1000s that are being built in China is following a US approach. There appears to be an emerging consensus approach in the US and in China for safety for molten salt reactors as well.

How should the US participate in the commercialization of these reactors?

My view is that the United States needs to maintain the capability to independently develop advanced nuclear designs that are being studied and will be commercialized in China. Maintaining such capability could encourage US-China joint ventures, which could accelerate development and thus ensure that commercial designs are deployed at large scale as soon as possible. The United States has a lot of expertise in the areas of nuclear safety and licensing, and could bring such expertise to US-China partnerships. If new advanced nuclear designs are simultaneously licensed in both the US and China, the possibility for large-scale deployment increases.

Do you think such reverse engineering is possible? Isn’t China keeping their plans secret?

The Chinese Academy of Sciences has been remarkably open and transparent in their effort to build their thorium molten salt reactor. They’ve been doing a lot of international collaboration. All of the reports are published in an extraordinary level of detail. This collaboration is really important if we want to see this technology developed and deployed soon enough to make a real difference in helping reduce climate change. If China can stay on track to commission a 100 MW commercial scale reactor by 2022, it would be fantastic if this reactor could include substantial contribution by US industry as well. This kind of collaboration could lead to a joint venture effort that could result in more rapid and larger near-term deployment.

The April 2014 Breakthrough interview is a very concise, up-to-date, and informed perspective on the current status and future of nuclear power: UC Berkeley’s Per Peterson Pursues Radical New Design with Off-the-Shelf Technologies. Please help everyone you know to read and understand it.

 

“Eroom’s Law: the cost of developing a new drug doubles every nine years”

[Figure: drug approvals, via Arnst]

Eroom’s Law is Moore’s Law spelled backwards. Sadly, it describes the reality of declining drug approvals per R&D dollar. Diagnosing the decline in pharmaceutical R&D efficiency was published in Nature Reviews Drug Discovery in March 2012. The abstract:

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the ‘better than the Beatles’ problem; the ‘cautious regulator’ problem; the ‘throw money at it’ tendency; and the ‘basic research–brute force’ bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
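A quick sanity check on those numbers: halving every nine years over the six decades since 1950 compounds to roughly a hundredfold decline, and an 80-fold decline over the same span implies a halving time of about nine and a half years, so the two figures are consistent. A small sketch of the arithmetic:

```python
import math

# Eroom's Law arithmetic check (illustrative, using the abstract's round numbers).
halving_time_years = 9
elapsed_years = 60  # roughly 1950 to 2010

decline_factor = 2 ** (elapsed_years / halving_time_years)
print(f"halving every {halving_time_years} years -> ~{decline_factor:.0f}-fold decline")

# Conversely, an ~80-fold decline over 60 years implies this halving time:
implied_halving = elapsed_years * math.log(2) / math.log(80)
print(f"80-fold over {elapsed_years} years -> halving every ~{implied_halving:.1f} years")
```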

For commentary on the Scannell et al. paper, Why Drug Development is Failing – and How to Fix It is useful. BTW, Derek Lowe is my favorite observer of pharma research; I highly recommend In the Pipeline.

The new drug drought was highlighted in January by Derek Lowe, a pharmaceutical scientist who writes the influential blog In the Pipeline. He asked his readers to name the most worthwhile new drug that had been introduced since 1990. Of the many candidates nominated, the vast majority were brought to market in the first half of that 20-year span.

One reason for the industry’s meager R&D productivity is the sheer complexity of the human body, argue four analysts at Sanford C. Bernstein, led by Jack W. Scannell. In their article in Nature Reviews Drug Discovery, “Diagnosing the Decline in Pharmaceutical R&D Efficiency,” they examined R&D projects for more than 28,000 compounds investigated since 1990. During that 20-year period the pharma industry increasingly concentrated its R&D investments on drugs that address unmet therapeutic needs and untargeted biological mechanisms—areas where the need is great but the risk of failure highest. This is the widely held “low hanging fruit” theory of the drug drought: the easier disease targets, such as high cholesterol, asthmatic airway passages, migraines, and ulcerous digestive systems, have been met. Complex diseases such as cancer and neuro-degenerative conditions are much harder to solve.

But Scannell and his colleagues also laid out four additional, interlocking arguments that may explain the decline in R&D output:

  • The ‘better than the Beatles’ problem: Imagine how hard it would be to come up with a successful pop song if any new song had to be better than the Beatles. Unlike cars or electronics, with drugs there’s no interest in novelty for its own sake. And there’s no point in creating something that’s only just as good as what’s already available, especially since today’s hit drug is tomorrow’s inexpensive generic.
  • The ‘cautious regulator’ problem: The progressive lowering of risk tolerance, particularly after the pain treatment Vioxx was removed from the market in 2004 for safety reasons, raises the bar on safety for new drugs, which makes R&D both costlier and harder.
  • The ‘throw money at it’ tendency: The tendency to just keep pouring more money and resources into a research project or a widely-held theory until something sticks. Could also be called throwing good money after bad.
  • The ‘basic research-brute force’ bias: The industry’s tendency to overestimate the probability that advances in basic research and large-scale screening processes will show a molecule to be safe and effective in clinical trials.

As an outsider I find it easy to place a lot of the blame on the ‘cautious regulator’ problem; a similar disease afflicts the US nuclear power industry. A standout example of the impact on drug development is the near impossibility of gaining approval for new drug “cocktails”. The ‘personalized medicine’ concept exploits our ability to combine very fast sequencing of a patient’s DNA with exploding ‘big data’ containing detailed cases of patients-symptoms-drugs-outcomes. Sadly, it’s nearly impossible to get such drug combinations approved.

Scannell and his fellow authors throw water on the personalized medicine theory by pointing out that despite the shift to targeted drugs and high-tech screening tools, the probability that a small-molecule drug will successfully complete clinical trials has remained almost constant for the past 50 years. And those treatments that do succeed can cost patients and insurers hundreds of thousands of dollars per year, because they will by definition only work on the small number of patients who have the cellular target. Physicians who prescribe drugs and the scientists who invent them are increasingly embracing a more nuanced view of drug discovery: the idea that most diseases require a combination of targeted drugs, often called a cocktail, to be held in check. The cocktail approach proved effective against AIDS, and medical experts believe the same approach may be necessary for cancer, Alzheimer’s, and a range of other diseases.

The problem with cocktails, however, is that it can be difficult if not impossible for two different companies to test experimental drugs in concert, for both competitive and safety reasons. Companies are beginning to overcome those competitive challenges, however, and collaborate on some of the most difficult challenges in medicine, most notably Alzheimer’s disease, the only one of the top 10 causes of death in the U.S. with no known cause, cure or even a way of slowing its progression. In 2004 the National Institutes of Health, the FDA and 20 drug companies joined forces to start the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a landmark public-private partnership tasked with mapping all the biological markers connected to Alzheimer’s. The ADNI’s defining principle is to publicly share and relinquish ownership of all data and findings as soon as possible. More than 57 sites are collecting data from thousands of patients, and the results to date have already been incorporated into research and trials by pharmaceutical companies.

Very high background radiation areas of Ramsar, Iran: preliminary biological studies

Jim Conca cited this abstract on PubMed:

People in some areas of Ramsar, a city in northern Iran, receive an annual radiation absorbed dose from background radiation that is up to 260 mSv y(-1), substantially higher than the 20 mSv y(-1) that is permitted for radiation workers. Inhabitants of Ramsar have lived for many generations in these high background areas. Cytogenetic studies show no significant differences between people in the high background compared to people in normal background areas. An in vitro challenge dose of 1.5 Gy of gamma rays was administered to the lymphocytes, which showed significantly reduced frequency for chromosome aberrations of people living in high background compared to those in normal background areas in and near Ramsar. Specifically, inhabitants of high background radiation areas had about 56% the average number of induced chromosomal abnormalities of normal background radiation area inhabitants following this exposure. This suggests that adaptive response might be induced by chronic exposure to natural background radiation as opposed to acute exposure to higher (tens of mGy) levels of radiation in the laboratory. There were no differences in laboratory tests of the immune systems, and no noted differences in hematological alterations between these two groups of people.

I found an ungated version of the paper here. Study participants were 14 normal-background and 21 elevated-background persons.

PWC: Heading for 4°C, pledging for 3°C, talking about 2°C


Globally we are out of time; we now need to increase the decarbonization rate by a factor of five. From PWC: Low Carbon Economy Index 2014 | 2 degrees of separation: ambition and reality

The PWC 6th annual Low Carbon Economy Index 2014 (LCEI) tracks the rate at which G20 countries are decarbonizing their economies. Globally we are achieving only 1% per year versus the 6.2% per year needed for a 50% chance of limiting warming to 2°C or less. PWC has published an important contribution, very well explained and illustrated. If you are in a big hurry, then at least watch the 2.7-minute video (with transcript).
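The gap between 1% and 6.2% compounds dramatically over a few decades. A minimal sketch of what each rate does to carbon intensity; the 2050 horizon is my choice for illustration, not a figure from the report:

```python
# Compare carbon intensity (emissions per unit of GDP) trajectories under the
# current ~1% per year decarbonization rate vs the 6.2% per year PWC says is
# required. The 2050 horizon is illustrative, not taken from the report.
years = 2050 - 2014
at_current_rate = (1 - 0.01) ** years    # ~0.70x today's carbon intensity
at_required_rate = (1 - 0.062) ** years  # ~0.10x today's carbon intensity
print(f"carbon intensity in 2050: {at_current_rate:.2f}x (current pace) "
      f"vs {at_required_rate:.2f}x (required pace)")
```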

Who Pollutes Most? Surprises in a New US Database

 “If the US were to put a carbon tax in place, it’s not the case right off the bat that the members of one party would be disadvantaged relative to the other,” Kevin says.

“The difference in political rhetoric is far greater than the difference in environmental reality,” he adds. “The rhetoric should be: Why are we taxing things we want more of, like income, instead of things we want less of, like pollution?”

The Center for Global Development has a couple of new papers out. Both contribute to the political context of possible carbon fees. And there’s a new podcast interview with researcher Kevin Ummel. Lawrence MacDonald:

Pollution has no respect for party lines. In the US, Republican and Democratic districts may differ in many ways, but when it comes to the carbon emissions heating our planet the differences are much smaller than you might expect. This is one of the most surprising and important findings in a remarkable new working paper from CGD visiting senior associate Kevin Ummel. I’m so excited about this paper that I took a short break from my new job at the World Resources Institute to discuss with Kevin the far-reaching implications of his work for the design and politics of US carbon pollution fees.

Kevin’s paper, Who Pollutes? A Household-Level Database of America’s Greenhouse Gas Footprint, is a slender 23 pages that sits on the brawny shoulders of a fresh approach to available data and a muscular number-crunching exercise to estimate the greenhouse gas emissions of households all across America.

Kevin tells me that he set out to study the consumption habits of American households based on the recognition that “every kilogram of human-caused emissions can be traced to a consumptive choice on the part of an individual, a household, or in some cases, a government.”

Kevin used data from two massive surveys (the Consumer Expenditure Survey and the American Community Survey) to determine what American households buy with their money. He then combined this survey data with data from the environmental sciences to “translate how people spend their money into an estimate of how much [carbon] pollution they are producing.”

One surprise: the high degree of what Kevin calls “pollution inequality”—the top 10 percent of US polluters are responsible for 25 percent of the country’s carbon footprint, while the least-polluting 40 percent of Americans account for just 20 percent.

Who pollutes most? Low-density, affluent suburbs, where the lifestyle includes big homes, big cars, long commutes and plenty of international air travel. Many of these people also recycle and opt for local produce to reduce their carbon footprint! (Sound like anybody you know?)

High-density cities have the lowest household carbon footprint—especially the poorer neighborhoods that tend to vote for Democrats. More surprisingly, less affluent rural communities that tend to vote Republican also have small carbon footprints.

The new data show that these geographical distinctions are much starker than the differences between the carbon footprints of Republican and Democratic districts, which tend to be “very, very small,” Kevin says.

More at the CGD source.
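The translation Kevin describes, from spending to emissions, is conceptually just a weighted sum over expenditure categories. Here is a minimal sketch of that kind of calculation, not Ummel’s actual model; the categories and per-dollar intensities are invented for illustration:

```python
# Hypothetical per-dollar emission intensities (kg CO2 per dollar spent).
# These values and categories are made up for illustration only.
KG_CO2_PER_DOLLAR = {
    "gasoline": 2.5,
    "electricity": 4.0,
    "air_travel": 0.9,
    "food": 0.4,
    "other_goods": 0.3,
}

def household_footprint_kg(annual_spending):
    """Estimate a household's annual carbon footprint from its spending."""
    return sum(dollars * KG_CO2_PER_DOLLAR.get(category, 0.0)
               for category, dollars in annual_spending.items())

example = {"gasoline": 2500, "electricity": 1500, "air_travel": 1200,
           "food": 8000, "other_goods": 15000}
print(f"{household_footprint_kg(example):,.0f} kg CO2 per year")
```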

 

US coal imports don’t offset emissions reduction from coal to gas switching

Alex Trembath takes a hard look at the data. Robert Wilson rebuts Greenpeace on the same question:

Greenpeace's analysis is demonstrably wrong, and the comments made by Lauri Myllyvirta on Twitter suggest he should learn some basic facts before writing analysis that would get at best a C if submitted as a GCSE assignment. Unfortunately journalists who should know better reported his analysis with no outside comment. This happens too often with unrigorous reports by NGOs.

To figure out what wind is replacing, all you need to look at is the marginal fuel. Any electricity coming from a wind farm will replace whatever is on the margin.

In Britain it appears that wind farms currently displace gas 1-1. That was the conclusion Chris Goodall came to after analysing recent output data (written up in the Guardian alongside Mark Lynas). After looking at the numbers myself, the arguments seem robust, though peer-reviewed research has yet to be done, as far as I know. The marginal fuel has overwhelmingly been gas recently, so wind really just displaces gas.

In the US things are more complex, because there are an array of regional grids. The paper below (possibly paywalled) provided estimates of the marginal fuel mixes in the key regions (see their table 1).

http://pubs.acs.org/doi/pdf/10…

Some wind-heavy places have gas almost exclusively as the marginal fuel. In Texas it is 84% gas. Others, such as the Midwest, are more likely to have coal as the marginal fuel. But it is clear that the marginal fuel is more likely to be gas than coal.

This shows that Greenpeace's naive assumption that wind displaces coal 1-1 is not based on reality.
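Wilson's marginal-fuel point reduces to a small calculation. A sketch using the post's 84% gas share for Texas, assuming the remainder of the margin is coal, with rough typical emission factors (about 0.4 t CO2/MWh for gas, 0.9 for coal):

```python
# Emissions displaced per MWh of wind depend on the marginal fuel mix, not on
# a blanket "wind displaces coal" assumption. Emission factors are rough,
# typical values; the fuel shares are assumptions for illustration.
GAS_T_CO2_PER_MWH = 0.4
COAL_T_CO2_PER_MWH = 0.9

def co2_displaced_per_mwh(gas_share):
    """Tonnes of CO2 avoided per MWh of wind, if the rest of the margin is coal."""
    return gas_share * GAS_T_CO2_PER_MWH + (1 - gas_share) * COAL_T_CO2_PER_MWH

print(co2_displaced_per_mwh(0.84))  # Texas-like margin: ~0.48 t CO2/MWh
print(co2_displaced_per_mwh(0.0))   # all-coal assumption: 0.9 t CO2/MWh
```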