UCB’s Per Peterson on China’s advanced nuclear program

In this essential Breakthrough interview Per Peterson summarizes China's advanced nuclear development, including the US–China collaboration. I think this collaboration is the one global effort that could have a material impact on climate change. US support for the cooperation seems to be hidden from the usual political shout-fest; if anyone in the executive branch is taking credit for even allowing the cooperation, I've not heard of it. Imagine what could be accomplished if there were enthusiastic, high-level backing and ten times as much funding. Here is just a fragment of the interview, focused on China:

What are China’s plans for advanced molten salt nuclear reactors?

China has a huge nuclear program and is building almost every kind of reactor possible, including a number of experimental advanced reactors. Two years ago the Chinese Academy of Sciences decided to pursue a thorium liquid-fueled molten salt reactor, but first decided to build an intermediate reactor that uses a solid fuel with salt as coolant. (The choice to build a solid fuel reactor reduces the licensing risk without heavily compromising performance.) In 2015, China will be starting the construction of the 10 MW solid-fueled thorium molten salt test reactor. By 2017 they hope to have this reactor operating. And by 2022, they hope to have commissioned a 100 MW thorium molten salt commercial prototype reactor. Alongside this effort, the Chinese will be developing a 2 MW liquid-fueled reactor that will enter the final stages of testing in 2017.

Are you collaborating with the Chinese on this effort?

There is an ongoing formal collaboration between the Chinese Academy of Sciences (CAS) and the US Department of Energy (DOE). The DOE has a memorandum of understanding with the CAS. Under this formal umbrella, our research group has an informal relationship with the Shanghai Institute of Physics. There is also a cooperative research agreement being developed between China and Oak Ridge National Laboratory in Tennessee, which would provide funding for China’s thorium molten salt research effort.

Tell us more about US involvement in the Chinese effort to commercialize advanced nuclear technologies.

The US DOE has been reviewing the Chinese effort to build a molten salt reactor. The Chinese program has been using US expertise in reactor safety, and US experts have reviewed the early test reactor design and remain engaged. So far, China’s nuclear regulatory policy has been to adopt and follow the safety and licensing regulation of the exporting country. Russian-built reactors in China have adopted a regulatory approach similar to that of Russia. Likewise, licensing for the Westinghouse AP1000s that are being built in China is following a US approach. There appears to be an emerging consensus approach in the US and in China for safety for molten salt reactors as well.

How should the US participate in the commercialization of these reactors?

My view is that the United States needs to maintain the capability to independently develop advanced nuclear designs that are being studied and will be commercialized in China. Maintaining such capability could encourage US-China joint ventures, which could accelerate development and thus ensure that commercial designs are deployed at large scale as soon as possible. The United States has a lot of expertise in the areas of nuclear safety and licensing, and could bring such expertise to US-China partnerships. If new advanced nuclear designs are simultaneously licensed in both the US and China, the possibility for large-scale deployment increases.

Do you think such reverse engineering is possible? Isn’t China keeping their plans secret?

The Chinese Academy of Sciences has been remarkably open and transparent in their effort to build their thorium molten salt reactor. They’ve been doing a lot of international collaboration. All of the reports are published in an extraordinary level of detail. This collaboration is really important if we want to see this technology developed and deployed soon enough to make a real difference in helping reduce climate change. If China can stay on track to commission a 100 MW commercial scale reactor by 2022, it would be fantastic if this reactor could include substantial contribution by US industry as well. This kind of collaboration could lead to a joint venture effort that could result in more rapid and larger near-term deployment.

The April 2014 Breakthrough interview is a concise, up-to-date, and well-informed perspective on the current status and future of nuclear power: UC Berkeley’s Per Peterson Pursues Radical New Design with Off-the-Shelf Technologies. Please help everyone you know to read and understand it.

 

Discounting and costs (Part 2): IPCC WGIII report on mitigation

This is a guest post by physicist Jani-Petri Martikainen @jpjmarti, proprietor of PassiiviIdentiteetti
(This post first appeared on Passiiviidentiteetti April 22, 2014)
 

In an earlier post I briefly discussed the scale of the challenge. In this one I discuss how the report treats the ethical issues surrounding responsibilities towards future generations, with a special focus on discounting and how it relates to cost estimates of various energy options.

“The use of a temporal discount rate has a crucial impact on the evaluation of mitigation policies and measures. The social discount rate is the minimum rate of expected social return that compensates for the increased intergenerational inequalities and the potential increased collective risk that an action generates. Even with disagreement on the level of the discount rate, a consensus favours using declining risk‐free discount rates over longer time horizons (high confidence).

An appropriate social risk‐free discount rate for consumption is between one and three times the anticipated growth rate in real per capita consumption (medium confidence). This judgement is based on an application of the Ramsey rule using typical values in the literature of normative parameters in the rule. Ultimately, however, these are normative choices.” IPCC WGIII Chapter 3
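For reference, the Ramsey rule referred to here is, in its standard textbook form (not quoted from the report), r = ρ + ηg: the social discount rate equals the pure rate of time preference ρ plus the elasticity of marginal utility of consumption η times the growth rate of per capita consumption g. With ρ near zero and η between one and three, which are typical normative choices in the literature, this reproduces the “one and three times the anticipated growth rate” range quoted above.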

 “A simple arbitrage argument favours using the interest rate as the discount rate for climate policy decisions: if one reallocates capital from a safe but marginal project (whose return must be equal to the interest rate) to a safe project with the same maturity whose return is smaller than the interest rate, the net impact is null for the current generation, and is negative for future generations. Thus, when projects are financed by a reallocation of capital rather than an increase in aggregate saving (reducing consumption), the discount rate should be equal to the shadow cost of capital.

This descriptive approach to the discount rate has many drawbacks. First, we should not expect markets to aggregate preferences efficiently when some agents are not able to trade, as is the case for future generations (Diamond, 1977). Second, current interest rates are driven by the potentially impatient attitude of current consumers towards transferring their own consumption to the future. But climate change is about transferring consumption across different people and generations, so that determining the appropriate social discount rate is mostly a normative problem. Thirdly, we do not observe safe assets with maturities similar to those of climate impacts, so the arbitrage argument cannot be applied.”  IPCC WGIII Chapter 3

This discussion on discount rates is in my opinion very important, since discount rates capture much of the ethical underpinning of our responsibilities to future generations. Discount rates tell us about our time horizons and about how patient we are in waiting for gains. If you are offered money right now or twice as much at a later date, how long are you willing to wait? If the discount rate is 10%, you might be ready to wait about 7 years; if it is 5%, about 14 years. The Stern Review used a rate of 1.4% for climate change damages, in which case you are ready to wait about 50 years. In that case the time horizon is truly inter-generational. As explained by WGIII, how to discount is at the end of the day a normative choice. However, it is a choice whose impact should be openly discussed and a choice that should be reasonably defended. In general I found Chapter 3, “Social, Economic and Ethical Concepts and Methods”, interesting and I have to read it more carefully later. I recommend that the authors of WGIII Chapter 7, “Energy Systems”, also read it.
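The waiting times above follow from simple compounding: a payoff doubles after t years when (1 + r)^t = 2. A minimal sketch of that arithmetic (Python; the rates are the same 10%, 5% and 1.4% used above):

```python
import math

def doubling_time(rate):
    """Years until a payoff doubles at a given annual discount rate:
    solve (1 + rate)**t == 2, i.e. t = ln(2) / ln(1 + rate)."""
    return math.log(2.0) / math.log(1.0 + rate)

for rate in (0.10, 0.05, 0.014):
    print(f"{rate:.1%} discount rate: wait about {doubling_time(rate):.0f} years")
# Prints roughly 7, 14 and 50 years, matching the waiting times above.
```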

WGIII gives the levelized cost of energy for different energy sources in Figure 7.7 of Chapter 7. If you look at Figure 7.7 (below) carefully, you will perhaps notice something funny. In the 4th assessment report from 2007 the costs were given as shown in Figure 4.27 (see copy here). It is not the most beautiful of figures, but it is clear enough.

Figure 7.7 from IPCC WGIII Chapter 7 (2014)

Fig 4.27 from WGIII 2007

It shows the results at two different discount rates, with coal, gas, and nuclear as the lowest cost options. Somebody was clearly not happy with this and wanted to change the figure into Fig. 7.7 of the new report. When I first glanced at the figure I naturally chose to compare “red” bars with red bars and blue ones with blue. After all, we shouldn’t compare apples and oranges. Maybe you did the same? However, I then noticed that the red color assumed “high full load hours”. What does that actually mean? In order to figure it out, one has to read Annex III for the detailed assumptions (how many are going to do that?). For nuclear power “high full load hours” meant a capacity factor of 84%, for onshore wind 40%, and for solar PV 27%. For nuclear power this is a typical capacity factor (although many reactors do better), but for wind and solar power those capacity factors are very atypical. So the figure is constructed in such a way that an uninformed reader is likely to make incorrect comparisons. In fact, WGIII concludes the caption of Fig. 7.7 (it is on the next page and likely to be missed) by saying “Note: The inter-comparability of LCOE is limited. For details on general methodological issues and interpretation see Annexes as mentioned above.” Indeed. Given that comparisons cannot really be made, why was this approach chosen in the first place? If you can come up with a charitable explanation I am all ears, but to me this seems like the authors of Chapter 7 were actively working to make comparisons hard.

How did the authors of Chapter 7 approach the discounting? Let us guess that economic growth in the future is around 2%. In that case the Ramsey rule mentioned by the IPCC in Chapter 3 suggests a discount rate in the range of 2-6%. What discount rate is used in Chapter 7 to compare the levelized cost of energy (LCOE) for different energy sources? That would be 10%! The authors of WGIII decided not only to use a very high discount rate, but also not to give their results at different discount rates so that the effect of this assumption could be observed. Considering that the authors of Chapter 3 specifically emphasized how crucial this issue is in evaluating mitigation policies, the approach in Chapter 7 seems indefensible. At a minimum one would expect them to show results over a broad range of discount rates, but this they decided not to do. Since they refused to do it, I will quickly do it here and see what difference it makes. (Note that some results with a 5% discount rate are hidden in Annex III, but these are only for the high FLH case, so no honest comparison is possible.) In order to make sure that I know what I am doing, I first try to reproduce typical LCOE figures for the WGIII high FLH case. I copy typical numbers from Annex III and this is what I get.

LCOE $(2010)/MWh comparison based on WGIII high FLH case (warning: misleading comparison!):

Technology | LCOE, 10% discount rate, high FLH (IPCC median) | My result
Nuclear | 99 | 97
Coal PC | 78 | 78
Wind onshore | 84 | 85
Solar PV (rooftop) | 220 | 220
CCS-coal-PC | 130 | 123

OK, the numbers are not exactly the same, but close enough for me. I am not sure how WGIII defined the median here. Also, maybe there is some index inconsistency somewhere in the summations… who knows. The basic point is that I can reproduce the WGIII values reasonably well, so I am on the same map as WGIII. We are ready to go! So let me look at the things WGIII decided not to show. I will now compute a typical LCOE for a few technologies at 10%, 5% and 1.4% discount rates. It turns out that as the discount rate is lowered, the LCOE for nuclear power drops from $97/MWh to $62/MWh, and finally to $42/MWh. I will summarize the rest of the results by giving the costs relative to nuclear power. The values colored green are higher than the LCOE of nuclear, while red is lower.

Difference to the cost of nuclear (go right if you prefer responsible long term thinking): 

Technology | 10% discount rate | 5% discount rate | 1.4% discount rate
Nuclear | 0% | 0% | 0%
Coal PC | -18% | +5% | +34%
Wind onshore | +40% | +57% | +77%
Solar PV (rooftop) | +190% | +210% | +230%
CCS-coal-PC | +27% | +63% | +110%
(Main assumptions: Most numbers are copied from Annex III of WGIII; I just list the differences here. I choose the capacity factor for wind power as 25%, which is higher than the European or Chinese average, but somewhat less than the US average. Most of the wind power capacity in the world does worse than this. I choose the wind turbine lifetime as 20 years as opposed to the WGIII value of 25 years, since a 20-year lifetime is given by wind turbine manufacturers. This doesn’t change anything of relevance though. I choose the PV capacity factor as 15%. In good locations the capacity factor can be higher than this, but for example in Germany it is around 10%, so 15% seems fair. I assumed PV capital costs of 3000 $/kW, which is substantially less than the WGIII median value of 4400 $/kW. You can check the calculations and assumptions from these Matlab files: LCOE_IPCC.m, IPCC_Compare.m, and CompareForReal.m. In combination with Annex III, the files should be quite self-explanatory and not too difficult to translate to other number crunching tools.)
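For readers without Matlab, here is a minimal Python sketch of the same levelized-cost arithmetic. The input values below are my own rounded, illustrative assumptions in the spirit of the Annex III ranges, not the exact WGIII figures; the point is only to show how strongly the discount rate enters through the capital recovery factor.

```python
def crf(rate, years):
    """Capital recovery factor: converts an up-front capital cost into an
    equivalent constant annual payment over the plant lifetime."""
    if rate == 0:
        return 1.0 / years
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex_per_kw, fixed_om_per_kw_yr, var_om_per_mwh, fuel_per_mwh,
         capacity_factor, lifetime_yr, rate):
    """Levelized cost of electricity in $/MWh."""
    mwh_per_kw_yr = 8.76 * capacity_factor   # 8760 h/yr, expressed as MWh per kW
    annual_capital = capex_per_kw * crf(rate, lifetime_yr)
    return ((annual_capital + fixed_om_per_kw_yr) / mwh_per_kw_yr
            + var_om_per_mwh + fuel_per_mwh)

# Assumed, rounded inputs (illustrative only, not the Annex III medians):
nuclear = dict(capex_per_kw=4500, fixed_om_per_kw_yr=100, var_om_per_mwh=2,
               fuel_per_mwh=7, capacity_factor=0.84, lifetime_yr=40)
wind = dict(capex_per_kw=2000, fixed_om_per_kw_yr=40, var_om_per_mwh=0,
            fuel_per_mwh=0, capacity_factor=0.25, lifetime_yr=20)

for r in (0.10, 0.05, 0.014):
    n, w = lcoe(rate=r, **nuclear), lcoe(rate=r, **wind)
    print(f"r = {r:.1%}: nuclear {n:.0f} $/MWh, onshore wind {w:.0f} $/MWh, "
          f"wind relative to nuclear {100 * (w / n - 1):+.0f}%")
```

With these illustrative inputs the nuclear LCOE falls from the mid-80s $/MWh at 10% to the low 40s at 1.4%, and wind's premium over nuclear widens as the rate falls, reproducing the pattern (though not the exact numbers) in the tables above.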

As you can see, green dominates, and with the possible exception of hydro power in good locations, nuclear power is the lowest cost zero carbon source of electricity no matter what discount rate is used. At a 10% discount rate it has difficulty competing with coal, but at 5% it becomes cheaper than coal. As the discount rate is lowered, the cost advantage of nuclear relative to other low carbon energy sources increases rapidly. With a 1.4% discount rate and a time horizon extending across generations, nuclear power is cheaper than the other options by a very large margin. These results are based on the WGIII numbers and the only changes are those listed above, mainly to account for differences in capacity factors. We could make the above table all green by adding a carbon price of only around 20 $/tCO2.

Maybe this discussion on the role of discount rates is simply too radical and WGIII is just following conventions? Well, not really. It is certainly not too radical for WGIII, since in its 2011 SRREN report focusing on renewables WGIII gave precisely this type of comparison at 10%, 7%, and 3% discount rates (Fig. 10.29, p. 844 in Chapter 10). Some of its authors were even authors of this report. Of course, nuclear power was purged from the SRREN report at the outset, so results which might give readers funny ideas did not have to be shown. Absurdly, the discussion on discount rates in this context is far more extensive in the SRREN, while in this report it has been brushed aside, contrary to the emphasis by the authors of Chapter 3 of WGIII. We can only speculate as to why.

To me it seems that on this issue the authors of Chapter 7 were working hard to make sure that the uninformed would remain uninformed, while keeping the option of saying to the informed: “We are not lying! We are open about the methodology… see Annex III etc. Yeah, maybe Figure 7.7 is not as clear as it could be. Thanks for the tip! Clear communication is super important and we will keep it in mind for the next assessment report! Blaah blaah blaah…” The IPCC should be an expert body giving accurate, evidence-based material for policy discussions. Sadly, in this case WGIII decided not to give this material and compromised its supposed “policy-neutrality”. In plain English, the authors of Chapter 7 decided not to do their jobs, since doing so would have provided facts suggesting that some mitigation policies are likely to be more effective than others. But this is what they should do! If people decide to brush the cost differences aside, that is their choice, but it is not the role of an expert to fudge figures in such a way that the implications of different policy choices are hidden.

Authors of Chapter 7 did what?

While WGIII messed up the presentation of the costs that we are in a position to know fairly well, it spends a lot of time speculating about long term costs using integrated assessment models. Since we are not able to predict the future of mankind, I do not think that these games are much more than computer generated guesses based on the preferences of whoever is doing the modeling. I think we are better off focusing on issues that we can actually control, at least to some degree. The Economist was also not very impressed by this:

The IPCC still thinks it might be possible to hit the emissions target by tripling, to 80%, the share of low-carbon energy sources, such as solar, wind and nuclear power, used in electricity generation. It reckons this would require investment in such energy to go up by $147 billion a year until 2030 (and for investment in conventional carbon-producing power generation to be cut by $30 billion a year). In total, the panel says, the world could keep carbon concentrations to the requisite level by actions that would reduce annual economic growth by a mere 0.06 percentage points in 2100.

These numbers look preposterous. Germany and Spain have gone further than most in using public subsidies to boost the share of renewable energy (though to nothing like 80%) and their bills have been enormous: 0.6% of GDP a year in Germany and 0.8% in Spain. The costs of emission-reduction measures have routinely proved much higher than expected.

Moreover, the assumptions used to calculate long-term costs in the models are, as Robert Pindyck of the National Bureau of Economic Research, in Cambridge, Massachusetts, put it, “completely made up”. In such circumstances, estimates of the costs and benefits of climate change in 2100 are next to useless. Of the IPCC’s three recent reports, the first two (on the natural science and on adapting to global warming) were valuable. This one isn’t. – The Economist. While I think the report has some interesting things as well, when it comes to cost estimates I tend to agree with The Economist.

Finally, in my opinion the fact that companies use the short time horizons implied by 10% (or higher) discount rates is a clear indication of a market failure. Climate change requires longer term decisions and if such decisions cannot be delivered by current markets, those markets need to change. Either the state with a longer time horizon must become more active or appropriate sticks and carrots should be developed to discourage short term profit taking and promote longer term visions.

Prospects for U.S. Nuclear Power After Fukushima


The chairman of one of the largest U.S. nuclear companies recently said that his company would not break ground on a new nuclear plant in the United States until the price of natural gas was more than double today’s level and carbon emissions cost $25 a ton. This seems to pretty well summarize the current prospects for U.S. nuclear power.

This paper by Lucas W. Davis (Haas School of Business UC Berkeley) is an excellent summary of the US situation as of 2011, and a good source of references for your research on nuclear construction costs. Davis is not attempting to predict the future; he is drawing inferences from the historical data. That is a depressing picture — with the 2011 evidence indicating that US nuclear suppliers have not learned even the French lessons.

Many within the nuclear industry claim that the industry is headed more toward the French model. A chairman of a major nuclear power company recently reported that new reactors would be standardized down to “the carpeting and wallpaper”. However, this claim does not appear to be supported by the license applications that have been received to date. Among the 17 applications that have been received by the NRC, there is a mix of both pressurized water reactors and boiling water reactors, manufactured by five different reactor manufacturers (Areva, Westinghouse, Mitsubishi, GE-Hitachi, and GE). Thus, it may well be the case that the industry will soon coalesce around a very small number of designs, but this is not immediately obvious based on these initial applications. At a minimum it seems clear that the French approach of supporting a single reactor design is not going to be adopted here.

Will China lead the world out of this pit by creating a mass manufacturing supply chain for a manageable number of standard designs?

Reddit AMA grills the UC Berkeley Department of Nuclear Engineering


Members of the UC Berkeley Department of Nuclear Engineering participated in the Reddit.com Science AMA Series, responding to a large number of largely hostile questions. There were lots of variations of “Can I still eat fish from the contaminated Pacific?”. As is typical with these AMA sessions, the signal to noise ratio is low due to the uninformed questions and irrelevant branched threads of discussion by people who are more interested in politics. I “mined” the 1,447 comments for what I thought were fragments worth archiving.

I guess I’ll start things off. What type of reactors should we be building? I know a big deal a few years ago was made about liquid fluoride thorium reactors. Is that the way of the future, or are there superior alternatives?

Prof. Per Peterson replies (emphasis mine):

I do not think that we have the basis to determine or select the best coolant or fuel type to use in future reactors. But there are some attributes which we do need to make sure are used in future reactors.

The first is to use passive safety systems, which do not require electrical power or external cooling sources to function to remove decay heat after reactors shut down, as is the case with the AP-1000 and ESBWR designs, and with all of the light water reactor SMRs now being developed in the U.S.

The benefits of passive safety go well beyond the significant reduction in the number of systems and components needed in reactors and the reduced maintenance requirements. Passive safety systems also greatly simplify the physical protection of reactors, because passive equipment does not require routine inspections the way pumps and motors do, and thus can be placed in locations that are difficult to gain access to rapidly.

The second is to further increase the use of modular fabrication and construction methods in nuclear plants, in particular to use steel-plate/concrete composite construction methods that are quite similar to those developed for modern ship construction. The AP-1000 is the most advanced design in the use of this type of modularization, and the ability to use computer aided manufacturing in the fabrication of these modules makes the manufacturing infrastructure much more flexible. In the longer term, one should be able to design a new reactor building, transfer the design to a module factory over the internet, and have the modules show up at a construction site, so the buildings are, in essence, 3-D printed.

The final attribute that will be important for new reactors will be to make them smaller, and to develop a regulatory framework and business models that work for multi-module power plants. While there will likely always be a market for large reactors, it will be important to create an ecosystem that includes customers for smaller reactors (inland locations served only by rail, installations needing reliable power even if fuel supplies are interrupted, mature electricity markets that need to add new capacity in small increments).

On thorium, a question:

Hello! What do you think is the most important advantage that thorium has over uranium as a “fuel?”

Prof. Per Peterson’s reply

The thorium fuel cycle has clearly attractive features, if it can be developed successfully. I think that most of the skepticism about thorium emerges from questions about the path to develop the necessary reactor and fuel cycle technology, versus open fuel cycles (uranium from seawater) and closed, fast-spectrum uranium cycles.

The most attractive element of the thorium fuel cycle is the ability to operate sustainably using thermal-spectrum neutrons. This allows the design of reactor core structures that use high-temperature ceramic materials like graphite, which have substantial thermal inertia and cannot melt. Because these ceramic materials also provide significant moderation, it is difficult to use them in fast-spectrum reactors and thus the most plausible fast-spectrum reactor designs need to use metallic structural materials in their cores.

So thorium reactors are compatible with higher intrinsic safety (cores which do not suffer structural damage even if greatly overheated) and can deliver heat at higher temperature, which enables more efficient and flexible power conversion.

Molten fluoride salts are compatible with these high-temperature structural materials, and given their very high boiling temperatures make excellent, low pressure heat transfer fluids. In the near term, the largest benefits in using fluoride salts come from the low pressure and high temperature heat they can produce. This can be achieved with solid fuel, which is simpler to work with and to obtain regulatory approvals.

But molten salt technologies also have significant challenges. One of the most important is managing the much larger amounts of tritium that these reactors produce, compared to light water cooled reactors (the quantities are closer to what heavy-water reactors, such as the CANDU, produce, but the methods to control and recover tritium are much different for molten salts than for heavy water, and key elements remain to be demonstrated).

To repeat a critical point: “…largest benefits in using fluoride salts come from the low pressure and high temperature heat they can produce. This can be achieved with solid fuel…”. This summarizes why Prof. Peterson’s lab is focused on developing the PB-AHTR design, which will also prove out many of the materials and technologies required subsequently to implement the more challenging liquid-fuel molten salt reactor concept (such as the LFTR).

Regarding waste: Prof. Peterson was a member of Obama’s Blue Ribbon Commission on America’s Nuclear Future. I consider him one of the best-informed sources regarding Spent Nuclear Fuel (SNF), which the anti-nuclear lobby calls Nuclear Waste. It is not “waste”; it is an extremely valuable source of carbon-free energy.

Q: One of the elephants in the room for nuclear power is the waste….

A: …Finland and Sweden have successfully sited and are building deep geologic repositories in granite, and France is very far along in developing its geologic repository in clay. The U.S. nuclear waste program is currently stopped and is in a state of disarray…

There is a wide range of opinions on this. …Recycling plutonium into fuel for light water reactors (LWRs) is substantially more expensive than making new fuel from uranium, even if the plutonium is free. This is primarily because the plutonium must be handled as an oxide powder to make LWR fuel, and oxide powder is the most hazardous and difficult form to handle plutonium in. All of the Generation IV reactor technologies can use fuel forms that do not involve handling plutonium and minor actinides in the form of powders and that are much easier to fabricate using recycled material (e.g., metal, molten salt, sol-gel particles in either coated particle or vibropacked fuel forms).

In my personal opinion, the most sensible thing to do in the near term is to prioritize U.S. defense wastes for geologic disposal, and to use a combination of consolidated and on-site interim storage for most or all commercial spent fuel. Implementation of the Blue Ribbon Commission’s major recommendations, which include development of consolidated interim storage that would initially be prioritized to store fuel from shut down reactors, would put the U.S. on this path.

By using geologic disposal primarily for defense wastes first, and using primarily dry cask interim storage for commercial spent fuel, this will give a couple of decades for nuclear reactor technology to evolve further, and by then we will be in a better position to determine whether commercial spent fuel is a waste or a resource.

Nuclear innovation: Prof. Peterson replies

There are a number of factors which make innovation difficult in improving nuclear reactor technology, in particular the long operating life of nuclear power plants and their very large capital costs, which dissuade innovation. The trend toward designing larger and larger water-cooled reactors has increased these disincentives.

Given their lower capital cost and shorter construction times, innovation is much easier in small reactors. There will remain a role for large reactors, just as dinosaurs existed for millions of years alongside the new mammal species, but currently some of the most important policy issues for nuclear power involve creating an ecosystem where small reactors find customers. Smaller reactors, produced in larger numbers with most of the fabrication occurring in factories, would also use specialized manufacturing and skilled labor more efficiently. Imagine factories as being similar to airplanes, and the ability to keep more seats filled being really important to having low per-seat prices…

FHR (Fluoride Salt Cooled High Temperature Reactor): Where to take technical risk?

I will answer this question first indirectly, and then more directly.

A key question for innovation in developing new nuclear energy technology is where to take technical risk. SpaceX provides a good example of a highly successful risk management strategy. They focused on developing a highly reliable, relatively small rocket engine, that they tested in the Falcon 1, which uses an ancient rather than innovative fuel combination, kerosene and liquid oxygen. On the other hand, they chose to use aluminum-lithium alloy with friction stir welding for their fuel tanks, which is at the cutting edge of current technology. They have then used the approach of ganging together large numbers of these engines to create the Falcon 9, which is now successfully delivering cargo to the International Space Station.

Currently the most important barrier to deploying nuclear power is not the cost of the fuel, but instead is the capital cost of the plants, the need to assure that they can run with high reliability (which for current large reactor designs creates strong disincentives to innovate), and the relatively low electricity revenues one receives for producing base load power, particularly today in the U.S.

The primary reason that UCB, MIT, and UW, and the Chinese Academy of Sciences, are working on solid fuel, salt cooled reactor technology is because we have the ability to fabricate these fuels, and the technical difficulty of using molten salts is significantly lower when they do not have the very high activity levels associated with fluid fuels. The experience gained with component design, operation, and maintenance with clean salts makes it much easier to consider the subsequent use of liquid fuels, while gaining several key advantages from the ability to operate reactors at low pressure and deliver heat at higher temperature.

Q: Can I also ask what you think the safest way to transport the waste is?

A: Per Peterson: There is a long record of safe transportation of nuclear waste, including spent fuel, world wide. The containers used to transport nuclear wastes are substantially more robust than those used to transport hazardous chemicals and fuels, which is why transportation accidents with chemicals generate significantly more risk.

This said, the transportation of nuclear wastes requires effective regulation, controls, and emergency response capabilities to be in place. The transportation system for the Waste Isolation Pilot Plant in New Mexico has logged over 12 million miles of safe transport, with none of the accidents involving the transportation trucks causing any release of radioactive materials.

One reason it is important to restore WIPP to service (it had an accident involving the release of radioactive material underground in late February, which had minimal surface consequence because the engineered safety systems to filter exhaust air were activated) is because the WIPP transportation system has developed a large base of practical experience and skilled personnel at the state and local levels who are familiar with how to manage nuclear waste transport. This provides a strong foundation for establishing a broader transportation system for commercial spent fuel and defense high level wastes in the future.

A commenter replied to Per’s hecklers, referring to WIPP:

Actually I work for this program and this is an understatement. Not only have there never been any accidents that caused a release of nuclear material, there have never been any accidents with a truck loaded with waste containers, ever. They’ve happened while empty, but never otherwise.

Per Peterson discussed the unpriced carbon emissions externality, which I would say is effectively a tax on nuclear, because nuclear produces nearly zero-carbon energy in competition with coal and gas, which do not pay their carbon externality costs. Per raised a very important issue: how NRC gatekeeping sets up a strong incentive to free-ride on NRC rulings.

But there is another important market failure that affects nuclear energy and is not widely recognized, which is the fact that industry cannot get patents for decisions that the U.S. Nuclear Regulatory Commission makes. For example, there are major regulatory questions that will affect the cost and commercial competitiveness of multi-module SMR plants, such as how many staff will be required in their control rooms. Once the first SMR vendor invests and takes the risk to perform licensing, all other vendors can free-ride on the resulting USNRC decision. This is the principal reason that government subsidies to encourage first movers, such as cost sharing or agreements to purchase power or other services (e.g., irradiation) make societal sense.

Is this being discussed in the US government? I’ve never seen a word about it. This is another example of the sub-optimal results we get from wasting billions on energy-farming production subsidies while rationing a few millions for nuclear R&D. Even America has very limited funds – and needs to spend them very carefully.

CERA: Construction costs for new nuclear plants up over 230% since 2000

UPDATE: I have republished this 2008 post and comments to bring it “up front” with our ongoing discussion of new nuclear construction costs. At the end I’ve incorporated the 2008 comments. 

UPDATE: Per Peterson, Professor and a former chair of the Department of Nuclear Engineering at the University of California, Berkeley, was kind enough to comment on yesterday’s post on the CBO study. Dr. Peterson noted that only about 1% of new nuclear plant construction cost is raw commodity inputs (steel, concrete, copper, etc.). So the theme attributing the rapid cost rises to commodity prices has no basis. Conversely, wind turbine construction/installation requires at least 10x the commodity input per kilowatt, and so is more sensitive to the price and availability of steel, concrete, copper, etc. I cannot accurately summarize in fewer words, so I recommend you read his comments carefully.

Dan Yergin’s Cambridge Energy Research Associates (CERA) maintains the Power Capital Costs Index (PCCI), depicted in the graphic at left, as of May 2008. In brief, the PCCI shows that a power plant that cost $1 billion in 2000 would, on average, cost $2.31 billion in May 2008 [in constant 2000 dollars].

You can infer that the cost of new nuclear plant construction has increased by more than that 230%. As you can see in the PCCI chart, the non-nuclear costs are up 180%. The PCCI is assembled from data on a basket of 30 power generation facilities in North America. I don’t know what percentage of the capital base is nuclear, so I’ll speculate that it’s similar to the current 22% that nuclear contributes to US generation. That implies nuclear construction costs are up about 400% since 2000.
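The back-of-envelope inference can be written out explicitly; a small sketch (the 22% nuclear share of the basket is my speculation, as noted above, not a CERA figure):

```python
# PCCI basket: overall index 2.31 (2000 = 1.00), non-nuclear sub-index 1.80.
# If nuclear is roughly 22% of the basket, solve for the implied nuclear index:
#   0.22 * nuclear_index + 0.78 * 1.80 = 2.31
overall, non_nuclear, nuclear_share = 2.31, 1.80, 0.22
nuclear_index = (overall - (1 - nuclear_share) * non_nuclear) / nuclear_share
print(f"Implied nuclear construction cost index: {nuclear_index:.1f}x 2000 costs")
# About 4.1x, i.e. the "about 400%" figure above.
```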

I may be able to get more background from the CERA Capital Cost Analysis Forum – Power. But I discovered that viewing the replay of the June 6 webconference call required IE6, so I’ll need to fire up a Windows PC to access it.

On factors driving the PCCI increases since 2000, CERA writes:

…Demand for new power generation facilities remains high worldwide, leading to continued tightness in equipment markets. Cost increases, supply issues and longer delivery times are exacerbated as manufacturers struggle to keep up with demand. The weakening U.S. dollar also increases the costs of global procurement for equipment and materials.

The number of engineers in the workforce is also declining as older workers retire and are not being replaced. The resulting shortages in plant design teams add additional delays to construction schedules. The current increase in construction for nuclear power generation and the dearth of experienced nuclear engineers in North America has been a key driver behind cost escalation.

Recent cancellations of proposed coal plants in the United States due to uncertainty over environmental regulations has provided some slowing in cost increases in the U.S. coal industry. However, international competition for coal boilers, particularly in Southeast Asia, is keeping the equipment order books very active.

Concerns over a looming U.S. recession and subsequent cut backs in residential construction have offered little relaxation to power construction. The residential slump does not free up the skilled workers required in the power industry and there is no overlap of the specialist metals and equipment required.


Upstream Capital Cost Index (UCCI) Courtesy IHS

I wonder if we are looking at market reactions to an impulse in demand. In the short run [say 5 years] the supply of new nuclear plants is inelastic. Demand has increased considerably beyond expectations, so equilibrium is only achieved by higher prices. We are seeing similar supply/demand responses in several energy sectors. The headlines hammer on oil prices. Note that the UCCI is only 10% less than the PCCI.

The UCCI is based upon a portfolio of 28 upstream oil and gas projects, so it represents the overnight cost of investment in both oil & gas field development and transportation. It may include finding costs, but I’m not sure. I do know that the cost per barrel-equivalent of finding + development costs has been increasing about as fast as oil companies have been able to ramp up their investments. The net result so far is no increase in reserve-additions, which are still lagging depletion.

2 thoughts on “CERA: Construction costs for new nuclear plants up over 230% since 2000”

  1. Paul on December 4, 2008 at 1:44 pm said:

“only about 1% of new nuclear plant construction cost is construction materials” – sorry, I don’t think so. More like 30% at least.

  1. Steve Darden on December 4, 2008 at 7:02 pm said:

More like 30% at least.

Paul, thanks heaps for your comments. Here’s the relevant part of Dr. Peterson’s comment on commodity inputs [he gives the references as well]:

While it is widely understood that nuclear energy costs have quite low sensitivity to the cost of uranium, it is not widely appreciated that the same applies to construction materials. If one takes the total quantity of steel, concrete, copper, and other materials required to build a light water reactor similar to those operating today [1], and then multiplies these quantities by the respective current commodity prices, the total contribution of commodity inputs is $36 per kilowatt of generation capacity [2], out of total construction prices that are estimated today to range from $3000 to $5000 per kilowatt. The dominant cost of nuclear construction is instead in the added value that comes from converting these commodities into an operational nuclear power plant. Conversely, wind turbines require approximately a factor of 10 times as much steel and concrete to construct without considering storage capacity [3], and thus have construction costs that are sensitive to commodity costs and to potential future resource scarcity.

So he gave a range of 36/5000 to 36/3000, or about 0.7% to 1.2%.

Can you educate us on the construction cost buildup – also on why quotes have gone up so much since 2000?

CBO Study: Nuclear Power’s Role in Generating Electricity

UPDATE: I have republished this 2008 post and comments to bring it “up front” with our ongoing discussion of new nuclear construction costs. At the end I’ve incorporated the 2008 comments.

I’ve been re-reading the CBO study from May 2008. This is probably the most current objective analysis of base load electrical generation options. Given the CBO levelized costing assumptions, it appears that electric utilities will choose natural gas over 3rd generation nuclear unless they anticipate a carbon tax (or equivalent) of more than $45/ton CO2:

Adding a carbon dioxide charge of about $45 per metric ton to the levelized cost estimates in the reference scenario would make nuclear power the least expensive source of additional base-load capacity (see the left panel of Figure 3-2). Up to that threshold, at all but the lowest level of charges, conventional natural gas technology would probably be the least costly option. Because coal is more carbon-intense than natural gas, the cost advantage of new capacity based on natural gas technology would grow in relation to coal technology as carbon dioxide charges increased; but the advantage that natural gas technology enjoyed over nuclear technology would shrink and eventually disappear as emission charges reached about $45 per metric ton. Thereafter, the levelized cost advantage of nuclear technology over conventional gas technology would grow. Although carbon dioxide charges would not change the cost of nuclear power plants at all, they would increase the cost of innovative fossil-fuel alternatives; as a result, the cost advantage that nuclear technology held over those technologies would increase with carbon dioxide charges but at a slower rate than that observed with conventional fossil-fuel technologies.
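To see roughly how that $45-per-ton crossover arises, note that a carbon dioxide charge simply adds (charge × emission intensity) to each technology's levelized cost. Here is a sketch with assumed, illustrative costs and intensities (these are not the CBO's inputs; only the structure of the comparison matters):

```python
# Assumed, illustrative levelized costs with no CO2 charge ($/MWh) and
# emission intensities (metric tons of CO2 per MWh) -- not the CBO's inputs.
techs = {
    "coal":             (52.0, 0.95),
    "conventional gas": (54.0, 0.40),
    "nuclear":          (72.0, 0.00),
}

def cost_with_charge(base, intensity, charge):
    """A CO2 charge adds charge * intensity to the levelized cost."""
    return base + intensity * charge

for charge in (0, 20, 40, 60):
    costs = {name: cost_with_charge(b, i, charge) for name, (b, i) in techs.items()}
    cheapest = min(costs, key=costs.get)
    summary = ", ".join(f"{name} {c:.0f}" for name, c in costs.items())
    print(f"${charge:>2}/t CO2: {summary}  -> cheapest: {cheapest}")

# The gas/nuclear crossover: the charge at which gas + 0.40*charge equals nuclear.
crossover = (techs["nuclear"][0] - techs["conventional gas"][0]) / techs["conventional gas"][1]
print(f"Gas vs. nuclear crossover at about ${crossover:.0f} per metric ton of CO2")
```

With these made-up numbers coal is cheapest with no charge, gas takes over at modest charges, and nuclear wins once the charge passes about $45 per ton, mirroring the CBO's qualitative result.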

We know that construction costs for all types of generation have been going up rapidly with the increasing costs of steel, concrete, etc. Nuclear is the most sensitive to construction costs, simply because nuclear fuel costs are negligible [conversely, nuclear is insensitive to future fuel cost rises, while natural gas is extremely sensitive]. Here are the relative sensitivities to lower or higher construction costs, again in levelized 2006 dollars per megawatt hour:

The CBO study of course has to stick with already-built or on-order nuclear technology. But this may lead to drawing the wrong conclusions. Remember how much autos cost when each one was custom built? And the lousy quality?

That’s our experience of nuclear construction — custom design, custom built, custom approvals. But, given certainty of future CO2 charges, I believe that a competitive market will transform nuclear generation into a mass produced, modular product — and that costs will come down dramatically compared to alternatives.

We don’t know what future innovations will emerge, but as of today, the modular pebble-bed reactor [PBMR] technology looks very promising. Key advantages are safety by design (even chimps as operators can’t cause a nuclear accident), no proliferation worries, and perhaps most important – the design is MODULAR. That means industrial-scale mass production is possible, with all the attendant benefits. One of the most important benefits is the slashing of the financial risk of regulatory delays before a new plant is allowed to start up.

For more background on the Modular Pebble-bed design, see MIT’s study “The Future of Nuclear Power” [1], MIT prof. Andrew C. Kadak’s presentation “What Will it Take to Revive Nuclear Energy?” [PDF] [2], and his Pebble-bed presentation [PDF] [2a]. China is placing big bets here, see Wired’s “Let a Thousand Reactors Bloom” [3].

10 thoughts on “CBO Study: Nuclear Power’s Role in Generating Electricity”

  1. Rod Adams on August 26, 2008 at 8:06 pm said:

Steve:

It is always important to check the assumptions. The CBO study that you pointed to, though completed in 2008, apparently used a fuel price table that stopped with 2005 fuel prices. It thus assumed a base case of natural gas costing about $5.00 per million BTU.

Since the cost of fuel is about 93% of the levelized cost of electricity from a natural gas fired power plant, underestimating the cost of gas would tend to sway the computed decision in the wrong direction compared to less fuel intensive alternatives like nuclear power.

Nuclear can compete without a carbon tax against gas at current market prices – which are about $8.50 per million BTU and have been as high as $13 in the recent past and may get there again with a cold winter.

Luckily for gas buyers, it has been a fairly mild summer.
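To make the fuel-price sensitivity in Rod's comment concrete, here is a rough sketch. The roughly 7 MMBtu/MWh heat rate and the non-fuel cost below are my assumptions for a modern combined-cycle plant, not figures from the CBO study or the comment; the exact fuel share depends on them, but the point that the gas price dominates the levelized cost comes through either way:

```python
def gas_fuel_cost_per_mwh(gas_price_per_mmbtu, heat_rate_mmbtu_per_mwh=7.0):
    """Fuel component of a gas plant's levelized cost, in $/MWh.
    A heat rate of about 7 MMBtu/MWh is assumed (modern combined cycle)."""
    return gas_price_per_mmbtu * heat_rate_mmbtu_per_mwh

non_fuel = 15.0  # assumed capital + O&M component, $/MWh
for gas_price in (5.0, 8.5, 13.0):
    fuel = gas_fuel_cost_per_mwh(gas_price)
    total = fuel + non_fuel
    print(f"gas at ${gas_price:>4}/MMBtu: fuel {fuel:.0f} of {total:.0f} $/MWh "
          f"({100 * fuel / total:.0f}% of levelized cost)")
```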

  1. Steve Darden on August 26, 2008 at 9:10 pm said:

Rod – thanks for the data update. Does the increase in construction costs since the timestamp on the report data offset the underestimated natural gas prices? I.e., gas operating costs up, nuclear plant construction costs up.

I added PBMR to this post – since folks search for this acronym.

  1. Per Peterson on August 27, 2008 at 10:48 am said:

While it is widely understood that nuclear energy costs have quite low sensitivity to the cost of uranium, it is not widely appreciated that the same applies to construction materials. If one takes the total quantity of steel, concrete, copper, and other materials required to build a light water reactor similar to those operating today [1], and then multiplies these quantities by the respective current commodity prices, the total contribution of commodity inputs is $36 per kilowatt of generation capacity [2], out of total construction prices that are estimated today to range from $3000 to $5000 per kilowatt. The dominant cost of nuclear construction is instead in the added value that comes from converting these commodities into an operational nuclear power plant. Conversely, wind turbines require approximately a factor of 10 times as much steel and concrete to construct without considering storage capacity [3], and thus have construction costs that are sensitive to commodity costs and to potential future resource scarcity.

Right now demand for new reactors is clearly outstripping supply. While this current supply chain inelasticity will ease in 5 to 10 years, inelasticity in supply always results in higher prices. Thus we can expect nuclear construction prices to drop over the coming decade, but the main question is by how much. While it will never get down to the $36/kW cost of the commodity inputs, there is still potential that prices could drop greatly from the current values if modular construction and factory-based computer aided manufacturing are applied more broadly in the construction.

References:

  1. From R.H. Bryan and I.T. Dudley, Oak Ridge National Laboratory, TM-4515, June 1974, current pressurized water reactors use 32.7 t of steel, 75 m3 of concrete, 0.69 t of copper, and smaller amounts of other materials per megawatt of capacity
  2. On March 25, 2008, the commodity prices of steel, concrete, and copper (which constitute 90% of the total commodities costs for a nuclear plant) were $601/t, $98/m3, and $7,634/t respectively.
  3. Wind requires 115 MT of steel and 218 m3 of concrete per megawatt, but has higher commodity input per unit of electricity generated due to a lower capacity factor (~25%) compared to nuclear (~90%), S. Pacca and A. Horvath, Environ. Sci. Technol., 36, 3194-3200 (2002).
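A quick arithmetic check of the $36/kW figure, using the per-megawatt quantities in reference 1, the March 2008 prices in reference 2, and reference 2's note that these three commodities are about 90% of the total commodity cost:

```python
# Quantities per MW of capacity (reference 1) and commodity prices (reference 2).
steel_t, concrete_m3, copper_t = 32.7, 75.0, 0.69
steel_price, concrete_price, copper_price = 601.0, 98.0, 7634.0  # $/t, $/m3, $/t

big_three = (steel_t * steel_price + concrete_m3 * concrete_price
             + copper_t * copper_price)   # $ per MW for steel, concrete and copper
total_per_mw = big_three / 0.90           # the three are ~90% of total commodity cost
print(f"Commodity inputs: about ${total_per_mw / 1000:.0f} per kW of capacity")
# About $36/kW, i.e. roughly 1% of a $3000-5000/kW construction cost.
```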

    1. Rod Adams on August 27, 2008 at 2:26 pm said:

The interesting thing about the numbers that are being bandied about with regard to nuclear construction costs is that they also include rather substantial allowances for risk premiums, interest costs, and inflation uncertainties.

Those costs can represent half of the final delivered price computation.

  1. Steve Darden on August 27, 2008 at 5:10 pm said:

Dr. Peterson,

Thanks for taking the time to set us straight on the material inputs. 1% means nuclear plant costs are highly insensitive to that component. The CBO study bypassed the contributions to cost increases in their sensitivity analysis – simply assuming -50%, +100%.

Today I wrote a related post on the CERA index of power plant construction. Back of the envelope, assuming 22% of CERA’s basket of 30 plants are nuclear, I drew the inference that nuclear plant construction costs have increased around 400% since 2000. Versus the PCCI average of 230% across all modes of generation.

Similar to your comments, CERA attributes the increases to the surge in demand and the “dearth of experienced nuclear engineers in North America.”

CERA is tracking similar (210%) increases in the cost of upstream oil & gas projects – the UCCI having a similar 2005 takeoff. Much more depth on energy demand over-running supply can be found in the really excellent CIEP study “Oil turbulence in the next decade – An Essay on High Oil Prices in a Supply-constrained World”, Jan-Hein Jesse and Coby van der Linde, Clingendael International Energy Programme. They conclude that the next decade or so will see high volatility in oil markets – oscillating between marginal cost and user value.

Please advise if you have any references to recommend on the potential for nuclear costs to drop in an industry transformation to “mass production”, relatively speaking, of modular reactor components. Presumably, such standardized components would be pre-certified, so that on site certification would be reduced to a process more like inspections of other industrial facilities?

  1. Steve Darden on August 27, 2008 at 10:31 pm said:

Rod,

Well, it’s interesting that the CERA index explicitly doesn’t include risk premiums, or owner’s cost. It probably includes construction period interest. If my estimates of their basket are close it indicates a 2000 to 2008 Q1 cost increase of around 400% for nuclear and about 180% for non-nuclear.

I haven’t found a source to build up that figure from first principles – so I can’t confirm the PCCI index. I sat through the one hour CERA web-conference presentation of June 6 – hoping to learn the details. They do have a nuclear index, but didn’t present it. It is part of the distribution package sent to members.

Cheers, Steve

  1. JimHopf on August 28, 2008 at 4:51 pm said:

I’d just like to add a bit to what Rod said earlier. Not only does the CBO study assume a natural gas price of $5 (or $6?) per MBTU, which is lower than the price even today, but they assume that it will remain at $5/6 even if we use gas for all new power plants (and possibly also replace existing coal plants with gas plants to meet CO2 reduction requirements). In other words, they assume that the price will remain fixed at a (low) value of $5/6, no matter how high the demand for gas gets!

They simply state that for CO2 prices between $6 and $45 per ton, gas will be the cheapest source, thereby implying that it will be chosen for all new generation. They ignore all feedback effects. In the real world, as more and more gas is chosen, the price of gas goes up until the price advantage disappears. In fact, the real truth is that, for baseload power, gas will not be an option, as it will be the most, not the least, expensive in the future (even w/ little or no CO2 price), since future gas costs will be way above $6. For that reason, utility executives are not even really thinking about gas as a future baseload option. There simply is not enough gas to go around to waste it on something like baseload generation. The choice will be between nuclear and coal.

The real question is what CO2 price is required to make nuclear cheaper than coal. This price is about $20 to $25 per ton of CO2.
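Jim's $20 to $25 figure follows from the same crossover arithmetic sketched under the CBO quote above: the breakeven charge is the nuclear-coal cost gap divided by coal's emission intensity. A one-line check with assumed, illustrative numbers (not values from the comment or the CBO study):

```python
# Assumed, illustrative values -- not from the CBO study or the comment above.
lcoe_nuclear = 75.0    # $/MWh
lcoe_coal = 55.0       # $/MWh
coal_intensity = 0.95  # metric tons of CO2 per MWh (nuclear taken as ~0)

breakeven = (lcoe_nuclear - lcoe_coal) / coal_intensity
print(f"Breakeven CO2 price: about ${breakeven:.0f} per ton")  # ~ $21/ton
```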

  1. Steve Darden on August 28, 2008 at 6:51 pm said:

Jim,

Thanks – I agree with all your points.

This price is about $20 to $25 per ton of CO2.

Doesn’t that depend on the capital cost? At 2005 CAPEX I thought $25 per ton CO2 would do it. At 4 x 2005 costs?

I’m confident new plant costs will come down. I’m optimistic that in a decade, in constant dollars, costs per MW will be lower than the 2005 CERA index.

But what do utility execs believe are the levelized costs?

  1. JimHopf on August 28, 2008 at 11:20 pm said:

Steve,

Well, the capital cost of coal plants has also gone up since then, as well as the price of coal itself (which has almost doubled). That said, the price of nuclear has gone up even more, if some of the latest estimates are to be believed. Thus, it could be that it would require ~$30 or more (but only for the first set of plants).

Of course, under any cap-and-trade system with hard (declining) CO2 limits, the CO2 price will rise to whatever it has to be to make nuclear cheaper than coal (given that the renewables contribution is limited by intermittency, and that both gas and coal w/ CO2 sequestration will be more expensive than nuclear).

  1. Steve Darden on August 29, 2008 at 3:31 pm said:

Thanks Jim – two important concepts in your comments

(1) but only for the first set of plants – because once deployment gets well underway the capital costs will come down. Probably operating costs as well.

(2) the CO2 price will rise to whatever it has to be to make nuclear cheaper than coal – because that is the new equilibrium, as existing coal utilities bid up permits until it becomes cheaper to build replacement nuclear than to keep paying for permits.

Regarding (2) I still prefer a revenue-neutral carbon tax over cap-and-trade. Most importantly because it gives utilities a predictable and stable future cost environment. Secondly, because it prevents government from getting its hands on a new revenue stream, while avoiding a rich growth medium for corruption and complexity.

What’s your view on that choice?

PS – I just finished a post on “Greens make the case for nuclear power”.


Why did nuclear plant construction costs quadruple from 1972 to 1988?

The short answer is Greenpeace and their cronies such as Friends of the Earth (FOE):

A major source of cost escalation in some plants was delays caused by opposition from well-organized “intervenor” groups that took advantage of hearings and legal strategies to delay construction. The Shoreham plant on Long Island was delayed for 3 years by intervenors who turned the hearings for a construction permit into a circus. The intervenors included a total imposter claiming to be an expert with a Ph.D. and an M.D. There were endless days of reading aloud from newspaper and magazine articles, interminable “cross examination” with no relevance to the issuance of a construction permit, and an imaginative variety of other devices to delay the proceedings and attract media attention.

That quote is from Chapter 9, “COSTS OF NUCLEAR POWER PLANTS — WHAT WENT WRONG?”, of the online version of the book The Nuclear Energy Option by physicist Bernard L. Cohen, University of Pittsburgh. The book was published by Plenum Press in 1990, so it is slightly dated with respect to recent developments such as modular mass-manufactured reactors (SMRs). Other than that it is a terrific resource: a concise handbook that covers all the high priority questions about nuclear power [risk/safety, radiation, costs, nuclear “waste”, proliferation].

Prof. Cohen was there, on the scene so to speak, during the 1970s and 1980s when Regulatory Turbulence, Regulatory Ratcheting and Intervenors quadrupled the cost of a nuclear power plant. Here’s an excerpt from Chapter 9 covering Regulatory Ratcheting and Regulatory Turbulence:

The Nuclear Regulatory Commission (NRC) and its predecessor, the Atomic Energy Commission Office of Regulation, as parts of the United States Government, must be responsive to public concern. Starting in the early 1970s, the public grew concerned about the safety of nuclear power plants: the NRC therefore responded in the only way it could, by tightening regulations and requirements for safety equipment.

Make no mistake about it, you can always improve safety by spending more money. Even with our personal automobiles, there is no end to what we can spend for safety — larger and heavier cars, blowout-proof tires, air bags, passive safety restraints, rear window wipers and defrosters, fog lights, more shock-absorbent bumpers, antilock brakes, and so on. In our homes we can spend large sums on fireproofing, sprinkler systems, and smoke alarms, to cite only the fire protection aspect of household safety. Nuclear power plants are much more complex than homes or automobiles, leaving innumerable options for spending money to improve safety. In response to escalating public concern, the NRC began implementing some of these options in the early 1970s, and quickened the pace after the Three Mile Island accident.

This process came to be known as “ratcheting.” Like a ratchet wrench which is moved back and forth but always tightens and never loosens a bolt, the regulatory requirements were constantly tightened, requiring additional equipment and construction labor and materials. According to one study,4 between the early and late 1970s, regulatory requirements increased the quantity of steel needed in a power plant of equivalent electrical output by 41%, the amount of concrete by 27%, the lineal footage of piping by 50%, and the length of electrical cable by 36%. The NRC did not withdraw requirements made in the early days on the basis of minimal experience when later experience demonstrated that they were unnecessarily stringent. Regulations were only tightened, never loosened. The ratcheting policy was consistently followed.

In its regulatory ratcheting activities, the NRC paid some attention to cost effectiveness, attempting to balance safety benefits against cost increases. However, NRC personnel privately concede that their cost estimates were very crude, and more often than not unrealistically low. Estimating costs of tasks never before undertaken is, at best, a difficult and inexact art.

(…)

Clearly, the regulatory ratcheting was driven not by new scientific or technological information, but by public concern and the political pressure it generated. Changing regulations as new information becomes available is a normal process, but it would normally work both ways. The ratcheting effect, only making changes in one direction, was an abnormal aspect of regulatory practice unjustified from a scientific point of view. It was a strictly political phenomenon that quadrupled the cost of nuclear power plants, and thereby caused no new plants to be ordered and dozens of partially constructed plants to be abandoned.

Regulatory Turbulence

We now return to the question of wildly escalating labor costs for construction of nuclear plants. They were not all directly the result of regulatory ratcheting, as may be seen from the fact that they did not occur in the “best experience” projects. Regulatory ratcheting applied to new plants about to be designed is one thing, but this ratcheting applied to plants under construction caused much more serious problems. As new regulations were issued, designs had to be modified to incorporate them. We refer to effects of these regulatory changes made during the course of construction as “regulatory turbulence,” and the reason for that name will soon become evident.

As anyone who has tried to make major alterations in the design of his house while it was under construction can testify, making these changes is a very time-consuming and expensive practice, much more expensive than if they had been incorporated in the original design. In nuclear power plant construction, there were situations where the walls of a building were already in place when new regulations appeared requiring substantial amounts of new equipment to be included inside them. In some cases this proved to be nearly impossible, and in most cases it required a great deal of extra expense for engineering and repositioning of equipment, piping, and cables that had already been installed. In some cases it even required chipping out concrete that had already been poured, which is an extremely expensive proposition.

Constructors, in attempting to avoid such situations, often included features that were not required in an effort to anticipate rule changes that never materialized. This also added to the cost. There has always been a time-honored tradition in the construction industry of on-the-spot innovation to solve unanticipated problems; the object is to get things done. The supercharged regulatory environment squelched this completely, seriously hurting the morale of construction crews. For example, in the course of many design changes, miscalculations might cause two pipes to interfere with one another, or a pipe might interfere with a valve. Normally a construction supervisor would move the pipe or valve a few inches, but that became a serious rule violation. He now had to check with the engineering group at the home office, and they must feed the change into their computer programs for analyzing vibrations and resistance to earthquakes. It might take many hours for approval, and in the meanwhile, pipefitters and welders had to stand around with nothing to do.

Requiring elaborate inspections and quality control checks on every operation frequently held up progress. If an inspector needed extra time on one job, he was delayed in getting to another. Again, craft labor was forced to stand around waiting. In such situations, it sometimes pays to hire extra inspectors, who then have nothing to do most of the time. I cannot judge whether all of these new safety procedures were justifiable as safety improvements, but there was a widespread feeling among those involved in implementing them that they were not. Cynicism became rampant and morale sagged.

Prof. Cohen goes on to document the history of how Greenpeace and friends managed to destroy the Shoreham, Long Island plant — which was eventually sold to NY state for $1.


But the worst delay came after the Shoreham plant was completed. The NRC requires emergency planning exercises for evacuation of the nearby population in the event of certain types of accidents. The utility provides a system of warning horns and generally plans the logistics, but it is necessary to obtain cooperation from the local police and other civil authorities. Officials in Suffolk County, where Shoreham is located, refused to cooperate in these exercises, making it impossible to fulfill the NRC requirement. After years of delay, the NRC changed its position and ruled that in the event of an actual accident, the police and civil authorities would surely cooperate. It therefore finally issued an operating license. By this time the situation had become a political football, with the governor of New York deeply involved. He apparently decided that it was politically expedient to give in to the opponents of the plant. The state of New York therefore offered to “buy” the plant from the utility for $1 and dismantle it, with the utility receiving enough money from various tax savings to compensate for its construction expenditures. This means that the bill would effectively be footed by U.S. taxpayers. As of this writing, there are moves in Congress to prevent this. The ironic part of the story is that Long Island very badly needs the electricity the Shoreham plant can produce.

Government’s role in shutting down the US nuclear industry

A November 15, 2007 Heritage backgrounder “Competitive Nuclear Energy Investment: Avoiding Past Policy Mistakes” provides a brief history of anti-nuclear activists and regulatory turbulence, counseling that, this time around, we must act to avoid those enormous costs.

Amory Lovins loves to say “there are no private investors interested in nuclear power”. That is manifestly untrue. But the fact that utilities and venture capitalists are investing in nuclear today is a miracle considering the massacre experienced by investors in the period 1970 through 1994 (when Clinton killed the Integral Fast Reactor). Excerpts from the Heritage true history:

(…) Investors hesitate to embrace nuclear power fully, despite significant regulatory relief and economic incentives.

This reluctance is not due to any inherent flaw in the economics of nuclear power or some unavoidable risk. Instead, investors are reacting to the historic role that federal, state, and local governments have played both in encouraging growth in the industry and in bringing on its demise. Investors doubt that federal, state, and local governments will allow nuclear energy to flourish in the long term. They have already lost billions of dollars because of bad public policy.

The United States once led the world in commercial nuclear technology. Indeed, the world's leading nuclear companies continue to rely on American technologies. However, in the 1970s and 1980s, federal, state, and local governments nearly regulated the U.S. commercial nuclear industry out of existence. U.S. companies responded by reallocating their assets, consolidating or selling their commercial nuclear capabilities to foreign companies in pro-nuclear countries.

This paper reviews how overregulation largely destroyed the nuclear industry and why it remains an obstacle to investment in the industry. This dynamic must be understood and mitigated before the true economics of nuclear power can be harnessed for the benefit of the American people.

(…) Investors are right to be wary. Anti-nuclear activists have already exploited the authority of public institutions to strangle the industry. Now these same public institutions must be trusted to craft good public policy that reestablishes the confidence necessary to invite investment back into America's nuclear industry. To be successful, the new policies must create an industry that does not depend on the government. They must mitigate the risks of overregulation but allow for adequate oversight while preventing activists from hijacking the regulatory process.

(…) Activists Gone Wild

Anti-nuclear groups used both legal intervention and civil disobedience to impede construction of new nuclear power plants and hamper the operations of existing units. They legally challenged 73 percent of the nuclear license applications filed between 1970 and 1972 and formed a group called Consolidated National Interveners for the specific purpose of disrupting hearings of the Atomic Energy Commission.

Much of the anti-nuclear litigation of the 1970s was encouraged by factions within the government.[4] Today, activist organizations determined to force the closure of nuclear power plants, such as Mothers for Peace, continue to use the legal process to harass the nuclear energy industry.

Activists went well beyond simply challenging nuclear power in the courts. On numerous occasions, demonstrators occupied construction sites, causing delays. For instance, in May 1977, the Clamshell Alliance led a protest that resulted in the arrest of more than 1,400 people for trespassing at the Seabrook plant site in New Hampshire.[5] In California, the Abalone Alliance adopted similar tactics and frequently blocked the gates of the Diablo Canyon power plant.[6]

A watershed victory for the anti-nuclear movement occurred in 1971 when a federal appeals court ruled that the construction and operating permits for a nuclear power plant violated the National Environmental Policy Act of 1969. As a result, utilities were required to hold public hearings before obtaining a permit to start a project.[7] This decision created a major opening in the process that anti-nuclear activists could exploit.

Changing the Economics of Nuclear Power

(…) In addition, the role of the judiciary cannot be overemphasized. Congress's loss of enthusiasm for nuclear energy led to more aggressive regulation, and because jurisdiction over nuclear issues was divided among multiple committees, there was no unified congressional direction. The result was an expansion of costly and often unnecessary rules.

In June 2006, the NRC listed over 80 sources of regulation,[8] including over 1,300 pages of laws, treaties, statutes, authorizations, executive orders, and other documents.(…) Because the interpretation of NRC regulations was left to the discretion of individual NRC technical reviewers, each license application would often result in its own unique requirements.[9]

(…) This inconsistency increased costs, further souring Congress on nuclear power and leading to an endless spiral of legislation, regulation, and still more added costs. Between 1975 and 1983, 430 suits were brought against the NRC, leading to 2,349 proposed rules and regulations–each of which required an industry response.[10] The additional and unexpected controls created industry-wide uncertainty and raised questions about the long-term economics of nuclear power. They also drove up capital costs.[11]

This was all done by the NRC without adequate information. The NRC recognized as early as 1974 that it was issuing regulations without sufficient risk assessment training or cost considerations. It did not even have a program to train employees in how to conduct a review using NRC guidance.[12] Yet the commission continued to issue regulation after regulation.

(…) The shifting regulatory environment gave rise to additional reviews from numerous public institutions.(…) between 1956 and 1979, the average construction permit review time increased fourfold. The average time required to bring a plant on line from the order date increased from three years to 13 years during a similar time period.[15]

(…) As more inspections and inspectors were required, delays often resulted from inadequate regulatory manpower. Workers had to spend inordinate amounts of time waiting for inspections rather than building the project. The oft-changing construction specifications also led to mistakes, which created further delays. Even after construction was complete, delays often continued. Delaying plant completion could cost up to $1 million per day.[17] Stories of costly and unnecessary delays litter the history of U.S. nuclear power. Plants such as the Shoreham nuclear plant on Long Island were completely built but never used because extremists succeeded in scaring the public and political leaders.

Overregulation Leads to a Declining Industry

Overall, regulation increased the cost of constructing a nuclear power plant fourfold.[19] Such cost escalation would have been justified if it had been rooted in scientific and technical analysis. Regrettably, it was largely a function of anti-nuclear activism, agenda-driven politicians, activist regulators, and unsubstantiated public fear. A total of $70 billion was added to the cost of nuclear reactors constructed by 1988, and this cost was passed on to the ratepayers. After 1981, the cost of constructing a nuclear power plant rose from two to six times,[20] which means that either consumers paid significantly more or utilities incurred losses if they did not charge market prices. Neither circumstance was sustainable.

(…) In total, $30 billion was spent on nuclear plants that were never completed,[26] which is more than the value of most of the companies that are considering new plant orders.

 

How Taxes Pervert our Energy Choices

In 2009, nuclear engineer Joseph Somsel examined some of the US tax code provisions that favor building wind rather than nuclear power. This was originally published in American Thinker.

(…snip…) the current code allows what’s called accelerated depreciation so that [plant owners] can recover the capital costs earlier in the asset’s life rather than later. Like cash and lottery payouts, a tax deduction today is worth more than one 20 years from now, so we can see how Congress views competing electrical generation sources by how quickly it allows the write-offs to occur.

For wind farms, the current code allows the write-offs over 3.5 years, a real boon for investors in windmill projects. In fact, many such projects depend on this tax advantage to secure financing, especially since the right to take these deductions can be allocated with some freedom amongst the project’s investors and the developers.

Alas, for nuclear power plants, the tax picture is not so rosy.  They have to take their write-offs over 20.5 years, a significant disadvantage over a comparable investment in a wind project.  Taking a hypothetical $5 billion in generation investment in each technology, here’s a chart showing when those deductions could be taken and for how much:

[Chart: depreciation deductions by year for a $5 billion investment, wind (3.5-year write-off) vs. nuclear (20.5-year write-off)]

From this chart, it is easy to see that the investors in a wind project get to write-off a LOT more money a LOT sooner than the investors in a nuclear plant.  This is greatly to the advantage of the wind developers.  At a 35% corporate tax rate, the difference in Year 2 alone is over $650 million in bottom line after-tax profits to the wind investors – that’s cash money that can cut dividend checks.  Maybe now you can see why T. Boone Pickens is pushing wind farms.
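The timing effect Somsel describes can be sketched in a few lines. The schedules below are simple straight-line write-offs over the 3.5- and 20.5-year recovery periods he cites, and the 8% discount rate is my assumption; real MACRS schedules are front-loaded, so these year-by-year numbers are illustrative and will not match his chart exactly.

```python
# Rough sketch of the depreciation tax shields for a $5B wind project
# (3.5-year write-off) vs. a $5B nuclear plant (20.5-year write-off).
# Straight-line schedules and an 8% discount rate are assumptions for
# illustration; real MACRS schedules are front-loaded, which widens
# the gap in wind's favor even further.

INVESTMENT = 5e9   # dollars of capital investment
TAX_RATE = 0.35    # corporate tax rate used in the article
DISCOUNT = 0.08    # assumed cost of capital (not from the article)

def straight_line(investment, years):
    """Yearly deductions, with any fractional final year prorated."""
    per_year = investment / years
    schedule = [per_year] * int(years)
    if years % 1:
        schedule.append(per_year * (years % 1))
    return schedule

def present_value(cash_flows, rate):
    """Discount a list of end-of-year cash flows back to today."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

wind_shields = [d * TAX_RATE for d in straight_line(INVESTMENT, 3.5)]
nuke_shields = [d * TAX_RATE for d in straight_line(INVESTMENT, 20.5)]

print(f"Year-2 tax shield, wind:    ${wind_shields[1] / 1e6:,.0f}M")
print(f"Year-2 tax shield, nuclear: ${nuke_shields[1] / 1e6:,.0f}M")
print(f"PV of all shields, wind:    ${present_value(wind_shields, DISCOUNT) / 1e9:.2f}B")
print(f"PV of all shields, nuclear: ${present_value(nuke_shields, DISCOUNT) / 1e9:.2f}B")
```

Even under these simplified assumptions, the wind investors collect their entire tax shield within four years while the nuclear investors wait two decades, which is the distortion the chart illustrates.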

Let’s take the figures from the Department of Energy’s Energy Information Administration (EIA) for capital costs and productive experience (“capacity factor”) to see exactly what this means in terms of electrical production.  Let’s assume an equal “overnight” investment of $5 billion in windmills and $5 billion in nuclear power plants.  That will buy you about 1.5 gigawatts of nuclear capacity and 2.6 gigawatts of wind farm capacity.  However, that’s only the equipment’s theoretical ability to make electricity and not how much electricity it likely will supply per year once in service.  For that we need to multiply our capacity by something called “capacity factor,” which is what it really delivers.  Again, using EIA’s numbers on what really happens out in the real world in terms of expected production:

[Chart: expected annual electricity output from a $5 billion investment, nuclear vs. wind]

So that $5 billion will produce over TWICE the annual electrical output for American consumers if invested in nuclear power plants than if in wind farms.  One has to ask, do these provisions in the tax code really serve Americans’ interests or are they written with someone else in mind?  Yet, Congress wants 20% of our electricity to come from “renewables” like wind.  The California legislature, to prove its green bona fides, recently passed a law to make California electric consumers buy 33% of their electricity from renewables.  All I can say is, “Thanks guys!”
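Somsel’s output comparison is capacity times capacity factor times hours per year. A minimal sketch using the nameplate capacities quoted above; the capacity factors are illustrative assumptions on my part, not the EIA inputs he used, so the exact ratio depends on them.

```python
# Annual generation from $5B of capacity: ~1.5 GW of nuclear vs.
# ~2.6 GW of wind (figures quoted in the excerpt). Capacity factors
# below are illustrative assumptions, not the EIA inputs Somsel used.

HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw, capacity_factor):
    """Expected annual generation in terawatt-hours."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000

nuclear_twh = annual_twh(1.5, 0.90)         # ~90% CF typical of US nuclear
for wind_cf in (0.25, 0.30, 0.35):          # sensitivity to assumed wind CF
    wind_twh = annual_twh(2.6, wind_cf)
    print(f"wind CF {wind_cf:.0%}: nuclear {nuclear_twh:.1f} TWh/yr, "
          f"wind {wind_twh:.1f} TWh/yr, ratio {nuclear_twh / wind_twh:.1f}x")
```

The “over twice” result holds toward the lower end of that wind capacity-factor range; at higher wind capacity factors the gap narrows, but the same dollars still buy substantially more annual energy as nuclear.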

(…snip…)

So, in comparing the tax treatment of wind against that of nuclear power, one could get the idea that Congress is rewarding the inefficient while hobbling the productive.  I’d call that perversion and poor public policy.

Burton Richter: America’s Nuclear Future

We cannot get a coherent accepted long-term plan. The French have a long-term plan. The Koreans, the Chinese, the Russians have it. We don’t have it. That’s not the fault of the labs, that’s the fault of the administrations.

Burton Richter is my #1 choice for energy policy advisor. In the recent Breakthrough interview you’ll read the true story of the state of Gen IV reactors and what passes today for “US policy”:

When it comes to nuclear energy, Dr. Burton Richter is Mr. Credible. Winner of the 1976 Nobel Prize for discovering a new sub-atomic particle, Richter has advised presidents and policymakers for almost 40 years. Richter has been a Breakthrough Senior Fellow since 2011, and is technical adviser to the forthcoming documentary, “Pandora’s Promise,” about pro-nuclear environmentalists.

Breakthrough interviewed Richter recently to get his opinion on next generation nuclear reactors, and why so many of them are being developed abroad and not by the Department of Energy in the United States. “The DOE is too screwed up to go into a partnership and do this in the US,” the blunt Richter told us, referring to the Bill Gates-backed nuclear design pursued in China by TerraPower.

Is DOE really to blame? In the end, Richter told us it was partisan polarization that was the problem. “George W. Bush actually had a good thing on next generation nuclear,” Richter said. “When the Obama people came in all the Gen IV activities were stopped. With a system that keeps changing its priorities every few years, the [National DOE] Labs are pretty demoralized. The French have a long-term plan. The Koreans, the Chinese, the Russians have it. We don’t have it. That’s not the fault of the labs, that’s the fault of the administrations.”

And that’s the fault, we might add, of irrational environmentalist and progressive fears of nuclear energy — something “Pandora’s Promise” hopes to change. Read the rest of our interview below.

What is the future of next generation nuclear reactors?

(…)

What about the reactor designed by Nathan Myhrvold and backed by Bill Gates through TerraPower?

They had to change designs because the original design of a kind of slow-burning candle didn’t work. The new version is supposed to have a core that would be sealed for 50 years. But it’s not completely sealed because you have to shuffle the fuel rods. One advantage is that at the end of 50 years, the waste is so impure that nobody would want anything to do with it for making a weapon.

TerraPower is being done in China because in the US there’s no way he could get it licensed. And the DOE is too screwed up to go into a partnership and do this in the US.

We always hear from people that DOE is screwed up. But what exactly does that mean? Can it be fixed?

Consider the fact that the DOE can, at one of its labs, go ahead with an experimental fission system that is not approved by the Nuclear Regulatory Commission (NRC). After all, the DOE is supposed to develop new technologies, while the NRC is supposed to deal with things in the civilian nuclear world.

In other words, the labs don’t need NRC approval to make a 5MW version of TerraPower’s reactor. They could just go do it. But it’s so agonizing to get [lab] approval for that kind of thing. So political. Ultra-greens would say too dangerous and NRC has to approve it, and NRC would say it will look into it and it would take a decade.

That’s the reason Nathan [Myhrvold] and Bill Gates said, “Let’s build the first one in China.”

Is the problem with Congress or DOE?

Both. At DOE there are a lot of layers of bureaucracy and very little continuity. Everything changes with every new administration. The long-term goals change. The result is that the labs have become very conservative.

With a system that keeps changing its priorities every few years, the labs are pretty demoralized. We cannot get a coherent accepted long-term plan. The French have a long-term plan. The Koreans, the Chinese, the Russians have it. We don’t have it. That’s not the fault of the labs, that’s the fault of the administrations.

Is this a problem of ideological and partisan polarization?

George W. Bush actually had a good program on next generation nuclear. We were part of the Generation IV International Forum, working closely with Japan and France. We had a program that was headed toward certain kinds of advanced reactors, including liquid sodium, and a high temperature gas reactor. When the Obama people came in all the Gen IV activities were stopped. Yucca Mountain was shut down. And we’re off in totally new directions.

Partly, but there were even changes between the first George W. Bush term and the second. In the first term, they were talking about reprocessing, and in the second, Gen IV designs. We have an on-again, off-again program that changes too often. The next problem is the budget. The DOE nuclear budget is a complete mess. They are working off of a continuing resolution, and in that process you always take the lower budget line from either the Senate or House. This creates massive amounts of uncertainty in the programs.

Who can change that? Can Obama just tell the labs to build a next gen nuclear reactor?

No, it has to go to Congress to change. The whole structure has to change.

What’s your general impression of the integral fast reactor (IFR), the prototype of which ran at Argonne-West [now part of Idaho National Laboratory], and is now being marketed by General Electric as the PRISM reactor?

The IFR is a sodium-cooled fast spectrum reactor with all the good and bad that come with it. The one sodium cooled reactor at Hanford ran for thirty years until we drilled a hole into it [after Congress ended funding for it in 1994]. France and Russia built versions as well.

What’s new to the IFR is the on-site reprocessing, and the feeding of the actinides [radioactive elements like uranium and plutonium] back into the fuel, so that nothing ever leaves it. The new IFR trick is in the electrorefining [sometimes called pyroprocessing] to reprocess the waste into new fuel, making it a continuous fuel cycle. So think of the IFR as a liquid sodium fast spectrum breeder reactor with a trick as to how to do the separation of actinides in an effective fashion.

Electrorefining is the most interesting new element in the IFR, but it has been hard to figure out how to get it working well enough to be used commercially.

Who is working on improving electrorefining?

South Korea is very interested in electrorefining and would like to do a joint program with the US. The question is whether we’ll let them do it. The 123 agreement we have with them says that the US has to agree to any reprocessing. The Nuclear Energy Advisory Committee to the DOE has said that if you’re going to do this, then having a Korean partner would be a great idea.

Wouldn’t technologies like the IFR greatly reduce the amount of waste?

You need a geological repository anyway because you always have fission fragments, and that’s the really radioactive stuff. So if pyroprocessing worked perfectly, the long-lived components would be removed to be used as fuel, and after 500 years you wouldn’t have to worry about the remaining waste any more because the radioactivity would be low.

So you’ll still need a repository, though probably not for 100,000s of years. But there’s a big if here. How efficiently can you separate these long-lived actinides from the fission fragments? If you allow even a few percent of the actinides in, then it will be for 100,000s of years. It has to be really good. Right now, it’s not that good. The people working on it say they have good ideas but they haven’t fixed it yet.
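The 500-year figure follows from the half-lives of the dominant medium-lived fission products. A small decay sketch, my illustration rather than Richter’s, using standard published half-lives:

```python
# After ~500 years, the main heat- and dose-dominating fission products
# (Cs-137, Sr-90) have gone through ~17 half-lives and are reduced by a
# factor of roughly 100,000. Long-lived actinides such as Pu-239 barely
# decay at all in that time, which is why the actinide separation
# efficiency Richter describes determines how long isolation is needed.

HALF_LIVES_YEARS = {
    "Cs-137": 30.2,      # fission product
    "Sr-90": 28.8,       # fission product
    "Am-241": 432.0,     # actinide
    "Pu-239": 24_100.0,  # actinide
}

def remaining_fraction(half_life_years, elapsed_years):
    """Fraction of the original inventory left after radioactive decay."""
    return 0.5 ** (elapsed_years / half_life_years)

for nuclide, t_half in HALF_LIVES_YEARS.items():
    print(f"{nuclide:>7}: {remaining_fraction(t_half, 500):.2e} "
          "of the original inventory remains after 500 years")
```

A repository for the fission fragments alone is therefore roughly a 500-year problem, while any actinides that leak into the waste stream put it back on the 100,000-year clock, which is why the separation efficiency matters so much.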

(…)