The Economist on past and future emissions cuts

 

Chart 1 – click to embiggen

The above graphic is from The deepest cuts, a contribution from The Economist to grappling with the “big picture” on effective carbon avoidance strategies. There are some obvious problems with the numbers in Chart 1 – particularly the Cumulative Emissions avoided by Hydropower and Nuclear. There are also some very big issues with Chart 2, where the authors attempt to project the carbon avoidance situation in 2020. I addressed some of these issues in my comments to the article:

I hope this is just the beginning of an ongoing Economist project to refine and update an understanding of what is working, what is not working – all in the context of the essential measure of cost/benefit, specifically cost-per-ton-CO2-avoided.

I need to highlight a few errors in your data presentation. In your Chart 1 you report Cumulative Emissions Avoided for both Hydropower and Nuclear that understate the actual avoidance by roughly thirty times. Nuclear and hydropower avoidance should be about 64 and 90 GtCO2-eq respectively vs. your 2.2 and 2.8 GtCO2-eq. I derived these values from two sources. First, the IAEA report you referenced, Climate Change And Nuclear Power 2013, states on page 14:

Over the past 50 years, the use of nuclear power has resulted in the avoidance of significant amounts of GHG emissions around the world. Globally, the amount of avoided emissions is comparable to that from hydropower.

From inspection of IAEA FIG. 5 we can see that cumulative historical hydropower avoidance is very roughly 25 GtCO2-eq greater than the nuclear avoidance, but otherwise similar. But what is the cumulative avoidance? In “Prevented mortality and greenhouse gas emissions from historical and projected nuclear power”, Kharecha and Hansen (2013) calculated that the cumulative global CO2 emissions avoided by nuclear power are 64 GtCO2-eq. Here’s their Figure 3, page 12, for both historical and projected emissions avoided:

Click to embiggen

The authors calculated the 64 GtCO2-eq avoidance based on a different IAEA source document: Energy, Electricity and Nuclear Power Estimates for the Period up to 2050: 2011 Edition; International Atomic Energy Agency, 2011.

Is 64 GtCO2-eq a big number? It is a Very Big Number, which Kharecha and Hansen (2013) contrast with 35 years of USA coal emissions:

For instance, 64 GtCO2-eq amounts to the cumulative CO2 emissions from coal burning over approximately the past 35 yr in USA
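These figures are easy to sanity-check with back-of-envelope arithmetic. The sketch below verifies the roughly thirty-fold understatement and the coal comparison; the ~1.8 GtCO2/yr figure for average US coal emissions is my own assumption, implied by the quote rather than stated in it:

```python
# Sanity check of the avoided-emissions figures discussed above.
chart1_nuclear, chart1_hydro = 2.2, 2.8    # GtCO2-eq, The Economist's Chart 1
kharecha_nuclear, hydro_est = 64.0, 90.0   # GtCO2-eq, Kharecha & Hansen 2013 / IAEA

print(kharecha_nuclear / chart1_nuclear)   # ~29x understatement for nuclear
print(hydro_est / chart1_hydro)            # ~32x understatement for hydro

# Kharecha & Hansen's comparison: 64 GtCO2 over 35 years of US coal burning
# implies average US coal emissions of roughly 1.8 GtCO2/yr (my inference).
print(kharecha_nuclear / 35)               # ~1.8 GtCO2/yr
```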


Chart 2: Click to embiggen

Regarding your Chart 2, forecasting “the policies likely to have the biggest impact in 2020” is a courageous undertaking. Useful projections require deep knowledge of the energy industry, the electric power industry, economic forecasting, and the political trends of the significant emitting countries. That is a Very Big Ask, so I decided to look for related work by the firm retained by The Economist: namely Climate Action Tracker. The principals of this consulting firm are listed as Dr. Bill Hare, Dr. Niklas Höhne, Dr. Johannes Gütschow and Dr. Michiel Schaeffer. The first three gentlemen are affiliated with the Potsdam Institute for Climate Impact Research (PIK). That affiliation immediately boosted my estimate of Climate Action Tracker’s qualifications, because I have been studying the work of other PIK researchers who have been publishing very important and original work on the difficult subject of integrating variable renewable generation sources, especially at potentially high future penetration levels. This work requires a deep understanding of electric power systems. In particular I recommend these three PIK papers:

  1. Hirth, Lion, The Optimal Share of Variable Renewables. How the Variability of Wind and Solar Power Affects Their Welfare-Optimal Deployment (November 8, 2013). FEEM Working Paper No. 90.2013. Available at SSRN: http://ssrn.com/abstract=2351754 or http://dx.doi.org/10.2139/ssrn.2351754
  2. Ueckerdt, Falko and Hirth, Lion and Luderer, Gunnar and Edenhofer, Ottmar, System LCOE: What are the Costs of Variable Renewables? (January 14, 2013). Available at SSRN: http://ssrn.com/abstract=2200572 or http://dx.doi.org/10.2139/ssrn.2200572
  3. Hirth, Lion and Ueckerdt, Falko and Edenhofer, Ottmar, Why Wind is Not Coal: On the Economics of Electricity (April 24, 2014). FEEM Working Paper No. 39.2014. Available at SSRN: http://ssrn.com/abstract=2428788 or http://dx.doi.org/10.2139/ssrn.2428788

What I found in an afternoon of Internet research on Climate Action Tracker gives me concern about the Chart 2 conclusions. You have probably noticed in Chart 2 that in the six short years to 2020 nuclear power has become so insignificant it doesn’t even make the top-eleven list. That is puzzling, as nuclear power is currently the largest source of non-hydro emission-free electricity.

I confess that all my searching for anything related to nuclear power trends in publications by the Climate Action Tracker principals turned up only a lone update: Climate Action Tracker Update, 30 November 2012, from which I have extracted the only two, widely separated paragraphs wherein nuclear is even mentioned:

…Society also would lose the ability to choose whether it wants technologies like carbon capture and storage and nuclear energy, because those, along with bio-energy, would likely have to be deployed on a larger scale.

…More pressure on future policy requirements. For example, full global participation would be required after 2020, and society may have little freedom to choose technologies, such as the freedom to reject large-scale nuclear energy, CCS, or bio-energy.

The only way I can read these comments is that the authors’ political view is that nuclear power should be rejected. This supports my conclusion that the members of Climate Action Tracker are possibly experts in climate science, but perhaps not so expert in the electric power industry and the economics of energy. The economics are fundamental to policies that can be implemented in the real world.

Renewables are making no progress against coal

No doubt you’ve heard that Friends of the Earth recently announced that their primary objection to nuclear power is now that it is too slow to build and too costly.

I would like to introduce FOE to the data embodied in Roger Pielke Jr.’s graphic. I’ve modified Roger’s chart to illustrate the only energy policy that has succeeded in rapidly displacing fossil fuels at utility scale. My crude green slope indicator highlights the period when France, Sweden, Belgium, Canada, the United States, Germany, Japan, Switzerland and others built their nuclear power fleets. The absence of further progress since 1995 shows the stark reality of how little has been achieved by the billions of dollars of taxpayer wealth spent on renewable subsidies since Kyoto. The following chart contrasts the speed and scale of the nuclear build with the slow build of the whole suite of “renewables”.

Roger’s short Breakthrough essay is the source of the original chart:

The data shows that for several decades the world has seen a halt in progress towards less carbon-intensive energy consumption, at about 13 percent of the total global supply. This stagnation provides further evidence that the policies that have been employed to accelerate rates of decarbonization of the global economy have been largely ineffective. The world was moving faster towards decarbonizing its energy mix long before climate policy became fashionable. Why this was so and what the future might hold will be the subject of future posts in this continuing discussion.

If you are keen to learn what makes for effective decarbonization policies, then you are likely to also enjoy Roger’s The Climate Fix. For an Executive Summary of the concepts see A Primer on How to Avoid Magical Solutions in Climate Policy.

Prospects for U.S. Nuclear Power After Fukushima


Click to embiggen

The chairman of one of the largest U.S. nuclear companies recently said that his company would not break ground on a new nuclear plant in the United States until the price of natural gas was more than double today’s level and carbon emissions cost $25 a ton. This seems to summarize the current prospects for U.S. nuclear power pretty well.

This paper by Lucas W. Davis (Haas School of Business UC Berkeley) is an excellent summary of the US situation as of 2011, and a good source of references for your research on nuclear construction costs. Davis is not attempting to predict the future; he is drawing inferences from the historical data. That is a depressing picture — with the 2011 evidence indicating that US nuclear suppliers have not learned even the French lessons. 

Many within the nuclear industry claim that the industry is headed more toward the French model. A chairman of a major nuclear power company recently reported that new reactors would be standardized down to “the carpeting and wallpaper”. However, this claim does not appear to be supported by the license applications that have been received to date. Among the 17 applications that have been received by the NRC, there is a mix of both pressurized water reactors and boiling water reactors, manufactured by five different reactor manufacturers (Areva, Westinghouse, Mitsubishi, GE-Hitachi, and GE). Thus, it may well be the case that the industry will soon coalesce around a very small number of designs, but this is not immediately obvious based on these initial applications. At a minimum it seems clear that the French approach of supporting a single reactor design is not going to be adopted here.

Will China lead the world out of this pit by creating a mass manufacturing supply chain for two or three standard designs?

Reddit AMA grills the UC Berkeley Department of Nuclear Engineering


Members of the UC Berkeley Department of Nuclear Engineering participated in the Reddit.com Science AMA Series, responding to a large number of largely hostile questions, including many variations of “Can I still eat fish from the contaminated Pacific?”. As is typical of these AMA sessions, the signal-to-noise ratio is low, due to uninformed questions and irrelevant branched threads of discussion by people who are more interested in politics. I “mined” the 1,447 comments for the fragments I thought worth archiving.

I guess I’ll start things off. What type of reactors should we be building? I know a big deal was made a few years ago about liquid fluoride thorium reactors. Is that the way of the future, or are there superior alternatives?

Prof. Per Peterson replies (emphasis mine):

I do not think that we have the basis to determine or select the best coolant or fuel type to use in future reactors. But there are some attributes which we do need to make sure are used in future reactors.

The first is to use passive safety systems, which do not require electrical power or external cooling sources to function to remove decay heat after reactors shut down, as is the case with the AP-1000 and ESBWR designs, and with all of the light water reactor SMRs now being developed in the U.S.

The benefits of passive safety go well beyond the significant reduction in the number of systems and components needed in reactors and the reduced maintenance requirements. Passive safety systems also greatly simplify the physical protection of reactors, because passive equipment does not require routine inspections the way pumps and motors do, and thus can be placed in locations that are difficult to gain access to rapidly.

The second is to further increase the use of modular fabrication and construction methods in nuclear plants, in particular to use steel-plate/concrete composite construction methods that are quite similar to those developed for modern ship construction. The AP-1000 is the most advanced design in the use of this type of modularization, and the ability to use computer aided manufacturing in the fabrication of these modules makes the manufacturing infrastructure much more flexible. In the longer term, one should be able to design a new reactor building, transfer the design to a module factory over the internet, and have the modules show up at a construction site, so the buildings are, in essence, 3-D printed.

The final attribute that will be important for new reactors will be to make them smaller, and to develop a regulatory framework and business models that work for multi-module power plants. While there will likely always be a market for large reactors, creating an ecosystem that includes customers for smaller reactors (inland locations served only by rail, installations needing reliable power even if fuel supplies are interrupted, mature electricity markets that need to add new capacity in small increments).

On thorium, a question:

Hello! What do you think is the most important advantage that thorium has over uranium as a “fuel?”

Prof. Per Peterson’s reply

The thorium fuel cycle has clearly attractive features, if it can be developed successfully. I think that most of the skepticism about thorium emerges from questions about the path to develop the necessary reactor and fuel cycle technology, versus open fuel cycles (uranium from seawater) and closed, fast-spectrum uranium cycles.

The most attractive element of the thorium fuel cycle is the ability to operate sustainably using thermal-spectrum neutrons. This allows the design of reactor core structures that use high-temperature ceramic materials like graphite, which have substantial thermal inertia and cannot melt. Because these ceramic materials also provide significant moderation, it is difficult to use them in fast-spectrum reactors and thus the most plausible fast-spectrum reactor designs need to use metallic structural materials in their cores.

So thorium reactors are compatible with higher intrinsic safety (cores which do not suffer structural damage even if greatly overheated) and can deliver heat at higher temperature, which enables more efficient and flexible power conversion.

Molten fluoride salts are compatible with these high-temperature structural materials, and given their very high boiling temperatures make excellent, low pressure heat transfer fluids. In the near term, the largest benefits in using fluoride salts come from the low pressure and high temperature heat they can produce. This can be achieved with solid fuel, which is simpler to work with and to obtain regulatory approvals.

But molten salt technologies also have significant challenges. One of the most important is managing the much larger amounts of tritium that these reactors produce, compared to light water cooled reactors (the quantities are closer to what heavy-water reactors, such as the CANDU, produce, but methods to control and recovery of tritium are much different for molten salts than for heavy water, and key elements remain to be demonstrated).

To repeat a critical point “…largest benefits in using fluoride salts come from the low pressure and high temperature heat they can produce. This can be achieved with solid fuel…”. This summarizes why Prof. Peterson’s lab is focused upon developing the PB-AHTR design, which will also prove out many materials and technologies required subsequently to implement the more challenging Liquid Fuel molten salt reactor concept (such as LFTR).

Regarding waste: Prof. Peterson was a member of Obama’s Blue Ribbon Commission on America’s Nuclear Future. I consider him one of the best-informed sources regarding Spent Nuclear Fuel (SNF), which the anti-nuclear lobby calls Nuclear Waste. It is not “waste”; it is an extremely valuable source of carbon-free energy.

Q: One of the elephants in the room for nuclear power is the waste….

A: …Finland and Sweden have successfully sited and are building deep geologic repositories in granite, and France is very far along in developing its geologic repository in clay. The U.S. nuclear waste program is currently stopped and is in a state of disarray…

There are a wide range of opinions on recycling spent fuel. Recycling plutonium into new fuel for light water reactors (LWRs) is substantially more expensive than making new fuel from uranium, even if the plutonium is free. This is primarily because the plutonium must be handled as an oxide powder to make LWR fuel, and oxide powder is the most hazardous and difficult form in which to handle plutonium. All of the Generation IV reactor technologies can use fuel forms that do not involve handling plutonium and minor actinides in the form of powders and that are much easier to fabricate using recycled material (e.g., metal, molten salt, sol-gel particles in either coated particle or vibropacked fuel forms).

In my personal opinion, the most sensible thing to do in the near term is to prioritize U.S. defense wastes for geologic disposal, and to use a combination of consolidated and on-site interim storage for most or all commercial spent fuel. Implementation of the Blue Ribbon Commission’s major recommendations, which include development of consolidated interim storage that would initially be prioritized to store fuel from shut down reactors, would put the U.S. on this path.

By using geologic disposal primarily for defense wastes first, and using primarily dry cask interim storage for commercial spent fuel, this will give a couple of decades for nuclear reactor technology to evolve further, and by then we will be in a better position to determine whether commercial spent fuel is a waste or a resource.

Nuclear innovation: Prof. Peterson replies

There are a number of factors which make innovation difficult in improving nuclear reactor technology, in particular the long operating life of nuclear power plants and their very large capital costs, which dissuade innovation. The trend toward designing larger and larger water-cooled reactors has increased these disincentives.

Given their lower capital cost and shorter construction times, innovation is much easier in small reactors. There will remain a role for large reactors, just as dinosaurs existed for millions of years alongside the new mammal species, but currently some of the most important policy issues for nuclear power involve creating an ecosystem where small reactors find customers. Smaller reactors, produced in larger numbers with most of the fabrication occurring in factories, would also use specialized manufacturing and skilled labor more efficiently. Imagine factories as being similar to airplanes, and the ability to keep more seats filled being really important to having low per-seat prices…

FHR (Fluoride Salt Cooled High Temperature Reactor), Where to take technical risk?

I will answer this question first indirectly, and then more directly.

A key question for innovation in developing new nuclear energy technology is where to take technical risk. SpaceX provides a good example of a highly successful risk management strategy. They focused on developing a highly reliable, relatively small rocket engine, that they tested in the Falcon 1, which uses an ancient rather than innovative fuel combination, kerosene and liquid oxygen. On the other hand, they chose to use aluminum-lithium alloy with friction stir welding for their fuel tanks, which is at the cutting edge of current technology. They have then used the approach of ganging together large numbers of these engines to create the Falcon 9, which is now successfully delivering cargo to the International Space Station.

Currently the most important barrier to deploying nuclear power is not the cost of the fuel, but instead is the capital cost of the plants, the need to assure that they can run with high reliability (which for current large reactor designs creates strong disincentives to innovate), and the relatively low electricity revenues one receives for producing base load power, particularly today in the U.S.

The primary reason that UCB, MIT, and UW, and the Chinese Academy of Sciences, are working on solid fuel, salt cooled reactor technology is because we have the ability to fabricate these fuels, and the technical difficulty of using molten salts is significantly lower when they do not have the very high activity levels associated with fluid fuels. The experience gained with component design, operation, and maintenance with clean salts makes it much easier to consider the subsequent use of liquid fuels, while gaining several key advantages from the ability to operate reactors at low pressure and deliver heat at higher temperature.

Q: Can I also ask what you think the safest way to transport the waste is?**

A: Per Peterson: There is a long record of safe transportation of nuclear waste, including spent fuel, world wide. The containers used to transport nuclear wastes are substantially more robust than those used to transport hazardous chemicals and fuels, which is why transportation accidents with chemicals generate significantly more risk.

This said, the transportation of nuclear wastes requires effective regulation, controls, and emergency response capabilities to be in place. The transportation system for the Waste Isolation Pilot Plant in New Mexico has logged over 12 million miles of safe transport, with none of the accidents involving the transportation trucks causing any release of radioactive materials.

One reason it is important to restore WIPP to service (it had an accident involving the release of radioactive material underground in late February, which had minimal surface consequence because the engineered safety systems to filter exhaust air were activated) is because the WIPP transportation system has developed a large base of practical experience and skilled personnel at the state and local levels who are familiar with how to manage nuclear waste transport. This provides a strong foundation for establishing a broader transportation system for commercial spent fuel and defense high level wastes in the future.

A commenter replied to Per’s hecklers, referring to WIPP:

Actually I work for this program and this is an understatement. Not only have there never been any accidents that caused a release of nuclear material, there have never been any accidents with a truck loaded with waste containers, ever. They’ve happened while empty, but never otherwise.

Per Peterson discussed the unpriced carbon emissions externality, which I would say is effectively a tax on nuclear, because nuclear produces nearly zero-carbon energy in competition with coal and gas, which do not pay their carbon externality costs. Per raised a very important issue: how NRC gatekeeping sets up a strong incentive to free-ride on NRC rulings.

But there is another important market failure that affects nuclear energy and is not widely recognized, which is the fact that industry cannot get patents for decisions that the U.S. Nuclear Regulatory Commission makes. For example, there are major regulatory questions that will affect the cost and commercial competitiveness of multi-module SMR plants, such as how many staff will be required in their control rooms. Once the first SMR vendor invests and takes the risk to perform licensing, all other vendors can free-ride on the resulting USNRC decision. This is the principal reason that government subsidies to encourage first movers, such as cost sharing or agreements to purchase power or other services (e.g., irradiation) make societal sense.

Is this being discussed in the US government? I’ve never seen a word about it. This is another example of the sub-optimal result we get from wasting billions on energy-farming production subsidies while rationing a few millions for nuclear R&D. Even America has very limited funds – and needs to spend them very carefully.

CERA: Construction costs for new nuclear plants up over 230% since 2000

UPDATE: I have republished this 2008 post and comments to bring it “up front” with our ongoing discussion of new nuclear construction costs. At the end I’ve incorporated the 2008 comments. 

UPDATE: Per Peterson, professor and former chair of the Department of Nuclear Engineering at the University of California, Berkeley, was kind enough to comment on yesterday’s post on the CBO study. Dr. Peterson noted that only about 1% of new nuclear plant construction cost is construction materials, so the theme attributing the rapid cost rises to commodity prices has no basis. Conversely, wind turbine construction/installation requires at least 10x the materials input per kilowatt, and so has higher sensitivity to the price and availability of steel, concrete, copper, etc. I cannot accurately summarize in fewer words, so I recommend you read his comments carefully.

Dan Yergin’s Cambridge Energy Research Associates (CERA) maintains the Power Capital Costs Index (PCCI), depicted in the graphic at left – as of May 2008 [click on the thumbnail for full size chart]. In brief the PCCI shows that a power plant that cost $1 billion in 2000 would, on average, cost $2.31 billion in May [in constant 2000 dollars].

You can infer that the cost of new nuclear plant construction has increased by more than that 230%. As you can see in the PCCI chart, the non-nuclear costs are up 180%. The PCCI is assembled from data on a basket of 30 power generation facilities in North America. I don’t know what percentage of the capital base is nuclear, so I’ll speculate that it’s similar to the 22% that nuclear currently contributes to US generation. That implies nuclear construction costs are up about 400% since 2000.
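The “about 400%” figure follows from a weighted-average decomposition of the index. A rough sketch, keeping in mind that the 22% nuclear weight is my speculation and that, like the post, it treats the index level as the percentage figure:

```python
# Back out the implied nuclear index from the overall PCCI.
pcci_total = 231        # overall index (2000 = 100)
non_nuclear = 180       # non-nuclear component of the index
nuclear_share = 0.22    # assumed weight of nuclear in the basket (speculative)

# total = share * nuclear + (1 - share) * non_nuclear  =>  solve for nuclear
nuclear_index = (pcci_total - (1 - nuclear_share) * non_nuclear) / nuclear_share
print(round(nuclear_index))   # ~412, i.e. "up about 400%" in the post's usage
```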

I may be able to get more background from the CERA Capital Cost Analysis Forum – Power. But I discovered that viewing the replay of the June 6 webconference call required IE6, so I’ll need to fire up a windows PC to access it.

On factors driving the PCCI increases since 2000, CERA writes:

…Demand for new power generation facilities remains high worldwide, leading to continued tightness in equipment markets. Cost increases, supply issues and longer delivery times are exacerbated as manufacturers struggle to keep up with demand. The weakening U.S. dollar also increases the costs of global procurement for equipment and materials.

The number of engineers in the workforce is also declining as older workers retire and are not being replaced. The resulting shortages in plant design teams add additional delays to construction schedules. The current increase in construction for nuclear power generation and the dearth of experienced nuclear engineers in North America has been a key driver behind cost escalation.

Recent cancellations of proposed coal plants in the United States due to uncertainty over environmental regulations have provided some slowing of cost increases in the U.S. coal industry. However, international competition for coal boilers, particularly in Southeast Asia, is keeping the equipment order books very active.

Concerns over a looming U.S. recession and subsequent cut backs in residential construction have offered little relaxation to power construction. The residential slump does not free up the skilled workers required in the power industry and there is no overlap of the specialist metals and equipment required.


Upstream Capital Cost Index (UCCI) Courtesy IHS

I wonder if we are looking at market reactions to an impulse in demand. In the short run [say 5 years] the supply of new nuclear plants is inelastic. Demand has increased considerably beyond expectations, so equilibrium is only achieved by higher prices. We are seeing similar supply/demand responses in several energy sectors. The headlines hammer on oil prices. Note that the UCCI is only 10% less than the PCCI.

The UCCI is based upon a portfolio of 28 upstream oil and gas projects, so it represents the overnight cost of investment in both oil & gas field development and transportation. It may include finding costs, but I’m not sure. I do know that the cost per barrel-equivalent of finding + development costs has been increasing about as fast as oil companies have been able to ramp up their investments. The net result so far is no increase in reserve-additions, which are still lagging depletion.

2 thoughts on “CERA: Construction costs for new nuclear plants up over 230% since 2000”

  1. Paul on December 4, 2008 at 1:44 pm said:

“only about 1% of new nuclear plant construction cost is construction materials” – sorry, I don’t think so. More like 30% at least.

  2. Steve Darden on December 4, 2008 at 7:02 pm said:

“More like 30% at least.”

Paul, thanks heaps for your comments. Here’s the relevant part of Dr. Peterson’s comment on commodity inputs [he gives the references as well]:

While it is widely understood that nuclear energy costs have quite low sensitivity to the cost of uranium, it is not widely appreciated that the same applies to construction materials. If one takes the total quantity of steel, concrete, copper, and other materials required to build a light water reactor similar to those operating today [1], and then multiplies these quantities by the respective current commodity prices, the total contribution of commodity inputs is $36 per kilowatt of generation capacity [2], out of total construction prices that are estimated today to range from $3000 to $5000 per kilowatt. The dominant cost of nuclear construction is instead in the added value that comes from converting these commodities into an operational nuclear power plant. Conversely, wind turbines require approximately a factor of 10 times as much steel and concrete to construct, without considering storage capacity [3], and thus have construction costs that are sensitive to commodity costs and to potential future resource scarcity.

So he gave a range of 36/5000 to 36/3000, or 0.7% to 1.2%.
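The percentage range follows directly from Dr. Peterson’s numbers:

```python
# Commodity inputs as a share of total nuclear construction cost.
commodity_cost = 36                    # $/kW of steel, concrete, copper, etc.
construction_low, construction_high = 3000, 5000   # $/kW total construction cost

print(commodity_cost / construction_high * 100)    # 0.72% at $5000/kW
print(commodity_cost / construction_low * 100)     # 1.2% at $3000/kW
```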

Can you educate us on the construction cost buildup – also on why quotes have gone up so much since 2000?

CBO Study: Nuclear Power’s Role in Generating Electricity

UPDATE: I have republished this 2008 post and comments to bring it “up front” with our ongoing discussion of new nuclear construction costs. At the end I’ve incorporated the 2008 comments.

I’ve been re-reading the CBO study from May 2008. This is probably the most current objective analysis of base-load electrical generation options. Given the CBO levelized costing assumptions, it appears that electric utilities will choose natural gas over 3rd generation nuclear unless they anticipate more than a $45/ton CO2 carbon tax or equivalent:

Adding a carbon dioxide charge of about $45 per metric ton to the levelized cost estimates in the reference scenario would make nuclear power the least expensive source of additional base-load capacity (see the left panel of Figure 3-2). Up to that threshold, at all but the lowest level of charges, conventional natural gas technology would probably be the least costly option. Because coal is more carbon-intense than natural gas, the cost advantage of new capacity based on natural gas technology would grow in relation to coal technology as carbon dioxide charges increased; but the advantage that natural gas technology enjoyed over nuclear technology would shrink and eventually disappear as emission charges reached about $45 per metric ton. Thereafter, the levelized cost advantage of nuclear technology over conventional gas technology would grow. Although carbon dioxide charges would not change the cost of nuclear power plants at all, they would increase the cost of innovative fossil-fuel alternatives; as a result, the cost advantage that nuclear technology held over those technologies would increase with carbon dioxide charges but at a slower rate than that observed with conventional fossil-fuel technologies.
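The mechanism in the quoted passage can be illustrated with a toy levelized-cost comparison. The numbers below are my own illustrative assumptions, not the CBO’s actual inputs, but they reproduce the logic: a carbon charge raises gas costs in proportion to carbon intensity while leaving nuclear unchanged, so the rankings cross near $45/ton.

```python
# Toy levelized-cost comparison under a CO2 charge (all numbers illustrative).
nuclear_lcoe = 72.0    # $/MWh, assumed carbon-free
gas_lcoe = 56.0        # $/MWh before any carbon charge
gas_intensity = 0.37   # tCO2 per MWh for a combined-cycle gas plant (assumed)

def gas_cost(co2_charge):
    """Gas levelized cost including a per-ton CO2 charge."""
    return gas_lcoe + gas_intensity * co2_charge

crossover = (nuclear_lcoe - gas_lcoe) / gas_intensity
print(round(crossover))              # ~43 $/ton, near the CBO's ~$45 threshold
print(gas_cost(45) > nuclear_lcoe)   # True: above the threshold, nuclear wins
```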

We know that construction costs for all types of generation have been going up rapidly with the increasing costs of steel, concrete, etc. Nuclear is the most sensitive to construction costs, simply because nuclear fuel costs are negligible (conversely, nuclear is insensitive to future fuel cost rises, while natural gas is extremely sensitive). Here are the relative sensitivities to lower or higher construction costs — again in levelized 2006 dollars per megawatt hour:

The CBO study of course has to stick with already-built or on-order nuclear technology. But this may lead to drawing the wrong conclusions. Remember how much autos cost when each one was custom built? And the lousy quality?

That’s our experience of nuclear construction — custom design, custom built, custom approvals. But, given certainty of future CO2 charges, I believe that a competitive market will transform nuclear generation into a mass produced, modular product — and that costs will come down dramatically compared to alternatives.

We don’t know what future innovations will emerge, but as of today, the modular pebble-bed reactor [PBMR] technology looks very promising. Key advantages are safety by design (even chimps as operators can’t cause a nuclear accident), no proliferation worries, and perhaps most important – the design is MODULAR. That means industrial-scale mass production is possible, with all the attendant benefits. One of the most important benefits is the slashing of the financial risk of regulatory delays before a new plant is allowed to start up.

For more background on the Modular Pebble-bed design, see MIT’s study “The Future of Nuclear Power” [1], MIT prof. Andrew C. Kadak’s presentation “What Will it Take to Revive Nuclear Energy?” [PDF] [2], and his Pebble-bed presentation [PDF] [2a]. China is placing big bets here, see Wired’s “Let a Thousand Reactors Bloom” [3].

10 thoughts on “CBO Study: Nuclear Power’s Role in Generating Electricity”

  1. Rod Adams on August 26, 2008 at 8:06 pm said:

Steve:

It is always important to check the assumptions. The CBO study that you pointed to, though completed in 2008, apparently used a fuel price table that stopped with 2005 fuel prices. It thus assumed a base case of natural gas costing about $5.00 per million BTU.

Since the cost of fuel is about 93% of the levelized cost of electricity from a natural gas fired power plant, underestimating the cost of gas would tend to sway the computed decision in the wrong direction compared to less fuel intensive alternatives like nuclear power.

Nuclear can compete without a carbon tax against gas at current market prices – which are about $8.50 per million BTU and have been as high as $13 in the recent past and may get there again with a cold winter.

Luckily for gas buyers, it has been a fairly mild summer.
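Rod's point that fuel dominates gas-fired levelized cost is easy to sanity-check from a heat rate. The heat rate and non-fuel cost below are illustrative assumptions (a typical combined-cycle heat rate of about 7,000 Btu/kWh and a round capital-plus-O&M figure), not values from the CBO study.

```python
# Back-of-envelope fuel share of gas-fired LCOE at several gas prices.
# HEAT_RATE and NON_FUEL_COST are illustrative assumptions.

HEAT_RATE_BTU_PER_KWH = 7000  # assumed combined-cycle heat rate
NON_FUEL_COST = 8.0           # $/MWh for capital plus O&M, assumed

def gas_lcoe(gas_price_per_mmbtu: float) -> float:
    """Levelized cost of gas generation at a given fuel price ($/MMBtu)."""
    fuel_cost = gas_price_per_mmbtu * HEAT_RATE_BTU_PER_KWH / 1000.0  # $/MWh
    return fuel_cost + NON_FUEL_COST

for price in (5.00, 8.50, 13.00):
    total = gas_lcoe(price)
    share = 1.0 - NON_FUEL_COST / total
    print(f"gas at ${price:.2f}/MMBtu: {total:.1f} $/MWh, fuel share {share:.0%}")
```

At the $13 peak Rod mentions, fuel is over 90% of the total under these assumptions, in the neighborhood of his 93% figure, so underestimating the gas price badly skews any comparison with nuclear.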

  2. Steve Darden on August 26, 2008 at 9:10 pm said:

Rod – thanks for the data update. Does the increase in construction costs since the timestamp on the report data offset the underestimated natural gas prices? I.e., gas operating costs up, nuclear plant construction costs up.

I added PBMR to this post – since folks search for this acronym.

  3. Per Peterson on August 27, 2008 at 10:48 am said:

While it is widely understood that nuclear energy costs have quite low sensitivity to the cost of uranium, it is not widely appreciated that the same applies to construction materials. If one takes the total quantity of steel, concrete, copper, and other materials required to build a light water reactor similar to those operating today [1], and then multiplies these quantities by the respective current commodity prices, the total contribution of commodity inputs is $36 per kilowatt of generation capacity [2], out of total construction prices that are estimated today to range from $3,000 to $5,000 per kilowatt. The dominant cost of nuclear construction is instead in the added value that comes from converting these commodities into an operational nuclear power plant. Conversely, wind turbines require roughly 10 times as much steel and concrete to construct, without considering storage capacity [3], and thus have construction costs that are sensitive to commodity costs and to potential future resource scarcity.

Right now demand for new reactors is clearly outstripping supply. While this current supply chain inelasticity will ease in 5 to 10 years, inelasticity in supply always results in higher prices. Thus we can expect nuclear construction prices to drop over the coming decade, but the main question is by how much. While it will never get down to the $36/kW cost of the commodity inputs, there is still potential that prices could drop greatly from the current values if modular construction and factory-based computer aided manufacturing are applied more broadly in the construction.

References:

  1. From R.H. Bryan and I.T. Dudley, Oak Ridge National Laboratory, TM-4515, June 1974, current pressurized water reactors use 32.7 t of steel, 75 m3 of concrete, 0.69 t of copper, and smaller amounts of other materials per megawatt of capacity
  2. On March 25, 2008, the commodity prices of steel, concrete, and copper (which constitute 90% of the total commodities costs for a nuclear plant) were $601/t, $98/m3, and $7,634/t respectively.
  3. Wind requires 115 MT of steel and 218 m3 of concrete per megawatt, but has higher commodity input per unit of electricity generated due to a lower capacity factor (~25%) compared to nuclear (~90%), S. Pacca and A. Horvath, Environ. Sci. Technol., 36, 3194-3200 (2002).
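Per Peterson's figure can be rebuilt directly from his footnotes: material quantities per MW from ORNL TM-4515 (footnote 1) and the March 2008 commodity prices (footnote 2). Steel, concrete, and copper alone come to about $32/kW; since footnote 2 says those three are ~90% of total commodity cost, grossing up lands near the $36/kW he cites.

```python
# Rebuilding the commodity-cost estimate from the footnotes above.
# Quantities are per MW of capacity (ORNL TM-4515); prices are March 2008.

QUANTITY_PER_MW = {"steel_t": 32.7, "concrete_m3": 75.0, "copper_t": 0.69}
PRICE = {"steel_t": 601.0, "concrete_m3": 98.0, "copper_t": 7634.0}  # $ per unit

cost_per_mw = sum(QUANTITY_PER_MW[k] * PRICE[k] for k in QUANTITY_PER_MW)
cost_per_kw = cost_per_mw / 1000.0
total_commodities = cost_per_kw / 0.90  # the three items are ~90% of the total

print(f"steel+concrete+copper: ~${cost_per_kw:.0f}/kW")
print(f"all commodities: ~${total_commodities:.0f}/kW, versus $3,000-5,000/kW builds")
```

Roughly 1% of total construction cost comes from commodities, which is why nuclear construction prices are driven by fabrication, labor, and financing rather than steel and concrete.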

    4. Rod Adams on August 27, 2008 at 2:26 pm said:

The interesting thing about the numbers that are being bandied about with regard to nuclear construction costs is that they also include rather substantial allowances for risk premiums, interest costs, and inflation uncertainties.

Those costs can represent half of the final delivered price computation.

  5. Steve Darden on August 27, 2008 at 5:10 pm said:

Dr. Peterson,

Thanks for taking the time to set us straight on the material inputs. At roughly 1% of construction cost, commodity inputs mean nuclear plant costs are highly insensitive to that component. The CBO study bypassed the sources of cost increases in its sensitivity analysis – simply assuming -50%/+100% construction-cost bounds.

Today I wrote a related post on the CERA index of power plant construction. Back of the envelope, assuming 22% of CERA’s basket of 30 plants is nuclear, I drew the inference that nuclear plant construction costs have increased around 400% since 2000, versus the PCCI average of 230% across all modes of generation.
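The back-of-envelope can be made explicit as a weighted average: the overall PCCI rise is the basket-weighted average of the nuclear and non-nuclear indices, so the nuclear index falls out by solving that average. The ~180% non-nuclear figure is the one quoted later in this thread.

```python
# Solving the weighted average for the implied nuclear cost index.
# overall = w * nuclear + (1 - w) * non_nuclear

NUCLEAR_WEIGHT = 0.22      # assumed nuclear share of CERA's 30-plant basket
OVERALL_INDEX = 2.30       # PCCI at 230% of its 2000 level
NON_NUCLEAR_INDEX = 1.80   # non-nuclear plants at ~180% of 2000

nuclear_index = (OVERALL_INDEX - (1 - NUCLEAR_WEIGHT) * NON_NUCLEAR_INDEX) / NUCLEAR_WEIGHT
print(f"implied nuclear index: ~{nuclear_index:.0%} of year-2000 cost")
```

The solved value is about 407% of the 2000 cost level, i.e. the "around 400%" inference above.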

Similar to your comments, CERA attributes the increases to the surge in demand and the “dearth of experienced nuclear engineers in North America.”

CERA is tracking similar (210%) increases in the cost of upstream oil & gas projects – the UCCI having a similar 2005 takeoff. Much more depth on energy demand over-running supply can be found in the really excellent CIEP study “Oil turbulence in the next decade – An Essay on High Oil Prices in a Supply-constrained World”, Jan-Hein Jesse and Coby van der Linde, Clingendael International Energy Programme. They conclude that the next decade or so will see high volatility in oil markets – oscillating between marginal cost and user value.

Please advise if you have any references to recommend on the potential for nuclear costs to drop in an industry transformation to “mass production”, relatively speaking, of modular reactor components. Presumably, such standardized components would be pre-certified, so that on-site certification would be reduced to a process more like inspections of other industrial facilities?

  6. Steve Darden on August 27, 2008 at 10:31 pm said:

Rod,

Well, it’s interesting that the CERA index explicitly doesn’t include risk premiums, or owner’s cost. It probably includes construction period interest. If my estimates of their basket are close it indicates a 2000 to 2008 Q1 cost increase of around 400% for nuclear and about 180% for non-nuclear.

I haven’t found a source to build up that figure from first principles – so I can’t confirm the PCCI index. I sat through the one hour CERA web-conference presentation of June 6 – hoping to learn the details. They do have a nuclear index, but didn’t present it. It is part of the distribution package sent to members.

Cheers, Steve

  7. JimHopf on August 28, 2008 at 4:51 pm said:

I’d just like to add a bit to what Rod said earlier. Not only does the CBO study assume a natural gas price of $5 (or $6?) per MBTU, which is lower than the price even today, but they assume that it will remain at $5/6 even if we use gas for all new power plants (and possibly also replace existing coal plants with gas plants to meet CO2 reduction requirements). In other words, they assume that the price will remain fixed at a (low) value of $5/6, no matter how high the demand for gas gets!

They simply state that for CO2 prices between $6 and $45 per ton, gas will be the cheapest source, thereby implying that it will be chosen for all new generation. They ignore all feedback effects. In the real world, as more and more gas is chosen, the price of gas goes up until the price advantage disappears. In fact, the real truth is that, for baseload power, gas will not be an option, as it will be the most, not the least, expensive in the future (even w/ little or no CO2 price), since future gas costs will be way above $6. For that reason, utility executives are not even really thinking about gas as a future baseload option. There simply is not enough gas to go around to waste it on something like baseload generation. The choice will be between nuclear and coal.

The real question is what CO2 price is required to make nuclear cheaper than coal. This price is about $20 to $25 per ton of CO2.
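Jim's $20-25/ton range is just the levelized-cost gap between nuclear and coal divided by coal's emissions intensity. The LCOE values in this sketch are illustrative round numbers chosen to land in that range; only the breakeven formula is the point.

```python
# Breakeven CO2 price that makes nuclear cheaper than coal:
# (nuclear LCOE - coal LCOE) / coal emissions intensity.
# LCOE values are illustrative assumptions; 0.95 tCO2/MWh is a typical
# pulverized-coal intensity.

COAL_LCOE = 55.0           # $/MWh, assumed
NUCLEAR_LCOE = 75.0        # $/MWh, assumed
COAL_CO2_INTENSITY = 0.95  # tCO2 per MWh

breakeven_price = (NUCLEAR_LCOE - COAL_LCOE) / COAL_CO2_INTENSITY
print(f"nuclear beats coal above ~${breakeven_price:.0f}/tCO2")
```

Note that the breakeven moves dollar-for-dollar with the LCOE gap, which is why the later exchange about rising nuclear capital costs pushes the estimate toward ~$30/ton.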

  8. Steve Darden on August 28, 2008 at 6:51 pm said:

Jim,

Thanks – I agree with all your points.

This price is about $20 to $25 per ton of CO2.

Doesn’t that depend on the capital cost? At 2005 CAPEX I thought $25 per ton CO2 would do it. At 4 x 2005 costs?

I’m confident new plant costs will come down. I’m optimistic that in a decade, constant dollars, that costs per MW will be lower than the 2005 CERA index.

But what do utility execs believe are the levelized costs?

  9. JimHopf on August 28, 2008 at 11:20 pm said:

Steve,

Well, the capital cost of coal plants has also gone up since then, as well as the price of coal itself (which has almost doubled). That said, the price of nuclear has gone up even more, if some of the latest estimates are to be believed. Thus, it could be that it would require ~$30 or more (but only for the first set of plants).

Of course, under any cap-and-trade system with hard (declining) CO2 limits, the CO2 price will rise to whatever it has to be to make nuclear cheaper than coal (given that renewables’ contribution is limited by intermittency, and that both gas and coal with CO2 sequestration will be more expensive than nuclear).

  10. Steve Darden on August 29, 2008 at 3:31 pm said:

Thanks Jim – two important concepts in your comments

(1) but only for the first set of plants – because once deployment gets well underway the capital costs will come down. Probably operating costs as well.

(2) the CO2 price will rise to whatever it has to be to make nuclear cheaper than coal – because that is the new equilibrium, as existing coal utilities bid up permits until it becomes cheaper to build replacement nuclear than to keep paying for permits.

Regarding (2) I still prefer a revenue-neutral carbon tax over cap-and-trade. Most importantly because it gives utilities a predictable and stable future cost environment. Secondly, because it prevents government from getting its hands on a new revenue stream, while avoiding a rich growth medium for corruption and complexity.

What’s your view on that choice?

PS – I just finished a post on “Greens make the case for nuclear power”.

Comments are closed.

Why did nuclear plant construction costs quadruple from 1972 to 1988?

The short answer is Greenpeace and their cronies such as Friends of the Earth (FOE):

A major source of cost escalation in some plants was delays caused by opposition from well-organized “intervenor” groups that took advantage of hearings and legal strategies to delay construction. The Shoreham plant on Long Island was delayed for 3 years by intervenors who turned the hearings for a construction permit into a circus. The intervenors included a total imposter claiming to be an expert with a Ph.D. and an M.D. There were endless days of reading aloud from newspaper and magazine articles, interminable “cross examination” with no relevance to the issuance of a construction permit, and an imaginative variety of other devices to delay the proceedings and attract media attention.

That quote is from Chapter 9, COSTS OF NUCLEAR POWER PLANTS — WHAT WENT WRONG?, of the online version of the book The Nuclear Energy Option by physicist Bernard L. Cohen, University of Pittsburgh. The book was published by Plenum Press in 1990, so it is slightly dated with respect to recent developments such as modular mass-manufactured reactors (SMRs). Other than that it is a terrific resource — a concise handbook that covers all the high-priority questions about nuclear power [risk/safety, radiation, costs, nuclear "waste", proliferation].

Prof. Cohen was there, on the scene so to speak, during the 1970s and 1980s when Regulatory Turbulence, Regulatory Ratcheting and Intervenors quadrupled the cost of a nuclear power plant. Here’s an excerpt from Chapter 9 covering Regulatory Ratcheting and Regulatory Turbulence:

The Nuclear Regulatory Commission (NRC) and its predecessor, the Atomic Energy Commission Office of Regulation, as parts of the United States Government, must be responsive to public concern. Starting in the early 1970s, the public grew concerned about the safety of nuclear power plants: the NRC therefore responded in the only way it could, by tightening regulations and requirements for safety equipment.

Make no mistake about it, you can always improve safety by spending more money. Even with our personal automobiles, there is no end to what we can spend for safety — larger and heavier cars, blowout-proof tires, air bags, passive safety restraints, rear window wipers and defrosters, fog lights, more shock-absorbent bumpers, antilock brakes, and so on. In our homes we can spend large sums on fireproofing, sprinkler systems, and smoke alarms, to cite only the fire protection aspect of household safety. Nuclear power plants are much more complex than homes or automobiles, leaving innumerable options for spending money to improve safety. In response to escalating public concern, the NRC began implementing some of these options in the early 1970s, and quickened the pace after the Three Mile Island accident.

This process came to be known as “ratcheting.” Like a ratchet wrench which is moved back and forth but always tightens and never loosens a bolt, the regulatory requirements were constantly tightened, requiring additional equipment and construction labor and materials. According to one study, between the early and late 1970s, regulatory requirements increased the quantity of steel needed in a power plant of equivalent electrical output by 41%, the amount of concrete by 27%, the lineal footage of piping by 50%, and the length of electrical cable by 36%. The NRC did not withdraw requirements made in the early days on the basis of minimal experience when later experience demonstrated that they were unnecessarily stringent. Regulations were only tightened, never loosened. The ratcheting policy was consistently followed.

In its regulatory ratcheting activities, the NRC paid some attention to cost effectiveness, attempting to balance safety benefits against cost increases. However, NRC personnel privately concede that their cost estimates were very crude, and more often than not unrealistically low. Estimating costs of tasks never before undertaken is, at best, a difficult and inexact art.

(…)

Clearly, the regulatory ratcheting was driven not by new scientific or technological information, but by public concern and the political pressure it generated. Changing regulations as new information becomes available is a normal process, but it would normally work both ways. The ratcheting effect, only making changes in one direction, was an abnormal aspect of regulatory practice unjustified from a scientific point of view. It was a strictly political phenomenon that quadrupled the cost of nuclear power plants, and thereby caused no new plants to be ordered and dozens of partially constructed plants to be abandoned.

Regulatory Turbulence

We now return to the question of wildly escalating labor costs for construction of nuclear plants. They were not all directly the result of regulatory ratcheting, as may be seen from the fact that they did not occur in the “best experience” projects. Regulatory ratcheting applied to new plants about to be designed is one thing, but this ratcheting applied to plants under construction caused much more serious problems. As new regulations were issued, designs had to be modified to incorporate them. We refer to effects of these regulatory changes made during the course of construction as “regulatory turbulence,” and the reason for that name will soon become evident.

As anyone who has tried to make major alterations in the design of his house while it was under construction can testify, making these changes is a very time-consuming and expensive practice, much more expensive than if they had been incorporated in the original design. In nuclear power plant construction, there were situations where the walls of a building were already in place when new regulations appeared requiring substantial amounts of new equipment to be included inside them. In some cases this proved to be nearly impossible, and in most cases it required a great deal of extra expense for engineering and repositioning of equipment, piping, and cables that had already been installed. In some cases it even required chipping out concrete that had already been poured, which is an extremely expensive proposition.

Constructors, in attempting to avoid such situations, often included features that were not required in an effort to anticipate rule changes that never materialized. This also added to the cost. There has always been a time-honored tradition in the construction industry of on-the-spot innovation to solve unanticipated problems; the object is to get things done. The supercharged regulatory environment squelched this completely, seriously hurting the morale of construction crews. For example, in the course of many design changes, miscalculations might cause two pipes to interfere with one another, or a pipe might interfere with a valve. Normally a construction supervisor would move the pipe or valve a few inches, but that became a serious rule violation. He now had to check with the engineering group at the home office, and they must feed the change into their computer programs for analyzing vibrations and resistance to earthquakes. It might take many hours for approval, and in the meanwhile, pipefitters and welders had to stand around with nothing to do.

Requiring elaborate inspections and quality control checks on every operation frequently held up progress. If an inspector needed extra time on one job, he was delayed in getting to another. Again, craft labor was forced to stand around waiting. In such situations, it sometimes pays to hire extra inspectors, who then have nothing to do most of the time. I cannot judge whether all of these new safety procedures were justifiable as safety improvements, but there was a widespread feeling among those involved in implementing them that they were not. Cynicism became rampant and morale sagged.

Prof. Cohen goes on to document the history of how Greenpeace and friends managed to destroy the Shoreham, Long Island plant — which was eventually sold to NY state for $1.


But the worst delay came after the Shoreham plant was completed. The NRC requires emergency planning exercises for evacuation of the nearby population in the event of certain types of accidents. The utility provides a system of warning horns and generally plans the logistics, but it is necessary to obtain cooperation from the local police and other civil authorities. Officials in Suffolk County, where Shoreham is located, refused to cooperate in these exercises, making it impossible to fulfill the NRC requirement. After years of delay, the NRC changed its position and ruled that in the event of an actual accident, the police and civil authorities would surely cooperate. It therefore finally issued an operating license. By this time the situation had become a political football, with the governor of New York deeply involved. He apparently decided that it was politically expedient to give in to the opponents of the plant. The state of New York therefore offered to “buy” the plant from the utility for $1 and dismantle it, with the utility receiving enough money from various tax savings to compensate for its construction expenditures. This means that the bill would effectively be footed by U.S. taxpayers. As of this writing, there are moves in Congress to prevent this. The ironic part of the story is that Long Island very badly needs the electricity the Shoreham plant can produce.

“Safe, cheap, clean energy is probably the most important thing an individual can do for the future of the world”


“I have studied a lot about what I think is sort of the best use of my time and money and what I think will help the world the most. And I really do believe that safe, cheap, clean energy is probably the most important thing an individual can do for the future of the world.” — SAM ALTMAN

If you listen to the Econtalk interview I think you will agree that Sam has done his homework. Not surprisingly I think his conclusions are indicators of an open, inquiring mind:

“There are two nuclear energy companies in this batch. I believe that the 20th century was clearly the carbon century. And I believe the 22nd century is going to be the atomic power century. I’m very convinced of that. It’s just a question of how long it takes us.”

Y Combinator is well-positioned to harvest the rewards of innovations that require a long development cycle and heaps of capital. Unlike the typical 10-year venture fund, YC makes a large number of small ($120k) bets, 700+ of them since Paul Graham launched YC in 2005. New nuclear generation is obviously a very long-term bet.

Question: will the NRC license a new design that isn’t just a variation of existing PWR designs? How is it possible to innovate in this regulatory environment?

I think it will take way too long and too much capital to launch a new design based on NRC licensing. So Sam Altman’s new ventures will almost certainly have to move to a friendly-regulator nation for the initial licensing. Note: Sam is more optimistic than I am about the NRC. That said, if I were talking my book publicly I would be carefully deferential to the NRC.

Update: I found one of the two YC S14 batch nuclear companies. It is Helion Energy, which is building an SMR concept. But it is FUSION, not fission:

Helion is making a fusion engine 1,000 times smaller, over 500 times cheaper, and realizable 10 times faster than other projects.

Obi Wan: could this be the one that works? There’s a bit more at TechCrunch. Enjoy the Sam Altman interview – it’s not your everyday porridge.

Why is Econtalk interviewing all these Silicon Valley entrepreneurs and VCs? Since Russ Roberts is now full-time at Stanford’s Hoover Institution, he has been spending more of his time at the vortex of the Silicon Valley innovation cluster. One of the benefits is that he is becoming progressively more involved with, and excited about, the innovation culture. So his Econtalk guests include a growing number of Silicon Valley insiders. In July Russ interviewed Sam Altman, CEO of the iconic accelerator Y Combinator (YC). Sam confessed in the interview that he doesn’t filter himself very well – meaning it was a refreshingly frank discussion.

How can the developing world escape poverty without climate change calamity?

This article is the result of some very interesting discussions below a recent TEC article on the potential of coal, nuclear and wind/solar to supply the rapidly growing energy needs of the developing world. In that article, I estimated that nuclear is roughly an order of magnitude less scalable than coal, but more than double as scalable as wind/solar. These estimations were challenged by both nuclear and wind advocates and, as such critical discussions often do, have prompted much closer investigations into this issue. In particular, data pertaining to the near-term prospects of nuclear energy in China, the nation accounting for fully 43% of nuclear plants currently under construction, has been analysed in more detail. — SCHALK CLOETE

Schalk Cloete’s superpower is the ability to execute and explain exactly the analysis required to penetrate a difficult, controversial topic. And there are a few others – you know who you are. 

Schalk’s recent article Can Nuclear Make a Substantial Near-Term Contribution? supports answers to my “most important questions”: How can we help the large fast-growers to make the transition from fossil to clean energy? For discussion, let’s focus on three key cases:

  1. China
  2. India
  3. Africa

The reason I posed this in terms of three different developing regions is that the support & partnership the rich countries can offer is different in each case.

  1. China is already putting more resources than any other nation into building up its nuclear deployment capability. Even so, China can benefit hugely from without-limit contributions of capital, science, and engineering know-how. I left regulatory know-how off that list, though there may be possible contributions there. As it stands today the US NRC is probably mostly a hindrance to the deployment of advanced nuclear – not because of the NRC staff, but because of the budgetary straitjacket imposed by the US Congress (making the ‘customers’ pay for everything up front).
  2. India is improving its nuclear deployment capability at a slow, deliberate pace. But India too could benefit from external technology contributions. Remember that India was cut off for decades from Western nuclear tech as punishment for its indigenous nuclear weapons development.
  3. Africa needs affordable energy-machines that are suitable to their infrastructure and operational capabilities. If Africa does not have access to affordable and suitable nuclear they will have no real choice but to build more and more coal and gas.

Cumulative CO2 avoidance potential over lifetime of investment (Gton CO2)

 

Our affordability challenge is that we need to offer clean, reliable electricity at the best price per ton of CO2 avoided. So what can compete economically with coal and natural gas? If you study Schalk’s chart for a few minutes I think you will conclude, as I have, that we need to pull out all the stops to accelerate deployment of mass-manufactured “nuclear batteries”. By “batteries” I mean simply no-maintenance energy-machines that can be rapidly installed by underground burial, connected to the grid, then left alone for up to four decades until the maintenance crew arrives to replace the “battery”, trundling the original off to the factory for refueling.

China is training up to build and staff Western-style plants like the AP1000 – which China will be building internally on Chinese-owned IP. That is not going to happen soon, or at scale, in Africa. My guess is that India will need some time to develop its skill base and supply chain. Sadly, Greenpeace has succeeded in preventing availability of the simple plants that Africa wants to purchase. Given the reality of the nuclear supply chain, it will be close to two decades before vendors are manufacturing and installing plants suitable for most low-tech nations.

Africa isn’t waiting for someone to make a clean generation option available to energize its growth. Currently seven of the ten fastest-growing economies are in Africa. Sadly, the massive scale of African urbanization and growth is going to be enabled the same way it happened in Europe and North and South America – by building relatively cheap coal and gas plants as fast as they can be built. That trajectory will end very badly unless we get serious about what happens next. We can create a happy ending if, inside the next two decades, we achieve the capability to produce affordable nuclear plants that can be installed and operated without losing two additional decades developing a deeply trained nuclear workforce and local supply chain. By 2050 Africa’s urban population is expected to triple [UN World Urbanization Prospects: The 2011 Revision].

It’s obvious that these SMR designs must be substitutable for the fossil thermal machines that got built in the first phase of dirty industrialization. It will be a lot easier and cheaper if the first-stage dirty plants are designed for such an evolution: rip the dirty heat out, stick the clean heat in.

There’s heaps more to be learned by studying Schalk’s essay, so get on over there. If you find any flaws in his work, please contribute to the dialogue there on TEC (I am subscribed to those comments).

Footnotes from Schalk’s essay: why China’s nuclear avoidance potential is actually greater than the above chart suggests.

[1] It should also be mentioned that the Chinese tariff system favors wind over nuclear by paying a fixed feed-in tariff of $83–100/MWh to wind and $70/MWh to nuclear. Another important factor to consider is the reduced value of wind relative to nuclear due to the variability of wind power (see my previous articles on this subject here and here). Wind power also requires expensive high voltage transmission networks to transport power from good wind locations to population centres, something which is creating substantial challenges. Thus, if the playing field were to be leveled, the difference between nuclear and wind scaling rates should increase substantially.

LNT, UNSCEAR and the NRC “State-of-the-Art Reactor Consequence Analyses”

UNSCEAR 2012: “Therefore, the Scientific Committee does not recommend multiplying very low doses by large numbers of individuals to estimate numbers of radiation-induced health effects within a population exposed to incremental doses at levels equivalent to or lower than natural background levels;”

The main NRC SOARCA page, which indexes the definitive 2012 NRC severe accident study. This study is large so I’ll rely on the NRC’s own words of summary:

SOARCA’s main findings fall into three basic areas: how a reactor accident progresses; how existing systems and emergency measures can affect an accident’s outcome; and how an accident would affect the public’s health. The project’s preliminary findings include:

  • Existing resources and procedures can stop an accident, slow it down or reduce its impact before it can affect public health;
  • Even if accidents proceed uncontrolled, they take much longer to happen and release much less radioactive material than earlier analyses suggested; and
  • The analyzed accidents would cause essentially zero immediate deaths and only a very, very small increase in the risk of long-term cancer deaths.

Rod Adams posted his thorough analysis of SOARCA here, which he summarizes thusly:

  • The individual early fatality risk from SOARCA scenarios is essentially zero.
  • Individual LCF risk from the selected specific, important scenarios is thousands of times lower than the NRC Safety Goal and millions of times lower than the general cancer fatality risk in the United States from all causes, even assuming the LNT dose-response model.

If I may underscore that last: even assuming the LNT dose-response model. For more plain English, here’s UK environmentalist Mark Lynas in Why Fukushima death toll projections are based on junk science:

As the Health Physics Society explains[1] in non-scientific language anyone can understand:

…the concept of collective dose has come under attack for some misuses. The biggest example of this is in calculating the numbers of expected health effects from exposing large numbers of people to very small radiation doses. For example, you might predict that, based on the numbers given above, the population of the United States would have about 40,000 fatal cancers from background radiation alone. However, this is unlikely to be true for a number of reasons. Recently, the International Council on Radiation Protection issued a position statement saying that the use of collective dose for prediction of health effects at low exposure levels is not appropriate. The reason for this is that if the most highly exposed person receives a trivial dose, then everyone’s dose will be trivial and we can’t expect anyone to get cancer. [my emphasis]

The HPS illustrates this commonsensical statement with the following analogy:

Another way to look at it is that if I throw a 1-gram rock at everyone in the United States then, using the collective dose model, we could expect 270 people to be crushed to death because throwing a one-ton rock at someone will surely kill them. However, we know this is not the case because nobody will die from a 1-gram rock. The Health Physics Society also recommends not making risk estimates based on low exposure levels.
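The collective-dose arithmetic the HPS is criticizing can be sketched in a few lines of Python. This is a minimal illustration, not the HPS's own calculation: the 270-million population matches the rock analogy, while the 3 mSv/yr background dose and the roughly 5%-per-sievert LNT risk coefficient are assumed round numbers consistent with the figures discussed elsewhere in this post:

```python
# Sketch of the collective-dose prediction the HPS calls inappropriate.
# All inputs are illustrative assumptions, not values from the HPS text.
population = 270_000_000        # U.S. population (matches the rock analogy)
dose_per_person_sv = 0.003      # ~3 mSv/yr natural background, in sieverts
risk_per_sv = 0.05              # assumed LNT slope: ~5% fatal cancer risk per Sv

collective_dose = population * dose_per_person_sv   # person-sieverts
predicted_deaths = collective_dose * risk_per_sv

print(round(collective_dose))   # 810000 person-Sv
print(round(predicted_deaths))  # 40500 -- the "about 40,000 fatal cancers" figure
```

The objection is not to the multiplication itself but to applying a linear risk coefficient at individual doses where no effect has ever been observed: multiply a trivial per-person dose by a large enough population and you can manufacture any body count you like.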

James Conca explains the UNSCEAR 2012 report, which finally drove a stake into the heart of LNT:

The United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) (UNSCEAR 2012) submitted the report that, among other things, states that uncertainties at low doses are such that UNSCEAR “does not recommend multiplying low doses by large numbers of individuals to estimate numbers of radiation-induced health effects within a population exposed to incremental doses at levels equivalent to or below natural background levels.” (UNDOC/V1255385)

You know, like everyone’s been doing since Chernobyl. Like everyone’s still doing with Fukushima.

Finally, the world may come to its senses and not waste time on the things that aren’t hurting us and spend time on the things that are. And on the people that are in real need. Like the infrastructure and economic destruction wrought by the tsunami, like cleaning up the actual hot spots around Fukushima, like caring for the tens of thousands of Japanese living in fear of radiation levels so low that the fear itself is the only thing that is hurting them, like seriously preparing to restart their nuclear fleet and listening to the IAEA and the U.S. when we suggest improvements.

The advice on radiation in this report will clarify what can, and cannot, be said about low dose radiation health effects on individuals and large populations. Background doses going from 250 mrem (2.5 mSv) to 350 mrem (3.5 mSv) will not raise cancer rates or have any discernible effects on public health. Likewise, background doses going from 250 mrem (2.5 mSv) to 100 mrem (1 mSv) will not decrease cancer rates or affect any other public health issue.

Note – although most discussions are of acute doses (all at once), the same amount delivered as a chronic dose (metered out over a longer time period, like a year) has even less effect. So 10 rem (0.1 Sv) per year, whether acute or chronic, has no observable effect, while 10 rem per month might.

UNSCEAR also found no observable health effects from last year’s nuclear accident in Fukushima. No effects.

The Japanese people can start eating their own food again, and moving back into areas only lightly contaminated with radiation levels that are similar to background in many areas of the world like Colorado and Brazil.

Low-level contaminated soil, leaves and debris in Fukushima Prefecture piling up in temporary storage areas. (Photo by James Hackett, RJLee Group)

The huge waste of money that is passing for clean-up now by just moving around dirt and leaves (NYTimes) can be focused on clean-up of real contamination near Fukushima using modern technologies. The economic and psychological harm wrought by the wrong-headed adoption of linear no-threshold dose effects for doses less than 0.1 Sv (10 rem) has been extremely harmful to the already stressed population of Japan, and to continue it would be criminal.

To recap: LNT, the Linear No-Threshold dose hypothesis, is the supposition that all radiation is harmful and that there is no dose below which harmful effects will not occur. Double the dose, double the cancers. First put forward after WWII by Hermann Muller and adopted by international bodies, including UNSCEAR, its primary use was as a Cold War bargaining chip to force cessation of nuclear weapons testing. The pervasive fear of radiation that followed was a side effect (Did Muller Lie?).

(…snip…)

In the end, if we don’t reorient ourselves on what is true about radiation and not on the fear, we will fail the citizens of Japan, Belarus and the Ukraine, and we will continue to spend time and money on the wrong things…

That’s just Jim’s summary – please read his complete essay for the charts, tables and implications for Japan. And did Muller lie? The evidence seems pretty conclusive that this enormous waste of resources was based on a falsehood. Not to mention the fear – and, in the case of Fukushima, at least a thousand unnecessary deaths due to panic and mismanagement of the evacuation.
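Conca's one-line description of LNT – "double the dose, double the cancers" – can be contrasted with a threshold model using a toy risk function. This is a minimal sketch, not a validated dose-response model: the 5%-per-sievert slope is an assumed round number, and the 100 mSv (0.1 Sv) threshold echoes the cutoff Conca mentions:

```python
RISK_PER_SV = 0.05      # illustrative LNT slope: 5% excess fatal cancer risk per Sv
THRESHOLD_SV = 0.1      # illustrative threshold: 100 mSv (0.1 Sv), per Conca's cutoff

def lnt_risk(dose_sv: float) -> float:
    """LNT: any dose, however small, carries proportional excess risk."""
    return RISK_PER_SV * dose_sv

def threshold_risk(dose_sv: float) -> float:
    """Threshold model: zero excess risk below the threshold."""
    return RISK_PER_SV * max(0.0, dose_sv - THRESHOLD_SV)

# A year of ~3 mSv natural background:
print(round(lnt_risk(0.003), 6))   # 0.00015 -- nonzero, so LNT predicts some deaths
print(threshold_risk(0.003))       # 0.0 -- no predicted effect below the threshold
```

Under LNT even a trivial background dose carries a nonzero predicted risk, which is exactly what makes the collective-dose body counts possible; the threshold model returns zero for the same dose.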

Footnotes:

[1] While link testing, I found that Mark’s HPS link fails – that’s the Internet. Here’s the most recent HPS position statement I could find this morning. Radiation Risk In Perspective: Position Statement Of The Health Physics Society (updated 2010) 

In accordance with current knowledge of radiation health risks, the Health Physics Society recommends against quantitative estimation of health risks below an individual dose of 50 millisievert (mSv) in one year or a lifetime dose of 100 mSv above that received from natural sources. Doses from natural background radiation in the United States average about 3 mSv per year. A dose of 50 mSv will be accumulated in the first 17 years of life and 0.25 Sv in a lifetime of 80 years. Estimation of health risk associated with radiation doses that are of similar magnitude as those received from natural sources should be strictly qualitative and encompass a range of hypothetical health outcomes, including the possibility of no adverse health effects at such low levels.

There is substantial and convincing scientific evidence for health risks following high-dose exposures. However, below 50–100 mSv (which includes occupational and environmental exposures), risks of health effects are either too small to be observed or are nonexistent.

[2] Environmentalist Stewart Brand on the retirement of LNT.

[3] Report of the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) Fifty-ninth session (21-25 May 2012) [PDF]. 

[4] EPA’s decision to allow risk-based decisions to guide responses to radiological events