Jean-Marc Desperrier: we don’t engineer the grid to fail – perspective on EDM (2011)

I appreciated the real-world engineering perspective offered by Jean-Marc Desperrier on Mark Diesendorf’s response. Jean-Marc is commenting on the BNC discussion of the Elliston, Diesendorf and MacGill paper “Simulations of Scenarios with 100% Renewable Electricity in the Australian National Electricity Market”, EDM (2011). Here Jean-Marc is responding to Dr. Diesendorf’s rebuttal of Peter Lang’s analysis.

@John Morgan: I think Dr Diesendorf’s assessment of the NEM reliability requirement does indeed have some very serious problems: the NEM allows at most 10 minutes of unavailability per year (0.002%), and that becomes several hours in his papers. But even if he had the number right, he really doesn’t realize that what he is doing is deliberately designing a system so that it fails 0.002% of the time, instead of trying to guarantee that the system will not fail more than 0.002% of the time. The difference between the two is absolutely huge.

Instead of planning for the shortfall not to happen, while allowing a limited risk of it happening anyway, he is planning in the reference scenario for it to happen at the very maximum of the allowed limit. This is not serious engineering.

Of course, Dr Diesendorf is not an engineer; he has never had, in real life, to design a system so that it meets a requirement like that, bearing the responsibility that a failure of the system to work as well as planned may cost his company millions, or simply put it belly up (and, in the case of an energy supply system, may also cost lives). But I hope he will listen to the engineers telling him that the requirement does not mean what he believes.

From an engineering point of view, it’s obvious the 0.002% requirement is really, really hard: it means planned maintenance periods, combined with unexpected malfunctions in the remaining systems, must still not impact the availability of the whole system, because the probability of such a combination is already more than 0.002%, and 10 minutes leaves no time at all to fix anything.

Usually the method you end up with to meet reliability requirements is, at some stage, to stop trying to be more precise in the calculation of failure probability and instead incorporate redundancy well above what should be needed, and still be left hoping that some very unfortunate chain of events won’t happen and break it anyway.

You never, ever aim for the allowed failure rate, as is done in that paper; you aim significantly below it, to have at least a chance that all the unknowns in the actual implementation of the solution still leave you with an adequate likelihood of not ending up above it. And frequently, even having done that, you fail.
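Jean-Marc’s point about designing well below the allowed failure rate can be illustrated with a minimal sketch: a simple n-out-of-k binomial redundancy model. The unit count and per-unit availability below are invented for illustration only and are not taken from EDM (2011) or the NEM.

    # Illustrative only: a simple binomial redundancy model showing why engineers
    # design with margin rather than to the exact limit. All numbers are invented.
    from math import comb

    def prob_shortfall(n_units, units_needed, unit_availability):
        """Probability that fewer than `units_needed` of `n_units` independent,
        identical units are available at a given moment."""
        p = unit_availability
        return sum(comb(n_units, k) * p**k * (1 - p)**(n_units - k)
                   for k in range(units_needed))

    # No spare units: even very reliable units miss a tight target badly.
    print(prob_shortfall(n_units=20, units_needed=20, unit_availability=0.98))  # ~0.33
    # Three spare units: the shortfall probability drops by more than two orders of magnitude.
    print(prob_shortfall(n_units=23, units_needed=20, unit_availability=0.98))  # ~0.001

Even this toy model assumes independent failures; correlated events (weather, common-mode faults) make the real problem harder, which is exactly why practical designs carry still more margin than the arithmetic alone suggests.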

UPDATE: Jean-Marc Desperrier adds an important clarification, 21 July 2012:

I feel I should specify that I was actually talking from the personal perspective of how one designs a computing architecture to meet an SLA commitment, and how difficult it gets when you reach multiple nines (99.99xxx%). Here, with 10 minutes a year, it’s almost at five nines, which gets *really* hard. On the other hand, I’ve had some exchanges with people actually working in the energy sector, and I’m confident the methods they use are entirely similar when this kind of system reliability is required. I notice that the Wikipedia entry about nines in engineering, http://en.wikipedia.org/wiki/Nines_%28engineering%29, explicitly refers to the electricity case.
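For reference, here is a small sketch converting availability “nines” into allowed downtime per year, and the roughly 10-minute NEM figure back into an availability level. This is just the arithmetic; nothing in it comes from Jean-Marc’s comment or from EDM (2011).

    # Convert availability "nines" to allowed downtime per year, and convert the
    # ~10 minutes/year figure (0.002% unavailability) back to an availability.
    minutes_per_year = 365.25 * 24 * 60

    for nines in (3, 4, 5):
        unavailability = 10 ** (-nines)
        downtime = minutes_per_year * unavailability
        print(f"{1 - unavailability:.3%} available -> {downtime:6.1f} min/year down")

    # 10 minutes a year sits between four and five nines:
    print(f"10 min/year -> {1 - 10 / minutes_per_year:.5%} availability")

The output shows roughly 526, 53, and 5 minutes of allowed downtime per year for three, four, and five nines respectively; ten minutes a year works out to about 99.998% availability, consistent with “almost at five nines.”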

2 thoughts on “Jean-Marc Desperrier: we don’t engineer the grid to fail – perspective on EDM (2011)”

  1. Although I agree that renewable energy is not practical as the major source of power for large developed countries, I do think that we have to realize that no power system is 100% reliable. If renewable sources could actually provide reliable power 100% of the time except for 10 minutes a year (which I doubt), wouldn’t that be satisfactory? However, several hours of failure per year would not be acceptable.

    Of course there are critical uses of electricity where such an outage could be fatal, such as in operating rooms (operating theaters for speakers of British English) and life support equipment, but those applications already use batteries and stand-by generators as back-up.

  2. The assumption here is that “100% renewable” means the redundancy required to back up intermittent solar and wind must itself be renewable. EDM (2011) proposes biogas as the dispatchable backup.

    If we are willing to project that sufficient and affordable biogas-fired power will be installed, then we could presumably engineer to a practical standard as Jean-Marc outlines. How much extra redundancy would be required above EDM (2011) I don’t know; 40% of peak power?

    As Peter Lang and others have documented, there are several layers of faulty assumptions in the EDM (2011) analysis. Two standouts are (a) the enormous cost of such a system, a burden no economy could shoulder, and (b) the biogas backup, which is not real-world even at multiples of current LCOE. Robert Rapier has covered the impracticality of the land use and infrastructure required to deliver sufficient feedstock to biomass converters at scale. See, e.g., Robert’s recent post, wherein he notes that

    a newly released study from Purdue reiterates the points I have made: ‘Without solving the logistical issues, commercial production of second-generation biofuels will not take place.’
