I appreciated the real-world engineering perspective offered by Jean-Marc Desperrier on Mark Diesendorf’s response. Jean-Marc is commenting on the BNC discussion of the Elliston, Diesendorf and MacGill paper “Simulations of Scenarios with 100% Renewable Electricity in the Australian National Electricity Market”, EDM (2011). Jean-Marc is responding to Dr. Diesendorf’s rebuttal of Peter Lang’s analysis.
@John Morgan: I think Dr Diesendorf’s assessment of the NEM reliability requirement indeed has some very serious problems:
– The NEM allows at most 10 minutes of unavailability per year (0.002%), and that becomes several hours in his papers.
– But even if he had the number right, he doesn’t realize that what he is doing is deliberately designing a system so that it fails 0.002% of the time, instead of trying to guarantee that the system will not fail more than 0.002% of the time. The difference between the two is absolutely huge.
Instead of planning for the shortfall not to happen, while accepting a limited risk of it happening anyway, he is planning in the reference scenario for it to happen at the very maximum of the allowed limit. That is not serious engineering.
Of course, Dr Diesendorf is not an engineer; he has never had, in real life, to design a system to meet a requirement like that, bearing the responsibility that a failure of the system to work as planned might cost his company millions, or simply put it belly up (and, in the case of an energy supply system, might cost lives). But I hope he will listen to engineers telling him that the requirement does not mean what he believes.
From an engineering point of view, it’s obvious the 0.002% requirement is really, really hard: it means that planned maintenance periods, combined with unexpected malfunctions among the remaining systems, must still not impact the availability of the whole system, because the probability of such a combination is already more than 0.002%, and 10 minutes leaves no time at all to fix anything.
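To make the arithmetic behind the 10-minute figure concrete, here is a minimal sketch (not from the original discussion) that converts an unavailability requirement into an annual downtime budget:

```python
# Convert an unavailability fraction into an annual downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(unavailability):
    """Annual downtime allowed for a given unavailability fraction."""
    return unavailability * MINUTES_PER_YEAR

# The 0.002% NEM figure quoted above:
print(allowed_downtime_minutes(0.00002))  # about 10.5 minutes per year
```

As the comment notes, roughly ten and a half minutes a year is not enough time to diagnose and fix anything, which is why the budget has to be met by design margin rather than by repair.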
Usually, the method you actually end up with to meet reliability requirements is, at some stage, to stop trying to calculate the failure probability more precisely and instead build in redundancy well above what should be needed, and still be left hoping that some very unfortunate chain of events won’t break it anyway.
You never, ever aim for the allowed failure rate, as is done in that paper; you aim significantly below it, so that all the unknowns in the actual implementation still leave you with an adequate likelihood of not ending up above it. And frequently, even having done that, you fail.
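The margin argument above can be sketched with a hypothetical redundancy model (my own illustration, not Jean-Marc’s): with n independent, identical units, each unavailable a fraction p of the time, the system is down only when all n are down at once.

```python
# Hypothetical redundancy sketch. Independence between units is an
# optimistic assumption -- common-mode failures make the real number
# worse, which is exactly why the calculated figure must sit far
# below the target, never at it.

def system_unavailability(p, n):
    """Fraction of time all n independent units are down simultaneously."""
    return p ** n

target = 0.00002  # the 0.002% limit discussed above

# Sizing a single unit to hit the target exactly leaves zero margin:
single = system_unavailability(target, 1)    # right at the limit

# Two redundant units, each only 99.9% available, land well below it:
redundant = system_unavailability(0.001, 2)  # about 1e-06, ~20x margin

print(single, redundant, redundant < target)
```

The point of the sketch: the redundant design’s calculated unavailability is roughly twenty times below the limit, so the inevitable unknowns (correlated failures, maintenance overlaps) can erode the margin without immediately breaching the requirement.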
UPDATE: Jean-Marc Desperrier adds an important clarification 21 July, 2012:
I feel I should specify that I was actually speaking from the personal perspective of how one designs a computing architecture to meet an SLA commitment, and how difficult it gets when you reach multiple nines (99.99x%). Here, with 10 minutes a year, it’s almost at five nines, which gets *really* hard. On the other hand, I’ve had some exchanges with people actually working in the energy sector, and I’m confident the methods they use are completely similar when this kind of system reliability is required. I note that the Wikipedia entry on nines in engineering, http://en.wikipedia.org/wiki/Nines_%28engineering%29, explicitly refers to the electricity case.
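A rough table of the “nines” downtime budgets mentioned in the clarification can be generated directly (my own sketch; the nines levels are the standard availability ladder, not figures from the paper):

```python
# Downtime budget per year for each "nines" availability level.
MINUTES_PER_YEAR = 365 * 24 * 60

for n in range(3, 6):
    unavailability = 10 ** (-n)
    print(f"{1 - unavailability:.{n}%} ({n} nines): "
          f"{unavailability * MINUTES_PER_YEAR:.2f} min/year")
```

Four nines allows about 53 minutes a year and five nines about 5.3 minutes, so the NEM’s roughly 10.5-minute budget sits between the two, close to five nines, as the comment says.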