Martin Rees: Will technology’s dark side eclipse humanity’s bright future?

Martin Rees 10th anniversary.

In November 2015, Martin Rees gave the Oxford Martin School 10th Anniversary Lecture [here’s the video, here’s the transcript]. The theme of the lecture is that the 21st century is special — let’s make sure we get to the other side intact. We humans have technologies under development that bring to mind Stewart Brand’s famous line from the Whole Earth Catalog: “We are as gods and might as well get good at it.” Today Stewart says:

“What I’m saying now is we are as gods and have to get good at it.”

We have to get good at our job because our technologies, from fossil fuels to biotech to AI, give us the opportunity to screw it up. So we need to pay very close attention to making our way successfully through the next 100 years. Lord Rees:

Why is the 21st century special? Our planet has existed for 45 million centuries, but this is the first when one species – ours – can determine the biosphere’s fate. New technologies are transforming our lives and society – they’re even changing human beings themselves. And they’re inducing novel vulnerabilities. Ecosystems are being stressed because there are more of us and we’re all more demanding of resources. We’re deep into what some call the Anthropocene.

And we’ve had one lucky escape already. At any time in the Cold War era, the superpowers could have stumbled towards nuclear Armageddon through muddle and miscalculation. Robert McNamara, US defence secretary at the time of the Cuba crisis, said after he retired that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.”

This is a terrific lecture, applying science-informed optimism to the benefits and risks of some of our most powerful technologies.

For the rest of this talk I’ll focus on a different topic – the promise, and the dark side, of novel technologies that change society and empower individuals – and I’ll venture some speculations about the far future.

We live in a world increasingly dependent on elaborate networks: electric-power grids, air traffic control, international finance, just-in-time delivery, globally-dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns — real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumour literally at the speed of light.

It’s imperative to guard against the downsides of such an interconnected world. Plainly this requires international collaboration. (For instance, whether or not a pandemic gets global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness.)

On pandemics, Oxford Martin colleague Larry Brilliant has taught us how critical it is to invest in “early detection, early response”. Early detection is enabled by the growing power of our networks. Early response is enabled by human and physical infrastructure, and by investing in molecular biology so that we can rapidly analyze detected pathogens, then formulate and manufacture vaccines or antiviral compounds.

One of Martin Rees’s concerns is malign biotech, especially since CRISPR.

Malign or foolhardy individuals have far more leverage than in the past. It is hard to make a clandestine H-bomb. In contrast, biotech involves small-scale dual-use equipment. Millions will one day have the capability to misuse it, just as they can misuse cybertech today. Indeed, biohacking is burgeoning even as a hobby and competitive game.

So what do we do about this risk? Regulation is useless for controlling the behavior of the “malign or foolhardy”. In fact we should be careful not to entangle our best researchers in a net of over-regulation, because our best defense is exactly the rapid detection-and-response capability we build to minimize the impact of natural pandemics.

What about the benefits and risks of advanced AI, specifically artificial general intelligence (AGI)?

The timescale for human-level AI may be decades, or it may be centuries. Be that as it may, it’s but an instant compared to the future horizons, and indeed far shorter than timescales of the Darwinian selection that led to humanity’s emergence.

I think it’s likely that the machines will gain dominance on Earth. This is because there are chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But no such limits constrain silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development over the next billion years could be as dramatic as the evolution from pre-Cambrian organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the future cerebrations of AI.

Moreover, the Earth’s biosphere isn’t the optimal environment for advanced AI – interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop powers that humans can’t even imagine.

But we humans shouldn’t feel too humbled. We could be of special cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities, spreading their influence far beyond the Earth, and far transcending our limitations.

So, even in this ‘concertinaed’ timeline — extending billions of years into the future, as well as into the past — this century may be a defining moment where humans could jeopardise life’s immense potential. That’s why the avoidance of complete extinction has special resonance for an astronomer.

That’s the rationale for the Future of Humanity Institute, the element of the Martin School that addresses ‘existential’ risks on the science fiction fringe.

Watch or read, and please tell your friends. We really, really need to focus much more energy on long-term thinking.

I almost forgot to mention that Martin Rees is a cofounder of another prestigious risk research institution, the Centre for the Study of Existential Risk at Cambridge.

More on the Oxford Martin School. Lastly, good news: our home star is good for another six billion years. Just imagine what we can accomplish before we are forced to move!

Oxford Martin School


Dr. James Martin founded the School in 2005 with Oxford’s largest ever benefaction. The mission of the Oxford Martin School is to develop practical solutions to the really hard problems.

Martin’s vision was that the Oxford Martin School should be a unique, interdisciplinary research community designed to address the most pressing global challenges and opportunities of the 21st century, using rigorous research to find solutions. This is vital because the problems facing humanity are becoming ever more severe, but so too are its new opportunities. A new methodology was needed for interdisciplinary research and problem-solving, and this came to pervade the Oxford Martin School.

The School now has over 30 institutes and projects concerned with different aspects of the future, from the governance of climate change to the possibilities of quantum physics; from the future of food to the implications of an ageing population; and from new economic thinking to nanotechnology in medicine. Each institute can only function by integrating multiple disciplines, and now separate institutes are becoming connected. Together, the different issues of the School connect to form an understanding of our future. The School has over 300 postdoctoral scholars and professors, working across the University of Oxford.

The Advisory Council of the School is populated by some of the most thoughtful and influential people that I know of. In addition to Martin Rees, examples include Nicholas Stern, Larry Brilliant and J. Craig Venter. The faculty is similarly first-rate, including Steve Rayner — one of the principals of the Hartwell Paper [see Kyoto Wrong Trousers: Radically Rethinking Climate Policy]. Steve has also been an important contributor to the birthplace of Ecomodernism at the Breakthrough Institute. See Climate Pragmatism, a revised and updated version of the Hartwell Paper.

The School is also home to the Future of Humanity Institute (FHI), led by Founding Director Prof. Nick Bostrom. Nick is the author of Superintelligence: Paths, Dangers, Strategies, a superb introduction to the challenges of ensuring future AIs are friendly. Nick also directs the Oxford Martin Programme on the Impacts of Future Technology.

Michael Douglas narrates James Martin’s one hour documentary The Meaning Of The 21st Century based on the book of the same title. The film synopsis page says “THIS FILM IS ABOUT THE MOST VITALLY IMPORTANT SUBJECT OF OUR TIME”.

Lord Martin Rees in conversation at The Wellcome Trust


It’s dependably fun and illuminating to see Martin Rees unconstrained by the political “don’t go theres”. So, have some fun with this Lord Martin Rees in conversation at The Wellcome Trust [13 June 2014, 93 minutes]. Skip the first 10 minutes of formalities. Then as the interview begins with ‘where it all started’ Martin explains the basic principles of his grad school experience in the 1960s:

If you go into an established field you can only do something new by doing something that the old guys got stuck on.

Whereas if you go to a field where new things are happening, then the experience of the old guys is at a heavy discount.

Max Planck’s longer quote can be paraphrased as “Science advances one funeral at a time.” I had to endure only part of that experience, as my advisor Woody Bledsoe would try anything promising. But my mathematics chair was a classical guy who insisted that thesis exams concentrate on partial differential equations. Very relevant to planning the Juno rendezvous with Jupiter, not so helpful in computer science. Here’s the challenge: how do we develop young scientists without trapping them inside the boundaries of the department hierarchy?

Enjoy!

Appeals to the climate consensus can give the wrong impression


Image credit John Cook http://www.skepticalscience.com, Closing the Consensus Gap on Climate Change

Dr. Will Howard recently published an essay that will appeal to those of you interested in science communication, especially in the challenging and politically-charged context of climate change. Dr. Howard makes the extremely important point that the “scientific consensus” on climate change reflects a strong consilience of evidence. I confess that I had to look up “consilience” to learn that it is indeed the perfect term to capture how we have developed confidence in our understanding of the causal connections between human-generated greenhouse gases and climate change.

In public discourse, if we had chosen “consilience of evidence” to describe the accumulation of research, then perhaps people might have understood more readily that we are not talking about the results of an opinion poll or a negotiated statement (yes, the IPCC Summary for Policymakers [PDF] is a negotiated statement, though I don’t know how else such a summary could be produced).

I thought Will’s essay captured this science communications challenge succinctly, and especially how this strong consilience of evidence is separate from the politics of what to do about it:

“Consensus” is understood differently in science compared to politics or society.

Scientists use this word to refer to consilience of multiple lines of evidence that underlie widespread agreement or support a theory.

In the case of climate change, multiple lines of evidence underpin the prevailing view that the climate system is showing decade-on-decade warming over the past 50 years.

In particular, this warming bears temporal and spatial patterns, or “fingerprints”, that point to human causes.

For example, the stratosphere (that part of the atmosphere higher than about 11 km) has been cooling as the lower atmosphere and the ocean warm. This is the pattern we expect from the addition of greenhouse gases and not from, say, changes in the sun’s output.

But in public and especially political discourse, “consensus” tends to imply majority opinion or concurrence. As consensus in this public context is often arrived at by negotiation, saying there’s a scientific “consensus” may imply to the community that prevailing scientific views represent a negotiated outcome. This is the antithesis of science.

Consensus of the non-scientific kind does have a role to play in the climate debate. This includes negotiating whether warming is a “good” or “bad” thing and what, if anything, we should do about it.

These are not scientific questions. These are issues of values, politics, ethics and economics. As a nation and as a global society we need to reach consensus to resolve those questions and to make and implement appropriate public policy.

I’ve nothing to add to Will’s excellent essay, so I recommend that you go directly to The Conversation to read the original and the comments. Some effort is required to weed through the growing number of comments, so I will highlight a segment of the conversation that focuses on the important question of effective science communication:

John Cook
Climate Communication Research Fellow at University of Queensland

This is an interesting article with many important points. I would be the first person to stress the importance of communicating the many “fingerprints” being observed in our climate (and in fact have created a human fingerprints infographic which I frequently use in public talks http://www.skepticalscience.com/graphics.php?g=32).

However, the article is missing a crucial element to this discussion – what does the evidence tell us about the efficacy of consensus messaging? A number of studies have found that one of the strongest predictors of public support for climate mitigation policies is perception of consensus (i.e., the level of agreement among climate scientists about human-caused global warming). Also, consensus messaging significantly increases acceptance of climate change. A randomised experiment by Stephan Lewandowsky found that informing Australians of the 97% consensus increased their acceptance of human-caused global warming and intriguingly, the increase was greatest amongst conservatives. In this case, consensus neutralised ideology to some degree.

When people think there is still an ongoing debate about human-caused global warming amongst climate scientists, they’re less likely to accept climate change and support climate action. And given the Australian public on average think there is 58% agreement among climate scientists, rather than 97%, then this misconception has serious societal implications. Science communicators need to take into account that people use expert scientific opinion as a heuristic to inform their views on complex scientific issues.

To underscore this fact, I’ve actually tested the human fingerprints message (linked to above) and the consensus message in a randomised experiment. Consensus messaging significantly outperformed the fingerprints message. The lesson here is that we need to understand how laypeople think about complex scientific issues like climate change.

However, I don’t think there need be that much conflict between what social science is telling us and the views of the OP. A recent paper by Ed Maibach tested various forms of consensus messaging and they found the most effective was a message that emphasised both consensus and the evidence-based nature of the scientific method:

“Based on the evidence, 97% of climate scientists have concluded that human-caused climate change is happening”

John Cook
Climate Communication Research Fellow at University of Queensland
In reply to Anna Young

Anna, the problem you raise is exactly why communications like the John Oliver YouTube video embedded in the OP are so powerful. Not only does Oliver communicate the 97% consensus, he also does something equally important – he communicates how people cast doubt on the consensus (in this case, by perpetuating false balance in the media). What Oliver is doing is equipping people with the knowledge and the critical thinking skills so that when they see the mainstream media show a debate between a climate scientist and a tinfoil guy, they can see it for what it is. It’s not only a funny video, it’s brilliant communication. The fact that it’s been viewed millions of times means millions of people have now been “inoculated” against the misinformation of false debate in the mainstream media.

So kudos to Will Howard for embedding the video.

Will Howard
Research scientist at University of Melbourne
In reply to John Cook


Thanks John, for contributing that perspective. The points you raise, I would suggest, may be applicable to many areas of “contested” science, in health, resources (e.g. coal seam gas) and others. 

Whatever is said about the consensus, I do think we need to do a better job of communicating what underpins it. As your co-author Peter Jacobs notes

“to those suggesting that the consensus message is an appeal to authority that ignores evidence – the consensus exists *because of* the overwhelming physical evidence, which is detailed at length in the scientific literature.”

But I wonder about this: both the consensus and the consilience of evidence (my preferred term) seem to be strengthening, yet public support for policies aimed at mitigating climate change seems not to be.

I note polls suggesting climate change and environmental issues have moved down people’s priorities. Here in Australia, our current government was elected with a major plank in its platform being the removal of the carbon tax. (Whether we agree or disagree with their policy, they ran on that issue and were elected.)

Is this because people are skeptical of the science? Is it just that other issues take on more urgency: jobs, the economy, international conflicts, etc.?

John Cook
Climate Communication Research Fellow at University of Queensland
In reply to Will Howard

I like the term “consilience of evidence” also but when I test-drive it in public talks, it tends to inspire blank looks from the audience. It’s a term that scientists love. Laypeople, not so much. Which is why, again, it’s important that we understand our audience when we do science communication.

Why is public support not changing that much? Public concern about climate change does correlate with economic performance hence the drop in climate concern after the GFC. Another predictor of public concern about climate change is cues from our political leaders so you can see why Australia has a problem in that department at the moment. There’s certainly a number of factors that influence people’s attitudes to climate.

But as I said above, several recent studies have found perception of scientific agreement on climate change is one of the biggest factors. And given public perception of consensus is very low (I measured it at 58% on a representative Australian sample), this misconception is definitely a significant problem. It’s not the only factor delaying public support for climate action but it’s a big one.

Also, communicating the 97% consensus is a lot easier to understand than explanations of why greenhouse gases in the upper atmosphere are more efficient at radiating longwave radiation to space, hence contributing to the cooling stratosphere. From a communication point of view, consensus is a low lying fruit. This is why consensus messaging outperformed fingerprint messaging in my data.

So communicating the 97% consensus can help with removing one roadblock delaying climate action. It won’t fix everything – it’s not a magic bullet. But ignoring the “consensus gap” only serves to give extra life to that stumbling block.

I wrote a post a while back, How to break the climate change gridlock, including a conversation with Andrew Dessler, Professor of Atmospheric Sciences at Texas A&M, about how we might more explicitly get each party’s values and economic interests on the negotiating table.

Will Howard has received funding from the Australian Research Council, the Australian Government Department of Climate Change, the Cooperative Research Centres Program, and the Australian Antarctic Science Program.

This article was originally published on The Conversation. Read the original article.

Good advice for science advisors – from the book “Future Directions for Scientific Advice in Whitehall”

I was surprised to find a well-informed essay on science policy – in, of all places, the Guardian. At the end of the article I found out why this is such a good essay:

Geoff Mulgan is chief executive of Nesta, the UK’s innovation foundation. He is on Twitter @geoffmulgan. This article is from the book Future Directions for Scientific Advice in Whitehall (edited by Robert Doubleday and James Wilsdon) which is free to download here from 18 April 2013.

The essay is longish for sound reasons – here are some excerpts from the concluding paragraphs to motivate you to read the complete essay:

(…) Formal scientific knowledge sits alongside these other types of knowledge, but does not automatically trump the others. Indeed, a politician, or civil servant, who acted as if there was a hierarchy of knowledge with science sitting unambiguously at the top, would not last long. The consequence is that a scientist who can mobilise other types of knowledge on his or her side is likely to be more effective than one that cannot; for example, by highlighting the economic cost of future floods and their potential effect on political legitimacy, as well as their probability.

These points help to explain why the role of a chief scientific adviser can be frustrating. Simply putting an eminent scientist into a department may have little effect, if they don’t also know how to work the system, or how to mobilise a large network of contacts. Not surprisingly, many who aren’t well prepared for their roles as brokers, feel that they rattle around without much impact.

For similar reasons, some of the other solutions that have been used to raise the visibility and status of scientific advice have tended to disappoint. Occasional seminars for ministers or permanent secretaries to acclimatise them to new thinking in nanotechnology or genomics are useful but hardly sufficient, when most of the real work of government is done at a far more junior level. This is why some advocate other, more systematic, approaches to complement what could be characterised as the “clever chap” theory of scientific advice.

First, these focus on depth and breadth: acclimatising officials and politicians at multiple levels, and from early on, to understanding science, data and evidence through training courses, secondments and simulations; influencing the media environment as much as insider decision making (since in practice this will often be decisive in determining whether advice is heeded); embedding scientists at more junior levels in policy teams; linking scientific champions in mutually supportive networks; and opening up more broadly the world of evidence and data so that it becomes as much part of the lifeblood of decision making as manifestos.

Here the crucial point is that the target should not just be the very top of institutions: the middle and lower layers will often be more important. A common optical mistake of eminent people in London is to overestimate the importance of the formal relative to the informal, the codified versus the craft.

Second, it’s vital to recognise that the key role of a scientific adviser is to act as an intermediary and broker rather than an adviser, and that consequently their skills need to be ones of translation, aggregation and synthesis as much as deep expertise. So if asked to assess the potential commercial implications of a new discovery such as graphene; the potential impact of a pandemic; or the potential harms associated with a new illegal drug, they need to mobilise diverse forms of expertise.

Their greatest influence may come if – dare I say it – they are good at empathising with ministers who never have enough time to understand or analyse before making decisions. Advisers who think that they are very clever while all around them are a bit thick, and that all the problems of the world would be solved if the thick listened to the clever, are liable to be disappointed.

(…) In optimistic moments, I hope that we are moving towards a period of more overtly experimentalist governance, where governments are willing to test their ideas out – to run RCTs and embed continuous learning and feedback into everything they do. Experimental government would certainly be better than government by instinct, government by intuition and government solely guided by ideology.

In such a context, the old model of a clever man given a desk in Whitehall, sitting in a corner writing memos may be even more anachronistic. We certainly need highly intelligent eminent experts to guide decisions. We need to pay more comprehensive and sophisticated attention not only to the supply of useful knowledge, but also to how that knowledge is used. By doing this, governments and advisers can make more informed decisions, fewer mistakes and respond better to the complex problems they face. But let’s be as serious in making use of the evidence about evidence, as we are about the evidence itself.

Highly recommended!


Nature: Time to confront academic fraud

A fraud rate of one percent is much higher than I thought. Yes, these are just a couple of studies, but they certainly raise the question: what are appropriate countermeasures?

Considerable hard data have emerged on the scale of misconduct. A metastudy (D. Fanelli PLoS ONE 4, e5738; 2009) and a detailed screening of all images in papers accepted by The Journal of Cell Biology (M. Rossner The Scientist 20 (3), 24; 2006) each suggest that roughly 1% of published papers are fraudulent. That would be about 20,000 papers worldwide each year.

At the time of the Baltimore case, it was widely argued that research misconduct was insignificantly rare — and irrelevant to the progress of science, which would self-correct. Few senior scientists now believe that. They know that misconduct exists and that, unchecked, it can undermine public regard for science and scientists.
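A quick sanity check of the quote’s arithmetic: a 1% rate yielding about 20,000 fraudulent papers implies a base of roughly two million papers published worldwide each year (the two-million figure is inferred from the quote, not stated in it):

```python
# Working backwards from the Nature figures to the implied publication base.
fraud_rate = 0.01              # ~1% of published papers (Fanelli; Rossner)
fraudulent_per_year = 20_000   # Nature's worldwide estimate

papers_per_year = fraudulent_per_year / fraud_rate
print(f"Implied global output: {papers_per_year:,.0f} papers/year")  # 2,000,000
```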

 

False Positive Science

This science policy post by Roger Pielke Jr. is a gem. You’ll want to keep these principles in mind whenever you read new research press releases (much of the science reporting you read in the media is regurgitated press releases). Here’s an excerpt:

(…) The problem of “false positive science” is of course not limited to the discipline of psychology or even the social sciences. Simmons et al. provide several excellent empirical examples of how ambiguity in the research process leads to false positives and offer some advice for how the research community might begin to deal with the problem.

Writing at The Chronicle of Higher Education, Geoffrey Pullum says that a gullible and compliant media makes things worse:

Compounding this problem with psychological science is the pathetic state of science reporting: the problem of how unacceptably easy it is to publish total fictions about science, and falsely claim relevance to real everyday life.

Pullum provides a nice example of the dynamics discussed here in the recent case of the so-called “QWERTY effect”, which is also dissected here. On this blog I’ve occasionally pointed to silly science and silly reporting, as well as good science and good reporting — which on any given topic is all mixed up together.

When prominent members of the media take on an activist bent, the challenge is further compounded. Of course, members of the media are not alone in their activism through science. The combination of ambiguity, researcher interest in a significant result and research as a tool of activism makes sorting through the thicket of knowledge a challenge in the best of cases, and sometimes just impossible.

The practical conclusion to draw from Simmons et al. is that much of what we think we know based on conventional statistical studies published in the academic literature stands a good chance of just not being so — at a rate certainly higher than the 5% used as the threshold for significance. Absent solid research, we simply can’t distinguish empirically between false and true positives, which means we fall back on other criteria, like political expediency. Knowing what to know turns out to be quite a challenge.
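To make the Simmons et al. point concrete, here is a minimal simulation, my own sketch rather than their code, of a single “researcher degree of freedom”: measuring two correlated outcome variables and reporting whichever one reaches p < .05. The null hypothesis is true in every simulated study, yet the false-positive rate lands well above the nominal 5%:

```python
# Sketch: how one researcher degree of freedom inflates false positives.
# Assumptions (mine, for illustration): two outcome measures correlated at
# r = 0.5, 20 subjects per group, alpha = 0.05, null true throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 10_000, 20
cov = [[1.0, 0.5], [0.5, 1.0]]  # correlation between the two measures
false_positives = 0

for _ in range(n_studies):
    a = rng.multivariate_normal([0, 0], cov, n_per_group)  # "treatment" group
    b = rng.multivariate_normal([0, 0], cov, n_per_group)  # "control" group
    # Flexibility: report whichever outcome measure comes out significant.
    p1 = stats.ttest_ind(a[:, 0], b[:, 0]).pvalue
    p2 = stats.ttest_ind(a[:, 1], b[:, 1]).pvalue
    if min(p1, p2) < 0.05:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_studies:.1%}")  # ~8-10%, not 5%
```

Stack several such freedoms together (optional stopping, covariate choices, dropping conditions) and Simmons et al. show the combined rate can climb past 60%.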

Roger Pielke Jr.: US science and politics

Roger irritates advocates of all political leanings — because he keeps insisting on the facts. Such as this essay, which highlights just a few of the Bush/Obama administration cases of “political interference”:

Why have a number of areas of US science become so politicized?

One answer to this question is that those concerned about science in politics have ceded discussion of issues of science policy to the most overtly partisan, many of whom see science as nothing more than a convenient tool to extract political advantage. This dynamic manifests itself in the overwhelming selectivity of attention among those who purport to be concerned about science in politics.

Consider a few examples:

Remember when James Hansen was told that his access to the media would be limited and controlled by minders at NASA? Of course you do. It has been a talking point for years.

But what about when the Obama Administration recently muzzled scientists and other officials at the Department of Health and Human Services? If you frequent the science corner of the blogosphere you might have missed it (though if you visit the conservative-o-sphere you may have seen it). Here is what one long-time journalist said about the policy:

The new formal HHS Guidelines on the Provision of Information to the News Media represent, to this 36-year veteran of reporting FDA news, a Soviet-style power-grab. By requiring all HHS employees to arrange their information-sharing with news media through their agency press office, HHS has formalized a creeping information-control mechanism that informally began during the Clinton Administration and was accelerated by the Bush and Obama administrations.

AAAS? Chris Mooney? Crickets. Remember when the Bush Administration was accused of couching its ideological preferences in the name of science in order to prohibit research on stem cells? Well, of course you do.

But what about the Obama Administration’s hiding its decision to close Yucca Mountain behind science? As President Obama’s spokesman explained:

“I think what has taken Yucca Mountain off the table in terms of a long-term solution for a repository for our nuclear waste is the science. The science ought to make these decisions.”

Read the whole thing »

The Hartwell Paper: Oblique strategies

…in The Economist of 11 May 2010 there’s a discussion of the Hartwell Paper:

(…) Where the Hartwell paper becomes controversial is in its approach to decarbonisation. The authors argue that the large emerging economies are clearly fuelling themselves with renewables and nuclear as well as, rather than instead of, fossil fuels, for various reasons, and that this will not change soon. Nor, they imply, should it. They argue that there is something wrong with a world in which carbon-dioxide levels are kept to 450 parts per million (a trajectory widely deemed compatible with a 2 degree cap on warming) but at the same time more than a billion of the poorest people are left without electricity, as in one much discussed scenario from the International Energy Agency.

Their oblique approach is to aim instead for a world with accessible, secure low cost energy for all. The hope, intuition or strategy at play here is that since fossil fuels cannot deliver such a world, its achievement will, in itself, bring about decarbonisation on a massive scale. Following a path stressing clean energy as a development issue provides a more pleasant journey to the same objective.

(…) The Hartwellites do not disagree with the science in general and certainly don’t think there is no reason to act. They simply doubt that action along this one axis (carbon-dioxide reduction) can ever be made politically compelling. Instead, their oblique strategies (…) are to concentrate on easy opportunities and efficiency, energy and dignity.

In the comments I found the following observation from one of our favorite energy policy analysts/observers, the pseudonymous “harrywr2”:

One of the problems in the ‘energy debate’ is that various institutions use the ‘average’ price of coal to decide which actions may or may not make ‘economic’ sense.

The world’s greatest pile of coal sits in Gillette, Wyoming, where one can show up with a pickup truck and get a ton of coal for $12. There aren’t any ‘alternative’ energy options available that will ever compete against $12/ton coal.

In the ‘real’ world, coal has to be shipped to a market. That $12/ton coal in Wyoming ends up costing $100/ton by the time it is put on a train, hauled over the Rocky Mountains, put on a boat and floated across the Pacific to China.

The Copenhagen folks I suppose could point to the level of investment the Chinese are making in hydro, nuclear and wind and congratulate themselves on finally convincing the Chinese of the need to be ‘environmentally friendly’.

Or one could take another view and conclude that the Chinese calculated the cost of importing coal from Wyoming and decided that ‘alternative energy’ was cheaper and as a bonus they would be congratulated by the Copenhagen folks for finally becoming ‘environmentally conscious’.

If one believes the latter, then the ‘Hartwell’ focus makes more sense.

Global treaties to reduce CO2 emissions are only going to happen if they coincide with the goal of ‘cheap plentiful electricity for all’.

As Harry outlines, my shorthand of “cheaper than coal” can be misleading unless regionally nuanced. I think that hurdle is valid for most Chinese utility investment decisions – but it obviously does not incentivize a Wyoming-region utility to choose a low-carbon option.
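Harry’s point can be made concrete with a back-of-envelope conversion from coal price to fuel cost per MWh. A sketch, under my own assumptions (Powder River Basin coal at roughly 8,800 Btu/lb, i.e. about 17.6 MMBtu per short ton, and a typical plant heat rate of 10,500 Btu/kWh; neither figure comes from the comment):

```python
# Back-of-envelope: coal price ($/ton) -> fuel cost of generation ($/MWh).
# Assumed values, not from the post: Powder River Basin coal ~8,800 Btu/lb
# (~17.6 MMBtu per 2,000-lb short ton); plant heat rate 10,500 Btu/kWh.
MMBTU_PER_TON = 8_800 * 2_000 / 1e6   # 17.6 MMBtu per short ton
HEAT_RATE = 10_500 * 1_000 / 1e6      # 10.5 MMBtu per MWh generated

def fuel_cost_per_mwh(price_per_ton: float) -> float:
    """Fuel component of generating cost in $/MWh."""
    return price_per_ton / MMBTU_PER_TON * HEAT_RATE

for label, price in [("Gillette mine mouth", 12), ("Delivered to China", 100)]:
    print(f"{label}: ${fuel_cost_per_mwh(price):.0f}/MWh")
# Gillette mine mouth: $7/MWh -- nothing competes with that.
# Delivered to China: $60/MWh -- nuclear, hydro and wind can compete.
```

At around $7/MWh fuel cost, nothing beats mine-mouth coal; at around $60/MWh delivered, the low-carbon options start to look like the cheaper bet, which is exactly the regional nuance.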