Martin Rees: Will technology’s dark side eclipse humanity’s bright future?


In November 2015, Martin Rees gave the Oxford Martin School 10th Anniversary Lecture [here’s the video, here’s the transcript]. The theme of the lecture is that the 21st century is special — let’s make sure we get to the other side intact. We humans have technologies under development that make me think of Stewart Brand’s famous line from the Whole Earth Catalog: “We are as gods and might as well get good at it.” Today Stewart says:

“What I’m saying now is we are as gods and have to get good at it.”

We have to get good at our job because our technologies, from fossil fuels to biotech to AI, give us the opportunity to screw it up. So we need to pay very close attention to making our way successfully through the next 100 years. Lord Rees:

Why is the 21st century special? Our planet has existed for 45 million centuries, but this is the first when one species – ours – can determine the biosphere’s fate. New technologies are transforming our lives and society – they’re even changing human beings themselves. And they’re inducing novel vulnerabilities. Ecosystems are being stressed because there are more of us (world population is higher) and we’re all more demanding of resources. We’re deep into what some call the Anthropocene.

And we’ve had one lucky escape already. At any time in the Cold War era, the superpowers could have stumbled towards nuclear Armageddon through muddle and miscalculation. Robert McNamara, US defence secretary at the time of the Cuba crisis, said after he retired that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.”

This is a terrific lecture, applying science-informed optimism to the benefits and risks of some of our most powerful technologies.

For the rest of this talk I’ll focus on a different topic – the promise, and the dark side, of novel technologies that change society and empower individuals – and I’ll venture some speculations about the far future.

We live in a world increasingly dependent on elaborate networks: electric-power grids, air traffic control, international finance, just-in-time delivery, globally-dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns — real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumour literally at the speed of light.

It’s imperative to guard against the downsides of such an interconnected world. Plainly this requires international collaboration. (For instance, whether or not a pandemic gets global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness.)

On pandemics, Oxford Martin colleague Larry Brilliant has taught us how critical it is to invest in “early detection, early response”. Early detection is enabled by the growing power of our networks. Early response is enabled by human and physical infrastructure, and by investing in molecular biology so that we can rapidly analyze detected pathogens, then formulate and manufacture vaccines or antiviral compounds.

One of Martin Rees’s concerns is malign biotech, especially since CRISPR.

Malign or foolhardy individuals have far more leverage than in the past. It is hard to make a clandestine H-bomb. In contrast, biotech involves small-scale dual-use equipment. Millions will one day have the capability to misuse it, just as they can misuse cybertech today. Indeed, biohacking is burgeoning even as a hobby and competitive game.

So what do we do about this risk? Regulation is useless for controlling the behavior of the “malign or foolhardy”. In fact, we should be mindful not to entangle our best researchers in a net of over-regulation, because our best defense is precisely the rapid detection-and-response capability we build to minimize the impact of natural pandemics.

What about the benefits and risks of advanced AI, specifically artificial general intelligence (AGI)?

The timescale for human-level AI may be decades, or it may be centuries. Be that as it may, it’s but an instant compared to the future horizons, and indeed far shorter than timescales of the Darwinian selection that led to humanity’s emergence.

I think it’s likely that the machines will gain dominance on Earth. This is because there are chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But no such limits constrain silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development over the next billion years could be as dramatic as the evolution from pre-Cambrian organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the future cerebrations of AI.

Moreover, the Earth’s biosphere isn’t the optimal environment for advanced AI – interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop powers that humans can’t even imagine.

But we humans shouldn’t feel too humbled. We could be of special cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities, spreading their influence far beyond the Earth, and far transcending our limitations.

So, even in this ‘concertinaed’ timeline — extending billions of years into the future, as well as into the past — this century may be a defining moment where humans could jeopardise life’s immense potential. That’s why the avoidance of complete extinction has special resonance for an astronomer.

That’s the rationale for the Future of Humanity Institute, the element of the Martin School that addresses ‘existential’ risks on the science fiction fringe.

Watch or read, and please tell your friends. We really, really need to focus much more energy on long-term thinking.

I almost forgot to mention that Martin Rees is a cofounder of another prestigious risk research institution, the Centre for the Study of Existential Risk at Cambridge.

More on the Oxford Martin School. Lastly, good news: our home star is good for another six billion years. Just imagine what we can accomplish before we are forced to move!

Oxford Martin School


Dr. James Martin founded the School in 2005 with Oxford’s largest ever benefaction. The mission of the Oxford Martin School is to develop practical solutions to the really hard problems.

Martin’s vision was that the Oxford Martin School should be a unique, interdisciplinary research community designed to address the most pressing global challenges and opportunities of the 21st century, using rigorous interdisciplinary research to find solutions. This is vital because the problems facing humanity are severe, but so too are its new opportunities. A new methodology was needed for interdisciplinary research and problem-solving, and this came to pervade the Oxford Martin School.

The School now has over 30 institutes and projects concerned with different aspects of the future, from the governance of climate change to the possibilities of quantum physics; from the future of food to the implications of an ageing population; and from new economic thinking to nanotechnology in medicine. Each institute can only function by integrating multiple disciplines, and now separate institutes are becoming connected. Together, the different issues of the School connect to form an understanding of our future. The School has over 300 postdoctoral scholars and professors, working across the University of Oxford.

The Advisory Council of the School is populated by some of the most thoughtful and influential people that I know of. In addition to Martin Rees, examples include Nicholas Stern, Larry Brilliant and J. Craig Venter. The faculty is similarly first-rate, including Steve Rayner — one of the principals of the Hartwell Paper [see Kyoto Wrong Trousers: Radically Rethinking Climate Policy]. Steve has also been an important contributor to the birthplace of Ecomodernism at the Breakthrough Institute. See Climate Pragmatism, a revised and updated version of the Hartwell Paper.

The School is also home to the Future of Humanity Institute (FHI), led by Founding Director Prof. Nick Bostrom. Nick is the author of Superintelligence: Paths, Dangers, Strategies, a superb introduction to the challenges of ensuring future AIs are friendly. Nick also directs the Oxford Martin Programme on the Impacts of Future Technology.

Michael Douglas narrates James Martin’s one hour documentary The Meaning Of The 21st Century based on the book of the same title. The film synopsis page says “THIS FILM IS ABOUT THE MOST VITALLY IMPORTANT SUBJECT OF OUR TIME”.