
© 2019 Geneva Global Initiative (Assoc.), Geneva, Switzerland





Licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0)

Artificial Intelligence and Existential Risk:
A Soothing Balm for Our Troubled Times?

One of the curious developments of the last few years has been the growing interest in existential risks to humanity arising from the emergence of Artificial Intelligence.

Stephen Hawking, Elon Musk and Bill Gates are among the high-profile people who have expressed concern. The Machine Intelligence Research Institute (https://intelligence.org/), the Centre for the Study of Existential Risk at Cambridge University (https://www.cser.ac.uk/), and the Future of Humanity Institute at Oxford (https://www.fhi.ox.ac.uk/) are investing considerable resources in exploring such risks and their mitigation.

The central idea is that we create a machine with such a high level of general intelligence that it starts augmenting its own capacities in a self-reinforcing cycle, until it becomes to us as we are to bacteria.

A growing capacity for intelligence is linked to greater structural and dynamic complexity, which we can understand as having more parts, with more interactions between them happening at faster rates. Such an intelligence would also need to be embodied in a way that lets it sense the environment in which it’s embedded, access raw materials, and manufacture, maintain, upgrade, monitor and defend itself.

All growth in complexity, from the beginning of the universe, to life, to socio-economic systems, to the AI machine destined to destroy us, represents the increasingly complex organisation of matter. Maintaining that increasing complexity requires rising flows of energy and matter. One way to consider the process is to imagine lots of nascent AI machines evolving to respond more effectively to their environment, both human and natural. The ‘winning’ machine will be the one that is most effective at maximising its energy and resource throughput to maintain itself, increase its intelligence, and respond to new problems.

Humans and their civilisation have been escalating their consumption of energy per unit time, especially over the last 250 years, having evolved under a similar maximum power principle. But high-quality energy is finite. If the AI machine is to survive in an environment of restricted resources alongside a human competitor, it might decide it has to eliminate us.

Maybe it would cut off our food supply – by shutting down the infrastructures that allow us to manufacture seeds and fertilizers, or the transport networks and financial systems that stand behind food access in a complex world. Engineering a pandemic should be easy for such an intelligence. Just as humans have displaced so many species, a new apex predator may drive us to extinction. Of course, the AI machine might keep a few of us around as pets.

However, just as your computer does not really come from Apple, nor its processing chip from Intel, any evolving AI can only exist through its relationships within a complex and globalised socio-economic system that provides the integration of supply-chains, factories upon factories, critical infrastructures, financial systems, energy and resource extraction and processing, distributed knowledge and skills, human behavioural coordination, and so on. The more complex the AI becomes, the more it will rely upon an even more complex, correlated and integrated global system to maintain itself and continue adapting.

Let us say it takes 20 years to make a general AI, and another 50 years for it to be in a position to make humans extinct. Then the probability that it gets to that point is also the probability that nothing causes the breakdown of global socio-economic integration in the intervening 70 years.
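To make the shape of that bet concrete, here is a rough back-of-envelope sketch; the annual collapse probability p is a purely illustrative assumption, as the essay supplies no estimate. If each year carries an independent probability p of a breakdown in global socio-economic integration, then

\[
P(\text{Doom-AI}) \;\le\; P(\text{no collapse in 70 years}) \;=\; (1 - p)^{70}
\]

so even a modest p = 0.01 leaves roughly a coin flip, \((0.99)^{70} \approx 0.50\), while p = 0.03 leaves only \((0.97)^{70} \approx 0.12\).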

This is a long shot. Firstly, as our societies have become more complex, integrated, interdependent and high-speed, they have become more vulnerable to large-scale systemic failure. This complexity also means that if there is a collapse, it becomes harder, and probably impossible, to recover. If there is such a collapse, just once, that is the end of the Doom-AI journey.

Secondly, as multiple and growing stresses, from food and oil constraints to climate change and biodiversity loss, propagate and interact through civilisation, they will further amplify risks to economic activity and the financial system, and deepen social polarisation and geopolitical tensions. Under such conditions it becomes more difficult to maintain existing critical systems, to respond to mounting challenges, and to invest in the research and development of AI technologies. This also opens more and more avenues to collapse.

We can think of other sources too: the potential for catastrophic natural disasters occurring in critical parts of the global system that cause irreversible contagion across the planet; powerful electromagnetic storms incapacitating the electric grids and shutting down the production and distribution of almost everything; or a global pandemic…

The question for the AI-derived existential risk community is this: how confident are they that there will be no collapse in global system integration, from any potential source, before Doom-AI has some reasonable chance of materialising?

Nobody is suggesting Doom-AI is just around the corner. So there is something talismanic about the rise in such concerns now, as we wait for the next iteration of the global financial crisis, when the impacts of climate change are becoming apparent, when almost daily a new study shines a light on how fast our species is destroying the living world, when societies are becoming more polarised, and when all manner of crazy is shattering old political certainties.

The Doom-AI narrative claims to bravely face a potential dark future while keeping it safely displaced in time. It speaks to our myths of technological progress and our terrifying powers to create a life that might consume us. It echoes that storied night two hundred years ago on the shores of Lake Geneva, when Mary Shelley created a man who stole the spark of life from the gods. Then it was with the power of (electric) Galvanism; now it’s Artificial Intelligence.

As real crises mount, there is something reassuring about acknowledging the potential for abstract calamity, while also reflecting on our powerful capacities and a distant future where they may be realised.

But we are not powerful. We are trapped and we are vulnerable. We daren’t consider the chasm that is opening beneath our feet, so we rock ourselves to sleep with fairy-tales.

Maybe that is why there are more institutes focused on distant nightmares than those devoted to how we might respond to a large-scale systemic failure today. But if you listen, you can hear our foundations shaking.

David Korowicz is director of Risk and Response at the Geneva Global Initiative.
