Human extinction isn’t as unlikely as we’d hope

The precarious nature of humanity has never been brought to our attention in popular culture more sharply than by recent satirical films such as Netflix's Don't Look Up.

Once an abstract concept, the precariousness of humanity's place on Earth has been very much confirmed by the events of recent years. Pandemics, corrupt governance, and climate change are constantly blazoned across news feeds, leaving many with little to do but stew and worry about the seemingly imminent 'end of the world'. So, with existential panic heightening, the question on everyone's lips is: what is the biggest threat to humanity?

The end of humanity is a question that scientists and philosophers alike have studied for many years. Moral philosophers such as Toby Ord have estimated the chance of an existential catastrophe occurring in the next century at one in six – a concerning prediction, and one that has been echoed to varying degrees by institutes researching existential risk across the globe.

In order to discuss humanity's 'doom', let's first establish what we mean by 'existential risk'. Defined as the risk of an event that could either cause the extinction of humanity or drastically curtail humanity's potential to survive, an existential catastrophe doesn't necessarily mean full human extinction. If humanity were so severely damaged that it could never again grow and thrive, that too would count.

And this risk isn't limited to the natural causes our minds are most likely drawn to first; it could even include the possibility of invasion by hostile extraterrestrial life.

Whether humanity's extinction is more likely to come from anthropogenic or natural causes is a highly debated topic. Natural risks, such as the asteroid impact dramatized in Don't Look Up, are relatively predictable: with extensive records of past geologic, climatic, and celestial events, scientists can generally gauge how likely a global catastrophic event is to occur. Supervolcanic eruptions, for example, are estimated to occur roughly every 50,000 years – about a 0.2% chance in any given century – so we know both the scale of the risk and the timeframe we have to develop measures to adapt or mitigate.

However, anthropogenic risks are far less predictable. The risk posed by nuclear weapons, for example, is very hard to estimate, as humanity has survived only 75 years since their creation – too short a track record to draw firm conclusions from. We are all well aware of the growing threat of anthropogenic climate change, yet the exact rates and impacts of our influence on the climate are still playing out before our very eyes. Meanwhile, as technology develops at an unprecedented rate, risks from anthropogenic sources are growing rapidly. A 2016 survey of AI experts put the chance that human-level AI would be 'extremely bad' for the human race at 7%. For reference, a natural pandemic has an estimated 0.005% chance of producing the same depressing outcome.

The ever-developing technology industry is a key example of an area in which the risks associated with highly complex and intelligent systems have rarely been studied. In response, institutions such as the University of Cambridge have set up research centres that work alongside other existential-risk institutes to inform policy and drive improvements in areas such as long-term AI safety and the management of extreme technological risks.

But is all this just scaremongering? Some may argue that there is no need to think about such a topic, and maybe they're right. Maybe it's better to turn a blind eye and carry on with business as usual. Yet if there was one clear message in Don't Look Up, it's that science denialism could literally lead to the end of the world. An exaggeration, yes, but one with a core of truth: a threat that is denied rather than carefully mitigated can all too easily grow into an existential risk.

Without thinking of the future, we may never be able to mitigate its risks.
