Episode 34 • 2 August 2021

Anders Sandberg on the Fermi Paradox, Transhumanism, and so much more



Anders Sandberg is a researcher, futurist, transhumanist and author. He holds a PhD in computational neuroscience from Stockholm University, and is currently a Senior Research Fellow at the Future of Humanity Institute at the University of Oxford. His research covers human enhancement, exploratory engineering, and ‘grand futures’ for humanity.

(Image) Anders Sandberg

Image credit: Future of Humanity Institute

In our interview, we discuss:

In the article below, we summarise and illustrate these key ideas, providing both more detail and further reading for listeners who want to find out more.

Book Recommendations


The Fermi Paradox

Discussed in the interview

The Drake Equation:

$$
\begin{aligned}
N &= R_* f_p \eta_e f_l f_i f_c L \\
R_* &= \text{Rate of star formation} \\
f_p &= \text{Fraction of stars with planets} \\
\eta_e &= \text{Number of habitable planets per system with planets} \\
f_l &= \text{Fraction of such planets with life} \\
f_i &= \text{Fraction with life that develop intelligence} \\
f_c &= \text{Fraction of intelligent civilisations that are detectable/contactable} \\
L &= \text{Average longevity of such detectable civilisations in years}
\end{aligned}
$$
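To see how the factors multiply out, here is the equation evaluated with purely illustrative parameter values (these numbers are made up for the example, not estimates from the interview):

```python
# The Drake equation with purely illustrative parameter values.
R_star = 1.5   # rate of star formation (stars per year)
f_p = 0.9      # fraction of stars with planets
n_e = 0.5      # habitable planets per system with planets
f_l = 0.1      # fraction of such planets that develop life
f_i = 0.1      # fraction of those that develop intelligence
f_c = 0.1      # fraction of intelligent civilisations that are detectable
L = 10_000     # average detectable lifetime in years

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # ≈ 6.75 detectable civilisations with these made-up inputs
```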

The New Yorker cartoon discussed at Los Alamos:

(Image) New Yorker Cartoon

Image credit: The New Yorker magazine

Quotes from the interview

On the Fermi Paradox —

If the universe is really big and old, and intelligence can emerge in a lot of places, and it can also spread or at least make a big fuss so you can detect it from afar; shouldn’t we be noticing a lot of such fuss? A lot of flying saucers, remote nuclear wars, or at least ruins on other planets? Why do we see an empty sky? Why are there no billboards on the Moon?

On Eternity in Six Hours

The reason you don’t need astronomical resources to spread life over astronomical scales is that life is all about information.

Maybe it’s absolutely impossible to build nanomachines. [That would be] really weird, given that we actually have in practice nanomachines in all our cells. We are in some sense nanotechnology. You might say you can’t make artificial intelligence — which is weird, because we are in some sense a material system with intelligence.

What we tried to do in Dissolving the Fermi Paradox was take [this uncertainty] seriously and actually run through with some actual probability distributions and try to see what happens if you plug in that kind of data? And the cool thing is that we don’t solve the Fermi paradox, but it kind of dissolves […] even if you plug in fairly optimistic values […] the uncertainty that comes out is [still] very big. My average estimate for the number of civilisations in the galaxy might be a million, but the median one is 10, and I get a 30% chance that we’re alone in the observable universe. Now I have a probability distribution that’s not unreasonable given the data we have […] there could be a lot of aliens, but it’s also fairly likely that there’s nobody around. At this point the empty sky doesn’t seem that weird any more.
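The "dissolving" move can be sketched numerically: rather than plugging point estimates into the Drake equation, draw each factor from a wide distribution and look at the resulting spread. The ranges below are illustrative stand-ins, not the distributions used in the actual paper, but they reproduce the qualitative result: a large mean, a small median, and a substantial chance of an empty galaxy.

```python
import math
import random
import statistics

random.seed(0)

def log_uniform(lo, hi):
    """Sample uniformly in log space, capturing order-of-magnitude uncertainty."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

samples = []
for _ in range(100_000):
    N = (log_uniform(0.1, 10)      # R*: rate of star formation
         * log_uniform(0.1, 1)     # f_p: fraction of stars with planets
         * log_uniform(0.1, 1)     # n_e: habitable planets per system
         * log_uniform(1e-6, 1)    # f_l: hugely uncertain
         * log_uniform(1e-3, 1)    # f_i
         * log_uniform(1e-2, 1)    # f_c
         * log_uniform(1e2, 1e8))  # L: detectable lifetime in years
    samples.append(N)

mean_N = statistics.fmean(samples)
median_N = statistics.median(samples)
p_alone = sum(s < 1 for s in samples) / len(samples)
print(f"mean {mean_N:.0f}, median {median_N:.2f}, P(N < 1) = {p_alone:.0%}")
```

The heavy right tail drags the mean far above the median, so "my average estimate might be a million, but the median is 10" is exactly what this kind of distribution produces.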

On observer selection effects and the timing of evolutionary transitions

We know life emerged on Earth relatively early in its history. Normally, if you have a long span of time and something happens early we should say “yeah that’s probably a likely occurrence, since it happened early on.” But in this case, we’re also biasing the observation. We are life. We cannot be around on the planet where life emerged ten minutes ago. We need at least some time for [intelligent] life to emerge from the primordial goo. Which means that our observation of Earth’s past is suspect because of this.

Out of a big universe, there are some lucky planets that avoid things and there you get observers, whether humans or aliens, sitting around saying “there doesn’t seem to be many giant asteroids hitting us.” They could be wrong. We could be living in a super dangerous universe where asteroids typically hit planets every ten minutes. But on some rare planets, you’re very very lucky. Now this is disconcerting for thinking about the probability of life. Because if we can’t trust the observation from Earth, what are we going to do?

The funny thing is, when you do the math and see the lucky few planets that end up with having been through all the steps and ending up with observers, they end up roughly equidistant, statistically speaking. There is of course a lot of randomness between the different random planets, but this is roughly where we end up. And the more hard steps you get, the more you get crowded towards the end. So if we had a million super hard steps, we should imagine ourselves being very close to the end of habitability. If it’s just one hard step, we could be somewhere in the middle without too much problem. So what we did in this paper was basically take the data on when the possible hard steps were taken, and then we just fitted this probability model to it, to get ‘what’s the overall likelihood of the parameters we get out?’. And that generally fits that there are a few hard steps […] And we don’t know how hard [the evolutionary transitions] are exactly; but that gives us some hint that yeah, intelligent life is very rare in the universe.
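The hard-steps model described here can be illustrated with a small simulation: give each step an expected waiting time longer than the habitable window, keep only the rare runs where every step completes in time, and look at when the steps happened. All parameters are arbitrary, chosen only to make the effect visible.

```python
import random

random.seed(1)

T = 1.0          # length of the habitable window (normalised units)
N_STEPS = 4      # number of hard evolutionary steps (arbitrary)
MEAN_WAIT = 2.0  # expected waiting time per step, longer than the window

accepted = []
while len(accepted) < 1000:
    waits = [random.expovariate(1 / MEAN_WAIT) for _ in range(N_STEPS)]
    if sum(waits) < T:  # a rare lucky planet finishes every step in time
        times, t = [], 0.0
        for w in waits:
            t += w
            times.append(t)
        accepted.append(times)

# Mean completion time of each step across the lucky planets:
avg = [sum(run[k] for run in accepted) / len(accepted) for k in range(N_STEPS)]
print(avg)  # roughly evenly spaced, with the last step crowded near T
```

Conditioned on success, the step times behave like uniform order statistics on the window, which is the "roughly equidistant" result; adding more steps pushes the final step ever closer to the end of habitability.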

What happens is that you have a few lucky planets that get life, and most of them of course don’t develop intelligence anyway, so there is no observer seeing them. And the few rare ones end up with having intelligence emerge just before the end. And they’re super rare. But the beings on these planets are going to look back and say “life emerged fairly early here, oh life must be easy. And we’re around, so intelligence must be easy. Why are the skies so empty?”

On UFOs

We always package these unknown sightings into our cultural framework. So the funny thing now of course is if I see something weird in the sky and say that might have been aliens, people might not believe me but they’re not going to think I’m crazy. But if I said I saw fairies up in the sky, people will say “Anders, are you alright?”

The recent Pentagon films mostly show weird stuff that could be almost anything. So when you want to update your beliefs you should try to do it in a Bayesian manner. You say the probability of something given the evidence should equal the probability of getting that evidence if this was true, times the basic probability you believe this could be happening, divided by the probability of seeing the evidence.
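The rule described in words here is Bayes' theorem, P(H | E) = P(E | H) · P(H) / P(E). With made-up numbers for a blurry sighting (every probability below is assumed for illustration):

```python
# Bayes' rule with made-up numbers for a UFO sighting (illustrative only).
p_h = 1e-6            # prior: alien craft are visiting Earth (assumed tiny)
p_e_given_h = 0.5     # chance of blurry footage like this if they are
p_e_given_not = 0.05  # chance of such footage anyway (glitches, balloons...)

p_e = p_e_given_h * p_h + p_e_given_not * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(posterior)  # ≈ 1e-5: ambiguous evidence barely moves a tiny prior
```

Because the footage is almost as likely under mundane explanations as under the alien hypothesis, the posterior stays close to the prior, which is why "weird stuff that could be almost anything" is weak evidence.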

If it’s easy for advanced civilisations to spread across the universe, maybe advanced civilisations have done so, but they aren’t interfering with us, like the prime directive in Star Trek that states that you shouldn’t mess around with the primitives. It seems like that ‘zoo hypothesis’ is very fragile, because it only takes a few alien teenagers who want to [visit] people with their flying saucer to break things.

The interesting question is: why are people so interested in UFOs? This is part of a modern mythology. To some extent, we have replaced the Greek gods, the heroes, with the cartoon characters. Batman is our new form of Hercules. And indeed many people are using them to reason about the world and mirror virtues and sins in various ways. People are using Star Wars as a mythology. And it makes sense to teach kids about how to behave!

Western civilisation has this relatively unique property as a civilisation of being slightly obsessed with being wrong. Most civilisations have been very firmly confident that we are the centre of the world, what we know is morally and factually right, and that’s it. What has happened in Western civilisation since the Enlightenment is that we have made doubt something very valuable. Actually asking: are we a good civilisation? And seriously trying to find an answer to that and quite often coming up with “no, we’re not good enough; or we might actually be quite awful, we should replace ourselves with something better.” That has been a winning trick, because we have been inventing new institutions, we have been questioning old things […] And then of course, the UFO concern: maybe there is a civilisation out there that could do unto us as we have done to others. It both fits in with this guilty conscience and this realisation that we’re kind of fragile.

On grand futures —

The overall structure of the universe has been set mostly by the standard physics [but] intelligent life is a bit like Maxwell’s demon in that thought experiment. By nudging things in just the right moment, it can make things move into a very unlikely state. This is what we’re doing all the time on the Earth’s surface [so] when you try to search for intelligence in the universe, you want to look for really unlikely states. And an advanced civilisation might do this on an advanced scale and turn the universe into ever more unlikely things. And I think this is good. Entropy is quite boring, but life and intelligence can create these low-entropy states that are full of meaning.

Further reading

Fermi Paradox

Exploratory Engineering

Evolutionary Transitions

(Image) Survivorship bias

“The damaged portions of returning planes show locations where they can sustain damage and still return home; those hit in other places presumably do not survive. (Image shows hypothetical data.)”

We can’t put armour everywhere, because that would be too heavy. [A statistician] pointed out: put it where there are no bullet holes. Not the places riddled with bullet holes, because those planes still made it home.

UFO News

Transhumanism

Discussed in the interview

Quotes from the interview

On ageing and status quo bias —

Generally there are a lot of human limitations that really bring us down. An obvious one is ageing. There is not that much time for any human to actually learn how to be a good human; to acquire skills and wisdom to do actually something good. And then our bodies start breaking down, and we actually lose the energy that we might have needed to use that wisdom in a good way, and our life projects are necessarily cut short. I think we should make ageing and death optional.

Another thing is our brains: they’re probably about the dumbest brains that could produce a global technological civilisation. We normally try to solve this by leaning on each other; a lot of very clever group cognition. But we have demonstrated that we can make things better both in the sense of improving coordination through information networks and outsourcing to machines, but also sometimes improving the brains themselves.

We have an interesting situation here where we might have a status quo bias: people think this is the ideal length of life, but historically it has been changing. It kind of doubled over the 20th century in many parts of the world, and yet people always felt that this is the right length of life.

The reason you don’t want two short lives instead of one long one is that you lose something when you die. We are very contingent beings. Our personality, our way of thinking and looking at the world is something that will never repeat […] You can’t recreate a human. It’s fairly easy to recreate a cell with the same genome. If you plant a tree, it’s going to grow up roughly in the same way. But when you get to complicated organisms like humans, you don’t get the same thing. Something that can never be repeated has been lost. […] In most cases, there are some unique, very valuable things in a life that are irretrievably lost when that life ends.

[On the present day as a period of unprecedented knowledge] Even the couch potato who just wants to watch Game of Thrones actually is a highly literate person — by Medieval standards they are a real scholar. They actually know an enormous amount of nontrivial historical information.

Ageing kills about 100,000 people per day. If there were a disease doing that, we would say “ok, Covid was nothing. We need to fight this with everything we’ve got.” But nowadays people take it for granted and instead say “why should we give any funding to these weirdos trying to slow this down?” Meanwhile, the bodies pile up.

We might be accepting things that are really unacceptable. One of the beauties of the human condition is that we can adapt to almost anything. That’s also one of the great tragedies, because we can adapt to and get used to absolutely horrible things and say, “this is normal, this is fine”. So there could be moral disasters unfolding around us that we don’t even notice. Historically, we have been accepting of sexism, racism, and homophobia. It might be that now we’re starting to realise that factory farming is causing an enormous amount of suffering […] in 100 years it might be that people will look back and ask “why didn’t most people rise up against the chicken farms?”

As I’ve matured politically, I still regard myself as a libertarian but now I’m a Bayesian libertarian. I start with this prior that people should be free to do what they want, and governments are pretty dangerous things we should be very careful with; but if there’s evidence of a market failure, ok — let’s try to fix it in the minimal way possible, let’s try to update in a sensible manner.

Sometimes we were woefully naive. When naive ideas get scaled up, they don’t work or cause interesting trouble. Many of the ideas we had about digital currency freeing up people to live in an anarcho-capitalist utopia; ok, Bitcoin didn’t quite usher in that golden era. Looking back, some of those arguments were really bad in the first place. Still, it’s interesting to notice that the digital currencies […] seem to be really fruitful.

Transhumanists were one of the few groups of people willing to take superintelligence very seriously; which then led to very useful research on AI safety. Just because you start out with an unusual angle doesn’t mean that you end up with something useless. In fact, the transhuman willingness to realise that the future could be radically better implies by symmetry that it could be radically worse, which means it’s a fertile reason to try to work against existential risk and be willing to think unthinkable scenarios. So it’s not a coincidence that I’m also doing x-risk research and quite a lot of old-timers in the x-risk community have links to the transhumanist community.

On internet communities —

The really dramatic transition was the availability of people across the world who could encounter each other. So from the current perspective […] the 80s were a really weird era, because finding people meant going somewhere, or maybe even writing a letter. Kind of like you would have done in the 16th century with a quill […] Actually communicating with and finding people with similar views to you was very hard if you had unusual views. It was hard to find the people who shared unusual positions. And then you got online, and mailing lists and Usenet […] allowed people interested in a particular topic to exchange messages about that. That created online communities that later became websites and blogging […] This meant that people could form communities even when they had very rare preferences. That is the joy and beauty of the 90s. A lot of people ask “what happened in the 90s?” and the answer is the web. The web essentially created the modern world where unusual communities formed.

As you invent new media, different forms of connection are possible — and also of course different forms of flame wars and disconnection and witch hunt. One shouldn’t imagine that these technologies are neutral.

[On podcasts —] The interesting thing is it creates context. Without context somebody can quote my most outrageous sentence and tweet about it. Look, Anders believes this horrible thing. Without context, it’s very easy to make me look like a racist or an idiot. And that can feed on itself. But this is relatively hard to do with sound and video. Also the searchability is interesting. If I say a particular sentence right now, it’s very hard to do a Google search for my saying it in a podcast. In a few years, that’s not going to be true.

You could imagine an email system where it cost you a tiny microtransaction to send things. That would have been an internet without spam.

Further reading

Online Communities and How Ideas Evolve

Discussed in the interview

The Gartner hype curve (Wikipedia)

Quotes from the interview

People quite often imagine that innovations are something that spring up complete and whole, but most innovation consists of taking 90% of things that exist before, putting them together in the right way, and adding — if you’re really innovative — 10% of something new. And you quite often need that past innovation because that makes it compatible with existing things.

When you start out in a particular tribal or ideological corner, it might be hard to escape that. I think that is what we’re seeing with the environmentalist movement. They’re kind of stuck right now, because they’ve become partly part of the left. Which means that conservative environmentalism has nowhere to go. It needs to drop either the conservatism or the environmentalism [but] the success of the green parties in Europe has been very much about making every party, regardless of their ideology, actually try to do something good for the environment. Similarly, you could imagine the same situation for effective altruism. You want all [the parties], regardless of their ideology, to try to be effective altruists. Then the reasons why they’re being effective might be totally different for the conservatives and the leftists.

I think the hype curve peak for effective altruism has passed. In some sense the honeymoon where things were flocking in has stopped, and there’s a lot of critics. Which is brilliant.

In the long run, even a fairly low growth rate totally takes over and changes things. The web is still transforming the world in a lot of ways […] many of these things are cumulative. We’re still kind of reeling from the industrial revolution, and in some ways we’re still adjusting to what we messed up with the agricultural revolution. So we often believe that the latest coolest idea is going to transform the world really soon now. That’s a mistake […] Generally, it needs to take time and [so] we should be planning for the long-term future. I think that’s one of the most important issues: we are not going to win this generation maybe, but the next one.
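The arithmetic behind "even a fairly low growth rate totally takes over" is just compounding. At an illustrative 2% a year (the rate is arbitrary, chosen only to show the shape):

```python
# Compound growth at a modest 2% per year (illustrative, not a forecast).
rate = 0.02
growth = {years: (1 + rate) ** years for years in (35, 100, 500)}
for years, factor in growth.items():
    print(f"after {years} years: x{factor:,.1f}")
```

A 2% rate doubles in about 35 years, grows roughly sevenfold in a century, and multiplies by tens of thousands over 500 years, which is why cumulative, slow-burning changes end up dominating the latest coolest idea.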

Further reading

Thank you very much to Anders Sandberg for his time.

If you enjoyed this episode, you might also like some of our other interviews: Thomas Moynihan on the History of Existential Risk, and Simon Beard on Parfit, Climate Change, and Existential Risk.