Jason Crawford on Progress Studies
May 12, 2022
Table of Contents
- Jason's recommendations
- What is Progress Studies?
- Frontier growth versus catch-up growth
- Resource constraints and degrowth economics
- Incentivising innovation
- What causes progress?
- Are ideas becoming harder to find?
- Progress Studies and longtermism
- Progress Studies and effective altruism
- Jason's book and research recommendations
82 minute read (21638 words)
Jason Crawford is the founder of The Roots of Progress, a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. He writes and speaks about the history and philosophy of progress, especially in technology and industry. In our conversation, we discuss:
- What progress is, and why it matters (maybe more than you think)
- How to think about resource constraints — why they are sometimes both real and surmountable
- The 'low-hanging fruit' explanation for stagnation, and prospects for speeding up innovation
- Tradeoffs between progress and (existential) safety
- Differences between the Progress Studies and Effective Altruism communities
- Enlightenment Now by Steven Pinker
- The Beginning of Infinity by David Deutsch
- Where Is My Flying Car? by J. Storrs Hall
- The Wizard and the Prophet by Charles Mann
- World GDP over the last two millennia (Our World in Data)
- The Great Stagnation by Tyler Cowen
- The Rise and Fall of American Growth by Robert J. Gordon
- Stubborn Attachments by Tyler Cowen
- Rule of 70 by Marginal Revolution University
- The Coal Question by William Stanley Jevons
- Thomas Robert Malthus
- Why I’m a Proud Solutionist by Jason Crawford
- The Haber-Bosch process
- The Alchemy of Air by Thomas Hager
- The Simon–Ehrlich wager
- The history of billiard balls
- Advance market commitments
- Why Napoleon Offered A Prize For Inventing Canned Food (Planet Money)
- The Orteig Prize
- Louis Pasteur: Free Lance of Science by René Dubos
- Florence — Role in art, literature, music and science
- How Vienna produced ideas that shaped the West (The Economist)
- Secrets Of The Great Families by Scott Alexander
- Science: The Endless Frontier by Vannevar Bush
Hey, you're listening to Hear This Idea, a podcast showcasing new thinking in philosophy, the social sciences and Effective Altruism. In this episode, we talk to Jason Crawford, who is the founder of The Roots of Progress, a blog turned nonprofit that is dedicated to establishing a new philosophy of progress for the 21st century, and helps foster the emerging progress studies movement. Amongst other things, we discuss progress as a concept and what it tangibly means to boost sustainable growth; the relationship between progress studies and Effective Altruism, especially how to balance inventing new technologies against minimising existential risk; and the role of altruism as a motivation for progress and choosing a career. Fin and I got to speak to Jason whilst attending the Moral Foundations of Progress Studies workshop in Austin, Texas. So this is incidentally also the first time that Hear This Idea got to hit the road. I think there is just an incredible amount to learn from reaching out to other movements and thinking through these really big questions together. I should also plug that Jason has launched a Progress Forum, similar to the EA Forum, so there's ample space to continue having these discussions online too. But for now, without further ado, here's the episode.
Thanks for having me on here. Looking forward to this conversation. My name is Jason Crawford. I write a blog called The Roots of Progress, which is now also a nonprofit; I'm the founder of The Roots of Progress. And I write about the history and philosophy of progress, beginning with the history of technology, and now I've kind of broadened to the broader topic of human progress.
Well, awesome. And one question we like to ask guests to begin with is what is the problem you're currently stuck on?
Well, you know, one thing that I'm sure we will definitely get into is that I've been thinking about and doing some research on the topic of progress and safety: what is the relationship between those two? And it's not a simple, straightforward, you know, linear relationship, right? I think it's subtle and nuanced.
Cool. So the way that I propose we maybe structure this conversation is: one, I just want to get a better understanding of what progress studies is, and especially the case for sustainable growth being a really important and underrated factor in the world. And then, two, I want to maybe talk a bit about what interventions or real-world things boost this and could actually be really effective in bringing it about. And then three, I want to touch on exactly what you just said there and think about how progress studies relates to risk and safety, and also the longtermist community as a whole. So yeah, if that sounds good, we can start with that.
Let's do it.
Yeah, a pretty natural question to start off with is: can you just try summarising the thesis behind progress studies, in your own words?
Yeah, sure. Let me start with what got me into it. Because I started reading and writing about this stuff in early 2017, just a couple of years before the current progress studies community took off. I was just looking at history, and the last couple hundred years have been really amazing for human welfare in many different dimensions, right. And we often think about just the really obvious one of GDP per capita increasing for pretty much the first time in human history, and increasing by more than an order of magnitude, way more in developed countries, over the last couple hundred years. But then, if you also think about it, almost everything we know in science has come in the last few hundred years. And really, even though moral or social progress sometimes feels more elusive or fragile, there's actually been a lot of that in the last few hundred years, too. If you just realise that in 1775, pretty much the entire world was under monarchy, and had been for thousands of years, like for all of human history, with very rare and delimited exceptions. And today, right, monarchy has mostly gone by the wayside and has been replaced with democratic republics. And that's just one example, right, of the ways that things didn't change for thousands of years that have changed a lot, and I think, in that case, for the better, in the last few hundred years. So, if you look at that, you just say: wow, something went really right. In some ways, the Industrial Revolution is the greatest thing ever to happen to humanity. We should just take a look at that, right, like that deserves study. And so the way I came at it was: I think if you care about human well-being, and then you realise how good the last couple hundred years have been, you've just got to ask, one, how did it happen? Just kind of nuts and bolts, a gears-level understanding, from the ground up.
What were we doing wrong a couple hundred years ago that was leading to such a terrible standard of living, right? What did we figure out? What did we start doing right? Second, why did it take so long? Why did we have to go through thousands, tens of thousands of years of so much suffering and death before we finally found these keys to growth? And then the obvious question, three, for the future: how do we keep it going? How do we continue this progress, maybe even accelerate it? How do we get more of what we got? Is it fragile in any way? To what extent do we have agency over it? Is this under our control? Is it something that moves forward inevitably, or inevitably stops? Or is there some way in which it's up to us to keep it going? So those are the questions that animated me and began my study. And I think that all of those would resonate with the broader progress community.
I remember Tim Urban posed this question, which is something like: would you rather be born into the mid-18th century as a French monarch, or be born into a kind of median-income, middle-class family in a developed country? And that's at best a toss-up for me, right, which really nicely illustrates the scale of that progress. Also, to pick up on something you mentioned: how fragile is this enormous uptick? So one framing could be, as you said, something like: what were we doing wrong for so long? What took us so long? Another framing is something like: what on earth went right? You know, poverty is the default state of almost all of humanity for almost all of history. Something very weird happened, and potentially it's very fragile; if it is, that's worth caring about.
So some people over the last decade or so have been very concerned about technological stagnation. Peter Thiel is one who started sounding this alarm. Tyler Cowen wrote a book called The Great Stagnation over a decade ago. Robert Gordon, another economic historian, has written a bunch about this; he has this book, The Rise and Fall of American Growth. All three of those thinkers, by the way, I think have very different views of stagnation and certainly of the future. But they are all calling attention to a relative slowdown in progress in the last 50 years. Now, that was not my personal motivation. I actually was sceptical of the whole stagnation hypothesis in the beginning, and I eventually came around, and now I think it's probably about right. But for me, my motivation was less saying, oh no, things are slowing down, how do we get back on track? It was more: well, things have been really good, let's get underneath this. And I began with an idea that this is really going to inform, or really ought to inform, our worldview: what we care about in society, in political discussions, in the culture, even down to basic things like respect for reason and science. Ultimately, the history of progress is a big part of that, and ought to really inform our worldview.
Yeah, I think there's something interesting in how progress as a term captures that it's not just about economic growth; it's about social advancement and these other factors as well. I'm curious if you can maybe speak to that a bit more. There's one version of this in my head, which is just: economic growth is the important thing, and the other things happen downstream. There's another narrative that these things are more embedded, or maybe come down from institutions and culture, and that is ultimately the driving factor. Or maybe it's just a big messy thing as a whole. Yeah, how do you think about that?
So there are at least three major types of progress that I think about. The one that I've mostly been studying and writing about so far is progress in technology, industry and the economy. But there's also very obvious progress in science and knowledge. And then, somewhat less obviously, but again I think very real, is progress in morality, society and government. And I think of those three strands as kind of being the big three themes of human progress. I think, ultimately, they are distinct but inseparable. They are all intertwined, and they reinforce each other and make each other possible. And so I think if you really want to understand the story of human progress, you have to understand all of them together.
I guess, on Fin's point before, when he framed it as, well, what the hell went right: if you look at the literature on what caused the Industrial Revolution, you definitely see all three things being suggested, right, like the Scientific Revolution being really important, or the institutions and markets being really important to economic growth, or just access to resources, or the ideas of steam engines and engineering stuff being really important too. But did it ever require a whole bunch of things kind of coming together in order to unlock this big explosion?
And then the flipside could be something like: why didn't China industrialise first? And then maybe the answer isn't so much "here's a specific factor", but more that all these factors didn't come together at the right time, in the way that maybe they did in Europe.
And of course, I mean, economic historians have devoted entire careers to answering these questions.
And we'll cover them now!
Cool. I think one thing I want to touch on a little bit as well: I was reading, on the flight here, Tyler Cowen's book Stubborn Attachments, and one thing that stood out from it is this question of, when we're talking about growth and progress, how much are we talking about progress at the frontier, and how much are we talking about other countries or regions catching up to what is at the frontier? And I guess there's an interesting question here as well of, when progress studies talks about advancement, is it centred around the frontier, is it centred around catch-up, is it about both? We'd love to hear your views on that.
Yeah, so in my opinion, both of those things represent progress and are a form of progress. I would say, in general, the progress studies community focuses more on frontier growth. The catch-up growth is interesting, because it's something I've researched less, but my impression is that a lot of it, whether that growth happens, where it happens, how fast it happens, has to do with the quality of the institutions, and especially the government. And it's about getting some of the right foundations of rule of law, getting rid of corruption, and so forth. There aren't many people who actually talk a lot about that, even within the global development community, and I still don't fully understand why. So I think the problem of how to solve catch-up growth, and, you know, why isn't Nicaragua as wealthy as Norway yet, is an interesting, important problem. It's also pretty distinct from the problem at the frontier. But of course, if we only solve the catch-up problem and we don't solve the frontier problem, then all that's going to happen is eventually everybody will catch up to the frontier, and then there'll be nowhere left to go. So I think that's part of why I'm more focused on: let's make sure that we continue to blaze that trail at the frontier, right?
Yeah, I think there's an interesting point here, kind of going back to what we said before about taking a step back to reflect on how amazing human advancement has been. The thing that I'm kind of thinking of is those Our World in Data graphs that map out poverty reductions, or access to health and stuff. And a lot of that didn't happen when the Industrial Revolution happened, right, in the 1700s. It happened in the 20th century, in the 1900s, with essentially big countries, China, now India, catching up and getting out of poverty. And a lot of that necessitated, as you said, frontier growth, but the channel that gets it to trickle down is catch-up growth, countries getting to the frontier.
Yeah, and I note, by the way, that particularly in health, there's less disparity in health outcomes overall than there is in, like, GDP or overall wealth. And part of that is because of international aid type programmes; part of it is just because, you know, once you invent a vaccine, you can manufacture and distribute that vaccine all over the world, for instance. Or once some researcher somewhere figures out the right solution of water, sugar and salt that you need for cholera, right, I forget the term for that solution, but you can drink it to combat the dehydration. Some researcher somewhere figures that out, and then that knowledge can spread all over the world. So that's another reason to care a lot about frontier growth, because there do end up being these effects that affect the whole world.
Maybe that's also true of something like institutional technologies, right? So maybe there are managerial practices that get invented at the frontier, and then that's a positive externality. Maybe also, in the broader sweep of human history, something like markets as a technology, you know, eventually gets imported to potentially good effect. So I guess when we're talking about technologies, we're not just talking about silicon or gears, right, but also social technologies.
Cool. Well, let's maybe unpack this idea of frontier growth, or scientific advancement, a little bit more. I think one of the things that really stands out is that seemingly small changes in the growth rate, or in progress, whatever we want to call it, really compound a lot over time, and this is just a really underrated phenomenon. I'm curious to hear a bit about that and see how it fits in with progress studies' kind of theory of change.
Yeah, sure. I mean, ultimately, that's just math. The difference between a 2% growth rate and a 3% growth rate, for instance, compounded over a long period of time, is just not intuitive to people; compounding growth in general is sort of not intuitive. But you compound that over a couple of generations, and you've got, you know, like a 3x difference. And so it's something that seems kind of trivial, right? Like, oh, why do you care whether the GDP growth rate is 3% or 2%? Well, it makes a huge difference over the long term.
One of the maths hacks that I remember being taught in undergrad is the rule of 70: if you want to think about how long it takes for something to double, divide 70 by the growth rate. So 70 divided by 2% is 35 years to double; 70 divided by 7% is 10 years to double. And that is one way to get a sense of just how incredibly quickly these things can happen.
I feel like quote unquote, just math is often highly underrated.
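To make the "just math" point concrete, here's a quick sketch in Python. The 2% and 3% rates come from the conversation; the roughly-two-generations horizon of 70 years is an illustrative assumption, not a figure Jason gives:

```python
def grow(rate: float, years: int) -> float:
    """Total growth factor after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

def doubling_time(rate_percent: float) -> float:
    """Rule of 70: approximate doubling time, given a growth rate in percent."""
    return 70 / rate_percent

# 2% vs 3% growth over roughly two generations (~70 years):
print(grow(0.02, 70))  # ~4.0x total growth
print(grow(0.03, 70))  # ~7.9x total growth, nearly double the 2% outcome

# Rule-of-70 doubling times:
print(doubling_time(2))  # 35.0 years
print(doubling_time(7))  # 10.0 years
```

Run it over longer horizons and the gap widens further, which is the point: a one-percentage-point difference in the growth rate that looks trivial in any single year dominates everything else over a century.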
In general, progress feeds on itself, right? It is self-reinforcing and self-building. And this is true of a number of different types of progress; it's certainly true of economic and scientific progress, in many ways, some of which are more obvious, some of which are less. Obviously, science helps create advanced technology. But then, of course, technology helps drive science: better technology gives us better measuring instruments, for instance, or the ability to send probes into space, or maybe one day build a radio telescope on the far side of the moon, or who knows what. And so that then drives science forward. So there's a reinforcing loop there. I think there are a lot of different reinforcing loops at many different levels. And that is why progress was so slow throughout most of history, and then, you know, it compounds and builds on itself, and the pace just keeps increasing.
I guess one scepticism that you often come across when thinking about these things is a scepticism, more broadly, of how much room there is for growth in general, and how much of that can really be sustainable. You hear this sometimes from, possibly most prominently, the degrowth movement, but also from more general or milder worries about environmentalism, or just resource constraints as well. The great thing about compounding, or, as we said, maths, is that things can grow really, really quickly. But there is also this concern that things then consume resources really, really quickly too, and do we not at some point just hit barriers that put a limit to this?
Yeah, so this has been a concern for a very long time. Even long before people were worried about peak oil in the 20th century, people were worried about peak coal in the 19th century. There was this guy Jevons, William Stanley Jevons, who warned about how England was running out of coal, and was going to use up all of its coal, and coal was the source of its might, and that was going to be a big problem. Around the same time, people were warning about the end of gold and silver and other precious metals, which were going to run out really quickly. And of course there was Malthus, from this era, who basically thought that agricultural productivity could never be improved exponentially, and therefore population would always outstrip it if we weren't very careful to basically do population control. So it just depends a bit on what level you look at these things. Sometimes, when you look at it at one level, it's absolutely true that there are resource limits. So one example that I wrote about, in an article for MIT Technology Review where I used this as a case study: in the late 1800s, William Crookes, the physicist who invented the Crookes tube, gave this lecture, and turned it into a book, warning about how we were running out of fertiliser, and that there was going to be no more fertiliser, we weren't going to be able to grow crops, we weren't going to be able to grow wheat, and we'd be facing food shortages and so forth. Now, on one level, that was absolutely true: we were running out of the sources of fertiliser that we had at the time. But what Crookes correctly saw was that this didn't have to actually constrain progress. And what he called for was for the chemists of the world to solve this problem by coming up with some sort of synthetic fertiliser. And he even had an idea of how you could do it, by fixing nitrogen from the atmosphere, which turned out to be correct.
He had an idea that you could do it with electricity, which turned out to be not the way that we do it. But he knew that lightning will actually break nitrogen bonds and create nitrogen-oxygen compounds in the atmosphere, and that's where, out in nature, some of the fertility of the soil gets replenished from. So he had some idea that you could do this using a big electric power plant, creating that electricity artificially. Anyway, that turns out not to be the way we do it. But we did find a different way to do it, which was the Haber-Bosch process. And so within a couple of decades of his initial warning, we had this way of creating synthetic fertiliser. Incidentally, there were also other things that happened that Crookes didn't foresee. Crookes didn't think that we were going to be able to expand land usage very much. But then something came totally out of left field, which was agricultural mechanisation. We got the gasoline engine, and that gave us the farm tractor. And so this helped to further mechanise agriculture; agriculture had already begun to mechanise at the time, but this took it further. And you might wonder, well, okay, but how does just having the machines solve the fertility problem? Well, it does in the following way. When you lower the labour cost of the land, you can open up new lands that were marginally productive, that were not profitable to farm if you had to use lots of labour, but that become profitable to farm if you can mechanise the labour. So very often these things just come from unexpected sources. So anyway, coming back to the resource issue: was there a resource crisis? On one level?
Yes, there was a resource crisis, because we were running out of the known sources of fertiliser. But on another level, no: we were able to continue exponential growth in agricultural productivity unimpeded, because we put some of our best minds on the problem, and we actually solved it by finding a new resource, right? So the thing about resources is, at the end of the day, there are no natural resources; the term is something of a misnomer. On one level, every resource is natural, there are no supernatural resources. But on another level, all resources are artificial, because all resources are the product of our knowledge and our understanding. I mean, consider sand as a resource before and after silicon, right? Or consider coal as a resource before and after the steam engine. Even oil only became super useful when we had chemical techniques to refine it. Crude oil, as it comes out of the ground, is a sludge that has all sorts of crap mixed together; it's not super useful, you don't want to just burn it. But if you apply chemical techniques, you can refine it into different weights and fractions, and different ones are good for different things. And that's where you get kerosene and gasoline and all the different things that we use. So ultimately, the greatest resource is the human mind, right? It has reason, intelligence. And as long as we have matter and energy at our disposal, we're going to be able to do something with that, given the right knowledge. Yeah.
Makes me think of the Simon-Ehrlich bet. Yep. So Paul Ehrlich made this bet that the price, if I remember right, of five precious metals or something would go up over some time period.
And Simon let him pick the metals: pick any five metals you want, a basket of them. Right.
And, you know, of course, it turns out that I think the price of at least three of them went down, not because we discovered more metals, but presumably because we innovated, came up with new technologies to extract the metals.
And Simon, of course, speaking of resources, wrote a book called The Ultimate Resource, right, which was largely about this phenomenon. And the point of that was that the ultimate resource is our intelligence.
But the thing I didn't know until recently about that bet is that Ehrlich proposed a second bet, which was not so much looking at prices, but at kind of more first-order measures, so things like, you know, concentration of carbon dioxide and average temperatures or something, and Simon refused the bet. And I think it really nicely draws out the difference. Look, on one hand, these people are correct that, in fact, certain resources may be depleting; in fact, these kinds of first-order measures might be changing in the way they expect. It just misses the things we really care about. Those are the kinds of problems we can innovate our way out of, right?
And you know, the thing with fertiliser is really just one example of a major trend that was going on in the 19th century, which was a major shift away from biological resources especially, and towards much more abundant mineral resources. So kerosene is an example, right? We were using whale oil for lighting, and candles, so we had these fats and oils from plant and animal sources for lighting. And then we switched to kerosene, and so, you know, we managed to avoid driving the sperm whales to extinction. And there were similar things with plastic. Before plastic, what did we use when we needed some sort of a lightweight, insulating, waterproof kind of thing? Well, very often it would be animal parts: horns, bones, tortoiseshell combs, for instance, ivory, so knife handles and umbrella handles and all kinds of things. Billiard balls, in particular, were made from elephant tusks. And in order to get the billiard ball perfectly weighted, so it would roll true on the table, you had to make it out of the centre of the tusk, so it's kind of wasteful. And again, these elephants were getting hunted to extinction. And there was, you know, sort of a risk to the billiards industry.
And again, before electric lighting, before plastic, I could truthfully claim that we cannot, you know, sustain this level of oil lamps or billiard balls before we very quickly run out of whales, or elephants, right? And you'd be like, yeah, sure, that's correct. What I'm missing there is that the thing I ultimately care about isn't whale-oil lamps, it's lighting, and we can come up with new ways of lighting my room, or playing billiards.
I think it's maybe worth emphasising as well that this human ingenuity, or this sudden-breakthrough thing, isn't entirely random either. I think a lot of this kind of boils down to a question of: are these things self-correcting, in a way where you have a problem, because you're running out of this resource which provides a really important function, and then this problem grows and grows, and a solution to it becomes ever more valuable, which incentivises a lot more resources to try and solve it, and at some point, you know, we've put in so many resources, we've put so many of our best minds to work on it. And this seems to really work for a lot of problems, as we've just discussed. And I guess the scepticism around this is: does this also work for problems that are just on a much larger scale, where maybe the incentives for resources to dive into these problems aren't as great? As Fin said, with, like, CO2 concentrations, just because the damages that climate change incurs are, like, 100 to 200 years out. Or, you know, when it comes to animal extinction, there's an open question. I don't know if the concern was about elephants or sperm whales in particular, whether it was advocacy groups for sperm whales who were trying to incentivise or invent new methods of lighting, or whether it was a business problem. But I think those market-failure questions are things to think about: whether that self-correcting loop also works for these other problems, or not.
Yeah, it's interesting. I think you're absolutely right that there is a feedback loop, in that the more something becomes a problem, and the more intense it is, and the more imminent it is, the more people's attention is directed towards the problem. That happens in a number of different ways. There are market mechanisms, economic mechanisms. In the case of the billiard balls, someone from a billiards company actually put out a prize, announced something like $10,000, and this is in, like, late-1800s money, so it's a lot of money, $10,000 in gold, to somebody who could come up with a substitute for ivory for billiards. And people were working on it. And this is actually, I think, where celluloid came from, like the earliest proto-plastic material; even though it didn't end up replacing ivory for billiards in particular, it was in part motivated by this contest, as I recall. Oh, and I think kerosene was a similar thing, right, where it was seen that there was a business opportunity. Sometimes it is more just at the level of people raising awareness. So the William Crookes thing: he gave this speech about the fertiliser when he was, like, president of the British Association of Sciences or Academy of Sciences, I forget the name of it at the time. And, you know, the president gives an annual address. And so he just used this opportunity. He's like, I've got a bunch of people listening to me, I'm going to give this speech about the wheat problem, you know. And so sometimes it's just through communication and raising awareness that that can happen, especially if it's not just an economic proposition, but you need to turn the attention of scientists or researchers to the problem, which is what needed to happen here. But I do think that there's something really underrated and underappreciated about how much we can accomplish if we start throwing resources at a problem.
And let me take this to something much less historical and much closer to all of us: what happened in the last two years with COVID. We got a vaccine for COVID in absolute record time. Typically, going from the identification of a pathogen to the approval of a vaccine takes on the order of a decade. We got multiple vaccines in less than a year. What happened? Well, there are a number of ways you can look at this. One is from a science and technology angle: we had this new technology that had been developing for decades, the mRNA stuff, and it allowed us to create new vaccines very quickly, so they were getting tested right away — you might almost ask why we didn't have them even sooner. But there's another angle, which is underappreciated. If you go to the Milken Institute, they have a tracker where they tracked all of the vaccine efforts, and also all of the pharmaceutical efforts to find cures and therapies for COVID — and there are hundreds. There were close to 300 vaccine efforts, and over 300 therapeutic efforts, I think. When you get so many parallel efforts — people trying every known vaccine technique against this thing, with all this redundancy across different labs — yeah, a few of them are going to come through, and a few of them are going to come through sooner rather than later. So I just think there's a lot of progress we can make against a problem when it becomes the world's number one problem, when everyone acknowledges it and sees it, there are tonnes of resources flowing into it, and the whole scientific and engineering community is focused on it.
Yeah. In the story of COVID vaccines, it becomes obvious that governments will be the primary buyers for vaccines if they get developed — this enormous sucking sound as pharmaceutical companies realise the incentives. But maybe there are times where there's a less clear story about the market pull, even though the thing would nonetheless be very good — whether because of externalities, or just because there aren't big buyers. One example is vaccines in poor countries. You might think there's an especially exciting opportunity to do a tonne of good there, because you can just create the incentives yourself if you care enough. As far as I understand, DNA vaccines, versus mRNA vaccines, aren't quite as good on a lot of measures, but they're much more shelf-stable — so especially good for transporting around whole countries — and they haven't really been developed because the enormous incentives weren't there. But you can just notice that, create your own incentive, and then throw resources at it in just the same way. And I'm kind of hopeful that maybe we'll see a bunch of DNA vaccines soon and just solve the rest of the COVID problem.
Yeah, and I think one general mechanism that is underappreciated and underused is the advance market commitment — or, more broadly, demand-pull type mechanisms. I'll frame it this way: you've got a bunch of money, and you want to do something about a problem. One way you can use the money is by giving grants to people who are going to solve the problem, or by investing directly in solving it yourself. That can work, and that can be good. But it requires picking winners, and managing — sometimes at the risk of micromanaging — the process. Another thing you can do is just announce, and make a credible commitment, that you will be the purchaser for these things if someone comes up with them. The lack of that is something that has prevented vaccines from getting developed in the past. We've had other coronavirus epidemics: we had SARS, we had MERS. And in each case, I think, pharma companies started thinking about whether they should try to develop a vaccine, and they just didn't know how big it was going to get. They didn't know how long the epidemic was going to last, or whether it would become a pandemic — which it didn't in those cases; it didn't become global. They didn't know if there was going to be a market for it, or who would buy. And so there's this hesitancy to invest. One way you could think about this, by the way, is: oh, we need to provide a profit incentive. I actually think about it slightly differently. The researchers, and even the pharma companies, almost don't need an incentive to solve the problem. They just need an excuse.
I mean, I think they would love to solve the problem, but they can't justify it if an enormous investment isn't going to get repaid. So if you can just say: look, you're not going to lose tonnes of money by diving into this problem, the money is there — then I think lots of people are motivated to use it.
Wait, sorry — I'm not fully clear. There's some difference between an excuse and an incentive, which I think I missed there.
Maybe it's a subtle distinction — and maybe this is a bit of a tangent from the previous topic, sorry. But I think that in general, scientists and engineers especially are motivated by solving problems. And so the role of profit mechanisms, and even of intellectual property protection and so forth, is less to stoke the fires of monetary greed, and more to provide a financing mechanism that allows for a rational monetary investment in the science and engineering.
So it's like giving permission to the scientists to do the thing they already care about doing. Scientists rarely wake up in the morning thinking, 'Oh my God, I'd make so much money if I solved this problem,' right?
I mean, look — when COVID became the world's number one problem, every researcher who could in any way help with the problem, or at least the vast majority of them, was surely thinking: how can I help?
Maybe to challenge this a little: there's a big question of what we mean by the world's number one problem — what gets to be the world's number one problem? In no small part, the reason COVID became the number one problem is that it affected the US and Western countries a lot too. There's definitely, in the history of vaccines and pharmaceuticals, this big problem that a lot of global health issues just don't get prioritised, because there isn't a market there, because it isn't as profitable. In many ways the EA movement came out of this simple equation: you look at how many DALYs a problem costs, you look at how much funding there is, and some things — like cancer, or other conditions more prevalent in the West or among consumers who have money — get way more attention than problems like malaria or tuberculosis, which don't, and which maybe don't have the resources to make themselves the world's number one priority — even if, in DALY terms, year on year or on the long-run view, they are definitely more costly than other issues.
Yeah, sure. Ultimately, if you're a person with money, or an institution, you have to decide what your priorities are. All I'm saying is that occasionally something does become the top priority for the entire world, and then it's sort of clear. Other than that, there are different problems, and it's not clear that there's really a strict ranking of the problems, even, right?
Yeah, and I don't want to labour the point too long, but it's again a question of how much of this is frontier advancement. You can definitely tell a story where, if we advance mRNA technology — for whatever reason, be it COVID or some other concern incentivised by willing consumers, or by big markets in the US, or the NIH, or what have you — that just makes vaccine development on the whole a bunch cheaper, and that will trickle down. Or it helps advance science as a whole, helps other breakthroughs down the line that will then go on to help with malaria or what have you. Yeah, I don't know. It's a big question for sure.
Yeah, as far as I'm concerned, there are lots of important problems in the world. Some of them affect some people, some affect different people. What problem you personally want to work on, or personally want to fund, is, in my opinion, more of a personal choice. People can choose to solve problems in all sorts of different places. They're all valid problems.
Yeah. There's something of a meta point I feel like I'm picking up on, which is that there's an interesting difference between, for instance, grants and prizes. When you establish a prize, you're describing the problem and then kind of giving people permission to go and solve this well-defined problem — and what matters is the endpoint. Whereas when you're giving a grant — this will get cut, I'm sure — there's more of a feeling of 'keep doing the work you're doing, we really care about it', with less of a view to the thing we actually care about, which is getting from here, where we don't have a solution, to the solution. And that's what ultimately matters. Does that make sense?
Yeah, I do think that when it makes sense to use grants, and when it makes sense to use prizes or advance market commitments, depends in part on whether you think you know what the form of the solution looks like, and whether you think you could identify who might do it. The more uncertainty there is — the more the solution could come from anywhere and take any form — the more it makes sense, maybe, to pay for outcomes. There's an interesting history of prizes, actually. I was looking at one website that had a whole list of major technological and scientific developments that won prizes, whether or not they were directly motivated by them. The invention of canning — the technique of sealing something in a can and then heating it to kill the microbes in it, in order to preserve food: Napoleon put out a prize for a technique of preserving food for his military on campaigns, and there was a French chef who came up with it. The Charles Lindbergh flight across the Atlantic won the Orteig Prize that somebody had put out. And some of Louis Pasteur's work in the run-up to the germ theory — specifically when Pasteur essentially disproved, once and for all, the spontaneous generation theory — there was a prize for progress on the spontaneous generation question, and he won that prize. So there are all these things throughout history where — I don't know what the counterfactual is; would people have done this stuff without the prize? Who knows? But prizes have been involved in a number of the major breakthroughs in history.
Incidentally, if anyone's listening and has 10 minutes to look at a Wikipedia page, Louis Pasteur's is fascinating.
Oh, absolutely. And while we're on him, I'll recommend a biography of him that I read: Louis Pasteur: Free Lance of Science, by René Dubos. I enjoyed the book.
Okay, so we've talked a bunch about the case for progress, and the things that drive it. We've also touched a little, or teased at the start, on this question of whether it's drying up — are we hitting a great stagnation or not? And I think that naturally takes us to the question of whether there's anything we can do to actively boost scientific advancement, maybe through grants and prizes as Fin mentioned. What is driving these things?
I guess a fairly simple but extremely difficult question is: what are the levers to actually boost this kind of progress, especially growth — and especially over the long run, if you want these exponential differences to kick in? Where do we look?
I don't have a full explanation yet for the causes of progress, but I'm starting to get a view of it, so here's my current, incomplete view. One thing that's clear to me now is that there are a number of overlapping feedback loops. I mentioned a little earlier that progress feeds on itself — progress compounds. Many forms of progress lead to faster progress, and this is true at multiple levels, all of them simultaneously. First, technology itself can often make technological development faster. Most obviously, communication technology: the better we get at writing down knowledge and ideas and communicating them, the faster people can exchange ideas, learn, and find out what applies to their situation. Slightly less obviously, as we create transportation networks we get larger markets, and larger markets can drive more kinds of progress. And when we come up with fundamental innovations in manufacturing — the development of precision manufacturing, especially in the 19th century, was a fundamental enabler of many types of machines, and therefore many types of progress. Now, not everything feeds directly back into progress, but many things do. Okay, so that's at the technology level. At a broader economic level, building up surplus wealth gives us more to invest in R&D and in new ventures, which makes progress happen faster, which builds up more surplus wealth — another reinforcing loop. And I mentioned earlier the obvious one of science: science gives us advanced technologies, and then technologies feed back into making science happen faster, whether that's through new types of instruments — the Large Hadron Collider, LIGO, space telescopes — or just the internet.
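As a quick illustrative aside on the compounding point — the "exponential differences" at stake here — a sketch using the rule of 70 linked in the show notes. The numbers below are not from the conversation; they're just standard compound-growth arithmetic:

```python
import math

# Sketch: why small differences in growth rates compound into enormous
# long-run differences. The "rule of 70" approximates doubling time as
# 70 / (growth rate in percent).

def doubling_time(rate_percent: float) -> float:
    """Exact years to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate_percent / 100)

for rate in (1, 2, 3.5):
    print(f"{rate}% growth: doubles in ~{doubling_time(rate):.1f} years "
          f"(rule of 70 estimate: {70 / rate:.0f})")

# Over a century, 3% vs 1% annual growth compounds to roughly a 7x gap:
print(f"{(1.03 ** 100) / (1.01 ** 100):.1f}x")
```

The takeaway: a two-percentage-point difference in the growth rate, sustained for a hundred years, leaves one economy about seven times richer — which is why levers on the long-run growth rate matter so much.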
arXiv and PDFs and Google Scholar — all of that stuff. There's also, I think, a feedback loop at the very deep level of philosophy. For a long time, people didn't believe in progress, because they didn't think it was possible or desirable — and part of that was because they hadn't seen much of it. If you go back and ask where the modern idea of progress originated, it really originates in the West around the time of Francis Bacon. And if you look at the writers of that time — what were they looking at? Why did they think progress was possible? It's actually because they saw a few key instances of it. One was the voyages of discovery — the Age of Discovery generally, the opening up of the world: entire new continents had been discovered, there were new trade routes and more international trade, and this was making people wealthier. But there were also inventions: the compass, gunpowder, the printing press. Big things like this made people realise: oh wow, here are some inventions that we didn't have for thousands of years — and that even the revered ancients didn't have. In Europe, that would be the Greeks and the Romans, who were looked at as this race of giants, these super-wise people who had knowledge that was lost and that we were only just beginning to rediscover — we hadn't even built a huge Colosseum like they did. So there was this notion that maybe they were greater, better than we are. And that started to change as people realised: okay, sure, the Romans had formulas for cement that we lost for a thousand years, but they didn't know about the Americas, and they didn't have the printing press, and so forth.
The printing press in particular Bacon called out. He was kind of like: the compass, gunpowder — for those, maybe you need some aha insight, some special knowledge that this particular substance has this reaction. But the printing press kind of should have been obvious, shouldn't it? It's really just a mechanical invention. Why didn't we come up with this before? It's sort of funny, but that made Bacon optimistic, because he's like: look, guys, there's nothing standing in our way — we can just do this. Now, it took centuries for people to really believe this — for it not to be just a conviction among a few elites, intellectuals and scientists, but to percolate out to the world. Today, though, it's unmistakable, partly because now we can see progress happening in our lifetimes. Hundreds or thousands of years ago, people pretty much died in the same world they were born into; they did not see the world change in their lifetimes. Now we see the world changing every decade or two, in major ways. Anybody who's my age grew up in a world basically without the internet, or with almost no internet — and now the internet absolutely rules the world.
So, maybe looking to the present day or the near future: what do you see as some of the potentially really exciting ideas that could be part of these feedback loops, or could even create new feedback loops? Are there any technologies or ideas or institutional reforms that stand out as deserving a lot more attention or focus — or possibly even becoming the world's number one priority?
So I have a few ideas for why progress might have slowed down in the last 50 years or so — and the flip side of that is what we might want to address to keep progress going, or accelerate it. I'll start with the most fundamental, which is what I just discussed: the very idea of progress, what I've termed the philosophy of progress — people's fundamental attitudes towards it. I think those attitudes were very positive up until the early 20th century, and then the world, or at least the West, soured on progress in a key way through the course of the 20th century. Now I think we're very conflicted about it; people have very mixed feelings.
Could you speak a bit more to that? What was it that soured? How have attitudes changed?
Yeah, sure. There were a number of strains of thought, but I think the key historical turning point really was the World Wars. Before the World Wars, people were very optimistic, not only about the progress of science and technology, but also about progress in morality and society — and they saw those two as going hand in hand. Think from the perspective of 1913: what had happened in the last 100 or 200 years? Monarchy had fallen in many places in the world, replaced with democratic republics. We had ended slavery in the West. After the end of the Franco-Prussian War, there was a period of relative peace in Europe for 30 or 40 years. And so some people started to think: well, with the expansion of industry and the growth of trade between nations, nations aren't going to want to go to war so much — everybody's got plenty of goods now, and we're all trading with each other. Then the telegraph comes along, and people say: wow, this is great, we can communicate across borders now, there'll be no more misunderstandings. People were optimistic that we were on the verge of a new era of world peace. And then the World Wars hit, and it's a cataclysm. It's a blow to the optimists; it's a challenge to the idea of progress.
That seems particularly surprising to me, because World War Two especially — or at least its aftermath — often gets cited as this really big inflection point which in turn actually sparked a lot of modern-day growth. If you think about where the internet or computing came from, a lot of that originated in technologies developed in the war, or subsidised after the war. Or if you think about human rights: that, in large part, is a huge advancement in social and human values, and it really came out of the dark period of the war as well, right? So I maybe want to challenge that notion a bit.
I mean, good things can come out of adversity — which doesn't justify the adversity, or even mean that it was net good. It just means that humans can pull good things out of almost anything. But coming back to what this meant for people's idea of progress itself: through the World Wars, people saw that technology is not automatically leading to moral progress. It certainly didn't lead to an end of war — in fact, it made for a much more horrific, destructive, deadly war. And if this wasn't obvious through World War One, it became absolutely obvious during World War Two.
So, perhaps just stating the flip side of what you just said: war might be innovative, it might generate new technologies, but there's nothing inherent about these new technologies being good — even in the post-war period. If you think about the development of nuclear weapons, it's not obvious that progress will lead to better human welfare.
And so after the wars there's a new consciousness of: well, is this technology going to get used for good? Are we going to use it in a wise and prudent way? Are we going to be good or evil with it — foolish and reckless, or prudent? To some extent that cautiousness is good. However, I think it was also a setback for the proponents of progress, and in a way it opened up room for some reactionary views that had been around since the very beginnings of the Industrial Revolution, or even before. A number of people came forward with a new set of explanations. The thing about history is that people react to events, but their reactions, I believe, are contingent — they're not determined by the events. There are multiple ways to interpret any historical episode. So when there's a major event like the wars, you have this period of competing interpretations: different people looking at the same events in different ways, trying to say what it means and where we go from here. What happened in the West in the mid-20th century was that a number of explanations came forward that were based on a very deep distrust of the entire project of modernity, and certainly of technology and industry. They took some of the concerns from the war, and those got fused with a number of other concerns. Poverty and inequality: is all this wealth we're creating getting distributed fairly? Pollution and the environment: what are we doing to the world around us? Is it going to be good for us? Do we have any right to do it? And all of this fused into some of the countercultural movements and, in some ways, into the environmentalist movement.
And so there were a number of these interpretations that just said: look, this whole project of trying to move the world forward with science and technology is mistaken. Let's stop it, or at least let's slow it way down. Let's return to nature, return to our roots, return to tradition, return to family and community — return to whatever romanticised X you imagine was better in the past. And I think we're still struggling with this today. The world is very conflicted. On the one hand, people can see many ways in which technology does make their lives better. People love their iPhones; a lot of people even make a hero out of somebody like Elon Musk and all his ambitious ventures. And yet at the same time, they're worried that robots are going to take all their jobs, or that climate change is going to destroy civilisation. They're worried about a lot of these things.
There's a claim here which seems like a really big deal if it's true, and I have no idea how to think about it. It sounds like you're saying something like: how things pan out depends, importantly and causally, on people's attitudes and beliefs about progress — what kind of progress is possible, and whether certain kinds of progress are actually desirable in the first place. So how do we start thinking about that? What's the evidence that the causality goes that way — that attitudes actually matter for what happens in the world at the frontier?
So I think the causality goes both ways — the relationship is reciprocal, which is why I say it's a feedback loop. Certainly, the more progress happens, the more people are going to believe in it; and the more the progress seems to be good, the more people are going to think it's desirable. But then you ask: what's the evidence that it goes the other way? First, let me just appeal to you directly, and to each listener individually: introspect for a moment. Doesn't what you do in the world depend on what you believe is possible and desirable to do? We are guided by our ideas about these things. Then look at the history: how many thousands and thousands of years went by with most people not coming up with new and better ways of doing things? Why did that change so much recently? Again, you can look at it from all of these different levels — the compounding impact of technology, or of wealth, or of the growth of science — and I think all of those are true, but I think the level of ideas and guidance is also real. Note that this fundamental notion that progress is possible and desirable goes all the way back to the early 1600s — to writings like Bacon's, long before it really started happening to any significant degree. People believed in it, and I think they were pursuing it because they believed in it. Another thing I'll point to is Anton Howes, who writes about economic history — he has a good blog. He looked into a set of some 1,400 or 1,500 British inventors from a period of about three centuries — roughly 1550 to 1850 or so.
And he went a little bit into every one of their life stories: what did they invent, where did they grow up, how did they get into inventing, and so forth. He found, in many, many cases, that they had some prior contact with another inventor before they began inventing — it could have been a family member, it could have been someone in their village. So he came up with this hypothesis that maybe invention is contagious: maybe the idea of invention is something where, once you see somebody else do it, you're more likely to do it yourself. And he noted, by the way, that it wasn't just technical training, because sometimes somebody would have contact with, say, a mechanical inventor and then go on to become a chemical inventor. They weren't necessarily in the same field. It seems to be more of a general influence — a general sense that tinkering with stuff is cool and fun, and maybe you'll come up with something new and useful.
Got it. And I guess other evidence might be that some of the great innovators of the last 50 years are also sci-fi nerds — maybe there's something there. You also get geographical concentrations of innovation: Florence, for instance, if we're going that far back, or Vienna, and more recently the Bay Area. Maybe that has something to do with attitudes. You also get dynasties and families, which could be something more than the passing down of technical knowledge.
But also, look at the institutions that drive this stuff. Go back and look at MIT or Johns Hopkins — some university, and I'm specifically naming ones founded in the 1800s. Why did they get started? What did they say about themselves when they were getting started? What was the project? Very often they had this very explicit project: we are going to have better technical training, better knowledge, and it will ultimately pay off. Or read Vannevar Bush's 1945 Science: The Endless Frontier. That was this end-of-World-War-Two manifesto — the form of it was a report to the President — in which he wrote about why the US should invest in science and technology and basic research, and how important this was going to be to prosperity, security and everything good for the country. So there was a very explicit motivation.
I think there's definitely something to different attitudes in responding to problems generally. I want to pick up on what you mentioned way before with the climate and environmentalist example. To flag: this is somewhat of a false dichotomy, but I think it's a useful intuition. There are two very stark reactions you can have. One is degrowth: we just need to stop progress and halt it. The other reaction is: 'oh, shit, we need to actually double down on progress' — invent a tonne of advancements in solar, nuclear and batteries, restructure the economy, and so on. That one is actually a call for more progress. And I think that framing is interesting not just on an individual level but, as you said, on an institutional level — in how governments think about this too, with really influential decisions on R&D, or regulation, or what have you.
Yeah, it's like running down a steep hill: you can't stop running now. Another thing that's occurred to me is, what would be the experiment you could run to find out whether this is true? If you could create a duplicate world, maybe you could imagine seeding classical civilisation with Atlantis-style myths about civilisations that developed technology, or maybe just biographies of great thinkers. If you ran those two parallel versions of Ancient Greece or Rome, or even ancient China, maybe there would be very different outcomes.
Yeah. Obviously you can't do the RCT, so part of the way we have to test hypotheses like this is through comparisons across time and across space: you look at different countries and regions. One book that does some of this is A Culture of Growth by Joel Mokyr, an economic historian. It's specifically about this idea of progress, how it got established, and how it affected the Industrial Revolution. He makes some contrasts with China, looking at Europe versus China and asking: was there a concept of progress in China? A unique, unprecedented spirit grew up in Europe around the time of the Reformation and the Scientific Revolution, where ideas could be tested. In most places and times there has been a lot of reverence for tradition and authority, so early modern Western Europe is the exception to the rule, geographically and historically: for pretty much the first time, people stopped giving quite so much weight to tradition and authority, and allowed ideas to be challenged and tested by experiment.
Yeah, that reminds me: our very first podcast episode was on the Industrial Revolution, and I think we spent a fair bit of time talking about Mokyr and his ideas. He also has this theme of just accepting that you can even do things, that you can change the world. So if lightning strikes, or there's a drought, rather than blaming it on the gods or whatever, you actually take ownership and think about what you can do to prepare yourself for next time, rather than just calling it fate.
Yeah. And part of that was the development of probability and statistics. To take a contrast, the ancient Greeks were very intellectually curious and active, and maybe not tradition-bound, but they did not have the concept that all of their intellectual curiosity was going to improve the useful arts. They didn't see a connection between science and the economy. That is really one of the new, unique things that comes in with Bacon and others around his time: the idea that this knowledge is actually going to be useful, that we can improve the arts and manufactures, and maybe agriculture and so forth, with the right knowledge. That was a really key breakthrough, and it led to the motivation to do all this stuff.
This is going super fast, and I think we could talk for another hour about this, but I definitely want to make sure we leave enough time for the longtermism and safety stuff. Before we do that, I really want to hit the 'are ideas becoming harder to find' thing. So let's tackle it straight on: are ideas becoming harder to find, in an important way?
Yes. So there's an economics paper with this title, Are Ideas Getting Harder to Find? It's a very interesting paper with a maybe slightly unfortunate title, because the phrase is catchy but can be a little misleading. What is an idea? What does it mean to find an idea? What does it mean for ideas to be easier or harder to find? How would you even quantify that? So I would suggest scrapping that framing for a moment. What the paper actually looked at was: is it taking more investment to sustain exponential economic growth? They looked at data from a number of different areas and concluded yes. An example is Moore's Law: you've got transistor density on chips doubling every constant increment of time, two years or so. How many researchers do you need to keep Moore's Law going? It turns out you need an exponentially increasing investment in silicon R&D to keep that exponential growth in chip density going. And there are a number of other areas like this, where you need exponentially increasing investment to maintain exponential growth in some key metric. So this is what is meant by, quote unquote, ideas getting harder to find: it is taking more and more investment to sustain growth. Why does this ultimately matter? Well, some people point to this paper, or just to the general concept, and say: okay, maybe this is the explanation for why there might be some technological stagnation. And even without the data in the paper, you can get an intuitive sense of why this might be. There's this notion that we pick the low-hanging fruit.
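The bookkeeping behind that claim can be sketched in a few lines. If output (say, transistor density) keeps growing at a constant exponential rate while effective research effort also grows exponentially, then research productivity, growth per researcher, falls exponentially; that falling ratio is what "ideas getting harder to find" cashes out to. A minimal sketch, with all numbers invented for illustration rather than taken from the paper:

```python
import math

def research_productivity(growth_rate: float, researchers: float) -> float:
    """Output growth achieved per unit of research effort."""
    return growth_rate / researchers

# Stylised Moore's law: a constant ~35% annual growth in transistor density,
# while effective research effort grows at a hypothetical 7% per year.
growth_rate = 0.35
r0 = 1.0              # normalised research effort in year 0
effort_growth = 0.07  # invented rate, for illustration only

p0 = research_productivity(growth_rate, r0)
p50 = research_productivity(growth_rate, r0 * math.exp(effort_growth * 50))

# After 50 years, productivity is exp(-0.07 * 50) of its initial value:
# the same output growth now requires ~33x the research effort.
print(p50 / p0)  # ≈ 0.030
```

The point of the toy model is just that a flat output curve plus a rising input curve mechanically implies a falling productivity curve; the paper's contribution is measuring those inputs across many fields.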
So you come to some area, and certainly in science, it used to be that you could discover fundamental laws of physics playing around with magnets and wires on your kitchen table. Now you need to build the Large Hadron Collider, or LIGO, or the James Webb Space Telescope: a huge multibillion-dollar project. Maybe similar things with the aeroplane: you invent it, a huge leap, and there are some really obvious things to do in the beginning, like making the aeroplanes out of metal instead of wood and canvas, and giving them more powerful engines. But then eventually it gets harder and harder to figure out how to make aeroplanes better. And maybe you hit physics limits, like the sound barrier: hard to go beyond, obviously not impossible, but a limitation for some planes. At a certain point you've exhausted the obvious things to do, so you say, okay, we've picked all the low-hanging fruit. I don't think this is sufficient, though. There are a couple of ways you could go with this. One is to say: therefore, progress is going to slow down; the fast progress we've seen in the last couple of hundred years is a historical aberration; we got lucky, we're never going to see it again, don't hope to maintain it into the future; get used to stagnation from here on out. Or you could at least say: well, this is an explanation for why we've started to see some slowing down of progress in the last 50 years. I don't think it's enough.
And the reason is that as ideas get, quote unquote, harder to find, we also get better at finding them, and we get more powerful tools for finding them. To use the fruit analogy: yes, we pick the low-hanging fruit, but then we get ladders to reach the higher fruit, and eventually we get mechanised fruit pickers. We get better at picking fruit the higher it gets. So if you want to explain the pace of technology, it's not sufficient to look at one half of this; you have to look at these two counterbalancing forces. To go back to the example of making progress in science: yes, now we need LIGO, but now we also have the ability to build LIGO, which we didn't have 100 or 200 years ago. We have the wealth it takes to invest in something like LIGO, the science to design it, the precision machines to build it, and the computers to analyse the data. So we're getting better at making progress at the same time that progress is getting inherently harder to make. And by the way, on the low-hanging fruit point: when we open up some new field, when we make some breakthrough, it tends to open up a whole orchard of new low-hanging fruit that we didn't originally have. You invent computers, and all of a sudden there are all these things we were doing by hand that we can obviously now do on a computer. Or you invent the internet, and there's lots of stuff that should clearly go on the internet now. Fundamental breakthroughs open up lots of new low-hanging fruit.
So rather than looking at any one narrow area of technology and saying, well, this is petering out, this is plateauing, we're milking the last bits of it now, what you should actually be looking at is: what is our rate of opening up new fields, of having new breakthroughs that open up new orchards? Is that slowing down? And if so, why? I think that's the more interesting question. If you look at the last 50 years and ask whether there's been a bit of a slowdown, part of it is that we've been milking some of the breakthroughs from decades ago. Not all of them, but some, and some of them, maybe computing, may be hitting a point where they're starting to plateau, or starting to hit a bend in the curve. But why didn't we come up with even more new things to replace them? Why haven't there been totally new breakthroughs like computing in the last couple of decades that have opened up fresh green fields?
I guess I can try to get back to my metaphor. Pick the thing you care about, say transportation, and tell its history. Often it looks like a series of S-curves. We start with some crude form of transportation, like the horse-drawn carriage; once the technology opens up, you get rapid innovation; then we reach diminishing returns, having more or less perfected the horse-drawn carriage, just around the time we invent the combustion engine. One thing you could do is look at all those S-curves and say: look, every example of a technology opening up has resulted in diminishing returns, ideas getting harder to find, stagnation. So the lesson we should draw is that overall we should expect a similar pattern: we saw this extraordinary period of growth over the last century or two, and we should expect that entire thing to slow down as well, for the same reasons. But the suggestion is something like: hang on, maybe there can just be a bunch more of these new S-curves, and the fact that each curve has this shape doesn't tell us anything about how many possible curves there are, how many new breakthroughs lie ahead of us. Is that roughly the idea?
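The stacked S-curve picture can be made concrete with a toy model: each technology follows a logistic curve that eventually saturates, but if new curves keep arriving (and, as discussed above, new fields open bigger orchards), the aggregate keeps growing even though every individual technology stagnates. A hypothetical sketch, with all parameters invented:

```python
import math

def logistic(t: float, start: float, ceiling: float, rate: float = 1.0) -> float:
    """One technology's S-curve: slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - start)))

def aggregate(t: float, n_curves: int = 10, spacing: float = 5.0) -> float:
    """Total output from a sequence of technologies, one launching every
    `spacing` years, each with double the ceiling of the last (a stand-in
    for breakthroughs opening up larger fields)."""
    return sum(logistic(t, start=i * spacing, ceiling=2.0 ** i) for i in range(n_curves))

# Each individual curve flattens out, but the aggregate keeps climbing,
# so extrapolating stagnation from any single curve misreads the total.
for year in (0, 10, 20, 30, 40):
    print(year, round(aggregate(year), 1))
```

Whether real growth looks like this depends entirely on whether new curves keep arriving, which is exactly the open question about the rate of breakthroughs.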
Maybe you can make some argument that eventually it has to run out. But I think you're mistaken if you try to posit any sort of foreseeable end. We're so far away from whatever the theoretical end might be, and there are so many things we can't even imagine, that positing an end on any foreseeable, near-term timeframe is just wrong.
I really like this idea of a metrics paper, and maybe listeners will know of one, looking at this question of actually measuring the rate of breakthroughs. It's funny: these S-curve things are presumably just harder to study systematically, because breakthroughs are, by definition, rarer events. But there's something really interesting in this abstract, broad idea of breakthroughs and whether they're becoming harder to find, because it feels really high-stakes for the question of progress as a whole. One interesting way of framing it goes back to what you were saying before about feedback loops. Feedback loops can have very different shapes: they can be self-sustaining, exploding, or converging, and presumably that matters a whole bunch for human progress. And I liked the frame you gave of thinking ahead about what's possible. One way I like to think about progress studies, or about this breakthrough question, is: how much growth should we expect to have if we could actually get this going? Is it a question of just sustaining the three or four percent growth we had back in the 50s and 60s for another 100 or 200 years? Or are we talking about something very radical, where, if we increase the number of scientists or R&D spending by 20x, which I think we did in the 20th century, we should expect growth to accelerate by 20 percent, or by 20x as well? Those feel like very different worlds we're talking about.
Yeah, well, going back to the ideas-getting-harder-to-find paper: we may have to keep increasing the number of scientists and researchers just to keep up three or four percent economic growth. In fact, this is one reason to actually worry about population levelling off or even decreasing. At a certain point we could turn out to be limited by population: we need more and more researchers, scientists, and engineers on the frontier pushing things forward in order to keep up exponential growth. And if your population isn't growing, at a certain point you hit a limit; you can't have more than 100 percent of your population as scientists, and you can't keep growing from there. So that's one reason a number of people who are interested in progress are actually very concerned about population growth slowing down.
Yeah. And open borders, presumably, is part of that too, right? Part of it being getting more people access to this frontier? Or is it just overwhelmingly a population-size thing?
Both are relevant. But again, open borders only takes you so far; that's one finite source of talent. Whereas if your population doesn't grow, you're going to hit a talent limit at some point.
Have you given this explicit thought? Do you have a number in your head for how fast we could be growing if we solved these institutional or social problems? If the world really was focused on advancing science as much as it could, and we dedicated a whole bunch of resources along the lines of the recommendations we spoke about before, how quickly could we be growing, consistently or sustainably?
That's a great question. I don't know off the top of my head. I would look back at what the growth rates were sometime between about 1870 and 1940 or so, especially in the US, which was growing pretty fast in that period, and say that's at least a baseline.
Cool, let's talk about the long-term stuff. I'm not sure exactly how to lead into this, but on the Roots of Progress website you have this line: we need a new philosophy of progress for the 21st century. One way you can begin thinking about that is to take an expansive long view, and consider the trajectory that humanity might eventually take. Then you might frame the question progress studies is asking as something like: how do we ultimately bring about the most progress possible, the progress we're capable of achieving? Is that what you have in mind when you talk about progress studies?
I care about the long term. But maybe there are two differences between me and the most hardcore longtermists. One is that I do place a nonzero moral discount on the future, in terms of what I care about. I care about the long term, I care about posterity, but I don't care about it infinitely much. Lives 10,000 years from now are not equally meaningful to me as my life and the lives of people I care about, or even other lives today. Now, I don't think you can just apply a simplistic economic percentage-per-year exponential discount; that leads to some weird conclusions. But I also don't think you can just apply a zero discount. The other thing is that I apply some epistemic discounting: it's just super hard to predict things much further out in the future. By looking at history, I feel like I can look into the next decades, and maybe even a century or so, with some clarity. But 1,000 years out? 1,000 years back the world was almost unrecognisable; we didn't even have the plough, and the world was just extremely different. And progress back then was also not moving very fast, so 1,000 years from now is going to be even more different from today than today is from 1,000 years ago. It's just very difficult to say anything with certainty.
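The "weird conclusions" of a simplistic exponential discount are easy to see with arithmetic: even a small annual rate makes the far future count for almost nothing. A quick illustration (the 1% rate is an arbitrary choice for the example, not anyone's actual proposal):

```python
import math

def discount_factor(rate: float, years: float) -> float:
    """Weight given to value `years` in the future under continuous
    exponential discounting at annual rate `rate`."""
    return math.exp(-rate * years)

# Even a "modest" 1% annual discount rate nearly zeroes out the far future:
print(discount_factor(0.01, 100))   # ≈ 0.37: a century out still counts
print(discount_factor(0.01, 1000))  # ≈ 0.000045: a millennium out barely registers
```

This is why a pure exponential discount sits badly with caring about posterity at all, while a zero rate weights year 10,000 exactly like today; the disagreement above is about what lives between those two extremes.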
Okay, two points. Let's talk about the second one first. The thesis is not: let's look forward to the very far future and trace back, in some complicated Rube Goldberg way, how we can influence it. It's often something like: as much as we can't fill in the details of the world in 1,000 or 10,000 years, we can trace the broad contours, precisely because we can look back and see that we've made all this progress. We can say, in fairly hand-wavy terms, that the world could probably be extraordinarily better: you can imagine spreading beyond Earth, for instance, or creating digital people; all bets are off. So let's not worry too much about predicting it; instead, what are the things we can do now to bring it about? And you can tell a story where something like this century could be fairly pivotal for whether that broad future even comes about in the first place. So if you buy into that, you can imagine thinking: okay, ultimately I do care about achieving all this progress, but this century seems so pivotal that the urgent priority now is just to make sure things go well. Then we can breathe a sigh of relief and think about the progress stuff, if that makes sense. I'm not asking whether you agree, but does that story at least seem internally coherent?
Yeah, I totally get the argument, I'm familiar with it, and I think there's something to it that's even reasonable. But to me, at the end of the day, it just comes down to: what, literally, are you proposing? You said maybe progress can wait. But suppose you took this point of view: what would you actually do? How would you use our resources? It wouldn't literally be progress waiting, because what we need is at least some specific form of progress. If we're in some vulnerable position, that is a problem to be solved with, ultimately, something better: better knowledge, maybe better technology, maybe better social systems, maybe some combination of all of the above. So the question is, how are you actually going to tackle the problem?
I think there's one thing here of breaking progress, which is an incredibly broad term, down into, as you said, what does this literally mean? And maybe there's a thought here of thinking harder about the social consequences of particular technologies. One easy way, again drawing a false dichotomy, is to say some technologies are good for human progress and some are bad, and we should do the good ones and hold off on the bad ones. That probably doesn't work, because a lot of technologies are dual use. You mentioned World War Two before, and that causing a lot of scepticism around progress: on the one hand, nuclear technologies are clearly incredibly destructive and awful and could have really bad consequences, and on the other hand they're phenomenal and really good and can provide a lot of energy. We might think the same around bio and AI. But maybe the point is just that we should think about this stuff harder before we go for it, or have more processes in place to have these conversations and discussions, and to navigate this stuff?
Yeah, definitely. I think we should be smart about how we pursue progress, and I think we should be wise about it as well. Let's take bio, because that's maybe one of the clearest examples, and one that actually has a history. Over the decades, as we've gotten better and better at genetic engineering, there have actually been a number of points where people have proposed, and actually gone ahead with, a pause on research while they worked out better safety procedures. Maybe the most famous is the Asilomar Conference in the 1970s. Right after recombinant DNA was invented, some people realised, whoa, we could end up creating some dangerous pathogens here. There's a particular simian virus that causes cancer, and people started asking: what if this gets modified and can infect humans? More broadly, there was a clear risk, so they actually put a moratorium on certain types of experiments, got together about eight months later at a conference, and worked out certain safety procedures. I haven't researched this deeply, but my understanding is that it went pretty well in the end. We didn't have to ban genetic engineering or cut off a whole line of research, but we also didn't just run straight ahead without thinking about it or being careful, and in particular they matched the level of caution to the level of risk in the experiment. And this has happened a couple of times since. I think there was a similar thing with CRISPR, where a number of people called out, hey, what are we going to do, especially about human germline editing? And the NIH had a pause on gain-of-function research funding for a few years, although then they unpaused it.
And I don't actually quite know what happened there. But there's no sense in just barreling ahead heedlessly. I think part of the history of progress is actually progress in safety. In many ways, certainly at a day-to-day level, we've gotten a lot safer, both from the hazards of nature and from the hazards of the technology that we create, and we've come up with better processes and procedures. In operations, think about how safe airline travel is today; there's a lot of operational procedure behind that safety. And also in research: these biolab safety procedures are an example. Now, I'm not saying that's a solved problem. From what I hear, there's still a lot of unnecessary or unjustified risk in the way we run biolabs today, and maybe some important reform needs to happen there. I think that sort of thing should be done. Ultimately, like I said, I see all of that as part of the story of progress, because safety is a problem too, and we attack it with intelligence, just like we attack every other problem.
Totally. You mentioned aeroplanes, which makes me think: you can imagine getting overcautious. These crazy inventors have built these flying machines; we don't want them to get reckless, they might crash or cause property damage, so let's place a moratorium on building new aircraft and make it very difficult to innovate. And yet now air travel is, on some measures, the safest way to travel. So how does this carry over to, for instance, risks from engineered pandemics? This is maybe obvious, but presumably the moratoria, regulation, and foresight stuff is important. Yet in the very long run, it seems we'll reach some sustainable point of security against risks from biotechnology not through these fragile arrangements of trying to slow everything down and pause stuff, as important as those things are in the short term, but rather by barreling ahead on defensive capabilities, like an enormous distributed system for picking up on pathogens super early on. That fits better in my head with the progress vibe, because it's a clear problem we can funnel a bunch of people into solving. And I anticipate you'll just agree with this, but: if you're faced with a choice between, on the one hand, across-the-board progress in biotechnology, investing in the full portfolio, and on the other hand going almost all-in on the safety stuff, which seems especially better than the risky stuff, and making a bunch of differential progress there, it seems like that second thing is not only better, but maybe an order of magnitude better, right?
Yeah, I don't know how to quantify it, but sure. One good thing this points to is that different technologies just have clearly different risk-benefit profiles. Something like a wastewater monitoring system that will pick up on any new pathogen just seems like a clear win. And on the other hand, and I don't have a strong opinion on this, maybe gain-of-function research is just a clear loss, one of those things where risk clearly outweighs benefit. So yes, again, we should be smart about this stuff. And the good news is that if we find the right general-purpose technologies, they can add layers of safety, because general capabilities can protect us against general risks that we can't completely foresee. The wastewater monitoring thing is one example, but here's another: what if we had broad-spectrum antivirals that were as effective against viruses as our broad-spectrum antibiotics are against bacteria? That would significantly reduce the risk of the next pandemic. Right now, dangerous pandemics are pretty much all viral, because if they were bacterial we'd have some antibiotic that works against them. Probably, anyway; there's always a risk of resistance and multi-drug resistance and so forth, but in general the dangerous stuff recently has been viral for exactly this reason. Similarly, suppose we had some highly advanced nanotechnology that gave us essentially terraforming capacity: climate change would be a non-issue, because we'd just be in control of the climate.
Nanotech seems like a worse example to me, for reasons that should be obvious.
Okay, sure, but that wasn't the point. The point was: if we had the ability to control the climate, then we wouldn't have to worry about runaway climate effects, about the climate getting out of control. So general technologies can protect against general classes of risk. And I do think that some technologies have a very clear risk-benefit trade-off in one direction or the other, and that should guide us.
Yeah, I want to make two points. One is that, just listening back to this, it strikes me that a lot of what we were saying about the bio stuff is analogous to what we said earlier about the climate stuff, where there are almost two reactions you can have to the problem. One is to just stop growth or progress, as we defined it, across the board and hold off, which is clearly silly, or has bad consequences. Or you can take what we discussed as the more nuanced approach, where you want to double down on progress in certain areas, such as detection systems, and maybe selectively hold off on others. In many cases that's actually a case for progress, not against it, in order to solve the problems we're incurring. The thing I wanted to pick up on, though, at the end of what you said: general-purpose technologies, all these really powerful capabilities, just seem really hard. When we talk about general-purpose things, I think we're implicitly having a discussion about AI here, but to also use the geoengineering example: there's just a big problem with having things that are that powerful. Say we can choose whatever climate we want; we can definitely solve climate change, or control the overshoot or something. But if the wrong person gets their hands on it, or if it's a super-decentralised technology where anybody can do anything, and the offence-defence balance isn't clear, then you can also just really screw things up. And I think that's why it becomes a harder issue, and it becomes even harder when these technologies are super general-purpose, which makes them really difficult to stop, or to keep from getting distributed and embedded everywhere.
If you just think of all the potential upsides you could have from AI, but also all the potential downsides if just one person uses it for a really bad thing. Yeah, I don't know. That just seems really difficult.
I don't want to downplay any of the problems. The problems are real; technology is not automatically good. Again, it can be used for good or evil, wisely or foolishly. We should be very aware of that.
I think the point that seems important to me is that maybe there's a cartoon version of progress studies, which says something like: there's one number we care about, it's the scorecard, and that number is gross world product or whatever, and we should drive it up, and that's all that matters. There's also a nuanced and sophisticated version, which says: let's think a bit more carefully about what stands to be best over longer timescales, understanding that there are risks from novel technologies which we can foresee and describe the contours of. And what that tells us to do is maybe focus a bunch more on speeding up the defensive capabilities, putting a bunch of smart people into thinking about what kinds of technologies could address those risks, rather than just throwing everyone at the entire portfolio and hoping things go well. And maybe, where there is some daylight between the longtermist crowd and the progress studies crowd, the difference might not be one of ultimate worldview. It might just be a question of parameters: what numbers are you plugging in, and what are you getting out?
It could be. Or it might actually even be the opposite: it might just be a difference in temperament, in how people talk about this stuff when we're not quantifying. And if we actually sat down to allocate resources, and to agree on safety procedures and on which technologies to pursue, we might find out we agree on a lot of this stuff. I think it was the Scott Alexander line about AI safety: on the one hand, some people say we shouldn't freak out and ban AI or anything, but we should at least get a few smart people starting to work on the problem. And other people say, yeah, maybe we should at least start getting a few smart people working on the problem, but we shouldn't freak out or ban AI or anything. It's the exact same thing, just with a difference in emphasis. And I think some of that might be going on here. That's why I keep wanting to bring this back to: what are you actually proposing? Let's say which projects we think should be done, which investments should be made. And we might actually end up agreeing.
I think, in terms of temperamental differences and similarities, there's a tonne of overlap. One bit of overlap is just appreciating how much better things can get, and being bold enough to spell that out. There's something almost taboo about just noticing that we could have a tonne of wild shit in the future, and that it's kind of up to us whether we get that or not. That seems like an important overlap.
Yeah. And you mentioned before, I think, the agency mindset or something.
Yeah. Yeah. As in we can make the difference here.
Yeah, I totally agree. I think if there's a way to reconcile these, maybe it is ultimately understanding that safety is a part of progress. It is a goal; it is something we should all want. And it is something we ultimately have to achieve through applied intelligence, just like we achieve all our other goals: food, clothing and shelter, and even transportation and entertainment, and all the other obvious goods that progress has gotten us. Safety is also one of those things where we just have to understand what it is, agree that we want it, define it, set our sights on it and go after it. And ultimately, I think we can achieve it.
I want to ask Fin's question back to you, actually: when you look at the EA community, are there specific things that stand out as things you really like, or things you think it could improve on?
Yeah, let's see. So the EA community is highly overlapping with the rationalist community, at least, and there's a lot I admire about that community in terms of its epistemic approach: being extremely intellectually honest, intellectually curious, trying to get very clear about what we believe and understand, being very empirical, being quantitative when that makes sense, and so forth. I think all of those are pretty good, and a lot of the progress community also reflects and admires those epistemic virtues. That's maybe a lot of why these two communities get along and have a lot of really interesting conversations. So that's one thing. I also appreciate the ambition of the community: yeah, we want to do a lot of good for the world, and we're going to be radical in thinking about how we do that. At the end of the day, though, I'm not an altruist; I'm an individualist. I like the idea of doing good for the world, but I don't feel that it's necessary; I feel it's more of a personal choice than a moral imperative. I'm sometimes not entirely clear where even the EA community falls on that. But I think, at the end of the day, there is an interesting difference in moral framework or approach there. And it's interesting to me that even very different moral approaches can find sympathy, or at least apparent sympathy, in a lot of the same goals and programmes, like the pursuit of progress. For the listeners, I'll just say that the reason we're in the same room today is that we're getting together for a workshop on this, where we're going to talk about the different possible moral foundations for progress studies and the pursuit of progress. I'm looking forward to those discussions.
I think we're gonna, you know, come up with some interesting ideas.
I do want to ask, and obviously feel free to pass: is there anything you would like to see the EA community do differently, or things you think it gets wrong? I'd be interested to hear any hot takes. And I'll flag as well that I think a big part of EA culture is self-criticism.
Not at all. It's interesting, because I think I'm actually seeing the EA community evolve a bit. There was a time when there seemed to be a whole lot of focus on: our spreadsheet says that bed nets for malaria are the most important thing, and so this is the most important thing in the world now. It's certainly evolved beyond just that. There was a really interesting Twitter debate that a bunch of people from the Progress community and the EA community got involved in, around whether you should direct your attention and resources, and maybe your career, to something you can justify through a spreadsheet, versus something where you have more of a personal vision, or motivation, or passion. And I think the EA community was coming down a little more on the spreadsheet side, of: you should have this justification for why you're going to end up doing a bunch of good. The Progress community was saying: look at the things that have done great good in the world. Often, you trace them back to some scientist who was just really interested in microbes, before anybody even knew they cause disease, or to somebody who was just tinkering. Actually, even after we knew microbes cause disease: take Howard Florey, who ran the lab that developed penicillin as a drug, maybe the biggest medical breakthrough of the 20th century, certainly one of them. There's some quote from him that was like: we didn't do this to help humanity, we just thought it was an interesting scientific and technical challenge. And there was some similar thing from the Wright brothers: economic advantages? Nah, we just wanted to show that it was possible for people to fly.
I feel like I'm hearing a false dichotomy here. So I want to say that, roughly speaking, there's a longtermist camp of EA where it's honestly close to impossible to model things out with the granularity of an Excel spreadsheet, for obvious reasons: you're talking about these kinds of wild developments that we haven't seen in human history yet. So you're just forced to make bets and take guesses and do things which no one's done before. We don't have evidence that only following the extremely well-evidenced interventions is the best thing to do.
I think the point, though, is: is it motivated by altruism?
That seems like the difference, yes. And maybe there's actually no disagreement here. But if I really just wanted to do the best thing from an impartial perspective, to make the world go best, it would be surprising if it turned out the best you can do is just try to do cool stuff and hope some of it turns out good, you know, try to invent useful things or something. It seems like you could do an awful lot more good if you spent some time thinking systematically, not at the level of spreadsheets, but at the level of prioritisation: what looks important in the world right now? And once I've done that, then I can take a more creative, less constrained approach.
But again, maybe if you applied that retroactively, you would have cut off an enormous number of amazing discoveries and inventions in the past, right? The problem is that this stuff is just so hard to foresee. You can't foresee all of the paths and connections, and so what actually turns out to be really valuable is often very hard to explain and very illegible in the beginning. If you constrain yourself to things where you have a good rationale for why this is the most important thing to work on, you're just going to miss stuff. So look, here's one thing I would say: if you have no idea what to work on, and there's nothing driving you or pulling you, and you have no fire, then sure, go approach the problem rationally, make your spreadsheet, make your list or something. But if you have some internal driving passion, something you're obsessed with and so curious about that you can't stop thinking about it, you should probably just go pursue that, whatever it is.
I also don't want to frame this as (and I don't think you were making this case) EAs just caring about the spreadsheet or whatever. Personal fit is a huge question when it comes to careers and such, and certainly for myself it matters a tonne; there are some things I can get more excited about than others. But I do think there's an important point that often you don't know what you're going to get excited about, or it's a lot more malleable than you think. If it turns out that you just really like doing operations, or this or that type of work, then you could choose to do that work for a cause that's more altruistic, or higher up on the spreadsheet or something. I don't think you'd disagree with any of that; it's just something to think about.
There's a lot going on in my mind right now. One thing to say is: if someone is as idiosyncratic and passionate and smart as the great innovators of the last few centuries, then it would be incredibly dumb to try to divert them away from the thing they're passionate about, the thing that seems pretty exciting to them even though they find it hard to articulate in spreadsheet terms, and push them into doing something much more legibly good. Like, take young Elon Musk and tell him to go work on Wall Street and donate his money: that's just a terrible move. So I expect part of what's going on is actually more agreement than you might think. And maybe it's worth thinking that some people have these characteristics, they're extraordinarily driven and creative, but their specific passion is somewhat malleable. If you can nudge them to care about really big problems in the world, then in expectation they stand to do a bunch more good. Maybe there are crazy things which could have been invented in the past but weren't, because those people had other passions. A second thing to mention is that there's presumably some amount of silent evidence: the hundreds of smart, idiosyncratic people who ended up barking up the wrong tree. Though I think that's less important, because it's fine to have a bunch of losses if you get enormous hits.
Yeah, so one way to resolve this might be: I do think it's very important, if you want to do great work, to spend a lot of time thinking about what the big problems in the world are, and to have a developed worldview, to expose yourself to lots of things, to read about what other people think are the most important things to work on, and so forth. All of that can ultimately feed into your subconscious, and might actually filter through into some intuition for what you want to be doing right now. That intuition might not even be legible to you, let alone to the outside world. So I agree: you shouldn't just ignore what we know, or can say, about the important things to work on in the world. Richard Hamming was the one who would exhort researchers to actually work on important problems, and I think there's something really true about that. But I would caution against making it a requirement that you can explain or justify to anybody else why this is important. If you have a feeling that it's important, maybe you should go and work on it, even if you don't feel your community would agree. And definitely you want to be careful about traps, like prestige traps, where you end up thinking: okay, this is the thing that my whole community says a good person would work on, so I'm going to go work on that. That is a way to really lose touch with your authentic motivation.
Totally agree. Here's what I would agree with: it doesn't need to be a requirement that you can perfectly and legibly articulate to your immediate peers, the people you're surrounded with, why what you're working on is important. But I do think it's important to be able to explain it to yourself, and to be honest with yourself about whether or not this thing you're working on could actually be important. Because if you don't try to do that, and it's easy not to, it's pretty easy to delude yourself. And you can be totally brilliant: think of all the utterly brilliant mathematicians who got hung up on one fairly inconsequential problem. They made extraordinary progress on a thing that doesn't really matter. It seems like they could have done more.
Yeah. Hard to tell. I think another way to look at this might be to say: especially if you're in science or maths, sometimes you might have an intuition for why the problem you're working on is important to your field, without any clue whether, or how, it will eventually be useful to the economy or to some humanitarian cause. That's the sort of thing where you have to just say: look, I can't connect all the dots, but knowledge is important, and science in general has been one of the greatest things for humanity. If I can advance science, then that is justification enough.
Yeah, maybe to draw this out, there are two points I think you're making. One is that you don't need to be inherently motivated by altruism in order to do good, and maybe there are actually better ways to do good in real life than explicitly trying to do good. And then there's a second point: it's really, really hard to know where things will lead, and maybe even reflecting on it explicitly doesn't really get you anywhere; maybe it actually runs counter to it, or is just a bit of a distraction. Does that resonate?
All right, let's wrap up. So the first question we ask everyone is: what three or more books, films, articles, whatever, would you recommend to anyone listening to this who's curious about finding out more about what we've talked about?
Yeah, sure. Let's see. So, a few of my top recommendations for progress reading. One is Enlightenment Now by Steven Pinker, which might be the best single introduction to progress as such. Another I'll recommend is The Beginning of Infinity by David Deutsch, a deeply philosophical book that really made me think, and rethink, a number of things. He explicitly talks about progress, its meaning, and where it comes from. And then another one I'll throw out is Where Is My Flying Car? by J. Storrs Hall. This just came out in a new Stripe Press edition, with an audiobook as well. It is a number of things, but in part it is a work of futurism: he talks about the extraordinary potential of things like nuclear technology and nanotech, and it got me excited about a number of things. Okay, so I already had three, but if I could throw in one bonus, I'll name The Wizard and the Prophet by Charles Mann, the same guy who wrote 1491. It is a really interesting study of the contrast between the techno-optimist and enviro-pessimist worldviews, in a way that understands both sides and tries hard to be fair to both of them. It paints the contrast between the two very clearly through a number of case studies, and also through biographies of two really interesting figures, one of whom is Norman Borlaug. The book, in my opinion, is worth it for the two chapters on Borlaug alone.
Yeah, the other question I want to ask to close off is: are there any specific questions you would like to see more good work on? The more specific, the better.
These can be totally self-interested as well. Are there any little things you'd just like to see answered?
Yeah, okay, here's one random, specific one that came to mind recently. I was reading David McCullough's history of the Brooklyn Bridge, and he mentioned that in the US, I think, around the time the bridge was getting built in the late 1800s, there were something like 40 bridge collapses a year, or some number that just seemed high to me. That's almost one a week. So I'm really curious about this: why were the bridges collapsing? Why did we not know how to build bridges? And what ultimately solved that? I'm assuming it got solved, because that sounds like a lot of bridge collapses, right?
And this was in the US?
In the US, right. So I'm curious about that one. And it fits into the safety theme, right? There's a lot of safety that we take for granted, to the point that we forget: oh yeah, bridges just used to collapse, and buildings used to fall down or catch on fire.
Fantastic. And then the last question is, where can people find you and what you're working on online?
Well, awesome. Jason Crawford, thank you so much.
Yeah, thank you guys. It's a great conversation.
That was Jason Crawford on the Roots of Progress. As always, if you want to learn more, you can read the write-up at hearthisidea.com/episodes/crawford. There, you'll find links to all the papers and books referenced throughout our interview, plus a whole lot more. And if you know of any other cool resources on these topics that others might find useful, please send them to us at email@example.com. Likewise, if you have any constructive feedback, do feel free to email us, or use the anonymous form under each episode on the website. And lastly, if you want to support us and help us pay for hosting these episodes online, you can leave a tip by following the link in the description. A big thanks, as always, to our producer Jason for editing these episodes, and to Claudia for writing the transcripts. And thank you very much for listening.
← See more episodes