Episode 79 • 14 September 2024

Tamay Besiroglu on Explosive Growth from AI

Tamay Besiroglu is a researcher working at the intersection of economics and computing, focusing on AI automation. He is currently the Associate Director of Epoch AI, a research institute investigating key trends and questions that will shape the trajectory and governance of AI.

(Image) Tamay Besiroglu

In this episode we talk about Tamay’s recent report, co-authored with Ege Erdil: ‘Explosive growth from AI automation: A review of the arguments’.

Transcript


Intro

Fin

Hey, this is Fin, and in this episode I spoke with Tamay Besiroglu. Tamay is a researcher working at the intersection of economics and computing, focusing on AI automation. He is currently the Associate Director of Epoch AI.

In this episode we spoke about the possibility of explosive growth from AI; that is, AI causing a sustained period of economic growth at something like 10x or more the growth rates we’re familiar with in frontier economies.

So you’ve probably seen those graphs of economic output over the last thousand years or so, where there’s this big hockey stick that takes off around the time of the Industrial Revolution. And that’s just showing how, you know, for a long time, economic output took centuries to double, but more recently it’s been doubling every 20, 25 years or so.

And of course that one line on a graph is so important because it tells a story about really big transformations in the real world, right? Like billions more people now alive, new social institutions, and of course new technologies. Now, at least some people think that there is a decent chance that AI somehow accelerates growth by roughly as much as growth changed before and after the Industrial Revolution. And that would mean the world economy doubling every five years or less, and all of the across-the-board changes that that would involve, which would obviously be huge if true. And that’s the question of my conversation with Tamay. Should we expect this kind of widespread automation and explosive growth from AI this century?

So we talked about how AI could start this kind of feedback loop of technological progress, maybe through a very fast-growing workforce of digital workers. How the models which best explain the Industrial Revolution also seem to predict explosive growth from AI. How explosive growth might happen even if AI doesn’t accelerate tech progress at all. And also arguments against explosive growth, like the possibility of binding regulations that slow down AI enough, and also the best historical analogies there. Blocking factors like the availability of land, energy, materials. Maybe there are certain crucial human jobs which are just very hard to automate. The relevance of ideas getting harder to find in general. The possibility of another AI winter. People just straight up insisting that humans keep doing certain crucial jobs, like teachers or judges. And just the outside view argument that this all sounds kind of crazy, given that nothing like it has happened before. Okay, without further ado, here’s Tamay Besiroglu. Tamay, thanks for joining me.

Tamay

Glad to be here.

What is explosive growth?

Fin

We’re going to be talking about the prospect of explosive growth from AI, and a very obvious question to begin with is just what do you mean by “explosive growth”?

Tamay

Yeah, so by explosive growth, I mean a kind of drastic acceleration of growth in economic output. So, you know, gross world product or gross domestic product in specific economies. Specifically, I think explosive growth is something that’s really much higher than what we’ve observed historically in frontier economies, maybe roughly a 10x increase. Commonly, this is defined as 10x the growth rates of frontier economies, which is roughly the size of the acceleration we saw relative to the farming era: the Industrial Revolution brought about maybe a 10x or slightly greater increase in the rate of output growth.

Fin

Yeah, I guess one way that I like to think of this is, you know, we’ve all seen these kinds of hockey stick graphs of gross world product over time, or at least estimated output from the pre-Industrial Revolution era to today. And you see this huge inflection around the time of the Industrial Revolution, where growth went from something under 1%, and maybe a bit above 0%, to roughly 3-ish percent or something. And you can imagine flattening down the part of the hockey stick we’re on now, and it would be another hockey stick like that. It would be another inflection in growth rates.

Tamay

Yeah, that’s right. And I think it’s kind of unclear. It’s actually an interesting kind of open question about whether this is an inflection per se, whether this is a different growth mode or whether this is just a continuation of this historic pattern of acceleration that we’ve seen for as far as we have data to be able to tell. Now, there has been this recent stabilization of growth rates, which I’m sure we’ll talk about.

Endogenous growth models and increasing returns

Fin

Okay, cool. So in this report, you present some different arguments for expecting explosive growth from AI of the kind you just described. And the first argument has to do with so-called increasing returns to scale from the economy, from the world economy. And that argument borrows from a model of growth, which really foregrounds technology and research and development to kind of explain and retrodict the kind of growth patterns we’ve already observed. So can you just tell me a bit about what that model is and what are the intuitions behind it?

Tamay

So in these growth models, and the ones that you’re referring to here are these R&D-based growth models, which came after the exogenous growth models of the 50s, where ideas were treated as falling out of the sky and people didn’t really think carefully about where they came from. The R&D-based growth models that followed foreground the technology improvement process, this R&D. And these growth models have basically three key inputs. One is capital, which is machines, buildings, plows, what have you. Then you have labor. And the final thing is ideas, which is the economist’s term for things that multiply with your inputs to make you more efficient at producing valuable outputs. A key property of ideas is that they are non-rival, which contrasts with usual economic goods. If I want to give one person access to a computer, I need to buy one computer. If I want 100 people to use computers, I need to buy 100 computers. Whereas ideas, like the chain rule in calculus or Maxwell’s equations, are non-rival: any number of people can use them without depleting them for others. So those are the three fundamental inputs into these models. And I think these models explain the pattern of historical growth rates. They do this crucially by having this property of increasing returns. Increasing returns just means that if you double all the key inputs in your economy, your output more than doubles. The way this happens is that if you have an economy, you know that you can roughly double the output by just duplicating all the existing processes that you have. So, you know, say OpenAI produces some number of AI tokens per day; they could just double the GPUs they have and produce twice as much. But you would also double your labor, and you would double the people dedicated to doing R&D. And those people could improve efficiency or have ideas that improve how much bang for buck you get from each of the labor and capital inputs, and that is the reason you get these increasing returns to scale. So doubling the key labor and capital inputs gives you roughly a doubling of output, plus you get this additional oomph from having slightly more efficient processes that give you more output per unit input. And that’s how you get a greater than doubling of output for every doubling of inputs, which is increasing returns. And I can talk about how this explains the acceleration.
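[To make the replication argument concrete, here is a minimal worked example in a standard Cobb-Douglas setting. The notation is illustrative, not taken from the report.]

$$Y = A \cdot K^{\alpha} L^{1-\alpha}$$

Doubling only the rival inputs $K$ and $L$ doubles output, since the exponents on $K$ and $L$ sum to one (this is the replication step). But if the idea stock $A$ also doubles, then

$$(2A)(2K)^{\alpha}(2L)^{1-\alpha} = 2A \cdot 2\, K^{\alpha}L^{1-\alpha} = 4Y,$$

so output more than doubles: there are increasing returns in $(A, K, L)$ jointly.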

Fin

Okay, yeah. So I will ask about that. And I guess I mentioned this kind of hockey stick graph of world economic outputs, where around the time of the Industrial Revolution, there seems to have been this big acceleration in growth. And you’re suggesting that these R&D-based growth models have something to say about why growth accelerated then and also maybe why growth has not continued to accelerate more recently. So how do they explain that?

Tamay

So, you know, earlier I talked about this property of increasing returns in these growth models, which just means that if you double your key inputs, you get a greater than doubling of outputs. In our history, we have lived in this kind of Malthusian condition where having more output enabled you to have larger populations. So in effect, you were able to reinvest your output into sustaining larger populations. And so if you doubled your key inputs, you were able to produce a more than doubling of output of, say, crops, which enabled you to scale up both the number of people and the number of plows. Those people, again, have ideas that improve the efficiency of that production. And so in a world in which we reinvested our output into sustaining larger populations, you get these increasing returns to scale, and so you get an accelerating growth pattern. And that is basically what you see in the data. The data isn’t very good, but we can tell that there has been this acceleration from these hockey stick graphs and so on. And so I think these R&D-based growth models that have these increasing returns are a compelling explanation for historical growth patterns. Now, on top of that, they also predict that in a world in which we, for whatever reason, no longer reinvest our output into sustaining ever larger populations, output would no longer accelerate. In fact, they predict that output growth should be on the order of the growth rate of the population. And that’s basically what we see today, where growth in output in frontier economies is about 3%, which is on the order of the growth rate of our populations. And so I think that’s another feather in the cap of these R&D-based growth models: not only do they predict this historical acceleration, they also predict that once the demographic transition arrives and we no longer pour output into producing ever more crops to feed ever larger populations, we get a stable rate of growth that is on the order of the growth rate of our population.
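[To see this feedback loop in action, here is a minimal simulation of a Malthusian semi-endogenous growth model. All parameter values and the functional forms are illustrative assumptions, not estimates from the report or the literature.]

```python
import numpy as np

# Toy simulation of the feedback loop described above: output sustains
# more people, more people have more ideas, more ideas raise output.
A, L = 1.0, 1.0                  # technology ("ideas") and population
beta = 0.6                       # diminishing returns to labor (fixed land)
phi, lam = 0.5, 1.0              # idea production: dA = delta * A**phi * L**lam
delta, s = 0.01, 1.0             # R&D productivity; subsistence income

Y_path = []
for t in range(200):
    Y = A * L**beta              # output
    if not np.isfinite(Y) or Y > 1e12:
        break                    # increasing returns approach a finite-time blow-up
    Y_path.append(Y)
    y = Y / L                    # income per person
    L *= 1 + 0.1 * (y - s) / s   # Malthusian step: surplus income -> more people
    A += delta * A**phi * L**lam # more people -> more ideas

g = np.diff(np.log(Y_path))
print(f"growth per step, early: {g[:3].round(4)}, late: {g[-3:].round(4)}")
# The growth rate itself rises over time: accelerating, super-exponential
# growth, for as long as output is reinvested in population.
```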

Fin

And just to be clear, so this demographic transition you mentioned is just this transition from a regime where more output translates into growing population, to a regime where there are just other factors which kind of break that link.

Tamay

Exactly. So we no longer reinvest our output into just growing our population in lockstep. We instead are growing our GDP per capita. We’re growing our output faster than we’re growing our population.

Fin

You’re getting richer per person. Okay, so… We have these semi-endogenous growth models. They seem to do a fairly nice job of explaining and retrodicting historical rates of economic growth. But we’re talking about the future, and specifically we’re talking about what AI might do to growth. We’ve seen lots of new technologies in the last century or so, and lots of them seem like they had the potential to unleash faster growth. What seems importantly different about AI?

Tamay

Yeah. I think there are two things that are importantly different about AI. One is that it has the potential to reinstate this feedback loop of more output to more people, including digital people or AIs, to more ideas, to more output. Once we have sufficiently advanced AI, we can build systems that are flexible substitutes for human labor in a broad range of tasks. And once we have that, then we can reinvest our output. We can just buy more compute, run more AI systems, which enables us to grow our economy. And that expanded economic capacity can then be reinvested. In an increasing returns world, this would result in accelerating growth. So that is, I think, the key reason, though there are other intuitions that are useful as well. One intuition is just that labor is the largest factor compensation in our current world. So about 65-70% of global output is spent on compensating labor, and so it’s by far the most important factor input in our current economies. And so it seems pretty natural that once you automate labor, you should have pretty massive effects. Now, that doesn’t tell you exactly how massive those effects are. I think the fact about increasing returns is some indication that you could have just extremely rapid growth effects as a result of this automation.

Returns to innovation in software and beyond

Fin

Cool. So let’s stick on the increasing returns argument first, which is roughly that once it’s possible to just accumulate the kinds of things which generate ideas like AI, that is, you can reinvest output into the idea-generating part of your economy, then you reignite this feedback loop that we saw up to and through the Industrial Revolution. I think one thing I want to ask about is this thought that, well, ideas in general you should expect to get harder to find. A few centuries ago, you could have just been some random gentleman scientist with not much formal training, and you could just hit on really important scientific and engineering ideas just by tinkering around because people hadn’t done that before. Nowadays, it just requires enormous and consistently growing investments to make any meaningful progress on developing some significant new technology. Shouldn’t we expect that to continue to be the case? And isn’t that a bit of a dampener on this kind of runaway feedback loop you’re suggesting?

Tamay

Yeah, so this is indeed potentially a damper, but I think it’s fairly unlikely that it’s a sufficient damper for the entire explosive growth or accelerating growth story to not go through. In order for the story to go through, what needs to be the case is that when you double your key inputs, capital, labor, and so on, you get a greater than doubling in output. There’s this basic argument that we hinted at before, this replication argument: you can just duplicate your existing production processes and get roughly a doubling of output. All that ideas need to give you is an additional increment on top of that, such that you get an overall greater than doubling. And if doubling all your labor and capital gives you roughly a doubling in output already, then you actually don’t need that much. So the condition is actually fairly weak. Now, you might say, well, maybe our world is such that doubling labor and capital, but keeping ideas or technology fixed, gives you slightly less than a doubling in output. And, you know, that seems plausible; there are all sorts of reasons you might get decreasing returns to scale. Now, empirically, there are estimates of the production technology of the overall US economy, and what people find is that, yep, it’s roughly constant returns or maybe slightly worse, such that you only need a very small increment coming from the R&D side to get you over this hump to increasing returns to scale. And we do have estimates of the parameter that basically says how hard it gets to develop new ideas. There’s this famous paper, “Are Ideas Getting Harder to Find?”, which tries to do some estimation for the US economy, and they find an estimate of about 0.3, which adds on to this other parameter of the overall returns to non-idea inputs, so capital and labor, which people estimate to be close to one. And all you need is for the sum of these things to be greater than one. So in order for this to fail, you need much more pessimistic estimates of the rate at which ideas are getting harder to find than this paper finds. And there have been some other efforts to estimate the returns to R&D. What they find is, yep, ideas are getting harder to find, but not sufficiently hard to block this overall argument.

Fin

You mentioned this 0.3 estimate of some parameter. What was that number?

Tamay

Sure. So that’s roughly: if you double the growth in the inputs, what happens to the growth in the kind of innovation or improvements in your technology? This 0.3 just multiplies by that. So if you get a 2x increase in the rate of growth of the inputs, the growth of the scientists that the economy is dedicating to R&D, then you get 0.3 times 2 growth in technology.
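[One way to write down the condition being described, in my own notation, as a stylized summary rather than the report’s exact formulation: if $r \approx 0.3$ are the returns to R&D, so that the growth rate of technology is roughly $g_A \approx r \cdot g_R$ for research input growth $g_R$, and the returns to the non-idea inputs (capital and labor together) are $\alpha \approx 1$, then the increasing-returns condition is]

$$\alpha + r > 1,$$

[which holds comfortably at these point estimates ($1 + 0.3$), and fails only if $\alpha$ or $r$ is far below what the empirical literature finds.]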

Fin

I guess we’re talking very abstractly about the returns to idea generation in general, but in the real world, there are many different sectors where you can choose to spend your innovation points. One thing we might look to is software, right? AI is just software; it’s probably going to be pretty good at generating new software ideas because it doesn’t need to run physical experiments or build things in the physical world, which could take a while. Is there anything we can say about the returns to software R&D in particular?

Tamay

Just one quick point about the estimates that exist and why they’re estimated for total factor productivity specifically, rather than for theoretical physics or some other field, which might be harder: total factor productivity reflects basically the average of the type of R&D that improves economic productivity, and that is precisely the thing you would wish to estimate. Now, in a world in which we have a very digital, AI-driven economy, the composition of the type of R&D that produces improvements in efficiency might look slightly different. In particular, we might have more software-relevant R&D or hardware-relevant R&D that ends up dominating this equation and contributing most to efficiency improvements in the economy. For that reason, it might be worth thinking about what the returns to R&D are in software specifically. If we end up in a world in which the economy is dominated by AI, then the returns to software R&D in particular might matter. It might also matter for some other reasons. One reason is that if we can automate R&D for AI, even when we haven’t yet automated all relevant economic tasks, we might get this feedback loop starting in the software world before it starts in the overall economy. We have this paper where we try to estimate the returns to R&D in software specifically. These returns to R&D tell you roughly: if you double the growth in your inputs, what effect does it have on the rate of improvement of the technology? Do you get a proportional increase in the rate of technological improvements, less than proportional, or greater than proportional? Those are the key questions. We look at a bunch of domains of software that we think are close proxies for AI. Some of these are actually AI; we look at computer vision and reinforcement learning. We also look at computer chess, which is a mix of symbolic systems and neural network-based approaches. What we find is that the returns to R&D are maybe slightly better than they are for the overall economy, though for the overall economy it’s hard to estimate; there’s a lot of uncertainty there. But we find that in a world that is much more dominated by AI, software R&D might end up being much more important, leading to higher returns to R&D overall. As a result, the rate at which things accelerate might be faster than it otherwise would be.

Fin

Can you say a bit more about trying to estimate the returns to software R&D from looking at chess engines? I’m just curious to know the methods there.

Tamay

Yeah, that’s right. What we do is look at what happens to the rate of progress when you have a proportional increase in the inputs, the number of person-hours dedicated to trying to improve these chess engines, say, in terms of saving the amount of compute needed to achieve a certain level of performance. With chess, we looked at Stockfish, which is an open-source chess engine. There, we have really nice high-frequency data about the improvements over time, given in units of Elo, which is a common metric for judging how well chess players perform. The Stockfish community also provides an estimate of how much less time or compute you need to do searches, or to run the engine for more iterations to analyze deeper positions. It tells you how much of that is saved by an innovation, by an improvement in the technology. So we’re able to say, in units of compute savings, how much of an improvement a certain innovation is. We go from a score in terms of Elo, which is kind of hard to think about, to a measure in terms of how much compute is being saved, which is a much nicer metric to think about. We also have data on the inputs to this R&D. In particular, we have data on tests that are submitted. When someone proposes an innovation, they submit a test, and it plays games against the existing engine to figure out whether the innovation actually makes the engine stronger. If it beats the previous engine sufficiently consistently, then this is considered an innovation and is adopted. So we have data on the number of these tests run and the compute savings from the innovations that are adopted. We can then put these two series together to tell us something about whether doubling the amount of research effort gets you a doubling of the rate of improvement, or less than a doubling.
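[Here is a sketch of the estimation idea, run on synthetic data. This is not Epoch’s code or data; the data-generating process and the returns parameter below are assumptions for illustration only.]

```python
import numpy as np

# Relate research effort (tests submitted per period) to the rate of
# log compute savings from adopted patches, then recover the returns
# parameter with a log-log regression.
rng = np.random.default_rng(0)

T = 100
effort = 50 * 1.03 ** np.arange(T)     # synthetic: tests per period, growing 3%/period
true_r = 0.8                           # assumed "returns" parameter (synthetic truth)
# Rate of improvement (log compute savings per period) scales as effort**true_r:
dlog_savings = 1e-4 * effort**true_r * np.exp(rng.normal(0, 0.05, T))

slope, _ = np.polyfit(np.log(effort), np.log(dlog_savings), 1)
print(f"estimated returns to research effort: {slope:.2f}")
# ~0.8, i.e. less than 1: doubling effort less than doubles the rate of
# improvement, matching the qualitative finding discussed next.
```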

Fin

Greater than doubling?

Tamay

We find that it’s slightly less than a doubling in the rate of improvements for every doubling of the growth in the inputs.

Fin

Very cool. I just thought that’s a very neat experiment. To clarify, you can have less than increasing returns in that sense, where you get less than a doubling in compute savings from a doubling in idea generation inputs, the people working on the chess engines, but presumably you can still get increasing returns from the economy overall, given that kind of number, right?

Tamay

That’s right. This is holding constant, say, the amount of compute that you’re using to run this chess engine. But if you’re also doubling that with every doubling of the economy, plus you get this additional boost of whatever it might be of efficiency gains, then you get greater than doubling of output or technological improvements for every doubling of all the key inputs.

Fin

Just backing up, we’re talking about arguments for explosive growth from AI. This first argument involves increasing returns to scale from growing the economy or growing frontier economies. The picture, at least the one I have in mind, is you have this economy; it’s this kind of machine that takes inputs and produces outputs. Inputs, in the growth models we’re discussing, are roughly speaking labor, which is currently people, and capital, like machines and other inputs, which combine with people to get outputs. Finally, ideas—technology, scientific innovations, management innovations, and so on. Currently, you can reinvest your outputs into some of those inputs. Labor, the stock of people doing work, is not easily accumulable with outputs. You can’t just spend to get more people because people decide whether or not to have kids. The suggestion here is that with AI that could do this work of generating ideas, in particular, the stock of labor would be accumulable. You could just invest output to get more labor. That gives you this potentially explosive growth when the returns to scale are big enough. I guess, unless you want to correct me on that.

Tamay

That’s totally right. One important thing to note here is that this argument doesn’t require AIs to be special in any way. All it requires is for these systems to be able to flexibly substitute for humans. This does not require superhuman thinking speeds, superhuman intelligence, superhuman coordination, or any of that. All it requires is that there’s some substitute for human labor that you can basically spend more money to obtain more of. In some sense, this is not an argument specifically about AI. There could be other technologies that enable you to reinvest output to get more workers. People have imagined hypothetical duplicator machines, or you could have whole brain emulations where you simulate a human brain on a computer. Maybe artificial wombs would enable some of this. So this is a fairly general requirement of AI; it just needs to match human performance.

Fin

In a world where AI can flexibly substitute for humans and you do get increasing returns to scale, what is the functional form of what happens to growth over time? Are you saying you get a boost to growth that lasts some period of time, or maybe as long as we’re in this regime? What would happen?

Tamay

In a world where you can substitute for human labor with AI, the rate of growth is itself increasing. How fast it increases depends on the overall returns to scale of the economy, which is shaped by the combination of the returns to the non-idea inputs and this parameter that says how much harder it gets to find new ideas. But the model doesn’t specify where this stops, and there’s a point at which surely it breaks down. It does not give you a very complete picture of what happens very far beyond the point at which we’re able to do this. At some point, I think growth rates that are extremely high are probably not going to be permitted, for a bunch of reasons related to physical constraints: energy constraints, land constraints, and other things.
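[As a stylized reduced form of what “increasing growth” means here, in my notation rather than the report’s: when every input can be accumulated by reinvesting output, these models collapse to something like]

$$\dot{Y} = c\,Y^{1+\theta}, \qquad \theta > 0,$$

[where $\theta$ captures the overall degree of increasing returns. The growth rate $g = \dot{Y}/Y = c\,Y^{\theta}$ then rises with the level of output, and the path actually hits a singularity in finite time, which is exactly why physical constraints like energy and land have to bind well before that point.]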

Explosive growth from widespread automation (without AI R&D)

Fin

Okay, so backing up again, we’re talking about arguments for explosive growth from AI. We were just discussing the argument involving increasing returns to scale from the economy once you introduce AI that can substitute for humans and generate ideas—these non-rival goods, which are the magic ingredient for this increasing returns situation. That’s not the only argument you give for explosive growth. You discuss another argument that doesn’t rely on this kind of R&D dynamic and just relies on growing a stock of digital workers very quickly. So what’s the argument there?

Tamay

One basic intuition for why automating the non-idea labor inputs, everything except R&D, matters so much is that currently, in the world economy, labor is by far the largest factor input by compensation. The global wage bill is about $65-70 trillion. This is the factor we spend most of our money on. So if you can automate the most important factor and make it very cheap, there’s a good intuition for why this should have a pretty large impact on output. It’s unclear whether this is a permanent effect that raises growth rates or just a level effect that happens one time and raises the level of output without accelerating growth very much. But in a world where the key capital inputs are accumulable because you can just invest money, and you can also accumulate labor by investing in compute to run AI systems, the rate of growth gets determined by how much money is invested in building the stock of digital workers. That depends on the amount of money being saved and reinvested in this economy, but also on how far that money goes, on how cheap it is to run digital workers. If it turns out that it’s sufficiently cheap to run these digital workers, the rate of growth could accelerate to, you know, 10x what it is today. In the paper, we try to derive the costs that would be consistent with an acceleration of about 10x relative to current rates.
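[A back-of-the-envelope version of the mechanism. The wage-bill figure is from the conversation; the other numbers are assumptions for illustration.]

```python
# The growth of the digital-worker stock is set by how much output is
# reinvested and how cheap workers are to run.
gross_world_product = 100e12      # USD/year, rough order of magnitude
savings_rate = 0.3                # fraction of output reinvested in compute (assumption)
cost_per_worker = 100_000         # USD/year per human-equivalent AI worker (assumption)

added_workers = gross_world_product * savings_rate / cost_per_worker
print(f"~{added_workers:.1e} human-equivalent workers added per year")
# ~3e8 per year at these numbers, against a human workforce of a few
# billion; and because each cohort raises output, next year's investment
# buys even more workers, which is what compounds growth.
```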

Fin

I just want to have a better picture in my head of what such a world could look like where AI is effectively substituting for people in various jobs. Currently, AI is just like a text box that I chat with, which is not very economically useful. So what might change?

Tamay

What I’m imagining in terms of what these AI systems are like is that they’re much more capable than the current large language models we have. These systems should be able to act as remote digital workers. I think what I’m imagining here also might involve the robotic capabilities that we’re currently very much lacking. I’m certainly not claiming that this is something that would result from a slightly better language model. I’m really imagining AI systems that are much more capable than the systems we have today.

Fin

It feels helpful for me to imagine how many roles in the economy could be performed remotely, where your work is channeled through an internet connection, more or less.

Tamay

There are estimates of this, and I think I’ve seen some estimates of around 30% in the US economy, roughly, which is a fair amount. This is not sufficient, I think, for many of these arguments to really go through. It requires not just automating the current share of remote work; it would involve automating potentially a larger range of tasks.

Fin

Okay, great. You mentioned that you had estimated the relevant parameters for what it takes to get explosive growth just from this channel of being able to cheaply accumulate AI, which can substitute for various kinds of human work. An obvious question is just what are those parameters, and what do they need to be to get the very high levels of growth we’re talking about?

Tamay

It turns out the condition on costs is fairly permissive: these AI workers could be quite pricey. The key uncertainty here is exactly how much compute is required to run these systems. We don’t really have a good idea of how much compute you might need to run a system that’s able to substitute for human labor. One thing you can do is look at the human brain as a reference point for how much computation might be needed. That computation is quite different, and the brain is fairly different from a GPU in that it might have more memory and a different pattern of computation. But some reports estimate roughly how much computation the human brain performs, and it turns out that this is roughly equivalent to the amount of computation that, say, the state-of-the-art data center GPU, the H100, performs. So even at current costs, if we get to the same efficiency as the human brain, we’d be able to run AI workers at costs pretty competitive with human workers. In the paper, we work out exactly what the condition on these costs needs to be for explosive growth to happen along this kind of balanced growth path. It turns out the costs could be quite a bit higher than human wages. Even if it costs on the order of $100,000 per year to run a human-equivalent worker, you could get much accelerated growth. You could get explosive growth in this world, as long as we invest a sufficient fraction of income into building these digital workers.
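[Rough cost-parity arithmetic behind the brain-anchor point. All numbers are order-of-magnitude assumptions, not Epoch estimates.]

```python
# Cost per brain-equivalent-year of compute, under crude assumptions.
brain_flop_s = 1e15        # common order-of-magnitude guess for brain compute
h100_flop_s = 1e15         # dense FP16 throughput of an H100 is on this order
h100_price = 30_000        # USD, rough purchase price
lifetime_years = 5
overhead = 2.0             # power, cooling, datacenter overhead multiplier

usd_per_brain_year = (h100_price / lifetime_years) * overhead * (brain_flop_s / h100_flop_s)
print(f"~${usd_per_brain_year:,.0f} per brain-equivalent-year of compute")
# ~$12,000/year at these numbers: comfortably below the ~$100,000/year
# threshold mentioned, if software reaches brain-like efficiency.
```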

Fin

Yeah, I guess what we’re talking about is how feasible it is that AI might reach roughly cost parity with humans in terms of wages. You make this nice point in the report about training. So there’s this important difference where, suppose you have a human who really knows a lot about how to do software engineering, and you want to train another human. Unfortunately, you can’t just copy their brain. They need to grow up and learn all of this stuff from scratch again. With AI, how it currently works is you do this enormous expensive training run, and then once you have the result, you can copy it very cheaply across instances. And maybe that’s a reason for expecting that once you figure out how to teach AI to reach human parity on various tasks, then it will be cheap from that point to copy those capabilities across many instances. Does that sound right?

Tamay

Yeah, that’s right. So I think there are a bunch of advantages that AI systems have. They could have perfect motivation, they could run 24/7, they never get tired or go on lunch breaks, or what have you. They can be copied, and you can spin them up as you need them. So if there’s a project that’s especially valuable to do in a time-sensitive way, with humans it’s hard to suddenly recruit, to spin up your team and then scale it down when demand is more sluggish. Whereas with AI systems, you’re able to spin up many copies to do something and then spin them down when you have completed the task. So there are a bunch of these advantages, including benefiting from this very large training run that folds in a bunch of information that no human could ever learn over the course of their entire lifetime. Now, I should clarify that the cost we’re referring to here, in this argument about how expensive an AI worker can be while still accelerating growth enough to give rise to explosive growth, is the inference cost. These are not the training costs. The training costs could be much, much higher, and I expect them to be many orders of magnitude higher than the inference costs. As long as you can amortize those training costs over sufficiently many copies, and as long as the cost of running each individual AI worker is, at least on average, sufficiently small, then by just plugging this into a standard growth model and assuming constant technology, so we’re not even improving technology, we just have access to these digital workers that are doing everything except R&D, you still get this kind of explosive growth. I think this shows that you can weaken the increasing returns to scale argument. You can remove the increasing returns property and still get explosive growth.
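[Toy amortization arithmetic; all figures are assumptions for illustration.]

```python
# Even a very expensive training run adds little per-worker cost once
# its weights are shared across many copies.
training_cost = 1e10        # USD for one frontier training run (assumption)
copies = 1_000_000          # concurrent worker instances running the model
amortization_years = 3      # period over which the run is written off

per_worker_per_year = training_cost / copies / amortization_years
print(f"training adds ~${per_worker_per_year:,.0f} per worker-year")   # ~$3,333
# Negligible next to a ~$100,000/year inference budget, which is why the
# inference cost is the binding quantity in the argument.
```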

Fin

Let’s imagine that tomorrow Google or whatever announces that you can now purchase a humanoid Google robot, which is at parity with humans at more or less any task you throw at it. And they just have an unlimited stock of these rolling off the production line. You can rent one out for something like $200,000 a year or something. I can totally believe that that would lead to explosive economic growth at the frontier anyway. Sure, you need to assume a certain rate of investment, but I can really imagine that it’s worth investing in piling these things up.

Tamay

Yeah, there would be these adjustment costs; you’d have to readjust economic processes, build the facilities to produce these, scale up, and continuously accelerate production to be able to sustain this. But yeah, I agree that this is quite plausible. Now, it is less plausible if you assume that we’re operating at a constant level of technology, but I don’t think that’s what would actually happen. I think the stock of technology would also improve in lockstep. The argument just drops that assumption and shows that you still get explosive growth.

Fin

But I guess I’m just imagining that that is not the likely way things pan out. In fact, we should expect a more incremental replacement of economically useful tasks. Currently, if you just write marketing copy or something, then it is now possible to cheaply substitute that work with AI. You also mentioned that automating all the work that could be done remotely, but nothing else, is probably not enough to get explosive growth. So, you know, how far do things need to go? And what does that kind of more smooth and incremental rollout look like?

Tamay

For this more smooth and incremental rollout to produce explosive growth, you need the automation to happen relatively quickly: on the order of a few decades at most for the process of automation to give rise to explosive growth. So there is a condition on this argument; it requires this to happen fairly quickly. If it takes centuries for this automation to happen, then the argument is much weaker. Now, there are independent reasons to suppose that this process of automation takes less than, you know, three decades or something. There are estimates of how much computation you might need to reproduce the input-output behavior of a human brain, to produce AI systems capable of flexibly substituting for human labor. And there are estimates of how much compute you might need to train AI systems that start this process of incrementally automating more and more tasks. The gap between the amount of computation at which we start automating a large fraction of tasks and the amount you need for basically full automation is, on some estimates, sufficiently tight that it’s hard to get a very long, drawn-out process of automation.

Aren’t some jobs near-impossible to automate?

Fin

So that’s an argument that has to do with, in some sense, just how much brain power different tasks require. Presumably there are some kinds of work where that’s not the relevant bottleneck. For instance, some people just prefer an actual human to be looking after their kids or to be acting in a film they’re watching or whatever. And if that’s the case, then there are going to be some tasks which are very difficult to automate. Like, it’s hard to automate Hollywood actors, for instance. Is it a requirement here that all tasks are eventually automated to sustain this kind of growth?

Tamay

Not really a requirement. I think what is needed is for a sufficiently large fraction of tasks to be automatable. There’s some substitutability between AI-produced output and human-produced output, and there might be a preference for human-produced output. Now, if human-produced output is extremely expensive and AI-produced output is much cheaper, there might be some substituting. There might be some kind of demand for AI-generated output, even though at constant prices they would prefer the human output. You might set this up in a kind of a toy model and think about what plausible rates of substitution might be and what fraction of tasks need to be automated for you to get a kind of increase in output by, you know, two, three, or four orders of magnitude. It turns out that if you are able to automate, you know, maybe 80, 90% of tasks in the economy, and with some kind of plausible values of how humans might be willing to substitute between human-produced and AI-produced output, you still get kind of very large increases in output. This is sufficient to give rise to extremely high levels of growth, as long as this happens within a sufficiently compressed period, so within multiple decades. So that’s kind of one response to that, which is to say, even if you don’t automate everything, you still get these very large level effects. And those level effects, if they happen within a sufficiently compressed period of time, produce very high growth effects and give rise to explosive growth. And I guess the other response to that is to say, you know, once we have automated on the order of 80, 90% of tasks, you know, why will we stop there? What is the reason that we expect that AI systems just won’t be able to substitute for humans for those remaining tasks, especially given that there is this incredible incentive to try to automate those incremental tasks?
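[To see how much the substitutability assumption drives the answer, here is a toy CES calculation. The parameter choices are mine for illustration; the report works with estimated substitution elasticities.]

```python
import numpy as np

# Output aggregates 100 equally weighted tasks; we automate 90% of them
# and scale their inputs up 10,000x, holding the human-only tasks fixed.
n, automated_share, boost = 100, 0.90, 1e4
x = np.ones(n)
x[: int(automated_share * n)] = boost

for sigma in (0.5, 2.0):          # elasticity of substitution between tasks
    rho = 1 - 1 / sigma
    Y = np.mean(x**rho) ** (1 / rho)
    print(f"sigma={sigma}: output rises ~{Y:,.0f}x")
# sigma=0.5 (tasks are strong complements): gains are capped near
# 1/(1 - 0.9) = 10x, the bottleneck story. sigma=2 (substitutes): gains
# of ~8,000x, i.e. several orders of magnitude, the level-effect story.
```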

Fin

Yeah, that makes sense. I guess, tell me if this is wrong, but one thing I have kind of in my head as a picture is you have some sectors where maybe there are these kind of preferences for… There’s like brute preferences for just humans doing the work. But it’s not the case that because there is some work where people prefer humans to do it, that they will in some sense bottleneck growth. You kind of need to say more about why that would happen.

Tamay

Yeah. So I think you need to tell the story of why there’s this large share of tasks that is forever out of reach for AI systems. I think part of why this intuition is compelling to people is that they consider a kind of large language model and think, well, surely I don’t want my large language model to be my representative in government or my therapist or what have you. I think that intuition is mistaken, because at the point at which we have automated a very large fraction of tasks, we will have systems that are just extremely compelling substitutes for human labor. You’d really need people to have extremely strong preferences, which seem to me somewhat implausible, to block the levels of automation that give rise to fast growth.

Fin

Yeah, I guess maybe another way of saying that is something like: in this case, where a large fraction of tasks have been automated, upwards of 80%, say, then in terms of your buying power for all the goods made by automatable processes, all these remaining holdout forms of work, like daycare or teachers, judges, politicians, actors, whatever, are going to become vastly more expensive in relative terms. You could just buy so much more of everything else. There will also be a huge amount of interest, in terms of willingness to spend on finding ways to automate them, because the gains from automating them will be so big. Also, in terms of finding substitutes: if you’re a consumer and you’re wondering whether to spend on the human daycare or the robot daycare, well, this is a world where the robot daycare would just be vastly less expensive. That itself is a reason that might break your brute preference for a human.

Tamay

And potentially much better than humans too. There are advantages that AI systems could have. Carl Shulman had this nice story about how AI nannies could in fact be superior to human nannies in a bunch of ways: they would have the ability to administer emergency care, the ability to speak any language that you might desire your nanny to speak, and so on.

Automation and human incomes

Fin

So we’re talking about this way you could get explosive growth from AI, but it doesn’t involve AI automating R&D. In my head, I’m kind of picturing a world where you just have this growing almost population of digital people doing work, which human people used to do. That raises the question for me of what happens to incomes if these were just humans that were being accumulated. If this were just human population growth, then it’s a bit unclear to me whether the incomes of the original people would go up. That’s a kind of case where in some sense the economy is growing out, it’s becoming bigger in scope, but it’s not growing up in the sense that average incomes are increasing and even average quality of life is increasing. I guess what can we say about incomes in this world?

Tamay

One point to make here is that even though, in the very long run, you might expect wages to fall to the cost of sustaining AI workers, which might be below subsistence levels for humans, that’s the limit in which we’ve scaled up so far that we’re starting to see decreasing returns to scaling up further, where the marginal product of an additional AI worker might be quite low. At that point, even though the wages paid to humans might be very small, you could still have very large incomes earned just from rents on capital. So as long as humans own sufficient amounts of capital, this could produce very large incomes. This could give you some modest or even large fraction of global output being paid out to the owners of the relevant capital: the compute-based capital, lithography machines, data centers, fabs, and so on. As long as humans own some of that, the capital incomes could be really quite large, even while labor income, especially in the limit, might be really quite small. There might also be an intermediate period where labor incomes stay very high, even when AI has automated a very large fraction of tasks, just because the marginal product of labor could be very high in a world with much more advanced technology, a much larger capital base, and maybe more capital per worker. So I think it is a complicated question what happens to incomes. A lot depends on what wages do over the course of this automation period, how long the marginal products of workers stay high, and what capital is owned by whom. And there are distributional questions here: wealthier people tend to get much larger fractions of their incomes from capital, so we could see inequality increase as most of one’s income comes to derive from capital income.

Fin

These outcomes are kind of descriptive, right? You might very reasonably think that some of them would be very bad, for instance, very sharply increased inequality because most income comes from holding capital. There are also things you could maybe do about this, right? You can decide to break the model’s assumptions if you think that’d be good, but it’s very worth knowing what the models tell you as long as the assumptions hold.

Tamay

Yeah, that’s right.

Fin

Cool. So yeah, maybe, um, backing up again: we’ve talked about two broad dynamics now by which you might get explosive growth from AI. One relies on increasing returns to scale as long as AI can do R&D, and the second has to do with AI just substituting for other kinds of human work cheaply enough. You also just mentioned something which feels separate to me, which is the thought that when you introduce new kinds of AI and new capabilities, you get a kind of one-off level effect that could be very big, rather than a growth effect that lasts a long time and is more like switching into a new growth regime. So yeah, what’s the thought there?

Tamay

Yeah, so this distinction between growth effects and level effects is important in economics. There’s this famous paper, one of the most cited in the field, that makes this point and complains that people in policy often conflate level and growth effects. Economists and policymakers should care about growth effects much more than level effects, because they change the trajectory. However, there are level effects from AI automation that are sufficiently large that, if they occur over a compressed period, they would give rise to extreme growth effects. If you increase your output by three or four orders of magnitude over a couple of decades, that results in double-digit growth rates. The argument here is that if you automate 80 or 90% of tasks in the economy, once we have sufficiently capable AI to do so, then the level effects are many orders of magnitude. You can think of this as aggregating economic inputs in a way that allows for bottlenecks, which is standardly done in growth models: we can’t just expand one dimension, putting much more input into some tasks while inputs to other tasks grow more slowly, and still get very large effects on output. But even when accounting for these bottlenecks, and just expanding inputs on the tasks that you’ve automated, using standard estimates for the degree of substitutability, you still get extremely large level effects as you automate a large share of the tasks done in the economy. If you automate something like 80 or 90% of tasks, you could get a two or three orders of magnitude increase in output. If this happens over a sufficiently compressed period, over the course of two or three decades, as one might expect, then this gives rise to explosive growth.
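[The arithmetic behind “a large level effect over a compressed period is a growth effect while it lasts”, using the figures from the conversation:]

```python
# Convert a level effect into the growth rate implied while it plays out.
level_gain = 1_000     # a three-orders-of-magnitude increase in output
years = 25             # compressed into a couple of decades

implied_growth = level_gain ** (1 / years) - 1
print(f"implied growth while the transition lasts: ~{implied_growth:.0%}/year")
# ~32%/year: double-digit growth, an order of magnitude above the ~3%
# typical of frontier economies today.
```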

Fin

So this would be a case where introducing new kinds of AI doesn’t switch the world economy onto a new track where it’s growing much faster indefinitely until it reaches long-run limits to growth. Rather, you are just boosting output a huge amount. But of course, if you boost output enough over a short enough period of time, that has to mean that growth rates are very high in the meantime. AI could be such a big deal just in this sense of boosting output that it could give rise to at least temporary explosive growth.

Tamay

In some sense, this is kind of a worst-case argument that says: okay, suppose we can’t ultimately automate R&D, and we’re stuck with some tasks that humans will, for some reason, forever do. You still have the potential for these very fast growth rates.

Fin

Can you help me picture the difference between growth and level effects? Can you give an example of some new technology, for instance, which would cause a level effect but not a growth effect?

Tamay

One shock that would cause a level effect but not a growth effect would be if we imagine a hypothetical world in which we suddenly doubled the world’s population but didn’t change the growth rate of our population or the growth rate of our technology. In that world, you would have this large level effect that would raise total world output but wouldn’t put us on a trajectory to see faster growth in the future, assuming our population growth stayed constant.

Fin

So you get this sudden spike in output, and then it’s going to level off because you haven’t changed these fundamental facts about how outputs translate into population growth and other inputs. Okay, cool. So we have now discussed three rough arguments for expecting explosive growth from AI. As a reminder, the first has to do with AI being able to automate research and development, which gives you increasing returns to scale from the world economy. It’s like potentially hyperbolic growth, increasing growth rates over time. The second argument had to do with automating all the other tasks. If that’s cheap enough, then you still get increased growth rates because you can reinvest outputs in labor, which doesn’t happen easily now. Then there’s this third thought, which is that even if you don’t get what’s called a growth effect, even if you don’t get this switch onto an indefinitely lasting new growth regime, you might still get these transitory effects, which are nonetheless big enough to translate into explosive growth. I’ve got to ask, which of these three arguments really moves you the most when you think about what is likely to happen?

Tamay

I think the argument that moves me the most is the first argument, the increasing returns to scale argument, which supposes that AI can flexibly substitute for humans in the economy, including in idea generation or R&D. I think that argument, conditional on AI being able to do that, is really quite strong because the R&D-based growth models have a good track record in explaining historical growth acceleration. Not only that, they also predict the rates of growth that we should observe today. I think we don’t have very good competing explanations for this type of pattern. There are some institutional-based ideas that suggest institutions matter a lot, and I think that’s right, but I don’t see them as having nearly as good explanatory power for the historical growth pattern. For this argument to fail, you need to assume either very strongly decreasing returns to physical inputs, labor, and capital, or very strongly diminishing returns to R&D, which are not consistent with the parameter values we typically see in the empirical literature. You really need to go against all the empirical estimates to say that this argument is not valid.

Fin

When you look at historical growth rates, you picture this hockey stick graph that we’ve talked about. There’s some debate over whether economic history is a series of distinct growth modes or just a noisy realization of a smooth increase in growth rates. But if you squint enough, you see a history of increasing growth rates, right? We also have models that explain why that is, and why that trend has maybe fallen off a little more recently. In some sense, explosive growth from AI would represent a continuation of, rather than a break from, that historical trend.

Tamay

I think that’s right. This would not be a break from the historical norm; rather, it’s a continuation. The recent constant growth rate is itself the break from the historical norm. And acceleration is the norm not just in human history but potentially in biological history: initially it took much longer to double total organic matter on the planet, and this seems to have accelerated, as has the amount of computation that happens in brains across all species, which has likely also accelerated rather than grown at a constant exponential rate. So this is a very natural growth pattern, similar to what one observes in human economic history and potentially in the history of life. One should assign pretty high credence to this being plausible, and in light of strong arguments, it’s worth taking seriously, if not becoming convinced, that this is the likely outcome for growth over the next century.

Fin

I find that kind of zooming out macro history, the history of biological life, quite interesting. Here’s a wildly speculative and probably wrong analogy: all life is doing something a bit like innovation, generating mutations and selecting on them. It’s like speciating and testing out what works. If you have more life, you have more mutations, which is analogous to having more people generating more ideas. That’s the loop that explains why growth rates would increase as long as population increases with output. It’s a neat analogy. We’ve been talking about explosive growth from AI. You’ve done a very good job representing the arguments for expecting it precisely and dispassionately. It’s worth saying that talking about something like this is kind of crazy—it’s a very different world from what any of us are used to. It would involve faster growth rates than the world has ever seen in terms of world economic growth and a bunch of technologies that don’t exist, like AI that can replace human work. A pretty reasonable reaction is that this just seems crazy. We’ve talked about why you might think it’s really a continuation of trends of increasing growth rates. However, we should work through the more specific arguments against the case you’ve laid out for expecting explosive growth. The first one you mention in this report is the possibility of regulations that are targeted and strong enough to really put a damper on growth. How might that happen?

Tamay

I expect there to be a bunch of regulations that dampen the rate at which AI is deployed. The types of regulations that I expect to have the best chance of curtailing this explosive growth, or this rapid process of automation, would be compute-based regulation. This is largely because the training of these AI systems, as well as running inference, requires large data centers filled with GPUs, for which there’s a concentrated supply chain, with only one company producing the lithography machines and basically one company doing the fabrication. So it’s relatively easy, given both the footprint of those data centers and the concentration of the supply chains, to regulate the training, the advances, and the deployment of these systems.

Fin

Just as a side note: before this era of compute-intensive, deep learning-based AI, which is clearly leading the way among the most capable kinds of AI, some people expected that the way to get generally capable human-level AI systems would be through small teams tinkering around with innovations that would finally unleash AGI. They would do this without using much compute, flying under the radar. If that were the world we were living in, it would be very difficult to prevent that without extremely invasive and expensive regulation. The reality is that we live in a world where two things are true. First, whether or not we get human-parity AI soon, frontier AI development is already incredibly expensive and is going to get even more expensive. It has a huge, unignorable physical footprint, uses a lot of energy, and requires many talented people. That’s pretty easy to target as a regulator, because it’s just there. Second, as you mentioned, the various parts of the supply chain and the process of building these AI systems are quite bottlenecked. You don’t need to cover a huge number of bases to prevent all possible routes to building more powerful AI; in some cases, you can identify just a small number of manufacturers or companies. This suggests that regulation might be much easier than you would have expected 10 or 20 years ago. But the question here is that we’re talking about regulations that would dampen vast amounts of growth, which means they would make the world far less rich in economic terms, at least in the short term, than it would otherwise be. You might think that’s superficially unlikely, because why would you forego all these economic gains? So one question is: do we have any examples of regulatory initiatives historically that have prevented very large amounts of growth?

Tamay

I don’t think we have successful examples of regulations that forestalled technologies as consequential as I expect AI to be, largely because there are few such technologies. One set of technologies that might be in this reference class is those associated with the Industrial Revolution: machinery for textiles, metalworking, and leatherworking, and the steam engine. It’s quite interesting that the UK actually had strong restrictions on these technologies from the late 1700s through the mid-1800s. This was a pretty elaborate effort to forestall the diffusion of these technologies to other countries, to contain the Industrial Revolution within Britain. Six government departments were involved, so it was not at all a small effort. There were restrictions on which machinery could be exported, it was illegal for skilled artisans to leave the country and set up shop elsewhere, and foreign states even sent spies over to do industrial espionage and discover the secrets of how these factories and machines worked.

However, the effort turned out to be ineffectual, because it was hard to specify exactly what should be restricted. They restricted some things but forgot about others. For instance, the machines and tools used to build the machinery for producing textiles or rubber were not on the restricted export list, and those were successfully exported to continental Europe, the US, and Russia. There was also a lot of emigration of skilled artisans who set up shop elsewhere, and that was hard to catch, because the necessary surveillance was beyond the state’s capacity. So this is one example where the relevant technologies did accelerate growth and regulation tried to curtail their diffusion, but it turned out not to work.

Fin

That’s very interesting. I didn’t know that at all. Can you say anything about what motivated those regulations from the UK? It seems surprising to me.

Tamay

I think the motivation was basically the recognition that these technologies were powerful and would give Britain an edge in producing high-value outputs cheaply, which it could then export to earn a lot of money.

Fin

Okay. So the motivations were kind of protectionist. And these regulations failed, right? Industrial machinery spread pretty quickly from the UK. You might also think that protectionist regulations limiting the export of AI technology and hardware wouldn’t be enough, since other countries can just build the thing domestically; if anything, export limits give them a reason to. Does that sound right?

Tamay

Right, and export controls are motivated by precisely the opposite of forgoing growth: you’re trying to keep the gains for yourself. But the basic lesson is that people have tried to prevent these technologies from diffusing and have failed. That suggests that keeping technologies in the box, however you define that box, is just kind of hard.

Fin

I guess people also like to point to nuclear power. If you could run down the experience curve and get it a little cheaper, nuclear power would seem like a clean, cheap, good source of energy. And yet there isn’t a country on earth building vast amounts of nuclear power or innovating much there, which means the world is presumably forgoing some large economic gains. Maybe that’s an example.

Tamay

Nuclear energy is not a super compelling example to me. It’s often brought up as: we basically killed nuclear power, so what’s to say AI would be any different? Why can’t governments just crack down and prevent this from happening? My response is that nuclear power is great, but it’s not nearly as big a deal as being able to automate human labor. One very basic point is that energy’s share of factor compensation is about 5%, which is roughly an order of magnitude smaller than what we pay labor globally. The other thing is that the price elasticity of demand for energy is quite low. Demand is quite inelastic, which means that if we halve the price of energy, the amount of energy consumed less than doubles.

Fin

So you just shrink the sector by making it cheaper.

Tamay

Exactly. Every innovation that makes energy cheaper would just reduce the fraction of world output dedicated to energy. This is true for the short-run elasticity, but also for the long-run elasticity: if we innovate and get cost-saving technologies, the amount of energy used turns out not to grow by very much. And I think this is one reason the case of nuclear fission isn’t a strong argument for expecting that regulation could be successful against AI.
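To make the inelasticity point concrete, here is a minimal sketch using a constant-elasticity demand curve; the elasticity value of 0.5 is purely illustrative, not a figure from the conversation.

```python
# Constant-elasticity demand: q = A * p**(-eps); inelastic when eps < 1.
def demand(p, eps=0.5, A=1.0):
    return A * p ** (-eps)

p0, p1 = 1.0, 0.5                        # the price of energy halves
q0, q1 = demand(p0), demand(p1)
print(round(q1 / q0, 2))                 # 1.41: quantity less than doubles
print(round((p1 * q1) / (p0 * q0), 2))   # 0.71: total spending on energy falls
```

With inelastic demand, making energy cheaper shrinks energy’s share of output, which is the sense in which killing nuclear forwent relatively little growth.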

Could AI-slowing regulations prevent explosive growth?

Fin

I guess if you imagine regulation succeeding at preventing explosive growth from AI, what comes to mind? What do you think is the most likely story there?

Tamay

Yeah. The most likely story is that making it illegal to do training runs above a certain scale means we never end up building the relevant technology, so we never enter a regime where we have credible demonstrations of AI being this extremely powerful force for economic growth, technological change, and welfare. That, I think, is the key thing. If you were to ask me, supposing we don’t get very powerful AI in the future, why that happened, that is my best guess.

Fin

That’s interesting. So it sounds like preventing new AI capabilities is more likely to succeed relatively early on, because the later you leave it, the more demonstrations you have that this could in fact unlock a lot of economic gains.

Tamay

Exactly. And even though I think this is maybe the most likely candidate, my arguments about the Industrial Revolution and nuclear fission suggest it isn’t more likely than not. I don’t think it’s very likely that we end up killing AI early on; I think we’re already quite close to having very compelling demonstrations.

Fin

Short of just killing AI, do you expect regulations to significantly push back the arrival of things going crazy, of explosive growth?

Tamay

I mean, maybe on the order of multiple years or something, which may or may not be large in proportional terms, depending on how quickly you expect this to arrive […] I don’t think it’s like a very large proportional slowdown or something.

Fin

Yeah. Just finally on this: in the report, you give some reasons why it might become harder over time to regulate AI of a given capability, because costs keep falling. What’s the thought there?

Tamay

So currently we’re seeing efficiency improvements on both the software side and the hardware side. On the software side, that means the algorithms used to train these systems and the key implementations of those algorithms on the actual hardware; on the hardware side, we’re seeing improvements in price-performance. These two forces drive down the cost of achieving a given level of performance. The software-side improvements are really fast: roughly every eight months, the compute needed to achieve a certain level of performance in, say, vision models or language models halves. So in order to regulate the training of very capable systems, as long as these algorithmic improvements continue in the background, you need to keep expanding the surveillance required to find those training runs and stop them from happening. And that seems really quite hard.
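As a rough illustration of what an eight-month halving time implies (the halving time is from the conversation; the horizons are just for illustration):

```python
# Compute needed to reach a fixed performance level, relative to today,
# assuming cost-per-performance halves every 8 months.
HALVING_MONTHS = 8

def relative_compute(years):
    return 0.5 ** (12 * years / HALVING_MONTHS)

for years in (1, 2, 4):
    print(years, round(relative_compute(years), 3))
# 1 -> 0.354, 2 -> 0.125, 4 -> 0.016: a fixed regulatory compute threshold
# captures ever smaller and cheaper training runs each year it stands still.
```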

Fin

Yeah, that seems like quite an important factor. One neat example: a couple of months ago, Andrej Karpathy, an influential AI researcher and founding member of OpenAI, managed to train a GPT-2-equivalent model for about $20 in about an hour and a half on easily rented hardware. And I think the original GPT-2 cost something like $50,000 to train, over much more time. My understanding is that this was roughly on trend for the declining cost, in compute and in dollars, of training equivalent models over time.
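A back-of-the-envelope check on that example (the dollar figures are from the conversation; the roughly five-year gap between the two training runs is my assumption):

```python
import math

cost_2019, cost_now, years = 50_000, 20, 5   # original GPT-2 vs the ~$20 rerun
halvings = math.log2(cost_2019 / cost_now)   # ~11.3 halvings in dollar cost
print(round(12 * years / halvings, 1))       # ~5.3 months per halving
```

That is somewhat faster than the eight-month software-only figure, which is consistent with hardware price-performance improving in parallel.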

Tamay

That’s right. The footprint of the data center needed to achieve a given level of performance drops by orders of magnitude over the course of several years. So as long as you don’t also stem the tide of these algorithmic innovations, it becomes really difficult, requiring ubiquitous surveillance, to prevent training runs that produce models of that capability.

Fin

Yeah, totally. And to make a somewhat trite point, we have an existence proof in the form of humans, who reach human-level performance using way less energy and capital than the biggest training runs, which still can’t reach it.

Tamay

Exactly.

Fin

Okay, so that was one reason you might think explosive growth from AI doesn’t happen, namely regulations. It sounds like you come down on the side of thinking that regulations may well delay the onset of rapid growth by years, but that preventing explosive growth entirely and indefinitely seems quite unlikely. Let me mention another case against explosive growth from the report, which is the idea of factors of production that are non-accumulable. What exactly is the idea there?

Tamay

We’ve previously talked about how labor is currently non-accumulable: we can’t reinvest our output to build more humans. With AI, labor becomes accumulable, because we can build more data centers. But there might be other key inputs, besides capital and labor, that end up being quite important. Two examples might be land, meaning the physical land on the surface of the earth available for economically relevant uses, and energy: there might be a limit on how much energy we’re able to access, whether from the sun or from fossil fuels.

Resource and energy constraints

Fin

Okay, yeah. So these are factors of production where you might run into limits that are pretty hard to work around: you might run out of land, you might run out of useful power, or you might run out of other kinds of physical capital. Could you say more about how close we are to the limits in each of those cases? Maybe starting with land.

Tamay

Before getting into the details, there’s an intuition for why this objection matters. Historically, we’ve seen these accelerations happen: the Industrial Revolution gave us maybe an order of magnitude acceleration in growth rates, and then for some reason we ran into an input that was important but non-accumulable, namely labor. So the reason I find this a plausible objection is just that in our history we have run into key non-accumulable inputs that slow growth down, and one might naively expect there to be an additional such factor that ends up blocking us. Maybe accelerations of roughly an order of magnitude, followed by a halt, is the rate at which such key inputs block the next acceleration. That’s the intuition.

That said, no particular example is really compelling to me. Take land: we use about 1% of the land that we could in principle build on, so there are two orders of magnitude more land on earth, plus the oceans as a large additional factor on top of that. So I don’t find it compelling that we’ll be bottlenecked by land in particular.

There are also agglomeration effects: for humans, being close together is really quite important, and to some extent AI systems might also benefit from being geographically close, for reasons to do with latency and so on. But even with systems spaced far apart, communicating at the speed of light and incurring some latency cost, it’s really hard to imagine that being the key reason we can’t grow faster than 30% per year or so. That seems quite far-fetched.

Fin

Yeah, okay. So that’s land. What about power? I guess there are two things there: one is how quickly we can scale up energy production, and the other is absolute limits on how much power we can generate before crazy things start happening. What’s the story there?

Tamay

Yeah. With power, the earth receives on the order of 10^17 watts from the sun, whereas global energy consumption is roughly three to four orders of magnitude less than that. So there are several orders of magnitude of expansion that are feasible in principle. Now, we probably won’t be able to absorb everything that hits the atmosphere, but capturing some fraction of it might be feasible with much-expanded technological and industrial capacity, which still leaves two or three orders of magnitude of expansion from simply using more of what arrives. There are also efficiency gains: energy intensity, the amount of energy per dollar of output, has fallen by about 30% over the past 20 years or so. Those are modest but noticeable improvements, and it seems plausible, especially in a world where we start hitting these limits, that we could gain another order of magnitude or two from efficiency alone.
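For the headroom arithmetic, a quick sketch with standard ballpark figures (these round numbers are mine, not precise values from the conversation):

```python
import math

solar_input_w = 1.7e17   # power the earth intercepts from the sun, ~170,000 TW
consumption_w = 2e13     # rough global primary energy use, ~20 TW
print(round(math.log10(solar_input_w / consumption_w), 1))  # ~3.9 orders of magnitude
```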

Fin

And to note, earlier we were talking about the price elasticity of demand for energy. Presumably, if it looked like power production were a serious bottleneck to continued growth, there would be much more intense pressure than today to make power production more efficient. Cool, so that’s land and energy: two potential bottlenecks where, in some sense, the ceilings seem quite far away, like 100 or 1,000 times current use. You also mentioned physical capital in general. For instance, I hear people talk about various precious metals that are currently crucial for some part of the world economy, maybe AI hardware itself, maybe building more solar panels. And often you hear that we are near the limits on extracting those resources. So doesn’t it seem likely that we might hit one of those barriers soon?

Tamay

Take silicon, which is incredibly abundant. For none of the materials I’ve looked into can one tell a compelling story that it’s sufficiently scarce that we couldn’t increase its production by 10 or 100 times, that we couldn’t produce a relevant substitute, and that it’s so crucial that, even where the material itself can’t be substituted, we couldn’t find other ways of producing a close-enough output.

The other argument is that it might just be hard to produce enough robots or other types of capital, to scale up lithography production or data center production or chip production. I think this is somewhat plausible, though its plausibility is diminishing as we see very large scale-ups in AI-relevant output: production of data center GPUs has scaled by a factor of two or more per year for at least a couple of years. That makes it somewhat credible that we could see expansions at least as fast, especially in a regime where the margins on these outputs are high enough that it’s worth spending a lot of capital to make it happen faster. There are relevant examples from chip fabrication and data center construction, other industrial scale-ups like Chinese EV manufacturing growing at 100% per year or more, and cases of historical growth, such as Southeast Asia, with double-digit growth rates that weren’t bottlenecked by the rate at which roads and buildings could be built.

Fin

Yeah, that sounds right. I find it interesting that historically people have made arguments of the form: rare earth metal X is currently required for industry Y, and we seem to be at the limits of extracting more X, so we should expect industry Y to hit a brick wall soon. And it seems to me that this has very rarely, if ever, materialized. At the last minute, some substitute is found, or some way of routing around the problem, or some new deposits. And there is a systematic reason those arguments don’t work: if you can’t substitute directly for the resource, you can substitute for the component it goes into; and if you can’t substitute for the component, you can substitute for the product. You really have to fail at a lot of those layers before you get a significant slowdown, for better or worse.

Tamay

I think that’s right. Now, the argument here is slightly stronger than the usual peak-oil or peak-resource story, because in this scenario everything is growing really quickly, which accelerates the rate at which we hit any such limits. So it isn’t a very weak argument: historical base rates suggest that we tend not to accelerate forever, that we stall at some point. One should assign this some credence even when we can’t identify the specific limits.

Will new technological ideas get too hard to find?

Fin

That sums it up; sounds right. Okay, so let’s talk about another argument against explosive growth, which has to do with R&D specifically. You’ll remember that at the beginning of this conversation we talked about a reason for growth: you get increasing returns to scale in the economy from automating R&D. But maybe ideas just get really hard to find really quickly, so you don’t actually get much juice out of automating R&D, because you run out of really economically valuable new technologies and ideas. Does that seem like enough to prevent this kind of rapid growth? And does it seem likely?

Tamay

I think that’s a fair point, and it’s worth considering the historical track record of new technologies. But there are significant differences between AI and previous technologies like personal computers. Computers did enhance productivity, but industries often needed a long time to adapt and integrate them into existing processes. AI, in contrast, has the potential to change the nature of work itself by automating tasks across a much broader range of activities.

The pace of development in AI is also much faster: the rapid advances in machine learning, deep learning, and natural language processing have already demonstrated capabilities that were not possible before, which suggests AI may not follow the same trajectory of long adjustment periods before its full impact is realized. And AI is being integrated at a time when infrastructure and data availability are vastly better than in the past, so it can leverage existing resources more effectively, potentially leading to more immediate and transformative impacts on productivity. So while skepticism is healthy, the unique characteristics of AI, combined with the current technological landscape, make it reasonable to expect that AI could drive more significant changes in productivity and growth than previous technologies did.

Yeah, I think AI is really unique in that it promises the ability to substitute very broadly for human workers in a way that no other technology does. You can have some automation of mechanized tasks, say in agriculture, or some automation of specific professions or of tasks performed within a profession. But for each of those, standard theory predicts that it shouldn’t change the rate of growth and shouldn’t produce extremely large level effects, maybe with the exception of agriculture, which was an extremely large part of the economy before it was automated. These other types of technologies never held the promise of automating a key economic input across the board.

Computation is quite interesting here, in that most economically relevant computation today still happens within human brains. Although we produce GPUs at large scale, those GPUs are not yet doing as much computation as humans are. So maybe it’s not surprising that automating computation at the levels we’ve managed so far doesn’t have very large effects. But if that changes, so that most computation happens in GPUs rather than human brains, that might reverse.

Differences to previous new (information) technologies

Fin

I guess as well as the general track record of previous AI-like technologies, we could also look to present-day AI as an indicator of its effects on the future economy. And when I think about present-day LLMs, in some sense they are objectively incredibly impressive: they’re more broadly knowledgeable than any living person, and they can read an entire book faster than anyone possibly could. On the other hand, it seems like they haven’t meaningfully moved the needle on productivity or growth statistics, basically at all, despite some people confidently predicting that, for instance, GPT-4 would begin to show up in the productivity statistics. So again, isn’t that a reason to be skeptical looking forward?

Tamay

I think it’s a reason to be skeptical of the specific iteration of large language models we have today being able to move the needle. It’s maybe some evidence against deep learning more broadly, but not extremely strong evidence, given that we arrived at the current systems very recently and are rapidly trying to improve their capabilities. The labs are saying that we’re nowhere close to the limits of what the current setup of deep learning and transformers can do; there’s just a lot of work you can do to make these systems more economically valuable. That might take some time, but I don’t think it’s a very strong argument against the entire project of AI producing large effects on the economy.

Why aren’t more economists thinking about explosive growth from AI?

Fin

But maybe it is an update towards expecting lags between introduction, integration, and the statistics changing. Cool. Okay, so we’ve gone over a bunch of arguments for and against expecting explosive growth from AI. But in general, if the case you’re making is roughly right, there is some meaningful chance of a growth explosion with growth rates much higher than the world has ever observed, and that falls out of successful, predictive, mainstream models of growth in economics. That just sounds like the kind of thing you live for as an economist, right? It’s a really huge-if-true claim, and economists have a bunch of tools to bring to bear on how likely it is. So I’d really expect the relevant fields in economics to jump on this question and yell about it. What’s going on?

Tamay

I’ve been frustrated about this, and I’m confused; I don’t quite understand what’s going on. I do think the field of economics is dropping the ball on AI in a very important way.

My specific explanation is that there’s a kind of denial of the hypothetical of AI that could broadly substitute for human workers, an unwillingness to entertain the premise. I don’t quite understand why, because it would make sense to defer to AI researchers about what they expect the technology to be able to do. AI researchers do broadly expect AI systems to become able to substitute for human workers, and yet economists don’t defer to the relevant experts on that question. Instead, they have their own opinions about what AI systems can and can’t do, and then factor those strongly into their thinking, which I think is a mistake.

The other thing is the focus on current large language models, thinking of AI as just being large language models: as if we invented the large language model, and this is the thing we will have for the next 50 years, GPT-4 or maybe slightly better; as if AI researchers have hung up their hats and said, here’s the large language model, economists, please work out its implications. That way of thinking maybe reflects that economists aren’t sufficiently in contact with people working on AI, who are emphatically saying that this is not the endpoint of what they’re building. Sam Altman will say things like: this is the most embarrassing thing we could put out, this is the worst model you’ll ever have to use, and very soon we’ll have things that are much better. There isn’t enough deference to that story; instead there’s the view that this is just another technology added to the toolkit, like the combine harvester.

Another explanation is that economists have long been in the business of explaining why technology X won’t be a grand revolution, for things like the mechanization of physical labor, basic robotics, and agricultural automation. Those didn’t fundamentally change the growth mode we live in, and economists have helpfully pointed out the mistakes people made in reasoning about past technologies. But I think they’re now ruling out more technologies than they ought to. They’re right 99.9% of the time, but they’re wrong about this one; they have too strong an immune response against these types of arguments to engage with them.

One additional thing is the focus on explaining the historical growth patterns of the last 50 years or so, where our data is best. There, we’ve seen very constant 2-3% growth rates, and I suspect economists take that to be something like a physical law. I’m not sure it’s quite that strong, but they do seem to treat this regularity as really quite central to the economy we live in.

Fin

Yeah, I wonder, and tell me if this sounds wrong: as an economist, you care about your reputation among economists. And there’s a difference between coming up with some really neat idea, where you’re not really staking your reputation because it doesn’t make a strong prediction, and this, which is a prediction where it’s totally obvious if you get it wrong, in a fairly embarrassing way, because it sounds sci-fi and crazy to begin with. So if you’re sensitive to your reputation roughly as much as you care about working on big-if-true claims, you’re going to avoid really putting your neck out. Does that sound right?

Tamay

Yeah, I think that’s right. There are some failures of the field here, and in particular a distaste for making quantitative predictions. Economics papers very often make qualitative claims: this is the effect of some change, these are the comparative statics, if this goes up, that goes down. They are often less interested in precisely quantifying these effects and making quantitative predictions, which I find quite frustrating.

Fin

Yeah, I assume you’d think it would be good if there were more high-quality research in economics on these questions. I guess if you’re listening to this and you can contribute, then consider this your reason to do so or your extra oomph to do so. Well, okay, so let’s just begin wrapping up, and I really just want to ask this overall question of, given all these considerations, where do you actually come down on, let’s say, the chance of a growth explosion from AI this century?

Tamay

I think conditional on there being AI that is able to substitute for most human workers, I’m at maybe three-fourths or so on explosive growth happening. And I also think it’s very likely that we build AI systems capable of broadly substituting for human workers this century. So my unconditional view is roughly the same, a bit below 75% on seeing very high growth rates. And then I’m very uncertain about exactly how high those growth rates might be. They could plausibly be a lot higher than 30%, even. I have a pretty wide distribution over the growth rates I’d expect to see this century.
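For concreteness, the arithmetic behind that unconditional number (the 0.75 is the conditional credence stated above; the 0.9 reading of ‘very likely’ is my illustrative stand-in):

```python
p_explosive_given_ai = 0.75      # conditional credence from the conversation
p_broad_ai_this_century = 0.90   # illustrative stand-in for "very likely"

# Ignoring the small chance of explosive growth arriving without such AI:
print(p_explosive_given_ai * p_broad_ai_this_century)   # 0.675, a bit under 75%
```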

Fin

Yeah. In terms of your uncertainty distribution over what growth could look like this century, how all-or-nothing does it feel? Is it just: either we get into this feedback-loop regime or AI stutters out? Or is it smooth, with lots of ways things could go?

Tamay

I think of it as something smooth, where a bunch of things could push in favor of faster or slower growth. We’ve talked about a dozen or so different considerations, and it makes sense to average over them while accounting for their correlations: if AI turns out not to be super powerful, then it becomes easier to regulate, easier to keep out of certain domains, easier to indulge preferences for humans doing things. So there is some large mass on nothing happening, on just 3% growth per year, but there is also mass spread over a long range of high growth rates.

Open questions and recommendations

Fin

You know, we just talked about how it seems like the field of economics is currently dropping the ball on these questions. If someone listening to this wanted to help push forward some of these questions and maybe they have some background in economics, what are some open questions that people could take a look at?

Tamay

One question I’ve encountered is that existing models of production, say with a constant elasticity of substitution (CES) production aggregator, don’t do well for modeling AI, in particular because, for fairly technical reasons, they don’t permit new tasks forming and entering into production. This is a workhorse model, and it doesn’t let us think about what happens to output if an AI can do a task that humans previously couldn’t do, or can read a book in 10 seconds. Our machinery just breaks when we start introducing things like this. So that’s one thing.

Another thing, which has featured in this conversation, is the returns to R&D. The estimates people have produced are unfortunately not great, partly because the empirical techniques weren’t great. I co-wrote a paper introducing a bunch of empirical techniques for doing this in a better way, and applying some of those to estimate the returns would be good. Unfortunately, macroeconomic series are very short and strongly correlated with each other, which makes this hard: in our ‘Ideas Getting Harder to Find’ paper, we did a back-of-the-envelope calculation and found the estimate was effectively based on three data points of TFP, because TFP is so strongly correlated with everything else. So making progress there would be valuable.

The other thing that could convince economists is just building GPT-5. If you’re at OpenAI and you want to convince economists, your best shot is to build a really powerful AI system. I unfortunately think this may be one of the only ways economists would take this hypothetical more seriously: seeing the capabilities right in front of them.

And then there’s historical economic data, which has featured in providing evidence for these R&D-based growth models. Unfortunately, the data here isn’t great: much of it is based on McEvedy and Jones, work from the late 1970s that is nearly 50 years old, and since then no one has done much better. We have a lot of uncertainty about population sizes in Europe and in other important periods. So that seems interesting; I’m not sure how important, but I would certainly appreciate it.
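As a minimal sketch of the modeling limitation described above, here is a textbook CES aggregator over a fixed set of tasks; the weights, inputs, and elasticity parameter are illustrative, not taken from any particular paper:

```python
import numpy as np

def ces_output(x, w, rho):
    """CES aggregator: Y = (sum_i w_i * x_i**rho) ** (1/rho)."""
    return float(w @ x ** rho) ** (1.0 / rho)

x = np.array([1.0, 2.0, 0.5])      # inputs devoted to three tasks
w = np.array([0.5, 0.3, 0.2])      # fixed task weights
print(ces_output(x, w, rho=-1.0))  # rho < 0: tasks are strong complements

# The limitation: the task list is baked into the functional form. There is
# no way, within this aggregator, to represent an AI performing a genuinely
# new task that previously had no place in production.
```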

Fin

Yeah. There are so many cases where you follow the citation trail for some fairly load-bearing empirical estimate, and it turns out some guy made a guess in a paper one time.

Tamay

This is very much of that sort.

Fin

Anyway, okay, those were very useful, and we will list them all out. Just some closing questions now. A question we ask everyone: if you could recommend three resources, books or papers, for people who want to learn more, what would you recommend?

Tamay

One paper I’d recommend is David Roodman’s ‘Modeling the Human Trajectory’, which is basically my favorite growth-theory empirics paper ever. Roodman is really great, always super rigorous and exacting with his work, and he applies that mindset to the very important question of what the human trajectory has looked like, fitting a similar type of R&D-based growth model to the data. That paper is great both technically and for raising this very interesting question.

Roodman also replicated a paper by Robin Hanson on economic history as a sequence of growth modes, and that replication is itself really great. I like replications, and he does a really good job pointing out a subtle issue with Hanson’s paper: a key result turns out to come down to a limitation of finite-precision math, and he painstakingly shows there was an issue with the optimizer being used. Again, a very important question, and I like that paper.

Chad Jones has written a lot about this growth stuff; he’s basically the person who invented semi-endogenous growth theory. His paper on the past and future of economic growth, from a semi-endogenous growth perspective, is insightful. I think he doesn’t pay enough attention to AI there, but it’s still worth reading.

Finally, Tom Davidson’s explosive growth report is another place to go for more reading about explosive growth.

Fin

Excellent. And final question is just where can people find you and Epoch’s work?

Tamay

So you can find me on Twitter — @tamaybes. You can find Epoch’s work at epochai.org.

Fin

Very good. And we’ll link to all of those things you mentioned on the website. Okay, Tamay Besiroglu, thank you very much.

Tamay

Thank you.

Fin

That was Tamay Besiroglu on explosive growth from AI. If you’re looking for links or a transcript, you can go to hearthisidea.com forward slash episodes forward slash Besiroglu. That is B-E-S-I-R-O-G-L-U. Now, Tamay is the associate director of Epoch AI, which is a very impressive research institute looking into key trends in AI. If you want more Epoch content, we’ve already interviewed the current director, Jaime Sevilla. That is episode 60, Jaime Sevilla on trends in machine learning, and we’ll link to that. If you find this podcast valuable in some way, then probably the most effective way to help is just to write an honest review wherever you’re listening to this. You can also follow us on Twitter; we are @hearthisidea. As always, a big thanks to our producer, Jason, for editing these episodes, and thank you very much for listening.