Episode 29 • 12 May 2021

Phil Trammell on Economic Growth under Transformative AI



Phil Trammell is a graduate student in economics at the University of Oxford and a research associate at the Global Priorities Institute.


How will AI change the way the economy works? Will it make us richer, or leave us unemployed? Could AI increase the rate of technological discoveries — and just how rapidly?

These are just some of the ambitious questions that Phil Trammell and Anton Korinek explore in their latest working paper. It does a great job of summarising and explaining a wide range of possible answers.

The authors wrote the article to be accessible to people without a background in macroeconomics. However, for readers who might be intimidated by a sixty-page document, this write-up will try to highlight some of the key ideas more informally. It should also help build a baseline understanding of some economic terms and notation, making the working paper more approachable.

Thanks for listening! We'd love to hear what you thought about it — email us at hello@hearthisidea.com or leave a rating below. You can help more people discover the podcast by tweeting about it. And, if you want to support the show more directly, consider leaving us a tip.

Phil’s Recommendations 📚


A toy model of the economy

Before we can start looking at how AI might transform the economy, we first need to understand how the economy works — or rather, how economists represent it in a model.
The simplest way to do this is to treat all output as a single, homogeneous good. We will call this $Y$.

To produce this output, we also need inputs. Economists think of two main factors of production: labour ($L$) and capital ($K$). Labour involves all the hours that humans work; capital involves all the physical stuff that might help — including factories, tools, desks, etc. In addition to these inputs, we also have technology ($A$). This concept is less concrete — it covers anything that can make $L$ and $K$ more productive. Examples include scientific ideas, efficient factory processes, or entrepreneurship.

Putting this together, we can say that output is some function of these inputs. The challenging problem is working out what function this is.

$$Y = f(A, K, L)$$

This function is our supply equation. We will use this formulation to begin with:

$$Y = \dfrac{A}{\dfrac{1}{K}+\dfrac{1}{L}}$$

Note that this can also be written as follows (which we will end up using later on):

$$Y = A \cdot \dfrac{1}{\dfrac{1}{K}+\dfrac{1}{L}} = A \cdot \dfrac{1}{\dfrac{L}{KL}+\dfrac{K}{KL}}$$

$$Y = A \cdot \dfrac{1}{\dfrac{K+L}{KL}} = A \cdot \dfrac{KL}{K+L}$$

$$Y = \dfrac{AKL}{K+L}$$

Let us work out what this equation implies about the economy. The first (and most obvious) thing to note is that output is increasing in its inputs. More capital, labour, or technology means that we can produce more output—nothing surprising there.

Secondly, we see that capital and labour are “gross complements”. If we set either input ($L$ or $K$) to zero, then we will have no output at all — regardless of how much we have of the other. For example, if we had lots of machines but no people to use them, then we would not be able to produce anything at all. More generally, we will find it optimal to use a mix of $K$ and $L$. This characteristic makes intuitive sense:

Capital and labour are complements. So each worker gets more done with a better desk or better equipment in the factory. And obviously, desks and equipment are more useful when there are people to use them.

Thirdly, we can see how much each marginal unit of $K$ and $L$ contributes. Under perfect competition, we should assume that factors get paid their marginal products. As Phil explains:

At a time, capital and labour are each paid their marginal product. So that’s how much extra output gets produced by adding one more worker to the system, holding the capital fixed. That’ll be the going wage. And likewise, how much extra output gets produced by adding a bit of capital, that’ll be the interest rate you get.

This insight can give us a way to work out what determines wages $w$ and capital rents $r$ from our supply equation. We do so by differentiating and then rearranging things a bit:

$$r = \dfrac{dY}{dK} = \dfrac{A}{\left(\dfrac{1}{K}+\dfrac{1}{L}\right)^{2}K^{2}}$$

$$r = \dfrac{AL^{2}}{(K+L)^{2}}$$

And likewise for labour:

$$w = \dfrac{dY}{dL} = \dfrac{AK^{2}}{(K+L)^{2}}$$

There is an inverse relationship between how much a factor gets paid and how much of it exists relative to other inputs. For example,

$$[\downarrow]\, w = \dfrac{AK^{2}}{(K+[\uparrow]L)^{2}}$$

Again, this makes intuitive sense. If labour is scarce and is the bottleneck in production, it becomes highly valuable — and vice versa. If there is more capital than workers, wages are high and rents are low; if there is not much capital per worker, wages are low and rents are high. Note that a rise in technology benefits both workers and owners of capital.

(Looking at $w$, we see that when there is lots of $L$, only the denominator grows; when there is lots of $K$, the numerator grows faster than the denominator; and when $A$ rises, only the numerator grows.)

Now, we can also write a demand-side equation for our economy. If every worker gets paid $w$ and every capital owner receives $r$, then in total:

$$Y = rK + wL$$

Substituting in our equation for wages and rents, we can try and solve this equation. What we find is that, in equilibrium, supply equals demand. Neat.

$$Y = \dfrac{AL^{2}}{(K+L)^{2}}K + \dfrac{AK^{2}}{(K+L)^{2}}L$$

$$Y = \dfrac{ALK(K+L)}{(K+L)^{2}}$$

$$Y = \dfrac{ALK}{K+L}$$

$$Y = Y$$
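To make these formulas concrete, here is a minimal numerical sketch in Python (our own illustration; the function names and input values are made up). It evaluates the toy model, checks that paying each factor its marginal product exactly exhausts output, and shows that adding capital raises wages and lowers rents:

```python
# Toy "gross complements" economy: Y = A * K * L / (K + L)

def output(A, K, L):
    return A * K * L / (K + L)

def rent(A, K, L):
    # Marginal product of capital: r = A * L^2 / (K + L)^2
    return A * L**2 / (K + L) ** 2

def wage(A, K, L):
    # Marginal product of labour: w = A * K^2 / (K + L)^2
    return A * K**2 / (K + L) ** 2

A, K, L = 2.0, 100.0, 50.0  # illustrative values only

Y = output(A, K, L)
r, w = rent(A, K, L), wage(A, K, L)
print(f"Y = {Y:.2f}, r = {r:.3f}, w = {w:.3f}")
print(f"rK + wL = {r * K + w * L:.2f}")  # matches Y: factor payments exhaust output

# More capital per worker: wages rise, rents fall
print(f"doubling K: w = {wage(A, 2 * K, L):.3f}, r = {rent(A, 2 * K, L):.3f}")
```

Running this prints `Y = 66.67` and `rK + wL = 66.67`, confirming the supply-equals-demand result above; doubling the capital stock raises $w$ from 0.889 to 1.28 while pushing $r$ down from 0.222 to 0.08.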

For our purposes, this should be enough to draw out some interesting intuitions regarding AI. It should hopefully also help you understand what the notation means. Of course, the models that economists use are much more complicated. At the moment, our toy model is somewhat static, and doesn't answer how and why our factor inputs might increase over time (think population growth, capital accumulation, technological discoveries, etc.).

If you are interested in learning more, the first “proper” model that undergraduates get taught is the Solow-Swan Model, and my favourite online explanation of this comes from EconomiCurtis. I’d also recommend Jones’ Introduction to Economic Growth as a concise and accessible textbook on this.

Thinking about growth trajectories

Separately from this toy model, let us also think about what we mean by a “transformative” effect on the economy. First, it is worth keeping in mind just how unusual our current growth rate already is. Whilst we usually don't think of a 2-3% annual rise in GDP as particularly significant, by historical standards, the fact that growth has been positive at all, and consistently so, is unusual. The graph below shows how the Industrial Revolution marks a clear paradigm shift in our growth trajectory. We discuss this in our very first episode with Victoria Bateman.

Let us now consider how this growth trajectory could change even further. Phil gives us three cases which would count as transformative:

1. A permanently higher growth rate: instead of carrying on at 2-3% per year, output grows at something noticeably faster, say 8% a year or more.
2. A “Type I” singularity: the growth rate itself increases without bound (3%, then 4%, then 5%, and so on).
3. A “Type II” singularity: growth accelerates so quickly that output approaches a vertical asymptote, exceeding any fixed amount within finite time.
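To see how a Type II path can arise mathematically, here is a small worked example of our own (not from the paper). Suppose growth is super-exponential, i.e. the growth rate itself rises with the level of output:

$$\frac{dY}{dt} = gY^{1+\varepsilon}, \quad \varepsilon > 0$$

Separating variables and integrating gives

$$Y(t) = \frac{Y_0}{\left(1 - \varepsilon g Y_0^{\varepsilon}\, t\right)^{1/\varepsilon}}$$

which hits a vertical asymptote at the finite time $t^{*} = 1/(\varepsilon g Y_0^{\varepsilon})$. Setting $\varepsilon = 0$ recovers ordinary exponential growth, $Y(t) = Y_0 e^{gt}$; any $\varepsilon > 0$, however small, eventually produces a Type II singularity.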

At first glance, we might dismiss any growth singularity as infeasible — irrespective of how impactful AI might be. Economists are typically wary of anything that breaks from the past few centuries of observations. Many of these trends have been semi-formalized, such as the Kaldor facts, and growth models typically try to satisfy those. Needless to say, any kind of growth singularity would break all these models and trends.

But given that the past few centuries have been so unusual in human history, we may be putting misplaced trust in expecting these trends to continue. Note that this can go both ways. Growth could explode as a singularity; growth could completely stagnate. The point is instead that we should perhaps be more open to thinking about growth trajectories that look very different to the present day.

Another obvious critique is that we cannot literally grow forever until the end of the universe, and we cannot literally get infinite output in finite time. But as Phil explains, we are concerned here with what kind of trajectory growth could take before running up against hard physical constraints:

It’s physically impossible for there to be infinite output in finite time […] but so is constant exponential growth. In fact, so is constant output with no growth. The universe will end at some point. So these are all impossible. I think the interesting point is that these all seem like paths that the growth trajectory could resemble, at least until we start running into some constraint that these growth models didn’t have to consider historically. So if the bottleneck ends up being a natural resource constraint that isn’t currently binding.

It is also hard to know when these resource constraints become binding. Many people have predicted that this would happen within our lifetimes; those predictions are now widely seen as false alarms. See, most notably, The Population Bomb and The Club of Rome. Nordhaus et al. (1992) have an excellent discussion of these predictions which failed to materialise. They make the point that “just because boys have mistakenly cried ‘wolf’ in the past does not mean that the woods are safe”. Nor does it mean that we can ignore huge challenges like climate change.

However, techno-optimists believe that these obstacles are surmountable. Thinking into the far future, many concepts that currently seem sci-fi may help us overcome these constraints — such as asteroid mining, space colonization, or even Dyson spheres. Robin Hanson highlights this point in his analysis of a growth singularity. Almost certainly, the future will be weird.

As Phil further notes, there are no inherent barriers to (even sustained) growth of more than 2% per year:

It seems mistaken to me to write off these growth explosions as economists currently seem to be doing. Long-run growth has accelerated if you take the long view; it's like we're already in a sort of Type I singularity, where the growth rate has been increasing […]. There's no deep theoretical reason why growth can't be much faster. Lots of processes in the world self-replicate at more than 2% a year. If you put mould in a Petri dish, it will grow at more than 2% a year.

AI as a technology shock

Now let's consider how we can use our toy model to think about how AI might impact the economy. As previously mentioned, our toy model is too simple to explain where growth comes from: it could be an increase in $A$, $K$, or $L$. We have not yet looked at how much each of these inputs has historically mattered. Phil gives us some insight into this question:

Capital accumulation on its own can’t sustain growth because labour is too important a part of the production process. So the idea is if you keep the current technology the same, but give everyone bigger and bigger desks, as capital per worker goes to infinity like that, output just rises a bit — to some upper bound […]
But in the developed world, we’ve seen exponential growth. So what must be going on is we’ve not just increased capital. Yet, the rise in labour is not enough to explain this (and cannot explain the increase in GDP per capita). So what happened?

The answer is that technology — also known as Total Factor Productivity — has rapidly increased. A famous paper by Caselli (2005) found that technology accounts for around 60% of differences in incomes across countries today.

Perhaps the most obvious way AI might affect the economy is by increasing the stock of technology $A$. From our toy model, we can see that this has an unambiguously good effect: since $A$ multiplies the whole production function, a rise in $A$ increases output $Y$, wages $w$, and rents $r$ alike.

But is this enough to be considered a transformative effect? A one-time increase in technology will only have a one-time effect on the economy. So it is hard to imagine how this will give rise to the singularity scenarios that Phil outlined — that is, radical growth sustained over time. The arrival of AI might just be another step in staying on our business-as-usual 2% growth trajectory, following other innovations like the washing machine and the polio vaccine. Acemoglu and Restrepo (2018) discuss just this question.

Some economists have even argued that new technologies like AI will fail to have the kind of significant impact on the economy that earlier inventions did — like the internal combustion engine and electricity. See Gordon’s The Rise and Fall of American Growth for more on this.

AI as substituting for labour

If AI as a simple technology isn’t enough to have the transformative consequences that we discussed, could it affect the economy through other channels?

Perhaps the channel that has received the most media attention here is automation: replacing human labour with machines which do the job better than humans. Alarmingly, the BBC's job risk calculator highlights that 35% of current jobs in the UK are at high risk of computerization over the next 20 years.

In our toy model, capital accumulation was not enough for sustained growth, because at some point it gets bottlenecked by labour. We got this result because we assumed capital and labour are complements. But what if AI gets so good that it can also start substituting for labour?

We can write a more general equation to account for this, introducing a so-called “constant elasticity of substitution” (if you want to understand the maths behind it, see here):

$$Y = A\left(\alpha K^{\gamma} + \beta L^{\gamma}\right)^{\frac{1}{\gamma}}$$

Note that when $\gamma = -1$ we get back our “gross complements” equation (up to the constants $\alpha$ and $\beta$):

$$Y = A\left(\alpha K^{-1}+\beta L^{-1}\right)^{\frac{1}{-1}} = \dfrac{A}{\dfrac{\alpha}{K}+\dfrac{\beta}{L}}$$

And when $\gamma = 1$, $K$ and $L$ become perfect substitutes. Now $\alpha K$ and $\beta L$ are completely interchangeable:

$$Y = A\left(\alpha K^{1}+\beta L^{1}\right)^{\frac{1}{1}}$$

$$Y = A(\alpha K+\beta L)$$

Going forward, we will also assume that $\alpha + \beta = 1$, i.e. constant returns to scale (this is also what makes the Cobb-Douglas limit in the tangent below well defined).
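A quick numerical sketch (again our own, with made-up parameter values) shows how $\gamma$ controls whether piling up capital alone can keep output growing:

```python
def ces_output(A, K, L, alpha, beta, gamma):
    """CES production: Y = A * (alpha*K^gamma + beta*L^gamma)^(1/gamma).
    As gamma -> 0 this approaches Cobb-Douglas A * K^alpha * L^beta
    (given alpha + beta = 1)."""
    if abs(gamma) < 1e-12:
        return A * K**alpha * L**beta
    return A * (alpha * K**gamma + beta * L**gamma) ** (1.0 / gamma)

A, L, alpha, beta = 1.0, 100.0, 0.5, 0.5
for gamma in (-1.0, 0.0, 0.5, 1.0):
    # Hold labour fixed and let capital pile up
    ys = [ces_output(A, K, L, alpha, beta, gamma) for K in (1e2, 1e4, 1e6)]
    print(f"gamma = {gamma:+.1f}: Y = {ys[0]:,.0f} -> {ys[1]:,.0f} -> {ys[2]:,.0f}")
```

With $\gamma = -1$, output stalls at $AL/\beta = 200$ no matter how much capital accumulates; with $\gamma > 0$, output keeps growing in $K$ alone, which drives the substitution results below.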

Tangent: when $\gamma = 0$ we get the so-called Cobb-Douglas production function, which is the equation that textbooks typically start off with:

$$\ln Y = \ln A + \frac{1}{\gamma} \ln\left(\alpha K^{\gamma}+\beta L^{\gamma}\right)$$

$$\lim_{\gamma \rightarrow 0} \ln Y = \ln A + \alpha \ln K + \beta \ln L$$

$$Y = AK^{\alpha}L^{\beta}$$

This also gives us another way to think about substitution, whereby robots act as a separate factor input altogether. If a robot can do everything a human can do, then we can rewrite our equation as:

$$Y = AK^{\alpha}(L+R)^{\beta}$$
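As a rough illustration of what this does to wages (our own sketch, with hypothetical numbers): with robots inside the labour term, the wage is the marginal product of $L + R$, so flooding the economy with robots drags the human wage down even as total output grows:

```python
def output_robots(A, K, L, R, alpha, beta):
    # Cobb-Douglas with robots as perfect substitutes for labour
    return A * K**alpha * (L + R) ** beta

def wage_robots(A, K, L, R, alpha, beta):
    # Wage = marginal product of one extra human worker
    return beta * A * K**alpha * (L + R) ** (beta - 1)

A, K, L, alpha, beta = 1.0, 100.0, 100.0, 0.5, 0.5
for R in (0.0, 900.0):
    print(f"R = {R:>5.0f}: Y = {output_robots(A, K, L, R, alpha, beta):.1f}, "
          f"w = {wage_robots(A, K, L, R, alpha, beta):.3f}")
```

In this example, output more than triples while the human wage falls by over two-thirds: the total pie grows, but the share going to human workers shrinks.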

AI as an imperfect substitute

As Phil describes, even if the substitution parameter is permanently raised just somewhat above zero ($\gamma > 0$), so that machines become imperfect substitutes for labour, then capital accumulation alone is sufficient for exponential output growth. Labour is no longer a bottleneck, and we don't need a constant flow of ideas either.

So the total pie will keep on growing. But will workers benefit from this, given that robots will end up outnumbering humans by a huge amount under this scenario? As long as this substitution remains imperfect ($\gamma < 1$), it appears so. Again, we take a derivative to find the marginal product of labour, which will be what determines the wage:

$$Y = A\left(\alpha K^{\gamma}+\beta L^{\gamma}\right)^{\frac{1}{\gamma}}$$

$$w = \dfrac{dY}{dL} = \beta A L^{\gamma-1}\left(\alpha K^{\gamma}+\beta L^{\gamma}\right)^{\frac{1}{\gamma}-1}$$

This maths may look complicated, but the only thing that matters for our purposes is that so long as $0 \leq \gamma < 1$, an increase in $K$ increases $w$ too. That is, having more robots benefits human workers! Intuitively, human workers do still complement robots, even if just a little. And having a human worker is highly valuable precisely because they can make use of so many robots.
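As a sanity check on this claim, we can evaluate the wage formula numerically (a continuation of our earlier sketch; values are illustrative):

```python
def ces_wage(A, K, L, alpha, beta, gamma):
    # w = dY/dL = beta * A * L^(gamma-1) * (alpha*K^gamma + beta*L^gamma)^(1/gamma - 1)
    return beta * A * L ** (gamma - 1) * (
        alpha * K**gamma + beta * L**gamma
    ) ** (1.0 / gamma - 1.0)

# Imperfect substitutes (gamma = 0.5): more capital still raises the wage
for K in (1e2, 1e4, 1e6):
    print(f"K = {K:.0e}: w = {ces_wage(1.0, K, 100.0, 0.5, 0.5, 0.5):.2f}")
```

Each hundredfold increase in the capital stock raises the wage (from 0.50 to 2.75 to 25.25 here), exactly as the complementarity intuition suggests.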

AI as a perfect substitute

This case changes drastically when robots become perfect substitutes for labour. It no longer makes sense to use a mix of inputs — you should only use whichever input is cheaper per unit of effective work (a so-called corner solution). If workers are cheaper, businesses will only hire workers; if robots are cheaper, businesses will only use robots.

$$Y = A(\alpha K+\beta L)$$

$$w = \dfrac{dY}{dL} = A\beta$$

$$r = \dfrac{dY}{dK} = A\alpha$$

The marginal products are now constants: an extra worker always adds $A\beta$ to output and an extra unit of capital always adds $A\alpha$, regardless of the mix. So firms simply compare the cost per unit of effective work of each input:

$$\text{If } \dfrac{w}{\beta} < \dfrac{r}{\alpha} \text{, use only } L; \quad \text{if } \dfrac{w}{\beta} > \dfrac{r}{\alpha} \text{, use only } K$$
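The hiring rule can be stated in one line (our own sketch; the prices are hypothetical). A firm compares the cost per unit of effective work from each input:

```python
def hire(w, r, alpha, beta):
    # Cost per effective unit of work: w/beta for a worker, r/alpha for a robot
    return "workers only" if w / beta < r / alpha else "robots only"

print(hire(w=30.0, r=20.0, alpha=0.5, beta=1.0))  # workers still cheaper per effective unit
print(hire(w=30.0, r=10.0, alpha=0.5, beta=1.0))  # robots have crossed the cost threshold
```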

Presumably, as advances in AI continue, we will reach a point where robots become cheaper than labour. Hanson (2001) describes this as “crossing the robotics cost threshold”: at first human wages rise, but this also increases the incentive to replace workers, meaning that eventually “wages fall as fast as computer prices do now”.

Social consequences

Of course, in real life, labour isn’t homogenous, and we might imagine AI having a different impact across sectors. Phil notes that

The central theme, if anything, of the literature on the economics of AI in general has been its likely impact on the distribution of wages.

However, we may also want to consider more generally what a world without much use for human labour looks like. Concerns around a rise in inequality (between wage labourers and owners of capital) have led many to advocate for a universal basic income, broader distribution of shares, and even a robot tax. Many people have discussed (and criticized) these concepts, so there is a rich literature out there to explore.

AI as increasing the rate of discovery

Another way through which AI may be transformative is by changing the way we make future discoveries. This idea is in line with Griliches's (1957) “inventing a method of invention”. We have already seen examples of this phenomenon in the real world, such as AI advancing science in protein folding. Rather than replacing labour, AI could instead relieve the growth bottleneck by creating a steady stream of technological discoveries. Phil describes this logic in our interview:

Where does labour augmenting technology even come from? It presumably doesn’t fall out of the sky. Somehow or other people make it. People think up ways to reorganize the factory or something so people can do 2% more work than they could last year. And if AI can speed up that process, then that’s a whole new path to growth.

A great hope here is that we might get a form of “recursive self-improvement”. AI might make discoveries, which in turn help AI make even more discoveries. However, this virtuous cycle is not a given. For it to hold, AI would have to consistently improve its problem-solving ability at a rate faster than the rate at which problems become more challenging.

What you need for this AI recursive self-improvement thing to produce a singularity is positive research feedback. To a first approximation, you need the “standing on the shoulders of giants” effect to outweigh the “fishing out” effect. And we have no idea what these effects will be when we have AI that is as smart as AI researchers and as flexible.
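A standard way to make “research feedback” precise (our own sketch, following the Jones-style models related to the Romer work cited below, not anything specific from the paper) is an idea production function, where the flow of new ideas depends on research effort and on the existing stock of ideas:

$$\dot{A} = \delta A^{\phi} L_A$$

Here $L_A$ is research effort, $\delta$ is research productivity, and $\phi$ captures the net feedback: $\phi > 0$ means past ideas make new ones easier to find (“shoulders of giants”), while $\phi < 0$ means ideas are getting “fished out”. Whether AI-driven research can produce a singularity then hinges largely on how strong this feedback turns out to be once AI can supply $L_A$ itself.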

The same challenge exists for humans when we think about whether ideas are getting harder to find. This insight is at the core of Romer's famous model of “endogenous technological change”. On the one hand, we may be “standing on the shoulders of giants”, whereby breakthroughs in the past inspire and drive innovation in the present. Scotchmer (1991) is a classic exploration of this idea. On the other hand, we may be “fishing out” ideas, wherein advances can still be made, but only at an ever-increasing cost. In other words, is the process of finding new ideas more like solving a huge jigsaw (later pieces come easier), or mining for precious gems (going deeper requires heavy machinery)? Bloom et al. (2020) is a recent and rare empirical investigation into this question. Here the authors find the following:

The number of researchers required today to achieve the famous doubling of computer chip density [Moore’s Law] is more than 18 times larger than the number required in the early 1970s.

Whether AI (and we) can overcome these challenges remains an open question.

Conclusion and Further Readings

Now that we have taken a whistle-stop tour of the growth and AI literature, we suggest you also check out Phil’s working paper. It goes into these concepts in much more detail. Additionally, we hope you find the links below useful.

Thank you to Phil Trammell for his time.

Economic Growth

Impact of AI

Automation

Challenges to Growth

Transcript

Introduction

Luca 00:06

Hello, you’re listening to Hear This Idea, a podcast showcasing new thinking in philosophy, the social sciences and effective altruism. In this episode, Fin and I talk to Phil Trammell, who’s a research associate at Oxford’s Global Priorities Institute. Phil describes his work as at the intersection of economic theory and moral philosophy. If you are already familiar with effective altruism then you might know Phil best for his work on patient philanthropy, and he’s discussed this already at length on the 80,000 Hours podcast, which we highly recommend you check out. Instead, in our interview, Fin and I talk to Phil about his most recent working paper, which he wrote together with Anton Korinek, about how transformative AI might affect economic growth. We touch on three different channels through which AI might have an effect.

Luca 00:52

The first is about how AI can be generally seen as just an increase in technology and we ask Phil about whether this is really enough to drastically change GDP or wages. The second is about how AI might change the substitutability between capital and labour. This sounds a bit wonky, but really what it is is whenever you read in the news about how AI might take all our jobs and we’ll all become unemployed because robots will replace us, that’s really what substitutability means. And lastly, the third channel we talk about is how AI might itself change the way we make technological discoveries. I think this is probably the most interesting part of our conversation and Phil also believes that this might be what really has the most transformative consequences of all.

Luca 01:33

Overall, we just examine a lot of different economic concepts and whilst they’re all kinda through the lens of artificial intelligence, I just also generally think that this is a really great introduction and gives you actually a really good impression of how macroeconomists tend to think about growth and how they end up building their models. That said, things do get a bit wonky at times, so if you ever feel lost or kind of confused we recommend you check out Phil’s paper which kind of gives you a chance to see these things more step by step, as well as our own write-up, where we have tried to explain some of these kind of core intuitions a bit more kind of informally. I should also say that we are currently recording episodes still remotely, so audio quality can fluctuate a bit and admittedly this one was a bit on the rougher side at times. We are doing our best to try and improve this and hopefully we’ll be back to doing some in-person interviews again very soon, but for now here’s the episode.

[music]

Phil 02:25

Yeah, my name is Phil Trammell and I am at Oxford, where I’m a research associate at GPI, that is, the Global Priorities Institute, one of those pretentiously named EA research organizations in Oxford. And at GPI I do research on the long-run timing of philanthropy. It’s sort of become my main thing, and then to some extent, questions of long-run growth in general, including what AI might have to do with that. And I am an econ grad student at Oxford.

Luca 03:00

Awesome. So outta interest, how did you come to study economics and what was it within that, that got you interested in this Global Priorities Research?

Phil 03:10

My route to economics is that… So I started out at Brown thinking I might wanna study philosophy, actually. I didn’t really have any clear sense, but that was seeming like the most likely thing, because I was interested in the big normative questions mainly. So, ethics and decision theory and epistemology, but I quickly got the impression right or wrong that the people doing the most interesting and precise and rigorous work in those areas were economists. So, you know, like social choice theory, sort of the econ version of ethics and so on. That’s what first drew me to it. I was mainly interested in the most like, abstract philosophy adjacent corners of econ, but on the practical side I also wanted to make money and get a better understanding of how the world worked, in part so I could be a better philanthropist. I was sort of thinking along EA lines, even though I hadn’t come across the EA community at that point. And, yeah, econ obviously seemed way better than philosophy on those fronts too. So, it was sort of a no-brainer in the end.

Fin 04:21

Offense taken.

[chuckle]

Phil 04:22

Oh, sorry. Yeah, I should have… [chuckle] Anyway, that’s how I was thinking at the time anyway. Yeah. I mean, just about how I came across EA and what people are now calling Global Priorities Research. I just came across EA online a few years ago and was pretty quickly sold because like I said, I was already pretty sympathetic to at least like quasi utilitarian thoughts about how we should live and kind of what at least some of the implications were of that, like earning to give, and… I was sort of thinking about global poverty mainly. But on reading some of Will MacAskill’s work on moral uncertainty, I had an idea for a paper which was pretty far from the econ I’d done or anything I’d really been into before that, but it grabbed me and I wrote a paper and sent it to this philosophy journal, Synthese. And it got sent to Will and Toby Ord as referees, you know?

Phil 05:29

I’m sure you both know, but in case any listeners don’t, they are these two moral philosophers in Oxford who are like central figures in EA. And Will thought Toby might have written it, but really just had no idea, and Toby thought Will must have written it, but that it was too technical for Will. So maybe Hilary helped him. And when I went to EA Global and talked with Will about it and talked about some other things, he was impressed enough that he encouraged me to come to Oxford to help out with this new thing they were setting up at the time, called GPI, and I did.

Luca 06:07

That’s so interesting. That’s so cool as well that both thought the other was the one who wrote it; that must be quite the paper. What was it about exactly?

Phil 06:17

Yeah, I wouldn’t say it was so good that they each thought only the other one [laughter] was good enough to write it. They’re just niche enough. Yeah. It’s about the regress problem, as it’s called, in normative uncertainty, where if you’re not sure which moral theory is right or which decision theory is right or whatever, you might invoke some principles for figuring out what the right way to behave is in light of that uncertainty about the first order normative principle. But different philosophers proposed different ways of dealing with this normative uncertainty. So maybe you should have uncertainty at that level as well, and then you invoke some principle to deal with that, but you can have uncertainty there. And some papers had said that this was just like a knock-down argument against the idea of taking normative uncertainty seriously, because if you did, then you would have to take it seriously at all the orders of this hierarchy, and it’s just… It’s turtles on turtles. There’s no… There’s nothing. There’s no action-guiding norms at the end of the day.

Phil 07:25

And I showed that at least under some circumstances you can have uncertainty at every order of this infinite hierarchy and yet still have some answer at the end of the day as to which action, all things considered, you should perform. So it’s sort of a, yeah, technical exploration of what conditions you need for that kind of… For that to go through. And I think maybe, one day, one person besides myself will read it, I’m not sure.

Growth models

Luca 08:00

Let’s delve into the topics you already mentioned, one of which is perhaps the more econ-y one, which is talking about how growth might be affected by transformative AI, which is this working paper that came out last year. And I guess, to maybe set the scene a little bit: AI seems really pertinent and a very salient topic at the moment, with articles about how robots are gonna replace our jobs and there’s gonna be mass unemployment. Other articles are much more optimistic, talking about how this is gonna be this huge revolution and gonna unlock all of these benefits. And I think it would be really interesting to kind of delve into this and talk a bit more about how economists in particular understand this. So, maybe to kind of kick us off there, can you set the scene a little bit more about how economists think about growth generally in the long run? And yeah, how that affects people as well?

Phil 08:54

I guess it would be good to start with a crash course on just the basics of production even before you get to growth. And then once we have a grasp of at least how economists think about how production proceeds in a given year or something, and you say, “Well, what has to change for production to grow over time?” The basic ingredients in the standard economic model of production are capital and labour. The idea is we can think of the economy as sort of this big machine and every year it takes in… Well, it takes in all sorts of ingredients, but we can partition them and we can call some of them capital. [chuckle] We can call some of them labour, all the different kinds of human labour that gets done. It all goes into this machine, and it spits out a bunch of output.

Phil 09:47

And then we use some of that output to make more production in the future. We have the machines spit out more factories and desks and screws and wrenches and all of that, and we consume the rest of it. Okay. So, at a time, capital and labour are each paid roughly their marginal product. So that’s how much extra output gets produced by adding one more worker to the system for an hour, holding the amount of capital fixed; that’ll be the going wage. And likewise, how much extra output gets produced by adding a bit of capital, that’ll be like the interest rate you get by investing some capital. Finally, the last big fact to keep in mind is that capital and labour are complements. So each worker gets more done with a better desk or like, better equipment in the factory and stuff. And obviously, desks and equipment are more useful when there are people to sit at them and work them.

Phil 10:51

So when there’s more capital per worker, wages are high and the interest rate is low. And when there’s not much capital per worker, wages are low and the interest rate is high. Those are the ingredients we need, I think, to bear in mind before we even start thinking about growth.

Fin 11:07

So just to clarify some points there, so is the idea here that I can expect a higher wage if I am working with tools and machinery and other kind of non-human useful things, which help me make more stuff per hour and that stuff is capital?

Phil 11:27

That’s exactly right. Yeah. The stuff that you’re working with is capital, and then the stuff that you’re producing could be used either as consumption or as future capital.

Fin 11:34

Got it. And then one more question is, is there any way to increase the supply of labour without just increasing the number of people working?

[chuckle]

Phil 11:46

Yes, indeed. So, this is where growth comes in. So yeah, so remember that model I just sketched. Well, as time goes on, if we hold the number of workers basically fixed, the capital piles up, right? So the production machine is producing more output 'cause there’s more capital going in. So we get some growth in output per person and some growth in wages just by piling up the capital. But capital accumulation on its own can’t sustain growth, because the labour is too important a part of the production process, right? So the idea is if you give… If you just keep the current technology the same, so like the whole… The way the factories are organized and everything, but just give everyone bigger and bigger desks, give them standing desks, bigger computer screens, factories with plenty of elbow room, as capital per worker goes to infinity like that, output just rises a bit. It just rises to an upper bound. So that kind of makes sense, intuitively, like… I don’t know. You can give people more and more elbow room in the factory, but like…

Phil 12:56

It’s not, you’re not gonna have infinite output just as elbow room increases to infinity. But in the developed world, we’ve seen exponential growth pretty… On some… When you zoom out, it’s kind of remarkably steady, exponential growth for hundreds of years now, basically since the industrial revolution. So what must be going on is we’re not just piling up the capital, we’re also somehow piling up the labour. Right? Even though the population… I mean, it’s increased a bit, but even like output per person has increased. So, people are in some sense, doing the work… Like workers today are doing the work of two workers yesterday or four workers two generations ago. And so, what we’re doing over time is we’re not just piling up the capital, we’re also developing what economists would call labour augmenting technology.

Phil 13:52

And the idea there is we’re sort of figuring out ways to organize the factories and stuff, or just organize systems of production, so that people are doing different tasks than they used to, and they’re getting educated on how to perform these tasks and… But all of this ultimately allows one person to do in a day, what it took his parents, like, two people to do in a day. So that’s kind of how you get… The labour augmenting technology is how you in a sense grow the supply of labour, of effective labour as it’s called without having to actually have population growth. And that combination of things where the capital accumulation and the labour augmenting technology is ultimately what drives long-run growth.

AI as augmenting labour

Luca 14:41

I think that gives a good background about how to generally think about these growth models. And we’ll include more stuff in the write-up as well, if listeners are a bit unclear about it or kinda want to see some nice graphs and stuff that can help visualize it. But let’s delve into some of the more AI specifics. And I guess, it might be good to start off with the most simple way to think about AI, which is that it is just increasing technology, right? And this labour augmenting technology you said that allows us to do more things per person. Can you generally talk about how that kinda fits into this growth model? And what we might expect to happen to wages or growth in general?

Phil 15:24

So if AI just constitutes labour augmenting technology, then it would constitute sort of the next stage in a process that’s been ongoing for hundreds of years in the developed world at least. And it shouldn’t necessarily be expected to have any really transformative consequences. It would just be that, well, what it took to press on at 2% to 3% a year, 50 years ago, was we had to invent washing machines so that now a person just has to throw in some clothes and do some folding and gets the same output, clean clothes, that it used to take many more hours to perform. And now we’ll likewise come up with methods of production that require even fewer hours of input per unit of output. But it’s just… Well, if we didn’t have this constant stream of new technologies like that then growth would eventually plateau, because the capital would just… You know, it would get saturated in capital, but labour would be the bottleneck.

Phil 16:37

So, it’s possible that for some reason advances in AI could amount to increases in labour augmenting technology that are faster than those produced historically by things like washing machines. But I don’t see any reason… If that’s a category of thing it is, but it’s just another labour augmenting technology. I don’t really see why it would be particularly more or less impactful than any of the ones that have come before.

Luca 17:10

I don’t know how directly this relates to AI, right? But you can hear a lot of economists kind of of this view where they’re very skeptical of any new inventions. I’m thinking about people like Robert Gordon, right, who kind of point to this technology or TFP thing kind of falling since World War II and saying that inventions like the Internet and smartphones and the like, which we feel are very transformative, don’t really do much in the statistics at the moment, at least, other than kind of continuing a trend that has already happened. And if anything, not being as transformative as inventions like steam or coal, or these really big changes when we think about things like the industrial revolution.

Phil 17:54

Right. I think that’s a good point. So if you take a really long view at the growth rates of technology and of output directly, the growth rates were increasing for most of human history in the sense that they were really, really low before the agricultural revolution, and then a bit faster and then a bit faster again after the industrial revolution. So, when you project that kind of trend-forward, you can come to the conclusion that the singularity is near or whatever. And so, there are papers that have done projections like that. David Roodman at Open Phil recently wrote, I think the latest sort of example of this kind of projection.

Phil 18:36

But looking more recently, actually growth in the developed world has slowed down a bit. And Robert Gordon, author of this book, The Rise and Fall of American Growth, is most associated with really making this observation and, yeah, making the case that likewise technologies on the horizon are going to prove to be over-hyped, just like, in his view, and I think reasonably so, technology in the past few decades turned out to be a bit over-hyped. The Internet, as great as it is, made people a bit more productive, but people just kept on getting more productive at a few percent a year or so. And it wasn’t… It didn’t like turn the world upside down any more than the washing machine did. I think that’s a… I mean, one thing we’ll get into are some of the hypotheses people have for why growth has slowed down a little bit to the extent that it has. I mean, it all just gets back to what AI will do to the whole production process, the whole model of production and growth that I sort of sketched out above. If it just turns out to be another labour augmenting technology, like the Internet probably is best thought of as being, then I don’t think it’ll be radically transformative. But if it does something different, like allow capital to fully substitute for labour, say. So now you don’t need humans and capital going into the factory. You can just have capital and more capital.

Phil 0:27

You take some of that stuff coming out of the factory and you just make robots and put it right back in the factory. That really does change the model in a deep way. And then Gordon-style predictions about the slowdown in labour augmenting technology growth, what you called TFP growth (it’s sort of the same thing), those sorts of projections wouldn’t hold up, because really, AI would just be kind of something of a different class.

Fin 20:34

And TFP being total factor productivity. Is that right?

Phil 20:38

That’s right. Yeah. So right. I was saying we could rearrange things so that people get more productive, right? You just, you don’t need as many person-hours to get a certain amount of output. To relieve the labour bottleneck, you need labour augmenting technology, or labour productivity growth; that’s the same thing. To relieve the capital bottleneck, you can either have capital augmenting technology, or you could just pile up more capital, right? You kind of can’t pile up more people and get more output per person. You get more output, but then you just have more mouths to feed, but you get growth in output per person with either… You need those two ingredients, right? You need labour augmenting technology, and then you need capital augmenting technology or just raw capital accumulation. And total factor productivity is technology that augments both of those factors, that is, capital and labour. So the only necessary part of total factor productivity for growth is the labour productivity benefit. If that makes sense.

Defining ‘AI’

Fin 21:42

Thanks. Yeah. Thanks for clearing that up. That’s really useful. And since we’re on the topic of definitions, could you also explain what you mean by AI, if you mean anything more specific than clever computers.

[chuckle]

Phil 21:56

I don’t have a precise definition of AI. And on some level, I sort of think that’s for the best, because at least it’s for the best for interpreting the literature on the economics of AI, because it seems like what economists have done is to sort of say, well, here are basic models of production and growth. Let’s just scratch our heads and think of all the ways in which we could tweak them. We could shake them up a bit in light of how computers could start behaving differently than they currently do. I don’t think… I shouldn’t try to speculate about what’s in their heads, but I… By and large, I think they don’t have precise ideas about exactly what sorts of technological advances count as AI. They’re really just papers about the extent to which capital could substitute for labour better and stuff like that. So I’m definitely not coming at this with some background in machine learning or something where I have a clear threshold for what the line is between just advanced statistics running on big computers and something that truly qualifies as AI.

Transformative AI

Fin 23:10

Yeah. That makes a lot of sense. One more precise term that gets bandied around is this term transformative AI. But if you are setting out to ask questions like, will AI end up being transformative in certain ways, then substituting AI with transformative AI makes that a pretty uninteresting question, right?

Phil 23:30

Yeah. I mean on the transformative front, that’s where I do have precise definitions. On some level, you could just interpret this literature review on the economics, or on growth theory, under transformative AI as being a little good, but it’s sort of just like, well, how could growth be transformed? [chuckle] Right. And then just for almost any way you think to tweak these models, you can sort of have some story for how AI was the thing that tweaked it. And what I mean by transformative is on the growth end. So when we’re just thinking about output, or output per person, an effect is transformative if it does one of three things. If it increases the growth rate, okay, so instead of carrying on at 2% to 3% per year we carry on at something noticeably faster than that, so maybe 8% a year, or, I mean, any amount higher, so 50% even, or whatever; that would qualify as transformative. The second is if it’s a so-called type one singularity.

Phil 24:53

So that’s if the growth rate itself increases without bound. If you’re growing 3%, but then the year after that it’s 4% and then 5% and then 6%, and that goes up without bound. And the last is a type two singularity, and that’s where growth carries on so quickly that output actually approaches a vertical asymptote. So this is the real singularity. This is, like, mathematically where the idea of an AI singularity comes from. And yeah, I mean, you chuckled a bit, and as anyone can see, it’s physically impossible for there to be infinite output in finite time. So that vertical asymptote, I meant with respect to time. So it’s like, this [chuckle] if you put time on the X axis and output on the Y axis, there’s some time before which you’re gonna surpass any fixed amount of output.

Phil 25:58

And yeah, that’s totally physically impossible. But so is constant exponential growth. In fact, so is constant output with no growth. I mean, the universe will end at some point, so these are all impossible. I think the interesting point is that these all seem like paths that the growth trajectory could resemble, at least until we start running into some constraints that these growth models haven’t historically had to consider. So if the bottleneck ends up being that there just isn’t enough… If it’s like a natural resource constraint that’s currently not very binding, like there’s just not enough stone left [laughter] on the planet, or there’s no more time in the universe or something, if that’s the constraint that breaks the curve, then that’s sort of outside the model. But the point is that the curve will look like one of those three scenarios. And if it does, then I call it a transformative effect.

Fin 26:58

Something that occurs to my kind of very, non-economist mind is isn’t it sometimes difficult to compare output across times or across places. Like if my country at kind of time A is just making, like making loaves of bread and olive oil and cotton or something, and then fast forward to time B and my country’s now making computer games and other kinds of software. And it’s like, how am I allowed to say that that is any amount more than what I was making at the first time, assuming I stopped making bread and olive oil at time B and I guess that’s relevant because you can imagine some longer-term future where output is characterized by what would seem really foreign to us. So maybe it doesn’t even exist right now.

Phil 27:56

Yep. That’s a really good point. The standard way of doing this over short time horizons is to ask, when this new product gets introduced or improved, right ('cause our bread is also maybe better in some ways than at least some bread people had to deal with a long time ago, and so on), you just say: how much of the old kind of bread would people be willing to give up for one of these new loaves, or how much bread would people give up for a computer game. And so it’s like now we have… You can sort of convert all of our output now into the units of consumption in the previous period. I think that makes sense over short time horizons, so that you can do this in a reasonable way.

Phil 28:53

But over the very long run, it does run into some big difficulties. So for instance, it might be that there’s actually no amount of the kinds of consumption that people had to make do with a long time ago that would bring someone up to the same utility level as consumption of the basket of goods that we have available today. Like now, maybe, if it’s just bread and cotton on offer in exchange for giving up central heating and central plumbing: you could just dump all the cotton you want on my doorstep and I’m just not taking it, if it means that I have to give up plumbing and heating. So then you’d have to conclude that now, in some sense, we’re infinitely rich in the units of ancient Rome or whatever. And you don’t wanna say that. I mean, we’re not infinitely rich, and they were not at zero. So I think this reveals that there are just some difficulties with doing these sorts of long-run comparisons, and people have different ways of getting around these issues. And I mean, I think it would be a podcast of its own to go through all the methodologies people have of making these sorts of long-run comparisons. But I agree that it’s not straightforward.

Luca 30:37

And I guess it’s worth saying as well that, in the context we’re kind of talking about, in this very theoretical, abstract growth-model sense, we’re kind of treating output as this homogenous, kind of blurry, just kind of thing, right. We’re not really specifying what that output is. It’s just output. And I guess the models are more interested in, yeah, in these more kind of abstract results rather than anything concrete we might be able to think about.

Phil 30:38

Yeah, that’s true by and large. I guess the one exception, of the models that I explore in this survey, is Nordhaus’s 2015 paper, in which he looks at potentially transformative implications of the fact that computers and, like, non-computer inputs to production might produce different kinds of outputs. So computers get you video games and stuff, but you can’t eat them, and bread’s never gonna be as fun as a good video game. It’s true that that’s not been a central focus of the literature on the econ of AI. And I mean, this is still such a new field and there’s still so many big questions about just, like, how to best think about what AI will be and how it’ll, broadly speaking, in broad brush strokes, affect how growth proceeds, right. Thinking about these specific impacts on output seems sort of second order to me. But yeah, at some point it would be great for someone to do more research on that angle.

AI as substituting labour

Luca 31:49

Well, let’s delve into some of those transformative effects then. You just mentioned that if we’re just thinking about AI as this labour augmenting technology, then we’re not really gonna get any of this. But as you mentioned in your literature review, there are lots of people who have thought about some of these transformative effects. And one of the ones that seems to be getting a lot of attention is this idea of capital-labour substitutability. So, I guess to start us off here, could you briefly explain again what we are really talking about, and how we might visualize this when it comes to AI, capital beginning to substitute for labour, and then also how this will affect these growth models, and in particular unemployment and wages.

Phil 32:30

So, as I was saying before, for growth to proceed as production currently has to unfold, you need two ingredients: you need capital to accumulate and you need labour augmenting technology, 'cause then basically you have an increase in both of these necessary inputs to the factory, if you will.

Phil 32:56

If capital can start substituting for labour, then you don’t need labour augmenting technology anymore. Lack of labour isn’t a bottleneck to increasing production in the future; you can just put capital in both slots. And empirically, the bottleneck to growth really does seem to be the lack of labour augmenting technology. It’s not a lack of capital accumulation, given how high the savings rate is — how much of their paychecks people save as opposed to consuming, what we’re collectively doing. And it’s not just about individual saving; governments also affect the aggregate savings rate in various ways.

Phil 33:42

Given all that, if labour weren’t a bottleneck and capital could just pile up, and that would determine output growth, growth would end up being maybe something like 20% a year (don’t quote me on that exact number, but it would be much higher than it is now, maybe something like 20% a year). So just making capital fully able to substitute for labour would definitely have a transformative effect on the growth rate. Now, what that would do to wages is ambiguous. It depends on just how substitutable the capital is for labour. So if capital can fully substitute for labour, there’s no sense in which having more capital around now makes labour more productive; you just make a robot, it just does exactly what the person did, and there’s nothing for the person to do to help out the robot, or for the robot to help out the person. They’re just side by side, you know, competing applicants for the same job. Then the wage rate should end up being the capital rental rate. And that should be driven to how much it costs to… Well, yeah, basically how much the capital costs. So if you can hire a robot, right, well, whatever the rental rate is on robots, which would just be like the amortized cost of producing a robot, and the electricity and the maintenance and all of that.

Phil 35:16

Then why would you ever pay someone more than that? And that will presumably be lower than the wage eventually. I mean, you know, technology (so this is capital augmenting technology now) makes it cheaper and cheaper to produce computers and TVs and everything, and the same would hold true for robots. So in that scenario, it doesn’t look good for wages. On the bright side though, in this sort of intermediate case, right — so intermediate between the current world, where capital really just mainly complements labour, on the one hand, and the robots fully, perfectly substitute for people scenario on the other hand, where wages fall — in this intermediate case, increasing the substitutability of capital for labour to that middle zone can actually increase wages a lot.

Phil 36:10

And what’s going on there is that growth now is no longer bottlenecked by labour. So growth proceeds at that 20% per year or whatever, because capital just piles up; that’s all you really need. But there’s just enough complementarity between all this capital that’s piling up and the humans that you have, that all that piling-up capital pulls up wages as well. So those seem to be the main two scenarios there.

Luca 36:37

To make sure I understand this right: in this second case, this optimistic case, I guess the reason why wages are going up is that, if you imagine you have one human and that one human is gonna make a hundred robots more productive, then having that one human is super valuable. So you’re gonna be willing to pay them a higher wage. Is that what’s going on?

Phil 37:00

Yeah, that’s a good way of thinking about it. You’re gonna be willing to pay them a higher wage when there are a hundred robots than when there’s only 50, because they’re complementing a larger amount of capital. So yeah.

Luca 37:09

The big question now then is, right, which of these two scenarios we’re kind of heading to, if AI does change this substitutability. And, you know, it might be very naive asking if we have an idea here, but it does seem plausible, right, that we could come to a stage where AI is able to do everything humans do, or at least at some price. And then the question kind of almost becomes whether it can do it cheaper than humans as well, right? And that seems to be a pretty big threat. There was a paper I really enjoyed that you reference in your working paper, by Hanson (2001); he talks about the robotics cost threshold. Can you mention that as well?

Phil 37:51

On the story I just gave, the thing that has transformative consequences is some technological breakthrough that allows capital to be substantially more substitutable for labour than it currently is. So right now, all we can do is make desks and factory widgets, you know, and so on. But in the future, we’ll be able to make all this robotics and stuff, which substitutes for labour well enough that growth can proceed at this higher rate. But we’re not gonna be able to do that until we have the breakthrough in AI and robotics. And so on.

Phil 38:50

There’s another possibility though, which Hanson explores, which is: let’s say we already knew how to make the perfect robot, but it costs like a billion dollars. No one’s gonna build robots now; they’re just too expensive. So we’re just gonna keep on producing things with human workers, right? We’re gonna have our capital on the side, we’re gonna have our humans as well, and that’s how we’re gonna make our output. And to get growth, we’re just gonna do what we’ve always done: we’re going to pile up the capital, we’ll develop better labour augmenting technology, and this process will keep on getting us growth at 2% a year or so.

Phil 39:11

But that means we’re getting richer and richer, right? So one day we’ll be rich enough that it’ll actually be worthwhile to start making lots of these billion-dollar robots, because we’re all billionaires by that point. And after that point, we’re in the regime where we get fast growth, that 20% per year or whatever, because after that, it’s just about the capital accumulation. So human wages probably fall after that. All the money that’s being made goes to whoever owns the robots. But you don’t need a technological breakthrough; you just need to wait for people to get rich enough for it to be worthwhile substituting robots for the humans.

Luca 39:53

I find that such an interesting idea as well, where you get this very sudden shift. In some ways it’s really great, right, seeing your wages go up, but that also might be sowing the seeds of your destruction: as wages go up, it becomes more and more incentivized for the owners of firms to eventually replace you with this fully substitutable robot. And that seems to be a very scary scenario, where initially we might think AI is great for society because it is helping make workers more productive and increase their wages, but at some point that very effect is gonna be the transformative thing that might hurt a lot of workers as well.

Wages and unemployment under transformative AI

Fin 40:33

So I have some top-level questions about the effects of AI on work and the economy. And maybe a place to start is: if it turns out that capital ends up being really substitutable for labour under AI, for whatever reason, then the natural thought is that you’re going to get a whole lot of unemployment; you could even imagine most people being unemployed. First, a very quick question: am I allowed to draw that conclusion from what you’ve been talking about?

Phil 41:12

What I said before was that if AI gets substitutable enough for human work, then you should expect wages to fall a lot, right? They could fall to, like, the electricity cost of keeping your robots running or something like that. And I didn’t say anything about unemployment, but you can’t survive on pennies a day or whatever. So realistically, I think it would just be a model in which it’s not that humans and robots are working side by side, both getting paid the electricity cost of the robots; I think there would just be a lot of unemployment.

Fin 41:53

Yeah. You have a lot of actors and musicians and everyone else is out of work, unless you’re, like, a tech person or something. That makes sense as a kind of cartoon picture. And then the other obvious effect to mention, or potential effect, is a huge rise in inequality. I’m channelling my inner Marxist here, but you’re going to have the owners of capital, who do very well, and then people who used to not own productive capital but who used to be workers, for want of a better word; suddenly they’re out of work because a machine is doing their job for a wage lower than they would reasonably take. So these two things together, unemployment and massive inequality, have alarm bells attached to them. But I can imagine that the models, or the kind of simple formal model you might use in economics, might end up passing over those potential social or political dangers and harms. So I’m curious to hear you say a bit about whether that’s true, whether I’m being unfairly harsh, whether the inequality worry is a real worry. And then maybe later we can talk about any potential fixes.

Phil 43:22

Sure. So first off, the inequality worry is definitely a real worry, and I don’t think the models fail to deal with it, per se. In fact, the central theme, if anything, of the literature on the economics of AI in general has been its likely impact on the distribution of wages. So that’s different from something else which is considered, but not quite as much, which is the labour share: what fraction of output gets paid out as wages, as opposed to as capital rents or as interest on investments. Historically, for a long time, a few hundred years in the developed world, another one of these surprising regularities, besides the growth rate being about constant, is that the labour share and capital share have been about constant: about two-thirds of output gets paid out as wages and about one-third as capital rents. So there’s two ways that inequality could be affected.

Phil 44:27

One is by changing the labour share. And this is more of the kind of Marxist concern, maybe, where you think, well, there’s some people that own the capital and then different people are earning the wages, so the primary thing determining inequality is what the labour share is as opposed to the capital share. But then there’s also the question of the distribution of wages, and I guess to some extent the distribution of returns on investment, but that’s not as significant. And just because you can’t cover everything in one go, and the document I wrote up is long enough as it is, I just focus on the overall growth rate of output and the labour share, well, and absolute wages. I don’t flesh out the models in enough detail to have room for differences in wages across individuals: low-skilled, high-skilled, different industries, anything like that. But it is something that economists look at; it’s just not something that I’ve looked at.

Phil 45:31

Something that you see throughout these models is that if you do get transformative growth of any kind, so an increase in the growth rate or a type one or type two singularity, the labour share falls to zero or something very low. So instead of workers getting two-thirds of all output collectively, they get approximately none of it. And this is true even in that middle substitutability scenario, where I said we get 20% growth a year or whatever; depending on the details of the scenario, you can even get a type one singularity there. So you get lots and lots of growth, and wages go up a lot; wages might themselves exhibit a type one singularity. But even so, the labour share is falling to zero in these scenarios. So there’s a pretty deep tension there, and the tension is that whether the labour share stays constant is all about whether labour remains a bottleneck.

Phil 46:35

If labour is still a bottleneck, then labour will still be getting most of the pie, because that’ll be what’s marginally productive, in a sense. And once labour stops being a bottleneck, absolute wages could go up or down, but the labour share will fall. So that’ll be a rise in inequality, but it could be a price worth paying if it means that wages go up a lot. To my mind, though, there is no necessary connection between a fall in the labour share and a rise in inequality. There is in a world along the lines Marx thought the world was, which does to some extent characterize the real world, where ownership is totally disjoint: you have some people who just own capital and live off the rents, and you have some people who just work and don’t own any capital. In reality, of course, most people do a bit of both. There aren’t many people who just live off their capital rents; they also work, even people with an inheritance and so on. And workers have their retirement plans, which own shares in companies. And so, at least in the US…

Phil 47:48

I don’t know the numbers everywhere in the world, but in the US, a majority, I think just a slight majority, of people above the age of 25 or something own shares, either directly or indirectly, in some sub-range of all the companies that would be piling up the capital income. So yeah, there’s no necessary connection there. I think it would just be good to ensure that ownership of the firms which could become much more productive as technology advances is more widely distributed. There’s a whole range of policy avenues to getting to that goal, but then people can be unemployed, and that’s fine, ’cause they’re getting their dividend checks.
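
For reference, the constancy of factor shares that Phil keeps returning to falls straight out of the Cobb-Douglas production function. A sketch, using the usual textbook notation rather than anything from the working paper: with output $Y = A K^{\alpha} L^{1-\alpha}$ and factors paid their marginal products, the wage is $w = \partial Y / \partial L = (1-\alpha)\,Y/L$, so the labour share is $wL/Y = 1-\alpha$, constant no matter how $A$, $K$, and $L$ evolve. With $\alpha = 1/3$ you get exactly the two-thirds/one-third split Phil describes. The tension he points to is that once capital substitutes well enough for labour, production stops looking Cobb-Douglas and $1-\alpha$ no longer pins the share down.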

Fin 48:32

I’m curious to dip into that a little bit. So in the scenario where the wage share… the labour share goes down, which, like you said, isn’t necessarily linked to inequality but in practical terms probably is, thinking ahead to that future where we have these unbelievably productive robots but a lot of people out of work, and a lot of people earning orders of magnitude more than other people: what are the policy fixes that you find most interesting? Maybe the most obvious one is UBI, but is there anything beyond that?

Phil 49:09

Yeah, UBI really does seem like a pretty straightforward answer. In a sense, it’s just what happens if the government owns a share of the company that’s making all the money off of these advances in AI, ’cause it has the right to tax away a third of its profits each year or whatever. That’s sort of just like someone owning a third of the company, receiving a third of the dividends every year, and then distributing them among its citizens. So there is some sense in which it might seem like an improvement on UBI to just directly have citizens own some fraction of the company in question. Because you get all the benefits of UBI, but it’s also tradeable: if someone just has some extra need this year and another person doesn’t, you could sell a little bit of your right to the dividend stream that you effectively have from the UBI. In practice, I think that’s risky, because there’ll always be someone who falls for whatever the alcoholism of the 23rd century is, some really amazing drugs or whatever, and there’ll be people ready to sell them, I don’t know.

Phil 50:28

Yeah, it would be a shame if someone, and not just that person but their whole lineage, probably all their descendants, were totally broke because they took advantage of this right they had to cash out their ownership of that fraction of the firms that are making all the money. So I think in practice some combination of capital-ownership subsidies or whatever, something that would make it easier for people, at least the poor, to buy capital, and that would push in the direction of equalization of ownership on the one hand, and then just flat-out UBI on the other hand, is probably gonna be optimal.

AI as speeding up innovation

Luca 51:12

So moving on from this question of substitutability: one other way that economists have been thinking about how AI might be transformative is that AI is not just a technology, but can actually affect the rate at which future technologies get discovered. To think about this, we’ve seen AI being involved in, I think, protein folding most recently, and that discovery in and of itself will presumably boost the growth rate in a more trivial way, but we can definitely imagine going forward how this could have quite transformative consequences. Can you speak to that a little bit?

Phil 51:53

Yeah, absolutely. So I think this is actually a channel through which AI could have even more transformative consequences than follow just from capital becoming more substitutable for labour. So here’s the idea. We talked about substitution, right? There, the thought was: we don’t need this labour-augmenting technology anymore; capital accumulation on its own can just take over the process of driving growth. But hold on, where does this labour-augmenting technology even come from? Where has it been coming from over these centuries, this 2 or 3% a year or whatever? Presumably it doesn’t fall out of the sky; somehow or other, people make it. People think up ways to reorganize the factory or something so that now a person can do 2% more work than they were able to last year. And if AI can speed up that process, that’s a whole new path to growth, right? And maybe it’s becoming clear how this could get to a singularity: because if AI systems are increasing labour-augmenting technology, but robots are themselves doing the labour and getting more productive, well, now one thing they’ll be even better at is coming up with further labour-augmenting technology, like further robot-augmenting technology, right? So you see how this could spiral out of hand. This is the whole recursive self-improvement idea, just in the language of economics.

Why ideas might get harder to find

Luca 53:28

One of the ideas that I really like about this is that when you think about this endogenous growth, so to say, so trying to work out what factors actually affect growth itself, one really important relationship is how having more researchers, or more resources dedicated to research, affects the growth rate, right? The growth rate of ideas. So there’s kind of two ways to think about it. On the one hand, you can think about how researchers complement each other: if you have more researchers, there’s more discussion, and you’re standing on the shoulders of giants, I think is the quote. And then there’s this other idea, which is more negative, which is that when you have more researchers they’re actually duplicating work, or kind of stepping on each other’s toes a little bit. And AI might have a really important role to play here in how this relationship works. Can you talk a bit about that?

Phil 54:18

Sure. So I think you might be getting two things a little bit mixed up there. There’s two interesting variables to keep an eye on in an endogenous growth model, where an endogenous or semi-endogenous growth model is one in which we actually model where the labour-augmenting technology comes from, right, where you need researchers and so on. Okay, so the two variables to keep an eye on are, first, what you referred to as stepping on toes versus complementarity. At a given time, in a given year, if you have 50,000 people working as researchers, everything from basic scientists to people doing R&D in an applied setting or whatever, do you create more or less than double the labour-augmenting technology you would’ve made that year if you just had 25,000 of them, right?

Phil 55:24

So that’s stepping on toes versus complementarity. But, totally separately, there’s this question of standing on the shoulders of giants versus fishing out. And the idea there is: the researchers working this year are working after a whole lot of other researchers have already piled up a big mountain of ideas, right? We’ve got all the technologies that have been invented up to today. On the one hand, this makes it easier for researchers today to make further progress, ’cause we have access to tools that our advisors didn’t have access to; we have computers and all the rest of it. But on the other hand, it makes it harder to make further progress, because there’s less low-hanging fruit in terms of technological improvement to be had. And that’s fishing out. So I call that the research feedback parameter.

Phil 56:32

So positive research feedback means that, as technology piles up, it gets easier to make further progress, that effect wins out; and negative research feedback means the fishing-out effect, or the low-hanging-fruit effect, wins out. And that’s just totally separate from the stepping on toes. So with that in mind: I was just talking about how AI could carry out a process of recursive self-improvement that gives you some sort of singularity, where the entities coming up with this labour-augmenting technology are robots, and that makes them more productive, but then they themselves are better at developing further and further advances in labour-augmenting technology, and so on. And all of this is true for human researchers as well, but it can proceed more quickly with robots, in part because the robots themselves can accumulate in a way that people can’t.

Phil 57:32

So at the same time, they’re not just getting better at labour-augmenting technology; they’re also cranking out loads and loads of robots, or lots and lots of AI computations on a chip, in a way that we’re not really doing when it comes to humans. I mean, to some extent we’re making it possible for the earth to support more people over time or something, but there’s no Moore’s law for humans, where we’re having twice as many people every 18 months or something. What you need for this AI recursive self-improvement thing to produce a singularity is, as a rule of thumb, positive research feedback. I mean, there are different ways of doing it in the models, but to a first approximation, you need the shoulders-of-giants effect to outweigh the fishing-out effect.
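
In the semi-endogenous growth literature, these two parameters usually appear in an idea production function along the following lines (the notation is the literature’s, slightly simplified):

$\dot{A} = \delta \, L_R^{\lambda} A^{\phi}$

Here $L_R$ is the number of researchers; $\lambda \le 1$ captures stepping on toes, since doubling researchers less than doubles idea output when $\lambda < 1$; and $\phi$ is the research feedback parameter, shoulders of giants when $\phi > 0$ and fishing out when $\phi < 0$. Phil’s rule of thumb corresponds to the case where the researchers are themselves accumulable AIs, so $L_R$ grows with $A$ and the effective exponent is roughly $\lambda + \phi$; growth explodes only if that combined feedback is strong enough, roughly exceeding one.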

Phil 58:27

And we have no idea what these effects will be once we have AI that is as smart and flexible as, like, a human AI researcher. But we can try to estimate research feedback in lots of other domains. So we can look at the extent to which it’s gotten easier for people at Intel to develop better chips, in light of the fact that they have the chips they’ve already developed to help them, compared to the extent to which it’s harder to make further progress ’cause you’ve picked the low-hanging fruit. And in lots of other domains: a paper that came out, I think two years ago now, called Are Ideas Getting Harder to Find?, does this sort of estimation, both for the economy as a whole and for lots of subdomains, and finds that wherever you look, these research feedback numbers are substantially negative.

Phil 59:31

And I think that’s pretty intuitive once stated, because in principle you could imagine research feedback taking place all over the place, but we don’t even think to look in most places because it’s so negative. So brain surgeons could operate on their own brains, or on each other’s brains, and then become smarter and better at doing brain surgery, to make themselves even smarter, right? And in fact this could produce infinitely smart brain surgeons in a week, you know? But it just doesn’t, because… [chuckle] Well, because the research feedback there is so negative. It just doesn’t help that much: having lots and lots of brain surgery done on you doesn’t make you all that much smarter.

Fin 01:00:16

It’s really worth underlining just how hard it is to get that positive feedback, especially when you look at the empirical facts, which I hadn’t looked at until recently. One surprising number, which embarrassingly I can’t remember, is just how many researchers are working right now relative to all the researchers, roughly all the scientists, who have ever lived; it’s a pretty significant fraction. What becomes clear is that the rate of scientific progress per person is lower than it has been historically, even though presumably individual researchers are far more effective than their predecessors. Think about how many scientists lived and worked during the Victorian period in Britain: if you just happened to be some kind of eccentric gentleman inventor, you had a decent shot at making a really significant discovery.

Fin 01:01:22

And now I can’t remember exactly much like sun costs. I know that the lighter the new fusion reactor is going to cost more than 20 billion euros. And you’re pushing like in the case of theoretical physics, at least the kind of narrative is, right. The story is unless there’s some kind of breakthrough just waiting behind the curtain. You’re getting like massively diminishing returns. You’re throwing so much money at problems that you need. You need international cooperation and governments putting up like a lot of taxpayer money to build these like huge machines. And you’re making the kind of progress that… Like I said, you could do if you happen to be a kind of lucky, eccentric scientist living curve 200 years ago. So yeah, it’s like the pressing and it means that the whole like feedback, positive feedback idea in the case of AI is like a pretty tall bar to meet.

Phil 01:02:15

Yeah, I agree. For anyone who is still maybe not 100% clear on what we’re talking about, or why we should expect this to hold in the AI case, I’ll just give a quick illustration. Let’s say you’ve got an AI system sitting on a table, and it’s gotten as good at AI development as a human AI developer, and it’s working on improving itself. So it works away all year, tweaking its code, and by the end of the year it’s twice as smart as it was at the beginning. If being twice as smart means that it’s more than twice as hard to make further progress, then next year this AI won’t make progress as quickly, ’cause it’s twice as smart, but the problem’s gotten more than twice as hard. And the AI can spend some of its time working on itself and some of its time in production, cranking out more AI systems to set to work on the problem. But it turns out that if you have negative research feedback, then even when it’s doing that splitting optimally, so that you’re getting more and more AIs working on the problem, AI progress will slow down, unless you pour more and more investment in from the outside or something.

Phil 01:03:41

So if the AI industry grows, then you’ll be able to sustain a lot of growth, but obviously that’s gotta run out if there’s no other source of growth we haven’t talked about. So there won’t be a growth explosion. And again, we just have no idea how AI will unfold, and it’s totally possible that research feedback is currently limited by something like how many distinct thoughts the human skull can hold in view at once, and AI will allow for some big breakthrough in which research feedback turns positive. But it would really be a big break from what we observe in basically every domain of potential recursive self-improvement.
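
The illustration above is easy to simulate. Here is a minimal sketch, collapsing the AI’s capability and the number of copies into a single stock with one combined feedback exponent; everything in it is illustrative:

```python
# Recursive self-improvement as a difference equation: each period
# the AI stock grows by delta * A**phi. With phi > 1 growth
# accelerates (singularity-like); with phi < 1 it decelerates even
# though the AI keeps getting smarter. Numbers are illustrative.

def growth_path(phi, delta=0.1, periods=5):
    A, rates = 1.0, []
    for _ in range(periods):
        dA = delta * A**phi
        rates.append(dA / A)   # proportional growth this period
        A += dA
    return rates

for phi in (1.5, 1.0, 0.5):
    path = ", ".join(f"{r:.1%}" for r in growth_path(phi))
    print(f"phi={phi}: growth per period -> {path}")
```

With the combined exponent above one, the growth rate itself keeps rising; at exactly one it is constant; below one (Phil’s negative-feedback case) progress decays unless outside investment keeps pouring in.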

Fin 01:04:26

And just to finish that thought, for my own sake: the idea is that as AI improves in its capacity to make research discoveries, it also has to outrun the corresponding increase in how difficult the problems get. Once you’ve solved the low-hanging fruit, you’ve got this next batch of problems; it gets harder. You are also better, but the rate at which you improve needs to be faster, and that’s the tricky thing. The idea is not that humans are unable to get better at doing science or research, or even just to get smarter; they can.

The Flynn effect

Fin 01:05:02

And one thing I just kind of wanna throw in is that the Flynn effect seems like a cool example here. I was reading recently about the Flynn effect as applied to chess, and one fascinating concrete example is that if you take a decent club player, and you should expect a dozen or so of these people to live in your city if you live in a small or medium-sized city, there is a pretty good chance that they could have beaten the world champion of the early 20th century. Which is a cool but mostly irrelevant fact. [chuckle]

Phil 01:05:39

I guess the relevance is, it’s all about what’s driving the Flynn effect. And if it’s just some obvious things, like getting lead out of the pipes and so on, then even though we’re smarter now, we’re only a bit smarter. I mean, it’s impressive how much smarter people are on average now than it seems they were a hundred years ago, but it’s not like they’re so much smarter that they’ll come up with something as important as getting lead out of pipes for making further advances in intelligence.

Phil 01:06:09

So given that, you should expect the Flynn effect to slow down. And maybe we should say what the Flynn effect is, for listeners who don’t know: it’s just this observation that scores on IQ tests seem to have been going up over time, faster than could be accounted for by any sort of selection effect where more intelligent people have more children. My understanding is that that’s not even true, but even if it were what was happening, it wouldn’t be enough to make the effect as big as it is. And in fact, my understanding is that the Flynn effect has been slowing down.

Did the internet really change things?

Luca 01:06:43

One thing I want to add into the mix here is one of the reasons I’m personally a bit skeptical about this AI revolution, and it kind of goes back to what we were talking about right at the start, with questions of how impactful the internet has been. Because you’re thinking here about this idea of duplicating work, or this standing on the shoulders of giants; there’s this kind of meta growth-research thing.

Luca 01:07:06

You could imagine the internet being super transformative in the sense that it just gives you access to so much of the existing stock of ideas, and it lets you check what has already been done, and you can imagine all these ways it clearly has been really revolutionary to research. And yet, if this in and of itself hasn’t driven many more new ideas coming out to match the increasing difficulty of problems, does that raise questions about how much AI can do, if the channel through which it impacts growth is this same thing?

Phil 01:07:43

Sure, it raises questions. Like I said, it is possible that the bottleneck wasn’t how many ideas a researcher could get access to, but something more tightly connected to limitations of the human brain. So if it’s just like, well, you can only hold five research papers in your field of vision at any one time or something, and you had more than enough at the local library, then the internet might mean that you can populate your life with somewhat more interesting or thought-provoking ideas than you could before, but it’s not going to have any dramatic effect. But AIs could make much better use…

Phil 01:08:34

And in fact, obviously, AIs do make much better use of large amounts of data and facts and stuff than humans ever do. So maybe, coupled with the internet, AI could allow growth to accelerate a lot, even if it’s not a singularity because you have negative research feedback. And yeah, there’s a paper that looks at how AI could accelerate growth in precisely this way, by increasing the fraction of existing ideas that can be used as inputs to the production of new ideas, new technology, using “new ideas” and “new technology” sort of interchangeably, which is sort of interesting, and which we can link to.

Implications of a new growth mode

Luca 01:09:18

So we’ve been talking here about these different possibilities of transformative growth, but as we mentioned in the intro, it has traditionally been really hard for these models to predict this kind of thing, and there’s a lot of skepticism out there about this kind of infinite growth and what implications it might have for society. What are your general thoughts on how economists should be thinking about this, and whether they’re thinking about it in the right way?

Phil 01:09:43

I would say that it seems mistaken to me to write off these growth explosion scenarios as thoroughly as economists currently seem to be doing. Long-run growth has accelerated: if you take the long view, as I said before, it seems like we’re already sort of in a type one singularity, where the growth rate has historically been increasing. Michael Kremer, Robin Hanson, and most recently David Roodman at Open Phil all have empirical papers looking into this, along with some thoughts about what might be driving it.

Phil 01:10:21

Also, there’s no deep theoretical reason why growth can’t be much faster. Lots of processes in the world self-replicate at more than 2% a year; if you put mold in a Petri dish, it’ll grow at more than 2% a year. And when you pick apart the ways in which AI could change how growth unfolds, there are a number of ways in which it seems like it could be transformative, as we discussed. It could just substitute for labour really well, so that capital accumulation can drive growth, and/or it could drive research and so increase productivity.

Phil 01:10:58

And that would be important for growth even if you don’t get something super singularitarian with positive research feedback. There could be some great arguments I don’t know about, but I think the main reason economists write it off is just that it would be such a break from the past few centuries of observations in the developed world that it feels a bit speculative and doesn’t really come to mind as a plausible scenario. In fact, there’s even…

Phil 01:11:29

The whole term “stylized fact” comes from a paper by the economist Nicholas Kaldor, in which he came up with these sort of surprising regularities that we’ve discussed, about the growth rate being roughly constant over time and the labour share being roughly constant over time and so on. So economists generally constrain their models not to deviate from those facts too starkly. Just to be a little more precise about that: economists do sometimes make very long-run growth predictions, for instance to make a judgment about what the carbon tax should be, right? To judge how bad it would be if climate change destroyed a lot of wealth in a hundred years, you need to make some guesses about how rich we’ll be in a hundred years.

Phil 01:12:18

And people disagree a lot about how to do these sorts of projections, but basically no one ever argues for a substantial growth rate increase, let alone a type one or type two growth singularity. And I think it’s most likely that there won’t be one, at least not within a hundred years. I do think that eventually, if you just give it long enough, I’d be surprised if we came up with no way to make capital that could do everything a human can do; it seems like in principle you should be able to make a good enough robot. But I definitely don’t have any sense that the singularity’s coming in 2036 or anything like that. Still, I don’t really know why economists don’t have this more on their radar as one of the possible scenarios. It’s just sort of: will growth keep on at the current rate, or will it slow down a little? Those seem to be the two main options they consider.

Luca 01:13:09

On that point of stylized facts, and definitely correct me here if I’m wrong, but it seems to be a bit of a question as well of what kind of time scale you are looking at, where a lot of the economic literature, and this question of time series and stuff, really starts with the industrial revolution, where in many ways economic history begins. But if you look before that, then it does look like these explosions can happen, right? Before the industrial revolution you didn’t have 2% or 3% growth; you had almost stagnant or even declining economies for long periods of time. And I think it might lead to a bigger question of whether this 2-3% isn’t just an artifact of our time scale not being long enough, where we’re kind of trapped in this industrial-revolution paradigm, just waiting for the next big revolution to come.

Phil 01:13:58

Yeah, that seems right. I will say, some of these projections, and even the people making them would of course agree that they’re tentative, but when used to come up with what the optimal carbon tax is or whatever, they are predicated on projections out centuries. So given that there was a transformative growth event a few hundred years ago, at that point you really are thinking on a scale where you should open yourself up to the possibility of another one, I guess. But that’s very rare. I mean, there are very, very few people who are working on growth projections on that kind of time scale. So by and large, yeah, they’re just thinking about the next few years or decades or something, and on that basis it seems most sensible to project off of the past few hundred years.

Luca 01:14:55

Can you pin down a little bit more what you just mentioned there with the carbon tax: why these growth explosions are relevant for that type of policy, and for those types of social impacts as well?

Phil 01:15:07

If you knew for sure that we would all be billionaires with amazing technology, and that it would cost us half a day’s income to build levees around all of Florida and everything in 60 or 100 years, or whenever climate change will start to have its most severe impact, then there’s no point doing much about it now, ’cause we’ll just be rich anyway; the AI-produced wealth can just solve all our problems at the time. So even if you don’t think that’s going to happen, if you’re gonna try to construct a distribution of possible growth paths and figure out how much we should sacrifice now in light of the expected cost of climate change, the expected welfare cost, you need to think about the whole distribution. And I guess in this case it’s not so important to leave out the positive tail of that distribution, because it’s more just about insurance against the negative tail.
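
A toy version of that calculation, with made-up probabilities and growth rates: the same absolute climate damage matters far less in the worlds where growth exploded, which is why the whole distribution, and especially the downside tail, drives how much to sacrifice now.

```python
# Expected cost of a fixed climate damage in 100 years, averaged over
# an assumed distribution of growth scenarios. All numbers invented.

scenarios = [  # (probability, annual growth rate)
    (0.45, 0.01),  # near-stagnation
    (0.45, 0.02),  # business as usual
    (0.10, 0.20),  # AI-driven growth explosion
]

damage = 0.5  # damage in 100 years, in units of today's income

expected_share = 0.0
for p, g in scenarios:
    income = (1 + g) ** 100        # income in 100 years (today = 1)
    share_lost = damage / income   # damage as a share of that income
    expected_share += p * share_lost
    print(f"g={g:.0%}: income x{income:>12,.0f}, damage = {share_lost:.3%} of it")

print(f"Expected share of future income lost: {expected_share:.3%}")
```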

Phil 01:16:23

But yeah, for all kinds of decisions like that, you want to have some kind of distribution of possible growth rates for the next century or whatever. And I think climate change is the one that’s most salient: most of the economists thinking about ongoing growth are thinking about it in the context of environmental economics. But it’s not a ton of people, and they don’t really take the possibility of a big growth rate increase very seriously. Actually, we know that it’s not just that they leave off the positive end of the distribution because it’s less important; we know because there are surveys. One survey of economists who do this long-run environmental econ stuff, and of environmental scientists, asked some questions about what they thought growth would be like, but the questions didn’t even allow for the possibility. Even the people writing the survey didn’t think to include it: the questions were like, by what date do you think the growth rate will have fallen below 1% a year?

Degrowth

Fin 01:17:34

Yes. So something I was wondering is what you have to say about this crowd, of economists and beyond, who go under the de-growth umbrella, where their mantra is, on one hand, an expectation that growth just must level off at some point, and on the other hand a kind of normative claim that that is a good thing, and that maybe we should get busy making it happen sooner. Because, you know, what growth means is more consumerism; it means wrecking the environment; it means making more shit that we don’t need. What we need to do is pull back, tighten our belts, and live within our means, if we care about making the long-run future go better and passing on a healthy planet to our descendants. We haven’t really discussed whether growth is a good thing, although my guess is that the assumption here is that it is, all else being equal. So I’m just curious what your reaction to this de-growth crowd is.

Phil 01:18:48

There seem to be a lot of motivations for de-growth put together there, at least in the presentation that you just gave, and I’d say some parts of it make more sense to me than others. I don’t think consumerism per se is bad; if it’s really all else equal and it’s just about people having more knick-knacks, I don’t see that as worth stigmatizing. But the possibility that what looks like growth now is actually unsustainable, because it just means depleting natural resources and so on, and that the sooner we realize that and slow down and work toward a sustainable plateau of global consumption the better: that possibility seems more reasonable, right? You don’t want to grow unsustainably and then collapse.

Phil 01:19:53

Okay, so just to go back to the original story about growth: I only mentioned two inputs to the big machine, labour and capital. If we’d been speaking a few hundred years ago, and if I’d known economics better than anyone knew it at the time, I would’ve probably mentioned labour and land, or maybe labour, land, and capital. But now land is kind of an insignificant input to production, so I can just sort of hand-wave and call it a kind of capital, and natural resources as well; they’re a very small contribution to production, in a sense. Now, what do I mean by that? Obviously they’re necessary, right? If there were just no land [laughter], if we were all just in space, we wouldn’t make anything.

Phil 01:20:42

And if there were no natural resources, you couldn’t ultimately produce any of the inputs to the factories and so on. But they’re insignificant in the sense that the marginal contribution of these inputs is low enough that they get paid a small share of output. So every year the big machine [chuckle], the economy, produces a bunch of output. I was saying something like a third of it gets paid out as capital rents and something like two-thirds gets paid out as wages; a few percent gets paid out as land rents, or the cost of natural resources and so on. And it’s so low because we have enough land and natural resources that having one more acre of land, if the whole globe of the earth expanded a little bit and we got an acre out of nowhere, or a bit more natural resources, wouldn’t really increase output much. Output is basically bottlenecked by labour and capital.

Phil 01:21:47

And so they’re the ones that get all the payment, and labour in particular is the binding one. Whereas back in the day, even though there were fewer people and less capital, we were just so much less efficient at using land that the landlords got a higher fraction of total output just for renting out the land; starting from that point, without any advances in technology, an extra acre of land would’ve been more valuable. So as time has gone on, surprisingly, we’ve gotten more efficient at using the land and natural resources we have faster, by and large, than we’ve been running out of them. And I agree that this couldn’t proceed literally forever. So I don’t think there’s any deep philosophical disagreement about whether growth ultimately has to slow.

Phil 01:22:49

I guess this sort of brings the whole conversation full circle, but none of these growth paths are truly sustainable: not the singularity, not exponential growth either, but also not plateaued growth at some Club of Rome ideal scenario, whatever it is, a billion people at 1950s levels of consumption, before the sun expands and the earth dries out. That’ll still be a limit to growth. So this question is just about which are the most binding constraints and which will bite first. And I’m no expert on natural resource usage, or on forecasting when certain constraints will start to bind, but my understanding is that the consensus is that we can probably sustain a lot more growth; there’s no reason to think the constraints on that front are looming.

Phil 01:23:44

So I would say, at the moment, I would just take the evidence of the factor shares at face value and say labour is the main constraint, and if we could just create AI that could substitute for labour, we would have a lot more growth for a long time. Obviously this has a sort of space element too, because one downside to just plateauing off is that, yeah, maybe you get a long future on earth for sure, but you never reach the level of technological advancement that might have let you settle space.

Phil 01:24:21

And even there you’d eventually have a resource constraint, but if you can figure out ways to use all the stars, you’d have significantly relaxed that particular constraint. And maybe that’s not feasible, but we’ll just never find out unless we really go for it and try to grow. And whether that’s a project worth embarking on, a risk worth taking, I guess depends on your priors about how long we would survive.

Fin 01:24:48

That is an entirely different podcast episode [laughter], which I look forward to doing at some point. Let’s ask some final questions, then, that we ask everyone. The first one is: what significant thing have you recently changed your mind about, and why?

Phil 01:25:05

I guess I’ve changed my mind to some extent about the value of working in government, or working to change government, relative to just doing philanthropy. So what I had thought was the following. In the US, at least, about 2.5% of output every year, of the national income, is allocated philanthropically, and like 38%, like 15 times that, is allocated politically. The US is sort of an unfair example, because the US both has less government spending and more philanthropy than most of the developed world, but it’s also a big part of the developed world. So that’s like 15 to 1, and way more than 15 times more effort goes into lobbying government, and writing angry articles and tweets and everything about what some policymaker has done, than goes into lobbying philanthropists, or just trying to inform philanthropists who want advice on how to do good. And on top of that, I think a substantial share of philanthropy, especially these days, is coming from these sort of technocratic, often Silicon-Valley-billionaire types who are really open to ideas that people have, kind of novel and sincere ideas about how to achieve impact. And to some extent there are also technocratic types in government, and money that’s open to being advised in a socially useful way, but I think that’s a harder thing to measure…

Phil 01:26:58

On some ways of trying to measure it, at least, you would come to the conclusion that a much smaller share of public spending is technocratic in the relevant sense than of philanthropic spending. And finally, all of this would be a strong argument for doing philanthropy, as opposed to policymaking or policy advising, even if we were just answering this question from first principles and didn’t know our place in the world. But as a matter of fact, and I can just speak for myself here, we actually are philanthropists to some extent. I don’t know if you donate money anywhere, but it’s something I do, and something a lot of people do.

Phil 01:27:55

And being part of the EA community, I’m in touch with a lot of people who are philanthropists of some size or another, and I don’t know any policymakers; I can’t just call up a policymaker. [chuckle] So it just seemed overdetermined to me; it seemed like there was clearly this giant low-hanging fruit of working on philanthropy instead of policymaking. But two things, right? First, as many people have pointed out to me, and as I’ve slowly come to realize, there’s a giant consideration: it’s really not just about the amount of money, it’s about what you can do with it. Governments, maybe the main thing they do isn’t just shuffling money around; it’s changing regulations and so on, including some that are very philanthropy-relevant, like disbursement requirements, how quickly foundations have to spend down their money and so on, which I happen to think is a really important type of regulation.

Phil 01:28:48

So there’s that, and that’s really important. And then also, maybe more importantly, I think that unless you’re working through government, it’s just much harder to know what you’re even doing when you spend money on a public good. So we’ve been talking about AI. As a philanthropist, you might want to top up the incomes of the very poor who are dis-employed by AI, or you might want to fund AI safety; we haven’t even talked about that whole side of it, but you might wanna hire someone at FHI to do AI safety or something.

Phil 01:29:21

For every dollar you spend on that UBI program or that AI safety program or whatever, you could just have some other actor, the government or some other philanthropist or something, who just spends less on that and spends more on some other thing, right? Google could just say: you’ve hired one new AI safety person at FHI, so that’s covered now; we’re not gonna hire one. So unless you know the whole general-equilibrium consequences of your contribution to a given public good as a private actor, you kind of just don’t know what you’re doing.

Phil 01:29:57

And you can try to work that out, and I think it would be valuable to do more research along those lines, for doing philanthropy better. But a real bright side to working through government is: well, hey, if you just increase subsidies for giving to the poor or for AI safety or whatever it is, or have a UBI that’s really just universal, if you just have a policy like that, then you can systematically skew the incentives or the allocations in a way where you know kind of what outcome you’re getting. And I think there might just be a lot of value to that. So on the whole, these arguments probably don’t mean that we should all stop thinking about philanthropy and just try to work in government or something, but I’ve definitely been pushed more in that direction by them.

Fin 01:30:43

All right. Last question, then: what three books, films, articles, or whatever else would you recommend for anyone interested in finding out more about what we’ve talked about?

Phil 01:30:58

Yeah. So first, I’d recommend checking out that paper from Nordhaus I mentioned, on what happens when the substitutability of capital for labour rises to that middle case, where it’s high enough that you don’t need labour for production anymore, so output’s not bottlenecked by labour and you can get a dramatic growth increase, but it’s not so high that everyone’s out of a job and unemployed. In fact, in this happy middle ground, all the capital that piles up actually increases wages. I think it’s a nice, easy-to-understand introduction to at least one path AI might take. Better yet, it also has a bunch of empirical tests of whether we seem to be moving in that direction of capital becoming more substitutable for labour in the relevant way. Second would be another paper we mentioned, on that feedback exponent seeming to be negative.

Phil 01:32:13

So this is Are Ideas Getting Harder to Find? by Webb et al., just because it’s such a central input to this second channel through which AI could be transformative. And it’s also one of the few relevant papers in this whole area that actually figures out a way to bring data to bear on this sort of question. So for its empiricism, I would recommend that one alongside the Nordhaus one. And finally, I’ll say it would be great if there were a textbook on the econ of AI that I could just point people to, but at the moment there isn’t, and the summary literature out there, other than the document we’ve been talking about, is mostly focused on relatively short-term issues rather than the long-term, growth-related issues.

Phil 01:33:16

You know, more about wage distribution than about things like a change in the growth rate. But if anyone wants an introduction to growth in general, you can’t really do better than Daron Acemoglu’s Introduction to Modern Economic Growth, if there are any economics students listening. I’ve found it quite accessible and helpful for understanding what the range of plausible growth models is, and how one might most naturally tweak them one way or another to allow for some effect, transformative or otherwise, upon adding AI to the picture.

Fin 01:33:54

Fantastic, that’s a great list. Philip Trammell, thank you very much.

Phil 01:33:57

Thank you.

[music]

Luca 01:34:00

That was Phil Trammell on economic growth under transformative AI. As always, if you want to learn more, you can read the write-up at hearthisidea.com/episodes/phil. There you’ll find links to all the papers and books we reference throughout the interview, plus a whole lot more. If you enjoyed this episode, you might also like our very first episode, with Victoria Bateman, about the industrial revolution. We’ll also hopefully do some more episodes explicitly about AI in the near future, so keep an eye out for that. We’d be very grateful if you could also leave an honest review on Apple Podcasts or wherever you are listening to this. If you have constructive feedback, there’s a link on the website to an anonymous form, or you can get in touch by emailing us directly at feedback@hearthisidea.com; we’d love to hear from you. And lastly, if you wanna support the show more directly and help us pay for hosting these episodes online, you can also leave a tip by following the link in the description. Thanks very much for listening.