Doyne Farmer on Complexity and Predicting Technological Progress

June 13, 2022


Further comments

We talk to Professor Doyne Farmer, the Baillie Gifford Professor of Mathematics at Oxford, the Director of the Complexity Economics programme at INET, and an External Professor at the Santa Fe Institute.

In this episode, we discuss:

  • How Doyne and his friends used physics and hidden computers to beat the roulette wheel in Las Vegas casinos
  • Advancing economic models to better predict business cycles and knock-on effects from extreme events like Covid-19
  • Techniques for predicting technological progress and long-run growth, with specific applications to energy technologies and climate change

Doyne Farmer

Image credit: Bloomberg

Doyne's recommendations

Coming soon!

Resources

Beating Roulette

Science of Chaos

General / Introductions

Doyne's work

Complexity Economics

General / Introductions

Business Cycles and Finance

Covid

Long-term Growth

Predicting Technological Progress

General

Implications for Climate Change

Resources for Forecasting

Humanity's Future Trajectory

Transcript

Introduction

Luca

Hi, you're listening to Hear This Idea. In this episode, we talked to Professor Doyne Farmer, who is the Baillie Gifford Professor of Mathematics at the University of Oxford, the Director of the Complexity Economics programme at INET, and an external professor at the Santa Fe Institute. We cover an insanely broad range of topics in this episode, including how Doyne and his friends used physics and hidden computers to beat the roulette wheel in Las Vegas casinos; how advancing economic models can help us better predict business cycles and knock-on effects from extreme shocks like COVID-19; and lastly, techniques for predicting technological progress and long-run growth, with specific applications to energy technologies and climate change. I should flag that Doyne said he will email us with his three book recommendations, so we will add those to the website as soon as possible. I thought it was really insightful how Doyne's work manages to touch on so many topics within the Effective Altruism space, such as tail risks, differential tech development and forecasting, but I also definitely expect some listeners might disagree with some of the conclusions that Doyne comes to, especially on topics like nuclear power and AI risk. I think we were sadly too short on time to really get into a longer discussion of some of the cruxes and disagreements here, though if there is listener demand we'd definitely be up for organising a round two! But in any case, I hope that laying out Doyne's thinking here will leave you with new insights to synthesise and questions to continue pondering. Lastly, I'm also really excited to announce that Fin and I have launched a listener survey to get more feedback on how we can improve the podcast. It should take around 10 minutes of your time to fill out, and there's a choice of a free book in it for you at the end.
We're really interested in continuing to experiment with new formats and making sure that we can provide you with content that's different from, or complementary to, other podcasts you might be listening to. So if you have the time, we'd really love for you to help us make this show as good as it can be. But without further ado, here's the episode.

Doyne

My name is Doyne Farmer. My formal titles are that I'm the Director of Complexity Economics, at the Institute for New Economic Thinking at the Oxford Martin School, and I'm also the Baillie Gifford Professor of Mathematics, and I'm in the external faculty at the Santa Fe Institute and the Complex Systems Hub in Vienna.

How Doyne beat roulette

Luca

Awesome, that's great. So I would be really keen to maybe take the conversation right to the very beginning, because I think you've got just a really cool origin story, if I can maybe call it that. Can you tell us the story of Eudaemonic Enterprises, and in particular, the story of how you came to beat the roulette wheel?

Doyne

Sure. So yeah, I got my start on doing prediction things when my friend Norman Packard had the idea that, since roulette is just a physical system, we should be able to use physics to beat roulette. And so we bought a roulette wheel, we did a lot of measurements on roulette balls, and we determined that indeed it was possible to predict the trajectory of a roulette ball, and predict roughly where it would land on the wheel, so that we could then place bets. The key is that after the croupier releases the ball, on the order of 15 seconds elapses before the ball falls off the track, but bets are only closed a few seconds before the ball actually exits it. And so we were able to tap with our toes, with little switches we built into our shoes, and determine the velocity of the ball by tapping every time the ball completed a revolution, and similarly for the rotor. And then we solved the equations of motion for a rolling ball in a circular track. Then we would send a radio signal to a second person - one person would stand at the roulette wheel and make the measurements, a second person would receive the betting signals and place bets.
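As a rough illustration of the kind of calculation involved - with made-up timings and a made-up calibration constant, not the group's actual program - here is how toe-tap times, one per ball revolution, could be turned into a predicted landing region:

```python
import math

def predict_landing(tap_times, rotor_period, falloff_speed=0.8):
    """Predict where on the wheel a roulette ball will exit the track.

    tap_times: toe-tap timestamps (seconds), one per completed ball
        revolution. falloff_speed (revs/sec) is an invented calibration
        constant: the speed at which the ball leaves the track.
    Returns the landing position as a fraction of the wheel (0-1),
    measured from the ball's current position relative to the rotor.
    """
    # Instantaneous ball speed (revs/sec) from successive revolution times.
    periods = [t2 - t1 for t1, t2 in zip(tap_times, tap_times[1:])]
    speeds = [1.0 / p for p in periods]

    # Assume the ball's angular velocity decays exponentially,
    # v(t) = v0 * exp(-t / tau), and fit tau from the last two taps.
    v0, v1 = speeds[-2], speeds[-1]
    tau = (tap_times[-1] - tap_times[-2]) / math.log(v0 / v1)

    # Time until the ball slows to the falloff speed.
    t_fall = tau * math.log(v1 / falloff_speed)

    # Ball revolutions remaining: the integral of v(t) up to t_fall.
    ball_revs = v1 * tau * (1.0 - math.exp(-t_fall / tau))

    # Rotor revolutions over the same interval (rotor speed ~constant).
    rotor_revs = t_fall / rotor_period

    return (ball_revs - rotor_revs) % 1.0

# Four taps over ~1.7 seconds; the rotor turns once every 2 seconds.
pocket = predict_landing([0.0, 0.50, 1.05, 1.66], rotor_period=2.0)
```

In reality the group solved the full equations of motion for a ball rolling in a circular track and calibrated everything against a real wheel; this exponential-decay fit just conveys the flavour - a handful of timing measurements pins down the trajectory well enough to pick a sector.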

Fin

What were you placing bets on - colours?

Doyne

No, we had bets on numbers, because the colours alternate around the wheel, so a colour tells you nothing about position; the numbers, on the other hand, are on a specific part of the wheel. And so we placed bets on three numbers that were in roughly the part of the wheel we thought the ball was likely to come to. Our predictions were not very precise, but they were good enough that we could say the ball was very unlikely to land on the eight or so numbers opposite to where it was coming out. And that gave us about a 20% edge over the house.

Luca

Right, right. And how come you guys were the first people to do this? This kind of physics isn't really hard, is it?

Doyne

Well, we actually were not the first people to try. There was another group, led by Claude Shannon, the pioneer of information theory, and Ed Thorp, who was probably the most famous gambler ever. Actually, it was Thorp's idea, but they built their computer in Shannon's basement. Now, their computer was an analogue computer, and they never successfully used it in a casino. They took it into a casino, but they had hardware problems - something we became very familiar with, because we had a lot of hardware problems too. But our effort was different in that we actually built the first wearable, digital computer - it was not only wearable, it was concealable.

Where was it concealed?

Well, the early version was concealed under an armpit. And so we had a computer under one armpit, we had a battery pack with 12 AA batteries under the other armpit, we had a little plate with solenoids that would buzz you with different frequencies to tell you which number was going on or help you keep track of what was happening, we had wires coming from our shoes up to the computers - we had to put on a whole wiring harness, we had an antenna wrapped around our shoulder. So kind of a complicated rig. And we made 11 different trips to Nevada. We made 20%, relative to the house, but I have to say we didn't get rich. You know, our dream was that we were going to make a bunch of money and start a science commune and be independent of the system. We didn't want to work in the military industrial complex and wanted to be independent.

Fin

What is a science commune? What does that involve?

Science communes and taking risks

Doyne

A science commune? Well, you know, this was happening from 1976 to 1981, so we grew up in the days of hippies, and we already lived in a little commune ourselves. It would be a science commune because we were a bunch of scientists: we'd buy, you know, a nice lab and computers and just do science research on whatever we felt like thinking about. Yeah, we thought it would be pretty great. But we didn't make all the money, and we ended up going off and, you know, getting jobs in academia. Norman Packard and I later started another company called Prediction Company that beat the stock market. So that's where we made our money in the end, and that, you know, honed our prediction skills even more. So, regarding careers, I'm probably an example of somebody - I view those things I did as somewhat self-indulgent; with hindsight, they were just fun. They were interesting challenges. It was an adventure to beat the casino - a little scary at times, actually. And it was an adventure to beat the market. But now I want to do something useful for the world.

Luca

Yeah. Well, what do you think that these adventures and things have taught you or what did they go on to lead you to do?

Doyne

Well, they taught me so many things. At a technical level, they taught me a lot about just science and engineering. The roulette project was a front-to-back science research project: we had to do the experiments, work out the theory, build the equipment, and test everything to make sure it all worked.

Luca

And how many people did you say were involved in that?

Doyne

Well, there were a total of about 30 people at different times, most playing fairly small roles. There were probably about four or five of us involved in actually building the computers and doing the science stuff, but there was another guy who helped build the little signal devices. You know, most of the people who actually placed the bets were women, so they had their signalling device built into their bra - and somebody had to do all the sewing to build that. We had all the trips to the casino, so people trained to be bettors or data takers. It was, you know, a saga over the years, and a lot of fun.

Fin

I can expect there might be some grad students or even undergrads listening to this. Do you think in general, these people are underrating taking big random risks?

Doyne

Are they underrating, meaning in what sense of underrating?

Fin

Meaning, if they are considering doing it, do you think they should tilt towards doing it more than they currently are?

Doyne

Well, I think it just depends on what you like, you know. Certainly with hindsight, I'm very glad I did it, even if it didn't really succeed, strictly speaking. I mean, we succeeded in beating the house, we didn't succeed in stockpiling large sums of cash. And that was due to a combination of persistent hardware errors and fear of casino security; every time we raised the stakes, things got very tense. And you know, there were stories of blackjack counters getting beat up and stuff like that. And so we were kind of cautious. And I was worried, I think we probably somewhat conspicuously looked like graduate students, although we did try to dress the part. But, you know, I learned so much, it was a lot of fun. It was just a great adventure. I'm very glad I did it. I mean, it did mean I dropped out of graduate school for about a year and a half to make it happen. And, you know, wasn't exactly directly on any career path.

Studying chaos

Fin

So after the roulette saga comes this Dynamical Systems Collective? What was this thing called chaos that you decided to study?

Doyne

Well, chaos is a mathematical phenomenon, where nonlinear equations have the property that nearby trajectories separate exponentially in time. So two points that are initially very close together later on become quite far apart. And this happens through a process that in an abstract space is a lot like mixing, it's like mixing bread dough. You know, you put a little spot of dye into bread dough, and then you start kneading it, and eventually, the whole bread dough turns pink, because the red spot of dye that you put in gets spread over the whole thing. So chaos is like that, except it's happening in a more abstract space. If you think, say, about the equations that describe fluid flows, like flowing water, or the atmosphere, or the weather, those are chaotic, which means that over the passage of time, you can't make a long range prediction of the weather that's accurate. I can't say a year from now whether it's going to rain or not on a specific day, and no one will ever be able to do that.
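The exponential separation Doyne describes is easy to see numerically. Here's a minimal sketch using the logistic map - a textbook chaotic system, chosen purely as the simplest illustration, not one of the weather equations he mentions:

```python
# Two trajectories of the logistic map x -> 4x(1 - x), a standard
# chaotic system, started a hair's breadth (1e-10) apart.
def orbit(x, steps):
    out = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

a = orbit(0.2, 50)
b = orbit(0.2 + 1e-10, 50)

# The gap between the trajectories grows roughly exponentially (this
# map's Lyapunov exponent is ln 2) until it saturates at the size of
# the whole interval - after a few dozen steps the two trajectories
# are effectively unrelated, just like long-range weather forecasts.
gaps = [abs(x - y) for x, y in zip(a, b)]
```

Plotting `gaps` on a log scale shows a near-straight line until saturation: the "mixing bread dough" picture made quantitative.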

Fin

You used this word ‘nonlinear’ - could you say where that comes from and what it's doing?

Doyne

Yeah so nonlinear means - as the word suggests - not linear. And linear means that the whole is equal to the sum of its parts. Nonlinear means the whole is not equal to the sum of its parts. And it means that things are not additive. There are also multiplicative interactions that can amplify, that can contract, that can damp. And that means you can have feedback. And feedback’s a very important thing in all kinds of things we do. You know, if I take this microphone, and put it next to a speaker, then I get feedback between the two and I hear the scream. But feedback occurs in all kinds of systems, you know, a radio wouldn't work without feedback. Polarisation in society is a feedback mechanism.

Luca

Well, I'm interested in picking up on that last bit - chaos seems to come up in lots of real-world phenomena. Are there any that you want to particularly highlight as areas where you currently see a lot of research, but also - when we're thinking about improving the world or trying to do good - where chaos and understanding these phenomena is really important?

Doyne

Yeah, I already mentioned weather; that's a place where it's been very important to understand that. It's been very important in understanding heart failure and different forms of metabolic dynamics, because you have a lot of feedback systems, regulatory systems, in your body - everything from insulin to, well, the heart's a good example. Why is your heart beating? Your heart's an example of what in dynamical systems theory would be called a limit cycle: something that spontaneously makes a roughly periodic motion. And it's doing that because of a neurophysiological feedback loop that keeps the heart beating. But in fact, your heart is not strictly periodic. It's varying all the time, and variation is normal and healthy. But there are other modes of variation that are not, and those involve things like heart arrhythmias. These are examples of where dynamical systems theory and chaos have been very useful to understand phenomena. So that's a very tangible one. One that I'm engaged in working on now is in economics, where I hypothesise that many phenomena, like business cycles, involve chaos.

Luca

Can you speak a bit more about that?

Doyne

Sure. You know, the economy is changing all the time, and there are some forms of change that are secular, meaning that, you know, GDP tends to go up through time, because we make technological progress and become better at making things. But there are other, irregular movements of the economy - the Great Depression, the great financial crisis, and more mundane variations that are called business cycles - where the economy for a period of, say, five or ten years may do really well, and then it will decline for a while, we have a recession, and so we have booms and busts. These are ubiquitous phenomena in economics. And I think the only plausible explanation in many cases - not all - is that they come from internal dynamics. Actually, let me back up and say that standard economic theory says it all comes from external stimulus: something happens out there in the rest of the world that causes the business cycle to take place. So it's a driver on the economy, not something that's generated internally by the economy. And sometimes that's a good explanation - COVID affected the economy significantly, and it was clearly an external driver. But other phenomena, like the great financial crisis, I would argue, were clearly internal. It came from a combination of loose credit, the use of mortgage-backed securities, and naivete about diversifying risks. So it was something the economy generated by itself, and a proper theory of economics would explain that. Standard theories don't look at the crisis that way, and I don't think they are very workable.

Complexity economics

Luca

So just to check that I understand this distinction right - inside versus outside factors. Is it the case that inside factors are factors we hope models would have theories for explaining themselves? So when it comes to economic models, we hope they would have a way to explain financial aspects, leverage cycles and the like; whereas with outside factors, we don't have an expectation that an economic model will be able to predict when pandemics come - we see that as being outside of it.

Doyne

Yeah. An economic model certainly cannot predict when the next pandemic will start. But an economic model could predict how the economy will respond to a pandemic. And in fact, one of the predictions I'm most proud of is that we quite successfully predicted the impact of the pandemic on the economy of Great Britain. When the pandemic broke out, we realised that this was something we were set up to do, so I rounded up my favourite students and ex-students, and we did a kind of crash programme to build a model for the pandemic. We figured out which occupations were likely not to be going to work and wouldn't be able to work productively because they couldn't work remotely. In other words, people working in non-essential industries who couldn't work from home would be removed from the workforce, and that would cause a significant supply shock to the economy. In addition, there would be a demand shock due to people not going to restaurants, not flying on aeroplanes, not taking buses, and so on. Because we had access to some rich datasets compiled by the US Census Bureau, we knew things like how closely people in different occupations work together. So we could assume that if they had to work close together, it wouldn't be safe - they'd be in danger of infecting each other - so they wouldn't be able to go to work.

Luca

I'm guessing the implication here is that you're able to make much better predictions this way by explicitly looking at it, I guess, through the pandemic lens, than mainstream economic models, where they will just simply shift supply and demand curves.

Doyne

Well, I mean, mainstream economic models did some version of this, but they typically took very simplified epidemiological models, which don't produce very realistic predictions, coupled them to an economic model, and made assumptions about people's choices under utility maximisation. Whereas we put a lot of detail into our model: we had 450 occupations, and we knew how each of those occupations was linked to the industry it sat in. So we had a model with 56 industries, and we could then predict how much each industry would be shut down as a result of people not being able to come to work. And then we built a dynamic non-equilibrium model where we could watch the shocks propagating through the economy - the supply shocks propagating downstream and the demand shocks propagating upstream, colliding with each other and amplifying the initial shocks of people not going to work.
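The downstream propagation mechanism - each industry's output capped by its most constrained supplier - can be caricatured in a few lines. This is a toy sketch of the general idea, not the paper's actual 56-industry model; the three-industry chain and all the numbers are invented:

```python
# Toy downstream propagation of supply shocks through an input-output
# network. adjacency[i][j] = 1 if industry j buys inputs from industry i.
def propagate_supply_shock(capacity, adjacency, rounds=10):
    """capacity[i] in [0, 1]: fraction of industry i still able to
    produce after the initial labour shock. Each round, an industry's
    output is capped by its own capacity and by its most constrained
    supplier (a crude Leontief-style production function)."""
    output = list(capacity)
    n = len(capacity)
    for _ in range(rounds):
        new = []
        for j in range(n):
            suppliers = [output[i] for i in range(n) if adjacency[i][j]]
            bottleneck = min(suppliers) if suppliers else 1.0
            new.append(min(capacity[j], bottleneck))
        output = new
    return output

# Three industries in a chain: mine -> chip maker -> laptop maker.
# A 40% labour shock to the mine alone ends up capping laptop output too.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
out = propagate_supply_shock([0.6, 1.0, 1.0], adj)
```

Iterating the update makes the shock travel down the chain one link per round - the "dominoes arranged upstream to downstream" picture discussed later in the episode.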

Agent based modelling

Fin

Yeah, I guess I came across this analogy for the alternative to something like agent-based modelling - I think the phrase is a kind of 'rocking horse economy'. We have these exogenous shocks, which tip the economy off balance, and because there's some single fixed point it will eventually equilibrate to, it just kind of rocks back to equilibrium until the next shock comes along, and then it's rocking again.

Doyne

That's right.

Fin

But it sounds like the claim is 'no, this is in fact just an inaccurate picture of how economies work.'

Doyne

Yeah - I mean, sometimes the economy works a bit that way. And actually, our model of COVID had some elements of that. That is, if we shocked it and waited a long time, it would return to its steady state. So in this case it was a bit like the rocking horse model. The difference being we really did it in a lot of detail; it was a much, much more complicated beast than a rocking horse, and we were able to make extremely accurate predictions. We actually had an inventory of goods for each industry, so we could look at how the inventories were being depleted, and if an industry ran out of goods, then its production would shut down. So we just did things in a lot more detail. And we predicted a 21.5% drop in GDP in the second quarter of 2020, and what was observed was a 22.1% drop. So we pretty much nailed the number. And, you know, of course you can get lucky - we did get lucky in a few respects, because we've now done an elaborate post mortem of the model - but mostly we did things right. And we also had pretty good industry-by-industry predictions.

Systemic risk

Fin

It sounds to me like - and I'm speaking as someone who's naive to how even standard macro models work - when you do this thing where you're taking complexity and chaos seriously, one thing you might get is that you begin to appreciate something you might call 'systemic risk'. Standardly, you'll have some distribution over outcomes, and the tails are going to be fairly thin - so in 2007, the likelihood of something as catastrophic as the '08 crisis is, you know, almost totally negligible. But the tails get a bit thicker when you realise that there are these kinds of complex interactions. Someone described an analogy to me recently: imagine a plane with dominoes on it, where all the dominoes are standing independently of one another, and you're kind of shaking it. The chance of one domino falling over is independent of the others, so the chance of them all falling at the same time - or a large number of them - is just the product of the individual chances. But maybe they're close together, so that one domino knocks the other one over. And maybe that's just a more accurate picture of how economies work, and therefore maybe we should be taking these kinds of tail risks a bit more seriously. Does that make some sense?

Doyne

No, that makes total sense. And I've been involved in several other models that really dug into that quite a bit. In our COVID model, the amplifying effect was the interaction of the shocks with each other, and the fact that if one industry shuts down and it's supplying goods for another industry, then that industry can get shut down too. So that's exactly your domino analogy, right? You've got 56 industries, like 56 dominoes, except they're arranged in a way so there's a notion of upstream and downstream. You know, for my laptop, the chip maker is upstream, because you can't make the laptop without the chip; the laptop maker has to buy the chip from the chip maker, and the chip maker has to buy the raw materials for the chip from, you know, a molybdenum mine or whatever goes into chips, and so on. So there's some amplification there and some form of systemic risk. It's even more pronounced in models we built of the financial system. For example, in a model built with Stefan Thurner, Sebastian Poledna, and John Geanakoplos, we could set up an economy where we assumed the investors weren't allowed to use leverage. And as you were mentioning a moment ago, that financial system had thin tails: the market never made large crashes - at most, prices moved by 5% at a time. But if we let the investors start borrowing money and using leverage to buy stocks on borrowed money, then as we turned the leverage knob up, we saw the economy's behaviour become more and more variable. We saw crashes start to happen, we saw heavy tails in the outcomes, so that you could get crashes as big as 40% in a single day. And so that was a good example of systemic risk, because it's the interactions of investors with each other, and the fact that once you use leverage, you can be forced to sell into a falling market in order to maintain your leverage limit.
So ironically, it's precisely the measures that are taken to mitigate risk that cause the crash to happen. The bank says, 'well, I'll lend the money to you, but your leverage can never be more than 15:1' - in other words, the ratio of the value of the stocks you own to the money you have can never be more than 15. The problem is that if the market crashes, or just starts to go down, your leverage goes up, which means you have to sell stocks in order to bring your leverage back down to where it was. So when the price is falling, you're selling. And if there are a lot of leveraged investors, they all do this together - that's the systemic part. They amplify the downward movement, prices go down even more because everybody's selling, and then people have to sell even more. And so you get this avalanche, you get a feedback effect.
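The forced-selling arithmetic behind this spiral can be made concrete. Here is a sketch with invented figures, ignoring interest, transaction costs, and everything beyond the leverage cap itself:

```python
def forced_sale(assets, equity, max_leverage, price_drop):
    """How much stock a leveraged investor must sell after a price
    drop to get back under a leverage cap (leverage = assets / equity).
    All figures illustrative."""
    loss = assets * price_drop
    assets -= loss          # the portfolio value falls...
    equity -= loss          # ...and the whole loss comes out of equity
    if assets / equity <= max_leverage:
        return 0.0
    # Selling stock (and retiring debt with the proceeds) reduces
    # assets while leaving equity unchanged, so to restore the cap:
    target_assets = max_leverage * equity
    return assets - target_assets

# An investor at the 15:1 cap, holding $15m of stock on $1m of equity.
# A mere 2% market dip forces a sale of $4.2m of stock - over a
# quarter of the portfolio dumped into an already-falling market.
sale = forced_sale(assets=15e6, equity=1e6, max_leverage=15,
                   price_drop=0.02)
```

The nonlinearity is the point: the closer investors sit to the cap, the more a small dip forces them to sell, and their selling is exactly what deepens the dip for everyone else.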

Fin

And is the point that you noticed this when you tried to simulate this and this is harder to notice when you're just kind of, you know, solving for some equilibrium.

Doyne

People - actually my colleague John Geanakoplos - had built an equilibrium model where he could show, in a three-step process, that a crash might occur when there were leveraged investors. So he could show the mechanism in a very stylized three-step model. Whereas we had a model where the market just, you know, went along. And what we showed is that it's actually more than that. It's not that once in a while you get a crash; it's that there are little crashes and medium crashes and big crashes, and big crashes are rarer than medium crashes, and medium crashes are rarer than the little crashes that are happening quite frequently, all due to these mechanisms. So we could give a much fuller description of what was happening. In another example, we looked at what's called 'Value at Risk', which is a standard way to manage risk in financial markets that came into popularity in the 90s, and was recommended by the Bank for International Settlements as part of what's called Basel II. So people were supposed to follow this. And the basic idea is that if markets become more volatile, they're likely to be more volatile in the future, which means you should reduce your leverage.

Fin

And if I'm like an institutional investor, I'm going to be following this Basel II?

Doyne

Yeah, almost everybody. When we had Prediction Company, we followed Basel II; it was a standard thing to do. But we built a model that showed that if everybody follows Basel II, and everybody's using leverage above some level, then you get an oscillation where the whole market will climb for roughly 10 years, because prices go up but volatility goes down. And this is exactly what happened in the run-up to the great financial crisis: for about 10 years the market went up and volatility went down, because everybody's risk management practice was damping these little fluctuations. Through time, as volatility goes lower and lower, leverage gets higher and higher - and leverage goes up nonlinearly under Basel II. Eventually, leverage got high enough that it hit a kind of critical point, where any little disturbance can cause the market to crash, because everybody starts, again, selling into a falling market - and you have a crash, and then the cycle repeats itself. It repeated itself chaotically: each crash was not exactly the same as the previous one, and this could happen even without any noisy inputs, so it's chaos happening. So we could see, in a very simple model, dynamics that look a lot like what happened in the run-up to, and then during, the great financial crisis. Our point was that - back to what you said - it's systemic risk, and systemic risk due to boundedly rational, not fully rational, investors - investors who don't understand everything, because none of us understand everything - making choices that seem very plausible to reduce their risk. It turns out that if only one investor were doing that, it would be very sensible. But when everybody starts doing it, the whole market starts acting in sync, and systemic risk takes over.
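The Value-at-Risk mechanism - permitted leverage rising as measured volatility falls - can be sketched in stylised form. The constants here are invented and the formula is a simplification for illustration, not the actual Basel II rule:

```python
# Under a stylised Value-at-Risk constraint, permitted leverage scales
# inversely with estimated volatility: halve the perceived risk and
# the rule licenses twice the leverage.
def var_leverage_cap(volatility, risk_budget=0.05, z=2.33):
    """Max leverage such that a z-sigma daily loss stays within the
    risk budget (the fraction of equity allowed to be at risk).
    risk_budget and z are invented illustrative parameters."""
    return risk_budget / (z * volatility)

calm = var_leverage_cap(volatility=0.001)   # a placid market: ~21x
stormy = var_leverage_cap(volatility=0.02)  # a volatile one:  ~1x
```

This is the destabilising loop in miniature: a calm decade lowers measured volatility, the rule licenses ever more leverage, and the system drifts toward the critical point where one disturbance triggers synchronised selling.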

Luca

Yeah, so zooming out a bit: one way in which I hear a lot of these complexity economics arguments is that, in large part, we just have these amazing advances in computing and data that let us do a whole bunch more with models - so why not do this? With the COVID example you gave, a lot of this seems to be linked to having data that lets you model heterogeneous agents and networks; and with what you were saying about 2008, a lot of it seems to be about modelling more complex behaviour, bounded rationality and the like. A lot of these things are just about exploring and utilising much more of this complexity, which we now can actually model. Is that about right?

Doyne

Yeah, what you said is pretty much true. But I think there's also a different conceptual framework about how the economy works. Standard economics, mainstream economics, derives equations and then solves them. And the equations are derived from assumptions about selfish utility maximisation. Each agent - this is called 'methodological individualism' - makes choices that are computed from a theory and designed to maximise utility. And we say, well, we don't think that's the right way to think about this. Really, what happens is people have behaviours, and you can observe their behaviours. They use heuristics, simple rules of thumb, to make decisions. They reason at some level, and there's some learning going on, but the learning is relatively simple. So people muddle their way along and make decisions to get by. They may have goals, but they're not utility maximizers - they may try to achieve their goals, but they generally don't achieve them perfectly. And so we model the world with those kinds of rules in computer simulations. And this allows us to make models that are much more realistic in terms of institutional structure, and so on.

Why isn't complexity economics more mainstream?

Luca

So I guess one question I have here is: if you're able to predict what kind of recession or drop in GDP you might get from a pandemic, or if you're able to identify these systemic risks in financial crises, why aren't these models more mainstream? I think I'm correct in characterising complexity economics as still being heterodox, on the fringe. If there is such explanatory power here - especially power that could be very lucrative, in the financial sense of being able to harness these things - why aren't these models more widespread?

Doyne

Yeah, that's a good question. The field is young. You know, the mainstream is not doing this, so who does? People at a few odd economics departments in Europe; I sit in a mathematics department; I have a colleague at George Mason University - Rob Axtell - who sits in a computational social science department. So there are very few economists who have been using this approach, and it hasn't been properly tried. The second reason is that it's not easy. If you want to make a good model that is really accurate, you've got to gather a lot of data, and you have to write some non-trivial software. There's a lot of trial and error involved. So it's a substantial effort, and not something one person is likely to do easily on their own. There are lots of qualitative models, like the one I mentioned that we made for understanding the effects of Basel II on the great financial crisis: it qualitatively does something that looks a lot like what happened, but it's not making sharp quantitative predictions. Our COVID model, I think, was the first time an agent-based model made a real-time prediction - we made the prediction before the outcomes were known, so nobody can accuse us of, you know, being biased by seeing the answer first. So we made a real-time prediction that was pretty accurate. That's a first.

Luca

I guess I'm especially curious here, maybe outside of academia, if you see there being any uses? You mentioned The Prediction Company and the stock market before. Do you see any low-hanging fruit there? Or do you have any timelines yourself on when you expect, let's say, a hedge fund or some other institutional investor being like, 'look, these models have got a huge amount of power - if academia isn't going to do it, we'll pioneer this.'

Doyne

So there are definitely some hedge funds and investment banks that are making agent-based models now. I hear rumours, and I know quite tangibly of a few things - I'm not sure what I'm allowed to say. So it's definitely starting to happen. And it's already happened in lots of other fields: agent-based modelling is mainstream in epidemiology. It's mainstream in traffic management. It's mainstream in battlefield strategy, in inventory management. So there are a lot of areas where agent-based modelling is being used very successfully now. And I think that's about to start happening in economics. I'm planning, starting in the autumn, to start a company that will build agent-based models for economic applications. I'm starting a company really because there's a limit to how much you can do at any university. If you want to build large infrastructure, databases, a big software base, that's not the right thing for students and postdocs to be working on; it's more engineering and applied work. And I'm really hoping this time it's different - unlike with The Prediction Company. I found it very frustrating that, okay, we made money for ourselves, we made a lot of money for some Swiss bankers, and we had to keep everything secret, so it didn't do much for the world. And, you know, maybe we were making markets more efficient, but I'm not totally convinced that was valuable. Whereas this time, I really want to do it in a more open way, and in a way that can benefit policymakers and provide better guidance so we can steer the economy more intelligently.

Fin

And is the idea that you can consult a policymaker and run simulations to see the effects of policy ideas?

Doyne

Yeah, precisely. I mean, we may also have more commercial clients that have more money, and so they may be a better source for putting the fuel in the engine. But certainly the main goal is to put these kind of models in the hands of policymakers.

Superforecasting

Luca

One kind of odd, doesn't-really-fit-nicely-into-the-structure question I have is: do you have any takes on superforecasting?

Doyne

So I've spent a lot of time making predictions, and I make other kinds of predictions too, using just data - what are called 'time series models'. For example, how do we predict the sunspot cycle? You might think, 'well, we know a lot about the physics of the sun, so we make measurements and run a sophisticated model of what's going on inside the sun.' No. The way we predict how strong the sunspot cycle is going to be next year is we use a data set that was compiled by Belgian monks, starting in the 17th century, who would go out every day, use a telescope to look at the sun, count how many sunspots they saw, and write it down in a book. And then they taught it to their junior people, and they did it through the centuries. And so we have a record going back a long way of how many sunspot cycles there have been. And we just use a very simple model that uses the past history of sunspot cycles to predict the future. So that's an example of just using data to make predictions. So I have a lot of experience making forecasts from data. To be honest, I'm pretty suspicious about people's ability to make forecasts. And one of my postdocs, Rupert Way, worked on a paper with Laura, Elena, Jing and some others from Cambridge, where they compared the success of what's called 'expert elicitation' to models just based on history. And the models based on history did better in general. And we've actually been using this quite a bit to forecast technological progress for climate change. Because if you want to understand the right policy for climate change, you need to know which technologies we should support to create energy without emitting carbon dioxide or other greenhouse gases. And you can do it with nuclear power, for example, or you could do it with solar cells.
And 10 or 15 years ago, many people were strongly supporting nuclear power, because they thought, 'well, this is the way to go, solar cells are too expensive'. But if you look at history, you'd see solar prices were rocketing down and nuclear energy prices were going up. And so a fairly simple forecast based on data would tell you that solar was the way to go. But a lot of the experts were saying 'no, no, nuclear is the way to go.'
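Doyne's sunspot example - forecasting purely from past observations, with no physics - can be sketched as a simple autoregressive time series model. This is an illustrative sketch, not his actual method: the data here are a synthetic stand-in for the historical sunspot record, and `fit_ar` and `forecast` are hypothetical helper names.

```python
import numpy as np

def fit_ar(series, p=2):
    """Fit an AR(p) model y_t = c + a1*y_{t-1} + ... + ap*y_{t-p} by least squares."""
    y = np.asarray(series, dtype=float)
    # Column k holds the lag-(k+1) values aligned with the targets y[p:].
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef  # [c, a1, ..., ap]

def forecast(series, coef, steps=12):
    """Iterate the fitted AR model forward to predict future values."""
    p = len(coef) - 1
    hist = list(series)
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[k + 1] * hist[-k - 1] for k in range(p))
        hist.append(nxt)
        out.append(nxt)
    return out

# Illustrative stand-in for the sunspot record: a noisy ~11-step cycle.
t = np.arange(300)
counts = 80 + 60 * np.sin(2 * np.pi * t / 11) + np.random.default_rng(0).normal(0, 5, 300)
coef = fit_ar(counts, p=2)
print(forecast(counts, coef, steps=3))
```

The point of the example is that nothing about the sun's interior appears anywhere in the model - only the past history of the series itself.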

Fin

I guess before we start talking about forecasting technological progress, I want to stick up for the superforecasting story. I think those people would agree with what you're saying there: one way you can be an exceptional forecaster is to ignore the pundits and just use the crude time series data, and that in fact often turns out to be best, as you point out. So I'm not actually sure there's a huge difference of opinion.

Doyne

Well, yeah, the question is, you know, what are the super forecasters doing? And I don't deny the possibility that they might be able to do very well. But we're lacking any substantive studies on how well people do, actually.

Fin

Is that the case? I mean, I can write down my predictions, and I can score them and learn about my accuracy and my calibration over time. It's a long process, but it is quantitative.

Doyne

I mean, there may be studies I'm unaware of. The best one I know is the one I mentioned that involved Laura Anadon and Rupert Way, where they gathered what data they could find on expert elicitation and then compared it to the kind of methods we use. And I've heard anecdotal evidence about superforecasters doing a good job. But to do this kind of thing right, you need a data set that's not biased by selective sampling, where you just pick the good stuff and leave the bad stuff out. It's very easy to come up with a biased study, and I haven't seen any unbiased studies to show that. It may well be that some people are really good at forecasting, or not.

Fin

But in some sense, I'm saying maybe the people who are exceptionally good are good precisely because they ignore their biases and avoid kowtowing to the experts.

Luca

I guess it's a question of, when we say expert elicitation, who are the experts? Is it people trained in forecasting, or is it people trained in the domain the forecast question is being asked about?

Doyne

Yeah. And people have in the past tended to look to people with domain knowledge. And they're often biased: if you're an expert in nuclear power, you're gonna tend to be biased to think things are gonna go well, and nuclear power is going to do well. So it's been a complicated story.

Predicting technological change

Luca

Yeah, well, let's maybe shift the conversation to, as we touched on there, this question of technology. One way I want to frame this is: we previously were talking about business cycles, which are these short-term fluctuations in the economy. But when economists are asked about the real drivers of long-term economic growth - and in that sense prosperity too - technology really seems to play a central role. And in some ways, technological innovation looks like a really random, chaotic process; people come up with ideas out of the blue, and it's really hard to get a sense of what that might lead to. But one of the interesting insights here is that maybe this process is actually predictable in some ways. So maybe a question to kick things off: how predictable do you think technological progress is?

Doyne

Well, 'it depends' is maybe the first answer, but surprisingly predictable. While we can't predict the precise innovations that will cause improvements, trajectories of improvement can be very predictable. And the most famous example is Moore's Law, formulated by Gordon Moore in 1965, when he said that the density of integrated circuits has been doubling every - initially he said 18 months, and then made a little correction and said, 'well, really, I think it's about every two years.' And that prediction has been remarkably accurate, fifty years into the future from then. It's now finally stopping because it's hit a fundamental wall, but that was a fifty-year run of doing really, really well. And it turned out that increasing the density of integrated circuits meant not just making chips smaller, but making them more power-efficient and faster. And it means that computers are roughly a billion times faster than they were in 1965. So it's quite a remarkable prediction. And one of the things we've been involved in showing is that this phenomenon doesn't just happen for computers; it happens for lots of other technologies. Not every technology - most technologies don't show it. But some technologies do. Integrated circuits have dropped in price as Moore said, doubling in performance roughly once every two years, which translates into about a 40% improvement per year. Solar photovoltaics have improved at roughly 10% per year, and they have been doing that since their debut in 1958 on the Vanguard satellite - the first commercial application of a photovoltaic cell. So it's been a remarkably steady improvement track. It has wiggles and bumps, but roughly speaking it has stayed on that course. Wind power has also gone down at a fairly steady rate. Lithium batteries have shown a fairly steady improvement rate. These are all key technologies for climate change.
But other technologies, like fossil fuels, are different: fossil fuel prices now are within a factor of two of what they were 140 years ago. Prices of coal, oil and gas bump up and down - they vary by factors of five or even ten - but there is no systematic trend.

Fin

The efficiency of combustion-engine cars and vehicles as well - I looked at that recently. It's roughly constant.

Doyne

It's roughly constant. It went down at the beginning, and then it kind of plateaued, and maybe it could go down a little more. But it's nothing like solar cells, which have improved in cost by a factor of 5000 since their inception.

Fin

So this has something to do with something called ‘Wright’s Law’.

Doyne

Yeah. So Wright's Law is related to Moore's Law, but it was actually stated much earlier - Moore was not aware of it when he stated Moore's Law. Wright's Law dates back to 1936. And Wright was a very interesting guy - Theodore Wright. His two brothers were very famous: one was Sewall Wright, one of the most famous evolutionary biologists, and the other was Quincy Wright, who's considered one of the founders of political science. But Theodore Wright was a flyer in World War One - a flying ace - who then came back and went into the aviation business. So he was a businessman. But in 1936, he wrote a scientific paper based on the observation that every time an aircraft manufacturing plant doubled its production, the cost of manufacturing the aeroplane tended to drop by about 20%. And so people latched on to this law and used it not just for aeroplanes but for other technologies, and not just at the level of factories, but for global production of technologies.
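Wright's observation can be written as a one-line formula: unit cost falls by a fixed fraction with every doubling of cumulative production, which is equivalent to a power law in cumulative output. A minimal sketch - the 20% drop per doubling is the figure from the discussion; the function name is illustrative:

```python
import math

def wright_cost(c0, cum_production, progress_ratio=0.80):
    """Wright's Law: unit cost falls by a fixed fraction each time cumulative
    production doubles. progress_ratio=0.80 means a 20% drop per doubling,
    roughly what Wright observed for aircraft manufacturing."""
    b = -math.log2(progress_ratio)          # learning exponent
    return c0 * cum_production ** (-b)

# Cost after successive doublings of cumulative output (first unit costs 100):
for q in [1, 2, 4, 8]:
    print(q, round(wright_cost(100, q), 1))
# each doubling multiplies cost by 0.8: 100.0, 80.0, 64.0, 51.2
```

Note that progress is driven by cumulative production, not by time - which is what distinguishes Wright's Law from Moore's Law, where the doubling is pegged to calendar years.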

Fin

So to be clear, this is the idea that costs drop as a power law function of total cumulative production to date. And apparently this applies to solar PV, to semiconductor chips, and to aeroplanes - very different technologies. And also, for any particular technology, it's not as if the innovations that are making it cheaper and cheaper are the same every year. So what's the general story, the explanation you can give for why this pattern apparently seems to hold?

Doyne

Well, there are several factors, and it's hard to tease them apart. One of them is what's called 'learning by doing' - the more you do something, the more you learn about it. And it turns out there's something associated with this in psychology called the power law of practice: if you take a rote task, like summing numbers by hand, the more you sum numbers, the quicker you get at the task, and you reach a point of diminishing returns, in the sense that you'll improve a lot at the beginning and you'll improve more and more slowly, but you continue to slowly improve. Roughly every time you double the number of sums you've done in your life, you get some fraction better. So we see this in lots of things due to learning. It can also happen due to economies of scale. And learning can happen in different ways; it can happen at the level of society. Like, we don't just get better at manufacturing solar energy modules, we get better at installing them cheaply. So learning is certainly a big factor. We have a model where we assume that engineers can improve things by just throwing darts at a dartboard, and they're smart enough to see when they've made a good throw, and realise that throw's better than some past throw.

Fin

I guess you can zoom in on the winning throw and then iterate again or something?

Doyne

Yeah, yeah. And it gets harder and harder to improve through time, because your best throw gets pretty good, so you've got to beat an increasingly better throw. It turns out that gives Wright's Law. And with our model we can also see what happens if you couple different components - a technology has a lot of moving parts that depend on each other, you know. Like in a car, the engine depends on the carburetor, the carburetor depends on an air filter, and so on. The more interacting parts you have, the more the rate of improvement drops, because you have to have coordinated throws for the different components to make things work. So that gives you Wright's Law.
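The dartboard picture can be made concrete with a toy simulation - not the actual model from Doyne's work, just the record-keeping idea he describes: propose random designs and keep only record-breaking throws. For uniform draws, the best of N throws improves roughly like 1/N, so each doubling of effort gives a similar proportional gain - Wright's Law in miniature:

```python
import random

def dartboard_search(n_throws, seed=0):
    """Toy 'dartboard' model of innovation: each throw proposes a random
    design cost; engineers keep it only if it beats the current best.
    Returns the record-best cost after each throw."""
    rng = random.Random(seed)
    best = float("inf")
    record = []
    for _ in range(n_throws):
        candidate = rng.random()   # proposed design cost, uniform on (0, 1)
        best = min(best, candidate)
        record.append(best)
    return record

rec = dartboard_search(10_000)
# Improvement slows as the record gets harder to beat: the expected best
# after N throws is about 1/(N+1), a power law in cumulative effort.
print(rec[99], rec[999], rec[9999])
```

Coupling components would correspond to requiring several simultaneous good throws before any of them can be adopted, which slows the improvement rate - that extension is not shown here.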

Luca

Does this then help explain why Wright's Law seems to really apply for some technologies, like solar, but not for others, like fossil fuels? What's the story there - that way more learning happens in one than the other?

Doyne

Yeah, it's at least part of what's going on. And there's some weak empirical evidence now - there have been a few papers testing our idea. So it partially explains what's going on. I don't want to claim that we have a full explanation. Like, exactly why is it that nuclear reactors have not only not gotten cheaper, they've gotten more expensive. Now, that may be in part because of increased concerns about safety through time, but they're certainly not going down in price. I mean, if they were improving by a factor of 5000, a factor of two or three for safety could be taken into account. But they just haven't done that. So we don't have really good explanations for why some technologies improve so much faster than others. Modularity is part of the story, but not the whole story.

Fin

And modularity meaning it's easy to make kind of small iterations?

Doyne

Yeah. Like things that you can print, for example, tend to improve rapidly. That was clearly part of the story behind integrated circuits; it’s essentially a printing process. And all you had to do was learn how to print smaller, and you got huge advantages across the board.

Fin

I take it it’s harder to get better at building dams or something, because it's so lumpy. You gathered some data on different technologies, and you found that some fit this Wright’s Law story, and others don't - I guess, notably, nuclear power and also fossil fuels. And then you can kind of retro-predict technological trends, and you can get at least a picture of how feasible it is to forecast these trends. And my question is, what did you find? How accurate can we be as we look forwards?

Doyne

Well, the accuracy varies, depending on the technology; some technologies, like transistors, come down at a remarkably smooth exponential rate. So you get a pretty darn accurate forecast. And by the way, let me just tell you an anecdote about that. I remember being in a meeting, we were talking about technology, and there was somebody there who was a chip designer, and his story at the meeting was, ‘Everybody thinks Moore's law just guaranteed that integrated circuits are just going to get cheaper and faster all the time. But I can tell you, we repeatedly just felt like we were up against the wall, we couldn't see how to make it happen. And then somebody would have an idea and there would eventually be a breakthrough. But for us, Moore's Law was like, people began to just expect we were going to pull the rabbit out of the hat again, and again and again.’ Which they did.

Fin

Interesting. I guess, by definition, you can't anticipate the thing that gets you out of the hole.

Doyne

Yeah. And when you go through and look at the technological breakthroughs, the specific innovations were not anticipated ahead of time. But remarkably, the level of performance that they would give was anticipated. And it's this trend of technological improvement that's driving the whole increase in complexity in civilization, as our technologies have gotten more and more sophisticated, as measured by fairly concrete performance metrics. But, back to your question: so forecasts are, you know, it depends. The forecast for solar energy is not as accurate as a forecast for transistors, because it follows a wigglier path as it goes along. You know, there was a period from 2000 to 2010 where there were shortages of materials and the price plateaued for quite a while. And then in 2010, the Chinese got involved and prices plummeted. We can't predict any of that. But long range, you draw a line through 40 or 50 years of data and it's pretty steady. And in fact, in 2010, by doing that, I made a forecast, published in Nature, that solar energy was going to be cheaper than coal-fired electricity by 2020. And, you know, magazines like the Economist said that it was crazy. And we were right. So it has been shown to be useful in making predictions. They're not perfectly accurate. And in fact, I think our biggest contribution has been showing how to forecast how accurate the predictions will be. Because a prediction is not very useful unless you know how accurate it is. If you make a prediction but you know it's wildly inaccurate, then you probably shouldn't pay much attention to it. But we can quantify a priori how accurate the predictions from this method, or from Wright's Law or Moore's Law, are, and then use that to make probabilistic predictions. And therefore we know how accurate the predictions are.
And that's something we've tested on 50 technologies, making 6000 predictions, pretending we were in the past and forecasting the future.
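A stripped-down version of that exercise might look like this: fit a log-linear trend to a price history, extrapolate it, and attach an error band from the in-sample residuals. This is a hedged sketch - `forecast_log_trend` is a hypothetical name, the error model is far cruder than the one in the actual papers, and the price series is synthetic:

```python
import numpy as np

def forecast_log_trend(prices, horizon):
    """Fit log(price) = a + b*t by least squares and extrapolate.
    Returns point forecasts plus a crude one-sigma band based on the
    in-sample residuals (a simplified stand-in for a real error model)."""
    y = np.log(np.asarray(prices, dtype=float))
    t = np.arange(len(y))
    b, a = np.polyfit(t, y, 1)              # slope first, then intercept
    resid_sd = np.std(y - (a + b * t))
    tf = np.arange(len(y), len(y) + horizon)
    mid = np.exp(a + b * tf)
    lo, hi = mid * np.exp(-resid_sd), mid * np.exp(resid_sd)
    return mid, lo, hi

# Illustrative series falling ~10% per year with noise, like solar PV costs:
rng = np.random.default_rng(1)
prices = 100 * 0.9 ** np.arange(40) * np.exp(rng.normal(0, 0.08, 40))
mid, lo, hi = forecast_log_trend(prices, horizon=10)
print(round(mid[-1], 2))
```

Backtesting in the spirit Doyne describes would repeat this from many past cut-off dates and check how often the realised price fell inside the band.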

Fin

There are some lessons here - just briefly, and maybe this won't make sense - but there's some kind of U-curve in how accurate you can be with forecasts. Like the weather, I can easily predict the price of semiconductors tomorrow, because I know it'll be similar. In the short-ish term - on the order of a couple of years - I just can't anticipate the supply chain shortage or whatever, so it's close to noise. But then in the long term - say five or ten years - you can begin making the kind of predictions you made about solar PV, because you can look at these long lines, which roughly hold when you zoom out far enough, and just extend them forward. And of course, at the very long-term end things get fuzzy again, but there's some kind of valley where on both sides of it you can make roughly confident predictions.

Doyne

Well, it depends on what you're trying to predict. Like maybe just to take the weather example: I can't predict whether or not it will rain, you know, on this date a year from now, but one can predict, and climate models can predict that if the CO2 level goes up, the world will warm. And because you're making a different kind of prediction, you're making a statistical prediction. You're not predicting what's going to happen on a specific day, you're predicting that on average, winters are going to be warmer than they were. And so some kinds of predictions are possible, and others aren't. It's just like saying that we can predict this technology's likely to get better in terms of its performance - or this lineage of technologies, because the technology is never the same, that's how you improve them, you keep changing the technology, but there are lineages - and you can predict this lineage is very likely to lead to improvements and this other lineage is not.

Luca

There is kind of a tension or something I see here, on the topic of climate change and solar power versus other energy sources, which is that there's this worry I have that, you know, Wright’s Law is this really awesome predictive thing, kind of backwards looking. But as we've seen with Moore's law as well, like, we can kind of hit barriers, and this thing can kind of slow down. And there's this question of, you know, as a world should we be taking this one really big bet on solar power, and all the kind of complementary technologies, you know, that kind of come with it - batteries and other things, because they've worked so well in the past? Or is there like a risk here, maybe even a systemic risk, that a lot of the world becomes dependent on what are also just a few kind of rare supply chains and rare minerals and the like? I'm curious, when you're thinking about, I guess, the policy implications of this, how much of this is like a solar power forward question versus do we need, like, a broader portfolio of technologies?

Doyne

Yeah. Yeah. So it's a good question. We've written a paper about this, called 'Wright meets Markowitz'. So Wright is Wright's Law, and Markowitz is Harry Markowitz, the inventor of portfolio theory. In Markowitz-style portfolio theory, you choose assets to invest in, but you assume that your investments don't affect what those assets are going to do, and you also assume that you know the risk and the return on the assets, and their correlations with each other, with perfect precision. And if you can do that, that's the right way to invest. But there are two big differences when investing in technologies. One is that you don't know the outcome - there's a lot of uncertainty. I mean, we can forecast progress, but there's uncertainty in those forecasts. The other is that your investments affect the outcome, and that's a big difference, because if you want to get a technology to go down a learning curve, you've got to invest in it. So if the world consisted of 1000 technologies that were all the same, what should you do? Markowitz's theory - Harry Markowitz is a very smart guy, but his theory used naively - would suggest you should just invest in all 1000 technologies. But because investment is actually required to make progress, that would be the wrong thing to do. What you should do is pick some of them - not one, because there's uncertainty about their characteristics - but a few of them, and invest in those. And how many you should pick depends on the nature of the uncertainty. So you shouldn't put all your eggs in one basket; neither should you put one egg in every basket. You should put a few eggs in the right baskets. It takes a long time to bring a technology to fruition, to commercial use in a practical way - typically 50 years or more.
And so, at this stage when we think about a problem like climate change, we know a lot about the track record of the technologies, so we have a lot of information that we can use in making these investments. But we shouldn't just make a bet on one thing. We don't know enough to do that. And so, you know, in addition to solar, we're betting on wind; we should keep doing research on other technologies - you know, fusion may eventually pay off someday, and it actually has a pretty good performance record. So it's starting from a long distance away, but it may eventually turn out to be a good thing. So we should keep doing some more research on that, keeping some options open. But we do need to make some bets and put our money down on the table and just go for it in enough technologies to really make it happen.
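The 'few eggs in the right baskets' logic can be illustrated with a toy calculation, assuming identical technologies that all follow Wright's Law. This is a cartoon of the 'Wright meets Markowitz' argument, not the paper's actual model, and the function names are made up:

```python
import math

def final_cost(invest, c0=1.0, q0=1.0, progress_ratio=0.8):
    """Unit cost after pushing `invest` units of cumulative production
    into a technology that starts at cost c0 with cumulative output q0
    and follows Wright's Law (20% cost drop per doubling)."""
    b = -math.log2(progress_ratio)
    return c0 * ((q0 + invest) / q0) ** (-b)

def cheapest_after(budget, n_backed):
    """Split a fixed budget evenly across n_backed identical technologies
    and report the resulting unit cost of each (all end up the same)."""
    return final_cost(budget / n_backed)

budget = 1000.0
for n in [1, 3, 1000]:
    print(n, round(cheapest_after(budget, n), 3))
# Spreading one egg into every basket (n=1000) barely moves any technology
# down its learning curve; backing a few (n=3) still gets deep cost
# reductions while hedging the uncertainty that a single bet would carry.
```

In the cartoon, concentration always wins on cost; what pushes the optimum away from n=1, as Doyne says, is uncertainty about each technology's true learning rate, which this sketch leaves out.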

Luca

And there's definitely an overarching story here, when we're talking about bets: it's not necessarily a zero sum game amongst just these technologies. If you look at R&D spending on just clean energy as a whole, it's pretty small compared to a lot of other things.

Nuclear power

Doyne

Yeah. But you know, we do have some bets that I could definitely say are bad bets; we shouldn't be betting on fossil fuels anymore. No research should go into fossil fuels any longer. No research should go into fission nuclear power anymore.

Why?

It's just not gonna - I mean, it has a very bad track record. You know, if I pick trajectories of 50 technologies or so, nuclear sticks out as the worst performing technology. So why make a bet on that one, when we've got a lot of good ones to bet on?

Luca

Right. I think that leaves loads of things for listeners to contemplate, and maybe be provoked by, too. So I should flag that we've done an entire episode - basically exactly a year ago - with Matt Ives, exploring some of the implications this has for climate change policy specifically. So I'll just refer listeners to that if they're interested in some of the questions around costs and fast versus slow transitions and the like. And I'm curious to maybe explore some of the other implications here. So one of these is: I'm curious for your take on the general 'are ideas getting harder to find?' debate. I don't know if you've explored this, but it's broadly the argument that, you know, Moore's Law holds, but it's becoming increasingly expensive to sustain, in the sense that it requires a lot more scientists, a lot more inputs, just to extract the same amount of progress. And that this might be pointing towards a broader technological stagnation that we're risking here. I'm curious if you've got any thoughts, either on that paper specifically or on the broader argument it often feeds into?

Doyne

Yeah, I think it's an interesting idea. I'm not completely convinced, though I certainly think there's truth to there being a tension between two factors. One is that as we make technological progress, we get ever better tools, and ever better tools allow us to make ever better technological progress. The computer is a great example. Computers basically weren't used in science until roughly 1950. Since then, as they've been deployed, they're now used everywhere. And they're an incredibly powerful tool for allowing us to do science better, because we can simulate just about anything now, and we can gather data that we could never have gathered before. So our tools are getting better. On the other hand, as we solve more and more problems, the problems that remain get more and more complex, because the low-hanging fruit has been picked. Fossil fuel extraction is a good example. It's vastly more technologically sophisticated than it was a century ago. But in parallel, the extraction problem has gotten harder, because the easily accessible oil fields have been extracted, so we have to go deeper and deeper, and the oil becomes harder and harder to find. There's still tonnes of oil down in the earth, so we're not gonna run out of it, but we have to keep getting better to keep extracting it. And that's an example where, remarkably, everything has held almost exactly even over a century and a half.

Luca

Quick kind of follow up question on that: what do you make of the shale gas revolution and the kind of advances in drilling there?

Doyne

Well, I mean, there are swings in the price of fossil fuels. There'll be an advance like the shale gas revolution, which makes oil accessible that wasn't accessible before. You know, it's still more expensive to extract than Saudi oil - quite a lot more expensive to extract. And, you know, once again, there have been other revolutions with drilling methods and other technological innovations through time, and there's been kind of an arms race against the oil getting harder to find. I think we'll end up leaving almost all the oil in the ground, because other technologies are becoming cheaper.

Fin

So you have this general story where, as you innovate, two things happen. One is you get better at solving problems, and the other is that the problems get harder.

Doyne

Yeah, so there's definitely an arms race there. Now, how that's gonna play out, does that mean we're going to have a slowdown where we just get worse at doing this? I'm sceptical about that part.

Fin

So the really key question seems to be: do we get better at solving problems faster than the problems get harder? In other words, do we hold the microphone close enough to the speaker that the speakers blow out? Or is it just far enough away that you get a kind of fading? And it seems like a sensitive, potentially chaotic, feedback-style mechanism. Unless you have general reflections, I was curious to ask whether we can say anything about these kinds of long-run economic forecasts - when we're talking about stagnation - by looking at production networks?

Doyne

Well, I've written a paper about that, with James McNerney as the lead author. In that paper, we started out by trying to think about production networks in ecological terms. Because in an ecosystem there are what are called trophic levels, in food webs. So in a food web, you assign grass, say, trophic level one, because it's at the bottom. If zebras eat grass, and only grass, then they have a trophic level of two. And if lions eat zebras, and only zebras, they have a trophic level of three. It's a very useful way to think about ecosystems, because one of the things you realise is that if you want to think about grass, you need to think about lions, because lions control what's eating the grass. If all the zebras get eaten up, the grass will become very plentiful, and vice versa. So the lion population determines a lot about the grass. Well, in production networks, it plays out a little bit differently. If you have a company that gets all its parts from another company, which gets all its parts from another company, you have a different kind of trophic level, but roughly the same idea. It turns out food webs are never so simple as having trophic levels 1-2-3, because zebras eat other things than grass; lions eat other things than zebras; and there can be feedback loops, because bacteria and worms in the soil will eat decaying lions. So the feedback loops can be more complicated. But you can still calculate trophic levels for organisms in ecology by looking at the dietary fractions: if you can measure what fraction of a lion's diet is zebra, and you do that for all the things the lion eats, and all the things all the other animals eat, then you can calculate the levels in a consistent way that is actually very useful for managing ecosystems.
And you can do the same thing in an economy, except you look at, for a given industry making a given product, what fraction of its inputs comes from the other industries. You use exactly the same set of equations to calculate it. And the remarkable thing we discovered is that this is useful in thinking about production networks, because innovation propagates in the same way. If the maker of the chips for your laptop makes an innovation, that will cause the price of the chips to come down for a given performance level, which will make your laptop cheaper for a given performance, or similarly, for the same price you'll get a better laptop. So one of those two things will happen. And so the prediction we made is that industries with deep supply chains will improve more quickly than industries with shallow supply chains. And that prediction held up remarkably well in the data. So it harks back to Adam Smith's idea, except we're not just specialising to make production efficient, we're specialising to innovate better.
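[Editor's note: the trophic-level calculation described above can be sketched in a few lines of numpy. Trophic levels solve the self-consistent equation ℓ = 1 + Aℓ, where A holds the dietary (or input) fractions. The matrix below is a made-up toy example of the pure grass-zebra-lion chain from the conversation, not data from the paper.]

```python
import numpy as np

# Share matrix A: A[i, j] = fraction of node i's diet (or an
# industry's inputs) that comes from node j. Rows sum to at most 1;
# the remainder is "primary" input (sunlight, labour).
# Toy chain: grass <- zebra <- lion.
A = np.array([
    [0.0, 0.0, 0.0],   # grass: no inputs from other nodes
    [1.0, 0.0, 0.0],   # zebra: diet is 100% grass
    [0.0, 1.0, 0.0],   # lion: diet is 100% zebra
])

# Trophic levels satisfy ell = 1 + A @ ell, i.e. ell = (I - A)^-1 @ 1.
n = A.shape[0]
ell = np.linalg.solve(np.eye(n) - A, np.ones(n))
print(ell)  # [1. 2. 3.] for this simple chain
```

With mixed diets (a lion that also scavenges, say) the same formula yields fractional levels, which is what makes it usable on real food-web or input-output data.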

Luca

And I guess there’s kind of a sectoral story here as well, which I'm keen to maybe get to as well, which is that as economies move from agricultural to manufacturing, these kinds of supply chains become a lot longer, and innovation kind of accumulates quicker, and we see quicker growth. But then interestingly as well, as economies maybe move from manufacturing to more service sector industries, the supply chains get shorter again. And I'm curious if you can maybe speak a bit more about what this means, maybe for that secular stagnation hypothesis.

Doyne

Yeah, I think this may be a partial explanation of secular stagnation. Because economies tend to go through cycles: you start out with a primitive economy where there's just agriculture, and then you acquire metalworking and more sophisticated technologies. And then eventually you have a transition, after the Industrial Revolution, where you get powerful manufacturing capability; there's a lot of specialisation, and the trophic level of the economy increases, which, according to our theory, means the rate of innovation should increase too. And then, through time, if manufacturing is outsourced to other places, it becomes more of a service economy and the innovation rate slows down. And, roughly speaking, we saw this to be true in the data. But that's not fully tested, because it's hard to measure these trophic levels very far back very accurately.

Fin

I was going to ask that. So how do you go about coming up with some measure of the trophic level of the world economy?

Doyne

Well, in this case, we took advantage of something called the World Input-Output Database. That allowed us to go back about 14 years, initially - it's now longer than that. But with the first data set, we could only go back 14 years. We could see these trends quite clearly over that span, because the database had the inputs and outputs for each country and each industry - 35 industries and 43 countries - over a 14 year span of time. But yeah, data is hard to come by.

Are ideas getting harder to find?

Luca

I was curious to pick up on a question Fin asked earlier about this arms race between ideas getting harder to find and us getting better tools. Do you have a sense, or even just aspects you think are really important to consider, of which side might win out? Either, you know, a drastically different world where we stagnate and plateau, versus a world where we really see exponential growth and, you know, very weird futures.

Doyne

I don't feel like I can hang a prediction on firm ground. But if I can just speculate: you know, there's a sort of dichotomy. At one pole you've got Ray Kurzweil, who says our technology is becoming more and more sophisticated, and we'll start to see feedback where we're effectively designing ourselves. And I think he's right that that will happen. And at the other end you've got Robert Gordon, who says the golden days of technology were the 1880s, the days of Edison, when we created all the modern comforts of society: refrigerators, and light bulbs, and, you know, recording equipment and movies and all these things that make our life pleasant. So Kurzweil has a science fiction view, that we're going to evolve beyond something that we would call human, and Gordon imagines we're just going to stay human forever, and all we really needed were refrigerators and, you know, light bulbs. And to be honest, I think Kurzweil is closer to being right, though Kurzweil had completely unrealistic timescales over which that evolution was going to happen, and I think at times a completely unrealistic idea about what it will really mean.

AI

Luca

Do you have any thoughts on artificial intelligence and timelines now?

Doyne

Yes. I think, you know, we're already seeing substantial improvements in artificial intelligence that went beyond what was anticipated 20 years ago by quite a bit. Although that's oscillated through time; there was a period in the 50s and 60s where people were saying, ‘we're going to solve all these problems really fast’. They thought they were going to make artificially intelligent computers that could think like people within a decade or so. And that just didn't turn out to be true. They thought that problems like face recognition or voice recognition were going to be easy; they turned out to be hard. When I was a graduate student, or a young postdoc, it suddenly dawned on those people: these are hard problems. And then we've seen them get solved! And though the solutions aren't perfect, they're pretty damn good. And we've seen game after game, chess and Go, where machines now play better than people. There are still some things machines can't do. Current machine learning methods like deep learning neural nets still cannot really form a sophisticated model of the world. You know, they don't form a model that they can explain to you the way a child does. Children are doing something that we still don't understand. The developmental process of a human being from age one to five does something that's still magic from the point of view of AI. But I think we'll eventually figure out what's going on there, and so I don't see any impenetrable barriers to AI doing everything that we imagine it can do. But I also don't see the kind of doomsday scenario that people like Nick Bostrom throw out; I think that's just kind of silly. Because I think we see the way AI and human beings exist symbiotically now. AI is really, really smart; it's just smart in a very different way than the way we're smart. It needs us to make it, and we like having it to make our lives better. And we live in a very nice symbiosis.
And I don't see that breaking down for a long time. And I see there being a kind of merger where the boundary between the two starts to get fuzzy. And I think that's the way the future will play out.

Luca

Yeah, I think this AI x-risk question is definitely something to push back on, but also just a huge can of worms that I don't think we'd be able to do justice to in the 10 minutes we have left. So I think it's best to just move on to closing questions. Fin, do you want to do that?

Fin

Yeah, sure. Are there specific research questions - and you can be as granular as you want - which someone listening to this might actually be interested in just taking up if they have some kind of econ background?

Doyne

Well, there are so many problems to be solved. You know, we're still very far from understanding technological change. What are the drivers that make it come about? We need to gather much better data to understand that. That's an example of, you know, low-hanging fruit at this point. But I've tried many times to get funding to do it; I've never managed to convince a funder. It sits between all of the disciplinary boundaries.

Fin

What's ‘it’ in this case?

Doyne

Well, on one hand, it sounds like economics, but it involves gathering data and thinking about technology, which then sounds like engineering. But probably the right way to think about it is to think about innovation as it's done in evolution. So now that looks a lot like biology. So it's sitting at the intersection of those fields, not to mention informatics and computer science and time series analysis and all the other things that need to be done, and ultimately, gathering and curating a dataset which is like library science.

Progress Studies

Fin

Have you heard of Progress Studies?

Doyne

No.

Luca

Yeah, it's right up that alley.

Fin

Yeah, there's a kind of, I guess, nascent field - it has one foot in academia - where really it is trying to answer these questions about technological change, and in particular interventions to speed up various kinds of progress. I don't know exactly how to summarise it, but it's about, let's say, two or three years old.

Luca

Yeah. Doyne Farmer, thank you so much.

Alright. Thank you guys.

Outro

That was Doyne Farmer on complexity and predicting technological progress. As always, if you want to learn more, you can read the write up at hearthisidea.com/episodes/farmer, that's F-A-R-M-E-R. There, you'll find links to all the papers and books referenced throughout the interview plus a whole lot more. And if you know of any other cool resources on these topics that you think others might find useful, please send them to us at feedback@hearthisidea.com. Likewise, if you have any constructive feedback, email us or click on the website, where we have a link to an anonymous form under each episode, as well as the listener survey that I plugged at the very start. And lastly, if you want to support the show more directly and help us pay for hosting these episodes online, you can also leave us a tip by following the link in the description. A big thanks as always to our producer Jason for editing these episodes, and Claudia Moorhouse for writing the transcript. And thanks very much to you for listening!


