Episode 43 • 15 March 2022

Glen Weyl on Pluralism, Radical Markets, and Social Technology

Glen Weyl is the Office of the Chief Technology Officer Political Economist and Social Technologist (OCTOPEST) at Microsoft, where he advises Microsoft’s senior leaders on macroeconomics, geopolitics and the future of technology. Glen also co-authored Radical Markets: Uprooting Capitalism and Democracy for a Just Society, a book about “revolutionary ideas on how to use markets to bring about fairness and prosperity for all”. Building on ideas from that book, Glen founded RadicalxChange, a global movement for next-generation political economies committed to “advancing plurality, equality, community, and decentralization through upgrading democracy, markets, the data economy, the commons, and identity.”

In our conversation, we discuss —

[Image: Glen Weyl. Image credit: WIRED Magazine]

Transcript

Fin 0:04

Hello, you’re listening to Hear This Idea, a podcast showcasing new thinking in philosophy, the social sciences and Effective Altruism. In this episode we talked to Glen Weyl. In his day job, Glen is the ‘OCTOPEST’ at Microsoft, where he advises Microsoft’s senior leaders, and I’ll leave him to explain that acronym. He’s also the author of a book called Radical Markets, which imagines how market-like mechanisms can be extended to address pressing problems that are typically addressed in outdated or more ad hoc ways: for instance, more expressive ways of voting, more efficient ways of taxing and changing ownership of property, and fairer ways to share in the gains from migration. Glen is also the founder of RadicalxChange, a global movement for next-generation political economies. We began by talking about some of the suggestions from Radical Markets, especially quadratic voting and funding, which I try to explain later. One thing that came through for me was, I guess, a more nuanced picture of what’s important about democracy. I find that, as a word, it can be taken to mean any number of things, but I think I’ve got a clearer sense of which precise aspects of democracy are most underrated. In roughly the second half, we then talk about Glen’s views on Effective Altruism and longtermism. He’s said some critical things in the past about them, and I think a few of his comments there are novel and potentially very constructive, so we asked Glen to elaborate on them. Some of the things he said here resonated with me more than others, but I think Glen articulated a very smart, sceptical perspective that was really useful to hear. And in general, I think it’s just acutely important to hear critical outsider takes on any kind of ambitious or radical worldview that you’re letting guide the things you work on. Just as a side note, I feel like at some point I gave the impression that I was absolutely confident that AI is somehow going to end the world as we know it this century, and I would like to clarify that I am at best agnostic on this question. Anyway, a big thank you to Glen Weyl for joining us. And without further ado, here’s the episode.

Glen 2:12

Hi, I’m Glen Weyl and I am the founder of the RadicalxChange Foundation, currently just a board member there, and I’m also at Microsoft, where I’m the Office of the Chief Technology Officer, Political Economist and Social Technologist, which abbreviates to the acronym OCTOPEST. And there I advise Microsoft on the intersection between geopolitics, macroeconomics and our tech strategy.

Fin 2:40

Fantastic. I thought the physicists had monopolised acronyms, but I’m glad to see the economists are getting on it.

Glen

Political economists!

Fin

Political economists, okay, okay. So the structure of this conversation: I’d like to begin, or we’d like to begin, by going through some ideas in your book, Radical Markets. It’s been nearly three years since you wrote that, and I know, or expect, that you’ve updated on some things there. And then, roughly in the second half, I think it’d be interesting to talk about your takes on Effective Altruism, and longtermism, and maybe also rationalism. So that’s the context.

Luca 3:17
Yeah, cool. So maybe one good place to start, before we delve into a bunch of the proposals and solutions that you suggest in the book, is to take a step back and lay out why we even need these kinds of radical solutions in the first place. You spend a lot of time talking about market power and needing to spread out the gains from innovation. Can you talk about the fundamental problems underlying the world that you’re looking to address, with the book and with the proposals that you suggest?

Glen 3:48
Well, I think my perspective on that has somewhat changed. In the book, I focused on a lot of very traditional economic framings of what the problems are, like inequality and growth and so forth. I think my perspective has come to be a little bit different. What fundamentally motivates my work and those ideas these days is the concept that our physical technologies have advanced much more than our forms of social organisation have, and that that’s not sustainable. If our technologies advance too far ahead of our forms of social organisation, our social organisation will be incapable of governing those technologies, and we’ll end up being ruled by our technologies rather than the other way around. And so the book, and the broader work that I do, is very much dedicated to trying to allow us to have an imagination for the way that we organise ourselves that is as capacious as the way we manage our technologies. Very few people think that our current technologies just have some slight failures in them, and that if we correct those, then we’ll have the optimal technology. Nobody thinks about technology that way. And I don’t think we should think about social organisation that way either. We shouldn’t think, ‘oh, there are these two or three forms of social organisation that have ever existed, and now we just need to refine them a little bit, and then we’ll be at the optimal form of social organisation’, which is often how we talk about politics, you know? And I think that’s fundamentally what drives the book.

Progress in technology and social institutions

Luca 5:33
You said that you changed your mind on this, right, from how you were thinking about it before? How did that come about? What caused you to initially view this in terms of more standard economic concepts and specific problems, and then move to this broader theme? What caused you to change your mind?

Glen 5:52
Well, I used to be an economist, you know, that’s where I came from; that was the foundation of my thinking, I was trained as an economist. But as I went around and started talking about the book, and the ideas started getting applied, I started seeing both the potential of them, which was, I think, greater than I imagined, and at the same time their limitations. I started seeing that the book was a first step into a field - it wasn’t an endpoint. And I also started seeing the flaws in many of the premises that were used to derive the ideas in the book. And all of that process made me see that there was this much bigger problem, and that the challenges it addresses are much more fundamental than just the rate of economic growth, or the degree of some narrow measure of inequality - they’re about our ability to continue to live on this planet without destroying ourselves.

Luca 7:11
Yeah. I guess I’d be curious to hear from you about why this situation has come about. Has it always been the case that our regulation, or our social organisation, as you said, hasn’t been able to keep pace with technology, and now technology is just moving at much more rapid speeds? Are there other fundamental trends that you think are worth pointing out or considering? What’s the story there?

Glen 7:34
Well, I think two things. First of all, it’s been the case for a long time, and by a long time I really mean since the late 18th century, that our technology has accelerated and our social institutions have not quite kept pace. That being said, we’ve had some really astonishing periods of advance in our social institutions to keep pace with our technologies. With the labour union movement, the emergence of public utility regulation, and the emergence of public funding for research, especially around the middle part of the 20th century, we saw a huge advance in our social technologies. And even before that, we saw the emergence of democracy, and nationalism, and so forth. And I think those were very important advances to keep that parallelism going. But I think that our social imagination closed dramatically around the middle part of the 20th century, and that relates to the Cold War and neoliberalism, and we could get into all those sorts of things. But I think that period really started to shut down that imagination, and especially at the end of the Cold War, there was this sense of the end of history. And that was, I think, a very wrong turn for us to take. And I think we’re starting to recover from that turn, to open our social imaginations again.

The Great Stagnation and the history of personal computing

Fin 9:10

I’m curious to linger on this for a bit. It does seem true to me, when you mention it, that maybe from something like the 70s onwards we see much less innovation in something like the modes of how we organise ourselves. Just in our last interview, we were talking about ‘The Great Stagnation’. It seems to be true of innovation elsewhere as well, like in culture, science and engineering. Do you think that there is some common cause there, or is there some special explanation for this apparent slowdown, or more or less stalling, in how we think creatively about how we relate to one another?

Glen 9:49

I would attribute the slowdown on the scientific side, or rather in the cultural and social implications of our scientific advances, to this slowdown in our social organisation. I actually don’t think the pace of fundamental science has necessarily slowed; I think the problem is that the ability of fundamental science to mean things in the lives of people is fundamentally a socio-technical problem, not a purely technical problem. It was precisely when we decided, ‘let’s step back and let science just take its course’, that science stopped doing much for us. Because science doesn’t take its course. It’s a process of society digesting those scientific advances that really makes things possible. A great example of this comes from Donald Norman, I believe is his name - he’s one of the founders of the field of interaction design - who said that video conferencing was invented in the 1880s, first prototyped in the 1920s, and first commercialised in the 1950s. And at the time of his writing, it hadn’t even gained widespread adoption - it ended up doing that as a result of the pandemic, not as a result of any scientific advance, right? So it’s fundamentally our social capacities that enable advances in productivity and things like this, not our scientific capacities on their own. And I think one of the great mistakes we make is focusing on scientific advance rather than on the social advances that complement those scientific advances.

Fin 11:43
Understood. Okay. Video conferencing is an interesting example - I’m sure you’ve heard of ‘the mother of all demos’ that Doug Engelbart gave in 1968; the vision for video conferencing was more or less there then.

Glen

Well it was there long before that!

Fin

Right.

Glen 12:00
Even before that, I mean, Engelbart was just reprising earlier work. But yeah, it was convincingly shown at scale in the 60s. Engelbart is a remarkable figure, and the whole project that led up to that demo is a very important part of my intellectual development.

Luca 12:23
There is definitely a difference, right, between just ideas, or gains in the abstract, accruing, and them actually spreading out and becoming real in the world. And maybe that links to two things - on the one hand an implementation side, and on the other a using side. It’s possibly the classic distinction between invention and innovation. And that itself is maybe another part of the formula: there’s the coming up with ideas in the abstract, and the things that go into that; and then there’s the question of how that technology interacts with and spreads through society, and how society in turn affects the coming up with ideas in the first place.

Glen 13:02
Yeah. And furthermore, it’s the integration between those two things that allows it to be actually effective; when they’re separate spheres, it doesn’t work well. When they interact closely, it works well. I mean, the most impactful technologies of the last 60 or 70 years came precisely out of the Licklider-Engelbart project, which became the internet and personal computing and so forth, and which was set up from the start as a network of user experience researchers, not as a fundamental, detached, abstract scientific enterprise. That is what made it so successful, I think.

Fin 13:48
Yeah, I mean, earlier on you said that Engelbart, and I suppose also Licklider, were in some way important to your development. I think it’d be silly not to return to that and ask more. Maybe one framing here is: okay, I have a kind of total outsider’s hot take about this period in the history of the internet, which is that maybe it’s easy to over-aggrandise it. Perhaps what’s going on is that we just walked into this kind of room full of low hanging fruit, or whatever the metaphor is, once we got the technology that’s sufficient for something like networks, computers, personal computing - and maybe really any project would have hoovered up all these ideas, sooner or later. And perhaps then it’s a mistake to read too much into the particulars of how this project, for instance at Xerox PARC, was organised. Do you think you could explain what I’m getting wrong there, if I am getting something wrong?

Glen 14:47
Yeah. So I would put that almost exactly the opposite way. There is an attitude towards social deployment and social technology that is extremely common in technical communities, which views it as either completely impossible or totally inevitable, whereas scientific advance is viewed as the product of hard work. So the attitude is either ‘oh, well, once we have the science, it’ll just diffuse, it’ll just happen, no worries’, or ‘oh, you can never change political institutions, they’re just fixed forever; societies are never going to adopt this, people are too recalcitrant’. That’s a typical attitude I encounter among people with an engineering mindset, and they sort of oscillate between those two things, often from sentence to sentence within a conversation, but they’re unable to accept the intermediate position, which is that it requires concerted and thoughtful work to surmount the impediments to social innovation. I’m not going to be able to make a sustained case here for taking the intermediate position. But given that those two positions are held about equally, and oscillated between so frequently, one should just reflect on the fact that if those two things can seem equally true from moment to moment, the intermediate position might have some value to it. It’s just remarkable how much you hear both of those positions expressed by people with an engineering mindset. Both are ways of avoiding serious engagement with social questions.

Quadratic voting

Fin 16:44
Okay. I thought it’d be good to dive into some particular ideas from the book, and ideas that you’ve come up with since the book. One of us will try to actually explain them after we record this, so don’t worry about doing that. But we do have some questions about them. So here I am going to try to quickly explain quadratic voting, but you can skip ahead a minute or so if you’re not interested.

Okay, so imagine you have a bunch of people voting on some binary decision. A standard way to do this is to give everyone one vote per decision, and then the option with the most votes wins, right? An issue here is that if 51% of these people very weakly prefer option A, and the remainder very strongly prefer option B, option A will still win. So we might want a way for voters to also express the strength of their preferences. We can do this by instead giving everyone a pot of vote credits, which they can allocate between different decisions. So if you have 100 credits, you might choose to spend, say, 60 of them on some decision that’s especially close to your heart, and then sprinkle the rest across other decisions as they pop up. Quadratic voting is similar to this idea, but now the influence you buy on some decision scales with the square root of the credits that you spend on it. So I could buy one unit of influence with one credit, but nine credits would buy me three units of influence, and so on. Okay, so why this change? Well, the intuition is that you want to avoid situations where someone dumps all their credits into one issue just because that issue matters to them a little bit more than any other. And that might happen if a unit of influence on that issue is always worth more than anywhere else, but an additional unit never gets more expensive for this person. In this world, people will just dump all their credits on the issues that matter most to them, and nowhere else, and then decisions will end up being dominated or determined by whoever is most fanatical about them. And that’s bad. So what you really want is for people to influence a decision exactly in proportion with how much it actually matters to them. And this means that the cost of an additional unit of influence over a decision should scale with how many units of influence you’ve already bought. In this arrangement, you will spend credits on the decision up until the point where an extra unit of influence costs more than the benefit that you’d expect to get from it. Now, if you think about it, the cost of n units of influence here is going to scale with the square of n. And what’s cool is that, on a simple model, you can show that this quadratic method is the only way to get people to vote exactly in proportion to how much an issue matters to them. Quadratic funding is basically this, but for deciding where to spend money rather than which decisions should get made. So the idea is that there could be, say, 10 projects, and you’ll distribute your money between them in proportion to how much they matter to you. And then a larger pool of funds will match your donations in a similarly clever way, which allocates funds to each project in proportion, hopefully, to how much each one matters to everyone. I appreciate this was not as lucid an explanation as it could have been, so I would encourage you to check out an excellent summary that Vitalik Buterin wrote, which I’ll link to in the show notes. Okay, back to the show.
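As a minimal sketch of the mechanism just described, assuming a toy setting where influence on an issue equals the square root of the credits spent on it; the function names and numbers below are illustrative, not from the episode:

```python
import math

def influence(credits: float) -> float:
    # Influence bought on one issue is the square root of credits spent.
    return math.sqrt(credits)

def cost(units: int) -> int:
    # Equivalently, n units of influence cost n^2 credits.
    return units ** 2

assert influence(1) == 1.0   # one credit buys one unit of influence
assert influence(9) == 3.0   # nine credits buy three units

# The marginal cost of the n-th unit of influence rises linearly:
# cost(n) - cost(n - 1) = 2n - 1. A voter keeps buying influence on an
# issue until that marginal cost exceeds the issue's marginal value to
# them, which is why influence ends up proportional to how much they care.
for n in range(1, 5):
    print(f"unit {n} costs {cost(n) - cost(n - 1)} extra credits")
```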

So maybe we could begin with quadratic voting and quadratic funding. I think one question here is something like: what’s the route from here to a world where these mechanisms are widespread? In other words, what are the kind of small scale or keyhole implementations of quadratic voting and funding?

Glen 20:28
Well, this is a perfect exemplar of the conversation we were just having. Which is to say that I think a world where those are adopted is both possible and challenging to bring about. It is not either inevitable or impossible. And I think the key to it is a very strong engagement with, and attention to, the dynamics of social and political change. And social and political change does not happen by brilliant abstract ideas being expressed in technocratic language to a small number of intellectuals. In fact, that is one of the quickest ways for them to stagnate and disappear, and not even be appreciated by the intellectuals. So quadratic voting, for example, was very hard to communicate, even to economists, even though it was simple, and they got it, and there’s lots of economic logic behind it. But it was neither the particular sort of challenging mathematical advance that people wanted to see within a particular area of the economics academy, nor was it something that already existed in the world and therefore could be studied as an empirical phenomenon. It was instead a new design. And that kind of design only really flourishes in communication with communities that can make something of it. And that is where quadratic voting has flourished, and I think it will actually come back to the scientific establishment as an empirical phenomenon in the coming years. It’s already starting to happen. And what has made it do that? Well, it’s been the interaction with social movements, the interaction with technologists, the interaction with artists, and those who help shape our imagination of the future. And it is through that diversity of pathways, I believe, that it has a chance of really making change in the world. It’s because it’s being used in computer games that are played by millions of people. It’s because it is making its way into the practices of the Taiwanese government and the Colorado State Government. And because it’s become sort of a meme within the Ethereum community, or the broader blockchain and Web3 ecosystems, and the convergence between all of those, and the way that that then seeps into the media. All those things together are making it work. And you asked for the keyhole application - what would really make it work? And the answer is that none of those applications I just listed, which have been so effective, were anticipated in the book. They were all things that came by throwing things out there and having people make things of them, just as none of the major applications of the iPhone were anticipated in the initial Apple spec. And, more broadly, I think that there is an imaginary within academic and engineering discourse of what I would call ‘experimentation on’, where the engineer designs: ‘here’s what I think this could be used for, let’s go see if it works’. But there’s another attitude towards experimentation that I’ve come to be a strong believer in, which I would call ‘experimentation with’, which recognises that you can’t know what the impact is. What you can hope to do is to communicate, and to generate something that people can play with. And that play, and the way it returns to the engineer, is what makes innovation in the world really possible.

RCTs contrasted with deliberation and other ways of knowing

Luca 24:48
Yeah, that’s interesting, and it sounds to me like there are maybe two things going on here. One is this idea that people just need to be exposed to ideas before they’re ready to accept or use them. When I first heard about quadratic voting, my immediate thought went to politics in general, or presidential elections being run by this scheme. But there, such a big change in how we count votes seems a bit scary; maybe I just need more gradual exposure in order to accept the idea, be it through video games, online reviews, or what have you. But then I think you’ve also picked up on a second mechanism here, which is something more around experimentation and refinement - something that goes beyond the first invention, and is maybe more of a deliberative process of seeing in what circumstances this is functioning and adding value, and in what cases it’s not. Is that roughly right, or am I missing something?

Glen 25:43
Yeah, I mean, people within the EA community love RCTs - randomised controlled trials - which are a classic ‘experimentation on’ paradigm. But imagine trying to do an RCT with a virtual reality headset. What is it exactly that you would measure? Who would you measure? Who would you give it to? It’s not that people haven’t done things like this - they’ve been able to measure specific aspects of specific things - but most of what we’ve actually learned about virtual reality headsets has come from people doing all kinds of crazy shit with them that couldn’t have been anticipated by the designers, coming up with applications that couldn’t have been anticipated by the designers, which then led the designers to do something completely different in the next generation of headsets they make. And treating those who are the subjects of experimentation as peers - as epistemic peers, as design peers - that, I think, is central to what it takes to make real fundamental social innovation work. And I’m a fan of RCTs for many things. I’m even a fan of that type of thinking in areas where many other people don’t think of it: in finance, for example, I think we could use a lot more top-down valuation of things, and much more constriction of the innovative space, actually. But there are many areas, especially in things like economic policy or social design, where we’ve come to think that that should be the mode of thought, and just setting things up that way basically undermines the capacity for fundamental innovation in social institutions.

Luca 27:31
I was just gonna say, one area where I have definitely actively changed my mind, or am changing my mind, is what we even mean when we talk about democracy. At the beginning, I very much viewed it as just an aggregate preference kind of thing - how do you do that, how do you make it fair? Whereas here, a lot of the emphasis seems to be on what you’re describing as deliberation: that act of having participants, having feedback, and, I guess, co-creating in some ways seems to be super important. And it goes beyond what you mentioned at the beginning too, of just economic ideas, and is more grounded in a political or social background.

Glen 28:14
Exactly. When I evince a lot of enthusiasm for democracy, I think many people with an engineering mindset, especially including the rationalist community, are like, ‘oh, majority vote isn’t so great’, or whatever. And that is profoundly not what I mean by democracy. What I mean by democracy is an attitude of epistemic peership to a wide range of people. That is fundamentally what I’m celebrating when I celebrate democracy.

Quadratic funding

Luca 28:53
One area that you did talk about, and wrote a paper on, where we can see QV [quadratic voting] being really useful, is this idea of liberal radicalism, and in particular how you can have a designed mechanism for philanthropic matching funds.

Glen 29:08
Now labelled quadratic funding, just to keep things clear.

Luca 29:14
Okay, good to get the terminology straight. Could you maybe lay out the case for this? And, echoing Fin’s question right at the start about keyhole solutions: what do you see as potential applications, and what’s going on there?

Glen 29:31
So quadratic funding is really, in some sense, just equivalent to quadratic voting, but it’s formulated as a mechanism for funding things against the backdrop of a capitalist economy, rather than as a way of making decisions against the backdrop of a democratic system. In it, rather than casting a vote, people fund some sort of organisation or enterprise or public-good-providing entity, and individual contributions are matched according to a democratic principle: small contributions are matched more than large ones, and contributions to something with many different contributors receive more matching than contributions to things with a small number of individual contributors. And the basic concept there is to reverse the logic of the public goods problem, where small contributors to large causes don’t have an incentive to contribute as much value as they receive, because they effectively free ride: they hope that those other value creators will contribute, and they don’t internalise the value that their own contributions create for everyone else. One way you could overcome that is by matching the contributions. So you’d like to match according to a sort of Kantian principle, where if I’m 1/1000 of the value, you want a 1000-for-1 match to my contribution, so that I internalise it all. And the question is, when people are diverse, is that possible? And this quadratic formula does that. It says that no matter how large or small you are, your match will be inversely proportional to your share of the total value. And so that overcomes the problem of free riding.
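To make the matching rule concrete: in the liberal radicalism paper (Buterin, Hitzig and Weyl), a project receiving individual contributions gets total funding equal to the square of the sum of the square roots of those contributions, with the matching pool covering the difference. A minimal sketch, with illustrative numbers:

```python
import math

def quadratic_funding(contributions: list[float]) -> float:
    # Total funding = (sum of square roots of individual contributions)^2;
    # the matching pool tops up the difference over the raw contributions.
    return sum(math.sqrt(c) for c in contributions) ** 2

# Two projects that each raise $10,000 in individual contributions:
broad = [100.0] * 100   # 100 small contributors
narrow = [10_000.0]     # a single large contributor

print(quadratic_funding(broad))   # (100 * 10)^2 = 1,000,000
print(quadratic_funding(narrow))  # (1 * 100)^2  = 10,000

# The broadly supported project attracts a large match, while the single
# contributor gets none: the match a contribution triggers is inversely
# proportional to the contributor's share of the project's total support,
# which is the 'Kantian' 1000-for-1 logic described above.
```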

Luca 31:29
Yeah, and now that we’re talking about applications of this, and especially about experimentation and this kind of co-creation or deliberation in the real world, can you give some examples here? One that comes to mind is Gitcoin and the ‘Quadratic Lands’, but I’m wondering if you could either go into more detail there or talk about some others.

Glen 31:46
So Gitcoin is the application that’s been most successful so far, which is the support of open source software. The basic structure of the market that Gitcoin tries to make is that you need some entity which is benevolent but uninformed - some entity that has an interest in seeing some ecosystem behave well, but doesn’t really know what goods the people in that ecosystem need. Then you need people in the ecosystem who want to support its success, and then you need projects that they can support. And open source software is a classic example of this: you’ve got people who are participating in the Ethereum ecosystem, you’ve got the people who own a lot of Ether or otherwise benefit from it prospering, who are willing to provide those matching funds, and then you have projects that propose to create different forms of open source software or media that support the development of that ecosystem. So that’s been quite successful - I think tens of millions of dollars of funding has been channelled to open source software projects, both within the Ethereum ecosystem and to some extent outside of it, using Gitcoin. But that’s obviously just one application.

Fin 33:03
One quick question that’s occurred to me here is, so the way you might be able to find funding within Effective Altruism, at least one of the ways, is that you can apply to a fund, and that fund will be managed and decisions will be made by some board of you know, hopefully, well informed, well intentioned decision makers, right, and they’ll distribute the money. I wonder if you can imagine some arrangement that looks more like Gitcoin in the context of Effective Altruism. Do you think that could work?

Glen 33:39
Sure. Any community that is aware of its own lack of knowledge, but has benevolent intentions for the development of some ecosystem, would find a mechanism that elicits signals of value from that ecosystem useful in directing its resources.

Fin 33:59
Yeah, as I take it, it’s a particular kind of lack of knowledge that this solves. I wonder if you could elaborate on what kind of ignorance is the important kind here?

Glen 34:09
The ignorance here would be about which particular projects add value to some community of people collectively - and you want to hear from those people what they know about that.

Fin 34:22
Yeah, good. So I guess the information’s out there - people know what they want - and the challenge is getting that information to percolate into the right place, or something.

Glen 34:29
You know, a market economy, under certain conditions, should kind of do that as well, right? You get demand signals and so on. But those conditions are quite limited. In particular, if there are increasing returns - if there are things that create value for people where the value grows as the number of people consuming it grows, or where the cost of providing it falls - then standard capitalist markets won’t do a particularly good job of providing them, whereas this type of mechanism will. And examples of that are anything you would call, quote, ‘exponential technology’ - really, anything that’s a technical advance that can be consumed by many people will have that property.

Luca 35:17
Yeah, at the risk of possibly getting a bit too wonky here, can you explain that mechanism? What is it about increasing returns, and the like, that means free markets wouldn’t be able to provide something in a way that this quadratic funding system would?

Glen 35:36
Well, there’s a fundamental theorem in economics, which is the basis of all the claims that markets work well, called ‘the first fundamental theorem of welfare economics’. And what this says is that under certain conditions, markets lead to efficiency. And one of those conditions, one of the most important, is what’s called decreasing returns. This is the notion that all production processes have the property that the more you put into them, the less they produce per unit. This is also known as submodularity; basically, it says the whole is less than the sum of the parts.

Increasing versus decreasing returns to scale

Fin 36:16
Yeah, can you give a quick example of a case where there are decreasing returns?

Glen 36:21
Well, a classic example would be a factory: eventually, if you put more and more people to work in the factory, they’re going to crowd each other, and they’re not going to be able to produce as much as the last person was able to. But almost all the interesting cases, the ones that are really transformative, are increasing returns cases. Think of a city: if we had decreasing returns, we wouldn’t live in large agglomerations of people, we wouldn’t have networks. Anything that really adds a lot of value has to be like that, because otherwise you couldn’t do it at scale with large numbers of people. And in those cases, we know that capitalism can’t lead to efficiency. Why is that? Well, it’s a little bit of intricate mathematical logic. But the basic notion is that the idea under capitalism is that you’re supposed to pay people for their incremental contributions, because that incents everyone to participate in the things they add the most value to. But if you tried to do that in an increasing returns context, you would more than exhaust the total amount of funds available, because the marginal returns summed over everyone are greater than the total amount created. And so it’s just not possible to support things under that principle in a capitalist system. And if you allow people to take profits out of the system, it makes it even worse, right? So capitalism can’t be consistent with efficiency in the cases we’re really interested in, these cases of exponential technologies.
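A toy numeric check of that exhaustion argument, assuming an illustrative increasing-returns production function where n contributors produce n squared units of output; the setup is a sketch, not from the episode:

```python
def output(n: int) -> int:
    # A toy increasing-returns production function: n contributors make n^2.
    return n ** 2

n = 10
total = output(n)                      # 100 units produced in total

# Each contributor's incremental contribution: the output lost if they leave.
marginal = output(n) - output(n - 1)   # 100 - 81 = 19

# Paying everyone their incremental contribution costs more than total output,
# so 'marginal product' payments can't be financed out of what is produced.
payroll = n * marginal                 # 190 > 100
print(total, payroll)
```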

Fin 37:51
Okay, that makes sense to me, I think. But I wonder if it would be useful - maybe this is putting you on the spot too much - to think of some case study where just having a free market, some capitalist setup, doesn’t get you the efficient thing, but where having some alternative like quadratic funding does get you the efficient or optimal thing, just to draw out that point.

Glen 38:14
Oh sure, I mean, my favourite example is media. Media is a classic increasing returns phenomenon: once you create a piece of content, it’s more or less costless for everyone to enjoy it, and so the more people that enjoy it, the cheaper it is per person to deliver that service, right? Now, how do we try to fund media, given this? You can do subscriptions, but that excludes people, and it also probably doesn’t get you anywhere near the value that people are getting, because they could just get the same information elsewhere. Most of the value of a new piece of media is not actually consuming the media itself, but consuming the new information that came from it, and that information can be conveyed to you in a variety of ways; you have very little incentive to pay to consume the piece of content itself, right? On the other hand, something that’s just engrossing but actually has very little value to it - a cat video or whatever - you might spend a lot of time looking at. So then you can put an advertising model on top of this thing, right? But then you’re going to send most of the funds to the least informative, most engrossing content. So the capitalist model just doesn’t track value almost at all. Whereas if people are saying, ‘well, I want the things that are generating real value, and I know what generates real value for me; I want those things to be created’ - people may have a very good sense for where that real value is coming from. Now, they don’t have much of an incentive to just tip and donate, because they think someone else will do it, and they’re only getting a small fraction of the value; but with sufficient matching funds, they would have the incentive to support that. Now, you could do it through the public - you could just have the government support these things, and of course public radio is useful and so on. But we think there are real problems with having some nation state that has to get elected determine the information about its own future existence. So we don’t really want that to happen either. So neither of those models is particularly effective, whereas the matching model can be much more so.

Fin 40:23
Yeah. So I have one more question here, and it’s something like: how do we extend quadratic funding to future generations, that is, people living in the future? I guess some central part of this idea is that you’re getting information from living people who know what they want. And I’m curious whether we can come up with roughly similar mechanisms for provisioning these goods which are also intergenerational, right? The challenge here is that many of the beneficiaries are just not able to express their preferences, because they don’t exist yet. Is this just hopeless, or is there some kind of interesting starting point here?

Glen 41:01
I haven’t worked it out, but I don’t think in principle it should be fundamentally challenging. We have mechanisms for intergenerational transfer within the capitalist system, which are basically that I can borrow money, invest in something, and then make a profit in the future to pay that loan back, right? And we should be able to do something similar with quadratic funding. And in fact, it should compound. Which is to say: right now quadratic funding happens in a static way - everyone at a moment in time contributes - but what if you decomposed it in such a way that contributions from someone in the past and contributions from someone in the future also get that quadratic matching associated with them? I haven’t worked it out fully formally, but there should be a way that, if you’ve gotten contributions in the past and then you get contributions in the future, all the matching that was supposed to come gets matched to those people in the future, so that it’s extra big in the future - you see what I mean? That incorporates both the matching that the first person should have gotten and the matching that the person in the future should get. And I think you can do that indefinitely. So that gives us a way in which things that are very long lived actually get a huge amount of support. In fact, you could imagine that most matching funds might end up going to things that were incepted in the past and then contributed to in the future, because those accumulate the most individual contributors over time. You see what I mean? And if we go beyond quadratic funding, to the next versions of this that I’m thinking about, there’s an even stronger case for that sort of thing, because the new versions I’m thinking about actually index matching as a function of social distance, rather than just as a function of the number of people involved. And you might argue that future generations are probably likely to be even more socially distant than people at the present moment. So yes, I think there are really interesting questions when you start taking an intergenerational or intertemporal perspective on quadratic funding.
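Glen stresses this isn’t worked out formally. Purely as a speculative sketch of the compounding he gestures at, one could imagine pooling a project’s whole contribution history into a single quadratic funding computation, so that later contributors trigger matching that reflects earlier contributors too; everything below is hypothetical:

```python
import math

def qf(contributions: list[float]) -> float:
    # Standard quadratic funding: (sum of square roots of contributions)^2.
    return sum(math.sqrt(c) for c in contributions) ** 2

# A long-lived project receiving $100 from ten new contributors each period.
history: list[float] = []
for period in range(1, 4):
    history.extend([100.0] * 10)
    match = qf(history) - sum(history)
    print(f"period {period}: accumulated match = {match:,.0f}")

# The match grows superlinearly as contributors accumulate across periods
# (9,000 then 38,000 then 87,000 here), illustrating why long-lived projects
# could end up attracting most of the matching funds under such a rule.
```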

Luca 43:39
Yeah, I think one point at the very beginning of what you said there is maybe worth highlighting, which is that, exactly as you said, there seem to be intergenerational transfers happening all the time, right? It makes me think of an article, written I think by Matt Levine, describing lots of financial mechanisms as essentially working like time machines, in that they’re able to allocate resources between the past and the future. But then making that equitable - and, as you said, a big part of what quadratic funding seems to be is communicating these signals - seems to be a really interesting problem that listeners, or yourself and the like, should maybe spend more time thinking on and actually coming to grips with.

Glen 44:24
Yeah, it’s a very, very interesting problem - one of many design problems that I wish people would actually grapple with, so that we could have advances in our social technologies like we have advances in our physical technologies.

‘Circular co-creation’ between new ideas and creative media

Fin 44:39
A while ago, when we were talking about quadratic voting, you mentioned that it had appeared in Civilization VI, the video game. I’m curious how that came about, first of all?

Glen 44:53
I think it’s a very plausible hypothesis that, around the time the expansion pack came out, which was about eight or nine months after the release of Radical Markets, the people who designed the expansion pack were swimming in the streams of conversation that surrounded Radical Markets. So I think that’s quite plausible, but I’m not sure.

Fin 45:15
Yeah, I can see some parallels between mechanism design and game design, so it’s not fantastically surprising, but it is really great. You said that there were some other RadicalxChange ideas in Civ VI. I actually didn’t know that, so can you tell us what those ideas are?

Glen 45:29
Well, I think the concepts around future governance that show up in the game are not necessarily derived directly from RadicalxChange ideas, but from the conversations that I think RadicalxChange helped stimulate, and the debates around it, so yeah.

Fin 45:47
Okay, interesting. So one broader question I had here is: how do you think about sending ideas like this mainstream, or at least giving them purchase, through creative media like video games? As far as I can tell, this is an idea that’s been at least underrated by, for instance, the EA community.

Glen 46:09
Well, I think if you want to have any chance at the sort of circular co-creation that we talked about, artistic imagination has to be key to it. Because there are many people who will not digest ideas in a purely linear, logical way. And in fact, many of the most creative people are precisely those who are least likely to digest them in that way. So if you want to invite that process of co-creation, you have to do that. I mean, you think of someone like Steve Jobs, who was critical to the birth of the computer industry - he never read an academic article. He had no interest in any of that, right? If the rationalist community had been around, he would have been about the last person to be consuming that kind of stuff. It was the Homebrew Computer kits that got sent out that made a difference to him, right? And it was the Whole Earth Catalogue, and it was the hippie culture. If you want to have a chance of attracting the Steve Jobs of the future, you’re not going to do it through highly intellectual, deductive academic work, or blogs, or something like that.

New political divides of the 21st century

Fin 47:15
I see. Last Civ VI question: I saw a tweet recently from you, where you said that this game captures, and these are your words, what may be the defining political divide of the 21st century. And, as far as I understand, these are different camps you can select in the game, and the names are ‘corporate libertarianism’ versus ‘digital democracy’ versus ‘synthetic technocracy’. And I guess I just want to ask: who and what in the real world do you associate with those camps? And why does that demarcation work better for you than something more standard, like communism versus democracy versus something else?

Glen 47:55
Well, communism and democracy are the ideologies in the 20th century era of the game. But the question is what the 21st century era of the game will be - in our game, and in the world as well, right? So I associate corporate libertarianism with the cluster of ideas around Peter Thiel and the book The Sovereign Individual - roughly, the concept that the internet is going to liberate us from existing social institutions and allow, through cryptography and other things, individuals to be in a kind of anarcho-capitalist state, where there is no longer redistribution or collective institutions, everything is driven by these, quote, ‘decentralised’, atomistically individual mechanisms, and even the traditional control and suppression of violence will be performed by mafias and things like this. Think of the book Snow Crash by Neal Stephenson: that world is kind of what they’re imagining. So that’s what I would associate with corporate libertarianism. Synthetic technocracy I would associate with the notion of fully automated luxury communism - the notion that some aligned AGI is going to just produce abundance, and then we’ll distribute that in some way, maybe a universal basic income or whatever, and then people will be free to just enjoy themselves, and all of the hard stuff will be done by some kind of computer-assisted central planning. So that’s a vision that I would associate with many people in the Chinese Communist Party. I’d also associate it with Sam Altman’s thinking, and with many AI maximalists. And I think many people within the rationalist community are attracted in that direction as well. And then the third one I would associate with Audrey Tang, and RadicalxChange, and other people who imagine that what technology can offer us is new means of communication and collective intelligence creation - richer forms of social organisation and democracy and things like this - and that there’s a new form of democracy or pluralism that will be empowered by the advance of digital technologies.

Fin 50:41
Thank you. That was really interesting. It reminds me of how, I think, Peter Thiel has this saying that ‘crypto is libertarian; AI is communist’, which kind of maps onto two of those.

Glen 50:50
Exactly. So he’s got two of those things.

Fin
Right, but he’s missed something.

Glen
And he’s trying to define away the third, because the third is the biggest threat to him. Both of those other two are deeply unappealing, and he hopes to use the divide between them to make people think they have to adopt his position if they don’t want to be communist or whatever. And the third pole, I think, is the one that most people would find most attractive, so.

Impressions and criticisms of effective altruism

Fin 51:16
I think it might be good to switch over now to talk about various things you’ve said about Effective Altruism, rationalism and longtermism. Some of those things you’ve said in this interview, but elsewhere too, you’ve made kind of critical noises about each of these more or less overlapping communities. And I think a lot of what you have said will be novel for some people, and can also be taken to be very constructive, so I think it’s worth talking about. And I guess, you know, a lowball first question is something like: what is your current assessment, your 10,000 foot view, of Effective Altruism, to begin with?

Glen 52:00
I think of these communities roughly the way I think of religion. Which is to say: religion at its best offers a sense of community, a foundation for moral self-improvement, and an answer to some deep questions about purpose and meaning for its participants, as well as, on the darker side, potentially a source of narrowness and an inability to confront other worldviews. When viewed in a totalizing way, or when aspiring to sort of supreme earthly power, it’s potentially a very dangerous direction. So yeah, I sort of understand these things roughly the way that I understand a faith community, with all the pluses and minuses that that has.

Fin 53:16
Maybe one follow up is: do you see this as a common feature of all similar ideologically motivated communities? In particular, do you think the things you just said apply to RadicalxChange? Or is there an important difference?

Glen 53:29
No, I think there’s an important difference, which is that I don’t view, say, Black Lives Matter as a religion, and I don’t view RadicalxChange as a religion - those are political movements. They do not aspire to be a complete community of moral solidarity for their members; they have a relatively contained set of socio-political goals that are usually fairly concretely stated. When you ask what they stand for, there’s usually a pretty easy way to convey that, or at least some slice of it that is reasonably actionable by the people who hear it. There’s not a deep regress into deeper and deeper claims about fundamental statements of who you are, you know what I mean? When I thought of EA and longtermism as political and social projects, which I did for a time, I was, I think, just much more straightforwardly negative on them. But when I came to understand them in the context of more of a spiritual project, I understood their place, and how they might be compatible with other things that I respect, much more clearly.

Fin 55:09
Okay. Maybe one way of saying back what I’ve just heard is that one feature of RadicalxChange, and of many other political social movements, is that they’re fairly well circumscribed - they have a more or less concrete, more or less easy to state set of goals. Maybe by contrast, something like Effective Altruism is a little more totalizing, in that the mission is something like: let’s figure out the ways to do the most good, all things considered, in some broadly impartial welfarist sense. And it’s hard to see where the wiggle room is there - like, what’s left?

Glen 55:52
And in particular, if you think of the classic stack - ontology, epistemology, ethics, politics - political movements really try to focus on the politics, and are usually extremely pluralistic about the rest of the stack. Religions tend to have a lot of focus on ontology, a fair bit on epistemology, and a lot on ethics, you know what I mean? And if you try to think about where in the stack this is playing, I think the fundamental commitments of people in the community are much more ontological and epistemological, and to a certain extent ethical, than they are political or social or aesthetic. And so, as such, I’ve come to understand them in those terms.

Luca 57:00
Yeah, I’m kind of thinking out loud here, but I’m wondering how much of this is a question of certainty and epistemics, or something like that? Maybe if it turned out that the best way to do good was very obviously donating to AMF, or some straightforward action to reduce x-risk, then the stated goals would be a lot easier to pin down, and EAs and longtermists would then look more like what you described before - a movement with one or two clearly stated goals that it focuses on, leaving the rest up to its members. Or is it something more fundamental, where if you are already asking the question of how to do the most good possible, then even once you’ve found the one or two most important goals, you can always still do a third thing and a fourth thing and a fifth thing, and it becomes very totalizing? Is that maybe a fair description of what you’re getting at?

Glen 58:08
Well, I mean, one reaction to uncertainty is to say, ‘We have nothing to say; go about your business’. Another reaction to uncertainty is, ‘We don’t have anything concrete to say, but we do have some method or set of principles that is the right way to come to knowledge’. And I think that initially EA had a little bit more of, ‘Oh, here’s what you should do’, and then people started realising, ‘Well, maybe not’. And when they did, instead of saying, ‘well, maybe this was a misconceived project; maybe we weren’t thinking about what we were trying to achieve in the right way’, they instead said, ‘no, we just need to go deeper down that stack; we need to make our commitments at these epistemological and ontological levels’. And they called that uncertainty. But I think that’s not quite the right way to think about it, because there are different ways of approaching uncertainty. There’s a simply socially humble way of confronting uncertainty, and then there’s a more religious way of confronting uncertainty, which is: ‘no, we think we have a way of understanding how to think; we think we have a way of understanding what are valid and invalid arguments and so forth, and we’re going to stake our commitments there’. And I think that’s the direction the community has taken, and that is more of a religious direction. For better or for worse.

Fin 59:50
Yeah, I feel like I have a couple of kind of disjointed questions, so this is just me thinking out loud again. The analogy to religion is interesting to me, and I'm not sure I'm fully buying it. So maybe one disanalogy is that, as far as I see them, organised religions centrally depend on some kind of faith - that is, a belief not founded on something like evidence or reasons, for better or worse. From my perspective, I don't see what the article of faith is, or at least if there is one, it's not some enormous leap; it seems fairly sensible. It would be like - analogy held - going to church and the priest is sitting there saying, 'We haven't quite figured out exactly which God to worship yet, but we're kind of running some experiments, we're kind of trying to discuss with one another what we should do.' And I would be more down for that kind of religious experience than the alternative. And then maybe another comment is that if-

Glen 1:00:42
Well, first of all, there are plenty of religions like that. I mean, most syncretic religions are like that. Baha'i is like that, Unitarian Universalism is like that. So there are spiritual communities that are precisely as you describe; I've actually been a member of several of them over time. But the first thing to point out is that the notion that, quote, 'there is no article of faith' is a red flashing signal that one is missing some basic things in the history of philosophy, which says there always is an article of faith. If you've read Hume, if you've read almost any epistemology, you know that there's always an axiom hidden somewhere, right?

Fin 1:01:30
Sure, but when people say 'article of faith', they don't mean you're assuming you can get over the problem of induction or something; they mean something more substantive, I take it.

Glen 1:01:37
Well, I would claim that once you start thinking about the long-term future, the only way to proceed is ratiocination, on premises that are often very hard to disentangle but have to be very strong. For example, if you take Toby Ord's book, you get to the question of why we should believe that AI is this thing that's coming, and he says something like, 'well, experts in this field think that this is happening' - which I'm not sure I actually agree with. But let's suppose you believe that. What's the basis, then? Well, you think there's this set of people who have a particular pattern of reasoning that gives them access to transformative changes that are likely to occur, with at least some reasonable probability. And what is the basis of that? So you start scratching a little bit lower, and you realise it's assumptions about the nature of intelligence and so on - and is that just Hume's problem? No, there's actually a lot more going on there, right? It turns out that what it's actually founded on is a bunch of substantive claims about the nature of knowledge in various of these fields, which are reasonably widely accepted amongst certain circles of people, you know what I mean, but are not at all widely accepted within the broader society, and certainly don't have the sort of scientific or evidentiary basis that something like climate change has. And so how exactly one distinguishes those from the, quote, 'faith-based' claims of religions, I think is very slippery. And they have a lot of eschatological character to them as well: there's this notion of some event that may occur, which is of great importance, and it's somewhat speculative, and it has been predicted repeatedly by people in a similar space for some period of time, and it does not occur - there are just a bunch of things that bear a lot of resemblance to very concrete features of religious practice, you know.

Fin 1:04:26
Well, speaking of the problem of induction, I would be wary of extrapolating from the fact that this thing hasn't happened yet to the conclusion that it definitely can't happen.

Glen 1:04:33
No, of course, exactly. I'm just saying that's true of the Second Coming of Jesus as well. There are many things like that - you laughed a second ago - and I really genuinely do not understand the basis on which people in this community would laugh at the Second Coming of Jesus while treating these other things as obviously serious. If I object to something, that's what I object to. It's the dismissive attitude that they have towards other ways of knowing and making sense of things that's fundamentally the issue, you know.

Fin 1:05:07
I see, okay, thank you. I can try narrating my own experience. I take you as saying that when you follow the trail of assumptions down from this big eschatological claim - general intelligence arriving like the Second Coming of Christ - you find that it bottoms out in articles of faith, on a par with the things that everyone would call articles of faith. In my experience, I find that it bottoms out in things which we definitely can't know about for sure, for obvious reasons, but which seem more or less reasonable to me. You mentioned that some of these longtermist claims - for instance, claims about the influence of general artificial intelligence - can't be grounded in any kind of scientific way that's equivalent to the way we can ground predictions about the effects of climate change in concrete observations of how the world is working; that's just hardcore science. And I think that's worth totally conceding - I think it's just true, and necessarily true: we're talking about a technology which has not yet arrived in the history of mankind. And so you're faced with a decision, right? I think it's clear that if something like this happened, it would be a very big deal. And so you've got to make a choice between something like giving up, and just trying to make sensible guesses given what we have to work with. And maybe that's the difference there.

Glen 1:06:40
Look, what we could say, empirically, I think, is that there is a certain sort of person who, when they get exposed to the religious teachings of the Catholic Church, reacts to those the way that you react to some of these claims about AI. And probably most such people will not react to the claims about AI the way that you react to them. And vice versa. That's an empirical claim that I think I'd be willing to stake quite a bit on being the case. And we could sociologically describe the sorts of people who are one way versus the other, right? And then we could describe one as reasonable or not - I'm not sure that's a very useful label to put on it. I think the only purpose that sort of label serves is to use a word that's highly normatively inflected in certain places to characterise one set of people in contrast to the other, you know. 'Rationalistic', I think, would be a reasonable phrase to put on it, you know?

Fin 1:08:10
Well, I take it you do, in fact, believe that some lines of argument are more reasonable than others, right?

Glen 1:08:17
I think that there are many different ways of knowing, and that ways of knowing that manage to resonate with many of those different perspectives are the ones likely to be conducive to social progress, and to a greater coming to grips with the problems that we face. A primary criterion that I use to make sense of those is not 'this is just more reasonable than that'. I think you can say that within the context of frames of reasoning and making sense of the world, and you can say that certain methods of making sense of the world resonate with many of those frames - I think those are both completely reasonable statements to make, and that those are likely to be productive in various ways, you know? But I don't think I would ascribe to certain schools of thought some inherent, fundamental superiority in their ability to make sense of the world over these other ways of thinking.

Fin 1:09:39
Okay, fantastic. I'm reluctant to draw this out too much, because there's a lot to talk about, but I think what you've said there - we haven't reached a conclusion, but it's nicely shown two modes of thinking which you often encounter when you're having these kinds of discussions, so I appreciate that. Luca, do you have a question?

Luca 1:09:55
Yeah, I mean, possibly at the risk of rambling on for a bit more, I've got one question I'm interested in on this thread. And I want to step away from 'superiority' or 'unreasonable' here. But we drew a comparison between AI and climate change. And clearly, when you look at climate change science now, there is so much evidence, so much literature - but that is itself, in the grand scheme of things, a really recent phenomenon. The actual discovery of global warming - which hopefully we'll do a separate episode on at some point - was a big task. Even in the 1950s or 1960s it was far from obvious, when we really didn't understand things.

Glen 1:10:33
I know that very well, because my grandfather, in the 1980s, was the pioneer of a theory of global cooling that at the time was almost on a par with global warming, and then turned out to be completely wrong, so.

Luca 1:10:48
Well, I guess - and as I said, this is a big tangent, and we're planning an entire episode about exactly this - I think the takeaway is that it wasn't obvious in the 1950s whether it was global warming or global cooling, what the mechanisms were, whether aerosol pollution was scaling quicker than the CO2 greenhouse effect, and the rest. But anyway, that's in the details. What I'm wondering is: suppose you were a scientist or a policymaker in the 1950s and 60s, where arguably you had a lot more influence over affecting climate change than people today have, just because you were much earlier on. You were faced with very limited evidence, and the science was still in progress. How does this question of faith, of reaching a conclusion about whether you should take this thing seriously, or whether you should just take a bet on it - either because it's very neglected now, or because it seems really important if true - play out in that case? What do you see as reasonable heuristics or approaches for that policymaker or scientist to take?

Glen 1:11:43
I mean, I think if there are concrete ways that show promise of getting greater clarity and greater information on this, that's desirable. If the only way to do that is string-theory levels of ratiocination, with no empirical verification in sight except at the cost of enormous amounts of resources, I would tend to have a lot more scepticism about the desirability of following that path - I think the string theory project has not been terribly illuminating, to take an example that's not the AI safety one, which was the natural one to give there. So I think something not that distant from what the scientific community actually followed was probably a pretty good course of action. And if there had been really crisp evidence that you could prevent some probability of climate change without causing some probability of some other problem, there could have been a case for making that public - but I think it would have been moot, because what would actually have needed to be done is worldwide, fundamental change to the way that everyone lives. And there's almost no way that you could have, or necessarily would have even wanted to have, persuaded people to undertake that change absent much stronger evidence than was available at the time. So I'm not sure exactly what one would have done that would have been better. I mean, even with the overwhelming evidence we have now, it's proven very challenging - and to some extent it's proven very challenging precisely because it's been approached in a manner that is very much 'here's the science, you guys are idiots', rather than a more inclusive discourse about how to bring people around to that consensus. So yeah, I don't think the approach taken was that far off, and I don't think banging on about it with less evidence would necessarily have led to a better outcome, so.

Fin 1:14:06
That's very interesting, thanks. I think we'd like to touch on one more point you've made about effective altruism and longtermism, which, speaking personally, actually resonates more with me. I think this was a tweet, or something you said in an interview: 'What has been most disappointing to me about longtermism, and to a somewhat lesser extent EA and rationalism more broadly, is how non-generative they have been.' So I was wondering if, first of all, you could just explain what you mean by that?

Problems with long-term planning

Glen 1:14:36
So let me say one more thing on the abstract critical side, which is that I think one very important thing to understand about long-term thinking is what is dangerous about it. And again, I want to compare this to religion: I also think that planning lots of things based on the Second Coming of Christ would be very dangerous, so I don't think this is different from other religions. But what's dangerous about long-term thinking? A problem with really long-term thinking, when it's not extremely grounded in things that are immediately perceptible, is that it's just very easy for reasoning to go very badly in a wrong direction if it's not anchored to many different ways of verifying that reasoning beyond pure rational deduction. You can quite quickly persuade yourself, based on chains of logical reasoning, of a good deal of different things, many of which are in great tension with each other. And that's very easy to see, just on the ground. Now, you might say that those things will quickly settle out, and then some sort of consensus comes out of that. But the problem is, I think that sort of reasoning is also likely, especially within a relatively limited community, even once it's reached whatever consensus it reaches, to still be vulnerable to that sort of problem and disruption and so forth. It's just a very fragile thing. And I often use the analogy to L'Hôpital's rule here: when you look very, very, very distantly, you can have a lot of very large effects. L'Hôpital's rule says that when you have zero over zero, the right way to figure it out is actually to look at the trajectory rather than the levels, right? And the problem is, there are just a lot of different things to potentially be said, and it's very confusing, and trying to draw any concrete conclusion from that sort of thing, if it's not supported by other types of reasoning that are much more concrete and grounded, is dangerous. Like, the number of times I've made enormous mistakes in mathematics when I treated something as pure mathematics and didn't ground it in some historical fact or some data - it's just really easy to do, you know. And so I think reasoning about things that are very distant is very fragile.
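[For reference, a minimal statement of the rule Glen is alluding to: where a ratio of two quantities is indeterminate at a point, you evaluate it from their rates of change - the 'trajectory' - rather than their values - the 'levels'. Assuming f and g are differentiable near a and the right-hand limit exists:

$$\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)} \qquad \text{when } f(a) = g(a) = 0.$$]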

Fin 1:17:15
I'll just comment very briefly on that. If you are saying 'it's close to impossible to predict how the world turns out beyond even fairly short timeframes, let alone to think about how to influence that future in some way that involves a long chain of reasoning', then I fully agree - and I also don't think that is what longtermism is about, as I understand it. It's not about projects which take a very long time and require this kind of predictive capacity. If an asteroid collides with Earth tomorrow, then I think I can confidently say that in 1,000 years' time there still won't be any humans walking around, and I don't think that involves a long chain of reasoning. And I think it's clear, then, insofar as this is likely, that maybe we should do something to prevent it. So there are cases where it's not a complicated chain of reasoning.

Glen 1:18:04
Yeah, I think that's fair. I also think that in most of those cases you don't need it - what you open the door to with longtermism, versus what could be reached by other means of persuading people, is much more inclined towards the things that I worry about. Persuading people that an asteroid could wipe us all out, and that that's a really bad thing, is not a particularly difficult thing to do - and we've been investing in it to some extent. It's when things get fragile that I get worried, and it's when things get fragile that I think it's really hard to use any other means to reach them. So when longtermism overlaps with lots of other things, I think it's a nice contributor. And that's how I feel about religions as well: I think religions add a lot, and when lots of religions can agree on something, that's a really good basis for a moral consensus in the world. On the other hand, when there's one religion that says something very specific and strong, I would tend to be very concerned that that's not a sound basis for a society making a decision at a broad level, so.

Luca 1:19:21
Yeah, I guess one thing this makes me think of, and that definitely resonates here, is when you start looking at what actual real-world implications and interventions this suggests - when you're super uncertain, so uncertain that you don't even know whether your total effect ends up being positive or negative. To take Fin's asteroid analogy: suppose we're thinking about an intervention where we're not even sure whether it will make it more or less likely that the asteroid comes towards us, but we know the magnitude of the outcomes is very large. I agree that if you're just grounded in Excel, or in maths, BOTECing it, it can be very fragile, as you said. And I think that definitely bites and resonates with me here.

EA and generativity

Glen 1:20:10
And so, coming back to the question of generativity - I didn't mean this to be a complete detour - I think one reason why many of these things have not been terribly generative is that the few things they have generated, I have pretty deep scepticism of. The one thing that some of my friends point to as something concrete that's been generated by this is the notion that if an AI ends up generating more than 1% of global GDP, all that money should be donated, or something like that - that's what people point to as a really good outcome. And I profoundly don't think that's a meaningful or thoughtful or desirable or good direction for us to think about alleviating these problems. I actually think it's a huge distraction, politically demobilising - I think it just sucks; I'm not a fan of it. And I think the reason it has ended up on that trajectory is that people put their focus on the long-term stuff exclusively, rather than saying: 'Okay, that gives me some general orientation - I should have a real concern about stuff that might be massively disruptive at global scale, and maybe I should really think about that sort of thing. And then let me ground myself really in history, politics and the concrete lives of people, or in what people think are crises, and just keep my eye out for things that might have that kind of disruptive scale.' I think that's a much better attitude to take than to view yourself primarily as a longtermist. That's the attitude I try to take within RadicalxChange: the reason I care about systems of democracy, of social liberty, is that I think if we don't sort those out, that's upstream of most potential global crises - Daniel Schmachtenberger has argued this. And by the way, you don't need to believe in global crises to care about that stuff; you can care about it for lots of other reasons as well. I think the concern about generativity is relevant to the extent that you view longtermism and so forth as socio-political, socio-technological movements. Christianity generated art, and it generated people who had certain attitudes, but it didn't particularly generate political ideas, and we shouldn't look to it to do so, you know what I mean? That's not the role of religion; the role of religion is to generate people who have certain types of attitudes, and to generate various forms of solidarity. And I think that's what we should look to longtermism for, if we think of it in those terms. If you think of it as a socio-political movement, then you should be really concerned that it's not generating meaningful paths of action. I don't consider 'we should put more money into longtermist thinking' and 'we should put more money into AI safety research' as meaningful contributions that one makes - that's completely self-referential. It's saying, 'Oh, what we're doing is really important, give us more money to do it.' That's not usually how startups work. I didn't start RadicalxChange by saying, 'Well, people should think about social technology problems, give me some money to do it.'
I started it with an idea of something concrete that people could do, you know what I mean? And that generated value for external communities - hopefully, or hopefully will. And as it manages to, I think there may be a chance that people will be willing to support further development in those directions. That's how a technological or social-technological project works, or should work, I think.

Fin 1:24:08
Okay, that sounds reasonable to me. I can imagine some people wanting to say something like: take biosecurity, which obviously seems to matter. But if we wait until the next pandemic to demonstrate the value that we've created by various biosecurity measures, then we've waited too long. And so in some sense, we're forced to do things before we can demonstrate the concrete value in really clear terms. Does that make sense?

Glen 1:24:32
Well, I mean, that may be true for some of these things, but even so - what are the paths of action that have actually come out of this? Biosecurity, we could talk about that; I actually think those are important things that we should be doing. But the countries that did that most effectively were the countries with better social infrastructure in general. Taiwan did it really well, and I don't think that was because EA people were in there doing a bunch of stuff about biosecurity; I think it's because they had a high-functioning society with good social infrastructure, and so they took care of a number of challenges really well. But we could argue about that. Take AI safety, for example: what's the great insight that people have come up with by talking about AI safety? The things people have pointed me to are this 1% thing, and we can talk about whether that makes any sense or not - I'm happy to do that, in fact, if you want. But I don't view that as an example of something great that came out of this line of research, for all the money that's been put into it, you know.

Luca 1:25:45
And to be clear, if I'm understanding you right, this would still be perfectly compatible with a consequentialist worldview - it just becomes a question of how we generate ideas that can reduce biorisk or AI risk. It seems to be more an idea of where you start: having a system or a community in place with a more generative culture, or incentives, or an underlying ideology, would be better even from a consequentialist view.

Glen 1:26:17
Yeah. I don't think you need to depart at the level of meta-ethics or something like that in order to see what is limiting about this sort of thing. I think you can depart at the level of meta-ethics, and maybe it's even useful to do that, because I actually think that communities grounded in practices other than consequentialism might actually be more generative in consequentialist terms. But I don't think you have to; I think you can justify it in consequentialist terms as well.

Fin 1:26:53
I wonder if one way of translating what you're saying into consequentialist, EA language is something like this: the point you mentioned about Taiwan, and the thought that maybe social technology - how we organise ourselves - is in some sense upstream of the particular concrete ideas, and that if we want to be more generative, then we should think at that level. Maybe some people will hear that and think, 'Okay, here's a new EA cause area; perhaps we've been underrating thinking at this level, and, for instance, following Taiwan's lead.' I suggest we close off that rabbit hole though. And I think we should begin -

Glen 1:27:34
Sure. And by the way, just to be clear, I hope RadicalxChange, and all the things it supports around it, will be funded by people within the EA space. I don't think their money is so dirty that I don't want to take it or whatever. I hope we'll be funded by Catholics, and I hope we'll be funded by Jews, and I hope we'll be funded by Effective Altruists, and I hope we'll be funded by Unitarians, and all those things, you know what I mean? I believe in pluralism. I do think it's a political project, not a meta-ethical project. And I think there are good consequentialist grounds for doing that sort of work; I make the case for it often, and I welcome the opportunity to do so.

Underrated historical thinkers

Fin 1:28:23
Neither of us are ambassadors for EA or anything like that, but almost certainly the reverse is true, right? Clearly EA should be welcoming insights from RadicalxChange, or from anywhere else. And this should look like a two-way exchange - a conversation, more than walled cities doing their own thing and competing over how to improve the world; that's just a ridiculous vision. In my view, and I think Luca's too, you are unusually good at finding, building on, and explaining ideas from figures in history - economists or writers or whoever - who, for whatever reason, have just been buried in history and go kind of overlooked by other people. Perhaps you could describe some insight or idea that you value from each of these people.

Glen 1:29:12
Sure. Love to.

Fin 1:29:15
Fantastic. The first person was someone we mentioned earlier, which is J.C.R. Licklider. What have you learned from him?

Glen 1:29:22
So J.C.R. Licklider was the ARPA programme officer who founded the ARPANET, who funded the first five computer science departments in the world, and who funded Engelbart, who really made the personal computer revolution possible. And I think there are a few things I learned from him. One is that there is a long tradition of thinking about the computer as a communication device, a device for human interaction - and that this is actually the most fundamental tradition: despite the whole AI narrative, the foundations of most of the technologies we have today actually come out of this tradition of thinking about computers as a way of facilitating human exchange. And that was really interesting. The second thing I learned from him is that the project of building the internet was really seen as a much more ambitious one than what it became - that there were much more substantive protocols around identity, data sharing and so forth that were always part of the imagination of where this would go, and the funding just ended up not being there. And a lot of the stuff that we're now talking about in the context of Web3, and whatever, is, I think, a predictable result of not having developed the elements that we wanted.

Luca 1:31:03
Fantastic. The next one, rattling on, would be Henry George.

Glen 1:31:09
So Henry George was the best-selling author in the English language, other than the Bible, for something like 30 years. His book gave the name to the progressive movement in the United States, and to the social gospel. He inspired the game Monopoly, and he was the first real centre-left candidate in the United States, in the modern sense of centre-left. And yet, again, he's almost totally forgotten. What I've taken from George, more than anything, is the spirit of the connection between economic ideas and real public engagement - the capacity to speak in the language of theology, in the language of popular mobilisation, and in the language of economics, all in one breath. And the idea of connecting the libertarian instinct with the socialist instinct.
Luca 1:32:13
Yeah, and I maybe don't want to read too much into this, but there's maybe an interesting echo here of our earlier conversation - Henry George and Monopoly, and our discussion around Civ 6 and creative media as a way to distribute ideas. But I'll leave it there.

Fin 1:32:33
Last one is Beatrice Webb.

Glen 1:32:36
Beatrice Webb was the founder of the London School of Economics and the Fabian Society, and one of the first people to theorise the role of labour unions in controlling monopoly power over labour - I think she laid the foundations for thinking about how labour unions help make the economy work better. And again, she was someone who worked at many different levels, from the cultural to the organisational to the political, as a sort of integrated approach to advancing ideas.

Fin 1:33:15
Fantastic. And we'll put books and links on the page. Let's move on to the very last questions, then. Luca, I think you have the first one.

Reading recommendations and outro

Luca 1:33:25
Yeah, and this is a question we ask all our guests: what are three books, articles, films or other bits of creative media that you would recommend to anyone interested in finding out more about what we've talked about?

Glen 1:33:39
Read 'How Taiwan's Unlikely Digital Minister Hacked the Pandemic' in Wired; that's a profile of Audrey Tang. I've got a piece coming out shortly called 'Why I Am a Pluralist', which is about the direction I want to take RadicalxChange ideas in soon. And probably Vitalik Buterin's essay 'Quadratic Payments', which is a review of how the quadratic stuff works.
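[A quick sketch of the mechanism that essay reviews, for the curious: under quadratic funding, a project's total funding is the square of the sum of the square roots of its individual contributions, and the gap over the raw sum is matched from a subsidy pool. The Python below is our illustration - the function name and example figures are not from the essay:

from math import sqrt

def quadratic_match(contributions):
    """Return (total_funding, matched_subsidy) for one project:
    total = (sum of square roots of the contributions) squared."""
    raw = sum(contributions)
    total = sum(sqrt(c) for c in contributions) ** 2
    return total, total - raw

# Many small donors attract a larger match than one big donor
# giving the same raw amount:
print(quadratic_match([1.0] * 100))  # (10000.0, 9900.0)
print(quadratic_match([100.0]))      # (100.0, 0.0)

]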

Fin 1:34:12
Last proper question is: are there areas or specific questions that you'd really like to see more good work on? And I have in mind that people listening might want to be doing this.

Glen 1:34:26
Well, 'Why I'm a Pluralist' will spell out a whole direction of thinking about what comes beyond the quadratic-type things. There are also the questions we talked about - the intergenerational questions with quadratic funding - which are really interesting. But others include: how do you think about cultural capital and meaning-making in terms similar to how we thought about monopoly power over property in the book? How do you think about what people are understood to mean, and about which styles of communication privilege whom and give power to whom - all those sorts of things, but in terms that allow actual mechanisms to improve on them and to adjust those balances of power? That's something I'm really excited about.

Luca 1:35:32
Fantastic. And then a simple question just to conclude: where can people find you and what you're working on online?

Glen 1:35:38
glenweyl.com, radicalxchange.org - that’s radical - lowercase x - change.org. And glenweyl on Twitter.

Fin 1:35:48
Glen Weyl, thank you very much.

Glen 1:35:50
Thank you.

Fin 1:35:51
That was Glen Weyl on radical markets, quadratic voting and criticising Effective Altruism. As always, if you want to learn more, you can read the write-up at hearthisidea.com/episodes/glen - that's Glen with one N. There, you'll find links to the books and resources that Glen mentioned, along with a full transcript of the conversation. If you get something out of this podcast, the best gift you can give us is to leave a review or a comment wherever you're listening - Apple Podcasts or whatever. Reviews make such a difference: they mean we know what kinds of things people appreciate about the podcast, and they also help it become more visible to new listeners. Also, if you have constructive feedback, there's a link on the website to an anonymous feedback form, and there's a star-rating form at the top and bottom of the write-up. And you can send suggestions, questions and whatever else to feedback@hearthisidea.com. Thanks very much for listening.