Liv Boeree on Healthy vs Unhealthy Competition
Liv Boeree is a former poker champion turned science communicator and podcaster, with a background in astrophysics. In 2014, she founded the nonprofit Raising for Effective Giving, which has raised more than $14 million for effective charities. Before retiring from professional poker in 2019, Liv was the Female Player of the Year for three years running. Currently she hosts the Win-Win podcast (you’ll enjoy it if you enjoy this podcast).
In this episode we talk about:
- Is the ‘poker mindset’ valuable? Is it learnable?
- How and why to bet on your beliefs — and whether there are outcomes you shouldn’t make bets on
- Would cities be better without public advertisements?
- What is Moloch, and why is it a useful abstraction?
- How do we escape big societal multipolar traps?
- Why might advanced AI (not) act like profit-seeking companies?
- What’s so important about complexity? What is complexity for that matter?
Liv’s recommended reading
- ‘Natural Selection Favors AIs over Humans’ by Dan Hendrycks
- Misalignment, AI & Moloch | Daniel Schmachtenberger and Liv Boeree
- The Culture series by Iain M. Banks
More resources
- The Win-Win podcast
- Meditations on Moloch by Scott Alexander
- Governing the Commons by Elinor Ostrom
On the analogy between powerful AI and powerful corporations
- Comment from Gwern on LessWrong —
Corporations are not superintelligences. They are, in fact, extremely stupid, much stupider than the sum of their parts (a million corporate employees sum to a lot less than a million times smarter human), suffer from severe diseconomies of scale, and subject to only the weakest forms of natural selection due to their inability to replicate themselves reliably leading to the permanent existence of very large dispersion in efficiency/quality between corporations. (You will never see a single especially-well-run corporation take over most of the business world, the way you repeatedly saw more-fit COVID viruses drive to extinction lesser variants.) They are so stupid that they cannot walk and chew bubblegum at the same time, and must choose, because they can only have 1 top priority at a time - and CEOs exist mostly to repeat the top priority that “we do X”. // Why then do we have corporations and they have any real-world power at all? Because they are simply very large and parallel and potentially-immortal, and are the least-bad organizations human minds can reliably form at present given the blackbox of human minds & inability to copy them. Not because they are optimal or intelligent.
- A Physicist Solves the City (New York Times) —
After buying data on more than 23,000 publicly traded companies, Bettencourt and West discovered that corporate productivity, unlike urban productivity, was entirely sublinear. As the number of employees grows, the amount of profit per employee shrinks. West gets giddy when he shows me the linear regression charts. “Look at this bloody plot,” he says. “It’s ridiculous how well the points line up.” The graph reflects the bleak reality of corporate growth, in which efficiencies of scale are almost always outweighed by the burdens of bureaucracy. “When a company starts out, it’s all about the new idea,” West says. “And then, if the company gets lucky, the idea takes off. Everybody is happy and rich. But then management starts worrying about the bottom line, and so all these people are hired to keep track of the paper clips. This is the beginning of the end.” The danger, West says, is that the inevitable decline in profit per employee makes large companies increasingly vulnerable to market volatility. Since the company now has to support an expensive staff — overhead costs increase with size — even a minor disturbance can lead to significant losses. As West puts it, “Companies are killed by their need to keep on getting bigger.”
Transcript
Intro
Fin: This is Hear This Idea. In this episode, I spoke with Liv Boeree. Liv is a former poker champion turned science communicator with a background in astrophysics. She co-founded the nonprofit Raising for Effective Giving around the time she was the number one ranked female poker player in the world. Since retiring from poker, she runs a YouTube channel and an excellent new podcast called Win-Win, which is linked in the show notes. We discussed the poker mentality, how to bet on your beliefs, the concept of Moloch, analogies between powerful AI systems and powerful corporations, and much more. This was a wide-ranging conversation, but I hope you enjoy it. Here’s Liv Boeree. Liv, thanks for being on the show.
Liv: Thanks for having me.
Fin: There’s a lot to discuss. I want to start by talking about poker. There seems to be a pipeline from playing professional poker to doing unorthodox, impact-focused work. You and Cate Hall are examples of this. How much of this do you think is down to selection? Are the same kind of people who want to do these jobs attracted to poker, or does poker help you learn the right lessons for these jobs?
Liv: I’m sure it’s a bit of both. The type of personality attracted to poker often loves abstract and mathematical concepts. People who choose to play poker full time at a young age tend to be against the grain. There’s probably some selection effect going on there. But some of us learned about effective altruism and quantified philanthropy around 2014, which created a ripple through poker. We met a group of Swiss philosophers who gave all the arguments to me, Igor, and a few other poker players. Stefan Huber, in particular, was playing poker to donate a bunch of his money to effective charities. That’s where the spark of interest came from. Poker teaches you to navigate uncertainty, think probabilistically, understand when your intuitions are useful versus when they are unhelpful, and recognize biases. All of that is ingrained in the game of poker, so it maps nicely onto the types of problems that we’re all focused on now.
Fin: Chess players, for instance, seem less disproportionately represented in this world of trying to have a ton of impact. Maybe that’s because chess is a game of perfect knowledge and certainty, and the world is not.
Liv: Chess is a game of perfect information. Both players can see the same thing. There’s no hidden information or luck. The best player wins every time. In poker, there’s so much variance, especially in the short term. One of the skills you have to develop is learning to recognize whether your results are a result of your skills or luck.
Searching for signals in noisy data is a key part of much of the philanthropy work we do. For this reason, I believe poker is the superior game for honing business and decision-making skills. It’s a closer analog to life than a game like chess.
Fin: Anyone’s smartphone can easily beat anyone in the world at chess. However, this hasn’t killed chess. In fact, people enjoy watching computers play chess. Is this the case with poker? Are AI now better than humans at poker? Is that good and interesting?
Liv: It’s TBD. It’s certainly made it harder to become a professional. I wouldn’t recommend anyone become a professional poker player now, especially if you want to play online. Online poker is dead at high stakes because you’ve now got real-time solvers. Back in 2015-16, you could find the game theory optimal solutions to different situations, but it would take about 8 hours to run a simulation. Now, you can get a good answer in less than a minute. The latest I heard is about 10 seconds. This makes it less appealing to play against someone online who could be using one of these things: I’d essentially be playing against a near-perfect machine. Where’s the fun in that? As for whether it makes the game less interesting for spectators, I don’t know. Chess isn’t as popular as it was in the Bobby Fischer days, but it has gone through a resurgence. The world championships in 2018 were energetic and well-attended. Players have to play in a glass box, isolated from any interference from the audience, while we spectators have near-perfect solutions in our hands just using an app.
Fin: Even a 2-hour game of chess, with moves I don’t understand, can be made interesting by the commentators.
Liv: I’m back to being completely addicted to chess. I’m terrible at it, but it’s still exciting. Before, there was a mystery of what the players were going to do. All that knowledge was locked up in the players’ heads. Now, you can sit and know what they need to do. It’s a different way of spectating the game, but it’s not less fun. In poker, I haven’t seen anyone using a solver and watching a livestream to see if the correct play is made. However, now that we have these real-time solvers, maybe that will start happening. It would certainly make it interesting for someone like me, who doesn’t really watch the game anymore, to come back and watch.
Let’s see what the latest trends are among kids.
Fin: When you run these little solvers that produce the game theory optimal answer, is the outcome ever really surprising?
Liv: Some of the results are really counterintuitive. For example, the first superhuman-level bot, Libratus, came up with and implemented some really strange strategies: things like betting a tenth of the pot on the turn and then check-raising 7x pot. These are bizarre sizings that no human would use, but it knows the most unexploitable sizings. That’s what game theory optimal means. It doesn’t mean it’s always the most profitable, especially if your opponent is making mistakes; then you might want to deviate and exploit them. But you cannot be exploited by your opponent if you’re playing game theory optimal. It was really interesting to see the humans’ reactions. They were confused when the bot bet 6x pot into them. In most games, people don’t ever bet more than 2x pot; usually, it’s under 1x pot. But the bot’s unusual strategy was likely causing them to make huge mistakes, which it could exploit even more.
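To make “unexploitable” concrete, here is a minimal sketch of the standard toy bluffing game from game theory (illustrative numbers, not Libratus’s actual strategy): at the equilibrium frequencies, neither player can gain by deviating.

```python
# Minimal sketch of the classic toy bluffing game (illustrative, not
# Libratus's actual strategy). The bettor holds the nuts or air with
# equal probability, always bets the nuts, and bluffs some share of air;
# the caller calls some fraction of the time.

P, B = 1.0, 1.0  # pot size and bet size (a pot-sized bet)

# Equilibrium ("GTO") frequencies, from the indifference conditions:
bluff_ratio = B / (P + 2 * B)  # share of bets that are bluffs: 1/3
call_freq = P / (P + B)        # caller's calling frequency: 1/2

def caller_ev(p_bluff: float) -> float:
    # Caller's EV of calling, relative to folding (EV 0):
    # wins pot + bet against a bluff, loses the bet against value.
    return p_bluff * (P + B) - (1 - p_bluff) * B

def bluff_ev(opp_call_freq: float) -> float:
    # Bettor's EV of bluffing air, relative to giving up (EV 0):
    # wins the pot when the caller folds, loses the bet when called.
    return (1 - opp_call_freq) * P - opp_call_freq * B

print(caller_ev(bluff_ratio))  # ~0 -> caller is indifferent, can't exploit
print(bluff_ev(call_freq))     # ~0 -> bluffer is indifferent, can't exploit
```

At these frequencies, neither side profits by changing strategy, which is exactly the unexploitability described above; against an opponent who calls or bluffs at the wrong rate, deviating from these numbers earns more.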
Fin: This leads to an abstract question. When AI gets good at a wide range of things, including negotiating on behalf of people, how weird will the strategies it comes up with be? One way to predict this might be to look at games, which are kind of like examples, and see how alien the optimal moves generated by these computers are compared to human moves.
Liv: That’s a very good point. It’s hard to imagine what it would be like to negotiate against an AI. In a live auction scenario, for example, someone using an optimal AI might employ some really strange tactics.
Fin: Poker involves bets, and you’ve been known to make bets off the poker table as well. Why is it generally good to do this, given that most people aren’t in the habit of constantly making bets about real-world outcomes?
Liv: Making bets can be a good thing because it’s essentially a tax on nonsense. If someone claims they can do something or that something is true, you can challenge them to bet on it. As soon as you ask people to put money on the line, they often change their tune. So it’s a good way of holding yourself and others accountable. It also forces you to think through a problem more thoroughly. Additionally, you can use betting as a way to incentivize yourself to do something that you might otherwise not want to do.
There’s a significant trend in the poker community to engage in health bets, whether it’s running a mile in a certain time or weight loss. These can be effective ways of incentivizing people to achieve a goal.
Fin: How does that work concretely? For example, if I’m trying to get a new personal best on my squat?
Liv: You need to consider your odds. What is the goal you want to achieve, and what is the likelihood that you’ll accomplish it? Ideally, you want to bet against someone who isn’t trying to take advantage of you. Sometimes you may need to give them a bit of an edge.
Fin: You mentioned something about a shark and juice?
Liv: A ‘shark’ is a skilled bettor, and ‘juice’ is the extra edge in the odds you offer to entice someone into a bet. The terms come from poker, where a good player is called a ‘shark’, a poor player a ‘fish’, and a poor rich player a ‘whale’. Everyone’s looking for a ‘whale’. Depending on who you’re betting against, assuming they’re looking for fair odds, you try to figure out what those are. A good example recently was Igor and his friend Bill. Igor claimed he still had residual fitness from his running days, but Bill doubted this since Igor hadn’t trained or run in years. After some back and forth, they decided to bet on whether Igor could run a 7-minute mile the next day.
Fin: So you agree on odds, specifically?
Liv: They decided to bet 1 to 1. They then had to decide on the line, or their indifference point, which they settled on as 7 minutes. The next day, despite the heat and poor pacing, Igor managed to complete the run.
Fin: But I mean
Liv: Our friend Bill loves these bets. He’s a good example of someone who bets for the right reasons. He truly wants to incentivize his friends to get in better shape. There’ve been some really funny ones. For example, one guy had to do lunges for every step he took for 24 hours during a poker tournament. This turned out to be quite a challenge, especially when it came to moving around a lot. It’s a cool way of holding yourself accountable, incentivizing yourself to do things you wouldn’t normally do, and making life a bit more fun. They don’t even have to be for money.
Igor and I share our resources, so we don’t focus on money. We value our time, though. That’s the resource we both care about. I don’t want to compile my receipts for my tax return.
Fin: You could do it for me. One of us has to do this.
Liv: Do it for me.
Fin: Let’s decide.
Liv: We’ll bet time against each other, maybe in 15-minute increments.
Fin: I like that. Things get interesting when we’re trying to settle on odds. It’s interesting when I expect my guess to influence your actual chance. If my implied odds suggest that I think you’re very likely to do it, maybe that motivates you and makes you even more likely. It raises the question of how or whether you can find an implied probability that doesn’t change.
Liv: As soon as I tell you, it changes.
Fin: You can find a fixed point where it doesn’t change, but it’s not always easy.
Liv: Usually you can.
Fin: For example, imagine a graph where the y-axis is your actual chance of releasing your next podcast episode within a week, and the x-axis is the probability I tell you. Your actual chance is a function of my stated probability. If you don’t care what I say, it’s a flat line; if you do, it could be a wiggly curve. Wherever that curve crosses the diagonal, where my stated probability equals your actual probability, is a fixed point. I’m looking for those fixed points.
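A minimal sketch of that fixed-point search, assuming a made-up response curve rather than anything from the episode:

```python
# Minimal sketch of the fixed-point idea, with a hypothetical response curve.
# f(p) = your actual chance of doing the task if I state probability p;
# here, hearing a higher estimate nudges you up from a 50% baseline.

def f(p: float) -> float:
    return 0.5 + 0.3 * p  # assumed motivational effect, purely illustrative

# Solve f(p) = p by bisection on g(p) = f(p) - p (g is decreasing here).
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if f(mid) - mid > 0:
        lo = mid
    else:
        hi = mid

print((lo + hi) / 2)  # ~0.714: stating 5/7 leaves your actual chance at 5/7
```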
Liv: That’s a very analytical way to do it.
Fin: It’s a very nerdy discussion.
Liv: I’m sure it happens in professional sports betting, but it certainly doesn’t happen in the poker community when someone is deciding whether they can swim across to that island 5 miles offshore without dying.
Fin: What are some bad bets to make, or circumstances where it might be unwise to bet on something?
Liv: I think bets where you’re incentivizing someone to do something they wouldn’t otherwise want to do are bad. For example, there were these horrific things called “bum fights”, where people would pay homeless people to fight each other. It was exploitative: it took advantage of disadvantaged people who needed money and incentivized them to do something that was clearly bad for them. Betting on whether a couple will last is also bad. An acquaintance once bet that Igor and I wouldn’t last longer than a year. It was distasteful. Some bets are fine, though.
There was a man who bet that he could live in the bathroom of a Bellagio hotel room for a month without leaving. Someone accepted his bet. This was an example of incentivizing someone to do something that wasn’t necessarily good for them. However, he proposed the idea, everyone found it amusing, and he seemed okay at the end of it. So that’s alright. But generally, anything where you’re incentivizing someone, particularly in an exploitative way, to do something they probably wouldn’t otherwise do, or that their better judgment would tell them is a bad idea, is not the kind of bet that would be beneficial if the whole world copied it.
Fin: I found that really interesting. Even in general, when you think about free exchanges of money for doing something, there are some that feel like they shouldn’t be allowed. You can imagine a company paying someone to tattoo their logo on their face.
Liv: Right.
Fin: You could argue that it’s a consensual exchange and both parties believe they’re better off. But there’s something undignified about it and it’s hard to articulate exactly what that is.
Liv: Yes, people have an intuition that there’s something wrong about that, but if you break it down, there’s no clear step. With the tattoo example, let’s say both parties are of sound mind, their brains are fully formed, they’re both of age, and the person receiving the tattoo is over 25. They’re being paid a substantial amount of money and will walk away happy. Then technically, I lean more on the side of giving people freedom of choice. It’s their body. I’d much rather live in a society where someone has the choice to do something like that than not. However, I would also rather live in a society where someone doesn’t feel the need to do that in order to survive or get by.
Fin: Yes, I agree.
Liv: If it’s done in the spirit of fun, then I guess it’s okay. This leads into the question of surrogacy for money. Paying someone to carry a child for you is a free exchange. Everyone’s an adult and there are checks and balances. I personally think that’s okay. However, I understand why people might feel uncomfortable about it. It’s something about commodifying your body. I oscillate on this issue. It’s not clear to me where the ethical line should be.
Fin: I understand.
Liv: It feels like there should be an ethical line, that’s my point.
Fin: One way to explain this feeling, which I share, is to say that there are certain cases where it’s possible to know better than the person making the choice. But you don’t really want to go there to explain why it’s not good for companies to pay people to tattoo their logos on their faces. So what else is it? What explains these intuitions? It’s unclear. Dignity feels like a relevant word.
Liv: It’s the idea of dignity, some notion of sacredness. Some things should be held sacred and not commodified. I believe that. For instance, I remember seeing a drone show. I love drone shows as art; turning the sky into a sculpture made of pixels is really cool. However, when I saw a Candy Crush advertisement made out of drones across the city, it created a visceral reaction in me. I think it’s because it’s the start of a slippery slope.
Fin: It feels like a race to the bottom.
Liv: Exactly. In isolation, if it’s done first for a singular art piece or to make a point of commodifying the sky, that’s fine. But if we’re going to start letting people use our night sky as advertising billboards, that’s the beginning of a race to the bottom. This resource will eventually get completely depleted if some sort of regulation or limiting factor doesn’t come into play. The sky is a common resource that has unified humanity for millennia, and we really should avoid commodifying it for the highest bidder.
Fin: This point is underrated. Think about advertising in general, not just advertising in the sky. It benefits the advertiser for people to see the ads. But do random billboards generally enhance the vibe of a place? Typically, no. Notably, most films, especially period films set in cities, tend to contain basically no adverts, perhaps because adverts are slightly ugly. So maybe there’s a case for making it harder to advertise, or for taxing advertisements.
Liv: Words themselves are not particularly aesthetically pleasant. They’re harsh, especially in a nature scene. A word is pretty blocky, and it has low information density.
Fin: Some advertisements are really beautiful.
Liv: That’s the thing.
Fin: I like some advertisements.
Liv: A friend of mine made a good point. She would rather live in a world where there is at least one city with dense advertising, like a Blade Runner-esque cyberpunk style with billboards everywhere.
Fin: I see.
Liv: I agree with her. It’s a more interesting world with that juxtaposition. If nothing else, we’d see what’s possible. Let’s hand a patch of space over for full commodification, billboards everywhere, like Vegas or Miami. But let’s draw a firm fence around it and don’t let that bleed into everything else, into the suburbs, and beyond, into farmland, the commons in general, and certainly not national parks. That’s a more interesting world. That said, if the choice was to let it bleed everywhere or have no advertisements at all, then I would rather live in the world with no advertisements.
Fin: I appreciate the idea of embracing diversity.
Liv: Some people genuinely appreciate that aesthetic. I’d rather live in something like the Culture series, where everything that could potentially exist does exist. It’s a more interesting universe. For example, let’s have a planet that is completely given over to Moloch-style commodification. It’s optional. You can leave if you want, although in reality, people probably can’t.
Fin: This book uses the phrase “the coral reef vision of the future”, where there’s something for everyone. As long as people always have the option to go somewhere else, more diversity is better than less.
Liv: However, for a dystopian Blade Runner-style planet or city to live up to its definition, people would not be able to leave; that’s what makes it dystopian.
Fin: Like a form of voluntary slavery.
Liv: Exactly, like an art piece.
Fin: I see.
Liv: You would want it to stay that way.
Fin: Which means we’re getting into some ethical problems. Earlier, you mentioned that one reason we might not want certain kinds of voluntary exchanges to be easy is because you might expect a race to the bottom dynamic, where each individual is making themselves better off, but collectively it becomes worse for everyone.
Liv: Yes.
Fin: This kind of thing has a name, right? Which is Moloch, something you’ve talked about. How would you summarize this whole Moloch idea?
Liv: There are various definitions. The first person to put it into game theory terms was Scott Alexander, in his blog post Meditations on Moloch. He described Moloch as the god of multipolar traps: these poor Nash equilibria where the incentives encourage each person to take an action which, in aggregate, creates a worse outcome for the whole. A classic example is fish farmers sharing a lake. It would be best if everyone installed a more expensive filter on their fish farms to reduce pollution. If everyone farms as many fish as they can without a filter, the lake gets more polluted, which lowers the number of fish and everyone’s yields. Whereas if they all do the individually slightly more costly thing of installing a filter, everyone ends up making more money overall. But then each individual has an incentive to quietly turn off their filter and save that money, gaining a competitive advantage over everyone else, and the whole system falls back down to the same poor state. A visual analogy I like is a football stadium at a concert, where everyone starts off sitting down on a sloped stand.
Some people at the front of a crowd decide to stand up to get a better view, forcing those behind them to also stand up. Soon, everyone is standing, and due to the noise, there’s no effective way to coordinate and sit down together. Consequently, everyone ends up standing for the rest of the show, resulting in tired legs. Even though the initial aim was to gain an advantage, no one now has one, and they are stuck in this unfavorable state. This scenario is an example of a classic multipolar trap, another term for a coordination problem. Everyone would be better off if they coordinated, but the short-term incentives acting on each individual make coordination almost impossible.
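A minimal sketch of the fish-farm trap described above, with made-up numbers: running a filter is collectively best, but skipping it dominates individually, so everyone defecting is the bad equilibrium.

```python
# Minimal sketch of the fish-farm multipolar trap, with made-up numbers.
# Each of N farmers chooses to run a filter (cost 1) or not; every
# unfiltered farm pollutes the shared lake, cutting everyone's yield.

N = 10
BASE_YIELD = 10.0    # revenue per farm in a clean lake
FILTER_COST = 1.0    # cost of running a filter
POLLUTION_HIT = 0.5  # yield lost by every farm per unfiltered farm

def payoff(i_filter: bool, others_filtering: int) -> float:
    unfiltered = (N - 1 - others_filtering) + (0 if i_filter else 1)
    return BASE_YIELD - POLLUTION_HIT * unfiltered - (FILTER_COST if i_filter else 0.0)

print(payoff(True, N - 1))   # 9.0: everyone filters, each nets 10 - 1
print(payoff(False, N - 1))  # 9.5: one farmer quietly defects and does better...
print(payoff(False, 0))      # 5.0: ...but when everyone defects, each nets only 5
```

Skipping the filter gains 0.5 no matter what the others do, so it dominates; yet all-defect (5.0 each) is far worse than all-cooperate (9.0 each). That gap is the trap.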
Fin: So, in the stadium example…
Liv: Yes.
Fin: Everyone has made a decision that benefits them. No one is missing out on crucial knowledge. They’ve made the correct decision for themselves. But in the end, every single person is worse off than when they started.
Liv: Exactly. That’s a classic multipolar trap. I’ve expanded on the definition in my films, focusing on the misaligned incentives that drive these scenarios. What they have in common is competition, specifically competition that creates negative externalities. You could call it negative-sum games or unhealthy competition. Competition itself is neutral; it entirely depends on the incentive structure within it. It can lead to a race to the top and positive outcomes, like the Olympics, which is a positive-sum competition: it brings the world together every four years, provides entertainment, and makes athletes rich. However, competition can also be unhealthy and negative. From a zoomed-out perspective, Moloch is the god of unhealthy competition, of competition gone wrong.
Fin: We often hear about negative-sum and positive-sum. Where do these terms come from? What is the sum that’s negative or positive?
Liv: In a game of chess, we play for who wins versus who loses, which is a zero-sum game. Most games are zero-sum: there’s a finite number of points that some people win and others lose, usually adding up to zero. However, in reality, there’s no such thing as a truly zero-sum game because there are always externalities. Even in a zero-sum game of chess, we may learn something or become better friends, which are positive externalities. Though it deviates from classic game theory, I like to say that’s a positive-sum game, because the universe’s pie has grown due to the game’s existence. Conversely, a game which makes the world worse could be considered a negative-sum game due to its negative externalities.
That’s why I prefer the terms healthy versus unhealthy competition because it’s more intuitive and doesn’t provoke angry game theorists.
Fin: I wonder if there are some games where only by working together in a certain way can you increase the total amount of resources you can share between yourselves. However, it may be that the only equilibria, Nash equilibria, involve one of the players defecting early on. So, not all positive sum games are necessarily healthy. I quite like the phrase ‘healthy’.
Liv: Yes, exactly. This can also result in bad outcomes. This is where we need some form of quantification system, perhaps utilitarianism or something similar, to measure the externalities. It’s about trying to measure all these different variables, whether it’s happiness or something else.
Fin: That’s famously easy to do.
Liv: Value, yes. I think a lot of people have a decent heuristic or intuitive sense of whether a competitive process is resulting in outcomes that are a net bad for the world. One of the examples I gave in my first film was beauty filters on Instagram. Do you want me to talk about those?
Fin: Absolutely.
Liv: So, I was trying to play the Instagram game, build my following, and I noticed that if I posted a photo with fewer clothes, it would get more likes. Also, when beauty filters started appearing, which subtly adjust your features, I noticed that if I used them, photos would get more likes than if I didn’t. There was a strong incentive to use these filters. However, using them has clear negative impacts, not only on yourself but also on others. For example, I would upload a photo that I previously loved without the filter, and once I’d seen it with the filter, I no longer liked the original. I no longer liked my natural face, which is problematic. Also, I’m misleading all my fans, who see an artificial picture of me and feel bad about themselves. I spoke to other influencers, and they all agreed these filters made them feel terrible. We should all stop using them, but because there are so many people, it’s hard to coordinate. Social media influencing is competitive, and you are competing against others within your niche. Even if other girls aren’t using the filters, if you think they are, you’re incentivized to use them anyway.
Beauty filters and lose-lose competition
It’s a classic example of what we call a multipolar trap, or as I’ve rebranded it, a Moloch trap. This is an archetypal example of a mechanism where you don’t want to do something, but if you don’t do it, someone else will, so you do it anyway. This mechanism drives almost every major problem we face, whether it’s a company cutting corners on pollution regulation to gain an edge, or farmers on the edge of the Amazon Rainforest destroying the land under strong incentives to acquire land or sell wood for lumber. It’s the same mechanism.
Fin: Absolutely.
Liv: This also applies to the race to AI. If there are strong race conditions, companies are incentivized to go as fast as possible, which often compromises safety. It’s the same dynamic, and that’s what Moloch is.
Fin: Giving all these different dynamics the same name is clever because it highlights the common thread in all these issues.
Liv: Correct. That’s the monster. I’m not saying it has agency or that it’s a real demon, but it’s helpful to give this blind, dumb collection of economic forces a personality, a character, a persona, because humans remember things through stories.
Fin: Absolutely.
Liv: We like the idea of having good guys and bad guys. This is a clear case. If we’re going to have a bad guy, this is it.
Fin: It’s like an anti-villain because it has all these horrible effects, but there’s no one in charge. There’s no head you can cut off to kill the beast, right?
Liv: Exactly.
Fin: I remember studying Marxist theory in philosophy. We discussed why people wear makeup or follow the latest expensive fashions even when they’d prefer not to. The explanation was that people have absorbed this ideology and are living with a false consciousness where they don’t understand what’s best for them. But the obvious answer is that people do know what’s best for them and are making those decisions.
Liv: Yes. In the short term, that is the most winning strategy, and that’s why they’re doing it. We can see that in the long term, it might not be the best, but they’re stuck in a short term game. If you’re trying to attract a mate, you’re going to get more attention if you’re made up and wearing heels.
Fin: Exactly. You can play this game and be perfectly aware that you’re inside of it and that it’s unfortunate that it exists.
Liv: Another way to frame it is that Moloch originally came from an old Bible story about a cult that would sacrifice everything, including their own children, to this god in order to gain military might and power to win wars.
The common thread is sacrificing other important values in order to win at one specific goal. With beauty filters, you sacrifice authenticity or self-worth to gain more likes and followers. The same applies to racing to be the first to develop a powerful technology, often sacrificing safety for the win.
Fin: Certainly. I’m curious, as you said, it’s incredibly difficult to escape from these traps, which makes them quite fearsome. However, there have been instances where it seemed the world was stuck in such a trap, but managed to escape, or at least some parts of the world did. Do any stories come to mind of successful coordination to escape these traps?
Liv: Yes. A classic example is the ozone layer crisis of the seventies and eighties. We understood that it was being caused by CFCs, which were an integral part of refrigeration. Despite the massive economic incentive to use these gases, we managed to create the Montreal Protocol, which banned the CFCs, and people stopped using them. They found a way to innovate and build the same technologies without those harmful substances.
Another excellent example is the reduction of nuclear weapons on Earth. We were in a race to produce more and more nuclear weapons, reaching a high point of around 60,000, ten times the number needed to destroy the world. However, we managed to reduce the number to around 12,000. And not every country on Earth has nuclear weapons: countries like Argentina, despite having the incentive, chose not to develop them. They stepped out of the game, taking a short-term risk for long-term good.
Another instance is the Antarctic Treaty signed in 1959, by 12 countries to preserve Antarctica for only peaceful scientific purposes. So, it’s possible to coordinate even in these large multipolar trap situations.
Fin: Yes, indeed.
I wonder if there’s a common thread to how these things all happened. There’s a perspective where you’re viewing things from the standpoint of a theoretical economist, modeling all the actors as perfectly self-interested with no coercive force to bind agreements. From this perspective, it’s difficult to understand how these agreements form. However, they do form. Perhaps this is because a country, for instance, isn’t a perfect self-interested agent. It’s made up of individuals who care about the future of the world and scientists who care about seeing their findings turned into sensible policy. Humans care about one another, at least to some extent.
Liv: Humans are naturally very cooperative. We can be competitive, but we are also one of the best species on Earth at cooperating with each other. We’re like apes who behave like ants, able to coordinate millions, even billions of us, to build incredibly complex civilizations. This wouldn’t be possible without our default behavior, which is more cooperation and coordination than competition. However, there are parasitic forces that arise when certain ingredients are right, where the incentives are strong enough, scarcity is strong enough, and the short-term incentives acting on the individual are powerful enough to create negative spirals.
Fin: A useful analogy might be immune systems. In bodies, occasionally a cell will go rogue and start replicating itself. In some sense, it’s not cooperating anymore, but becoming self-interested. The reason we don’t immediately die of cancer is because we have immune systems. We also have collective immune systems like police forces, and subtler social pressures against antisocial behavior.
Liv: We evolved from tribes that did better when they cooperated and had strong immune responses to selfish behavior. If someone took more than their fair share, they’d be ostracized, so there were powerful deterrents to selfish behavior.
Fin: If someone was born with a mutation that made them incredibly selfish, you might think that mutation would quickly overrun the population. However, people recognize selfish behavior and counteract it. This is why not everyone is a psychopath.
Liv: If everyone was a psychopath, our species would have died out. But that doesn’t mean that a psychopath, with enough technological power, couldn’t overrun and ruin the game for everyone.
Fin: It’s currently easy to detect and counteract psychopathic behavior.
Liv: Our immune response was always sufficient in pre-exponential tech days. But now we have exponential technologies.
Fin: Yes.
These arguments could be applied to AI. There’s a concern that psychopathic tendencies in AI systems could overrun the population, even if no one wants this outcome. This should be worrying.
Liv: Indeed, especially if we start training AIs on adversarial datasets, such as the internet. Do we really want to train a language model on 4chan? It’s a highly adversarial, unpleasant environment. Even something seemingly innocuous like training an AI on Call of Duty could produce an AI that’s been exposed to a very realistic, yet negative aspect of human life. These are not the values we want to instill into a powerful technology that has agency.
AI is a multipurpose tool that can be used for almost anything, achieving almost any goal. That’s the definition of intelligence: the ability to achieve goals across a wide range of environments. Hence, wherever there’s an economic incentive to use AI, it will be used. This leads to the concern that as AI becomes more ubiquitous and easy to implement, any company or system that is misaligned with the good of the whole will be accelerated.
For instance, if we get better at marketing alcohol using AI, this could be problematic as the alcohol industry is arguably misaligned with the good of humanity. A more extreme example would be the meth industry. If meth dealers suddenly have powerful AIs to sell as much meth as possible, that’s a clear example of a misaligned industry being amplified by AI.
This is a near-term problem we need to address. Arguably, many of our industries are misaligned with what’s good for humanity, as evidenced by the damage done to the biosphere over the last 50 years. Even industries that use a lot of energy could be problematic if AI speeds up their rate of economic growth, given that most of our energy still comes from fossil fuels. On the other hand, we could use AI to figure out how to produce more clean energy. It’s a race of exponentials, and the question is which one will win.
Fin: Let’s ensure the good ones maintain parity. I’m curious to zoom in on the analogy you suggested between advanced AI systems and companies. There are perhaps some similarities.
Companies are often more competent than any individual person. They’re powerful and often act like agents, with their goal being to maximize profit within the law.
Liv: Yes.
Fin: To achieve that goal, they act as if they have instrumental goals. They want to accumulate influence and improve their reputation through advertising. They can also cause harm, like in the case of climate change, when they’re not internalizing those harms. It becomes someone else’s problem. I’m curious about how you think about this analogy. What lessons can we learn from it with respect to AI?
Liv: I heard someone recently describe how a corporation’s incentive structure is designed to behave like a psychopath. It doesn’t inherently have a conscience. It only has a conscience if the people who make it up have a conscience, which is usually the case. However, the way the corporate structure is set up is that they have one main optimization function, which is to maximize shareholder profit. This is fine if that metric aligns with not just the people within the company, but also the customers and the wider community. If a company’s profits are directly aligned with a healthy biosphere, mental health of humanity, and a healthy informational landscape, then that’s an example of an aligned company. But in most cases, there’s usually some misalignment.
Fin: Right.
Liv: That’s where externalities come in. Trying to put a dollar value on certain things invariably loses a bunch of information.
Fin: I guess we were talking about that at the start, where it’s hard to put a dollar value on aesthetics.
Liv: Right. There are these intangible values which are incredibly hard to quantify. Markets need values to be quantified. Any values that aren’t being quantified are not being incorporated into the market. They’re not being priced in. So these companies, it’s not that they want to be doing this, but the structure of the market is set up to fail in terms of alignment. If you then stick a bunch of AI on it to speed it up and make it more efficient, you’re just going to amplify that misalignment further.
Fin: Yeah.
Liv: One thing we have seen that has worked to a degree with stopping companies from being too out of whack is regulation. We have added regulations on casinos to make sure that they are doing at least some modicum of checking whether people are getting too addicted to their products. We put regulation on polluting industries. If they were left to their own devices, they are not incentivized to switch to more expensive, cleaner fuel sources. But with regulation, they get penalized if they don’t. It’s not working perfectly, but it’s at least slowed down the problem a bit. Regulation has certainly helped. So that suggests to me that we should be considering that when it comes to these new AI agents. We need some degree of regulation there because that’s been our best mechanism thus far of minimizing the externalities of slightly misaligned industries.
Fin: Yes, I totally agree. Also, in most industries, especially consumer industries, there are clear lines of liability.
If I get food poisoning from a meal at McDonald’s, I’m confident I can sue them. This gives them a clear reason to avoid such an incident. However, it’s currently unclear where liability falls for an AI model that is leased out by one company to another. If it’s unclear who to sue, then there’s no deterrent against allowing such a situation to happen. Drawing from these analogies seems useful.
The general idea is that companies have something akin to a utility function, which can be roughly described as the profit motive. This often overlooks real harms and benefits, failing to internalize them. AI systems, by analogy, would be optimizing even better and presumably causing even more harm as long as they’re not internalizing those effects.
Liv: Mhmm.
Fin: I’m curious if there are disanalogies or places where this comparison breaks down. If the AI risk arguments apply to companies, why haven’t companies caused ruin? Why isn’t there just one company that’s taken over the world? We seem to be coexisting with companies pretty well, and I benefit a lot from them. So, what’s going on?
Liv: We are coexisting well, but would we continue to if the status quo persisted for the next 50 years? Would we already be in more trouble were it not for regulation, liabilities, and checks and balances that try to align incentives? I’m not sure. It’s a topic I feel conflicted about. On one hand, I’m enjoying my life with all these wonderful gadgets. On the other hand, the more I read into it, the more I realize how much we’re borrowing from the future.
Many of the rare earth materials used in computers or iPhones are massively subsidized. We’re not factoring in the real rate of depletion of these things. If we could see how rare these materials are, we wouldn’t casually throw an iPhone away after 4 years. If we were truly pricing in the value of these materials, they would be much more expensive. The markets are very shortsighted.
We’re also borrowing value from the future with fossil fuels. We’re consuming oil at an alarming rate. It’s getting harder and harder to reach it and, even without considering the environmental problems, we’re going to run out of oil. We’re not transitioning away from it fast enough. There might be step-change technologies that come along, but it’s not guaranteed.
For example, if we can achieve a breakthrough in superconductors, particularly one that operates at room temperature and ambient pressure, it could lead to significant innovation. However, the current situation is that the price of oil should actually be much higher than it is, given that it is a scarce resource that we’re depleting. It’s incredibly valuable and integral to our society. Think of it as a carbon pulse or a binge. Our society has experienced a boom due to the discovery of rich hydrocarbons. We are currently enjoying the benefits of this carbon binge, but there is a potential for a significant downturn when these resources become scarcer.
I acknowledge that this sounds pessimistic. However, I believe there is a way out of this, possibly through AI or through our own intelligence and innovation. There are many emerging technologies that could potentially save us. However, we also have to deal with the problem of rushing to find these new innovations. The more we delve into Pandora’s box of powerful technologies, the more likely they are to have dual use. AI could solve all our problems, but it could also amplify existing ones and create new ones we couldn’t have even imagined.
Fin: It’s like strapping an AI-powered rocket to an oil company: it just keeps doing what it’s already doing, faster.
Liv: It’s a very exciting time. We’re getting to see how this big dance is going to play out.
Fin: May you live in interesting times. One thought that comes to mind is the comparison between companies doing what they do, like the carbon binge model of corporate behavior, and what that could tell us about AI systems. Maybe we’re fortunate that companies are like learning optimizing agents that people tinker around with. They’re doing things in the real world, but in a crude slow-motion way. The feedback signals they’re getting, like going bust or changing profits, are quarterly and very crude. You don’t really know how you got there. So it’s subject to very weak selection and other training signals. Also, if you double the number of people in a building, you’re not doubling the productivity of that company. People have to talk to one another and hierarchies are really inefficient. So you have these diseconomies of scale, which is maybe quite fortunate. At least as long as companies are doing bad things. But maybe that doesn’t apply to powerful AI systems. Maybe you could just keep getting more productive and better at optimizing whatever you’re optimizing without that curve bending. We have human-shaped reasons why the curve needs to bend for companies.
Liv: I agree. Many of these diseconomies of scale, I would imagine, are due to human factors.
I’m not an expert in company structure, and I’ve never run anything more than a small team of three people. I can imagine that the traditional company structure is more of an artifact of physical limitations.
Fin: Yes, that’s right. Additionally, humans require approximately 20 years to become economically productive.
Liv: Indeed. Humans are slow, and we also have unique biases and preferences. We care about things like status. These factors create inefficiencies that wouldn’t constrain multi-agent AI systems. Such constraints could probably be programmed in, but I don’t see why they would be by default, and we certainly can’t rely on that. If anything, I would expect that as you add more agents to an AI system, you’d get a super-linear relationship rather than an S-curve.
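A toy illustration of that contrast, with made-up exponents (the Bettencourt and West finding quoted above was that corporate output scales sublinearly with headcount):

```python
# Illustrative only: sub- vs super-linear scaling with made-up exponents.
# West and Bettencourt found corporate output scales sublinearly in
# employees; the worry is that AI collectives might not bend that way.

def total_output(n_agents: int, beta: float) -> float:
    return n_agents ** beta  # output ~ n^beta

for n in (10, 100, 1000):
    sub, sup = total_output(n, 0.9), total_output(n, 1.15)
    print(n, round(sub / n, 3), round(sup / n, 3))  # per-agent output

# beta < 1: per-agent output shrinks as the group grows (diseconomies).
# beta > 1: per-agent output grows with scale instead of plateauing.
```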
Complexity and AI
Fin: I remember you talking about complexity as something that matters. Are you still thinking about this?
Liv: Yes, I have been considering what would happen if an AI went wrong or if a race to the bottom destroyed the biosphere. What would be the end result? If all life on Earth ended, or if the universe was filled with paper clips, as per the classic thought experiment, what would happen? The system would become very boring.
Fin: So, it’s
Liv: losing complexity. Right now, Earth is probably the most complex place in the solar system, if not the galaxy. By complexity, I mean it’s a very rich, nuanced, hard-to-describe place. If you were to try and simulate it, it would be very difficult. If a misaligned AI were to fill the universe with its optimization function, it would permanently curtail any potential new complexity and make everything very boring. It’s like a form of heat death. It’s not even entropy, it’s just a victory for boredom, which is very sad. That’s why I have such a negative reaction to the concept. If the universe has a preference, it seems to want more emergent complexity.
Fin: So, more life, structure, richness.
Liv: And beauty, weirdness, and diversity. It’s like
Fin: Cells integrated into organisms, which formed societies.
Liv: Yes, organs turned into bodies, which turned into communities, which turned into societies, which turned into civilizations.
Things seem to become increasingly emergent and rich over time. Emergence gives rise to greater complexity. That’s what Moloch does: it either temporarily, or in extreme cases permanently, curtails complexity. If Moloch is the god of lose-lose situations resulting from unhealthy competitive processes, what’s the inverse? What’s the god of win-win situations? I got stuck on the term win-win. If you’re looking for a definition: Moloch is that which permanently curtails emergent complexity, and win-win is that which enables more emergent complexity.
If we were to give these entities personalities, Moloch is mono-focused, psychopathic, focused on winning one particular narrow metric. It’s short sighted, lacking the wisdom to see the greater whole. Win-win, on the other hand, enjoys competition and games, but is also wise enough to see the wider picture and intervene when competition starts to have too many negative externalities or begins to curtail complexity.
Finite and Infinite Games
Liv: There’s a great book called ‘Finite and Infinite Games’. Moloch just wants to win the finite game right in front of it, while win-win wants to keep the game going. It wants everyone to get to play more games. This is just my philosophical musing on the topic of complexity.
Fin: I really like this. You might ask what a good future looks like versus a bad one. If we’re building these AIs, whose values do we build into them? Here’s a proxy: which futures are incredibly boring, and which futures keep the game going?
Liv: Exactly. Unfortunately, there’s so much dystopian fiction compared to utopian or protopian, partly because dystopian futures are lower complexity states. They’re easier to imagine because they’re more boring. It’s hard to describe amazing futures because by definition, they are so rich, complex, dynamic, and diverse.
Fin: Right.
If you aim for the most obvious utopia, it might resemble a group of people on soma, the drug from Brave New World, lined up in a row. Crudely speaking, this might maximize something you thought was good, like hedonic pleasure. However, the intuitive reaction to this is negative: it seems like an incredibly dull world. This idea of complexity, the common-sense notion of a complex arrangement of elements versus a simple one, nicely explains the difference between these pseudo-utopias and the kinds of utopias one might feel genuinely excited about.
Liv: That’s why the term ‘protopia’ is much better than ‘utopia’. Utopia sounds like a steady state, an end state. It’s as if we’ve reached utopia and it’s now fixed. This concept doesn’t align with the ideas of emergence and complexity. Protopia, on the other hand, is something that is evolving and has room to evolve. It will keep becoming something else. This feels intuitively and logically like the right direction.
Fin: I agree. Protopia is something that people from various value backgrounds can agree on. We might disagree on the best way to live life, but we can agree that having the space to live life in that way and having a variety of options is a good thing. This is obvious, but it’s nice to articulate it.
Liv: One of the most beautiful quotes I’ve heard is, “Love is that which enables choice.” While choice can lead to option anxiety, if you look deeper, a loving act is one that empowers someone to make the right choice for themselves.
Fin: I like that.
Liv: It’s a deeply loving act. It’s a win-win. Make more choices, do more things, get smarter so you can make better choices. Don’t minimize your mistakes, but increase your options. It’s both freeing and empowering. The quote is by Forrest Landry.
Fin: Forrest Landry. Nice. It reminds me of discussions on how to measure welfare in countries. You could look at the GDP per capita, but that misses something important. Another approach is to ask what options and freedoms are open to people, even if they don’t take them. What can someone realistically choose to do in their circumstance?
Liv: Right. You could be materially rich but freedom poor.
Fin: We’re talking about complexity as if it has a clear definition, but it doesn’t. One way to max out a naive measure, like description length, is white noise: something incredibly hard to describe succinctly. But that’s clearly not the point. White noise isn’t utopia; it’s just complexity at every level. There may be better operationalizations.
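One crude way to see the problem: compressed size approximates description length, and pure noise maxes it out while being the least interesting input. A small sketch using zlib as a stand-in for that measure:

```python
# Rough sketch: zlib-compressed size as a crude proxy for description
# length (in the spirit of Kolmogorov complexity). Pure noise maxes it
# out, pure pattern minimizes it; interesting structure sits in between.

import os
import zlib

n = 100_000
pattern = b"a" * n                                       # pure order
partly_random = bytes(b & 0b11 for b in os.urandom(n))   # noise over 4 symbols
noise = os.urandom(n)                                    # full white noise

for name, data in [("pattern", pattern),
                   ("partly_random", partly_random),
                   ("noise", noise)]:
    print(name, len(zlib.compress(data)))

# Pattern compresses to almost nothing; noise barely compresses at all.
# "Hard to describe" peaks at noise, yet noise is the least interesting,
# so raw description length can't be the complexity we care about.
```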
Liv: It’s a dance between patternicity and randomness. The randomness allows new things to emerge, while patternicity gives them meaning.
Fin: There are terms close to this in physics, like self-organized criticality.
Liv: It’s about the relationship between entropy and complexity. Sean Carroll described this well in his book ‘The Big Picture’. He explained that entropy increases over time, but so does complexity.
Fin: It seems upside down, right?
Liv: Yes, it’s like a parabola. The starting conditions of the big bang were very simple. But over time, as things cooled and coalesced, slight differences evolved into clusters of material, which turned into galaxies and all the cool stuff. Entropy and complexity have been increasing. But in theory, a maximum point will be reached, after which all the free energy starts getting used up and things get more spread out.
Fin: I remember an image from that book of stirring milk into coffee. At first, you have a simple layer of milk and a layer of coffee. After stirring, you get complicated swirls. Eventually, it just becomes milky coffee, which is easy to describe and predict. We want to stay in the swirly bit, the top of the upside-down U. Currently, life is this swirly bit, let’s keep swirling.
Liv: Entropy isn’t the final end boss. It’s a tool that gives us space to do things. The end boss is Moloch dynamics, the things that stop complexity, which isn’t really entropy. It’s something else.
Fin: We have a lot of time before we run out of free energy. This isn’t our main concern.
Superconductors
Fin: Do you think the new superconductor that’s been in the news is real? The paper came out yesterday, right?
Liv: It was two days ago. I hope it’s real, but if I had to bet, I’d say no. By ‘real’, I mean a newly discovered superconducting material that operates at room temperature or higher and at ambient pressure. We’ve had room temperature superconductors, but they operate at extremely high pressures.
Fin: So they require a diamond anvil?
Liv: Yes, they require two diamond points and operate at gigapascal pressures.
Bayes would suggest my prior should be no, given that I haven’t fully read the paper or their methodology. However, they’re using quite basic chemicals, including a bit of lead and some other substances I can’t recall. They grind these up with a pestle and mortar. It seems like a few thousand dollars’ worth of lab equipment and really basic materials would be sufficient to replicate this.
Contrast this with the previous room temperature superconductor. I visited the lab in Rochester, New York, and it required a couple of million dollars’ worth of equipment. It’s incredibly expensive and requires fine-tuning. There are lasers involved. It’s really cutting-edge stuff.
If we do have this fairly simple-to-make superconducting material that can exist in our atmosphere, the implications would be significant. It could be a game changer for energy efficiency and could transform our economy. For instance, nuclear fusion might become relatively trivial if we can figure this out.
Part of the problem is keeping the superconducting state going, because superconducting means there’s no resistance. With resistance, you get energy loss in the form of waste heat, which causes problems and breaks things down.
However, if we can solve this, it could make nuclear fusion much easier. We could transport electricity over vast distances with virtually no energy loss. For example, we could connect a solar panel in the Mojave Desert to New York and lose almost no energy along the way. It would be a game changer.
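A back-of-envelope sketch of why resistance matters at that distance; all numbers here are illustrative assumptions, not figures from the episode:

```python
# Back-of-envelope with assumed numbers: resistive loss on a long
# conventional line is P_loss = I^2 * R, while a superconductor has
# R = 0 and so loses essentially nothing in the line itself.

length_km = 4000        # roughly Mojave Desert to the US East Coast
r_per_km = 0.01         # ohms per km, a rough figure for heavy conductors
voltage = 800e3         # 800 kV DC line (assumption)
power = 2e9             # 2 GW transmitted

current = power / voltage                     # I = P / V = 2500 A
loss = current ** 2 * (r_per_km * length_km)  # I^2 R = 250 MW
print(loss / power)                           # 0.125 -> ~12.5% lost as heat
```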
Still, my initial assumption is that there’s a mistake somewhere. It might be some kind of diamagnetic property that’s making it look like it’s levitating. There are other explanations for why this is happening.
Also, these are fairly unknown scientists. The paper itself is written somewhat poorly, and there’s even a spelling mistake in the title. There are a lot of strange things happening. My money is on an earnest mistake. That’s not to say that it’s not pointing at something useful, but as always, if something seems too good to be true, it probably is.
That said, it would be amazing if this is indeed the case. It would also be astonishing because it would make us wonder what other low-hanging fruit we’ve been missing.
Fin: Right. It would not only transform the world with this one technology, but it might also indicate that there’s a lot more surprisingly low-hanging fruit in materials science and other areas of engineering. Add AI that can do R&D very quickly, and we’re looking at a different world.
Liv: Oh, yeah.
Fin: Shall we move on to some final questions? I’m curious if you have an answer to this. We often ask academics if there’s any kind of research or other work that they’d be excited to see someone do, especially someone listening to this. I wonder if there’s an answer in your case.
Liv: For anyone inspired to help solve this meta-problem of Moloch, in other words, how we find better coordination mechanisms and create more trustless protocols: there are already people working on this. I’ve previously interviewed the Bankless guys, who are a good resource on using cryptography to create these foolproof protocols. These protocols eliminate the need to trust other parties in order to coordinate, which would be extremely helpful.
Recently, I’ve been delving into the issue of climate change. I was distracted from being concerned about climate change by the advent of AI, synthetic biology, and other exponential technologies. However, after looking at some of the data this year, it seems things are getting quite alarming. For instance, the Antarctic sea ice is at a five-sigma low, or something like that.
Fin: That is wild.
Liv: It might even be six sigma. This level of anomaly is insane. The climate is one of the most complex systems on Earth, and who knows what kind of feedback loops this might create. For example, if you have 20% less Antarctic sea ice, which I think is the current situation, that’s 20% less albedo, or reflectivity, so that’s 20% more energy being absorbed. Additionally, the oceans are already reaching their absorption capacity. There was literally hot-tub-temperature seawater off the coast of Miami a few days ago: 38 or 39 degrees Celsius. This is unheard of. I’m getting this information from Twitter, so it could be wrong, but there is enough signal that some of it is going to be valid.
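For context on the sigma language: if (and it is a big if, since climate variables are strongly autocorrelated rather than independent and Gaussian) the anomaly were a draw from a normal distribution, the one-sided tail probabilities would look like this:

```python
# One-sided Gaussian tail probability for a k-sigma deviation:
# P(X > k) = erfc(k / sqrt(2)) / 2, using only the standard library.
import math

for k in (3, 5, 6):
    p = math.erfc(k / math.sqrt(2)) / 2
    print(f"{k}-sigma one-sided tail: ~1 in {1/p:,.0f}")
# 3-sigma ~ 1 in 741; 5-sigma ~ 1 in 3.5 million; 6-sigma ~ 1 in 1 billion
```

So the sigma count is best read as shorthand for “far outside the historical spread”, not as a literal probability that the reading is a fluke.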
This makes me wonder if we should be doing meta-level research into the potential harms and benefits of different types of geoengineering before we start actually building them. Geoengineering needs to be approached incredibly carefully. We are already inadvertently changing the climate, but to actively change it would create an immensely powerful dual-use technology that could be used for incredible harm.
Fin: Are there consensus mechanisms or ways to get international coordination around geoengineering, so we don’t have these kinds of really thorny political worries?
Liv: Exactly. It’s going to get split by the culture wars like everything else. We need some kind of culture-war-proof ability to solve hard problems. Moloch is the culture wars: it’s the god of war and sacrifice, and if it were a brainworm infecting people’s minds, this is how it would manifest. Understanding how we can better solve these really tense, tricky coordination problems, which are invariably going to create a lot of differing opinions, is crucial.
I came across a really cool thing the other day, a collective intelligence mechanism called the Society Library. It’s mapping out the terrain of arguments on contentious topics. They did one on the debate around the Diablo Canyon nuclear reactor.
The founder mentioned that it took eight full-time analysts about eight months to construct this decision tree. The more arguments you put into it, the stronger it gets, even the unconventional ones: if there are people out there who believe the Earth is flat, and therefore that geoengineering is impossible, that argument gets mapped too. The geoengineering debate would benefit from something like this.
Fin: It does seem like there are these really important, high-stakes, complicated questions in the world, and currently the best technology we have for discussing them is exchanging text. It’s surprising that we haven’t figured out something better that’s more widely adopted.
Liv: Exactly. Any culture wars topic is highly complex by definition. It’s a culture war because there are truly many nuanced arguments for and against and everything in between. If we can have better ways of modeling all of those threads of argument, and then apply AI to that, it could be beneficial.
Fin: Or just to keep track of the structure of an argument. If I say, I believe ‘a’ because I think the conjunction of ‘b’ and ‘c’ is true, and ‘c’ is true because ‘d’ is true, and ‘d’ is true because either ‘e’ or ‘f’ is true, by the time I’ve said that, you’ve forgotten everything. So, having some way to visualize the structure of what you’re talking about seems like a good start.
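What Fin describes is essentially an AND/OR tree of claims. A minimal sketch in Python; the class and field names are hypothetical, and the Society Library’s actual data model is presumably much richer:

```python
# A claim is supported by a conjunction (AND) or disjunction (OR)
# of sub-claims; rendering the tree makes the structure visible.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    op: str = "AND"                      # how children combine: "AND" / "OR"
    children: list = field(default_factory=list)

    def render(self, indent=0):
        label = f" [{self.op} of:]" if self.children else ""
        lines = [f"{' ' * indent}- {self.text}{label}"]
        for child in self.children:
            lines.extend(child.render(indent + 2))
        return lines

# Fin's example: a <- (b AND c), c <- d, d <- (e OR f)
tree = Claim("a", "AND", [
    Claim("b"),
    Claim("c", "AND", [
        Claim("d", "OR", [Claim("e"), Claim("f")]),
    ]),
])
print("\n".join(tree.render()))
```

Even this toy version shows the payoff: the listener no longer has to hold the whole chain of dependencies in working memory, because the structure is externalised.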
Liv: Right. Just being able to see it. The Society Library represents it on a two-dimensional laptop screen: a multi-dimensional problem represented visually. I would like to see more thought in that direction.
Fin: Tools for thought, I guess, is a nice concept.
Liv: Something like that. Systems level stuff, basically.
Fin: Nice. That was a great answer. Another question we ask everyone is, if you could share three things, like books, films, articles that you think people listening should take a look at.
Liv: If the misalignment of corporations and market structures, especially as it relates to AI, was interesting to people, I recommend my 90-minute interview with Daniel Schmachtenberger on my YouTube channel; I think there are some real nuggets of wisdom there. Also my Moloch videos, if anyone hasn’t seen them, and the Dan Hendrycks AI paper. In terms of books, I recommend the Culture series by Iain M. Banks.
Fin: Oh, great. That’s super relevant.
Liv: It’s really good. I think it’s important to read books that inspire people. The Culture series by Iain Banks is one of the best for capturing a protopia.
The idea of living in that sci-fi future is very appealing. It’s a win-win civilisation that maximises choice for everyone, while knowing how to manage situations when people become too unruly. This future is not only fun but also sexy, at least on the surface, although it comes with significant information hazards.
Fin: What would you name your spaceship in the Culture series?
Liv: I’m not sure about the spaceship, but probably Winwin.
Fin: Of course.
Liv: The Winwin. It’s amusing that the spaceship, a core part of the book, goes into battle at the end. It’s one of the best depictions of a superintelligence that is orders of magnitude more powerful than us. I also found it strangely attractive. I think I was slightly sexually attracted to the spaceship in the book.
Fin: That’s surprising.
Liv: So, there’s a selling point for anyone who’s interested.
Fin: Interesting.
Liv: The ship’s kind of an antagonist as well, but it’s a really good book.
Fin: Great. What a recommendation. Now, you also have a podcast and a YouTube channel. Where can people find them?
Liv: Ideally, people can subscribe to me on YouTube. I recently launched a podcast called Win-Win, where we explore the topic of competition in various industries. We’ll discuss big existential problems as well as more down-to-earth topics. It’s not all doom and gloom; I want it to be fun, and we have some exciting guests lined up. I’d love people to check it out.
Fin: Fantastic. I noticed you released an episode with Isabelle Boemeke, which I plan to listen to.
Liv: It just came out. I highly recommend it. She’s great.
Fin: Wonderful. Liv Boeree, thank you so much.
Liv: Thank you.
Fin: That was Liv Boeree discussing poker, Moloch, lessons from game theory, and much more. If you’re looking for links or a transcript, visit hearthisidea.com/boeree.
If you find this podcast valuable, please write an honest review wherever you’re listening. We appreciate it, and it really does help. You can also follow us on Twitter at Hear This Idea. A big thanks to our producer, Jason, for editing these episodes, and thank you very much for listening.