Episode 80 • 26 October 2024

Dan Williams on How Persuasion Works



Dan Williams is a Lecturer in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.

Dan’s research centres on the social functions and causes of beliefs, especially self-deception, religious beliefs, political ideologies, and delusions.

(Image) Dan Williams

We discuss:

[I]f beliefs are the maps by which we steer, why are so many people’s maps so radically misaligned with reality?

Resources

Let us know if we missed any resources and we’ll add them.

Transcript

Fin: Hey. This is Fin. In this episode, I spoke with Dan Williams. Dan is a lecturer in philosophy at the University of Sussex and an associate fellow at the Leverhulme Centre for the Future of Intelligence at Cambridge. Dan’s research focuses on the social functions and causes of beliefs like self-deception, political ideologies, and delusions. He was actually our sixth guest ever way back in 2020, but I wanted to catch up on some of his work from the last few years. So we talked about different metaphors people use for how certain beliefs, especially inaccurate ones, take hold, like the idea of mind viruses, luxury beliefs, or the marketplace for ideas. We also talked about the idea of rationalization markets, why the societal risk posed by misinformation and disinformation might be overblown, where fact-checking works, why just good faith attempts at rational persuasion seem weirdly underrated, and, of course, we talked about AI. So what we know about the possibility of super persuasive kinds of propaganda AIs, whether deepfakes and AI impersonators might massively undermine social trust, and just generally whether a world full of very powerful AI systems is good or bad news for the world’s epistemics. Okay. Without further ado, here’s Dan Williams.

Dan Williams, thanks for joining me.

Dan: Thanks. Cheers.

Reasoning and Language

Fin: So there’s this question of where our faculty for reasoning comes from, right? And here’s an obvious answer: so, especially once you have language, being better at reasoning helps you to map out the world better, to learn more non-obvious techniques, threats, and technologies. That’s just a really obviously useful and adaptive skill for navigating a complicated world. If that’s true, then why are humans so terrible at reasoning?

Dan: A few assumptions, I guess, are embedded in that question. I’ll just touch on a few of them where I might push back a little bit. One thing you mentioned is this sort of connection between reasoning and language. I think it’s definitely true that when it comes to human beings, the fact that we’ve got this capacity for language massively amplifies the power of reasoning. But I don’t think reasoning is essentially tied to language. Much of slow, effortful thought, reflection, deliberation, and so on involves constructing and manipulating mental models of the world, running simulations, that kind of thing. For that reason, I don’t think this capacity for reasoning is unique to human beings either. Other kinds of animals also have the capacity to engage in this sort of conscious deliberative thought and reflection about the world. Of course, in the human case, it’s far more sophisticated and flexible and impressive than you find in other animals, but I don’t think it’s unique to human beings.

Then the thing that you ended with, which is why are we so terrible at reasoning, I guess the thing I’d say there is we’re often terrible at reasoning, but we’re not always terrible at reasoning. There are contexts when our goal is to get at the truth, to form accurate beliefs and make good decisions, where actually people tend to be pretty good at reasoning. They can be pretty objective and impressive in terms of how they go about solving problems consciously and reflecting upon them. But you’re right that there are also many cases where people seem to reason in this incredibly biased or dogmatic way. A lot of what’s going on there—this is an idea that goes back a long way; you find it in Philip Tetlock, Hugo Mercier, and Dan Sperber—is that reasoning that often seems incredibly irrational relative to the goal of getting at the truth or making good decisions is often socially adaptive. Our goal is not really to get at the truth; it’s to persuade other people or advocate for specific kinds of views or advocate for our interests, or also just to manage our reputation. We’re these really social animals, and it’s really important to frame our actions, characteristics, and attitudes in ways that cast us in a positive light. When we reason in that way, we tend to reason in a pretty loyal fashion. We engage in that quote, unquote, my side bias.

We sort of pile up reasons that are in line with what we already believe or what we’re inclined to do. But that’s not really a mistake; it’s using reasoning in a way that’s suited to goals distinct from the truth. Does that make sense?

Fin: Makes sense. So it depends on the goals. One obvious goal is finding the truth. There are other goals and benefits to certain kinds of motivated reasoning as well. I was thinking, at least in certain familiar contexts, if someone is making a decision, especially on behalf of other people, then they’re really expected to produce, explain, and defend a kind of legible reason about that decision or their beliefs. I was wondering, is that expectation for being able to produce reasons a modern development? Did hunter-gatherers care about giving reasons to one another for their decisions?

Dan: So I’m not an expert on hunter-gatherers or anthropology, so what I say should be taken with a grain of salt. My inclination is to think that when it comes to matters of immediate practical importance, the tendency to provide reasons for our actions and decisions, and to expect reasons from others, is pretty culturally universal. You find it as a basic mode of human social interaction. There are really interesting ethnographies showing, for example, that the San people of the Kalahari Desert in Southern Africa clearly engage in sophisticated forms of reasoning and exchange of reasons regarding tracking and hunting. However, I think there’s cultural and subcultural variation in at least two things.

  1. What constitutes a good reason. For example, if you ask many people in small-scale societies why they’re engaging in a ritual or a particular practice, the reason they give is often just custom or tradition. That is very common throughout the world and throughout human history. However, in some cultures and subcultures, that might not be considered a good reason. One aspect of the Enlightenment, broadly construed, was the idea that merely saying something is custom or tradition does not constitute a good reason.

  2. The domains about which people are expected to or encouraged to reason. As I said, when it comes to ordinary practical existence, day-to-day existence, the process of giving reasons is pretty much cross-culturally universal. But when it comes to reasoning about big cosmic questions—like the deep past, the deep future, supernatural issues, and religion—in those cases, it’s definitely not culturally universal. The obvious reason for that is that the sorts of beliefs and narratives upheld about those domains often do not aim to get at the truth. Instead, they aim to bind the community together, facilitate cooperation, or ground certain kinds of rituals. In a despotic hierarchical regime, elites may enforce particular orthodoxies, making it unwise for anyone to reason about those issues. They may punish anyone who develops heresies or heterodox claims. Under those conditions, reasoning is not advantageous. However, in some cultural contexts, particularly in post-Enlightenment Western societies, there is an expectation to provide satisfactory justifications and reasons for beliefs concerning those abstract cosmic questions.

Fin: Yeah. You mentioned earlier that animals seem to be able to reason, at least in certain senses. This raises the question of what sets humans apart. One answer is culture. Humans can accumulate and refine ideas through social learning, both across generations and between people at a given time. This is something Joseph Henrich has talked about, as you know.

And in one of your blog posts, you wrote that in your view, Joe Henrich’s book, The Secret of Our Success, is the most important work of conservative political thought published in the past couple of decades. What do you have in mind there?

Dan: Yeah, this was a passing comment on a blog post. So, conservatism is a very diverse and complex body of thought, but one strand within the conservative tradition is the idea that you should be deferential to ideas, norms, and institutions that are handed down from previous generations of tradition and custom. The idea is that these things embody certain kinds of wisdom, often implicit. It’s not even clear to the beneficiaries of that wisdom exactly how these cultural practices work. But because they emerge over long periods of time through trial and error, experimentation, and social learning, they often embody this sort of wisdom. Moreover, the flip side within the conservative tradition is that when it comes to human reasoning, because reality is so complex and because we’re so individually limited, it’s often greatly inferior to the wisdom and knowledge embodied in those cultural traditions.

So, famously, Edmund Burke’s Reflections on the Revolution in France made those two points as a criticism of the revolutionary utopian ideals of the French Revolution. Much of that conservative critique is really a kind of vibe-based critique. It’s not rigorously based upon evidence or a scientific understanding of human psychology and social behavior. It’s based on common-sense observation and certain kinds of armchair arguments.

I think what you get in the work of someone like Joe Henrich, which reports on the work of Rob Boyd and Pete Richerson on cultural evolution, is an attempt—he wouldn’t frame things this way, but this is how I read that book—to provide a kind of scientific vindication of that Burkean view, one which is dismissive of individual brainpower, intelligence, reasoning, and so on. Henrich argues that if you want to understand what’s distinctive about human beings and what explains our ecological dominance, you shouldn’t look to individual-level intelligence. Instead, focus on culture and cultural evolution and the capacity of cultural evolution to create adaptations, even when the beneficiaries of those adaptations don’t really understand how they work. So that’s kind of what I meant in saying that I think it’s an important work of conservative political thought. It’s not intended that way, but I think what you end up getting from it is an attempt to provide a scientific vindication of a certain kind of Burkean view of reason and culture.

(Image) A vortex of misinformation

Metaphors for Reasoning

Fin: Yeah, right. So for some custom I take part in, I might not be able to give a reason that bottoms out any further than “it’s just a custom.” But the custom might nonetheless be really smart and adaptive. Let’s talk about metaphors for reasoning because, you know, I often hear people throw out lots of different kinds of analogies for, for instance, how we form political beliefs. And especially how really bad or false ideas take hold. Just to begin with, people talk, for instance, about certain especially bad ideas being a kind of virus—like a mind virus. And presumably, there is something to that. So, you know, there are some ideas that are both totally wrong and very compelling to certain people. And some of those ideas also, in some sense, cause their believers to spread them, or at least come as part of a bundle of ideas that includes exhortations to go out and proselytize about why the other ideas are important. Also, you know, the Internet allows certain messages to spread in a kind of viral way. So what, if anything, is wrong with that mind virus metaphor?

Dan: So much going on there. And we should return to the point about proselytizing and evangelism if I forget it because I think there’s something important to say about that. The first thing to say is, what exactly are people claiming when they liken or analogize a particular idea to a virus? So, like, famously, Richard Dawkins has claimed that religious ideas are viruses of the mind; misinformation researchers have likened misinformation and conspiracy theories to mind viruses.

Elon Musk routinely likens wokeism to a mind virus. There’s a way of interpreting those claims where I think what they’re saying is sort of true, but trivial. The true but trivial reading is that human beings are social animals. We communicate, engage in social learning, and influence each other. Often, ideas or information, in the broader sense of that term, propagate rapidly through communities or, quote unquote, go viral. I think that’s true. It’s not clear to me what we really gain from redescribing those facts in terms of a mind virus. I consider that set of claims perfectly legitimate as a metaphor. However, there’s nothing distinctive there about bad ideas. That’s going to be true not just about ideas, but also about fashion, rituals, table manners, or whatever it might be. Even in the domain of ideas, it’s going to be just as true of Einstein’s general and special theories of relativity and how they spread through the physics community in the first half of the twentieth century. It’s not that illuminating to me, at least, to say they spread through the community like a mind virus.

Typically, what people mean is not that this is just a generic claim about human beings influencing each other and ideas propagating through communities; the idea is that there’s something distinctive about allegedly bad ideas, which is usefully likened to the spread of a contagious virus. That’s what I think is really mistaken. There are sort of two reasons for that. The first is that people often assume that because certain ideas are wrong, or they take those ideas to be wrong or mistaken, therefore, insofar as those ideas spread, they must have, in some sense, bypassed people’s rationality, like their ability to evaluate evidence and think through things reasonably. As an assumption, that’s completely wrong. One of the things that I’ve written about elsewhere is that many people are instinctive naive realists—that’s the term from social psychology—in the sense that they assume the truth, or what they take the truth to be, is kind of self-evident. Like, of course, anyone would have those views about what the truth is. Insofar as people don’t see the truth, insofar as they don’t agree with my beliefs, they must either be lying or deceitful, or clinically insane, or brainwashed or infected by some kind of virus.

Fin: Huh.

Dan: The problem with that is the truth about these issues—about the cosmos, about religion, about politics—is never self-evident. Our access to reality is always mediated through the information we’ve encountered, the sources we trust, how we interpret that information, and so on. What that means is you can have the same reality and multiple different perspectives upon that reality. The people with those different and conflicting perspectives needn’t have been reasoning in fundamentally different ways. It’s just that reality has been mediated to them in different ways. In other words, you can end up with incorrect, inaccurate beliefs even if you’re being perfectly rational, just because you’ve been exposed to selective or misleading evidence or trusted the wrong people. The second thing I’ll just quickly say, and then we can maybe talk about this in more depth, is I do think there are some cases where bad beliefs—in the sense of beliefs which are not accurate and are not well-supported by evidence—really do spread through different kinds of processes. In my view, in those cases, I think you find them in certain contexts of religion and politics. The process by which people adopt and spread those beliefs is much more strategic. It’s much more instrumentally rational.

Fin: It’s not passive.

Dan: It’s not passive. It’s not usefully illuminated by an analogy to a contagious mind virus.

Fin: For biological viruses, you can inoculate someone against that virus. Meaning you can, at least in the crudest versions, expose someone to small doses so they build up immunity. The mind virus analogy might suggest that it could be a useful strategy to, you know, expose people to quote unquote, fake news or other kinds of perniciously false ideas labeled as such.

So that when they encounter these bad ideas in the wild, they are somehow more immune to them or can spot them more easily. Does that kind of inoculation strategy against bad ideas work in practice?

Dan: I’m very skeptical of this idea. It’s often referred to as inoculation theory. There are very smart, thoughtful people who take this seriously. As you said, the idea is, just as with traditional vaccines, exposing people to a weakened dose of a virus enables their immune system to develop antibodies against that virus by learning to detect it and then fight it off in the future. The idea is that misinformation may have distinguishing characteristics, often referred to as the DNA of misinformation or, in another metaphor, the fingerprints of misinformation. If you train people to detect the diagnostic markers of misinformation, then once you’ve done that, they’re going to be better at detecting and fighting off misinformation in the future. They’re going to develop mental antibodies, so to speak.

I think there are several problems with this. One is I just don’t think there’s good evidence for the claim that misinformation has these diagnostic markers. In general, whether a claim is true or well supported by evidence or whether it’s false or misleading is not an intrinsic feature of the claim. It’s based on the degree to which it accurately represents how things are or the degree to which it’s supported by available evidence. When you look at people advancing alleged empirical justifications of this idea, they’ll point to things like, well, maybe fake news has distinguishing markers. For example, if you compare content from a fringe, disreputable outlet online to content from the New York Times, the stuff from the fake news website tends to be more emotional or more polarizing in terms of the surface-level features of the content than the stuff from The New York Times, which I think is true. I think that’s a robust, not particularly surprising finding.

But, of course, fake news is only one very specific kind of misinformation. Maybe we’ll return to this later on, but I think in the grand scheme of things, it’s pretty rare and not particularly impactful compared to other forms of misleading content. When you look at these alleged diagnostic markers, like emotional language, the obvious problem is that much communication, which is highly emotional, is perfectly legitimate and, in fact, appropriate to the circumstances. There are contexts in which the only way to communicate honestly is by communicating in emotional ways. There are other contexts in which it’s a very influential propaganda strategy to communicate in a dispassionate, neutral tone as a way of masking the truth. For instance, if you describe torture as enhanced interrogation, you’ve used less emotive language, but in so doing, you’ve been more misleading.

I think it’s even worse than that because in many contexts, the very idea that emotional language is indicative of misinformation is itself a kind of propaganda tactic. It’s often used to dismiss the views of marginalized groups or activists. They’re hysterical; they can’t be listened to. You can go through all the other examples as well. The underlying message there is that figuring out the truth is incredibly complicated. It would be nice if there were simple diagnostic markers or if misinformation had simple, easy-to-detect DNA, but it just doesn’t. Figuring out the truth is much more complicated, in my view.

Fin: Yeah. I mentioned that some bundles of ideas are kind of self-promoting and that they recommend believers evangelize for them. Maybe that’s a reason they could be like viruses. Do you have more takes on that?

Dan: You’re exactly right. This is a view that people often advance in the context of defending the mind virus idea. They’ll say certain kinds of ideas have characteristics conducive to their spread. For example, if there’s a religion and part of that religion encourages people to evangelize, if people accept that religion and they go out and proselytize and propagandize, the religion will spread.

And that’s supposed to be somehow similar to the ways in which viruses develop adaptations conducive to their spread. I don’t think this is the correct way of looking at things at all. There are over two billion Christians today. It’s a feature of Christian doctrine that one ought to go out and evangelize. Yet only a tiny fraction of Christians actually do that. Moreover, there are many other belief systems where it’s not an essential feature of the belief system to spread the beliefs, and yet people do. They devote a lot of time, energy, and effort to doing so. So what’s going on? I would argue that what’s happening is the content of the belief in these cases is really not sufficient to drive people to evangelize and proselytize. Instead, there are belief-based communities or coalitions, and individuals who devote time and energy and make sacrifices to propagate the group’s beliefs are often rewarded with status within the group and enhanced trust from fellow group members.

So there’s this status game within communities, which often encourages members to spread the beliefs, recruit new members, and display their commitment and loyalty to the group. It’s a strong signal of commitment and loyalty if you’re literally giving up your time to spread a group’s beliefs. Even in that case, I think it’s not usefully illuminated by talking about a contagious mind virus. I think it has to do with these strategic social incentives. We are coalitional animals. We develop systems of social rewards and punishments, and these encourage people to behave in particular ways. Again, there’s so much more agency and strategy than just talking about people being infected with an idea that causes them to proselytize. That, in my view, is not a serious psychological theory of what’s going on in those cases.

Luxury Beliefs

Fin: Another metaphor I wanted to talk about is the idea of luxury beliefs. As I understand it, these are beliefs with properties similar to luxury goods, in that demand for them increases with income or social status. They may also have a conspicuous signaling function in the same way that a fancy watch might. Is this a useful concept?

Dan: A lot to say about this idea. The term comes from Rob Henderson, and he draws on the views of other people like Bourdieu and Thorstein Veblen. When it comes to luxury goods and services, there’s a common-sense idea that the reason why high-status, affluent people might buy an expensive Rolex or seemingly waste a lot of money on pointless but expensive goods and services is not just because of the intrinsic enjoyment they get from them, but as a way of flaunting their wealth or status. I think that’s a commonsensical idea, and it works because those sorts of goods and services are differentially expensive. Only if you’re very wealthy can you afford to buy a Rolex. So if you’re wearing a Rolex, and it’s legitimate—if it’s a real Rolex, that is—then that sends a clear and honest signal that you’re a wealthy person.

Henderson wants to apply this to the domain of beliefs. Part of that is there’s a kind of historical story that I think he wants to tell, where the idea is that thanks to mass consumer culture, the signaling value of luxury goods has become diluted. So the elite or high-status people need to develop alternative ways of flaunting their status. I don’t think that really makes any sense. This is something Ruxandra Teslo has talked about. It’s true that some goods and services have become more accessible, but it’s also true that today, luxury goods are still incredibly expensive. So, as a historical story, I don’t think that makes much sense. But considered on its own terms, I think his target is beliefs among highly educated, upper-middle-class people, often at elite universities, that he regards as silly. Something like “defund the police” is a canonical example within this way of talking.

And there, the idea is, just as wearing a Rolex is differentially costly—only very wealthy, high-status individuals can afford to buy a Rolex—the idea is that when it comes to something like defunding the police, only wealthy, highly educated, affluent people are going to be able to propose that kind of policy. Unlike people who are poor and don’t live in gated communities, they can shelter themselves from the negative consequences of the policy. Again, I don’t think that makes any sense either. Because whatever you think about that specific case and what people mean when they say “defund the police,” even if it were true that the policy disproportionately impacted lower-status or poorer people in negative ways, the question isn’t about the costs of the policy but about the costs of endorsing the policy. Anyone can say the words, “I support defunding the police.” So if it were really true that this was a way of signaling status, it would be the easiest thing in the world for anyone to endorse.

Then there’s the other issue: I don’t think supporting a policy like that is diagnostic of status in general. If there’s anything going on in these cases connected to social signaling, there are two possibilities that might be consistent with each other. One is that certain kinds of ideas can become badges of group identity and allegiance. If you’re a progressive activist, endorsing a particular kind of idea might mark you as a member of that community. That’s not essentially tied to status; it’s true of any community.

There’s another thing going on, which is also in Henderson’s work and found in the work of other people, and which is somewhat different. It’s the idea that it’s not just about endorsing the relevant policy, but that in order to acquire certain kinds of knowledge, use arcane vocabularies, and understand certain concepts and ideas, you need access to particular social networks and to have spent time within them. If you then deploy those ideas, that kind of marks you out as a member of that subculture, which might be true. But again, that’s not distinctive of status as such; it’s a very general feature of subcultures where people want to display their membership. Often, they’ll do that by mastering various norms, ideas, and rituals, which you can only master if you’ve invested a lot of time and energy in the relevant subculture.

Marketplaces for Ideas

Fin: Got it. Yeah. I should try saying some of that back to make sure I’m understanding. So if you drive a fancy car, wear a fancy watch, or go on fancy-seeming holidays, that’s a credible signal that you could afford them, because it’s not really possible to acquire or do those things otherwise. Maybe there’s an analogy to certain beliefs, but it’s quite unclear whether that really works because it seems, in fact, quite cheap to just say the words of a belief. It’s quite cheap to fake your beliefs or at least exhibit certain beliefs. So they’re not really credible signals of status or wealth or anything else.

Maybe there’s a story where they are cheap to exhibit but costly to acquire, because you need to mix in the right crowds and accumulate a certain kind of fluency in talking about these ideas, which does show that you’ve spent time in the right community. But that’s true of scenes and communities in general; it doesn’t seem especially tied to status per se. So again, what’s distinctively “luxury” about these beliefs, beyond a coalition-membership signaling function, is a bit unclear.

Yeah. If that sounds right, then another metaphor I hear is to talk about forums for sharing ideas as kinds of marketplaces—the idea of a marketplace for ideas. Often, that is paired with the understanding that just as the best consumer goods win out in a competitive market, the best or most accurate ideas should win out in a kind of free, open, and competitive marketplace for ideas.

I guess the question there is, well, if there is a marketplace for ideas, then what are people shopping for?

Dan: Okay. Yeah. There’s so much to say about that marketplace of ideas metaphor. It goes back to debates about free speech and censorship, where the original idea was that heavy-handed censorship of public debate and deliberation is misguided. If you just don’t interfere, then, as you’ve described, people sort of shop around for the best ideas, and the competition within that context will result in a process where the truth wins out. However, there are many problematic aspects of that metaphor. One is that if you think about ordinary consumer goods markets, like if I go to a market and buy an umbrella, I’m able to test that umbrella. If it breaks immediately, I’m not going to shop at that place again. I might write a bad review or spread gossip about it. It’s going to go out of business, and so on.

But if you think about ideas, they’re kind of nothing like that. If someone gives you an idea, you’re often not in a position to verify it. This is why the slogan “trust but verify” doesn’t really make sense in most cases. If you were in a position to verify, there would be no need to trust what the person is saying. Of course, you can cross-check what people are telling you against what others are saying, but then you get into the same situation where you don’t know if what those other people are telling you is true, because you’re often not in a position to verify it. In everyday social life, we sometimes can verify things. If you badmouth someone and then I meet that person and discover they’re actually really nice, then I have verified the idea and can reduce my trust in you. But when it comes to politics and culture, you’re never really in a position to verify those ideas. That massively complicates the idea that just shopping for truth means that the truth is going to win out in the end, because we’re often not in a position to know whether what we’re being told is true. What often happens is we default to accepting the ideas people tell us after gauging their plausibility. That’s problematic if our preexisting beliefs, which we use to evaluate other people’s ideas, are mistaken, as they often are. Or we just rely on whether the source is trustworthy, and if we get that wrong, then a marketplace of ideas isn’t really going to help you. That’s one thing.

Another thing is that, even though I think there’s something to the idea that you want there to be lots of freedom in terms of the ideas that people can generate, and even though I think heavy-handed censorship is often a bad thing and strong bottom-up pressures of social conformity are often a bad thing, having that kind of freedom is necessary but not sufficient for a community to be good at figuring out how things are. If you take something like science, which is a very impressive knowledge-generating institution, it’s not just a free-for-all.

There are all of these norms and principles, like peer review, that you want to adhere to when it comes to reporting findings, etc. And the third thing is that even if you set those two issues aside, people aren’t, when it comes to politics and culture, exclusively or even primarily shopping for truth anyway. Very often, people are shopping for information that they can use to justify or rationalize the claims, narratives, and decisions that they’re motivated to endorse. This is why elsewhere in my research and writings, I talk about a marketplace of rationalizations rather than a marketplace of ideas.

Fin: Yeah. And I mean, just as the institutions of science are not anarchic, right? They involve a bunch of structure and rules. It also seems, as you were saying, that healthy consumer goods markets are also not a free-for-all. They involve a bunch of regulations and structures, so, you know, there are laws against counterfeiting money. It’s illegal to commit fraud.

Often there are protections on IP, like a patent system, and some legal system to enforce contracts. You might think that the best functioning marketplaces of ideas, insofar as our metaphor is useful at all, are also going to have certain kinds of rules and structures. Maybe they’re also imposed in a fairly top-down way, like a scientific journal or a patent system. What do you think?

Dan: Yes, I completely agree with the general sentiment, which is that unregulated spheres of public debate and deliberation, where there are no norms, are not conducive to getting at the truth. It’s difficult to generalize too much. The kind of intellectual activity you find in science is inevitably different from the kind of intellectual activity you find broadly within the public sphere. That’s fine. You have to consider things individually rather than trying to draw sweeping generalizations about all forms of intellectual activity. I also completely agree with the point that you want norms. If people are deliberately spreading false information, that should be a norm violation. You can think of the analogy with consumer goods markets, where certain kinds of activity would also be prohibited or forbidden. However, when it comes to lots of human communication and debate, many of those norms emerge informally.

Fin: Right.

Dan: You don’t necessarily need top-down enforcement of those rules. If you discover that somebody has misled you about something, you will automatically reduce your trust in that person. Given the way human beings are when we have grievances, you are likely to spread negative gossip about that person. There are reputational effects. We are generally quite good at enforcing these norms in a somewhat spontaneous bottom-up manner. Whenever those norms are enforced top-down by some governing body, there are hazards that arise that don’t occur in the same way in typical consumer goods markets. This is partly because there is extreme disagreement about what’s true and what constitutes a norm violation, what’s deceptive, or spreading disinformation. There are complications about what the norm should be and who should enforce it. It’s reasonable to be skeptical of too much top-down enforcement of those sorts of norms. Beyond that, I completely agree with the general principle that if it’s just a free-for-all, it’s not going to produce great systemic outcomes.

Fin: Yeah, I gotcha. Since we’re on the topic, I wonder if there’s also an analogy to shopping for donation opportunities in the charity context. In some sense, this is a market. Different nonprofits are competing for donations and attention. Potential donors value many things, including how effective the charity is. However, unlike the market for umbrellas, once you’ve made your donation, it’s quite hard to find out what your money actually made happen, at least by default. So you don’t really get the feedback mechanism where the most effective charities attract the most donations in the long run. Maybe marketplaces for ideas end up being more like marketplaces for charities than for consumer goods, which would not be very encouraging if it were true.

Anyway, you mentioned the idea of a marketplace for rationalizations, where people are shopping for ways to justify certain preexisting beliefs. One question here is whether people are valuing things other than truth in the kinds of beliefs they eventually acquire. It feels a bit weird to talk about why I might prefer one belief to another for reasons that don’t have anything to do with whether those beliefs are true or not. In what sense do we value anything other than just accuracy when we are out shopping for beliefs?

Dan: Yeah, it’s a complicated question. In the overwhelming majority of cases, our preoccupation is with accuracy because it benefits us to have an accurate model of the world.

You know, if I’m crossing the road, I want to have an accurate picture of whether there are cars hurtling down the road. Because if I get that wrong, that could be bad for me. Right now, we’re having a conversation. If I thought that I was on a beach in the Bahamas, I’d be making all sorts of bad decisions. So, in general, it’s helpful to have accurate beliefs about the world. I think we are pretty good at evaluating evidence and trying to figure out who to trust in ways that are conducive to achieving that goal.

Having said that, there are some cases where the truth is really not going to be particularly advantageous for us, and we’re going to have goals that conflict with the truth. I think there are two primary cases here, not necessarily exhaustive, but I think they are the two most important cases. One is a situation where it’s really in our interests to advocate for a particular view or narrative, or for our own interests or the interests of our allies. When it’s in our interests to advocate for something, advocacy is different from the goal of just trying to figure out the truth.

Also, it can be in our interests to genuinely internalize the beliefs and claims that we’re advocating for because it will make us more effective advocates. I’m not 100% sure that’s true, but that’s a somewhat influential idea that there’s some evidence for. Then there’s another thing, which is connected to that but different—connected in the sense that these two things sometimes co-occur. I think we’ve already touched on it, but it’s the idea that it can often be beneficial to embrace a belief or a narrative that signals socially attractive qualities. It signals that you’re a good, loyal member of a particular group or community or subculture.

In those two cases, you’ve got reasons for advocating for and embracing beliefs that are distinct from truth. So those would be, I think, two relatively clear-cut cases where there is a conflict between our goals when it comes to belief formation and accuracy. Even though I completely agree that the normal case is when we are trying to figure out what’s true.

Fin: The impression I get is that often it is very difficult to behave in certain ways or to advocate for certain things without holding certain beliefs. It’s just hard to lie, for instance. It’s hard to misrepresent what you really think. So in that case, it can be useful to hold certain beliefs independent of the truth, at least in certain domains. But it seems quite hard to just kind of consciously choose to hold particular beliefs. It feels like that’s not really how belief formation works. It’s not really penetrable to my top-down preferences. So, is this relevant at all? How can the beliefs I come to hold really depend on their downstream usefulness, independently of just whether I think these things are true?

Dan: It is relevant, and I think it’s an important point. Generally, you can’t just choose to believe something. There’s no amount of money I could give you where you could just choose on the spot to believe the moon is made of cheese. Nobody’s run that experiment, to be honest. We’ve run the thought experiment, but you might think thought experiments are not that reliable when it comes to figuring out the way the world is. But let’s just accept that you can’t directly choose to believe something, no matter how great the incentives are to believe it.

What that means is not that our motivations and interests don’t shape our beliefs; it means that when they shape our beliefs, they do it indirectly. What I think is going on in these sorts of cases is the degree to which we’re going to end up with beliefs that we’re motivated to advocate for or to adopt for social signaling reasons is going to depend on whether we can satisfy a kind of rationalization constraint. That is, whether we can become genuinely persuaded by the belief. If I’m advocating for a belief and, in the process of advocating for it, I’m able to recruit evidence and arguments that justify that belief, then I’m going to become persuaded of that belief.

Not because I’ve chosen directly to believe it, but because I’ve ended up, through this process, persuading myself. Similarly, if I’m motivated to endorse a belief or narrative because it marks me as a member of a particular group or community, if through that process I search around for evidence and arguments that justify that belief, I’m going to end up persuaded of that belief. I think unless you can satisfy that constraint, then at best, what’s going to be happening is a kind of deception, where you’re claiming to believe something that you don’t really believe. But if you satisfy it, then you will end up internalizing these beliefs that you’re motivated to advocate for or to adopt for social signaling reasons.

Fin: Got it. So there’s this kind of rationalization constraint on the beliefs it would be useful for us to hold. Can you say more about how that turns into a marketplace for rationalizations, as you describe it?

Dan: The idea here is traditionally when psychologists think about motivated reasoning, which is this phenomenon where we end up believing what we’re motivated to believe, they assume that the way we go about satisfying this rationalization constraint is through our own psychological acrobatics. We seek out evidence that confirms what we’re motivated to believe. We apply different standards to evidence that confirms our beliefs and to evidence that disconfirms them. We selectively encode and retrieve memories depending on whether they confirm or disconfirm the belief. All of these are individual-level processes. But I think what often happens when you’ve got a group of people who are motivated to advocate for a conclusion or to embrace it for social signaling reasons is they search for evidence and arguments that will justify that belief or narrative. This creates an opportunity for some people to devote their time and energy to producing content that satisfies that demand for rationalizations. In much the same way that we all eat food, but very few of us devote our time to producing food, there’s a market process, an efficient division of labor, whereby we delegate the task of producing food to people for whom it’s profitable to do so. In the same way, when it comes to rationalizations, if you’ve got a group of people motivated to push a particular conclusion, they tend to shop around for evidence that can justify that conclusion. Certain people find it in their interests, given their abilities, to devote themselves to producing evidence and arguments that satisfy that demand. Just like ordinary markets, this kind of process is competitive. People are going to shop around for the highest quality rationalizations they can find. That means anyone who wants to win within that market context is incentivized to do the best job they can. So, very roughly, that’s the general idea.

Fin: I’m wondering if you could say more about the kinds of beliefs we’re talking about, the kinds of rationalizations we’re discussing, and why it’s useful to shop for those rationalizations. Clearly, we’re not talking about beliefs like what time it is, or what country am I in, or who am I talking to right now. Presumably, these have more to do with group membership, maybe politics or religion. If you could add more color to that, I think that could be useful.

Dan: I think the classic case is partisan narratives. If you think about a polarized political context where people strongly identify with a particular political faction or tribe, what you tend to find is that people are very motivated to push claims and narratives that paint their own tribe in a positive light. They often denigrate or demonize rival factions or coalitions. That’s one clear example where you’ve got these partisan narratives that people are often invested in, and they want to seek out evidence and arguments that justify those narratives. You can also think about conspiracy theories often having this kind of character. Some conspiracy theories are honest attempts to get at what’s true, and when the conspiracy theories are wrong, as they often are, people just aren’t successful at that.

But I think there are lots of conspiracy theories that perform a kind of demonizing function. Right? Shadowy elites get demonized in a really extreme way by people who have a kind of incentive or motivation to do that. When that’s true, they’re gonna want to recruit evidence and arguments that can justify that narrative. Throughout history, you also find the reverse of that, which are elite narratives. If you think about hierarchical societies, elites are often highly motivated to push ideologies that legitimize their social position. That often involves denigrating and demonizing lower-status groups, and that’s gonna create a market. An extreme case of that would be monarchs who are incentivized to legitimize monarchy gravitating towards ideas like the divine right of monarchs, along with intellectuals, priests, and scholars who can win favor by churning out justifications for that kind of narrative. And related to all of this, which is gonna be somewhat topical, you can think of intergroup conflict in the case of wars and military conflict, where people are often strongly attached to one side. They’re gonna be motivated to push narratives that paint their side in a good light and demonize the other side. Therefore, they search for evidence and arguments that justify those narratives.

Fin: So, you know, I’m imagining someone shopping around for rationalizations, justifications, or their partisan narrative, or their favorite conspiracy theory, or stories that support their particular group. You are explaining how that leads to a kind of market for those rationalizations. But if there is such a market, then it’ll be quite an unusual one, right? Because rationalizations are non-rivalrous and often non-excludable, like many other ideas. Once one person has learned something, it doesn’t stop someone else from learning it. It’s very cheap to broadcast an idea once you know it. In practice, if I’m looking for some justification for a belief I want to be true, I’m just gonna go on Twitter or whatever, or just Google. These things are free. So where do the market incentives come from on the supply side to actually come up with, produce, and sell these rationalizations?

Dan: As you know, there’s a whole field of information economics that tries to understand and theorize how suppliers of information get around those kinds of problems. You’re right. Rationalization, specifically, and information generally, typically qualify as public goods—non-rivalrous, non-excludable. That means it can be really difficult to profit from producing them. I think people get around that in various ways. One is the classic subscription model that you find these days with lots of political and cultural media, where you transform public goods into kind of club goods. You exclude people because they need to pay to access the relevant content. That’s one way you can profit from doing this. More generally, the important thing to bear in mind about these markets is that what people are shopping around for is not so much the information itself, but access to reliable sources of high-quality rationalizations. What people are competing to do is build up a reputation as such a source. Once they build up that reputation, people are gonna attend to them and tune into their content. Attention itself can be transformed into profits through advertising and so on. There’s also this other kind of social currency, which I think is relevant to human communication and the division of intellectual labor in general. When people provide useful, in-demand information, we tend to be grateful, and gratitude is something you can pay people with. We also tend to admire and respect those people. If you can develop a reputation as an extraordinarily skilled, effective pundit that churns out sophisticated justifications for a particular side’s narrative, lots of people are gonna come to respect and admire you. That kind of status is something people are intrinsically motivated by for obvious evolutionary reasons. Once you’ve developed that kind of prestige, it can be used to make money through various mechanisms and to gain a platform in various ways.

So through those sorts of mechanisms, people can still profit, even though you’re completely right that rationalization markets, like information markets generally, are not going to behave like typical markets. There are lots of ways in which people can benefit socially and financially from acquiring a reputation as brilliant lawyers and press secretaries for different political and cultural tribes in society.

Fin: Got it. If there is a marketplace for rationalizations, or many marketplaces, then is it efficient?

Dan: No. I mean, there’s so much to say about that. And if there are any economists listening to this who have much greater expertise in thinking through these questions, I’d love to hear from them. One thing I’ll say is, maybe it’s not directly connected to the point about efficiency. But the thing with rationalizations is, ultimately, what you want is information, evidence, arguments that you can put to use in persuading other people. But what makes that challenging is when you’re evaluating whether information is actually persuasive, whether it’s a good justification of a view, you’re relying on your own preexisting beliefs, and it couldn’t really be otherwise. But that’s going to mean that you’re going to be selecting content ultimately based on what you’re finding persuasive, which could be totally different from what different communities in society, third parties, or rival tribes might find persuasive. This is why I think if you look at really successful pundits—I won’t name any names here—but, you know, people that rack up a lot of attention, status, and money by churning out content that is clearly partisan, biased, or one-sided, it’s often incredibly unpersuasive to people outside of the community because they’re coming at it with radically different kinds of beliefs and priors, etc. But to people within the community, it seems very persuasive to them because it’s targeted at them with their specific preexisting beliefs. And that’s a kind of—you might think of that as a very general failure with what’s going on here. People are searching for stuff that’s ultimately going to be useful in the service of persuasion and argument, but they’re not in such a great position to evaluate what’s likely to be persuasive to other people.

Market Mechanisms

Fin: If these kinds of market mechanisms for rationalizations, for partisan narratives and so on, were much more efficient, then you might think the rationalizations that win out would be more constrained by whether they’re in fact accurate. In that case, maybe accuracy matters more to which justifications win out. That could be a good thing if you care about accurate beliefs winning out. So would you like to see these markets become more efficient?

Dan: Yeah. I don’t know. I just haven’t thought enough about it. But I think your intuition is right. You can think about the legal system as, in some sense, working in a way where individual actors within the system, you know, the prosecution or the defense, are reasoning in incredibly biased ways. The defense lawyers have a predetermined conclusion, which is that my client didn’t do this, whatever it might be. Their job is to select, frame, interpret, and organize information in ways conducive to rationalizing that conclusion. That means if you only encounter the defense lawyer’s case, it’s going to be pretty misleading because, by design, it’s pretty selective and biased. But, of course, the whole point of the legal system is you don’t only encounter one side of the argument; you also encounter the arguments from the prosecution. So even though individual people within the system are functioning as rationalization producers, at the collective level, that can actually have desirable epistemic consequences that bear on knowledge and understanding. You might say something similar about rationalization markets within society, which I think connects a little bit to what you’re saying. If people were really good at developing solid epistemic justifications of beliefs and narratives, then, even though that might in and of itself be quite a biased project, because of how that interacts with the broader information ecosystem, it ends up having good consequences. But I will just say one thing, and maybe we can return to this later on. You said something like good justifications are presumably going to be kind of accurate justifications.

And I think that’s true in one sense, but it’s easy to miss the fact that you can have content that is accurate in and of itself, but which is selected, framed, organized, and interpreted in such a way that it ends up being incredibly misleading. That could provide a very effective justification of something, even though, ultimately, it is quite misleading. So I think the accuracy issue is complicated, because you can have individual pieces of information, and I think this is actually the norm, which are accurate considered in and of themselves, and that is what makes them good justifications. But it’s the way people synthesize, interpret, and organize that accurate information that ends up making the overall picture misleading.

Fin: Yeah. I mean, it’s been said before, right? But even very partisan media sources rarely literally lie, right? They’re just very selective, and often hyperbolic, about which true things they report. Now, you mentioned misinformation. I was wondering what’s the difference between this idea of a marketplace of rationalizations and the picture I guess most people have in their heads, which is one of misinformation more directly persuading people of false beliefs.

Dan: That opens up a whole kind of world about what misinformation is. What are we supposed to be thinking about when we hear this term? What I would say is when many people talk about misinformation, they’re thinking of clear-cut cases of falsehoods and fabrications. So, like, a paradigmatic example of misinformation so understood would be fake news in the technical sense of that term. So a website presenting itself as a legitimate news organization just making stuff up. You know, the Pope endorses Donald Trump for president, a classic fake news story, in the literal sense that it’s fake news. Like, that didn’t happen. Somebody just made that up. If that’s what you mean by misinformation, then as we just discussed, I think rationalizations rarely take that form because that kind of content tends to be pretty easily debunked, and it’s just not particularly persuasive. And as you say, this is why the effective, impactful partisan media outlets, which I think often perform this rationalizing function, don’t churn out fake news in that sense. They’re just incredibly selective in terms of which real news they attend to and how they frame, package, and organize that information. So in that sense, I think the concept of rationalizations is not just different from misinformation but rarely takes the form of misinformation as so understood.

But then there’s this other thing, which is that when many people think about misinformation, the model is people have certain kinds of beliefs about the world. They come into contact with misinformation. This persuades them of something different. Now they’ve got a false belief, and they make a bad decision, to really simplify it. Whereas what you’re finding with rationalization markets is a certain kind of preaching to the choir. You’ve got people who are motivated to endorse a particular kind of narrative, and the content circulating within these markets is catering to that preexisting audience demand. It’s not the case that people are going through social media, come across some fake news, and now they’re anti-vaxxers. You know, it’s more like, yeah.

Fin: Yeah. Yeah.

Dan: People have quite strong intuitions to begin with or narratives that they’re motivated to endorse, and they’re searching for content that is going to justify those beliefs or those narratives. Having said that, I don’t think rationalization markets are completely harmless or sort of epiphenomenal. It’s not that I think they’ve got no negative consequences because I think they can end up entrenching people in particular kinds of beliefs. People can be radicalized if they’re exposed to lots of one-sided content over time. And I also think the content that these outlets generate can end up reaching people outside of the audience space. So through those kinds of mechanisms, I think it can be impactful, but it’s still fundamentally a very different story from how misinformation is normally thought about, where it’s about people stumbling across misinformation, developing false beliefs, and making bad decisions. It’s a much more demand-driven kind of model.

Fin: Rationalizations and partisanship aside, I mean, if you’re just a, let’s say, small news outlet wondering whether to cater to the existing demand for rationalizations or to try to grow the market by persuading people of, in many cases, egregious beliefs that they have no strong incentive to hold, it just makes economic sense to go after the existing demand.

Okay, so let’s talk about misinformation and disinformation. An obvious question here is just what do people have in mind when they talk about disinformation or misinformation?

Dan: So as I mentioned, this is a whole mess in terms of the literature surrounding the subject and how people throw these terms around. One way of understanding it, which is quite common within modern misinformation research, is that misinformation as a term picks out false or misleading content. That itself raises lots of questions about what exactly it means for content to be misleading, etc. But the idea is that it’s picking out communication that is misleading in some ways, and it’s sort of silent on questions of intent. So it says nothing about the motivations of the people spreading the content. Whereas disinformation is intentional misinformation. It’s a case where you can have a high degree of confidence that it’s not just that the communicator’s content is misleading; the person peddling that content or spreading that content knows that it’s misleading and is doing so deliberately. That’s often how the distinction is drawn. As I mentioned, it doesn’t get you that far because you now have to address questions like, what exactly does it mean for communication to be misleading? How do you go about establishing that somebody is deliberately lying rather than just being mistaken? But at least at the abstract level, that’s how the distinction is often drawn.

Fin: It is worth saying that there are people out there who are intentionally introducing false narratives with the purpose of getting people to believe them. And so disinformation is a real phenomenon. I guess the question is how significant it is. You mentioned in some of your writing that there was this survey of experts by the World Economic Forum for a global risk report. They were asked to rank societal-scale risks over the next couple of years. In the top five were nuclear war, military conflict, and economic collapse, but number one was misinformation. So, I mean, were the experts correct there? Is this, in fact, a very socially significant problem?

Dan: There’s so much to say about that. The first thing that needs to be established is what are we talking about when we’re talking about misinformation and disinformation? Because you can’t evaluate whether it’s a more serious threat than nuclear war unless you’ve got a very precise understanding of what we’re supposed to be referring to. Now, I mentioned earlier that traditionally, when people were using these terms, they were focusing on really clear-cut cases of obvious falsehoods and fabrications. Especially in the aftermath of 2016, when lots of concerns about social media and online digital content emerged, the focus was on fake news in the sense that I mentioned earlier—outright fabrications, deliberate falsehoods by organizations that are disreputable, presenting themselves as real news organizations. If that’s what we’re talking about when it comes to disinformation or misinformation, I would argue there’s simply no way that it’s the biggest threat or the greatest societal risk over the next two years. I think the empirical research on that kind of content suggests overwhelmingly that it’s pretty rare as a feature of people’s overall consumption of content online. To the extent that it exists, it is concentrated among a narrow fringe of active social media users with preexisting attitudes and views. In other words, it tends to preach to the choir anyway.

Fin: Yeah.

Dan: Which is not to say that it doesn’t have any harmful consequences. I think it clearly can have harmful consequences. Recently, we had the riots in the UK, this awful outburst of racism. And, you know, clear-cut fake news played a role in that. So it would be ridiculous to say that kind of content doesn’t have any harmful consequences. But you need a sense of proportion when you’re thinking about economic catastrophe, inflation, nuclear war, military conflict, and so on. It’s nowhere near as significant or impactful as those things. What people will often say is, okay, that’s true if you’re focusing on that kind of content.

But if we expand the meaning of misinformation and disinformation to focus on misleading communication in the broadest possible sense, then what you find is that in any of these areas we worry about, there’s going to be a lot of misleading communication. People are going to have mistaken beliefs about things and make bad decisions. If you’re making bad decisions, that means you’re in error in some sense, and that means you probably accepted misleading communication. The response will often be that if you focus on these things in a narrow sense, of course, it’s not the greatest threat, but if you really expand the meaning to cover misleading content across all domains, then it really is a big threat. However, if you’re expanding the meaning that much, the terms have really ceased to be helpful at all. You’re basically just talking about the problem of human error and fallibility, which is a significant problem. It’s not a new problem; it’s always been a problem. So I don’t think focusing on that as the most immediate threat over the next two years would be particularly helpful. On one interpretation, then, it’s definitely not the biggest societal threat over the next couple of years; and on the really expansive understanding of the relevant terms, that’s just not a helpful way of understanding the world. So let’s say on that interpretation, it’s not so much that it’s wrong; it’s sort of not even wrong.

Fin: Mhmm.

Dan: It’s just a confused way of thinking about issues.

Fin: One thing that would make me more worried about misinformation and disinformation is if I saw evidence that people are, in some sense, much more easily duped than I thought by just coming across false claims. You have been talking about people with some narrative who seek out justifications for that narrative. That is different from being duped or persuaded by arbitrary false claims, including fake news. Do we have much empirical evidence either way on just how gullible we are to arbitrary disinformation?

Dan: We do have research, and I would say the overwhelming conclusion of the research is that people are very, very difficult to influence. That doesn’t just include false information or disinformation; unfortunately, it often also includes good information, you know, accurate information. I think the best single place where somebody has made that case, drawing on lots of empirical research, is Hugo Mercier in his book, Not Born Yesterday. There are many things to say about that, but one of them is that when we evaluate communicated content, we have these capacities of epistemic vigilance. This is a term introduced by Dan Sperber, Hugo Mercier, and others. It’s the idea that, because it’s extremely risky to be manipulated or deceived by others—this is true today and throughout our evolutionary history—our default is generally to reject what people tell us unless it coheres with what we already believe. We only really overcome that disposition if the person gives us the relevant sources, or if they provide an argument that we deem persuasive relative to our preexisting beliefs, or if we trust the source and have positive reasons to think the relevant source is trustworthy. And even there, we tend to be very sophisticated and vigilant. When it comes to trust, we’re going to ask questions, not necessarily consciously, but I think this is how our cognitive mechanisms work: Is the source likely to be in possession of useful information on this specific topic, and are our interests aligned? Even that is very complicated, because it doesn’t just boil down to whether they are a good person; it boils down to the context and their incentives in this situation. For example, my mom is an extraordinarily trustworthy person, but if I’m playing poker against her, I’m not going to trust what she says. That’s a cartoonish example, but it shows how flexible this sort of social learning is.

And so people have to cross this big hurdle if they’re going to really influence other people’s attitudes. It’s not something that just happens. Incidental exposure to bizarre false content online is not going to shift people’s attitudes and behaviors. It’s much more difficult than that. Additionally, persuasion happens over time. If you tell me something and I trust you as a good source of information, then later, if someone else I trust contradicts it or I find out it’s wrong, I’m likely to abandon that belief and reduce my trust in you.

What really matters when it comes to influencing people at scale is building up a trustworthy reputation, which is a dynamic process. It’s not just about flooding the zone with misleading content; it’s about acquiring a good reputation. Those are just two things to mention. There are other aspects as well, but what it highlights is that people have this idea that others are easy to influence. In reality, that’s just not true. If anything, the problem is that people are very conservative in how they adjust their attitudes and beliefs, making it really difficult to influence them. This is true whether we’re discussing bad information or reliable information.

In many cases, the issue with misperceptions in society is not that people are credulously accepting lots of bad content; rather, they are so vigilant against manipulation that they end up distrusting what are, in fact, reliable sources of information. It’s the exact opposite of the problem you might expect.

Fin: Yeah. I don’t know where I came across it. It might have even been from you, but I remember someone pointing out that if you take some kind of wild fake news claim that you might worry is persuading gullible people on Twitter, just consider how easily you could persuade anyone you know of this claim. Just like in the pub. The answer is usually that you realize it’s actually very hard. And then, who are these people you don’t know who are, you know, ten times more gullible than anyone you know? Maybe they just don’t really exist.

Dan: A hundred percent. No.

Fin: You’re saying there that, in some sense, the problem is that we can be over-vigilant against accurate new information. I wanted to ask about that. You’ve been talking about this market for rationalizations, where people arrive with their existing beliefs or narratives and seek out justifications. You mentioned that, in general, people are not easily duped by random fake news. That might leave you worried that people are flat-out unpersuadable, including on really important topics where there are good arguments available. Are people just unpersuadable by rational argument at all?

Dan: Definitely not. I think people can be persuaded. As I mentioned, our default is generally to reject content if it doesn’t cohere with our preexisting beliefs. However, one way people can overcome that is by presenting good rational arguments in defense of a position. There is now a lot of research from people like Alexander Coppock, Ben Tappin, and numerous others that shows if you present people with rational arguments in favor of a particular position, they tend to update a little bit in that direction. Just a little bit. It’s not like people are going to radically change their worldview upon exposure to that kind of content, and these effects tend to decay over time for various complex reasons. Nevertheless, you can persuade people if you’re in a context where they’re listening to what you’re saying, which often depends on whether they trust you and whether you have good, effective arguments for the position you’re trying to argue for. You can persuade people; it’s just that persuasion is really difficult. It’s difficult on multiple levels. It’s challenging to persuade people about a specific thing, but also, often what people really care about is not just persuading others to update their beliefs about certain topics, but to really change their behaviors in specific ways.

So you think about classic arguments, central to effective altruism, about how you should give your money to charity, for example. It’s much easier to persuade people to think, “Oh, actually, yeah, that’s a good argument. I probably should give my money to charity,” than to actually persuade them to give their money to charity. It’s a totally different thing. That kind of persuasion, where you’re really changing people’s attitudes and behaviors, is even more difficult than just changing their beliefs, but it is possible. I’d even go so far as to say that, because it is possible, and because people often don’t take the effort to really spend time giving good arguments rather than just spamming others with their opinion or fake news or whatever, actually taking the time to give careful arguments is probably undersupplied. People probably underrate the extent to which you can reach people with that kind of content.

So I think there’s this weird situation where, on the one hand, people overestimate human gullibility when it comes to incidental exposure to bizarre claims online and things. But they also probably underestimate the impact you can have if you really take the time to build up a reputation as a trustworthy source and provide people with persuasive arguments in defense of a position. It’s worth noting there would be no point in a marketplace of rationalizations if nobody referenced rational arguments. The whole point of that marketplace is you want to get intellectual ammunition that you can use to persuade people ultimately. So there’d be no point in any of this if people were completely unreachable. They are reachable; it’s just that it’s difficult to reach them, much more difficult in some cases than many people realize.

Fin: So people talk about, kind of, contra what you were saying, this backfiring effect, where if I present you with an argument pointing in the opposite direction to your existing views on some hot-button question, then that might reliably cause you to entrench your existing view further. There are some experiments where this seems to happen reliably. So isn’t the backfiring effect some evidence against the idea you are suggesting that people are generally persuadable by good arguments?

Dan: My understanding of the empirical literature here is that the evidence we have suggests that the backfire effect is, at most, extremely rare. A plausible reading of the empirical literature is that we don’t really have any solid experimental evidence that it occurs at all: the experiments where researchers seemed to have found it haven’t replicated, or the result is so at odds with what we find in lots of other situations that we shouldn’t think it’s a robust phenomenon. I think I mentioned Alexander Coppock. He’s got a good book called Persuasion in Parallel, where he reports lots of experimental data, and in none of the experiments that he runs does he find that kind of backfire effect when he presents people with evidence and arguments in favor of a position. I suspect it is different when it comes to mere assertions that something is the case. If, for example, people think the source of the assertion is extremely unreliable, then you might think, “Well, if they think it’s true, maybe that’s evidence to the effect that it’s not true.” But in cases where you’re actually taking the time to present people with arguments, my understanding of the empirical literature is that there’s basically no good evidence that the backfire effect is happening. Now, if you think about it, it wouldn’t make any sense for human psychology to work in that way. It might be reasonable, when somebody sits down and presents you with an argument in favor of a position, not to pay much attention to it. But to actually backfire, to become even more confident in the opposite view, would be an extremely maladaptive and strange way for the human mind to work.

Fin: So, yeah, you also mentioned that maybe just plain old rational persuasion, sitting down and presenting and exchanging arguments over a meaningful length of time, is underrated, at least compared to more piecemeal ways of presenting information. So why do you think that is?

Dan: I think there are multiple reasons for it, and I’m not that confident in my analysis. The most obvious reason is that it’s just often unpleasant to try to persuade people out of views that they are very confident in.

So even if you know that taking the time to present people with evidence and arguments against their views might shift their views in a positive direction, those sorts of conversations can often be unpleasant. People can have a negative emotional reaction to being presented with evidence and arguments against what they strongly believe. In person, it can be quite unpleasant to try to walk through an argument with someone who is not already inclined to believe that argument. It can also be unpleasant when it comes to producing content online or trying to get published. For instance, when I publish on my Substack or write pieces for a general audience, I often receive quite hostile responses from people who strongly disagree. So, while it would be nice if people would change their minds, it can be really unpleasant for me to go through the process of trying to bring about that change. Therefore, I might avoid doing it.

There’s also this interesting dynamic in how we think about human psychology. On one hand, people tend to think of others as gullible, believing that someone might stumble across a fake news story online and simply come to believe it. On the other hand, people often overestimate how stubborn and dogmatic others are when it comes to interpersonal communication. This misperception might lead them to underestimate the potential of this kind of work. These two factors likely play a significant role, but I don’t feel massively confident about my understanding of why people don’t engage in this as much as I think they could.

Fin: Yeah. I wonder if this is the kind of thing that AI could do pretty well soon. You know, AI doesn’t mind that it’s emotionally aversive to try to walk someone through arguments they initially disagree with. So, is there some promise there? Are you hopeful about AI rational persuasion for the good?

Dan: I think so. There’s a paper by Dave Rand and Gordon Pennycook that was recently published in Science. There’s another co-author as well that I’m blanking on the name of. That paper seems to show that you can have a persuasive influence on people using large language models. I think they use GPT-4. Even when it comes to fairly strongly held conspiratorial beliefs, which are often thought of as the paradigm case where people are beyond persuasion, the effects they get aren’t massive, but they seem real. From what I can tell, these effects also tend to persist over time. If I remember correctly, when they go back and look after a period, the reduction in confidence in conspiratorial beliefs after interacting with GPT-4 still exists.

So, I think there is something potentially promising about that. When people typically think about this, their worry is that these systems will be used to persuade people of bad beliefs. To the extent that these systems are good at persuasion, one might think that this will apply equally to good beliefs and bad beliefs, and I think that’s probably correct. How this plays out will depend on whether we think these systems are likely to produce arguments for reliable conclusions versus unreliable conclusions, which will depend on various factors.

Additionally, people are generally very resistant to the idea that they are being manipulated. If people become aware that other agents in society are using these large language models to persuade them, they might become very resistant to engaging with those kinds of systems. What you find in an experimental context might not generalize to the real world once people understand how these systems are being used. But I think you’re exactly right.

That is a potentially promising application of these technologies because these large language models, as you say, don’t have to worry about the unpleasant emotional character of engaging in rational persuasion. They can dispassionately present evidence and arguments, take into consideration what other people already think, and explain why that’s wrong in a way that human beings often find really unpleasant. So, I do think there’s something promising there. I don’t have any real confident views about how that’s going to play out in the future, to be honest.

Fin: Yeah, interesting. I do wonder if, in many cases, people have no interest in hearing someone try to persuade them of a view that is very far from where they stand on some issue, but some interest in hearing what local direction they should be moving in. In that case, you can imagine a more incremental approach where you just employ your own AI interlocutor to tell you what to think about the details of some issue. If that happens every time, and the AI tool is reliably producing good arguments and tracking the truth somewhat well, then maybe you can get people to move in the direction of accuracy based on their own interest. Who’s to say, I guess? But anyway, this is a hopeful case for language models. There is at least as obvious a case for concern, which is that one thing AI is unbelievably good at is churning out fluent text, more or less for free. It can also generate deepfake videos, which are very realistic. You can imagine an interest group trying to flood the Internet with content aimed at persuading people of some harmful and false set of beliefs. That feels potentially very worrying, maybe worrying for reasons that don’t apply to human-generated misinformation. Yeah. Does that sound right? Is this qualitatively different from the status quo of misinformation and disinformation?

Dan: It’s difficult to say, and I think it’s worth drawing a distinction between what we should expect from AI technologies that are roughly similar to what we already have but just better in some respects, and a case where we’ve got superintelligent AI that outcompetes humans along every dimension of performance. Who knows what that kind of world is going to be like? I would not like to make any confident predictions about how things will play out in that context because I think that’s going to impact so many different things all at once. It’s really difficult to forecast. But if we’re thinking about the former kind of case, one thing to say about that concern is it’s not obvious why those characteristics of AI—the ability to produce content and make it appealing—would benefit bad information over good information. So, you can’t just focus on the potential risks; you have to focus on the opportunities, as we’ve already mentioned. Then there’s the fact that we’ve already talked about, which is that human beings are very difficult to persuade. It’s not as if, when it comes to most topics, the issue is a lack of persuasive, potentially deceptive information. Take any topic you want, whether it’s election denial, Holocaust denial, or climate denial; the Internet is saturated with an enormous amount of that content already. There’s no paucity of that content. The challenging thing is, firstly, to reach people with that kind of content, and then, once you’ve reached them, to change their minds. That’s much more difficult than just producing the content to begin with for all sorts of reasons. One is that our attention is limited, and we overwhelmingly attend to sources of information that we consider credible and trustworthy. That suggests it’s really difficult to reach large numbers of people unless you can build up a reputation as being trustworthy. The mere fact that there’s technology that can produce deepfakes is not going to give anybody the ability to build up that reputation. So that’s one thing.

And then connected to that is this fact that persuasion happens over time. Even if there’s a situation where, at a moment in time, some sort of deepfake is impactful in what a group of people thinks, if they then discover that they’ve been duped by a deepfake—which they often will, because there are so many incentives within the system for people to point that out—they’re going to reduce their trust in the source of that deceptive information online that they came across. So you can’t just look at the impact at a given moment in time; you have to look at this much more lengthy process. Given that having a genuine persuasive impact and reaching people is so difficult, I’m not that worried about the impact of technologies that are roughly similar to what we’ve already got.

Fin: Yeah. Maybe we could at least speculate about the further-out capabilities. I agree it’s a mostly doomed enterprise, but I think it’s still fun to do. One question here is, do we know anything about the limits to persuasive abilities in humans? Are there just people who, by being unbelievably charismatic or manipulative, are able to persuade others of arbitrary things? Or is there some defensive advantage, such that even at the limits of human ability it’s very difficult to persuade anyone of anything?

Dan: I think that when it comes to persuasion, the things that actually work are those we talked about, which include acquiring a reputation as being trustworthy and presenting evidence and arguments that people deem persuasive. All of this other stuff about the tricks of persuasion, I think there’s honestly just a lot of BS there, and there’s not good evidence that much of that really works. There’s an interesting preprint from Ben Tappin and colleagues looking at the persuasive impact of large language models. What they seem to find is that you quickly reach diminishing marginal returns. As the scale and sophistication of these systems improve, you do get persuasive improvements, but you hit a ceiling quite quickly. That’s kind of what you would expect if you think that the way persuasion works is that you just have to present people with evidence and arguments they find persuasive. There aren’t any magic bullets when it comes to persuasion; you just have to do that hard work. My inclination would be to think that when it comes to achieving superintelligence along a given dimension, with persuasion you’re probably going to hit a ceiling relatively quickly, because the bottleneck is going to be whether you can actually find good evidence and arguments for a conclusion, not whether you have some sort of special ability independent of that. One thing you might push back on is that, once you’re thinking about persuasion as something that occurs over a period of time, maybe there are degrees of freedom where AI systems could outcompete human beings. I don’t know; I don’t have confident views about that at all. But when it comes to just presenting an argument at a snapshot in time, I think you’re going to see ceiling effects relatively quickly.

Will AI increase polarisation?

Fin: Another question is whether AI, in the near or further future, ends up fragmenting or polarizing us, in the sense of driving communities oriented around beliefs further apart. You might worry about this for the same reasons you’d already be concerned about this kind of market for rationalizations. If there is a much more sophisticated supply of rationalizations for increasingly detailed and niche sets of beliefs, that could make it much easier for these communities of belief to drift apart. On the other hand, AI could be very good at fact-checking and argument-checking. Maybe there are AI fact-checkers that can earn trust across communities of belief because they do a really good job at checking everyone else’s arguments. It’s kind of unclear which direction this goes. Do you have a take on whether AI will kind of fragment us epistemically?

Dan: I don’t have a confident take.

On the fragmentation point, I would say I’m not that worried about it because, even though it’s true that we tend to seek out evidence and arguments that rationalize what we’re motivated to believe and endorse, we want that kind of information so we can use it on others in the context of persuasion, argument, and recruiting people to our coalition and subculture. I don’t think people generally have a motivation to retreat into their own echo chambers where they’re totally isolated from other human beings. Maybe there are some fringe cases with cults, but it’s not clear why AI specifically would be a difference maker there. You can already find cults that isolate themselves.

The point about whether we’re going to deploy this technology at scale for purposes like fact-checking and argument evaluation, and whether that might have positive consequences, again, I think the outcome will depend on various factors independent of the AI itself. We already have lots of fact-checking and argument evaluation, with people making high-quality contributions to the public sphere. This has some good consequences and some bad consequences in different contexts.

When it comes to the fact-checking industry, there’s a perception that much of it is biased against one side of the political spectrum. I think that’s overstated in most cases, but there can be a grain of truth to it. To the extent that this is how fact-checking is deployed, it’s unclear when you step back and think about the system as a whole whether it will have good or bad consequences. I believe it will depend on how these technologies are rolled out and broader societal factors, such as the degree to which society is polarized and how different segments of society trust institutions.

So, for all of these cases, the disappointing thing to say is, firstly, it’s really difficult to know how it’s going to play out. Secondly, how it plays out will really depend on how these technologies interact with the preexisting factors within society.

Does fact-checking work?

Fin: Actually, I’m interested to hear more about fact-checking as it’s done now. I know there are different kinds of fact-checking organizations, especially political ones. How has that gone? Has it gone badly in any sense?

Dan: I think the honest answer is we don’t know. What we do know is there’s evidence showing that fact-checking, in the sense of presenting people with evidence and arguments indicating that something is false in an experimental context, tends to have some impact on what people believe in the desired direction. So, if you fact-check people, they tend to update their beliefs in line with the fact check.

One nuance is that even when people update their beliefs, they often won’t change their attitudes. For example, you might say this claim about vaccines is wrong, and they might update their beliefs in the direction of the fact check, but they’re not going to change their attitudes about vaccines. Or you’ll explain to them that this claim from a politician is wrong. They might update their beliefs but still vote for that politician.

Another important aspect of fact-checking is not just the impact of individual fact checks in experimental contexts, but rather the overall consequences of the fact-checking industry on the information ecosystem. This is incredibly complicated, especially in a highly polarized society like the US, where many people on the political right perceive fact-checking as highly partisan, disproportionately focusing on the mistakes of one side of the political spectrum. If that’s the perception, and the fact-checking industry is embedded in mainstream institutions, one consequence might be that some people lose trust in those institutions as they come to believe they are biased against them.

I’m not saying that we know that to be true, but I think it gets at this deep issue where if you really want to evaluate an initiative like fact-checking, it’s very complicated. It touches on broad questions about institutions and institutional trust, and which identity groups in society are associated with the fact-checkers. Are they from one side of the political aisle, and so on? It’s just a really difficult question to answer. When it comes to evaluating the consequences of fact-checking as a whole, nobody’s really in a position to assert with a high degree of confidence that it has been good or bad. It’s so complex; it’s just really difficult to know.

AI and orthodoxy

Fin: I should ask, just from your perspective as a researcher of social cognition and community belief, what else are you worried or excited about when it comes to AI?

Dan: One thing you mentioned earlier was about how, when it comes to these sorts of AI technologies, they might improve our accuracy incrementally on certain topics as we use them as tools to present us with arguments, even if we’re not asking them to persuade us of a radically different kind of belief. Certainly, in my own life, AI has been massively beneficial, and I can’t think of a case where it has been negative. I’m a writer; I blog. I find that AI is useful as a kind of mediocre research assistant, and I expect it will just get better at that. When there’s something that I’m kind of vaguely familiar with, I’ll use GPT-4 or Anthropic’s Claude. They’re pretty good; they definitely improve my ability to acquire information efficiently.

I also find that they are quite effective as mediocre copy editors and fact-checkers. Even if you just ask them, “Okay, here’s an argument. Please present me with a set of criticisms of this argument so I can think through it,” I’ve found they’ve been enormously helpful for that as well. I’m excited about that because I think this technology is only going to get better. In my own case, it’s been a real benefit to how I think through topics and write. Even something as seemingly simple as recent advances in AI for reading text out loud has been massively beneficial for people like me.

In all of those areas, I’m really optimistic. I can’t think of a situation where it’s been negative for me in terms of thinking through an idea and writing about it. I think the biggest thing to worry about, where there should be more attention, is that when most people are thinking about the threat from AI, they’re thinking about it within the context of affluent liberal democracies with functioning institutions that are pretty good today. The risks that many people are considering, especially those who work on this topic, concern what you might think of as counterestablishment content—views about vaccines that are at odds with public health authorities, or views about science that contradict overwhelming scientific consensus. I completely agree that this is something people should be worried about and should focus on.

However, if you look at the broad sweep of human history, overwhelmingly the problem has been the reverse. It’s been elites and establishment institutions using their power to lock in self-serving establishment narratives. That really is something to be on guard against when it comes to this sort of technology. This isn’t just a worry for authoritarian, deeply hierarchical, corrupt, or outright totalitarian societies today, where I think it’s going to be a big concern as these AI technologies facilitate the locking in of establishment narratives.

Even in liberal democracies, dissent from consensus narratives and counterestablishment content, in the broadest possible sense of that term, can be really important. I think that is something to think through carefully and worry about looking into the future. If you take a long view of human history, that has overwhelmingly been the big problem. It’s not whether this is going to amplify counterestablishment content, but whether it will give elites and those in positions of power and influence the ability to lock in establishment narratives.

That’s really something to be concerned about.

Fin: Yeah. I strongly agree. Historically, consider an autocratic regime with a ruling class that seeks to suppress counterestablishment narratives and entrench its power by controlling the flow of information. What does that actually look like in practice? What are the mechanisms there?

Dan: I think there are multiple ways in which establishing certain kinds of self-serving orthodoxies has worked historically and could work in the future. The obvious way is clamping down on heresy or any content that contradicts the preferred narratives of those in positions of power and influence. Throughout most of human history, from the Neolithic Revolution onwards, there have been punishments for heresy or heterodoxy—punishments that could be as severe as being killed or burned at the stake, but also milder punishments such as ostracism and exclusion. You can easily imagine how AI technologies might facilitate that kind of thing in the future, with much better ways of monitoring public conversations online to flag content that is in tension with these preferred orthodoxies.

One complicating factor is that we live in a world where lots of counter-establishment content, within a country like the UK, is wrong. This can encourage the idea that it is actually desirable to clamp down on counter-establishment content. There is an argument for that if you’re looking at specific cases, but the worry is that it gives cover for a more general clampdown on counter-establishment or heretical content. So, that’s a simple way—using these technologies to facilitate the monitoring and regulation of the public sphere. This ties back to the issue of fact-checking and the misinformation industry. There are clear cases where bad decisions were made because something was viewed as counter-establishment content. The risk is that if you’re not vigilant about this going forward, these preferred orthodoxies, narratives, and consensus views could end up getting locked in, which can be really counterproductive for the broader epistemic health of society.

Reading recommendations and final questions

Fin: Let’s do some final questions. Can you recommend three resources for listeners who want to find out more?

Dan: Okay. One thing I recommend is a great book by Nichola Raihani called The Social Instinct. It covers two strands of my work: on one hand, looking at social cooperation and how we achieve it, and on the other hand, examining issues of social epistemology in a broader sense. I’m increasingly of the view that these issues in social epistemology should be viewed as part of a general project of understanding social cooperation. I think that’s one of the best books out there for getting a solid grip on the evolution of human cooperation, how it works, and the challenges involved.

The second recommendation is everything by Dan Sperber. He’s a cognitive scientist and anthropologist, brilliant in his field. His influence has been present in many of the things I’ve said in this conversation about reasoning and epistemic vigilance. Look him up on Google Scholar; even though he’s highly rated, I think he’s underrated because he does fantastic work.

The third book I’m somewhat obsessed with at the moment is by Walter Lippmann, called Public Opinion. It’s from the early 20th century and contains loads of fantastic insights about public opinion, misinformation, institutions, public trust, and so on. I think it’s really underrated. I hadn’t read this book until recently, and I believe it’s filled with insights about how to think about human psychology, forming beliefs about complex matters, and how that interacts with establishment institutions.

Fin: Awesome. I will check that out myself. Many listeners might actually be in a position to contribute research themselves, maybe in economics, psychology, or philosophy. What research would you really love to see done?

Dan: One thing we touched on was people with actual expertise in economics trying to apply that to certain questions in social epistemology. I think that’s really important as a project.

More generally, I think there’s lots of really excellent work being done today, both theoretical and practical, on understanding and promoting progress in a really evidence-based, rigorous way. I think most forms of progress are fundamentally downstream of epistemic progress, our ability to produce, acquire, and distribute knowledge about the world. When it comes to understanding epistemic progress, there’s still so much that we don’t know, even about what seem like quite basic questions, like psychological questions and what sorts of cultural and institutional factors are conducive to epistemic progress. Even though there is lots of excellent work in that area, I still think, given how important it is as a subject, it’s underrated by the kinds of people who are interested both intellectually and practically in progress. So studying the roots of epistemic progress would be one area.

Fin: That’s very interesting. One thing I wonder about is, you know, in periods of rapid and sustained growth historically, that tends to line up with cultural beliefs in the possibility of progress. There’s a question of how epiphenomenal that is or how causal it is. As far as I know, I think that’s not entirely resolved as a question, and that’s pretty relevant for future progress. So, final question: where can people find you and your work online?

Dan: I have a blog called Conspicuous Cognition on Substack. You can find my mostly weekly essays where I write about these and various other issues and try to think through topics. I’ve also got a website, danwilliamsphilosophy.com, where I put links to my published academic research.

Fin: Great. And I’ll link to both those things. Okay, Dan Williams, thank you very much.

Dan: Thank you. Cheers.

Fin: That was Dan Williams on rationalizations, misinformation, and persuasion. If you’re looking for links or a transcript, you can go to hearthisidea.com/episodes/williams. If you find this podcast valuable in some way, then probably the most effective way to help is just to write an honest review wherever you’re listening to this. You can also follow us on Twitter. We are just @hearthisidea. As always, a big thanks to our producer, Chasson, for editing these episodes, and thank you very much for listening.