Episode 52 • 31 August 2022

Michael Aird on how to do Impact-Driven Research



In this episode, we talked to Michael Aird.

Michael is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. And before that he was a teacher and a stand up comedian.

Michael Aird

We discuss:


See also:


Expressions of interest forms for longtermist research

Relevant: Michael’s list of pros and cons for working at Rethink Priorities.

Applying for research jobs and funding

All authored by Michael.

Organisations mentioned in the episode

Research skills and concepts

General writing skills

Not mentioned by Michael but still maybe useful —

Nuclear risk

AI governance

Other research topic ideas




Fin 0:06

Hey, you’re listening to Hear This Idea. So right off the top, I want to say that this one is a bit of a departure from our regular programming. Basically, you’re about to hear a three-hour-plus chat, where we really just get into the weeds with research and career advice. So it’s probably not for everyone, but if it does sound potentially relevant to you, based on this intro, stick around, because I think this could be one of the most practically useful episodes we’ve made.

So, Luca and I have been lucky enough to speak to guests who do all kinds of research in academia or beyond, and many who have decided to explicitly use that research to have a positive impact on the world. But I think it’s fair to say that most research that gets done in the world is not squarely aimed at actually improving the world, even if improving the world is often an incidental hope. At least, I think it’s fair to say that research really squarely aimed at impact may well just look quite different when it comes to the choice of topic and the way it gets done and disseminated.

So there’s some chance you’re listening to this, and maybe you’re in some corner of academia, or you’re considering a career that involves some kind of research and you are sympathetic to the idea of using your work to really make progress on the biggest problems in the world. And if so, this episode could be for you. It is three-plus hours of extremely concrete information and advice about how to start doing impact driven research. Now, when we were thinking, ‘who could be best placed to give this advice?’, we both immediately thought of Michael. Michael is a senior research manager at Rethink Priorities, where he co-leads the artificial intelligence governance and strategy team, alongside Amanda El-Dakhakhni. Before that he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Centre on Long-Term Risk and the Future of Humanity Institute, which is where we know each other, and before that he was a teacher and a stand-up comedian. In fact, as you’ll hear, Michael was a teacher and stand-up comic until very recently; his transition to co-leading a team of researchers working on a hugely pressing problem took about two years total.

So Michael seemed like an especially good person to ask about how to really nail that transition to impactful research. We covered things like how and when to apply for research jobs, how to apply for funding, how to write research summaries, the idea of reasoning transparency, the uses of reductionism, case studies in nuclear risk and AI governance, and how not to get stuck becoming an expert on the history of birds. Before we get into it, Michael asked me to flag that the reason we’re focusing on research here as a topic is because that’s what Michael knows about. But of course, many people should not focus on research in their careers, and some people should maybe try research for less than a year to help them develop knowledge, skills, and connections, which they can then draw on in other kinds of work, which are not research. Which means this could be worth hearing, even if you’re not aiming at a full career in research, and equally, the message is not that everyone listening should definitely focus on research over other things. Okay, without further ado, here’s the episode.

Michael, thanks for joining us.

Michael 3:30

Thanks. Yeah. Thanks for having me.


Do you want to -

[Everyone laughs]

Luca 3:39

Do you want to give a quick elevator pitch of what your name is and what you do?


Yeah. Okay. So I’m Michael Aird, my main role is I’m a senior research manager at Rethink Priorities’ AI governance and strategy team, where I do some management of individual people, some team level management, so I’m helping the department as a whole - the longtermism department at Rethink Priorities - I also do some grantmaking on the EA Infrastructure Fund, and a variety of other advising type things like that, advising various orgs and projects.


Great. And I think we’re gonna be talking a bunch about research within the EA community, and possibly some funding sides as well. So it may be good to do some disclaimers. Do you want to disclaim anything else?


Yeah. So we are personally friends, and we’re also part of this EA community where everything’s pretty interlinked. So yeah, Rethink Priorities is funded, one of its major funders is Open Philanthropy, and we’ll talk a bunch about Open Philanthropy for various unrelated reasons. And yeah, I’ll just say I think what I would have said anyway, but just that disclaimer should be on the table. Also, there’s a lot of organisations that I work for, or I’m associated with, and what I’m saying is not representing any of them. These are just like my takes. I’ll use a lot of Rethink Priorities examples, but yeah, these are just my stances.


Cool. And I should likewise disclaim that we are friends.

Michael 4:56

Thanks for confirming. For the record.

Luca 4:59

And I work at Open Philanthropy but will likewise for this interview be wearing my Hear This Idea hat, so everything I say is on me.

What is impact-driven research?

Fin 5:06

This is setting us up for some seriously controversial takes! I’m also friends with both of you. So I think one framing for a conversation is to talk about research, and research which is directed at having an impact on the world. So, one first question is: what does it mean for research to be impact driven? And what kinds of routes to impact can you imagine for different kinds of research?


Yeah. So three sort of complementary framings I guess for this question. One is that impact driven thing. Another, I might often refer to this as EA research or EA aligned research, or longtermism research or longtermism aligned research. They aren’t exactly the same. Really what I care about is the world getting better, and so the impact driven thing, but EA can be like a shorthand for that. And longtermism is the one I focus on most. Impact driven, I would say - my janky breakdown that I came up with - would be sort of four key components: one is you’re aiming to actually change what happens in the world. That’s one key thing. So you’re not, for example, just trying to sort of quote unquote, ‘fill a gap in the literature’. Filling a gap in the literature is often a good way to change what actually happens in the world. But often, you know, no one was depending on that gap being filled. So you change some actual decisions, either immediately or later, or you change stuff about yourself, that means your own career can have a lot more impact. But it bottoms out in the world being different. That’s one thing - impact. Hopefully, everyone is aware that impact should be a shorthand for like net positive impacts - like changes that are actually good! But I think a lot of people in the world do just want to see the world be different because of them. And I’m like, ‘no, let’s make it better because of you!’ On net, like there’ll often be downside risks, but let’s see if they balance out to positive. And then the third thing is aiming for that to be roughly the biggest you can make it. So roughly the biggest net positive impact you can make. And there particularly I’m talking about in expectation, so this expected value idea of probability times consequences. So think of all the good things that might happen, or the bad things might happen from your research. How likely are they? How big a deal would they be? I think there was a fourth one, but I can’t remember what it is.

Luca 7:12

Yeah. Can you maybe draw a parallel here between what you’re describing there and how you often see research done in the ‘normal (quote unquote) world’, whether that be like academia, think tanks, consultancies, finance or whatever. How does this maybe look tangibly different compared to what people might be doing otherwise?

Michael 7:31

Yeah. So I guess we can think of impact-driven versus the other, you know - what could come before the driven? Some of the other things are like curiosity-driven, or funding-driven, or something like status or fad-driven, or things like that. Now, yeah, a lot of the time in this conversation, I might sound like quite dismissive of a lot of the world. But yeah, I think the world’s full of a lot of smart researchers who have strong methods, and often good intentions, but they aren’t sort of relentlessly focused on this maximum net positive impact thing. So yeah, a lot of what happens will be funders or governments or philanthropists are focused on a particular area, that’s the area you can get money and so you flock to that. And that’s understandable, you want a job. But one nice thing about the world nowadays is there is a lot of money available, potentially, for the right kind of impact-driven work. We might talk about that later. Another thing would be curiosity-driven, and also something like understanding-the-nature-of-the-world-driven. And that’s pretty reasonable. But understanding the nature of the world - there’s so many bits of the nature of the world. So I want to choose the subset of the bits of the nature of the world where if I get a better understanding of that, in five years the world’s much better, or it’s on track to be much better. And also sometimes understanding the nature of the world isn’t good. Nuclear weapons are sort of the obvious example. I mean, it’s not super clear nuclear weapons made the world worse, but they, you know, it doesn’t seem clear to me they made it better. So understanding some bits of physics may have been dangerous there. There’s some things currently that are like that. So if you follow the sort of the status landscapes in academia, or the filling the next research gap and stuff, this will often make the world better, sometimes make it worse, sometimes does not make a big difference.

Fin 9:07

And you made this distinction between doing research to change things about yourself or more directly change things about the world. I want to zoom into both of them but: changing things about yourself. Why could that be worth doing? What does that mean?

Michael 9:23

Yeah so basically I make this distinction, often, between sort of three main impacts that research can have. One is the direct impact of the work itself. So you have this paper or this report, or this conference you do, or emails you send people where you explain the findings, whatever. And that then makes things better in a way that doesn’t flow through your own later actions, except disseminating this stuff. Then another one would be testing your fit. So finding out where you should fit in the world, what are you good at? What are you passionate about? What could you become good at? Because basically, if you’re listening to this podcast, you the listener, there’s a fair chance that you could play a pretty substantial role in making things a lot better. Because I think a lot of key issues in the world are just not many people paying attention, or they’re just not paying attention in a particularly strategic or focused way. And it’s not that hard to be a key player on some of the most important stories and make things a lot better. So if you find out where you can best fit, and also personal fit matters a lot to that, so you might have a lot more impact in some jobs than in others. This is both research and other things. You could via research, test your fit for other things like grant making, or policy in certain areas, for example. So yeah, finding out where you’re best can help you do a lot better later. And then the third thing is like building your career capital. This is this concept, I think it’s from 80,000 Hours, they definitely are the main promoters of it. And the components there, as far as I recall, are like knowledge, skills, connections, and they call it ‘credentials’, the fourth one, but I want to call it ‘credible signals of fit’. So have you demonstrated that you’re good at stuff? And so as you build those, that allows you to have more impact later. So a lot of your impacts are not going to come from this first research project you do. 
It’s going to come in five years, once you’re more skilled and connected, and you know where you should slot into, and people trust you more, and that sort of thing. So building yourself into that amazing thing later can be where to focus sometimes.

Fin 11:08

Okay, cool. And then we can zoom in on another one of those three factors, which is something like direct impact on the world, via your research, independently of what it does for you. Yeah, are there any kind of distinctions we can make there about different ways that research can just make the world go better, or improve decisions?

Michael 11:26

Yeah, for sure. I can always make distinctions - it’s one of my main things! I recently said to someone, I started saying something like, ‘I think we could sort of split this into four separate dimensions’, and then realised you could make a meme template of me just saying that. Yeah, so some distinctions for direct impact of the work itself. In fact, I have four distinctions of types of distinctions! So yeah, one would be the types of like cause areas or topics you’re focusing on. So I’m particularly focused on these longtermist issues, so things that could plausibly have a really big impact on the long term future. I know you guys are pretty interested in that as well. Often these are existential risks, but not always. So within that, it’s like AI risk, nuclear weapons risk, bio risk, various other things. Also, there’s other areas like animal welfare, etc. So that’s one type of distinction - which of those are you influencing? Another is like, which types of decisions - you could influence other researchers and what they do, you could influence funding decisions, you could influence, you know, major grants being made. You can influence the sorts of policies that governments are pushing for, and regulations and standards and that sort of thing. You can influence entrepreneurs to start new projects or organisations, founding new things to fill these gaps that are needed. So for example, there’s this organisation called Charity Entrepreneurship, where they do research, they have like this factory model of producing amazing new nonprofits where they like -


We interviewed Sam Hilton.


Oh, cool. Okay, so maybe the listener’s already aware. But yeah, they have like a pipeline and do a bunch of research to churn out these new organisations to fill these gaps that they’ve identified. You can also influence people’s career decisions. So again, this circles back to individuals who are strategically working on these things are really important, and so helping people find what they should slot into can be really important. And I also have, yeah, ‘other’ would be an important category of you know, maybe someone does a conference or something - like any other decision I hadn’t thought of.

Fin 13:20

And maybe one of the others is crowding in more research.

Michael 13:23

Yeah. So you could get more research to come in. And you could also make it go better - like building the foundations for the next researchers to have an easier time picking up also. Yeah, one thing I forgot as well, that’s pretty important, is like technology research and development and deployment. So you could influence whether people do some dangerous AI stuff, you could influence which sort of AI technologies they develop that mitigate risk. So one thing could be figuring out that interpretability might be really important for reducing the chance of extreme AI risks. And then you can get people to fund more of that and do more of that work. There’s also things in other non longtermist areas like clean meat research or something. I’ll use various jargon, I’m not sure like how - don’t be afraid to poke me on any of that.

Research outside of academia

Fin 14:03

Maximise jargon. Okay, so for all of this research, you can do it within or beyond an academic setting. So for instance, I could do a PhD at a university, or instead, I could join a research organisation, like Rethink Priorities. Why, in general, might I consider doing this work outside of that academic setting?

Michael 14:26

Yeah, so yes, there’s academia, there’s organisations, like Rethink Priorities, which are sort of fairly explicitly Effective Altruism focused and just use whatever methods seem most important to get at the most important questions. There’s also a variety of other things. Like you could do research within governments, you could do research within a standard non Effective Altruism aligned think tank, independent research, various other places. For academia versus other things. I mean, there’s definitely pros of doing academia, but you sort of, there’s a lot of norms within it. There’s a lot of standards that you’re meant to meet, a lot of paper structures you’re meant to do, a lot of types of methodology you’re meant to do. Often it pushes towards quite narrow questions. Another thing is it could also push towards trying to do big conceptual breakthroughs, but not really focused on what’s most impactful or things like that. Which is understandable, like academia grew up this way for a particular reason. It was pushing against people making random crap claims and trying to make sure that everyone could critique each other and no one’s allowed to just make claims. But if you want to make a decision in the next five years, often you do need to have a sort of hot take, that is minimally researched, and then you move on. And academia often doesn’t let you do that. So in academia, people often push against having clear bottom lines, like rushing to a bottom line quite quickly of what people should do. And the distinction will be different for other areas. Like for think tanks, think tanks decently often do rush towards bottom lines fairly quickly. But sometimes it’s, you know, justifying the existing policy choices and stuff like that. So yeah, so each one has pros and cons, each one is quite good for different people, like, each of the things I listed are things I advise people to go into sometimes, you’ve just got to think about what makes most sense for you.
A claim I would make though, is if you do just actually want to find out what’s true, rather than things like advocacy, or making sure that the decision makers listen to you - what’s true in the most important things, like impact-driven stuff, then I would claim Effective Altruism aligned research organisations are unusually good for that, because they strip away most of the baggage and they just try to get there fast.

Luca 16:27

Do you think the same holds for skill building and stuff as well? So I guess one argument you could make for doing a PhD is that it’s just a good five year, four year, three year programme to just acquire skills, learn how to do independent research, get a supervisor and stuff as well. If you’re wanting to do that maybe more independently or within the EA community, do you think that still the same argument holds?

Michael 16:47

Yeah, so I do think like, for sure people should take into account skill building. And like, sometimes you should be willing to do four years of basically pointless research in order to just become great at something, and also become viewed as great at something. So when you do go do great research later, you can get those jobs. So for example, a lot of like government jobs and think tank jobs, you do just need the credentials. You don’t just need to be smart and know the stuff, it needs to literally have that stamp on your CV. Yeah, so for sure. But yeah, depends on the person. Also, just four years is a lot. And if you do four years of something that is in roughly the right area, and you have a mentor on roughly the right topics, who knows about roughly the right things, that might be slower skill building than if you do four years of just actually doing something really close to what you want to do, and having to like scrappily, figure out yourself. I think mentorship is pretty key. So I think like, if you wouldn’t be able to get a good mentor at some other place, then academia might be a better choice. But often PhD supervisors, as far as I’m aware, don’t have great feedback loops, don’t spend a lot of time on their students and stuff. So it would vary a lot from person to person. Although I haven’t personally done a PhD, so I can’t like crap talk them too competently. But my general hot take is if you’re not sure you should do a PhD, you probably shouldn’t do a PhD, at least right now. Like, you should probably try other things for like six months to a year first. Like I think a lot of people think they assume they would need a PhD before they get something good, and they just haven’t tried. Like I spoke to someone who had applied to four Effective Altruism aligned organisations, got rejected, but got decently far in application processes, and he assumed he needed a PhD. 
And I was like four rejections is basically nothing, like in most jobs there’s a roughly 1-3% acceptance rate. I don’t know about most jobs, but like jobs of this type, and not just in EA orgs. So four is very little; you got fairly far, just try applying to 10 more - that’s so much faster than a four year PhD! And then he did apply for like two more, and he got one. So I won that round.

Luca 18:42

One thing I want to maybe flesh out a bit more is: when you were talking about doing research, I guess, like within the EA community and stuff, how much of the impact, and particularly the direct impact that you’re talking about, is just because you are engaging in an ecosystem that you think is highly leveraged, and really impactful, versus this is a really good way to do research, and then you know, if you just bring that research to like governments or to like other actors and stuff, that’s how the impact comes? Like how much of it is the ecosystem argument versus that epistemic or attitude kind of argument?

Michael 19:15

So the first one is, like maybe being in an EA org -

Luca 19:19

Yeah, like being able to influence an EA org by producing ‘EA research’ that’s like, what’s impactful because you have a much higher ability to impact like funding decisions because you’re close to those because it’s maybe just much smaller, and there’s much fewer people working to change, like, Open Phil’s views on something than, like, the US government, for example. Versus no, these are just like really important topics and, you know, maybe have different kinds of epistemics and like methodologies and stuff around it, and like that’s where the impact comes from.

Michael 19:48

Yeah, interesting. I feel like I haven’t thought of it quite like that before. So I think one thing I’ll flag is you can certainly be outside of an EA org but work on the right topics in the right way and influence the EA people.

Luca 20:04

Yeah, I think that’s what I’m trying to get at.

Michael 20:05

Yeah, so that’s one key thing. So yeah, the Effective Altruism community has access to a fair bit of resources, like an impressive bit of resources relative to how young and small it is. Young in terms of how old, like how long the movement’s been around, as well as just the median age of its members. A fair amount of money, a fair amount of either political influence or ability to probably gain political influence, if required, and things like that - entrepreneurial talent and things. So that is pretty exciting. That’s one reason to be really excited about this. But you don’t have to be at an EA org to do that. You can do work - so one model that I think is probably good sometimes, is if someone is, if they’ve decided a PhD is the right move for them, or they’re working in like a regular think tank, just a prestigious think tank with really smart people that would provide good mentorship, but doesn’t have quite the right thinking styles, or quite the right focuses, but they are really good at research methods and things like that. I would suggest those people often try to do things like occasionally on weekends, or like they take two weeks off or something, and during that time, they try to adapt the sort of thing they’ve been learning and working on to tailor it for the EA needs. So you might have done a lot of work that is relevant to AI risk, but you were forced to mostly answer shorter term questions, and to kind of hold back on giving your real bottom lines because it’s not rigorous enough and stuff. And then you could do like little sprint documents, where you just quickly chuck out, like, I’ve learned all this stuff, and here’s my hot take given that or something. So that’s definitely a model you can do.
I think the main benefit working at Effective Altruism aligned orgs is just actually working on the right questions and getting to the right answers relatively quickly, and explaining them transparently, like explaining what you believe precisely and why you believe it. Academia also has often a lot of hedges to sort of cover their ass, which is understandable, that’s what they have to do. But yeah, so I think that’s the main benefit, rather than working in an EA org is the only way to influence EAs or something.

Applying to research roles at EA orgs

Fin 22:00

Yeah, well, it very well may be the case that some of the time, but maybe not all the time, there are lots of hedges in academia. You mentioned this 1-3% ish acceptance rate for many of these EA aligned research orgs - I’m wary of that projecting too kind of pessimistic an impression. So yeah, maybe one thing that’s worth mentioning is that it’s worth not writing off applying to these things, having heard that figure. You can imagine, like a very kind of toy model where, you know, 100 people are just like shotgun applying to, like 100 orgs each, and then there’s like another 100 people applying to the like one or two orgs that seem like an especially good fit for them. And then for each application round, you’re gonna have the number of people applying being basically dominated by the kind of shotgun appliers. But if you’re one of the few kind of focussed appliers, then you should maybe think your chance is better than that kind of naive guess. So yeah. Any thoughts on how to think about your chances of ending up at these places?

Michael 23:03

Yeah, that’s a model that makes sense, but I don’t like it because like - be the shotgun applier! So yeah, a few things on that. The 1-3% thing, I think that’s fairly true of EA organisations, but I actually meant like, I think this is fairly true of this sort of calibre of job in general. I haven’t tried to look at proper stats, but at some point, someone told me that this is roughly the normal thing. And I think that is, for example, I used to work for something called Teach for Australia, which is sort of similar to Teach First or Teach For America that slightly more people will be familiar with, and that also has a very low acceptance rate. And I think it’s just fairly common for the sort of thing that fairly talented ambitious university graduates shoot for to have fairly low acceptance rates. And that’s just sort of fine. It makes sense because you just can apply for a lot of things. Yeah, and so it doesn’t mean a randomly chosen person should be confident they have a 2% acceptance rate. So the thing your model is pointing out is correct, some people have reason to believe that their acceptance rate is much higher. But also I do think when you’re in like active job search mode, you should probably apply for something like 20 things and just see what happens, and quite a wide range of things. And a motto I have, that I’ve like written a post with someone else fleshing out is like: ‘Don’t think, just apply! Usually.’ So I allowed myself that hedge, but mostly ‘don’t think, just apply.’ Because so I give a lot of people career advice and stuff, or like a lot of people want me to give them career advice and stuff, and really often someone reaches out to me, and they’re like, ‘I think I might want to apply to Rethink Priorities. Can I talk to you for half an hour about whether I should?’ But the initial application stage is one hour. And what I’m going to tell them, like what they want to hear from me is like what sort of person is a good fit? Like, would I be a good fit?
Would I enjoy the job? But probably they’re not going to get the job. And also the first application stage is really short. So I’m not saying they should be confident but I’m saying they should be ambitious and just go for it. Like you just roll a lot of dice and see what happens. So a lot of people I tell them like, ‘Hey, you’re probably not going to get the job anyway, but do apply for it and don’t bother thinking about it. Like it’s a waste of your time.’ Not in every case, there’s some caveats. The post has some caveats. But yeah, mostly just like fire them out and see what happens. In my personal case, I applied for something like 20 jobs in 2019, when I was doing my big Effective Altruism job hunt thing as a high school teacher trying to take a really hard left turn into like nuclear security policy. And, yeah, I ended up with two offers out of the 20. And they were both the things I specifically thought were less likely than average out of the things I applied for. And if I was ruling things out, based on whether I might not like them, and they might not be a good fit for me, I would have ruled those two out. They both did not seem like the obvious choice for me. And then once I had them, I thought a bunch more and I was like, actually, both of these seem pretty good for me, now that I - but it wouldn’t have been worth me spending those three hours thinking and talking to people upfront, that would have been silly.

Luca 25:46

Yeah. I mean, I don’t know if this is a useful analogy or not, but it definitely makes me think back to university when I studied economics, and a lot of friends, and myself at the time included, applied for investment banks and consultancies and that kind of area of jobs. And there was just like, a whole two months, three months that that essentially took up with like, applying for things, going through work tests, doing interviews and stuff. And you kind of knew going in that it was kind of random what you were going to get out of it at the end. And just like, yeah, the kind of shotgun approach you mentioned there. I had, I remember a very similar experience with applying for research assistant positions at like, you know, mainstream kind of academic fields and stuff. But the same: I think I applied for like 20, 30 things and then got, like, one, in the year of 2020. But yeah, I remember it’s just like, kind of a brutal job market. Not to, again, dissuade people from doing it. It’s the same in so many other fields and stuff as well.

Michael 26:38

Yeah, I sort of want people to hold two things in their head. Like, I don’t want to tell people, ‘you’re probably going to get things’. I also have this with applying for funding: I don’t want to tell people ‘you’re super likely to get funding’ or something, but I also don’t want them to stop applying. I want them to just think about expected value. And just think about, hey, if it takes you like one hour to apply, and there is a 2% chance that you get this job that changes your life basically, and puts you on this amazing trajectory and can really help the world - like, you could save a lot of factory farmed animals, or substantially reduce AI X-risk or something - just chuck the application in; it’s one hour! And also, yeah, it’s not just that you might get the job, it’s also that you learn things along the way.

Fin 27:16

I was going to mention that, yeah. Part of the value is actually finding out which skills you are underrating or overrating, via that process of doing work trials and getting feedback on them.

Michael 27:24

Yeah, yeah. So part of it is the sort of crapshoot that Luca mentioned. It is noisy - noisy as in there’s randomness. So the employers will make mistakes. Like, I’ve been on both ends of hiring rounds now. And when I am the hirer, I’m pretty confident about the process, we get good people and stuff. But I can’t say I’m confident that we always get exactly the four best people and that everyone we rule out should have been ruled out. And even that the people we ruled out at stage one should have been ruled out. There’s noise and randomness; you’re just getting tiny samples. Also, it’s really hard for a person to know what they’re good at. So one thing is, some of the jobs Luca applied for and got rejected from - maybe he should have got them, and maybe they made a mistake! Another thing is, ex ante, it was really hard for you to know what kind of person you are. And they’ve done the job for like 10 years, and you haven’t done the job at all. And they’ve designed the process deliberately, with a lot of effort and trial and error and measurement - a process designed to check if you’re good for things. But also, please do not get rejected from four things and think ‘I definitely can’t do this job’. Decent chance that’s the noise kicking in.

Luca 28:26

And one thing I definitely wanna plug, for maybe listeners who are currently going through the job crunch, or are about to, is that one thing I found really useful was looking at people I looked up to - like, impressive people in the field. And some people have these ‘CVs of failures’. And yeah, I’ve made my own - I’ll plug it - but it’s just listing all the jobs: people who right now are really impressive and cool and seem so far away, and then you see their full list of like 20, 50 entries of jobs they applied to and didn’t get. And I think that kind of hammers home that, yeah, everybody kind of has to go through that as well.

Michael 29:00

Yeah. Personally, there are multiple orgs that I was rejected from, often really early, and sometimes ghosting me, who now I’ve evaluated grants to, or they’ve asked me to apply for a much more senior role, or I’ve advised them or something like that. And yeah, one thing is maybe they were wrong. Another thing is I just was a very different person then. And also, if there are two research assistant roles, they still might be really different. So if you get rejected from one, even if that was the right call, you might get another one. Yeah, so just keep at it, hang in there. Applying for 20 jobs might take you a total of something like 50 hours, if there are like 20 one-hour first stages and then some work tests along the way. It is a non-trivial amount of work. But it’s not a huge amount of work. It can be worth just rolling those dice a bunch. Also, you can potentially get funding for this - applying for lots of things is valid work - so if you need to take a few months off for it, you can maybe get that.

Fin 29:55

Yeah. And speaking of applying for these kinds of impact driven research jobs, are there any indicators you can look out for that might suggest you are a good fit for particular kinds of this work? And in particular, any traits or aptitudes that are kind of less obvious or different from just a standard academic application?

Michael 30:17

Yeah. My headline thing, to sound like a broken record, is: just apply. They’ve designed things to check. And each rejection gives you only a little evidence, but it still adds up. But there are some other things. Also, I would say, I don’t know if you need to check if you’re a good fit for this rather than academic research. It’s sort of more like: often, this is just the better thing. Not always, but often, this is just the better thing to do. And so if you notice that you should do this - most people haven’t noticed they should do this - then maybe if you could get into both, you should still go for this one, and just see what happens. But there are some things. One thing is, there’s like a ladder of levels. So 80,000 Hours - I really like their advice, people should check out their site and stuff - they have this concept of ‘a ladder of cheap tests’, I think they call it, which goes back to the idea of testing your fit. This is also part of why I’m slightly anti PhD. Again, there’s a lot that I like about academia, PhDs and stuff. But yeah, I’m a little anti PhD, because a PhD is such an expensive test. My whole EA-aligned career has been 2.5 years so far, and a PhD would be more than double that! And I’ve learned so much and done so much. Not everyone is going to do that - I have been faster than average - but still. So you can do a ladder of cheap tests. The first thing is, yeah, applying for a bunch of stuff. You can also do things like the sort of thing I mentioned earlier. The Effective Altruism community - it’s really hard to define it, but there’s probably something like 7500 people in it, so not that many. And this is spread across a lot of cause areas, a lot of work types, and some of those people aren’t professionally working on it, but are just really interested. So there’s a good chance there’s some topic that you know more about than anyone in the movement, or that you know more about than anyone who’s written it down. 
So maybe there’s like 10 people who do know more than you about it, but they’re busy and they haven’t written it up. So you could do things like, you know, choose some weekend to spend eight hours writing some post that translates your knowledge - which might be super well known in most of the world, but the Effective Altruism community is like incredibly naive about it - translates that and pulls out the implications and takeaways, so you can try that sort of thing and see how you like it.

Fin 32:23

I guess, obviously, a related thing is that there are also topics which you could end up knowing more about than anyone after a couple of weekends of research.

Michael 32:29

Yeah. It’s not that hard to beat the frontier, which is really exciting, but also terrifying. Like, there’s so much we need to do!

Space governance and the research frontier

Luca 32:36

Fin, do you want to talk about that, and like space governance, maybe? Like, how did you find that experience? Or maybe give some context there?

Fin 32:43

Yeah, so the context is that I was asked to do some research on space governance, the output being a piece on the 80,000 Hours website, trying to figure out: what are the most important considerations here, and how might people begin to work on these problems from a kind of long-term perspective, broadly speaking? I went into this knowing almost exactly nothing about any technical details, but with a generalist’s, you know, research mindset. And there’s some weird feeling of being totally inadequate, but also appreciating the fact that if you look around in this kind of research community, which is still very small, it’s worth taking seriously that, well, no one else has spent as much time looking at this stuff. And yeah, there’s some kind of weird, slightly scary responsibility there, but it is, in fact, possible.

Michael 33:37

Yeah. Like I think my sense is space governance isn’t even exactly one of your top three things but still, you’re like one of the top three people in EA for it or something, which is sort of crazy.

Fin 33:47

Yeah. Hopefully not for much longer.

Luca 33:49

I think there’s an important lesson there as well, which is that fields can be really big and exist already, but EAs have weird kind of goals, and often the fields that do exist aren’t, you know, producing information for the kinds of decisions or values that EAs have. So - this is more of a leading question for Fin here - I would imagine that with space governance there’s a whole bunch of people thinking about it, you know, from space company views or US government views or what have you, but few people thinking about space governance in the context of future generations, longtermism, or AI, because EAs are kind of weird. And that means it’s easier to summarise a big existing literature with an EA angle. Often that’s getting research into EA terms - fleshing out the bits that are weirder or wonkier - and there’s an arbitrage for EA there.

Fin 34:39

It’s a great point. And I think this is a cross cutting point, in the sense that it doesn’t just apply to space governance. It’s definitely the case that there’s an entire field of space law and there are space lawyers who’ve spent like decades of their lives getting into the details of Space Law. There’s also obviously a huge technical field, people who are building spacecraft and instruments. Government is very interested in space policy, especially from a defence perspective, and so on. So there’s a huge amount of very deep, narrow expertise. But you can ask this question like how many people have sat down and thought across all these different kinds of angles on space stuff, what just seems like overall most important to work on or change or push for, from this, broadly longtermist perspective, as in just making the world go best over all the rest of time? And it’s like, really very few people have had the kind of challenge or opportunity or thought of doing that.

Michael 35:34

Yeah, I’m gonna attempt to roll out a fresh, pithy phrase that I came up with when Luca was talking, so we’ll see how this goes. There’s the classic ‘stand on the shoulders of giants’ thing; I think what we can do is stand on the shoulders of giants and just look to the left, or something like that. There are two reasons it’s really easy to beat the EA frontier: partly there are so few EAs working on stuff, and so few people working on it with the right angle. But there’s also a whole world doing a lot of relevant things that you can just - you don’t have to come up with everything. You can just gobble up what they’ve done. They might even have review papers, they might have good summary textbooks and stuff; you can gobble up their own distilled versions of what they’ve done. And just literally ask, ‘and what does this imply for bio risk?’ and spend like five hours of armchair reasoning, and that might mean that you have the best post on this topic or something. Which is, yeah, again, scary. It also feels pretty crazy that I’m the right person for my job. My background’s a psychology undergraduate, some stand-up comedy, two years of high school teaching, and then two years of scrappily trying to do the thing I’m currently doing. And I’m now co-leading a 10-person team, on things like national security and machine learning and stuff. This just doesn’t make sense! But I am in practice the best person available for this job. So yeah, again, when people think of applying for things: the fact that you might feel like an imposter or something - it’s very plausible you’re still the best we can get. And that really is good. Like, if you get the job, there’s a good chance you should do the job. They’re not going to give it to you by mistake or something.

Fin 37:01

I guess there’s also some nice feature about how this kind of ‘figuring out what’s most important overall’ type research interfaces with the more narrow and deep kind of research, especially in academia, where you can talk to these people who have this, like incredible expertise on a fairly narrow part of the puzzle, and you can say, ‘Well, look, I’m really interested to hear, you know, what you’ve learned about this little chunk of what I’m trying to figure out, because I’m interested in the upshots, for this, like, broader thing.’ So there’s not a competitive element there, where it’s like, you know, I’m basically in the same field as you and we’re kind of trying to get into the same journals. It’s more just like, I’d quite like to work together with you for a bit and can we get on a call? And that works quite nicely.

Luca 37:40

Yeah, I mean, I definitely found translating research into EA terms a really useful framework at the beginning. And that was definitely also the moment where I felt the most impostory, or the most unsure of what value I could add. And having the framework of: no, there’s a bunch of smart writing in this field already, and what I’m doing is helping to apply a lens of X-risk, or a lens of helping the global poor, and really internalising the utility functions or something - which is what most academics don’t do, because it’s not in their incentives. I found that useful, at least to get to the first pieces of research where I was confident I was adding value.


Do you mean just making it so the EAs can understand this existing work, when you say ‘translating it into EA terms’? Or do you mean answering slightly different questions?


Yeah, answering slightly different questions. Maybe to spell out an exact example here: I think climate change is clearly a super crowded field. But that was the first kind of EA project that I was involved in. Yeah, it’s certainly not a neglected literature. What you’re saying there about synthesis and literature reviews, right - you have the IPCC reports, you have God knows how many thousands of papers. But the world is optimising for a different climate outcome than the ones EAs are most worried about or most interested in. Which would be either, on the one hand, climate change leading to extinction - which is a way higher bar than most people actually think about when they worry about climate outcomes - or it affecting the globally vulnerable poor. And a lot of climate studies think about GDP, in dollar terms rather than in utility terms, right? Where you’re then super interested in how this affects the global poor in India or Nigeria or wherever, whereas the US government is not too interested in that, and therefore the kind of research it would direct or summarise in National Academy reports is also not interested in that specific question. But there is a huge literature that you can use, and then add a few extra steps - this is what I mean by ‘translation’ - and then, yeah, flesh out something that feels valuable.

Michael 39:34

So I think we’ve said a lot of stuff along the lines of: EA is interested in slightly different questions and topics, and maybe implied that this is because EA is focusing on different causes. So, you know, neglected animal populations; neglected human populations, like the global poor; neglected populations such as future people; neglected risks. I think that is a big part of it. For example, nuclear risk is an area I’ve worked on a bunch; a lot of people just haven’t looked much at what will happen in the really extreme scenarios. They focus more on any nuclear war at all, rather than: what about one in which 1000 weapons are used, a huge number of cities are on fire, and there’s a huge amount of smoke produced? There’s been a little work, but there’s much less on following it through to ‘what’s the chance that actually kills everyone or causes permanent collapse?’, because they stop at ‘really awful’, which is totally understandable. But I’m like: but how awful, and how do we stop that stuff? Like, how do we mitigate the extremely bad thing?

Luca 40:32

Yeah, you get this in bio-security as well. So just to add another example, like biosecurity or public health is a really big existing field, but global catastrophic biological risks of pandemics that might kill everyone is such a higher bar that the number of people working on that drops significantly. Yeah, sorry for interrupting.

Michael 40:48

Yeah. And some of the interventions will be the same for both. Definitely reducing the chance of nuclear war is good for both; reducing the chance of standard pandemics, and various prevention methods, are good for both. But there’ll be some things that we won’t notice if we’re just focused on the smaller things - that are still huge, but smaller. And there’ll also be some things where, you know, we have the same bag of 100 things, but maybe we don’t prioritise them well. But the one thing I want to add is, I think a lot of the time it’s also a thinking-style type thing. So this comes back partly to academia having certain norms. Like, I think a lot of the world is sort of covering its ass or something; a lot of the world is not willing to really put out a kind of shaky bottom line. So for example, in nuclear, one thing that had never been done - and this doesn’t make sense just from the cause area thing; you need to add the thinking style thing - is, as far as I’m aware, no one had ever just gathered different estimates of how likely a nuclear war is per year, or various types of nuclear war. And this is a thing that obviously the world cares about; there are a lot of people talking about how likely a nuclear war is. But pretty few of them actually put any numbers on anything. Basically almost none of them use known good practices for forecasting. We have empirical evidence that these practices are good for forecasting, at least for the short term and geopolitics - and nuclear war can be short-term geopolitics, unfortunately. And also, just no one had put, you know, 10 of them together in one place and been like, ‘Okay, what’s the average then?’ And this is something where it doesn’t make sense - even if you’re just a regular person who really doesn’t want any nuclear war in the next five years, you should want that. But people are focused on standard rigorous methods, following the norms, etc., 
and they’re not prioritising, and they’re also spread across so many issues, so many different topics, so that they haven’t gone deep on particular ones in particular ways.

Fin 42:26

Do you think there are internal estimates, like in governments of those numbers, but no one’s had any reason to do it publicly?

Michael 42:32

Yeah, it seems pretty likely. I really hope so. I am confident that governments have some things - not confident for, like, private reasons; I just expect that governments have things like nuclear winter models that are either better than or complementary to what exists in the public eye. But I mean, there are a lot of topics where governments could publicly do things like putting numbers on things and aggregating estimates, and they don’t do them on these public things. So I think there’s a good chance they’re missing it too, but I’m not sure.

Caring deeply and thinking rigorously

Luca 43:07

Yeah. So why do you think this gap exists? Is it because people are just uncomfortable putting answers to these really uncertain questions? Or is it that, you know, there might be some difference between personal incentives and social incentives, and being wrong or being called out is the issue there? Or, yeah, why do you think that is?

Michael 43:29

I think, yeah, I mean, you could start with: just by default, people don’t put numbers on stuff; the standard thing is to not do that. Like, if everyone’s just running around trying to farm and stuff, they don’t do that. So what would push them to do that? And one thing that pushes them to do that is caring deeply about things actually getting better, and also thinking rigorously about how to do that. And to be honest - this is one of the things where maybe I sound dismissive of a lot of the world - I think a lot of people aren’t super doing that. I think a lot of people do care deeply. A lot of people do think rigorously. But it’s somewhat uncommon to have those two things combined really intensely. So you have hedge fund managers who are thinking really rigorously about how to make the most money, and you have a lot of nonprofit workers who are caring deeply. But to really combine both of those at a high level in the same person is somewhat uncommon - and then to have those people connect in a community. So I wouldn’t have come up with good forecasting methods if I was left to my own devices. I had the same thinking styles that I do now before I encountered the community, and my plans were so much worse. So if there were like 10,000 of me, but scattered, and they didn’t know each other, they wouldn’t have created these pools of resources and knowledge and methods. So yeah, that’s just what happens with the world by default, I think, or something. And then if you care so intensely about making things better, then it’s really on you, and you really have to make sure you find out the best way to do it. And then you apply evidence and reason to do the most good possible, and yada yada.

Fin 44:55

I like this thought about caring deeply and thinking rigorously numerically being anti-correlated, or at least being at the kind of extreme right hand side of both of them is quite rare. Do you have any sense of why that is? One thing that comes to mind is, maybe occasionally there’s some sense that for the really high stakes, really important questions, it often feels maybe a little like, inappropriate to try putting numbers on such important things. You know, it’s kind of frivolous or trivialising these things which can’t really be quantified. But any other explanations?

Michael 45:32

Well, yeah, firstly, is this phenomenon even real, before we explain it!? You’ve gone too hard on the non-hedging! Yeah, I would guess the phenomenon is at least more complicated; you could explain it pretty easily. So my guess is they are roughly uncorrelated - caring deeply and thinking rigorously. And you know, I would define those more carefully. This could sound very rude to a lot of people, but like, trust me, if you spoke to me, I assume I would sound less annoying than I do in the snippet version. But yeah. So I would guess they’re roughly uncorrelated. And then it’s just not that surprising that being really extreme on both is very rare, because it’s pretty rare to be extreme on one, it’s pretty rare to be extreme on the other, and then you want to multiply those two rarities, or those two probabilities. I would guess there are some things that make them correlated. One is the thing I mentioned: I have become much more productive in the last few years, even from before I encountered the EA community. And a lot of me improving my reasoning and my productivity and my work ethic and my connections and all that, and trying to accrue resources and abilities, is driven by this passion to make the world better, in a way that I don’t think I would be driven by money, although some people are. So I think that creates correlation. I think there’s also something in the opposite direction, which is, to some extent, I think thinking rigorously will tend to lean you a bit towards noticing that helping others is a good thing, and things like that. And that it’s really important.

Fin 47:12

And that there are huge opportunities, and they’re bigger than you originally imagined.

Michael 47:16

But I think there is one reason they would be anti-correlated as well. One reason that comes to mind immediately - this is not my area of expertise - would be a sort of sacred values type thing. So I think Phil Tetlock has done some work on this - he’s a political scientist - and the idea is that for certain topics, it feels taboo to compare them, and taboo to think in certain ways about them. So like in hospitals, people have to actually make decisions about which patients to, in expectation, let die, in order to prioritise giving resources and doctor time and beds to other people. But you’re not really allowed to think like that. There are certain domains where we’ve let certain people think like that, but generally, it feels really wrong and off to prioritise. But you have to; if you aren’t making a decision, that is a decision. So in practice, whether you are putting all your energy into nuclear weapons risk or not, you are implicitly assuming certain numbers - you’re acting as if you are assuming certain numbers - and so getting a better number probably makes the world better, even if it’s still a crap number; just better. Like, we are so blind - even a little bit of sense helps.

Luca 48:26

Yeah, I remember - this is still, I think, one of the big things that I really enjoy about research in particular - exactly what you said there about making implicit assumptions explicit, almost as an invitation for other people to then criticise and critique in order to get to a better number, right? It almost feels like often the first step is just putting some number out there, and then discussing what that number should be: should it be higher, or should it be lower? Whereas if everything stays implicit, it’s really hard to have that deliberation, and then to arrive at a better number.

Michael 48:59

Yeah. One thing there is just like, it’s really hard to know if you even do disagree with someone and in what direction, if you’re using qualitative probability terms, and there’s like some research on this, but one example, just from I think Thursday, is I was talking to someone about their career plans, and I think they said that there was like, a non trivial chance they want to end up doing research. And I asked, ‘What do you mean by non trivial?’ And they said, ‘I guess, like, 50% or something.’ I mean, that is non trivial; I agree! But I assumed you meant 2%, and this conversation would be much less productive if I was like, ‘Oh, you probably don’t want to do research. There’s like a tiny chance. Probably leave it off the radar for now or something.’

How to deal with highly uncertain questions?

Luca 49:32

So there does seem to be something particularly unusual about the kinds of questions that EA research involves - I guess particularly on the longtermist side. If you’re engaging with AI or biosecurity or nuclear risk and stuff, these feel almost by definition really, really uncertain areas, and really hard to get feedback on. And then, likewise, with animal weightings there’s loads of uncertainty as well. Do you think that this is a big barrier towards getting people to engage - this thing where you almost have to enter a different realm of uncertainty, and be comfortable making, as you said, these kinds of explicit statements, when it’s just really hard to know whether you’re doing good research or not, because it’s really hard to get feedback on these things?

Michael 50:24

Well, one thing I’d mentioned is we’ve talked a lot about numbers, because numbers are like a sort of shorthand for some of the main types of ways that a lot of Effective Altruism reasoning and research are different to others. But often the work isn’t numbers; often the work is quite qualitative. A lot of my work personally, is the sort of thing, the meme template I mentioned earlier, there are these four distinctions, or there are these four dimensions, or something. A lot of my work is armchair reasoning. And then sort of factorization would be one way to put it. But I’m not into putting numbers on it; I’m like, here, I think, are like the four main factors that might drive this topic or something. And then I think, there is an issue of, for existential risk stuff, it’s very hard to get feedback loops on the thing you ultimately care about most, which is did we all die or did we get locked into some terrible future? Because if that happens once you know, it’s too late. But you can get feedback loops on a lot of the other stuff along the way. So you can train your prediction abilities on short term questions that are on the right types of topics. Like there’s a bunch of nuclear risk relevant stuff happening all the time, for example, and you can make lots of forecasts on that kind of thing. You can have a model. So I’m not just saying ‘0.5%, existential risk from a nuclear weapons strike in 2100’, and I have no sense of what’s driving that. Instead, I have that, but I also have some sort of sense of what the factors causing that are, and if this is true, what should the world look like in two years? And it’s the same as, you know, scientists will often have theories that predict something that’s pretty hard to observe, but also predict a bunch of things that you can observe sooner. And you can check those things and get some evidence on whether the theory is true. So this is an issue; the feedback loops are an issue. 
But it just somewhat slows us down, somewhat makes things more complicated - it doesn’t kill this enterprise.

Feedback loops and cheap tests in early career research

Luca 52:08

Yeah, I guess I’m wondering, particularly from the early career perspective, how people should be thinking about getting feedback or acquiring skills, and just knowing whether they’re on the right track or not, in order to skill up in order to then have much more impact down the line and stuff? Would you encourage people to engage with maybe, quote, unquote, ‘less uncertain’ problems and stuff at the beginning, where there might be more feedback loops? Or is it again, maybe what you were saying right at the beginning of just if you want to ultimately have an impact on something you should head directly towards that?


Michael 52:41

Well, I don’t think you should always head directly towards that. So generally, I want people to have a theory of change for what they’re doing. Like, ‘I’m going to do this, it’s going to lead to this, it’s going to lead to this, it’s going to lead to this, the world gets better.’ And they have this sort of flowchart in their mind of what’s going on there. And I want them to often work backwards from where they want the world to be. Like, ‘I want this variable in the world to become better - fewer factory farmed animals, lower nuclear risk’ or something. Work backwards from what needs to happen for that, and then flow through to what they would do. And also work forwards from: what options are available to me? Which ones seem like they might lead to good things? And that doesn’t mean having one rigid, precise theory of change. It means having a sort of suite of them, having some flexibility, having some uncertainty, often making local decisions - like just applying for a bunch of stuff and seeing what happens, and then trying a job for a year. Sometimes, though, making a grand plan. Yeah, so some sort of blend of those. And so, given that, sometimes you should take the direct path, but sometimes you have some reason why wandering over here for a bit - doing a PhD in international relations on a topic that doesn’t really matter - might be the best thing for skilling up for some particular thing. But please know why you’re doing it. And it can be okay that ‘this is my best option, and it seems like it’s building generically good skills’, but at least know that - don’t trick yourself. But circling back to what you were intending to ask: I think it doesn’t affect the feedback loops much, because in general, researchers get better over their career, and most of that doesn’t come from observing whether the world matched their predictions. 
I claim this now - I haven’t thought about this much before precisely in these terms - but I claim this is true. I think a lot of research skills are reading papers rapidly and extracting the useful insights, and writing clearly, and making an argument that is logically valid, and noticing which factors are the most important and which sub questions to zoom in on. So one thing is that some of these you can sort of pretty visibly tell. Another thing is you can survey people - you can, you know, write a bunch of things, and survey people on like, ‘how clear was this?’ and track this over time. Another thing is if there’s a given research community - it doesn’t have to be Effective Altruism, it can be any topic, like philosophy, for example. A lot of philosophy stuff is like - on average, a philosopher’s judgement of another person’s paper is probably slightly more likely to be good than actively bad. And so if you have like 100 philosophers telling you how good your paper is, or like how sound it seems to be and how useful its contributions are, they’re a bit more likely to give you accurate feedback than exactly opposite feedback. And so in the same way, in the AI risk community, if I write some post or something I can have like 20 people telling me how sensible and useful it seems. And they could all be wrong, but they’re a bit more likely to be on average right than on average wrong. Yeah, so I can still get that sort of feedback. And there’s a connection to this theory of change thing as well. So you’re gonna have a theory of change for your career or whatever; you can also have it for a research project. And it could flow through your own actions and making yourself better, but it could also flow through other people’s decisions. If your research project’s theory of change involves this type of actor making a different type of decision, then you can just ask them, ‘Did you read my work? Did it change any of your beliefs or behaviours?
If so, in what ways? Did you think it was useful? Do you want me to do more things like this?’ Now, this doesn’t tell you the decisions they made were better. So it is possible you made things net negative, but it’s unlikely - it’s slightly less likely than the opposite. And for your work to have gone well, this node in your theory of change needs to turn out correct - like, ‘Oh, they did read my work, they did find it useful.’ That updates you positively. If they didn’t read your work, you should think the work was less useful than you hoped. So yeah, conceptually, on an abstract level, this is the same as the nuclear risk thing, right? Like, I don’t just have that bottom line number of the chance of existential risk by 2100, I also have the factors that go into it. And I can check those early steps and get some feedback from reality on the early steps in the chain, even if I can’t do the whole chain until we all die. Or don’t! Hopefully.

Fin 56:35

Alright, I suggest taking a timeout - lots of high level conversation about feedback loops - but I seem to remember about 20 minutes ago, Michael, you were halfway through - before we interrupted - a list of cheap tests you can run to test your fit for various kinds of research. So I don’t know if you want to finish the list?

Michael 56:55

Yeah. So at the bottom of the ladder was like applying for things, which is usually pretty quick for the first stages. It can get longer once you get fairly likely to get a job and you get decently long work tests, but by then your odds are fairly high, so they might be worth doing, etc. And then sort of in parallel on the bottom of the ladder, is the sort of: spend a weekend writing a blog post adapting what you already know for a new purpose. And then next rung up or something, roughly, would be what I call ‘EA aligned research training programmes’. But there’s various things like this, like fellowships and internships; they don’t have to be research, they don’t have to be EA aligned. These are usually things along the lines of one to six months, can be part time, can be full time, sometimes paid, sometimes volunteer opportunities, where the key advantage there is it’s temporary. And it’s like, you’re not an asshole if you quit. So like you can quit jobs, but you’re a bit of an asshole if you take what’s meant to be a permanent full time job and you quit after three months. These ones it’s designed that way. And also, because they are temporary, they generally have a lower bar. So it’s easier to get into them first. So some concrete examples: there’s the Stanford Existential Risks Initiative, and the Cambridge Existential Risk Initiative. Both have roughly 12 week, I think, positions of this nature, and they have a higher acceptance rate than things like the Rethink Priorities fellowship. And where you can find these would be - it’s called EA Opportunities, I think - it’s this website that’s been set up just very recently; it used to be called the EA Internships Board. And it lists a bunch of things like this. There’s also, of course, ones outside of the EA world. Like, you know, regular, prestigious think tanks have all these internships and stuff like that. So you can look for various things like that. 
And then, kind of on the same rung but a little higher, or maybe the next rung, would be things like this but at established organisations, where it’s not just designed for training - they actually do sort of want you to do good stuff, too. So Rethink Priorities is an example of this. We run fellowships that are three to five months, 20-40 hours a week, and same structure, but we have a somewhat higher bar and we expect the people at the end will produce quite useful stuff, as well as it being a talent pipeline thing. Centre on Long-Term Risk has something like this as well. Yeah, and then actually doing a job is maybe the next rung on the ladder. So applying’s like, bottom. You might be able to skip all the way up the ladder - like, one of the people who’s very likely to join Rethink Priorities quite soon is I think a second year undergrad. She’ll be joining as a fellow. But we have a decent number of undergrads or recently graduated undergrads, whatever that’s called, who joined either temporarily or permanently. So you might be able to skip up the whole ladder, but if that’s not working out, you can try the next thing.

Fin 59:39

Awesome. And beyond the particular examples you gave, where can people go to find these opportunities collected into single places?

Michael 59:47

Yeah. Obviously, that list wasn’t comprehensive. There’s also PhDs and masters and all sorts of stuff, and just trying stuff independently, getting a grant to do independent work for three months. There’s a lot of options. The fundamental principle is: how much will you learn, and how long does it take you? How much will you learn about your fit, how much will you get better at stuff, and how long does it take you? And then where can you find these things? Yeah, the 80,000 Hours job board is a great resource for, I guess, rung one and the final rung - both applying for and getting a job. They mostly have things that are somewhat hard to get, and permanent full time stuff, but again, don’t rule yourself out. I think, both theoretically and in expected value terms, it’s often great to apply, but also empirically, a lot of people in the online Effective Altruism community are quite surprised by getting things and have something like impostor syndrome. Maybe that’s just the whole world - I just don’t pay as much attention to the rest of the world - but definitely the EA community I pay attention to, and they have that. And then yeah, as I mentioned, the EA Opportunities website has a lot of things at that middle-ish level. Both of these boards also have some things along the lines of graduate programmes and stuff. Obviously, you can just, like, hunt in the wild, in the real world, for graduate programmes - things that you know, according to that theory of change that you’ve made, because I told you to earlier, will be helpful for your career plans. Something else? Oh, yeah - funding. So people might be surprised: grants often have a higher acceptance rate than a lot of the other things. It depends what you’re working on, depends how you’re doing it. I’m not saying you will get a grant. But it does have a surprisingly high acceptance rate. And it can be pretty quick to apply, so there’s a good chance you should give it a shot.
And you can get that from EA aligned funders for anything that’s going to make the world net better in expectation. It’s not - like, people so often ask me, ‘Can I get funding for this thing?’ And I’m like, ‘Well, is this thing good? If this thing is good, then you probably can. If it’s not, then you probably can’t.’

Fin 1:01:39

For people who are considering applying for funding for the first time, could you try rattling off just like a bunch of random examples of what good for the world could look like?

Michael 1:01:48

Yeah, well, okay, so one thing is where you can go to find more examples: I have a post that I’ll mention again at the end called something like ‘Interested in EA/longtermist research careers? Here are my top recommended resources.’ And it contains my top recommended resources. One of them, if I recall correctly, is a workshop I made on why YOU should consider applying for funding, with a capital ‘you’ because I want you to really know that it applies to you. And yeah, that lists some examples of the kind of thing you can get money for. I will also now say them as a bonus treat. Yeah, you can apply for things like actually doing a project that is obviously useful. This is the thing that people would imagine; they’re like, ‘Oh, I could do a research project for three months and this could have a useful output. Or I could, like, write a book on some topic that’s important. I could make a website that fills some need that people have, I could start offering legal services or therapy to, like, important organisations or people who are doing useful work.’ These sorts of things might be somewhat intuitive, though you might not realise just how wide the range is. Like, yeah, any good or service is useful. But there’s also this other category that I think is especially likely to be missed, which is things that build you. And often these things can seem kind of frivolous, or just like ‘for me’, quote, unquote. These are going to be things like travel funding, or funding to quit a part time job or take some time off in order to apply for lots of jobs and skill up, or funding for various things that will improve your physical or mental health. A lot of people are quite surprised by these kinds of things. And often they won’t get funding; it’s not like everyone gets funding from the EA big wigs for all the mental health treatment they want.
But if you’re pretty unlikely or unable to do these things without money, you would do them with money, and they’d make the world better by a substantial degree in expectation - then that can be totally valid. So sometimes travelling to some city that is a hub for the kind of work you’re doing, to network and form collaborations and find opportunities later, or taking three months off to apply for a crapload of things, while also skilling up and reading up and getting feedback and finding mentors - this is valid work, even though it’s just, like, getting to the next piece of work. You’re exerting labour and time and energy in order to make the world better, ultimately - that counts.


Yeah, it’s a good answer.

Finding mentorship, independent research vs alternatives

Luca 1:04:08

It’s a really useful framing. There’s one thing you touched on there that I actually want to ask explicitly about. So if I am looking to do independent research, and do that by applying for a grant, it seems that one of the drawbacks, especially if I’m early on in my career, is not having the mentorship that I maybe would have if I applied for a fellowship or for an internship or even a job. Is there any advice you have for acquiring that mentorship? So if I’m already, you know, taking agentic action by applying for a grant, is there any recommendation that you might have if I’m looking for a mentor? What should I be looking for and how can I go about doing that?

Michael 1:04:41

Yeah. Okay, firstly on the other thing, because I think Fin’s original question before I talked for a long time was: where can you find these things? So on funding, where you find these things: I have another post called ‘List of EA funding opportunities’, which is a list of EA funding opportunities. I think it plus the comments is comprehensive, and there’s also an Airtable link so you can find a bunch of stuff there. Many of them take one or two hours to apply to. Well, first you think about what you want to do, then the actual application takes like one or two hours. And yeah, they do have a slightly silly high acceptance rate that I won’t say, but you can use publicly available knowledge if you want to go find it. So believe in yourself! Yeah, mentorship. So stepping back. Yeah, I think finding mentorship is really important. I think it’s a really good question. And for that reason, I think getting funding to do something independently should rarely be Plan A. Getting funding to augment what you’re doing anyway - like for travel or better equipment, or some time off to apply for things - that could quite reasonably be Plan A. But getting funding to do an actual full independent project is probably usually less good than joining some sort of programme or job or something. Because of, basically, the incentive structure: if someone works for me, if they’re on my team, then their success is my success, and their failure is my failure, and I’m strongly incentivized for them to do well. And I am hired for my ability - like, I was promoted into my role for my demonstrated ability to do good mentorship, and for my knowledge of the area and stuff like that. And I know that working on this is one of my top professional development goals. Whereas if you just ask someone to be your mentor, they’re doing it because they’re nice - even if they are also super impact driven, and they know that your project’s important.
We aren’t acting fully rationally, we’re not constantly chasing impact the most, we do respond to sort of local incentives. And at the end of the day, when they have like a really busy workday, and you email them something for feedback, if it’s not their job, even if it’s really impactful, they might just like engage with a different mindset - like they might not engage at all, or they might engage pretty lightly. And there are times - mostly in the past, I’ve become a better manager so it happens less often - but there are times I’m like, I see something that one of the people I manage has written, and I’m worried. And then I’m motivated to give a lot of feedback, because I’m like, ‘Oh, this could be embarrassing. This looks pretty bad for me.’ And if it was just someone I loosely mentored, and there’s no website that says I’ve done this - actually, there was a time when that happened. There was someone I loosely mentored in that way, and they wrote something, and it wasn’t very good. And I tried to help a bit, but I was mostly like, ‘I’m gonna cut my losses, to be honest. I’m very busy, I’m gonna walk away.’ And I’m like, you know, a pretty EA-y person. So yeah, you respond to local incentives. So that was all about incentives for the manager. Another thing is just like, for example, my team, we have an onboarding process, we have an operations team, we have a bunch of institutional things; we have a team that we’ve worked on building cohesion, and making sure everyone knows each other, and they like, feel connected, so they can like, give frank feedback without it hurting and stuff like that, so it doesn’t all come from me. So all these things are set up. And if you’re floating around by yourself, and you find a mentor who’s being nice to you, you’re just missing so much of that. It still decently often is a good move, and especially if you’ve tried the other stuff, and you haven’t got the other stuff yet. 
In general doing things, rather than just reading, is good. So if you haven’t been able to get any opportunities, then this is probably better than just reading a bunch of blog posts or doing a course. But ideally, you get something where it’s like packaged, incentivized mentorship feedback loops, for most people.

Luca 1:08:18

Yeah, one thing I want to throw out as an alternative, which I’m curious for the Michael take on, is: there’s one version of this where I am doing an independent project, and then I reach out to some mentor to try and get them to help me support that project. And there’s another version of this where I could just reach out to a senior person, or somebody who I think does really good work, and offer to be their research assistant or something. And then maybe those incentives could become more aligned. I mean, it’s maybe more of a barrier for them to take you on then as well, because they’re entrusting you with more responsibility, and their own work there as well. But then you do have that alignment of incentives, where your success is also their success, and they maybe have that vested interest. Is there a trade off there, between you doing your own independent research and looking for a mentor, versus assisting somebody on an existing project, even if it doesn’t have the fellowship structure?

Michael 1:09:10

Yeah. That will often be better than just doing your own thing, and getting a mentor who’s helping you with your own thing. But there’s also a middle ground where you’re like, ‘here’s a menu of five things I might want to do. I’d be happy for you to mentor me on any of them based on your preference, and also, I can somewhat sculpt them based on your preference.’ So there’s a sort of sliding scale between how self directed and other directed you are in what you’re choosing. Both times should be impact driven, but like, what flavour of impact driven? Yeah, so that will often be better but sometimes be worse. I don’t know which one’s more common, but both seem generally worse than actually joining some official set-up. So one thing to think about there is: if someone really wanted a research assistant, they probably could open a job ad for a research assistant. And there are people who do hire research assistants through job ads. So those are people who have decided ‘I have these things I really want someone to do for me, and I have time available to help them’, and probably they have, like, institutional support and funding and all that sort of thing. So they’re probably just more invested and maybe better able to support it. This won’t always be the case; there are people who’ve done this thing of, like, proactively finding an RA arrangement or other things like that. And it does go well sometimes. Everything, you know, has a caveat, but generally, step one is probably apply. One other caveat I do definitely wanna mention is that there’s a certain kind of entrepreneurial person who will learn especially fast if they just, like, chuck themselves in the deep end and build a thing themselves. And it’s hard to say if you’re that kind of person. But my sense is most people aren’t that. And also that that applies less to research than to building things.
So if you have an idea of like a forecasting organisation that is providing a product or service, then that’s the sort of thing where I think it’s more likely that chucking yourself in the deep end means you learn really fast and do something really cool, than if you just want to do research on forecasting. I’m not confident.

Luca 1:11:03

One thing I want to slightly push back on, at least from my experience with academia or maybe more mainstream research, is that if you already bring your own research assistant funding, I think that opens up a lot more doors than if the academic has to get all the university sign-offs, get funding, and go through what is a lot of time and effort of vetting candidates. Whereas if you can bring funding, and bring a credible signal that you can do good work - and I think often that involves already having had a previous relationship with them, or having shown credible interest in the more narrow field that they’re working in - I think that does open up a bigger chunk of existing opportunities.

Fin 1:11:45

Do you want to say just concretely what you did? Because I’m presuming you have something in mind here?

Luca 1:11:49

Well, I mean, I’m thinking in particular of the Research Scholars Programme or something. I don’t know how generalizable this is but to some degree, right, that was like we had funding for two years to do whatever research we wanted to do. And one version of this is you do your own independent research. And another version of this is viewing this as like, ‘oh, I can now do what would have otherwise been a bunch of unpaid internships.’ And I was able to apply to government positions and also other think tank positions and stuff there as well. And kind of because I already had my own existing funding and had the university signal or something there, I think that did credibly open more doors than if I had just applied through the ‘normal process’. And there is something there of, I want to encourage people to maybe be more agenty or more outgoing. Again, I think I’d take Michael’s hierarchy of how good things are on there, but I think this is one way that if it’s easy to get or easier to get independent grants and stuff, then you can kind of use that as well to then kind of hack yourself back into the fellowship internship programme by taking some kind of initiative there.

Michael 1:12:51

Yeah, so my EA aligned career has all been in Effective Altruism organisations. And you super can have an EA aligned career in other organisations. And so often I will fail to flag the things that apply elsewhere. And I think things do look really different, both for non-EA organisations and also across organisation types, which can make huge differences. So I think this point applies pretty little to Effective Altruism aligned organisations, but applies quite a bit to the rest of the world, and is a really good point for the rest of the world. And I think the sort of explanation of this, to get sort of abstract about it, is: demand signals are really important. And the market often works quite well. But there are these externalities, and there are market failures. And so if you reached out to a company and were like, ‘hey, I want to bring my own funding and be an intern for you’, then, yeah, they’ll be happy about that, but they probably didn’t super need that role - because if they really needed it, they would just open it. But think tanks and governments and stuff like that don’t have all the funding that they sort of should have, in some sense, because a lot of what they’re working on is social impact type stuff. Also, they want you there because it helps them, and you can extract the mentorship, so you’re each getting different things from the relationship. You don’t need to be boosting a thing that they’re doing that’s useful. You just need to be extracting their knowledge and skills and training, whatever you’re doing for them. But for Effective Altruism organisations, especially in the longtermist cause area, we have decently solved the market failure - within our community. Obviously the world is still very much on fire - but within the community, to a decent extent the organisations that should have funding to open roles, do.
And that means that if they aren’t willing to pay you for something, there’s a decent chance that it isn’t super useful or super needed. Like, for me, if I just got a free new employee, I wouldn’t be happy, because we are time constrained, and also there’s just only so fast that you can or should grow a team. There’s various cohesion type issues that happen if you just scale extremely quickly. So if it were worth us having another person, we probably would already have another person - money is not the bottleneck for us. But I super take your point. And I think, very excitingly, there’s a lot of programmes where you just can do this - like, Effective Altruism helps out with these programmes. Also individuals, as you say, can be agentic and set up the arrangements themselves: come along to governments or think tanks and get weirdly important roles, just because they brought money, and then get mentorship from amazing people. Because these orgs are just very, in some sense, underfunded, but full of talent - focused on kind of the wrong things, but very smart. And you can learn a lot. So if you’re going down that path, that’s great.

Concrete example: nuclear risk research

Luca 1:15:47

I think that’s a spot on distinction. I think that helped clarify my thinking a bunch. I am aware we’re maybe burrowing into abstract territory again, so I’m going to ask for another concrete example which is: can you talk us through one research project that you did? You mentioned nuclear risk and stuff before. And maybe to begin just let us know what your theory of change or your goal there was, and then we can maybe dig through and extract some lessons from how you kind of approached it?

Michael 1:16:11

Yeah. So at the organisation I work for, Rethink Priorities, a lot of the time we operate kind of like a consultancy, where we’re driven by people reaching out and having specific needs that they ask us to fill. And sometimes they give us money. And sometimes they don’t. But we know that there is a clear theory of change there, because these people have important decisions to make, and they want us to do this. Sometimes we are more self directed: we just notice something’s important, and we start with the theory of change. In this case, there was a very big nuclear risk project that started from a comically small request that someone made - something they wanted Rethink Priorities to look into. And this is a smart person - important, with the ability to influence things - who wanted us to look into some particular organisation as a funding opportunity. This is my understanding; this was before I joined. And then this turned into a year long, really interesting project on all of nuclear risk, and then that person left and I inherited it. So basically our scope was pretty much: to what extent should people focused on improving the long-term future or reducing existential risk focus on nuclear weapons risk? And if they do, how should they do it? So one question is how high priority this area is - like, how likely are nuclear weapons to sort of kill us, or turn everything onto the wrong path even if we don’t all die? And also, what should we do about all those high priority things? So this is very broad. And I could do whatever I wanted within this area, basically. And so I tried to sort of factorise that into some of the key topics. I made a lot of mistakes. I didn’t finish much stuff, and we can maybe circle back to mistakes. But yeah, my aim was sort of thinking about, firstly, at a very high level, the reasons why nuclear risk might matter. And so this was just armchair reasoning; just fleshing out, like, why might we care?
One thing would be the relatively direct path from a huge nuclear war to the long term future being much worse. So this could be an extremely huge nuclear war that has extremely huge nuclear winter effects, kills almost everyone, and then we just never recover for some reason - like something else finishes us off, some other weapon, or we just stagnate and then die from some natural cause. Or, yeah, the future is on the wrong path, because the political systems and values that rearise are bad. Another thing could be sort of pretty indirect paths from nuclear weapons to the long term future being bad. So this could be a relatively small nuclear war that really harms geopolitics, and means that a lot of other things go worse. So this could make countries all much more afraid of each other using WMDs - Weapons of Mass Destruction - so then they each rush in to come up with new fancy WMDs that would counteract these things. So then nuclear weapons aren’t what did us in, but they sort of triggered this chain of events and contributed. Toby Ord calls this an existential risk factor, rather than an existential risk, in The Precipice. That would be another. And then there’s a bunch of other reasons that are indirect, like why working on nuclear could be helpful for building expertise that can apply to other areas, stuff like that. So one example of a project within this general space is just, at a high level, why might we care about this? And then you can look into how much each of those matters. So you could then look into why the direct path might be important, why the indirect paths might be important, and how nuclear risk compares to other areas in terms of building our expertise and credibility and skills and knowledge and connections for working on other things like AI or bio. So that’s one thing. And then another thing was aggregating estimates of how likely nuclear war is and how big it would be.
So, trying to find a whole bunch of numbers that have been put out there, put them all into one place, and trying to extrapolate from them into some sort of common currency. A lot of people are estimating over different timelines, like by 2024 or by 2040 or something. They might be estimating just a US-Russia war versus nuclear war in general; they might be estimating a nuclear war with at least 100 weapons used, versus just a nuclear war in general. I’m trying to extract all that into one common currency, weighting these things differently - some of the sources are really atrocious and some of them are quite solid, so trying to put different weights on them - and come up with: overall, how likely is nuclear weapons use? It appears to be, by the way, roughly 1% per year. That’s my headline. Yeah, so not 50, not zero, meh. This is very work in progress, very rough. But my rough sense of existential risk from nuclear weapons by 2100 is something like 0.5%. We could talk about what that means if you want to; maybe if we want to skip on, there’s my hot take. Yeah. And then, to fuel that, I’ll just mention one other project, which was building a forecasting tournament on nuclear risk, to give us lots of numbers. So this project was trying to factorise: what are the paths to nuclear risk? And what are each of the steps along those paths? And can we come up with operationalized questions of ‘will this thing happen by year x?’ or ‘how many of this thing will exist by a given year?’ So it could be the number of weapons in various countries’ stockpiles, or the amount of smoke lifted into the stratosphere if there is a nuclear war above a certain size, or, if there is a nuclear war, will the US have any detonations on it? Because that can help us discern whether it would just be India-Pakistan or would include the US, etc. So a lot of questions, getting a lot of estimates, feeding them into this database. There’s now a resource that other decision makers can hopefully use.
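As an aside, the ‘common currency’ aggregation Michael describes can be sketched in a few lines of code. This is purely illustrative - the estimates, weights, and horizon below are invented numbers, not Michael’s actual data or method - but it shows the basic moves: convert each ‘probability p within n years’ estimate into an implied annual probability, take a credibility-weighted average, and extrapolate back out to 2100.

```python
# Hypothetical estimates of the chance of nuclear weapons use, each from a
# different source, over a different time horizon, with a credibility weight.
# All numbers are invented for illustration.
estimates = [
    {"p": 0.10, "horizon": 10, "weight": 0.5},   # 10% within 10 years, shaky source
    {"p": 0.30, "horizon": 20, "weight": 1.0},   # 30% within 20 years, solid source
    {"p": 0.05, "horizon": 5,  "weight": 0.25},  # 5% within 5 years, shaky source
]

def to_annual(p, horizon):
    """Convert 'probability p within horizon years' into the implied
    constant annual probability (assuming independence across years)."""
    return 1 - (1 - p) ** (1 / horizon)

# Common currency: the implied annual probability from each source.
annual = [to_annual(e["p"], e["horizon"]) for e in estimates]

# Credibility-weighted average of those annual probabilities.
total_weight = sum(e["weight"] for e in estimates)
p_year = sum(a * e["weight"] for a, e in zip(annual, estimates)) / total_weight

# Extrapolate back out to a longer horizon, e.g. 2022 to 2100.
p_by_2100 = 1 - (1 - p_year) ** 78

print(f"implied annual probability: {p_year:.2%}")
print(f"cumulative probability by 2100: {p_by_2100:.1%}")
```

The real version involves much messier judgement calls - which sources to trust, how to operationalise ‘nuclear war’ - but the conversion-and-weighting skeleton is this kind of arithmetic.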

Fin 1:21:23

In broad terms, you get handed this enormous question, which is, maybe figure out what’s important on nuclear? And what could improve the long term future? You’re describing how you factorised that question; you want to aggregate risk estimates, you want to figure out pathways to making the long term future go worse. You started this forecasting tournament. I’m curious now to explore some of the real kind of granular details of what this work looks like day to day. For instance, what tools were you using? Were you speaking to other people whilst you were doing this? How much progress did you feel like you were making day on day? Yeah. Does that make sense as a question?

Michael 1:22:06

Yeah. Yeah, so those three things I mentioned are a subset of all the various things I did, obviously. Each thing was pretty different. So maybe I should mention one more example just to flesh it out - I think there were four types. Another one was - I tried this one, it was unfinished - but I tried to look into something along the lines of: given nuclear winter, how much famine would there be? So some things there. There's one, in my opinion, really bad white paper trying to answer this question, which is the resource everyone cites, and it just completely ignores questions like: what if we just move crops? In a nuclear winter various areas get colder, the amount of rain changes, the amount of sunlight changes, and this changes what you can grow. And this paper is just sort of like, well, everyone would try to do what they try to do normally, no one would move or whatever, and then x many people starve. So okay, then there are these four types. There's the armchair reasoning type, where I was trying to look, at a high level, at what my goals should be. There's a sort of aggregation type, of just pulling a bunch of estimates together, doing some janky maths, and explaining my reasoning really transparently to show how I got to my bottom line estimates - which I have not done on the podcast, but it's in the database. Then there's the kind of empirical, but also speculative, nuclear winter thing, where I'd be reading a lot of papers, trying to notice all the things that suck about them and all the reasons to not super trust them, and again breaking down the question, factorising, trying to look at the science of some things. And then the fourth one would be this nuclear risk tournament. Each of them looks pretty different. So a lot of my work day is just, yeah, I'm at a desk doing various things in different apps or something. A lot of it is just trying to think and write.
So I used this tool called Roam - it's pretty similar to WorkFlowy and Notion, I think. You can infinitely indent things. So starting with 'nuclear' as the heading, gradually topics emerge, and subtopics and breakdowns. And this is partly a way of note taking, but it's also partly factorising this huge problem into what seem to be the big questions, and then sub-questions for those questions, etc, etc. Notes on papers. Talking to a fair few experts: so listing the relevant experts - often these are people who write papers, but there are also other people - trying to think, what do I want to ask them? And when do I want to ask them? And sending all the emails, having a lot of calls, taking good conversation notes. This goes back to what I think Luca said an hour ago or something, this thing of translational research. I think, honestly, a lot of the time you can get a lot of mileage out of just asking really deep experts questions they've never thought about before, asking three of them the same thing, taking notes, and doing some things that are really quick and relatively easy for a sharp, not very biassed person who just wants the truth: not just trusting everything they say, and noticing the flaws in their thinking. And then you just have three nice sets of conversation notes and key takeaways. So yeah, some of that. A few other things: getting a lot of feedback pretty early on. I tried to send my project plans to a bunch of people who had worked on nuclear risk from a sort of longtermist/existential risk perspective, to ask, 'Are there other questions you think it'd be important to look at? Do you know of existing resources?' Really often the questions to ask people are: how useful do you think this is going to be? For what purpose is it useful, and for what target audience? Is there anything I should read? Is there anyone I should talk to? And asking a lot of people that and chaining that into something good. Yeah, there's probably more; I'll pause.

Luca 1:25:42

Yeah. Can I maybe ask about how iterative this process is? So, to the degree that you take a big question, and then split it down into smaller sub-questions, and then maybe try to find cruxes there: to what degree is this something you did right at the very beginning, and then it's just kind of ticking through and going through the list and maybe doing the grunt work, versus an iterative process where, you know, you had some framework at the beginning, and then you realised that's not the right framework to have, or not the best way to dissect these things, and then it's a maybe more ongoing, kind of muddled thing?

Michael 1:26:17

Yeah. I think I'll first try to say what I did, and continue this descriptive portion - this is not a normative or prescriptive portion; what I did was not perfect. So a person called Luisa Rodriguez had worked at Rethink Priorities on this topic, but before I joined, and so I sort of inherited their breakdown and their work and stuff. This implied a bunch of topics I should focus on - all the things they hadn't finished. So to begin with, I thought 'I'll do the things they haven't finished.' But yeah, I first did an exploratory phase, where I tried to learn a bunch about those topics and a bunch about nuclear risk in general: reading a bunch of papers that seemed relevant, skimming a bunch of papers, taking notes, vaguely thinking about what seems important here, and writing very rough project plans. Then going from that to sending a bunch of people just a list of like 10 questions I might look at, in prioritised order, then writing project plans on three of them. I'll spoil one of the mistakes I made, which is: I did a lot of things in parallel. I pursued eight different projects at once. If I had pulled it off, it would've been great; they all do feed into each other. And if I was definitely gonna be doing this for five years, it would have been the right approach, and I would have ended up with this great set of interlocking things that all inform each other. But I did not. Outside view forecasting should have told me I was going to pivot. I did pivot. So I have like five unfinished things and three crappily finished things. But anyway, yeah, so I had a bunch of things in parallel. Yeah, project plans, then reading a bunch to inform them. And in the project plan I have this theory of change: which actors might need this? What might they need it for? What are my goals here, including my goals for my self-development? What are the questions we're probably gonna look at? Roughly how long are we gonna spend? This is all useful planning.
But yeah, it is iterative, in that I deviate from the plan as I go. And my general thing here is sort of like - there's a phrase, 'plans are worthless, but planning is essential.' I think it's overstated, but in the right direction or something. And I would often say that having a theory of change and a project plan puts you in the right general region of the map, and gives you a sense of what you're looking for as you explore - like, what should I keep my eye out for in order to change direction? But you aren't in exactly the right place on the map, and you don't have exactly the right path, and you should explore, you should change your mind and stuff. So yeah, as I went, I noticed that some of the topics I was looking at - some of the topics that Luisa had been looking at - didn't seem like the key things to me, and they were skipping over things that seemed really important to me. So then I wanted to zoom into those and deprioritise some other stuff. One thing I did a decent job of is writing relatively early and trying to have bottom lines relatively early, but this is one of the several main things I should have done better. I spent a lot of time just kind of miscellaneously learning and taking miscellaneous notes that were organised by topic and subtopic. But what I now want to do - basically, one of the best posts I would suggest for trying to get up to speed on research stuff, other than my 'Interested in research? Here are my top recommended resources', which links to this, is a post by Holden Karnofsky called 'Learning by Writing', and a complementary post called '[Useful Vices for Wicked Problems](https://www.cold-takes.com/useful-vices-for-wicked-problems/)'.

Luca 1:29:38

Those were literally the two plugs I did in our 50th episode. I think they’re really really good.

Michael 1:29:43

Yeah, like three people messaged me at once on the day it came out or something and were like, 'I just read this and I want to use this method and I feel a lot better about my project now' or something. Good things. So basically the idea there is pretty early on you do a little bit of miscellaneous reading. When you did the plug, did you already explain it?


No, no.


Ok I’ll explain.

Luca 1:30:07

I also don’t think we can assume that every listener will have listened to every episode.

Michael 1:30:10

Specifically the plug on episode 50! God, that was one of the classic plugs! Yeah, so the idea was sort of like: pick a topic, do a little bit of exploratory reading - I have an Anki card on this, so I think I'll get the eight steps fairly right - but then, right then, you form a bottom line. So you already form a take. So pretty early, I should have formed a take on - unfortunately, it's not a binary question of 'Should we pay attention to nuclear risk?'; it's 'Roughly what fraction of our resources should go towards nuclear risk? For what high level goal, or what ranking of high level goals? And roughly what interventions seem most important?' I should have within a month formed a bottom line on that, maybe even less. And then, still do a lot of research - don't at that point publish my bottom line and walk away - but do a lot of research that's aimed at either affirming or countering that bottom line. So maybe I would have thought relatively early that nuclear probably should be, I don't know, 50 times less important in our portfolio than AI and biorisk. And then I'm now like, 'What would change my mind?' And what would change my mind includes, for example, the chance there'll be a huge nuclear weapons build-up, or the chance that even a small nuclear war really reshapes geopolitics - the thing I mentioned earlier - and predictably in a quite net negative direction. So I think one thing a lot of people think is that it's going to reshape geopolitics, and they sort of skip over the idea that it might strengthen a nuclear weapons taboo and things like that. And I'm like, the future is very hard to predict, right? And I feel pretty uncertain. But at that point, I could have spent a lot less time on some of the questions that seemed really unlikely to flip my bottom line. Like one thing I spent a decent amount of time on - not very much, but I was gonna - was how bad EMPs would be. And this is sort of important, but it's not one of the most important things.
So yeah, so I should have at that point focused on flipping my bottom lines, started writing pretty fast, started writing an outline that tries to justify my thing and explain, ‘here’s my take. Here are the three arguments that I think are strongest for it.’ And the breakdown of those arguments. Each of them would have a heading; the heading is phrased as the argument, and ‘here are the three things that might change my mind’ or something, and then send that to a lot of people and have them tell me why am I wrong or something. That would have been great; I would have finished more stuff; it would have been fantastic. I did kind of a bad approximation of that. But yeah, not enough.

What would Michael have done differently?

Fin 1:32:19

Mistakes. We’re already talking about at least one mistake, but anything else that you would have done differently?

Michael 1:32:24

Yeah. So one is the too-many-things-in-parallel thing. And to break that down a bit: in my case, there are a lot of things that are kind of outside view. So I failed to account for the planning fallacy - the fact that things would take me longer than expected. I also failed to account for the number of side things I would embroil myself in. I did so many side tasks, and I don't not endorse them - they were all useful, and they were always shiny and new and varied and interesting. But yeah, that either should have led me to drop those side things, or to do fewer things in parallel and just finish one thing by the end of my time. Or just notice halfway through that I keep doing these side things, and adjust my forecast based on that.


What’s an example of a side thing?


Anything other than my main work. So for example, I do grantmaking on the EA Infrastructure Fund. And that was not my original job, and took a decent amount of my time. Also a lot of writing random blog posts on other topics, a lot of mentoring people, advising research training programmes, giving career advice to lots of people, things like that. Yes, so I should have been more attuned to that. But the reason that matters is because, let's say I roughly have an end date - I had an expected end date of the end of 2021 or something. The planning fallacy means things are gonna take me longer than expected, and the side things mean I'm gonna get fewer things done by the end date. And then the third thing I didn't account for is the chance I'll pivot early because something else really cool comes up. And I should have expected that, because I'd pivoted a bunch before then for other things. So something really cool did come along that was worth pivoting for. And all of those things - doing the side things, pivoting - that's all kind of fine. But it meant I should have done one or two things at a time, so I was robust to it, so that when I finished, I finished with a proper version of something rather than a kind of scrappy, scattered version of eight things.

Fin 1:34:22

If you'd serialised rather than parallelised, then it's more likely you'd at least get some things done before this pivoting.

Michael 1:34:27

It does depend on the person. So there was logic there - parallel did make some sense; all the questions do inform each other - but I was going to stop at some point, so I just needed to do something well. And to apply this to other people: another version of this is scoping down. So most people aren't as silly as me and don't try to do eight projects in parallel. But a lot of people do start with a project that's as big as 'what should we do about nuclear risk?' or 'what are the strategies to protect from nuclear risk?' This is pretty big. So there are people I've talked to recently who have like 12 weeks to do a project. And they're gonna do a project along the lines of 'what strategies are people considering for nuclear risk?' That's possibly doable, but there's a good chance that they'll end up with a half-finished thing that was on track to be great but doesn't get there. And they should maybe choose a narrower, less ambitious version. It's kind of complicated. In many ways, I think a lot of people should be way more ambitious and realise there are these extreme stories going on in the world that almost no one is an active character in, and you could really be a leading player - everything's on fire and you might be able to help, and step up and stuff, and you can get into good things. But for research stuff, scoping down seems to usually be good, and is, I think, in line with the 'be ambitious' thing, because this is the path to doing great research at some point.

Fin 1:35:44

Yeah. And you can pick the important thing, and then pick a narrow part of it. Scoping down doesn't mean doing some random thing where the entire topic itself is smaller.

Michael 1:35:54

So you can still aim at something like: 'I'm going to work on one of the most important problems in the world, given what other people are working on - one of the most important on the margin - and I'm going to do my research. This research really might flip a $1 million grant decision, or bigger or whatever. And it might put me on a path to be one of the three main people at the intersection of nuclear risk and existential risk.' That's all great. Do that. Good ambition. But the path to that probably involves doing something somewhat narrow, but in the right topic, really damn well. I guess I was just feeding off that.

Luca 1:36:22

It definitely sounds as well, with these really big, meaty questions, that it's maybe worth reflecting on the work that you're doing as an individual versus the work that the community as a whole is doing. It sounds like when you were entering the nuclear risk question, there was already some work that somebody had done before, right, with Luisa having had a first kind of go and having produced resources there. And then, if I'm interpreting this correctly, you left as well at some point, and presumably somebody else is now taking it over from there. Thinking about handover and stuff as well - and maybe this adds to this idea of taking a narrow piece - doing it really well will make it easier for the next person to build on it. Whether that person is on the same team, or at a different organisation but has access to these documents and can read them and then continue to iterate and build on them.

Michael 1:37:11

Yeah, I think that's a good point. I think another way to put something like this - this is a separate tip - is: try to make your thing a modular piece of an overall whole that we're going to build together. So try to make your project sort of a nice brick that is evenly shaped, where it's clear what shape this brick is, and then other people can lay bricks on top of it and we can build this house together - rather than it being very unclear what shape this brick is, or it's sort of a pile of sand or something. So, slightly more concretely, it's choosing a scope that makes sense, is coherent, and fits in, so other things can take the next steps or the previous steps or whatever. And you make it really clear what your scope is, what assumptions you're making, what limitations you have, what you're leaving out of scope. So don't say that you're covering 'how big a deal is nuclear risk?' when you're actually assuming three things. Instead, say you're covering 'how big a deal is nuclear risk, given these three assumptions, and focused on just this one pathway', and make that clear, and then someone else can drop one of those assumptions or focus on another pathway or something. And then together these five people, going for a while, add up to this whole thing. That's a good approach. In my case, I did a version of succession planning or handover planning, which is not the ideal version normally, but I'd got myself into that situation. And what I did was a couple of work sprints to clean up all my crappy notes, so they're intelligible to external people. Like, I had a lot of conversations with people, and I had hot takes on flaws in their reasoning and stuff - so, cutting some of those things! And then I shared it with some research training programmes. And now there is a suite of people who are doing projects that are kind of along the lines of next steps from mine and can use mine as fuel, which is pretty cool. But that's not the usual right approach.

Fin 1:38:53

And out of curiosity, are there concrete questions, like additional little bricks that people are adding or can add on nuclear that you want to talk about?

Michael 1:39:03

Funny you ask Fin. Yes, indeed there is. Yeah, there’s a post of mine called something like ‘Nuclear risk research ideas, summary and list’ or something.

Luca 1:39:13

You have a great way of naming EA forum posts.

Michael 1:39:19

What do you mean? Literal?


Very right to the point.

Michael 1:39:17

Yeah. Some of them it's just like, what on earth is this about?! Another research tip that that reminds me of is: have clear titles, and have clear summaries, because I do not have time to read everything. Help me make an informed choice about what to read. Don't call it - yeah, anyway. So yeah, this one has a bunch of ideas. One thing I'll flag is there are other ideas that I think are probably slightly more important than the average one in there but are nonpublic, which is kind of awkward, because some things are info-hazardous and stuff like that. But I think this is a pretty good set of ideas. I also would suggest, if anyone does want to pick these up, don't just run with one of them; also reach out to people who are actively working in the field - I'm not one of them, but you could reach out to me as well - but also Longview Philanthropy, for example, is this EA-aligned fund that is stepping into the space in a big way. So they will be a key user and a key expert, and finding out what's useful to them, according to them, could be great. And going back to mistakes: another one is I probably didn't spend quite enough time checking what's actually useful to people. I did a thing of basically informing my own beliefs on nuclear risk, which is a pretty good proxy. But it's less good than talking to the people who'd use my work - I talked to other people who are interested in the area and know about it, but not many of the people who'd use my work, which is a little, yeah.

Learning by writing

Luca 1:40:37

So I actually want to talk a bit more about learning by writing because I actually think it might be fun digging into that bit. And then I think that maybe gives us a nice segue to reasoning transparency as well. So maybe one way of framing this is I guess, as you said, Holden has this piece on learning by writing; I’m curious maybe for the Michael Aird take on learning by writing. Are there any frameworks or lessons that you wanna particularly highlight where they’re kind of useful?

Michael 1:41:03

Yeah, one thing I should flag is everyone's different. And there are specific researchers I know for whom a lot of this advice isn't right, and they should choose some kind of really weird, nebulous path of: go to the wilderness for a couple of years, do whatever seems most important to them based on some really complicated model, and then come back and share it. But yeah, what I'm saying I think is useful for most people, as is learning by writing. Headline thing: read the post, it's good. Read the other post, that's also good. But also, one personal-experience-type thing was there were people I was managing - oh, yeah, okay. There's this book called 'Managing to Change the World'. And it's a quite good book on management. I think some of it's not very rigorous or something, some of the ideas are wrong, but there are a lot of good lessons in there. It's for management of nonprofits, and leading a team, leading an organisation. One of the key principles in that is 'guide more, do less'. And another principle that's either in there or in a workshop that the company behind the book ran is 'take early slices'. And I'll explain what both of these mean. 'Guide more, do less' - or at least my version of it - is about a pretty common pattern, a pattern I've had, which is: you don't make it super clear what you want someone to do. And then they go and do it for a bit, and you don't check in early, you don't give them much feedback along the way, they go and do it for a bit more, and then you check in relatively late and you find out it's really off track. And now you're basically rewriting their work, and this feels really bad for them, and it takes a lot of your time, and it's like, why were they doing this? And it also doesn't build their skills. I guess they can see the parallel between what they did and your good version, and that's kind of a useful data point, but they didn't get to try doing it themselves.
So the way this applies to research is trying to do things like laying out where you're aiming, pretty legibly, pretty early, and then sharing it - ideally, hopefully, you have a manager or a mentor or a range of feedback givers. So for example, the learning-by-writing-style outline: you don't even need to have filled it in. You can just be like, 'Here's the question I'm answering. Here's what I currently think is the core argument, in bullet point form, that I'm going to flesh out later. And I don't even know if this is true, but this is the core argument. Here are the topics I'm going to hit. Here are what I think are the core counterarguments. And here are what I think are my core rebuttals. I'm not very confident in any of this, but this is what I'm gonna cover.' And then you can ask people something like, 'If I filled this in, would that be useful to you? And also, what are your high level disagreements?' and stuff. And you can share this pretty early; you don't have to have done much reading at all yet. So this is kind of implied by learning by writing, but it's a thing I want to zoom in on. And this was already salient to me before I read the post - this is part of why, reading the post, I was like, 'Oh, I wish I'd read this earlier.' There was this time when I was managing someone, and they shared a thing relatively late in the journey, and I felt I had to rewrite a decent amount of it, and this was really on me. They joined to learn, and I just hadn't guided them on a bunch of things. And so I'm sort of most obviously talking to managers here, but this really applies to managers and junior people themselves, and I'll flesh that out. One reason is that managers will have done the thing a whole bunch, and they'll have a lot of tacit knowledge about what's good and how to approach it that they probably haven't articulated. So try to articulate that more, try to guide more, try to flesh out all your models.
So there's a bunch of docs I have floating around that people like, on tips for writing well, and tips for research methods and stuff. This all basically originated from halfway through this project. I was like, 'Oh crap. There are all these things I knew that I didn't tell them, and I've got to tell them late, and this is on me, aghh.' So I just started collecting them. Yeah, there's all this tacit stuff. One way to help with that is, yeah, articulate the tacit stuff, but often you don't realise how they'll misunderstand it, or there's a thing you never notice you do until you notice someone not do it. So getting that early feedback is useful. So when the junior person shares with the senior person, that tacit knowledge is activated by seeing what you did wrong, or what you were missing. And then they can communicate it to you in a tailored way.

Luca 1:45:10

Yeah, there are maybe two other lessons I'm keen to flesh out a little bit. One is: we were talking before about how there's a difference between research and this kind of impact-driven research, which in the direct sense has to do with whether it changes a decision or not. And one way that learning by writing can be really useful is because it constantly forces you to think back to the ultimate goal, or the theory of change of this research, which is: how does it inform this decision? And you were talking before, 20 minutes ago or something, about how it's very easy in research to get lost in the curiosity of it a little bit, or go down tangents and stuff. And I think that often bringing it back to this learning by writing, or 'how does this actually change the decision that I'm ultimately hoping to inform?', is a really good way to stay on track and really find the things that are most important to that actual decision, rather than the 100 tangents that are always possible.

Michael 1:46:10

Yeah, so that's part of what I had in mind when I said: you share the outline with someone, and then you ask them something like, 'If I filled this in and it was true, would that matter to you?' So this can be opinionated things, along the lines of 'What should we do about this?' It can also be an informational thing. So one of the people I'm managing now is doing a thing that kind of has bottom lines, but actually this one is mostly pulling together a bunch of info - and sometimes that's fine, sometimes that's the appropriate thing - but in this case, when he's sharing with people, it's less like, 'If I convinced you of this, would it change your mind?' and more like, 'If I informed you on these topics, would that help you make better decisions?' And then you might find out, before you've looked into any of the topics and before you've checked if the thing's true, that the answer is already 'No.' They might be like, 'I already know about these topics.' I think a lot of the time junior researchers don't know about something, and they haven't seen any of the senior people write about that thing, because the senior people are busy, so they don't write about everything they know about. So the junior researcher might write a broad overview of the topic, and mostly cover stuff that all the people making the decisions already know. That can be good - that can be distillation for other junior people; it can bring people up to the level that the senior people are at, because the senior people don't have time to write it. But just be aware of that. So you can say to people, 'Would any of this be new to you? Would any of it change your mind on anything, if I filled this in?'

Luca 1:47:29

And then there's this other lesson I'm really keen to explore, which is about finding cruxes, or being able to develop good reasoning in and of itself. I think there's this nice thing where learning by writing encourages you to make the best case for and the best case against something, and that forces you to reflect on what might change your mind. And I think that then helps instruct you towards finding cruxes that would tangibly change your mind, and in doing so spots the most valuable information that you could acquire. And I think that's interesting, where humans are often - or at least, looking at myself, I'm often - kind of bad at reasoning, or often fall into motivated reasoning or confirmation bias, or what have you. But constructing the best case for and against something is a way to almost harness that bias: okay, I'm just gonna think of the best way that I can steelman an argument, and then infer from that what assumptions I need to make for that to hold true. And then I can almost red team it again by doing the complete opposite, and then kind of iterate between the two.

Michael 1:48:37

Yeah, I think I agree with that. Also, even ignoring the bias-type thing, I just think it's super important to not just list 20 factors that might be relevant in a flat way. You'll never be sure which factors are most important, but try, pretty early on, to be like, 'Okay, I think these are the four important topics. I think topic one is the most important. And for that one, here are the four most important factors.' You still list the things that you think are less important, but just flag them as probably less important, and spend less time looking into them. Because if you look into everything equally early on, then you're not gonna have time for the things that matter most. So pretty early on, become opinionated about things to zoom in on - but then you're also harnessing motivated reasoning to have a war against yourself, and that can be useful. I think also - I mean, it sounds like the sort of thing a blindly biassed person says - but in my experience, I think it's pretty feasible, and I think I do it, to just not have a thing I want to confirm about nuclear risk, and just actually be confused. And I think you can, to a decent extent, cultivate that mindset. I think that's a big part of our added value. So for longtermist work in particular, we're working on issues that are mostly about risk, where mostly the bigger the risk is, the worse. And that means that most of the rest of the people who have selected into working on the same issues think the risks are big. And also they want to convince everyone that this is a big deal and want to convince people to take drastic action. And so you have a lot of communities that have quite a bit of alarmism and quite a few taboos. So for example, in the nuclear risk field, I think a lot of people would have a really hard time properly engaging with the question: maybe tactical nuclear weapons aren't a problem, or are a net positive. And I'm not saying they are a net positive or aren't a problem.
But I think these people haven't properly looked into it and confirmed it. It's just that that pushes against the vibe and the ethos of advocating for weapons levels to go down, and stuff like that. So I think you can decently well just try to check what's going on in your head. And I think you can get some mileage out of that: just notice when you're leaning into certain conclusions a bit more than others, and try to continually balance yourself. There are some refrains in the rationalist community that I think are useful. One is, 'That which can be destroyed by the truth should be.' I think it's a good one. You want to believe the correct thing. It's pretty cool. And I think that's a key way we can add value.

Reasoning transparency

Fin 1:51:07

Since we're talking about useful concepts for doing research well: there's this idea of reasoning transparency. We've talked about it a little bit in the past, Luca, but I'm wondering if you can give us your impression of what reasoning transparency means.

Michael 1:51:23

Yeah, so there’s this post called ‘Reasoning Transparency’ on the Open Philanthropy blog by Luke Muehlhauser. It’s really good. And one of the cool things he mentions is this idea of making sure you always start with the summary. And that’s just very simple, and you can just tick that off. If you tick that off, you’re doing a lot better than a lot of researchers and a key thing is also: make it actually a summary. So write a goddamn summary, make it an actual goddamn summary. And that includes things like: actually say what the conclusions are, make that clear; make the key piece of the reasoning clear; try to make it so that if I only read your summary, I get a decent chunk of the value just from that, not the whole thing. And I was resistant to this for a full year. I wrote a lot of EA forum posts and got this feedback so many times, like actually write a goddamn summary, and please stop doing overviews, where I just say ‘I will cover x y z; I will discuss these topics’. An overview is better than nothing; it’s better than having a vague title and then just launching it and people don’t know what you’re talking about. But it’s not like -

Fin 1:52:18

What’s the difference between a summary and an overview?

Michael 1:52:21

So I don't know if these are the correct terms or something. But what I mean is: an overview would be like 'this post will discuss x y z.' What it does is it helps me choose whether to read it, and it helps me know where this will fit in my mental models. And that is useful; that's better than nothing. But a summary would be like, 'I looked into x in this way; I had this conclusion; here are the three core arguments for that. And here are the two core arguments against.' What I mean by this - key takeaway sections or executive summaries or whatever - is they'll usually be a bit longer than an overview, because you're actually giving the meat of it in a condensed form. And so it could look like you're wasting people's time. But the three key functions are: you're helping people decide whether to read it - there are way too many things to read, and you're helping them make an informed decision about whether this is the thing they should invest time in reading in full. Another thing is it helps people get the value from it without even reading it. Another thing is you help people orient to the whole thing - you're helping them know where each piece fits in. So it's less like you're just rambling for an hour in the post or paper or whatever, and more like I already know the structure, and I know why this person is giving me these five paragraphs; I know where this fits into the argument and what it's helping with. And then I said three things - four things! The fourth thing is: super often, I read something a while ago and noticed it was useful, but I made the mistake of not making an Anki card, so I no longer know what it says. And I'm like, I don't want to read it again for 40 minutes. The summary is really nice to go back to. Yeah, so writing an actual summary that condenses the actual thing.
If your actual summary that condenses the actual thing gets too long, you can just add a two-sentence summary at the top as well - a short summary and a longer summary. So that's one piece of reasoning transparency.

Luca 1:53:59

And there's a way, I guess, of linking that back to the idea of transparency: if you have a lot of information or takes, but it's hidden in an 80-page Google Doc or in a bunch of appendices, that's not very transparent, because the reader then can't easily infer what you're actually thinking, what your cruxes are, or what the big takeaway even is.

Michael 1:54:16

Yeah. I was thinking about the tips for reasoning transparency, but the key principle is: what do you actually believe, and why do you believe it? That's basically it. And that's one of the key things where we can add value - 'we' being the EA community, as opposed to a place like academia. The EA community's median intelligence and work ethic and stuff are pretty high, but that's not our key advantage. There are so many super smart people elsewhere, but a lot of the time, as I said, they're covering their ass, and they're hedging, and they're avoiding the bottom lines. So: what is my actual bottom line on these key questions? What are the key things driving that bottom line? What would change my mind? And not what is most rigorous, but what is the actual source of my belief. Really, for people walking around, a lot of the things they believe are because someone said it sometime, or because of some shaky analogy they're drawing, or because they have three relevant pieces of anecdata or something. And that is evidence. If that's actually what's driving your belief, then a) that might actually be useful info for someone else in itself - it actually is evidence - and b) if that's how shaky your belief is, then I want to know that! It's doing a service to the community. So don't cover your ass, don't oversell. Just say: what do you believe, and why do you believe it? As if in your brain you're asking yourself what you believe and why you believe it, and putting it on paper. And a summary helps with that, because it lays this out and helps you navigate the rest of it with that in mind. But there are other tips as well.

Luca 1:55:39

What are some of the other tips and advice then under this idea of reasoning transparency? So summaries is one. What else is there?

Michael 1:55:46

Yeah. So another one is: trying to continually make it clear what your key claims are. If you're thoroughly covering a topic, then it's probably going to take 20 pages or something. It's good that you say that 20 pages of stuff, but make it clear to me which sentences in there are really the key bits. What were the most decision-relevant pieces? And then what are the key reasons there? You can literally do this by saying, 'my key claim is', or 'the strongest argument for this is', or 'I put the most weight on this and the least weight on this', or 'this is my rank order of factors', or something. So that's one key thing. Related to that is the just-putting-numbers-on-it thing. You don't always have to put numbers on it, but often try to be like, 'I consider this factor roughly five times as important as this one', or 'I think there's a roughly 20% chance of x', or things like 'this is my current belief; I think that if I did another 1000 hours of research, there's an 80% chance I would still roughly believe this.' Academics kind of universally have the same hedge or something - again, stereotypically! For the median academic in my mind, all claims have the word 'may' in them, like 'this may cause this', and they all have the same sort of 'but further research is needed'. So tell me how uncertain you are. And one way to do that is: is it really likely you'll change your belief with another 100 hours of research, or is it pretty unlikely? And how wildly would it change? Things like that.

Fin 1:57:13

I like this point, because I can imagine hearing something you said earlier - that other research communities often hedge - and thinking the lesson is 'well, you should be more confident than them across the board'. The lesson should instead be: you should just be much more transparent about the things you are confident about and the things you're not confident about, rather than this kind of blancmange of hedging everything so it's unclear what the most important points are.

Luca 1:57:38

Yeah, one way I cash this out in my mind is that EA researchers, as we've talked about before, engage with questions that are way more uncertain and maybe don't have as good feedback loops. And I think EA research is also way quicker than academic research, in terms of the hours we dedicate to a question. And reasoning transparency is a way to mitigate the obvious downsides from that: at least if you're really explicit about how confident you are, where your sources come from, and what your cruxes are, that helps other people point out errors and flaws and iterate on that work - which is really, really critical if you want all of the upsides from dealing with these uncertain research questions at breakneck speed.

Michael 1:58:23

Yeah, I think those are both very good points. I should have said them earlier. So edit me in earlier, fit your voice to mine. Yeah, I think basically what you want is calibrated beliefs and calibrated hedging. And by calibrated I mean you know roughly how uncertain you are, and things like that. So it's not necessarily that you're super likely to be right, but that your level of confidence is well calibrated to how confident you should be, how likely it is that you are wrong, and how likely it is that you would change your mind. So have calibrated beliefs and calibrated uncertainties - and then tell me your calibrated beliefs and calibrated uncertainties. Those are two separate things, but packaging them well is good. This also connects to the point that one good thing about academia is it's set up to pit people against each other, and pit ideas against each other and see what wins. We can do the same thing, but in some ways better. We're hampered by having so few people, so a lot of things just go uncritiqued. But we can make it easier for the two people who do come along and critique us to do that, by making it super clear what we're saying and why we're saying it. We map this out and are like, 'Come at me. Here, I'm vulnerable. I'm showing my belly, and if I'm wrong, I want you to change my mind.' Because we're not pushing for this theory so we can sell some book or something - I'm not saying all academics are doing that - but rather we're trying to actually find out what's true, as a collective thing, so the world gets better. So one thing is you make it easier for people to see how much they should trust you. Another thing is you make it easier for them to critique you. Another thing is - I've said bottom lines a lot, I keep saying bottom lines, but the structure matters too. The structure of why you believe it is important, because the world changes, and then your bottom line might not transfer.
So what I learn from you, I'll be able to generalise better if I know why you believe it - what is the structure of your belief, what are the factors. And that also tells me what mechanisms to intervene on. If I just tell you, 'there's a 0.5% chance of existential risk from nuclear weapons by 2100', you have a sense of whether to pay attention to nuclear, but you don't know what to intervene on. But if I tell you 'here are the five pathways I think are most important, and the steps in them, and which of those are most important', then you're like, 'oh, okay, I can target this node, I can stop this event from happening - this specific thing.'
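Michael's point about calibrated beliefs and calibrated uncertainties can be made concrete with a standard scoring rule. The sketch below is a hypothetical illustration (the numbers are invented, not from the episode): under the Brier score, a forecaster whose stated probabilities match how often events actually happen beats the flat, academic-style "may happen" hedge.

```python
# Brier score: mean squared gap between stated probability and outcome (0 or 1).
# Lower is better; it rewards forecasts that are both calibrated and sharp.
def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs, outcome 0 or 1."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

outcomes = [1, 1, 0, 1]  # three of four events happened

# Calibrated, sharp forecaster: says 75% each time, and is right 75% of the time.
sharp = brier_score([(0.75, o) for o in outcomes])

# Flat hedger: the universal 'this may happen' rendered as 50% every time.
hedged = brier_score([(0.5, o) for o in outcomes])

# The calibrated forecast scores strictly better (lower) than the flat hedge.
assert sharp < hedged
```

The point is not that sharper is always better - a forecaster who said 95% here would score worse than the hedger on the miss - but that telling people *how* uncertain you are, accurately, is rewarded.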

Fin 2:00:33

And to be clear, by 'structures' you mean: often people will have one or more models feeding into some bottom-line single number, and it's really useful to know how sensitive that single number is to guesses at other numbers higher up in that chain - and indeed which things it's less sensitive to - so you know where to push, and also where to do more work if you want to get clearer.

Michael 2:00:56

Yeah, I like to think in flow diagrams, and I think that’s a good thing. So most of these things would be flow diagrams, and I want to know the structure of the flow diagrams, which nodes are most important, where do I want to poke and make things go better?

Luca 2:01:08

I think it is worth emphasising what you said there, Michael, that being vulnerable is really hard with research. I definitely get this as well: reasoning transparency is not easy, because it often means showing other people some pretty egregious assumptions I'm making that I feel really silly about, or having to admit that I've only skimmed some work rather than checking all of the assumptions. In a day-to-day way this can often feel like me being lazy, or me being stressed about how other people are going to perceive my research. But again, the bigger framing is that it is really important to be transparent about these things, because it will help make the work better. And people pointing out that you are wrong, or building on your work, is good for the world. This vulnerability is really hard - I definitely feel that on an emotional level sometimes - but it's really important too.

Michael 2:01:57

Yeah. So I guess another tip, and also an example, is: don't just cite a bunch of stuff. Don't believe a claim and then find sources and cite them in a flat way that doesn't say how important those sources are and whether you've read them. Instead, cite the actual sources that are feeding in, and say how much you trust them and why, and how much of them you read. So own up if you just skimmed it. Own up if you just read these couple of pages. Also say reasons you think the source isn't very good, or whatever. Also, though, if you think the source is great - if it's like 'I read the full thing; I vetted several claims; they all held up; I've read a bunch of this author's other work, and it all seems strong' - then that's also good to know. Basically, a lot of the time when I read normal stuff, including stuff by EAs who haven't learned this stuff yet, they just cite a bunch of things, and I'm like, 'Well, I don't know if you believe it because of that thing you're citing. And I think this source sucks, but I don't know if that matters - if I tell you the source sucks, would it actually change your mind at all?' And that makes a difference to how I'm going to vet this. If I'm vetting someone's research and they just cite 50 things, and I don't know how important each one was, and I don't know if they already think these things suck, then it's very hard for me to correct it.

Independent impressions vs deference

Fin 2:03:09

Another distinction I found really useful is between your independent impressions when you're reading research or doing research, and on the other hand, your all-things-considered beliefs. What's going on with that distinction? What does that mean?

Michael 2:03:23

Yeah. So that was separate from reasoning transparency, but I guess it's in the general category of thinking well and writing well. So I did not come up with these terms, but I wrote one of my rare concise posts.


A golden nugget.


So I might be able to say basically the full thing! Yeah. Independent impressions are what you would believe if you didn't account for deference to other people. So you can learn from other people - from the things they pointed out, the sources they cite, the arguments they raise and stuff - but you completely ignore the extent to which you're just like, 'well, that person's smart, and they believe x, so I'm going to put some weight on x.' Instead, it's just whatever you believe based on the 'evidence itself'. Then all-things-considered is just: you bring back in deference. So independent impressions are your all-things-considered belief with one particular node deleted. And the reason to delete that one particular type of evidence is - well, there are several reasons. One is that if, as a community, we defer to each other and then tell each other our beliefs, you can get what are called 'information cascades'. And I think this has happened in some ways - I don't know how much. For example, for a long time, before the book The Precipice came out, there was this survey of existential risk researchers on how likely they believe various existential risks are. There were roughly 10 of these people, and I think they all work at, or are pretty closely associated with, the Future of Humanity Institute. These people probably talk to each other a whole bunch. And then the survey's released, and it's like, 'Oh, these 10 people all said this', and then a bunch of other work is produced. And then people are like, 'look at these five different sources of estimates' - but those sources might all be informed by each other. So yeah. It's good that the survey was done; I just think people should use it carefully. What I recommend is forming both independent impressions and all-things-considered beliefs, acting mostly based on your all-things-considered beliefs, but being clear about which you're communicating.
Sometimes you only communicate one, sometimes you only communicate the other. So it's totally fine if I tell you my all-things-considered belief and just make it clear that I'm doing some deference. Then you just know to be careful, because I might already be accounting for deference to some people, or might be slightly deferring to you, or something. But when you're actually acting, you want to make the bet that's most likely to be right, and other people do contain some info, so base it on that. And then one final point I'll raise is: one reason the habit of forming independent impressions is useful is, again, this structure point. It's not just about bottom lines; it's about what is driving this belief, what is driving this risk, what is driving this intervention. If I just listen to Fin saying 'space governance is x amount of important, and here's the main intervention', then I don't really know anything about how to do something on space governance. And I don't know how to change my mind when, two years later, a bunch of new research comes out that flips a lot of the things that were driving Fin's belief - I wouldn't know unless I asked him again.

Fin 2:06:28

Yeah, that's really useful. I'm imagining, for instance, a 'guess how many jelly beans are in the jar' competition. If the only thing I wanted was the most accurate guess at the number, then the thing I should do is ask a crowd of people who are guessing independently - maybe not even let them communicate with one another. But if I'm in the crowd, and I want my own guess to be best, I will want to speak to as many people as possible who are also making guesses. If everyone else is doing that, then there is some risk of a kind of cascade, where one overconfident person is just sure it's 513 jelly beans, and everyone inherits that guess, and so you get a biased overall guess. So it depends on what you want, but it's a really nice distinction.
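Fin's jelly-bean scenario can be sketched in a tiny simulation (the numbers here - the true count, the noise level, the anchor - are all made up for illustration): the average of independent, unbiased guesses lands near the truth, while guesses that partially anchor on one overconfident person stay biased no matter how many guessers there are.

```python
import random

random.seed(0)
true_count = 513

# 1,000 independent guessers: noisy but unbiased around the true count.
independent = [true_count + random.gauss(0, 100) for _ in range(1000)]
independent_mean = sum(independent) / len(independent)

# Cascade: everyone anchors halfway on one overconfident guess of 800.
anchored = [0.5 * 800 + 0.5 * g for g in independent]
anchored_mean = sum(anchored) / len(anchored)

# The independent crowd's average is much closer to the truth;
# anchoring pulls the whole crowd towards the loud guess.
assert abs(independent_mean - true_count) < abs(anchored_mean - true_count)
```

The averaging errors shrink with crowd size only for the independent guessers; the anchored crowd's bias towards 800 never washes out, which is the 'information cascade' failure mode in miniature.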

Michael 2:07:15

Yeah, it's somewhat related to something we do at Rethink Priorities: pretty often, when we're making an important decision, we use this thing called rot13, which is a website where you type in your text and it gives you a weird garbled version that you can't immediately read, but that you can immediately chuck back into rot13 to restore. That means someone can flag that a question exists, and then everyone forms their independent take and writes it out, but doesn't read each other's yet. So we all get our independent takes and don't 'anchor' each other. I think that's a slight distortion of the term 'anchoring', but close enough. This is different from independent impressions - it's not quite the same thing, because you can form an independent impression after hearing someone's take - but they're vaguely related, and some of the benefits are shared. You also can form an all-things-considered belief while still having a quote-unquote 'inside view model' of the structure of things; that's why I said the habit of forming independent impressions seems spiritually related to this inside-view-model thing. Also on that, there's this great post called something like 'inside view models and deference' that goes through why you need this stuff.
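The rot13 trick Michael describes is easy to reproduce locally - Python's standard `codecs` module ships a `rot13` text transform, and because rot13 is its own inverse, applying it twice returns the original text. This is just a sketch of the mechanism, not the specific website Rethink Priorities uses:

```python
import codecs

take = "My independent take: option B, roughly 70% confident."

garbled = codecs.encode(take, "rot13")      # unreadable at a glance
restored = codecs.decode(garbled, "rot13")  # rot13 undoes itself

assert garbled != take
assert restored == take
```

Each person posts the garbled string, and only once everyone has posted does anyone decode - so the takes are formed without anchoring on each other.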

Luca 2:08:26

I want to maybe go back to one reasoning transparency question quickly. I think this is an important point as well: if you're looking to create impactful research, you want to be producing research that provides new information. And that often means acting on the frontier of knowledge - doing research on things that are really unknown. Therefore, we should expect to be wrong on a bunch of things, especially if you're the first person to be working on some nuclear question or AI sub-question. And therefore, being transparent about your assumptions is really important, because you're aware there's a high probability that you're wrong on something really important. And it's then up to either future you, or to other people reading your work, to be able to interpret that.

Michael 2:09:14

Yeah. I think that's a good point. So I think the reasoning transparency stuff connects back somewhat to the thing I said earlier about making your work a modular piece of an overall community effort - making it a nicely shaped brick that can be built on. So yeah, I think in general we should assume that what we're doing is really hard. Both because, in general, forming correct beliefs on big things is hard, and also because of the thing mentioned earlier: often the thing we're most interested in is pretty unprecedented. I think unprecedented is not a binary - literally the next thing I say is unprecedented in some sense. But existential risk is more unprecedented; it's less like anything we've observed. So what we're doing is really hard. Also, we just have so few people; academia has huge numbers of people working on topics of similar difficulty. So we should expect that we're super often wrong. That does, I think, increase the case for reasoning transparency, because we do want other people to be able to poke these holes, and usually we'll get there. A vaguely related point - it's not quite the same thing, but I want to hammer it home - is: don't let the perfect be the enemy of the good. This comes up as well with the quantification thing that we talked about earlier, of a lot of people being resistant to quantification. A common thing is, 'you just can't put numbers on that; you can't know the number.' And my response is: yes, you can put numbers on it; no, you can't know the number; that's fine. We just want to be a little less stupid. A way I put it sometimes is, 'in the land of the blind, the one-eyed man is king.' We are currently so stupid; I want us to be a little less stupid, as a world and as a community. And that's the bar. The bar often is something like: just put 10 nuclear risk estimates in one place. That's not the truth, but it's slightly closer to the truth than if we didn't have that.
So reasoning transparency is good for that reason: we want, as an incremental community movement, to be gradually moving in the right direction. One thing I want to push back on, though, is that you said something like 'because we're advancing the frontiers of knowledge'. And I think we're sometimes not. In general, our research can be doing a few things: advancing the frontiers of humanity's knowledge, advancing the frontiers of a given community's knowledge, or bringing more people to the frontier. And what I mean by that is: the translational thing we discussed earlier would be an example of advancing the EA community's knowledge, where maybe the rest of the world actually knows some topic, but we're helping EAs learn that thing, because this group of people is kind of naive about it. We just need to distill it for them.

Luca 2:11:48

Yeah, I mean, that's fair pushback. I guess, to the degree that it matters to my point on reasoning transparency, it's that the individual researcher is acting on the frontier of their own knowledge, and is therefore constrained, and should therefore expect that there's a high probability that what they end up concluding will be wrong. And the more constrained they are in the research they can draw on, and the fewer precedents there are, the higher that probability maybe is. But yeah, I take your point.

Michael 2:12:13

Yeah, I agree with both things. We are on the same page. But yeah, I do think it's important to remember that you don't have to advance humanity's knowledge in order to do this good stuff. And that's part of why it can be relatively easy - not easy, but relatively easy - to do a bit of a less bad job than what already exists. And then also, to bring more people to the frontier. So the thing we discussed earlier - there might be a thing that the senior people know but no one's written down - it can still be useful to write that thing down, because you bring more people in the community up to the level the community has. And that means more people can be involved in these high stakes decisions, or doing research based on that.

Luca 2:12:55

Yeah, I really like that framing.





Michael 2:12:57

Pineapple again.


And we’re back.

Communicating your research

Fin 2:13:02

Okay, we've been talking about doing useful research. A related question is how to write usefully. Michael, any tips for communicating research in the best possible way?

Michael 2:13:18

Yeah. So one thing, my recurring broken record call to action: my post on ‘here are my recommended resources’ has a link to a doc with my recommended things on that. But again I will say pieces of this as well, but you can check that for details.


Very generous.


Yeah, I'm very giving. So one thing there is - I want to hit it again - write goddamn summaries. Make them actually summaries. Try to have in mind what the reader wants from your thing, what's actually useful for them, rather than covering your arse. This sort of connects to reasoning transparency, connects to summaries. For example, I think it would be very easy to sort of cover a topic and not think about what the reader needs from it - either not saying it at all, or not emphasising it. This connects back to the theory of change idea. So think: what are the bottom lines that would most flip their mind? What are the key pieces of info that are most useful to them? How can I make sure I've covered these? How can I make sure I've explained them in a way that makes sense, given what they currently know? And how can I make sure I've emphasised this enough, so they're not just trawling through? Because once you've spent a while learning about a topic, you could probably just talk for eight hours, stream of consciousness, unstructured.




In theory, one could do such a thing! And yeah, you could write the same thing, and that would contain a huge amount of nuggets of wisdom, but it's just too much to wade through and it's hard to process, because we don't naturally structure things in our heads into a proper document; we structure them as clouds of knowledge and stuff. So try to turn it into a nicely structured thing, with the reader in mind - that's a core underlying principle. And then that feeds into having summaries. A related idea is something called the curse of knowledge, which is: once you know something, it's very hard to intuit how people could be confused. I guess this is one of the few things I got from being a teacher that is still relevant to me now. You know, I know maths, and I know psychology and stuff like that, and these people just super don't. And I can state the fact I want them to learn in a way that makes sense to me - such that if you deleted that one fact from my brain and then told me that sentence, it would work for me. But it's relying on a bunch of background knowledge, a bunch of concepts that are baked in, a bunch of terms they might not know, a bunch of ways they might misunderstand it. So yeah, beware of the curse of knowledge. A good way around that, and around a lot of writing stuff, is to draft relatively early and then send it to people - just get anyone outside of your own brain to help you spot where they're confused. Ideally, someone who is pretty representative of your target audience. Also, you after a week is somewhat of an approximation of someone outside your brain: you go through it and you're like, 'Oh, this is unclear, and I'm struggling through it.' I guess a sort of meta point is: draft early. Be wary of spending too much time carefully crafting perfect phrasings when you're probably going to massively change your conclusions in a month and have to rewrite everything.
Because it's basically sunk cost - not fully, because it's not fully wasted: having written it up means you'll get useful feedback and end up with a better conclusion. But still, it's kind of a waste of time. So when I say draft early, I mean something more like a rough and ready bullet point version, unless you're someone for whom good writing flows really easily. Concrete examples - really good. That's probably something I could have done more in this episode. I've tried it here and there! One of the other few things I got from being a teacher - and a lot of education research is terrible, but some bits of it are good - is one finding that I think is pretty solid, and just intuitively makes loads of sense: if you have an abstract concept, having a concrete example is useful, and especially having multiple examples that don't share other variables in common.

Fin 2:17:13

And what’s a concrete example of a concrete example?

Michael 2:17:15

So I should remember this: scarcity. The example - there's this blog post from something called 'The Learning Scientists'. I think The Learning Scientists is pretty good if you want to learn about how to learn and how to teach; it's got nice, digestible resources. This is one of the resources I liked before I was into Effective Altruism - one of the few things I still recommend. But I imagine it would still hold up if I viewed it with my new enlightened eyes. So the example they use is trying to teach the concept of scarcity. Don't just state the concept of scarcity. Instead, also use an example where everyone wants the jelly beans but there are only so many jelly beans, so someone can charge more for jelly beans, or whatever. And also use an example where everyone wants to water their lawns at the same time, and so the price of water goes up. I probably should have prepared a concrete example of my own, but hopefully that's good enough. If you just use the jelly bean thing, then the students might really anchor on that and think scarcity is about food - like it only applies when everyone's hungry or something. Basically, it's a matter of triangulation.

Fin 2:18:19

Yeah. You want the overlap between these three examples.

Michael 2:18:21

Yeah. You’re trying to use a bunch of things that have as little in common as possible, apart from the thing you want to point to. And if you use more examples, that helps as well. So that applies especially if you’re explaining concepts; it’s not always relevant in research writing, but it can be relevant - for example, if you have a breakdown into categories. I did this earlier in this conversation with the high level goals for nuclear risk, when I told you there’s this direct path to existential risk and this indirect path. A lot of writing I read just stops there. And I’m like, I can guess at the kind of thing you mean, but I’m not sure. And also, there’s a lot of different kinds of things you could mean. So I want to know at least one example of the kind of thing you mean - illustrate this category, or this type. So then I gave you the illustration. And the illustrations are partial, but they give you a sense of what level of granularity or whatever it is. Another example: I think in AI governance, it’s really important to have a sort of high level theory of victory, a mid level set of intermediate goals, and low level concrete policies. From me just saying that, it’s very unclear what I mean by each of those categories. I think I probably won’t bother explaining - but just to illustrate that that was unclear! If I was writing this up, I would give you two examples of each of them. And then you’ll have a much easier time generating more examples yourself.

Quick wins and good habits for research

Fin 2:19:33

Nice. So some of this advice over the last half an hour or so, for doing research well and communicating it well, consists of really easy, useful, quick wins - for instance, writing a summary for the body of your writing. Some of the advice is difficult, which explains why not as many people follow it as perhaps they should - an example is flagging when you didn’t properly read things you’re citing, or when you saw flaws in some of the sources you’re citing. I’m interested in any other examples of important skills for effective research that might feel especially aversive or be easy to overlook.

Michael 2:20:24

One thing I’ll note is that a lot of things that seem difficult are probably mostly difficult at first. It’s sort of like driving: when you first start learning to drive, there are 30 different things you’ve got to pay attention to at once and it’s very difficult. So if, while you’re trying to learn about some topic, and trying to write about it clearly, and trying to reason about it well, and trying to avoid motivated reasoning, you’re also trying to remember to flag how confident you are in a given source and how important it was - that’s really hard. But as you practise each one, and as you get feedback, they become fluent. There’s this model of being unconsciously unskilled - where you’re not good at something and you don’t know it - and then consciously unskilled, and then eventually unconsciously skilled, or something; I’ve probably mangled that a bit. So a lot of it feels hard at first because there are 30 things at once. But if you keep going, gradually you’ll master each one. Probably. If you get good feedback loops. And then each one will become fluent; it’ll become second nature. Nowadays, a lot of this stuff I just do and it comes naturally, and I have all my working memory free to actually think about the topic. But then your question. Some of the things we haven’t talked about that might be harder - I don’t know if they’re much harder than things we’ve discussed; it depends on the person - one is maintaining this impact driven prioritisation mindset in the face of the whole rest of the world not doing that, and a lot of incentives and fads and taboos and norms. Just in general, if you do something that’s different to what everyone around you is doing, it feels really weird. And at first, you just probably won’t do it.
And that’s part of why the EA community is pretty useful, because you can see, ‘Oh, I’m not crazy.’ This is different, but for me, before I learned about Effective Altruism, I was already shocked by global poverty and unfairness - it was just crazy and sickening and disgusting - and I planned to be super frugal, live down at basically the line at which taxes start applying, just live on that budget, and donate all the rest. I think I probably could have. I don’t now - I’ve changed my mindset. But I didn’t know anyone else who was like that, and there’s a decent chance I just wouldn’t have followed through. I think I probably would have, but I would have felt super weird about it, because I was the one crazy guy I knew. So yeah, the prioritisation type stuff: relentlessly focusing on what’s most important, being willing to drop a lot of topics, being willing to not follow the news - because the topics the news covers mostly aren’t the most important, and also a lot of people are already paying attention, so if there were ways to help with the stuff on the news, there’s a decent chance someone would have found them. That’s not always true; I follow the news a bit, but not much. There’s a lot of times I don’t have an opinion on the latest hot button issue, because it’s not worth my time to form one - I have way too much good stuff to fill my time with. And then, within each topic: not just focusing on the things that are easiest to learn about. Not just focusing on the beliefs that are easiest to justify. Not just following the thing that is most likely to be exciting to your readers, or that pushes in favour of your thing being important. Having this balanced mindset and just chasing the truth and chasing what’s most important. In fact, that relates to the quantification thing: you’ll have a lot of people telling you you can’t put numbers on it, you’re arrogant, etc.
So you’ve got to be willing to do that, and also to do it reasonably - you definitely can be stupid with numbers. So learning that is difficult. And there’s also a thing about reductionism and analytical thinking. I’m basically a pretty big fan of reductionism, and analytical thinking. It’s not always perfect, but -

Reductionism, quantification, analytical thinking

Luca 2:23:49

Can you explain what you mean by reductionism?

Fin 2:23:53

Just boil it down for us…

Michael 2:23:56

Yeah, I mean, taking this big system, or this overall observable thingy, and trying to think: what are the pieces? And what are the drivers? And what are the components? And what are those components made of? And what are those made of? Etc. - trying to break it down closer to root level. And I see this as related to analytical thinking, which is separating out pieces of this big interconnected world. One example is when I dream - I’m pretty sure this is true - when I dream, it actually changes the orbit of Jupiter, because there’s electricity moving in my brain and a tiny little gravitational pull created by that. And that’s wild and amazing and it’s true and everything’s connected. But if I want to understand the orbit of Jupiter, or my dreams, neither of those is the most important variable for the other.


You’re tuning into Hear This Idea with Michael.


So it is connected. It is true. Some of this New Agey stuff has a big kernel of truth, but it’s just not the most useful way to think about the topic. Instead, break it down. What are the relevant factors? What are the drivers of this thing? What are the most important drivers? And be willing to box off and ignore a lot of the other variables. They’re still there - there is this web of connections - but that’s not where you should be focused. So the overarching thing: prioritisation, quantification, impact focus, chasing what’s actually true rather than what’s most exciting or appealing to your audience, reductionism, analytical thinking - all these things can go ‘too far’ or wrong or something, and there are good kernels of truth to the opposite views. But the key thing is, I think there are a lot of things - fads or buzzwords or taboos - that push people away from these approaches, and I don’t want you to be pushed away from them because of fads or buzzwords or taboos. So there is a place for noticing interconnections, there is a place for noticing the intersections of two risks, there is a place for thinking about unknown unknowns - and, like, we just can’t be sure things would be okay if x happens - and for being wary of numbers, and for systems thinking and all that sort of stuff. But a lot of the time, these are just buzzwords that stop people actually thinking. So if someone’s asked, ‘How big a deal is climate change? How big a deal is nuclear risk?’, they’ll sort of say, ‘You can’t put numbers on it. We can’t know. There are unknown unknowns. We just have to pursue all things at once.’ If they had really thought hard in a balanced, sensible way, and somehow actually landed on that conclusion, that would be okay - and it’s good that they’re looking at intersections of things - but often it’s just some sort of fad or taboo blocking their thinking. Actually think.
Sometimes use systems thinking and notice interconnections, sometimes analytical. Be willing to use whatever is actually effective.

Luca 2:26:31

Another way of maybe framing this, at least in the context of cause prioritisation, and which I think is maybe implicit in that nuclear/climate example, is whether you’re making apples to oranges comparisons, right? Because we can think of all the cascading and unknown unknown things for, let’s say, climate and nuclear. But then we also need to apply all of that to economic growth, or AI, or whatever other cause you want to consider. Adding all of these things to the score of the cause you happen to be focusing on, but not doing it for the things you’re choosing not to focus on, creates an unfair comparison. I think those arguments are compelling if you can give a really particular reason why cascading risk on this specific thing is disproportionately important. But then that comes back to exactly what you were saying: actually deeply engaging with it, rather than just saying the word - unpacking it and making the case for why it applies disproportionately here.

Michael 2:27:26

Yeah. So you’re saying there are some of these sort of buzz-wordy things that have a kernel of truth, but they just could be applied everywhere?

Luca 2:27:33

Yeah, sure. I can say, you know, there are unknown unknowns - maybe climate does pose a direct extinction risk. But I could also say the same is true of economic growth in Nigeria, right? And if I only consider it in the case of climate, but not in the case of this other cause, then that’s an unfair comparison I’m now making. You have to apply it to everything. And if you do really strongly think that cascading risk or unknown unknowns are really important in climate, I think you have to make a compelling case for why that’s disproportionately true for climate rather than for other causes. But then that involves deeply engaging with it, right?

Michael 2:28:04

Yeah. I strongly agree with that. I think for a lot of these things, the key point is not that there’s no truth to the sentence, or that the word or phrase or concept has never been useful. It’s that a word or phrase or concept that can be invoked everywhere is just not action-guiding. One thing is: it’s hard to put numbers on things, yes. But we need to do stuff; we need to decide. We are in the world; we are acting; we are moving around. So we need to decide. Unknown unknowns, cascading risks - those can apply to just about anything. They’re useful concepts, but, again, actually think. And also notice if you’re applying them to one thing rather than another because you’re already working on that thing, and so you want to justify it - or because that’s what society is telling you. So - I think climate change is a big deal. I think on the margin it’s less important to work on than AI, nuclear, bio, and a couple of other things, but it is a big deal; relative to most of the world’s problems, it’s quite a big deal. And again, I’m talking about on the margin: if no one else was working on climate, it would be screamingly important. But if everyone around you is telling you all the time how important climate change is, and it’s on the news and all that sort of stuff, then notice that by default you’re going to be pushed towards that conclusion, and you’re going to whip out all the arguments that support it. Try to notice this internal sense of, ‘Am I actually chasing the truth?’ There’s this concept of the bias blind spot: if you tell people about various cognitive biases, and then put them in a situation where a bias is activated, and then ask them, ‘Did it apply to you?’, they say, ‘No, it didn’t’ - even though they understand the bias; you can check their comprehension.
I read this in 2014, before I really understood the replication crisis, so maybe it doesn’t hold up, but this bias blind spot thing has probably held up. So often, asking someone, ‘Are you really chasing the truth here? Or are you just heading for your preset conclusions?’ won’t work. But to be honest, for a lot of people listening to this, I think triggering that question probably is helpful. For example, there was one time when I was talking to someone Rethink Priorities had offered a job to, and they were deciding whether to take the job, and so they had a call with me to help them decide. During the call, I felt a little off. And then after some quiet reflection, I thought: I think I was biased there; I misspoke. And that person didn’t take the job, so maybe they picked up on that or something. Everything I was saying was kind of true, but the set of arguments I was bringing to bear was all a little skewed towards one side; I was less ready to notice the other thing. There was just some sort of mental dance going on that wasn’t quite balanced. And I think for people like listeners to this podcast, there’s a decent chance - not that you’re bias-free and perfect or something - that triggering yourself to think about it could sometimes be enough. Helpful, at least.

Fin 2:30:45

Yeah, I also wonder if there is some kind of unfortunate asymmetry between a view which emphasises ‘reducing’ problems to factors and trying to put numbers on things, versus an approach which instead emphasises that things are connected, and that it’s often misleading to try to put numbers on so many things we’re so unsure about. Because in the case where you’re trying to make estimates, come up with mechanisms, make forecasts, you’re making more precise guesses, which can turn out to be more precisely wrong. As contrasted with making crosscutting claims that are nonetheless kind of vaguely true, and which won’t turn out to be wrong, precisely because they are vague. And so the more you ‘reduce’ problems, the more surface area you’re opening yourself up to being wrong, in a way which is asymmetric with the other approach. I don’t know how to fix that. But it seems like a problem.

Michael 2:31:48

Yeah. I felt confused for part of that question, because ‘reduce’ problems unfortunately, has two meanings.


Yeah. Sorry. Decompose.


But you mean the more you reduction-ify. Yeah, I agree with that. It’s sort of like in science: Freud’s theories are - well, arguably many of them are ‘not even wrong’ - there’s no claim they make that’s precise enough that they can’t wriggle out of it. Or if you don’t believe that about Freud, you can imagine it’s true of other things. And it’s more sort of virtuous, but also just more useful, for a scientific theory to make a relatively precise prediction, a relatively precise claim, that a) would be action-guiding, because it’s precise. The issue is not that everything’s not connected - it is connected - but I do need to decide right now: do I spend my career on AI, or nuclear, or climate, or the intersection? The intersection is an option, but it’s one of the options I have to choose between, and there are also, you know, a billion intersections I could choose - which intersection do I choose? Yeah, they’re all connected, but what’s action-guiding will be a relatively precise claim. And b) it’s more falsifiable and more learnable from. And I would also flag: we talked about that for cause areas, but it also applies to work types and stuff. So just try to notice the things that are driving you away from chasing the truth. One of them is being nice, or avoiding conflict or something. With cause areas, I think that happens. The prioritisation mindset requires you to sometimes say something is less important than other things, and that’s kind of rude. But you know, it’s probably true - it must be; they’re not all going to be equally important. So try to notice if you’re pushed a bit towards saying, ‘something something interconnection, something something systems thinking,’ because you want to play nice - you don’t want conflict. You’re like, ‘Oh, everyone’s yelling. I don’t want that. Systems thinking! You’re all important.’ But, you know - let’s focus.
To be clear, systems thinking has a place. All of these things have a place. Just don’t use them as taboos or buzzwords or something. Actually think and invoke the concepts when actually thinking.

Luca 2:33:55

Do you maybe want to give an example of where you’ve seen systems thinking, or some of the other terms you’ve noted there, used in a positive way - examples of where you do see there being a place for them?

Michael 2:34:09

Yeah, so that’s a really good question! It’s good to poke me on, like, ‘Hey, can you be virtuous and acknowledge the other side’s strengths?’ But yeah, for sure. So there’s the Global Catastrophic Risk Institute - Seth Baum is either the leader or one of the leaders of it. They have a lot of papers about the intersections of things like AI and nuclear, or bio and AI, or something like that: various key variables in the world, how do they interact with each other? And what are the ways that trying to reduce one risk - as in make one of the risks lower, rather than decompose it - will be counterproductive for another, or have a bonus benefit for another? Or what are the ways to use one of them to lower one of the other risks, things like that. I haven’t read all the work very closely; I don’t know if they’re a sort of paragon of doing this exactly right. Decently often my take would be that their papers are a bit too laundry-listy, in the way that my early work was as well, and not enough ‘chase the bottom line quickly’ - it’s more just mapping the landscape. Everything in the laundry list - it was good that they flagged it, but there should be another step where they spend another five hours to say which items are the most important or something. And some of it has a bit of a flavour of ‘all these things matter a bit.’ But mostly I think the work is pretty useful. Like, you know, the world isn’t just composed of this AI thing that sits in its own bucket and doesn’t touch anything else. Most of these long term risk things are just powerful technologies, or key components of the world. One of the key factors is the US government, and the US government obviously connects to a lot of things. And the Chinese government, and the EU, and the academic community or something. So the key drivers will often be key variables in the world that affect a lot of stuff.
It’s not like the US government is only relevant to AI or something. So yeah, it is good to look at intersections - just try to do it in a way that really is oriented towards finding what’s important, finding what’s true, and ultimately prioritising.

Choosing a cause area to focus on

Luca 2:36:15

Okay, so maybe moving on then, and again maybe framing this around how listeners can take action. There’s maybe a concrete question here of: if I want to be doing impactful research, and I now want to pick the research question or the cause that I want to be working on, what kind of questions should I be asking in order to be doing that good work that we’ve been spending the last few hours describing?

Michael 2:36:43

Yeah, okay. So there are two or three main angles I’ll take on this. One is cause or topic area, another is type of work, and another is the type of organisation you work in. This isn’t the only framing, but it’s one useful framing: which of each of those things should you focus on? And my slightly odd hot take - a kind of inverse hot take - is that, once you filter quite a bit for the sort of thing that a lot of longtermist people are talking about, or the sort of thing that 80,000 Hours recommends, within that the marginal impact of different areas probably doesn’t differ hugely, as far as we can tell, ex ante, for the average person. And those caveats are important. And also, personal fit, and testing fit, and building career capital might be more important. So to flesh that out: I don’t think everything is equally important. Like pet shelters - don’t work on pet shelters, please. They’re slightly useful, and it’s nice and I’m sort of personally glad they exist or whatever, but don’t focus there. But once you’ve narrowed down to something like AI, nuclear, bio risk, improving governmental policy making so that it takes risks into account, working on forecasting methods, cause prioritisation research, building the community working on this - a set of, like, 10 things along those lines; there are probably some others - once you’ve filtered down hard from the billion things in the world you could focus on to those 10, I think the impacts on the margin of a new person working on these are similar. And I think there are a few things driving that. I’m also not super confident about this. A reason this is a hot take is that a lot of people tell you ‘No, it’s AI,’ or ‘No, it’s climate,’ or ‘No, it’s bio,’ or something. And I’m sort of like, ‘Eh, you know.’ This kind of sounds like I’m doing the thing of, ‘Hey, let’s all get along,’ but I think there are some drivers of this view that make sense. One is: it is really hard to know stuff.
So probably one of them is way more important. Like, there’s a decent chance that if we knew way more about AI, we would be confident AI is just going to go well whether we work on it or not. That’s pretty plausible, but we can’t know that. And it’s also pretty plausible that if we knew more, we’d know that AI is gonna kill us all unless we work extremely hard on it - but we can’t know that either. So in reality, a way more informed observer probably would have much sharper cause prioritisation than I do, and would be much more relentlessly focused than I am. So I’m in between the people who say, ‘we can’t know anything, it’s hard to quantify, therefore let’s work on whatever,’ and an extremely confident person or something. We can filter down to these 10, but within those, there’s not that big a difference. I do still think there are big differences in absolute terms. If no one with this strategic, backward chaining, theory of change mindset - relentlessly focused on doing what’s best for the world, not just for them, not just fads; reasoning carefully, forecasting, quantifying - if no one with that kind of mindset was working on either AI or nuclear, I would really want them on AI. I think AI is a bigger deal in absolute terms. But the community has responded appropriately - not necessarily perfectly, but the community has sort of allocated people. The EA community is paying attention to what’s most important on the margin, and when there’s an imbalance, they somewhat correct it. So it’s kind of like the stock market: probably some stocks are overpriced or underpriced, and if you knew more you’d be able to tell, but it’s pretty hard to beat the market on any given day - partly because it’s hard to know stuff, and partly because if you could beat the market, someone else might have done it already.
We are nowhere near that efficient; we’re not the same sort of liquid market with something like 7,500 people, and job switching is way harder than selling a stock and buying a new one - you’ve got to retrain and stuff like that - but we are a little bit of an efficient market. So if there’s a wild imbalance, we’ll probably fix it. Now, I talked about that for cause prioritisation, but it also applies to types of work. If there was way more need for operations people than research people, or way more need for grantmakers than both of those, then the community’s to some extent gonna respond. This is not perfect, but yeah - to a first approximation.

Fin 2:40:40

So there’s one analogy to whatever the equivalent of liquidity is. That’s a reason to expect the ‘EA job market’ to be a little less than perfectly efficient. Any other reasons to expect it to be directionally biased against certain jobs, or otherwise inefficient?

Michael 2:41:00

Yeah, so I guess you could unpack the liquidity thing. One part is just that the retraining time required means transitions are costly; the transition cost means there’s going to be friction in the system. That will probably correct itself eventually - if that was the whole story, we should expect the market to just slowly adjust. And so you could have a sort of ‘arbitrage’ in time, where you jump on an imbalance faster, which means you correct it faster, and that makes the world better, because you’re catching the window of opportunity other people would have missed. But there’s another thing which means imbalances won’t resolve even given time, which is that some jobs are just harder to find a person for - or harder for us to find a person for. So one example: a particular kind of arty person might be harder for our community to find than some types of technical people, because of who we’re drawing on.

Fin 2:41:44

Did you say our key person?

Michael 2:41:48

Arty - artistic or something. I don’t know if I actually buy that example, but that sort of thing could plausibly happen. This is something I’ve been thinking about lately. Two types I would guess are more important on the margin: one is entrepreneur types who will do a really good job aimed at really good stuff. The community has a decent number of really effective doer types, and a decent number of really effective thinker types. But getting someone who’s enough of both - who’ll do, and what they do is good - is harder. There are a lot of doer types who have a plan and I’m like, ‘Oh, please don’t,’ or I’m just sort of like, ‘meh, okay, whatever.’ Someone who has a strong, good vision, and can execute hugely, and can handle the psychological horror of just doing your own thing - for some people it’s thrilling, but for a lot of people it’s a nightmare - those people I think are harder to find. So that’s an example of a friction in the system that won’t just resolve. Well, maybe it can, because we can work way harder to resolve it. Another example is, I think, people who can do China focused longtermist research really cautiously and mindfully, with Chinese language skills. My guess is that’s a pretty rare type of person, and we could in theory do with a bunch of them - but there are a lot of types of people who would almost make things worse. So that’s one where I expect that if you are the kind of person who could do that, there’s a good chance you should. But for most things, you know, we can find a decent number - we need more of everything. It’s not that I’m saying we can easily get researchers or operations people; it’s just kind of balanced or something.

How relevant is your academic background?

Luca 2:43:20

Yeah, maybe unpacking what seems like another important assumption you made there, right at the beginning: this is for the ‘average’ person. I can imagine that for lots of listeners, there might be an ongoing question of how much their background or existing skill set should influence this decision. To some degree you’ve kind of already addressed this - the marginal impact within this limited set of things isn’t that different. But can you speak a bit more concretely about how to navigate that space, especially early in your career, when you might not know whether certain skills and things you have should push you in one direction, or might overthink it?

Michael 2:43:58

Yeah. Okay. So stepping back briefly: the thing I’ve justified so far is that, on the margin, the impact for a person I don’t know anything about is very roughly equal between various org types, work types and cause areas - within a filtered set. And typically the filtered set will be something like what you can find a decent number of senior longtermists seeming pretty excited about. So I’m asking you to defer there, sort of - but it’s a proxy or something. An additional claim that I think strengthens this is that the impact on the margin for the generic person I don’t know anything about isn’t the key thing you should think about; you are a specific person and you know stuff about you. Early on you should be focused on testing fit and building career capital, and then later you should focus on the thing you’re a good fit for. And so that leads to the question of passion and backgrounds and stuff like that. The short version of my thoughts on this: passion really matters, but don’t be overconfident about what you would or wouldn’t be passionate about - actually try things and check. And backgrounds are helpful, but people often focus too much on them. To unpack that, focusing on the passion thing first: basically, my stance is just what 80,000 Hours has written, as far as I remember. They nailed it, so just plus one to them, but I’ll say it anyway. You want to play the long game; probably most of your impact comes mid or late career - empirically, that’s what tends to happen. One reason that might be false is if we have really short AI timelines, or nuclear war happens pretty soon or something. So what’s happened with careers historically doesn’t necessarily extend into the future, for that really big scary reason. But generally, mid or late career is where most of the impact happens, for various reasons.
And so you want to aim for a track where you can rise up the relevant ladder in terms of skills and credentials and promotions and all that, so it pays off really big later. And that means it’s pretty important to find the right track to be on - and a track that you’ll be happy on, that you’ll be energetically pursuing each day or something. That doesn’t mean every day you’re happy, or that work doesn’t feel hard, or that you won’t get tired; it means you can see yourself doing it for five years without getting burnt out. But that doesn’t mean the thing you’re passionate about right now. You can easily think you’d be passionate about something because you’ve done some version of it, but after two years you’d hate it. And you can easily think you wouldn’t be passionate about something without having tried it, or because you’ve tried something like it but not quite right. One example is talking to people. I think a lot of EAs think jobs heavy on talking to people are not going to be good for them because they’re pretty introverted or something. And I’m one example.


As we hit the three hour mark…


That’s my point! In the wild, among normal people, I’m fairly introverted, and pretty socially awkward and stuff like that, and so it’d be pretty reasonable to think that I shouldn’t do a job heavy on talking to people. But if I’m talking to people about something that I think is really important, really interesting, and especially if they’re kind of part of an intellectual community with me, and they share my goals, and helping them is helping the world - that’s great. I love it. I do loads of it. I go around all over the place. So maybe you’ve tried something in one environment and you think it doesn’t suit you, so you’re like, ‘I gotta rule this out’, or you’ve tried something in another environment and you think it does suit you, so you’re like, ‘I’m gonna do this for sure.’ Yeah, basically, just actually check; try to empirically test what you would be passionate about, and don’t rule things out prematurely. But passion ultimately does matter. Don’t do a job you hate.

Luca 2:47:25

Yeah. There’s this snowballing example I can think of in my mind where in the first year of uni, you might be doing work experience, or you happen to choose the research topic of let’s say… I’m trying to think of something.

Michael 2:47:39

Any topic. Try to think of any topic.

Fin 2:47:40

The history of birds.

Luca 2:47:43

Okay, yeah: the history of birds. And then, because now, considering all the research I’ve done, it seems that I have a comparative advantage in the history of birds, by the time it comes around to my second year internship, I will do an internship in the history of birds. And then by the time third year comes around and I need to apply for jobs, clearly my comparative advantage is in the history of birds, and it keeps on going and going, because every time I evaluate locally, this seems to be the thing that I’m most skilled at. But I guess the point that you’re making there is the obvious one of ‘explore and exploit,’ right? And I should be doing a bunch more exploring, in order to get information there.

Michael 2:48:18

Yeah. So a lot of people in the EA community are, you know, 24 or something. And to them, they’ve been working on something for a very long time - like, you know, their whole adult life - but that’s not very long in the scheme of things. So don’t assume that this background is a big cost you’ve paid that makes you a really amazing expert on something, and that you couldn’t build the same on something else. It might also be that you just don’t need a background for some topics. So a lot of people think that they can’t apply to the team that I’m on, because they don’t have a background in an area super relevant to AI governance. Sometimes they actually do and they just don’t realise it. Like law is an area that’s relevant, also machine learning is an area that’s relevant, also cybersecurity - various things that aren’t super obviously relevant, that actually do matter. Another thing is, you might not need the background. We’re mostly after a type of thinking, and a type of goal, and a type of writing, and things like that, and your factual knowledge matters a bit less. When you’re making career choices - both as a hiring manager deciding who to hire, and when you’re choosing what career to do - try to focus your choice on the variables that are pretty hard to change, not the ones that are very flexible. And knowledge is unusually flexible; you can just learn new things. Whereas things like the way you think - that’s moderately flexible; it’s changeable, but it’s hard. And then something like what you would be passionate about after three months of trying to find a good version of it and trying to get passionate about it - that’s even harder to change. Like if you’ve tried it for three months in a bunch of ways and you’ve manipulated all the variables, it might just be that for some reason, that’s just not for you.

Michael’s origin story

Fin 2:49:58

Yeah. Now might be a good time to learn about your own background, because I think it embodies some of the things you’re talking about. So for instance, what did you study at undergrad?

Michael 2:50:10

So, yeah, I did a psychology degree, which just doesn’t come up in my work! Like a little bit here and there. I mean, I’ve mentioned a few psychology things, so that’s something. It came up on this podcast. But yeah, I did a psychology undergrad, and then I did a fourth year - in Australia it’s called honours - and wrote a paper on cognitive and political psychology. It went pretty well. I was doing stand up comedy at the time as well.


Very standard career track.


Yeah, of course - a standard route to national security and research management. So: psychology undergrad, some stand up comedy, some poetry and short story competitions, and some music and stuff, then two years as a high school teacher, and then I just learned about Effective Altruism during this. And was like, ‘Oh, okay, no, this makes sense.’ An interesting thing as well - in my opinion, interesting - is that that story ended two and a half years ago. Under three years ago, my key day-to-day concern was making sure this particular student in my year seven class didn’t keep swearing while I was trying to teach basic fraction stuff.


Do you want to call them out on the podcast?


The first draft of the sentence in my head had their name and I was like, ‘ah that’s probably unethical.’ I assume they’re not in the listening audience, but it seems better to avoid.


You won’t know unless you name them!


Also, another reason not to call out the example is there were many year seven students I wanted to stop swearing while I was teaching fractions; it would be unfair to single one out. But yeah, that was recent. I do think my trajectory is notably faster than average. I think it is at least somewhat faster than average even for people who are now professionally working in longtermism and have sort of ‘made it,’ but not way faster. I mean, arguably you two have had a faster trajectory, which we don’t necessarily need to go into here. But yeah, other people have gone faster, other people have gone only a little less fast. It’s not doable for everyone. Again, I sort of want an interesting, kind of inspiring message which isn’t ‘it’s gonna work out for you.’ It’s rather ‘take a bet and see; take a bunch of bets. And there’s a pretty good chance that one of them’s gonna work out for you.’


Do you want to talk about some of the bets that you took then?


Yeah. Okay. Is it useful for me to say how I learned about EA? I don’t know if it’s useful, but it’s possibly interesting.

Fin 2:52:33

Yeah, I guess I don’t actually know the answer to the question, so I can’t answer that question. But, how did you learn about EA?

Michael 2:52:37

So there were a few parallel tracks. I got horrified by injustice, and a sense of the world being wrong, and people starving, and all that sort of thing.

Luca 2:52:54

What did that look like? Was that seeing specific case studies? Was it reading? Was it philosophy? What does that actually entail?

Michael 2:53:02

I think I probably was aware that I was pretty privileged, and I was pretty happy. I was aware that there are a lot of people who aren’t like that. And there’s the standard World Vision ads and charity ads and stuff like that. And they emotionally impacted me in some ways, sometimes. I think usually my emotions are pretty flat, and I’m just cognitively aware that everything’s horrible. But every so often the emotions activate and remind me or something. But mostly, it was just noticing I could do something. Like, I think a lot of people see it, it’s awful, then they donate the $2 to World Vision or they do a 40 Hour Famine or something like that. And I was sort of like, ‘No, I mean, I’ll be alive until I’m 80 or something. I’ll be working for 60 of those years. I want to make a good income; I can make a big difference, and I should.’ And I decided I was gonna do this earning to give thing, which I hadn’t heard of, but I was gonna live on 20,000 Australian dollars per year and donate the other 80,000 or something. Yeah, and then I learned about things like GiveWell, Giving What We Can, Peter Singer, ‘Famine, Affluence, and Morality’, the drowning child, utilitarianism, etc., but I didn’t learn about Effective Altruism, because it was 2010 or something, so I think the term didn’t quite exist. I think I was slightly too early. I read all the ‘About Us’ pages of the websites. I think if I’d seen ‘Effective Altruism: using evidence and reason to do the most good possible,’ I would have been in, but they didn’t quite have the slogan yet, which is really annoying. So then I wandered about doing psychology and stand up. There are definitely areas where psychology is useful - I’m meandering, but on backgrounds: you can also have a background that is relevant to important work, but it’s still not the one you should use. So there are jobs that use psychology that are really important. But there are a lot of variables about me. 
And the thing I studied in undergrad is not the only variable - there are many. And in my case in particular, there’s a bunch of things that mean I’m best suited for research management. And one of the best places to do that is in AI governance, because that’s one of the areas where we need a lot of people, rather than one of the areas where it’s really important to have just a few people. But anyway, yeah, I learned a bit about AI - TED talks with Nick Bostrom and stuff like that - and then eventually, when I was a first year teacher - these were all lingering in the background as interests in my mind - I realised I actually finally do have money to give. And also, I was no longer sure that teaching is a super socially impactful thing, because a lot of the time I was just helping my students get up the exam leaderboards by chasing various proxies, and also the evidence on critical thinking training is relatively weak. I imagine there are ways to do really good critical thinking training, but a lot of the methods that have been tried just don’t empirically seem to work very well. And that was one of the things I was banking on and was excited about teaching for. So I was sort of feeling down on my main career, and I was also like, I have money in the bank; I can make myself feel better if I start donating. So I dived deep into GiveWell stuff, and also tried to read what 80,000 Hours said about teaching. They were like, ‘Nah.’ And I was like, your conclusion seems kind of wrong to me, but you do seem smart, so I’m gonna read your methodology page and go deep. And then they convinced me.

Fin 2:56:06

Fin here. Michael realised after the recording that he may have oversimplified both his and 80,000 Hours’ perspectives on teaching in this section. 80,000 Hours does not, in fact, just say, ‘Nah,’ but rather, ‘not usually recommended.’ And their page on teaching agrees with some arguments for teaching being valuable, but it highlights that some other things seem substantially more valuable, especially on the margin, given how many altruistically motivated people go into teaching anyway. And we’ll link to that page in the show notes and on the website. Okay, back to the conversation.

Michael 2:56:44

And then yeah, in terms of the bets I took: I learned about Effective Altruism in late 2018. The bets I took started in 2019 - my second year of teaching. I knew I had to teach until the end of the second year, roughly, because I’d sort of committed to a two year programme. I could have pulled out, but it would’ve been a pretty extreme move relative to what I was socially okay with then, because I was pretty new to it, and I was swimming in my normal environment. It felt like an extreme move, but really it wouldn’t have been. Yeah, so I applied for like 20 jobs or something, of a super wide range, and was willing not to screen myself out based on the roles seeming, at first glance, to not make sense for me. And the two that I ultimately got - I think I probably mentioned, a billion years ago in this podcast, how the two offers I got were the things I specifically would have expected not to get. Or maybe I mentioned it to someone yesterday.

Fin 2:57:41

You did mention it earlier.

Michael 2:57:43

I did. I talked about very similar things on Thursday at a talk, so there’s deja vu. The two positions I ended up being offered: one was an operations role at an Effective Altruism organisation. And I obviously seemed like I should be a researcher type person, so operations was really a punt. And ultimately I did go down a researcher-y route, and I think it makes sense that I went down a researcher-y route, but I also actually do a moderate amount of operations. So I’m now like a research manager, and I’m sort of co-leading a team and helping with a lot of department wide stuff. For example, I project managed a really big hiring round, with eight people we ended up hiring. So there’s a lot of operations-y things. I’ve helped design a lot of the onboarding processes. So: being willing to take a punt at this quite weird path, and then learning that I actually got an offer - and I would have accepted it if I didn’t get the other offer. It turned out once it was on the table I was like, actually, you know, this is something that I should do if my alternative is staying as a teacher. At the beginning, when I was just applying, I would have put less than a 3% chance that I would accept if offered. But once I thought about it more, I decided I would accept. So it was useful to get that evidence. It was useful to have the option on the table in expectation. And it was useful to get the evidence that has informed my later career stuff of trying to find an intersection between research and operations and building systems and things like that. And then the other thing I got offered was advertised as basically a maths and computer science research and writing role. You two will be aware that that is not me. Like, psychology undergrad - I can think decently quantitatively, I can factorise things and think crisply in a way that’s somewhat associated with technical people, but I’m definitely not myself very much a technical person. 
I did not have a maths background or computer science background, but I just again took a punt.


Was this at an EA -


This was at an organisation called Convergence Analysis. There was a smaller punt that I took first, which was: there’s these Effective Altruism Global conferences, and there isn’t an Effective Altruism Global in Australia. I was living in Australia at the time. There is an EAGx in Australia, which is a smaller one, like TEDx. And I was really committed to doing EA stuff, but I was still pretty used to normal world things, and I was trying to be really frugal, and not making big crazy moves and stuff, and everyone around me was doing normal stuff. And I asked this group organiser, ‘Could it make sense to fly all the way to London for two days?’ Because I was a teacher, so I couldn’t take much time off. And I was in a very intense teaching programme. So I would have flown just for basically the two days and had a jetlagged crazy conference experience.


This is for the EAG in London?


EAG in London, 2019. ‘Would it make sense for me to go all the way to Sydney to do the EAGx? Would it make sense for me to go all the way to London? Surely it wouldn’t make sense. Which one should I do?’ I didn’t even ask about doing both. And this person was like, ‘Yes. Probably actually both make sense, even though they’re big, and probably actually do both.’ And they were really right to say that. And I didn’t know what my plan was there. So this is an example of forward chaining rather than backward chaining. I don’t know if I’ve explained forward chaining and backward chaining?


No, I don’t think you have.


So this is connected to the theory of change thing. So this links back to one of the many earlier threads. Backward chaining is thinking about where you ultimately want the world to be - I think I did explain the terms - so thinking about where you want to end up and working backwards to what you can do now. Forward chaining is: what opportunities are available to you now, and what can you do with them? I think the world as a whole should do more backward chaining, but sometimes forward chaining is pretty good. And at this point I was like, I don’t really know what going to London will result in, but I can sketch out 20 things it might result in that seem good, and there might be a bunch of other things like that. And what happened is, on the conference app, this job was advertised. And it wasn’t advertised very widely, and it wasn’t on an 80k job board, but it was on this conference app. And so the key thing from going to the conference was having access to this conference app and applying for this crazy job that did not make sense for me - it did not fit me - and just taking a punt on it. And then I did well in the work test, and they were kind of confused or something, but they were like, ‘Okay, I guess we could try to mould the role around you. It does seem like it makes sense for you to get it even though this isn’t really what the job is meant to be’ or something. And then that was my first EA research-y job, and that was a really important stepping stone for me.


This was the Convergence job?


Yeah - Convergence Analysis. It’s a longtermist strategy organisation. It was just the two co-founders, and I was their first employee. So I’d got a permanent teaching job offer - and permanency really is permanency in teaching; the unions are pretty strong in Australia. So yeah, I quit that thing. Two years into my career I got offered permanency. I quit that just before getting an offer as well; I actually hadn’t got an offer yet, but I’d been forecasting whether I would get to the next stage of each of my job opportunities, and I’d been doing that for a while and I knew I was relatively calibrated, and I now had like five things in the pipeline that were decently far along. And I was like, ‘Oh, there’s only like a 3% chance I don’t get anything, or something like that. I’m going to quit teaching. And worst case, I have some savings so I can apply for more things for another three months.’ And so then, yeah, I took a job at an organisation with just two co-founders that required me to fly to England for a one month work trial, not yet fully hired. I was taking this punt and seeing what happens, and I’m really glad I did. I’m not saying everyone should do that. Nowadays the community is better; there are way better pipelines in. I had to randomly scramble to find something. Now there’s CERI - the Cambridge Existential Risk Initiative. Things like that just didn’t exist. There weren’t courses like the AGI Safety Fundamentals course; there weren’t as many grant opportunities and stuff. Don’t do what I did, but maybe do have the level of willingness to take bets.


Yes, as a kind of existence proof for how this stuff can pan out. That’s very useful.


Yeah. Nowadays there’s much more of a safety net for people, I think, and much better ways in, but do have the mindset of not ruling things out early; do have the mindset of being willing to consider big moves. Still consider your own health and wellbeing and stuff. But notice that maybe you have four layers of safety nets, and so if this bet doesn’t go well, that’s still actually okay.

Fin 3:03:51

And even if, as you say, there are more safety nets - there are more obvious opportunities for fellowships and so on - it still requires a certain amount of proactivity on the part of people applying. You can’t just, you know, put ‘open for work’ on your LinkedIn and hope that you get an email two weeks later. There are various levels of being proactive, but it’s very rare that there’s a cost to being more proactive than you might be by default.

Michael 3:04:16

Yeah. I still think nowadays you should apply for as many things as I applied for, but they’ll probably just be better, safer things, and it’s maybe easier to get funding for your phase of applying and stuff, rather than having to squeeze it in. Not everyone will get that, but in some cases you need it, and then you maybe can get it. And yeah, in terms of 80,000 Hours - again, big fan - they have this idea along the lines of ‘cut off downside risk, and then be ambitious’ or something. So make sure that you’d be safe if things go wrong - you’re not going to break things, you’re not gonna make the world worse, and also you’re gonna be okay; you have a financial safety net or something. But then once you’ve done that, go hard and dream big and take big shots and be willing to take big risks, if the worst case is just kind of flat rather than bad.

Luca 3:05:06

The other Michael Aird origin story I want to get into is your relationship with the EA Forum. I guess maybe the context for listeners is, you post a lot on the forum. I’m curious how you discovered that, why you seem to have gelled with it so well, and any advice you would give other people in terms of putting your work out there, writing on things? Yeah. Go off.

Michael 3:05:36

Go off king. On a practical level, the origin story is: I was in Australia. And specifically Perth, Western Australia, which by some metrics is actually the most isolated city in the world. Like, there are reasonable metrics by which this is true. And there’s not much of an EA community there. There were sort of four active people including me or something. There were other people who were less active, and maybe it’s different now. So I didn’t have many opportunities to just, you know, be talent scouted by saying intelligent things to people or something. I had to find my way in virtually or whatever. And I was also coming from a wacky background for what I wanted to do - like psychology and stand up and stuff. I couldn’t just have a good CV and great grades in the relevant thing. A lot of people luckily have a relevant background, or they learn about EA earlier in their journeys than I did and then they build themselves a lot of background. But I was like, ‘No, I want to take a sharp left turn and get there fast, and I think I am smart and good at this. I’ve got to visibly show that.’ And then also Convergence Analysis wanted to be churning out posts fairly quickly. So the model there was that one of the co-founders had a bunch of ideas, but wanted someone to help write them, basically. And in practice - as I was saying, the role was somewhat moulded around me - I ended up doing a lot of my own ideas, and a lot of things were kind of in between. But yeah, that was part of the idea. So they wanted to churn out a lot of stuff, and they wanted to churn it out to the EA Forum, with an EA target audience in mind, not aiming for advocacy or looking incredible to most of the world or something. There’s a place for that but that wasn’t what they were doing. So that started me thinking: I have lots of post ideas, I want to write lots of posts, I’ve got to do it for Convergence. And then it just sort of went well. 
And also another thing is just: I like attention and I like positive feedback and stuff. You know, I’m enjoying the podcast for that reason - people are looking at me. I know there’ll be other people listening digitally at some point. I’m confident. I’m calibrated. And I did like stand up, and part of why I was a teacher was also, you know, captive audience. It’s not that hard to be the funny teacher; that’s another perk. Like when you do stand up, it’s hard to be the funny stand up. It’s not that hard to be the funny teacher. So I just sort of channelled my sort of social media addiction tendencies, that type of driver. I just tried to find a community and an outlet that would mean day to day I’m kind of addicted but in a useful way, and continually feeling like I need to write more and stuff like that.

Luca 3:08:15

You were big into Wikipedia before that, right?

Michael 3:08:15

Yeah. So I think in general, I like writing. I like ideas. I like visibly having done things. I like scores. So on Wikipedia you see how many edits you’ve got, and there are also these things called barnstars. The editing community is really weird. It’s quite funny, and it’s overall mostly nice. And sometimes a random person sees that you fixed a bunch of commas or something, and then pops a barnstar on your talk page. And almost no one in the world knows this happened, but I got this barnstar!

Fin 3:08:52

Is Wikipedia editing still potentially a very valuable use of someone’s time?

Michael 3:08:58

Well, when I was doing it it wasn’t because it was a valuable use of my time. I was doing it before I got into Effective Altruism. There was a handover. There was a transitional phase where I was still doing some of it. When I originally started doing it it was just, I like reading stuff, and whenever I see a misplaced comma, I want to fix it.

Fin 3:09:16

So in my mind it’s like picking topics you think are important, and improving the Wikipedia pages, for instance?

Michael 3:09:22

Yeah, I think it’s not very useful, but sometimes it is. One thing that’s relatively easy - there’s a good post by Darius Meissner on this, so if people are pretty interested, I would encourage them to read that. I seem to recall it has a good summary, hopefully. Basically, I think if you’re learning about something anyway via Wikipedia, there’s a decent chance that it’s worthwhile for you to edit it. It’s also possible that if you’ve tried to test your fit in a bunch of other ways, and they aren’t working out yet or something, and you’re really the sort of person who enjoys Wikipedia editing, then maybe you should actively pursue it as a way of improving humanity’s knowledge on some topic. But mostly I don’t think it’s the best use of time for most people.

Luca 3:10:01

There is a personal anecdote I remember you telling about Wikipedia and the music industry. I can’t remember the singer, but I think that’s great content. But you’re obviously welcome to keep it and move on.

Michael 3:10:14

Yeah. I was gonna say it until Fin cut me off to be more focused. I guess we have a conflict of vision. There’s a way on Wikipedia you can see which pages you’ve edited most, and a lot of them, for me, are P!NK - like the singer P!NK - or the band ‘Cute is What We Aim For.’ I’m not even that big a fan of P!NK. Also Bruno Mars’ songwriting collective - not Bruno Mars himself. So yeah, it wasn’t impact focused. I just have a pretty systematising mindset. It’s good that EA captured me and pointed me in the right direction, because I was doing random stuff, but really intensely, very focused.


Everything very fast. Brownian motion.


Not in negative directions just like weird stuff. But the EA Forum has karma. So it’s the same sort of thing. The points you get, at like 7am each morning, you can see how many you got. And I was pretty addicted to that. It was useful fuel for a bit, and then it got too much and I had to reorient my approach to it. But I think it’s less important nowadays. There’s a bunch of other ways to get in, apart from independent writing on the forum. But it’s sometimes a good way in, sometimes a good way to be sort of spotted or whatever or make connections or have people know that you’ve done interesting thinking on topics so they can reach out to you if they want help on something. And also a key thing I believe is: if you’re writing stuff anyway, and it’s relevant to Effective Altruism, and it’s not bad to publish, either in general or on the forum for information hazards or public relations type reasons, then you really likely should take the tiny effort required to put it on the forum, or take a moderate effort to write a version for the forum. So if you’ve written a paper for a journal, make a version that’s more focused on just what matters most and is more accessible.

Fin 3:12:16

For what it’s worth, in my own experience, I remember coming out of university, being really excited about figuring out ways to get involved with this Effective Altruism stuff, not really knowing anything about anything. And so I had some kind of pandemic free time and used it to, for instance, write a couple of book summaries of EA books which hadn’t been summarised yet, or just write up some thoughts on a topic which I hadn’t really seen anyone write about, and it just seemed overdetermined that it was useful, because it helped motivate me to actually read these books, and digest them. I got feedback when I posted them on the forum from what I missed, or what was good. Also, now these artefacts exist which are useful for other people. Also now these artefacts exist which are useful for me, in the sense that it helped me kind of signal that I was interested in this stuff. So yeah, if you have the free time, and there’s no way more salient options, just writing summaries of things and putting them on the forum seems like a really good thing to do.

Michael 3:13:19

Yeah, I’ve got a fairly bland post called ‘Reasons for and against posting on the EA forum.’ I think it’s got a summary. It’s pretty boring, but feel free to read it if listeners are interested. But yeah I think it’s often a good move. I do think there’s some issues or some of the benefits are smaller than you might expect. The feedback benefit, I think, is actually pretty small, in my experience, and in particular way smaller than if you send something to people as a doc, like as a Google Doc.


You can do both!


But that’s the key thing. I think it’s similar to applying for jobs and applying for grants. With a lot of these, you don’t need to decide, ‘is this amazing?’ If it’s really cheap, maybe just do it. So if you’ve written something anyway, if you’ve thought about something anyway, write it up. And then I think for me the biggest benefits were probably pushing me to formulate my thoughts and keeping me accountable and stuff. And then also maybe writing for the forum and getting that public attention, which feels pretty nice. And in the process I have a Google Doc, and then I can send that to people and get actually lots of feedback and stuff like that. And putting it on the forum does have some distinct benefits - like literally the fact that it’s on the forum, not just the accountability type thing. Like, there was a relatively important report that was going to inform an EA organisation’s relatively major actions, and I was asked to give a bunch of feedback on it because I’d written on the topic, and there were only like four people who had written on the topic. So because it was public knowledge that I’d done that, I was able to have this other bit of impact pretty early, when my other options for impact weren’t that big, and I could spend like four hours giving detailed feedback on this thing. So it makes you poke your head up and makes you known as a useful person. Also, I can look back at my posts. Like, earlier today I stumbled upon one of my posts, and that informed something I said here, because I’d forgotten this model I had a while ago.

Luca 3:15:07

Just to quickly add - there’s this really good line of: if you consider how many hours you spend producing information or producing research and stuff, it would be insane not to spend an hour strategically thinking about disseminating it. Like, you’re so close to the finish line there, right? Just taking an hour either sharing the doc or posting it wherever - it’s really low hanging fruit. I say this as somebody who has not posted on the forum.


At all?


I think outside of podcast posts, no. I do the sharing Google Docs thing, but I haven’t yet posted.

Quick wins for aspiring researchers

Fin 3:15:38

Speaking of low hanging fruit, speaking of quick wins: I’m curious if you have any other examples of just short time commitment things which can be incredibly useful, for yourself or just for the world? In the mould of quickly posting things on the forum which you’ve already written?

Michael 3:15:55

Yeah, so I will answer this question, but I will note for the listening public that Fin made a list of things I do that are useful, so I’m not bragging here — I’m just facilitating his wishes. So yeah, I do think I do a bunch of this. One thing I’ll flag up front is I think I’m unusually good at this, which makes it not obvious other people should do it, or even if I should do it. I think it’s also not super obvious I should do it. But overall, I do think I should do it, and I think some other people should do it. The sorts of things I do are things like collections — that’s one simple thing. This is me locally optimising a lot of the time; this is me forward chaining. I notice something that will be helpful for me, or for one person I’m talking to. One example was, I met someone at a conference, and they had a history-type background and wanted to do Effective Altruism cause prioritisation research. And that seemed interesting, so I was like, ‘Oh, what would be interesting history topics?’ And so I made a really minimum viable product list of history things, and that seemed kind of cool and maybe useful for other people, so I turned it into what’s called a ‘short form’ on the forum, where it’s basically just allowed to be crap — it’s a lower bar. So I put it on short form. And people found that helpful, so then I turned it into a post. And at each stage, I put slightly more time in, but each stage was a couple of hours or something. I think in total I spent something like three hours on this post, which is called ‘Some history topics that might be very valuable’ or something. And then I don’t know if it’s led to things happening in the world, but it had good reception; it has a bunch of karma, and 80,000 Hours linked to it and drew on some of its things in a list they made of questions that got a lot of attention. So this is me making minimum viable products, scaling up iteratively, responding to known market needs.
And one thing is, if I or someone else I’m talking to would find something useful, then there’s a decent chance someone else would, too. And the more people I observe would find it useful, the more I invest time, and you can gradually scale these things up. So I have a lot of things that have gone through this pipeline of: I made it just for me, or for someone else, in somewhere between five minutes and two hours, and then more and more people found it useful. Or — similar to what I said a while ago about getting input from yourself a week later and seeing if you still believe something, or if something’s still clear to you — in a similar way, if a month later I’m like, ‘Oh, this is useful to me again,’ or ‘This still seems useful to me; I’m not just excited right now’ — because sometimes I can get overexcited about something — then that’s a signal to go in hard on it. A related principle I read in some book on management was the idea of ‘say it twice, write it down’: if in your organisation the same question is asked twice, then write the answer down somewhere and put it somewhere people can find it. Maybe they’ll ask you again, but you won’t have to write it again — you can just link them to the thing. So now — not a lot of my time, but maybe on average multiple times a day — I send someone a set of thoughts I’ve already collected earlier. And that means I don’t have to write it out again.

Fin 3:19:09

That’s a pretty good example, and I like the fact that it’s just a public good: other people can now read these lists and do useful things with them. Also, it’s just useful for you — if you’re going to repeat the same information, then it in fact saves you time to have this list, which you can refer back to. So I like that, and that is a good example.

Michael 3:19:26

It is much faster. Like, a lot of people want career advice in Effective Altruism; a lot of people want help with various things, and I just do not have time to write the same things many, many times. But I do have templates, so I can send the same links. So more than a year ago, I wrote a post. After various conversations, I kept making Google Docs for each conversation, and then I decided to make one master Google Doc that had the things I most often mentioned, and then that turned into a post. And that was a post that I linked a lot of people to for a long time. And then that turned into Rethink Priorities’ rejection email for candidates — it kind of got adapted for that. So they’d be like, ‘Hey, we rejected you. But if you’re interested in these roles, here are all the ways in’ — to help them out, because we do believe in their potential and we want to help them, and all that. And then that eventually turned into a new post I have, because as I have more and more conversations, I’m getting a clearer sense of what people need, and I’m creating more resources as I go.

Rethink Priorities

Fin 3:20:21

Cool. Do you want to take this opportunity to talk about Rethink? One question, for instance, is: what does Rethink do? What is it aiming to do?

Michael 3:20:32

Yeah, nice segue from ‘Rethink rejects people’ to ‘maybe you should apply!’

Fin 3:20:36

Why don’t you consider being rejected from Rethink as well?

Michael 3:20:39

But it’s a really good rejection email; we’ve got a lot of positive feedback. So you’d be excited to get it! So Rethink Priorities — and again, I’m not speaking for Rethink Priorities, I just work there, so I have a lot of examples — they, we, whatever, are an EA-aligned think tank, basically. So we are pretty explicitly effective altruists. We totally can hire people from outside the community, and we’re excited to do so, because almost all the world is outside the community, and it includes many smart people. But our priorities and vision and the causes we work on and the angles we take are driven explicitly by these principles. It doesn’t mean we always agree with ‘EA orthodoxy,’ but orthodoxy doesn’t agree with itself. We would generally describe ourselves as a think tank. It’s also reasonable to describe decent chunks of us as a consultancy. And what we mean by both of those is, we relatively rarely do quite foundational, curiosity-driven, or non-theory-of-change-y work. We do do some strategic-level stuff; we do some abstract things, philosophy things, things that are about big variables in the world and not just about one decision — that’s totally fine, as long as there’s still a theory of change.


Do you want to give some examples?


Yeah, so one that I’m not involved with is basically: how much moral weight should we give to different beings? And this is primarily focused on non-human biological animals — which maybe sounds like a weird sentence to bother saying, but I think digital minds is where it’s really at. But this project is focused for now on non-human biological animals: which ones of them are conscious? Is consciousness the key thing? What do we mean by consciousness? Do they differ in their moral status or moral weight? How much should we care about them and their experiences? Do they differ in their capacity for welfare? These are all both philosophical and empirical questions. And they’re really big picture. And they’re not like, ‘which grant should I make out of these two?’ or something. But they obviously have a very clear theory of change. Like, one of the things is: how much money should various funders allocate to non-human animals versus humans, and also to different non-human animals? And which strategies should we take? So a lot of the time with big picture stuff it’s less like, ‘we have this one theory of change,’ and it’s more like, ‘can we sketch five different things that each seem plausible, where having one answer versus another answer would change what actually happens in the world?’ And it doesn’t have to be the five most likely things, but if we can sketch five of them, then it’s pretty likely there’s something that can happen where it really impacts the world. And this is in contrast to something like just looking into whether a given worm is conscious, because that’s the gap in the literature.

Fin 3:23:24

One worm; we omit everything else.

Michael 3:23:26

I guess I meant a type of worm. To be fair, in academia they’d probably go for a type rather than, like, ‘Barry’. But still.

Fin 3:23:31

Yeah, not a token of a worm. Is there anything that you appreciate about working at Rethink, just as a working environment, compared to other workplaces?

Michael 3:23:41

Yeah, lots of stuff. I have a thing that would be hard to Google your way to, but I have a list on the EA forum of pros and cons from my perspective of working at Rethink Priorities, which you can try to find if you’re adventurous.


We’ll also link it in the write up.


Oh, that’s true. Yeah, I can’t do that, because this one is a comment, so it’s very hard to find from Google. But briefly, I guess starting with some cons. Compared to my other options — I think I have really good options, so the key thing is just: Rethink Priorities is clearly great, but there are also other clearly great things, and it’s plausible some of them are even better. In particular, I think I could maybe join a different type of research organisation, or go harder on grantmaking, or go hard on some version of community building. And you also have to make these decisions kind of locally a lot of the time — not switching, but: how much time do I spend on community building-ish things rather than my main job, even if I stay at my main job? Other things might be even better. Also, there are some things like being remote, which is a con for some people; it’s pretty fine for me. Overall it’s good for me, because it meant I could start in Australia, and it means I can now work in Oxford rather than in the US, where much of the company is based — but you know, it’s a small con in some ways. The organisation as a whole isn’t focused on the areas I’m most focused on, but mostly this is fine, because Rethink Priorities is effectively kind of like five organisations that are really friendly and help each other out or something. So it hardly negatively affects me that other people are doing other stuff, and it decently often positively affects me. In terms of pros — I don’t know if that was a comprehensive list of cons, but we’ll link to the thing. Pros: just, I’m doing work that matters a hell of a lot. Like, it’s horrific that the things we’re working on aren’t yet done. And my team has a list of like 90 project ideas, and this list is growing more than shrinking. It’s 90 currently, but it’s growing. It’s gonna become infinite in a matter of time.
I mean, probably many of them aren’t that important, but I’m pretty confident at least the top half are a big deal, and it’s terrifying they aren’t happening. And I’m pretty confident they aren’t happening. And also, you know, if they’re happening, one group looking into some important problem doesn’t mean that that should be the only thing.

Luca 3:25:56

Do you want to again give some examples?

Michael 3:25:57

There are some examples — I can’t give all of them; some are spicy and secret. Well, not secret, but non-public. One is just: what is our high-level theory of change for AI governance?


Sounds important.

Michael

Yeah. A lot of people have different beliefs on this. There’s definitely some work going on of this flavour, but mostly a lot of people are running around with quite different beliefs of our high level goals here. Like what are we aiming towards, and what are the key variables, and the key plan? And things there could be like, are we trying to advance one country or lab to have a long lead time? Some listeners are going to be pretty confused here because they haven’t heard the basics. But if so that’s okay. Don’t worry. Yeah, are we trying to advance one country or lab, so they have a lot of lead time, so they can then proceed carefully and invest a lot in safety, and they don’t have to race against someone else? And maybe we’ve chosen one that is perfectly safety conscious or likely to do well. Or are we trying to create one sort of pair coalition? Or are we trying to create a completely, extremely multilateral, global thing? Are we trying to prepare for a world with one extremely powerful AGI or with a wide range of different things developing continuously in parallel? Things like that. So which of these visions are happening? I think we just haven’t had someone talk to a bunch of people who’ve been in the field for a while about why they’re doing what they’re doing, and try to find out where they disagree, why they disagree, and then point out critiques of each of these worldviews. So that’s an obvious thing that just super should have happened, and it hasn’t happened. I’m not saying the community’s made a mistake; the community doesn’t have that many people. So it’s very unlikely we’ve done exactly the right thing as a community, but we’ve probably done roughly the right thing, and that does leave things on the table. If people had switched to this instead of what they’re doing then I’d be talking about what they’re doing instead.

Fin 3:27:47

Fin here. Michael realised after the recording that he maybe gave the impression that there’d been approximately no work of this type before his team started on it. Whereas really, there has been some work focused on this and a lot of miscellaneous work and thinking that helps fill a similar role but wasn’t really aggregated or done intensively. Michael also wanted to mention that one researcher, in particular - Matthijs Maas - has started publishing a sequence of posts covering similar topics on the EA forum, under the heading ‘Strategic Perspectives on Long-term AI Governance’, and we’ll link to that on our website.

Michael 3:28:23

Yeah, anyway, so we have a crapload of project ideas. It’s screamingly obvious they should mostly be done. They haven’t been done yet. That sucks. I can help them get done fast. We’ve grown from a team of two people at the end of last year to ten people now. So I can help this happen at scale and build a foundation for this to happen a bunch more. That’s just amazingly exciting. And I also get to work with people who don’t think like me — not in a way that means we lack cognitive diversity; there are pretty different perspectives brought to bear, and different thinking styles and reasoning styles and backgrounds and stuff — but we basically share the same core goals, at least to a large extent. And we also share some things like, you know, really relentlessly focusing on strategically pursuing what’s good, rather than just, you know, the taboos and fads and stuff. Yeah, remote was a perk as well, but it’s in both lists. Pretty decent pay as well — you know, some people are joining us from software engineering backgrounds, but from a teaching background it looks good. There’s probably a bunch more.

Fin 3:29:25

Well, at this point I’m sure listeners are clamouring to know, does Rethink plan to be hiring soon, or is Rethink currently hiring?

Michael 3:29:33

So Rethink is sort of always at least somewhat hiring, I think — the longtermism department in particular. So I’m on the AI governance and strategy team. There’s also another team in the longtermism department called the ‘general longtermism’ team, which is currently mostly focused on ambitious entrepreneurial projects aimed at making the world a bunch better, aiming to eventually become, quote, ‘megaprojects.’ We currently have an expression of interest form for both of those teams, and we also have a non-public — but I can probably share it with you — expression of interest form for founders for these projects we might launch. We aren’t in an active hiring round, but we plan to continue growing pretty rapidly and having a big bank of people who we know are interested, who we can invite to the next hiring round and possibly pluck off-cycle. I don’t know if we’re going to do that sort of thing, but we might sometimes have a particular short-term contract we need. So we have the two expression of interest forms available on the website, and for the other one — probably if you reach out to me, I can show you the founder one.

Fin 3:30:27

Great, and we’ll link to the two public ones, for sure. Okay, let’s move on to the home straight. One thing I really wanted to ask about before we finished is, this distinction which I’ve heard you mention a couple of times before, between taking a ‘maximising mindset’ to different things, as compared to a ‘satisficing mindset’. What does that mean?

Maximising vs satisficing

Michael 3:30:54

Yeah. So what they literally mean, as far as I’m aware: ‘satisficing’ is just sufficiently meeting some threshold — you meet at least some threshold, and then you don’t really go harder than that. And then ‘maximising’ — you could also call it optimising — is trying to get the most of some variable; you always want more, basically. And my sort of hot take — it’s not a very hot take within EA, but I think it’s worth reminding people sometimes — is: probably try to do a pretty maximising thing. Probably don’t, for example, shoot for an impactful career, but for something roughly like the most impactful career you can have, with some caveats. One of the reasons I want caveats is because, even if this mindset is correct, I think just psychologically it could be challenging for some people, because then you’re always hunting for something else. It also could be practically a problem, because switching jobs is a cost for you, and the employers, and stuff. So I’m not saying constantly jump from thing to thing, have no integrity, constantly doubt yourself. Probably the way to implement this maximising thing is to have phases where you’re really thinking hard about what to do and you’re super open to switching, and then phases where you get your head down and you do stuff — and you’re willing to think, ‘should I pivot?’, but it’s not actively on your mind very much, and you mostly just go hard. Most of the world is super satisfice-y, and that makes sense for some personal goals. Like for making money, you probably should be satisfice-y — if you just want to make yourself happy — because of diminishing returns. But for altruistic stuff I think you mostly should be maximising, and I think a lot of EAs aren’t.
I think one reason is trying to sort of be nice, or something - trying to find a job type that is at least somewhat impactful, meets some bar, and you can immediately tell someone they can do, and then you can make them feel happy and included and stuff. Whereas I want to be sort of like there’s a pretty good chance there’s something they can do that’s really good, and I’m okay to tell them to hunt for longer, and to try to make that hunt more pleasant and convenient, and keep them happy and inspired and engaged and let them know that they’re doing something really valuable by hunting, but I don’t just want to be like, ‘Oh, you’ve got this background, there’s this job that’s kind of EAish, with that background, and it’ll help.’ Or like, ‘Oh, you’re living in this place? Yeah, there’s still pretty useful things to do in that place,’ or something. Like some people, it’s fine if they don’t move, but often they should consider moving - seriously consider it or something.

Fin 3:33:26

I think that’s really useful, and it’s interesting to consider how that plays into career decisions. It is the case, presumably, that picking a satisficing mindset can look like this: you care about improving the world with your career, so you take a job which kind of fits that description, and then you’re happy — that’s pretty satisfying, and in fact you may be doing a bunch of good. Contrast that with the maximising approach: maybe you are more scrupulous about the thing you end up doing, you iterate more, you maybe change course a bit more, you spend more time on the prioritising stages of figuring out what to do. That can in fact be psychologically less comfortable. And once you end up doing the thing — which maybe is, let’s say, 100x or 1,000x more impactful on some measure than the satisficing alternative — it’s not 100x or 1,000x more satisfying to you psychologically. So I think there’s an important explanation of why fewer people maximise for the thing they at least ostensibly care about, and that is that how good it feels to take these different approaches doesn’t track how good it is for the world, right? Does that make sense?

Michael 3:34:40

Yeah, that makes sense. I do want to be careful with this, because I think there are also people who are doing something that looks like maximising but are doing it too much, and it’s a bad idea for them or in general. And I think there are also trends that can cause that. So for example, you said thinking about what you should do is unpleasant, and that is often true for a lot of people. But there are also people who just really love the meta side — they really love just tossing things up, and they don’t really like doing — because there’s a lot of mundane stuff once you figure out that you should be doing, say, AI policy work in government, and maybe that’s true, but it means most of your job is maybe fairly mundane and not working directly on the most important thing. So the deliberating might be much more fun. So yeah, there are people who spend too long planning, and there are people who switch too fast, and things like that. So neither switch too fast nor too slow. And yeah, I agree that it is partly about the personal satisfaction thing. Another thing is — and some of this is received EA wisdom you’ve probably heard elsewhere, listening public — don’t shoot for a high chance of doing at least some good. To be clear, ‘at least some good’ might actually be net positive in expectation — one-in-a-million levels or something. In the world at large, you might still be doing way better than most people. But still, from an EA perspective, you’re kind of satisficing relative to what your talent could let you achieve. So don’t just shoot for a high chance of that; instead, focus on expected value. As a community, we don’t need everyone to succeed. What I mean is, we don’t need everyone to have an impact. We need to maximise the total impact across the community, and that will often look like a lot of people taking bets that might not work out.
And then we hopefully set up safeguards to catch these people, and hopefully remind people that they should keep themselves protected and have financial runway and stuff before they do this or make sure they would be able to get a grant or something. I think this is partly about career choice. It’s also partly about research, for example. So I think it’s not that hard to set up a theory of change for your research and your research topic that makes it likely to have more net positive impact in expectation than the vast majority of existing research, but don’t stop there. You probably can do much more than that. Just keep pushing harder. Don’t break yourself, know your limits, take time off, etc. But in terms of what you choose to spend your time on, push harder.

Fin 3:37:09

Cool. And I also like the thing you said at the end, which is separating out the intensity or ferocity with which you work day to day from how much you’re kind of fixated on this stuff — apart from just the decision rule of whether you satisfice or maximise. You can maximise in an extremely healthy way. In fact, you should.

Michael 3:37:28

Yeah. A related thing I want to flag is this idea that utilitarianism — and I’m not saying listeners believe in utilitarianism or whatever, but probably you put some credence on it — doesn’t tell you to constantly calculate the utilities, and I’m not telling you that with this maximising thing. Figuring out what to do is an action, and it has a certain expected value, and sometimes that action is not worth doing. So sometimes your rough guess as to what’s worth doing will suggest that you shouldn’t spend longer sharpening that guess, and you should enter a ‘do’ phase. This is similar to my ‘just apply’ idea. So I’m not saying spend ages planning or something. And relatedly, this idea of pushing yourself to work the most hours you can stay awake, every day of every week, is maybe just not a good bet for the world. Even if I didn’t care about your wellbeing at all — which I do — it’s just not a good bet for the world. And then also, even relative to the amount of work that would overall be a good bet for the world, maybe just, you know, cut yourself some slack or something. It just seems good at a community level for us to be like, ‘don’t push yourself right up to the limit, even of what you can do sustainably — you can ease off that limit a bit if you want.’

Fin 3:38:39

Yeah. Very wise. Final questions?

Research Michael would love to see

Luca 3:38:41

Lovely stuff. Yeah, we are literally exactly now at the four hour recording mark, so it feels apt to move on to closing questions. I guess we’ve been talking about doing impactful research basically this whole time, but one of the questions we do like to ask guests is, what are some particular research questions - and they can be very niche - that you would love to see more people doing work on? And feel free to answer that in a meta way if you want.

Michael 3:39:06

Luca knows me well. So, meta thing first: I think junior people, as I mentioned earlier — and junior, to be clear, doesn’t necessarily mean early career; it can mean pivoting. You could be 30 and really damn good at something irrelevant, but now you’re taking a hard turn, and you’re sort of junior on this new path. So yeah, junior people I think should be pretty focused on testing fit, building career capital, and using their personal fit. And so I think this question just doesn’t matter that much. I will still answer it — it matters a bit. But the question of what research topic you should probably do in theory, with me not knowing who you are, is probably less important than, like, where would you get good feedback loops, and things like that, and what’s best for you. So mostly just apply for jobs and see what happens, and get those mentorships and feedback loops. But one place you can look, if you do find yourself with time to look into questions and that is the best move for you for some reason, is a post I made called ‘A central directory for open research questions’, which is just an overwhelmingly big list of lists. And you can browse that — for example, if you’re interested in animal welfare, there’s a set of five lists for that in particular. Do not try to read all the lists on this page; try to skim the subset that makes sense for you. And then, two topics we’ve talked about more than average on this episode were nuclear risk and AI governance. The lists I would suggest for those in particular — my favourite individual public lists on them — are, for nuclear risk, my post ‘Nuclear risk research ideas’.
And for AI governance, a post I think was put together by the Centre for the Governance of AI — but I don’t know if it’s officially one of their things — called ‘Some AI Governance Research Ideas.’ There are questions that I think are more important than the average one of these that aren’t public, and also some of the questions, especially in the GovAI list, I think are predictably much less important — just not very important in the scheme of things. So be discerning, and feel free to reach out to people in the field and see if they have any other hot topics, but these are good starting points.

Michael’s reading recommendations

Fin 3:41:10

Feel free to answer this question with a meta answer also. What three books or articles or videos or whatever would you recommend to someone who’s listened to this and wants to read more — but read something that isn’t a very long list?




Okay, next question!

Luca 3:41:30

This can also be just three things that you’ve recently enjoyed reading; that’s another way of putting it.

Michael 3:41:36

Okay. In terms of Fin’s one, I think I can manage this, because I only have a short or medium list. So most of this conversation has been how to do good research on EA topics, and for that, my ‘Interested in EA/longtermist research careers? Here are my top recommended resources’ is my top recommended resource. For people interested in AI governance stuff, there’s an AGI safety fundamentals governance track. I haven’t gone through the course myself, but I gave feedback on the course content and read it. It seems to me probably the best introduction — or even for advancing people who are somewhat into the field, the best single thing: a curated list you can walk through. I don’t know when the next actual round will happen, but you can just read through it yourself and do the exercises yourself. On nuclear risk, there’s a sequence of posts that Rethink Priorities put together on that, mostly from before I joined, and there are also some that I haven’t put in that sequence yet, partly because they refer. But you can find them under my profile, or I could, I guess, give you two that list of links. Should I say anything that’s not just a blog post, and that isn’t just written by me?


You are most welcome to.

Luca 3:42:54

Your favourite comedy set. That’s maybe a nice way to round off.

Michael 3:42:56

Okay. I don’t know about favourite, but I’m a big fan of Dylan Moran, a sporadic fan of Stewart Lee, and a few others. My partner is also a comedian — or I guess I am a former comedian, and she is an active comedian, doing a show tonight. And so we like a lot of sort of random indie stuff that isn’t particularly well known. There’s a group called Zach and Viggo that do weird clowning stuff — if you happen to be in London, you might be able to see a Zach and Viggo show, but I don’t think it’s on Netflix or anything.

Fin 3:43:35

All right, great answer. Finally, where can people find you, and/or Rethink and your team online?

Michael 3:43:44

So for me, all of my public EA-relevant writings — well, basically all of them — are under MichaelA on the Effective Altruism Forum. I said ‘well’ and paused because there are some that are semi-public, where they’re Google Docs or something, but usually, if you read my posts for long enough, you’ll find a link to one of the Google Docs. And Rethink Priorities has a site; we also have a newsletter and a careers page. Also, I would flag, if someone’s listened to this and you’re feeling fired up about maybe getting involved in this stuff, you can just reach out to me. The easiest way is the Effective Altruism Forum: there’s a message button; you can hit that and message me. I’m pretty busy, so I might reply slowly or briefly, but usually within a few minutes I can share some useful links and stuff and suggest other people for you to talk to. And if you do that — I also have a doc on how to reach out to busy people, and I’m one such busy person, so it could be that the first step is you saying, ‘can you send me that doc,’ and then I send you that doc, and then you send me a new message informed by that. That’s one option.

Fin 3:44:46

Michael Aird, thank you very much. That was epic.

Michael 3:44:50

Thanks. Great being on.


Fin 3:44:55

That was Michael Aird on how to do impact driven research. As always, if you want to learn more, you can read the write up and there’s a link for that in the show notes. There you’ll find links to all the books and various EA forum posts Michael mentioned, plus a full transcript of the conversation. If you find this podcast valuable in some way, one of the most effective ways to help is just to write a review wherever you’re listening to this. So Apple podcasts, Spotify, wherever. You can also follow us on Twitter, we are just @hearthisidea. We also have a new feedback form on the website with a bunch of questions, and there is a free book at the end as a thank you. It should only take 10 or 15 minutes to fill out and you can choose a book from a decently big selection of books we think you’d enjoy if you’re into the kind of topics that we talked about on this podcast. Okay. As always a big thanks to our producer Jason for editing these episodes, and also to Claudia for writing full transcripts, especially for editing and transcribing episodes quite as long as this one! And thank you very much for listening.