Episode 53 • 21 September 2022

Tessa Alexanian and Janvi Ahuja on Synthetic Biology and GCBRs



In this episode, Luca talks to Tessa Alexanian and Janvi Ahuja.

Tessa and Janvi

Tessa Alexanian is the Safety & Security Program Officer at the iGEM Foundation. Tessa is a fellow at the Emerging Leaders in Biosecurity Initiative, was previously a fellow at the Foresight Institute, and co-founded the East Bay Biosecurity Group. You can read more about what she’s up to on her (excellent!) website.

Janvi Ahuja is a PhD student in computational biology at the University of Oxford, where she is affiliated with the Future of Humanity Institute and works with MIT’s Nucleic Acid Observatory. Janvi is also a fellow at the Emerging Leaders in Biosecurity Initiative, and was previously an intern at the UN’s Biological Weapons Convention ISU. Listeners may remember Janvi from co-hosting our episode with Ajay Karpur.

We discuss:

- Awesome things in Synthetic Biology
- Safety and security in Synthetic Biology
- Tessa and Janvi's Recommendations

Luca 0:06

Hey, you’re listening to Hear This Idea. Over the last few months, Fin and I have been trying to use every other episode of the podcast to explore different aspects of biosecurity. In this episode, we continue that mini-series by looking into synthetic biology. A lot of biosecurity coverage inevitably gets framed around COVID-19, and thus natural pandemics. That is without a doubt a really important topic. But in fact, it is likely that the most devastating threats don’t come from natural origins, but instead are artificially engineered in a lab: that is, engineered to be even more contagious, more deadly, and less detectable. So understanding biosecurity will inevitably also require us to understand biotechnology. But what exactly are the future technologies that we should be worried about, and when might they come about? What steps is the synthetic biology community already taking to help mitigate this risk, and what are its community norms? How can we balance all of these really awesome benefits that synthetic biology continues to bring to the real world, whilst at the same time being respectful of dual-use concerns? These are really hard questions, and when I asked around, ‘Who would be good to speak to about this?’, I was immediately recommended Tessa Alexanian. Tessa is the safety and security programme officer at the iGEM Foundation, where she works on exactly those questions. She’s also an ELBI fellow and, before that, a self-described ‘robot whisperer’ at Zymergen.

Tessa was just a wonderful person to interview. She’s smart, articulate and funny and really just made my job here very easy. Joining Tessa for this interview is Janvi Ahuja, who listeners might remember from co-hosting our episode on metagenomic sequencing. Janvi is a PhD student in Computational Biology at the University of Oxford, where she is affiliated with the Future of Humanity Institute, works with MIT’s Nucleic Acid Observatory, and is also an ELBI fellow. Janvi was really useful in helping to bring frameworks to some of the more abstract parts of our conversation, and it was also just really fun having both Tessa and Janvi interview each other. I learned a tremendous amount from this episode. We start by talking about why synthetic biology is just a really awesome and wild field at the moment, which is often, I think, a part that gets left out of biosecurity discussions. We also spend a fair amount of time talking about what a general culture of responsibility in synthetic biology looks like, which I think helps to contextualise what norms and initiatives already exist before we focus on the very worst kinds of risks. Then, of course, we dive into GCBRs, how we might expect synthetic biology to affect those over the coming decades, what technical projects Tessa, Janvi, and we would like to see in the world, and what people in their early careers can do to help. As always, there are timestamps to help you navigate around in case it’s useful. But without further ado, here’s the episode.

Tessa 2:39

I’m Tessa Alexanian, and I like to say that I’m working on steering towards nice futures for biotechnology, which I mostly do through my work being a safety and security programme officer at the iGEM foundation.

Janvi 2:53

I’m Janvi Ahuja, and I work on biosecurity research at the Future of Humanity Institute, and I’m also a PhD student at the University of Oxford in the Department of Medical Sciences.

Luca 3:05

And then another question for both of you to kick things off is what is a problem that you’re currently stuck on?

Janvi 3:11

A problem that I’m currently stuck on, and very much the biggest problem that I’m working on right now, is with this group called the Nucleic Acid Observatory, which is trying to understand what types of surveillance, in particular what types of environmental surveillance, could be the most useful in providing an early warning for a pandemic. In particular, the thing I’m trying to understand is how useful an environmental system might be in comparison to a clinical system, even assuming the absolute best environmental system we can build.

Luca 3:44

Nice, and what are you stuck on there? What is one of the problems that you’re currently wrestling with and would love to get to the bottom of?

Janvi 3:51

Probably just that getting a good sense of how infectious disease dynamics within populations translate to wastewater signals is just very, very difficult, because there’s a lot of things that sort of manipulate the signal and change the reflection of the signal within wastewater. I wish I just understood all of the parameters in between the disease we have as a population and how that shows up in our wastewater.
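As an illustrative sketch (not something discussed in the episode), the chain Janvi describes, from infections in a population to a measurable wastewater signal, can be caricatured as a few multiplicative factors. Every name and parameter here is hypothetical, and real models are far messier: shedding varies over the course of an infection, rainfall dilutes the flow, and signal degrades unevenly in transit.

```python
def wastewater_concentration(
    prevalence: float,           # fraction of the population currently infected
    population: int,             # people in the sewershed
    shedding_rate: float,        # gene copies shed per infected person per day
    flow_litres_per_day: float,  # total wastewater flow past the sampling point
    decay_fraction: float,       # fraction of signal lost in transit
) -> float:
    """Toy estimate of viral gene copies per litre at a wastewater sampler."""
    infected = prevalence * population
    copies_shed_per_day = infected * shedding_rate
    copies_surviving = copies_shed_per_day * (1 - decay_fraction)
    return copies_surviving / flow_litres_per_day
```

Even in this cartoon, the measured concentration depends on four uncertain parameters besides prevalence, which is one way of seeing why inverting the signal back to "how much disease do we have?" is so hard.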

Luca 4:23

Cool, awesome. Same question to you, Tessa.

Tessa 4:25

Yeah, so a lot of my job, and something that’s on my mind because it’s been a big part of it over the past few weeks, is doing risk assessment for this synthetic biology competition. So I look at all of the projects that the teams do and try to provide them with advice and make sure that the precautions they’re taking in terms of biosafety and biosecurity are adequate. There are a lot of places where I wish there were a standard for risk assessment that I could simply apply, and I don’t feel like there is, especially because we’re such an international competition. So one example that’s very concrete is how to deal with partial pathogens. So if you have a human cell line, for example, that’s a cancer cell line, and it has a piece but not the complete genome of a human virus: how risky is that really? Well, it is regulated completely differently in different countries. So, I have to give people very different advice depending on where they live in the world, and also, from my own perspective, I want to give them just a well grounded risk assessment. You know, it very much depends on the details of the project they’re doing and whether they’re likely to produce an infectious virus, and whether the bit of the virus that happens to be in there is a virulence factor that could recombine with something else they’re using, and it gets very into the weeds very quickly, and I really wish that I could just absorb someone else’s heuristic, because I sometimes feel, as a person who’s not a biologist but is working in biosecurity, a little out of my depth trying to evaluate, ‘You’re using a third-generation lentiviral vector instead of a second-generation lentiviral vector, and it’s with this specific cell line. What do I think that means for what your team should do?’ So: more standards for risk assessment. And I would double my emphasis on that question for dual-use risk assessment, which is very unstandardised, and I wish there were standards I could follow.

What is synthetic biology?

Luca 6:09

We’re going to be talking a lot about synthetic biology. What even is synthetic biology? What should I be thinking of in my head there?

Tessa 6:14

The way that I think about synthetic biology is it’s sort of an engineering orientation towards biology instead of a science orientation. I think scientists are often trained to think about how to discover what’s already out there and produce knowledge. So it’s a process of kind of empirical uncovering of what’s already out there. Engineering, instead, tends to be a process of thinking about what you want to build, and then following this engineering design process where you design something, and then you build it, and then you test it, and then you maybe learn about it and analyse it, and then you jump through that cycle again. Synthetic biology is this field that was founded by some engineers who were looking at biology and squinted at metabolic networks and thought, ‘Hey, those are kind of like regulatory feedback cycles; they look a little bit like electrical engineering principles and circuits, and all this stuff we’ve seen around control systems. Maybe we could engineer biology the way we engineer circuits?’ The answer is that you sort of can. It turns out to be a productive way to think about biology and about how to engineer it, and biology and genetic circuits are full of noise in a way that electrical circuits are often not, but it’s that sort of process of thinking about it. People often mark the year 2001 as the year that synthetic biology got launched, because there is this foundational paper where somebody builds a cool circuit in a bacterium, and then over the past couple of decades, it’s been this process of working up some standards and engineering principles and tools that let you take this more engineering perspective on biology.

Luca 7:49

Cool, that sounds really sick. It sounds like quite a young field as well, right, if it began in 2001?

Tessa 7:54

Yeah. In 2021, there were some interesting papers reflecting on the question of, ‘What was the second decade of synthetic biology about?’ My perspective on that is that the first decade, that’s 2001 to 2010, was really, ‘Can we do this? Can we take this engineering stance towards biology?’ And some big stuff happened in that time, in terms of the tools that we were able to bring to bear, and especially the rapidly falling cost of DNA sequencing has really enabled some new engineering tools. Then in the second decade, I feel the question was more, ‘Wow, it looks like we can engineer biology in new and unexpected ways. Does this mean there’s an industry here? Does this mean we can start putting this stuff out into the world? How are we going to deal with this super powerful technology? Now we have CRISPR. Now we have gene drives. Now we have cheap DNA synthesis, as well as sequencing. What happens now?’ I feel like we’re just now at the start of this third decade of synthetic biology, and I’m very curious to see what happens. But you’re right, it feels like a new field where many things could happen.

Janvi 8:55

I think one of the things that’s coolest about synthetic biology, and one of the reasons I love hanging out with the synthetic biologists is just how excited everyone is about it, and also just how creative people are with the technology. I’m curious about examples that you might have of times that synthetic biology has been used that you’ve just been shocked by or some of the coolest examples of its use in the last few months or years.

Cool examples of synthetic biology projects

Tessa 9:17

One thing that I get really hyped about is making photosynthesis better. It’s surprisingly bad, given how ubiquitous it is. A lot of plants use this thing called C3 photosynthesis. But it turns out that Rubisco, which is my most hated enzyme (you’ve got to have some nemeses, and one of mine is Rubisco), is surprisingly bad at dealing with mixed oxygen and carbon and playing its role in photosynthesis. So there are some plants that have these specialised compartments in their leaves that segregate the oxygen and the carbon dioxide and make photosynthesis more efficient. There have been people working on adapting those photosynthetic modifications into rice, which then just has much higher yields in most circumstances. There are reasons why not all plants are already using C4 photosynthesis, and it’s not just because it’s a difficult evolutionary leap. It’s also that there’s maybe less good performance in certain conditions - I forget if it’s in wet conditions or dry conditions. There’s reasons why everything isn’t already using that. But it would be really useful to have C4 rice. There’s also a company called Living Carbon that’s trying to do some not-quite-so-complicated photosynthesis optimization in trees, to sequester carbon way more efficiently by planting forests that grow really fast. This possibility of just getting into the basics of metabolism, and then taking this engineering mindset and going, ‘How can we tweak all of these enzymes and make them really optimised for doing really good things?’ I get super excited about it.

Luca 10:48

Yeah. Is it the way that natural selection is, or just how things are in nature, that they just haven’t been optimised in the way that humans want to use them and stuff? What’s the kind of philosophy there?

Tessa 11:01

Sometimes it is that humans have a purpose: we would like to use biology to do something that it doesn’t actually do. I think that microbes are amazing chemists, and so there’s a lot of things that we might want to produce using fermentation and industrial bioprocessing that microbes are indifferent to, but they’re chemicals that are useful for humans. So it’s this process of convincing the microbe to keep all of the genes that you want it to have, even though it’s not really useful in an evolutionary sense for the microbe to have them. I think the other thing is that evolution is necessarily a random and gradual process. So there might be really big leaps that you can make if you can intentionally do some engineering that will let you access spaces in biology that you simply can’t walk to in this gradual way, because there’s going to be some valley of lower fitness that you would have to cross that’s improbable to cross randomly.

Janvi 11:58

I’m curious about how much you think that we’ve been able to actually integrate synthetic biology into our lives and into solutions in this way?

Tessa 12:09

So there was a cool paper, I forget if it was last year or the year before, about six synthetic biology solutions that are already changing the world, which was neat. I would recommend reading it. But one of the things it talks about that I really like is this company Pivot Bio, which is doing microbiome engineering for agriculture. We spray a lot of relatively environmentally bad fertilisers that help our crops get sufficient nitrogen and phosphorus. But in nature, plants are mostly getting those from the microbes in their roots. And so you can imagine re-engineering these microbes to perform better at this sort of narrow, specific agricultural function, and then not having to spray as many chemicals, which mostly get washed away in the rain anyway. So that’s a practical thing that’s already out in the world. It’s being used. It seems like an example of this microbial optimization for human purposes.

The decade ahead for synthetic biology

Luca 13:07

Is there anything in the third decade, beyond what you mentioned now, that you’re particularly excited about coming? Are there any things, either for industry, or just for the, quote unquote, good of humanity, that you’re just excited about? Like, anything on the horizon there?

Tessa 13:22

Lots of things. I’ll just mention a couple, but I really could rant excitedly about synthetic biology, probably for hours. Some things I’m really excited about. One is the application of synthetic biology to biodiversity and conservation. So there’s a group called Revive and Restore that has been leading some work on genetic rescue. This is taking species that are really close to extinction, and at this point have so little genetic diversity, because there are so few members of the species, that it’s starting to be hard to reproduce and have healthy future generations, and trying to engineer genetic diversity into them (for example, the black-footed ferret) and then produce more hardy and more diverse future generations of this animal, and that I think is really cool. I don’t know if we will hit it in this decade. There’s also, I think, potentially exciting applications of gene drives to biodiversity and conservation, and there’s still quite a lot of work to be done to ensure that we don’t mess up ecosystems with gene drives. But I think they have a lot of potential, for example, to eliminate invasive species on islands where we don’t have any other helpful way to conserve the species that naturally evolved on those islands.

Luca 14:33

There’s discussions of using gene drive for malaria as well, right? Like with mosquitoes and stuff?

Tessa 14:38

Yeah. One thing that I found shifted some of my mindset around this was realising that a lot of the mosquitoes that carry human diseases are invasive species. I used to have the perspective that that seems a little suss, because bats are important and mosquitoes are a big part of their diet. Now I’m like, ‘Ah, you know, the Anopheles and Aedes aegypti mosquitoes, those ones aren’t originally from all over the world, but they are now all over the world because they’re such effective vectors for human disease. We can just not have those ones in most of the world, and I don’t think that would mess up ecosystems very much.’ So I think, again, there’s some caution to be exercised there, because one of my worst case scenarios here is that we have the potential to really reduce the human burden of disease using gene drives, and then we release gene drives that aren’t quite up to par and we create gene-drive-resistant mosquitoes or something. That would be a shame. So I think there’s a lot of really interesting work being done to figure out how to do gene drives that have limited temporal and geographic spread. But yeah, using those tools against vector-borne diseases could be really cool.
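To make the ‘super-Mendelian’ logic behind gene drives concrete, here is a deterministic toy model (an illustration of the general idea, not anything from the episode or a real drive design; the function name and the `transmission` parameter are invented for this sketch). A heterozygote passes on the drive allele with probability `transmission` instead of the Mendelian 0.5, which is enough to make even a rare allele sweep through a large, randomly mating population.

```python
def drive_frequency_next_gen(p: float, transmission: float = 0.95) -> float:
    """One generation of a deterministic toy homing-drive model.

    p: current frequency of the drive allele in a large, randomly mating
    population. transmission: probability a heterozygote passes on the
    drive; 0.5 recovers ordinary Mendelian inheritance, under which the
    allele frequency stays constant.
    """
    homozygotes = p * p              # always transmit the drive allele
    heterozygotes = 2 * p * (1 - p)  # transmit it at the biased rate
    return homozygotes + heterozygotes * transmission
```

Starting from a frequency of 1%, this model pushes the drive allele above 99% within roughly ten generations, which is the sweep behaviour that makes both eradication campaigns and the containment worries Tessa raises plausible.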

Luca 15:50

Yeah. When we think about synthetic biology, and our ability to engineer organisms, bacteria and the like, how does that compare? How much control do we currently have over that? To take a particular example, I think you gave a really nice talk where you compared bacteria to transistors, or NOR gates. What is our ability at the moment? And what are the main barriers there to being able to engineer more precisely?

Tessa 16:18

One perspective: I had some interesting conversations with an anthropologist who was embedded at the biotechnology company that I was working at, who was trying to understand how people relate to bioengineering. It was really neat. One thing that she was very surprised by, talking to the scientists and the engineers there, was how much everyone seemed to take a stochastic or statistical perspective towards bioengineering, of like, ‘You’re going to try a bunch of stuff and much of it is not going to work for kind of random or very difficult to know reasons. But if you do enough experiments, you can eventually find some way to work with the microbe and get it to do the thing that you want.’ I think one thing that’s very different about microbial engineering, for example, compared to stringing together some NOR gates on a breadboard, is that the microbe is evolving, is changing, and has its own selective pressures and its own goals. Goals is probably the wrong word to use there, that might be anthropomorphising a bit too much. But if you put a construct in the microbe, and you haven’t given it a reason to hold on to that construct, it will shed it over the course of a couple of generations. So you have to manipulate the conditions in which you’re growing it as well, so as to encourage it to hold on to it.

How synthetic biology was democratised

Janvi 17:32

I’m curious as to your perspective on the changes in the synthetic biology landscape in the last few years, not just in terms of the science, but in terms of the people doing it. It feels like synbio is one of the fields that’s been democratised possibly the most quickly. It seems like you can do a lot of the science with maybe minimal tools. But also, one of the things you’d mentioned was how difficult it is to get the microbe to do what you actually want it to do. I’m interested in the tension between those two things: a lot of people having access to these tools to try and create things, but it also being difficult to actually know what the microbe will end up doing.

Tessa 18:15

Yeah, so I think things have spread and become more democratised, and some of that really is just having better and cheaper tooling. One thing that is almost hard for me to conceive of is that if you had a 60-year-old professor teaching you biology, they probably did their undergrad without access to PCR, which, Janvi, I know you know well, is like step one in any genetic manipulation of anything - at least that’s my experience.

Luca 18:42

Can you very quickly explain what PCR is?

Tessa 18:45

Yeah, so this is the polymerase chain reaction. I could explain it in detail, but suffice to say it’s targeted, exponential amplification of DNA. So it’s really useful if, for example, you’re trying to figure out whether, when you’ve tried to connect two bits of DNA together, they’ve actually connected. You can use PCR and amplify just that little section.
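The ‘exponential’ part of that description is just repeated doubling: in an idealised reaction, every thermal cycle copies each target molecule once. As a back-of-the-envelope sketch (the function name and `efficiency` parameter are illustrative, not part of any real protocol):

```python
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealised PCR yield: each cycle multiplies the target by (1 + efficiency).

    efficiency = 1.0 means perfect doubling every cycle; real reactions run
    below that and eventually plateau as primers and nucleotides run out.
    """
    return initial_copies * (1 + efficiency) ** cycles

# One template molecule after 30 perfect cycles: pcr_copies(1, 30) == 2**30,
# i.e. about a billion copies of just the targeted section.
```

That billion-fold amplification from a single molecule is why PCR works both as a diagnostic and as the routine check Tessa describes.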

Luca 19:07

Like the COVID test then, right? It’s the same PCR?

Tessa 19:11

Absolutely, it’s the same. It’s useful for diagnostics, and it’s also really useful for trying to observe what you are actually accomplishing in your microbes, because they all just look like colourless liquid, or slightly yellow liquid. So PCR, what feels like a very basic observational tool - the microscope that enables molecular biology - didn’t exist until 1986. Now it’s hard for me to imagine doing biology where you can’t synthesise and sequence what you’re interested in. In 2001, it cost billions of dollars to sequence a human genome, and that was the first time we did it. Now, there’s a company - I actually haven’t read their preprint yet, so we’ll find out how real I think this is - saying that they can do whole genomes for $100, which would be big if true.

Janvi 20:08

What are the different kinds of groups that are trying to do synthetic biology now that all of this stuff is so accessible?

Tessa 20:14

So I think one thing that feels like a past-five-years-level trend is more countries introducing a bioeconomy strategy. So I think both as people are wanting to move away from petroleum-based manufacturing, and as there are more real bio-products going out, being in the world, and being useful, a lot of countries are going, ‘Oh, we should have a strategy around this. We should encourage local bio-production.’ I think this has also been made more urgent by the COVID-19 pandemic, and a lot of countries realising that if they can’t manufacture vaccines or therapeutics in their own country, then during a public health crisis, they might just not be able to import them from other countries. So I think there’s a geographic spread on the level of government investment. There’s also more accessibility in terms of doing community level educational biology or citizen science. You have people sequencing the microbes in their local river to see if there are any toxic ones, or doing assays of all of the algae that’s growing around them. There’s a fun group in California that is the Kombucha Genomics project. They’re trying to sequence the genomes of kombuchas from all around the world and find out what’s in them. I think it’s still a relatively small community that is doing outward facing rather than educational projects in the community or the do-it-yourself biology world, which isn’t to say there aren’t some really cool ones. There was a huge group of people who coalesced around this programme called Just One Giant Lab doing COVID-related experiments. There’s been the Open Insulin Project, which has been going for a long time, trying to produce an open-source prototype for insulin that’d be easy to manufacture anywhere in the world.
One group that I think is really neat, thinking about accessibility, is the Open Bioeconomy Lab, which is a collaboration between some researchers in Cambridge and some researchers in Ghana, and they’re trying to produce a collection of open enzymes that you can import from Ghana instead of from the UK or the US or somewhere else where there might be quite difficult import-export regulations. So I think you’re seeing increasing local biomanufacturing and also countries increasingly prioritising this in terms of economic investment and educational investment.

Coordination and strategy among synbio labs

Janvi 22:43

I was just curious as to whether these groups coordinate with each other at all, or whether there’s some sort of overarching structure which joins them, or whether they’re truly independent.

Tessa 22:54

I don’t think there’s global coordination of biology or bioeconomy strategy, really. One of the coordinating mechanisms that I know about, and I won’t pretend that I’m an authority on this, is this Global Community Bio Summit that comes together every year, which is one of the sweetest, most wholesome events. If you want to just feel good about biology, I really recommend going to this event, it’s so lovely. There’s also this increasing movement to have biofoundries, which are biomanufacturing facilities that you can use to run highly automated experiments without having to have all the equipment yourself. There’s a Global Biofoundries Alliance that is spinning up and trying to offer coordination and support and resources to its members. There are also strategies for global governance coming out of the WHO, for example, and international treaties, like the Convention on Biological Diversity, and the International Union for the Conservation of Nature. Both have recently put out statements or studies on synthetic biology. So, I think there are various places where some global coordinating conversations are happening, but it’s not like there’s a Board of Directors for the bioeconomy around the world. That doesn’t exist.

Janvi 24:11

I just wanted to ask a clarifying question: what does bioeconomy mean?

Tessa 24:16

I think what people tend to mean there is productively using biology to create goods that people would want. There might be a more formal definition of it, but I think that’s what people usually mean. So that’s everything from biologically produced drugs and pharmaceuticals, to industrial bioprocessing and biomanufacturing of flavourings or chemicals for fertiliser, to some of these more kind of living therapeutics or microbial engineering for agriculture. It’s the whole swath of getting biology to do economically useful things, I guess.

Lab robotics

Luca 24:58

One thing I wanted to tag on as well is that you previously worked on laboratory robotics. I’m curious just for your take on that landscape and what effects you see there in terms of how we do wet lab work, or science more broadly?

Tessa 25:12

I really liked working with laboratory robotics. I should make it clear that I am not a wet lab biologist - I did an engineering degree. I have been really into biology for a long time, and I took some biology electives, and I was reminded that I don’t like working in the wet lab, because I don’t have the mental fortitude for experiments going wrong for difficult to understand reasons. Janvi, I feel like you’re probably not-

Janvi 25:39

Yeah, this is also why I no longer do wet lab stuff, but I suffered before I left. Sorry, go ahead, Tessa.

Tessa 25:45

I’m really happy that all of the science is happening, and I want to play a supporting role in it. So being able to be a laboratory automation engineer was great for me, because I could help people do biology experiments, but all of my work was negotiating with robots and computer code instead of microbes, which I preferred. The precision can matter, especially when you’re doing that microbial metabolic optimization and trying to squeeze the last few percentage points of efficiency out of the microbes. Then, as you’re screening many, many different options for that, the tiny differences in how a human pipettes across 96 experiments might start to matter. So the precision can start to be really important. But I think scale opens up kinds of experimentation and kinds of approaches to biology that are simply different. There is a difference in kind, I think. One of those is doing engineering that’s less hypothesis driven. So, instead of saying, ‘Okay, I’m gonna really look hard at diagrams of all of the ways that this metabolite could flow through the cell and try to figure out which parts might matter’, going, ‘I’m gonna mutate it a lot, and then I’m going to measure it a lot and see if I see any signal there.’ So when you’re picturing those robots, you should picture a lot of boxes with glass around them. Some of those boxes contain pipettes that are picking up liquid and moving it from place to place. Other boxes I have worked with in the lab include a colony picker - that’s where you have a bunch of microbes growing on agar plates, and you have a little camera that does some image processing and sees where the colonies are growing. It’s this cool little thing that almost looks like an upside down Christmas tree made out of metal that goes and picks them out, and then puts them into liquid. So if you don’t want to pick your colonies yourself with your own little metal tool, you can leave it to the robot to do that. It can do a lot of them at once.
The box that I found most, I don’t know, ‘emotionally hurtful’, when I first toured Zymergen’s lab when I was interviewing there, was one called a fragment analyzer, which runs the equivalent of gel electrophoresis. For context, that’s something that you do quite a lot to check that the changes you intended to make to the microbe’s DNA have actually happened, and often the most efficient way to do that is to extract the DNA from your microbes, and then chop it up in a predictable way, and basically see what sizes of fragments are left after you’ve chopped it up. If you have the wrong size of fragment, then you’ve done something wrong in your genome engineering. But here you would just put your little plate into a slot, and this robot would analyse 96 at once, and give you a graph and a set of numbers stating, ‘Here are all of the sizes of fragments, and in what abundance’. I haven’t done that much wet lab biology, but I have spent hours of my life with wobbly little gels of agar connected to some electricity, waiting for my fragments of DNA to pass through them. Yet this one was three minutes: you just go up, press a button, and then a few hours later you get the numbers, and you don’t even have to squint at a photograph. You just get the numbers of the sizes of fragments. I actually almost cried.

Janvi 29:14

The robot doesn’t know how it hurt you either.

The iGEM competition

Luca 29:18

So, you’ve both done iGEM. Tessa, you work at the iGEM foundation at the moment, right? I am curious just to get a general sense of what iGEM as an organisation is and how it relates to synthetic biology. Maybe Tessa you can start off there.

Tessa 29:33

So iGEM is an international organisation trying to build up the field of synthetic biology, and that mostly happens through this annual competition in which students do projects in synthetic biology. These are students all the way from high school through to master’s level, as well as a couple of non-students - we usually have a few community labs that participate. The central idea of iGEM got started in 2004, not long after the very first synthetic gene circuits, and there was this idea of, ‘Is there something to this engineering approach to biology? Maybe we can explore that by throwing a bunch of ambitious young students at the problem and seeing what they come up with.’ Over the next decades, as we talked about, synthetic biology really grew up a little bit, and I think the competition has grown up as well. So now there’s much more focus on the idea of figuring out how to do synthetic biology that is able to be done everywhere in the world, and is able to solve problems everywhere in the world, and that brings up its own set of challenges, both financial challenges and barriers of access to education and equipment. So, I think the competition is in a different place than it used to be, but it’s very much followed this arc of how the field of synthetic biology has developed.

Luca 30:45

Yeah, awesome. If I understand it right then, iGEM involves these community labs or these students and stuff doing a bunch of projects. Both of you went through that process as well, right, from the student side? I’m just curious to hear a bit about what your own experiences were like, and what you learned doing it. So Janvi, would you want to start with that?

Janvi 31:03

So, I did iGEM as an undergrad student, and it was one of the first experiences where I got to really see how the process of science goes. The rest of my undergrad had been somewhat prescriptive. In iGEM, I also had a tonne of autonomy in terms of how the project should go, and I think that’s one of the points of iGEM - giving students an opportunity to have their hands on the steering wheel. That was really exciting, but also, to some degree, pretty terrifying. A lot of the experiments we were doing, we did alone and unsupervised. Depending on the age group you’re in, sometimes you have supervisors who are professors who are very engaged in the work you’re doing; other times you are to some degree left on your own. It was a pretty formative experience for me in terms of engaging with science really deeply, and I really appreciated it - I think it was one of the spaces in which I really fell in love with science. One of the other parts was just engaging with a community of people who were really excited about what they were doing. That made me really value synthetic biology as a space. In the other scientific fields I’ve dabbled in, there just hasn’t been that kind of energy around, I think to some degree because synthetic biology and iGEM are all about looking at ambitious problems and trying to somewhat reverse engineer how to solve them. I really appreciated that about the experience.

Luca 32:38

Yeah, that’s awesome. Tessa, same question to you. What was your iGEM experience like and what did you learn?

Tessa 32:44

So I also participated in the competition as an undergrad, and I really relate to what Janvi said about it being a very inspiring community to come into. I had not understood the purpose of conferences before I went to the big end-of-year iGEM jamboree, as we call it - really more of a celebration than a conference. Before that, I would go to lectures because I like learning stuff, but I didn’t really get, ‘Oh, you might go to a place and find all of the people who are infectiously nerdy about the same thing as you, and that would be really, really fun’, and I got that from going to iGEM. After going in 2015, I thought, ‘Okay, I just have to do synthetic biology. This is too exciting. I don’t want to be working away from this community.’ So it was very formative for me as well.

Luca 33:29

Awesome, and what were your projects? What did you guys end up making?

Janvi 33:33

So my group could not decide, and we worked on a bunch of water remediation tasks. There were 11 of us, and we decided that we were going to take on three different projects: detoxifying oestrogen, removing lead from water, and detecting Legionella. Yeah, very, very rogue. My pitch was something like, ‘We used the resources and the expertise we had to come up with relevant projects’, even though they were very, very loosely attached to each other.

Tessa 34:09

There is, I think, sometimes in iGEM a bit of an aspect of storytelling, where you’re a bunch of students who got access to do whatever you wanted in a lab for the first time, and then the end of the competition comes and you have some scattered set of results, and you need to come up with some story about how this is a coherent project, instead of just messing around all summer.

Luca 34:27

That’s really cool. How long are these projects? How much time does it take?

Janvi 34:32

It varies. Ours was about five or six months - I think that’s how long you’re supposed to take. But some teams do start working on it as soon as the last year’s competition is over. Then some teams, I think, also absorb projects that other groups - professors or other scientific groups within their university - had been working on for a while.

Luca 34:55

How would you get to join in? Do most unis have organisations, or would you apply to iGEM directly and you get put into teams? If I was listening to this, and I wanted to get involved, what would be the best way to do it?

Tessa 35:08

So all of the teams are run independently by the host institution, and in some cases, in some countries, it’s not a single institution. For example, there’s a Bolivian team that involves a collaboration of four different universities, which I think is really neat. The way that I found out about iGEM was that I went to the engineering teams’ open house night, and somewhere nestled - this is in Canada - in between the concrete toboggan team, the underwater robotics team, and the solar car team, there was an engineering biology team. As a person who’s extremely interested in genetics, but was studying engineering, I went, ‘Wait, you can do engineering biology? I’m definitely gonna join that team.’ So that was how I found out about it.

Luca 35:50

Oh, that’s so cool. I’m curious as well, we’ve talked a bit about your own projects, but what projects have you seen come out of iGEM, stuff that made you go, ‘Wow, this is so cool’?

Janvi 35:59

God, there are so many. I think last year one of the winning teams used the cellulose that’s made from a kombucha scoby fermentation to try and make leather, but then realised it wasn’t elastic enough, so they also incorporated spider silk into developing the leather, which is just so cool. I think one of the other purposes of iGEM is trying to flag or develop cool projects that the synthetic biology community, or even just other biology communities at large, could benefit from. There have also been really successful iGEM projects, like Ginkgo Bioworks, which is now a multibillion-dollar company that came out of a very early iGEM round. Benchling too, which is like an open source - I don’t know how you would describe it - I would say, editing tool?

Tessa 37:03

Yeah, like plasmid designs. Plasmid and CRISPR design software. When you’re working on sequences in the lab and engineering them, it provides this suite of software tools to help you keep track of what you’re engineering and what experiments you’re planning. It’s very useful.

Janvi 37:20

Yeah, that also came out of iGEM, and to some degree it was built to make doing this more accessible to other people.

Tessa 37:27

One way you can participate in iGEM, even if you didn’t participate as a student, is as a judge, and some of my favourite iGEM experiences were actually not as a student participant, but as a judge of the competition. One team that I saw, who actually ended up winning the high school division that year, was so lovely, but I was a little bit skeptical because they had done such amazing work. First off, they really enamoured me because their project was about designing biosynthetic catnip to lure stray cats in so that they could be spayed or neutered and then re-released. Their project was covered in these cute drawings of cats, and they were handing out postcards with cute cats on them, which I hope didn’t bias me too much as a judge, but it might have biased me a little. One of the things they did - and something I used to focus on when I was in iGEM - was the mathematical modelling, engineering analysis, and design side of things. Again, a nice thing about iGEM is that you can get involved even if you’re not skilled in the wet lab, which I am not. Their mathematical modelling, I thought, was super impressive. They were doing these interesting differential equation models of how all of the different metabolites in their coculture system would relate to each other, and which parts of it they could optimise to increase the throughput. I was like, ‘This is really sophisticated, and you are teenagers. I’m not gonna say you didn’t do this, but I want to talk to you more about which parts of this you did, and which parts came from your advisors.’ I ended up going to their poster and talking to this 16 year old girl, who was just so happy to talk about all of the papers she had read, how she had synthesised aspects of these differential equation models and ways of analysing them from different papers, and which parts hadn’t applied perfectly to their project, so she modified them. I was like, ‘Oh, you’re just super smart’. Amazing.
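For a flavour of what that kind of modelling involves, here is a minimal, entirely hypothetical cross-feeding model - one strain grows and secretes a metabolite, and a second strain grows by consuming it - integrated with simple Euler steps. The equations and every parameter are illustrative, not the team's actual model:

```python
def simulate_coculture(t_end: float = 50.0, dt: float = 0.01):
    """Toy two-strain coculture: strain A grows logistically and
    secretes metabolite M; strain B grows by taking up M.
    Returns final (A, M, B). All parameters are made up."""
    A, M, B = 0.1, 0.0, 0.01          # initial densities / concentration
    r, K = 0.5, 1.0                   # A's growth rate and carrying capacity
    p, u, e, d = 0.3, 1.0, 0.8, 0.05  # production, uptake, yield, death
    for _ in range(int(t_end / dt)):  # forward Euler integration
        dA = r * A * (1 - A / K)      # logistic growth of producer
        dM = p * A - u * M * B        # secretion minus consumption
        dB = e * u * M * B - d * B    # growth on M minus death
        A, M, B = A + dA * dt, M + dM * dt, B + dB * dt
    return A, M, B
```

A team would then typically ask which parameter - say, the production rate `p` - most increases the final density of strain B, which is exactly the throughput-optimisation question Tessa mentions.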

Luca 39:19

Well, it sounds like a really cool initiative. It sounds at least from your experience, as well, that you don’t necessarily need to have a background in biology to do it. Although I’m imagining some technical skills help here, it sounds like if you’re studying engineering, this might be something that’s worth checking out.

Tessa 39:37

I would also say if you’re studying social science, many iGEM teams could benefit from an embedded social scientist. One thing that we try to incentivise in the competition is this aspect called human practices, which is the idea that you should be reflecting on the values that you’re embedding in the work and responding to what other people think of your project. So maybe going out and talking to stakeholders, for example. This applies even to fairly foundational, basic science projects. We had one team that was working on noncanonical amino acids and expanding the genetic code, and they went and talked to a bunch of religious leaders in their community about what they thought of this idea of expanding the genetic code which I thought was a really interesting piece of social science that helped them understand and contextualise how to advance their work. I know a lot of really cool anthropologists who have been part of iGEM teams, and I would say you don’t even have to be an engineer.

Risk assessment in iGEM

Luca 40:30

Yeah, that’s a great thing to flag. Going all the way back to the beginning of the episode, you mentioned, Tessa, that one of the things that you were stuck on was standardising risk assessments, or doing a bunch of these risk assessments for these various projects in these different countries. I’m curious, more broadly, how risk assessments and dual use assessments fit into these iGEM projects, and what you’ve learnt or would highlight from that?

Tessa 40:59

Well, one thing that’s really great from the risk assessor’s view, which is now my view on the competition, is that iGEM teams do somewhat chaotic and ambitious things, and they don’t necessarily know what’s hard, so they’ll try to do things that you wouldn’t expect them to do. In 2016, we had a team who got pretty far towards building a gene drive. At the time, there was no national governance of gene drives anywhere, and it didn’t fall under their institutional guidance. So they showed up at the end of year jamboree, and I think iGEM had thought that there might be a few more years to develop a gene drive policy before undergrads started building them, but that was not true. We had another team - we actually have recently hired someone who worked on this team for our responsibility programme - that really wanted to put their melanin producing yeast into the stratosphere on a stratospheric balloon to test its radiation resistance. Their long term goal for their project was yeast that could better survive the radiation in space, and they happened to have a relationship with a stratospheric balloon company in Brazil, so they wanted to launch their yeast into the stratosphere. It was very unclear what the rules are for putting engineered organisms up there. Different countries manage their air rights differently - does this count as a release of a GMO? Or maybe it only counts as a release of a GMO if the balloon crashes? It was a really interesting case study.

Luca 42:24

What ended up happening there? Were they allowed to launch or not?

Tessa 42:28

So they did, speaking of social science, a really interesting project where they phoned up a bunch of regulators all around the world. I think they found about 40 and heard back from maybe 10 or so, and got extremely different answers in different places. We said, ‘Okay, you’re in Brazil, so you have to follow the Brazilian rules.’ We felt like there were some especially dicey things around indigenous land sovereignty and launching a balloon over the Amazon - not all of that land is even uncomplicatedly under Brazilian jurisdiction. So, in the end, what we had them do was add some melanin extract to the agar that they were growing the yeast on. That was a compromise of, ‘You’re still doing a little bit of this test, but you’re not actually putting your genetically engineered yeast into the sky.’

Luca 43:14

Wow. That’s so interesting. That’s crazy as well, to the point of how this technology is getting so decentralised, and so accessible, that this seemingly insane thing can just be done by a group of students for an iGEM project. It’s really cool.

Tessa 43:31

Cool and scary. Really cool, though, really cool.

Luca 43:35

So it sounds a little bit there as well that iGEM is shaping the way that young people approach science, and teaching them lessons for later on in their lives. I’m wondering how that in turn links back to what we were talking about before, about having a culture that is aware of dual use concerns, or safety more broadly.

Tessa 43:59

I think that iGEM can have an impact in a few ways. One of which is going to the jamboree or going to iGEM events and feeling like you’re embedded in a community, and then designing that community in a way that communicates values and attitudes about how biology, or how science, is meant to be done. I remember as a student going to my first jamboree, and seeing that a lot of the judges were asking us questions like, ‘Did you talk to any stakeholders and find out if the solution that you’ve designed was actually desirable to them? Or did you just come up with it in a lab and assume that this technological solution was appropriate for the problem you’re interested in? Did you consider this possible safety concern, about how maybe you’re actually engineering a stronger bacterial immune system that could then be horizontally transferred to other bacteria? Is that something you thought about at all?’ I think seeing those fairly high status people who were judging our project ask those questions, and put them on the same level as questions like, ‘How many controls did you have in this experiment?’, really communicated a lot to me about what was valued in the community. I was also having this intense social experience where I really wanted to belong to the community. So I think that kind of motivation can be powerful. I also think we try to express our values in how the competition is judged, and we change how the competition is judged pretty frequently as we learn more about how people are responding. So some of the criteria for the competition aren’t just, ‘Did you build a microbe?’, but also, ‘Did you do your measurements in a standardised and well documented way that other people could build upon?’, which is that rigorous engineering value.
But we also have this programme called human practices, which is much more about reflecting on the values that you have embedded into your project, responding to the desires and knowledge of other people, and considering your responsibility as a synthetic biologist. We also ask every team to do a self-assessment of risk in their own project - we have this big safety form that we ask them to fill in. I will be honest, I think for most teams, there are two or three people who do that safety form; it’s not done by the entire team of between four and ten people. But I still think that is an intervention, and I think probably the most powerful interventions that we have are the ones where we go to meetups or have calls with students, or I email back and forth with a lot of the student teams, and again try to express, ‘Hello, you’re a part of our community, we really like you. The things that we value here are this whole culture where you attend to these issues of responsibility, anticipating the impact of your work, and assessing the possible risks of it.’

Synth bio and biosecurity

Luca 46:42

Well, I hope we’ve really tried to emphasise the cool aspect of a lot of this synbio stuff, and hopefully talked as well about some of the really awesome things it might do for the future, for humans and for the planet. But I am curious to tackle the scarier questions head on. So, to transition to that big picture: what challenges do you guys see this new synbio technology posing for biosecurity as a whole? Maybe, Janvi, you can paint some of this out for us.

Janvi 47:19

I think there are a lot of moving parts here, and, as a consequence, a lot of challenges that developments in synthetic biology and allied fields pose for biosecurity. I think these can be split into three categories. The first is that there are a lot more new technologies and new information: we just have a better understanding of how biology works, and more tools to be able to engineer things in the way that we want to. That feeds into the second thing, which is that we suddenly have new capabilities. There’s a lot more that we can do with this new information, and the tools that we have can suddenly serve a much wider purpose. So, so far it’s been new information and new capabilities. The third thing that poses a risk in the biosecurity landscape is that suddenly there are many new actors that can contribute to the field. Because synthetic biology is making everything much more accessible, it also means that a lot more people can contribute to editing, writing, and reading genetic material, which in many ways is great, because we get a bunch of really cool projects. I also want to underline that even though we talked about some of the really exciting and fun ways to engineer biology, there’s also a bunch of really concrete ways that this has changed our world and been very useful. Some of these examples are the iGEM companies that have been commercialised and serve concrete purposes in our world. But there’s also this really great series of papers - one is ‘The New Decade of Synthetic Biology’, and then a follow-up paper talks about six ways that synthetic biology is used in the world today, which I think Tessa mentioned earlier. I think it’s important to internalise how concretely useful synbio has been. It’s not just a bunch of cool, exciting, abstract ideas.
But that democratisation and creativity that’s come as a consequence of synthetic biology being more accessible also means that there are a bunch more people who have new technologies and new information that they can use, and that contributes to risk within biosecurity. To some degree, this exposes us a little bit more to this concept of the Unilateralist’s Curse. The idea comes from a paper by Nick Bostrom, and in this context it refers to a situation where there’s an action that any one of a group of actors can take, but whether the outcome of that action is net positive or net negative is unknown to those actors. The action can be undertaken by just one of them, which is what makes it unilateral. What’s scary here is that, with the development of tools and capabilities in synthetic biology, individuals can undertake these kinds of actions unilaterally.
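Janvi's point can be made quantitative with a small back-of-the-envelope model (my own illustration, not the exact setup in Bostrom's paper): suppose an action is genuinely net-negative, but each of N independent actors estimates its value with some noise and goes ahead if their own estimate happens to look positive. The chance that at least one actor acts grows quickly with N:

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_anyone_acts(n_actors: int, true_value: float = -1.0,
                  noise_sd: float = 1.0) -> float:
    """Each actor sees true_value + Gaussian noise and acts iff their
    estimate is positive; the action happens if at least one acts."""
    p_single = 1 - phi((0 - true_value) / noise_sd)  # P(one actor's estimate > 0)
    return 1 - (1 - p_single) ** n_actors            # P(at least one acts)
```

With these illustrative numbers (true value -1, unit noise), a single actor acts roughly 16% of the time, but with ten independent actors the chance that at least one does rises above 80% - which is why "check with a trusted second person before acting" is such a natural norm for communities holding potentially dangerous capabilities.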

Tessa 50:35

I’m curious, because you’re an actual wet lab biologist, unlike me. Have you seen that shift, even over the course of your undergrad to your PhD? Have things gotten easier?

Janvi 50:46

Yeah. I would say, because I was only in the wet lab for quite a short period of time, I haven’t actually noticed that shift myself. But I remember telling someone relatively excitedly about the Nanopore MinION, and they told me that I was a little bit slow, because the SmidgION had been announced - the SmidgION is an even smaller version of the MinION. The MinION, by the way, is a sequencing device, meaning that it can read nucleic acid material as it passes through a nanopore, and it’s about the size of a chunky USB stick. The SmidgION, though it hasn’t actually been released yet, is about the size of half of my thumb. That’s an example of this shift, on really short timescales - maybe two years.

Tessa 51:40

Another thing that I might throw in - and this is less a change on the two to five year timescale and more on the five to ten year timescale, but governance moves slowly, so this is really relevant for some of the biosecurity laws we have in place - is a big shift away from needing to be concerned about managing physical access to physical materials. 10-15 years ago, you could have a set of export controls, or you could have a set of locks on the doors of your lab. This is not to say that these aren’t important - I think recently they were cleaning out a CDC lab and found some old vials of smallpox, and that’s still problematic for sure. But some of the previous ways of thinking about how to govern biosecurity were very focused on this idea of someone getting unauthorised access to a lab and stealing a dangerous pathogen. Now I think what people imagine instead is someone engineering something themselves: there are all of these services where they could order plasmids and synthesised DNA from commercial providers, and do all of their design in silico - imagining something they want to create and then creating it. Whereas previously we were much more limited to, ‘Oh, perhaps you can extract this one gene from this one organism and paste it into another one’, rather than, ‘Oh, DNA synthesis is so affordable that you can design this chimeric - usefully or terrifyingly - binding receptor and just insert it into your organism of interest.’ This feels like a really big and important shift that is related to the decentralisation and the capability. That shift from having to govern physical spaces and physical materials, to having to govern information, and potentially dual use information, feels very important to me.

Luca 53:32

In some ways, there are fewer bottlenecks, right? Before, you could say, ‘Okay, as long as we’ve got these 10 labs covered, we’ll be fine’, whereas information is presumably much more difficult to keep control of. It also sounds like there’s another dynamic of things just moving really quickly. It sounds a little bit like what you were saying before, Tessa, about iGEM having to cover all these different regulations across different countries, and policies where there just isn’t a precedent yet. This sounds similar to things in the digital space or in the tech space: when things move really quickly, you don’t know where technologies are gonna be in one or two years, while governments and regulations move slowly - it takes time. That creates a lot of ambiguity, which in and of itself can sometimes be scary, and let things slip through.

Tessa 54:20

Absolutely. I think it’s very hard to regulate as the pace of technology and biotechnology is only speeding up.

Janvi 54:28

Another thing I think synbio is adding is that, because it’s growing as a field, and because it’s so involved with reading, writing, and editing DNA, we’re constantly developing new capabilities - which, I think, in the previous episode with Kevin Esvelt and Jonas Sandbrink, they labelled as transfer risks. As synbio develops, there are certain tools that are very helpful for other technologies - for example, viral vectors for vaccine development. But as we get better at editing these things, dual use capabilities also develop in technologies that were never intended to serve a malicious purpose.

Building a culture of responsible disclosure

Luca 55:17

It definitely sounds like, with all of these challenges coming in, and on the point that it’s harder for regulation or governments to keep up, an increasing amount of responsibility falls to the scientists, the engineers, and the community. So I’m curious to dig in a bit more about the norms that have emerged around this. In particular, Tessa, you’ve made this point about building a culture of responsible disclosure around synbio. Could you introduce that as a concept, and what you mean by it?

Tessa 55:51

Sure. The concept here very much follows up on what Janvi was pointing to. There are these technologies that you could develop for totally fun and/or benign reasons, and then they might be useful tools for someone who’s seeking to do harm with biology. One belief that I often try to impress upon all of the young synthetic biologists I talk to is that we all like to focus on the really fun parts of biology - the drought resistant crops, and the spider silk kombucha leather - but we have this history in the field of extremely unethical medical experimentation and bioweapons development. That’s also part of our legacy, and we can’t just assume that people will use biology for good, because they haven’t always, right? I think we’ve been relatively lucky. We haven’t had a whole bunch of engineered bioweapons out in the world, and that’s great. But looking at the history of how biology has been used in the past, we can’t assume that it will be only a force for good, because that’s not how people have related to it so far. Then you have to think, ‘Okay, if there are people who might seek to do harm with this knowledge I’m creating, is there a way for me to share that knowledge that minimises that?’ I used to hope that we could just borrow some norms from cybersecurity here, where you often have a case where someone uncovers something potentially dangerous - an exploit in an insulin pump or a pacemaker, say; the medical device hackers are the closest to biology - and then needs to find a way to get that security vulnerability patched. So, if you’ve uncovered something dangerous in biology, is it similar? The first thing I would say is that I’ve now gone to DEF CON and seen some students and medical device hackers talk about how difficult it is to get the medical device companies to patch the security flaws that they find.
So, I no longer think that you can necessarily just borrow from cybersecurity, because often those hackers themselves are really struggling with this balance of not wanting to create or expose vulnerabilities, but wanting to encourage accountability from the people who have made those vulnerabilities in the first place. But the other place where I feel like that metaphor breaks down is that, in biology, you can’t necessarily defend against things. In medical device manufacturing, if there is a security flaw, it is usually a mistake that someone made, which you could then use human knowledge to fix. Whereas in biology, if there’s something flawed about our immune system - well, we don’t understand the immune system. So you could discover a flaw and then have no surface area for fixing it. I think some of why this idea of a culture of responsible disclosure - where people basically think about how to maximise the good outcomes of what they’ve learned and minimise the bad outcomes - becomes so necessary is because there are these vulnerabilities that we can’t necessarily defend against. As you’re communicating about your work, it’s important to pause and have a think about the best way to disclose it. Janvi, does that match your own experience?

Janvi 59:08

Yeah, that sounds pretty true. I think the comparison between cybersecurity and biosecurity seems super salient to me. To some degree, the fact that we won’t necessarily be able to patch every vulnerability really flags that it doesn’t always make sense to try to find and outline them really well. I think that makes sense to me. I’m actually also curious, Tessa, if you have thoughts on what a good culture of responsible disclosure looks like. I imagine some sort of tiered system or something?

Tessa 59:51

This gets very into my ideal world for dual use assessment in general. In the Effective Altruism community, you often have a lot of young biologists who are way over on the other side - thinking a lot, and worrying a lot, about the potential dual use implications of their work. Sometimes I just want to talk to them and say, ‘Hey, you probably don’t need to stress out as much as you are about this. You’ve gone from your first order, highly defensive technology into some kind of third order terror spiral about how this could be bad.’ I think it would be great to have a simple triaging method: if your work doesn’t hit on these major areas of concern, then you should maybe check in again at publication, but you probably don’t need to be constantly triaging the risk of communicating about it as you’re doing it. But then I think there’s some other work where there might be easy ways to adapt it to be less risky, but still achieve your project goals. One piece of the model that feels important to me is that science is often more curiosity, exploration, and funding driven than necessarily goal directed, and you may be able to explore many of the same curiosities, or many of the same problems, without posing nearly as many risks. I sometimes try to adopt this sort of steering framing when I’m talking to people about how to deal with this. I’ll give you the example of an iGEM team we had who were outside of normal dual use regulation, because they weren’t working with any pathogens, but who realised that the bacteria they were engineering to break down electronic waste could potentially be used to break down real world electronics, and they thought, ‘Hey, that would be bad. It’d be bad if our bacteria digested your car’s navigation system or something.’ So they ended up adapting their project to only work in an aqueous environment, and refocused on mining waste and reclaiming metals from mining tailings ponds.
That’s nice, because they still got to play with the circuit that they were really curious about that was about metal absorption in their E. coli. They still got to do most of the experiments they wanted to do, but they had steered towards an end goal that had far less dual use risk. So I do think that kind of steering is possible. I also think sometimes you can do more as a person who is working on something potentially dangerous. You can co-develop countermeasures for it. Kevin Esvelt’s efforts to develop daisy drives and other forms of limited spread gene drives along with gene drive technology is an example of this for me. I feel like if you haven’t been able to steer towards something less risky, and you haven’t been able to develop countermeasures, because perhaps this can’t be patched, that’s when you get into the space where you start thinking a lot about how to communicate about your work, and this world of responsible disclosure. There’s things you can do earlier in the research pipeline.

Luca 1:02:56

I’m also curious for examples of what that disclosure looks like. So suppose you’ve taken what precautions you can at the actual research stage - what should you do afterwards, when you come across a vulnerability or some other kind of security flaw?

Tessa 1:03:13

One thing that Janvi mentioned earlier was this idea of the Unilateralist’s Curse. I would say step zero is to try not to be unilateral. If you think you’ve really discovered something dangerous, turn to some trusted second or third person to get their opinion. There are also examples of people who have disclosed something, but not disclosed everything you would need to reproduce the vulnerability. In 2013, some researchers discovered a novel botulinum toxin, and they didn’t think that any of the existing antitoxins worked against it. So they published saying, ‘Hey, we have discovered this novel toxin’, but they didn’t publish its full sequence, so other people couldn’t produce it. They basically said, ‘If you’re trying to develop an antitoxin for it, reach out to us and we will send you the sequence, but we won’t make that publicly available.’ I think that’s a good example of still communicating about a vulnerability without revealing absolutely everything about it - selectively, responsibly disclosing just the part needed to draw attention to the thing, without so many transfer risks.

Tradeoffs in open science

Luca 1:04:19

I’m curious how that part of the selective disclosure then intersects in turn with the open science movement, or this push towards making more things transparent and more accessible to people, regardless of institutional affiliation or the paywall. I am aware that there is a difference between ‘How much of the information do you make accessible?’ versus ‘How much do other people need to pay in order to access that information?’ but there does seem to be some kind of tension there. I’m curious what you guys think about that.

Janvi 1:04:53

It’s a hard balance to strike, but I think there’s a lot of science that probably should be open. The subset of science that we care about, or that we’re worried about, the things that may generate direct risks or transfer risks, actually doesn’t inhibit a lot of other science from being broadly accessible, and I think that’s quite important. Then, as Tessa mentioned, even with the dangerous information, you can give people the ability to do the research they still want to do and achieve the goals they wanted to achieve from the outset, whilst not doing dangerous work. To some degree, I think that involves having more of a relationship and more cooperation with proponents of responsible science and biosecurity out there, whom individuals can reach out to if they’re planning on undertaking research that they’re worried might be risky, or if they want to access research or information that might be considered more dangerous and is therefore less open. One of the things, though, where biosecurity and open science really intersect and overlap is this practice of active pre-registration of research, which is the idea of, before undertaking your research, putting your grant proposals or outlines of your research on platforms, kind of like preprint servers, but before you’ve even done the research that goes into the paper. This is good for biosecurity and also good for open science. It’s good for open science because people have a better understanding of how one might approach a certain scientific topic, and there’s more awareness of what’s going on in science more broadly. But it’s also good for biosecurity because if someone notices a risk associated with that research, they can approach you and try to figure out the best way to resolve that risk.

Tessa 1:06:54

A lesson that’s been learned over and over at iGEM, like Janvi said, is that if you can move the interventions earlier in the life sciences research cycle, that’s better, because it really sucks if we end up in a world where all of that steering away from dual use only happens at the publication stage. Then you have people who have applied for grants, and they have done their research, and that might have been somebody’s entire master’s degree or a good part of their PhD, and then they get to the publication stage, and someone says, ‘Actually, this is too dangerous. In particular, the novel part that makes this worthy of publication in this high-impact journal is too dangerous. So we’re simply not going to publish it.’ So I’m very in favour of some of this stuff that moves that review earlier in the process, partly because you’re not going to ruin anyone’s PhD.

Janvi 1:07:51

Convincing people or changing people’s motivations sounds really difficult, and in a way that seems really hard to get feedback on. I’m curious to hear a little bit more about the kind of work that’s been going on there, and maybe some of the work that you’ve been involved in.

Tessa 1:08:07

I agree. It is hard. It feels like a very squishy set of things. Sometimes I miss working with robots which, when you succeed, move the plate successfully, and when you fail, crash, and you watch the crash and go, ‘Whoops. The code did something wrong there.’ Whereas this idea of ‘shaping motivations’ and ‘inspiring people to take a different perspective on their field’ is a lot squishier. The sorts of things that I’ve been working on, often in collaboration with researchers in Megan Palmer’s group at Stanford, are things around trying to gather data on what kinds of risks people anticipate in their work without being prompted: what categories of risk they name, or even how many people, when asked to look at the project that they’re doing for risks, answered ‘none’ versus answered something. We’ve been using those kinds of measurements as a proxy for risk awareness, and then trying to look at some of the interventions that we’re doing within iGEM, like educational workshops, or whether people had contact with me or another member of the iGEM safety and security team, and see any correlates there. Again, it all feels pretty inadequate and squishy. One of the things that I’ll be most excited to see is better data to understand the risks from life science research. A culture that I’m jealous of is aviation culture, where I feel like they have a pretty good culture of noticing near misses, analysing near misses, and doing lots of post mortems when there are near misses. It’d be so cool to have that in biology. I say that not only because I’m worried about biorisks, but because I am hungry for data.

Luca 1:09:51

Yeah. Why do you think that is the case in the aviation industry, or in aviation culture?

Tessa 1:09:57

I think it’s because accidents in aviation culture are very big and very upsetting to people, and accidents in biology have overall been less well known. There were some accidents in 2014 in the US where the CDC mailed microbes that were supposed to be deactivated, but weren’t, to other CDC labs. That did actually raise a big fuss, and led to a ban on funding of gain of function research in the US for a couple of years. So it’s not that there are none of these feedback loops around accidents and accountability in governance. But I think they are weaker, because there is relatively lower attention and relatively lower visibility.

Janvi 1:10:45

On Tessa’s point as well, I think one of the broader themes that we pick up in biosecurity and in interfacing with different communities within science is that, in order to make biosecurity practices effective or internalised, we need to create as little friction as possible. Part of that is not having biosecurity practices that aren’t super important. There’s this general attitude that scientists have towards biosecurity, this feeling of frustration, because there are rules like making sure we don’t have lab coats on our chairs so that we don’t trip. I heard that rule so much more than I heard any rules on designing my experiments to make sure they were safe, even when I was more involved within the virology community. When we’re designing how to implement biosecurity within science, we really need to be cognizant of the needs and behaviours of scientists themselves. That tack involves making sure that we don’t tell them that they can’t publish their research once they’ve already done all of it and spent years doing it, but it also involves making sure that, in their day to day lives, integrating biosecurity seems easy and important to them.

Tessa 1:12:11

I really want to underline what you said there. I feel like we need this kind of ‘scope sensitive’ risk assessment, and scope sensitive biosecurity, where the stuff that could be really, really important gets treated accordingly. A lab accident while transporting your sample of smallpox or a very serious pandemic influenza: that would be really bad, right? A lab accident where you trip over your lab coat, because it’s hanging on the back of your chair, would be kind of bad, but not that bad, right? The scope of these things is so different. Yet, as Janvi said, the way that we emphasise them in biosecurity is not different.

Tradeoffs in responsible disclosure

Luca 1:12:48

For what it’s worth, this sounds somewhat similar to cybersecurity, right? There are some parallels here as well. I think I’ve heard it said that you need to deal with the reality that people have a limited attention budget, and, really, you just want to make sure that the processes and all of these protocols are as simple and as easy to follow as possible. I’m curious if we can dig a bit more into some of the trade-offs here. Tessa, in a talk that we’ll link to in the write up, you focused on some of the fundamental trade-offs that responsible disclosure forces us to reckon with. I’m curious if you could walk us through some of them.

Tessa 1:13:29

Of the three trade-offs I outlined in that talk, one I call ‘vulnerability versus accountability’. This came from paying attention to the cybersecurity world where, as you disclose something, and the more publicly you disclose it, the more vulnerability you create, but also the more accountability you create for someone to jump in and try to fix that problem. This seems like a very real trade-off: you want as little vulnerability in the world as possible, and you also want as much accountability as possible, and you want the problem to be fixed as swiftly as possible. There aren’t necessarily going to be easy wins. Maybe you get someone who was about to release a gene drive, and then you talk to them, and they realise that they shouldn’t, because of these concerns about resistance to gene drives in this population, plus the lack of global agreement about the acceptability of gene drives, and then they go, ‘Okay, that worked, and I won’t do it’, and you just had that as a private conversation and no vulnerability was created. But I think that’s a best case scenario, and usually you will have to grapple with this trade-off between vulnerability and accountability, and that’s just difficult. One of the big reasons to be transparent is for that accountability reason: motivating people to defend against or avoid the threat that you’ve identified. The other one, in the talk, I phrased as ‘risks to the biosphere versus open collaboration’. We were just talking about open science. One of the things that open science has underscored is that we’re often bad at science. I don’t know if you follow Retraction Watch, for example, but there’s a tonne of extremely fake data out there, or extremely badly planned experiments, or poorly analysed data.
Think of people’s Excel spreadsheets where the names of genes get converted into dates, so that their analysis of human disease is off, which did happen and was terrible; it took outside inspectors looking at that data to recognise that those studies were off. To me, that provides a lot of evidence that there’s real corrective value in being open and being transparent, because even very expert people get things wrong. So there’s this real tension, because the scale of the risks here is huge. I work on biosecurity, and not on building cool carbon fixation bacteria or studying wacky proteins, because I’m really worried about the risks to the biosphere from biotechnology and from the misuse of biotechnology. But there’s also this difficult trade-off where I expect that I am getting things wrong in my own threat models and in my own actions. Transparency lets you invite this open collaboration and this open critique, and that seems really important to me. It just feels like a really difficult trade-off. I want us to have good norms, and at least in the effective altruist biosecurity community right now, I feel like we’re in what I would call an elitist mode of threat modelling, where we say, ‘Okay, be careful with that, you could uncover something really dangerous. Leave that to a couple of people and then just work on robustly good, defence-biased projects.’ I’m not totally comfortable with that, but I don’t have a better place for us to be, because this risk-to-the-biosphere versus open-collaboration trade-off is very real and very difficult, and erring on the side of caution seems smart. But I also hope that we get to a better place with it.
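An aside for readers who haven’t run into the Excel issue Tessa mentions: spreadsheet software silently auto-converts certain gene symbols into dates (SEPT1 becomes “1-Sep”), which corrupted enough published supplementary data that the HGNC eventually renamed the affected genes (SEPT1 is now SEPTIN1, MARCH1 is MARCHF1). A minimal sketch, using a hypothetical `excel_date_risk` helper, of how one might screen a gene list before it ever touches a spreadsheet:

```python
import re

# Tokens Excel interprets as dates when followed by a number,
# e.g. "SEPT1" becomes "1-Sep". This is why HGNC renamed genes
# such as SEPT1 -> SEPTIN1 and MARCH1 -> MARCHF1.
MONTHS = ("JAN", "FEB", "MAR", "APR", "MAY", "JUN",
          "JUL", "AUG", "SEP", "OCT", "NOV", "DEC",
          "SEPT", "MARCH")

DATE_LIKE = re.compile(r"^(%s)\d{1,2}$" % "|".join(MONTHS), re.IGNORECASE)

def excel_date_risk(symbols):
    """Return the gene symbols Excel would silently convert to dates."""
    return [s for s in symbols if DATE_LIKE.match(s)]

print(excel_date_risk(["SEPT1", "MARCH1", "TP53", "OCT4", "BRCA1"]))
# ['SEPT1', 'MARCH1', 'OCT4']
```

The point of Tessa’s anecdote is that errors like this were only caught because the underlying spreadsheets were shared openly; the check above is just the kind of guard an outside inspector effectively performed by hand.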

Janvi 1:16:56

I’ve sometimes thought about how we could possibly solve the problem of info hazards. If we just sent five people into a room for a year to try and solve this problem, how much traction would we make on it? I really agree with Tessa on the trade-off. It really sucks that we have to be exposed to so many epistemic vulnerabilities in order to try and make sure that we don’t generate risks, but I do agree about the net value we get out of that, and I think that we shouldn’t be generating more of these risks. I certainly think that we lose insights on how scientists act. We talked a little bit before about having scientists engage with biosecurity; I think they have the best insights on their work and where it could possibly generate risk. But if we can’t help them engage intimately with the types of info hazards that we might care about the most, I think it becomes difficult to actually see the risks that we care about the most.

Tessa 1:18:13

The only other consideration I might add around responsible disclosure is that there’s this open dialogue, collaboration, corrective aspect of transparency that, again, I think open science has really shown us is important, because really smart people are getting things super wrong all the time. Speaking of the dark history of biology, we have had biologists do things like the Tuskegee Syphilis Experiment, where they concealed syphilis diagnoses from people for 40 years in order to study the progression of the disease, and not just any people, but specifically African American men in the US in the middle of the 20th century. That was enabled because people were really racist, and I think that we shouldn’t laugh about that. I think we should expect that we are getting things wrong, and we shouldn’t assume that anticipatory awareness of what we might be getting wrong will come from within our own community either. Then there’s motivating people to defend against the threat; we’ve talked about that a lot. The last thing I wanted to talk about around responsible disclosure is that one reason to be transparent is to try to win the race to be the most responsible person who does the disclosure. You could imagine that you expect someone else to discover the same vulnerability next year. If you think that’s really likely, then you might think, ‘Okay, I’m a person who’s about to decide not to disclose this. But if I think the next person who discovers it will be even more unilateral than me, then maybe I should disclose it first, because I will take away all the novelty benefits of it, but maybe partially or selectively disclose it, or responsibly disclose it.’ That dynamic seems really hard to me. There was a good EA Forum post recently about checking whether you are actually in a race, around the development of atomic weapons and the intense atmosphere inside the RAND Corporation, where they really thought they were saving the world.
And, you know, it sounds like some EA orgs: full of ambitious young people trying to save the world and feeling like they have to be in this really secretive, intense mode. But they were actually wrong about being in a race, and probably made the world worse. So that’s a place where, if there had been a bit more openness about everybody’s technology levels, people wouldn’t have thought they were in a race (that’s the open-critique part), but also, you just need to tread cautiously if you think that the reason to disclose is that you’re winning the race against the next person, who will be less responsible. My prescription for that is the ‘Don’t be a Unilateralist’ thing. Ask other people if they think this is also likely to be disclosed soon.

Janvi 1:20:50

So we spoke a little bit about norms, and motivations and incentives from the perspective of scientists. I’m curious how useful you think it is for safety to be reliant on norms. I’m also curious about examples that you might have of good citizens in this space. Is there precedent of people standing up and flagging things in a way that’s been useful?

Tessa 1:21:19

You sent me the question in advance, and at first, I was like, ‘Oh, I wish I had a good cached answer to this.’ Then I remembered that I do actually have one. The example I think of is someone really proactively reacting to a new discovery. That is the case where I feel norms are essential. I don’t want our safety and security as a society to rely on norms, but I think it’s impossible to regulate at the speed of technology. So you necessarily need a bedrock of norms and culture to fall back on when something new has happened that you haven’t been able to anticipate. You also need a bedrock of norms so that rules don’t get ignored, because if you think a rule is really stupid, you just won’t follow it. I do that with rules, presumably you do as well, right? You need some motivation and culture to think that the rule is important. A really good example of something unanticipated happening, with no rules for it yet, was the dawn of recombinant DNA in 1972. One of Paul Berg’s grad students, Janet Mertz, was giving a presentation at Cold Spring Harbor. She said, ‘Oh, we figured out how to modify E. coli using plasmids. We figured out they create a lot of copies of the modified DNA. This is really exciting. I think we might actually now try to use a virus to put this bacterial DNA into mammalian cells. This is the technical problem I’ve been chewing on, and here’s my presentation. What do you guys think?’ People lightly freaked out. Someone called their supervisor, and then that supervisor called her supervisor, who said, ‘Don’t use a phage that grows in the human gut as you’re doing this experiment’, because she was going to use SV40, and, ‘As you’re planning these recombinant DNA experiments, please do not do them in something that could affect humans. We don’t know what happens with recombinant DNA. This is our first attempt at that.’ Before this, all of our mutations came from radiating things with X-rays; this was intentional splicing, in 1972, because modern biology is really new. It led to a whole series of deliberations and meetings, one of which, in 1975, is famously known as the Asilomar Conference, and that actually led to the framework of biosafety levels and risk groups that we still use today in microbiology. That feels like an example of a key intervention, where someone anticipated a potential harm and reached out and said, ‘Hey, you need to be way more deliberative about this. You need to do a longer reflection and risk assessment on your plans here. We need to be more cautious,’ and it worked.

Janvi 1:24:04

I was just going to ask if we’d spoken about all the different types of disclosure. I feel like we have skirted around it a bunch, but in terms of ‘Who do we disclose the thing to?’: there’s one version of disclosure, which is just publishing your work. There’s one tier down from that, which is publishing your work but without the really dangerous bit of information. Then there’s the thing that Tessa is saying: ‘Don’t be a Unilateralist, be a multilateralist’. Speak to your colleagues about this kind of stuff, and then see what you should do next. Then there’s another version: sometimes there are biosecurity boards that you can reach out to. Depending on the researcher and the research, that will affect who you should speak to first. But it feels like a pretty safe and low cost thing to always do: go talk to one or two of your particularly biosecurity-minded colleagues about this thing first, before you go and decide to publish your work. I think that reinforces the idea we spoke about a little earlier, which is having more scientists who engage with biosecurity more wholly. Yeah.

Global catastrophic biorisks

Luca 1:25:15

Right. So we’ve talked a bunch about synthetic biology, why it can be really exciting, and also some of the downsides and risks. I think we’ve been a little bit vague or black-boxy about what we actually mean by ‘risks’, and what concretely we are worried about here. In part, that is because there is just this whole range of risks, and that full set is worth emphasising: since a lot of this is everyday, from the iGEM project you might be doing to the academic research you might be doing, it is worth really thinking about these norms, risks, and procedures. But, within the Effective Altruism community at least, there is also this particular interest in so-called global catastrophic biological risks, which sound like a different subset of these risks more broadly. So Janvi, I’m curious if you can outline what we mean by global catastrophic biological risks and why they are in many ways different from biosecurity more broadly.

Janvi 1:26:17

Yeah. So global catastrophic biological risks refer to just a subset of the biorisks that the world, or we, might care about, though our focus is mostly on these GCBRs. They’re the risks that have the potential to cause such significant damage to human civilization that they undermine its long term potential, and so they can cause catastrophes and possibly even extinction. We really care about the tail risks here. These are the things that are really, really unlikely to happen, but would be extremely bad if they did.

Luca 1:26:57

Can you talk a bit about how synbio in particular is relevant for these GCBRs? Tessa, do you want to take a stab at that?

Tessa 1:27:07

Sure. I’ll lay my cards on the table: when I first got involved in the Effective Altruism community, I was interested in biology and interested in responsible biology, but I wasn’t very worried about catastrophic or existential biological risk. I still think there are some silly ways that people talk about it. I remember maybe the first official EA event I ever went to, I heard someone confidently proclaim, ‘And nobody’s working on biosecurity’. This was years ago, in 2016, but I remember being so pissed. I was like, ‘Bioethics is an entire field. Public health is an entire field. What are you talking about?’ So I do think that there’s a tonne to be learned from public health, bioethics, disarmament, and all of the people who are working on the Biological Weapons Convention - most of them are not in the Effective Altruism community, but they’re still doing really important work to reduce biological risk. But for these narrow, like Janvi said, tail risks, catastrophic risks, I think synthetic biology is especially important, because we have never seen an existential-risk pandemic in history, right? There’s only one example of a mammal going extinct from a natural virus, and that’s the Christmas Island rat, and super isolated island populations are not very representative of the rest of mammals. Similarly, we’ve had really, really bad pandemics. You could argue that the smallpox epidemic in the Americas, when smallpox was first introduced, was of a catastrophic level. I really do think it arrested the progress of a lot of Mesoamerican civilizations, for example: estimates are unclear, but between 70 and 95% of the population died. Records aren’t great from that time, but I think you can point to a possible example of a collapse of at least one human civilization as a result of a disease. But human civilization overall still progressed.
I don’t want to be blind about that either - it’s extremely, apocalyptically bad that we don’t have Mesoamerican civilization now, because so many people died of smallpox. This was terrible, right? I can’t imagine what it was like to live in the Americas when 95% of people died. I’m sure it was one of the greater human tragedies that we’ve had in history, and I want to make sure not to just gloss over how bad that was. Sometimes long-term-focused EAs get accused of treating past bad things in history as ‘mere ripples’, and I’m like, ‘No, this was really extremely bad. Really, really awful.’ We should sit with that as well. I think it would be even worse if we totally arrested the future of any humans having conscious experiences at all, or really reduced the number of humans that would live in the future. To me, that would just be really sad. I get excited about having kids, because I like the idea of there being more humans in the future, who go off and experience things and smell roses and see sunsets or whatever. I think these things that could potentially stop human civilization from existing in its current form are really, really bad. Getting back to the synthetic biology point, we haven’t seen something like that from natural pandemics, and what people worry about is that with engineered biology, you could potentially get into that space of really tail-risky, catastrophic biological risk, because you could have someone intentionally designing something intended to kill most people. I used to think that was a silly idea, because surely anyone who had the skill to engineer a catastrophic pathogen would realise what they had done and not release it. But learning about Aum Shinrikyo was a big update for me, because this was the group that released sarin gas on the Tokyo subway in 1995. They also had a biological weapons programme.
This was a group of at least relatively sophisticated actors, working at a time when biology was much harder to engineer, trying to release anthrax spores. Their bioweapons programme, in the end, wasn’t successful. There’s actually a quote from the head of the Aum Shinrikyo cult wondering if US assessments of the risk of bioweapons were actually trying to mislead terrorist groups into developing them, because they were so hard to develop. But I think synthetic biology is changing that. This is how you get the intersection of synbio and catastrophic risk: the most terrifying stuff isn’t natural, and making unnatural and terrifying stuff is getting easier. Does that match your model, Janvi?

Janvi 1:32:04

Yeah, I think that matches my model. One of the things I’m interested in, given that when you first became interested in GCBRs you recognised that there were a bunch of people already working in this field, is the question of ‘What is the motivation for working on GCBRs?’ Is it a longtermism-based thing? Or is it that GCBRs within the broader field of biosecurity, particularly these tail risks, are neglected? I wonder what your thinking is there.

Tessa 1:32:34

I think my personal thinking is more neglectedness-based than longtermism-based. I don’t know that you need to bring in concern about the very, very far future of humanity to worry about potential catastrophic engineered pandemics. I do think that reducing those tail risks is closely allied with public health. Standard public health is also interested, for example, in early warning systems for disease surveillance of the sort that you’re working on, Janvi. But those early warning systems tend to be less concerned about unknown pathogens, and about being designed in a way that would recognise something unlike anything we’ve seen before, whereas people worried about those very tail risks will be interested in early warning systems that could potentially catch something that is not a coronavirus and not an influenza virus. So that seems like one concrete example of a place where the priorities of what you invest in vary depending on what kind of risk you’re worried about.

Janvi 1:33:36

I feel like what I posed to some degree was almost a false dichotomy, because it’s true that this subset of risks is super neglected, and can have a super horrendous effect in very much the near term future, and that also, inevitably, affects the long term future.

Luca 1:33:53

Yeah, I feel like there’s just a tonne to disentangle here, so I’ll try and repeat it back to make sure that I’m grasping everything. On the one side, on the importance question, one thing that was raised here is scope sensitivity. There’s a huge difference in scope between the sarin gas attack in Tokyo, for example, smallpox in the Americas, and then potentially a disease that wipes out all humans. There are a lot of orders of magnitude of difference in how we might care about these things, based on our philosophies or what have you. Then the thing, Tessa, that you seem to really be hammering home is the neglectedness side. There is this huge community, including US defence, the BWC, presumably the FBI, and public health more broadly, that cares about biosecurity. But when it comes to the unknowns, the new technologies, and the really tail end of things, this stuff is much more neglected than we might think, and therefore it’s important to work on. When it comes to synbio in particular, there seem to be a couple of things going on. On the one hand, you mentioned that these things are becoming more accessible, which means you might get terrorist groups or rogue actors or whoever working on things that might be on the scale of sarin gas, or natural pandemics. But there are also these new tail-end things which synbio could be making worse. Is that roughly right as a summary of what’s going on here?

Tessa 1:35:29

That was an impressively coherent summary of our collective ramblings. So thank you.

Luca 1:35:34

That’s what I’m here for. But no, this is great. I would love to dig a bit more into the detail of exactly what we’re worried about with synbio. I’m curious about timelines, and about the shapes of the technologies that we’re worried about when it comes to these particular GCBRs that might wipe out large chunks of humanity, or even all of it. What, concretely, is going on? And when should we be worried about these things becoming accessible? Is this tomorrow? Is this next year? Is this a decade away? Is this 100 years away? How are things moving in this space?

Janvi 1:36:14

I can give some relatively formative ramblings on this. One of the technologies that we’re concerned about, which is concrete and very much attached to synbio, is the evolution of DNA synthesis machines. We now have ones that can fit on a benchtop. There’s a world you can imagine, which is quite exciting, where we have a lot of these DNA synthesis machines on a lot of benchtops, so that we might get to a point where we can create RNA vaccines really quickly - there are a lot of other steps beyond just synthesis - but you can imagine them at every clinic, so that if people become sick, we can develop these vaccines incredibly quickly. But what that also means is synthesis being much more widely available, which means editing and changing viruses could be much more easily accessible. So that’s something we’re scared about. It’s already available at a benchtop level, but not everyone can order one; you have to be an institution to order one. And there’s not very much legal regulation on DNA synthesis machines at all; we can jump into that a little more later. But one of the things we’re worried about is the development of DNA synthesis technologies as a whole. Putting a time on this is quite difficult, but I imagine it’s going to speed up a lot in the next 10 to 15 years, and become much more accessible to everyone. We spoke a little bit about how sequencing devices have become so much smaller in the last 20 years. If you can imagine a synthesis machine that’s the size of your thumb, things really start to get scary. Maybe that’s within the next 10 to 15 years; I don’t know, technology forecasting is hard. The other technology that I think is really scary, which is also allied with the synbio field but certainly not dependent on it, is the development of machine learning technologies associated with biological information.
One of the things you could think is scary is this idea of matching sequence to function information incredibly well. One of the things I tried to anchor on initially, when thinking about this, was Ajeya Cotra’s transformative AI timelines - I think they’re now at 2036 or something. If you mix this ability of artificial intelligence to come to really intelligent conclusions really quickly with biological information, that’s pretty scary. But maybe we don’t even need transformative AI; there are probably developments before that that are very scary, and one of them is this sequence-to-function thing I mentioned. To elucidate that a little more: it means taking a sequence of nucleotides and understanding exactly what it can do, either when it’s made into a protein or functioning in its own right. That matters because an algorithm could then compile that information and design and test a lot of viruses, or something like that. So that’s the other development I’m really scared about, which I could also, unfortunately, imagine happening in the next 25 years. I think the risk that comes from synthesis machines being much more accessible is definitely significant, but I’m also really, really terrified of our machine learning capabilities becoming more significant - possibly more so than I am of the synthesis capabilities becoming more pervasive. And maybe the implication of that is, ‘What are we doing right now to try and safeguard these technologies?’ - in terms of, ‘What are we building or baking into our synthesis machines to stop people from using them maliciously?’ And then also, how do we bake these biosecurity and safety measures into a field that is developing very autonomously of us? I don’t know how that’s possible.

Tessa 1:40:37

Yeah, I just want to add to that: I would say near term, certainly less than five years at the pace that machine learning is advancing right now. I don’t even have good ideas about how to govern this, which is quite frustrating to me personally, because sometimes people ask me what we should do about this sort of thing, and I’m like, ‘I worry about it; I don’t know’. It’s not just sequence-to-function mapping. We’re also getting into a space with a combination of high-throughput automated screening and the ability to have algorithms munch through really large datasets, where we can do this kind of hypothesis-free bioengineering: we maybe do directed evolution experiments, or - again, there are lots of new gene editing tools - make massively multiplexed edits without a particular goal in mind, and then you could imagine setting up a screening experiment that lets you produce something dangerous. And, you know, that’s too bad. I don’t think we have a good answer for it. If you’re interested in some direct expert forecasting on this, I would recommend a WHO report on a horizon scan for emerging technologies and dual use concerns in global public health. One of the technologies they identified was extreme high-throughput discovery systems. Again, this is the confluence of increased lab automation and increased ability to process data, and no idea how to govern it or do it securely. At least with DNA synthesis, we have some history of DNA synthesis screening, and we have some ideas: the NTI has this Common Mechanism project, and SecureBio is leading the SecureDNA project. Whereas when I’ve talked to people about what to do about these high-throughput discovery systems, everyone’s just like, ‘It’s a problem. We’ve got to work on it.’

Luca 1:42:35

One more philosophical question is how you see the threat landscape of GCBRs evolving over time. Maybe to draw an analogy here: often when I hear people talk about existential risk and AGI, it’s framed as there being an upcoming crunch period - depending on your timelines - where it’ll be really important whether, when and if we get AGI, we get it aligned or not; but after that, there’s talk about getting to a lower level of risk, Toby Ord’s Long Reflection moment or something. Irrespective of whether that’s true or not, I’m curious how that analogy maps onto GCBRs. Is it the case that there might just be an upcoming decade or century where this is a really wild field, but maybe if we overcome that, then we’ll have a defensive advantage, or we’ll have things in place as you described - Sherlock, or metagenomic sequencing or something - that make this extreme tail risk somewhat safer? Or is it just going to continue to be this wild space with lots of innovations happening, kind of like whack-a-mole, with threats coming up and us trying to hit as many down as we can?

Tessa 1:43:51

I think the expectation is something like an ebb and flow of risk. I think we could get ourselves into a much more secure place than we are right now. If we were really good at making ourselves immune against a really wide variety of threats, that would really lower the risk from a bunch of the current threat space, potentially for a while, until there was a new technological development. My expectation is that this holds so long as we are biological, carbon-based life forms - if humanity continues as biological life forms. We could get into deeper science fiction questions about whether or not that’s true, but I’m pretty attached to life as it exists right now, and I do hope at least some of humanity keeps existing as physically embodied life. In that case, it’s not clear to me whether, if we had a perfect understanding of how to engineer life, we would be in a defence-dominant or offence-dominant space. Given that, I suspect that as we meander our way towards a more and more perfect understanding of life, there will be an ebb and flow of risk as we bias ourselves towards defence - hopefully intentionally - and then discover something new and go, ‘Oh, this is a gene drive. We didn’t anticipate that. Oh, no.’

Janvi 1:45:17

I think this is interesting to think about, because of the parallel with AI timelines, where people’s personal drives and agendas change on the basis of those timelines; thinking of biology as something that’s constantly growing probably changes your approach to it. One of the things I wanted to mention, actually, which I think this interview brings up really nicely: I remember listening to an EAG talk, I think from 2017, where someone was comparing biosecurity and AI, and they were saying how with AI, we need to fix this thing, and then we can live in these beautiful futures, and it’ll be amazing. And then they said, with biology, we just want to stop a horrible thing from happening, and there’s no upside. Speaking to Tessa for the first time, a couple of years ago, was definitely the polar opposite of that: it got me really excited about bio futures. And even though the timeline continues, maybe in a less predictable way, this ebb and flow is also because of our progress in understanding biology.

Tessa 1:46:25

Yeah, I’m just a big biology fan. There’s a beautiful post from one of my favourite biology newsletters, which is called The Century of Biology, about this concept of viriditas. I can read what this person wrote, which really resonated with me: ‘My own personal mission statement is that I want life to flourish in the universe. I view biotechnology as the most logical means towards this end. When I say life, I mean the process that has carpeted our planet, and cells and flowers and children. Life is abundant, beautiful, generative. I’m talking about viriditas, the constant pressure of pushing toward pattern, a tendency in matter to evolve into ever more complex forms. It’s a kind of pattern gravity, a holy greening power we call viriditas, and it is the driving force in the cosmos. Life, you see.’ Look, I’m just really into life. And biology is all about that.

Janvi 1:47:26

Yeah, they should have said that in the talk.

Technical projects in GCBRs

Luca 1:47:29

Not to call them out or anything. It also sounds like there are two responses emerging. One is governing either existing technologies or technologies that, Tessa, as you said, we see on the horizon - and this is just a hugely open question. And then there’s this set of approaches around creating more robustly defensive technologies - Janvi, this sounds more like your metagenomic sequencing. When we’re thinking about interventions, or things that people can be doing as well, is there anything else that you guys want to flag that people should maybe think about working on?

Janvi 1:48:12

Yeah. You mentioned a little bit earlier the offence-defence balance, and this is something that comes up a lot when thinking about technical interventions - particularly in bio, trying to prioritise things that we’re sure are very robustly defensive, and trying, as we mentioned before, to redesign technologies so that they don’t have the offensive bits the original idea may have had. We’ve spoken a lot in this session about synthetic biology and responsible science, and I think a lot of that fits into prevention within this prevention-detection-response framework. I wanted to talk a little bit more about my framework for all of this. Within prevention, I see there as being three areas: there’s development, there’s access, and then there’s deterrence. Development - or differential technology development, which Jonas and Kevin spoke about in the last episode - prevents bad actors from ever accessing a technology because it doesn’t exist; or that’s the hope. Access is: given that the technology exists, we try and limit their access to it - so given DNA synthesis exists, limiting who has access to it. And deterrence is: given that they have access to it to some degree, can we influence their likelihood of employing this technology? This has two separate parts: there’s deterrence by denial, and there’s deterrence by punishment. Deterrence by denial is this idea of influencing their likelihood of employing this technology because it seems so unlikely that they would succeed in using this tech that bad actors just wouldn’t want to use a biological weapon. And deterrence by punishment is this idea of making sure that the punishment for even trying this thing seems so severe that no one would want to try it.
I don’t know if deterrence by attribution is its own thing or fits into one of these, but it’s this other aspect of making sure that you can trace back the origin of an outbreak - in particular, what group or what actor released the bioweapon - and that adds to accountability. Deterrence by denial leads us really smoothly onto detection and response, because to some degree, developing detection and response efforts actually strengthens deterrence by denial itself: by making our detection so strong, and our response so strong, we make the bar for creating a bioweapon so high that it’s just not a viable option. And I think to some degree, that’s what we want to do with detection and response. So yeah, with that framework out of the way.

Luca 1:50:56

Yeah, that’s a great framework. Tessa, I’m curious if you’ve got any reactions to that.

Tessa 1:51:00

Oh yeah, that really fits with how I think about how prevention, detection, and response fit together. I’ve seen some people really influenced by this idea of deterrence by denial as well. If you’re very worried about these deliberate actors, you might think, ‘Oh, these are going to be such awful, large scale things that people build; maybe there’s no point in all of this more public-health-flavoured detection and response.’ But because of this deterrence by denial factor, I think it really matters for preventing the worst case scenarios as well. The other thing I’d say, getting back to how you prioritise differently depending on whether you’re worried about general public health versus these tail risk scenarios, is again that point of, ‘How can you do something that’s generalised?’ - something that works even against a threat we’ve never seen before. So, for example, you might have, as we already talked about, unbiased metagenomic sequencing that’s just looking for, say, exponentially growing sequences in wastewater, without any particular search for existing viral sequences. And on the response side, you might worry less about platform vaccine technologies and relatively invest more in personal protective equipment that’s highly effective, or in disinfection technology that just reduces the speed of viral spread - which is in no way to suggest that platform vaccine technologies are not also important. I feel like we are generally underinvested in pandemic response, and to get out of a really bad pandemic, we’re going to need vaccines eventually. But you have a lot of people investing in these really broad-spectrum, pathogen-agnostic response technologies, and that seems pretty important - and it’s also one of the things you end up prioritising more if you’re more concerned about these tail risks.
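
The ‘exponentially growing sequences’ idea Tessa mentions can be sketched very simply: track the read count of each sequence across successive wastewater samples, fit a log-linear trend, and flag anything whose estimated growth rate is high - no reference database of known pathogens required. A minimal, purely illustrative sketch (the counts, sequence names, and threshold below are made up for demonstration):

```python
import math

def growth_rate(counts):
    """Fit log(count) = a + r*day by least squares; return r,
    the estimated exponential growth rate per day."""
    days = range(len(counts))
    logs = [math.log(c) for c in counts]
    n = len(counts)
    mean_d = sum(days) / n
    mean_l = sum(logs) / n
    cov = sum((d - mean_d) * (l - mean_l) for d, l in zip(days, logs))
    var = sum((d - mean_d) ** 2 for d in days)
    return cov / var

def flag_exponential(sequence_counts, threshold=0.2):
    """Flag sequences whose estimated daily growth rate exceeds the
    threshold - regardless of whether they match any known pathogen."""
    return [seq for seq, counts in sequence_counts.items()
            if growth_rate(counts) > threshold]

# Toy daily read counts for three sequences across a week of samples:
counts = {
    "seq_A": [5, 5, 6, 5, 6, 5, 6],         # flat background
    "seq_B": [2, 4, 8, 16, 32, 64, 128],    # doubling daily
    "seq_C": [50, 40, 45, 38, 42, 39, 41],  # abundant but not growing
}
print(flag_exponential(counts))  # → ['seq_B']
```

A real system would have to handle sequencing noise, sampling depth, and zero counts, but the pathogen-agnostic core - flag whatever grows, known or not - is just this.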

Luca 1:52:47

Maybe zooming out a bit more: it sounds like we have some broad tools in the toolbox here to address GCBRs. One thing from the very beginning of our conversation was differential tech development, and getting some of these cool defensive technologies out there; then a lot of our conversation here has been around cultural norms. One other thing I’ve often heard, in other EA cause areas, is this idea of choke points, or managing critical supplies. So for example, with AI there’s a lot of talk about semiconductor supply chains, and in banking and finance there’s talk about SWIFT and payment protocols and things like that. Is there anything in the biospace that resembles or mimics that at all?

Tessa 1:53:36

Yeah, I would say the backbone of how we’re doing biology right now is sequencing and synthesis of DNA and other nucleic acids. So managing sequencing and synthesis capacity feels really big to me. There are other material inputs into the process as well that are worth thinking about. There are these huge central type culture collections - there’s an Indian one, a German one, an American one - and people will often source their microbial strains from those culture collections rather than from the institution down the road or something. But that exists too; people do source things from the institution down the road, so that’s not a perfect choke point. I would say the big ones are sequencing and synthesis capacity, which also means controlling the equipment that allows you to do that. Looking ahead, over the next five years rather than right now, I think another capacity that’s going to be really important to control is the algorithms and hardware that allow you to do relatively hypothesis-free bioengineering. These are things like doing a massively multiplexed directed evolution experiment towards some end - where ‘directed evolution’, again, means you’re randomly mutating things and you’ve set up some condition such that the cells with more of the features you want will survive. So you can run an iterative process of massively multiplexed directed evolution and potentially get things that you wouldn’t have been able to design rationally. Similarly, we’re starting to see emerging examples - famously, AlphaFold last year, which is an algorithm that can take in a sequence and give you a protein structure; and then there was a new tool that just came out that can take a protein structure and give you alternate sequences that could produce that same structure.
So all of these algorithmic tools for bioengineering potentially allow you to engineer things that you couldn’t engineer through rational human inquiry alone, or to engineer things that are perhaps obfuscated to our current surveillance systems. Those seem like important choke points. And figuring out ways to manage responsible access - similar to what we’ve seen around powerful algorithms in other contexts, where it’s like, ‘You can use this, but you need to get permission first’ - seems like probably a good idea.
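
To make the directed evolution loop Tessa describes concrete, here is a toy simulation - purely illustrative, with a string-matching ‘fitness’ standing in for whatever survival condition a real screen imposes, and with an arbitrary target, population size, and mutation rate chosen just for the sketch:

```python
import random

random.seed(0)  # reproducible toy run

def mutate(genome, rate=0.1):
    """Randomly substitute characters, like error-prone replication."""
    alphabet = "ACGT"
    return "".join(random.choice(alphabet) if random.random() < rate else c
                   for c in genome)

def fitness(genome, target):
    """The selection condition: how many positions match the target trait."""
    return sum(a == b for a, b in zip(genome, target))

def directed_evolution(target, pop_size=200, generations=40):
    """Mutate the whole population, keep the fittest fifth, repeat."""
    pop = ["A" * len(target)] * pop_size
    for _ in range(generations):
        pop = [mutate(g) for g in pop]
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        pop = pop[:pop_size // 5] * 5  # survivors repopulate
    return pop[0]

target = "GATTACAGATTACA"
best = directed_evolution(target)
print(best, fitness(best, target))
```

The point of the sketch is the loop itself: mutate, select survivors, repeat. There is no rational design step anywhere, yet the population converges on the selected trait - which is why this kind of capability, scaled up and automated, worries people as a choke point.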

Janvi 1:56:06

Yeah, I also wanted to note, on this prevention-detection-response framing, that unfortunately the most tractable things do, to some degree, end up being detection and response - even though prevention is something we would probably want to prioritise, it is maybe inherently, by its nature, harder to see outcomes in. And so I think a lot of the concrete technical projects have clustered around detection and response.

Luca 1:56:34

I’m wondering if there’s anything in particular that you’re really excited to see, or would want listeners to go check out or potentially think about getting involved with as well. And given that you both have more of a technical background in some of these things, I’m especially curious for those recommendations.

Janvi 1:56:53

One of the projects I’m interested in is related to some of my own work within detection. People have spoken a lot about the Swiss cheese model - you mentioned it - and this also applies in detection, when thinking about the different layers of surveillance: clinical, sentinel, and environmental. I’m focusing on trying to characterise environmental surveillance deeply, particularly looking at wastewater. But there are a bunch of areas which we haven’t spent much time characterising. One of them is, ‘What would really good HVAC - heating, ventilation, and air conditioning - surveillance look like?’ That breaks down into: ‘What kinds of detection or collection techniques would we need to get biological information out of that sort of system?’, ‘What is a good pipeline for assessing it?’, and then also modelling, ‘How successful are we likely to be against a range of different pathogens if we had really robust HVAC surveillance?’ Another one: we talked a little bit about sentinel surveillance - sentinel, person-level surveillance, so sampling a person rather than sampling the environment, in places like airports and BSL-4 labs. But again, I don’t think we’ve seen much modelling or effort in this area to see how successful it would be under a range of threat portfolios. Understanding the success of these different types of detection would give us a better idea of the kind of final early warning system we would need in order to patch all the holes in the cheese. So those are some examples of detection work that I’m excited about.
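
The Swiss cheese intuition Janvi describes has a simple quantitative core: if surveillance layers fail roughly independently, an outbreak slips through only when it passes through a hole in every layer. A toy sketch - the per-layer probabilities are invented purely for illustration, not real estimates:

```python
def detection_probability(layer_probs):
    """Probability that at least one independent surveillance layer
    catches an outbreak, given each layer's standalone detection
    probability. In the Swiss cheese model, a miss requires the
    holes to line up in every layer at once."""
    p_miss = 1.0
    for p in layer_probs:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# Illustrative (made-up) standalone detection probabilities per layer:
layers = {"clinical": 0.5, "sentinel": 0.3, "environmental": 0.4}
print(round(detection_probability(layers.values()), 3))  # → 0.79
```

Real layers are of course correlated and pathogen-dependent - which is exactly why the modelling work Janvi describes is needed - but this is the sense in which several individually mediocre layers can add up to a strong early warning system.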

Luca 1:58:46

Yeah, awesome. Tessa, what about you?

Tessa 1:58:48

I really do think there’s a lot of work to be done. We’re focusing on technical work, but I’m going to sneak in a few pitches for non-technical work, because we’ve been talking so much about prevention. One thing we’ve seen in the COVID-19 pandemic, and in general in the world, is that a lot of whether our systems are able to respond to an emerging pandemic comes down to regulatory and government infrastructure. I feel really excited about people working on, for example, creating new rules for emergency authorisation, or for the allowability of human challenge trials, or other things like that - where, when we are in a pandemic, we can shift our regulation onto more of an emergency footing, because the costs of inaction are much higher than in a non-outbreak scenario. So if you’re more of a lawyer-type person, I’m going to throw in my pitch: this seems like something really important to work on. For concrete and technical biosecurity project ideas, a lot of people have written up really great lists. I recently wrote an EA Forum post which was just my list of all of these lists, which I would encourage you to check out - hopefully you walk away from it feeling inspired. One thing that is nice about being in biosecurity is that there are a bunch of really concrete projects to work on. I know a lot of people who are really concerned about the risks of advanced artificial intelligence, and who are having kind of a difficult time because they’re not exactly sure which interventions are tractable and how to actually reduce the risk. In biosecurity, we have this advantage that there are a bunch of technical projects that seem quite robustly good. Developing platform vaccines. Figuring out responsible access to genetic sequences - what if all journals had an API that manages access to this digital information in a responsible way? 
What if we had needle-free vaccine delivery? What if we developed DNA vaccines that are stable without a cold chain? I could go on for a long time, because there truly are about 40 or 50 ideas on this list of lists - there’s really quite a lot to do.

Closing questions

Luca 2:01:07

Yeah, we’ll definitely link the list of lists in our write-up. This is also a chance to flag that, Janvi, you’ve made a reading list for people who might want to get up to speed on biosecurity and GCBRs more broadly. Can you quickly plug that?

Janvi 2:01:25

We’re hoping to actually have it in the format of a fellowship, or a workshop, run by the Cambridge Existential Risk Initiative. The aim is to create a space for people who are interested in biosecurity but haven’t had the opportunity to speak to many people who’ve spent time thinking about it, or haven’t had access to the thoughts and memes that float around the community but aren’t written down very much. I’m hoping it can substitute for that sort of environment: you do a bunch of reading that I think is really important - inspired by Tessa’s reading list, Chris Bakerlee’s reading list, and Greg Lewis’s reading list, as well as input from a bunch of other people - and the eventual goal is that you have space to do these readings as well as discuss your cruxes: what makes sense to you and what doesn’t.

Career advice

Luca 2:02:24

Awesome. The last question I want to ask before we start wrapping things up: you’re both - if I may describe you as such - early-to-mid career people in a field that seems to be rapidly changing, where there’s so much stuff to do. I’m curious how you’re thinking about and planning your own careers, and how you think about having an impact and making trade-offs - like specialising in more technical spaces, or in certain aspects of this - when the field just seems to be changing all the time.

Tessa 2:02:56

I feel stressed out about this question in my own life, and I suppose one way I reduce my stress levels in thinking about and planning my own impact is by viewing a lot of things as experiments. This is definitely how I viewed starting to work at iGEM. For context, before I worked at iGEM, I was an automation engineer at a synthetic biology startup. I was working in industry; I was in the lab, not every day, but pretty regularly; I had much more of a sense of the pulse of what it’s like to do cutting-edge synthetic biology work. But I was not spending much of my time working on biosecurity - all of my biosecurity impact was coming through volunteering, and I was feeling like that was not a good impact trade-off. So I went into this much more risk assessment-y, community building-y, meta-educational role at iGEM. And I think I was only able to make that decision because I told myself it was an experiment I was running, to see if I’m happy doing non-technical work and to see if I’m any good at this whole prevention-focused biosecurity stuff. One reframing that I have found very useful when other people have offered it to me is letting yourself think about your own impact on a career scale as well. I’ve gotten better at thinking about my impact less on a day-to-day basis - I used to really, really stress myself out, being like, ‘Wow, I had an unproductive day today, so I guess I’m a terrible person.’ Now I’m able to zoom out a little more and go, ‘Okay, it’s all about prioritising on the scale of weeks to months.’ It’s less about, ‘Did you manage to squeeze the nth hour of productivity out of this specific day?’ You can have weekends that are relaxing - that sort of mindset. 
And recently I’ve been oriented more towards thinking that my greatest impact in my career is probably not what I’m doing right now, but what I might be doing in five or ten years - so what are the kinds of aptitudes I want to build up for that? For example, I’ve been thinking that maybe in the future I’d want to be founding or running an organisation, which means I need to pick up more skills in organisational management and people management. Those are things I’m getting a little bit of in my current job, but I’ve been orienting my tasks towards them more than I had been previously. So: the field is rapidly changing, but start thinking about what aptitudes you might want to have, and let yourself chill out about your work on the scale of days - think about your priorities over the scale of months for projects and years for your career. That’s my concrete advice.

Janvi 2:05:42

I think that advice gave me some internal peace. Yeah - what Tessa said about running experiments and trials on the work that you enjoy, that you think you’re well suited for, and that is well aligned with your moral compass: that seems important. Give yourself space to do those trials before really jumping into jobs that are big commitments, because sometimes it can be hard to step away from those, especially if you haven’t framed them as trials. One of the things I found really useful when I first got into biosecurity was having a lot of calls with people, seeing what they think about things, and being really unafraid to ask them questions. To be clear, I wasn’t unafraid - I was very afraid. But I found that I got the most use out of the calls when I internalised that the best use of both of our time is just to be honest about the things I’m uncertain about. I would really encourage people to do this: try to have calls with people in biosecurity, or whatever field or subfield they’re interested in, get a good sense of the things they’re unsure about before the call, and address those. Often the most useful bumps in my career have come through conversations that guided me towards opportunities that seemed really exciting, which I was then able to apply for. To be candid, this is especially true in biosecurity, because we don’t have what people who are more interested in AI safety have - the Alignment Forum, or lots of posts on the EA Forum about biosecurity. Part of that has been out of an awareness of info hazards. But it also means there’s not a very good substitute for speaking to people if you want to learn a lot about biosecurity. The fellowship I mentioned earlier is trying to fill that gap a little bit. 
And I think we’ll hopefully get towards that. But one of the best ways to get a sense of what you care about in biosecurity is to talk to a bunch of people about it; maybe do the readings that are online on Tessa’s list of lists; try to join reading groups or fellowships in these areas. There are also schemes like CHERI and CERI, which run programmes where you can delve into research projects - pretty cool, cheap tests to see if biosecurity research in particular would be exciting for you. Having spoken to some of the fellows from this year, some of their research is incredibly cool: even though it’s been a really short time span, they’ve been able to explore a lot. This is definitely something I wish I’d had when I first joined biosecurity, and I think it really is a cool opportunity.

Tessa 2:08:59

Yeah, building off what you just said, Janvi: there’s a new service in beta right now to set people up with career chats with biosecurity professionals, which you can sign up for - I’m sure we can put a link in the show notes. Hopefully that trial is still going when this podcast gets released. But even if it’s not, I will say people are pretty available and pretty excited to encourage new folks in the field, especially if you seem the right mixture of enthusiastic and prudent. I think you’ll find that even if you just cold message people on LinkedIn, they’re relatively available to talk to you, especially if you have some concrete decision or prioritisation that you’re asking for advice about. My advice for cold emailing and cold LinkedIn messaging people - which I do think is a totally good thing to do - is that, speaking as a person who receives these messages, you’ll have more success if you give a one-sentence introduction to yourself, say why you’re messaging this person specifically, and then have some ask or thing you want advice on, which can be quite vague. A totally good message I could receive from someone is, ‘Hi, my name is’ - pick a random name - ‘Rashad, and I’m an undergrad at this university in this area, and I read this post and now think biosecurity is important. I’m trying to decide if I should invest more in my computational skills or try to transition towards having wet lab skills, and I’m not sure. Would you mind having a call with me?’ And I’ll probably answer yes. Honestly, I say yes to most people who message me - I probably shouldn’t broadcast this on the public Internet, but whatever. Basically, the only times I don’t are when it doesn’t seem like the person is at a point where my advice will make a difference to them. 
If they just want to talk to someone because they don’t want to explore the field on their own, then for now I’d probably point them towards this chat-with-a-biosecurity-professional service, for example.

Luca 2:11:13

Okay, I think we need to start wrapping this interview up. To close things off: Janvi, what are three books, bits of media, audio recommendations, or anything else that you would recommend to listeners who want to find out more about what we talked about here?

Janvi 2:11:26

One of the things we talked about a little is technical interventions. One report that I’m sure has been mentioned before on this podcast is the Apollo Report, but I do think it’s a very good outline of some of the technologies we’re most excited about pushing forward, and that might be the most feasible to push forward in the next few years. If you’re more interested in the early warning, detection side of things, which is more my area at the moment, there’s a really great report by the Council on Strategic Risks called Toward a Global Pathogen Early Warning System that I would recommend reading. Thinking a little more about bioweapons, I also really enjoyed reading the book Biohazard by Ken Alibek. It’s actually kind of a biography: it describes the Soviet Union’s bioweapons programme, and it’s written by Ken Alibek, who was part of running that programme. I found it a really engaging, if terrifying, read. And then on synthetic biology and biology more broadly, there’s a series of lectures by iBiology - it’s a YouTube channel. When I’ve not really understood a concept in synthetic biology, I’ve often looked for it on iBiology and found a really useful video. I think Kevin Esvelt actually has some videos there on daisy chains. And one more sneaky recommendation for synbio: the Twitter account of Jake Wintermute, which has quite a few synbio memes.

Luca 2:13:04

Great, that’s an awesome list of things that we’ll include in the write up. Tessa, same question to you.

Tessa 2:13:12

So, if you’re a person who’s thought about biosecurity already and has already bought into some of the normative effective altruist assumptions around catastrophic bio risks, I would really recommend the book Biosecurity Dilemmas. It’s partly the kind of book that I’m biased towards liking because it’s all about, ‘Oh, wow, these things are in tension with each other and everything’s complicated and difficult.’ And I think I have an intellectual bias towards liking books like that. There’s a Ben Goldacre book called I Think You’ll Find It’s a Bit More Complicated Than That. And I saw that title on a shelf. And I was like, ‘Yes, I will be reading that.’ So Biosecurity Dilemmas is very in that mode. But I think if you want to get more conversant with a bunch of really concrete examples from the past - I think it came out in 2014, but it covers the 15 years before that - of interesting problems in biosecurity and health security, and also more of a play-by-play of how some of these debates have played out in the health security community, I feel like that book is a really great grounding. If you want to be enthusiastic about biology, I cannot strongly enough recommend several email newsletters about synthetic biology. I am an email newsletter addict. These also exist on Substack. Your mileage may vary, but I really like The Century of Biology. I think I already quoted something from that. This is kind of cheating, but I’m going to cheat and recommend a few others. One is called Codon Magazine. Another comes from Ginkgo Bioworks, which is a really interesting and creative synthetic biology company in the US: they have their own magazine called Grow Magazine that comes out once a year, and also has pretty interesting online content.
And there’s also a group out of San Francisco called NeoLife, and they have a newsletter where they tend to be a little bit more - I sometimes describe myself as a biotechno-optimist with a side of, ‘Are we the baddies?’ - and they have less of that side of, ‘Are we the baddies?’, but they publish really good excerpts of wacky, cool biology papers in their newsletter. So I’d recommend it. That’s if I’m allowed to count the newsletters as a single recommendation, which is dubious, but I’m going to try anyway. The last thing I would recommend, as a really foundation-shaking thing in my own thinking about GCBRs, is this report from the Centre for a New American Security on Aum Shinrikyo, where the author of the report, Richard Danzig, I think is his name, actually went and interviewed members of this Japanese cult that had carried out the Tokyo sarin gas attack - so a big non-state actor chemical weapons attack - and they’d also had a bioweapons programme. And it’s a very narrative, story-based exploration of what the social environment is that gets people to do things like build bioweapons and spray them on the lungs of their enemies, which is the thing that they did. And I think it really shifted my own mindset, where one of my big cruxes around worrying about GCBRs was that surely someone who was competent enough to engineer something dangerous would not be stupid enough to release something that could kill a huge number of people in an untargeted way. And then I read the story of this really toxic social environment that was also a homicidal death cult. And that really changed my beliefs. And I like how concrete the report is. It’s a really detailed narrative reportage of how this came to pass.

Luca 2:16:28

Great. Well, in that spirit then, the last question is: where can people find you and what you’re working on, online?

Tessa 2:16:35

I will own - I’m trying not to do this reluctantly, but it’s semi-reluctantly - that I am pretty active on Twitter. My handle is @tessafyi. I also post semi-regularly on the Effective Altruism forum. So that reading list and the list of lists of concrete biosecurity projects are both forum posts. And you can also reach out to me at hello@tessa.fyi, which is my email associated with my personal website. Happy to answer your questions there too.

Luca 2:17:06

Great. And what about you, Janvi? Where can people find you online?

Janvi 2:17:09

Basically the only place you can find me is on Twitter. I’m @jn_ahuja. Some of the work that the NAO, the Nucleic Acid Observatory, is doing is available on their website, which was launched relatively recently and is what I’m most connected to. I’m also on some other academic websites. But I think Twitter is the best place to find me, and LinkedIn, as Tessa mentioned, in case you want to drop me a message.


Luca 2:17:38

Tessa and Janvi, thanks so much for coming onto the show. That was Tessa Alexanian and Janvi Ahuja on synthetic biology. As always, if you want to learn more, you can read the write up at hearthisidea.com/episodes/alexanian-ahuja. There you’ll find links to all the papers and books referenced throughout the interview, plus a whole lot more. We really want to make this show better. So if you have any comments at all, please do email us at feedback [at] hearthisidea [dot] com, or click on the website and fill out our feedback form - we’ll also give you a free book for your trouble. If you want to support the show, then the best thing that you can do is leave us a review on whatever platform you’re listening on, or tweet about us, both of which really help others find out about the show. And if you want to help us pay for hosting these episodes online, then you can also leave us a tip by following the link in the description. A big thanks as always to our producer Jason Cotrebil for editing these episodes, and to Claudia Moorhouse for making these transcripts possible. Thanks very much to you for listening.