Max Smeets on Barriers to Cyberweapons
Contents
About this episode
Dr. Max Smeets is a Senior Researcher at ETH Zurich’s Center for Security Studies, and Co-Director of Virtual Routes.
While we often hear warnings about catastrophic cyberattacks, it’s surprisingly difficult to understand what such attacks would actually require. And when you dig into historical cyber operations, they often look messier and less successful than media coverage might suggest.
Smeets’s two recent books speak to this question: No Shortcuts: Why States Struggle to Develop a Military Cyber-Force and Ransom War: How Cyber Crime Became a Threat to National Security.
In this interview, we discuss:
- The different types of cyber operations that a nation state might launch, and how the international norms formed around which cyberattacks are tolerated or condemned;
- The challenges that even elite cyber forces face — like why Stuxnet remains the go-to example for a successful and sophisticated cyberattack after two decades, and what we can learn from the relative absence of major destruction from cyberattacks in the Russia-Ukraine conflict;
- How to assess the sophistication of cyber forces, and what capabilities future AI systems would need to develop to meaningfully expand or proliferate advanced cyber skills.
Resources
- No Shortcuts: Why States Struggle to Develop a Military Cyber-Force
- Ransom War: How Cyber Crime Became a Threat to National Security (+ blogpost)
- ‘A US history of not conducting cyber attacks’ (paper)
- ‘If it bleeps it leads? Media coverage on cyber conflict and misperception’ (paper)
- The magic of sophisticated cyber attacks (article)
- The Legend of Sophistication in Cyber Operations (report)
- Predatory Sparrow: cyber sabotage with a conscience? (article)
More general resources on cyberattacks:
- Darknet Diaries (podcast)
- Malicious Life (podcast)
- The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics
- How to build a cyber army (DEFCON talk)
- CFR Cyber Operations Tracker (website)
Transcript
Intro
Luca: Hi. You’re listening to Hear This Idea. In this episode, I speak with Dr. Max Smeets, who is a senior researcher at ETH Zurich’s Center for Security Studies and co-director of Virtual Routes, a platform that, among other things, provides cybersecurity fellowships in Europe and runs a media publication. While we often hear warnings about catastrophic cyberattacks, I found it surprisingly difficult to find a clear discussion of what such an attack would actually require. When you dig into many historical cyber operations, they often appear much messier and less successful than media coverage might suggest. So I was eager to speak with Max after I came across his two most recent books, which really hit on this topic: No Shortcuts: Why States Struggle to Develop a Military Cyber-Force and Ransom War: How Cyber Crime Became a Threat to National Security. In this interview, we discussed the different types of cyber operations that a nation-state might launch and how international norms formed around what kinds of attacks are and are not tolerated. We also touched on the challenges that even elite cyber forces face, such as why Stuxnet remains the go-to example even after two decades, and what the relative absence of major cyber destruction in the Russia-Ukraine conflict might tell us. Lastly, we examined how to assess the sophistication of cyber forces and what capabilities future AI systems would need to develop to meaningfully expand or proliferate advanced cyber skills. Much of my own interest in this topic stems from the question of what impact future AI systems might have on cybersecurity risks, especially given the huge advances we’ve seen in AI coding capabilities recently. However, Max and I deliberately spent most of our conversation not discussing AI per se, but instead trying to unpack the recent history of cyber operations.
I think that understanding today’s constraints and learning from subject matter experts provides crucial context for evaluating what truly transformative change would look like. So, I hope you find this approach as informative as I did. And without further ado, here’s the episode.
Max Smeets: Well, first of all, thanks so much for having me on this podcast. This is really exciting. I’ve been listening to quite a few of your podcasts, so to now be on it is quite nice. Right now, for the last few hours, I’ve been working on a blog post related to a specific hacking group called the Belarusian Cyber Partisans. The reason I want to write this blog post is that this group has been quite interesting in this space and received a lot of attention following the further invasion of Ukraine, as it was conducting significant hacking operations, including against the railway services in Belarus, where it disrupted some of the Russian military supplies going into Ukraine. However, today, not many people are writing about them, and there is this open question of whether they are still very active, or whether people have started to ignore them in the same way that many have moved on from the Ukrainian war itself. It raises the question of to what degree they are simply unable, as a small group, to maintain the momentum they had several years ago. So that’s the problem or puzzle I dealt with in the few hours prior to this podcast.
Luca: Yeah, that sounds really interesting. I think it speaks to a theme that we’ll really hit on here: there are so many actors, case studies, and stories to delve into. I’m really impressed by any kind of academic in this space trying to stay on top of it all. It just seems like you have so much going on.
Max Smeets: Absolutely. Specifically, this group highlights the types of actors out there. We sometimes talk about non-state actors and state actors. This group might be the first that can be legitimately termed a digital resistance movement. It’s a term I’ve used myself. They initially called themselves a hacktivist group.
Types of cyber operation
Max Smeets: But hacktivism is often associated with groups that are quite decentralized. This group is much more centralized, with explicit goals, namely the overthrowing of the Lukashenko regime. They’re also trying to apply international legal principles to justify their status and the way in which they’re operating. So, they are perhaps a unique case as the first digital resistance movement.
Luca: Yeah, interesting. And you said they’re based out of Belarus, right?
So they are people based in opposition to the government and in opposition to Russia. Is that correct?
Max Smeets: That’s a tough question. Well, you know, we can expect that many of them might still be based in Belarus, but certainly, others might be out of the country. This includes the spokesperson, Yuliana Shemetovets, who is actually based in New York, and perhaps other operators are based in other countries as well. And then there is an open question about to what degree an operator operating from, let’s say, The Netherlands, Denmark, Spain, or where you are based, London, or San Francisco, should be allowed to participate in these types of activities, which might actually impact critical infrastructure in Belarus or maybe even other countries, and in that way participate in certain international legal conflicts or other related issues.
What makes cyberattacks difficult?
Luca: Let’s maybe begin with your book, No Shortcuts. I found this book really fantastic, and I think it also felt very counterintuitive to me as somebody who doesn’t have much cyber expertise. I think when I come across cyberattacks, it’s almost always framed as “look what these hackers are capable of,” look at this news event and the damages that have just been created. But I think your book really feels like it takes a different frame, which is, why aren’t we seeing more of this, or why is it, in fact, hard for many cyber military operations to grow? So maybe to lay out the land here a little bit, I’m curious what the types of operations are that these often government-affiliated or government-run cyber operations do and what the incentives are.
Max Smeets: We have seen different types of cyber operations, and we can categorize them most simply. First, there are operations in which a government actor or a non-state actor seeks to achieve an effect. I call these cyber effect operations, or some would simply call them cyberattacks. Those operations can range from simple DDoS attacks, where you seek to take a website or other network offline, or defacements, to wiping attacks, where you seek to wipe a database off a server or other system of a company, to critical infrastructure attacks, where you particularly seek to manipulate the systems of critical infrastructure organizations and can often cause more significant damage or harm. These effect operations are mostly the realm of military entities; particularly in the West, intelligence services are generally not allowed to conduct them. Next to these effect operations, we have, of course, the espionage operations, where this hacking component has existed for a really long time, going back at least to the nineteen eighties. Depending on how you categorize it, you could go back even earlier. So that’s the simplest way to distinguish: operations that are most typically conducted by militaries, and operations that are conducted by the intelligence services. They sometimes blend over, and that’s where it gets hard. You could see an intelligence service that seeks to, you know, just steal certain data. After they have done that, they might actually try to wipe all their traces, and in that sense achieve some type of effect on the system. Right? Yeah. And equally, you might have a military operation that seeks to enable certain other military activities: first, maybe, stealing certain data and information that they can use, and then later on seeking to achieve a follow-on effect.
So it’s not always that we can perfectly distinguish between these two very broad categories.
Luca: I definitely appreciate the point that it seems really hard to define anything here. And, you know, this is coming from someone who tried to write interview questions defining different terms. But one thing that you pointed out, if this sounds correct, is the idea that if the cyber operation goes well, does the victim, the person being attacked, realize that something went on or not? Like, if a DDoS attack goes well and the attacker succeeds in their goal, then the victim will notice. Versus if you’re just stealing information, then if all goes well, they don’t even know that something has been stolen.
Max Smeets: Yeah. Typically, you know, for intel operations, you want to get in and you want to get out, and better yet, you don’t want to be noticed. Right? Yeah.
For military operations, it’s slightly different. By nature, you often seek to achieve an effect. And as you’re achieving an effect (let’s say you wipe lots of material on different workstations, as we’ve seen in the case of Saudi Aramco, an oil company in Saudi Arabia, where an Iranian government-linked group wiped 20,000 terminals), clearly you can expect to be detected. We’ve also seen some ways in which government actors try to be cunning and pretend that the effect might simply be an accident. The most famous case is Stuxnet, the operation allegedly led by Israel and the United States against Iran’s nuclear centrifuges. At that time, there were actually two different operations: one impacting the closing of valves and the other affecting the rotation speed of the centrifuges, both aiming at the same effect of destroying them. They did this in a cunning way: they didn’t just destroy them all at once but did so in a seemingly random manner. Many of the Iranian scientists thought this was simply a mistake, and some were even fired because they were deemed incapable of running the nuclear program.
Luca: Yeah. Part of the incentive there is that if they don’t realize this is due to a cyberattack, then the virus can keep operating and keep delaying and sabotaging that program.
Max Smeets: Exactly. Once the genie is out of the bottle, it’s harder to put it back. Also, once something like this has happened—when it occurred over a decade ago—people weren’t as aware of the potential of cyber operations to do something like this. Now that we’ve seen a couple of these cases, the next time you try to do this to some centrifuges in Iran, they will immediately think, “Hey, this might be a cyberattack.” In that sense, you lose that element of deception and potential surprise.
Luca: Yeah. Importantly, this is an example of the previous dichotomy breaking down, where an effect operation wasn’t trying to get noticed. If it does succeed, then, in this case, the Iranians wouldn’t notice this.
Max Smeets: They definitely wanted to have an effect, and perhaps they didn’t want to be noticed in a way that would make them realize immediately that it was these countries behind it.
Hacking IT vs. OT systems
Luca: One other bit of taxonomy I want to quickly address to help lay out the picture is that you mentioned critical infrastructure attacks as some of the cyberattacks that have the potential for especially large devastation. It’s really hard to pin down what critical infrastructure is and what an attack on that entails. One useful distinction that came up in your book is the difference between hacking an IT system and hacking an OT or SCADA [Supervisory Control and Data Acquisition] system. Could you elaborate on that?
Max Smeets: Yes. When you think about critical infrastructure, many run on SCADA systems. You’re then operating in an environment where you need to understand not just some code but the operating environment, such as how an electricity grid works or how a water dam structure works. The skills required to disrupt that type of infrastructure involve not just being a good hacker; you might need engineers or other specialists in your team to do a good job. Going back to the most famous case, Stuxnet, we saw the US and Israel first mapping out the infrastructure of the nuclear centrifuges in Natanz and then actually rebuilding that infrastructure, using centrifuges from Gaddafi to test and retest the codes. From that example alone, you can see that they needed nuclear scientists and other experts on the team to ensure they were rebuilding it correctly to create a true testing environment for the malware they sought to deploy in Iran.
Luca: So the point here really is that running these very damaging physical infrastructure attacks requires not just good computer and cyber skills, but also, as you said, an understanding of the engineering and how it works.
Max Smeets: Yeah. And we’ve seen some Russian operators. They go by the name of Sandworm. At least, that’s a name given by a cybersecurity company, and it’s a GRU unit that has been particularly known for conducting these cyber effect operations. You’ve seen them targeting the power grid in Ukraine, and you’ve also seen them developing their skills, tactics, techniques, and procedures over time as they gain a better understanding of operating in Ukrainian power grids.
Luca: It’s not just understanding how power grids work, but also how that particular system and those specific devices work.
Max Smeets: Exactly.
Luca: I think it’s also important to note that when people talk about critical infrastructure attacks, they don’t always mean these SCADA or even physical engineering systems. For example, with the Colonial Pipeline, when I heard about that attack, I imagined a physical pipeline kind of going kaboom or something. But in fact, they don’t always have to run through the OT or the SCADA system, if you want to add more color to that.
Max Smeets: Absolutely. There are different gradations of targeting critical infrastructure, ranging from taking down the website of a critical infrastructure company and claiming to have hacked an oil company, to influencing the closing of pipelines in the case of an oil company. In the example of Colonial Pipeline, it seemed they influenced the billing system and later other infrastructure of the organization, but they never reached the operational environment. However, Colonial Pipeline decided to preventively shut down its operations, partly because they couldn’t bill anyone anymore, to ensure that the malware wouldn’t spread further. Indeed, you can have the same headline, that a critical infrastructure company or organization was targeted, but it could mean a whole bunch of different things.
Luca: Interestingly, I think this hints at some of these mind games regarding how much the defender thinks the attacker is capable of. As you pointed out in the Colonial Pipeline case, hacking the invoices and the billing system doesn’t automatically require the company to shut down if it thinks it could recoup some of these payments after the mess has been cleared up. But because they thought, “Okay, maybe these attackers can escalate from our IT system to the OT system,” they had to, as a precaution, shut those systems down. If they were more confident that these attackers were unsophisticated and couldn’t do this, then maybe they wouldn’t have to take this extra step, which is obviously very costly.
Max Smeets: One thing to clarify regarding the Colonial Pipeline hack is that it was a ransomware group. So it was a criminal group with a very different intent compared to what you might see from state actors.
Luca: Absolutely.
International norms in cyber
Luca: Cool. Thanks for helping lay that out. To summarize, we have a difference between these effects, such as manipulating a system like Stuxnet and DDoS attacks, and then we have espionage attacks. Within these effect attacks, when it comes to critical infrastructure, we’ve discussed some distinctions between IT and OT systems. Moving on, one theme in your book is that there could be a significant gap between what some actors are capable of and what they’re allowed to do, given that governments and state-affiliated groups have to adhere to certain norms. I’m curious if you can elaborate on that and discuss where these norms came from regarding what governments are allowed to do in cyberspace and what they’re not.
Max Smeets: There are two questions here. Let me start with the first one.
Like, this seeming disconnect between what they are capable of and what they are allowed to do really goes to the heart of my book and how I started it. It began with the observation that we saw many different governments starting cyber commands or similar entities with an official mandate to conduct cyber effect operations, developing from around 2010. However, while we observed this institutional development and the militarization of the space, we still saw relatively few countries conducting these operations.
So, the real empirical puzzle for me is what explains that difference. Is it that there is no opportunity for them to do this, or is it the rules? One thing I noticed is that these cyber commands are established similarly to other military structures, meaning that you require, for instance, a parliamentary mandate in Germany or the Netherlands to achieve these effects and operate as a cyber command; you can’t operate in peacetime. If that is the case, it can lead to further issues down the road in developing the right capacity to conduct these operations in the first place. If you’re not allowed to conduct operations in peacetime, not even allowed to do espionage, how then are you able to recruit, retain, and train the right people to work in your organization when the need arises?
There is a deeper question here around why these organizations are set up, when they’re allowed to operate, and how this mismatch impacts their capacity to operate, particularly in their ability to retain, train, and recruit the right people for their activities.
To your second question, on when we started to think about cyber norms in this space: that’s a question I could answer in various ways. One approach is to point to discussions that have happened at the UN level. For instance, you could highlight Russia in the late nineties discussing the need for dialogue around the responsible use of information technologies, which they believed also touched on domestic terrorism, and the focus on developing norms around that.
From the early 2000s, we saw the development of discussions at the UN, first in the General Assembly and then in the Group of Governmental Experts, where conversations about cyber norms and whether international law applies to cyberspace began to emerge. This UN discussion started as early as the late nineties and early 2000s, considering cyber as a potentially new domain and how it would impact existing rules and principles.
Another way to think about this is that cyber isn’t entirely new if we consider that it comes from the intelligence realm. The Five Eyes community, the English-speaking intel community that includes the UK, Canada, the US, New Zealand, and Australia, has been conducting various intelligence activities for a long time. Cyber is not necessarily a new element in their intelligence toolbox; at some point, they just started to hack more and more into certain systems to gain unauthorized access. The rules that have applied in the intel realm have carried over into what we now call the cyber realm.
Thus, the norm-building around what is off-limits has developed over time, but this process doesn’t stem from direct UN regulation. Instead, it arises from how these activities have been practiced over time, with sometimes agreed-upon rules developing between different countries regarding what is off-limits.
There could be a third aspect, which involves the more explicit signaling and discussions that have been going on bilaterally. We have seen this in various instances, most notably when China ramped up its IP theft. We saw the US start to more explicitly call them out, and we’ve observed similar activity from China calling out the US as well.
We’ve seen the US repeatedly calling out Russian state-led actors, particularly following the 2018 elections regarding disinformation. We’ve also observed a lot of signaling between states.
Luca: How much of these norms, maybe through bilateral diplomacy, is settled before events, versus how much is unplanned and in response to them? To take a few specific flashpoints to briefly hit on: you mentioned Stuxnet before. How much of that was agreed upon? Did it fall within the realm of espionage and the intelligence communities, doing this as just a different kind of intelligence operation? Or was it more like, “This is the gray zone; we don’t really know how countries are going to respond to this. There aren’t any norms here.” Did the US and maybe Israel just do it, and then we see how it plays out and figure out what the norms are afterward? Is it more about responding to events rather than planning them out through the kinds of discussions you mentioned going back to the nineties?
Max Smeets: What we know from Stuxnet, based on the writings of journalists like Kim Zetter, David Sanger, and others, is that the program was carried over from the Bush administration to the Obama administration. Obama, when he inherited the program, was quite concerned about the international legal implications of Stuxnet. At the same time, what appealed to him was that it was a sort of third option—an alternative to continuing diplomatic conversations with Iran, and not as extreme as bombing the nuclear sites in Iran, which Israel had on the table. Stuxnet felt like the most peaceful option. Until 2018, an operation like this required presidential approval, and there were definitely quite a few lawyers involved in the process. Stuxnet sought to target particular centrifuges in a facility that had a specific rotation speed. You could say they were aware they were operating in a legally contested space. Many people still argue that Stuxnet was illegal and violated international law. They would add that even if it sought to achieve a precise effect, Stuxnet as malware spread beyond the facility and infected systems across the world, including in the United States. Even though the malware didn’t trigger on those systems, cleaning them up came at a significant cost. Before Stuxnet, the majority of countries and politicians weren’t even aware that something like this was possible. It was definitely a landmark case, prompting a new perspective on cyber operations. And it came at a time when we soon had the Snowden revelations, which laid bare the incredible hacking capacity that existed, particularly in the United States and across the Five Eyes. That was an eye-opener for many, for various reasons. The widespread targeting and surveillance across the world, combined with the targeting of allies, even if these weren’t effect operations, led people to realize we had been living in a bubble of ignorance. A new era of cyber operations had emerged.
If this is what they can do now, can you imagine what these countries will be capable of in 2025?
Luca: I actually think this is a really underappreciated effect, particularly regarding Stuxnet. It’s almost twenty years old now, and it still seems to be the overwhelming example we cite. It raises a real question of what countries are capable of today.
And I also want to flag that it could also be the other way; back then, we didn’t know that these critical infrastructure and sabotage attacks were possible. Now people are aware, and you could hope that the defense can catch up. But I’m aware that this is an ongoing contested question about where things go. It’s still very surprising to me that it’s almost 20 years old. It makes me really question what the current frontier is.
Max Smeets: Absolutely. Since then, we have seen so many incredible developments in this space, but it does make you wonder, combined with the Snowden revelations, what are we not seeing or, indeed, what are they capable of? We sometimes go back quite far without realizing. I take your point there; it’s a really good observation.
NotPetya
Luca: Very briefly on the norms question that I want to ask about as well—this is just to look at a different case study. In particular, the NotPetya attacks. Stuxnet, in some way, is very constrained in the sense that it went after one centrifuge in one country, even though, as you mentioned, in order to spread, it affected many more systems. But that was its intent. With NotPetya, this was Russian malware that, as I understand, was meant to target Ukraine but in a much broader sense and ended up spreading to many more countries in a way that feels clearly more destructive than espionage or sabotage. I’m curious about the decision involved in that and how this affected norms, and whether lawyers were in the room, and so on. I’m curious for your thoughts on that.
Max Smeets: This goes back to the group I previously mentioned, called Sandworm, which has a track record of doing really disruptive and destructive operations. Some of these operations are conducted in a very cunning manner, but they also seem to operate without legal officials looking over their shoulders. Just to put some context here, we saw NotPetya take place after another highly disruptive global incident called WannaCry. WannaCry was a type of ransomware conducted by North Korean state hackers, impacting the systems of organizations worldwide, including causing great disruptions for the NHS healthcare system in the UK. However, it ultimately made very little money due to specific technical reasons and poor development on the North Korean side. Then NotPetya followed, and it was what we call a wiper in disguise. It pretended initially to be ransomware, but it didn’t have a decryption mechanism available to anyone affected. Effectively, this was a wiper, and it was an extremely indiscriminate wiper. As you mentioned, it started with an accountancy firm in Ukraine but then spread globally with significant consequences. We have also seen court cases being fought between insurance companies and other companies, such as shipping companies, about whether this was an act of cyber war and to what degree this would require the insurance firms to pay out or not. It shows the scale of the billions of dollars in damages being done. I’m not sure if they really expected it would have these consequences. In fact, it also had consequences against Russian entities. But undoubtedly, it showcases that this is an actor that is not always as careful in its operational development as one would like to see in this space.
Luca: As I understand, this attack was in 2017, so before the invasion of Ukraine. As you pointed out, the goal was still to target Ukrainian tax accounting software in order to spread to as many Ukrainian companies as possible.
Max Smeets: Yes, but they must have realized that this would spread way further than that. They supercharged it with capabilities that were actually leaked from US intelligence services. In this sense, cyberspace—though a bit cliché—has no borders, and they knew it wouldn’t just stay within a given country.
Luca: I guess I’m curious.
So how was that group you mentioned, Sandworm, more comfortable running this legally and internationally riskier attack? Was it because there was more of a gap between them and the Russian military? Was the Russian military or the Russian government aware of this and gave consent? I’m curious, especially as things get blurrier, exactly who it was, who gave the permission, and how governments like Russia’s can allow these more destructive attacks to occur.
Max Smeets: I’m not a Russia expert, and I don’t know exactly how the authorization structure within Russia works. What is clear in this case is that, as we’ve seen also outside of cyber operations, the risk appetite of the Russian government to conduct operations and activities that violate international law is a lot higher than in some countries in the West. This fits not just a pattern of Russia conducting disruptive cyber operations, but a broader pattern of violating international norms. In fact, we saw a Microsoft report come out about a different GRU unit, the one responsible for the Skripal poisoning and other sabotage-related activities, and that unit also has its own hacking team. We have limited visibility into all the operations they have done, but it gives you a sense of how these activities are viewed. I’m not sure about the exact sign-off procedures when these types of activities are conducted, and I haven’t seen a lot of in-depth writing on that. It’s, of course, a hard topic to research.
Cyber in Russia-Ukraine
Luca: Stepping back, it feels that some of the hypotheses you engage with in your book suggest that governments may be able to do much more, but are constrained by these norms in what they actually do. Some governments may be more risk-seeking than others, but overall, the norms still act as a barrier. For example, Stuxnet targeted one centrifuge facility in Iran. Maybe the US could, in theory, do much more or much worse, but it doesn’t, because it risks violating these laws and potentially starting wars. The key case study, which I think we mentioned at the top, is how to update following the Ukraine war. In this case, Russia is clearly violating a bunch of international laws in targeting Ukraine. So what is keeping them from unleashing their full cyber force on Ukraine? As you said, they have tried to target grids, and there has been a lot of activity. However, my understanding is that people were somewhat surprised at how little of a role destructive cyber activity has played in that conflict. I’m curious to spend some time thinking about why that is, and whether there is an interpretation here that even governments as powerful as Russia’s are somewhat constrained in what they’re able to do.
Max Smeets: It’s a fascinating question that has indeed come up in the expert community. The moment the further invasion took place, people started writing about what we have and have not seen, and why that is the case. One thing to note is the variation in Russian cyber activity throughout the further invasion. At the start, they were targeting very different organizations and using very different operational tools than they are using today. Initially, they were using a lot of wipers—quite simple, almost throwaway tools that would wipe data off a server—which they could discard and then replace with something else, quick and dirty, for a specific purpose. So we saw a proliferation of wipers in a way we had never seen before, targeting a variety of different entities as well.
Now, what we have seen in the later stages of the war is perhaps less about conducting effect operations and more about operations that might enable follow-on military activity. There are reports of hacking CCTV cameras across the Ukrainian border. Why? If you can access this CCTV footage, then perhaps your use of drones or other conventional capabilities becomes much more effective, because you gain a better understanding of certain deployments. Cyber has always been an enabling component for military activity in this war, but the way it has enabled other military activities has changed over time. I don’t know what people’s expectations were prior to the war. I never personally expected that cyber operations would conquer Ukraine, if it ever came to that. So maybe there was a bit of mismanagement of expectations beforehand. But I also don’t want to endorse the idea that we haven’t seen much. In fact, we have seen quite a bit of different types of activity, and it’s really up to us now to get a better understanding of the total set of activities in Ukraine. My sense is that the Ukrainian government itself was a lot keener to share information about these activities at the start of the war than it is now. At the beginning, there was a much stronger plea to showcase, “Hey, look what is happening.” There was a more concentrated international effort from companies like Microsoft to help and to showcase that they had prevented many types of attacks. There was almost a quid pro quo between the Ukrainian government and some of these entities to say, “Hey, we are willing to share and talk about this publicly.”
Luca: Yeah.
Max Smeets: You also get some marketing benefits from it. Today, the Ukrainian government isn’t sharing much anymore, and actually, big tech is not writing extensive reports about Ukraine in the way they did at the start. So the visibility over time has changed. One thing that is quite clear is that the trends have changed as well in how Russia operates, and we will see how this continues to evolve in the future as the war progresses.
Luca: That’s a really interesting point. Even in wartime, it may be less about inflicting the maximum amount of damage on the opponent and much more about strategically complementing military operations—and these operations can still end up being quite precise.
Max Smeets: Absolutely. We saw a case before the further invasion of Ukraine where a hacking unit from Russia gained access to an app developed by the Ukrainian military for their artillery. This Android-based app would help in setting the range of these artillery units. The Russians hacked into that. Now, you can imagine having access to such an artillery app will give you a much better understanding of where Ukrainian artillery is and subsequently how you can use some of your conventional forces to target them. So we shouldn’t separate it too much from what is happening in the conventional space. If anything, over time, you can expect Russia to coordinate these efforts in a way that wasn’t the case at the start of the invasion.
Luca: I still want to drill into whether there are maybe some case studies where we can start thinking more about limitations that some actors have in causing certain types of effects. I think you mentioned Sandworm before, and that Russia did actually try, following the further invasion, to take some of the Ukrainian power grid offline and failed to do so. I am curious if that points to a limitation that even governments, or at least subunits or some groups affiliated with the Russian government, are just not able to execute these kinds of OT attacks.
Max Smeets: Cyber operations fail all the time at different stages. You can fail repeatedly in gaining access to a certain network or system that you really want to access.
Luca: Yeah.
Max Smeets: Right? Now, there is this feeling that if you’re really persistent and competent, at some point, you might find a way in. But you can fail there.
Once you have access, you might get detected early depending on how that system is set up. Even if you don’t get detected, you might make a mistake in how you seek to wipe material off a server at a given point in time. You might make a mistake in how you set up your command and control infrastructure to siphon data out or deploy new tools. That infrastructure might be compromised, and as a result, people might gain visibility into what you are doing. There are so many steps to conducting cyber operations, particularly the more advanced ones, and at every single step of the way, an actor might fail. This is the case for Russia and for any other actor. What we do know about Russia is that they are willing to develop their tactics, techniques, and procedures in real time. I don’t want to say that Ukraine, as some have argued, is a testbed for Russia trying out what it might do somewhere else, but you can see that these groups have been operating in a real-world environment for a long time. They gain a sense of what can and cannot be done. That’s very different from some NATO members with a military cyber command where some units have never conducted cyber effect operations. The only way they can train is on a cyber range, which tries to mirror a real IT/OT infrastructure but always has limitations. If they are lucky, they may have had a secondment to an intel agency or, in some governments, even to a private-sector company to gain some experience, which again is not the same as doing this in real life. The one thing that Russian units definitely have is combat experience, which is quite valuable.
Luca: There’s importantly a big difference between what this war tells us about how good Russia is relative to other nations, versus, in an absolute sense, how good any given government is at any kind of operation. A wrong lesson would be to conclude that because Russia wasn’t able to succeed at all the cyber operations it wanted to conduct, it is therefore weaker than NATO cyber forces, which, as you mentioned, haven’t faced the same real-world test. But there may be a lesson to learn here again: these cyber operations are very hard, success is very uncertain, and even some very advanced and sophisticated groups today still cannot reliably pull off these kinds of attacks.
Max Smeets: Yeah. The way that I’ve described this in my book is that it’s not that every operation is hard or every operation is difficult. To conduct targeted operations at a specific point in time, achieving a specific effect, those are often quite hard to do. Untargeted operations, where you are not too worried about the effect and the exact timing of it, are actually quite easy to pull off. You can take one hacking course, and you can do it. The analogy I drew was a bit like a burglar or someone who needs to steal, let’s say, a handbag out of a car. If I tell you there are two variations: One is you just walk down the street on a quiet evening, it’s dark, and you see this car there. You have your hammer in your hand, and you’re like, “I can just smash this window and take the handbag out.” You can probably do that and run off with it. That’s very different from saying, “Please, at 2 PM, when this car is driving with this license plate, I want to have the handbag, and please smash the window.” Suddenly, you need a whole different infrastructure. You need to start thinking about, “Okay, where is this system? How do I get close to that system? When is this moving target?”
How can I make sure, especially if you want me to smash the window, that I have a hundred percent guarantee that the operation actually succeeds because it is combined with maybe some follow-on other activity that I’m doing?
Luca: Yeah.
Max Smeets: Two seemingly identical effects can lead to completely different capability requirements. The second point here is that Russia is typically quite unconstrained. They don’t feel that there are massive legal constraints or even organizational constraints when conducting their operations. That’s quite different from many Western entities, where these constraints are much higher. Most of the time, they have to conduct these operations with a specific effect at a specific point in time, and they must often guarantee beforehand, or at least make a strong case, that the operation won’t cause any collateral damage.
Luca: Thanks. I appreciate that distinction and the importance of strategic effects, high reliability, and precision. Those really do seem to be the difficult things here. I guess, Max, one thought experiment here is: say that for whatever reason, all of Russia’s cyber capabilities, or maybe all of the US’s cyber capabilities, suddenly fall into the hands of an actor who is completely unconstrained and just wants to maximize damage. Is the scenario we sometimes see in movies—of a whole country’s power grid, or a large region of it, being taken offline for many days—real? Is that capability within the current frontier if those resources were to suddenly become proliferated and unconstrained?
Max Smeets: If the US government sought to cause maximum damage, they could indeed cause a lot of harm. Going back nearly two decades to the Stuxnet case, we know that after Stuxnet, the US developed a second plan, codenamed Nitro Zeus, to cause critical infrastructure-wide damage to Iran in case negotiations would fail. That plan was never implemented, and we still have limited knowledge about what they would have been able to achieve. But already then, they were considering it. I have no doubt that if you are an extremely dangerous person with all sorts of motives, you could think about lots of terrible things. You could consider how to influence the water systems. In fact, we have seen different groups targeting water treatment plants in Israel, where changing some of the chemicals could have dire consequences. You can imagine all sorts of horrible scenarios that would have real consequences, especially if combined with a couple of other actions, such as targeting hospitals. There is no doubt in my mind that you could cause real havoc, especially if doing this globally. So then, indeed, there is a question of willingness, but we have seen a sufficient number of incidents that are within the realm of possibility. While, again, with many conventional capabilities, it is within the realm of possibility to destroy the world ten times over, there are, of course, always other reasons why we may not have seen that.
Luca: Yeah. I think it is useful to, as you said, separate out the questions of what is possible and how plausible it is that a given actor would use it. On the point of Nitro Zeus and, as you mentioned, these more devastating things that are feasible, I am curious where that impression comes from. There was a plan for Nitro Zeus, but it was never executed, in large part because it wasn’t necessary. But it does make me wonder if there isn’t a mirror here with the Russian power grid attacks against Ukraine, where I could also imagine there was probably a brief saying, you know, this is something that we can do and this is something that is possible—but then when they tried it, it failed or didn’t have the desired effect.
Does this all just come down to classified information and people within the intelligence community having access to that information? Are they confident about what is possible, or is there debate and speculation even among them about which plans would actually succeed and which ones wouldn’t?
Max Smeets: I think there is a lot in this space where you will indeed find someone with a former intel or military background who might approach you after a talk and say, “You know what, Max? If you had only known about what we planned between 2014 and 2016, then all the conclusions you had in your last slide would be wrong.” For me, having never had a security clearance, I have limited visibility. In that sense, it’s hard to answer the question. That said, even people who have had the right clearances often engage in open discussions about the effectiveness of cyber operations. You saw this with Ash Carter discussing US government cyber operations in Operation Glowing Symphony, where he argued that there were limited options the military could provide, and when they did, they were often not very effective. This contrasts with assessments from some generals who believed that cyber was highly effective against ISIS, stating that cyber operations played an important role, particularly in disrupting the propaganda machine of a terrorist entity. Their logic was that if you can have a significant cyber component against a terrorist entity that is not very digitally connected, you can certainly have the same against state actors that are much more digitally wired, where the attack surface is much larger. These examples illustrate that even among those with the highest levels of access to classified information, there tends to be disagreement. Perhaps where you stand on the issue depends on where you sit. If you are the general responsible for cyber operations, you may be more inclined to say these things are highly effective than if you are the one making equity decisions on whether to deploy a cyber unit or an entity from one of the other domains of warfare.
Barriers to building an elite hacking force
Luca: You know, I think to the point that there is disagreement, as you usefully illustrated, it’s important to keep that in mind. So maybe stepping back and asking ourselves, what are the capabilities or inputs that allow one cyber force to be more effective than another? We don’t have to debate NATO versus Russia here; we can even just talk about many countries trying to build cyber forces for the first time. I’m curious what signs you would point to when trying to distinguish which forces are more sophisticated and capable than others.
Max Smeets: When I started my research, there was a lot of language out there—and to some degree there still is today—about countries being able to launch their cyber weapons or develop their offensive cyber capacity. However, people were often very unclear about what that would entail. What I developed is a framework called the PETIO framework, which stands for its five elements: people, exploits, tools, infrastructure, and organization. These are the key elements for developing a military cyber force. The first and most important element is people. This includes recruiting not just the right developers and operators for technical operations, but also legal analysts, targeteers, strategy people, and so on. This aspect is often forgotten, as strange as that may sound. When people talk about developing a capability, they usually focus on the more glamorous topic of exploits. Exploits are used to either gain access to systems and networks or escalate that access, and they come in three flavors. The first type is exploits that are not yet known to the vendor or the public; these are called zero-days, because the vendor has had zero days to fix them.
The second flavor is exploits that are known to the vendor but not yet patched—no patch has been developed yet. The third is exploits that are known to the vendor, where a patch has been developed but not necessarily applied by every IT administrator of a given organization. The popular narrative is about states using zero-days to find unknown ways into computer systems and networks. The reality is very different: most operations, particularly military operations, do not use unknown vulnerabilities but exploit known ones. Exploits are perhaps the least important component when it comes to developing a military cyber force.
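The three exploit “flavors” described here can be sketched as a simple state classification. This is a hypothetical illustration only — the type names and fields below are invented for this sketch, not drawn from any real vulnerability database:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ExploitFlavor(Enum):
    ZERO_DAY = auto()               # vendor and public unaware; no patch exists
    KNOWN_UNPATCHED = auto()        # vendor aware, but no patch developed yet
    KNOWN_PATCH_UNAPPLIED = auto()  # patch exists, but a target may not have applied it

@dataclass
class Vulnerability:
    vendor_aware: bool
    patch_released: bool

def classify(v: Vulnerability) -> ExploitFlavor:
    """Map a vulnerability's disclosure state to one of the three flavors."""
    if not v.vendor_aware:
        # The vendor has had "zero days" to fix it.
        return ExploitFlavor.ZERO_DAY
    if not v.patch_released:
        return ExploitFlavor.KNOWN_UNPATCHED
    # A patch exists; this is only exploitable against targets that never applied it.
    return ExploitFlavor.KNOWN_PATCH_UNAPPLIED

# Per the interview, most military operations live in the third bucket:
print(classify(Vulnerability(vendor_aware=True, patch_released=True)))
# → ExploitFlavor.KNOWN_PATCH_UNAPPLIED
```

The point of the taxonomy is that the buckets get progressively cheaper to exploit: a zero-day must be discovered or bought, while the third bucket only requires finding a target that lags on patching.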
Luca: To the degree that you emphasize this, people are maybe the most important component. I’m keen to zoom into that to help kind of disentangle it as well. Some context here is that I think I’ve also found it difficult to segment or define sophistication here. It just feels like it has many more inputs than other domains have. For example, I think about fighter jets, and you have a very clear differentiation of which generations of fighter jets are in and which countries can build jets of what kinds of generations. But cyber just feels much messier to me. When you emphasize that people are important, I’m very curious for your answer. I also imagine we’ll get into the messiness here of what these people are able to do or what the skills are that you are looking for. If you can say, look, country X has this many people who can do Y, then you would know that this is a more sophisticated cyber force.
Max Smeets: Yeah, I think that’s a brilliant observation. Of course, I don’t want to oversimplify other fields, but you can roughly say, with an air force, this country has this many F-35s and this many F-16s, and then we have a rough sense of what they are capable of. There are questions about maintenance, how many pilots are available, and how they are trained, but you get a rough indicator. With cyber, it doesn’t mean much. If you say, I have 50 people or 500 hackers, it very much depends on the skills that are available, and these are not easily publicly observed. As someone put it previously, you cannot parade computer code down the streets of Moscow. You could parade all of your cool hackers down the streets of Moscow, but people would not necessarily be impressed, and it would not be much of an indicator. Plus, you develop these skills, and sometimes you might even unlearn certain skills. Even if you’ve seen a certain group be highly capable in one operation—let’s assume all of these people stay in that unit—it doesn’t mean they could pull it off in ten years’ time. What we know from skills development in any field is that there is a degree of unlearning. If I had to reboot my laptop to do something in Stata that I’ve not done since grad school, I would be like, “Damn, what was it again? How do I create this graph?” To some degree, you would even see this in military units. Another element is the problem militaries face in developing a training schedule. In some countries, there is a belief that you can run a six-week-long program, and then these operators can simply function in a cyber command. Others have come to realize that a much longer program of at least a year is needed. But then there is still the question of what the real skills are that these different people need.
Many have come to realize that it’s not easy to develop a curriculum because some of the best hackers that we have today have developed these skills in a mixed manner. On the one hand, they’ve learned them from just learning as they were young kids and having a bit of fun.
And on the other hand, it comes from maybe going to certain hacker camps and doing all these things that they never thought would ultimately lead to a job. Now suddenly, they have these skills that they can deploy really effectively for us. But how do you go from there into a formalized curriculum? That’s not an easy one. You see all these questions arising combined with the fact that there is a contrast in this space between how militaries function and the very nature of hacking. Militaries tend to like standard operating procedures. They like routines because routines create a degree of predictability, which is what you want in the military. It creates opportunities for replicability, but it also ensures that you don’t violate international law or other procedures. Whereas the very nature of hacking is to do the unexpected. You want to rely on an element of deception to gain access to someone’s network, and hacking often comes about from playing around. These two things can clash with the specific creativity and imagination of a single operator, which can conflict with the broader organizational risks that a military force might carry.
Luca: This whole thing seems hard, and I want to maintain all of the caveats that you’ve usefully listed. But are there any indicators? Maybe not perfect indicators, but somewhat good ones. Are there any observations you would look out for to see whether a new military cyber force now has capabilities that are more similar to those of the leading states like the US and Russia?
Max Smeets: Oh, yes. There are indicators. Some are more publicly visible than others. Some you might only find out about not when a force is established, but after years of operation, and hopefully after some report is published on their activities. The first relates to mandate. What is their mandate? Can they only operate in wartime, or do they have some capacity to operate in peacetime as well? If they can only operate in wartime, that tells you what these units are doing most of the time. Second, there might be some element of recruitment that is relevant here, but it is really hard to interpret. You sometimes see seemingly big commands, like the German command, but many of the personnel are essentially in IT roles, which doesn’t give you a real sense of their offensive capacity.
But where I start to think, “Oh, this is something significant,” is their training infrastructure. As mentioned before, this is about cyber ranges. We have seen different governments establishing cyber ranges of various sizes. Many of the ones in Europe cost a couple of million, and their operators can practice and learn new skills on them. But if you’re developing a range for a few million—as much money as that might seem to some—you can only replicate the real environment in a very limited sense. If I see something like what the US Army does, investing a billion in developing such a range, suddenly you get a sense that your operators’ ability to train and practice certain activities is at a completely different level. That would be a strong indicator of what a military force is capable of.
Another high-ranking indicator for me is the integration between the military and the intelligence services. Typically, it’s the intel services that are doing a lot of activity in peacetime, and we see this in various countries. Let’s take the Netherlands, where I’m from, and you see an intel service that is well-respected for several public disclosures. However, its cyber command, the Defense Cyber Command (DCC), has not been independently deployed since its existence and might struggle more to recruit and retain the right talent.
But if you look at countries where the integration of the military and intelligence is much closer, like with the recent establishment of the UK National Cyber Force, it indicates that they have really thought through this problem. They might find ways for people to have a day-to-day mission in which they can consistently improve their skills and also have a reason to stay in this command rather than being taken by the private sector or any other entity.
How to assess cyber sophistication
Luca: All of these feel like important inputs to be tracking. So what can we see that governments or forces are doing that might suggest they are capable of a lot? I’m curious if there are any outputs that you would particularly flag. Think more here of, like, if I wake up tomorrow and see a news article stating that the German government did X or the German cyber force did X or the UK cyber force did Y, then those would be indicators of capabilities. One reason people latch onto exploits, for example, is that it’s a very legible output. If you’re able to find this many zero-days, then that is an output you could track. Or if you see that, within a cyber operation, someone used a particularly powerful zero-day, that is also an output you can track. I’m curious if there’s anything else of that kind of flavor that you would find important to monitor.
Max Smeets: Let me be annoying. Let’s first start with the outputs that I value less, which we have often focused on. For instance, we have focused on people creating cyber power indices of different states, like whether this country has a cyber command or not and whether it has signed up to NATO’s sovereign cyber effects principle. This means it can deliver these effects. Or how it performs in international training exercises, and whether it ranks high or low. Those things mean very little to me. What would be most valuable is to see what we call CTI reports, cyber threat intelligence reports, often written by the private sector about specific operations conducted by state actors.
Now, what’s interesting is that we saw quite a few of these reports around a decade ago from different entities. Perhaps most notably, a very good Russian company at the time called Kaspersky Lab—which still exists today but is banned from the US—had an incredible team that conducted excellent analysis of many Western state actors, including groups they named “The Mask” and “Animal Farm.”
Luca: Yeah.
Max Smeets: These reports give you a really good indication of what some states are capable of achieving, and that would be the best thing to see. However, over the last decade, companies have hardly published these types of reports. So there is a real question here about, on the one hand, the visibility that these companies have, and on the other, their willingness to publish on some of these Western actors. The visibility is in question partly because of where Western cyber threat intelligence firms have the most visibility: on the networks of their clients, since those are the ones they defend. And where do Western state actors mostly target? Not Western organizations, but organizations outside of the West.
Luca: Right. Yeah.
Max Smeets: And so that’s where these companies have more limited visibility. Perhaps that is a reason. The second reason relates to whether, even if you find them, and maybe this state or this government is your client, would you want to publish on this? Especially if this state-led operation, let’s say, is a counter-terrorism operation, do you even want to disclose this? There are real questions here around the disclosure thereof.
Luca: Again, similar to the point we made before about classified information, this is a really interesting information ecosystem. As you said, what gets published and what doesn’t get published, including by non-government agencies, seems like a whole tangled question.
Max Smeets: There is a little bit of a “who’s who” in this space.
You have a sense that someone might tell you, “Oh, this unit is actually really quite good.” Yeah. And someone else might sort of confirm this, saying, “Oh, you know, we’ve been sharing this or that with this agency, and that’s been really effective.” So there’s a little bit of that, but for sure, it’s an information ecosystem to study. There is no question about it, and it’s one of the reasons why it has kept me fascinated for a really long time.
Luca: So I’m curious about your reaction to specific bottlenecks in terms of how much of an indicator of sophistication they are. One, maybe to begin with, is at the very beginning of the interview, we talked about the difference between IT and OT systems. So, again, to use the Colonial Pipeline example, whether you go after a company’s invoices or admin kind of system, or whether you’re actually able to engage with the physical engineering in the critical infrastructure behind it. Does that feel to you like an important distinguisher of sophistication? Maybe not at the level once we get to, you know, the US government and first-tier states, maybe China and stuff as well. But to the degree that we have seen examples of Russian groups try and fail at this, to the degree that Stuxnet, at least in the public record, still feels like the big example here. Is that, like, being able to escalate from IT to OT systems an important marker of differentiation?
Max Smeets: It might be one of the many markers, especially if you see an entity operating in an OT setup that knows its way around really well because it gives you a sense of, “Whoa, they must have brought people on board that understand wind turbines or something else.” Especially if you could log them and track them, and you see that it takes them so little time to go right after the right things, almost as if this is a seasoned employee that knows exactly how to handle the system. Then you have a sense that either they’ve been hacking these systems many times, or they have the type of expertise within this unit to do these types of things. So, that is an indication, perhaps, of sophistication.
But, in some ways, you could look at sophistication in two ways. One is to say, “I’m going to look for all of these indicators, and then I come up with a picture.” So I’m going to look at how they get access. If they use something super exotic to get access that no one has ever known of, then they might be more sophisticated, and less so if they use simpler methods. Okay. Then I’m looking at the way in which they set up, let’s say, their command and control infrastructure. If they’re doing this in a really simple fashion, which is all in plain text that you can just read, that’s really not sophisticated. If they are doing this in a very cunning way, using satellite infrastructure, etc., that’s really sophisticated.
There are hundreds of indicators that you could, in principle, use to come up with a picture of whether it’s more or less sophisticated. That’s a good way to go about it, right? How complex are the tools that they use? Are they customized, or are they just coming from some open-source database, etc.?
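The first approach described here — checking many technical artifacts and aggregating them into an overall picture — can be sketched as a toy scoring function. The indicator names and weights below are invented purely for illustration; real cyber threat intelligence assessments are qualitative and far richer than any additive score:

```python
# Toy sketch of the "aggregate many indicators" view of sophistication.
# All indicator names and weights are invented for illustration.
INDICATOR_WEIGHTS = {
    "custom_tooling": 3,            # bespoke malware vs. open-source kits
    "novel_access_method": 4,       # exotic, previously unseen access technique
    "encrypted_c2": 2,              # C2 traffic encrypted vs. plain text
    "exotic_c2_infrastructure": 3,  # e.g. satellite links vs. rented servers
    "ot_domain_knowledge": 4,       # operator clearly knows the OT environment
}

def sophistication_score(observed: set) -> int:
    """Sum the weights of the indicators observed in an operation."""
    return sum(w for name, w in INDICATOR_WEIGHTS.items() if name in observed)

# An actor using open tooling over merely encrypted C2 scores low...
low = sophistication_score({"encrypted_c2"})
# ...while custom tools, a novel access method, and OT expertise score high.
high = sophistication_score({"custom_tooling", "novel_access_method",
                             "ot_domain_knowledge"})
assert high > low
```

The limitation of this approach is exactly what the “magic of sophistication” argument that follows points out: a creative but technically simple intrusion would score low here while still being the work of a highly sophisticated actor.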
There’s another way to look at it, which I presented once before, called the magic of sophistication. When we think about cyber operations, the core element is deception. I want to deceive you to get access, and it could be in different ways—a smart phishing email that you do not expect, maybe using a vulnerability. It could be a variety of ways, but deception is at its core, and that has a sort of element of surprise.
Now, if that is the case, the way that I sort of have drawn an analogy before is like a magician. A magician that does a trick for the first time and deceives you is a much better magician than one who steals that trick and does it as well. In terms of techniques, they’re exactly the same. But I guess most people would agree that those who can develop these new tricks all the time are probably better magicians.
That’s, in my view, the same with cyber operations. You can look at all these technical artifacts and then make a claim about more or less sophistication. But if you find a really smart, cunning way that isn’t even using the most advanced tools available to get in, that can be a sign of sophistication. No one has ever thought about this before; this is the way you got in. Later on, you can’t do it anymore because people have found out about this technique. That’s maybe a sign of sophistication.
In fact, what do most state entities do? If I am the UK’s GCHQ or the US NSA, I use the techniques that are the least difficult to deploy while still getting access. I escalate my technical capabilities only when I really need them. Sometimes sophistication is knowing which trick to use. If I am a magician in front of a crowd of five-year-olds, I might use a trick that is just enough for them to be deceived; part of being a good magician is knowing exactly which trick will do. When the kids are 15, or teenagers, I change my tricks because I need something else to deceive them. And if I present at, say, the Penn and Teller show, where highly advanced magicians are watching my tricks, I might use a different trick again. Knowing beforehand what you need to succeed might again be a sign of sophistication. I’m trying to put you in a different mindset, moving away from all these indicators towards viewing this space slightly differently.
Luca: Yeah. I think those are a bunch of good points. One thing that you really hit on, which I would maybe take a step further than how you phrased it, is that looking at the inputs and how complicated the setup was is a bad indicator. Not just because there’s beauty and sophistication in doing something that is much simpler but more creative, but also because, to the degree that we care about sophistication as a sign that somebody could cause a specific effect, we don’t want to reward difficulty and complexity for their own sake. You have a nice quote in one of your articles: we can talk all we want about zero-day exploits to gain access to a system, but if a simple, effective phishing email is enough, then we don’t need to give people bonus points for coming up with zero-days when the much simpler technique is also sufficient.
I think maybe a way of rephrasing the IT/OT distinction that I’m curious about is that, when I look at the OT systems, it feels like getting there requires more creativity or sophistication. There are just more steps involved; you need a deeper understanding of many more systems at the same time, both computer systems and engineering systems, to find something in the solution space. I’m curious about the degree to which this is a good differentiator or distinction. It is true that most people who could do an OT attack could probably also do an IT attack, but not vice versa. I guess that’s one way of putting it.
Max Smeets: In my view, it might be a distinction worth considering, but one of many, many, many. That’s essentially the easiest way to answer.
Worm attacks
Luca: As you said, exploits are broadly overrated. I feel like the case against this would be that if you look at some of the most destructive cyberattacks to date, they were things like WannaCry and NotPetya, the ransomware attacks that we talked about.
And as you mentioned, they relied heavily on leaked US intelligence tooling, the EternalBlue exploit that both of these attacks used. I’m curious if there’s something to that: these attacks were only possible because something leaked out of the US national intelligence toolset and was then weaponized by other actors. That points to these exploits, at the very high end of destructiveness and sophistication, perhaps actually being a bottleneck, with North Korea having pulled off WannaCry but nothing of similar scale since 2017.
Max Smeets: Let’s take a step back. The way I have conceptualized the use of zero days often starts with this quote from Rob Joyce, former lead of the NSA’s Tailored Access Operations unit, who states that success in cyber operations is above all about knowing the adversary’s network better than they know it themselves. If you have a good sense of your adversary’s network, then you often don’t need something as exotic as a zero day to get in; something else will likely be quite effective too. I very much agree with this, and it seems to be the case for many operations. That doesn’t mean that the deployment of a zero day cannot have significant consequences. It depends, though, on the type of zero day, which adds further nuance.
Luca: Right.
Max Smeets: This can mean lots of different things. You have the most powerful ones that would give you access to a really wide set of systems in the world. If you find the right zero day in some type of Microsoft system, you could gain access to many computers globally. This is very different from a zero day in some obscure browser used by a limited number of people in a given country. In that sense, these two are not alike at all. Now, the tricky thing we have seen in the case of WannaCry and NotPetya, where the exploit leaked from the US intelligence services, raises questions for the US and other governments. If I have a certain zero day, should I retain it for potential further use based on where I think it can be deployed, or should I disclose it to the vendor? The trick here is that sometimes the most powerful exploits are the ones you want to retain, because they are so powerful and have many potential use cases in the future. At the same time, by their very nature, they are so powerful that it might be better to disclose them to the vendor so a patch can be developed, because so many systems are vulnerable. There is also an open question about the degree to which another actor, whether state or non-state, may discover the same vulnerability.
Luca: Yeah.
Max Smeets: These are tough questions, and the US government has certainly faced criticism after these incidents. I also agree that in these two cases, the attacks’ ability to proliferate so widely, and the damage they caused, would likely have been limited if it weren’t for the leaked US intelligence tools. It’s a very sad incident. However, there are enough counterexamples where significant disruption or damage occurs, or where a lot of documents or intellectual property is stolen by foreign actors, without any zero day being deployed.
Luca: I think that’s really important. There’s a difference between something being sufficient and necessary. The point we’re getting to is that discovering a zero day, especially something as powerful as EternalBlue, might be sufficient for an actor to cause a lot of harm or to conduct a very sophisticated or destructive attack, but it is by no means necessary to reach that threshold.
Max Smeets: Just to clarify, a zero day is not a magic wand that gives you the opportunity to do everything you want. Maybe it has given you initial access, but perhaps not even administrative access, so you still need to think about how to escalate that access.
Maybe then you’re relying on known vulnerabilities, or you live off the land, or whatever you do. It’s not that a zero day sets up a chain where everything else is super easy. You still have to remain undetected. You still have to think about what type of payload you’re going to deploy. So the deployment of an exploit is one step in a broader operation.
Luca: And just to pick up on that point, you mentioned it being important that the attacker or the adversary understands the system more than the defender does. I’m curious if we could operationalize that further. For example, is there a quiz we could set up for the attacker and the defender to fill out? If the attacker scores better, would that be a sign? I’m curious about what questions we would ask. When we say someone has a deeper understanding, is this about factual recall? What does that mean, or can we disentangle that at all?
Max Smeets: Well, it starts with understanding the infrastructure that I own. So I guess the first question would be: does the attacker have a better understanding than you of all your devices that are connected to the internet? If so, you’re in trouble. You might be surprised, but sometimes massive organizations are not well set up, or do not have a full picture of all the infrastructure that is connected online, how it is configured, and to what degree it is vulnerable. It may seem basic, but it is problematic if the attacker knows this, because they essentially know the vulnerable ways in. You could probably come up with other questions, like whether the attacker has a much better sense of when certain tools are detected or not, and with which antivirus or detection capabilities in place they are more or less likely to get caught. That’s probably a problematic thing for the victim.
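That first “quiz question”, whether the attacker knows your internet-facing assets better than you do, can be sketched as a simple set comparison. This is a toy illustration; the hostnames are invented examples, and a real attack-surface audit would rely on actual external scan data.

```python
# Toy sketch: compare the defender's official asset inventory against
# hosts actually observed exposed from the outside (e.g. by an external
# scan). All hostnames are invented examples.

official_inventory = {"www.example.org", "mail.example.org",
                      "vpn.example.org"}
externally_observed = {"www.example.org", "mail.example.org",
                       "vpn.example.org", "old-test.example.org"}

# Hosts an attacker can find but the defender forgot about: the gap
# that hands the attacker the knowledge advantage described above.
unknown_to_defender = externally_observed - official_inventory

# Inventoried hosts that never showed up externally (decommissioned,
# or simply not internet-facing); worth reconciling either way.
stale_inventory = official_inventory - externally_observed

print(sorted(unknown_to_defender))  # ['old-test.example.org']
```

A non-empty `unknown_to_defender` set is exactly the “massive organization without a full picture of its infrastructure” scenario.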
Luca: Yeah.
Max Smeets: If the attacker is much more aware of those kinds of things, maybe that’s an indicator. I wouldn’t immediately set up a questionnaire and draw conclusions from that.
How will AI change cyberwar?
Luca: Let’s segue into some of the proliferation concerns, particularly around generative AI. We’ve talked about the different types of attacks that governments or other organizations conduct, some of the challenges, and some indicators of actors being more capable. The emphasis here is not necessarily on what AI is capable of today. We’ve seen generative AI used by groups like Google’s Project Zero to find real vulnerabilities, and benchmarks where AIs can autonomously complete some Capture The Flag challenges. However, I want to take a longer perspective. In five or ten years, if this progress continues, what should we look out for that would indicate national security implications beyond AI just being useful or interesting? I especially want to understand where you would be worried if AI could proliferate capabilities that are currently only accessible to top-tier states like the US, Russia, and China.
Max Smeets: To understand the impact of AI on this space, you have to look at the entire cycle of cyber operations, starting from reconnaissance to gaining access. Once you have access, how are you going to get further access? Once you have this further access, are you going to steal data or destroy data? All of those follow-on steps are important.
And AI will be deployed on both the attacker and the defender side, across all of those stages, either assisting operations or detecting them. In fact, as part of Binding Hook, the platform that I run, we conducted an essay prize competition, and I’m now reviewing the essays. The competition was done together with the New York Security Conference, and we asked: how does AI impact cybersecurity, and what are the implications for Europe? I don’t want to state my current views too strongly as I’m still reviewing the thoughts of others, but it’s fascinating to see the different interpretations people have. Some will say, for instance, regarding the development of exploits and the techniques being used, that we have already seen early indications of this space being revolutionized in exploit discovery, and hopefully also in the patching of vulnerabilities. Others will argue that the effect is limited, because it often pertains to classes of vulnerabilities that are more likely to be detected anyway, and that some of the most interesting exploits will still only be discovered and developed when some of the best people look into a target for a long time. Personally, I am ambivalent about where it goes, and my own limits of knowledge may keep me from delving into all of these things deeply enough to really understand the risks involved.
The one thing that I’ve been most worried about, and this segues into my more recent work over the last few years on crime, is that AI is being deployed in two ways. First, of course, on the phishing side, writing phishing emails, which is how these groups get in perhaps 95% of the time. However, where they have been most effective is actually in understanding the data that they have acquired. This is a less sexy topic, but one that is really important. When criminals steal a lot of data, they often demand a ransom not to publish it online, and they also encrypt the data and demand a ransom for the decryption key. In the past, when they stole gigabytes of data, these criminals had no capacity to summarize it and understand what they had actually retrieved from a random company in Latin America, especially if they didn’t speak the language. Today, they have the capacity to quickly assess what they have available to them in a way that was never possible before, allowing them to extort victims in a way that was previously impossible. It’s a development that is hard to showcase because, you know, how can you prove that they used any type of AI platform to do this? But it certainly worries me a lot. They will only get better at this, and it will transform the space considerably.
Luca: Yeah. That’s interesting. I mean, presumably, one indicator would be seeing higher ransoms being demanded, or more victims being willing to pay higher ransoms, because the criminals could extract more value from that information. Another example, which may be a bit of a toy example, is seeing more insider trading: now you have the data from quarterly reports, you can anticipate how well a company is actually doing, and you can go long or short the stock accordingly. I’m thinking of creative scenarios somewhat playfully, but those are some thoughts.
Max Smeets: Yeah. Although they need to be able to trade legally and so on. That’s a tricky thing. Perhaps, you know, an indicator could indeed be higher ransoms, but that might just reflect them being more explicit in the negotiation process about what data they have available. That might be a good indicator, and sometimes that is published. But the fact that the size of the ransom depends on so many different variables, such as the size of the company and the amount of data stolen, makes it a bit tricky. Still, possible indicators are there.
Hard evidence is not there yet, but it’s something that I worry about today, and I think it will only get more extreme in the years to come.
How cybercrime became a national security threat
Luca: Do you want to talk a bit about the national security implications you see from cybercrime, as discussed in your forthcoming book? That is, after all, its subtitle. I guess I’m starting off somewhat more skeptical here: cybercrime to me definitely feels like an objectively bad thing that is a drag on the economy, but I’m curious how, as this space has grown in sophistication, it has become something that you would say has national security implications.
Max Smeets: So this is interesting, right? I finished “No Shortcuts,” and there, I really wanted to cut through the hype. The claim I was making is that everyone is talking about militarization and the future of cyber war, but, actually, these cyber commands are often not very effective, for a number of different reasons that I go on to explain. That’s essentially the book. It’s a critique of many people who have overhyped this space. When I finished the book, I thought, okay, I’ve got all these contacts that are extremely hard to establish, and I find this really interesting. Let me write another book: maybe zoom in on NATO alone, or talk about the motivations for establishing a cyber command in the first place and to what degree that is signaling or posturing.
I was looking at the space and clearly saw the rise of cybercrime. Regarding what you just mentioned, how do we know what’s out there? One thing is for sure: ransomware-related incidents come up on a day-to-day basis, and the impact is often quite visible and public for a number of reasons. I thought, okay, let me delve into this and start trying to conceptualize it. I originally have an international relations and political science background, and there was not a single book or article written by a political scientist on this. I felt the criminology space hadn’t dealt with it the right way either, and the existing ransomware-focused analyses were purely technical.
The more I delved into it, the more I felt that this is a critical issue for society, and you can point to a number of different indicators. First of all, the only cyber campaign that has ever caused a state to declare a national state of emergency, in Costa Rica in 2022, was ransomware conducted by Conti, the group that I’ve studied most deeply in my book. Other indicators I mention include the UK COBRA meetings, the national security emergency meetings, where two years ago the majority of those meetings were about ransomware rather than any other national security issue. I thought that was an interesting indicator.
You’ve already talked about a couple of other incidents, including the attack on Colonial Pipeline in the US, but you can think about TSMC and others being hit as well. These are significant incidents occurring at a pace that I actually haven’t seen on the state-actor side. At the same time, we often see the most vulnerable being targeted, and that has definitely drawn me into this space. Ransomware groups typically go after entities that are most keen to pay once they’ve been hacked, which often means entities holding critical data. As a result, hospitals are both a soft target, because they don’t always have the resources to defend themselves, and, once hacked, they sit on so much sensitive data that the consequences can be severe.
One sad example I cite in my book is the case where one hospital chain was attacked, and after the hospital chain decided not to pay, the criminals started leaking photos of female breast cancer patients to the public, with really terrible psychological consequences. So beyond the big national security issue, it’s often an individual who is hit here in a severe manner. These groups have changed significantly in the last five years from what they were before.
Five years ago, these were relatively small groups that were not conducting very advanced operations. Today, these groups often have a much more professionalized structure. The group I looked at most specifically, Conti, may at some point have had over 20 people on its payroll, and in 2021 it earned somewhere between 80 million and 1.2 billion US dollars in that year alone. These groups have a degree of resources and personnel that many state actors do not have available to them. This is a serious cause for concern, and the issue does not seem to be going away. I really felt it was worth writing a book to draw comparisons between state actors and these criminal groups, and hopefully to raise some awareness as well.
Luca: Yeah. I was really surprised, as you pointed out, by the sophistication and size of some of these groups. I think you mentioned in one of your articles that Conti has an HR department discussing whether employees are allowed to work remotely or must come into the office, like hybrid work. It really is much more of an industry than I would have thought, with a much bigger scale.
Max Smeets: It’s often a bit like a startup. Just like in Silicon Valley, startups make different decisions on how they operate. You see this with criminal groups as well. Some criminal groups might outsource a lot to be more flexible and easier to scale, but that can also make it harder to control every single activity that this startup does. Other startups might try to have everyone on their payroll and grow a bit slower. The same is true with ransomware groups. Conti, which I analyzed, follows this second model, having more people on the payroll. Other groups are sometimes described as RaaS (ransomware as a service) groups, where they outsource operations to affiliates, allowing the core organization to be smaller while engaging in many activities simultaneously. There are lots of different models they are exploring, and this space is developing rapidly in a way that will be interesting to follow in the years to come.
Luca: I’ll link your article, your book, and the write-up as well. Maybe to close off, let’s briefly return to the question of AI and frontier capabilities and, as you do in your books, try to cut through the hype. Looking two to five years ahead, as many people experiment and try to demonstrate what AI models are capable of, are there two or three observations you would look out for that would cause you to change your mind about how transformative this technology can be? This can be a high bar, like an AI helping Project Zero find an EternalBlue-level capability, or autonomously hacking an OT system. But are there any markers you’d feel comfortable putting out there, saying: this is something to watch, and if AI actually becomes capable of it, I would really think hard about the national security implications?
Max Smeets: That’s not an easy question. Maybe I’m biased by the clips I saw today, I believe from Mark Zuckerberg, discussing a future in which certain organizations have an AI capable of taking over the tasks of a mid-level engineer. If that comes to pass, these developments will have ramifications for how intelligence services and militaries operate in this space. If you can write code for a software company, you can also write malware for an intelligence service or a cyber command.
And then you get to a space where, with the right infrastructure for testing and retesting, and the right data to feed in, which some countries already seem to be putting in place, you can achieve things at a scale that is completely impossible today. I’m not a very good visionary, but once we see the frequent deployment of that within cyber commands, we will clearly have witnessed cyber operations moving into a realm that is qualitatively different from where we are today.
Luca: Concerning. We should watch out. But thanks so much for the time. This interview was really great, and I hope you have a great rest of your day. This was really interesting.
Max Smeets: It’s a pleasure. Thank you.
Outro
Luca: That was Max Smeets on Barriers to Cyberweapons. If you want to learn more, you can read the write-up at hearthisidea.com/episodes/smeets. There, you’ll find links to all the papers and books referenced throughout our interview, plus a whole lot more. If you enjoy this podcast and find it valuable, then one of the best ways to help us out is to write us a review on whatever platform you’re listening to. A big thanks, as always, to our producer, Jason, for editing these episodes. And thanks very much to you for listening.