Offensive Cyber Beyond the Usual Suspects

The world’s attention remains focused on Russia’s invasion of Ukraine. Since the start of the conflict, cybersecurity experts and scholars have engaged in a heated debate over the perceived success and failure of cyber operations. Major crises such as this have also resurfaced questions around the role, form, tactics, and narratives that configure what offensive cyber is and how it is enacted.

Our understanding of offensive cyber capabilities is incomplete. The war in Ukraine is a stark reminder that assessments based on the peacetime use of these capabilities do not straightforwardly apply to wartime. The task of assessing cyber capabilities is further complicated by the lack of transparency around these capabilities, diverse institutional arrangements, and the variations in the scale and goals of different states’ cyber organisations.

Another factor contributing to this incomplete view is that much writing on offensive cyber has focused on the same small group of countries identified as cyber powers. This group usually comprises the members of the Five Eyes intelligence alliance, particularly the United States and the United Kingdom, along with allies and partners such as France and Israel. China, Russia, Iran, and North Korea are often considered too, albeit primarily as US adversaries rather than on their own terms.

A more complete understanding of offensive cyber requires examining how these capabilities are being used beyond this small set of players.

More broadly, understandings of the nature of offensive cyber will vary between actors. As well as being the focus of much of the analysis, countries such as the US and the UK have been the most vocal in outlining their understanding of offensive cyber. However, these definitions can be contested. For example, the distinction in Western military doctrine between cyber and information operations is not universally recognised. Similarly, the US’s conceptualisation of military activity on other states’ networks as ‘defending forward’ would be contested by some observers.

It is not simply a question of how offensive cyber operations are used in other parts of the world – offensive cyber might mean different things to different people.

Reflecting on these and other challenges, The Alert invites contributions from scholars and practitioners thinking about offensive cyber ‘beyond the usual suspects.’ This includes redirecting our present gaze to countries in the global south, ‘small’ countries, and actors other than the state. This work could be empirical – looking at how these capabilities are used – or theoretical, examining how they are understood. 

In addition, The Alert highlights below some of the work on offensive cyber (and cybersecurity more broadly) that is focused on countries beyond the usual focus on the Five Eyes allies and parts of Europe. The list is incomplete – if there are important works that you think we have missed we would be keen to hear from you.

Going ‘beyond the usual suspects’ is an ongoing exercise rooted in niche academic debates. However, it is one that can help practitioners, government officials, and human rights defenders critically assess the deployment of cyber capabilities, devise policy recommendations that speak to different contextual realities, and develop strategies that account for the diverse ways in which offensive cyber can be institutionally consolidated.

Examples of works that go beyond ‘the usual suspects’

Maschmeyer, Lennart, Ronald J. Deibert, and Jon R. Lindsay. “A tale of two cybers-how threat reporting by cybersecurity firms systematically underrepresents threats to civil society.” Journal of Information Technology & Politics 18.1 (2021): 1-20.

Chenou, Jean-Marie. “The contested meanings of cybersecurity: evidence from post-conflict Colombia.” Conflict, Security & Development 21.1 (2021): 1-19.

Shires, James. The Politics of Cybersecurity in the Middle East. Hurst Publishers, 2021.

Uchill, Joe. “Hack-and-Leak for Hire Being Sold as Litigation Assistance.” 16 November 2021.

Valeros, Veronica, et al. “A study of machete cyber espionage operations in Latin America.” Virus Bulletin International Conference. 2019.

Schulze, Matthias, and Sven Herpig. “Germany Develops Offensive Cyber Capabilities Without a Coherent Strategy of What to Do With Them.” Council on Foreign Relations 3 (2018): 18.

Hurel, Louise Marie. “Beyond the Great Powers: Challenges for Understanding Cyber Operations in Latin America.” Global Security Review 2.1 (2022): 7.

Subversion over Offense: Why the Practice of Cyber Conflict looks nothing like its Theory and what this means for Strategy and Scholarship

Cyber attacks are both exciting and terrifying, but the ongoing obsession with ‘cyber warfare’ clouds analysis and hampers strategy development. Much commentary and analysis of cyber conflict continues to use the language of war, where actors use ‘offensive cyber operations’ to meet adversaries in ‘engagements’ striving for victory on the ‘battlefield’ in the ‘cyber domain’. This discourse persists despite a growing consensus that cyber operations are primarily relevant in conflict short of war. For example, even the United States’ new strategy of ‘persistent engagement’ developed to meet challengers in such conflict nonetheless implies a military dimension of ‘engaging the enemy’ in its very name. Sometimes, this dogged adherence to the conceptual framework of war can take almost comical dimensions, with a recently published book proposing that “cyberwarfare…is modifying warfare into non-war warfare”. If cyber conflict is not war, why should we continue to look to concepts, theories and language of war to understand and explain it? Moreover, analysts not only agree cyber operations are primarily relevant in non-military competition, but available evidence indicates that cyber operations are in fact ineffective instruments of force projection.

The theory and perception of cyber conflict thus increasingly differs from its observed practice. Visions of cyberwar date back to the beginnings of scholarly engagement with the opportunities and challenges the use of information technologies brings in conflict. John Arquilla and David Ronfeldt warned ‘Cyberwar is Coming’, heralding a new form of conflict. Accordingly, a subsequent wave of theorizing foresaw a revolution in military affairs enabled by the information revolution. Neither has manifested in practice. The foundational notion of a revolution in conflict has lived on, however—only the type of conflict has changed. Accordingly, recent literature suggests cyber operations enable a new strategic space of conflict short of war, marked by a condition of ‘unpeace’ and opening a new way to ‘shape’ world politics in one’s favor. In short, the expectation is that cyber operations transform conflict by offering a way to attain strategic goals that were previously unreachable without going to war. This transformative influence is due to the presumed superior speed, scale, and anonymity of cyber operations. Empirical studies of cyber operations challenge these expectations, however, revealing extensive lead time, operational complexity and yet limited impact. Moreover, governments and private sector actors are increasingly adept at attributing cyber operations to their sponsors, and sometimes do so publicly.

In practice, cyber operations thus often fall short of their revolutionary promise. The reason, I argue in this article recently published in International Security, is their subversive nature. Rather than a military offensive where actors deploy troops to overpower an adversary and compel them to their will through force, cyber operations produce outcomes by sneaking into systems and exploiting flaws in their design to make these systems do things neither their designers nor users intended or anticipated. This mechanism of exploitation is directly analogous to the mechanisms intelligence agencies have long used in subversive covert operations, i.e. instruments of power in conflict short of war. Hence, examining the nature of subversion helps explain the nature of cyber conflict. It closes the gap between theory and practice and dispels some of the persistent myths about its revolutionary impact.

Subversion is an understudied instrument of power used in non-military covert operations. It exploits vulnerabilities to secretly infiltrate a system of rules and practices in order to control, manipulate, and use the system to produce detrimental effects against an adversary. In traditional subversion, states target social systems. Typically, states have used undercover spies to infiltrate groups and institutions, establish influence within the latter, and then use this influence to produce desired outcomes against an adversary. In other words, subversion turns an adversary’s own systems against themselves. This mechanism enables a wide range of possible effects, whose scope and scale always depend on the properties of the targeted systems: influence on public opinion, disintegration of social cohesion, economic disruption, infrastructure sabotage, influence on government policy, and, in the extreme case, overthrowing a government.

Because subversion is secret and produces effects indirectly through target systems, it holds great strategic promise. If successful, it offers a way to influence and weaken adversaries at lower costs and risks than war. Recent research has shown secrecy lowers escalation risks even in military covert operations, and naturally lower-intensity operations involve even lower risks. Moreover, compared to warfare—both overt and covert—where actors must deploy material capabilities and troops, subversion involves minimal resource requirements since it primarily depends on adversary assets. In theory, subversion thus enables a way to shift the balance of power without going to war and the dangers this entails. Here is a key parallel to current expectations about cyber operations.

Importantly, however, subversion often falls short of fulfilling this promise because of operational constraints and trade-offs that produce a crippling trilemma. Exploitation involves a distinct set of challenges that slow the speed of operations, limit the intensity of effects, and limit the control subversive actors have over those effects. Importantly, improving performance on one of these three variables tends to degrade it on the remaining two. For example, moving faster tends to lower intensity and control. These interactions produce a trilemma: speed, intensity, and control are key determinants of strategic value, yet at a given level of resource investment actors can at best maximize one. In practice, subversion is thus often too slow, too weak, and too volatile to achieve strategic value. Accordingly, throughout the Cold War, policymakers and analysts overestimated its effectiveness, another striking parallel to cyber operations that underlines their shared limitations. With their reliance on exploitation, cyber operations share not only the promise but also the pitfalls of subversion resulting from this trilemma.

Hacking, the key instrument used in cyber operations, means exploiting vulnerabilities in systems and in the way they are used. There are two main types of vulnerabilities actors can exploit: technical and social. Technical vulnerabilities are flaws in the technology itself, most frequently flaws in the logic of the programming code that determines a computer system’s behavior; manipulating such flaws lets hackers make the system misbehave. Code inevitably contains flaws because humans are fallible. Hackers exploit such flaws to gain unauthorized access to systems and establish control over them, often by using malware, specialized malicious programs designed for exploitation. Social vulnerabilities concern pathologies in human behavior and weaknesses in the security practices guiding the use of technology. Phishing emails are a key example of this latter form of exploitation, also known as ‘social engineering’.
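
The distinction between a technical flaw and its exploitation can be made concrete with a deliberately toy example: a logic flaw in path handling that lets input escape the directory its designers intended. All names and paths below are hypothetical illustrations, not drawn from any real operation.

```python
# Toy illustration of a technical vulnerability: a logic flaw in path
# handling. All names and paths here are hypothetical.
import posixpath

BASE_DIR = "/var/www/files"

def resolve(filename: str) -> str:
    # The flawed step: user input is joined to the base directory
    # without validation, so "../" sequences can escape it.
    return posixpath.normpath(posixpath.join(BASE_DIR, filename))

def is_inside_base(path: str) -> bool:
    # The missing check: does the resolved path stay inside BASE_DIR?
    return path == BASE_DIR or path.startswith(BASE_DIR + "/")
```

A designer who ships only `resolve` has created exactly the kind of unintended behavior hackers exploit: `resolve("../../../etc/passwd")` yields `/etc/passwd`, well outside the intended directory, while `is_inside_base` is the one-line check that closes the flaw.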

While the systems targeted differ, this mechanism of exploitation is functionally the same as in traditional subversion. Accordingly, as in traditional subversion, the hacking groups behind cyber operations face significant constraints in speed, intensity, and control. As a starting point, cyber operations cannot produce effects without the presence of vulnerabilities. Where there is no way into a system, hackers cannot create one by force. Accordingly, identifying suitable vulnerabilities is a necessary condition for success. Doing so requires reconnaissance and analysis, which takes time and thus slows operational speed. Acquiring means of exploitation usually requires developing them, which takes further time. Generic exploits are publicly available, but precisely because they are public, careful victims will have updated their systems to remove the vulnerabilities such exploits target. Alternatively, hackers can buy custom, hitherto unknown exploits (known as 0-days) on the grey market to bypass the development time, but doing so requires significant financial investment.

Meanwhile, actors must remain hidden and depend on adversary systems that are seldom fully familiar. Both characteristics limit the intensity of effects. First, to maintain access to a system, hackers must avoid alerting the victim to their presence. Once discovered, victims have several means to neutralize a compromise, ranging from patching software and removing malware to disconnecting or shutting down the affected systems. More sensitive systems are likely to be better protected, thus raising the chance of discovery. Similarly, the greater the scale of a compromise becomes, i.e. the more systems hackers manage to infect with malware or otherwise exploit, the likelier one of these systems’ owners or users is to discover the unauthorized access.

Finally, secrecy and dependency on adversary systems also limit control over effects. Control is limited by definition, since victims can neutralize a compromise in different ways upon discovery. Depending on the measures hackers use to hide their presence and persist within a system, neutralizing a compromise is not always easy for the victim, but it is always possible. Even without discovery, hackers can only establish control over those parts of a system they are sufficiently familiar with to identify flaws even its designers and owners have missed. And even with that hurdle passed, there is no guarantee those parts of the system will respond to manipulation in the way the hackers expect. Hackers are as fallible as the designers of the systems they aim to exploit, and hence just as prone to making mistakes or missing something. Consequently, when hackers proceed to manipulate a target system, it may behave differently not only from what its designers expect, but also from what the hackers expect. The manipulation may fail to produce any effect, or it may produce an unintended effect with unintended consequences.

Importantly, as is the case with traditional subversion, these constraints are not only individual hurdles, but interact in a way that forms a trilemma. At a given level of resource investments, increasing the speed of a cyber operation tends to produce corresponding losses in intensity and control. The faster one moves, the less time there is for reconnaissance and development. With less time for reconnaissance and development, hackers will have relatively less scope and scale of access to systems than with more time. Consequently, the scope and scale of effects they can produce through these systems is also more limited. Similarly, with less time spent the risk of missing something or making mistakes increases as well. Hence, the risks of being discovered prematurely (before attempting to produce an effect), of failing to produce an effect, or of unintended consequences all increase as well.
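
The trilemma's core logic, a fixed resource budget that speed, intensity, and control must share, can be sketched as a toy model. The following is purely an illustrative sketch, not a model proposed in the article:

```python
# Toy model of the trilemma (illustrative assumption, not from the
# article): a fixed budget is split across speed, intensity, and
# control, so raising one share necessarily lowers the others.

def allocate(budget: float, w_speed: float, w_intensity: float,
             w_control: float) -> tuple:
    """Split a fixed budget across the three variables, proportional to weights."""
    total = w_speed + w_intensity + w_control
    return (budget * w_speed / total,
            budget * w_intensity / total,
            budget * w_control / total)
```

With a budget of 9, equal weights give 3 to each variable; weighting speed 7:1:1 raises speed to 7 but drops intensity and control to 1 each, which is exactly the trade-off the trilemma describes.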

The trilemma persists, limiting the strategic value of cyber operations. In theory, it is possible to launch speedy operations that achieve massive scale and yet produce carefully calibrated effects. In practice, however, the trilemma typically renders cyber operations too slow, too weak and too volatile to produce strategic value when, where and how it is needed. Accordingly, my research shows Russia’s cyber operations against Ukraine by and large failed to measurably contribute to its strategic goals. Moreover, the evidence shows that throughout the five disruptive cyber operations Russia deployed against Ukraine, the causes behind this limited strategic value were the constraints predicted by the trilemma.

When assessing the threat posed by hostile cyber operations and developing counterstrategies, it is crucial to start from a level-headed analysis of what is feasible in practice, rather than what is possible in theory. Focusing on possibilities alone not only distorts debates and hampers scholarship, but also undermines strategic responses. For example, ongoing fears of ‘Cyber Pearl Harbor’ or ‘Cyber 9/11’ events risk wasting valuable resources on preventing and mitigating such unicorn events. These resources and this intellectual energy are then missing from efforts to counter lower-intensity subversive campaigns that aim to undermine social cohesion, cause public and economic disruption, and sabotage infrastructure.

Fortunately, the theory and the evidence from Ukraine outlined here indicate that the threat such operations pose is often overstated. Yet that does not mean we can sit back and trust that everything will be fine. In particular, if subversion is allowed to fester over longer periods of time, the likelihood that it achieves some of its goals increases. To prevent this outcome, strategies that build on established counterintelligence practice promise greater rewards than maximizing engagements with adversaries, as persistent engagement does. Because subversion takes time, persistence is important in countering it. But so are efforts to exacerbate the shortcomings of subversion, namely improving detection capabilities and raising the unpredictability of one’s own systems to increase adversary uncertainty and control challenges.

Here lies a key opportunity for European states to innovate, particularly given the European Union’s preference for the non-violent resolution of disputes over more aggressive interference. Currently, much cybersecurity strategy development and many defensive measures are outsourced to NATO, a military alliance. Accordingly, threats continue to be framed in military terms: most recently, a push to counter ‘cognitive warfare’, a new term for influence operations and disinformation, classic staples of subversion.

Lennart Maschmeyer is a Senior Researcher at the Center for Security Studies at ETH Zurich.

Does the Cyber Offense Have the Advantage?

There is a simple conjecture that is common across all aspects of society: “the best defense is a good offense.” The idea persists because of its simplicity and a general failure to evaluate claims against evidence, and it feeds the belief that action can trump protection in cyber security. The complexity of computers can give the impression that little is known about how they function, fostering an idea that has become conventional wisdom: attack first, sort out the details later.

Yet, there is no evidence that the offense is the best course of action in cyber security. The concept of the offense/defense balance (hereafter O/D balance) has long been studied in International Relations. The basic premise is that “when defense has the advantage over offense major war can be avoided.” This simple conjecture has created a field of research that seeks to unlock the mysteries behind war and peace by focusing on the nature of operations and attack profiles.

Seemingly unknown to most cyber security scholars, this literature became mired in confusion over how to measure the balance between offense and defense, and even over its central variables. The recent passing of Robert Jervis highlights the power and breadth of his work. While Jervis’ work kicked off the modern era of research on the O/D balance, he also highlighted the need to distinguish between offensive and defensive operations.

Moreover, even if we accepted the doubtful claim of an offensive advantage as empirically accurate and measurable, this idea still fails to clearly motivate action. States assuming an offensive advantage might be deluded in their perception, as happened during World War I. Alternatively, a state might go on the offense anyway, driven by other motivations, such as the importance of a territorial claim or the need to signal discontent.

Challenging the idea of an Offense/Defensive Balance in Cyberspace

There are three core problems with the O/D balance: the indistinguishability of its variables; the failure to examine how perceptions shape the sense of balance; and the difficulty of measurement.

The key challenge for discussions of offense or defense in cyberspace is that it is near impossible to distinguish between the two frames. The fluidity of the concepts of offense and defense makes the terms virtually useless for research. Moves that are said to be defensive involve forward maneuvers that can seem offensive in nature, a common confusion with the U.S. strategy of “defend forward.” While cyber mission forces can go on the attack, they can also be posted as defensive operators seeking to stop attacks before they happen. The active and adaptive nature of modern technology makes the distinction between offense and defense entirely empty.

A key foundation of the O/D balance is the idea that each side will correctly perceive either the offense or the defense as having the advantage, determining the probability of war. Yet, as critics have pointed out, “it is inherently difficult to assess the impact of weapons technologies, particularly when they have not been employed in war.” Perceptions of cyber power and an emphasis on offensive dominance are in the eye of the beholder, with many doubting the offensive power of the United States or the defensive capability of the North Koreans in an isolated network. In a domain that operates mostly without empirical evidence, anyone can perceive whatever they choose, often based on fictions.

It is impossible to measure the success or failure of O/D balance theory in cyberspace given the conditions laid out by its proponents. Absent measurement, scholars and policymakers are making predictions that can never be falsified. In short, we can never know who is wrong or right. Glaser and Kaufmann counter that the claim that the theory cannot be measured is “simply incorrect.” They offer a reformulation of the O/D balance as the ratio of the attacker’s costs to the costs of defending territory. This premise is inoperable in cyber security for the simple reason that there is no territory to take.

The challenge of distinction then returns: how would one measure the costs to defend versus the costs to attack? While it might seem simple in the abstract, would one classify US Cyber Command (USCYBERCOM) as offensive and the Department of Homeland Security (DHS) as defensive? Such simple distinctions betray the fluidity of computer network operations and the pace at which bureaucratic organizations operate and share talent. Glaser and Kaufmann dismiss these challenges, suggesting that “ballpark estimates of the balance may be sufficient.” Yet “ballpark” estimates encourage the classification of success and the dismissal of failure absent more precise metrics.

Ending Dangerous Conjectures

The failure of the O/D balance literature matters because a misguided focus on the balance between offensive and defensive operations clouds understandings of cyber strategy. It also pushes practitioners towards language that does not describe the nature of cyber operations. It is near impossible to classify cyber actions as offensive or defensive, and even more difficult to measure the effectiveness of those actions. The mental gymnastics required to argue that leaders can accurately measure the O/D balance in cyberspace rapidly become untenable.

The belief in the utility of aggression is dangerous and likely a reaction to the threat inflation pervasive in the discourse. The pathology of offensive advantage and of defenders under siege is reinforced by the discourse in the media and social media about a constant barrage of cyber-attacks. This pathology will lead to strategic malaise and constant attacks, as defenders fail to shore up vulnerabilities.

Conflict is a continuum. As states build towards conflict, small actions can add up and interact with larger factors, such as territoriality, to produce warfare. From this perspective, the distinction between “offensive” and “defensive” actions has little value.

The premise of O/D balance theory provides poor policy advice, sometimes leading policymakers to propose offensive operations when these operations might be unsuited for the domain, or worse, ineffective. The focus on this theory is troubling because it minimizes the defense due to the fear of the ‘magic’ of emergent technology. Some might argue that we have failed in the defense for cyber operations, with the SolarWinds operation being a classic example. However, the reality is that states have rarely tried to do defense correctly due to bureaucratic issues, money, lack of knowledge, or because of the pull of the offense.

The misapplied and dangerous conjecture that the best defense is a good offense must end. The best defense is a real defense. Measuring success or failure in the domain is a critical task to avoid the sorts of “ballpark” empirical estimates that dominate the field. Trying to sort out just what is offensive and what is defensive distracts the policymaker and the strategic planner from developing options to protect the national security of the state and ward off the most common abuses in cyberspace. 

Brandon Valeriano is a Senior Fellow at the Cato Institute.

Event: UK Cyber Strategy Roundtable

The Offensive Cyber Working Group will be hosting a roundtable of experts to discuss the implications of the new UK Cyber Strategy on Friday 17th December at 3pm (UK time).

We’re pleased to have Dr Alexi Drew, Dr Joe Devanny, Dr Leonie Tanczer, Dr Tim Stevens, and Dr Andrew Dwyer discuss the new UK National Cyber Strategy, with Amy Ertan acting as moderator.

You can sign up to the roundtable through this link: or you can follow along on YouTube (though the YouTube stream will not be monitored for questions):

Active Cyber Defense: panacea or snake oil?

by Dr. Sven Herpig and Max Heinemeyer

Active Cyber Defense is coming to the European Union. The EU is currently working on an update of its Network and Information Security Directive (NIS Directive) which, inter alia, includes a provision on active cyber defense. Elsewhere, including the United States, the United Kingdom, and EU member states such as Germany, debates about active cyber defense have been under way for several years with varying degrees of maturity. For European member states that have not done so already, it is now time to better understand active cyber defense and its implications, and to develop a position on whether (and to what extent) they want to adopt a corresponding framework.

Active cyber defense is understood by the authors as one or more technical measures, implemented by an individual state or collectively, and carried out or mandated by a government entity, with the goal of technically neutralizing, mitigating the impact of, and/or technically attributing a specific ongoing malicious cyber operation or campaign.

While the advantages and disadvantages of such operations are contested, it has become evident that active cyber defense operations are increasingly being conducted as part of states’ strategic vision in cyberspace. This represents a paradigm shift in how many countries attempt to counter cyber operations. This article presents a set of criteria to help evaluate active cyber defense operations both before and after they are conducted. The criteria are then applied to the case study of the FBI web shell removal in the wake of the Hafnium Microsoft Exchange operation in early 2021. By deploying active cyber defense measures, the FBI was able to prevent some immediate damage to US companies and institutions, but the affected systems remained vulnerable afterwards; it was not a one-time fix.

Digitization and the evolving threat landscape drive governments to look for new approaches

Governments feel that their current approaches are not keeping their countries safe and that new methods, such as active cyber defense operations, need to be explored. They are looking at active cyber defense as an option to better deal with the escalating threat landscape of recent years. That landscape has changed significantly, and digitization has accelerated across the globe, including in critical infrastructures such as hospitals and energy systems.

In the last five years, cyber operations took place that were unprecedented in terms of scale, professionalization, speed, sophistication, and damage done. The paradigm shift in response to malicious cyber activities is a reaction to the paradigm shift experienced in the threat landscape.

Current key trends in the threat landscape include:

The trends in the threat landscape are partially enabled by the growing digitization of businesses. Digitization, in this context, covers areas such as increased use of digital infrastructure, shifts to cloud computing, Bring-Your-Own-Device (BYOD) policies, a more dynamic workforce, IT and Operational Technology (OT) convergence, and the use of digital supply chains. Growing digitization is generally positive for business growth and innovation, but it often comes at the cost of increased complexity in IT landscapes and greater dependency on the cyber security of those IT systems.

Despite diplomatic pressure to combat cybercrime and political espionage, the damages and impact of malicious cyber activities are estimated to be higher than ever and still rising.

As many governments feel that their current approaches to cyber security do not address the new threat landscape sufficiently, they are looking at new ways to bolster their national security. One of those instruments is active cyber defense. The FBI web shell removal is one of the most significant cases of such an operation in terms of intrusiveness into non-governmental IT systems, scale, speed of response, and supposed success.

Was the FBI active cyber defense operation ‘acceptable’?

The removal of web shells by the FBI is an interesting case study for the assessment of active cyber defense operations. So what happened? On March 2, 2021, Microsoft disclosed that a “state-sponsored threat actor [… (Hafnium) operating from China has …] engaged in a number of attacks using previously unknown exploits targeting on-premises Exchange Server software.” This and other malicious campaigns intruded into servers and installed web shells via the ProxyLogon vulnerabilities. Despite the availability of patches and the advisory, “hundreds of vulnerable computers in the United States” were not patched, and the respective companies did not remove the web shells. The FBI requested a search-and-seizure warrant that would enable the agency to remotely remove the web shells, because it believed “that the owners of the still-compromised web servers did not have the technical ability to remove them on their own and that the shells posed a significant risk to the victim” and, more generally, “threaten[ed] the national security and public safety of the American people and our international partners.” The FBI then employed remote access methods to search and access previously identified file paths on servers in the United States, using passwords known to have been used by the operators of the malicious cyber campaign. In the process, the agency created copies of the web shells for evidence and then “executed a command to uninstall the web shell from the compromised server.”

An active cyber defense operation can be assessed against different criteria. The following assessment helps to answer the question of whether an operation is ‘acceptable’. ‘Acceptable’ in this context means that a (proposed) active cyber defense operation will likely do more good than harm without overstepping legal, geopolitical, and technical boundaries, taking only appropriate risks where necessary. The exact limits of these parameters depend on the implementing country and the context of the operation. Additionally, the exact outcomes and (potentially unintended) impact of an operation can only be fully understood after it has concluded.

Ultimately, would a decision-maker, who has to approve the operation and can be held accountable for its outcomes, find the parameters of the active cyber defense operation ‘acceptable’?

The acceptability of this operation can be assessed by evaluating it against a framework published by one of the authors. Based on that framework, the removal of the Hafnium web shells checks many of the right boxes. The law enforcement operation took place with a clear scope within the jurisdiction of the implementing agency (‘blue space’) and with prior judicial authorization to mitigate further damage from an ongoing malicious cyber campaign—and was therefore likely in the public interest. Although the FBI deployed intrusive measures (in an at least semi-autonomous way) that may also have affected critical infrastructure, the agency consulted an independent technical expert before implementing the operation. From the risk and risk-mitigation point of view, the only complaint is the ex-post notification, which denied the targets—especially operators of potential critical infrastructure—the chance to opt out or take other precautions. How effective the operation was is more difficult to determine, but it was likely a tactical success. Because the operation removed only the web shells and did not patch the vulnerability (which would have been technically possible), it left the companies open to re-exploitation; it did, however, raise the threshold for that to happen. At the same time, the government circulated the tools provided by the vendor, and the targets of the web shell removal were informed; they were thus aware and could patch their systems and infrastructure themselves. Weighing the risks, risk mitigation, and effectiveness of the operation based on public information, the removal of the Hafnium web shells appears to have been an acceptable active cyber defense operation.

Active Cyber Defense: No panacea

The threat landscape is constantly changing. However, this is more true in terms of geopolitics and attack surfaces than in terms of technical underpinnings. Geopolitically, the focus has shifted from state-sponsored espionage campaigns to ransomware-driven cybercrime, although the former has not slowed down at all. As more and more services are digitized and new technologies such as machine learning move to the core of our everyday lives, securing their attack surfaces and supply chains becomes crucial. The way malicious cyber activities are conducted, though growing more professional every year, has not fundamentally changed—but they are happening at a faster pace and are impacting significantly more organisations than in previous years.

It is not true that companies and other organisations have suddenly become less equipped to withstand crime, espionage, and other malicious activities by putting their minds to it and focusing on IT security and resilience. Rather, facing the current threat landscape, governments are increasingly toying with the idea of active cyber defense operations—or even implementing them—to address the increased impact and breadth of malicious activities, knowing full well that they will not serve as a panacea.

However, there may be occasions where a well-planned and well-executed active cyber defense operation will neutralize or mitigate the effects of malicious cyber activity and/or help attribute it, and therefore bolster national security as a complement to IT security and resilience measures. For those cases, governments need a sound framework with strong safeguards to enable the safe deployment of limited active cyber defense measures. We recently published a study on what such a framework and safeguards could look like, and we hope it will contribute to active cyber defense policy debates around the globe.

Sven Herpig is the Director for International Cyber Security Policy at Stiftung Neue Verantwortung e.V. (SnV Berlin).

Max Heinemeyer is Director of Threat Hunting at Darktrace.