Subversion over Offense: Why the Practice of Cyber Conflict Looks Nothing Like Its Theory and What This Means for Strategy and Scholarship

Cyber attacks are both exciting and terrifying, but the ongoing obsession with ‘cyber warfare’ clouds analysis and hampers strategy development. Much commentary and analysis of cyber conflict continues to use the language of war, where actors use ‘offensive cyber operations’ to meet adversaries in ‘engagements’, striving for victory on the ‘battlefield’ in the ‘cyber domain’. This discourse persists despite a growing consensus that cyber operations are primarily relevant in conflict short of war. For example, even the United States’ new strategy of ‘persistent engagement’, developed to meet challengers in such conflict, nonetheless implies a military dimension of ‘engaging the enemy’ in its very name. Sometimes, this dogged adherence to the conceptual framework of war takes on almost comical dimensions, with a recently published book proposing that “cyberwarfare…is modifying warfare into non-war warfare”. If cyber conflict is not war, why should we continue to look to the concepts, theories and language of war to understand and explain it? Moreover, not only do analysts agree that cyber operations are primarily relevant in non-military competition, but the available evidence also indicates that cyber operations are ineffective instruments of force projection.

The theory and perception of cyber conflict thus increasingly differ from its observed practice. Visions of cyberwar date back to the beginnings of scholarly engagement with the opportunities and challenges that information technologies bring to conflict. John Arquilla and David Ronfeldt warned ‘Cyberwar is Coming’, heralding a new form of conflict. Accordingly, a subsequent wave of theorizing foresaw a revolution in military affairs enabled by the information revolution. Neither has manifested in practice. The foundational notion of a revolution in conflict has lived on, however—only the type of conflict has changed. Accordingly, recent literature suggests cyber operations enable a new strategic space of conflict short of war, marked by a condition of ‘unpeace’ and opening a new way to ‘shape’ world politics in one’s favor. In short, the expectation is that cyber operations transform conflict by offering a way to attain strategic goals that were previously unreachable without going to war. This transformative influence is attributed to the presumed superior speed, scale, and anonymity of cyber operations. Empirical studies of cyber operations challenge these expectations, however, revealing extensive lead times, operational complexity, and yet limited impact. Moreover, governments and private-sector actors are increasingly adept at attributing cyber operations to their sponsors, and sometimes do so publicly.

In practice, cyber operations thus often fall short of their revolutionary promise. The reason, I argue in this article recently published in International Security, is their subversive nature. Rather than a military offensive, where actors deploy troops to overpower an adversary and compel them to one’s will through force, cyber operations produce outcomes by sneaking into systems, exploiting flaws in their design to make these systems do things neither their designers nor their users intended or anticipated. This mechanism of exploitation is directly analogous to the mechanisms intelligence agencies have long used in subversive covert operations—i.e. instruments of power in conflict short of war. Hence, examining the nature of subversion helps explain the nature of cyber conflict. It closes the gap between theory and practice and dispels some of the persistent myths about cyber conflict’s revolutionary impact.

Subversion is an understudied instrument of power used in non-military covert operations. It exploits vulnerabilities to secretly infiltrate a system of rules and practices in order to control, manipulate, and use that system to produce detrimental effects against an adversary. In traditional subversion, states target social systems. Typically, states have used undercover spies to infiltrate groups and institutions, establish influence within them, and then use this influence to produce desired outcomes against an adversary. In other words, subversion turns an adversary’s own systems against it. This mechanism enables a wide range of possible effects, whose scope and scale always depend on the properties of the targeted systems: influence on public opinion, disintegration of social cohesion, economic disruption, infrastructure sabotage, influence on government policy, and, in the extreme case, the overthrow of a government.

Because subversion is secret and produces effects indirectly through target systems, it holds great strategic promise. If successful, it offers a way to influence and weaken adversaries at lower costs and risks than war. Recent research has shown that secrecy lowers escalation risks even in military covert operations, and lower-intensity operations naturally involve even lower risks. Moreover, compared to warfare—both overt and covert—where actors must deploy material capabilities and troops, subversion involves minimal resource requirements since it primarily depends on adversary assets. In theory, subversion thus offers a way to shift the balance of power without going to war and without the dangers war entails. Here lies a key parallel to current expectations about cyber operations.

Importantly, however, subversion often falls short of fulfilling this promise because of operational constraints and trade-offs that produce a crippling trilemma. Exploitation involves a distinct set of challenges that slow the speed of operations, limit the intensity of effects, and limit the control subversive actors have over those effects. Crucially, improving performance on one of these three variables tends to degrade it on the other two. For example, moving faster tends to lower intensity and control. These interactions produce a trilemma: speed, intensity and control are key determinants of strategic value, yet at a given level of resource investment actors can at best maximize one. In practice, subversion is thus often too slow, too weak and too volatile to achieve strategic value. Accordingly, throughout the Cold War policymakers and analysts overestimated its effectiveness—another striking parallel to cyber operations, underlining their shared limitations. With their reliance on exploitation, cyber operations share not only the promise but also the pitfalls of subversion that result from this trilemma.

Hacking, the key instrument used in cyber operations, means exploiting vulnerabilities in systems and in the way they are used. There are two main types of vulnerabilities actors can exploit: technical and social. Technical vulnerabilities are flaws in the technology itself, most frequently flaws in the logic of the programming code that determines a computer system’s behavior. Code inevitably contains flaws because humans are fallible. Hackers exploit such flaws to gain unauthorized access to systems and establish control over them, often by using malware, specialized malicious programs designed for exploitation. Social vulnerabilities concern pathologies in human behavior and weaknesses in the security practices guiding the use of technology. Phishing emails are a key example of this form of exploitation, also known as ‘social engineering’.
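To make the notion of a technical vulnerability concrete, here is a deliberately simplified toy example (ours, not drawn from any real system): an access check that compares only the textual prefix of a requested file path. Its designers intended it to confine requests to a web root, but the logic overlooks relative path traversal, so the check approves paths that resolve far outside it.

```python
import os.path

def is_allowed(requested_path: str) -> bool:
    # Intended to restrict access to files under /var/www/public,
    # but only inspects the string prefix, not the resolved path.
    return requested_path.startswith("/var/www/public")

# The flaw: a traversal sequence passes the prefix check while
# resolving to a sensitive file outside the web root.
benign = "/var/www/public/index.html"
malicious = "/var/www/public/../../../etc/passwd"

print(is_allowed(benign))           # allowed, as intended
print(is_allowed(malicious))        # also allowed: the logic flaw
print(os.path.normpath(malicious))  # resolves to /etc/passwd
```

The system does exactly what its code says, yet something neither its designers nor its users intended; this gap between intended and actual behavior is what exploitation leverages.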

While the systems targeted differ, this mechanism of exploitation is functionally the same as in traditional subversion. Accordingly, as in traditional subversion, the hacking groups behind cyber operations face significant constraints in speed, intensity and control. As a starting point, cyber operations cannot produce effects without the presence of vulnerabilities. Where there is no way into a system, hackers cannot create one by force. Identifying suitable vulnerabilities is thus a necessary condition for success. Doing so requires reconnaissance and analysis, which takes time and slows operational speed. Acquiring means of exploitation usually requires developing them, which takes further time. Generic exploits are publicly available, but precisely because they are public, careful victims will have updated their systems to remove the vulnerabilities such exploits target. Alternatively, hackers can bypass the development time by buying custom, hitherto unknown exploits (known as 0-days) on the grey market, but doing so requires significant financial investment.

Meanwhile, actors must remain hidden and depend on adversary systems that are seldom fully familiar. Both characteristics limit the intensity of effects. First, to maintain access to a system, hackers must avoid alerting the victim to their presence. Once discovered, victims have several means to neutralize a compromise, ranging from patching software to removing malware to disconnecting and shutting down the affected systems. More sensitive systems are likely to be better protected, raising the chance of discovery. Similarly, the greater the scale of a compromise becomes, i.e. the more systems hackers manage to infect with malware or otherwise exploit, the likelier it is that one of these systems’ owners or users discovers the unauthorized access.
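The scale effect in the last sentence can be illustrated with a back-of-the-envelope model (our simplification, not a claim from the article): if each compromised system independently carries even a small chance of exposing the intrusion in a given period, the chance that at least one does grows steeply with the number of systems exploited.

```python
def detection_probability(p: float, n: int) -> float:
    """Chance that at least one of n independently monitored systems
    exposes the intrusion, if each does so with probability p."""
    return 1 - (1 - p) ** n

# Assumed per-system exposure chance of 2% per period (illustrative)
for n in (1, 10, 100):
    print(n, round(detection_probability(0.02, n), 3))
```

With these invented numbers, discovery is a 2% risk for a single compromised system but roughly an 87% risk across a hundred, which is why scaling up a compromise works against keeping it secret.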

Finally, secrecy and dependency on adversary systems also limit control over effects. Control is limited by definition, since victims can neutralize a compromise in different ways upon discovery. Depending on the measures hackers use to hide their presence and persist within a system, doing so is not always easy—but it is always possible. Even without discovery, hackers can only establish control over those parts of a system they are sufficiently familiar with to identify flaws that even its designers and owners have missed. And even with that hurdle passed, there is no guarantee those parts of the system will respond to manipulation in the way the hackers expect. Hackers are as fallible as the designers of the systems they aim to exploit, and hence just as prone to making mistakes or missing something. Consequently, when hackers proceed to manipulate a target system, it may behave not only differently than its designers expect, but also differently than the hackers expect. The manipulation may fail to produce any effect, or it may produce an unintended one, with unintended consequences.

Importantly, as with traditional subversion, these constraints are not merely individual hurdles but interact in a way that forms a trilemma. At a given level of resource investment, increasing the speed of a cyber operation tends to produce corresponding losses in intensity and control. The faster one moves, the less time there is for reconnaissance and development. With less time for reconnaissance and development, hackers gain less scope and scale of access to systems than they would with more time. Consequently, the scope and scale of effects they can produce through these systems is also more limited. Similarly, with less time spent, the risk of missing something or making mistakes increases. Hence, the risks of being discovered prematurely (before attempting to produce an effect), of failing to produce an effect, and of unintended consequences all increase as well.

The trilemma persists, limiting the strategic value of cyber operations. In theory, it is possible to launch speedy operations that achieve massive scale and yet produce carefully calibrated effects. In practice, however, the trilemma typically renders cyber operations too slow, too weak and too volatile to produce strategic value when, where and how it is needed. Accordingly, my research shows that Russia’s cyber operations against Ukraine by and large failed to measurably contribute to its strategic goals. Moreover, the evidence shows that across the five disruptive cyber operations Russia deployed against Ukraine, the causes behind this limited strategic value were the constraints predicted by the trilemma.

When assessing the threat posed by hostile cyber operations and developing counterstrategies, it is crucial to start from a level-headed analysis of what is feasible in practice, rather than what is possible in theory. Focusing on possibilities alone not only distorts debates and hampers scholarship, but also undermines strategic responses. For example, ongoing fears of ‘Cyber Pearl Harbor’ or ‘Cyber 9/11’ events risk wasting valuable resources on preventing and mitigating such unicorn events. These resources and this intellectual energy are then unavailable for efforts to counter lower-intensity subversive campaigns aiming to undermine social cohesion, cause public and economic disruption, and sabotage infrastructure.

Fortunately, the theory and the evidence from Ukraine outlined here indicate that the threat such operations pose is often overstated. Yet that does not mean we can sit back and trust that everything will be fine. In particular, if subversion is allowed to fester over longer periods of time, the likelihood that it achieves some of its goals increases. To prevent this outcome, rather than maximizing engagements with adversaries, as persistent engagement does, strategies that build on established counterintelligence practice promise greater rewards. Because subversion takes time, persistence is important in countering it. But so are efforts to exacerbate the shortcomings of subversion, namely increasing detection capabilities and raising the unpredictability of one’s own systems to increase adversary uncertainty and control challenges.

Here lies a key potential for European states to innovate, particularly considering the European Union’s preference for the non-violent resolution of disputes over more aggressive interference. Currently, most cybersecurity strategy development and defensive measures are outsourced to NATO, a military alliance. Accordingly, threats continue to be framed in military terms—most recently in a push to counter ‘cognitive warfare’, a new term for influence operations and disinformation, classic staples of subversion.

Lennart Maschmeyer is a Senior Researcher at the Center for Security Studies at ETH Zurich.

Does the Cyber Offense Have the Advantage?

There is a simple conjecture that is quite common in all aspects of society: “the best defense is a good offense.” Because of its simplicity, and because of a general failure to evaluate claims against evidence, this idea persists and fosters the belief that action can trump protection in cyber security. The complexity of computers can give the impression that little is known about their functions, leading to the formation of an idea that became conventional wisdom: attack first, sort out the details later.

Yet, there is no evidence that the offense is the best course of action in cyber security. The concept of the offense/defense balance (hereafter O/D balance) has long been studied in International Relations. The basic premise is that “when defense has the advantage over offense major war can be avoided.” This simple conjecture has created a field of research that seeks to unlock the mysteries behind war and peace by focusing on the nature of operations and attack profiles.

Seemingly unknown to most cyber security scholars, this literature became confused over how to measure the balance between offense and defense, and even over its central variables. The recent passing of Robert Jervis highlights the power and breadth of his work. While Jervis’ work kicked off the modern era of research on the O/D balance, he also highlighted the need for a distinction between offensive and defensive operations.

Moreover, even if we accepted the doubtful claim of an offensive advantage as empirically accurate and measurable, this idea nonetheless fails to clearly motivate action. States assuming an offensive advantage might be deluded in their perspective, as happened during World War I. Alternatively, a state might go on the offense anyway, driven by other motivating reasons, such as the importance of a territorial claim or the need to signal discontent.

Challenging the Idea of an Offense/Defense Balance in Cyberspace

There are three core problems with the O/D balance: the indistinguishability of its variables; the failure to examine how perceptions shape a sense of balance; and the difficulty of measurement.

The key challenge for discussions of offense or defense in cyberspace is that it is near impossible to distinguish between the two frames. The fluidity of the concepts of offense and defense makes the terms virtually useless for research. Moves that are said to be defensive involve forward maneuvers that can seem offensive in nature, a common confusion with the U.S. strategy of “defend forward.” While cyber mission forces can go on the attack, they can also be posted as defensive operators seeking to stop attacks before they happen. The active and adaptive nature of modern technology makes the distinction between offense and defense entirely empty.

A key foundation of the O/D balance is the idea that each side will correctly perceive either the offense or defense as having the advantage, determining the probability for war. Yet, as critics have pointed out “it is inherently difficult to assess the impact of weapons technologies, particularly when they have not been employed in war.” Perceptions of cyber power and an emphasis on offensive dominance are in the eye of the beholder with many doubting the offensive power of the United States or the defensive capability of the North Koreans in an isolated network. In a domain that operates mostly without empirical evidence, anyone can perceive whatever they choose, often based on fictions.

It is impossible to measure the success or failure of the theory of the O/D balance in cyberspace given the conditions laid out by its proponents. Absent measurement, scholars and policymakers are making predictions that can never be falsified. In short, we can never know who is wrong or right. Glaser and Kaufmann dismiss the idea that the theory cannot be measured as “simply incorrect.” They offer a reformulation of the O/D balance as the ratio of the attacker’s costs to the costs of defending territory. This premise is inoperable in cyber security for the simple reason that there is no territory to take.

The challenge of distinction then returns: how would one measure the costs to defend versus the costs to attack? While it might be simple in the abstract, would one classify US Cyber Command (USCYBERCOM) as offensive and the Department of Homeland Security (DHS) as defensive? Such simple distinctions belie the fluidity of computer network operations and the pace at which bureaucratic organizations operate and share talent. Glaser and Kaufmann dismiss these challenges, suggesting that “ballpark estimates of the balance may be sufficient.” Yet, absent more precise metrics, “ballpark” estimates encourage the classification of success and the dismissal of failure.

Ending Dangerous Conjectures

The failure of the O/D balance literature is critical because a misguided focus on the balance between offensive and defensive operations clouds understandings of cyber strategy. It also forces practitioners towards leveraging language that does not describe the nature of cyber operations. It is near impossible to distinguish cyber actions between offense and defense and even more difficult to measure the effectiveness of said actions. The mental gymnastics required to argue that leaders can accurately measure the O/D balance in cyberspace rapidly become impractical.

The belief in the utility of aggression is dangerous and likely a reaction to the threat inflation pervasive in the discourse. The pathology of offensive advantage and of defenders under siege is reinforced by the discourse in the media and social media about a constant barrage of cyber-attacks. This pathology will lead to strategic malaise and constant attacks, as defenders fail to shore up vulnerabilities.

Conflict is a continuum. As states build towards conflict, small actions can add up and interact with larger factors, such as territoriality, to produce warfare. From this perspective, the distinction between “offensive” and “defensive” actions has little value.

The premise of O/D balance theory provides poor policy advice, sometimes leading policymakers to propose offensive operations when these operations might be unsuited for the domain or, worse, ineffective. The focus on this theory is troubling because it minimizes defense out of fear of the ‘magic’ of emergent technology. Some might argue that we have failed at defense against cyber operations, with the SolarWinds operation being a classic example. However, the reality is that states have rarely tried to do defense correctly, whether due to bureaucratic obstacles, funding, lack of knowledge, or the pull of the offense.

The misapplied and dangerous conjecture that the best defense is a good offense must end. The best defense is a real defense. Measuring success or failure in the domain is a critical task to avoid the sorts of “ballpark” empirical estimates that dominate the field. Trying to sort out just what is offensive and what is defensive distracts the policymaker and the strategic planner from developing options to protect the national security of the state and ward off the most common abuses in cyberspace. 

Brandon Valeriano is a Senior Fellow at the Cato Institute.

Event: UK Cyber Strategy Roundtable

The Offensive Cyber Working Group will be hosting a roundtable of experts to discuss the implications of the new UK Cyber Strategy on Friday 17th December at 3pm (UK time).

We’re pleased to have Dr Alexi Drew, Dr Joe Devanny, Dr Leonie Tanczer, Dr Tim Stevens, and Dr Andrew Dwyer discuss the new UK National Cyber Strategy, with Amy Ertan acting as moderator of the discussion.

You can sign up to the roundtable through this link: https://durhamuniversity.zoom.us/webinar/register/WN_krKUTHfjSD6ILWE6jkQpiA or you can follow along on YouTube (although the YouTube stream will not be monitored for questions): https://youtu.be/auIkD_ppoDI

Active Cyber Defense: panacea or snake oil?

by Dr. Sven Herpig and Max Heinemeyer

Active Cyber Defense is coming to the European Union. The EU is currently working on an update of its Network and Information Security Directive (NIS Directive) which, inter alia, includes a provision on active cyber defense. In other places, such as the United States, the United Kingdom and even member states of the European Union, such as Germany, debates about active cyber defense have been conducted for several years with varying degrees of maturity. For European member states that have not done so already, it is now time to better understand active cyber defense and its implications, and to develop a position on whether (and to what extent) they want to adopt a corresponding framework.

The authors understand active cyber defense as one or more technical measures, implemented by an individual state or collectively, carried out or mandated by a government entity, with the goal of technically neutralizing, mitigating the impact of, and/or technically attributing a specific ongoing malicious cyber operation or campaign.

While the advantages and disadvantages of such operations remain controversial, it has become evident that active cyber operations are increasingly being conducted as part of states’ strategic visions in cyberspace. This represents a paradigm shift for many countries in how they attempt to counter cyber operations. This article presents a set of criteria to help evaluate active cyber defense operations both before and after they are conducted. Subsequently, the criteria are applied to the case study of the FBI web shell removal in the wake of the Hafnium Microsoft Exchange operation in early 2021. The FBI was able to prevent some immediate damage to US companies and institutions by deploying active cyber defense measures, but those organizations remained vulnerable even afterwards – it was not a one-time fix.

Digitization and the evolving threat landscape drive governments to look for new approaches

Governments feel that their current approaches are not keeping their countries safe and that new methods, such as active cyber defense operations, need to be explored further. Countries are looking at active cyber defense operations as an option to better deal with the escalating threat landscape of recent years. The threat landscape has changed significantly, and digitization has accelerated across the globe, including in critical infrastructures such as hospitals or energy systems.

In the last five years, cyber operations have taken place that were unprecedented in terms of scale, professionalization, speed, sophistication, and damage done. The paradigm shift in responses to malicious cyber activities is a reaction to the paradigm shift experienced in the threat landscape.

Current key trends in the threat landscape include:

These trends in the threat landscape are partially enabled by the growing digitization of businesses. Digitization, in this context, covers areas such as the increased use of digital infrastructure, shifts to cloud computing, Bring-Your-Own-Device (BYOD) policies, a dynamic workforce, IT and Operational Technology (OT) convergence, and the use of digital supply chains. Growing digitization is generally positive for business growth and innovation, but it often comes at the cost of increased complexity in IT landscapes and greater dependency on the cyber security of those IT systems.

Despite diplomatic pressure to combat cybercrime and political espionage, the damages and impact of malicious cyber activities are estimated to be higher than ever and still rising.

As many governments feel that their current approaches to cyber security do not address the new threat landscape sufficiently, they are looking at new ways to bolster their national security. One of those instruments is active cyber defense. The FBI web shell removal is one of the most significant cases of such an operation in terms of intrusiveness into non-governmental IT systems, scale, speed of response, and supposed success.

Was the FBI active cyber defense operation ‘acceptable’?

The removal of web shells by the FBI is an interesting case study for the assessment of active cyber defense operations. So what happened? On March 2, 2021, Microsoft disclosed that a “state-sponsored threat actor [… (Hafnium) operating from China has …] engaged in a number of attacks using previously unknown exploits targeting on-premises Exchange Server software.” This and other malicious campaigns were able to intrude into servers and install web shells on them via the ProxyLogon vulnerabilities. Despite the availability of the patches and the advisory, “hundreds of vulnerable computers in the United States” were not patched, and the respective companies did not remove the web shells. The FBI requested a search-and-seizure warrant that would enable the agency to remotely remove the web shells because it believed “that the owners of the still-compromised web servers did not have the technical ability to remove them on their own and that the shells posed a significant risk to the victim” and, more generally, “threaten[ed] the national security and public safety of the American people and our international partners.” The FBI then employed remote access methods to search and access previously identified file paths on servers in the United States, using passwords known, detected, and commonly used by the operators of the malicious cyber campaign. In the process, the agency created copies of the web shells for evidence and then “executed a command to uninstall the web shell from the compromised server.”
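The copy-then-uninstall sequence described in the warrant can be sketched in miniature. The following is a hypothetical illustration, not the FBI’s actual tooling, and all paths and file names are invented: given a list of known web-shell locations, it preserves an evidence copy of each file found and then deletes it.

```python
import shutil
import tempfile
from pathlib import Path

def remove_web_shells(server_root: Path, known_paths: list[str],
                      evidence_dir: Path) -> list[str]:
    """For each known web-shell path under server_root, copy the file
    to evidence_dir (preserving metadata) and then remove it."""
    removed = []
    evidence_dir.mkdir(parents=True, exist_ok=True)
    for rel in known_paths:
        shell = server_root / rel
        if shell.is_file():
            shutil.copy2(shell, evidence_dir / shell.name)  # evidence copy
            shell.unlink()                                  # uninstall the shell
            removed.append(rel)
    return removed

# Simulate a compromised server with one planted web shell
root = Path(tempfile.mkdtemp())
(root / "aspnet_client").mkdir()
(root / "aspnet_client" / "shell.aspx").write_text("<%-- web shell --%>")

removed = remove_web_shells(root, ["aspnet_client/shell.aspx"],
                            root / "evidence")
print(removed)  # ['aspnet_client/shell.aspx']
```

Note what the sketch mirrors from the case: it removes only the known shells and preserves evidence, but does not patch the underlying vulnerability, so the server remains exploitable afterwards.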

An active cyber defense operation can be assessed against different criteria. The following assessment helps to answer the question of whether an operation is ‘acceptable’. ‘Acceptable’ in this context means how far a (proposed) active cyber defense operation will (likely) ultimately do more good than harm without overstepping legal, geopolitical, and technical boundaries while taking only appropriate risks where necessary. The exact limits of these parameters depend on the implementing country and the context of the operation. Additionally, the exact outcomes and (potentially unintended) impact of an operation can only fully be understood after the operation has concluded.

Ultimately, would a decision-maker, who has to approve the operation and can be held accountable for its outcomes, find the parameters of the active cyber defense operation ‘acceptable’?

An assessment of the acceptability of this operation can be made by evaluating it against a framework published by one of the authors. Based on that framework, the removal of the Hafnium web shells checks many of the right boxes. The law enforcement operation took place with a clear scope in the jurisdiction of the implementing agency (‘blue space’) and with prior judicial authorization to mitigate further damage stemming from an ongoing malicious cyber campaign—and, therefore, was likely in the public interest. Although the FBI deployed (in an at least semi-autonomous way) intrusive measures that may have also affected critical infrastructure, the agency consulted an independent technical expert before implementing the operation. From the risk and risk-mitigation point of view, the only complaint concerns the ex-post notifications, which denied the targets, especially potential critical infrastructure operators, the chance to opt out or take other precautions. How effective the operation was is more difficult to determine, but it was likely a tactical success. Because the operation removed only the web shells and did not patch the vulnerability (which would technically have been possible), it left the companies vulnerable to re-exploitation. However, the operation raised the threshold for that to happen. At the same time, the tools provided by the vendor were circulated by the government, and the targets of the web shell removal were informed; thus, they were aware and could patch their systems and infrastructure themselves. Weighing the risks, risk mitigation and effectiveness of the operation based on public information, it seems that the removal of the Hafnium web shells was an acceptable active cyber defense operation.

Active Cyber Defense: No panacea

The threat landscape is constantly changing. However, this is more true in terms of geopolitics and attack surfaces than in terms of technical underpinnings. Geopolitically, the focus has shifted from state-sponsored espionage campaigns to ransomware-driven cybercrime, although the former has not slowed down at all. As more and more services are digitized and new technologies such as machine learning reach the core of our everyday lives, securing their attack surfaces and supply chains becomes crucial. The way malicious cyber activities are conducted, though becoming more professional every year, has not fundamentally changed – but they are happening at a faster pace and are impacting significantly more organisations than in previous years.

It is not true that companies and other organisations have suddenly become less able to withstand crime, espionage, and other malicious activities; they can still do so by putting their minds to it and focussing on IT security and resilience. Facing the current threat landscape, governments are increasingly toying with the idea of, or even implementing, active cyber defense operations to address the increased impact and breadth of malicious activities, knowing full well that they will not serve as a panacea.

However, there may be occasions where a well-planned and well-executed active cyber defense operation will neutralize or mitigate the effects of a malicious cyber activity and/or help attribute it, and thereby bolster national security as an addition to IT security and resilience measures. For those cases, governments need a sound framework with strong safeguards to enable the safe deployment of limited active cyber defense measures. We recently published a study on what such a framework and safeguards could look like, and we hope it will contribute to active cyber defense policy debates around the globe.

Sven Herpig is the Director for International Cyber Security Policy at Stiftung Neue Verantwortung e.V. (SnV Berlin).

Max Heinemeyer is Director of Threat Hunting at Darktrace.

Making the Concept of Violence Central to the Study of Offensive Cyber Operations

Dr Florian J Egloff, Dr James Shires

Cyberspace is everywhere. It is so prevalent that the concept has started to lose its functional utility – and, as the recent Facebook rebrand demonstrates, big tech companies still want to make cyber interactions even more seamless and attractive. For the majority of the world’s population with access to the internet, life offline is increasingly difficult to imagine; and for those without, this lack is increasingly understood as detrimental to their fundamental human rights. 

Cybersecurity is the foundation of our online life, while cyber insecurity is its Achilles’ heel. Within this broader picture, offensive cyber operations by states are an important – but far from the only – cause of global cyber insecurity. The effects of state offensive cyber operations are wide-ranging, with harms from leaked or deleted personal data to the non-functioning of critical infrastructures such as oil pipelines. Categorizing and prioritizing these harms is difficult, as scholars and policymakers struggle to draw standard distinctions between peace and war, espionage and covert action, and military and intelligence functions.

However, studies of offensive cyber operations have rarely engaged with these harms as forms of violence. When they have, violence was often conceived very simply: to break things and kill people. We think the time is ripe to refine our assessment of what violence means in a digital era. To that end, we have written two articles laying out what violence is in relation to offensive cyber operations, and how offensive cyber operations are integrated into the violent tools of statecraft. Together, these articles offer a new perspective on the harms of offensive cyber operations, one which we hope helps sidestep or resolve the longstanding controversies above. In this blog post, we give an overview of the results.

First, let’s take a step back, as the disciplinary evolution of political science and international relations has an important lesson for the study of offensive cyber operations. In reaction to what was seen as an overly statist focus on systemic or strategic issues (such as nuclear stability) during the Cold War, the subfield of political violence sought to reorient these disciplines towards the study of violent acts committed for political purposes, whether by states at war, by armed groups and other non-state actors in civil wars, or in situations of unrest and revolution. Their conceptual rationale was that these are all part of a single continuum of organized violence, and so studying them together makes good theoretical sense. Their normative rationale was that the moral aim of studying war and conflict is to prevent or ameliorate its devastating impacts, and so a focus on violence (rather than, for example, stability) directs attention to the problems we need to solve most urgently. 

Issues of political violence may seem starkly removed from the study of offensive cyber operations, because the current consensus is that cyber operations are almost always non-violent. It is very difficult to use cyber means to cause death and destruction in the manner of missiles, machetes, or machine guns. The most impactful cyber operations to date have caused extensive disruption with significant economic losses, but in each case systems recovered shortly afterward – albeit with intense effort – and no one died. This lack of violence is even seen as the unique promise of offensive cyber operations, as states and other actors, such as financially-motivated cyber criminals, can achieve their goals in a more “civilized” way. Ransomware holds data hostage, rather than kidnapping real people. Offensive cyber operations could thus almost be seen as the “better angels of our digital nature”.

However, we think this reading relies on too narrow a definition of violence. The field of political violence is itself split between a narrow “minimalist” concept of violence referring to physical harm (often operationalized crudely as numbers of deaths), and a broader view of violence including psychological and community harms. This broader view of violence is gaining ground in international law, as scholars recognize the psychological and societal impacts of war and conflict, as well as in diverse policy arenas from cyber-bullying to intimate partner violence. The study of offensive cyber operations can also benefit from this broader view – which we term “harm to areas of human value” including bodily, affective, and community aspects. 

This has clear consequences for the kinds of operations we study. While highly-targeted cyber-espionage campaigns such as SUNBURST make global headlines, and might well have strategic national security consequences (e.g. by transferring state secrets or commercial intellectual property), these are not the most violent consequences of offensive cyber capabilities. Instead, repressive use of surveillance operations, or the sabotage of critical infrastructure, could be much more devastating. Focusing on violence shifts us away from the disputed strategic impact of cyber-espionage towards more destructive operations. Conceptually, it means we should no longer privilege sophisticated state actors over cybercrime gangs or intimate partner surveillance; and normatively we should prioritize reducing harm over measuring shifts in the international balance of power.

In this expanded definition, when do cyber operations stop being violent? In terms of harm, there is no lower bound, and so context-specific assessments of severity are crucial. But our expanded definition includes criteria of intentionality – violence must be deliberate – and proximity – violence must be causally significant. Offensive cyber operations complicate both criteria. Many cyber operations have consequences far beyond those originally intended, due to the interconnectedness of digital networks, and at the same time they are far less causally proximate than kinetic weapons, as they manipulate information systems that are embedded in complex ways across state borders. Overall, the less deliberate and the less proximate the cyber component, the less violent the operation.

One might respond that this is all a bit abstract. Cyber operations don’t take place in a vacuum, and the important thing is not only the (lack of) violence of cyber operations, but also the violent consequences of their alternatives. The issue is relative, not absolute. We strongly endorse this view, and so in a separate article we put forward three logics of integration of cyber capabilities into violent state structures. These logics – substitution, support, and complement – weigh the benefits of using offensive cyber capabilities (OCCs) against an adversary instead of, as part of, and in addition to other means of violence, respectively. 

Table 1. The Three Logics of Integration and Their Effect on Violence

| Logic | Substitute | Support | Complement |
| --- | --- | --- | --- |
| Summary | OCCs replace other means of achieving a particular end | OCCs are combined with other means to help achieve that end | OCCs achieve an end not available by other means |
| Effect on violence (narrow definition) | Less violent: OCCs achieve the same end without or with less physical harm | Less violent: OCCs are more precisely targeted, and concerns about indirect effects limit use | Irrelevant: complementary effects of OCCs are not physically damaging and so not violent |
| Effect on violence (broad definition) | Unclear: affective/community harms could outweigh physical damage depending on context | Unclear: affective harms occur even with better targeting; a shift in, not a decrease of, repression | More violent: affective/community harms caused by OCCs increase overall levels of violence |

What does this table show? Where many might think that substituting a cyber operation for a conventional means of violence leads to less violence, we argue that this is not necessarily so. Rather, it is an empirical question of the scale and scope of (also non-bodily) harm. The same can be said for supporting operations.

The most striking change is, however, in the area of complementary operations, i.e. offensive cyber capabilities that produce genuinely new forms of causing harm, for example digital repression or logical (but disabling) attacks against civilian data. Such complementary uses of OCCs are automatically nonviolent in a narrow definition, because they have not – so far – caused bodily harm or death. In a broader understanding, these operations increase overall levels of violence. 

For example, with regard to interstate violence, the notorious NotPetya operation is violent, though the exact intent of the attackers matters for the judgment of its severity. Regarding repression, the complementary use of OCCs to create an environment of pervasive censorship and fear, as in Xinjiang, also implies increased violence on an expanded definition. When particular groups are targeted by censorship technologies, there are effects on affective life (individual identities, including gender and ethnic identifications) and communal areas of value (social relationships and, at the larger scale, national identities).

Worryingly, it is precisely these new forms of harm that are hardest to capture with a policy apparatus built for a non-digital era. Concerns around escalation as a result of offensive cyber operations should be reoriented toward violent escalation, recognizing that some uses of OCCs could be strategically escalatory – e.g. SUNBURST – but without an accompanying increase in violence. 

Policy responses to cyber operations should also be calibrated based on their logics of integration: supportive and substitutive uses are more likely to be amenable to existing frameworks, while complementary uses present a far more novel policy challenge. Acknowledging complementary uses of OCCs and understanding their violent effects gives defenders a better grasp of the complexity of defending against adversarial actions across a mostly civilian cyberspace.

Where next? In the articles above, we mainly consider positive cases of integration where cyber capabilities were used instead of/as part of/as well as other means. Future research should also consider negative cases where actors decided not to use cyber operations, instead staying with more conventional tactics. In these articles, we also set aside the bureaucratic politics of cyber operations – questions around institutional manoeuvring, domestic dynamics, departmental hierarchies, and individual personalities – which are of course a crucial component of decisions about when and where to deploy these capabilities. 

Ultimately, understanding cyber operations as a form of political violence helps us prioritize research and policy efforts to counter the harms they cause. The most violent uses of OCCs may not be state-sponsored cyber-espionage or sabotage, but authoritarian practices of globalised digital repression, the indirect consequences of disrupted critical infrastructures, and digitally-enabled interpersonal coercion.

Florian J. Egloff is a Senior Researcher in Cybersecurity at the Center for Security Studies (CSS) at ETH Zurich. He is the author of the forthcoming book Semi-State Actors in Cybersecurity (Oxford University Press, 2022).

James Shires is an Assistant Professor in Cybersecurity Governance at the Institute of Security and Global Affairs, University of Leiden. He is the author of The Politics of Cybersecurity in the Middle East (Hurst/Oxford University Press 2021).