Cyber attacks are both exciting and terrifying, but the ongoing obsession with ‘cyber warfare’ clouds analysis and hampers strategy development. Much commentary and analysis of cyber conflict continues to use the language of war, where actors use ‘offensive cyber operations’ to meet adversaries in ‘engagements’, striving for victory on the ‘battlefield’ in the ‘cyber domain’. This discourse persists despite a growing consensus that cyber operations are primarily relevant in conflict short of war. For example, even the United States’ new strategy of ‘persistent engagement’, developed to meet challengers in such conflict, nonetheless implies a military dimension of ‘engaging the enemy’ in its very name. Sometimes, this dogged adherence to the conceptual framework of war takes almost comical dimensions, with a recently published book proposing that “cyberwarfare…is modifying warfare into non-war warfare”. If cyber conflict is not war, why should we continue to look to the concepts, theories and language of war to understand and explain it? Not only do analysts agree that cyber operations are primarily relevant in non-military competition, but the available evidence also indicates that cyber operations are ineffective instruments of force projection.
The theory and perception of cyber conflict thus increasingly differ from its observed practice. Visions of cyberwar date back to the beginnings of scholarly engagement with the opportunities and challenges that the use of information technologies brings to conflict. John Arquilla and David Ronfeldt warned that ‘Cyberwar is Coming’, heralding a new form of conflict, and a subsequent wave of theorizing foresaw a revolution in military affairs enabled by the information revolution. Neither vision has manifested in practice. The foundational notion of a revolution in conflict has lived on, however—only the type of conflict has changed. Accordingly, recent literature suggests cyber operations enable a new strategic space of conflict short of war, marked by a condition of ‘unpeace’ and opening a new way to ‘shape’ world politics in one’s favor. In short, the expectation is that cyber operations transform conflict by offering a way to attain strategic goals that were previously unreachable without going to war. This transformative influence is attributed to the presumed superior speed, scale, and anonymity of cyber operations. Empirical studies of cyber operations challenge these expectations, however, revealing extensive lead times, operational complexity and yet limited impact. Moreover, governments and private-sector actors are increasingly adept at attributing cyber operations to their sponsors, and sometimes do so publicly.
In practice, cyber operations thus often fall short of their revolutionary promise. The reason, I argue in this article recently published in International Security, is their subversive nature. Rather than mounting a military offensive, where actors deploy troops to overpower an adversary and compel it through force, cyber operations produce outcomes by sneaking into systems, exploiting flaws in their design to make these systems do things neither their designers nor their users intended or anticipated. This mechanism of exploitation is directly analogous to the mechanisms intelligence agencies have long used in subversive covert operations, i.e. instruments of power in conflict short of war. Hence, examining the nature of subversion helps explain the nature of cyber conflict. It closes the gap between theory and practice and dispels some of the persistent myths about cyber conflict’s revolutionary impact.
Subversion is an understudied instrument of power used in non-military covert operations. It exploits vulnerabilities to secretly infiltrate a system of rules and practices in order to control, manipulate, and use that system to produce detrimental effects against an adversary. In traditional subversion, states target social systems. Typically, states have used undercover spies to infiltrate groups and institutions, establish influence within them, and then use this influence to produce desired outcomes against an adversary. In other words, subversion turns an adversary’s own systems against it. This mechanism enables a wide range of possible effects, whose scope and scale always depend on the properties of the targeted systems: influence on public opinion, disintegration of social cohesion, economic disruption, infrastructure sabotage, influence on government policy, and, in the extreme case, the overthrow of a government.
Because subversion is secret and produces effects indirectly through target systems, it holds great strategic promise. If successful, it offers a way to influence and weaken adversaries at lower cost and risk than war. Recent research has shown that secrecy lowers escalation risks even in military covert operations, and lower-intensity operations naturally involve lower risks still. Moreover, compared to warfare—both overt and covert—where actors must deploy material capabilities and troops, subversion involves minimal resource requirements since it primarily depends on adversary assets. In theory, subversion thus offers a way to shift the balance of power without going to war and incurring the dangers war entails. Herein lies a key parallel to current expectations about cyber operations.
Importantly, however, subversion often falls short of fulfilling this promise because of operational constraints and trade-offs that produce a crippling trilemma. Exploitation involves a distinct set of challenges that slow the speed of operations, limit the intensity of effects, and limit the control subversive actors have over those effects. Crucially, improving performance on one of these three variables tends to degrade it on the other two. For example, moving faster tends to lower intensity and control. These interactions produce a trilemma: speed, intensity and control are key determinants of strategic value, yet at a given level of resource investment actors can at best maximize one. In practice, subversion is thus often too slow, too weak and too volatile to achieve strategic value. Accordingly, throughout the Cold War policymakers and analysts overestimated its effectiveness—another striking parallel to cyber operations, underlining their shared limitations. With their reliance on exploitation, cyber operations share not only the promise but also the pitfalls of subversion resulting from this trilemma.
Hacking, the key instrument of cyber operations, means exploiting vulnerabilities in systems and in the way they are used. There are two main types of vulnerabilities actors can exploit: technical and social. Technical vulnerabilities are flaws in the technology itself, most frequently flaws in the logic of the programming code that determines the behavior of computer systems. Code inevitably contains flaws because humans are fallible. Hackers exploit such flaws to gain unauthorized access to systems and establish control over them, often using malware: specialized malicious programs designed for exploitation. Social vulnerabilities concern pathologies in human behavior and weaknesses in the security practices guiding the use of technology. Phishing emails are a key example of this latter form of exploitation, also known as ‘social engineering’.
While the systems targeted differ, this mechanism of exploitation is functionally the same as in traditional subversion. Accordingly, as in traditional subversion, the hacking groups behind cyber operations face significant constraints on speed, intensity and control. As a starting point, cyber operations cannot produce effects without the presence of vulnerabilities. Where there is no way into a system, hackers cannot create one by force. Identifying suitable vulnerabilities is therefore a necessary condition for success. Doing so requires reconnaissance and analysis, which takes time and thus slows operational speed. Acquiring means of exploitation usually requires developing them, which takes further time. Generic exploits are publicly available, but precisely because they are public, careful victims will have updated their systems to remove the vulnerabilities such exploits target. Alternatively, hackers can bypass the development time by buying custom, hitherto unknown exploits (known as zero-days) on the grey market, but doing so requires significant financial investment.
Meanwhile, actors must remain hidden and depend on adversary systems with which they are seldom fully familiar. Both characteristics limit the intensity of effects. First, to maintain access to a system, hackers must avoid alerting the victim to their presence. Once a compromise is discovered, victims have several means to neutralize it, ranging from patching software and removing malware to disconnecting or shutting down the affected systems. More sensitive systems are likely to be better protected, raising the chance of discovery. Similarly, the greater the scale of a compromise becomes, i.e. the more systems hackers manage to infect with malware or otherwise exploit, the likelier it is that one of these systems’ owners or users discovers the unauthorized access.
Finally, secrecy and dependency on adversary systems also limit control over effects. Control is limited by definition, since victims can neutralize a compromise in different ways upon discovery. Depending on the measures hackers use to hide their presence and persist within a system despite discovery, neutralization is not always easy for the victim, but it is always possible. Even without discovery, hackers can only establish control over those parts of a system with which they are sufficiently familiar to identify flaws that even the designers and owners have missed. And even with that hurdle passed, there is no guarantee those parts of the system will respond to manipulation in the way the hackers expect. Hackers are as fallible as the designers of the systems they aim to exploit, and hence as prone to making mistakes or missing something. Consequently, when hackers proceed to manipulate a target system, it may behave not only differently than its designers expect, but also differently than the hackers expect. The manipulation may fail to produce any effect, or produce unintended consequences.
Importantly, as with traditional subversion, these constraints are not merely individual hurdles but interact in a way that forms a trilemma. At a given level of resource investment, increasing the speed of a cyber operation tends to produce corresponding losses in intensity and control. The faster one moves, the less time there is for reconnaissance and development. With less time for reconnaissance and development, hackers gain narrower and smaller-scale access to systems than they would with more time. Consequently, the scope and scale of effects they can produce through these systems are also more limited. Similarly, with less time spent, the risk of missing something or making mistakes increases. Hence, the risks of being discovered prematurely (before attempting to produce an effect), of failing to produce an effect, and of unintended consequences all rise as well.
The trilemma persists in cyberspace, limiting the strategic value of cyber operations. In theory, it is possible to launch speedy operations that achieve massive scale yet produce carefully calibrated effects. In practice, however, the trilemma typically renders cyber operations too slow, too weak and too volatile to produce strategic value when, where and how it is needed. Accordingly, my research shows that Russia’s cyber operations against Ukraine by and large failed to measurably contribute to Russia’s strategic goals. Moreover, the evidence shows that across the five disruptive cyber operations Russia deployed against Ukraine, the causes of this limited strategic value were the constraints predicted by the trilemma.
When assessing the threat posed by hostile cyber operations and developing counterstrategies, it is crucial to start from a level-headed analysis of what is feasible in practice, rather than what is possible in theory. Focusing on possibilities alone not only distorts debates and hampers scholarship, but also undermines strategic responses. For example, ongoing fears of a ‘Cyber Pearl Harbor’ or ‘Cyber 9/11’ risk wasting valuable resources on preventing and mitigating such unicorn events. Those resources and that intellectual energy are then unavailable for efforts to counter lower-intensity subversive campaigns that aim to undermine social cohesion, cause public and economic disruption, and sabotage infrastructure.
Fortunately, the theory and the evidence from Ukraine outlined here indicate that the threat such operations pose is often overstated. Yet that does not mean we can sit back and trust that everything will be fine. In particular, if subversion is allowed to fester over longer periods of time, the likelihood that it achieves some of its goals increases. To prevent this outcome, rather than maximizing engagements with adversaries, as persistent engagement does, strategies that build on established counterintelligence practice promise greater rewards. Because subversion takes time, persistence is important in countering it. But so are efforts to exacerbate the shortcomings of subversion, namely improving detection capabilities and increasing the unpredictability of one’s own systems to raise adversary uncertainty and control challenges.
Here lies key potential for European states to innovate, particularly considering the European Union’s preference for non-violent dispute resolution over more aggressive interference. Currently, most cybersecurity strategy development and defensive measures are outsourced to NATO, a military alliance. Accordingly, threats continue to be framed in military terms—most recently in a push to counter ‘cognitive warfare’, a new term for influence operations and disinformation, classic staples of subversion.
Lennart Maschmeyer is a Senior Researcher at the Center for Security Studies at ETH Zurich.