Balancing on the rail – considering responsibility and restraint in the July 2021 Iran railways incident

JD Work [1], [2]

An intrusion against a railway network, resulting in destructive effects leading to disruption of cargo and passenger transportation, would in previous decades likely have been considered a major strategic attack. Early writings on cyber warfare posited such actions only in theory, within the context of adapting ideas of other long-range strike and sabotage operations to the access opportunities afforded in a new operating environment.[3] Yet an incident in Iran that surfaced in July 2021 has passed with only relatively limited attention, having been all too easily overlooked in the surge of recent ransomware incidents and the other immediate fires that command current intelligence production priorities and readership. Nonetheless, the case deserves greater scrutiny – as much for what was not done, as for what transpired on the wire. While superficially shocking, as an action against critical infrastructure with civil dependencies, it is nonetheless likely a rare example of responsibility and restraint in the employment of offensive cyber operations for covert action objectives.

This article will consider the operational details of the intrusion incident and its context as part of a wider campaign leveraging related malware variants against similar regional targets. We will examine the decision to target the railway network from the normative framework supplied by the Tallinn Manual. Recognizing that customary international law does not easily address the case at hand due to involvement of apparent non-state actors outside of acknowledged conflict, we will proceed to consider the operational-level decisions made in the planning and execution of this intrusion and its effects delivery, identifying factors of cyberweapons design, controlled employment, and available effects options not pursued by the unattributed attackers that in total demonstrate due care and responsibility. We will close with a look at the wider regional context contemporary to these events, and the implications for both current and future responsible offensive cyber operations.

Intrusion and actions on objective

Between 9-10 July 2021, destructive termination of an existing intrusion resulted in the disruption of train services in Iran. While initial reports from local media suggested extensive cancellations and delays to scheduled movement of rolling stock, including passenger routes, the Iranian Transportation Ministry attempted to deny that any disruption had taken place. Handheld imagery then surfaced which showed cancellations listed on public display monitors at the Tehran central rail station at Rah Ahan Square, alongside a message directing complaints to a phone number. This number was found to be the public line for the Office of the Supreme Leader (Beit-e Rahbari). Additional impacts to the Iranian Ministry of Roads and Urban Development public website were also reported on 10 July.[4]

An Iranian cyberdefense firm that supplies the country’s national antivirus software provided the first public reporting on the malware payload used to deliver these effects. The payload was initially called Breakwin, and was described as a wiper also capable of corrupting master boot record hard disk partitions.[5] The firm likely acquired these samples through either antivirus telemetry (although such telemetry was degraded by the extent to which the unattributed operators were able to disable endpoint protection), or through the firm’s sandbox services (which provide other Iranian researchers with a detonation and automated behavioral analysis environment), rather than directly through digital forensics/incident response (DFIR) and associated recovery efforts at the targeted Iranian networks. Direct DFIR engagement findings would presumably have been handled in a nonpublic manner. The company is known to provide additional malware analysis services to the government of Iran, but these are generally conducted under nondisclosure restrictions. Given the Iranian government’s sensitivity to this incident and earlier cases, it is unclear why the firm chose to disclose technical details which contradicted official narratives; this suggests disconnects between line malware intelligence efforts and ministry security structures. It is also not known why recovered samples were uploaded to public malware repositories outside of Iran.

While no samples were publicly released by the Iranian researchers, their description of the malware was sufficient to allow at least two separate Western firms to acquire and reverse engineer the payloads, along with additional related components. These additional elements were possibly not previously identified by the Iranian analysts, likely due to limitations of their detection, telemetry processing, or analytic tooling. Alternatively, if known to Iranian defenders, these elements may deliberately not have been disclosed. (Reasons for such nondisclosure may be complex, ranging from compliance with regime attempts to control embarrassing or sensitive information, to efforts to distinguish privately circulated reporting offering additional detail for subscription consumers, or even incomplete technical analysis.)

Initial intrusion reportedly took place as much as one month before effects were delivered.[6] The destructive payload – dubbed Meteor by the malware’s designers – appears to have been developed at least six months earlier, and was staged at an unknown point following initial intrusion, prior to execution. The payload consisted of multiple components triggered by automated scripts, with pre-configured execution parameters. Final destructive termination was initiated on a time trigger, and rendered infected systems unusable through a sequence of commands to disconnect individual machines from enterprise networks, defeat endpoint protection features, erase local logs, lock out local users, overwrite files on targeted systems, corrupt the master boot record, and display an attacker-crafted lock-screen image (which was the source of the message seen displayed at the Tehran station).[7]

While technical analysis of recovered malware provided strong insights into the intrusion incident and its destructive termination, it must be acknowledged that this intelligence picture remains incomplete. It is not known how the effects payloads were delivered to the target environment, nor what other actions may have been executed against network infrastructure including routers, enterprise orchestration and management services, or other systems. It is however clear that there was no apparent impact to operational technology (OT) network segments in any reported descriptions of the event.

A previously unknown self-styled hacktivist group calling itself “Predatory Sparrow” (گنجشک درنده) claimed responsibility for the attack on 9 July 2021, citing Iranian state Fars news agency reporting and stating that it sought to protest abuse of the nation by its government.[8] The group apparently took its name from the Passer montanus tree sparrow, whose habitat includes Iran, and its logo incorporates a cartoon bird in the style of the popular “Angry Birds” game franchise against a digital circuit background. Bird naming conventions are common within the Iranian hacking scene, with earlier groups such as Parastoo (“Swallow”) having used similar monikers.[9] Major attack campaigns attributed to the Iranian government, such as Ababil, have also referenced the theme.[10] The Predatory Sparrow claim received almost no attention at the time, although it was immediately challenged by an Iranian actor demanding “proof of exploit”. No response was noted in public social media, although further unknown private communication may have occurred through a Telegram channel provided by the Predatory Sparrow group.

Predatory Sparrow logo

The campaign in context

The Meteor code was found to share a high degree of similarity with other malware variants that may be evaluated as part of the same family, and which industry has assessed as evolving generations of tooling over time. These earlier payloads, Comet and Stardust, have been in use since at least September 2019. Previously recovered samples did include configuration elements providing victimology information, including indications of use against targets in Syria. Attacks against these networks – Arfada Petroleum and its parent company, the Katerji Group – involved unique images displayed on destroyed victim computers that claimed responsibility in the name of a self-styled hacktivist group calling itself Indra. This name is also found as a string in earlier malware artifacts. The group has been publicly claiming responsibility for cyber attacks against the Syrian regime since first announcing an attack on another Syrian firm, Alfadelex, in September 2019. Indra has also claimed additional unconfirmed intrusions against the airline Cham Wings and the Banias oil refinery.[11] In the latter case, the group provided imagery of a purported industrial control system human machine interface display, depicting crude oil and fuel oil flow, electrical systems status, and emergency stop controls.

The Katerji Group is a primary supporter of the Syrian regime. Its founder, Hossam Katerji, has been a member of the Syrian Parliament since 2016, and along with his brothers has been a key financier involved in oil, construction, agriculture, and other commodities transactions. The firm’s assets were estimated at over USD 600 million in 2019.[12] The group also reportedly controls multiple militias throughout the country to protect and advance its commercial interests, and is allegedly closely aligned with Iran.[13] The firm has also been accused of facilitating transactions with Daesh.[14]

Indra accounts further claimed that the Cham Wings intrusion provided intelligence on the movements of senior Iranian Revolutionary Guards Corps (IRGC) general Qassim Soleimani, which identified an alleged alias used by Soleimani for travel between Iran, Syria, and Iraq. Exfiltrated “sensitive documentation” was said to have included travel records one day before he was killed in a targeted US airstrike near Baghdad International Airport in January 2020. Indra claims the hack revealed “the stupidity of the former Quds Force commander” in using obviously false passport and other reservation data, and that this led directly to Soleimani’s targeting and death.[15] This is the first public allegation of cyber operations involvement in the strike. However, this claim cannot be independently evaluated based upon open-source information.

The Indra group apparently ceased further public claims as of November 2020. The gap between this halt in activity, and the Comet/Stardust code family resurfacing with the new Meteor variant in July 2021, allegedly in the hands of a separate hacktivist group opposed to the regime, creates substantial uncertainty as to the actual relationships between these entities.

The author here deliberately does not reach attribution on this case. Multiple hypotheses have been raised in industry reporting and in debates within private trust groups, none of which have resulted in definitive analytic conclusions. Neither is it at all apparent that this campaign has concluded, despite the apparent current cessation of activities by the hacktivist entities who claimed responsibility.

Normative problems

Despite decades of acknowledged military and intelligence planning that has treated cyberspace as a potential domain for conflict, there remain few case examples through which to explore operational behavior – especially under crisis pressures. The vast majority of cases to date merely highlight the problematic employment of offensive capabilities in immature concepts of operation, using portfolios that were clearly untested in controlled, target-relevant range environments, and that lacked appropriate management control and oversight by higher echelons. This is unsurprising, given that major incidents emerged from authoritarian states that apparently remain largely unconcerned by the hitherto weak reactions of Western states to ever more egregious campaigns across critical infrastructure networks. As additional states, and other players in the global competitive space of the cyber domain, develop their own offensive cyber operations capabilities, it has become increasingly important to understand how this instrument ought to be used – and not merely how it has been abused in the past by malicious actors. Mature planning, accounting for the complex competing equities and potential harms in a tightly coupled global domain, is a substantial burden even when considering operations intended for intelligence objectives, or as active defense response options. Offensive cyber operations involving effects on objective with disruptive or destructive intent face a still higher bar of responsibility.

Normative aspirations have sought to impose restraint in the selection of classes of targets that may be engaged. Yet despite substantive international attention and diplomatic engagement in both government and private channels, these efforts have had little practical effect. It is unfortunately unrealistic to expect that calls for placing all critical infrastructure targets out of bounds will be heeded by global adversaries, or that even the most well-intentioned of states will be able to completely exclude the entirety of a target state’s strategic networks from the calculus of politics pursued through other means. But even as hopes of formal normative agreements continue to falter in the face of realpolitik, the prospect of more responsible decisions taken at the planner and operator level – and reinforced through management and oversight – is more important than ever. It is through these mechanisms that we may hope to see a form of convergence towards agreement around behaviors that are more responsible, or at least less potentially destabilizing. Tacit bargaining through repeated interactions, resulting in boundaries of agreed competition, is one of the core pillars postulated in cyber persistence theory.[16]

Targeting

It is impossible to consider this case without addressing the matter of target selection. As a general rule, critical infrastructure upon which civilian populations depend should not be subject to attack, as an extension of the prohibitions against attacks on civilian populations.[17] Yet neither should a state presume that it will be able to count on such principles to stay the hand of rivals in the face of other belligerence. It has long been considered permissible to target even civilian objects (such as critical infrastructure networks) which “by their nature, location, purpose or use make an effective contribution to military action”, and “whose total or partial destruction, … or neutralization offers a definite military advantage”.[18]

Conflict status and belligerents

There are substantial gray areas in applying this principle to the Tehran incident. No state of formal military hostilities existed between belligerents in this case, and therefore most international humanitarian law rules that control targeting are not applicable here. However, it has been argued that customary international law factors ought to be extended as a matter of normative principle to encompass actions in otherwise undeclared engagements, outside of armed conflict. Such arguments for expanded application of these principles rest upon the idea that the intent in drafting the original foundational instruments of the Hague Convention, Geneva Convention, Rome Statute, and associated treaty mechanisms is a vital component of international order, and that it is appropriate and necessary to conform to this intent in new situations that could not have been envisioned by the original drafters. (The opposing argument should also be noted: that the original agreements were important both in outlining prohibited conduct and in delineating areas of state power that the involved parties chose deliberately to exclude from restrictions, as a limit to abstract ideals.)

Further complicating this analysis, at least one party represents itself as a non-state actor. While considerations of combatant status likewise have no application during peacetime, it has been argued that these too ought to be normative considerations in offensive operations. As such, if claims of responsibility are taken at face value, the offensive actor here would appear to be an unprivileged belligerent, not entitled to combatant immunity.[19] Unprivileged belligerents are generally assumed in most analysis to be presumptively improper in the normative sense, although this is a view that is grounded in, and seeks to reinforce, the state’s monopoly on legitimate uses of violence – in practice quite often challenged by the facts on the ground of force generation and employment in contemporary conflict. Nonetheless, cyber operations by non-state actors are generally considered not to breach standards of customary international law, such as being considered a use of force, a prohibited intervention, or a violation of sovereignty, given that these standards apply to states alone.[20] It must also be noted that the notion of an unprivileged belligerent has in some legal analysis been supplanted by a focus on direct participation in hostilities as the determinant of combatant status.[21]

However, it is also unclear whether a de facto relationship may exist between the irregular actors and a state that might be considered a party to wider regional conflict, which might change such analysis upon more comprehensive consideration.[22] Given that at least one potential attribution hypothesis has been debated which encompasses contractor operations, and furthermore a plausible interpretation of the contractor as essentially mercenary in character, additional complications arise. If this theory is given weight, then such mercenaries involved in cyber operations would also be considered unprivileged belligerents.[23] Yet even otherwise organized armed groups, properly under military command and meeting other combatant criteria, are considered unprivileged combatants if they fail to conduct operations in accordance with the laws of armed conflict. This returns us to the core issue of the propriety of target selection.

Target of military use and advantage

While the transport networks here were almost certainly entangled with civilian uses, targeting is generally not restricted only to objects in immediate military use. The precise contours of the nexus with military activities – scoped by duration of current or future military use, direct or sustaining contributions to warfighting functions, absolute or relative value of contribution and therefore associated military advantage accrued by neutralization, and associated factors – render this a complex matter.[24] The state-owned nature of both the rail infrastructure in question and its supporting information and communications technology services, as well as the predominant role of the IRGC-associated structure of bonyad enterprises (charitable trusts that play a substantial role in the Iranian economy), also introduces additional complications of military location.

The manner in which these networks were provisioned, and made distinct from the IRGC and its state enterprises, is salient to this analysis, but deeply unclear from the perspective of outside observers. IRGC-owned and -controlled enterprises encompass a number of entities involved in railway transport.[25] A number of these are sanctions-designated entities due to their use in illicit proliferation-related activities – including among others the Bonyad Eastern Railway Company, the Sina Rail Pars Company, and the Kaveh Pars Mining Industries Development Company’s subsidiary Tehran International Transport Company, which are part of the Bonyad Mostafazan foundation designated by the US government under Executive Order 13876.[26]

If the unattributed operators acquired their unknown access to the target through a known military network, and delivered effects accordingly, this may well be considered a valid military location by logical address space, even if distinct in physical geography – especially where specific network geolocation may not have been as readily ascertained by the offensive operators. Designation under international sanctions would suggest sustained military use sufficient to overcome the simple presumption of civilian character, even as the infrastructure also continues to be used for civilian purposes. Indeed, Tallinn discussions explicitly encompass this, listing “civilian rail networks being used by the military” as a target liable to attack under otherwise appropriate circumstances.[27]

Cyberweapons effect and review

Targeting is inextricably bound to intended effect. Importantly, while the observed destructive intrusion impacted data upon which the functionality of physical rail operations depended, resulting in disruptive consequences for civilian activities for a period, this clearly did not reach the level of damage to those physical objects, injury to civilians, or loss of life. The action is therefore almost certainly de minimis when considered in current interpretations of customary international law, and arguably does not constitute an attack by these standards.[28]

Critically, the events in this case suggest that greater potential impact rising to a different threshold may have indeed been possible, given the extent of access to the target network apparent in the operation, but that the unattributed operators chose deliberately to employ effects that would avoid such outcomes. This strongly distinguishes the present case from earlier incidents such as the NotPetya/Nyetya wiper operations, in which extensive collateral damage to transportation targets occurred as a result of unconstrained wormable propagation of destructive payloads, and where adversary operators made no effort to impose guardrails on effects against additional indiscriminately impacted targets they had not characterized or even identified a priori.[29]

Apparent restraint also distinguishes this case from others that may be theorized, in which an attacker might have wished to achieve more substantive physical effects but did not reach this threshold due to capability limitations or operational failures. No such factors were evident in the Tehran case. To the contrary, deployed payloads explicitly specified deletion targets with multiple validation controls. This is strongly characteristic of a deliberately engineered weapon that has been designed with more than passing consideration of legal review.[30]
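The concept of pre-configured effects gated by validation controls can be illustrated abstractly. The sketch below is purely hypothetical – all names, fields, and checks are the author’s illustration and bear no relation to the actual Meteor configuration or any real tooling – and shows only the guardrail logic itself: several independent, pre-configured checks that must all pass before any effect routine would be permitted to run, with any single failure causing an abort.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Guardrail:
    """Pre-configured constraints that must all hold before execution proceeds."""
    allowed_hosts: frozenset   # explicitly enumerated, pre-characterized targets
    required_domain: str       # expected enterprise domain fingerprint
    not_before: datetime       # time trigger: earliest permitted execution
    not_after: datetime        # latest permitted execution; abort afterwards

def may_execute(rail: Guardrail, host: str, domain: str, now: datetime) -> bool:
    """Return True only if every independent check passes; any failure aborts."""
    checks = [
        host in rail.allowed_hosts,                 # target explicitly enumerated
        domain == rail.required_domain,             # environment matches expectation
        rail.not_before <= now <= rail.not_after,   # inside the planned time window
    ]
    return all(checks)

# Hypothetical configuration: one enumerated host, one domain, a fixed window.
rail = Guardrail(
    allowed_hosts=frozenset({"host-a"}),
    required_domain="example.local",
    not_before=datetime(2021, 7, 9, tzinfo=timezone.utc),
    not_after=datetime(2021, 7, 10, tzinfo=timezone.utc),
)
```

Redundant gating of this kind is the opposite of wormable, indiscriminate propagation: an unenumerated host, an unexpected domain, or execution outside the planned window each independently halts the action, constraining effects to targets characterized a priori.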

It is important, for the purpose of this analysis, to note that the ostensible purpose of the attack, as declared by unattributed operator messaging in the event, is not necessarily the actual objective sought in the action. The highly public face of the operation, as part of an event that planners and operators may have anticipated would receive global attention based on prior media coverage of other events involving Iranian networks, may not present the controlling reasoning of advantage obtained by denying, degrading, or destroying this target. Public messaging may have been complementary to, or even independent of, the functional advantage.

Intangible exclusions, reversibility, and morale considerations

International legal experts have for some time also explicitly argued that data on target systems and networks are excluded from the definition of an object for the purposes of evaluating damage, the term hitherto having referred solely to tangible and physical things in its ordinary meaning. The matter remains in dispute, with the degree of centrality of specific data (alone or in larger aggregate) to civilian populations emerging as a key factor in weighing harms.[31] As the importance of virtual objects to the functioning of modern economies grows, it has been argued that this exclusion does not properly take into account the impact on civilian populations.[32] To date, however, any such expansion of the rule has not been generally accepted, as much as it remains a normative aspiration.

The question of data as a targeted object becomes still more salient when considering reversible effects, both in terms of international law and international relations.[33] Considerations of damage must be evaluated very differently when assessing temporary disruption versus permanent destruction. In the Tehran incident, effects were delivered in a manner that indicated awareness of the backup solution used by the target. While it appears from observed payload configurations that wiping effects were also directed against these backups, it is unclear whether this was executed broadly across all backup instances on the network, or whether the unattributed operators preserved selected backups that would enable delayed reconstitution. Multiple configuration examples have been reported from the attack, with backup nodes not specified as targets in some instances.[34] This is further complicated in that backup solutions appear to have leveraged software provided by a US-headquartered firm, transactions presumably prohibited under current sanctions. However, Iranian IT engineers employed in sanctioned enterprises – including the state-owned oil company – have frequently listed experience with this solution, and earlier-generation versions of the software were observed in underground software distribution through at least March 2021. Actions against targets acquired in violation of sanctions restrictions take on a different character than actions against mere civilian objects.[35]

Impact to civil populations must also be weighed based on the degree of dependence on denied and degraded systems. Iranian passenger transport has adopted modern payment and ticketing systems only recently, and manual fallback options for recovery remain. In this case, no indications of deliberate targeting of point-of-sale infrastructure or other systems associated with these functions were observed (although these may have been degraded by generalized effects across the network). While less convenient, the railway could resume transportation services using prior manual ticketing processes as an interim measure, further arguing that civil impact was minimized. This is less true of military and proliferation-related use of rail services, which involve more complex problems of tracking shipping containers, bills of lading describing contents, and the details of intended cargo destinations – which, as shell companies or other front entities, may exist almost nowhere outside of these databases.

The de minimis nature of any physical effects in this case, within the law of armed conflict, also rules out violations of the prohibition on cyber attacks intended to spread terror among the civilian population. Indeed, Tallinn discussions explicitly envisioned potential offensive operations against mass transit, but within the context of causing fear of loss of life or injury.[36] In contrast, disruption in the extant case was explicitly accompanied by messaging which indicated no further escalation of the event. Rather, the focus of communications by the unattributed operators was on highlighting otherwise repressed political tensions undermining the Iranian regime, distinct from statements against the population as a whole. These are characteristic of messaging one would see in psychological operations campaigns with intended effects directed against leadership legitimacy. International legal consensus has explicitly declared that such psychological cyber operations do not qualify as attacks.[37] Further, even if such messaging, tied to the disruptive or destructive event, were to result incidentally in a decline of civilian morale, this would not be considered collateral damage.[38]

Other considerations

One may also view the intrusion and effects delivery in Tehran not through the lens of military activity, but as covert action. This framing is somewhat unique to the United States, as the label is not recognized outside of US domestic statutes. International law is generally silent on such matters, much as in other activities closely associated with espionage. Yet even states with strong commitments to international order have occasion to resort to unilateral action due to structural weaknesses of the global system. While some have asserted such actions are presumptively illegitimate, a more robust debate will consider the circumstances, intentions, and employment.[39] However, consensus on the application of international law has been slow to develop due to the reluctance of states to acknowledge and defend practices, even at a later remove.[40] Yet there remains a need to understand actions which arise from contexts treated as espionage in customary international law, but that approach the possible threshold of attack – whether the kinetic bright line of human casualties, or the long shadows of destructive effects.

In light of the significant unresolved disagreements that remain when applying abstract normative principles to actions on the wire – disagreements all too commonly encountered when attempting to contort the body of customary international law to cover novel actions in the cyber domain – we must turn instead to questions of operational conduct. It matters as much or more at present, where law is silent (or unable to speak coherently), that the planning and execution of offensive cyber effects operations be responsible.

Observed indications of responsibility and restraint

If, as Lawrence Lessig has stated, “code is law”, then developers and operators are now responsible for decisions that previously would have been reserved for policymakers and other elements of the sovereign state.[41] This is a practical devolution of functional authority that places on those conducting offensive cyber operations a burden to “do right” even as they pursue their competitive interests against other states and non-state targets. Black-letter law and formal policy always lag the realities on the ground, especially in fast-moving technology spaces. The weight of these responsibilities requires us to critically examine new cases not just in abstract legal frameworks but in operational detail. Where legal and norms-oriented approaches have not yet gained traction, we may yet see functional practices and de facto standards for a more professional kind of offensive behavior emerge from ongoing interactions on the wire.

The longstanding over-classification of even the most basic concepts of offensive cyber operations has for decades limited discussion of what could, and indeed should, be fundamental principles for the conduct of intelligence and covert action executed through malicious modification of systems and networks. This has led to ad hoc experimentation in which hostile operators frequently accomplish some intended objectives, but often with serious consequences to uninvolved third parties. These failure modes are generally well highlighted in current cyber threat intelligence. However, it is also important to note features which display a greater degree of responsibility.

Sadly, in recent cases of adversary action such as the HOLIDAY BEAR/DARK HALO/STELLAR PARTICLE/NOBELIUM intrusion, at most one can note that adversary behavior was less irresponsible than in prior campaigns, such as operations attributed to Russian intelligence services under the SANDWORM/VOODOO BEAR/IRON VIKING designations, or operations attributed to Chinese intelligence services by HAFNIUM and follow-on activities across associated intrusion sets.[42]

Responsibility in offensive cyber operations is not merely the result of good intentions. Rather, it requires deliberate planning, engineering, operational, management, and oversight efforts throughout the lifecycle of a campaign to ensure that access and actions on objective are properly aligned, adequately tailored, and appropriately balance potential harms to competing equities whilst accomplishing mission objectives. Responsible conduct requires programmatic maturity, individual professionalism, and organizational focus to achieve. Given that responsible offensive cyber operations are carried out through more than mere intention, characteristic features of these behaviors may be seen in the artifacts of specific engagements. Further inferences may also be drawn from these observables.

Tailoring

The incident in Tehran is notable in that it suggests that offensive capabilities were well tailored, through substantial intelligence support including extensive reconnaissance within target networks. This likely allowed for deliberate selection of specific nodes for attack effects. This is significant, in that the unattributed operators appear to have deliberately refrained from delivering any effects against the railway operational technology network segments.

While substantial unknowns persist regarding the manner in which target reconnaissance was executed, the operators took due care to ensure that whatever access options and tooling were employed for this phase of the action remained distinct from the payloads which delivered destructive effects. This is critical towards ensuring distinguishability across campaigns and in future operations, where intrusions executed for intelligence objectives are otherwise difficult to tell apart from effects operations.[43] Such distinguishability is important to preventing potential inadvertent escalation or other negative outcomes from adversary misinterpretation of incompletely observed intrusion.

Tested capabilities

It is further important that the payloads suggest substantial quality assurance engineering. Multiple measures to ensure redundancy of key guardrail functions seem to have been deliberately introduced, and the developers apparently sought to provide continuing state of health status so that operators would have positive control throughout delivery and execution. These capabilities bear the hallmarks of prior operational test and evaluation, likely within controlled detonation range environments. This remains a substantial requirement of maturity when engineering new offensive capabilities portfolios.[44] The evolution of the malware family across prior incidents also suggests that live fire assessment was incorporated into a continuing review of the tooling and its performance.

Test and evaluation efforts were, however, not perfect. Technical analysis suggests a seam between capabilities developers and the operators’ implementation, indicating that a complete end-to-end package of payload and offensive deployment scripts was not assembled as a cohesive whole until engagement. This is consistent with programmatic maturity, but within an operations model that relies upon modular options composed at point of need. In this case, such a seam did not appear to compromise responsibility for discrimination and restraint – rather, it merely resulted in what some would characterize as operational security lapses. In the hands of other operators, or under different targeting circumstances, such disconnects between different functional roles might however have turned out differently.

Constrained automation

While the deployment of the Meteor destructive payload and its associated components was highly scripted, this automation did not permit indiscriminate autonomous behavior. The detailed automation features specified, to a high degree of control, elements of the target to be serviced for effects and did not allow for non-specified elements to be struck. This included as an early step the deliberate isolation of specific endpoints from the wider network environment.

Critically, the payload did not contain logic to independently select new targets, nor to further propagate through the network to deliver further effects. Such autonomous additional targeting is not presumptively irresponsible, but must be appropriately constrained within carefully defined parameters of action. This requires that formal principles of target discrimination, in both the technical and legal senses of the term, be incorporated into command-and-control functionality. It is also best if such capabilities require a human on the loop, if not directly involved in issuing commands to execute against independently discovered targets. In the current case, the extensive reconnaissance prior to execution appears to have precluded the need to conduct further reconnaissance by fire through automated actions.
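The constraint logic described above can be illustrated with a minimal, hypothetical sketch. All names, addresses, and functions here are invented for illustration; this is not the actual Meteor deployment tooling. The essential property is that effects are delivered only to endpoints explicitly designated in advance, with no logic for autonomous target discovery or onward propagation:

```python
# Hypothetical illustration of constrained attack automation: effects are
# delivered only to explicitly pre-designated endpoints, with no logic for
# autonomous target selection or propagation. Invented names throughout.

DESIGNATED_TARGETS = {"10.0.12.7", "10.0.12.9"}  # chosen via prior reconnaissance

def isolate_endpoint(addr: str) -> None:
    # Placeholder for the "deliberate isolation" step described above.
    print(f"isolating {addr} from the wider network")

def deliver_effect(addr: str) -> None:
    print(f"delivering effect to {addr}")

def run_campaign(discovered_hosts: list[str]) -> list[str]:
    """Act only on hosts that appear in the pre-designated target set."""
    struck = []
    for addr in discovered_hosts:
        if addr not in DESIGNATED_TARGETS:
            continue  # non-specified elements are never struck
        isolate_endpoint(addr)
        deliver_effect(addr)
        struck.append(addr)
    return struck

# A host outside the designated set is ignored, not attacked.
struck = run_campaign(["10.0.12.7", "10.0.12.99"])
```

The design choice to enumerate targets explicitly, rather than to discover them at runtime, is what keeps discrimination decisions in human hands at planning time.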

Paths not taken

Given apparent extensive prior access to the target, a variety of effects scenarios highly likely could have manifested within the rail network, but did not. There are no indications of destructive or disruptive effects against operational technology networks. This does not appear to have been a result of capabilities limitations, but rather of deliberate decisions not to pursue available options. Indeed, code features observed to be present would have enabled additional effects with trivial effort. The Meteor wiper supported specific designation of processes to kill, although none were so specified in observed deployment. This function could have readily incorporated even publicly known process kill lists from other malware deployed against industrial control system environments, such as the EKANS/Snakehose ransomware variant (itself leveraging iterated development from the MegaCortex malware).[45] These static lists, while not tailored to the Iranian rail network, likely would encompass at least some functionality expected to be encountered in the operational technology segment of the target environment. Given the extensive reconnaissance on target, it would have likewise been a relatively simple processing and exploitation task to compile a more narrowly tailored process kill list specific to the network as configured.
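The kill-list mechanism described here can be sketched in outline. This is hypothetical code, not the recovered Meteor sample: it simply shows how a wiper might accept a configurable list of process names to terminate, and how leaving that list empty – as in the observed deployment – results in no processes being touched:

```python
# Hypothetical sketch of a configurable process kill list. In the observed
# deployment the list was left empty, so nothing was terminated; populating
# it with ICS-related process names would have been trivial. Names invented.

def build_kill_actions(running: list[str], kill_list: list[str]) -> list[str]:
    """Return which running processes would be terminated under a given config."""
    targets = {name.lower() for name in kill_list}
    return [p for p in running if p.lower() in targets]

running_processes = ["explorer.exe", "historian.exe", "svchost.exe"]

# As deployed: empty kill list, so no processes are selected for termination.
assert build_kill_actions(running_processes, []) == []

# A trivially added static list (illustrative only) would have changed that.
ics_kill_list = ["historian.exe"]
would_kill = build_kill_actions(running_processes, ics_kill_list)
```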

Additionally, the presumed level of access demonstrated by the operators in this case would likely have served to provide options against unique features of the railway target using non-public capabilities that could be developed based on descriptions of other classes of known vulnerabilities previously identified in similar targets. Automated signaling and switching technologies with network connectivity would be natural targets should an attacker wish to cause more extensive impact to physical plant. Disruption of these functions could create effects including setting conditions to foul track routing in a manner requiring extensive manual effort to reset, or even create enhanced risks for collision of rolling stock.

The current Iran railways signaling system is reportedly built largely upon a legacy, analog frequency modulated (FM) radio network connecting to a digitally switched network management terminal. This dedicated system provides railway dispatch and command functions, as well as supporting additional public security communications. The dedicated network is reported to be connected to a TCP/IP network.[46] New automated signaling functions, including GSM-R network options, have been variously pursued, to increase rail transit capacity and as part of regional interoperability efforts.[47] These functions almost certainly offer vulnerabilities that could have been exploited by a sufficiently skilled attacker. Prior industry research has demonstrated widespread vulnerability discovery in rail signaling functions, including options to deliver denial of service effects with potential safety critical impacts.[48] Indeed, Iranian researchers have also previously focused analysis on signaling failures as part of modernization investments.[49]

It is unlikely however that defensive research efforts have fully remediated potential exploitation opportunities, especially given extended access to the target network demonstrated by the unattributed operators in the current case. As a result, it is even more notable that no such effects appear to have been scoped or otherwise pursued in the July 2021 intrusion incident.

Nonproliferation considerations

Interestingly, these tools appear to have been deliberately designed to incorporate only well known, extensively documented features that have been previously seen in the wild in other malware families. This is an additional hallmark of responsible operations, in which more sophisticated planners will consider the proliferation implications of deploying code: both the potential for it to be directly repurposed if recovered from a target system or network, and the potential knowledge transfer to the immediate adversary through weapons technical intelligence derived from reverse engineering and behavioral analysis of observed implants. (The latter type of proliferation is also of concern for all other possible observers that may have collected against the event, or acquired samples through subsequent circulation among cybersecurity research communities.)

The primary effects mechanisms of the Meteor wiper include deletion functions that have been compared directly to the NotPetya/Nyetya malware. While this is likely somewhat of an overstatement, the overall structure and logic of the tooling is exceptionally similar to many other contemporary ransomware variants, including publicly circulated open-source tooling used for threat emulation and other red team engagements. Additional functionality is provided from abused administrative software in circulation in the underground market since at least 2006. These design characteristics will have taught nothing new to any observers. Additional commands are executed through native system utilities in a manner that would be familiar to the penetration testing community of practice.

The selection of a time-based effects trigger is also significant, although frequently overlooked. While there are other more subtle options, which may correspond to additional operational objectives, the choice to employ a literally decades old design, previously seen in countless other incidents in the wild, provides additional assurance against proliferating novel concepts of operation.[50]
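For readers unfamiliar with the design, a time-based trigger of the sort referenced is among the oldest logic-bomb constructions, and can be sketched in a few lines. The values here are invented and purely illustrative; they do not reflect the actual Meteor configuration:

```python
# Minimal illustration of the decades-old time-based trigger design: the
# payload remains dormant until a hard-coded detonation time has passed.
# Date and time values are invented for illustration only.
from datetime import datetime, timezone

DETONATION_TIME = datetime(2021, 7, 9, 4, 0, tzinfo=timezone.utc)  # illustrative

def should_fire(now: datetime) -> bool:
    """Dormant before the configured time; armed at or after it."""
    return now >= DETONATION_TIME

before = should_fire(datetime(2021, 7, 8, 23, 0, tzinfo=timezone.utc))
after = should_fire(datetime(2021, 7, 9, 4, 1, tzinfo=timezone.utc))
```

Precisely because this pattern is so old and so widely documented, deploying it teaches observers nothing they did not already know.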

Towards auditable implants

The Meteor payload and its associated automated scripting may be seen from one perspective to have suffered from multiple operational security failures, had the unattributed developers and operators been focused exclusively on non-detection. In particular, PowerShell commands intended to protect deployment of malware components through manipulation of endpoint whitelisting provided transparency into installed malicious tooling. This was leveraged in subsequent technical analysis of the malware. Some of the identified malware samples in the campaign were also found to have retained strings used by developers for debugging purposes, which are more commonly eliminated when deployed for operational use. Much of the configuration of the tooling was also left in readily understood English, whereas malware developers under more routine detection pressures will seek to obfuscate functionality in more elaborate ways. Critically, the implant also incorporated functionality not used in the extant attack, which could have been removed by an actor more focused on denying potential observables about future capabilities options.

However, these choices may also be considered as possible decisions to ensure that the attack was not misinterpreted by observers, and that analysis of any recovered samples of this specific weaponized code instance could be completed quickly in order to reduce potential tensions that may arise from uncertainty around the further extent of immediate action. While planners and developers that build tooling for long-term espionage-focused campaigns may choose to prioritize non-detection features, when considering execution of deliberate effects operations additional factors of target (and importantly, target leadership) reactions must be considered.[51] These decisions are relatively costly for planners, in that they trade off features that may otherwise maximize probability of mission success in an attempt to introduce elements that make it less likely that hostile cyber intelligence services will fail to accurately observe and understand an action. These decisions also presume interactions over a relatively shorter timespan, where the cumulative effect of such options may compound costs for the responsible operator in extended competitive and conflict iterations.

Ultimately, such choices lead to a potential scenario in which an attacker may choose to deploy what are effectively auditable implants. At the simplest level, these artifacts may include specific watermarks intended to convey intended function, even voluntary attribution.[52] More complex tooling in this model documents within itself its scope, the actions it has taken, and importantly serves to rule out other actions not taken. The expectation is that such auditable features may be obfuscated, or perhaps even fully encrypted – to enable an operationally relevant window of time for mission completion, after which it is assumed that a sufficiently skilled technical intelligence team will be able to reverse engineer the artifacts and reconstruct the relevant events of the effects operation. This may even be further enabled by post hoc release of decryption keys for such audit artifacts, either as part of termination and withdrawal from access footholds, or through other channels. While it does not appear that the Meteor implant was designed deliberately to offer such post-incident features, many aspects of the supposed operational security mistakes do serve to offer similar utility for analysis after the fact.
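The auditable-implant concept sketched above can be made concrete with a toy example. The following is a hypothetical illustration only – a simple keystream construction, not production cryptography, and not anything observed in the Meteor tooling: each action is recorded, encrypted under a mission key so the log is opaque during the operation, and post hoc release of the key lets analysts reconstruct exactly what was, and was not, done:

```python
# Hypothetical sketch of an "auditable implant" audit trail: each action is
# recorded, encrypted under a mission key (opaque during the operation), and
# the key can be released post hoc so analysts can reconstruct events.
# Toy keystream construction for illustration only; not production crypto.
import hashlib
import json
import secrets

def _keystream(key: bytes, counter: int, length: int) -> bytes:
    """Derive a deterministic keystream from the key and record counter."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + counter.to_bytes(8, "big") + block.to_bytes(8, "big")
        ).digest()
        block += 1
    return out[:length]

def encrypt_record(key: bytes, counter: int, record: dict) -> bytes:
    plaintext = json.dumps(record, sort_keys=True).encode()
    ks = _keystream(key, counter, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_record(key: bytes, counter: int, blob: bytes) -> dict:
    ks = _keystream(key, counter, len(blob))
    return json.loads(bytes(a ^ b for a, b in zip(blob, ks)))

mission_key = secrets.token_bytes(32)
log = [encrypt_record(mission_key, i, r) for i, r in enumerate([
    {"action": "isolate", "host": "10.0.12.7"},
    {"action": "wipe", "host": "10.0.12.7", "scope": "IT segment only"},
])]

# Post hoc key release: an analyst holding mission_key reconstructs the events.
recovered = [decrypt_record(mission_key, i, blob) for i, blob in enumerate(log)]
```

The operationally relevant window comes from the secrecy of the mission key; once released (or brute-forced by a sufficiently capable technical intelligence team), the log documents scope and rules out actions not taken.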

Implications and outlook

This case is an important window into contemporary cyber operations praxis, and a rare example of more responsible operational planning and decisionmaking than is typically seen in the largely unrestrained and ever more escalatory actions of authoritarian and revisionist states that dominate current intelligence reporting and media headlines. However, it is also ethically challenging on one level, in that it remains unattributed – given that claims of responsibility by the identified hacktivist actors must necessarily remain suspect. Indeed, this uncertainty of attribution, when coupled with cui bono analysis, has reportedly stayed the hand of some industry researchers who might otherwise have provided additional technical review.[53] In one sense, this is understandable in that observed victimology to date suggests that the Western customers of major cybersecurity firms appear to have little reason to be concerned about this threat activity. The tooling itself poses little proliferation risk. From this perspective, it is perhaps right and proper that the incident be set aside, to allow resources to be focused on the ever continuing press of other new samples and fresh intrusions. Such a decision also does not trigger the ethical implications inherent in providing public analysis of malware development and operational tradecraft which may be abused by adversary actors to improve detection and engineer new countermeasures.[54]

There is nonetheless a balance of competing harms. The near-exclusive focus on what is entirely irresponsible behavior by a series of adversaries means there is little to no discussion of what intrusion activity “done right” looks like. This is even more true of effects operations. This leaves academics, policy communities, and even developing programs among allies and partners often adrift in a sea of abstract theory towards unrealized norms – unable to recognize or articulate mistakes or different courses of action that might avoid them. It is also highly likely that the planners of an effects operation intended to generate highly visible disruption, with associated messaging for psychological impact, will have anticipated detection and disclosure of the tooling used to deliver these effects, and made operational choices accordingly that would minimize potential degradation of future options.

Addressing this present situation is in its own way potentially problematic. Real concerns exist that such conversations may lead hostile intrusion and attack campaigns to “improve” in ways that advance their sophistication and capacity to inflict harm on defenders, or to defang the rare access and effects opportunities available to states acting with restraint and responsibility. There is also a high likelihood that adversaries will continue to ignore notions of responsibility grounded in the global international order to which they are inimically opposed, and that they see as offering only cost without advantage. Yet it may be argued that many emerging offensive programs might seek to behave more responsibly, either out of organizational self-interest in avoiding detection and associated blowback, or even from individual operators’ recognition that spying and fighting “in the machine” is a professional activity. Nobody wants to be an intelligence service or national cyber command’s equivalent of a script kiddie.

The current case also highlights the importance of understanding incidents not only individually, but also within the wider context of ongoing operations. The campaign is the proper unit of analysis for cyber conflict.[55] Here, the identified campaign – encompassing both the Indra phase and the Predatory Sparrow phase of employment of a common malware family – must also be seen in the context of regional actions impacting transportation sector targets. The strike against the Iranian railway network comes against the backdrop of repeated incidents involving kinetic attacks on maritime shipping and regional oil and gas industry targets. In no small part, such mining incidents and missile fires must also be understood in light of recurring compromise of regional port and shipping targets.[56] Earlier unattributed exchanges have also resulted in disruption at Iranian ports, including effects that appeared to impact sanctioned Iranian state entities.[57] Yet the regime has reportedly continued to fund novel offensive cyber capabilities development intended to destroy vessels at sea and to execute further loitering munitions attacks.[58]

It is only when stepping back to view the whole of the ongoing clandestine conflict that the most significant aspects of responsibility in the present case may be understood. In comparison to other ongoing exchanges of lethal fires and destructive cyber effects within the region, the action against the Iranian rail network must be seen as exceptionally measured. It is in some ways a very real reminder of the competing interests that oppose the regime, and that responses, even when confined to the cyber domain, do not necessarily have to conform to the precise parity of targets introduced by the adversary. Rather, here the theocracy has employed its transportation assets to further ongoing proliferation and regional terror attacks, and has directly targeted other countries’ civil transport through its own attacks. This brought the theocracy’s own transport networks to the table, as potential targets of reciprocal action.

This also brings to the forefront what may be assessed as the most likely intent of the cyber attack against the railway target. A very visible demonstrative action against a representative target, within strong restraints and executed in a deliberately responsible fashion, highlights the pervasive vulnerability of Iranian networks to potential disruption and destruction but risks little unanticipated collateral damage. The strike almost certainly involved no exquisite capabilities, and while the mechanisms of access remain unclear, these were likely commonly known exposures that the generally woeful state of Iranian cyber defenses had never appropriately remediated. The selection of this target therefore does nothing to tip the attacker’s hand as to the true extent of offensive options that might be employed in a less constrained engagement, or in a sustained campaign intending to inflict strategic costs upon the regime.

Unfortunately, however strongly intended and communicated the signal, it is constrained by the receptive capacity of the Iranian regime. There is little indication to date that the core leadership is responsive to such demonstrative actions, and it has shown continued disregard for pressure arising from a frustrated and disillusioned population. The regime’s elites continue to profit from corrupt relationships cultivated in regional adventurism, and see the near-term possibilities of further relief from crippling sanctions imposed under the prior US administration. As faltering negotiations stumble on, while clandestine acquisition activities still pursue prohibited uranium enrichment to set conditions for potential materials diversion and nuclear breakout (using expertise and technology never adequately declared, monitored, or decommissioned despite earlier agreements), the potential leverage that can be generated by restrained and responsible offensive cyber operations is rapidly declining.[59] And in this is the real concern – that once reasonable options have been exhausted, there will remain nothing but capabilities reserved only for in extremis situations.

JD Work now serves at the Marine Corps University’s Krulak Center for Innovation and Future Warfare, and holds additional affiliations with Columbia University’s Saltzman Institute of War and Peace Studies, as well as the Atlantic Council’s Cyber Statecraft Initiative. He has over two decades of experience working in cyber intelligence and operations roles for the private sector and US government.


[1] The views and opinions expressed here are those of the author and do not necessarily reflect the official policy or position of any agency of the US government or other organization.

[2] The author would like to thank Dave Aitel, Juan Andrés Guerrero-Saade, and Gary Brown for comments and critique.

[3] Winn Schwartau, Information Warfare: Chaos on the Electronic Superhighway (New York: Thunder’s Mouth Press, 1994); John Arquilla, David Ronfeldt, In Athena’s Camp: Preparing for Conflict in the Information Age (Santa Monica: RAND, 1997); Gregory J. Rattray, Strategic Warfare in Cyberspace (London: MIT Press, 2001).

[4] Reuters. “Iran transport ministry hit by second apparent cyberattack in days.” 10 July 2021.

[5] Amn Pardaz. “Trojan.Win32.BreakWin”. 13 July 2021.

[6] Islamic Republic News Agency (IRNA). “What was the story of the cyber attacks on the Ministry of Roads and Urban Development and Railways?” 18 July 2021. (Farsi)

[7] Juan Andrés Guerrero-Saade. “MeteorExpress: Mysterious Wiper Paralyzes Iranian Trains with Epic Troll.” SentinelOne. 29 July 2021.

[8] Predatory Sparrow, social media, 9 July 2021.

[9] Flashpoint. “Inside an Iranian Hacker Collective: An Exclusive Flashpoint Interview with Parastoo”. 3 February 2016.

[10] JD Work. “Echoes of Ababil: Re-examining formative history of cyber conflict and its implications for future engagements.” Soldiers and Civilians in the Cauldron of War, 86th Annual Meeting of the Society for Military History. May 2019.

[11] Checkpoint, “Indra — Hackers Behind Recent Attacks on Iran”. 14 August 2021.

[12] Kevin Mazur, Revolution in Syria, (Cambridge : Cambridge University Press, 2021)

[13] Lucas Winter. “The Katerji Group- A New Key Player in the Syrian Loyalist Universe”. OE Watch, Foreign Military Studies Office. September 2019.

[14] Michael Georgy, Maha El Dahan. “How a businessman struck a deal with Islamic State to help Assad feed Syrians.” Reuters. 11 October 2017.

[15] Indra. Via social media. 11 February 2020.

[16] As advanced by Emily Goldman, Richard Harknett, and Michael Fischerkeller. Whilst we eagerly await their forthcoming full volume, at present one may see Michael P. Fischerkeller, Richard J. Harknett. “Persistent Engagement and Tacit Bargaining: A Path Toward Constructing Norms in Cyberspace”. Lawfare. 9 November 2018. https://www.lawfareblog.com/persistent-engagement-and-tacit-bargaining-path-toward-constructing-norms-cyberspace ; Michael P. Fischerkeller, Richard J. Harknett, “Persistent Engagement, Agreed Competition, Cyberspace Interaction Dynamics and Escalation.” Institute for Defense Analyses, May 2018, http://www.ida.org/idamedia/Corporate/Files/Publications/IDA_Documents/ITSD/2018/D-9076.pdf

[17] The principle is stated as Rule 93 and Rule 99 of Tallinn Manual 2.0, On the International Law Applicable to Cyber Operations (Cambridge: Cambridge University Press, 2017)

[18] International Committee of the Red Cross, Study on Customary International Humanitarian Law, Volume I, (Cambridge: Cambridge University Press, 2005), also restated in Tallinn 2.0, Rule 100

[19] Tallinn 2.0, Rule 86

[20] Tallinn 2.0, Rule 33

[21] Nils Melzer, Direct participation in hostilities under international humanitarian law (Geneva: International Committee of the Red Cross, 2009)

[22] Tallinn 2.0, Rule 87

[23] Tallinn 2.0, Rule 90

[24] Tallinn 2.0, Rule 100

[25] Wilfried Buchta, “Who Rules Iran? The Structure of Power in the Islamic Republic”, (Washington DC: Brookings Institution Press, 2000); Frederic Wehrey, Jerrold D. Green, Brian Nichiporuk, Alireza Nader, Lydia Hansell, Rasool Nafisi and S. R. Bohandy, “The Rise of the Pasdaran: Assessing the Domestic Roles of Iran’s Islamic Revolutionary Guards”, (Santa Monica: RAND, 2009)

[26] US Treasury Department. “Treasury Targets Vast Supreme Leader Patronage Network and Iran’s Minister of Intelligence”. 18 November 2020. https://home.treasury.gov/news/press-releases/sm1185, and network analysis of corporate relationships https://home.treasury.gov/system/files/126/most_found_11182020.pdf

[27] Tallinn 2.0, Rule 100.10

[28] Tallinn 2.0, Rule 92

[29] Michael Schmitt and Jeffrey Biller. “The NotPetya Cyber Operation as a Case Study of International Law”. EJIL Talk blog. 11 July 2017. https://www.ejiltalk.org/the-notpetya-cyber-operation-as-a-case-study-of-international-law/ ; Tarah Wheeler, John Alderdice. “The Geneva Convention and International Cyber Incidents”. Belfer Center for Science and International Affairs, Kennedy School, Harvard University. 4 February 2021. https://www.youtube.com/watch?v=nOOk3ltPvEw ; Monica Kaminska, Dennis Broeders, Fabio Cristiano, “Limiting Viral Spread: Automated Cyber Operations and the Principles of Distinction and Discrimination in the Grey Zone”, 13th International Conference on Cyber Conflict (CyCon): ‘Going Viral’. Tallinn, Estonia. 2021.

[30] Gary D. Brown, Andrew O. Metcalf. “Easier Said than Done: Legal Reviews of Cyber Weapons”. 7 J. Nat’l Sec. L. & Pol’y 115 (2014); David Wallace, “Cyber Weapon Reviews under International Humanitarian Law: A Critical Analysis”. NATO CCDCOE. 2018.

[31] Tallinn 2.0, Rule 100.5-7

[32] Beth D. Graboritz, James W. Morford, Kelly M. Truax, “Why the Law of Armed Conflict (LOAC) Must Be Expanded to Cover Vital Civilian Data”, Cyber Defense Review 5, No 3 (2020): 121-131.

[33] Neil C. Rowe, “War Crimes from Cyber-weapons”, Journal of Information Warfare 6, no 3 (2007): 15-25; Michael N. Schmitt, “‘Attack’ as a term of art in international law: The cyber operations context”, 4th International Conference on Cyber Conflict (CYCON). Tallinn, Estonia. 2012; Max Smeets, Herbert S. Lin. “Offensive cyber capabilities: To what ends?” 10th International Conference on Cyber Conflict (CyCon). Tallinn, Estonia. 2018.

[34] Remarks under Chatham House rule, GlassHouse Center. 14 September 2021.

[35] Panayotis A Yannakogeorgos, Eneken Tikk. “Stuxnet as cyber-enabled sanctions enforcement”. International Conference on Cyber Conflict (CyCon US). Washington, DC. 21-23 October 2016; Mark Peters, “Cyber Enhanced Sanction Strategies: Do Options Exist?” Journal of Law & Cyber Warfare 6 no 1 (2017): 95-154

[36] Tallinn 2.0, Rule 98

[37] Tallinn 2.0, Rule 92.2

[38] Tallinn 2.0, Rule 100.26

[39] William Michael Reisman and James E. Baker, Regulating Covert Action: Practices, Contexts and Policies of Covert Coercion Abroad in International and American Law (New Haven, CT: Yale University Press, 1992).

[40] Alexandra H. Perina, “Black Holes and Open Secrets: The Impact of Covert Action on International Law”, 53 Colum. J. Transnat’l L. 507 (2014-2015)

[41] Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999)

[42] Perri Adams, Dave Aitel, George Perkovich, JD Work. “Responsible Cyber Offense”. Lawfare. 2 August 2021. https://www.lawfareblog.com/responsible-cyber-offense

[43] Gary Brown, “Spying and Fighting in Cyberspace,” Journal of National Security Law & Policy, 2016

[44] JD Work. “Who Hath Measured the (Proving) Ground: Variation in Offensive Capabilities Test and Evaluation.” 15th International Conference on Cyber Warfare and Security. Old Dominion University, Norfolk, VA. March 2020.

[45] Dragos. “EKANS Ransomware and ICS Operations”. 3 February 2020.

[46] 冉晓径 (Ran Xiaojing), “浅谈模拟集群通信系统在伊朗铁路中应用” (“On the Application of Analog Trunk Communication System in Iranian Railways”), 铁道通信信号 (Railway Communication Signal) 46 no 6 (2010).

[47] Javad Lessan, Ahmad Mirabadi, & Yaser Gholamzadeh Jeddi, “Signaling system selection based on a full fuzzy hierarchical-TOPSIS algorithm”,  International Journal of Management Science and Engineering Management 5, No 5 (2010): 393-400. M. Tamannaei, M. Shafiepour, H. Haghshenas, B. Tahmasebi, “Two Comprehensive Strategies to Prioritize the Capacity Improvement Solutions in Railway Networks”, International Journal of Railway Research 3, No 1 (2016):9-18; Mohammad Ali Sandidzadeh, Farzaad Soleymaani, Shahrouz Shirazi, “Design and Implementation of a Control, Monitoring and Supervision System for Train Movement Based on Fixed Block Signaling System with AVR Microprocessor,” Materials Science and Engineering 671 (2020)

[48]  Christian Schlehuber, Erik Tews, Stefan Katzenbeisser, “IT-Security in Railway Signalling Systems.” In Reimer H., Pohlmann N., Schneider W. (eds) ISSE 2014 Securing Electronic Business Processes. (Wiesbaden: Springer Vieweg, 2014); Sergey Gordeychik, Aleksandr Timorin. “The Great Train Cyber Robbery”. Chaos Communications Congress, Hamburg. 27 December 2015.

[49] M. Yaghini, F. Ghofrani, S. Molla, M. Amereh, B. Javanbakht. “Data Analysis of Failures Of Signaling And Communication Equipment In Iranian Railways Using Data Mining Techniques”. Journal of Transportation Research 11 no 4 (2015): 379-389.

[50] Peter J. Denning. “Computer Viruses”. Research Institute for Advanced Computer Science, Ames Research Center, National Aeronautics and Space Administration. 1988; Eugene H. Spafford. “Computer Viruses: A Form of Artificial Life”. Purdue University. 1990

[51] JD Work. “Competitive Dynamics of Observation and Sensemaking Processes Impacting Cyber Policy (Mis)Perceptions”. Bridging the Gap Workshop, Cyber Conflict Studies Association. 12 November 2019.

[52] Dave Aitel. “A plausible platform for cyber norms”. CyberSecPolitics blog. 28 February 2016. https://cybersecpolitics.blogspot.com/2016/02/a-plausible-platform-for-cyber-norms.html; Dave Aitel. “A technical scheme for “watermarking” intrusions”. 8 March 2016. https://cybersecpolitics.blogspot.com/2016/03/a-technical-scheme-for-watermarking.html

[53] Remarks under Chatham House rule, GlassHouse Center, 30 July 2021

[54] Such ethical dilemmas are discussed further in Juan Andrés Guerrero-Saade. “The ethics and perils of APT research: an unexpected transition into intelligence brokerage”. Virus Bulletin Conference. Prague. 30 September – 2 October 2015; JD Work. “Ethics considerations in victimology collection & analysis in cyber intelligence.” Legally Immoral Activity: Testing the Limits of Intelligence Collection. The Citadel, Charleston. 12 February 2020.

[55] This point has been made frequently within the community of practice, but also by scholars including Richard J. Harknett and Max Smeets, “Cyber campaigns and strategic outcomes”, Journal of Strategic Studies (2020).

[56] JD Work. “Counter-cyber operations and escalation dynamics in recent Iranian crisis actions”, Workshop on Crisis Stability and Cyber Conflict, Columbia University. February 2020.

[57] JD Work and Richard J. Harknett. “Troubled Vision: Understanding recent Israeli-Iranian offensive cyber exchanges.” CyberStatecraft Initiative, Atlantic Council. July 2020. https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/troubled-vision-understanding-israeli-iranian-offensive-cyber-exchanges/

[58] Jeremy Binnie, “CENTCOM identifies Iranian delta-wing UAV used in tanker attack”, Jane’s Intelligence, 9 August 2021; Deborah Haynes, “Iran’s Secret Cyber Files”, Sky News, 27 July 2021, https://news.sky.com/story/irans-secret-cyber-files-on-how-cargo-ships-and-petrol-stations-could-be-attacked-12364871

[59] International Atomic Energy Agency. “Verification and monitoring in the Islamic Republic of Iran in light of United Nations Security Council resolution 2231 (2015)”. GOV/2021/39. 7 September 2021. Derestricted.

What If the Best Defense Is a Good Defense (Instead of Offense Rebranded as Active Defense)?

Josephine Wolff

In cybersecurity, the difference between offense and defense is at once extremely straightforward and incredibly difficult to pin down. It is straightforward because defending your own networks and data and attacking someone else’s look completely different: the former involves implementing security controls and detection systems within the confines of your own computer systems, while the latter involves exploiting vulnerabilities in someone else’s systems. So it really should not be difficult to designate any particular activity in cyberspace by an individual country as offense or defense—except that, increasingly, countries seem to view the best cyber defense as, well, offense.

In 2018, the United States Cyber Command announced a new cyberspace strategy grounded in the ideas of persistent engagement with adversaries and defending forward so that defensive interventions occurred “as close as possible to the origin of adversary activity.” The crux of the strategy was essentially to broaden the boundaries of defense beyond the borders of the networks being defended to encompass activities targeting adversary networks. In other words, offensive cyber activity was rebranded as “defending forward.”

I wrote at the time about my skepticism around relying on offensive cyber capabilities as a defensive strategy, but there were certainly points to recommend this strategy, especially the sense that a defensive strategy focused on hardening networks and computer systems against attacks simply was not working very well. Three years later, though, it is a little hard to tell whether ramping up offensive cyber activity has contributed to a stronger cybersecurity posture for the United States—or, indeed, even how much that offensive cyber activity has been ramped up under the new strategy.

Part of what makes it difficult to assess the effectiveness of offensive cyber activity as a means of defense is that many of these offensive operations may be carried out in secret. So in addition to a few specific operations reported in the media—including a 2019 attack on the Russian Internet Research Agency and two other attacks, also in 2019, directed at Iran—it is possible that there are many examples of persistent engagement in cyberspace that the public simply isn’t privy to. For example, when the Russian REvil ransomware gang went offline earlier this year, following several high-profile ransomware attacks on U.S. targets including Colonial Pipeline and JBS, it was very unclear whether that was the work of the United States government or not.

I don’t think that taking the servers used to perpetrate ransomware attacks offline is necessarily a bad idea. If that is what happened in this case (and again, no one seems to be sure), it does seem like there would be a benefit to making clear that is why the servers went down, in order to send a clear signal to other cybercriminals. But just because this type of offensive cyber activity may be warranted and useful does not mean it is defense, in the sense that it makes computer systems any safer. Taking out the computer that has attacked you is not defense; it is retribution, and that is an important distinction to keep clear, if only because it highlights the need to do a very different kind of work to actually defend infrastructure from ransomware.

More generally, I am wary of blurring the lines between offense and defense—and particularly the language we use to describe each. That’s partly because I think there are significant differences between securing your own networks and messing around in someone else’s, but mainly because I worry that relying on offense as a country’s main source of defense can lead to countries neglecting the less exciting but equally (if not more) important work of trying to build out more secure infrastructure and computer networks.

It is possible to point to a series of severe cyberattacks in the United States over the course of the past few years (SolarWinds, Colonial Pipeline, JBS, to name just a few) and argue that their severity suggests persistent engagement has not worked and offensive cyber activity is not an effective defensive strategy in cyberspace. It is equally possible to invoke the relative security of the 2020 election and other unknown, secret offensive cyber operations as evidence that this strategy has been a great success for the United States. Given how little is known publicly about the extent of these operations and how little it is possible to know about what the landscape of threats and cyberattacks would have looked like under a different strategy, I’m not convinced it is possible to draw any very strong conclusions one way or the other.

What does seem clear from the past few years is that offensive cyber activities do not—and will not—suffice to defend computer networks absent the more traditional, inward-looking work of defense. There may well be value in both offensive and defensive cybersecurity efforts, but there is also value in keeping them distinct in order to clarify that the rules, standards, and norms for each are quite different and, most importantly, that offense cannot and should not be viewed as a substitute for a strong defense.

Josephine Wolff is an Associate Professor of Cybersecurity Policy, The Fletcher School at Tufts University.

CS Alert – Offensive Cyber in 1914

Image: An undated photograph of CS Alert. Source: Wikipedia.

By Neil Ashdown

On 5 August 1914 – the day after Great Britain declared war on Germany – CS Alert, a British cable ship, severed the submarine telegraph cables connecting Germany to the United States. The first post on The Alert, the blog of the Offensive Cyber Working Group (OCWG), asked whether this action was an early example of offensive cyber. In this post, I will argue that this historical case can help inform our understanding of offensive cyber operations today, more than a century later.

The relationship between secrecy and offensive cyber

CS Alert’s mission depended on tactical surprise – keeping the timing and nature of the ship’s departure secret increased its likelihood of success.[1] However, the severing of the cables would not have come as a strategic surprise to Germany or other world powers; a Peruvian vessel severed a submarine cable during the 1879-84 War of the Pacific and the US cut cables – including a British-owned cable – during the 1898 Spanish-American War.[2] Germany and other states had actively been promoting the use of wireless telegraphy (radio) precisely because it offered an alternative to dependence on British-owned cables.

This dynamic is relevant to the current secrecy around offensive cyber. Today, states are extremely reluctant to provide any details about their offensive cyber capabilities. Some of this is about the preservation of capabilities that would be rendered ineffective by exposure, much as the details of Alert’s mission had to be kept secret. However, there is considerable over-classification,[3] likely stemming from the historical origin of state cyber capabilities within the instinctively secretive intelligence community.[4] In addition, as Ciaran Martin, former head of the UK National Cyber Security Centre, recently argued, such secrecy also allows policymakers an ‘easy option’ for threatening reprisals against hostile state activity, without having to go into details.[5]

Even though Alert’s mission depended on secrecy for its effectiveness, the goal of that mission and its impact were largely predictable based on information available in open sources at the time. At the risk of being proved dramatically wrong by some future revelation(s), something similar could be said about modern state cyber capabilities. Advanced cyber powers may have capabilities that could prompt surprise and outcry if revealed, but cyber capabilities are not ‘magical’. Indeed, like real-world magic tricks, once the secret is revealed, many cyber operations are less sophisticated than we tend to think.[6]

This does not mean that there will be no surprises for observers of offensive cyber. Similarly, public understanding of Britain’s campaign against Germany’s communications evolved over time. While it was immediately clear that the cables had been severed, the details of the operation were initially secret, leading to some incorrect claims. Fulwider notes that “[i]n much of the existing literature, the ship responsible for cutting the cable was reported as the Telconia.”[7] This underlines that public understanding of offensive cyber operations will need to be corrected in the future as more information is declassified.

Campaigns and context in offensive cyber

Alert’s mission was also part of a wider campaign to disrupt Germany’s communications. According to Winkler, “Historians have been aware of the severing of Germany’s cables in the Channel, but the larger scope of these operations has been obscured, as has the fact that Britain’s activities continued through to the final months of the war.”[8] Modern observers of offensive cyber would recognise this as a coherent campaign comprising many operations aimed at achieving strategic advantage.[9]

This campaign was not limited to technical measures. Winkler describes activities that would today fall under the rubric of human intelligence or covert action. These included a British intelligence agent sabotaging a German radio station in Mexico City, conducting counter-proliferation operations targeting key components[10], and using disinformation to eliminate the competition.[11] Similarly, far from being limited to computer network operations, it is likely that modern offensive cyber capabilities will often involve a combination of different approaches. Among other examples, this can be seen in public disclosures about Russian close access operations[12] and in the participation of the UK Secret Intelligence Service (MI6) in the UK’s National Cyber Force.[13]

The context of Alert’s mission is also important. The severing of the cables occurred at a time when the UK and Germany were in a state of armed conflict, London’s ultimatum to Germany having expired at 2300 on 4 August. In contrast, public understanding of what offensive cyber means, and what it is capable of, arises from a partial view of a subset of activities conducted outside of a state of armed conflict. Attempts to read across from ‘peacetime’ activities to the activities that states might conduct during a time of war need to bear this context in mind.[14]

Offensive cyber is an enabler of other activities, but it can also have unintended or undesirable consequences

Alert’s mission involved physically breaking equipment, but this was not its primary goal. Moreover, the operation was not intended to sever Germany’s communications with the outside world. Rather it was intended to force Berlin to use a more vulnerable channel – radio.[15] Highlighting the continuity between this operation and modern cyber operations, the Tallinn Manual 2.0[16] would describe this as ‘herding’.[17]

The campaign against German communications proved to be a force multiplier for other parts of the British war effort – monitoring of radio traffic increased the effectiveness of British efforts to interdict shipping intended for Germany. As Winkler observes, “[t]he resulting information blockade […] enabled Great Britain to reinforce the maritime blockade.”[18] The severing of the cable was also a key component of a broader (dis)information operation conducted by Great Britain to shape perceptions among the US public and policymakers, culminating in the disclosure of the Zimmermann telegram in 1917. The potential for such operations was recognised in the US immediately after Alert’s operation; according to Fulwider, “The New York Times reported the incident on 6 August, accurately pointing out that without direct cable connections, any word of events in Germany would have to pass through hostile channels.”[19]

Similarly, it is likely that modern offensive cyber operations will achieve effect not just through disrupting adversary systems (for example, wiping data or rendering devices inoperable), but through second-order effects. Herding and enabling information operations are two examples, as are large-scale degradation efforts targeting adversary counterintelligence capabilities.[20]

Not all the effects of offensive cyber operations will be intended or desirable. Winkler argues that “[t]he voyage of the Alert and its implications caused officials in the United States eventually to realize the strategic necessity of having an independent cable and radio network linking the nation to its overseas interests.”[21] US activities to reduce this vulnerability eroded the UK’s advantages in this area. Similarly, offensive cyber operations have the potential to draw an adversary’s attention to their vulnerabilities and spur them to develop capabilities of their own to respond.[22]

Efforts to promote norms around offensive cyber are not new

In a wider discussion of its policy on submarine telegraph cables, Kennedy notes that Britain initially attempted to promote international norms against cutting cables.[23] He cites an 1886 paper from the British government’s Colonial Defence Committee (CDC), recording “their strong opinion that no opportunity should be lost in defining the position of cables of neutrals, and in taking any steps likely to lead to the eventual neutralization of all cables by promoting an international sentiment in favour of them … […] Any degree of immunity, however small, which could be secured by Treaty, or by international sentiment, would therefore be a definite gain to the Empire.”

This claim resonates with concerns about the greater vulnerability of advanced digital economies in the event of cyber conflict. In language that almost exactly parallels modern statements about the United States’ vulnerability in this area, Kennedy notes that the CDC’s concern was driven by the recognition that “unrestricted cable-cutting would be on balance ‘a severe loss to the Empire, which would suffer mostly from this type of attack’”.[24] It is hardly surprising that states promote norms that are in their interests, as much around offensive cyber as around the ‘neutralization’ of submarine cables.[25] In the latter case, Britain’s effort to promote such norms quickly stalled, not least because “it was soon realized that it would be impossible to persuade other powers, particularly France and Russia, to accede to an international neutralization agreement.”[26]

As a result, Kennedy notes that Great Britain expanded and hardened its telegraph network.[27] This included the use of physical defences as well as deception to protect submarine cables.[28] This latter activity prefigures the use of deception for defence in cybersecurity.[29] Over time, the increased resilience of the network led to a shift in attitudes, and Great Britain downplayed efforts to promote norms around the neutralization of cables. By the time of the First World War, 28 years after the paper calling for neutralization, the British government had longstanding plans for disrupting the communications of potential adversaries. Efforts to promote norms gave way to an emphasis on resilience and offensive action by Britain to seize the initiative – paralleling in some respects the current debate over the US doctrine of persistent engagement.[30] 

Offensive cyber is an outgrowth of other capabilities and resources

As Kennedy notes, this change in attitude reflected Britain’s awareness by 1911 that it “had so many advantages in this field that her own weaknesses were outweighed.” Britain controlled 60% of the world’s cables and key nodes in the global telegraph network were located on its territory. Moreover, Britain “possessed a virtual monopoly of the vital gutta-percha, which was used to insulate the wires [in undersea cables].”[31] It also had other – less tangible – advantages; its ownership of most of the world’s cables meant that Britain “knew more than anyone else about cable-laying or cable-cutting”.

There are parallels between Kennedy’s assessment of Britain’s power over telegraph cables in the 1900s and modern assessments of state cyber power. In general, states with more advanced technology sectors are more likely to have advanced cyber capabilities. As the International Institute of Strategic Studies (IISS) argued in its 2021 net assessment of state cyber capabilities, “strength in the core industries that underpin the future development of cyberspace is the decisive category [in determining a state’s cyber capability]”.[32] Similarly, the concentration of communication nodes on a state’s territory provides as much of an advantage today as it did with submarine telegraph cables, as does control over key resources – gutta-percha can be seen as the semiconductor chip of its day.[33]

The less tangible aspects remain important. Much as Britain’s know-how about cable laying contributed to its dominant position in this area, technical expertise also plays a role in cyber power. The IISS report describes “core cyber-intelligence capability” as lying “[a]t the heart of any nation’s cyber capability.”[34] Some of this capability will derive from technology and geographical position, as noted above, but perhaps even more decisive are the skills and expertise that a state’s intelligence apparatus can bring to bear.

Conclusion

CS Alert has been described as a central motif for the information society: “This one cableship represents strategies, technologies, ideas, and actions that still ripple through today’s technological, political, economic, and media landscape.”[35] The idea that Alert’s mission was an example of information warfare is uncontroversial. Gordon Corera describes it as “one of the first strategic acts of information warfare in the modern world […] leading to the birth of modern communications intelligence”.[36] Winkler asserts that “Information warfare in the electrical age is not a new phenomenon but dates from the late nineteenth century.”[37]

More debatable is whether it is analytically useful to view this operation as an early example of offensive cyber. As several members of the OCWG noted in a recent report, “The ‘prehistory’ of UK offensive cyber operations remains untold, although inevitably pre-dates their avowal by the UK in September 2013, the first country to do so.”[38] Is it helpful to extend this prehistory as far back as 1914, and indeed further? Or does grouping together a modern offensive cyber operation and Alert’s mission obscure the fundamental difference between a telegraph network, however highly developed, and networks of computational devices? If there really is a fundamental difference between 20th century signals intelligence and information warfare, on the one hand, and offensive cyber on the other, then examination of historical cases would be a way to identify this difference. In doing so, it could provide empirical evidence to advance theoretical debates about the nature of cyber conflict and intelligence.[39]

Moreover, highlighting elements of continuity between modern operations and historical operations that are now in the public record – such as Alert’s mission – could provide governments with a framework to talk more about offensive cyber, without revealing operational specifics. As IISS noted in its report, “On offensive cyber, it has so far proved difficult even to find the language for a more informed national and international public debate, but such an effort remains essential if the risks are to be properly managed.”[40] Exploring the similarities and differences between historical examples and modern offensive cyber would help support the development of that language. ‘Cyber’ can sometimes feel very abstract. Providing examples – such as Alert’s cable cutting early in the morning of 5 August 1914 – makes it more tangible.

Neil Ashdown is a PhD researcher in the Centre for Doctoral Training in Cyber Security for the Everyday at Royal Holloway University of London. He was formerly the deputy editor of Jane’s Intelligence Review.


[1] Gordon Corera, ‘How Britain Pioneered Cable-Cutting in World War One’, BBC News, 15 December 2017, sec. Europe, https://www.bbc.com/news/world-europe-42367551.

[2] Jonathan Reed Winkler, ‘Information Warfare in World War I’, The Journal of Military History 73, no. 3 (2009): 845–67, https://doi.org/10.1353/jmh.0.0324.

[3] Jason Healey and Robert Jervis, ‘Overclassification and Its Impact on Cyber Conflict and Democracy’, Modern War Institute, 22 March 2021, https://mwi.usma.edu/overclassification-and-its-impact-on-cyber-conflict-and-democracy/.

[4] Michael Warner, ‘Intelligence in Cyber—and Cyber in Intelligence – Understanding Cyber Conflict: 14 Analogies’, Carnegie Endowment for International Peace, 2017, https://carnegieendowment.org/2017/10/16/intelligence-in-cyber-and-cyber-in-intelligence-pub-73393.

[5] Ciaran Martin, ‘Ciaran Martin on Twitter: “It Is Pure Cyberbabble. The Gov’t Is in a Horrible Position Following Completely Unacceptable Iranian Behaviour. But Briefing Nonsense like This Doesn’t Help Anyone. Saying ‘Secret Cyber Strike’ Is Not a Policy Response & Shouldn’t Be a Pretext for Avoiding Hard Decisions 2/2” / Twitter’, accessed 3 August 2021, https://twitter.com/ciaranmartinoxf/status/1422435444470468608.

[6] Ben Buchanan, The Legend of Sophistication in Cyber Operations (Harvard Kennedy School, Belfer Center for Science and International Affairs, 2017).

[7] Chad R Fulwider, German Propaganda and US Neutrality in World War I (University of Missouri Press, 2017).

[8] Winkler, ‘Information Warfare in World War I’.

[9] Richard J. Harknett and Max Smeets, ‘Cyber Campaigns and Strategic Outcomes’, Journal of Strategic Studies 0, no. 0 (4 March 2020): 1–34, https://doi.org/10.1080/01402390.2020.1732354.

[10] Winkler, ‘Information Warfare in World War I’. “Mason then systematically acquired all of the eleven spare vacuum tubes in Mexico.”

[11] Winkler. “Mason would also go on to eliminate a German team headed to the coast with a smaller radio set by spreading the word that they carried diamonds (actually the crystal detectors used for receiving the signals). The team members were never heard from again.”

[12] Mark Odell, ‘How Dutch Security Service Caught Alleged Russian Spies | Financial Times’, Financial Times, 4 October 2018, https://www.ft.com/content/b1fb5240-c7db-11e8-ba8f-ee390057b8c9.

[13] ‘National Cyber Force Transforms Country’s Cyber Capabilities to Protect the UK’, accessed 12 May 2021, https://www.gchq.gov.uk/news/national-cyber-force.

[14] ‘Ciaran Martin: “Cyber Weapons Are Called Viruses for a Reason: Statecraft, Security and Safety in the Digital Age.”’, The Strand Group, accessed 23 July 2021, https://thestrandgroup.kcl.ac.uk/event/ciaran-martin-cyber-weapons-are-called-viruses-for-a-reason-statecraft-security-and-safety-in-the-digital-age/.

[15] Gordon Corera, Intercept: The Secret History of Computers and Spies (Hachette UK, 2015).

[16] Michael N Schmitt, Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (Cambridge University Press, 2017).

[17] Tallinn Manual 2.0, Rule 32, Comment 12: “As an example, a tactic of signals intelligence is to force adversaries to use forms of communication that are less secure so information can be collected. This driving, or ‘herding’, of enemy communications from a platform not susceptible to exploitation to a less secure one from which intelligence can be collected might be accomplished by physical damage to the former.”

[18] Winkler, ‘Information Warfare in World War I’.

[19] Fulwider, German Propaganda and US Neutrality in World War I.

[20] Horkos, ‘A Last Clever Knot?’, Medium, 24 November 2020, https://horkos.medium.com/a-last-clever-knot-26fd26765e8d.

[21] Jonathan Reed Winkler, Nexus (Harvard University Press, 2009).

[22] Andrea Shalal-Esa, ‘Iran Strengthened Cyber Capabilities after Stuxnet: U.S. General’, Reuters, 18 January 2013, sec. Technology News, https://www.reuters.com/article/us-iran-usa-cyber-idUSBRE90G1C420130118.

[23] P. M. Kennedy, ‘Imperial Cable Communications and Strategy, 1870-1914’, The English Historical Review 86, no. 341 (1971): 728–52.

[24] Kennedy.

[25] Perri Adams et al., ‘Responsible Cyber Offense – Lawfare’, accessed 3 August 2021, https://www.lawfareblog.com/responsible-cyber-offense.

[26] Kennedy, ‘Imperial Cable Communications and Strategy, 1870-1914’.

[27] Kennedy.

[28] Kennedy. “Another cunning measure at Esquimault was the laying of numerous dummy cables for a few miles out to sea to baffle an attempt at in-shore cutting.”

[29] See, for example, the work of the UK National Cyber Deception Laboratory (NCDL), as described in Neil Ashdown, ‘Mind Games: Deception Offers Role in Cyber Defence’, Jane’s Intelligence Review, 7 May 2020.

[30] Richard J Harknett, ‘SolarWinds: The Need for Persistent Engagement’, Lawfare, 23 December 2020, https://www.lawfareblog.com/solarwinds-need-persistent-engagement.

[31] Kennedy, ‘Imperial Cable Communications and Strategy, 1870-1914’.

[32] ‘Cyber Capabilities and National Power: A Net Assessment’, IISS, accessed 5 July 2021, https://www.iiss.org/blogs/research-paper/2021/06/cyber-capabilities-national-power.

[33] Kathrin Hille, ‘TSMC: How a Taiwanese Chipmaker Became a Linchpin of the Global Economy’, Financial Times, 24 March 2021, https://www.ft.com/content/05206915-fd73-4a3a-92a5-6760ce965bd9.

[34] ‘Cyber Capabilities and National Power’.

[35] R David Lankes, Forged in War: How a Century of War Created Today’s Information Society (Rowman & Littlefield Publishers, 2021).

[36] Corera, Intercept: The Secret History of Computers and Spies.

[37] Winkler, ‘Information Warfare in World War I’.

[38] Joe Devanny et al., ‘The National Cyber Force That Britain Needs?’, 2021.

[39] Robert Chesney and Max Smeets, ‘Policy Roundtable: Cyber Conflict as an Intelligence Contest’, Texas National Security Review, 17 September 2020, http://tnsr.org/roundtable/policy-roundtable-cyber-conflict-as-an-intelligence-contest/.

[40] ‘Cyber Capabilities and National Power’.

Why ‘Cyber Pearl Harbor’ matters for democracy

Dr Andrew Dwyer

Recently, I was struck by a Newsweek cover declaring that we are (again) facing the potential for a ‘Cyber Pearl Harbor’. For many within both the practitioner and academic ‘cyber’[1] community, this is a manifestation of the long shadow cast by the hyperbole that characterised popular recognition of the insecurities of computation throughout the late 1990s and early 2000s. Today, even if such analogies and metaphors are a nuisance to those working in the area, they remain dominant, seeping into popular debates around contemporary offensive cyber operations. In my research, I seek to understand how malware – and other computational materials, such as algorithms – transform cybersecurity politics and decision-making, and here I will outline what I think the implications for democracy are when governments do not appropriately engage in public debate.

Ransomware, ‘advanced persistent threats’, cyberwar, and most recently “cyber-sabotage” – the term used by the UK Foreign Secretary Dominic Raab (2021) to describe the espionage operations of the People’s Republic of China in the hack of Microsoft Exchange Server email software (see Krebs 2021) – are part of an ever-expanding corpus of terms to explain the condition of cyber, one that challenges even the most informed. Some have attempted to resolve and clarify the terms and analogies used for offensive cyber and elsewhere, with varying success and accompanying warnings (Taddeo 2016; Lawson and Middleton 2019). Although conceptual clarity is required for effective communication, we are unfortunately very far from that possibility. Indeed, I think the conceptual murkiness that surrounds offensive cyber is not the problem per se, but rather a symptom of the current lack of public discussion of capabilities and doctrinal development, a lack that hinders legislative scrutiny amid the promise of security provided by states as we enter a post-Covid[2] world. Because operations and capabilities are kept secret, a vast variety of descriptors and analogies become possible in attempts to communicate with the public, at a time when the state seems increasingly unable to attend to the insecurities its citizens feel from cyber-attacks.

Within academic literature, much has been made, for quite some time, over whether such a thing as ‘cyberwar’ could ever take place (Rid 2013), with contemporary debate settling on the organisational capacity required for cyber operations (Smeets and Work 2020). Such a change in perspective, however, is still broadly confined to a small and well-informed community who either have direct access to such activities (particularly in contexts where there is extensive military-academic collaboration, e.g., the United States) or work within specialised academic and journalistic contexts, speaking at select events and typically having developed extensive relationships with the former (and who are likely the extent of this blog post’s readership). This community has undoubtedly brought greater sophistication to thinking about cyber operations, shifting from ‘cyberwar’ as a catastrophic event to a sustained acceptance of the perpetual continuation of cyber conflict, as evidenced in US thinking on ‘persistent engagement’ (Healey, 2019).

Such an acceptance of perpetual, and incessant, war-like activities is part of a broader move in military thinking on grey/gray-zone and hybrid warfare and what its limits and contours should be. Yet such hybridity causes a severe communication problem for democratic governments, regardless of whether one agrees with such moves. This is because the public perception of contemporary conflict remains focused on open kinetic conflict, driven in part by governments’ increasing investment in conventional military capabilities.

Offensive cyber is but a microcosm of the preference to keep secret those contemporary capabilities that do not afford such visuality. This is a necessary act to prevent adversaries attaining knowledge of such capabilities. Yet failing to articulate the use of offensive cyber, including future potential doctrine, could lead to a disruptive and long-term erosion of trust as it becomes clear that states cannot always protect their citizens from adversaries in the ways that publics have come to expect (at least in the imagination of certain communities in the Global North). Although attempts have been made to protect publics through improved cyber security, it is clear that defence is different in a computational world. Offensive cyber operations are highly likely to be successful, and governments must be honest about this and about their capability to respond.

For many people outside of ‘cyber’, the more spectacular effects of malware, such as WannaCry and NotPetya, dominate imaginations of destruction that bear a resemblance to ‘cyberwar’ – and even ‘Cyber Pearl Harbor’ – discourse. However, much malware is banal and stealthy, and deployed primarily for espionage, even if it could in theory be used to pre-position for other offensive operations (see the Microsoft Exchange and SolarWinds compromises, among others). Whereas cyberwar is imagined as something that would inevitably affect critical national infrastructures and beyond, the same imaginary does not extend to ‘everyday’ systems, which are assumed to be insecure through poor governance and maintenance. Yet it is in this latter space that offensive cyber action most often takes place and, inversely, where it has its greatest impact.

As the former head of the UK National Cyber Security Centre, Ciaran Martin, has recently argued, the UK should not simply invoke secrecy to avoid discussing offensive cyber doctrine with regard to China. Keeping offensive cyber secret mythologises the potential of such action, offering a seeming alternative to the ‘dirty’ work of sending personnel to ‘far away’ places.

This is because it sets up an expectation that cannot, and should not, be held in perpetuity. Offensive cyber operations have limited functions and will not ‘win’ a war. A direct ‘kinetic’ confrontation between China and the United States and its allies is unlikely in the near future. In the meantime, offensive cyber, or ‘persistent engagement’ in the language of the USA, is likely to dominate, degrading specific adversary operations. But it is not going to stop China, Russia, or other states completely.

So, there will likely be ‘limited’ attacks against military and associated targets by the US and its allies. For the UK Government to emphasise offensive cyber as the solution for responding to China is therefore disingenuous to the public. It is possible to be doctrinally honest, without revealing operational details, in ways that articulate the true complexities of our contemporary computational insecurities.

Thus, the ‘Cyber Pearl Harbor’ imaginary matters for offensive cyber operations – not because it necessarily affects decision-makers’ direct judgements, but because of the expectations it develops and the democratic trust it erodes. If a state continually conducts offensive operations against the UK, for example, then for how long can its government sustainably promise that counter-offensive cyber operations are an effective solution? ‘Cyber Pearl Harbor’ holds such sway because it promises the spectacular ‘event’ of war that is still celebrated in contemporary popular imaginations but is anathema to military thinking. Yet, as readers of this blog are likely to agree, such ‘event’-based war through offensive cyber operations is exceptionally unlikely to occur. Responses will be attritional, will require extensive work, and will not be widespread (unless, for instance, a poorly controlled worming architecture is used).

For democratic governments to (over)promise without outlining doctrinal possibilities is dangerous. Offensive cyber can be justified across a suite of responses, and governments can be open about the costs in terms of capital and capability. As I and others reflected in a recent piece on the UK National Cyber Force, “offensive cyber operations should not be regarded as a technological ‘fix’ to problems that are resistant to resolution by these capabilities” (Devanny et al. 2021, 8).

Of course, there is reticence about suddenly opening up discussions on offensive cyber, as doing so may raise difficult issues and questions, and the debate may run counter to developments already in motion. Yet offensive cyber operations and capabilities exist to serve their publics, and must therefore be accountable, and fundamentally appropriate, to them. It is only a matter of time before offensive cyber loses its shine, so let the conversation be had now, in advance of any demise in trust in the capability. Governments have a tough balance to strike: computation challenges their conventional role in securing their citizens, much of that security is outsourced to private corporations, and their arsenal of responses is limited. So let’s have the debate, and it might settle on something that is amenable to all, and ultimately, to democracy.

Dr Andrew Dwyer is an Addison Wheeler Research Fellow at Durham University in the UK. His research focuses on how differing computational materials, such as malware and machine learning algorithms, transform decision-making.

References

Healey, Jason. 2019. “The Implications of Persistent (and Permanent) Engagement in Cyberspace.” Journal of Cybersecurity 5 (1): 1–15. doi:10.1093/cybsec/tyz008.

Devanny, Joe, Andrew Dwyer, Amy Ertan, and Tim Stevens. 2021. “The National Cyber Force That Britain Needs?” London: King’s College London. https://www.kcl.ac.uk/policy-institute/assets/the-national-cyber-force-that-britain-needs.pdf.

Krebs, Brian. 2021. “At Least 30,000 U.S. Organizations Newly Hacked Via Holes in Microsoft’s Email Software – Krebs on Security.” Krebs on Security. March 5. http://web.archive.org/web/20210722091915/https://krebsonsecurity.com/2021/03/at-least-30000-u-s-organizations-newly-hacked-via-holes-in-microsofts-email-software/.

Lawson, Sean, and Michael K. Middleton. 2019. “Cyber Pearl Harbor: Analogy, Fear, and the Framing of Cyber Security Threats in the United States, 1991-2016.” First Monday 24 (3). doi:10.5210/fm.v24i3.9623.

Raab, Dominic. 2021. “UK and Allies Hold Chinese State Responsible for a Pervasive Pattern of Hacking.” GOV.UK. July 19. http://web.archive.org/web/20210720161540/https://www.gov.uk/government/news/uk-and-allies-hold-chinese-state-responsible-for-a-pervasive-pattern-of-hacking.

Rid, Thomas. 2013. Cyber War Will Not Take Place. London: C. Hurst & Co.

Smeets, Max, and JD Work. 2020. “Operational Decision-Making for Cyber Operations: In Search of a Model.” The Cyber Defense Review 5 (1): 95–112.

Taddeo, Mariarosaria. 2016. “On the Risks of Relying on Analogies to Understand Cyber Conflicts.” Minds and Machines 26 (4): 317–21.


[1] I have much to say about the signifier of ‘cyber’ and how its broadening and condensation away from ‘cyber security’ is an interesting development in how it aligns to a more militaristic imbrication than information security, but I will not develop this here.

[2] I use ‘post’ here very lightly, as it is more like a continuation of the pandemic, as we ‘live’ with the virus in various ways.

Offensive cyber in the age of ransomware

Ciaran Martin 

When the United States launched Cyber Command twelve years ago, the word ‘ransomware’ was not in widespread use. Nor did countering the threat from computer-based racketeering feature in the lengthy deliberations leading up to the formation in the UK of the National Cyber Force, announced in November last year.  

But in the course of a few short late spring weeks in 2021, ransomware has gone from a minority obsession of parts of the information security community to a significant paragraph in a G7 communique and the headline item in the first summit between Presidents Biden and Putin. The US has categorised ransomware as a national security threat, thanks to the disruption of oil and meat supplies owing to attacks on Colonial Pipeline and the food producer JBS. Lest Europeans think this is solely an American problem, the wholesale (and horrific) disruption of Irish healthcare, repeated attacks on British educational institutions, and a range of incidents in France and Germany reminded us otherwise.

The ransomware model 

Ransomware has exploded into a global problem because three different factors combine to favour the criminal against the defender, and criminals have begun to realise this. First, the Russian state (and some others, mostly bordering Russia) provide a safe haven from which the gangs can operate. Second, endemic weaknesses in Western cyber security are too easily exploited. Third, the business model works spectacularly well for the criminals: victims too often pay in desperation, and cryptocurrencies provide an easy way to launder the loot. The British firm Elliptic has calculated that Darkside, the group responsible for the Colonial Pipeline hack, generated at least $90 million of revenue in just nine months. Moreover, the limitations on law enforcement activity cannot be overstated. Policing and intelligence capabilities against cyber criminals are good and improving, but unless a foolish cyber criminal takes a holiday to the West, he or she is out of reach.

Disrupting this racket means breaking at least part of this vicious, pro-criminal circle. But it is proving hard. Joe Biden has become the first Western leader to pressurise the Russians on the safe haven problem, and early signs are that Moscow is at least pretending to take it seriously. But progress here cannot be guaranteed (for example, there is little prospect of Russia overturning its constitutional prohibition on extraditing Russians). Getting consensus on tackling the flow of money – either through banning the payment of ransoms or regulating cryptocurrencies more effectively – has proved fiendishly hard. And improving defences remains a long, hard slog. Some or all of these efforts may yield fruit over time, but for now, serious problems remain even in terms of containing the threat, never mind reducing it.

A role for offensive cyber? 

Does this mean there is a role for offensive cyber? This much-misunderstood set of nascent capabilities has, to date, struggled to prove its utility as a tool for protecting our cybersecurity. Indeed, despite the rhetoric, offensive cyber has mostly been pointed in other directions. The UK’s flagship, publicly disclosed offensive cyber operation targeted so-called Islamic State, degrading the group’s propaganda and operational capabilities ahead of the Mosul offensive in 2016. Other intended targets have included serious online child sex offenders, according to the Government.

What has been conspicuously absent is a contribution that protects UK cyberspace itself. Indeed, offensive cyber has proved singularly ineffective in contesting the threat from hostile nation-state capabilities. As I argued in a lecture at King’s College, London, last November, this is for various reasons. Disabling Russian or Chinese state-backed offensive cyber operational capabilities is much, much harder than disrupting the computer networks of an international terrorist group, a paedophile ring, or the Russian troll farm known as the Internet Research Agency, which Cyber Command is believed to have hit in 2018. It is likely to be as difficult as hitting the covert infrastructure of US Cyber Command.  

Moreover, ‘hacking back’ will not ‘deter’ cyber espionage, which is generally accepted under international norms. And on the relatively rare occasions when those norms are crossed, the sorts of capabilities offensive cyber affords are generally not appropriate ones for pushback. We are not going to disrupt the lives of innocent citizens in Vladivostok because Russia has disrupted the opening ceremony of the Winter Olympics or leaked the medical details of athletes after hacking the World Anti-Doping Agency. And all the while, suspicion abounds that by stockpiling cyber weapons for offensive use, the West is not serious about the security of cyberspace.  

Network disruption 

The ransomware problem offers those developing offensive cyber capabilities an opportunity to show that such tools can make a useful contribution to a safer cyberspace. With few if any other interventions working, and with normal law enforcement mechanisms effectively nullified, disrupting the networks of the criminals, and the digital infrastructure they use, via offensive operations could at least offer significant tactical benefit in containing the problem.

Over the years, the FBI have led a number of operations to this effect. The Europol-led takedown of the so-called Emotet botnet, one of the most malignant pieces of digital infrastructure ever seen, in January of this year, provided further evidence of the utility of this type of operation. And technically, the sort of disruption involved lends itself to surgical interventions that reduce the risk of collateral disruption and other unintended consequences that worry sceptics of offensive cyber.

After what the American cybersecurity expert Alex Stamos has called “the craziest eight months in the history of infosec”, there is now a welcome realisation at the political level that securing our interests in cyberspace is a complex and nuanced problem that isn’t solved by belligerent rhetoric about ‘hitting back’ in an invisible digital contest with other states. If Governments are serious about demonstrating that their increasing focus on offensive capabilities will help our cyber security, disrupting ransomware operations would be the right place to focus.   

Ciaran Martin is Professor of Practice at the Blavatnik School of Government, University of Oxford. From 2014 to 2020 he set up and then led the UK’s National Cyber Security Centre, part of GCHQ.