Cyberspace is everywhere. It is so prevalent that the concept has started to lose its functional utility – and, as the recent Facebook rebrand demonstrates, big tech companies still want to make cyber interactions even more seamless and attractive. For the majority of the world’s population with access to the internet, life offline is increasingly difficult to imagine; and for those without, this lack is increasingly understood as detrimental to their fundamental human rights.
Cybersecurity is the foundation of our online life, while cyber insecurity is its Achilles’ heel. Within this broader picture, offensive cyber operations by states are an important – but far from the only – cause of global cyber insecurity. The effects of state offensive cyber operations are wide, with harms ranging from leaked or deleted personal data to the non-functioning of critical infrastructures such as oil pipelines. Categorizing and prioritizing these harms is difficult, as scholars and policymakers struggle to draw standard distinctions between peace and war, espionage and covert action, and military and intelligence functions.
However, studies of offensive cyber operations have rarely engaged with these harms as forms of violence. When they have, violence was often conceived very simply: to break things and kill people. We think the time is ripe to refine our assessment of what violence means in a digital era. To that end, we have written two articles laying out what violence is in relation to offensive cyber operations, and how offensive cyber operations are integrated into the violent tools of statecraft. Together, these articles offer a new perspective on the harms of offensive cyber operations, and one which we hope helps sidestep or solve the longstanding controversies above. In this blog-post, we give you an overview of the results.
First, let’s take a step back, as the disciplinary evolution of political science and international relations has an important lesson for the study of offensive cyber operations. In reaction to what was seen as an overly statist focus on systemic or strategic issues (such as nuclear stability) during the Cold War, the subfield of political violence sought to reorient these disciplines towards the study of violent acts committed for political purposes, whether by states at war, by armed groups and other non-state actors in civil wars, or in situations of unrest and revolution. Their conceptual rationale was that these are all part of a single continuum of organized violence, and so studying them together makes good theoretical sense. Their normative rationale was that the moral aim of studying war and conflict is to prevent or ameliorate its devastating impacts, and so a focus on violence (rather than, for example, stability) directs attention to the problems we need to solve most urgently.
Issues of political violence may seem starkly removed from the study of offensive cyber operations, because the current consensus is that cyber operations are almost always non-violent. It is very difficult to use cyber means to cause death and destruction in the manner of missiles, machetes, or machine guns. The most impactful cyber operations to date have caused extensive disruption with significant economic losses, but in each case systems recovered shortly afterward – albeit with intense effort – and no one died. This lack of violence is even seen as the unique promise of offensive cyber operations, as states and other actors, such as financially motivated cyber criminals, can achieve their goals in a more “civilized” way. Ransomware holds data hostage, rather than kidnapping real people. Offensive cyber operations could thus almost be seen as the “better angels of our digital nature”.
However, we think this reading relies on too narrow a definition of violence. The field of political violence is itself split between a narrow “minimalist” concept of violence referring to physical harm (often operationalized crudely as numbers of deaths), and a broader view of violence including psychological and community harms. This broader view of violence is gaining ground in international law, as scholars recognize the psychological and societal impacts of war and conflict, as well as in diverse policy arenas from cyber-bullying to intimate partner violence. The study of offensive cyber operations can also benefit from this broader view – which we term “harm to areas of human value” including bodily, affective, and community aspects.
This has clear consequences for the kinds of operations we study. While highly targeted cyber-espionage campaigns such as SUNBURST make global headlines, and might well have strategic national security consequences (e.g. by transferring state secrets or commercial intellectual property), these are not the most violent uses of offensive cyber capabilities. Instead, repressive use of surveillance operations, or the sabotage of critical infrastructure, could be much more devastating. Focusing on violence shifts us away from the disputed strategic impact of cyber-espionage towards more destructive operations. Conceptually, it means we should no longer privilege sophisticated state actors over cybercrime gangs or intimate partner surveillance; and normatively we should prioritize reducing harm over measuring shifts in the international balance of power.
In this expanded definition, when do cyber operations stop being violent? In terms of harm, there is no lower bound, and so context-specific assessments of severity are crucial. But our expanded definition includes criteria of intentionality – violence must be deliberate – and proximity – violence must be causally significant. Offensive cyber operations complicate both criteria. Many cyber operations have consequences far beyond those originally intended, due to the interconnectedness of digital networks, and at the same time they are far less causally proximate than kinetic weapons, as they manipulate information systems that are embedded in complex ways across state borders. Overall, the less deliberate and the less proximate the cyber component, the less violent the operation.
One might respond that this is all a bit abstract. Cyber operations don’t take place in a vacuum, and the important thing is not only the (lack of) violence of cyber operations, but also the violent consequences of their alternatives. The issue is relative, not absolute. We strongly endorse this view, and so in a separate article we put forward three logics of integration of cyber capabilities into violent state structures. These logics – substitution, support, and complement – weigh the benefits of using offensive cyber capabilities (OCCs) against an adversary instead of, as part of, and in addition to other means of violence, respectively.
The Three Logics of Integration and Their Effect on Violence

| | Substitution | Support | Complement |
| --- | --- | --- | --- |
| Definition | OCCs replace other means of achieving a particular end | OCCs are combined with other means to help achieve that end | OCCs achieve an end not available by other means |
| Effect on violence (narrow definition) | OCCs achieve the same end without, or with less, physical harm | OCCs are more precisely targeted; concerns about indirect effects limit use | Complementary effects of OCCs are not physically damaging, so not violent |
| Effect on violence (broad definition) | Affective/community harms could outweigh physical damage, depending on context | Affective harms occur even with better targeting: repression shifts rather than decreases | Affective/community harms caused by OCCs increase levels of violence overall |
What does this table show? Where many might think that substituting a cyber operation for a conventional means of violence leads to less violence, we argue that this is not necessarily so. Rather, it is an empirical question of the scale and scope of (also non-bodily) harm. The same can be said for supporting operations.
The most striking change, however, is in the area of complementary operations, i.e. offensive cyber capabilities that produce genuinely new forms of causing harm, for example digital repression or logical (but disabling) attacks against civilian data. Such complementary uses of OCCs are automatically non-violent on a narrow definition, because they have not – so far – caused bodily harm or death. On a broader understanding, these operations increase overall levels of violence.
For example, with regard to interstate violence, the notorious NotPetya operation is violent, though the exact intent of the attackers matters for the judgment of its severity. Regarding repression, the complementary use of OCCs to create an environment of pervasive censorship and fear, as in Xinjiang, also implies increased violence on an expanded definition. When particular groups are targeted by censorship technologies, there are effects on affective life (individual identities, including gender and ethnic identifications) and communal areas of value (social relationships and, at the larger scale, national identities).
Worryingly, it is precisely these new forms of harm that are hardest to capture with a policy apparatus built for a non-digital era. Concerns around escalation as a result of offensive cyber operations should be reoriented toward violent escalation, recognizing that some uses of OCCs could be strategically escalatory – e.g. SUNBURST – but without an accompanying increase in violence.
Policy responses to cyber operations should also be calibrated based on their logics of integration: supportive and substitutive uses are more likely to be amenable to existing frameworks, while complementary uses present a far more novel policy challenge. Acknowledging complementary uses of OCCs and understanding their violent effects gives defenders a better grasp of the complexity of defending against adversarial actions across a mostly civilian cyberspace.
Where next? In the articles above, we mainly consider positive cases of integration where cyber capabilities were used instead of/as part of/as well as other means. Future research should also consider negative cases where actors decided not to use cyber operations, instead staying with more conventional tactics. In these articles, we also set aside the bureaucratic politics of cyber operations – questions around institutional manoeuvring, domestic dynamics, departmental hierarchies, and individual personalities – which are of course a crucial component of decisions about when and where to deploy these capabilities.
Ultimately, understanding cyber operations as a form of political violence helps us prioritize research and policy efforts to counter the harms they cause. The most violent uses of OCCs may not be state-sponsored cyber-espionage or sabotage, but authoritarian practices of globalised digital repression, the indirect consequences of disrupted critical infrastructures, and digitally-enabled interpersonal coercion.
Florian J. Egloff is a Senior Researcher in Cybersecurity at the Center for Security Studies (CSS) at ETH Zurich. He is the author of the forthcoming book Semi-State Actors in Cybersecurity (Oxford University Press, 2022).
James Shires is an Assistant Professor in Cybersecurity Governance at the Institute of Security and Global Affairs, University of Leiden. He is the author of The Politics of Cybersecurity in the Middle East (Hurst/Oxford University Press 2021).
Although mentioned just once in the UK Government’s Integrated Review, ransomware is undoubtedly a crucial matter of national security, as recent events highlight. As Ciaran Martin has already discussed in The Alert, ransomware has recently disrupted oil and meat supplies, education infrastructure, and healthcare operations during a global pandemic. Offensive cyber might typically be associated with state operations, yet we are increasingly witnessing just how pernicious criminal operations can be.
Yet, the public conversation around ransomware is lopsided. Discussion has focused almost exclusively on the impact of ransomware and how it might be stymied via policy solutions – banning payments and enforcing mandatory victim disclosure are two regularly proposed antidotes.
Meanwhile, the drivers that have led to the current ransomware scourge have received much less attention. Ransomware has existed for years and was long regarded as a mere nuisance. Why, then, is it significantly more devastating today than it was five years ago? We must understand the genesis of its current rise.
Ransomware’s ascent to policymakers’ agendas and CISOs’ darkest nightmares is best explained outside of academic journals and policy briefs. This blog will instead draw on cyber threat intelligence (CTI) to explore ransomware’s prolific growth. By continuously tracking cyber criminal groups and responding directly to ransomware operations, the CTI community possesses unparalleled experience and the relevant data to fully grasp how the threat landscape has evolved.
Understanding the key ransomware developments is an important foundation for anyone thinking about the implications of ransomware and how to design meaningful policy responses. These trends will undoubtedly be well understood by many readers of The Alert already, yet it remains important ground to cover given that ransomware’s ascent to a matter of national security means that many of those interested in the topic may not have a cyber security background.
Rather than provide an entry-level primer on ransomware, however, the primary intention of this blog is different: to highlight that engagement with frontline insight and operational realities provides vital context at a far more strategic level. The primary function of commercial CTI will always be to assist network defenders and inform decision-making across security functions. Yet it has untapped potential to inform cyber security policy and broader debates within social science.
Shifts in the Ransomware Landscape
The nature of the ransomware threat has fundamentally shifted over the past five years. It is imperative we understand why.
The shift to post-compromise ransomware deployment is arguably the most significant development within the cyber criminal landscape to date.
The traditional approach to ransomware operations relied on a “shotgun” or spam-like approach. Indiscriminate campaigns would target an eclectic mix of victims. Targets might be selected from generic databases with the emphasis on sheer volume. Anything and everything was on the menu at this point. Ransomware might have encrypted a government official’s policy brief on their work laptop, yet could have equally encrypted a pensioner’s family photos on their personal device.
Cyber criminals did not necessarily know where their phishing emails were landing. This spread of targets meant only a small number were high-value. Average extortion fees therefore sat between $500 and $1,000. There would also be a good chance that any important data or systems could be restored from backups. Ransomware was, at this point, seen as more of a nuisance than anything else.
The anatomy of today’s ransomware operations could not be more different.
In post-compromise ransomware incidents, cyber criminals adopt a far more patient and methodical approach. A medley of downloaders, backdoors, and modular malware, as well as credential stuffing tactics and the exploitation of vulnerabilities, are all used to gain access to a target. Threat actors then move laterally and escalate privileges within a victim environment. Rather than deploying ransomware on the first system found, attackers instead search for the most critical and sensitive areas of a network. Operators will often attempt to delete backups as well as exfiltrate data. The security processes used to detect and prevent ransomware are often disabled at this point. It is only then, and at the end of a far more complex attack lifecycle, that ransomware is finally deployed. This is typically focused on core domain infrastructure and the systems that allow a network to function.
The overall severity of a ransomware incident is far higher when threat actors cast a wider net within a network and impact a victim’s most critical systems. This post-compromise approach is increasingly the norm and is the primary reason why ransomware incidents are so devastating today.
A Cyber Crime Ecosystem
Ransomware operations now frequently involve multiple threat actors working together. One group may gain access to a victim network before selling that access on (often via a separate initial access broker). A different group then leverages the initial access to move around within the network, conducting much of the activity discussed above, before deploying a ransomware variant – often developed by yet another group.
The ransomware affiliate model has also become more prominent in recent years. MAZE ransomware (now defunct) was one example of this approach in practice. MAZE affiliates were the individuals and groups working under the MAZE umbrella brand: recruited to compromise victims and deploy MAZE ransomware, they leaned on the central infrastructure, systems, and communications tools that the MAZE ransomware service operators had set up.
Multiple entities working together (either through an ad hoc or affiliate approach) create chronic headaches around attribution. They also highlight the complexity of the current cybercriminal threat and the need to focus on far more than just ransomware developers. My colleague Cian Lynch has written an excellent article on this topic that I consider essential reading for anyone serious about understanding the current ransomware landscape.
Ransomware operations rarely involve just ransomware, with criminals now deploying a variety of coercive tactics. It is well documented that today’s ransomware operations often involve data theft and extortion (made possible by a post-compromise approach). Victims refusing to pay an extortion fee find not only their systems rendered unusable, but also their sensitive data plastered all over criminal data leak sites. Some actors also request separate fees for non-distribution and decryption tools. The rise of these data theft and leak threats has led to ransomware now regularly being framed as “double extortion” in the popular media. Yet this is an oversimplification of a far more multifaceted threat.
Today’s extortion operations often combine a wider variety of coercive tactics. Cyber criminals understand that they can impose additional pressure by drumming up press coverage around an incident. Ransomware groups have subsequently become more proactive in reaching out to journalists and the media in a quest to create headlines and PR headaches. Not stopping there, these criminal groups have also notified business partners and suppliers, thereby increasing the strain on a victim’s third-party relations during a crisis period. Upping the ante even further, ransomware groups have been known to directly call and harass an organisation’s employees. And the list goes on, with distributed denial of service (DDoS) attacks also thrown into the mix.
Growing Impact on Operational Technology
Ransomware is also increasingly impacting operational technology (OT) – that is, the systems interacting directly with physical processes, machinery, and infrastructure. This includes power grids, water treatment facilities, and factory plants.
The criticality of OT systems means they are typically segmented from an organisation’s traditional IT network, yet they are increasingly impacted nonetheless. Part of this is explained (again) by the shift to post-compromise operations. With cyber criminals spending more time moving around within a target network, their odds of reaching OT assets increase.
Many critical infrastructure providers also have small security budgets. Rather than sophisticated means of access such as complex malware or purchased zero-day exploits, it is often simple, well-known misconfigurations or vulnerabilities that provide an easy way in. This also explains why even unsophisticated threat actors are now exploiting OT systems.
Disrupting OT assets is always a serious concern, yet it is also important to approach the issue with a measured perspective that avoids the all too common cyber doom-mongering or unhelpful warnings of a pending ‘Cyber Pearl Harbour’. The exploitation of basic vulnerabilities and misconfigurations is certainly frustrating. However, it also provides grounds for optimism, given that much of the security solution is already known and straightforward to implement provided that adequate security resources are in place.
Much of the ransomware activity impacting OT assets also appears inadvertent. Many of the cyber criminals behind this activity likely do not clearly differentiate between IT and OT networks, nor have a particular interest in OT assets. Instead, the impact on OT systems is most likely an incidental result of asset scanning by ransomware operators within victim networks.
Strategy and Policy Informed by Frontline Insight
The shifts in the ransomware landscape outlined above explain why ransomware has become such a serious matter of national security in recent years. Yet, they also demonstrate why insight from the frontlines should play a more fundamental role in informing today’s policy and strategic debates.
Public debate has largely focused on ransomware encryptors themselves, yet the emergence of post-compromise deployment means network defenders will quickly find themselves on the back foot if they neglect to consider initial access and lateral movement vectors. Recommendations and security best-practice advice must therefore urgently emphasise the importance of introducing defensive measures across the entire attack lifecycle. This will be a key priority for any government that is serious about tackling the ransomware threat.
Within the payment debate, an inordinate amount of time and attention has been devoted to doubting whether cyber criminals will even provide a working decryptor to victims who pay up. There has also been understandable concern about how extortion payments incentivise future ransomware attacks. Yet operational insight adds valuable colour to this discussion.
Those with extensive ransomware incident response experience will tell you that it is rare for a decryptor to not be provided to a paying victim. Rather, the more pertinent issue in the majority of responses is whether the decryptor provided is scalable. There is a big difference between a decryptor that can be rapidly deployed across an entire network and one that requires manual installation on each impacted system. There are countless examples like this where frontline experience provides valuable context to broader debates.
Post-compromise and data theft trends highlight the broader impacts of ransomware operations, regardless of whether an extortion fee is paid. While paying a ransom may prevent data being leaked, the fact it was stolen in the first place still exposes victims to reputational and regulatory implications as well as the possibility that data will be sold on or utilised in further operations.
Thriving cooperation between cyber criminal groups also reinforces the importance of disrupting activity across the entire cyber criminal ecosystem. Much was made of the statement by the developers linked to the BlackMatter ransomware variant that they would avoid targeting critical infrastructure and other sensitive industries. The statement was even touted as a win for President Biden’s warnings to ransomware operators. Yet the affiliate model highlights how BlackMatter developers represent just one part of a ransomware operation. There was little consideration at the time of whether the initial intrusion operators and other actors linked to the deployment of the BlackMatter variant would exercise similar moderation, or simply continue targeting critical industries while partnering with alternative ransomware developers.
The cyber criminal community is remarkably fluid and agile. When forums and marketplaces have banned ransomware discussions out of fear of law enforcement action, threat actors have quickly moved on to new ones. Actors have also simply obscured their intent: advertisements “looking for partners to provide access for ransomware operations” become “looking for access to major enterprises”. Alternatively, cyber criminals simply carry on while relying less on semi-public venues such as forums. These realities are well understood by those on the frontlines tracking cyber criminal developments, yet they have clear policy relevance as well. It is therefore vital that cyber security strategy, policy, and doctrine grasp the realities of cyber criminal cooperation. Measures to disrupt and deter ransomware operators should be judged by their impact across the entire cyber criminal community.
It is also important that policymakers grasp the full spectrum of coercive tactics used by cyber criminals. The current narrow focus on ransomware encryptors themselves poses a risk that other forms of extortion are neglected. If it is only encryptors that attract added law enforcement heat, then criminals could realistically change their approach – for example, turning to extortion operations comprising a cocktail of data leakage, DDoS attacks, and employee harassment without actually deploying ransomware. Rather than holding governments and law enforcement agencies to account only for what they are doing to combat ransomware, we should be scrutinising what they are doing against all forms of extortion. The public debate must widen significantly.
Frontline insights also provide valuable colour to more abstract and conceptual discussions. They can inform the theories and frameworks we borrow from international relations, security studies, and other social science disciplines. Ransomware is typically framed as a non-state threat by social scientists, yet the distributed network of cyber criminals naturally lends itself to other conceptual tools: actor-network theory and assemblage thinking, for instance, represent two apt lenses for dissecting today’s cyber criminal phenomena.
In a similar vein, understanding the often inadvertent targeting of OT systems informs our understanding of the relationship between human agency and malware, highlighting the unpredictable and volatile nature of malicious operations. It also shows that, in targeting OT assets, cyber criminals are not necessarily becoming more brazen. They may simply be unaware of, or incapable of fully anticipating, the full consequences of their actions – a finding that is perhaps equally, if not more, concerning.
Understanding the practicalities and operational details of ransomware operations is a crucial yet neglected ingredient in building strategic responses and policy proposals. Both policy and network defence-oriented communities can do better.
Cyber policy thinkers have a huge opportunity to develop more relevant and pragmatic ideas through engagement with CTI. One common misunderstanding among policy and international relations researchers is that their cyber security knowledge gaps can be addressed by working with those who have a technical or computer science background. This is a narrow view of what interdisciplinary research and collaboration represent. An understanding of how cyber operations and attack lifecycles work in practice is a completely different (and arguably more useful) perspective than coding wizardry or computer science know-how.
The CTI and network defence community must also actively engage with academia and policy formulation. CTI can offer so much to these broader debates, yet must also adapt to working with different stakeholders outside of a network defence context. The industry also has plenty of work to do in striking a more accessible and welcoming tone with potential collaborators.
Plenty of challenges and obstacles exist in building better links between different and often disparate cyber security communities. But, they are not insurmountable. When the benefits of cyber security policy that is informed by frontline insight are seriously considered, the opportunities are enormous. The challenge of crafting genuine interdisciplinary perspectives should be fully embraced.
Dr Jamie Collier is a Cyber Threat Intelligence Advisor at Mandiant, where he also oversees academic collaboration within Europe. He was formerly the Threat Intelligence Team Lead at Digital Shadows and has previous experience with NATO CCDCOE, Oxford Analytica, and PwC India.
We’re back with our ad hoc seminar series on ‘Global Challenges in Offensive Cyber’, where we will be inviting a range of speakers from different perspectives and backgrounds to discuss offensive cyber.
Our next speaker will be Dr Erica Lonergan on 9 November 2021 (3pm UK, 10am EST).
There is a longstanding debate among scholars and practitioners about whether cyber deterrence is possible. Most deterrence skeptics point to the problems of both credibility and capabilities that are associated with conducting offensive cyber operations as part of deterrence by punishment strategies. Conversely, some are more optimistic about cyber deterrence, pointing to the potential for deterrence by denial strategies that are anchored in defense and resilience. However, this debate largely overlooks another aspect of denial strategies; specifically, the potential for some forms of offensive operations to support denial, rather than punishment, strategies. Leveraging the conventional deterrence literature, rather than the nuclear deterrence literature, this discussion will explore the conditions under which offensive cyber operations could support denial strategies.
Dr. Erica Lonergan (formerly Borghard) is a senior fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, as well as an Adjunct Associate Professor and Research Scholar at the Saltzman Institute of War and Peace Studies at Columbia University. Erica served as a senior director on the US Cyberspace Solarium Commission, a Congressional commission established to develop a comprehensive national strategy to defend the United States in cyberspace. Previously, Erica was a senior fellow with the New American Engagement Initiative at the Scowcroft Center for Strategy and Security at the Atlantic Council. Prior to that, she served in two positions at the United States Military Academy at West Point, as an Assistant Professor in the Army Cyber Institute, and as Assistant Professor and Executive Director of the Rupert H. Johnson Grand Strategy Program.
An intrusion against a railway network, resulting in destructive effects leading to disruption of cargo and passenger transportation, would in previous decades likely have been considered a major strategic attack. Early writings on cyber warfare posited such actions only in theory, within the context of adapting ideas of other long-range strike and sabotage operations to the access opportunities afforded in a new operating environment. Yet an incident in Iran that surfaced in July 2021 has passed with only relatively limited attention, having been all too easily overlooked in the surge of recent ransomware incidents and the other immediate fires that command current intelligence production priorities and readership. Nonetheless, the case deserves greater scrutiny – as much for what was not done, as for what transpired on the wire. While superficially shocking, as an action against critical infrastructure with civil dependencies, it is nonetheless likely a rare example of responsibility and restraint in the employment of offensive cyber operations for covert action objectives.
This article will consider the operational details of the intrusion incident and its context as part of a wider campaign leveraging related malware variants against similar regional targets. We will examine the decision to target the railway network from the normative framework supplied by the Tallinn Manual. Recognizing that customary international law does not easily address the case at hand due to involvement of apparent non-state actors outside of acknowledged conflict, we will proceed to consider the operational-level decisions made in the planning and execution of this intrusion and its effects delivery, identifying factors of cyberweapons design, controlled employment, and available effects options not pursued by the unattributed attackers that in total demonstrate due care and responsibility. We will close with a look at the wider regional context contemporary to these events, and the implications for both current and future responsible offensive cyber operations.
Intrusion and actions on objective
Between 9 and 10 July 2021, destructive termination of an existing intrusion resulted in the disruption of train services in Iran. While initial reports from local media suggested extensive cancellations and delays to the scheduled movement of rolling stock, including passenger routes, the Iranian Transportation Ministry attempted to deny that any disruption had taken place. Handheld imagery then surfaced showing cancellations listed on public display monitors at the Tehran central rail station at Rah Ahan Square, alongside a message directing complaints to a phone number. This number was found to be the public line for the Office of the Supreme Leader (Beit-e Rahbari). Additional impacts to the public website of the Iranian Ministry of Roads and Urban Development were also reported on 10 July.
An Iranian cyberdefense firm which supplies the country’s national antivirus software provided the first public reporting on the malware payload used to deliver these effects. The payload was initially called Breakwin, and was described as a wiper also capable of corrupting master boot record hard disk partitions. It is likely that the firm was able to acquire these samples through either antivirus telemetry (although such telemetry was degraded by the extent to which the unattributed operators were able to disable endpoint protection), or through the firm’s sandbox services (which provide other Iranian researchers with a detonation and automated behavioral analysis environment), rather than directly through digital forensics/incident response (DFIR) and associated recovery efforts at the targeted Iranian networks. Direct DFIR engagement findings would presumably have been handled in a nonpublic manner. The company is known to provide additional malware analysis services to the government of Iran, but these are generally conducted under nondisclosure restrictions. Given the Iranian government’s sensitivity to the incident and earlier cases, it is unclear why the firm chose to disclose technical details which contradicted official narratives. This suggests disconnects between line malware intelligence efforts and ministry security structures. It is also not known why recovered samples were uploaded to public malware repositories outside of Iran.
While the Iranian researchers did not publish samples, the description of the malware was sufficient to allow at least two separate Western firms to acquire and reverse engineer the payloads, along with additional related components. These additional elements were possibly not previously identified by the Iranian analysts, likely due to limitations of their detection, telemetry processing, or analytic tooling. Alternatively, if known to Iranian defenders, these elements may deliberately not have been disclosed. (Reasons for such nondisclosure may be complex, ranging from compliance with regime attempts to control embarrassing or sensitive information, to efforts to reserve additional detail for privately circulated subscription reporting, to simply incomplete technical analysis.)
Initial intrusion reportedly took place as much as one month before effects were delivered. The destructive payload – dubbed Meteor by the malware’s designers – appears to have been developed at least six months earlier, and was staged at an unknown point between initial intrusion and execution. The payload consisted of multiple components triggered by automated scripts, with pre-configured execution parameters. Final destructive termination was initiated on a time trigger, and rendered infected systems unusable through a sequence of commands to disconnect individual machines from enterprise networks, defeat endpoint protection features, erase local logs, lock out local users, overwrite files on targeted systems, corrupt the master boot record, and display a lock-screen image crafted by the attackers (the source of the message seen displayed at the Tehran station).
While technical analysis of recovered malware provided strong insights into the intrusion incident and its destructive termination, it must be acknowledged that this intelligence picture remains incomplete. It is not known how the effects payloads were delivered to the target environment, nor what other actions may have been executed against network infrastructure, including routers, enterprise orchestration and management services, or other systems. It is clear, however, that no reported description of the event indicates any impact to operational technology (OT) network segments.
A previously unknown self-styled hacktivist group calling itself “Predatory Sparrow” (گنجشک درنده) claimed responsibility for the attack on 9 July 2021, citing Iranian state Fars news agency reporting and stating that it sought to protest abuse of the nation by its government. The group apparently took its name from the Passer montanus tree sparrow, whose habitat includes Iran, and its logo incorporates a cartoon bird in the style of the popular “Angry Birds” game franchise against a digital circuit background. The use of bird naming conventions is common within the Iranian hacking scene, with earlier groups such as Parastoo (“Swallow”) having used similar monikers. Major attack campaigns attributed to the Iranian government, such as Ababil, have also referenced the theme. The Predatory Sparrow claim received almost no attention at the time, although it was immediately challenged by an Iranian actor demanding “proof of exploit”. No response was noted in public social media, although further unknown private communication may have occurred through a Telegram channel provided by the Predatory Sparrow group.
The campaign in context
The Meteor code was found to share a high degree of similarity with other malware variants that may be evaluated as part of the same family, and which industry has assessed as evolving generations of tooling over time. These earlier payloads, Comet and Stardust, have been in use since at least September 2019. Previously recovered samples did include configuration elements providing victimology information, including indications of use against targets in Syria. Attacks against these networks, Arfada Petroleum and its parent Katerji Group companies, involved unique images displayed on destroyed victim computers that claimed responsibility in the name of a self-styled hacktivist group calling itself Indra. This name is also found as a string in earlier malware artifacts. The group has publicly claimed responsibility for cyber attacks against the Syrian regime since first announcing an attack on another Syrian firm, Alfadelex, in September 2019. Indra has also claimed additional unconfirmed intrusions against the airline Cham Wings and the Banias oil refinery. In the latter case, the group provided imagery of a purported industrial control system human machine interface display, depicting crude oil and fuel oil flow, electrical systems status, and emergency stop controls.
The Katerji Group is a primary supporter of the Syrian regime. Its founder, Hossam Katerji, has been a member of the Syrian Parliament since 2016, and along with his brothers has been a key financier involved in oil, construction, agriculture, and other commodities transactions. The firm’s assets were estimated at over USD 600 million in 2019. The group also reportedly controls multiple militias throughout the country to protect and advance its commercial interests, and is allegedly closely aligned with Iran. The firm has also been accused of facilitating transactions with Daesh.
Indra accounts further claimed that the Cham Wings intrusion provided intelligence on the movements of senior Iranian Revolutionary Guards Corps (IRGC) general Qassim Soleimani, which identified an alleged alias used by Soleimani for travel between Iran, Syria, and Iraq. Exfiltrated “sensitive documentation” was said to have included travel records one day before he was killed in a targeted US airstrike near Baghdad International Airport in January 2020. Indra claims the hack revealed “the stupidity of the former Quds Force commander” in using obviously false passport and other reservation data, and that this led directly to Soleimani’s targeting and death. This is the first public allegation of cyber operations involvement in the strike. However, this claim cannot be independently evaluated based upon open-source information.
The Indra group apparently ceased further public claims as of November 2020. The gap between this halt in activity, and the Comet/Stardust code family resurfacing with the new Meteor variant in July 2021, allegedly in the hands of a separate hacktivist group opposed to the regime, creates substantial uncertainty as to the actual relationships between these entities.
The author here deliberately does not reach attribution on this case. Multiple hypotheses have been raised in industry reporting and in debates within private trust groups, none of which have resulted in definitive analytic conclusions. Neither is it at all apparent that this campaign has concluded, despite the apparent current cessation of activities by the hacktivist entities who claimed responsibility.
Despite decades of acknowledged military and intelligence planning that has treated cyberspace as a potential domain for conflict, there remain few case examples through which to explore operational behavior – especially under crisis pressures. The vast majority of these cases to date merely highlight the problematic employment of offensive capabilities under immature concepts of operation, using portfolios that were clearly untested in controlled, target-relevant range environments, and that lacked appropriate management control and oversight by higher echelons. This is unsurprising, as major incidents have emerged from authoritarian states that apparently remain largely unconcerned by the hitherto weak reactions of Western states to ever more egregious campaigns across critical infrastructure networks. As additional states, and other players in the global competitive space of the cyber domain, develop their own offensive cyber operations capabilities, it has become increasingly important to understand how this instrument ought to be used – and not merely how it has been abused in the past by malicious actors. Mature planning, accounting for the complex competing equities and potential harms in a tightly coupled global domain, is a substantial burden even when considering operations intended for intelligence objectives, or as active defense response options. Offensive cyber operations involving effects on objective with disruptive or destructive intent face a still higher bar of responsibility.
Normative aspirations have sought to impose restraint in the selection of classes of targets that may be engaged. Yet despite substantive international attention and diplomatic engagement in both government and private channels, these efforts have had little practical effect. It is unfortunately unrealistic to expect that calls to place all critical infrastructure targets out of bounds will be heeded by global adversaries, or that even the most well-intentioned of states will be able to completely exclude the entirety of a target state’s strategic networks from the calculus of politics pursued through other means. But even as hopes of formal normative agreements continue to falter in the face of realpolitik, the prospect of more responsible decisions taken at the planner and operator level – and reinforced through management and oversight – is more important than ever. It is through these mechanisms that we may hope to see a form of convergence towards agreement around behaviors that are more responsible, or at least less potentially destabilizing. Tacit bargaining through repeated interactions, resulting in boundaries of agreed competition, is one of the core pillars postulated in cyber persistence theory.
It is impossible to consider this case without addressing the matter of target selection. As a general rule, critical infrastructure upon which civilian populations depend should not be subject to attack, as an extension of the prohibitions against attacks on civilian populations. Yet neither should a state presume that it will be able to count on such principles to stay the hand of rivals in the face of other belligerence. It has long been considered permissible to target even civilian objects (such as critical infrastructure networks) that “by their nature, location, purpose or use make an effective contribution to military action”, and “whose total or partial destruction, … or neutralization offers a definite military advantage”.
Conflict status and belligerents
There are substantial gray areas in considering the application of this principle to the Tehran incident. No state of formal military hostilities existed between belligerents in this case, and therefore most international humanitarian law rules that control targeting are not applicable here. However, it has been argued that customary international law factors ought to be extended as a matter of normative principle to encompass actions in otherwise undeclared engagements, outside of armed conflict. Such arguments for expanded application of these principles rest upon the idea that the intent in drafting the original foundational instruments of the Hague Convention, Geneva Convention, Rome Statute, and associated treaty mechanisms is a vital component of international order, and that it is appropriate and necessary to conform to this intent in new situations that could not have been envisioned by the original drafters. (The opposing argument should also be noted: that the original agreements were important both in outlining prohibited conduct and in delineating areas of state power that the involved parties chose deliberately to exclude from restrictions, as a limit to abstract ideals.)
Further complicating this analysis, at least one party represents itself as a non-state actor. While considerations of combatant status likewise have no application during peacetime, it has been argued that these ought to be normative considerations in offensive operations. As such, if claims of responsibility are taken at face value, the offensive actor here would appear to be an unprivileged belligerent, not entitled to combatant immunity. Unprivileged belligerents are generally assumed in most analysis to be presumptively improper in the normative sense, although this is a view grounded in, and seeking to reinforce, the state’s monopoly on legitimate uses of violence – in practice quite often challenged by the facts on the ground of force generation and employment in contemporary conflict. Nonetheless, cyber operations by non-state actors are generally considered not to breach standards of customary international law, such as being considered a use of force, a prohibited intervention, or a violation of sovereignty, given that these standards apply to states alone. It must also be noted that the notion of an unprivileged belligerent has in some legal analysis been supplanted by a focus on direct participation in hostilities as the determinant of combatant status.
However, it is also unclear whether a de facto relationship exists between the irregular actors and a state that might be considered a party to the wider regional conflict – a relationship that might change this analysis upon more comprehensive consideration. Given that at least one potential attribution hypothesis under debate encompasses contractor operations, and furthermore a plausible interpretation of the contractor as essentially mercenary in character, additional complications arise. If this theory is given weight, then such mercenaries involved in cyber operations would also be considered unprivileged belligerents. Yet even otherwise organized armed groups, properly under military command and meeting other combatant criteria, are considered unprivileged combatants if they fail to conduct operations in accordance with the laws of armed conflict. This returns us to the core issue of the propriety of target selection.
Target of military use and advantage
While transport networks here were almost certainly entangled with civilian uses, targeting is generally not restricted only to objects in immediate military use. The precise contours of the nexus with military activities – scoped by the duration of current or future military use, direct or sustaining contributions to warfighting functions, the absolute or relative value of contribution and therefore the military advantage accrued by neutralization, and associated factors – render this a complex matter. The state-owned nature of both the rail infrastructure in question and its supporting information and communications technology services, together with the predominant role of the IRGC-associated structure of bonyad enterprises (charitable trusts that play a substantial role in the Iranian economy), also introduces additional complications of military location.
The manner in which these networks were provisioned, and made distinct from the IRGC and its state enterprises, is salient to this analysis, but deeply unclear from the perspective of outside observers. IRGC-owned and -controlled enterprises encompass a number of entities involved in railway transport. A number of these are sanctions-designated entities due to their use in illicit proliferation-related activities – including, among others, the Bonyad Eastern Railway Company, the Sina Rail Pars Company, and the Kaveh Pars Mining Industries Development Company’s subsidiary Tehran International Transport Company, which are part of the Bonyad Mostazafan foundation designated by the US government under Executive Order 13876.
If the unattributed operators acquired their unknown access to the target through a known military network, and delivered effects accordingly, this may well be considered a valid military location by logical address space, even if distinct in physical geography – especially where specific network geolocation may not have been as readily ascertained by the offensive operators. Designation under international sanctions would suggest sustained military use sufficient to overcome a simple presumption of civilian character, even as the infrastructure also continues to be used for civilian purposes. Indeed, Tallinn discussions explicitly encompass this, listing “civilian rail networks being used by the military” as a target liable to attack under otherwise appropriate circumstances.
Cyberweapons effect and review
Targeting is inextricably bound to intended effect. Importantly, while the observed destructive intrusion impacted data upon which the functionality of physical rail operations depended, resulting in disruptive consequences for civilian activities for a period, this clearly did not reach the level of damage to physical objects, injury to civilians, or loss of life. The action is therefore almost certainly de minimis under current interpretations of customary international law, and arguably does not constitute an attack by these standards.
Critically, the events in this case suggest that greater impact, rising to a different threshold, may indeed have been possible given the extent of access to the target network apparent in the operation, but that the unattributed operators chose deliberately to employ effects that would avoid such outcomes. This strongly distinguishes the present case from earlier incidents such as the NotPetya/Nyetya wiper operations, in which extensive collateral damage to transportation targets occurred as a result of the unconstrained wormable propagation of destructive payloads, and in which adversary operators made no effort to impose guardrails on effects against additional, indiscriminately impacted targets they had not characterized or even identified a priori.
This apparent restraint also distinguishes the case from others that may be theorized, in which an attacker might have wished to achieve more substantive physical effects but failed to reach this threshold due to capability limitations or operational failures. No such factors were evident in the Tehran case. To the contrary, deployed payloads explicitly specified deletion targets with multiple validation controls. This is strongly characteristic of a deliberately engineered weapon, designed with more than passing consideration of legal review.
It is important, for the purpose of this analysis, to note that the ostensible purpose of the attack, as declared in the unattributed operators’ messaging, is not necessarily the actual objective sought. The highly public face of the operation – part of an event that planners and operators may have anticipated would receive global attention, based on prior media coverage of other events involving Iranian networks – may not reflect the controlling rationale of advantage obtained by denying, degrading, or destroying this target. Public messaging may have been complementary to, or even independent of, the functional advantage.
Intangible exclusions, reversibility, and morale considerations
International legal experts have for some time explicitly argued that data on target systems and networks are excluded from the definition of an object for the purposes of evaluating damage, a definition that has hitherto referred solely to tangible and physical things in their ordinary meaning. This matter remains in dispute, with the degree of centrality of specific data (alone or in larger aggregate) to civilian populations emerging as a key factor in weighing harms. As the importance of virtual objects to the functioning of modern economies grows, it has been argued that this exclusion does not properly take into account the impact on civilian populations. To date, however, any such expansion of the rule has not been generally accepted, however much it remains a normative aspiration.
The question of data as a targeted object becomes further salient when considering reversible effects, in terms of both international law and international relations. Considerations of damage must be evaluated very differently when assessing temporary disruption versus permanent destruction. In the Tehran incident, effects were delivered in a manner indicating awareness of the backup solution used by the target. While it appears from observed payload configurations that wiping effects were also directed against these backups, it is unclear whether this was executed broadly across all backup instances on the network, or whether the unattributed operators preserved selected backups that would enable reconstitution after a delay. Multiple configuration examples have been reported from the attack, with backup nodes not specified as targets in some instances. This is further complicated in that the backup solutions appear to have leveraged software provided by a US-headquartered firm, transactions presumably prohibited under current sanctions. However, Iranian IT engineers employed in sanctioned enterprises – including the state-owned oil company – have frequently listed experience with this solution, and earlier-generation versions of the software were observed in underground software distribution through at least March 2021. Actions against targets acquired in violation of sanctions restrictions take on a different character than actions against mere civilian objects.
Impact to civilian populations must also be weighed based on the degree of dependence on denied and degraded systems. Iranian passenger transport has adopted modern payment and ticketing systems only recently, and manual fallback options for recovery remain. In this case, no indications of deliberate targeting of point-of-sale infrastructure or other systems associated with these functions were observed (although these may have been degraded by generalized effects across the network). While less convenient, the railway could resume transportation services using prior manual ticketing processes as an interim measure, further arguing that civil impact was minimized. This is less true of military and proliferation-related use of rail services, which involve more complex problems of tracking shipping containers, bills of lading describing contents, and the details of intended cargo destinations – which, for shell companies or other front entities, may exist almost nowhere outside of these databases.
The de minimis nature of any physical effects in this case, within the law of armed conflict, also rules out violations of the prohibition on cyber attacks intended to spread terror among the civilian population. Indeed, Tallinn discussions explicitly envisioned potential offensive operations against mass transit, but within the context of causing fear of loss of life or injury. In contrast, disruption in the extant case was explicitly accompanied by messaging which indicated no further escalation of the event. Rather, the focus of communications by unattributed operators was on highlighting otherwise repressed political tensions undermining the Iranian regime, distinct from statements against the population as a whole. These are characteristic of messaging one would see in psychological operations campaigns with intended effects directed against leadership legitimacy. International legal consensus has explicitly declared such psychological cyber operations do not qualify as attacks. Further, even if such messaging, tied to the disruptive or destructive event, were to result incidentally in a decline of civilian morale, this would not be considered collateral damage.
One may also view the intrusion and effects delivery in Tehran not through the lens of military activity, but as covert action. This framing is somewhat unique to the United States, as the label is not recognized outside of US domestic statutes. International law is generally silent on such matters, much as in other activities closely associated with espionage. Yet even states with strong commitments to international order have occasion to resort to unilateral action due to structural weaknesses of the global system. While some have asserted such actions are presumptively illegitimate, a more robust debate will consider the circumstances, intentions, and employment. However, consensus on the application of international law has been slow to develop due to the reluctance of states to acknowledge and defend practices, even at a later remove. Yet there remains a need to understand actions which arise from contexts treated as espionage in customary international law, but which approach the possible threshold of attack – whether the kinetic bright line of human casualties, or the long shadows of destructive effects.
In light of the significant unresolved disagreement that remains when applying abstract normative principles to actions on the wire – disagreement all too commonly encountered when attempting to contort the body of customary international law to cover novel actions in the cyber domain – we must turn instead to questions of operational conduct. Where law is silent (or unable to speak coherently), it matters as much or more at present that the planning and execution of offensive cyber effects operations is responsible.
Observed indications of responsibility and restraint
If, as Lawrence Lessig has stated, “code is law”, then developers and operators are now responsible for decisions that previously would have been reserved for policymakers and other elements of the sovereign state. This is a practical devolution of functional authority that places on those conducting offensive cyber operations a burden to “do right” even as they pursue their competitive interests against other states and non-state targets. Black-letter law and formal policy always lag the realities on the ground, especially in fast-moving technology spaces. The weight of these responsibilities requires us to critically examine new cases not just in abstract legal frameworks but in operational detail. Where legal and norms-oriented approaches have not yet gained traction, we may yet see functional practices and de facto standards for a more professional kind of offensive behavior emerge from ongoing interactions on the wire.
The longstanding over-classification of even the most basic concepts of offensive cyber operations has for decades limited discussion of what could, and indeed should, be fundamental principles for the conduct of intelligence and covert action executed through malicious modification of systems and networks. This has led to ad hoc experimentation in which hostile operators frequently accomplish some intended objectives, but often with serious consequences to uninvolved third parties. These failure modes are generally well highlighted in current cyber threat intelligence. However, it is also important to note features which display a greater degree of responsibility.
Sadly, in recent cases of adversary action such as the HOLIDAY BEAR/DARK HALO/STELLAR PARTICLE/NOBELIUM intrusion, at most one can note that adversary behavior was less irresponsible than in prior campaigns, such as the earlier Russian intelligence service attributed operations by SANDWORM/VOODOO BEAR/IRON VIKING, or Chinese intelligence service attributed operations by HAFNIUM and follow-on activities across associated intrusion sets.
Responsibility in offensive cyber operations is not merely the result of good intentions. Rather, it requires deliberate planning, engineering, operational, management, and oversight efforts throughout the lifecycle of a campaign to ensure that access and actions on objective are properly aligned, adequately tailored, and appropriately balance potential harms to competing equities whilst accomplishing mission objectives. Responsible conduct requires programmatic maturity, individual professionalism, and organizational focus to achieve. Given that responsible offensive cyber operations are carried out through more than mere intention, characteristic features of these behaviors may be seen in the artifacts of specific engagements. Further inferences may also be drawn from these observables.
The incident in Tehran is notable in that it suggests that offensive capabilities were well tailored, through substantial intelligence support including extensive reconnaissance within target networks. This likely allowed for deliberate selection of specific nodes for attack effects. This is significant, in that the unattributed operators appear to have deliberately refrained from any effects delivered against the railway operational technology network segments.
While substantial unknowns persist regarding the manner in which target reconnaissance was executed, the operators took due care to ensure that whatever access options and tooling were employed for this phase of the action remained distinct from the payloads which delivered destructive effects. This is critical towards ensuring distinguishability across campaigns and in future operations, where intrusions executed for intelligence objectives are otherwise difficult to tell apart from effects operations. Such distinguishability is important to preventing potential inadvertent escalation or other negative outcomes from adversary misinterpretation of incompletely observed intrusion.
It is further important that the payloads appear to reflect substantial quality assurance engineering. Multiple measures to ensure redundancy of key guardrail functions seem to have been deliberately introduced, and the developers apparently sought to provide continuing state-of-health status so that operators would have positive control throughout delivery and execution. These capabilities bear the hallmarks of prior operational test and evaluation, likely within controlled range detonation environments. This remains a substantial requirement of maturity when engineering new offensive capabilities portfolios. The evolution of the malware family across prior incidents also suggests that live-fire assessment was incorporated into a continuing review of the tooling and its performance.
Test and evaluation efforts were, however, not perfect. Technical analysis suggests a seam between capabilities developers and operators' implementation, indicating that a complete end-to-end package of payload and offensive deployment scripts was not assembled as a cohesive whole until engagement. This is consistent with programmatic maturity, but within an operations model that relies upon modular options composed at the point of need. In this case, such a seam did not appear to compromise responsibility for discrimination and restraint; rather, it merely resulted in what some would characterize as operational security lapses. In the hands of other operators, or under different targeting circumstances, such disconnects between functional roles might have turned out differently.
While the deployment of the Meteor destructive payload and its associated components was highly scripted, this automation did not permit indiscriminate autonomous behavior. The detailed automation specified, to a high degree of control, which elements of the target were to be serviced for effects, and did not allow non-specified elements to be struck. This included, as an early step, the deliberate isolation of specific endpoints from the wider network environment.
Critically, the payload did not contain logic to independently select new targets, nor to propagate further through the network to deliver additional effects. Such autonomous additional targeting is not presumptively irresponsible, but it must be appropriately constrained within carefully defined parameters of action. This requires that formal principles of target discrimination, in both the technical and legal senses of the term, be incorporated into command-and-control functionality. It is also best if such capabilities require a human on the loop, if not one directly in the loop issuing commands to execute against independently discovered targets. In the current case, the extensive reconnaissance prior to execution appears to have precluded the need to conduct further reconnaissance by fire through automated actions.
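To make the distinction concrete, the discrimination guardrail described above can be sketched in a few lines. This is purely illustrative: the function name, target identifiers, and structure are invented for this sketch and describe nothing of the actual Meteor tooling.

```python
# Illustrative sketch only: hypothetical names; not a description of any real tooling.

# Targets enumerated at planning time, on the basis of prior reconnaissance.
PLANNED_TARGETS = {"node-17", "node-23"}

def authorize_effect(target: str, human_approved: bool = False) -> bool:
    """Permit effects only against pre-planned targets.

    A target discovered after planning is never struck autonomously;
    it requires explicit human approval (a human on the loop).
    """
    if target in PLANNED_TARGETS:
        return True
    return human_approved
```

In this model, deployment scripting consults the guardrail before every action, so that discrimination is enforced within command-and-control logic rather than left to operator discretion at execution time.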
Paths not taken
Given apparent extensive prior access to the target, a variety of effects scenarios highly likely could have manifested within the rail network, but did not. There are no indications of destructive or disruptive effects against operational technology networks. This does not appear to have been a result of capability limitations, but rather of deliberate decisions not to pursue available options. Indeed, code features observed to be present would have enabled additional effects with trivial effort. The Meteor wiper supported specific designation of processes to kill, although none were so specified in observed deployment. This function could readily have incorporated even publicly known process kill lists from other malware deployed against industrial control system environments, such as the EKANS/Snakehose ransomware variant (itself leveraging iterated development from the MegaCortex malware). These static lists, while not tailored to the Iranian rail network, would likely encompass at least some functionality expected to be encountered in the operational technology segment of the target environment. Given the extensive reconnaissance on target, it would likewise have been a relatively simple processing and exploitation task to compile a more narrowly tailored process kill list specific to the network as configured.
Additionally, the level of access apparently demonstrated by the operators in this case would likely have provided options against unique features of the railway target, using non-public capabilities that could be developed from descriptions of other classes of known vulnerabilities previously identified in similar targets. Automated signaling and switching technologies with network connectivity would be natural targets should an attacker wish to cause more extensive impact to the physical plant. Disruption of these functions could set conditions that foul track routing in a manner requiring extensive manual effort to reset, or even create enhanced risks of collision between rolling stock.
The current Iranian railway signaling system is reportedly built largely upon a legacy, analog frequency modulated (FM) radio network connecting to a digitally switched network management terminal. This dedicated system provides railway dispatch and command functions, as well as supporting additional public security communications. The dedicated network is reported to be connected to a TCP/IP network. New automated signaling functions, including GSM-R network options, have been variously pursued, both to increase rail transit capacity and as part of regional interoperability efforts. These functions almost certainly offer vulnerabilities that could have been exploited by a sufficiently skilled attacker. Prior industry research has demonstrated widespread vulnerability discovery in rail signaling functions, including options to deliver denial-of-service effects with potential safety-critical impacts. Indeed, Iranian researchers have also previously focused analysis on signaling failures as part of modernization investments.
It is unlikely, however, that defensive research efforts have fully remediated potential exploitation opportunities, especially given the extended access to the target network demonstrated by the unattributed operators in the current case. As a result, it is all the more notable that no such effects appear to have been scoped or otherwise pursued in the July 2021 intrusion incident.
Interestingly, these tools appear to have been deliberately designed to incorporate only well known, extensively documented features previously seen in the wild in other malware families. This is an additional hallmark of responsible operations: more sophisticated planners will consider the proliferation implications of deploying code that may be directly repurposed if recovered from a target system or network, as well as the knowledge that may transfer to the immediate adversary through weapons technical intelligence derived from reverse engineering and behavioral analysis of observed implants. (The latter type of proliferation is also of concern for all other observers that may have collected against the event, or acquired samples through subsequent circulation among cybersecurity research communities.)
The primary effects mechanisms of the Meteor wiper include deletion functions that have been compared directly to the NotPetya/Nyetya malware. While this is likely somewhat of an overstatement, the overall structure and logic of the tooling is exceptionally similar to many other contemporary ransomware variants, including publicly circulated open-source tooling used for threat emulation and other red team engagements. Additional functionality is provided by abused administrative software in circulation on the underground market since at least 2006. These design characteristics will have taught observers nothing new. Additional commands are executed through native system utilities in a manner that would be familiar to the penetration testing community of practice.
The selection of a time-based effects trigger is also significant, although frequently overlooked. While there are other, more subtle options, which may correspond to additional operational objectives, the choice to employ a literally decades-old design, previously seen in countless other incidents in the wild, provides additional assurance against proliferating novel concepts of operation.
Towards auditable implants
The Meteor payload and its associated automated scripting may be seen, from one perspective, to have suffered from multiple operational security failures, had the unattributed developers and operators been focused exclusively on non-detection. In particular, PowerShell commands intended to protect deployment of malware components through manipulation of endpoint whitelisting provided transparency into the installed malicious tooling, which was leveraged in subsequent technical analysis of the malware. Some of the identified malware samples in the campaign were also found to have retained strings used by developers for debugging purposes, which are more commonly eliminated when tooling is deployed for operational use. Much of the configuration was also left in readily understood English, whereas malware developers under more routine detection pressures will seek to obfuscate functionality in more elaborate ways. Critically, the implant also incorporated functionality not used in the extant attack, which an actor more focused on denying potential observables about future capabilities options could have removed.
However, these choices may also be read as deliberate decisions to ensure that the attack was not misinterpreted by observers, and that analysis of any recovered samples of this specific weaponized code instance could be completed quickly, in order to reduce potential tensions arising from uncertainty about the further extent of immediate action. While planners and developers who build tooling for long-term espionage-focused campaigns may choose to prioritize non-detection features, deliberate effects operations require additional consideration of target (and importantly, target leadership) reactions. These decisions are relatively costly for planners, in that they trade away features that might otherwise maximize probability of mission success in order to reduce the chance that hostile cyber intelligence services fail to accurately observe and understand an action. They also presume interactions over a relatively shorter timespan; the cumulative effect of such options may compound costs for the responsible operator across extended competitive and conflict iterations.
Ultimately, such choices lead to a potential scenario in which an attacker may choose to deploy what are effectively auditable implants. At the simplest level, these artifacts may include specific watermarks intended to convey intended function, even voluntary attribution. More complex tooling in this model documents within itself its scope, the actions it has taken, and importantly serves to rule out other actions not taken. The expectation is that such auditable features may be obfuscated, or perhaps even fully encrypted – to enable an operationally relevant window of time for mission completion, after which it is assumed that a sufficiently skilled technical intelligence team will be able to reverse engineer the artifacts and reconstruct the relevant events of the effects operation. This may even be further enabled by post hoc release of decryption keys for such audit artifacts, either as part of termination and withdrawal from access footholds, or through other channels. While it does not appear that the Meteor implant was designed deliberately to offer such post-incident features, many aspects of the supposed operational security mistakes do serve to offer similar utility for analysis after the fact.
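One minimal way to picture such an auditable artifact, offered here only as a hypothetical sketch (none of these names or structures describe the Meteor implant), is a hash-chained action log: each entry commits to the declared scope and to every prior entry, so a technical intelligence team recovering the log can reconstruct what was done, and rule out what was not, while any tampering breaks the chain. In practice the log would likely be encrypted, with keys released post hoc; that layer is omitted below.

```python
import hashlib
import json

class AuditTrail:
    """Tamper-evident, hash-chained record of an operation's scope and actions."""

    def __init__(self, declared_scope):
        self.scope = declared_scope
        self.entries = []
        # The chain is anchored in a digest of the declared scope itself.
        self._prev = self._digest({"scope": declared_scope})

    @staticmethod
    def _digest(obj) -> str:
        # sort_keys gives a deterministic serialization for hashing.
        return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

    def record(self, action: str, target: str) -> str:
        """Append an action; each entry commits to all prior entries."""
        body = {"action": action, "target": target, "prev": self._prev}
        entry = dict(body, digest=self._digest(body))
        self.entries.append(entry)
        self._prev = entry["digest"]
        return entry["digest"]

    def verify(self) -> bool:
        """Recompute the chain; any altered or reordered entry breaks verification."""
        prev = self._digest({"scope": self.scope})
        for e in self.entries:
            body = {"action": e["action"], "target": e["target"], "prev": e["prev"]}
            if e["prev"] != prev or e["digest"] != self._digest(body):
                return False
            prev = e["digest"]
        return True
```

A recovered, decrypted trail of this shape documents not only the actions taken but, through the integrity of the chain, implicitly bounds what was not done within the declared scope.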
Implications and outlook
This case is an important window into contemporary cyber operations praxis, and a rare example of more responsible operational planning and decision-making than is typically seen in the largely unrestrained and ever more escalatory actions of the authoritarian and revisionist states that dominate current intelligence reporting and media headlines. However, it is also ethically challenging on one level, in that it remains unattributed: claims of responsibility by the identified hacktivist actors must necessarily remain suspect. Indeed, this uncertainty of attribution, coupled with cui bono analysis, has reportedly stayed the hand of some industry researchers who might otherwise have provided additional technical review. In one sense, this is understandable, in that observed victimology to date suggests that the Western customers of major cybersecurity firms appear to have little reason to be concerned about this threat activity. The tooling itself poses little proliferation risk. From this perspective, it is perhaps right and proper that the incident be set aside, allowing resources to be focused on the ever continuing press of new samples and fresh intrusions. Such a decision also avoids the ethical implications inherent in providing public analysis of malware development and operational tradecraft which may be abused by adversary actors to improve detection and engineer new countermeasures.
There is nonetheless a balance of competing harms. The near-exclusive focus on what is entirely irresponsible behavior by a series of adversaries means there is little to no discussion of what intrusion activity “done right” looks like. This is even more true of effects operations. This leaves academics, policy communities, and even developing programs among allies and partners often adrift in a sea of abstract theory towards unrealized norms – unable to recognize or articulate mistakes or different courses of action that might avoid them. It is also highly likely that the planners of an effects operation intended to generate highly visible disruption, with associated messaging for psychological impact, will have anticipated detection and disclosure of the tooling used to deliver these effects, and made operational choices accordingly that would minimize potential degradation of future options.
Addressing this present situation is in its own way potentially problematic. Real concerns exist that such conversations may lead hostile intrusion and attack campaigns to “improve” in ways that advance their sophistication and capacity to inflict harm on defenders, or to defang the rare access and effects opportunities available to states acting with restraint and responsibility. There is also a high likelihood that adversaries will continue to ignore notions of responsibility grounded in a global international order to which they are inimically opposed, and which they see as offering only cost without advantage. Yet it may be argued that many emerging offensive programs might seek to behave more responsibly, whether out of organizational self-interest in avoiding detection and associated blowback, or from individual operators’ recognition that spying and fighting “in the machine” is a professional activity. Nobody wants to be an intelligence service or national cyber command’s equivalent of a script kiddie.
The current case also highlights the importance of understanding incidents not only individually, but within the wider context of ongoing operations. The campaign is the proper unit of analysis for cyber conflict. Here, the identified campaign – encompassing both the Indra phase and the Predatory Sparrow phase of employment of a common malware family – must also be seen in the context of regional actions impacting transportation sector targets. The strike against the Iranian railway network comes against the backdrop of repeated kinetic attacks on maritime shipping and regional oil and gas industry targets. In no small part, such mining incidents and missile fires must also be understood in light of recurring compromise of regional port and shipping targets. Earlier unattributed exchanges have also resulted in disruption at Iranian ports, including effects that appeared to impact sanctioned Iranian state entities. Yet the regime has reportedly continued to fund novel offensive cyber capabilities development intended to destroy vessels at sea and to execute further loitering munitions attacks.
It is only when stepping back to view the whole of the ongoing clandestine conflict that the most significant aspects of responsibility in the present case may be understood. In comparison to other ongoing exchanges of lethal fires and destructive cyber effects within the region, the action against the Iranian rail network must be seen as exceptionally measured. It is a very real reminder that competing interests oppose the regime, and that responses, even when confined to the cyber domain, need not conform to precise parity with the targets introduced by the adversary. Here, the theocracy has employed its transportation assets to further ongoing proliferation and regional terror attacks, and has directly targeted other countries’ civil transport through its own attacks. This brought the theocracy’s own transport networks to the table as potential targets of reciprocal action.
This also brings to the forefront what may be assessed as the most likely intent of the cyber attack against the railway target. A very visible demonstrative action against a representative target, within strong restraints and executed in a deliberately responsible fashion, highlights the pervasive vulnerability of Iranian networks to potential disruption and destruction but risks little unanticipated collateral damage. The strike almost certainly involved no exquisite capabilities, and while the mechanisms of access remain unclear, these were likely commonly known exposures that the generally woeful state of Iranian cyber defenses had never appropriately remediated. The selection of this target therefore does nothing to tip the attacker’s hand as to the true extent of offensive options that might be employed in a less constrained engagement, or in a sustained campaign intending to inflict strategic costs upon the regime.
Unfortunately, however strongly intended and communicated the signal, it is constrained by the receptive capacity of the Iranian regime. There is little indication to date that the core leadership is responsive to such demonstrative actions, and it has shown continued disregard for pressure arising from a frustrated and disillusioned population. The regime’s elites continue to profit from corrupt relationships cultivated in regional adventurism, and see near-term possibilities of further relief from the crippling sanctions imposed under the prior US administration. As faltering negotiations stumble on, while clandestine acquisition activities continue to pursue prohibited uranium enrichment to set conditions for potential materials diversion and nuclear breakout (using expertise and technology never adequately declared, monitored, or decommissioned despite earlier agreements), the potential leverage that can be generated by restrained and responsible offensive cyber operations is rapidly declining. And this is the real concern: once reasonable options have been exhausted, there will remain nothing but capabilities reserved for in extremis situations.
JD Work now serves at the Marine Corps University’s Krulak Center for Innovation and Future Warfare, and holds additional affiliations with Columbia University’s Saltzman Institute of War and Peace Studies and the Atlantic Council’s Cyber Statecraft Initiative. He has over two decades of experience working in cyber intelligence and operations roles for the private sector and US government.
 The views and opinions expressed here are those of the author and do not necessarily reflect the official policy or position of any agency of the US government or other organization.
 The author would like to thank Dave Aitel, Juan Andrés Guerrero-Saade, and Gary Brown for comments and critique.
 Winn Schwartau, Information Warfare: Chaos on the Electronic Superhighway (New York: Thunder’s Mouth Press, 1994); John Arquilla, David Ronfeldt, In Athena’s Camp: Preparing for Conflict in the Information Age (Santa Monica: RAND, 1997); Gregory J. Rattray, Strategic Warfare in Cyberspace (London: MIT Press, 2001).
 Reuters. “Iran transport ministry hit by second apparent cyberattack in days.” 10 July 2021.
 Amn Pardaz. “Trojan.Win32.BreakWin”. 13 July 2021.
 Islamic Republic News Agency (IRNA). “What was the story of the cyber attacks on the Ministry of Roads and Urban Development and Railways?” 18 July 2021. (Farsi)
 Juan Andrés Guerrero-Saade. “MeteorExpress: Mysterious Wiper Paralyzes Iranian Trains with Epic Troll.” SentinelOne. 29 July 2021.
 Flashpoint. “Inside an Iranian Hacker Collective: An Exclusive Flashpoint Interview with Parastoo”. 3 February 2016.
 JD Work. “Echoes of Ababil: Re-examining formative history of cyber conflict and its implications for future engagements.” Soldiers and Civilians in the Cauldron of War, 86th Annual Meeting of the Society for Military History. May 2019.
 Checkpoint, “Indra — Hackers Behind Recent Attacks on Iran”. 14 August 2021.
 Kevin Mazur, Revolution in Syria (Cambridge: Cambridge University Press, 2021).
 Lucas Winter. “The Katerji Group- A New Key Player in the Syrian Loyalist Universe”. OE Watch, Foreign Military Studies Office. September 2019.
 Michael Georgy, Maha El Dahan. “How a businessman struck a deal with Islamic State to help Assad feed Syrians.” Reuters. 11 October 2017.
 Wilfried Buchta, “Who Rules Iran? The Structure of Power in the Islamic Republic”, (Washington DC: Brookings Institution Press, 2000); Frederic Wehrey, Jerrold D. Green, Brian Nichiporuk, Alireza Nader, Lydia Hansell, Rasool Nafisi and S. R. Bohandy, “The Rise of the Pasdaran: Assessing the Domestic Roles of Iran’s Islamic Revolutionary Guards”, (Santa Monica: RAND, 2009).
 Michael Schmitt and Jeffrey Biller. “The NotPetya Cyber Operation as a Case Study of International Law”. EJIL Talk blog. 11 July 2017. https://www.ejiltalk.org/the-notpetya-cyber-operation-as-a-case-study-of-international-law/; Tarah Wheeler, John Alderdice. “The Geneva Convention and International Cyber Incidents”. Belfer Center for Science and International Affairs, Kennedy School, Harvard University. 4 February 2021; https://www.youtube.com/watch?v=nOOk3ltPvEw; Monica Kaminska, Dennis Broeders, Fabio Cristiano, “Limiting Viral Spread: Automated Cyber Operations and the Principles of Distinction and Discrimination in the Grey Zone”, 13th International Conference on Cyber Conflict (CyCon): ‘Going Viral’. Tallinn, Estonia. 2021.
 Gary D. Brown, Andrew O. Metcalf. “Easier Said than Done: Legal Reviews of Cyber Weapons”. 7 J. Nat’l Sec. L. & Pol’y 115 (2014); David Wallace, “Cyber Weapon Reviews under International Humanitarian Law: A Critical Analysis”. NATO CCDCOE. 2018.
 Beth D. Graboritz, James W. Morford, Kelly M. Truax, “Why the Law of Armed Conflict (LOAC) Must Be Expanded to Cover Vital Civilian Data”, Cyber Defense Review 5, No 3 (2020): 121-131.
 Neil C. Rowe, “War Crimes from Cyber-weapons”, Journal of Information Warfare 6, no 3 (2007): 15-25; Michael N. Schmitt, “‘Attack’ as a term of art in international law: The cyber operations context”, 4th International Conference on Cyber Conflict (CYCON). Tallinn, Estonia. 2012; Max Smeets, Herbert S. Lin. “Offensive cyber capabilities: To what ends?” 10th International Conference on Cyber Conflict (CyCon). Tallinn, Estonia. 2018.
 Remarks under Chatham House rule, GlassHouse Center. 14 September 2021.
 Panayotis A Yannakogeorgos, Eneken Tikk. “Stuxnet as cyber-enabled sanctions enforcement”. International Conference on Cyber Conflict (CyCon US). Washington, DC. 21-23 October 2016; Mark Peters, “Cyber Enhanced Sanction Strategies: Do Options Exist?” Journal of Law & Cyber Warfare 6 no 1 (2017): 95-154.
 William Michael Reisman and James E. Baker, Regulating Covert Action: Practices, Contexts and Policies of Covert Coercion Abroad in International and American Law (New Haven, CT: Yale University Press, 1992).
 Alexandra H. Perina, “Black Holes and Open Secrets: The Impact of Covert Action on International Law”, 53 Colum. J. Transnat’l L. 507 (2014-2015)
 Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999)
 Gary Brown, “Spying and Fighting in Cyberspace,” Journal of National Security Law & Policy, 2016
 JD Work. “Who Hath Measured the (Proving) Ground: Variation in Offensive Capabilities Test and Evaluation.” 15th International Conference on Cyber Warfare and Security. Old Dominion University, Norfolk, VA. March 2020.
 Dragos. “EKANS Ransomware and ICS Operations”. 3 February 2020.
 冉晓径 (Ran Xiaojing), “浅谈模拟集群通信系统在伊朗铁路中应用” (“On the Application of Analog Trunk Communication System in Iranian Railways”), 铁道通信信号 (Railway Communication Signal) 46 no 6 (2010).
 Javad Lessan, Ahmad Mirabadi, & Yaser Gholamzadeh Jeddi, “Signaling system selection based on a full fuzzy hierarchical-TOPSIS algorithm”, International Journal of Management Science and Engineering Management 5, No 5 (2010): 393-400; M. Tamannaei, M. Shafiepour, H. Haghshenas, B. Tahmasebi, “Two Comprehensive Strategies to Prioritize the Capacity Improvement Solutions in Railway Networks”, International Journal of Railway Research 3, No 1 (2016): 9-18; Mohammad Ali Sandidzadeh, Farzaad Soleymaani, Shahrouz Shirazi, “Design and Implementation of a Control, Monitoring and Supervision System for Train Movement Based on Fixed Block Signaling System with AVR Microprocessor,” Materials Science and Engineering 671 (2020).
 Christian Schlehuber, Erik Tews, Stefan Katzenbeisser, “IT-Security in Railway Signalling Systems.” In Reimer H., Pohlmann N., Schneider W. (eds) ISSE 2014 Securing Electronic Business Processes. (Wiesbaden: Springer Vieweg, 2014); Sergey Gordeychik, Aleksandr Timorin. “The Great Train Cyber Robbery”. Chaos Communications Congress, Hamburg. 27 December 2015.
 M. Yaghini, F. Ghofrani, S. Molla, M. Amereh, B. Javanbakht. “Data Analysis of Failures Of Signaling And Communication Equipment In Iranian Railways Using Data Mining Techniques”. Journal of Transportation Research 11 no 4 (2015): 379-389.
 Peter J. Denning. “Computer Viruses”. Research Institute for Advanced Computer Science, Ames Research Center, National Aeronautics and Space Administration. 1988; Eugene H. Spafford. “Computer Viruses: A Form of Artificial Life”. Purdue University. 1990.
 JD Work. “Competitive Dynamics of Observation and Sensemaking Processes Impacting Cyber Policy (Mis)Perceptions”. Bridging the Gap Workshop, Cyber Conflict Studies Association. 12 November 2019.
 Remarks under Chatham House rule, GlassHouse Center, 30 July 2021.
 Such ethical dilemmas are discussed further in Juan Andrés Guerrero-Saade. “The ethics and perils of APT research: an unexpected transition into intelligence brokerage”. Virus Bulletin Conference. Prague. 30 September – 2 October 2015; JD Work. “Ethics considerations in victimology collection & analysis in cyber intelligence.” Legally Immoral Activity: Testing the Limits of Intelligence Collection. The Citadel, Charleston. 12 February 2020.
 This point has been made frequently within the community of practice, but also by scholars including Richard J. Harknett and Max Smeets, “Cyber campaigns and strategic outcomes”, Journal of Strategic Studies (2020).
 JD Work. “Counter-cyber operations and escalation dynamics in recent Iranian crisis actions”, Workshop on Crisis Stability and Cyber Conflict, Columbia University. February 2020.
 International Atomic Energy Agency. “Verification and monitoring in the Islamic Republic of Iran in light of United Nations Security Council resolution 2231 (2015)”. GOV/2021/39. 7 September 2021. Derestricted.
In cybersecurity, the difference between offense and defense is at once extremely straightforward and incredibly difficult to pin down. It is straightforward because defending your own networks and data and attacking someone else’s look completely different: the former involves implementing security controls and detection systems within the confines of your own computer systems, while the latter involves exploiting vulnerabilities in someone else’s systems. So it really should not be difficult to designate any particular activity in cyberspace by an individual country as offense or defense—except that, increasingly, countries seem to view the best cyber defense as, well, offense.
In 2018, the United States Cyber Command announced a new cyberspace strategy grounded in the ideas of persistent engagement with adversaries and defending forward so that defensive interventions occurred “as close as possible to the origin of adversary activity.” The crux of the strategy was essentially to broaden the boundaries of defense so that it would include cyber activities that occurred outside the borders of the networks being defended to encompass activities targeting adversary networks. In other words, offensive cyber activity was rebranded as “forward defense.”
I wrote at the time about my skepticism around relying on offensive cyber capabilities as a defensive strategy, but there were certainly points to recommend this strategy, especially the sense that a defensive strategy focused on hardening networks and computer systems against attacks simply was not working very well. Three years later, though, it is a little hard to tell whether ramping up offensive cyber activity has contributed to a stronger cybersecurity posture for the United States—or, indeed, even how much that offensive cyber activity has been ramped up under the new strategy.
Part of what makes it difficult to assess the effectiveness of offensive cyber activity as a means of defense is that many of these offensive operations may be carried out in secret. So in addition to a few specific operations reported in the media—including a 2019 attack on the Russian Internet Research Agency and two other attacks, also in 2019, directed at Iran—it is possible that there are many examples of persistent engagement in cyberspace that the public simply isn’t privy to. For example, when the Russian REvil ransomware gang went offline earlier this year, following several high-profile ransomware attacks on U.S. targets including Colonial Pipeline and JBS, it was very unclear whether that was the work of the United States government or not.
I don’t think that taking the servers used to perpetrate ransomware attacks offline is necessarily a bad idea. If that is what happened in this case (and again, no one seems to be sure), it does seem like there would be a benefit to making clear that is why the servers went down, in order to send a clear signal to other cybercriminals. But just because this type of offensive cyber activity may be warranted and useful does not mean that it is defense, in the sense that it makes computer systems any safer. Taking out the computer that has attacked you is not defense, it’s retribution, and that’s an important distinction to keep clear, if only because it highlights the need to do a very different kind of work to actually defend infrastructure from ransomware.
More generally, I am wary of blurring the lines between offense and defense—and particularly the language we use to describe each. That’s partly because I think there are significant differences between interfering with your own networks and messing around in someone else’s, but mainly because I worry that relying on offense as a country’s main source of defense can lead to countries neglecting the less exciting but equally (if not more) important work of trying to build out more secure infrastructure and computer networks.
It is possible to point to a series of severe cyberattacks in the United States over the course of the past few years (SolarWinds, Colonial Pipeline, JBS, to name just a few) and argue that their severity suggests persistent engagement has not worked and offensive cyber activity is not an effective defensive strategy in cyberspace. It is equally possible to invoke the relative security of the 2020 election and other unknown, secret offensive cyber operations as evidence that this strategy has been a great success for the United States. Given how little is known publicly about the extent of these operations and how little it is possible to know about what the landscape of threats and cyberattacks would have looked like under a different strategy, I’m not convinced it is possible to draw any very strong conclusions one way or the other.
What does seem clear from the past few years is that offensive cyber activities do not—and will not—suffice to defend computer networks absent the more traditional, inward-looking work of defense. There may well be value in both offensive and defensive cybersecurity efforts, but there is also value in keeping them distinct, in order to clarify that the rules, standards, and norms for each are quite different and, most importantly, that offense cannot and should not be viewed as a substitute for a strong defense.
Josephine Wolff is an Associate Professor of Cybersecurity Policy, The Fletcher School at Tufts University.