
Hack-Back: Toward A Legal Framework For Cyber Self-Defense

By Nicholas Winstead


Lost in Translation: Governance and Cyberspace

The idea of using force to prevent or stop crime is intuitive in the physical world. You can fight back against an attacker. You can tackle a purse-snatcher. You can reach into the pockets of a shoplifter before he leaves your store. You can hire rough men - even armed men - to guard your belongings. Of course, there are many things that you cannot do, and reasonable people can disagree about the limits of these actions, but the law generally recognizes that force is sometimes necessary to defend persons and property, halt ongoing crimes, and prevent suspects from fleeing.

The right of private entities to use reasonable force has not been extended to cyberspace. Under current law, it is illegal for the victim of a cyberattack to “hack-back” – that is, to launch a counterattack aimed at disabling or collecting evidence against the perpetrator. This blanket prohibition imposes enormous constraints on the private sector’s ability to respond to cyberattacks. Criminalizing self-defense outright would seem ridiculous in the physical world, but cyberspace blurs traditional conceptions of property, security, self-defense, and the role of the state. Consider these questions: Is intruding onto my computer the same as intruding into my home? In both instances, my property and privacy are being violated. Can I pursue a cybercriminal through the web the same way I would chase a purse-snatcher down a busy street? In both instances, the chase could crash into innocent third parties. Can a bank place malware in its digital records the same way it would put an exploding dye pack into a bag of money handed to a bank robber? In both instances, the criminal is going to suffer some property damage before his day in court.

These comparisons aren’t exact, and, consequently, legal frameworks don’t translate perfectly into the cyber realm. Indeed, this dilemma is just one example of how rapid technological change can outpace not just laws but the conceptual frameworks that underpin them. When this gap forms, policymakers should create new, more flexible systems to weather the tides of change. Unfortunately, in the case of counterhacking, the U.S. has done the opposite, banning even reasonable instances of hack-back. In this article, I advocate loosening this restriction to allow some forms of hack-back. My basic argument is this: the law can and should distinguish between reasonable and excessive force in cyberspace. Counterhacking techniques have different levels of severity, some of which may be appropriate in certain scenarios. With this in mind, I propose a framework to balance the benefits and risks of legalized hack-back.

A Slippery Slope to the Wild West? Risks of Hack-Back

Before delving into the distinctions between counterhacks, it is helpful to explain the general risks of hack-back. Many reasonable critics have voiced strong opposition to hack-back, with some even calling it “the worst idea in cyber security.” Admiral Michael Rogers compared legal hack-back to “putting more gunfighters out on the streets of the Wild West.” His allusion to the Wild West is a common motif in criticism of hack-back. It reflects critics’ concern that legalizing hack-back would be akin to permitting vigilantism, allowing private entities to play sheriff, judge, and executioner in cyberspace. Critics foresee three major consequences. First, they fear that legalizing counterhacking would allow companies to carry out their own vigilante justice against the accused with no due process of law. Private companies may launch attacks indiscriminately with little evidence, or they may inflict grossly disproportionate punishment on an attacker. Second, critics point out that innocent third parties may be harmed in counterattacks. Often, cyber threat actors will hijack unwitting victims’ computers to carry out an attack; these computers could become collateral damage of a hack-back. Third, legalized hack-back could have international implications if a private company finds itself attacking a nation-state actor. This would be dangerous, not only because the nation-state would likely far outmatch the private company, but also because the fight could escalate and drag the United States into an international conflict.

The problem with these criticisms - and with current law - is that they do not distinguish between different kinds of counterhacks. To be sure, counterhacking has the potential to infringe on the privacy and property rights of criminals and third parties; however, hack-back techniques have varying effects that may be appropriate in different contexts. These techniques can be categorized along a spectrum of “utility” that weighs the severity of a counterhack against its benefit. Severity refers to how destructive or invasive the counterhack is. Benefit refers to how effectively a technique accomplishes some legitimate purpose, namely stopping an ongoing attack, protecting data, or gathering evidence.

On the high-utility end of the spectrum are techniques that involve some intrusion into an attacker’s system, but with minimal damage. These include attributional cyber “beacons” that track the attacker’s location and collect other basic forensic evidence, as well as “dye packets” that automatically encrypt stolen data, rendering it unreadable to the attacker. The middle of the spectrum consists of techniques that involve more serious intrusions and actual damage to the attacker’s systems. One example would be aggressive monitoring techniques like keyloggers or screengrabbing. Another would be malware “booby traps,” triggered by an intrusion or exfiltration of data, that wipe the attacker’s computer memory. A final example in this category is malware that temporarily takes down an attacker’s server in order to stop an ongoing attack. Last is the low-utility end of the spectrum, consisting of purely offensive, retaliatory operations; DDoS attacks, ransomware, and similar punitive strikes fall into this category.
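To make the high-utility end of the spectrum concrete, below is a minimal sketch, in Python, of one passive flavor of attributional beacon: a decoy document seeded with a unique tracking link, plus a listener that logs whoever later fetches that link. The decoy text, token scheme, and tracker.example.com endpoint are all hypothetical, and this variant deliberately executes no code on the attacker’s machine; real beacons and dye packets vary widely in how intrusive they are.

```python
# A passive, canary-token-style "beacon" sketch. Everything here is illustrative:
# the decoy contents, the token scheme, and the tracker.example.com hostname are
# assumptions, not a real product or service.

import json
import secrets
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

TRACKER_PORT = 8080  # port where the victim organization listens for callbacks


def make_decoy_document(path: str) -> str:
    """Write a decoy 'confidential' file that embeds a unique beacon URL.

    If the file is stolen and the embedded link is ever fetched, the request
    itself reveals the requester's IP address and user agent - basic forensic
    evidence - without running any code on the attacker's machine.
    """
    token = secrets.token_urlsafe(16)
    beacon_url = f"http://tracker.example.com:{TRACKER_PORT}/beacon/{token}"
    with open(path, "w") as f:
        f.write("CONFIDENTIAL - Q3 acquisition targets\n")
        f.write(f"Full dataset: {beacon_url}\n")  # bait link carrying the token
    return token


class BeaconHandler(BaseHTTPRequestHandler):
    """Logs any request to /beacon/<token> as a possible exfiltration event."""

    def do_GET(self):
        if self.path.startswith("/beacon/"):
            event = {
                "time": datetime.now(timezone.utc).isoformat(),
                "token": self.path.rsplit("/", 1)[-1],
                "source_ip": self.client_address[0],
                "user_agent": self.headers.get("User-Agent", ""),
            }
            print(json.dumps(event))  # in practice: write to an evidence log or SIEM
        self.send_response(204)  # reply with no content either way
        self.end_headers()


if __name__ == "__main__":
    token = make_decoy_document("decoy_acquisition_targets.txt")
    print(f"Planted decoy with token {token}; waiting for callbacks...")
    HTTPServer(("0.0.0.0", TRACKER_PORT), BeaconHandler).serve_forever()
```

A more aggressive beacon would embed code in the decoy itself that phones home with system details when the document is opened - precisely the kind of execution on someone else’s machine that, as discussed below, current law forbids.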

It is quite clear that cyber beacons should be treated differently than a ransomware attack; but U.S. law bans even the most benign counterhack, leaving private entities helpless even to collect preliminary evidence to track down hackers who have stolen or damaged sensitive data. The specific legislation in question is the Computer Fraud and Abuse Act (CFAA), passed in 1986, which makes it a federal crime to access a computer in any way without authorization. One former NSA programmer summarized: “If you’re executing code on someone else’s machine, that means you’re hacking back” under the CFAA. This sweeping law (passed at a time when fewer than 20 percent of Americans owned a computer) should be replaced with a more nuanced legal and policy framework that balances the severity and benefits of hacking back. Rather than a blanket prohibition, such a framework would tighten or loosen constraints on hack-back techniques based on their utility.

A Legal Framework for Hack-Back

High-utility counterhacking, especially attribution and information-encryption, should be legal in most instances. These techniques carry little to no risk of property damage, and although there is an invasion of privacy, it does not outweigh the obvious benefit of protecting information and pursuing criminals. As for the attacker’s privacy and property rights in his own computer, these have likely been forfeited in the commission of a crime, much as a man attacking you with a baseball bat forfeits any claim to the bat. To use a less extreme example, the shopkeeper’s privilege allows private entities to detain and search people suspected of shoplifting. The general idea is that criminals lose some (but certainly not all) protections in the commission of a crime. Of course, high-utility techniques still pose a risk of violating the privacy of innocent third parties whose systems were hijacked to carry out the cyberattack. However, as lawyers Stewart Baker and Michael Vatis point out, by definition these third parties’ privacy has already been violated: “What additional harm does the [third party] suffer if the victim gathers information on his already-compromised machine about the person who attacked them both?”

Medium-utility counterhacking should also be permitted, albeit under stricter conditions, given the more intrusive and destructive nature of these techniques. Indeed, this is where the vigilantism argument becomes more credible, because rather than merely collecting evidence to assist the authorities, the private entity is now imposing real costs on the attacker (and possibly on third parties). There is no doubt that this could lead to excesses, including disproportionate counterattacks or illegal spying on innocent parties (or, at least, on people who have not yet been proven guilty in a court of law). However, the potential for abuse does not warrant a wholesale prohibition on medium-utility counterhacking. Instead, there should simply be stricter constraints on this type of activity. For one, the law could impose a high standard of proof to justify a private hack-back. Perhaps counterhackers would need a preponderance of evidence justifying their response. Stricter still, perhaps the counterhacker would need to be correct in fact that the targeted computer was used in the attack, with no allowance for reasonable but mistaken targeting. Finally, the law could hold counterhackers liable for any damages to innocent third parties. Liability is considered an effective way to regulate firms’ behavior in other areas, and cyber would be no different.

Low-utility counterhacks should remain illegal. These techniques do not serve the legitimate purposes of stopping an attack or collecting evidence. Their only value is in punishing a criminal, which is a role that should be reserved for the state. Moreover, low-utility counterhacks are far too likely to inflict disproportionate damage on innocent third parties. In short, this category of hack-back is precisely the vigilante justice that critics fear.

Whatever the specific legal standard, the most important point is that the risks of vigilante justice are overstated. The legal system is perfectly capable of adjudicating between reasonable and unreasonable counterhacking, and of punishing excessive force. For example, in the physical world, tort law has long recognized that private entities are entitled to “rightful repossession” of their property. As one legal scholar explains, “an owner of personal property generally has the right to repossess, by force if necessary, a chattel that has been wrongfully taken or withheld.” Nevertheless, “in the pursuit and recovery...the owner must do no unnecessary damage and is responsible for any excess or abuse of his right.” Similarly, although criminal law allows individuals to use force to stop an ongoing crime, it also imposes strict penalties for misuse or abuse of this right: a botched citizen’s arrest can lead to criminal and civil charges, including false imprisonment, assault and battery, and wrongful death. Finally, it is worth mentioning that there are laws governing several non-governmental entities - such as security guards and bounty hunters - who exercise “quasi-governmental” authority, including the use of force. It is not hard to imagine specialized cybersecurity firms playing a similar role.

Legalizing hack-back would help protect the systems and networks that underpin American society. America’s critical infrastructure is largely privately owned and increasingly internet-connected. A cyberattack could target the electrical grid and cause power outages, as the attacks on Ukraine’s grid did in 2015. Malware like TRITON could disable safety systems at water treatment plants or chemical facilities. Herein lies the crux of the argument for hack-back: our vulnerabilities are so distributed, and the threats so numerous, that the government simply cannot protect every system or respond to every attack. Yet the vulnerabilities are also so interconnected that a single attack could cause massive, cascading effects impacting us all. Hack-back would allow private entities to disrupt and defeat these sorts of attacks. For example, an internet provider under attack could take down the attacker’s command-and-control network. A bank could “detonate” a logic bomb to destroy stolen credit card information before it is used. A chemical plant could disrupt the remote control of a trespassing drone.

Rather than tie the hands of private actors - who own and operate most of the vulnerable infrastructure and possess most of the cyber expertise - the U.S. should empower these entities to mount an aggressive defense of their networks. Establishing a flexible legal framework that balances the benefits and risks of counterhacking is an important step toward a more secure, resilient America.

 


About the Author: 

Nicholas Winstead is an alumnus of the CSINT Fellows program and a recent graduate of American University. Nick received his master’s degree in Foreign Policy and National Security from the School of International Service.


*THE VIEWS EXPRESSED HERE ARE STRICTLY THOSE OF THE AUTHOR AND DO NOT NECESSARILY REPRESENT THOSE OF THE CENTER OR ANY OTHER PERSON OR ENTITY AT AMERICAN UNIVERSITY.

 
