
Cyber Blog

Apr 17
First of its Kind U.S. Indictment of Iranian Hackers – the Future of Cyber Accountability? / Ido Kilovaty

Last week the U.S. Department of Justice announced the indictment of seven Iranian hackers for their roles in the 2011 hacking of U.S. financial institutions, including the New York Stock Exchange and Bank of America, as well as for unauthorized access to the Bowman Avenue Dam in 2013. The seven hackers worked for two Iranian cybersecurity companies – ITSEC and MERSAD. According to Attorney General Loretta Lynch, “A federal grand jury in Manhattan found that these seven individuals conspired together, and with others, to conduct a series of cyberattacks against civilian targets in the United States financial industry that, in all, cost victims tens of millions of dollars”. At the moment, there is no extradition agreement between the U.S. and Iran, so it remains to be seen how the indictment will play out when it comes to apprehension.

This announcement comes after the U.S. filed similar criminal charges against members of the Syrian Electronic Army (SEA), a hacking group that carried out cyber operations against media companies and government agencies, including the posting of a fake tweet on the Associated Press Twitter account claiming that an attack on the White House had injured President Obama. One of the members of the SEA is currently in custody in Germany.

These two indictments against hacking groups are the first of their kind in the U.S. effort to press charges against individuals engaging in cyber-attacks and other malicious activities in cyberspace. Though the U.S. already indicted five Chinese hackers for economic cyber espionage, the Iranian hackers’ indictment involves national security hacking charges, which are unprecedented. However, it is unclear whether the prosecution route will be effective in deterring future hackers. As the Associated Press noted – “It's hard to prove the strategy's effectiveness, or whether such indictments actually lead to a decrease in hacking attempts. It's also unclear whether any of the Iranian hackers will ever be apprehended. The five Chinese defendants indicted on similar charges in May 2014 have yet to appear in an American courtroom, leading to criticism that the cases make a publicity splash but have little practical impact”. The main question, therefore, is whether this symbolic step will prevent future cyber-attacks emanating from countries that are considered safe havens for transnational hackers.

However, even if the efforts to prosecute foreign-based hackers in the U.S. do not bear fruit, they could serve as a tactical measure to “name and shame” foreign governments over their support of those individuals and hacking groups. China, Russia, and Iran, among the most prominent players on the cyberspace battlefield, have no extradition agreements with the U.S., making it much more challenging to prosecute these individuals. Under fear of prosecution, these individuals might simply avoid traveling, making apprehension efforts somewhat futile.

Another advantage of indicting individuals who engage in cyber-crime or cyber-attacks is to demonstrate that anonymity in cyberspace can be unveiled. That goal, however, can also be achieved through simple diplomatic channels, rather than prosecution. The real impact of these indictments is yet to be seen, and the efficacy (or lack thereof) of these efforts might set the standard of state behavior vis-à-vis foreign national security hackers.

Apr 10
The Reasonable Algorithm / Karni Chagal

The news of the recent Google self-driving car accident raised, and not for the first time, unresolved issues of legal liability for damages caused by autonomous devices. Fortunately, that accident did not result in any injuries or significant property damage, but although self-driving cars are expected to reduce the rate of car accidents by up to 90%, there is no doubt that autonomous vehicles will eventually be involved in some fatal crashes.

Focusing, for the purposes of this post, only on the issue of civil liability – which individual is to be held liable in case of an accident: the manufacturer, programmer, dealer, or owner? This challenge, however, is not unique to self-driving cars. Similar questions are expected in the context of damages caused by other autonomous devices or algorithms: who is liable when a medical diagnostic device misdiagnoses a patient, or an online arbitration system reaches a ludicrous outcome?

Although devices and algorithms act as programmed (unless subject to some sort of malfunction), the more autonomous and complicated the devices are, the more we might view them as possessing a "discretion" of their own, in the sense that their ultimate choice under a specific scenario may go beyond what the developer anticipated. In fact, the relationship between such devices and their human developers or owners may be somewhat similar to that of parents and children, or even employers and employees, in the sense that humans can direct the device's choices to a large extent, but cannot necessarily fully control it under all scenarios.

Assuming we acknowledge the certain level of independence that sophisticated devices have or will have in the future, and hence may draw analogies between the "human-algorithm relationship" and other types of vicarious relationships, the next step is the threshold issue of whether the device's own actions are subject to liability.
If, for example, a physician who caused damage to a patient is found to have acted with reasonable care, neither the physician nor the hospital where she works will be held liable for the damages. By the same token, one could argue, if a medical algorithm (or a driverless vehicle) caused damages while acting "reasonably", there would be no sense in "going after" its human operators, developers or owners.

How does one determine whether an algorithm or device employed reasonable care? Should its actions and choices be compared to those of the reasonable person under similar circumstances? To those of a "reasonable algorithm"? And if so, to what standards shall we hold said "reasonable algorithm"? Would it suffice to show, as in many U.S. states in the field of medical malpractice, that the device adhered to common practice? (Such a requirement might be meaningless, assuming all devices will be programmed to meet the standard of care and, unlike human physicians, will not deviate from it under any circumstances other than malfunction.) How will the state’s interest in promoting technology and innovation affect the creation of reasonableness criteria for the potentially harmful actions of an autonomous device?

Algorithms and devices differ from humans in many relevant ways. On the one hand, their decisions are expected to be much better in many respects, since they can process and analyze unthinkable amounts of data almost instantaneously. They can also be free of biases, such as personal preferences or a tendency toward self-preservation, and of physical weaknesses, such as fatigue, stress or alcohol, which may cloud judgment. We can also be sure that, unlike humans, they will always complete the full assessment phase before making a decision rather than jumping to impulsive conclusions. On the other hand, whether or not the algorithm or device is "self-learning", it may be argued that it will lack the flexibility and creativity that often characterize humans and that are needed to deal with unexpected input or a changing reality. In addition, the algorithm or device might not fully understand human nuances that could affect the desired outcome – for instance, patients diagnosed by a robo-physician who raise real complaints, but whose actual reason for consultation is their loneliness and need for attention or, alternatively, their desire to skip days at work. It is assumed that a person might grasp these nuances while a machine will not.

How should we, as a society, address these technological developments when determining reasonableness of machine actions? This is but one of the many fascinating questions to think of when addressing legal liability of non-human decision makers.

Mar 15
Responding to Economic Cyber-Attacks with Countermeasures (Addendum to my recent publication at the Journal of Law and Cyber Warfare) / Ido Kilovaty

In my recently published article in the Journal of Law and Cyber Warfare, entitled “Rethinking the Prohibition on the Use of Force in Light of Economic Cyber Warfare: Towards a Broader Scope of Article 2(4) of the UN Charter”, I explored the question of how economic cyber-attacks should be classified under public international law, particularly within the use of force framework. In that article, I argued that cyber-attacks resulting in economic harm (as opposed to physical/kinetic harm) ought to be considered, under certain circumstances, as uses of force, and in their most severe forms, as armed attacks. The secondary-rules importance of that determination was not fully addressed within the scope of the article, and in this blog post I intend to demonstrate the importance of the classification advanced by my paper.

First, the labeling of a certain economic cyber-attack as a use of force, in violation of the well-entrenched norm of public international law formulated in Article 2(4) of the U.N. Charter, justifies the use of countermeasures – a “self-help” remedy available in response to an internationally wrongful act by one state against another. The ability to classify an economic cyber-attack as a “use of force” leaves victim states better off than if it were unclear which international law norms such an attack violates. Part of the difficulty in international cyberspace regulation is that it is often perplexing, and even impossible, to label and define different cyberspace operations, leaving the victim state helpless when a cyber-attack of a certain sort takes place. “Cyber vandalism”, the term used by President Obama following the Sony Hack, is an example of how states struggle to define the activities directed against them in cyberspace. The Sony Hack, in particular, would most likely not reach the threshold of an economic cyber-attack, but it illustrates the terminological difficulty nonetheless. Some would offer a counter-argument: labeling economic cyber-attacks as uses of force does not necessarily change international law, since such attacks would still violate norms such as sovereignty and non-intervention. The response to that argument is that the norm on the non-use of force offers a broader set of remedies under the countermeasures regime. Because countermeasures must comply with the proportionality principle – they must be proportionate to the initial violation of international law – a violation as grave as a use of force justifies correspondingly stronger countermeasures.

My second clarification is connected to my first point, but it is more instrumental and less remedial. The possibility of labeling an economic cyber-attack under the well-established and developed norms of the use of force framework gives the victim state an instrument to use in its diplomatic efforts vis-à-vis the territorial state. The mere fact that the use of force norm was violated does not, by itself, warrant a countermeasures response by the victim state, but it could be valuable if the victim state decides to pursue diplomatic channels or some other form of dispute resolution.

Third, the use of force framework helps enhance deterrence in cyberspace. If economic cyber-attacks are prohibited by international law, states will most likely refrain from using them against other states, unless, of course, it is in their interest and that interest overrides the deterrent force of the use of force framework. Additionally, if non-state actors in a state's territory decide to carry out economic cyber-attacks, that state will be under an obligation to put an end to these attacks as part of its duty to prevent transboundary harm to other states.

All in all, the classification of economic cyber-attacks as uses of force is not merely a theoretical exercise; it may also have significant practical implications, as demonstrated in this blog post. Naturally, there is still a long way to go before international law adapts to economic cyber-attacks by creating specific norms and principles to govern them, but at this point in time it is helpful to use contemporary international law to address these new phenomena.

Feb 20
Thoughts on Apple’s Refusal to Unlock the San Bernardino Gunman’s iPhone / Ido Kilovaty

The shooting spree in San Bernardino, California, which killed 14 people in December of last year, is back in the headlines. Magistrate Judge Sheri Pym of the U.S. District Court in Los Angeles issued an order requiring Apple to bypass the security function on the iPhone 5C that belonged to the shooter and, by doing so, assist the F.B.I. investigation by allowing it to gain access to the data on the device. Apple's CEO, following the order, announced that “The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals”. Apple claims that such a ‘backdoor’ would amount to a universal key allowing law enforcement authorities to break into any iPhone they wish. All in all, Apple’s stance is that it refuses to comply with the court’s order.

The controversy extended to the public, and views on whether Apple should comply with the order were split. Edward Snowden tweeted that “The FBI is creating a world where citizens rely on #Apple to defend their rights, rather than the other way around”, and certain lawmakers expressed their opinions on social media as well. While many defended Apple’s position of refusing to cooperate and undermine the privacy of its customers, Senator Tom Cotton (R-Arkansas) called Apple the “company of choice for terrorists, drug dealers, and sexual predators”. Senator Dianne Feinstein (D-California) announced that “it’s not unreasonable for Apple to provide technical assistance”.

Interestingly, only at the beginning of this month, the Berkman Center for Internet and Society at Harvard University released a report on the privacy vs. security debate in the context of encryption, entitled “Don’t Panic – Making Progress on the ‘Going Dark’ Debate”. In the report, various experts argue that technological change does not necessarily mean that we are “going dark” in the surveillance sense – “There are and will always be pockets of dimness and some dark spots – communications channels resistant to surveillance – but this does not mean we are completely “going dark.” Some areas are more illuminated now than in the past and others are brightening”. The report certainly recognizes the difficulty of balancing privacy and security, but it also argues that while providing access to encrypted communications might be helpful and even essential for national security purposes in the short term, in the long term that access could increase vulnerability to espionage. The report also concludes that market forces and technological developments will catch up and assist law enforcement authorities in their investigations and information-gathering procedures.

Bruce Schneier made a particularly strong argument in his addendum to the report, claiming that “Adding backdoors will only exacerbate the risks. As technologists, we can’t build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer’s hard drive, so can other governments. So can criminals. So can terrorists”. He continues by providing the example of a backdoor installed by a cellular provider in Greece for the Greek government's use, which was eventually abused by other actors. In the end, it is a matter of trade-offs between short-term and long-term goals.

Technological developments pose a host of challenges. The main question is whether the law can adapt to these developments and, more importantly, whether judges and policymakers can understand that their decisions might have negative long-term implications that outweigh short-term goals, such as solving crimes and assisting law enforcement authorities. For a judge dealing with a terrorism investigation, it is extremely difficult to foresee the broader implications that his decision might have. Naturally, the incentive to provide a prompt and effective remedy is high, but the harm that such a remedy could cause might undermine law enforcement authorities in future investigations, as well as threaten national security if more backdoors are forced upon private companies. For now, it seems that law enforcement authorities should keep up with technological advancements and seek new ways to investigate and gather information from alternative sources.

Jan 09
Legal Blackout – Thoughts on Due Diligence in Cyberspace and the Legality of Cyber-Espionage in the Aftermath of the Ukrainian Power Outage Cyber-Attack / Ido Kilovaty

Private-sector experts now believe that the power outage in Ukraine on December 23, 2015, was caused by malware attributable to the Russian hacking group ‘Sandworm’. This incident is unprecedented: it is the first time that a cyber-attack launched from beyond international borders has caused a power outage. Experts had long worried about cyber-attacks causing power outages or significant harm to critical infrastructure, but only now has that scenario materialized.

This cyber incident brings back difficult questions: whether there is a duty to ensure that one’s territory is not used to cause harm to another state, and how to protect critical infrastructure from cyber-attacks, particularly when such infrastructure provides the most basic, essential services, such as electricity.

As to the first question, international law is currently indeterminate when it comes to “due diligence in cyberspace”. Some argue that such an obligation has existed since the Corfu Channel judgment of the International Court of Justice, while others argue that even if such an obligation exists, its precise contours are highly debated and its application to cyberspace is questionable. The Tallinn Manual, for example, provides that “A State shall not knowingly allow the cyber infrastructure located in its territory or under its exclusive governmental control to be used for acts that adversely and unlawfully affect other States”; however, even the editor of the Tallinn Manual, Prof. Michael Schmitt, admits that it was not possible to reach a “consensus on the exact parameters of the obligation”. Even the Draft Articles on the “Prevention of Transboundary Harm from Hazardous Activities” do not resolve the doubts raised by Schmitt, as they provide only that “The State … shall take all appropriate measures to prevent significant transboundary harm”. What “significant” or “harm” means in this context is a conundrum in itself, and the Articles remain in draft form. The gradual development of an obligation of due diligence in cyberspace is nonetheless welcome, since states would be required to prevent cyber-attacks, as well as react to and manage ongoing cyber-attacks against other nation-states. At this point, however, the obligation is more of an ideal than an actual, enforceable and respected international law obligation.

The second question is even more complicated, as it pertains to the duty of states to protect their own citizens, as well as the property within their territory. States are only now beginning to realize that critical infrastructure that is highly dependent upon computer systems and networks is vulnerable and could be abused by malevolent actors. The main problem with the realization that states are required to protect their own infrastructure is that many states, while heavily dependent upon cyber infrastructure, are simply incapable of protecting it in the face of sophisticated cyber-attacks and hacking groups.

This is only the fifth time, according to experts, that a cyber-attack has caused kinetic, physical effects, and the frequency of such attacks is expected to increase. The cyber-attack against Ukraine’s power grid should not go unaddressed; it should be used to advance international law and rules of conduct for cyberspace, with an emphasis on the protection of critical infrastructure.

Dec 04
The Double-Edged Sword of Vehicle Software Tinkering under the Digital Millennium Copyright Act Exemption / Ido Kilovaty

On October 28, 2015, the Acting Librarian of Congress granted an exemption under the Digital Millennium Copyright Act (DMCA) that allows the public to access and modify vehicle software for “good faith security research” and “diagnosis, repair or lawful modification”. The exemption comes largely in response to the Volkswagen scandal, in which software code embedded in certain car models was used to falsify emissions figures, and reads as follows –

Computer programs that are contained in and control the functioning of a motorized land vehicle such as a personal automobile, commercial motor vehicle or mechanized agricultural vehicle, except for computer programs primarily designed for the control of telematics or entertainment systems for such vehicle, when circumvention is a necessary step undertaken by the authorized owner of the vehicle to allow the diagnosis, repair or lawful modification of a vehicle function; and where such circumvention does not constitute a violation of applicable law, including without limitation regulations promulgated by the Department of Transportation or the Environmental Protection Agency…

Many cybersecurity experts applauded the exemption, which is expected to come into force in October 2016. However, although such an exemption is needed in many instances to fix bugs and glitches in the code, it is also likely to create new vulnerabilities, due to human error and lack of expertise.

The first challenge is efficiency – that is, who is capable of modifying the software in the most precise and prompt fashion? Consider that automobile manufacturers spend enormous resources tackling software bugs: only recently Ford recalled 433,000 cars due to a software bug, and, more strikingly, Chrysler recalled 1.4 million cars following a vulnerability that allowed a third party to take over dashboard functions, as well as steering and braking. That is not to say that car owners cannot fix vulnerabilities themselves, but these manufacturers can research and implement bug fixes for hundreds of thousands of cars in a relatively efficient manner. The main point is that manufacturers are accountable for the software and its subsequent fixes and updates, while car owners who take it upon themselves to address these vulnerabilities do not always possess the skills or knowledge required to fully tackle them.

The second challenge is oversight. In other words, who makes sure that a modification of the vehicle software is secure and does not create further vulnerabilities? As mentioned earlier, car owners can introduce vulnerabilities if they err or lack the knowledge required to work with vehicle software. Car manufacturers are required to be transparent when it comes to safety, and they constantly try to improve the integrity of their vehicle software. The main concern here, however, is that car owners might create vulnerabilities without even knowing it, and there will be no higher authority to supervise and review the modified code for integrity. Although the exemption provides that circumvention of the vehicle software cannot violate Department of Transportation regulations, it is still possible that some car owners will make unsafe software modifications by mistake, and the difficulty with such a scenario is that the vulnerability created by the circumvention may only be revealed once an accident occurs due to a software malfunction.

What needs to be highlighted is that the DMCA exemption can in many instances actually contribute to the safety and integrity of vehicle software, as there are many researchers who constantly monitor the code for vulnerabilities. Those researchers may both offer a fix and inform the car manufacturer of the vulnerability that needs fixing. Additionally, it is important to remember that the DMCA exemption is an exemption from copyright claims, meaning that a car owner who tinkers with the vehicle software code cannot be sued under copyright law. However, do we need new safety regulations to ensure the security and integrity of vehicle software? According to a new Senate bill – yes, we do.

The Security and Privacy in Your Car Act (SPY Car Act) was introduced this July with the aim of setting IT security standards for connected-car manufacturers. The Act instructs the National Highway Traffic Safety Administration and the Federal Trade Commission to set security standards for vehicle software in order to enhance safety and privacy. It goes even further, requiring that connected vehicles include technology that will “detect, report and stop hacking attempts in real time”. In addition, the Act aims to address car manufacturers who use their car software to collect data about consumers.

The SPY Car Act neatly complements the DMCA exemption: car manufacturers must follow binding, clear guidelines designed to secure the integrity of the software, while the DMCA exemption allows car owners to take matters into their own hands and tinker with the code to fix vulnerabilities. If car manufacturers spend more time and resources developing more secure software to comply with the IT security and privacy guidelines, there may be less need for car owners to tinker with the code and possibly create more vulnerabilities. More accountability and transparency on the car manufacturers’ part will mitigate the challenges arising from the DMCA exemption.

Nov 18
Anonymous to Launch “Massive Cyber Attacks” Against ISIS in the Aftermath of the Paris Terrorist Attacks / Ido Kilovaty

The hacker group Anonymous declared that it will wage war on ISIS in response to the Paris attacks of last Friday night. The announcement was released as a YouTube video, in the usual setting Anonymous uses for most of its video announcements. Part of the message read "Expect massive cyber attacks. War is declared. Get prepared." Similar threats were made by Anonymous in August of last year, when it declared "full-scale cyber war" against ISIS, and again following the Charlie Hebdo attacks in Paris last January. So far, Anonymous' operation has taken down as many as 5,500 ISIS-related Twitter accounts.

Anonymous' declaration of "cyber war" on ISIS highlights the inability of States to effectively leverage the cyber-dependency of armed groups. ISIS is a non-state actor that operates heavily in cyberspace, whether for recruitment, communication or the deployment of hostile cyber operations. Only this August, a U.S. drone strike killed Junaid Hussain, an ISIS hacker believed to have been involved in multiple hacking activities against the U.S. Central Command. This demonstrates that States do take the cyber threats posed by armed groups seriously, to the point that they are willing to lethally target the individuals involved. That is a critical point about ISIS' modus operandi – while it does not (yet) possess sophisticated or unconventional weapons, it is highly reliant upon cyberspace to carry out its attacks and recruit new members.

It is already known that ISIS recruits new members and inspires others by distributing propaganda online. This method is essential for ISIS because it tries to recruit local sympathizers to carry out attacks in their countries of residence. ISIS' ability to carry out these attacks, and its members' ability to communicate with one another, depends greatly on cyberspace and computer systems. States have yet to grasp the importance of these methods to ISIS' operations, and as such have not acted to counter them with cyber measures (as opposed to kinetic ones). As ISIS grows increasingly sophisticated in this regard, it is time for States to rethink their counter-terrorism strategies so that they align with technological trends, particularly when it comes to ISIS.

An example of how technological trends complicate counter-terrorism strategies is ISIS' reported use of the PlayStation 4 gaming console for communication and coordination during the Paris attacks – a very unorthodox communication method used to avoid detection, which experts describe as "more difficult to keep track of than WhatsApp". Such methods are not unknown to the NSA and CIA, which have admitted to monitoring "World of Warcraft" and "Second Life" for potential terrorist and criminal activity within those online games. In fact, the PlayStation 4 platform allows communication without typing or saying a word: in Call of Duty (as in most other shooter games), a player can shoot at a wall to create a disappearing message spelled out in bullet marks. This reveals the difficulty of monitoring these communications in real time, but it also reveals a vulnerability that States can take advantage of.

The problem with the mindset of States is that kinetic military operations are the default response to terrorist attacks. For example, French warplanes bombed ISIS headquarters in Syria following the Paris attacks, yet there were no public cyber operations to deny and disrupt communications and data in ISIS-affiliated computer systems. France could respond effectively to ISIS by carrying out a hybrid operation – using kinetic military measures while also wreaking havoc on ISIS computer systems and networks. In that context, Anonymous understands something that States have overlooked – cyber-attacks can be just as devastating as physical military attacks, if not more so, and this is true in the case of ISIS. States have the resources to obtain the knowledge and capacity to engage in cyber countermeasures against terrorist organizations, and it is about time they got involved.

Nov 10
Is the Cybersecurity Information Sharing Act Another Surveillance Mechanism? / Sharon Herman

On October 27, 2015, the U.S. Senate voted 74 to 21 to pass a version of the Cybersecurity Information Sharing Act (“CISA” or the “Act”) consistent with legislation passed in the House earlier this year, thus ensuring that a combined version of the two will become law. CISA was drafted to allow companies to monitor information systems and to share and receive cybersecurity threat data (or, as CISA defines it, “cyber threat indicators”) with and from the Department of Homeland Security (DHS). DHS will then be able to disclose and distribute the information to other agencies, like the FBI and NSA, or to other companies, which can potentially use it to defend the target company and other companies facing similar attacks. No doubt the massive hacks on health insurer Anthem, Sony, and the Office of Personnel Management earlier this year swayed votes in favor of the Act.

Proponents of the Act claim that it respects privacy laws and is necessary given the many cyber-attacks witnessed over just the past few months. Its critics contend that it is just another surveillance bill that will weaken our civil liberties and privacy protections. They further argue that the Act is unnecessary, given the privacy-compliant sharing of information already taking place, and that it would not have prevented the recent cyber-attacks had it been in force at the time.

Is CISA just another surveillance bill? On October 20, 2015, Senate Select Committee on Intelligence (SSCI) Chairman Richard Burr (R-NC) and Vice Chairman Dianne Feinstein (D-Calif.) released a fact sheet on CISA arguing that CISA is not a surveillance bill, because: (1) all sharing is completely voluntary; (2) the U.S. government cannot monitor any personal records, private networks or computers; (3) it requires private companies and the government to review information before sharing, to remove irrelevant personally identifiable information; (4) it does not allow the government to shut down websites; (5) it does not permit the government to retain or use the information for anything other than cybersecurity purposes, identifying a cybersecurity threat, protecting individuals from death or serious harm, protecting minors, or investigating a limited set of cyber-crime offenses; and (6) it provides rigorous oversight and requires regular reports to ensure the protection of privacy. The fact sheet does not engage with the basic definition of surveillance, “close watch kept over someone or something”; it does not clearly state that CISA denies the government a means of obtaining information over which it can keep such a close watch. These points, in short, do not explain why this is not just another surveillance bill.
More to the point, if an Act allows private entities to provide the government with information so that it can keep a close watch over such information for the purpose of (amongst others set forth in Section 105(d)(5)(A)) preventing, investigating, or prosecuting offenses relating to: “(1) an imminent threat of death, serious bodily harm, or serious economic harm, including a terrorist act or a use of a weapon of mass destruction; or (2) crimes involving serious violent felonies, fraud and identity theft, espionage and censorship, or trade secrets,” does this not constitute de facto surveillance?

Burr and Feinstein reiterate throughout the fact sheet that information sharing is voluntary, but as Amie Stepanovich points out in a recent Wired article, “information that the government would be allowed to share with participating companies under the bill may provide so much of a competitive advantage — the advantage of being ‘in the know’ — that companies will be forced to participate simply to keep up with their participating competitors. Worse, not to participate might actually harm their corporate interests and put their customers at risk.” Given Stepanovich’s pertinent observation, we can reasonably assume that private entities will participate. The question is, in what manner? Section 104(d)(2)(A) of the Act states that an entity sharing a cyber threat indicator shall, prior to such sharing, “review the cyber threat indicator [and] assess whether such cyber threat indicator contains any information that the entity knows at the time of sharing to be personal information or information that identifies a specific person not directly related to a cybersecurity threat and remove such information”. This provision is ambiguous. Is there an independent obligation upon the entity to review? Or does the obligation apply only if the entity knows that the information includes personal information? If the latter, is the obligation to review inapplicable where the entity does not know that personal information is included? And if so, what is the point of the provision? Was it simply poorly drafted?
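The review-and-remove obligation just quoted can be illustrated with a minimal sketch. The field names and scrubbing rule below are hypothetical, invented for illustration; they are not drawn from the Act or from any real sharing platform. They simply show what "reviewing a cyber threat indicator and removing personal information not directly related to the threat" might look like in practice:

```python
# Hypothetical sketch of the Section 104(d)(2)(A) review step: before a
# "cyber threat indicator" is shared, fields known to contain personal
# information unrelated to the threat are removed. All field names invented.

PERSONAL_FIELDS = {"employee_name", "employee_email", "home_address"}

def scrub_indicator(indicator):
    """Return a copy of the indicator with known personal fields removed."""
    return {k: v for k, v in indicator.items() if k not in PERSONAL_FIELDS}

raw_indicator = {
    "malware_hash": "9f86d081884c7d65...",   # related to the threat
    "attacker_ip": "203.0.113.7",            # related to the threat
    "employee_email": "jdoe@example.com",    # personal, unrelated to the threat
}

shared_indicator = scrub_indicator(raw_indicator)
# "employee_email" is removed; the threat-related fields are kept.
```

Note that even this toy version must pick a side in the ambiguity the provision raises: it always reviews before sharing, rather than reviewing only where personal information is already known to be present.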

Critics read Section 104(d)(2) together with Section 106(b) (no cause of action shall lie or be maintained in any court against any entity, and any such action shall be promptly dismissed, for the sharing or receipt of cyber threat indicators) to mean that, in sharing the information, private entities are not obligated to scrub personal information out of the disclosures. That is, a private entity will not be liable for disclosing personal information it did not know was included within the information shared, unless it acts with willful misconduct or gross negligence. Whilst this could be seen as a protective caveat, the uncertainty surrounding the circumstances in which there would actually be an obligation to review is compounded by similar uncertainty as to what would constitute willful misconduct or gross negligence. This, together with the focus on real-time sharing, would discourage private entities from reviewing the information to check whether personal information is included, thus weakening our civil liberties and privacy protections.

Technologists contend that even without such legislation, they are already sharing information (with each other and with the federal government) that helps protect their systems from future cyber-attacks while complying with their obligations under federal privacy law. Specifically, “When a system is attached (sic), the compromise will leave a trail, and investigators can collect these bread crumbs. Some of that data empowers other system operators to check and see if they, too, have been attacked, and also to guard against being similarly attacked in the future.” If this type of information sharing is already happening, (i) why is this broad legislation necessary, and (ii) why were amendments that tried to define and clarify what information can actually be shared, and under what circumstances, not passed? And what makes CISA any more likely to have protected Anthem, Sony and OPM from cyber-attacks?

Let’s hope that private entities take to heart Burr and Feinstein’s assurance that sharing is voluntary, not obligatory, and continue to share information with each other and with the government whilst self-regulating and complying with privacy laws, without seeking to absolve themselves of liability by relying on CISA. Let’s hope, too, that the Act will spur more industry efforts to coordinate voluntary, privacy-compliant sharing of cyber threat indicators that actually informs companies of vulnerabilities and helps them defend themselves from attack, whilst preserving our right to privacy.

Aug 13
“Sophisticated Cyberattack” against the Pentagon Demonstrates the Biggest Gaps in Inter-State Cyberspace Activities Regulation / Ido Kilovaty

On August 6, 2015, U.S. officials announced that a sophisticated cyber-attack had targeted and affected the Pentagon's Joint Staff unclassified e-mail system.[i] The attack is estimated to have begun on July 25 and to have affected 4,000 e-mail accounts, both military and civilian; it employed a "sophisticated" automated system that obtained massive amounts of data within a minute and rapidly distributed it to thousands of internet users.[ii] As a result, the Pentagon was forced to shut down the affected e-mail system,[iii] reopening it on August 7, nearly two weeks after the attack began.[iv]

According to these officials, there is no certainty as to the identity of the perpetrator, but Russian hackers are suspected. The suspicion rests on the scope of the attack, which suggests that "it was clearly the work of a state actor".[v] It is not yet clear, however, whether the attack was carried out by the Russian government or by individuals. The attribution here is quite questionable, since no conclusive evidence was offered to support Russian involvement. Instead, the U.S. points to the scope of the attack, which managed to surprise the Pentagon, along with similar previous cyber-attacks. Such a hastily made accusation could be detrimental to the goal of deterring cyber intruders. Attribution is best established on solid evidential grounds, and it is unclear whether such grounds exist in this case.

This cyber-attack comes three months after a similar, allegedly Russian, cyber-attack targeted the unclassified e-mail system of the White House, granting the hackers access to sensitive information such as the President's schedule and other non-public data.[vi] In both cases, the perpetrators used "spear phishing", i.e. an e-mail containing malware tailored to a specific target. According to experts, spear phishing is not a particularly sophisticated attack method, contradicting what U.S. officials claim.[vii] In addition, last month the Office of Personnel Management (OPM) hack was announced, in which allegedly China-based hackers collected sensitive information on federal employees, such as social security numbers, e-mail addresses, job assignments and more.[viii] In response, the Obama administration decided to retaliate (or "hack back") against China, as "the usual practices for dealing with traditional espionage cases did not apply",[ix] though the administration made it clear that the U.S. does not want to escalate the cyber conflict with China.[x]

It will be interesting to see how the retaliation strategy applies to the Pentagon cyber-attack. As cyber-attacks become more disruptive, even in the absence of visible kinetic effects, governments are looking for more ways to respond to and deter them. What is clear is that there is a major gap as to the permissible "cyber arsenal" available to states in response to a cyber-attack. Even if the U.S. decides to employ the "hack back" approach, the precise contours of such an approach are unknown, and it requires more polishing. However, the right step has been taken with regard to publicity, with a senior administration official saying that "one of the conclusions we've reached is that we need to be a bit more public about our responses, and one reason is deterrence".[xi] In my July contribution to the Cyber Blog, I argued that the Administration should respond publicly to the OPM hack, and that any delay in doing so would not work in the favor of the U.S.; this, of course, is also true of the Pentagon cyber-attack.

Unfortunately, the recent cyber-attacks have not revealed a consistent U.S. cyber strategy, particularly one tailored to the broad spectrum of cyber-attacks by the various actors on the scene. If spear phishing was indeed the entry point for the Pentagon cyber-attack, some institutional cybersecurity training also seems necessary to avoid similar incidents in the future. In this case, preventive measures might prove more helpful, and more of a deterrent, than on-the-spot retaliation. As an example, Lockheed Martin has its own "red team", which periodically tries to trick employees (e.g. with spear phishing). An employee who falls for the "trap" undergoes comprehensive cybersecurity training.[xii] Both the OPM hack and the Pentagon cyber-attack resulted from spear phishing that managed to trick its specific target. This calls for the enactment of comprehensive preventive measures, as well as concretely tailored retaliation procedures.
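The red-team practice described above can be sketched as a simple drill loop: send harmless decoy phishing messages, record who clicks, and enroll those employees in training. The function and names below are invented for illustration and do not describe Lockheed Martin's actual program:

```python
# Hypothetical sketch of an internal phishing drill: employees who fall for
# a decoy message are flagged for cybersecurity training. Names are invented.

def run_phishing_drill(employees, clicked):
    """Return the set of employees to enroll in training.

    employees -- all staff included in the drill
    clicked   -- those who fell for the decoy message
    """
    return {e for e in employees if e in clicked}

staff = {"alice", "bob", "carol"}
fell_for_decoy = {"bob"}

needs_training = run_phishing_drill(staff, fell_for_decoy)
# Only the employee who clicked the decoy is flagged for training.
```

The point of such a drill is preventive rather than retaliatory: it measures and reduces susceptibility to the very technique that compromised both OPM and the Pentagon system.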

[i] Courtney Kube, Russia Hacks Pentagon, NBC News (Aug 6, 2015),

[ii] Courtney Kube, Jim Miklaszewski, Russian Cyber Attack Targets Pentagon Email Systems: Officials, NBC News (Aug 7, 2015),

[iii] Paul Shinkman, Reported Russian Cyber Attack Shuts Down Pentagon Network, U.S. News (Aug 6, 2015),

[iv] Jim Miklaszewski, Courtney Kube, Pentagon Email Systems Go Back Online After Cyber Attack, NBC News (Aug 7, 2015),

[v] Kube, supra note i.

[vi] Evan Perez, How the U.S. thinks Russia hacked the White House, CNN (Apr. 8, 2015),

[vii] Farzan Hussain, Spear Phishing Attack at Pentagon's Network, Breached 4000 Military Accounts, Hackread (Aug 7, 2015),

[viii] Brian Bennett & Richard Serrano, Chinese Hackers Sought Information to Blackmail U.S. Government Workers, Officials Believe, Los Angeles Times (Jun. 5, 2015),

[ix] David Sanger, U.S. Decides to Retaliate Against China's Hacking, NY Times (Jul 31, 2015),

[x] Id.

[xi] Id.

[xii] Peter Singer & Allan Friedman, Cybersecurity and Cyberwar 66 (2014).

Aug 07
The Cybersecurity Information Sharing Act (CISA) / Eldar Haber

Over the past few months, we have heard that the U.S. Congress is discussing a cybersecurity bill of some sort. Presumably, that is a good thing. There are a lot of cyber-related issues that need to be properly addressed and solved. But many of these bills are also dangerous. Not only do they fail to effectively solve cybersecurity problems, they further endanger the liberties of individuals. One of the latest cybersecurity bills, currently making its way through Congress, is the "Cybersecurity Information Sharing Act of 2015" (CISA).[i]

As evident from its title, CISA is mostly about information sharing. It establishes an information-sharing alliance between companies and the NSA. No warrants are needed. In exchange, the participating companies receive broad immunity both to spy on users and even to act offensively against "threats." Sound familiar? Well, CISA is hardly new proposed legislation. It builds upon earlier bills, namely the "Cyber Intelligence Sharing and Protection Act" (CISPA),[ii] which never passed into law, and it joins many other bills still pending in Congress.[iii] One of these, which previously appeared in this blog, is the "Protecting Cyber Networks Act" (PCNA).[iv] While CISA and PCNA share many similarities, they are not identical. The PCNA, in its current form, does not generally allow an alliance between companies and the Department of Defense. In other words, the National Security Agency (NSA) is presumably out of the picture under the PCNA.

But perhaps the main problems with CISA, in its current form, lie elsewhere. First and foremost, it grants the NSA direct, warrantless access to a wide variety of personal information. This information sharing is almost unlimited. Why? Because CISA authorizes companies to share any information, as long as there are "cyber threat indicators." What are those? No idea. CISA does not clearly define them, meaning that almost anything could fall under this category. If Edward Snowden tried to better protect civilians with his revelations, he may have achieved the opposite result. In other words, civil rights and liberties are still at stake, probably more than ever before.

But beyond that, CISA raises another important issue, which Congress should further clarify: potential "hack backs." The broad immunity for companies applies not only to information sharing, which could be troubling enough, but also to some forms of protection. Are these truly "hack backs", as many argue?[v] I think not. CISA grants private entities, for cybersecurity purposes, the right to "operate a countermeasure" that applies to an information system of such private entity; an information system of another entity, upon that entity's written consent; and an information system of a Federal entity, upon written consent. It also clarifies that such authorization does not include the operation of any countermeasure that is designed or deployed in a manner that will intentionally destroy, disable, or substantially harm an information system not belonging to the private entity. Therefore, these "countermeasures" could only be taken as a defensive measure on the company's own property. In other words, it seems that CISA only allows companies to deploy self-protective measures for intrusion prevention, e.g., firewalls. If this is true, is there any need for specific legislation allowing such protection? Aren't companies already allowed to deploy firewalls and antivirus programs on their networks? Sure they are. What I think it means is that companies will be allowed to act with aggression against attackers while the attackers are "within" their networks: some sort of self-defense provision that gives companies more assurance that their actions are legal. So it seems that companies will not be allowed to launch countermeasures against potentially innocent users. That is a good thing because, as I noted in a previous blogpost, hack backs can be dangerous. If my interpretation is correct, then, at the very least, this provision makes sense, but it still requires further Congressional clarification.
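To make the defensive reading concrete, here is a minimal sketch of a "countermeasure" confined to an entity's own information system: an inbound-connection filter that drops traffic from sources the company has flagged, without touching any outside system. The class and names are illustrative only, not taken from CISA or from any real product:

```python
# Illustrative sketch of a purely defensive countermeasure that operates
# only on the entity's own system: an inbound filter that drops traffic
# from flagged sources. Nothing here reaches out to an attacker's machine.

class InboundFilter:
    def __init__(self):
        self.blocked_sources = set()

    def block(self, source_ip):
        """Add an attacking source to the local blocklist."""
        self.blocked_sources.add(source_ip)

    def allow(self, source_ip):
        """Decide whether an inbound connection may proceed."""
        return source_ip not in self.blocked_sources

fw = InboundFilter()
fw.block("198.51.100.23")   # a source observed probing the network
```

An offensive "hack back" would instead send traffic to, or tamper with, the attacker's own system; nothing in the sketch does so, mirroring the statute's carve-out for countermeasures that destroy, disable, or harm systems not belonging to the entity.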

One final note. As I usually point out in cyber-related posts: protecting any nation and any individual from cyber-attacks is highly important. I do not doubt that for even one second. But that does not mean that all is fair in "cyber-war." Congress seems to be responding too broadly, without truly considering the impact of such legislation on our everyday lives. Its decisions today might truly shape our future, both in the digital and the kinetic worlds. As many journalists and scholars argue, surveillance does not equal security.[vi] Congress seems to be stuck in 1984, and that is why criticism is crucial, perhaps now more than ever before.[vii]


[i] Cybersecurity Information Sharing Act of 2015, S. 754, 114th Cong. (2015).

[ii] Cyber Intelligence Sharing and Protection Act, H.R. 3523, 112th Cong. (2013).

[iii] See, e.g., Cyber Threat Sharing Act of 2015, S. 456, 114th Cong. (2015); National Cybersecurity Protection Advancement Act, H.R. 1731, 114th Cong. (2015); Cyber Intelligence Sharing and Protection Act, H.R. 234, 114th Cong. (2015).

[iv] The Protecting Cyber Networks Act of 2015, H.R. 1560, 114th Cong. (2015).

[v] See, e.g., Stop the Cybersecurity Information Sharing Act, EFF, available at (last visited Aug. 1, 2015).

[vi] See, e.g., Patrick G. Eddington & Sascha Meinrath, Opinion: Why the information sharing bill is anti-cybersecurity, CS Monitor (July 22, 2015),

[vii] There are currently various initiatives to stop CISA. See, e.g., supra note v.


 About this blog


The Cyber Forum is a joint project of the Haifa Center for Law & Technology (HCLT) and the Minerva Center for the Rule of Law under Extreme Conditions​ at the University of Haifa, dedicated to the study of cyber regulation. ​The main goal of the Forum is to promote research activities in the fields of Cyberspace, Extreme Conditions and Law and Technology.


 About the HCLT


The Haifa Center for Law and Technology (HCLT) is a renowned interdisciplinary research institute. It is the first and the only center in Israel dedicated to the study of the interconnection between law and technology.​ HCLT further seeks to promote dialogue between academics, innovators, policymakers and businesses, in order to establish the scientific foundation for legislation to address new technologies. The center conducts workshops and conferences, and promotes research activities by faculty and students, judges, lawyers, jurists, decision makers and the general public.​

 About the Minerva Center

The Minerva Center for the Rule of Law under Extreme Conditions at the University of Haifa Faculty of Law and the Geography and Environmental Studies Department is an international venue and transnational forum - together with the University of Hamburg - for study, research, training, education and publication. Its mission is to focus on the rule of law, broadly defined to include policy and regulation, under three main types of extreme conditions: natural disasters; national security challenges; and socioeconomic crises.
