The Leaky Leviathan


David Pozen’s Harvard Law Review paper [1], “The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information,” explores the issues of leaks, plants, and their combinations, “pleaks,” of confidential government information. Such disclosures represent an art form for U.S. (and, I’m sure, other nations’) politicians. This paper will be required reading for any student of constitutional law or participant in government. For technologists such as readers of this blog, this paper begins the wrestling match around the Snowden leaks and the battle over the NSA’s previously secret activities to intercept foreign and domestic Internet and telephone communications.

Pozen’s paper analyzes the reality of how government works. It is easy to make something “secret” in one form or another, but government has created informal ways to incrementally relax secrecy. We’ve all heard newscasters attribute a story to an “unnamed source.” When the story supports the government position in a measured way, it is a plant. When the government feels too much pain and the unnamed source was not controlled by the government, it is a leak, and top executives in the government squeal like little piglets at the wrong done. In reality, Pozen writes, tongue in cheek, plants need to be “nourished” by leaks. Otherwise, if all leaks were suppressed, plants would lose their believability and would be ineffective as a government tool. He points out that, historically, whistle-blowers and sources of leaks are rarely thrown in jail. They are, however, often shunned, losing valuable access to government executives.

The Creaky Leviathan by Sagar [2] is sometimes cited as a rebuttal, but I found it hugely supportive:

Let me be clear. Leaky Leviathan contains no hasty judgments. Pozen is admirably careful and measured in his praise of permissive enforcement; his broader objective is to show that the varied causes and consequences associated with leaks mean that the prevailing “system of information control defies simple normative assessment.” This conclusion I fully endorse. But the question remains: even if it advances our comprehension, does “middle-range theory” improve our ability to judge the value or worth of the prevailing “system of information control”? For the reasons outlined above, I think the answer must be no, since the overall consequences of permissive enforcement are obscure and hard to ascertain. As far as this “disorderly” system is concerned, the most we can do from a normative perspective, I think, is become clearer about whether particular outcomes it produces are more or less acceptable.

Whistle-blowing protection when it comes to national security issues is a dicey topic. The issue here is that huge losses of life and treasure are at risk. Counterbalancing such risk is that the national security infrastructure is huge: the NSA, the Pentagon and all of the DoD, the CIA, the NSC, and of course all of the Executive Branch (and some of Congress) represent the government side, and the major contractors Boeing, L3, Halliburton, McDonnell Douglas, Raytheon, etc. are also in the mix. A system of secrecy often makes it difficult to generate healthy debate; moreover, these institutions are “wired” to worry about and exaggerate threats. Secrecy in these institutions makes it difficult even for insiders to gain enough information to offer opposing views. Senator McCarthy, J. Edgar Hoover, Nixon, and others required outside forces to neutralize their negative actions. However real the communist threat was, McCarthy and Hoover violated the rights of many U.S. citizens. There were, in the end, no weapons of mass destruction in Iraq, and we went to war over this fabrication. The Maginot Line in France was an utter failure. Examples abound. The whistle-blowing that has been legitimized in the private sector is not well legitimized in the government sector. Current law limits disclosure to Congress and does not cover civilian contractors (thus Snowden is not protected; Manning was somewhat, but still got 35 years, possibly due more to the choice of WikiLeaks as an outlet). The Leaky Leviathan screams out for a legal structure to fairly protect national security whistle-blowing. Benkler’s paper [3], “A Public Accountability Defense for National Security Leakers and Whistleblowers,” takes a solid crack at this.

Benkler starts with an excellent in-depth review of how we got from 9/11 to the mess that Manning and Snowden disclosed. His “Public Accountability Defense” starts with the observation that most leaks are not prosecuted, because if they were, the non-prosecuted leaks would appear to be sanctioned and would lose their credibility to shape public opinion. He focuses on “accountability leaks,” which are those that expose substantial instances of illegality or gross incompetence or error in important matters of national security. These are rare. One set occurred at the confluence of the Vietnam and Cold Wars with the anti-war and civil rights movements. The second deals with extreme post-9/11 tactics and strategies. Such leaks have played a significant role in undermining threats that the national security establishment has posed to the “constitutional order of the United States” – in short, a very big deal. For many of us technologists, the issues we would consider leaking would wrack our conscience and destroy our moral compass. We would be going out of our element in dealing with the failure of mechanisms inside the national security system to create a leak. For example, the CIA’s program of torture, rendition, and secret off-shore prisons somehow leaked without an Ellsberg, a Manning, or a Snowden. (The technology that enabled Ellsberg to release the Pentagon Papers was a massive, highly automated Xerox machine.) Benkler observes, “The greater the incongruity between what the national security system has developed and what public opinion is willing to accept, the greater the national security establishment’s need to prevent the public from becoming informed. The prosecutorial deviation from past practices is best explained as an expression of the mounting urgency felt inside the national security system to prevent public exposure. The defense I propose is intended to reverse that prosecutorial deviation.” Needed is a defense, or at least a sentencing-mitigation platform, that requires only a belief that the disclosure would expose substantial “violation of law or systemic error, incompetence, or malfeasance.” This defense is based on the leaker serving a public good. It is not individual-rights based. It is belief based and does not depend on later-proven illegality of what was disclosed.

This is not a naive proposal, and many subtleties are discussed, for example, to whom the leak is made, how it is made, when it is made, what is redacted, how public the leak mechanism is, etc.

Benkler reviews several historical leaks, going back to 1942, when Morton Seligman leaked decoded Navy messages to a reporter. Publication would have disclosed that the Japanese codes had been broken, harming the war effort. Since no government wrongdoing was disclosed, the accountability defense would not apply. The long historical review ends with a discussion of the Manning and Snowden cases. Manning’s 35-year sentence is obscenely excessive, even though the criteria for an accountability defense are mixed. One would hope that, in the presence of an accountability defense, at least a more reasonable sentence would have been handed down. A detailed analysis of the Snowden case is given; with the single exception of leaks on the NSA’s Tailored Access Operations (TAO), which target specific computers, the defense applies. One interesting issue is that the legal defense should be structured so that the prosecution cannot “cherry-pick” the weakest disclosure and prosecute that, ignoring the public value of the other disclosures.

The institution through which the leaks are made should also be protected by the defense. In some sense, “free press” does this, but this should be clarified in national defense cases.

Finally, “punishment by process” is discussed. The government can ruin someone in many ways. Huge legal expenses, long drawn-out trials, loss of access and jobs, etc. While protection from punishment by process is desirable, how to do this needs to be addressed. I would think that technologists fear this the most.

I strongly recommend these three thought-provoking articles.

[1] David Pozen, “The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information,” Harvard Law Review, Vol. 127, No. 2, December 2013.

[2] Rahul Sagar, “Creaky Leviathan: A Comment on David Pozen’s Leaky Leviathan,” Harvard Law Review Forum, Vol. 127, No. 2, December 2013. [A mostly favorable review of [1]]

[3] Yochai Benkler, “A Public Accountability Defense for National Security Leakers and Whistleblowers,” Harvard Law & Policy Review, Vol. 8, No. 2, July 2014. [A well-reasoned and historically justified proposal for the legal structure of a whistle-blower defense that is of particular interest to technologists.]


Operation Cleaver


The book Hacking Exposed has long been on my bookshelf and is a favorite of mine. Its author, Stuart McClure, well known in the security industry and founder and CEO of the security firm Cylance, wrote a passionate introduction to his company’s report on Iranian cyber-attack technology and cyber-exploits, dubbed Operation Cleaver. After reading this introduction, I decided to dive deeply into the report.

The report calls Iran the “new China” relative to cyber-attack technology. Well, no, but it does force the many targeted governments to put Iran on their cyber-watch-lists. Iran not only has some sophisticated cyber-attack technology, it appears to be even more brazen than China or North Korea about going after infrastructure around the world. The report claims targets have been attacked in: military, oil and gas, energy and utilities, transportation, airlines, airports, hospitals, telecommunications, technology, education, aerospace, the Defense Industrial Base (DIB), chemical companies, and governments.

The report reminds us that Iran was damaged by Stuxnet (2009-10), Duqu (2009-11), and Flame (2012). It reasonably speculates that these attacks motivate Iran to fund the development of advanced cyber-attack technology. It also points out that Iran has a relevant technology exchange agreement with North Korea.

The report lists as possible retaliation the 2011 certificate compromises of Comodo and DigiNotar, as well as the 2012 Shamoon campaign against RasGas and Saudi Aramco that impacted over 30,000 computers. In late 2012 and early 2013, further Iranian backlash consisted of DDoS attacks on US banks. In 2014, the espionage operation Saffron Rose (attacking the US defense industry; the FireEye report is excellent) and operation Newscaster (which used social media to collect email credentials of US and Israeli journalists and military and diplomatic personnel) were attributed to Iran.

Operation Cleaver seems to be staffed by a team of known and new players, some of whose members are described in the report. While Cleaver uses existing code, its new code seems to date from 2012, and its earliest attacks start in roughly 2013.

The hacking techniques of operation Cleaver are discussed in depth by the report. This depth and associated attribution to Iran were enough to convince me that operation Cleaver, and by association other Iranian cyber-attack teams, are serious threats.

I strongly recommend to all in the security industry: please read this report in detail. It is here. After the report was published, the FBI issued one of its “confidential flash” reports warning certain companies of potential Iranian attacks. [I haven’t seen its text.] McClure commented on the FBI “flash” that perhaps the potential for Iranian cyber-attacks is larger than Cylance’s initial research indicated. I would agree: there is no evidence indicating that the Cleaver team has subsumed Iran’s Ajax Security Team, the Iranian Cyber Army, and others. Finally, no one seems to understand the subtleties of “Iranian supported,” “Iranian encouraged,” “Iranian tolerated,” etc. for these groups, and the Cylance report provides no clues regarding Operation Cleaver’s government relationship.

The Cuckoo’s Egg – Revisited


The other day I picked up a used copy of Cliff Stoll’s book The Cuckoo’s Egg about his search for a hacker, ultimately identified as a German, Markus Hess. Hess was using Stoll’s Lawrence Berkeley Labs computer as a base to infiltrate various government computers to steal (and sell to Russia’s KGB) government documents.

In this age of “Advanced Persistent Threats,” this 1986-88 threat was hardly “advanced.” In fact, Hess’ basic break-in approach consisted of trying simple passwords for known system and vendor accounts. Today, this is still an effective break-in approach! People are too lazy to create complex but easy-to-remember passwords.
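As a sketch, the break-in style described above amounts to walking a list of vendor-default credentials past a login prompt. The account/password pairs and the mock login function below are illustrative stand-ins, not taken from the book:

```python
# Vendor-default credential pairs of the kind that shipped enabled on
# 1980s systems (illustrative examples, not a real default list)
DEFAULTS = [("system", "manager"), ("field", "service"), ("guest", "guest")]

def mock_login(user, password):
    # Stand-in for a remote login attempt; here one account was
    # left at its factory default.
    return (user, password) == ("field", "service")

# Try every default pair and collect the ones that work
hits = [(u, p) for u, p in DEFAULTS if mock_login(u, p)]
print(hits)  # [('field', 'service')]
```

The point is how little machinery this requires: no exploit, just patience and a short list of well-known defaults.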

Hess also used known bugs in system programs to escalate his privileges. Today, I get weekly CERT notifications of such bugs. There are hundreds of them announced annually. Nothing new (or advanced) here!

What did impress me about this story was that Hess was amazingly persistent. His efforts spanned many months. He was careful – always checking to see if some system person could be watching, and if so, quickly logging off. When a cracked password was changed, for example, Hess quickly moved to another system and kept his attack going. “Persistent” threats aren’t new.

Hess copied password files to his system, presumably for off-line brute force (albeit simple) dictionary attacks to “guess” passwords. Some of these attacks were successful.
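The offline attack is just as simple to sketch: hash each dictionary word and compare against the stolen hash file. SHA-256 is used below purely for illustration (the systems of that era used Unix crypt(3)-style hashes), and all usernames and passwords are invented:

```python
import hashlib

def h(pw):
    # Stand-in one-way hash; the real systems used crypt(3), not SHA-256
    return hashlib.sha256(pw.encode()).hexdigest()

# A copied password file: username -> password hash (invented examples)
stolen = {"root": h("manager"), "guest": h("guest"), "alice": h("k7#qZ!x9")}

# A small dictionary of defaults and common words
wordlist = ["password", "guest", "manager", "field", "service"]

def dictionary_attack(hashes, words):
    cracked = {}
    for user, digest in hashes.items():
        for w in words:
            if h(w) == digest:  # hash each guess and compare
                cracked[user] = w
                break
    return cracked

print(dictionary_attack(stolen, wordlist))
# {'root': 'manager', 'guest': 'guest'} -- the strong password survives
```

Only the accounts with dictionary passwords fall; this is why password files were (and are) such attractive loot to copy off-site.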

Also impressive was that Stoll set up an automated warning system to track Hess’ intrusions. It was not visible to Hess, but it automatically recorded his keystrokes on Stoll’s computer. Its design made it impossible for an intruder to delete or modify its records. It was an early threat detection system, primitive compared to today’s detection systems, but it was instrumental to the discovery of Hess and Hess’ cohorts. Stoll also manually kept a log notebook, which I would still recommend in the analysis of any attack. Such a notebook would include all aspects of an investigation, including interactions with network vendors, government agencies, and interested parties. Stoll’s astronomy training included “If you don’t record it, it didn’t happen…” – a good message for today’s network forensic engineers.
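The tamper-evidence property can be sketched with a hash chain. To be clear, this is my construction, not Stoll’s: his records were “unmodifiable” physically, via printers wired to the monitored lines. In the software version, each entry commits to the previous one, so altering or deleting any record invalidates everything after it:

```python
import hashlib
import json

class ChainedLog:
    """Append-only log; each entry's digest covers the previous digest."""
    def __init__(self):
        self.entries = []        # list of (record_json, digest)
        self.prev = "0" * 64     # genesis value

    def append(self, data):
        record = json.dumps({"data": data, "prev": self.prev})
        digest = hashlib.sha256(record.encode()).hexdigest()
        self.entries.append((record, digest))
        self.prev = digest

    def verify(self):
        prev = "0" * 64
        for record, digest in self.entries:
            if json.loads(record)["prev"] != prev:
                return False  # chain link broken (entry removed/reordered)
            if hashlib.sha256(record.encode()).hexdigest() != digest:
                return False  # entry contents altered
            prev = digest
        return True

log = ChainedLog()
log.append("login: sventek")       # 'sventek' was one of Hess' stolen accounts
log.append("cd /usr/spool/mail")
print(log.verify())                # True

# Tampering with an earlier entry is detected:
record, digest = log.entries[0]
log.entries[0] = (record.replace("sventek", "nobody"), digest)
print(log.verify())                # False
```

An intruder who gains root can still delete the whole log, of course, which is why Stoll’s off-machine printers (and his paper notebook) remain the stronger idea.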

Another feature of Stoll’s detection system was the creation of what we today call a “honeypot.” His was rather simple: just some fake, but apparently interesting, documents that required system privileges to read. Stoll left his computer open so that the attacker could be tracked, but some government computers were forced to clamp down immediately. I’ve seen companies today (Google comes to mind) where systems are left vulnerable in order to track intruders, so long as the damage can be contained and does not affect customers. Leaving a system and a honeypot open for the analysis of a threat is a good technique.
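A minimal version of the idea is just bait data plus a tripwire. The file names and contents below are invented (Stoll’s actual bait was a fabricated “SDINET” project directory), but the mechanism is the same: serve the bait so the intruder lingers, and record every access:

```python
import datetime

# Bait "documents" that look privileged and interesting (invented contents)
BAIT = {
    "SDINET-budget.txt": "FY88 network funding allocations ...",
    "SDINET-contacts.txt": "Project office contact list ...",
}

alerts = []  # every read of a bait file is recorded here

def read_file(path):
    if path in BAIT:
        # Tripwire: log the access with a timestamp, then serve the
        # bait so the intruder stays connected long enough to trace.
        alerts.append((datetime.datetime.now().isoformat(), path))
        return BAIT[path]
    raise FileNotFoundError(path)

read_file("SDINET-budget.txt")
print(len(alerts))  # 1
```

In Stoll’s case the bait even included a mailed reply form, which ultimately drew a physical letter from a spy; the lesson is that good bait produces signal the attacker volunteers himself.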

I found it hilarious that in the course of watching Hess attack various government and government contractor systems, Stoll was told by the owners of these systems, “It’s impossible to break into our system; we run a secure site.” I’m reminded of all the retail vendor breaches occurring these days, as well as the Stuxnet-like attacks.

Finally, Stoll had trouble getting help from the FBI, the CIA, and the NSA. The FBI has certainly beefed up its computer expertise since 1988, but it will still refuse to help anyone deal with an annoying hacker who does not cause serious financial damage. My recommendation here is to prepare in advance an argument for why a threat can potentially cost your company a lot of money. Homeland Security has a bevy of agencies to combat cybercrime; learn about them here.


  1. Stoll published a May 1988 ACM article, “Stalking the Wily Hacker,” that outlines chasing Hess. His book is better and is also an easy and quick read.
  2. Clifford Stoll, The Cuckoo’s Egg, Doubleday, 1989.
  3. TaoSecurity’s Richard Bejtlich’s excellent talk on chasing Hess (has good photos).
  4. “The KGB, the Computer, and Me,” a video that tells Stoll’s story.



Cybercrime is usually measured in financial loss due to a computer-based attack on a company’s computer system. Reputational loss can be huge but is often only measured in lost sales. Guy Carpenter, a well-respected reinsurance broker, recently cited a McAfee/CSIS 2013 study (see below) estimating the annual global cybercrime loss at $445 billion. I’ve seen estimates of the annual global revenue of the computer security industry at $20-50 billion. The point I want to make here is that these two estimates, which are probably only accurate to a factor of 2 or 3, do not compute for me. If your expected loss is roughly 9 to 22 times what you are spending on security, are you spending enough? This, and the fact that cybercrime losses are increasing, argues that the computer security industry is going to grow like crazy. On the other hand, the likelihood that a company that is under-spending on computer security gets clobbered by cybercrime is high, and such a company obviously needs a lot of insurance, which I guess is Guy Carpenter’s message. The cybercrime insurance industry should also skyrocket. (Some liability and theft policies might exclude cybercrime or add claim limits, forcing customers to insure against cybercrime separately.)
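The arithmetic behind that mismatch is worth making explicit (all figures are the rough estimates quoted above, accurate at best to a factor of 2 or 3):

```python
# Annual global cybercrime loss vs. security-industry revenue
global_loss = 445e9                # McAfee/CSIS 2013 annual loss estimate
industry_revenue = (20e9, 50e9)    # rough security-industry revenue range

# Ratio of expected loss to security spend, at each end of the range
ratios = [global_loss / r for r in industry_revenue]
print([round(x, 1) for x in ratios])  # [22.2, 8.9]
```

Even at the generous end of the revenue range, the world appears to be losing roughly nine times what it spends defending itself, which is the imbalance driving both the security and cyber-insurance markets.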

I should point out that the FBI’s Internet Crime Complaint Center received complaints in 2013 with an adjusted dollar loss of $781,841,611. This US number is far smaller than the global number discussed above. See the FBI report listed below. The two reports have vastly different methodologies and thus different numbers.

There is much to learn from the Target breach in the fall of 2013, and this will be the subject of a subsequent post. Points worthy of mention here are that Target’s insurance was woefully inadequate, and while it spent a huge amount on FireEye computer security products, it didn’t have the security infrastructure to use those products effectively. In fact, Target didn’t even have a Chief Security Officer. Target and its banks’ total loss to date is in the hundreds of millions of dollars.

By searching the web, one can find many annual reports on cybercrime. Here are a few that I’ve enjoyed:

HP Cyber Risk Report 2013

Symantec Internet Threat Report 2013

Ponemon 2013 Cost of Cyber Crime Study

Cisco 2013 Annual Security Report

Websense 2014 Security Predictions

McAfee Labs 2014 Threat Predictions

FBI 2013 Internet Crime Report

McAfee/CSIS 2013 Estimating the Global Cost of Cybercrime

My final thought in this note on cybercrime is that the perpetrators of cybercrime are becoming very sophisticated, and the attacks are subtle, take place over a long period of time, and use evasive techniques to avoid detection. There is a market where a former “kiddie hacker” can buy nefarious software, and this happens, but much more sophisticated criminals attacked Target. Experts can argue that the Target breach wasn’t that sophisticated, but it probably wasn’t a kiddie hacker. In fact, with an estimated 50 million credit cards stolen (actually the contents of the magnetic stripe, with which a card can be counterfeited), each valued at over $100 per card, one sees that cybercrime is “big business.” This crime pales in comparison with the theft of intellectual property by nation-states, but again, that’s another post.
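A quick sanity check on the “big business” claim, using the figures quoted above:

```python
# Rough value of the Target card haul, per the estimates in the text
cards_stolen = 50_000_000   # estimated cards taken in the breach
value_per_card = 100        # dollars, "over $100 per card"

haul = cards_stolen * value_per_card
print(f"${haul / 1e9:.0f}+ billion")  # $5+ billion
```

Five billion dollars of stolen goods from a single retail breach puts the $445 billion global estimate in perspective.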