Comments on NIST Interagency Report 7977, Draft 2

2015/03/24

To: crypto-review@nist.gov

Cc: crypto@nist.gov

Re: NIST Interagency Report NISTIR 7977 “NIST Cryptographic Standards and Guidelines Development Process” January 2015 Draft.

Date: March 24, 2015

The January 2015 Draft is a big improvement. In addition, the Summary of Public Comments on NISTIR 7977 (Feb 2014 (First) Draft) was quite helpful for comparing how the current draft addresses comments on the first draft. The authors of both new documents are to be commended for their work.

General review comments:

A. Review comments on the first draft of NISTIR 7977 reflect a distrust of the NSA, and by extension of NIST processes. Suppose China sends NIST an email that says, “Hey, we’ve been studying elliptic curve E over field Z/p with base point Q. It has some neat properties, and China recommends that NIST adopt E for its elliptic curve encryption.” Should NIST consider China’s suggestion? A good test of the processes in NISTIR 7977 would be to answer “yes”, even though we suspect that China knows a backdoor to encryption that uses E. Now, receiving such a suggestion from the NSA should be little different. Even though NIST is required to consult with the NSA, most independent security consultants, post-Snowden, would not trust a curve suggested by the NSA any more than one suggested by China. I certainly would not. Therefore we look for processes in NISTIR 7977 that politely, but firmly, treat suggestions for tools and algorithms from the NSA with great suspicion. NIST should draw expertise and knowledge from the NSA, but should not blindly accept its tools and algorithms. NIST processes for the analysis and acceptance of any suggestion, be it from the NSA, from China, or from any other source, should be equally stringent. In particular, cryptographic technology should not be standardized without significant consensus from a wide swath of independent cryptographers. The current draft of NISTIR 7977 does not place enough emphasis on public analysis and consensus for acceptance.

B. Of course, NIST could standardize, say, an encryption method, even one from a private and independent source, and the NSA, China’s PLA Unit 61398, the UK’s GCHQ, or another nation-state cryptography group could know how to crack this encryption method, remaining silent during and after the standardization process. One would hope that NIST, through its own efforts and its financial support of academic and other cryptography research, would facilitate the discovery of the weakness and would quickly retire the standard. Such post-standardization life-cycle efforts by NIST also need to be part of NISTIR 7977 [line 727].

C. Now if NIST were to publish a proof that a tool or algorithm received from China, the NSA, or another source in fact had no back doors and was totally secure, then a consensus on standardization might well be easily achieved. If I believed such a proof, I might recommend the proven technology to my clients, but I would probably wait until a huge number of non-government cryptographers also believed the proof and were also recommending this technology. NISTIR 7977 says, roughly, that NIST will “pursue” the finding and use of such proofs [line 55]. I’d be happy if NIST worked with the NSA and other agencies on such proofs and would recommend such efforts be part of NISTIR 7977.

D. NIST publishes security papers at a prodigious rate, so fast that reviews are deemed inadequate. In light of post-Snowden caution around NIST processes, people naturally ask if these poorly reviewed papers can be trusted. It isn’t going to help if NIST says, “It’s ok, the NSA has reviewed it…” Look, not only does the current draft of NISTIR 7977 fail to convince that future NIST papers will receive good independent reviews, there is no indication that past NIST papers will retroactively receive good reviews. This is a very sad state of affairs, but it is fixable.

Some more specific review comments:

  1. Clarity of the NIST Mission: To develop strong cryptographic standards and guidelines for meeting U.S. federal agency non-national security and commerce needs. This mission should be parsed: to develop strong cryptographic standards and guidelines for meeting (1) U.S. federal agency non-national security needs and (2) commerce needs. My point is that the needs of commerce should be treated by NIST as equal to the needs of any federal agency. [line 202 Balance]. For example, federal agencies may well be happy with NSA algorithms, but general commerce may not be.
  2. I do not agree with the NIST Response (page 7 of Summary) to Technical Merit comments that NIST should give priority to non-national security federal information systems. NIST should always make commerce needs equally important. Such a priority statement doesn’t seem to be in NISTIR 7977 explicitly, but there are several statements about NIST being legally required to give such a preference when so ordered by a government entity.
  3. NIST’s statement that it will “never knowingly misrepresent or conceal security properties” [Summary page 3; line 215 Integrity] reminds me of Barry Bonds’s statement that he “never knowingly took growth steroids” when his hat/head size at the end of his career was three sizes larger than when he was a rookie. I would prefer a more proactive statement such as “NIST will make every reasonable effort to ensure that military, intelligence and law enforcement agencies, by their suggestions, review comments, or contributions, do not compromise any security tool or algorithm recommended by NIST.” For NIST standards to regain the confidence of the security and general commerce communities, NIST processes should convincingly ensure, by public NIST actions, that its tools and algorithms do not compromise the privacy or the integrity of any commercial or private message being protected by NIST standards.
  4. FISMA requires that NIST consult with certain federal agencies, including the NSA, to avoid duplication of effort and to maintain synergy of federal information protection efforts, but NIST can never in the future blindly accept a recommendation from any public or private agency. What is important is that NIST actively regain and maintain its process integrity via the new NISTIR 7977. The current draft falls short.
  5. NIST should consider resolving the conundrum of needing NIST output frequently and needing adequate public reviews of such output by the creation of additional outside paid review boards and conformance testing bodies. Such a review board should be established to review annually the entire Cryptographic Technology Group. [lines 316 and 326]
  6. Minutes of the monthly NIST/NSA meetings should be published. [line 377]
  7. Independent review boards should have the power to reject a proposed standard, say if NIST could not convince the board that the NSA or another agency has not compromised the standard. [page 4; lines 47, 403, and 464]
  8. The NISTIR 7977 Development Process itself should undergo regular review and updates at, say, an annual frequency.

Requested Comments [line 125]:

Each NIST question is quoted below, followed by my comment.

Do the expanded and revised principles state appropriate drivers and conditions for NIST’s efforts related to cryptographic standards and guidelines?

Yes, but if the word “appropriate” were replaced by “adequate”, then No. Neither the integrity of NIST processes in the face of NSA influence, nor the issue of adequate review, is satisfactorily addressed. [A, C, D]

Do the revised processes for engaging the cryptographic community provide the necessary inclusivity, transparency and balance to develop strong, trustworthy standards? Are they worded clearly and appropriately? Are there other processes that NIST should consider?

No. “Trustworthy” standards need public confidence that neither the NSA nor any other agency has added backdoors or other weaknesses to its contributions, or knows of any. Wording isn’t the issue. Different, new processes are necessary to separate NIST from the influence of the NSA and related agencies. Post-standardization research efforts should be funded as part of every standard’s life cycle [B].

Do these processes include appropriate mechanisms to ensure that proposed standards and guidelines are reviewed thoroughly and that the views of interested parties are provided to and considered by NIST? Are there other mechanisms NIST should consider?

No. Cf. A, C, 2, 3, 4, and 7 above. Regarding 7, if NIST won’t vest veto power in independent reviewers, such experts will tend not to participate. Lack of review resources also seems to be a problem. Cf. 5 above.

Are there other channels or mechanisms that NIST should consider in order to communicate most effectively with its stakeholders?

Yes. More paid outside reviewers including an annual review of the Cryptographic Technology Group. Cf. D and 5 above.

Respectfully submitted,

Gayn B. Winters, Ph.D.

Technology Consultant


SSO – Single Sign On

2015/03/14

Early time-sharing systems provided a security model for the underlying file system that offered many services with a single log-in. Access to any branch of the file system was determined by its “ownership” via the familiar Read, Write, Execute flags for Owner, Group, World.

It wasn’t long before Owner, Group, World proved too restrictive, and Roles/Groups were introduced: any user could belong to multiple Roles/Groups, and access was determined by the least restrictive flag among the Groups to which that user belonged.
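The two access models above can be sketched in a few lines of Python. This is an illustrative toy, not any real operating system's API; all names are hypothetical.

```python
# Illustrative sketch of the two file-access models described above.
# All function and variable names are hypothetical.

OWNER, GROUP, WORLD = "owner", "group", "world"

def classic_access(user, file_owner, file_group, user_groups, perms):
    """Owner/Group/World model: exactly one permission set applies."""
    if user == file_owner:
        return perms[OWNER]
    if file_group in user_groups:
        return perms[GROUP]
    return perms[WORLD]

def multi_group_access(user_groups, group_perms, world_perms):
    """Roles/Groups model: the least restrictive flags win, i.e. the
    union of the flags of every group the user belongs to."""
    allowed = set(world_perms)
    for g in user_groups:
        allowed |= set(group_perms.get(g, ""))
    return "".join(sorted(allowed))
```

For example, a user in groups dev (with "r") and ops (with "rw") effectively gets "rw" under the second model, since the least restrictive flags of the two groups combine.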

Clusters of very similar machine types and operating systems, often with a high speed interconnect between the machines and their devices permitted single sign-on to the systems in the cluster. Several system vendors provided proprietary clusters.

Microsoft Windows Domains pushed the cluster idea out to the LAN, where multiple LAN vendors had already pioneered single log-on to the LAN. Microsoft Windows primary and secondary Domain Controllers on the LAN synchronized user identity data to provide single log-on to the domain for every user. We used Domains in our office when we still had a modem connection to the Internet. With time, however, as the Internet provided reasonable-speed access to application/service servers around the world, such applications proliferated. There seems to be no end to the creativity of application developers on the web. But now users had to provide separate log-on authentication data for each application or service. In the small this was manageable, but it wasn’t long before what NIST has called “password fatigue” set in. Internet users, wanting to access multiple services, started to

  • write down their (service or web site) usernames and passwords in documents, spreadsheets, and even on yellow Post-its stuck to their monitors;
  • re-use usernames and passwords;
  • use overly simple passwords;
  • cheat with the same username (often an email address) and a simple password across multiple applications and services;
  • let their applications remember usernames and passwords, often in world-readable cookies on the workstation;
  • lose their usernames and passwords and ask system administrators (or specially privileged software) to re-issue them.

Aside from being an expensive administrative mess, using multiple usernames and passwords was and still is decidedly insecure. Wouldn’t it be better if a user could sign on to a single service, which in turn could provide automated sign on to application/service servers? Well, yes and no. If such an “identity server” were compromised, then all of its user clients could be totally compromised. On the other hand, great care would probably be given to the construction of such a server, to the secure use of connections to it (e.g. via the latest SSL/TLS), and to the encryption of identity data held within it. With some care, this “IDaaS” could also support mobile devices, business partners (e.g. those in a supply chain), and even the Internet of Things.

The above diagram shows an Identity Provider, IdP, which services multiple application service providers, and multiple clients. This diagram implies that the multiple users belong to the same enterprise with a common Name Server and a common IdP. While it would make for a messier diagram, users from multiple enterprises that need to share applications could also be supported.

Provisioning. The users request application provisioning, either as an initial sign-up or as the first time the application is used. The IdP not only provisions the user, but also learns the protocols by which the user (and IdP) can communicate with the application. This includes learning the type of tokens the application will accept for authentication.

Such complexity tends to bloat IdP implementations, and different schemes for lightweight (or “lighter weight”) identity servers have been proposed that take advantage of existing name servers, identity managers, etc. Some vendors split their implementation into multiple products. For example, some vendors define bridge products that translate one protocol or token format into another. History suggests that this complexity will pass as standards converge and server speeds increase, making full functionality in a single IdP product fast and scalable. Such optimism aside, there are at least a dozen vendors of Single Sign On identity providers today that address this complexity to varying degrees. Each product comes with its own system integration problems that an enterprise must weigh as it selects a Single Sign On vendor.

When a vendor collapses the IdP and the Name Server into a single product, the result can be interesting. When this product is placed in the cloud, it can be even more interesting. Some security companies have such a product, as does Microsoft with its Azure Active Directory.

I see a lot of “sign on with SocialSite” buttons on various vendor sites. The superficial idea is to let your SocialSite be an IdP for you. Pick your favorite half dozen social sites, and one is probably listed. There is a privacy issue here: signing on with SocialSite gives the vendor access to all the information on your social site. This may be OK, but then again, you may not want that vendor or its customers grabbing information off your site or, worse, posting information to your site. I, for one, never log in using a social site.

My next IAM post is here.

Alice

2015/03/02

A friend of mine, The Patent King, pointed out to me that the recent court decisions on patents are going to change what software can be patented. This is both a forward and a backward statement. In fact, all of the court cases are backward looking cases, and the Patent Office in its consideration of future patents will be forward looking. These new considerations are collectively called “Alice” primarily after:

  • Alice: Alice Corp. Pty. Ltd. v. CLS Bank Int’l (2014)

but quite a few other court cases come into play. The references below, all of which I found enlightening, cite such cases.

The technology issue is: What software is patentable? The two-step answer starts simply enough. Step 1: The claim must be directed to a process, machine, manufacture, or composition of matter. This is not new. Typically software patents are directed to processes or to machines, and this post will focus on these.

New is Step 2: You are almost out of luck if your claim is directed to a law of nature, a natural phenomenon, or an abstract idea; however, Alice provides some wiggle room for dealing with these “judicial exceptions.” Your claim must identify the exception and must state explicitly how your invention, as a whole, amounts to significantly more than the exception.

Of course the trick is to satisfy “significantly more”. This is similar to the Potter Stewart test for hard core pornography, “I know it when I see it.” As technologists interested in an issued or a future patent, we must work with our patent attorneys to review as many similar cases as we can and make our arguments accordingly.

The rest of this post considers some interesting exceptions mostly of type “abstract ideas”. These include mathematical formulas, mitigating risk (hedging), using advertising as currency, processing information through a clearing house, authentication, organizing information, formulas for updating alarm limits, comparing new and stored information to identify options for action, etc. The “et cetera” means there is no end to the types of abstract ideas.

Returning to the Alice case itself, the patent was about a computer system that acted as an intermediary to maintain and adjust account balances to satisfy business obligations and settlement risk. Since this is a long standing commercial practice, and the patent cites this abstract idea, it is a judicial exception. However, viewing the claim as a whole, it failed to add significantly to the abstract idea. In other words, just crunching the numbers on a computer system is not patentable.

The Ultramercial patent 7,346,545 (the “545 patent”) provided an interesting case. The patent concerned an eleven step process whereby a consumer receives copyrighted material in exchange for viewing an advertisement. Ultramercial sued Hulu, YouTube, and WildTangent for patent infringement. This case bounced around the courts, but after Alice, it was determined that each of the eleven steps as a whole merely implemented the abstract idea of using ads for currency and did not add significantly more to this abstract concept. The 545 patent was ultimately declared invalid.

The case Bilski v. Kappos (2010) concerned Bilski’s patent on hedging to mitigate settlement risk. This patent was deemed too broad and covered well-known practices in a comprehensive way. Fundamentally, one cannot patent an invention that covers an entire abstract and well-known idea.

Mayo Collaborative Services v. Prometheus Labs., Inc. (2012) provides an example where an action (raising or lowering the amount of a drug administered) was taken based on a blood test (for metabolites). The action would be normal for any well informed doctor. This case actually falls under the law of nature exception, but the principle applies elsewhere. If all your software does is automate what a trained practitioner does normally, then it is not patentable.

Ancora Technologies, Inc. v. Apple, Inc. is interesting and is not yet resolved by the Supreme Court. Ancora’s invention was to put authentication software in the flash reserved for the BIOS. This would make it more difficult for a hacker to get around the authentication check. Ancora sued Apple for infringement of their patent 6,411,941 (the “941 patent”). If it is accepted that authentication checks are abstract ideas, then is putting such a check in the BIOS flash significantly more than other implementations of this abstract idea? If putting such a check on a hard disk is not patentable, then why should putting such a check in the BIOS flash be patentable? Is the method of putting the check in the BIOS flash and not screwing up the BIOS a patentable significant extension of the abstract idea? Apple has appealed to the Supreme Court.

There are some interesting ramifications of Alice to the cloud, data analytics, and cyber-security worlds. Look for future posts on these topics.

Recommended Reading:

Go Ask Alice – Delightful paper by Berkeley law professor Robert Merges

Patent Eligibility in the Wake of Alice – Nice analysis by Berkowitz and Schaffner

Summary of Ancora v. Apple – by IP firm Sughrue Mion

Apple appeals Ancora Ruling – News flash from Law360

USPTO 2014 Interim Alice Training – Very good slide-set tutorial

JSON – JavaScript Object Notation

2015/03/01

Oddly, JSON is not quite a subset of JavaScript; the reason is a couple of characters (the Unicode line and paragraph separators) that may appear unescaped in JSON strings but not in JavaScript string literals. Cf. RFC 7159. JSON is a language-independent data format. A formal definition and parsing libraries for various programming languages can be found on JSON.org. Briefly, JSON’s data types are:

  • Number – a signed decimal number with optional fractional part and optional exponent E part. JSON numbers are typically implemented as double precision floating point numbers.
  • String – zero or more Unicode characters bracketed by double quotes. A backslash is the escape character.
  • Boolean – true or false
  • Array – ordered list of zero or more values of any JSON type, delimited by square brackets, and separated by commas.
  • Null – an empty value null.
  • Object – an unordered set of comma-separated name:value pairs delimited by braces. The names are Strings, and it is recommended that they be unique in order to implement associative arrays.

JSON values are: number, string, array, object, true, false, and null. Whitespace characters are: space, tab, line feed and carriage return. Whitespace can be used around and between values and punctuation.
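As a concrete illustration of the types listed above, here is how one standard-library parser (Python's json module, used here purely as an example) maps each JSON type; other languages' libraries make analogous choices.

```python
import json

# A document exercising each of the six JSON data types.
doc = """
{
  "number": 3.14,
  "string": "hello",
  "boolean": true,
  "array": [1, 2, 3],
  "null": null,
  "object": {"nested": "value"}
}
"""

data = json.loads(doc)
# Number -> float (or int), String -> str, Boolean -> bool,
# Array -> list, Null -> None, Object -> dict
```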

Recent identity management standards (but not SAML) use JSON to encode

  • JWT tokens
  • JWK keys
  • JWE encryption
  • JWA algorithms
  • JWS signatures

Subtle points: JSON doesn’t distinguish various implementations of Number, e.g. support for signed zeros, underflow and overflow, integers, etc. There is no support for the JavaScript data types Date, Error, Regular Expression, Function, and undefined (other than null). Even though properly escaped JSON values are JavaScript values, the use of eval() is not recommended. Use JSON.parse() instead of eval().
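The JSON.parse()-over-eval() advice is JavaScript-specific, but the underlying pitfall is easy to demonstrate in any language. In this Python sketch (an illustration, not the post's subject language), eval() chokes on JSON's true and null, and is an injection risk besides, while the real parser handles them:

```python
import json

text = '{"active": true, "note": null}'

# The real parser understands JSON's literals.
parsed = json.loads(text)  # {'active': True, 'note': None}

# eval() treats the text as Python source: 'true' and 'null' are not
# Python names, so this fails (and eval would also execute anything
# malicious embedded in the text).
try:
    eval(text)
    eval_worked = True
except NameError:
    eval_worked = False
```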

The next IAM post is Single Sign On (SSO) here.

Superfish

2015/02/24

I don’t own a Lenovo PC, thus I shouldn’t get upset over Lenovo pre-installing adware from Superfish. This adware apparently incorporated software from a third party, Komodia (an Israeli startup), which put a bad root certificate on the Lenovo PC, allowing a man-in-the-middle attack on all web sites: each web site’s SSL certificate showed Superfish as the issuer. Now I hate preinstalled software and routinely delete it when I (or a friend) get a new PC. The annoying thing here is that just removing Superfish doesn’t remove the bad certificate, and the MITM exploit can continue. Lenovo has apologized and has a removal tool (from McAfee, but other vendors have one as well). Lenovo has been hit with a class action suit, which I hope will extend to Superfish and to Komodia.

I don’t particularly like government intervention, but if it were clear that Lenovo and its suppliers were guilty of some federal crime and subject to huge fines, it might dissuade PC makers from preinstalling such crap onto their PCs. I know that PC profit margins are thin, and preinstalled software adds revenue, but really! (There is also the fallacy that preinstalled software enhances the PC by making it usable and attractive right out of the box. If you think this is attractive, think about the Superfish infection!)

Some people like to reinstall Windows – assuming they have an installation disk from Microsoft. (This is less hassle for open source operating systems, because such a disk image can be downloaded.) In this way, all the crap installed by the PC manufacturer doesn’t get installed.

A Quick Introduction to Cryptography

2015/01/30

As a mathematician, I’ve always enjoyed encryption, and have followed it since my days at Digital where, of necessity, many corporate consulting engineers became experts in security. (The Morris worm didn’t hit Digital’s Ultrix but was shocking nonetheless. Mitnick’s theft of the source code for Digital’s flagship operating system VMS was, to say the least, embarrassing.)

My Ph.D. thesis generalized a classification of singular elliptic curves, and thus using the group structure on an elliptic curve for encryption deeply fascinated me.

There are quite a few very nice online courses, books, and papers on encryption. Search YouTube and Google for them. There are two issues to watch out for with older material. One is that someone may have recently cracked an encryption scheme, and the other issue is that more and more powerful computers enable brute force attacks that can break older encryption schemes.
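The brute-force point above can be made concrete with some back-of-the-envelope arithmetic. The attacker speed below is an assumed, illustrative figure, not a benchmark of any real hardware:

```python
# Rough brute-force estimate. The trial rate is an assumption chosen
# purely for illustration.

TRIALS_PER_SECOND = 10**12          # assumed key-trial rate
SECONDS_PER_YEAR = 365 * 24 * 3600

def avg_years_to_break(key_bits):
    """Expected years to find an n-bit key by exhaustive search
    (on average, half the keyspace must be tried)."""
    return (2 ** key_bits / 2) / TRIALS_PER_SECOND / SECONDS_PER_YEAR

# 56-bit keys (old DES): roughly 0.001 years, i.e. hours -- which is
# why DES was retired. 128-bit keys (AES): on the order of 10**18
# years -- far beyond exhaustive search, even at this assumed rate.
```

The lesson is the one in the paragraph above: schemes whose key sizes were comfortable decades ago fall to brute force as computing power grows.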

Out of my private notes and many past talks I’ve put together and updated a few of my slides for an easy one hour introduction to cryptography for the working engineer. These slides are Quick Cryptography Introduction, and they list some of my favorite cryptography references.

Of course, to keep the presentation even close to an hour, many interesting and important topics were either given a cursory mention, or omitted entirely. In fact, I culled out more slides than I left in. There are, however, a couple “P.T. Barnum” slides that list such omitted topics and hint at future talks.

The very last slide of the presentation points out something both obvious and deep: encryption is only a tiny part of security. While all working engineers should know the basics of cryptography (the contents of the presentation are a start), we should also realize, for example, that the people aspects of security dominate almost all security breaches. Weak passwords, poor software maintenance, lack of employee education on social engineering, etc. are still rampant. On the technology side, much software is simply poorly designed from a security perspective. We’ve a long way to go in security, but learning a little cryptography is a good prerequisite.

BlackEnergy

2015/01/26

The BlackEnergy toolkit seems to have been deployed as early as 2007, when it was publicly analyzed by Arbor Networks. It was a DDoS attack tool using just HTTP and PHP. It evolved in 2008 into a rootkit, BlackEnergy2, which, according to secureworks.com (whose paper gives a complete analysis), was similar enough to the existing rootkit Rustock to sometimes be detected as such. BlackEnergy2 had a banking plugin designed to steal banking credentials from infected users. It could then corrupt the disk, making it non-bootable, and shut down the system (presumably so that the owner could not check the compromised bank account). The 2014 evolution, described by ESET and also by F-Secure, was used against various industries in Ukraine and Poland.

BlackEnergy3 is another variant, used by actors identified as the Quedagh gang (possibly Russian government sponsored), which F-Secure reports as being used to target political organizations with crimeware.

BlackEnergy3 is similar to the BlackEnergy used to infect various industrial control systems. Infected programs include GE’s Cimplicity, Siemens’ WinCC, and Advantech/Broadwin’s WebAccess. There are also some similarities to the malware Sandworm, which was used in a 2013 Russian cyberattack against NATO, the European Union, overseas telecommunications, and energy sectors. These various links and similarities give rise to speculation of a larger, government sponsored, program.

This all bothered DHS enough to issue, on October 29, 2014, a threat alert. Sadly, this has been reported as business as usual for the nation’s two regulated (nuclear and power grid) industries. It is not clear to me that the threat alert was a good idea; it’s a little like crying “wolf” when there’s no wolf … yet….

Havex aka Dragonfly

2015/01/26

Havex, aka Dragonfly, is a Remote Access Trojan (RAT) that surfaced as early as September 2013 and appears to be related to the attack group Energetic Bear, whose activities were seen in August 2012 in the energy sector. Havex’s early targets appear to be European companies and educational institutions. These targets are not directly ICS vendors, and the relation to ICS is unclear, but ICS-CERT, F-Secure, Symantec, Kaspersky, and others are tracking such attacks. At the moment, Havex appears only to retrieve structural intelligence about its targets, possibly in advance of future attacks. Havex uses compromised web sites (samples here) to induce users to download software that is infected with Havex (a watering hole attack).

Havex has dozens of variants and uses multiple methods for penetration: phishing, watering hole, etc., and F-Secure has already identified many (146 so far) command and control servers for these variants.

The total number of variants, attack vectors, C&C servers, and years in service is scary. A great deal of data has been collected across multiple industries. It feels to me like a nation-state preparing for cyberattacks (plural).

Keyloggers as Trojan Horses

2015/01/15

Read this Register article for starters.

A $10 giveaway at, say, conferences could be an effective Trojan Horse. Samy Kamkar (@samykamkar) has released schematics for a key logger built on the open source Arduino hobby board. I don’t quite see how to get the cost down to the claimed $10 without a lot of volume, since the Arduino board retails for $25. Perhaps the NSA has some better secret manufacturing plans with more nefarious delivery ideas.

Samy of course hopes that Microsoft and other keyboard manufacturers will address such a security hole.

2015

2015/01/01

Every few years I put together a talk about technology expected to develop or become prominent in the New Year. As I contemplated such a talk for 2015, my thoughts were stuck on cyber-security. While 2014 Silicon Valley IPOs in storage and in health-care are astounding, my thoughts are still on the vast discrepancy between the sophistication of malware attacks and the woeful inadequacy of corporate defenses. Various estimates of cyber-theft losses run into the hundreds of billions of dollars. These losses are hard to quantify. Even a kiddie virus that “only” disrupts a local network can cost millions of dollars in repairs and lost revenue. Never quantified by the courts, how do you put a numerical value on opportunity cost? How does one value the careers of the Target CEO and CIO who lost their jobs? Imagine the settlement if these two people alone could sue the perpetrators of the November 2013 Target breach in US court!

It is disheartening to contemplate that both the November 2013 Target breach and the recent Sony breach were preceded by successful, but smaller, earlier breaches. They of course were also preceded by other breaches into other companies. Will 2015 be the year that people wake up? Yes and no.

Let’s first consider the retail industry. We’ve had breaches at Target, Home Depot, Neiman Marcus, Sally Beauty, Kmart, Dairy Queen, Michaels Stores, P.F. Chang’s, Heartland Payment Systems, Goodwill, Supervalu, Staples, Jimmy John’s, Bebe Stores, Sheplers (western wear), Chick-Fil-A, OneStopParking (Krebs claims same attackers as Target’s), and probably many others in the retail industry that I haven’t studied. If this isn’t enough to motivate CEOs of retail companies, consider the very public breaches outside retail: Google (exposed Chinese Gmail accounts), Epsilon, Sony (Playstation), Sony Entertainment (Movies), the US Department of Veterans Affairs in 2009, Global Payments 2014, AOL, eBay, JPMorgan Chase, Adobe, United Parcel Service (UPS) Stores, Sands, etc.

No longer can the CEO of even a modestly large retail outlet assume it will be the “other guy” whose credit card database gets attacked. The board room topic of how much to increase the IT budget to address security will come up. The answer will be something like 10%. This is so wrong for many reasons. What most IT departments need is a total cultural change: new people, new expertise, new software, new security products, new processes, and new influence that will affect the entire company. This doesn’t even count the pain that Microsoft is forcing companies to suffer by shutting down support for older Windows products, notably XP and Server 2003. My guess is that the correct board room answer should be 100-200% (and even higher in capital costs for things like chip-and-PIN support) and not a paltry increase such as 10%. If those triple digit percentage increases are even floated, they will get shouted down as not affordable.

Affordability here isn’t a technology topic; it is a topic for the Harvard Business Review: restructuring the retail industry. I’ve seen hints of this. Authors scratch the surface on topics like “Who pays for a breach?” and “How much should one spend on security?”, where they look at the probability and severity of a loss for a retail company and come up with some low recommendation. Bruce Schneier has in 2014 given a couple of insightful talks which I cynically interpret as saying, “Look, for all the reasons that I’ve just explained to you, you’re going to get hacked, so put your money on incident response. Namely, invest in recovering from the inevitable attack.” Bruce’s company Co3 Systems sells incident response products and services. To be fair, Bruce doesn’t say not to invest in malware defense, but rather, don’t fail to invest in incident response.

My first 2015 predictions: Large retail companies will not restructure, but they will wrestle with this affordability problem. Security product and consulting companies will do very well as a result. Incident response companies should also do well. Malware defense products will improve; however, retail companies will continue to be hacked. The attacks will escalate and increase in sophistication. Damage will continue to rise. The recent Sony Entertainment attack shows that retail won’t be the only target (no pun intended.)

OK, what really scares me? It isn’t retail! If we admonish retail and related companies for ignoring early warning signs of malware attacks, aren’t we blissfully ignorant of the warning signs for infrastructure attacks? We are. In fact, most of the cyber-security articles that I read also ignore this.

My second 2015 predictions: The United States will suffer a cyber-attack on some infrastructure site in 2015. The technology of Stuxnet and its predecessors and follow-ons Duqu, Flame, Gauss, Wiper, Mahdi, Shamoon, sKyWIper, Miniduke, Teamspy, etc. provide a roadmap for even the smallest nation-states to follow for such infrastructure attacks. In fact, if you take Ralph Langner’s excellent paper “Stuxnet’s Evil Twin” and substitute “centrifuge” for your favorite industrial mechanism, the result isn’t a bad outline for how to proceed with such an attack. An actual attack, say of our electrical grid, would have to modify multiple SCADA systems and multiple flow control and transmission devices. Lots of code needs to be modified from the Stuxnet code, but a small nation-state could do it. (North Korea is reported to have 1800 skilled software engineers engaged in cyber-espionage. Such a large team and their contractors could do it.) Perhaps a smaller team of educated terrorists could carry out a more focused attack, say of a single industrial site.

We are not without early warnings beyond Stuxnet itself. SiliconANGLE’s May 2014 article on Iranian and Syrian attempted attacks on US energy firms is such a warning. Certainly the attacks on Iran’s and Saudi Arabia’s oil infrastructure are warnings. A GCN article on an attack on Iran is here. A CNET article on a Qatari LNG attack and a Saudi Aramco oil company attack is here. This CNET article outlines multiple variants of the malware potentially used in these attacks. The February 2014 RSA Conference had multiple, more technical, talks on this topic. I expect even more at the 2015 conferences in the US and in Asia Pacific & Japan.

What totally surprises and scares me is that the U.S. has not yet had a serious infrastructure cyber-attack, while mid-eastern countries have had such attacks. Advancing past Stuxnet, the attack software is becoming more sophisticated and powerful. The U.S. is due…