ONVIF

2018/07/04

ONVIF – formerly known as the Open Network Video Interface Forum

In the early 2000’s, my company Bristol Systems Inc. got into IP cameras and HID access-control security cards as part of our comprehensive security program for our customers. Sadly, ONVIF had not yet been formed.

ONVIF was formed in 2008 by Axis Communications, Bosch Security Systems, and Sony. The video security market at the time consisted of companies that made and/or sold video cameras and recorders. In the worst case, each pair of such devices had proprietary programming interfaces and proprietary protocols for communication between the cameras and recorders. This was an interconnect nightmare for customers who might want to add a camera to a system with a recorder or wanted to update their recorder. The idea of ONVIF was to standardize communication APIs and protocols between these devices, in order to permit interoperability independent of vendor, and to be totally open to all companies and organizations in this space. Its goals, beyond interoperability, are flexibility, future-proofing (your camera will continue to work in a heterogeneous system even if its manufacturer goes belly-up), and consistent quality.

The forum has now dropped the longer name as its standards have expanded beyond video, for example, to storage and to access control. It is now known simply as ONVIF.

The ONVIF standards consist of a core standard and several additional technical standards called “profiles”. All ONVIF conformant devices must conform to the core standard and to one or more profiles. One can think of the profiles as groups of features. This grouping provides some sanity in this market: if a vendor decides a particular profile is necessary or desirable, then it must implement all of the (mandatory) features of the profile. A device that only implements some of one profile and some of another cannot be ONVIF compliant.

The Core Specification 2.5 (December 2014) is rather comprehensive. This spec is around 150 pages and includes device and system management, web services, a framework for event and error handling, security, ports, services, device remote discovery (necessary for plug and play interoperability), and encryption for transport level security. It includes data formats for streaming and storing video, audio, and metadata. It also includes a wide variety of service specifications, e.g., access control, analytics, imaging, pan-tilt-zoom, recording control, replay control, etc. It builds on IETF and other networking standards.
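As a concrete taste of the discovery piece, here is a minimal Python sketch of a WS-Discovery probe. The multicast address 239.255.255.250, UDP port 3702, and the Probe message shape come from the WS-Discovery and ONVIF specs; the timeout and the way I print responses are just my illustrative choices.

# Minimal WS-Discovery probe for ONVIF devices (illustrative sketch).
import socket
import uuid

PROBE = f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body>
    <d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe>
  </e:Body>
</e:Envelope>"""

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)                                           # give devices 3 seconds to answer
sock.sendto(PROBE.encode("utf-8"), ("239.255.255.250", 3702))  # WS-Discovery multicast group

try:
    while True:
        data, addr = sock.recvfrom(65535)                      # each reply is a ProbeMatch
        print(addr[0], "answered with", len(data), "bytes of ProbeMatch XML")
except socket.timeout:
    pass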

The current profiles are identified by the letters: S, C, G, Q, A, and T. Thus we have Profile S, Profile C, Profile G, Profile Q, Profile A, and (draft) Profile T. To remember which is which, I use:

  • S = “streaming” for sending video and audio to or from a Profile S client or device. Basic camera controls.
  • C = “control” for basic access control such as door state and control, credential management, and event handling
  • G = “gigabyte” for storage, recording, search and retrieval
  • Q = “quick” for quick installation, device discovery and configuration
  • A = “additional access” for more on access control, configuration of the physical access control system, access rules, credentials, and schedules
  • T = “tampering” for compression, imaging, and alarms for motion and tampering detection.

In each profile, support for a feature is conditional on the device: if the device supports any aspect of a feature, then it must support all of the profile’s (mandatory) requirements for that feature; otherwise, those requirements do not apply. For example, Profile S specifies compliance requirements for pan-tilt-zoom. If a camera supports any aspect of pan-tilt-zoom, it must support all Profile S pan-tilt-zoom features; if it does not support pan-tilt-zoom at all, it can still be Profile S compliant.

In future posts, I’ll write about selecting a video (and audio) security system for my home, and about integrating my system into the neighborhood watch, which is a heterogeneous collection of security systems. In particular, I’ll detail exactly how the various profiles figure into equipment decisions.


Privacy in Windows 10

2017/11/13


The general problem with privacy in Windows 10 is that applications get lots of privileges that permit the “theft” of personal information. My goal is to turn these off as much as possible. Here is what I’ve tried:

Go to Settings, then Privacy, and turn off all privacy options in the General tab. [Couldn’t change some app notifications. Had to uninstall one uncooperative app.]

Go to Settings → Privacy → Background Apps, toggle off each app. [Had to search for Privacy, then all ok. Turned off most.]

Go to Settings → Accounts → Sync your settings. Turn off all settings syncing. [I used the “Sync Settings” switch to turn them all off. The individual settings were grayed out.]

Turn off sharing your ID/profile with third-party apps. Go to Settings → Privacy → General → “Let my apps use my advertising ID” (turning this off will reset your ID). [Had to search for “Advertising”, then could turn it off.]

Go to Location, turn off “Location for this device” (via Change button). [Found under “Personalization”]

Go to Camera, turn off “Let apps use my camera”. You can enable the camera when you need it. You can also enable the camera for specific apps. [done]

Go to “Speech, Inking and Typing”. Click on “Turn off” and “Stop getting to know me”. [Click “Get to know me” and you’ll get the option to turn it on or off. I use “off”.]

Go to “Feedback and Diagnostics” and choose Never for feedback, and “Basic” for diagnostic and usage data. [Done after I reconsidered my earlier settings.]

In Settings, go to Windows Update → Advanced Options and “Choose how updates are delivered”, select “turn off”.

Go to “Network and Internet” → WiFi, turn off WiFi Sense. [done]

Disable Cortana: Use Notebook menu item and select Permissions. Turn off all switches. Then select Settings, click on “Change what Cortana knows about me in the cloud” and tap “Clear”

Disable your Microsoft account: Go to Settings → Accounts → “Your info”, and choose “Sign in with a local account instead” (and set up a local account).

Disable Telemetry (automated data collection and communications): On the web, there is a lot of advice on disabling telemetry in Windows 10. Here is one from TechLog360 (link below): Open Command Prompt (Administrator) and type:

sc delete DiagTrack [response: [SC] DeleteService SUCCESS]
sc delete dmwappushservice [response: [SC] DeleteService SUCCESS]
echo "" > C:\ProgramData\Microsoft\Diagnosis\ETLLogs\AutoLogger\AutoLogger-Diagtrack-Listener.etl
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry /t REG_DWORD /d 0 /f  [response: The operation completed successfully]

I actually like Windows 10’s visual effects, but to turn one or more off, go to System → Advanced system settings → Advanced and uncheck whatever you don’t want.

http://techlog360.com/

http://superuser.com/questions/949569/can-i-completely-disable-cortana-on-windows-10

http://arstechnica.com/information-technology/2015/08/windows-10-doesnt-offer-much-privacy-by-default-heres-how-to-fix-it/

UX versus UI

2017/06/01


UX = User Experience, UXD = User Experience Design, UI = User Interface, and UID = User Interface Design are terms and acronyms frequently thrown around. Here are some of my thoughts about them.

First, UI and UID: A user interface for a product is something for which you can write a detailed specification, e.g., physical size of the product and its components, user input locations, visual, audio, and tactile feedback mechanisms, recording capabilities, etc. What exactly does the device do when it is powered up? What is displayed or heard or felt? Press or click on this button or icon, and the device does X and the screen looks like Y. Will the device accept voice input? What intermediate results are recorded? All these things, and no end of others, are user or human interface things. They describe how the device works as experienced by the user. They all can be objectively tested as to whether or not they meet the specification.

User interface experts have traditionally injected subjective opinions as to how good a user interface is. Most of these experts form their opinions by using the product themselves or by discussing how easy real users find the product to be. “Easy” is an interesting word here, because it can mean easy to learn, easy to understand, easy to use, etc. For the most part, being easy is subjective. One can measure some things, such as how fast the “average user” becomes proficient in the use of a product. Of course, this raises the question of the meanings of “average user” and level of proficiency, and how many users were surveyed to compute an average. Such measurements are usually very vague on the environmental issues: For example, was the user allowed to read a printed or on-line manual (now the test is about the manual together with the device), and was the user given any verbal instruction from an experienced user (now the test is about the device and the instruction)? In general, I lump all of UI expert opinion into pure subjective opinion. Some of these opinions are better than others, of course – where “better” is usually translated into product sales. Such opinions are best lumped into User Experience.

None of this user interface stuff has much to do with how well real users like a product, whether the product meets their stated and unstated needs, how many questions users have, and what the users actually feel is easy or difficult about using the product. I lump these subjective issues into the category of User Experience issues. One can try to design for a better user experience, and such an effort is often called User Experience Design, but its actual definition is a little vague. In fact, a project manager will have difficulty allocating time and budget to user experience issues. The proponents of agile development processes actually allow for development phases where attempts are made to ask users about their experiences with early versions of a product so that UI changes can be made for the next iteration. If the developers are both good and lucky, iterations of these development phases will create a product that meets business needs (revenue, profitability, market share, etc.) Of course, the product development team has numerous members from all facets of the company: project, product, and program management, engineering, sales, marketing, finance, competitive analysis, etc. Gathering consensus and making product decisions can be challenging.

Final thoughts:

  • When the team is formed, make it clear who makes decisions, and how decisions are made.
  • Make it clear by what criteria a product is canceled. (For example, if the projected ship date slips by N months, then cancel; if the product won’t meet profitability goals, then cancel; etc.)
  • Separate UI (objective measurements) from UX (subjective opinions). The purpose of UID is to meet perceived product and user goals as defined by UXD. The UID consists of a UI specification. Test this first, then subjectively evaluate whether the UI meets the UX goals.
  • Keep the UI development team small and the product as simple as possible (but no simpler.)

Windows 10 Problems and (some) Solutions

2017/05/07

On May 11, 2016, I started one of my note files on Windows 10 problems. Over the next year, I added some solutions. This file is now an unreadable mess, and I decided in 2017 not to make it a WordPress post. Now what do I do with it? Hopefully, Microsoft has since fixed many of these problems. My plan is to go back and blog about each problem. This, and new problems, should keep me busy for years to come!

Upgrading to Windows 10

2016/07/15

The first good news is that the upgrade from Windows 8 (which I hated with a passion) to Windows 10 went very smoothly. The Windows 8 PC is now my daughter’s, and it had few applications. In fact, the only problem I had was with a DVD player, which was quickly solved by downloading a new version of the player.

The second good news is that a “simple” upgrade from Windows 7 to Windows 10 went well also. I had to delete Chrome and OpenOffice, and then reinstall their Windows 10 versions.

The upgrades were slow, even with a good cable modem, but it all worked. I was delighted that the upgrades restarted themselves intelligently each time the network burped.

The Leaky Leviathan

2015/06/21

David Pozen’s Harvard Law Review paper [1], “The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information”, explores the issues of leaks, plants, and their combinations, “pleaks”, of confidential government information. Such disclosures represent an art form for U.S. (and, I’m sure, other nations’) politicians. This paper will be required reading for any student of constitutional law or participant in government. For technologists such as readers of this blog, this paper begins the wrestling match around the Snowden leaks, and the battle over the NSA’s previously secret activities to intercept foreign and domestic Internet and telephone communications.

Pozen’s paper analyzes the reality of how government works. It is easy to make something “secret” in one form or another, but government has created informal ways to incrementally relax secrecy. We’ve all heard newscasters attribute a story to an “unnamed source”. When the story supports the government position in a measured way, it is a plant. When the government feels too much pain and the unnamed source was not controlled by the government, then it is a leak, and top executives in the government squeal like little piglets at the wrong done. In reality, Pozen writes, tongue in cheek, that plants need to be “nourished” by leaks. Otherwise, if all leaks were suppressed, plants would lose their believability and would be ineffective as a government tool. He points out that historically whistle-blowers and sources of leaks are rarely thrown in jail. They are, however, often shunned, losing valuable access to government executives.

The Creaky Leviathan by Sagar [2] is sometimes cited as a rebuttal, but I found it hugely supportive:

Let me be clear. Leaky Leviathan contains no hasty judgments. Pozen is admirably careful and measured in his praise of permissive enforcement; his broader objective is to show that the varied causes and consequences associated with leaks mean that the prevailing “system of information control defies simple normative assessment.” This conclusion I fully endorse. But the question remains: even if it advances our comprehension, does “middle-range theory” improve our ability to judge the value or worth of the prevailing “system of information control”? For the reasons outlined above, I think the answer must be no, since the overall consequences of permissive enforcement are obscure and hard to ascertain. As far as this “disorderly” system is concerned, the most we can do from a normative perspective, I think, is become clearer about whether particular outcomes it produces are more or less acceptable.

Whistle-blowing protection when it comes to national security issues is a dicey topic. The issue here is that huge losses of life and treasure are at risk. Counterbalancing such risk is that the national security infrastructure is huge: the NSA, the Pentagon and all of the DoD, the CIA, the NSC, and of course all of the Executive Branch (and some of Congress) represent the government side, and the major contractors Boeing, L3, Halliburton, McDonnell Douglas, Raytheon, etc. are also in the mix. A system of secrecy often makes it difficult to generate healthy debate; moreover, these institutions are “wired” to worry about and exaggerate threats. Secrecy in these institutions makes it difficult even for insiders to gain enough information to offer opposing views. Senator McCarthy, J. Edgar Hoover, Nixon, and others required outside forces to neutralize their negative actions. However real the communist threat was, McCarthy and Hoover violated the rights of many U.S. citizens. There were, in the end, no weapons of mass destruction in Iraq, and we went to war over this fabrication. The Maginot Line in France was an utter failure. Examples abound. The whistle-blowing that has been legitimized in the private sector is not well legitimized in the government sector. Current law limits disclosure to Congress and does not cover civilian contractors (thus Snowden is not protected; Manning was somewhat, but still got 35 years, possibly due more to the choice of WikiLeaks as an outlet). The Leaky Leviathan screams out for a legal structure to fairly protect national security whistle-blowing. Benkler’s paper [3] “Whistle Blower Defense” takes a solid crack at this.

Benkler starts with an excellent in-depth review of how we got from 9/11 to the mess that Manning and Snowden disclosed. His “Public Accountability Defense” starts with the observation that most leaks are not prosecuted, because if they were, the non-prosecuted leaks would appear to be sanctioned and would lose their credibility to shape public opinion. He focuses on “accountability leaks”, which are those that expose substantial instances of illegality or gross incompetence or error in important matters of national security. These are rare. One set occurred at the confluence of the Vietnam and Cold Wars with the anti-war and civil rights movements. The second deals with extreme post-9/11 tactics and strategies. Such leaks have played a significant role in undermining threats that the national security establishment has posed to the “constitutional order of the United States” – in short, a very big deal. For many of us technologists, the issues we would consider leaking would wrack our conscience and destroy our moral compass. We would be going out of our element to deal with the failure of mechanisms inside the national security system to create a leak. For example, the CIA’s program of torture, rendition, and secret off-shore prisons somehow leaked without an Ellsberg, a Manning, or a Snowden. (The technology that enabled Ellsberg to release the Pentagon Papers was a massive, highly automated, Xerox machine.) Benkler observes, “The greater the incongruity between what the national security system has developed and what public opinion is willing to accept, the greater the national security establishment’s need to prevent the public from becoming informed. The prosecutorial deviation from past practices is best explained as an expression of the mounting urgency felt inside the national security system to prevent public exposure. The defense I propose is intended to reverse that prosecutorial deviation.” Needed is a defense, or at least a sentencing mitigation platform, that requires only a belief that the disclosure would expose substantial “violation of law or systemic error, incompetence, or malfeasance.” This defense is based on the leaker serving a public good. It is not individual-rights based. It is belief based, and does not depend on later proven illegality of what was disclosed.

This is not a naive proposal, and many subtleties are discussed, for example, to whom the leak is made, how it is made, when it is made, what is redacted, how public the leak mechanism is, etc.

Benkler reviews several historical leaks, going back to 1942 when Morton Seligman leaked decoded Navy messages to a reporter. If published, the fact that Japanese codes had been broken would have been disclosed, harming the war effort. Since no government wrongdoing was disclosed, the accountability defense would not apply. The long historical review ends with a discussion of the Manning and Snowden cases. Manning’s 35-year sentence is obscenely excessive even though the criteria for an accountability defense are mixed. One would hope, in the presence of an accountability defense, that at least a more reasonable sentence would have been handed down. A detailed analysis of the Snowden case is given; with the single exception of leaks on NSA’s Tailored Access Operations (TAO), which target specific computers, the defense applies. One interesting issue is that the legal defense should be structured so that the prosecution cannot “cherry pick” the weakest disclosure and prosecute that, ignoring the public value of the other disclosures.

The institution through which the leaks are made should also be protected by the defense. In some sense, “free press” does this, but this should be clarified in national defense cases.

Finally, “punishment by process” is discussed. The government can ruin someone in many ways. Huge legal expenses, long drawn-out trials, loss of access and jobs, etc. While protection from punishment by process is desirable, how to do this needs to be addressed. I would think that technologists fear this the most.

I strongly recommend these three thought-provoking articles.

[1] http://cdn.harvardlawreview.org/wp-content/uploads/pdfs/vol127_pozen.pdf, “The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information”, Harvard Law Review, December 2013, Vol. 127, No. 2, David Pozen

[2] http://harvardlawreview.org/2013/12/creaky-leviathan-a-comment-on-david-pozens-leaky-leviathan/, “Creaky Leviathan: A Comment on David Pozen’s Leaky Leviathan”, Harvard Law Review Forum, December 2013, Vol. 127, No. 2, Rahul Sagar. [A mostly favorable review of [1]]

[3] http://benkler.org/Benkler_Whistleblowerdefense_Prepub.pdf, “A Public Accountability Defense for National Security Leakers and Whistleblowers”, Harvard Law & Policy Review, Vol. 8, No. 2, July 2014, Yochai Benkler. [A well-reasoned and historically justified proposal for the legal structure of a whistle-blower defense that is of particular interest to technologists.]

Removing Crapware from Windows

2015/05/28

Every so often my PC starts getting slow. In the task manager there are dozens of processes that I don’t recognize. It’s a real pain to clean these out. But I guess this is just basic maintenance that needs to be done. Here are my notes for today. I doubt this makes good reading, unless you land here via a search engine and want to see how I got rid of something.

The first lesson here is that removing crap is best done in the Administrator account, and not just in an ID with administrator privileges. Some utilities (sc for example) test for user ID and not just privileges. If you use Windows Vista, 7, or 8, this account is “hidden”. Sigh. If you’ve ever wondered what the option “run as Administrator” is, now you need it.

On the site windowsvc.com, I found this helpful way to remove crap installed as a service. In this case, I wanted to remove BrsHelper:

Open a command prompt by right-clicking its icon and selecting “run as Administrator”. Use the following lines, respectively, to stop the service, disable its auto-start, and delete it entirely (note that sc requires the space after “start=”). For example:

sc stop "BrsHelper"

sc config "BrsHelper" start= disabled

sc delete "BrsHelper"

I note on the web that others get “Access Denied” with sc even when running it as Administrator. I didn’t have that problem, but beware. This seems like a nice utility. It does have a side effect of staying in memory after using it. I had to kill its process tree from the task manager when I was done with it.

The Administrator account isn’t just hidden, it isn’t enabled at all. To enable it, run the command prompt as Administrator as above, then type:

net user administrator /active:yes

Now the Administrator account is active, and you’ll see it when you want to log in or just change user accounts. BEWARE, initially it has no password. Be sure to set a good one if you want to leave it active. To disable it, repeat the above command with “no” instead of “yes”.

There are other ways to do this. Vishal Gupta’s site www.askvg.com offers three other ways here.

I was trying to remove the crapware YTdownloader, and ran into the above Administrator problem. There is an interesting utility autoruns.exe which lists all of the programs that are set to auto run. You must run this program as Administrator, but you can tune the autoruns without messing directly with the registry. You can also submit whatever you find to VirusTotal. My local McAfee claims there is a trojan inside YTdownloader.exe. There are other reports that it is malware. My early attempts to remove it got trapped by McAfee which claimed that the program was moved to a quarantine area. But going to McAfee’s interface for its quarantined files showed no sign of YTdownloader. I could find it using the file explorer, and there was a directory of the same name, which I could delete but only as Administrator. This didn’t get rid of a companion program BrsHelper, which I killed as above.

Incidentally, YTdownloader is sometimes called YouTube downloader. Beware of being tricked into installing YTdownloader by trying to download videos! I don’t understand the relationship here.

I also got rid of a couple of Dell programs with bad reputations: dkab1err.exe (the character after the “b” is the digit one) and DKADGmon.exe. They must have gotten installed when I used a Dell printer at one of my consulting clients’ sites. With Administrator active, I had no trouble deleting them. I did have to deal with an extra prompt to continue, however. Just click it and move on.

The program biomonitor.exe was always running. The utility autoruns.exe didn’t list it. Apparently it is part of HP’s SimplePass fingerprinting tool. To delete it, kill the process tree for biomonitor from the task manager, and then uninstall HP SimplePass from the control panel.

I came across a program WindowexeAllkiller.exe. While it looked interesting, it required the .NET Framework, so I didn’t try it. CNET warns that while safe, an inexperienced user can get into trouble. The author recommends checkpointing Windows before using it. The apparent goodness of this tool is that you can eliminate several bad programs at once. I suppose this is why it is such a dangerous tool. Some feedback on this tool would be welcome.

As I was thinking I was done, I noticed an unexpected tab in Chrome for www-searching.com. (Note the hyphen.) I don’t know how it got there. As I was on a roll looking for strangeness, I quickly found that this program was a search engine of sorts that was designed to track you and steal your personal information. The only damage it did to me was to install a shortcut to its site on my task bar. Of course I deleted the task bar item and the tab in Chrome, and then I did all the due diligence to get rid of potential infection elsewhere. I searched the registry, checked for Chrome add-ons and for a hijacked home page, checked the Chrome history and was very surprised to find nothing, checked the scheduled tasks, searched the file system, and looked for ads by it. I couldn’t find anything else. Malwarebytes was reputed to find and remove it, but a complete scan found nothing. Maybe I was lucky that I didn’t try out this bogus search engine!

I noticed on the web that www-searching.com was also similar to ohtgnoenriga.com (Gads, what language is that?) as well as search.conduit.com “Conduit Search”. I also looked for ohtgnoenriga and conduit.com on my system, and fortunately found nothing.

Finally, I deactivated my Administrator account as above.

SCIM – System for Cross-domain Identity Management

2015/05/15

SAML, OAuth, and OpenID Connect, as we have seen, all require the registration of the Client Applications, the Resource Owners (End Users), and the Resource Servers. The Authorization Server (AS) = the OpenID Provider (OP) is thus forced to keep the registration data, perhaps stored in tables. While these standards loosely define what goes into these tables, they define neither how these data are collected nor how they are managed. SCIM, the System for Cross-domain Identity Management [not to be confused with SCIM, the Smart Common Input Method platform], is an attempt to do this. See Ping Identity’s history in their SCIM white paper and a brief Wikipedia article for descriptions of some early attempts. The IETF lists some current draft specs.

The “C” in SCIM used to stand for “cloud”, but on-premises use of SCIM for internal identity management is popular as well. A SCIM server can be in the cloud and still manage on-premises applications using a secure SCIM channel through the firewall. This becomes a “cloud identity bridge”.

In my earlier IAM posts, I noted that the IdP or the AS had table descriptions for Clients, Resource Providers, Resource Owners, etc. This is the beginning of a schema for SCIM. It also needs groups, roles, entitlements, and devices.
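To make the schema idea concrete, here is a minimal sketch of provisioning a user through SCIM 2.0’s REST API. The /Users endpoint, the core user schema URN, and the application/scim+json media type come from the SCIM specs (RFC 7643/7644); the server URL, bearer token, and user attributes are placeholders I made up.

# Create a user via SCIM 2.0 (illustrative sketch; endpoint and token are made up).
import requests

SCIM_BASE = "https://idm.example.com/scim/v2"   # hypothetical SCIM server
TOKEN = "..."                                   # an OAuth bearer token for the SCIM API

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "bjensen@example.com",
    "name": {"givenName": "Barbara", "familyName": "Jensen"},
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/scim+json"},
)
resp.raise_for_status()
print("Created user with id", resp.json()["id"])  # the server assigns the SCIM id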

OpenID Connect

2015/05/01

OpenID Connect allows Client Applications to verify the identity of the End User based on the authentication performed by an Authorization Server. It also allows the Client to obtain basic profile information about the End User.

There are a number of versions of how OpenID Connect was born, e.g. here, here, here, and officially here. I like the story that after multiple influential companies implemented SAML, WS*, and OpenID 2.0, and Facebook implemented Facebook Connect, Eran Hammer and David Recordon put forth a one-page proposal for what became OpenID Connect. I can’t find this historical one-pager, and even the core spec today is around 100 pages, with half a dozen other supporting documents. Some have called it a functional merger of OpenID and Facebook Connect that is layered on OAuth 2.0. Others provide the “formula”:

(Identity, Authentication) + OAuth 2.0 = OpenID Connect

Whoever should be getting historical credit, the basic idea is both simple and brilliant: take the authorization mechanism of OAuth 2.0, make a couple of tiny additions, which I’ll explain in a moment, and voilà, we’ve got an authentication mechanism.

As with OAuth 2.0, there is a registration process that is not specified, but it is essentially the same as for OAuth and is described in the spec under OpenID Connect Discovery and under OpenID Connect Dynamic Client Registration. There is a bit of a pas de deux on terminology. What OAuth calls the Authorization Server (AS) is also referred to as the OP, for OpenID Provider, and the OP has an Authorization Endpoint and a Token Endpoint. The client obtains these endpoints during registration.

The fundamental new idea is simply to add a new scope value openid to the initial authorization request message (described in my last post on OAuth 2.0) to the Authorization Server AS. Having openid as one of the scope values makes the request ask not only for access tokens but also for a new “identity token”, and it opens up the possibility of requesting more information about the end user. Here are some of these request parameters (a sketch of building such a request follows the list):

  • scope: must contain the new value openid and may contain one or more of the scope values of profile, email, address, phone, and offline_access
  • response_type: code – means both access and ID tokens will be returned from the token endpoint in exchange for the code value obtained from the AS
  • client_id: obtained during registration at the AS
  • redirect_uri: one of the pre-registered redirection URI values for the client
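To illustrate, here is a small sketch that assembles such an authentication request URL. The parameter names are from the spec; the endpoint, client_id, and redirect URI are made-up placeholders.

# Build an OpenID Connect authentication request URL (illustrative sketch).
import secrets
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://op.example.com/authorize"  # hypothetical OP authorization endpoint

params = {
    "response_type": "code",
    "scope": "openid profile email",            # "openid" is what makes this OIDC
    "client_id": "s6BhdRkqt3",                  # issued at registration (placeholder)
    "redirect_uri": "https://client.example.org/cb",
    "state": secrets.token_urlsafe(16),         # CSRF protection; verify on return
}

print(f"{AUTH_ENDPOINT}?{urlencode(params)}")   # redirect the browser here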

This request asks the AS to authenticate the owner/operator of the browser that is sending the message and to return an id_token as well as access tokens. The id_token will affirm that the user has authenticated recently, and it may contain additional claims or information about the user. The method of authentication is not defined by the spec. The id_token is a JSON object (delivered as a signed JWT) that includes the following; a sketch of validating one follows the list:

  • iss = issuer identifier for the issuer of the response
  • sub = subject identifier, locally unique within the issuer, for the end user
  • aud = audience(s) for whom this id_token is intended = array of case sensitive strings or a single such string
  • UserInfo Endpoint
  • iat = Issue timestamp
  • exp = Expiration datetime
  • auth_time = time end-user last authenticated
  • How the user was authenticated (optional)
  • many other optional tags
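Since the id_token arrives as a signed JWT, the client must verify it. Here is a minimal sketch using the PyJWT library; in practice the key would be fetched from the OP’s published JWKS, and the issuer and client ID values below are placeholders.

# Validate an OpenID Connect id_token with PyJWT (illustrative sketch).
import jwt  # pip install pyjwt

def verify_id_token(id_token: str, op_public_key, client_id: str, issuer: str) -> dict:
    # jwt.decode verifies the signature and the exp, aud, and iss claims for us.
    claims = jwt.decode(
        id_token,
        key=op_public_key,          # in practice, fetched from the OP's JWKS endpoint
        algorithms=["RS256"],
        audience=client_id,         # must appear in the "aud" claim
        issuer=issuer,              # must match the "iss" claim
    )
    return claims                   # includes sub, iat, auth_time, etc.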

The user runs a protected application, which may make additional GET requests to the UserInfo endpoint (a REST API) for identity attributes. Curiously, the spec warns that these claims may not be for the end user (due perhaps to man-in-the-middle attacks)! In addition, there are language dependencies on the claim values.

The final OpenID Connect specification is dated Feb 26, 2014; and the certification program was launched April 22, 2015 with Google, Microsoft, Ping Identity, ForgeRock, Nomura Research Institute, and PayPal the first to self-certify.

Multiple companies, in support of OpenID Connect, have announced they will no longer be supporting OpenID 2.0 at some point in the near future.

My next IAM post is about SCIM.  It is here.

OAuth 2.0

2015/04/20

After SAML, a study of IAM needs to dig next into OAuth 2.0. It is NOT backward compatible with earlier OAuth versions, and an excellent historical introduction is here. The official spec is RFC 6749, as well as the spec RFC 6750 for Bearer Token Usage. This post presents an easy overview by working through a specific and common example.

OAuth 2.0 is for authorization, but there is a rather clever extension, OpenID Connect, that provides authentication. They really should be just one standard, and I suggest learning them both in rapid succession.

One primary motivation for OAuth was to provide authorization for the many APIs provided by vendors such as Google, Facebook, LinkedIn, Twitter, and eBay to enhance and extend their products. These vendors view mobile devices as emerging to be the primary user interface, and hence much attention in OAuth goes to supporting a wide variety of such devices with varying capabilities and security profiles.

OAuth 2.0 has four actors:

  • Client C = the application making protected resource requests on behalf of the resource owner and with its authorization. C is a client both of the resource server RS and of the authorization server AS.
  • Authorization Server AS = the server issuing access tokens to the client after successful authentication of the resource owner and obtaining authorization. AS has separate authorization and token endpoints.
  • Resource Server RS = API = the server hosting the protected resources. RS is capable of accepting and responding to protected resource requests that use access tokens.
  • Resource Owner RO = an entity capable of granting access to a protected resource. When the RO is a person, RO = end user.

Clients. The popularity of OAuth stems from the huge variety of client applications directly supported. Individual implementations of authorization servers vary in how they support clients, but the spec gives some guidance. There are two broad client types, confidential and public. Confidential clients are capable of maintaining the confidentiality of their credentials and can authenticate securely. Public clients cannot, e.g. executing on a device used by the RO that is incapable of secure client authentication. The AS should assume the client is public until it meets the AS’s criteria for secure authentication. E.g.,

  • A web application that is a confidential client running on a web server. An RO accesses the client application via a browser on the device used by the RO. Client credentials and access tokens are stored on the web server and are not accessible by the RO.
  • A user agent based application is a public client in which the client code is downloaded from a web server and executes within a browser on the device used by the RO. Protocol data and credentials are accessible to the RO.
  • A native application is a public client installed and executed on the device used by the RO. Protocol data and credentials are accessible to the RO, but are protected from other servers and applications.

Client Registration. Before initiating a request for data, a client application registers (over TLS) with the AS, typically with an HTML form filled out by some end-user. This mechanism is not defined by OAuth. The point here, however, is to establish trust by exchanging credentials and to identify client attributes such as the redirection URIs and client type. The trust established must be acceptable not only to the AS but also to any relying resource server that accepts AS access tokens. Registration can be accomplished using a self-issued or third-party-issued assertion, or by the AS performing client discovery using a trusted channel.

A client identifier is issued by the AS to the client at the time of registration. It is a string, unique to the AS and to the client. Each client registration record in the AS typically includes:

  • client identifier (the key to this record) = client_id REQUIRED
  • client password
  • client public key
  • client secret = client_secret REQUIRED
  • list of redirection URIs
  • other attributes required by the AS: application name, website, description, logo image, acceptance of legal terms, etc.
  • client type

If the client is implemented as a distributed set of components, each with a different client type and security context, e.g. a confidential server-based component and a public browser-based component, then the AS must either have specific support for such a client, or should register each component as a separate client. This flexibility of a particular AS can be a competitive advantage for it.

OAuth gives some additional security guidance to the three examples of clients mentioned above. Web applications, confidential clients accessed by the RO via a browser, should store their access tokens on the web server in a way not exposed to or accessible by the resource owner. User-agent-based applications, public clients whose code is downloaded from a web server, should make use of the user-agent capabilities when requesting authorization and storing tokens. These are accessible to the RO. Native applications, installed and executed on the RO’s device, have protocol data and credentials accessible to the RO. Native applications should protect these data and tokens from hostile servers and other applications, even those that execute on the same device.

The registration process must also include RO and RS registration. It includes the authorization endpoint that the client uses to obtain an authorization grant. Each RO registration record in the AS should contain:

  • username
  • password
  • public key
  • certification of authenticity
  • session cookie info
  • authorization endpoint (“application/x-www-form-urlencoded”)
  • URI for the associated RS
  • access token requirements for RS

As mentioned above, one of the things that makes OAuth 2.0 popular is its flexibility to support a wide variety of devices, from traditional workstations and servers to Internet-enabled devices such as mobile phones and wristwatches. The protocol flow varies by kind of device in order to deal with varying client capabilities. The spec gives some guidance, but implementations vary. The additional layer OpenID Connect addresses this in more detail.

A typical protocol flow is the one specified for a confidential client C requesting authorization for a specific resource on RS:

  A. C requests authorization from RO
  B. C receives authorization grant from RO
  C. C sends authorization grant to AS
  D. C receives access token from AS
  E. C sends access token to RS
  F. C receives protected resource from RS

It is required that these exchanges use TLS.
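To make steps (C) and (D) concrete, here is a sketch of exchanging an authorization code grant for tokens at the AS’s token endpoint, per RFC 6749. The endpoint, code, and client credentials are made-up placeholders.

# Exchange an authorization code for tokens (RFC 6749 §4.1.3; illustrative sketch).
import requests

TOKEN_ENDPOINT = "https://as.example.com/token"    # hypothetical AS token endpoint

resp = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "authorization_code",
        "code": "SplxlOBeZQQYbYS6WxSbIA",          # the grant from step (C) (placeholder)
        "redirect_uri": "https://client.example.org/cb",
    },
    auth=("s6BhdRkqt3", "client_secret_here"),     # confidential client authenticates
)
resp.raise_for_status()
tokens = resp.json()
access_token = tokens["access_token"]              # used in step (E)
refresh_token = tokens.get("refresh_token")        # kept for renewing later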

A. In this example, the authorization request in (A) is made directly to RO. Without sending its client_secret, C sends, and the RO receives, this request as a JSON object that includes:

  • a list of redirection URIs via the “redirect_uri” request parameter which must be registered with the AS and formatted using the “application/x-www-form-urlencoded” format.
  • This request may include a “state” parameter and value.
  • A client_id obtained during registration of the client.
  • An authorization request number
  • Authorization Endpoint
  • Response/Grant type requested. REQUIRED. Since there are multiple authorization grant types, there is some variation on authorization requests. Cf. grant types below.

B,C. The authorization grant in (B) and (C) is one of the following four predefined grant types or an extension/custom grant type private to this particular AS:

  1. Authorization Code – used when the AS is an intermediary between C and RO
  2. Implicit – for clients implemented in JavaScript in a browser. The client is issued an access token directly, without an authorization code
  3. Resource owner password credential – the RO’s username/password serves as the grant to get an access token
  4. Client credentials – used as a grant when the client C is acting on its own behalf
  5. Extension/custom grants

B. The Resource Owner might not have the capabilities to process the request or to form the grant. In this case the RO forwards the request to the AS which returns the grant to the RO after processing. The RO then returns the grant to the client C. A good example here is when the RO is an end user with a small mobile device, and the RO, running an application on C, asks C to do something such as print a photo owned by RO but stored in the cloud. C says, ok, but I need access to the photo, and starts step (A) with a resource request. Since the request contains the authorization endpoint, the RO can immediately forward the request to the named AS which can do all the processing needed to produce the grant. (The perspicacious reader might ask, well if the grant gets sent to C in step (B) and then back to AS in step (C) with a request for an access token, why doesn’t AS just return an access token? The point is to formally separate the steps that require client credentials from the steps that require RO credentials.)

B,C. Note authorization grants do not include RO credentials. As our example, let’s look at the Authorization Code grant type. It is a JSON object containing at least:

  • grant type, in this case, grant_type: authorization_code
  • authorization end point
  • client id
  • RO id
  • requested scope
  • local state
  • redirection URI
  • the client generated authorization request number

D. AS authenticates C and validates the grant. If valid, AS issues an access token as well as a refresh token. An access token of this kind is called a bearer token. It is also called a “valet key”, because it provides limited access to resources, much like a valet key for an automobile. These tokens are JSON objects defined by RFC 6750 containing:

  • Type name: Bearer
  • grant type
  • client ID
  • scope
  • timestamp when issued
  • validity time for this token in seconds beyond timestamp
  • original client generated authorization request number
  • authorization code (unique number)
  • authorization end point
  • RO id
  • RS id
  • RS endpoint
  • signature of the AS
  • refresh token number

These token attributes are not fully specified by the spec. Anybody in possession of this token will be given access to the protected resources by the RS. Thus, tokens should be protected from disclosure both in storage and in transit.

E. The client C requests the protected resource from RS by presenting the access token for authorization.

F. RS validates the access token by checking its scope and expiration, and serves the request.

An example of such an access token response is:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token":"mF_9.B5f-4.1JqM",
  "token_type":"Bearer",
  "expires_in":3600,
  "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA"
}

If the token has expired, C submits the refresh token, its client_id, and its client_secret to AS to get a new access token, and then C repeats (E) and (F).
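In code, that refresh might look like the following sketch, with the same made-up endpoint and client credentials as before.

# Refresh an expired access token (RFC 6749 §6; illustrative sketch).
import requests

resp = requests.post(
    "https://as.example.com/token",                # the same hypothetical token endpoint
    data={
        "grant_type": "refresh_token",
        "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA", # value from the earlier token response
    },
    auth=("s6BhdRkqt3", "client_secret_here"),     # the client re-authenticates
)
resp.raise_for_status()
access_token = resp.json()["access_token"]         # then repeat steps (E) and (F)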

When an RO provides a distinct C with access to resources, this access is authorized by AS and provided by RS. I think of AS as being a big flat surface supported by three legs C, RO, and RS. This is dubbed a “three-legged flow”. When the client equals the resource owner (or is entrusted with the RO’s credentials), the roles of C and RO collapse, and we have a “two-legged flow”. This special case can be handled by a resource owner password credential grant type or a client credentials grant type.

Finally, quite a number of security concerns are addressed in OAuth 2.0 Threat Model and Security Considerations [RFC 6819]. These security concerns are also discussed in OpenID Connect which is my next post here.