Archive for the ‘General’ Category



ONVIF – formerly known as the Open Network Video Interface Forum

In the early 2000s, my company Bristol Systems Inc. got into IP cameras and HID access control cards as part of our comprehensive security program for our customers. Sadly, ONVIF had not yet been formed.

ONVIF was formed in 2008 by Axis Communications, Bosch Security Systems, and Sony. The video security market at the time was made up of companies that made and/or sold video cameras and recorders. In the worst case, each pair of such devices had proprietary programming interfaces and proprietary protocols for communication between the cameras and recorders. This was an interconnect nightmare for customers who might want to add a camera to a system with an existing recorder, or who wanted to upgrade their recorder. The idea of ONVIF was to standardize the communication APIs and protocols between these devices, in order to permit interoperability independent of vendor, and to be totally open to all companies and organizations in this space. Its goals, beyond interoperability, are flexibility, future-proofing (your camera will continue to work in a heterogeneous system even if its manufacturer goes belly-up), and consistent quality.

The forum has now dropped the longer name as its standards have expanded beyond video, for example, to storage and to access control. It is now known simply as ONVIF.

The ONVIF standards consist of a core standard and several additional technical standards called “profiles”. All ONVIF conformant devices must conform to the core standard and to one or more profiles. One can think of the profiles as groups of features. This grouping provides some sanity in this market: if a vendor decides a particular profile is necessary or desirable, then the vendor must implement all of the (mandatory) features of that profile. A device that implements only some of one profile and some of another cannot be ONVIF compliant.

The Core Specification 2.5 (December 2014) is rather comprehensive. At around 150 pages, it covers device and system management, web services, a framework for event and error handling, security, ports, services, remote device discovery (necessary for plug-and-play interoperability), and encryption for transport-level security. It includes data formats for streaming and storing video, audio, and metadata. It also includes a wide variety of service specifications, e.g., access control, analytics, imaging, pan-tilt-zoom, recording control, and replay control. It builds on IETF and other networking standards.
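Remote device discovery is concrete enough to sketch. ONVIF uses WS-Discovery, a SOAP-over-UDP protocol multicast to 239.255.255.250 on port 3702; a client sends a Probe message and conformant devices answer with ProbeMatch responses. Here is a minimal Python sketch (the helper names are mine, and the XML is a bare-bones probe, not a reference implementation):

```python
import socket
import uuid

# WS-Discovery multicast endpoint used by ONVIF devices.
WS_DISCOVERY_ADDR = ("239.255.255.250", 3702)

def build_probe() -> bytes:
    """Build a minimal WS-Discovery Probe for ONVIF video devices."""
    message_id = f"uuid:{uuid.uuid4()}"
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>{message_id}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body>
    <d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe>
  </e:Body>
</e:Envelope>""".encode("utf-8")

def discover(timeout: float = 2.0) -> list[str]:
    """Multicast a probe and collect raw ProbeMatch responses until timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_probe(), WS_DISCOVERY_ADDR)
    responses = []
    try:
        while True:
            data, _addr = sock.recvfrom(65535)
            responses.append(data.decode("utf-8", errors="replace"))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return responses
```

Running `discover()` on a LAN with ONVIF cameras should return XML ProbeMatch blobs containing each device's service address; this is the "plug and play" piece that lets a recorder find cameras from any vendor.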

The current profiles are identified by the letters: S, C, G, Q, A, and T. Thus we have Profile S, Profile C, Profile G, Profile Q, Profile A, and (draft) Profile T. To remember which is which, I use:

  • S = “streaming” for sending video and audio to or from a Profile S client or device. Basic camera controls.
  • C = “control” for basic access control such as door state and control, credential management, and event handling
  • G = “gigabyte” for storage, recording, search and retrieval
  • Q = “quick” for quick installation, device discovery and configuration
  • A = “additional access” for more on access control, configuration of the physical access control system, access rules, credentials, and schedules
  • T = “tampering” for compression, imaging, and alarms for motion and tampering detection.

In each profile, support for a feature is conditional: if the device supports any aspect of that feature, then full support becomes mandatory. For example, Profile S specifies compliance requirements for pan-tilt-zoom, which are conditional on whether the camera supports any aspect of pan-tilt-zoom. If it does, the camera must support all of Profile S's pan-tilt-zoom features. If the camera does not support pan-tilt-zoom at all, it can still be Profile S compliant.
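This “mandatory if supported at all” rule is easy to misread, so here is a toy Python sketch of the logic. The feature names are illustrative only, not taken from the actual Profile S test specification:

```python
# Toy model of ONVIF's conditional-feature rule: a device that supports
# any aspect of a feature group (here, PTZ) must support all of the
# profile's mandatory features for that group. Names are made up.
PROFILE_S_PTZ_MANDATORY = {"absolute_move", "relative_move", "continuous_move"}

def ptz_compliant(supported_ptz: set[str]) -> bool:
    """A camera with no PTZ support at all can still be Profile S
    compliant; one that supports any PTZ aspect must support every
    mandatory PTZ feature."""
    if not supported_ptz:
        return True                          # feature absent: rule doesn't apply
    return PROFILE_S_PTZ_MANDATORY <= supported_ptz  # subset test
```

So a fixed camera passes trivially, a full PTZ dome passes by supporting everything, and a camera that implements only one PTZ call fails, which is exactly the “no cherry-picking within a profile” idea.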

In future posts, I’ll write about selecting a video (and audio) security system for my home, and about integrating my system into the neighborhood watch, which is a heterogeneous collection of security systems. In particular, I’ll detail exactly how the various profiles come into equipment decisions.


Privacy in Windows 10



11/13/2016, 07:33:17

The general problem with privacy in Windows 10 is that applications get lots of privileges that permit the “theft” of personal information. My goal was to turn these off as much as possible. Here is what I’ve tried:

Go to Settings, then Privacy, and turn off all privacy options in the General tab. [Couldn’t change some app notifications. Had to uninstall one uncooperative app.]

Go to Settings → Privacy → Background Apps, toggle off each app. [Had to search for Privacy, then all ok. Turned off most.]

Go to Settings → Accounts → Sync your settings. Turn off all settings syncing. [I used the “Sync Settings” switch to turn them all off. The individual settings were grayed out.]

Turn off sharing ID/profile with third party apps. Go to Settings → Privacy → General → Let my apps use my advertising ID. (this will reset your ID). [Had to search for “Advertising”, then could turn it off.]

Go to Location, turn off “Location for this device” (via Change button). [Found under “Personalization”]

Go to Camera, turn off “Lets apps use my camera”. You can enable the camera when you need it. You can also enable the camera for specific apps. [done]

Go to “Speech, Inking and Typing”. Click on “Turn off” and “Stop getting to know me”. [Click “Get to know me” and you’ll get the option to turn it on or off. I use “off”.]

Go to “Feedback and Diagnostics” and choose “Never” for feedback and “Basic” for diagnostic and usage data. [Done after I reconsidered my earlier settings.]

In Settings, go to Windows Update → Advanced Options and “Choose how updates are delivered”, select “turn off”.

Go to “Network and Internet” → WiFi, turn off WiFi Sense. [done]

Disable Cortana: Use Notebook menu item and select Permissions. Turn off all switches. Then select Settings, click on “Change what Cortana knows about me in the cloud” and tap “Clear”

Disable your Microsoft account: Go to Settings → Accounts, “your info” tab, Choose sign in with a local account instead (and set up a local account).

Disable Telemetry (automated data collection and communications): On the web, there is a lot of advice on disabling telemetry in Windows 10. Here is one from TechLog360 (link below): Open Command Prompt (Administrator) and type:

sc delete DiagTrack [response: [SC] DeleteService SUCCESS]
sc delete dmwappushservice [response: [SC] DeleteService SUCCESS]
echo "" > C:\ProgramData\Microsoft\Diagnosis\ETLLogs\AutoLogger\AutoLogger-Diagtrack-Listener.etl
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" /v AllowTelemetry /t REG_DWORD /d 0 /f [response: The operation completed successfully]

I actually like Windows 10’s visual effects, but to turn one or more off, go to System → Advanced system settings → Advanced and uncheck whatever you don’t want.

UX versus UI



UX = User Experience, UXD = User Experience Design, UI = User Interface, and UID = User Interface Design are terms and acronyms frequently thrown around. Here are some of my thoughts about them.

First, UI and UID: A user interface for a product is something for which you can write a detailed specification: e.g., the physical size of the product and its components, user input locations, visual, audio, and tactile feedback mechanisms, recording capabilities, etc. What exactly does the device do when it is powered up? What is displayed or heard or felt? Press or click on this button or icon, and the device does X and the screen looks like Y. Will the device accept voice input? What intermediate results are recorded? All these things, and no end of others, are user or human interface things. They describe how the device works as experienced by the user. They can all be objectively tested as to whether or not they meet the specification.

User interface experts have traditionally injected subjective opinions as to how good a user interface is. Most of these experts form their opinions by using the product themselves or by discussing how easy real users find the product to be. “Easy” is an interesting word here, because it can mean easy to learn, easy to understand, easy to use, etc. For the most part, being easy is subjective. One can measure some things, such as how fast the “average user” becomes proficient in the use of a product. Of course, this raises the question of the meanings of “average user” and level of proficiency, and of how many users were surveyed to compute an average. Such measurements are usually very vague about environmental issues: for example, was the user allowed to read a printed or on-line manual (now the test is about the manual together with the device), and was the user given any verbal instruction from an experienced user (now the test is about the device and the instruction)? In general, I lump all UI expert opinion together as pure subjective opinion. Some of these opinions are better than others, of course – where “better” is usually translated into product sales. Such opinions are best lumped into User Experience.

None of this user interface stuff has much to do with how well real users like a product, whether the product meets their stated and unstated needs, how many questions users have, and what the users actually feel is easy or difficult about using the product. I lump these subjective issues into the category of User Experience issues. One can try to design for a better user experience, and such an effort is often called User Experience Design, but its actual definition is a little vague. In fact, a project manager will have difficulty allocating time and budget to user experience issues. The proponents of agile development processes actually allow for development phases where attempts are made to ask users about their experiences with early versions of a product so that UI changes can be made for the next iteration. If the developers are both good and lucky, iterations of these development phases will create a product that meets business needs (revenue, profitability, market share, etc.). Of course, the product development team has numerous members from all facets of the company: project, product, and program management, engineering, sales, marketing, finance, competitive analysis, etc. Gathering consensus and making product decisions can be challenging.

Final thoughts:

  • When the team is formed, make it clear who makes decisions, and how decisions are made.
  • Make it clear by what criteria a product is canceled. (For example, if the projected ship date slips by N months, then cancel; if the product won’t meet profitability goals, then cancel; etc.)
  • Separate UI (objective measurements) from UX (subjective opinions). The purpose of UID is to meet perceived product and user goals as defined by UXD. The UID consists of a UI specification. Test this first, then subjectively evaluate whether the UI meets the UX goals.
  • Keep the UI development team small and the product as simple as possible (but no simpler.)

Windows 10 Problems and (some) Solutions


On May 11, 2016, I started one of my note files on Windows 10 problems. Over the next year, I added some solutions. This file is now an unreadable mess, and I decided in 2017 not to make it a WordPress post. Now what do I do with it? Hopefully Microsoft has since fixed many of these problems. My plan is to go back and blog about each problem. This, and new problems, should keep me busy for years to come!

Upgrading to Windows 10


The first good news is that the upgrade from Windows 8 (which I hated with a passion) to Windows 10 went very smoothly. Granted, the Windows 8 PC is my daughter’s and had few applications. In fact, the only problem I had was with a DVD player, which was quickly solved by downloading a new version of the player.

The second good news is that a “simple” upgrade from Windows 7 to Windows 10 went well also. I had to delete Chrome and OpenOffice, and then reinstall their Windows 10 versions.

The upgrades were slow, even with a good cable modem, but it all worked. I was delighted that the upgrades restarted themselves intelligently each time when the network burped.

The Leaky Leviathan


David Pozen’s Harvard Law Review paper [1], “The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information”, explores the issues of leaks, plants, and their combinations (“pleaks”) of confidential government information. Such disclosures represent an art form for U.S. (and, I’m sure, other nations’) politicians. This paper will be required reading for any student of constitutional law or participant in government. For technologists such as readers of this blog, this paper begins the wrestling match around the Snowden leaks, and the battle over the NSA’s previously secret activities to intercept foreign and domestic Internet and telephone communications.

Pozen’s paper analyzes the reality of how government works. It is easy to make something “secret” in one form or another, but government has created informal ways to incrementally relax secrecy. We’ve all heard newscasters attribute a story to an “unnamed source”. When the story supports the government position in a measured way, it is a plant. When the government feels too much pain and the unnamed source was not controlled by the government, then it is a leak, and top executives in the government squeal like little piglets at the wrong done. In reality, Pozen writes, with tongue-in-cheek, plants need to be “nourished” by leaks. Otherwise, if all leaks were suppressed, plants would lose their believability and would be ineffective as a government tool. He points out that historically whistle-blowers and sources of leaks are rarely thrown in jail. They are, however, often shunned, losing valuable access to government executives.

The Creaky Leviathan by Sagar [2] is sometimes cited as a rebuttal, but I found it hugely supportive:

Let me be clear. Leaky Leviathan contains no hasty judgments. Pozen is admirably careful and measured in his praise of permissive enforcement; his broader objective is to show that the varied causes and consequences associated with leaks mean that the prevailing “system of information control defies simple normative assessment.” This conclusion I fully endorse. But the question remains: even if it advances our comprehension, does “middle-range theory” improve our ability to judge the value or worth of the prevailing “system of information control”? For the reasons outlined above, I think the answer must be no, since the overall consequences of permissive enforcement are obscure and hard to ascertain. As far as this “disorderly” system is concerned, the most we can do from a normative perspective, I think, is become clearer about whether particular outcomes it produces are more or less acceptable.

Whistle-blowing protection when it comes to national security issues is a dicey topic. The issue here is that huge losses of life and treasure are at risk. Counterbalancing such risk is that the national security infrastructure is huge: NSA, the Pentagon and all of the DoD, CIA, NSC, and of course all of the Executive Branch (and some of Congress) represent the government side, and the major contractors Boeing, L3, Halliburton, McDonnell Douglas, Raytheon, etc. are also in the mix. A system of secrecy often makes it difficult to generate healthy debate; moreover, these institutions are “wired” to worry about and exaggerate threats. Secrecy in these institutions makes it difficult even for insiders to gain enough information to offer opposing views. Senator McCarthy, J. Edgar Hoover, Nixon, and others required outside forces to neutralize their negative actions. However real the communist threat was, McCarthy and Hoover violated the rights of many U.S. citizens. There were, in the end, no weapons of mass destruction in Iraq, and we went to war over this fabrication. The Maginot Line in France was an utter failure. Examples abound. The whistle-blowing that has been legitimized in the private sector is not well legitimized in the government sector. Current law limits disclosures to Congress and does not cover civilian contractors (thus Snowden is not protected; Manning was somewhat, but still got 35 years, possibly due more to the choice of WikiLeaks as an outlet). The Leaky Leviathan screams out for a legal structure to fairly protect national security whistle-blowing. Benkler’s paper [3], “Whistle Blower Defense”, takes a solid crack at this.

Benkler starts with an excellent in-depth review of how we got from 9/11 to the mess that Manning and Snowden disclosed. His “Public Accountability Defense” starts with the observation that most leaks are not prosecuted, because if they were, the non-prosecuted leaks would appear to be sanctioned and would lose their credibility to shape public opinion. He focuses on “accountability leaks”, which are those that expose substantial instances of illegality or gross incompetence or error in important matters of national security. These are rare. One set occurred at the confluence of the Vietnam and Cold Wars with the anti-war and civil rights movements. The second deals with extreme post-9/11 tactics and strategies. Such leaks have played a significant role in undermining threats that the national security establishment has made to the “constitutional order of the United States” – in short, a very big deal. For many of us technologists, the issues we would consider leaking would wrack our conscience and destroy our moral compass. We would be going out of our element to deal with the failure of mechanisms inside the national security system by creating a leak. For example, the CIA’s program of torture, rendition, and secret off-shore prisons somehow leaked without an Ellsberg, a Manning, or a Snowden. (The technology that enabled Ellsberg to release the Pentagon Papers was a massive, highly automated, Xerox machine.) Benkler observes, “The greater the incongruity between what the national security system has developed and what public opinion is willing to accept, the greater the national security establishment’s need to prevent the public from becoming informed. The prosecutorial deviation from past practices is best explained as an expression of the mounting urgency felt inside the national security system to prevent public exposure. The defense I propose is intended to reverse that prosecutorial deviation.” Needed is a defense, or at least a sentencing mitigation platform, that requires only a belief that the disclosure would expose substantial “violation of law or systemic error, incompetence, or malfeasance.” This defense is based on the leaker serving a public good. It is not individual-rights based. It is belief based, and does not depend on later proven illegality of what was disclosed.

This is not a naive proposal, and many subtleties are discussed, for example, to whom the leak is made, how it is made, when it is made, what is redacted, how public the leak mechanism is, etc.

Benkler reviews several historical leaks, going back to 1942, when Morton Seligman leaked decoded Navy messages to a reporter. Had the story been published, the fact that Japanese codes had been broken would have been disclosed, harming the war effort. Since no government wrongdoing was disclosed, the accountability defense would not apply. The long historical review ends with a discussion of the Manning and Snowden cases. Manning’s 35-year sentence is obscenely excessive even though the criteria for an accountability defense are mixed. One would hope that, in the presence of an accountability defense, at least a more reasonable sentence would have been handed down. A detailed analysis of the Snowden case is given; with the single exception of the leaks on NSA’s Tailored Access Operations (TAO), which targets specific computers, the defense applies. One interesting issue is that the legal defense should be structured so that the prosecution cannot “cherry pick” the weakest disclosure and prosecute that, ignoring the public value of the other disclosures.

The institution through which the leaks are made should also be protected by the defense. In some sense, “free press” does this, but this should be clarified in national defense cases.

Finally, “punishment by process” is discussed. The government can ruin someone in many ways. Huge legal expenses, long drawn-out trials, loss of access and jobs, etc. While protection from punishment by process is desirable, how to do this needs to be addressed. I would think that technologists fear this the most.

I strongly recommend these three thought-provoking articles.

[1] David Pozen, “The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information”, Harvard Law Review, Vol. 127, No. 2, December 2013.

[2] Rahul Sagar, “Creaky Leviathan: A Comment on David Pozen’s Leaky Leviathan”, Harvard Law Review Forum, Vol. 127, No. 2, December 2013. [A mostly favorable review of [1].]

[3] Yochai Benkler, “A Public Accountability Defense for National Security Leakers and Whistleblowers”, Harvard Law & Policy Review, Vol. 8, No. 2, July 2014. [A well-reasoned and historically justified proposal for the legal structure of a whistle-blower defense that is of particular interest to technologists.]

Removing Crapware from Windows


Every so often my PC starts getting slow. In the task manager there are dozens of processes that I don’t recognize. It’s a real pain to clean these out. But, …, I guess this is just basic maintenance that needs to be done. Here are my notes for today. I doubt this makes good reading, unless you land here via a search engine and want to see how I got rid of something.

The first lesson here is that removing crap is best done from the Administrator account, and not just from an ID with administrator privileges. Some utilities (sc, for example) test for the user ID and not just the privileges. If you use Windows Vista, 7, or 8, this account is “hidden”. Sigh. If you’ve ever wondered what the option “run as Administrator” is for, now you need it.

On the site, I found this helpful way to remove crap installed as a service. In this case, I wanted to remove BrsHelper:

Open a command prompt by right-clicking its icon and selecting “run as Administrator”. The following lines respectively stop the service, disable its auto-start, and delete it entirely. For example,

sc stop "BrsHelper"

sc config "BrsHelper" start= disabled

sc delete "BrsHelper"

I note on the web that others get “Access Denied” with sc even when running it as Administrator. I didn’t have that problem, but beware. This seems like a nice utility. It does have a side effect of staying in memory after using it. I had to kill its process tree from the task manager when I was done with it.
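That stop/disable/delete sequence is easy to script when there are several services to purge. A hedged Python sketch (the service name is just my example; it must run from an elevated prompt, and note sc's quirky spacing, where "start=" is the option and "disabled" is a separate value):

```python
import subprocess

def removal_commands(service: str) -> list[list[str]]:
    """The three sc invocations that stop, disable, and delete a service.
    sc parses 'start=' and 'disabled' as two separate arguments."""
    return [
        ["sc", "stop", service],
        ["sc", "config", service, "start=", "disabled"],
        ["sc", "delete", service],
    ]

def remove_service(service: str) -> None:
    """Windows-only: run from an elevated (Administrator) prompt."""
    for cmd in removal_commands(service):
        # 'sc stop' fails harmlessly if the service is already stopped,
        # so don't raise on nonzero exit codes.
        subprocess.run(cmd, check=False)
```

With this, `remove_service("BrsHelper")` replays the three commands above, and a loop over a list of service names cleans out several pieces of crapware in one go.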

The Administrator account isn’t just hidden, it isn’t enabled at all. To enable it, run the command prompt as Administrator as above, then type:

net user administrator /active:yes

Now the Administrator account is active, and you’ll see it when you want to log in or just change user accounts. BEWARE, initially it has no password. Be sure to set a good one if you want to leave it active. To disable it, repeat the above command with “no” instead of “yes”.

There are other ways to do this. Vishal Gupta’s site offers three other ways here.

I was trying to remove the crapware YTdownloader, and ran into the above Administrator problem. There is an interesting utility autoruns.exe which lists all of the programs that are set to auto run. You must run this program as Administrator, but you can tune the autoruns without messing directly with the registry. You can also submit whatever you find to VirusTotal. My local McAfee claims there is a trojan inside YTdownloader.exe. There are other reports that it is malware. My early attempts to remove it got trapped by McAfee which claimed that the program was moved to a quarantine area. But going to McAfee’s interface for its quarantined files showed no sign of YTdownloader. I could find it using the file explorer, and there was a directory of the same name, which I could delete but only as Administrator. This didn’t get rid of a companion program BrsHelper, which I killed as above.

Incidentally, YTdownloader is sometimes called YouTube downloader. Beware of being tricked into installing YTdownloader by trying to download videos! I don’t understand the relationship here.

I also got rid of a couple of Dell programs with bad reputations: dkab1err.exe (the character after the “b” is the digit one) and DKADGmon.exe. They must have gotten installed when I used a Dell printer at one of my consulting clients’ sites. With Administrator active, I had no trouble deleting them. I did have to deal with an extra prompt to continue, however. Just click it and move on.

The program biomonitor.exe was always running. The utility autoruns.exe didn’t list it. Apparently it is part of HP’s SimplePass fingerprinting tool. To delete it, kill the process tree for biomonitor from the task manager, and then uninstall HP SimplePass from the control panel.

I came across a program, WindowexeAllkiller.exe. While it looked interesting, it required the .NET framework, so I didn’t try it. CNET warns that while it is safe, an inexperienced user can get into trouble. The author recommends checkpointing Windows before using it. The apparent goodness of this tool is that you can eliminate several bad programs at once. I suppose this is why it is such a dangerous tool. Some feedback on this tool would be welcome.

As I was thinking I was done, I noticed an unexpected tab in Chrome for (Note the hyphen.) I don’t know how it got there. As I was on a roll looking for strangeness, I quickly found that this program was a search engine of sorts that was designed to track you and steal your personal information. The only damage it did to me was to install a shortcut to its site on my task bar. Of course I deleted the task bar item and the tab in Chrome, and then I did all the due diligence to get rid of potential infection elsewhere. I searched the registry, checked for Chrome add-ons and for a hijacked home page, checked the Chrome history and was very surprised to find nothing, checked the scheduled tasks, searched the file system, and looked for ads by it. I couldn’t find anything else. Malwarebytes was reputed to find and remove it, but a complete scan found nothing. Maybe I was lucky that I didn’t try out this bogus search engine!

I noticed on the web that was also similar to (Gads, what language is that?) as well as “Conduit Search”. I also looked for ohtgnoenriga and on my system, and fortunately found nothing.

Finally, I deactivated my Administrator account as above.

Comments on NIST Interagency Report 7977, Draft 2




Re: NIST Interagency Report NISTIR 7977 “NIST Cryptographic Standards and Guidelines Development Process” January 2015 Draft.

Date: March 24, 2015

The January 2015 Draft is a big improvement. In addition, the Summary of Public Comments on NISTIR 7977 (Feb 2014 (First) Draft) was quite helpful to compare how the current draft addresses comments on the first draft. The authors of both new documents are to be commended for their work.

General review comments:

A. Review comments on the first draft of NISTIR 7977 reflect a distrust of the NSA, and by extension of NIST processes. Suppose China sends NIST an email that says, “Hey, we’ve been studying elliptic curve E over field Z/p with base point Q. It has some neat properties, and China recommends that NIST adopt E for its elliptic curve encryption.” Should NIST consider China’s suggestion? A good test of the processes in NISTIR 7977 would be to answer “yes”, even though we suspect that China knows a backdoor to encryption that uses E. Now, receiving such a suggestion from the NSA should be little different. Even though NIST is required to consult with the NSA, most independent security consultants, post-Snowden, would not trust a curve suggested by the NSA any more than one suggested by China. I certainly would not. Therefore, we look for processes in NISTIR 7977 that politely look with great suspicion on suggestions for tools and algorithms from the NSA. NIST should get expertise and knowledge from the NSA, but should not blindly accept its tools and algorithms. NIST processes for analysis and acceptance of any suggestion, be it from the NSA, from China, or from any other source, should be equally stringent. In particular, cryptographic technology should not be standardized without significant consensus from a wide swath of independent cryptographers. The current draft of NISTIR 7977 lacks this emphasis on public analysis and consensus for acceptance.
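To make the point concrete: anyone can verify that published curve parameters are internally consistent, e.g., that the base point really lies on the curve, yet that verification says nothing about whether the parameters were chosen with a hidden weakness in mind. A toy Python sketch with made-up numbers (nothing like a real standardized curve):

```python
def on_curve(x: int, y: int, a: int, b: int, p: int) -> bool:
    """Check that (x, y) satisfies the short Weierstrass equation
    y^2 = x^3 + a*x + b (mod p)."""
    return (y * y - (x ** 3 + a * x + b)) % p == 0

# Toy parameters, invented purely for illustration.
p, a, b = 97, 2, 3
Q = (0, 10)  # candidate base point: 10^2 = 100 = 3 (mod 97) = 0^3 + 2*0 + 3
```

Passing this check is necessary but nowhere near sufficient for trust: a malicious proposer can pick parameters that pass every public consistency test while still knowing a structural weakness, which is exactly why provenance-neutral, stringent review matters more than the source of the suggestion.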

B. Of course, NIST could standardize, say, an encryption method, even one from a private and independent source, and the NSA, China’s PLA Unit 61398, the UK’s GCHQ, or another nation-state cryptography group could know how to crack this encryption method, remaining silent during and after the standardization process. One would hope that NIST, through its own efforts and its financial support of academic and other cryptography research, would facilitate the discovery of the weakness and would quickly retire the standard. Such post-standardization life cycle efforts by NIST also need to be part of NISTIR 7977 [line 727].

C. Now if NIST were to publish a proof that a tool or algorithm received from China, the NSA, or another source in fact had no back doors and was totally secure, then a consensus on standardization might well be easily achieved. After believing such a proof, I might recommend the proven technology to my clients, but I would probably wait until a huge number of non-government cryptographers also believed the proof and were also recommending the technology. NISTIR 7977 says roughly that NIST will “pursue” finding and using such proofs [line 55]. I’d be happy if NIST worked with the NSA and other agencies on such proofs, and I recommend such efforts be part of NISTIR 7977.

D. NIST publishes security papers at a prodigious rate: so fast that reviews are deemed inadequate. In light of post-Snowden caution around NIST processes, people naturally ask whether these poorly reviewed papers can be trusted. It isn’t going to help if NIST says, “It’s ok, the NSA has reviewed it…” Look, not only does the current draft of NISTIR 7977 fail to convince that future NIST papers will receive good independent reviews, there is no indication that past NIST papers will retroactively receive good reviews. This is a very sad state of affairs, but it is fixable.

Some more specific review comments:

  1. Clarity of the NIST Mission: To develop strong cryptographic standards and guidelines for meeting U.S. federal agency non-national security and commerce needs. This mission should be parsed as two parts: (1) U.S. federal agency non-national security needs and (2) commerce needs. My point is that the needs of commerce should be treated by NIST as equal to the needs of any federal agency. [line 202, Balance]. For example, federal agencies may well be happy with NSA algorithms, but general commerce may not be.
  2. I do not agree with the NIST Response (page 7 of Summary) to Technical Merit comments that NIST should give priority to non-national security federal information systems. NIST should always make commerce needs equally important. Such a priority statement doesn’t seem to be in NISTIR 7977 explicitly, but there are several statements about NIST being legally required to give such a preference when so ordered by a government entity.
  3. NIST’s statement that it will “never knowingly misrepresent or conceal security properties” [Summary page 3; line 215, Integrity] reminds me of Barry Bonds’s statement that he “never knowingly took growth steroids”, when his hat/head size at the end of his career was three sizes larger than when he was a rookie. I would prefer a more proactive statement such as “NIST will make every reasonable effort to ensure that military, intelligence, and law enforcement agencies do not, by their suggestions, review comments, or contributions, compromise any security tool or algorithm recommended by NIST.” For NIST standards to regain the confidence of the security and general commerce communities, NIST processes should convincingly ensure, by NIST public actions, that its tools and algorithms do not compromise the privacy or the integrity of any commercial or private message being protected by NIST standards.
  4. FISMA requires that NIST consult with certain federal agencies, including the NSA, to avoid duplication of effort and to maintain synergy of federal information protection efforts, but NIST can never in the future blindly accept a recommendation from any public or private agency. What is important is that NIST actively regain and maintain its process integrity via the new NISTIR 7977. The current draft falls short.
  5. NIST should consider resolving the conundrum of needing frequent NIST output and needing adequate public review of that output by creating additional paid outside review boards and conformance-testing bodies. Such a review board should be established to review the entire Cryptographic Technology Group annually. [lines 316 and 326]
  6. Minutes of the monthly NIST/NSA meetings should be published. [line 377]
  7. Independent review boards should have the power to reject a proposed standard, say if NIST could not convince the board that the NSA or another agency has not compromised the standard. [page 4; lines 47, 403, and 464]
  8. The NISTIR 7977 Development Process itself should undergo regular review and updates at, say, an annual frequency.

Requested Comments [line 125]:


Do the expanded and revised principles state appropriate drivers and conditions for NIST’s efforts related to cryptographic standards and guidelines?

Yes, but if the word “appropriate” were replaced by “adequate” then No. Neither the integrity of NIST processes in the face of NSA influence, nor the issue of adequate review, is satisfactorily answered. [A, C, D]

Do the revised processes for engaging the cryptographic community provide the necessary inclusivity, transparency and balance to develop strong, trustworthy standards? Are they worded clearly and appropriately? Are there other processes that NIST should consider?

No. “Trustworthy” standards need public confidence that the NSA or another agency has not added, and does not know of, backdoors or other weaknesses in its contributions. Wording isn’t the issue. Different new processes are necessary to separate NIST from NSA and other related agency influence. After-standardization research efforts should be funded as part of all life cycles. [B]

Do these processes include appropriate mechanisms to ensure that proposed standards and guidelines are reviewed thoroughly and that the views of interested parties are provided to and considered by NIST? Are there other mechanisms NIST should consider?

No. Cf. A, C, 2, 3, 4, and 7 above. Regarding 7, if NIST won’t vest veto power in independent reviewers, such experts will tend not to participate. Lack of review resources also seems to be a problem. Cf. 5 above.

Are there other channels or mechanisms that NIST should consider in order to communicate most effectively with its stakeholders?

Yes. More paid outside reviewers, including an annual review of the Cryptographic Technology Group. Cf. D and 5 above.

Respectfully submitted,

Gayn B. Winters, Ph.D.

Technology Consultant



I don’t own a Lenovo PC, thus I shouldn’t get upset over Lenovo pre-installing adware from Superfish. This adware used an SSL-interception library from Komodia (an Israeli startup), which installed a bogus root certificate on the Lenovo PC and thereby enabled a man-in-the-middle attack on all web sites. A web site’s SSL certificate would show Superfish as its issuer. Now, I hate preinstalled software and routinely delete it when I (or a friend) get a new PC. The annoying thing here is that just removing Superfish doesn’t remove the bad certificate, so the MITM exploit can continue. Lenovo has apologized and has a removal tool (from McAfee, but other vendors have one as well). Lenovo has been hit with a class action suit, which I hope will extend to Superfish and to Komodia.
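The telltale symptom is that every HTTPS site presents a certificate issued by the interception proxy instead of a real CA. Here is a minimal sketch of how one might check for that in Python; the function names and the suspect-issuer list are my own illustration, not any vendor's removal tool, and the real Superfish cleanup also requires deleting the rogue root certificate from the system store.

```python
import socket
import ssl

# Issuer common names associated with known SSL-interception products
# (illustrative list; "Superfish, Inc." is the one from the Lenovo incident).
SUSPECT_ISSUERS = {"Superfish, Inc."}

def issuer_common_name(cert):
    """Extract the issuer CN from a certificate dict shaped like the
    output of ssl.SSLSocket.getpeercert()."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def looks_intercepted(cert):
    """True if the certificate's issuer matches a known MITM product."""
    return issuer_common_name(cert) in SUSPECT_ISSUERS

def fetch_cert(host, port=443):
    """Fetch and parse the server certificate for host (needs network)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# Offline example: a certificate dict like one an infected PC would see.
fake_cert = {
    "issuer": (
        (("countryName", "US"),),
        (("commonName", "Superfish, Inc."),),
    ),
}
print(looks_intercepted(fake_cert))  # prints True
```

On a clean machine, `fetch_cert("example.com")` would show a well-known CA as the issuer; on an infected one, every site would report the interceptor.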

I don’t particularly like government intervention, but if it were clear that Lenovo and its suppliers were guilty of some federal crime and subject to huge fines, it might dissuade PC makers from preinstalling such crap onto their PCs. I know that PC profit margins are thin, and preinstalled software adds revenue, but really! (There is also the fallacy that preinstalled software enhances the PC by making it usable and attractive right out of the box. If you think this is attractive, think about the Superfish infection!)

Some people like to reinstall Windows – assuming they have an installation disk from Microsoft. (This is less hassle for open source operating systems, because such a disk image can be downloaded.) That way, none of the crap installed by the PC manufacturer gets installed at all.

Keyloggers as Trojan Horses


Read this Register article for starters.

A $10 giveaway at, say, conferences, could be an effective Trojan Horse.  Samy Kamkar (@samykamkar) has released schematics for a keylogger built on the open source Arduino hobby board.  I don’t quite see how to get the cost down to the claimed $10 without a lot of volume, since the Arduino board retails for $25.  Perhaps the NSA has some better secret manufacturing plans with more nefarious delivery ideas.

Samy, of course, hopes that Microsoft and other keyboard manufacturers will address such a security hole.