Posts Tagged ‘security’

Cybercrime

2014/10/07

Cybercrime is usually measured in financial loss due to a computer-based attack on a company’s computer system. Reputational loss can be huge, but is often only measured in lost sales. Guy Carpenter, a well respected reinsurance broker, recently cited the McAfee/CSIS 2013 study (see below), which put the annual global cybercrime loss at $445 Billion. I’ve seen estimates for the annual global revenue of the computer security industry at $20-50 Billion. The point I want to make here is that these two estimates, which are probably only accurate to a factor of 2 or 3, do not compute for me. If your expected loss is 20 times what you are spending on security, are you spending enough? This, plus the fact that cybercrime losses are increasing, argues that the computer security industry is going to grow like crazy. On the other hand, the likelihood that a company under-spending on computer security gets clobbered by a cybercrime is high, and such a company obviously needs a lot of insurance, which I guess is Guy Carpenter’s message. The cybercrime insurance industry should also skyrocket. (Some liability and theft policies might exclude cybercrime or add claim limits and force customers to insure against cybercrime separately.)

I should point out that the FBI’s Internet Crime Complaint Center received complaints in 2013 with an adjusted dollar loss of $781,841,611. This US number is hugely smaller than the global number discussed above. See the FBI report listed below. The two reports have vastly different methodologies and thus produce very different numbers.

There is much to learn from the Target breach in the fall of 2013, and this will be the subject of a subsequent post. Points worthy of mention here are that Target’s insurance was woefully inadequate, and while it spent a huge amount on FireEye computer security products, it didn’t have the security infrastructure to use those products effectively. In fact, Target didn’t even have a Chief Security Officer. Target and its banks’ total loss to date is in the hundreds of millions of dollars.

By searching the web, one can find many annual reports on cybercrime. Here are a few that I’ve enjoyed:

HP Cyber Risk Report 2013

Symantec Internet Threat Report 2013

Ponemon 2013 Cost of Cyber Crime Study

Cisco 2013 Annual Security Report

Websense 2014 Security Predictions

McAfee Labs 2014 Threat Predictions

FBI 2013 Internet Crime Report

McAfee/CSIS 2013 Estimating the Global Cost of Cybercrime

My final thought in this note on cybercrime is that the perpetrators of cybercrime are becoming very sophisticated, and the attacks are subtle, take place over a long period of time, and use evasive techniques to avoid detection. There are markets where a former “kiddie hacker” can buy nefarious software, and this happens, but much more sophisticated criminals attacked Target. Experts may say that the Target breach wasn’t that sophisticated, but it almost certainly wasn’t the work of a kiddie hacker. In fact, with an estimated 50 million credit cards stolen (actually the contents of the magnetic stripe, with which a card can be counterfeited), each valued at over $100 per card, one sees that cybercrime is “big business”. Even this crime pales in comparison with the theft of intellectual property by nation-states, but again that’s another post.

Hypervisors

2011/07/07

One of the main features of Cloud Computing is dynamic resource allocation.  At the single-machine level, one first creates virtual (abbreviated “v.”) machines on a physical machine.  The v. machines on the physical machines are then networked together and controlled by a master controller that does the resource allocation within the cloud.

In order to understand this, it is first necessary to understand what happens on a single machine.  VMware’s implementation is the de facto standard, and the following notes are taken primarily from wandering through their excellent website www.vmware.com to understand their hypervisors, ESX and the newer and far better incarnation ESXi.  These notes read more like general requirements for a generic hypervisor than a description of the VMware hypervisors’ functionality; hence, they don’t correspond precisely to VMware’s specific capabilities.

A hypervisor installs directly on top of the physical server and creates multiple virtual machines that can run simultaneously, sharing the physical resources of the underlying server.  For efficiency, security, and reliability, a hypervisor needs to be quite small and simple.  VMware’s ESXi is less than 100MB.  On the other hand, it needs to run scripts for automated installations and maintenance, and it needs a remote management interface, all of which adds a little to the complexity.  These notes are broken down into:

  1. Physical machine support
  2. Virtual machines and other virtual support
  3. Resource management
  4. Operating system support
  5. Security
  6. Hardware realities

[Note:  VMware ESXi is available as a free download for deployment as a single-server virtualization solution.  One can use the freely available VMware vSphere™ Client to command VMware ESXi to create and manage its virtual machines.]

1. Physical machine support (ESXi limits are in parentheses):

  • 64-bit support on rack, tower, and blade servers from Dell, Fujitsu Siemens, HP, IBM, NEC, Sun Microsystems, and Unisys.
  • Physical cores (64)
  • Physical memory (1TB)
  • Transactions/sec (8,900)
  • I/O ops/sec (200,000)
  • iSCSI, 10Gb Ethernet, InfiniBand, Fibre Channel, and converged network adapters
  • SAN multipathing
  • Support storage systems from all major vendors.
  • Internal SATA drives, Direct Attached Storage (DAS), Network Attached Storage (NAS) and both fibre channel SAN and iSCSI SAN.
  • Remote (security) management with granular visibility into v. machine hardware resources, e.g., memory, v. cores, keyboards, disk, and I/O.  This enables protection against viruses, Trojans, and key-loggers.
  • Energy efficiency with dynamic voltage and frequency scaling and support for Intel SpeedStep and AMD PowerNow!
  • Support for next-generation virtualization hardware assist technologies such as AMD’s Rapid Virtualization Indexing® or Intel’s Extended Page Tables.
  • Support large memory pages to improve efficiency of memory access for guest operating systems.
  • Support performance offload technologies, e.g., TCP Segmentation Offloading (TSO), VLAN, checksum offloading, and jumbo frames to reduce the CPU overhead associated with processing network I/O.
  • Support virtualization optimized I/O performance features such as NetQueue, which significantly improves performance in 10 Gigabit Ethernet virtualized environments.

2. Virtual Support

  • v. machines (256) per physical machine
  • v. RAM (255 GB) per v. machine
  • v. SMP (each v. machine can use up to 8 cores simultaneously)
  • Select v. machine direct access to physical IO and SAN LUNs.
  • v. disks (limit ???)
  • v. file systems allow multiple v. machines to access a single v. disk file simultaneously.
  • Remote boot a v. machine from a v. disk on a SAN.
  • Multiple v. NICs per v. machine each with its own IP and MAC address.
  • v. InfiniBand channels between applications running on v. machines.  Of course, these v. channels do not have to be restricted to the same physical machine.
  • v. switches to support v. networks and v. LANs among v. machines.
  • Support Linux and Microsoft v. clusters of v. machines.

3. Resource Management

Dynamically manage resource allocations while v. machines are running, subject to minimum, maximum, and proportional resource shares for physical CPU, memory, disk, and network bandwidth.
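
To make the proportional-share idea concrete, here is a minimal Python sketch of share-based allocation with min/max clamping.  It illustrates the general model, not VMware’s actual scheduler, and all names and numbers in the example are hypothetical.

def allocate(capacity, vms):
    """Split `capacity` units of a resource (e.g., MHz of CPU) among VMs.

    Each VM is a dict with 'shares', 'min', and 'max'.  The allocation is
    proportional to shares, then clamped to [min, max]; capacity freed (or
    consumed) by clamping is redistributed among the unclamped VMs.
    """
    alloc, pending, remaining = {}, dict(vms), capacity
    while pending:
        total_shares = sum(v['shares'] for v in pending.values())
        fair = {n: remaining * v['shares'] / total_shares for n, v in pending.items()}
        clamped = {n: v for n, v in pending.items()
                   if fair[n] < v['min'] or fair[n] > v['max']}
        if not clamped:                       # everyone fits: finish proportionally
            alloc.update(fair)
            break
        for n, v in clamped.items():          # pin the violators to their bounds
            alloc[n] = min(max(fair[n], v['min']), v['max'])
            remaining -= alloc[n]
            del pending[n]
    return alloc

# Example: 10,000 MHz of CPU shared by three v. machines (hypothetical numbers).
print(allocate(10000, {
    'web':  {'shares': 2000, 'min': 1000, 'max': 8000},
    'db':   {'shares': 1000, 'min': 2500, 'max': 8000},
    'test': {'shares': 1000, 'min': 0,    'max': 1500},
}))   # test is pinned at its 1500 max; web and db split the rest by shares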

Intelligent process scheduling and load balancing across all available physical CPUs

Allow physical page sharing by v. machines so that a physical page is not duplicated across the system.
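
Transparent page sharing is essentially content-based deduplication.  The toy Python sketch below (not VMware’s implementation) hashes each v. machine page and maps identical contents to a single physical copy; a real hypervisor would verify the full page contents on a hash match and mark shared pages copy-on-write.

import hashlib

def share_pages(vm_pages):
    """Toy content-based page sharing.

    `vm_pages` maps a v. machine name to a list of page contents (bytes).
    Returns (physical_pages, page_table): identical contents are stored once,
    and each (vm, page_index) entry points at the shared physical copy.
    """
    physical_pages = {}   # digest -> page contents (one copy per distinct page)
    page_table = {}       # (vm, page_index) -> digest of the backing page
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            digest = hashlib.sha256(page).hexdigest()
            physical_pages.setdefault(digest, page)
            page_table[(vm, i)] = digest
    return physical_pages, page_table

# Two v. machines booted from the same guest OS share most of their pages.
vms = {'vm1': [b'kernel page', b'libc page', b'vm1 private data'],
       'vm2': [b'kernel page', b'libc page', b'vm2 private data']}
phys, table = share_pages(vms)
print(len(table), 'virtual pages backed by', len(phys), 'physical pages')
# prints: 6 virtual pages backed by 4 physical pages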

Shift RAM dynamically from idle v. machines to active ones, forcing the idle ones to use their own paging areas and to release memory for the active ones.

Allocate physical network bandwidth to network traffic between v. machines to meet peak and average bandwidth and burst size constraints.
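
Average bandwidth, peak bandwidth, and burst size map naturally onto a token-bucket model.  The sketch below is a generic single-bucket shaper, not the hypervisor’s code; the rates and packet sizes are made up, and a peak-rate limit would add a second, smaller bucket in series.

class TokenBucket:
    """Tokens (bytes) accrue at `avg_bps` up to `burst_bytes`; a packet may be
    transmitted only if the bucket holds enough tokens to cover it."""

    def __init__(self, avg_bps, burst_bytes):
        self.rate, self.capacity = avg_bps, burst_bytes
        self.tokens, self.last = burst_bytes, 0.0

    def allow(self, packet_bytes, now):
        # Refill for the time elapsed since the last decision, then spend.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True           # transmit now
        return False              # queue or drop, depending on policy

# Hypothetical shaping: 10 MB/s average with a 100 KB burst allowance.
bucket = TokenBucket(avg_bps=10_000_000, burst_bytes=100_000)
print([bucket.allow(60_000, t) for t in (0.000, 0.001, 0.002, 0.010)])
# -> [True, False, True, True]: the burst is spent, then the refill rate gates traffic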

Provide “priority” network access to “critical” v. machines.

Support failover for v. NICs, for v. machines, and for v. network connections to enhance availability.

4. Operating System Support

A hypervisor should support a wide variety of guest operating systems on its v. machines.  It is reasonable to require an OS modification (to get a v. OS) to run on a v. machine; however, applications that don’t directly access the hardware should run on a v. OS without any modification.  The v. operating systems should be allowed to call the hypervisor with a special call to improve performance and efficiency or to avoid difficult-to-virtualize aspects of a physical machine.  Such calls are usually hypervisor specific and are currently called “paravirtualization.”

VMware has a “standard” paravirtualization interface called its “Virtual Machine Interface” that can be supported by an OS to allow a single binary version of the OS to run either on native hardware or on a VMware hypervisor.

Most networking applications operate at the application layer and hence will run without change.  For Cloud Computing, InfiniBand read and write commands that send and receive between two applications’ buffer pairs are an example.  More on this in a subsequent post.

5. Security

There should be v. hardware support to check digitally signed v. kernel modules upon load.

There should also be v. memory integrity techniques at load-time coupled with v. hardware capabilities to protect the v. OS from common buffer-overflow attacks used to exploit running code.

The hypervisor should secure iSCSI devices from unwanted intrusion by requiring that either the host or the iSCSI initiator be authenticated by the iSCSI device or the target whenever the host attempts to access data on the target LUN.
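
iSCSI authentication is commonly done with CHAP, a challenge-response scheme in which the response is an MD5 digest over the message identifier, the shared secret, and the challenge (RFC 1994).  The Python sketch below only illustrates the exchange; it is not the hypervisor’s code, and the secret is invented.

import hashlib, os

def chap_response(identifier, secret, challenge):
    """CHAP response: MD5 over (identifier || shared secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b'per-initiator shared secret'          # provisioned on both sides
identifier, challenge = 1, os.urandom(16)        # target issues a fresh challenge

response = chap_response(identifier, secret, challenge)        # initiator side

# Target side: recompute and compare; grant access to the LUN only on a match.
print('access granted' if response == chap_response(identifier, secret, challenge)
      else 'access denied')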

The hypervisor should disallow promiscuous mode sniffing of network traffic, MAC address changes, and forged source MAC transmits.

Once the small hypervisor is well secured, running all the user applications on v. machines should provide GREATER security than if they were running on real servers!

6. Hardware Realities

Of course virtual machines go back at least as far as the VM operating system on the IBM 360, and Wikipedia has a nice history of them and also a nice discussion of paravirtualization and its predecessors.  The early x86 architecture was at best challenging and at worst impossible to virtualize, and there were multiple attempts and academic papers on this topic.  Intel made things worse with the ugly 80286 processor, but tried somewhat to fix things with the 80386.  When I was at Digital Equipment Corporation, we had a meeting at Intel with the chief 80386 designer, who was cognizant of VM efforts but only mildly sympathetic.  In any case, around 2005-2006 both Intel and AMD made some serious efforts to support virtual machines.  Neither Intel nor AMD implements this support in all of their processor products, presumably preferring to charge more for those processors that have the support.  Caveat emptor.
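
As a practical aside, on Linux one can check whether a processor advertises these extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo; a minimal sketch follows (note that firmware settings can still disable the feature even when the flag is present).

def virtualization_flags(path='/proc/cpuinfo'):
    """Return the hardware virtualization flags the CPU advertises on Linux:
    'vmx' means Intel VT-x, 'svm' means AMD-V.  An empty set means the
    processor does not advertise hardware-assisted virtualization."""
    try:
        with open(path) as f:
            lines = f.read().splitlines()
    except OSError:
        return set()
    flags = set()
    for line in lines:
        if line.startswith('flags'):
            flags.update(line.split(':', 1)[1].split())
    return flags & {'vmx', 'svm'}

print(virtualization_flags() or 'no hardware virtualization support advertised')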

-gayn

Thoughts on Agile and Scrum

2011/06/26

The 2001 Agile Manifesto (http://agilemanifesto.org/) reads:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions – over processes and tools
Working software – over comprehensive documentation
Customer collaboration – over contract negotiation
Responding to change – over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

There are 12 Principles behind the Agile Manifesto (http://agilemanifesto.org/principles.html):

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

My particular view of software development history is that the “waterfall” method (requirements → design → implementation → test → delivery → maintenance) failed because it simply took too long. In fact, each step took too long and still was never 100% complete and correct. Thus, the reality was that the waterfall method really looked like (imperfect requirements → imperfect design → imperfect implementation → imperfect test → delivery of an imperfect product → extraordinarily expensive maintenance with less than satisfied customers). I’ve seen quotes that over 70% of waterfall projects failed before any kind of customer delivery. I’m not sure of the number, but it feels right. In addition, over the long duration of the project, of course the real requirements changed!

The industry reacted to this reality with many efforts prior to Agile: throw one (version) away, rapid prototyping, spiral development, structured programming, etc. Agile also has some competing methodologies.  All of these approaches have a philosophy.  All have requirements, designs, implementations, testing, delivery, and maintenance.  Pundits retell the tiresome joke that the nice thing about software development philosophies is that there are so many to choose from!  It is perhaps a bit cynical to say that the agile philosophy became popular because the word “agile” itself captured so well the needed reaction to the real problems that had to be addressed.

The big problems today are:

  1. Requirements frequently change. Technology, competition, markets, and internal funding changes are among the many forces that drive changing requirements. The fundamental reality is twofold: after initial requirements and designs are in place and are reviewed by management and customers, the development process must embrace such changes, and there must be very frequent (weeks, not months) deliveries of working code (and hardware if required) that are responsive to requirement changes as the development proceeds. Such deliveries must be reviewed, face-to-face (at least electronically for geographically dispersed teams), with the development team, the business team, and a significant number of customer representatives. In this open-source world, where the product depends on independent third-party software, that development needs to be represented as well. Clever companies often “volunteer” an employee to work for “free” on such external projects, and these volunteers can then participate in the reviews. These reviews will most likely change not only the requirements but also the priorities for the next deliverable. The good and the bad news is that people get a clearer understanding of the product once they see some functionality in action. They will see what they like and also what they either don’t like or feel needs improvement. Of course, requirements changes imply new tasks to update requirements and design documents as well as the current implementation. These need to be reviewed again. The hope is that early functionality and early reviews will cause all this changing work to converge. (If it doesn’t converge, the program will run out of money and will fail.)
  2. Not only management, but also customers need to be intimately involved with the development team reviews. (I disagree with the word “daily” in Principle 4, and prefer the word “regularly” instead.) In the waterfall, end-customers may or may not have been involved in the requirements phase. Often this was where marketers and product managers intervened to represent the “voice of the customer.” This was too little, too late, and too prone to misinterpretation. After that, the customer was usually not involved until just before delivery, with alpha and beta tests at the customer site. Of course alpha and beta testing was too late to significantly change the requirements, the design, or most aspects of the implementation. The delivered product was therefore often only a partial step toward meeting a typical customer’s needs. It takes a little selling (or “pre-selling”) to convince potential customers to invest the time that it takes to participate in the development of a product. The primary selling technique is a commitment that the product will directly address the customer’s needs, rather than having the customer pay a fee to customize and/or integrate the product into the customer’s systems. Note that this is not a promise to provide the product for “free.” The product should be good enough that the customer is still willing to pay for it. Other techniques, especially for start-ups, are to put key potential customers on the board, and/or to get them to invest in the company by purchasing stock. To protect their investment, they will often get involved with the development process.
  3. Processes become too burdensome. As waterfall-based programs tended to fail, some reacted by tightening up the waterfall development processes. While there was clear value to this, e.g., specification reviews, design reviews, code walk-throughs, and quality standards, the main downside was that the formalization added time, effort, and expense. It made requirements changes even more likely! The tone of the 2001 Agile Manifesto might be viewed as “down on process”, but in reality any development methodology needs process to be effective. There are a couple of fundamental problems here. The first is that large programs, say over a couple dozen developers, often need large processes to coordinate all the subprograms and their inter-dependencies. The second is that most process mavens preach customization of processes, and continuous customization is usually required. One insane waterfall type wanted the program manager to formally customize the corporation’s processes (which of course required a corporate approval process) prior to the start of the program, before the program manager even understood what process features would be needed. In reality, without process one has chaos, and even if a rare gem arises out of chaos, management and customers hate chaos. Also, process tends to balance resources and keeps the development team efficient.

    You’d think that the Agile proponents would invent “Agile processes.” Some did, but some did something much more clever. They adopted Scrum, which came out of the automotive and consumer products space. (I’m told that “scrum” refers to how a rugby team restarts the game after an infraction. I don’t follow rugby. In any case, “scrum” is not an acronym.) Scrum allows work tasks to be spun off in an ad hoc fashion into “sprints” of fixed short duration. This allows the program to respond to changing requirements and priorities. Sprints are managed by daily ~15-minute meetings where the developers report progress since the last meeting, work planned for today, and impediments. Higher-level scrums can be formed (a “scrum of scrums”) to discuss overlaps, integration issues, and global impediments. A representative of each sprint team attends this meeting. A “ScrumMaster” is responsible for resolving impediments (outside the meetings). General reviews and planning sessions are also defined. (I recommend/require that minutes of these meetings be kept. Minutes allow people who can’t attend a meeting to catch up.) The point is to lay out and manage the work needed to achieve the next working release. Various tools and techniques are used to keep track of requirements not yet implemented (“product backlog”), work needed for the next release but not scheduled as a sprint (“sprint backlog”), backlog priorities, and progress towards the next release.

    A “product owner” makes final decisions on priorities and represents the stakeholders. The product owner is responsible for gathering “stories” which describe how requirements are to be used. The story format is based on a user type wanting to do something to achieve some result. The scrum philosophy is that stories give more meaning to requirements. The stories are similar to “Use Cases” in Unified Process (UP) systems. The team’s engineering manager usually serves as ScrumMaster, and the program manager (or product manager) usually serves as the product owner.

    The main added value of Scrum is the flexibility of spinning off work tasks or sprints that weren’t part of the original project plan. In fact, a project plan is still useful to define releases and release dates, but it needs to stay at a high level so that it can be easily modified as requirements change. The degree of granularity in the project plan provides the same degree of granularity for progress reports. The priorities, schedule estimates, and work estimates can be added as well as task dependencies. This allows for multiple earned value reports at this level of granularity. The reports are valid until requirements change and the development is rescheduled. With Agile and Scrum, this should occur rather frequently and usually after an intermediate release review.

  4. Security is essential. It needs to be clearly defined as part of the requirements, and the working releases need to address security requirements very early in the program. At the very least, a security model of which users can access what data and how this access is authenticated must be clearly defined in the requirements. Good security comes in defined layers such that if an outer layer is compromised, there is still protection from the inner layers. Usually security needs to be explained to the prospective customers. There are plenty of examples in the recent literature as well as in this blog! Requirements for the misuse of all interfaces also need to be defined, especially what the system does (rather than crash) when a violation is detected; as a matter of security, all misuses must be detected. If security is part of the requirements, it should be no surprise that security needs to be an explicit part of the design and of course the implementation.
  5. High Availability is essential. Among my friends, Leslie Lamport gets credit for the definition of a “distributed system” as one which can fail for a user due to a failure in a computer that the user never heard of. Of course, everyone wants to make their product as bug-free as possible, but life isn’t always kind. Power outages, network failures, server failures, etc. often conspire to cause users much grief. Users want their systems not to lose or corrupt their data, even in the presence of such failures. They also want such failures to be nearly invisible to them. By “nearly” I mean that the failure is noticed for at most a second or so. Data transfers should restart, and the amount of re-typing the user must do is trivial. The average time it takes to recover from such a failure is called the Mean Time To Repair (MTTR). The average length of time between such failures is called the Mean Time To Failure (MTTF). Availability is the ratio MTTF/(MTTF+MTTR). It is the percentage of time that the system is fully functional. (A small numerical sketch of this arithmetic appears after this list.) Good availability is something like 0.999 (“three nines”). High availability is usually something better than 0.9999 (four nines), and Very High Availability is better than 0.99999 (five nines). Air traffic control systems want an availability better than 0.999999 (six nines). If data is corrupted, then the repair time must include the time to fix the corrupted data. Usually, a system won’t meet any reasonable availability requirements if data can get corrupted. [Cf. the other post on high availability in this blog.]

    Much like security, availability needs to be put into the requirements and have that requirement flow to the design and implementation. High availability is difficult to achieve, and Agile implementation cycles need to estimate availability at every iteration. Vendors need to be involved, including network and computing server vendors, and even power companies. Usually availability commitments need to be obtained from these vendors, covering inventory of spares, 24-hour service personnel, and various failover features of their systems.
  6. Verification/Testing is as important as development. (“Verification” encompasses testing; it usually also includes requirements reviews, design reviews, code reviews, and test reviews. It definitely includes a review of all test results.) At the very least, spend as much money and expertise on testing as on development. Some might say more of each, and some estimate that testing should be as much as 75% of the development budget. (How much verification/testing does an air traffic control system require?) Include hiring a hacker to try to break into the system at every stage of development. Agile emphasizes the quality of the engineers on the team (a stronger form of Principle 5). Don’t skimp on the quality of the test engineers. In fact, mixing up assignments on test sprints and development sprints is a good idea (Agile Principle 11). Make the design of testing part of the design of the system, and make sure tests can be automated. Every Agile working delivery must include a working test suite to verify it! The worst part of the waterfall method is that testing is explicitly the last step before delivery. Keep in mind that delivery must include both “white box” testing, where the design of the software is known to the test developers, and “black box” testing, where the tests try to break the system with only knowledge of what it does and without knowledge of how it works. Black box testing usually includes “stress testing”, which consists of using a wide variety of both legitimate and illegitimate input data. Organize the legitimate input data so that stories are tested. Be sure that enough (story) testing is done to get the required test coverage. Testing a more complex story is essentially a simulation. Some products might require hundreds of hours of simulation, and this makes it clear that the automated test suites need to consider how the output of the suite of tests can be analyzed and how reports for management and customers can be automatically generated.

    Bugs will be found in the product through normal usage that are not found by the suite of tests. Fix these bugs only after fixing or augmenting the test suites so that the tests can find the bug (see the regression-test sketch after this list). This is the most fundamental aspect of good regression testing. You need to be sure you can detect such a bug if it re-occurs after some other software change.
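
The availability arithmetic in item 5 is easy to make concrete. A small Python sketch, with made-up failure and repair times:

def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def downtime_hours_per_year(avail):
    return (1 - avail) * 24 * 365

# Hypothetical system: one failure every 30 days, each taking 5 minutes to repair.
a = availability(mttf_hours=30 * 24, mttr_hours=5 / 60)
print(f'availability = {a:.6f}')                                  # ~0.999884, "three nines"
print(f'downtime     = {downtime_hours_per_year(a):.1f} hours/year')   # ~1.0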
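
And on the regression-testing point in item 6: write (or extend) a failing test that reproduces the bug before fixing it, so the fix is verified and the bug cannot silently return. A hypothetical sketch using Python’s built-in unittest, with a made-up parse_amount helper standing in for the buggy production code:

import unittest

def parse_amount(text):
    """Hypothetical production helper.  The original version did not strip
    thousands separators, so parse_amount('$1,234.50') raised ValueError."""
    return float(text.replace('$', '').replace(',', ''))   # stripping ',' is the fix

class TestParseAmount(unittest.TestCase):
    def test_thousands_separator(self):
        # Added BEFORE the fix to reproduce the field bug report; it failed
        # against the old code and now guards against the bug reappearing.
        self.assertEqual(parse_amount('$1,234.50'), 1234.50)

    def test_plain_amount(self):
        self.assertEqual(parse_amount('$99.95'), 99.95)

if __name__ == '__main__':
    unittest.main()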

Agile and Scrum, despite the detractors who hate the words “sprint” and “ScrumMaster”, can be used to build modern applications. Changing requirements will force changes to requirement and design documents as well as changes to existing implementation code. This allows these documents and code to be initially incomplete, and the development process should be designed to allow reviews that include business management and customers to complete and perfect them. Scrum provides flexible, customizable processes that allow for such iterative refinement work. Deeper requirements such as security and availability need to be included at the beginning, and each iterative deliverable needs to address them. Verification/testing must be covered by requirements as well, including appropriate requirements for black- and white-box testing, for test coverage, and for robust simulations. All tests should be automated, with output appropriate for the required analysis of results and the generation of reports suitable for management and customers. Verification and testing needs to be given at least the same quality and quantity of resources as development, if not more. Projects can be managed with the usual tools, including earned value; however, expect projects to be replanned frequently per the Agile philosophy.