Contract Number: N69250-08-D-0302
Task Order: 0007

Security Assessment of Cloud Computing Vendor Offerings

Final Report
Oct 10, 2009

Vassil Roussev, Ph.D.
Golden G. Richard III, Ph.D.
Daniel Bilar, Ph.D.
Department of Computer Science
University of New Orleans
Contents

Executive Summary
Recommendations
1. Introduction
2. Background
   2.1. The Promise and Early Reality of the Cloud
   2.2. DoD Enterprises and the Cloud
   2.3. Security Concerns on the Cloud
3. Representation of the Navy Data Center System Environment
4. Navy Security Concerns and Evaluation Criteria
5. Vendor Assessment Overview
6. Vendor Assessment: Amazon Web Services (IaaS)
   6.1. Description
   6.2. Security Assessment
7. Vendor Assessment: Boomi Atmosphere (PaaS/SaaS)
   7.1. Description
   7.2. Security Assessment
8. Vendor Assessment: force.com (PaaS)
   8.1. Description
   8.2. Security Assessment
9. Vendor Assessment: Pervasive Software (PaaS/SaaS)
   9.1. Description
   9.2. Security Assessment
   9.3. Case Studies of Interest
Author Short Bio: Vassil Roussev
Author Short Bio: Golden G. Richard, III
Author Short Bio: Daniel Bilar
List of Abbreviations
References
Executive Summary

Cloud computing is a relatively new term that describes a style of computing in which a service provider offers computational resources as a service over the Internet. The main characteristic of cloud computing is the promise that service availability can be scaled up and down according to demand, with the customer paying based on actual resource usage and service level arrangements.

Cloud computing differs from prior approaches to scalability in that it is based on a new business proposition, which turns IT capital costs into operating costs. Under this model, a service provider builds a public cloud infrastructure and offers its services to customers, who pay only for what they use in terms of compute hours, storage, and network capacity. The expectation is that, by efficiently sharing the infrastructure, customers will see the cost and complexity of their IT operations go down significantly.

From a security perspective, sharing the physical IT infrastructure with other tenants creates new attack vectors that are not yet well understood. Virtualization is perceived as providing a very high level of protection, yet it is too early to estimate its long-term effectiveness: after all, it is a software layer subject to implementation faults. Cloud services are in the very early stages of adoption, with small-to-medium enterprises as their first customers. In all likelihood, no determined adversary has systematically tried to penetrate the defense mechanisms of a cloud provider. Just as importantly, providers do not appear ready to deliver the specific offerings that a DoD enterprise needs.

Over the medium and the long run, there is little doubt that DoD's major IT operations will ultimately move to scalable cloud services that will increase capabilities and reduce costs. However, current offerings from public cloud providers are not, in our view, suitable for DoD enterprises and do not meet their security requirements.

Large civilian enterprises have very similar concerns to those of DoD entities and have taken the path of developing private clouds, which use the same technology but keep the actual operation inside the enterprise data center. In our view, this is the general direction in which DoD entities should focus their efforts. It is difficult to conceive of a scenario under which it becomes acceptable for DoD data and computations to physically leave the enterprise. Moreover, DoD operations have a large enough scale to reap the benefits of cloud services by sharing the infrastructure within DoD.

It is important to recognize that a generic cloud computing platform, by itself, provides limited benefits. These come from virtualization, which, by consolidating existing deployments, reduces overall hardware requirements, simplifies IT management, and provides greater flexibility to react to variations in demand.

The true cost savings come from well-known sources—eliminating redundant applications, consolidating operations, and sharing across enterprises. In cloud terms, it is multi-tenancy—the sharing of a platform by multiple tenants—that provides the greatest efficiency gains. It is also important to recognize that efficient scaling of services requires that applications be engineered for the cloud, and it is likely that most legacy applications would need to be reengineered.
Recommendations

The overall assessment of this report is that cloud computing technology has significant potential to substantially benefit DoD enterprises. Specifically, it can facilitate the consolidation and streamlining of IT operations and can provide operational efficiencies, such as rapid automatic scaling of service capacity based on demand. The same technology can simplify the deployment of surge capacity, as well as fail-over and disaster recovery capabilities. Properly planned and deployed, cloud computing services could ultimately bring higher levels of security by simplifying and speeding up the process of deploying security upgrades, and by reducing deployed configurations to a small number of standardized ones.

It is important both to separate the potential of cloud computing from the reality of current offerings, and to critically evaluate the implications of using the technology within a DoD enterprise. Our own view (supported by influential IT industry research) is that the current state of cloud computing offerings does not live up to the level of hype associated with them. This is not uncommon for new technologies that burst to the forefront of the news cycle; realistically, however, it takes time for the technology to mature and fulfill its promise. From a DoD perspective, the most important question to ask is:

How does the business proposition of cloud computing translate into the DoD domain?

The vast majority of the benefits from using infrastructure-as-a-service (IaaS) offerings can be had now by aggressively deploying virtualization in existing, in-house data centers. Further efficiencies require higher levels of sharing at the platform (platform-as-a-service, PaaS) and application (software-as-a-service, SaaS) levels. In other words, there need to be multiple tenants that share these deployments and the associated costs. Due to trust issues, it is difficult to envision a scenario where DoD enterprises share deployments with non-DoD entities in a public cloud environment.

Further, regulatory requirements with respect to trust, security, and verification have hidden costs that are generally unaccounted for in the baseline (civilian) model. The responsibility of assuring compliance cannot be outsourced and would likely be made more difficult and costly.

Based on the above observations, we make the following general recommendations. A DoD enterprise should:

• Only consider deploying cloud services on facilities that it physically controls. The general assumption is that the physical location of one's data in the cloud should be irrelevant, as users will have the same experience. This is patently not true for DoD systems, as the trustworthiness and security of any facility that hosts data and computations must be assured at all times. Further yet, the supply chain of the physical hardware must also be trustworthy to ensure that no breaches can be initiated from it.

• Consider vendors who offer technology to be deployed in a private cloud, rather than public cloud services. Expanding on the previous point, it is inconceivable at this point that DoD would relinquish physical control over sensitive data and computations to a third party. Therefore, DoD should look to adopt the technology of cloud computing but modify the business model for in-house use. For example, IaaS technology is headed for commoditization, with multiple competing vendors (VMWare, Cisco, Oracle, IBM, HP, etc.) working on technologies that automate the management of entire data centers. Such offerings can be expected to mature within the next 1-2 years.

• Approach cloud service deployment in a bottom-up fashion. The deployment of cloud services is an evolutionary process, which ultimately requires re-engineering of applications, as well as business practices. The low-hanging fruit is data center virtualization, which decouples data and services from the physical machines. This enables considerable consolidation of hardware and software platforms and is the basic enabler of cloud mobility. Following that, it is the automation of virtualized environment management that can bring costs further down. At the next level, shared platform deployments, such as database engines and application servers, provide further efficiencies. Finally, the most difficult part is the consolidation of applications and the shared deployment of cloud versions of these applications.

We fully expect that these themes are familiar to DoD IT managers, and some aspects of them are already implemented. This should not come as a surprise, as these are the true concerns of business IT efficiency, and cloud computing does not magically wave them away. However, cloud computing does provide extra leverage in that, once a service is ready for the cloud, it can be deployed on a wide scale at a marginal incremental cost.

• Look for opportunities to develop and share private cloud services within DoD. Sister DoD entities are natural candidates for sharing cloud service deployments—most have common functions and compliance requirements, such as human resources and payroll, that are prime candidates for multi-tenant arrangements. Unlike the public case, the consolidation of such operations can bring the cost of compliance down, as fewer systems would need to be certified.

• Critically evaluate any deployment scenarios using Red Team exercises and similar techniques. As discussed further in the report, most of the service guarantees promised by vendors are based on the design of the systems and have not been independently verified. This may be acceptable in the public domain, as the price of failure for many enterprises is high but rarely catastrophic for the nation; however, DoD facilities must be held to a much higher standard. The only realistic way of assessing how DoD cloud services would perform under stress, or under a sustained attack by a determined adversary, is to periodically simulate those conditions and observe the system's behavior.
1. Introduction

Cloud computing is a relatively new term that describes a style of computing in which a service provider offers computational resources as a service over the Internet. The main characteristic of cloud computing is the promise that service availability can be scaled up and down according to demand, with the customer paying based on actual resource usage and service level arrangements. Typically, the services are provided in a virtualized environment, with computation potentially migrating from machine to machine to allow for optimal resource utilization. This process is completely transparent to the clients, as they see the same service regardless of the physical machine providing it. The name 'cloud computing' alludes to this concept, as the Internet is often depicted as a cloud on architectural diagrams; hence, the computation happens 'in the cloud'.

Historically, the basic concept was pioneered by IBM in the 1960s under the name utility computing; however, it has been no more than a decade since the cost and capabilities of commodity hardware have made it possible for the idea to be realized on a massive scale. In general, the service concept can be applied at three different levels, and actual vendor offerings may be a combination of these:

• Software as a Service (SaaS). Under the SaaS model, application software is accessed and managed over the network and is usually hosted by a service provider. Licensing is tied to the number of concurrent users, rather than physical product copies. For example, Google provides all of its applications—Gmail, Google Docs, Picasa, etc.—as services, and most of them are not even available as standalone products.

• Platform as a Service (PaaS). PaaS offers as a service a specific solution platform on top of which developers can build applications. For example, in traditional computing, LAMP (Linux, Apache, MySQL, Perl/PHP/Python) is a popular choice for developing Web applications. Example PaaS offerings include Google App Engine, force.com, and Microsoft's Windows Azure Platform.

• Infrastructure as a Service (IaaS). IaaS goes one level deeper and provides an entire virtualized environment as a service. The services can be provided at the operating system level, or even the hardware level, where virtual compute, storage, and communications resources can be rented on an on-demand basis.

Given the wide variety of offerings, many of which are not even directly comparable, this report provides a conceptual analysis of the field as it relates to Navy-specific requirements. This drives the analysis of specific offerings and will extend the useful lifetime of this report by providing a framework for evaluating other offerings.
2. Background

    "The security of these cloud-based infrastructure services is like Windows in 1999. It's being widely used and nothing tremendously bad has happened yet. But it's just in the early stages of getting exposed to the Internet, and you know bad things are coming."
    - John Pescatore, Gartner VP and security analyst (quoted by the Financial Times, Aug 3, 2009, http://www.ft.com/cms/s/0/5aa4f33e-7fc4-11de-85dc-00144feabdc0.html)

Before presenting the security implications of transitioning the IT infrastructure to the cloud, it is important to understand the business case behind the trend and the technological means by which it is being implemented. These have direct bearing on the ultimate cost/benefit analysis with respect to security, and it is important to understand that some of the security assumptions behind the typical enterprise analysis may not hold true for a DoD implementation. In turn, this may prevent at least some of the possible efficiencies and may completely alter the outcome of the decision process.

2.1. The Promise and Early Reality of the Cloud

There are dozens of definitions of the Cloud, and Vaquero et al. [22] recently completed an in-depth survey on the topic. Based in part on those results, we offer one of the more rigorous definitions:

    "Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized service level agreements (SLA)."

The above statement can be broken into three basic criteria for a "true" cloud:

• The management of the physical hardware is abstracted from the customer (tenant); this goes beyond virtualization—the location of the data and the computation is transparent to the tenant.

• Infrastructure capacity is highly elastic, allowing tenants to consume almost exactly the amount of resources demanded by users—the infrastructure automatically scales up/down on a fine time scale.

• Tenants incur no upfront capital expenditures—infrastructure costs are paid for incrementally as an operating expense.

The main selling point of all cloud computing offerings is that they provide significant reductions in the total cost of ownership, offer more flexibility, and have low entrance costs for new enterprises. It is important to realize that the main source of efficiency in a public cloud infrastructure comes from sharing the cost with other tenants.

Cloud platforms can provide savings at each layer of the hardware/software stack, starting with the hardware, all the way up to the application layer:

• Data center maintenance
• Infrastructure/network maintenance
• Infrastructure management and provisioning
• Database management, provisioning, and maintenance
• Middleware/application server management
• Application management
• Patch deployment and testing
• Upgrade management

Typical data center arrangements, in which racks of servers and storage are rented and then managed by the tenant, offer the lowest level of sharing and, therefore, the lowest level of overall cost reduction. At the other end of the spectrum is a multi-tenant arrangement in which multiple parties share the database, middleware, and application platforms and outsource all maintenance to the provider. This arrangement promises the greatest efficiency improvements. For example, Gartner Research reports [12] that its clients are experiencing project savings of 25% to 40% by deploying customer relationship management solutions via the SaaS model.

The real value of cloud computing kicks in when a full application platform is leveraged either to purchase applications written to take advantage of a multi-tenant platform, or to (re-)develop the applications on this new stack. For example, moving existing mail servers from an in-house data center to a virtual machine somewhere in the cloud will not yield any notable savings, as the tenant would still be responsible for maintenance, patching, and provisioning. The latter point is important—provisioning is done for peak demand, the capacity is utilized only a small fraction of the time, and the cost is borne full time. (According to McKinsey's survey [14], the average physical server is utilized at about 10%.) In contrast, a true cloud solution, such as Google Mail, would relieve the tenant of all of these concerns, potentially saving non-trivial amounts of physical and human resources.

McKinsey's report offers a sobering view of current cloud deployments:

• Current cloud offerings are most attractive for small and medium-sized enterprises; adoption by larger enterprises (with the end goal of replacing in-house data centers) faces significant hurdles:

  o Current cloud computing offerings are not cost-effective compared to large enterprise data centers. (This conclusion is based on the assumption that existing IT services would be replicated on the cloud and, therefore, most of the potential efficiencies would not be realized.) Figure 1 illustrates the point by considering Amazon's EC2 service for both Windows and Linux. The overall conclusion is that most EC2 options are more costly than the TCO for a typical data center. It is possible to improve the cloud price through pre-pay agreements for Linux systems (but not for Windows). Further, based on case studies, it is estimated that the total cost of ownership (TCO) for 1 month of CPU processing is about $150 for an in-house data center and $366 on EC2 for a comparable instance. Consequently, cloud costs need to go down substantially before they can justify wholesale data center replacement.
Figure 1: EC2 monthly CPU equivalent price options [14]

  o Security and reliability concerns will have to be mitigated, and applications' architecture needs to be re-designed for the cloud. Section 2.3 discusses this point in more detail.

  o Business perceptions of increased IT flexibility and effectiveness will have to be properly managed. Currently, cloud computing is relatively early in its technology cycle, with actual capabilities considerably lagging public perceptions. There is no reason to doubt the momentum or long-term viability of the technology itself, as all the major IT vendors—IBM, HP, Microsoft, Google, Cisco, VMWare, etc.—are working towards providing integrated solutions. Yet, mature solutions for larger enterprises appear to be a few years away.

  o IT organizations will have to adapt to function in a cloud-centric world before all efficiencies can be realized.

Most of the gains in a private cloud deployment (using a cloud platform internally) come from virtualizing servers, storage, network operations, and other critical building blocks. In other words, by deploying virtualization aggressively, the enterprise can achieve close to 90% of the utilization a cloud service provider (such as Google) can achieve. Figure 2 illustrates the point.

The traditional IT stack consists of two layers: facilities—the actual (standardized) data center facilities—and the IT platform—standardized servers, networking, storage, and OS. Publicly-announced private clouds are essentially an aggressive virtualization program on top of it—the virtualization layer contains hypervisors and virtual machines in which OS instances execute.
Figure 2: Average server utilization rates [14]

2.2. DoD Enterprises and the Cloud

It should be clear that, as more and more functions are outsourced to the cloud provider and are (implicitly) shared with other parties, a great deal of control over the data and computation is handed over to the provider. For most private enterprises (the most enthusiastic adopters so far) this may not present the same types of issues as for a DoD entity:

• Legal Requirements. It may not be permissible for sensitive government data and processing to be entrusted to a cloud provider. Specifically, it is likely not acceptable for DoD to relinquish control of their physical location, and it is clear that they would have to be guaranteed to stay on US territory. For example, some current cloud computing providers, such as force.com, use backup data centers outside of the U.S. as a disaster mitigation strategy. Further complicating matters is knowing where backup data is stored, and this picture isn't clear—force.com provides conflicting claims on this issue, stating both that backup tapes never leave the data center [17] and that they are periodically moved offsite [18]. Google Apps terms of service state that Google has the right to store and process customer data in the United States "or in any other country in which Google or its agents maintain facilities".

• Compliance. DoD enterprises must meet specific standards in terms of trust and security, starting with physical security and including security clearances, fine-grained access control, and strict accounting. Such requirements are generally lacking in the commercial space and, given the early stage of the technology development cycle, any compliance claims on the part of providers need to be scrutinized vigorously.

• Legacy systems. Many DoD enterprises have to support and maintain numerous legacy systems. Although the same is true for the commercial sector, it is not clear that a DoD entity will have the resources and the freedom to replace those with COTS solutions. Further, COTS solutions may not work as well (out of the box) even for relatively standard functions like personnel and supply management, simply because the government tends to do things differently.
• Risk and Public Relations (PR) Risk. Objectively, the risk profile of DoD agencies is considerably higher than that of private enterprises, as national security concerns are at stake. Accordingly, the public's tolerance for even minor breaches and failures is almost non-existent. Therefore, decision makers must necessarily put extra weight on the risk side of the equation and invest additional resources in managing and mitigating those risks. Thus, a risk scenario that is statistically very unlikely and can be ignored by private enterprises may require consideration (and resources) in a DoD enterprise. With respect to IT, this would usually manifest itself in the form of customized applications and procedures. Those are additional costs that cloud computing is unlikely to remedy, as they will have to be replicated on the cloud.

2.3. Security Concerns on the Cloud

Critical analysis reveals that cloud computing, both as a concept and as it is currently implemented, brings a whole suite of security problems that stem primarily from the fact that physical custody of data and computations is handed over to the provider.

Physical security. The first problem is that we have a completely new physical security perimeter—for most enterprises this may not be noticeable, but for a DoD entity the considerations could be different. First, is the location where the data is housed physically as secure as the in-house option? If the computation is allowed to 'float' in the cloud, are all the possible locations sufficiently secure?

Confidentiality. Traditionally, operating systems are engineered such that a super-user has unlimited access to all resources. This approach was recognized as a problem in database systems, so trusted DBMS were developed in which the administrator can manipulate the structure of the database but does not have (by default) access to the records. The super-user problem escalates in the cloud environment—now the hypervisor administrator has control over all OS instances under his command. Further, if the computation migrates, another administrator has access to it, and so on. The concept of trusted cloud computing has recently been put forward as a research idea [19] but, to the best of our knowledge, no technical solutions are yet deployed.

Another concern is the physical location of the data—if it is migrated and replicated numerous times, what are the guarantees that no traces will be left behind? Along the same lines, OS instances are routinely cloned and suspended, and the memory images contain sensitive data—keys, unique identifiers, password hashes, etc.

New attack scenarios. In the cloud, we have new neighbors that we know nothing about—normally, if they are malicious, they would have to work hard to breach the security perimeter before they can launch an attack. In the cloud, they are already sharing the infrastructure and can exploit security flaws in virtual machine hypervisors, virtual machines, or third-party applications for privilege escalation and attack from within the cloud. In public cloud computing services, such as Amazon's, the barriers to entry are very low. The services are highly automated, and all that is needed is a credit card to open an account and start probing the infrastructure from within.

Limits on security measures. Not having access to the physical infrastructure implies that traditional measures that rely on physical access can no longer be deployed. Security appliances, such as firewalls, IDS/IPS, spam filters, and data leak monitors, can no longer be used. Even for software-based protection mechanisms, there are no solutions that allow the tenant to control policy decisions. This has been recognized as a problem, and VMWare has recently introduced the VMSafe interface, which allows third-party modules to control security policy decisions. Even so, tenants also need to be concerned about the internal security procedures of the provider.

Testing and compliance. Part of ensuring the security of a system is to continuously test it and, in some cases, such testing is part of the compliance requirement. By default, providers prohibit malicious traffic and possibly even vulnerability scans [2], although it may be possible to reach an arrangement as part of the SLA.

Incident response. Early detection and response is one of the critical components of robust security, and the cloud makes those tasks more challenging. To perform an investigation, the administrator will have to rely on logs and other information from the provider. It is essential that a) the provider collects the necessary information; and b) the SLA provides for that information to be accessible in a timely manner.

Technical challenges. OS security is built for the physical hardware, and in its transition to the virtual environment there are several challenges that are yet to be addressed:

• Natural sources of randomness. Cryptographic implementations rely on random numbers, and it is the job of the OS to collect random data from non-deterministic events. In a virtualized environment, it is possible that cloned instances would behave in a predictable-enough manner to enable attacks [20].

• Trusted platform modules (TPM). Since the physical hardware is shared among several instances, the TPM-supported remote attestation process would have to be redesigned.
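The randomness concern can at least be monitored from inside a guest instance. The following minimal sketch (ours, not drawn from any vendor documentation or the report's sources) reads the Linux kernel's entropy estimate; a cloned VM that persistently reports a low value is a candidate for re-seeding from a trusted external source. The 256-bit threshold is an illustrative assumption, not a standard.

    # Illustrative check of the Linux kernel's entropy pool from inside a VM.
    # A freshly cloned instance that stays low may produce predictable keys.
    def available_entropy(path="/proc/sys/kernel/random/entropy_avail"):
        """Return the kernel's current entropy estimate in bits."""
        with open(path) as f:
            return int(f.read().strip())

    if __name__ == "__main__":
        bits = available_entropy()
        print(f"kernel entropy pool: {bits} bits")
        if bits < 256:  # illustrative threshold
            print("warning: low entropy; seed keys from a trusted external source")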
    • Limits on security measures. Not having access to the physical infrastructure implies thattraditional measures that rely on physical access can no longer be deployed. Securityappliances, such as firewalls, IDS/IPS, spam filters, data leak monitors, can no longer beused. Even for software-based protection mechanisms there are no solutions that allowthe tenant to control policy decisions. This has been recognized as a problem andVMWare has recently introduced the VMSafe interface, which allows third-partymodules to control security policy decisions. Even so, tenants need also to be concernedabout the internal security procedures of the provider.Testing and compliance. Part of ensuring the security of a system is to continuously test itand, in some cases, such testing is part of the compliance requirement. By default,providers prohibit malicious traffic and possibly even vulnerability scans [2], although itmay be possible to reach an arrangement as part of the SLA.Incident response. Early detection and response is one of the critical components ofrobust security and the cloud makes those tasks more challenging. To perform aninvestigation, the administrator will have to rely on logs and other information from theprovider. It is essential that a) the provider collects the necessary information; and b) theSLA provides for that information be accessible in a timely manner.Technical challenges. OS security is built for the physical hardware and in its transitionto the virtual environment there are several challenges that are yet to be addressed: Natural sources of randomness. Cryptographic implementations rely on random numbers and it is the job of the OS to collect random data from non- deterministic events. In a virtualized environment, it is possible that cloned instances would behave in a predictable-enough manner to execute attacks [20] Trusted platform modules (TPM). Since the physical hardware is shared among several instances the TPM-supported remote attestation process would have to be redesigned.3. Representation of the Navy Data Center System EnvironmentCurrently, SPAWAR New Orleans supports over two dozen applications running in avirtualized data center environment. At any given time, up to 300 VM instances are inexecution. Over time, it is expected that both the number of applications and theworkload will continue to grow steadily and increase demand on the operation. Part ofthe mission is to support incident management, such as hurricane response, whichrequires that redundant and reliable capacity be available. It is self-evident that, for mostof the applications, it is critical that they be available and functioning correctly under anycircumstances.The baseline platform configuration consists of commodity Sun Microsystems and Dellhardware, and Sun Solaris and Microsoft Windows operating systems. The virtualizationlayer is implemented using VMWare products. Data storage is organized using acentralized Storage Area Network, with local disk boot configuration and SAN-basedapplication installations.Incident Response 11
    • Incident response systems support communication, coordination, and decision makingunder emergency conditions, such as natural disasters. Specifically, Navy provides amessaging platform, which allows for information to be broadly disseminated among firstresponders. It also provides tracking of assets with information provided by transpondersand provides up-to-date information to decision makers through appropriate visualization.Overall, most of the information exchanged via incident response application is of asensitive nature and, as such, must be closely monitored and regulated. Specific concernsinclude the location of assets, personally-identifiable information, as well as operationalsecurity. The latter presents a difficult challenge as it often has to do with theaccumulation of individual data points over time that can, together, paint a bigger pictureof the enterprise. This data may include patterns of response, organizational structure,and availability of assets that would normally be kept secret.The threat to operational security in this context is well illustrated by recent research onhidden databases on the Internet [13]. A hidden database is one that is not generallyavailable but does provide a query interface, such as a Web form, that can return a limitedsample. An example would be any reservation system which provides information onavailability for specific parameters. Researchers have shown that it is quite feasible todevise automated queries that can exploit the limited interface to obtain representativesamples of the underlying database and to accurately estimate the overall content of thehidden database. The solution to such concerns involves careful design of the applicationinterface so that returned data does not permit an adversary to collect representativesamples. Absent specifics of the Navy application pool, this concern is not discussedfurther in this report.LogisticsNavy data centers provide logistic systems for Navy operations by tracking assets,equipment, and maintenance schedules. The execution of such applications has concernssimilar to those discussed in the previous section, especially operational security.Human ResourcesHuman resources applications provide the typical HR functions of a large enterprise asapplied to the specific needs of the Navy. In addition to hiring and discharging of servicemembers, the system keeps track of skill sets and pay certifications. An additionalconcern is the non-disclosure of personally-identifiable information to third parties. Thisincludes restricting the release of information that can indirectly lead to such disclosure.As with other operational security concerns, the counter-measures are application-specific and are not discussed further.4. Navy Security Concerns and Evaluation CriteriaIn general terms, the Navy‟s IT operations have several major security concerns, some ofwhich are specific to its operation as a DoD enterprise. There are a number of factors thata repeatable decision making process should incorporate when assessing cloud computingvendor offerings. As an unclassified report, there are limits on the amount of specificinformation that we could obtain and provide. Therefore, the outlined criteria should be 12
    • treated as a high-level roadmap and the specific details need to be fleshed out when anactual decision point is at handRelease-code AuditingOne of the main concerns is release-code auditing which requires that all source code beaudited before being put into production. This is, effectively, a major restriction on thekind of services that can be provided by commercial vendors on general-purpose cloudplatforms.It is likely economically and logistically infeasible for vendors to undergo a certificationprocess for all the code that could provide services to the Navy. For example, it wouldnot be feasible to use an existing scalable email platform, such as the one provided byGoogle to improve the scalability of Navy‟s operations.Compliance with DoD RegulationsAnother hurdle in the adoption of COTS cloud services is the need to ensure that allservices comply with internal IT requirements. Due to the sensitive nature of thisinformation, no specific issues are discussed in this report. It is worth noting, however,that even if the problem can be resolved administratively and the vendor can demonstratecompliance, the Navy would still have to dedicate resources to monitor this complianceon a continuous basis.The Common Criteria for Information Technology Security Evaluation (CC) is aninternational standard (ISO/IEC 15408) for computer security certification. CC is aframework in which computer system users can specify their security requirements,vendors can then implement and/or make claims about the security attributes of theirproducts, and testing laboratories can evaluate the products to determine if they actuallymeet the claims. DoD security requirements can be expressed, in part, by referring tospecific protection profiles and can help determine minimum qualifications. For example,VMWare‟s ESX Server 3.0.2 and VirtualCenter 2.0.2 have earned EAL4+ recognition,whereas Xen, the virtualization solution used by Amazon, has not been certified.Overall, the CC certification is a long and tedious process and speaks more directly to thecommitment of the vendor rather than the quality of the product. As a 2006 GAO report[23] points out in its findings, there is “a lack of performance measures and difficulty indocumenting the effectiveness of the NIAP process”. Another side effect of certificationis that it takes up to 24 months for a product to go through an EAL4 certification process[23] (p.8), which limits the availability of products with up-to-date features.Given the fast pace of development in cloud computing products, basing a decision solelyon certification requirements may severely constrain DoD‟s choice. One possibleapproach is to formulate narrower, agency-specific certification requirements to speed upthe certification. It is also important to recognize that the common EAL4 certificationdoes not cover some of the more sophisticated attack patterns that are relevant to cloudcomputing, such as side channel attacks.Another existing regulation that could provide a ready reference point for evaluation isThe Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA is aset of established federal standards, implemented through a combination ofadministrative, physical and technical safeguards, intended to ensure the security and 13
    • privacy of protected health information. These standards affect the use and disclosure ofprotected health information by certain covered entities (such as healthcare providersengaged in electronic transactions, health plans and healthcare clearinghouses) and theirbusiness associates.In technical terms, HIPAA requires the encryption of sensitive information „in-flight‟ and„at-rest‟, dictates basic administrative and technical procedures for setting and enforcingaccess control policies; and requires in-depth auditing capabilities, data back-upprocedures and disaster recovery mechanisms. These requirements are set based onestablished best security practices and are common to all applications dealing withsensitive information. From a DoD perspective, these are minimal standards andadditional safeguards are likely necessary for most applications. In that respect, HIPAAcan be considered a minimal qualification requirement.Overall, the emergence of any major standards targeted at cloud computing should betaken as a cue that cloud computing is reaching a certain level of maturity, as standardsdevelopment tends to follow rather than lead. However, the emergence of standardscannot, by itself, be relied upon as a timing mechanism to identify optimal points foradopting new technologies. Standards processes have mixed record on their timeliness (atbest) and tend to follow rather than lead. In our view, given the current pace ofdevelopment by major IT companies, it is likely that, within the next two years, matureofferings will start to differentiate themselves from the rest.Limits on Physical SecurityCurrently, the security of the services is provided, in part, through the use of appliances,such as firewalls and intrusion detection systems. These devices are used tocompartmentalize the execution of different applications and isolate faults and breaches.Having a separate and specialized device creates an independent layer of protection thatconsiderably raises the bar for attackers—a wholesale breach requires that multiple layersbe breached.In a cloud deployment, there is no direct analogy to this practice although the industryhas recognized the problem and is moving toward offering solutions. Specifically,virtualization vendors are beginning to offer an interface (API) through which third-partyvirtual appliances could be plugged-in to enforce security policies and monitor traffic andexecution environments. The conceptual problem is that such components are dependenton the security of the hypervisor—if it is breached there is the opportunity for attackers tocircumvent the virtual security appliances. Eventually, solutions to this problem willlikely be found—CPU vendors, such as Intel, are actively developing hardware supportfor virtualization. Yet, at this relatively early stage of development, there is no proof thatvirtual appliances offer the same level of protection as physical ones.Underlying Hardware Security and TrustSince cloud infrastructure is likely to be remote, it is impractical/impossible for it to bescrutinized by the Navy to comply with its heightened level of security concerns and theissue of hardware trojans must be addressed. With the advent of global markets,vertically integrated chip manufacturing has given way to a cost-efficient, multi-stage 14
    • supply chain whose individual stages may or may not pass through environments friendlyto US interests (Figure 3) Figure 3: IC manufacturing supply chain [9]A DoD report crystallizes the issues thusly [9]: " Trustworthiness of custom and commercial systems that support military operations [..] has been jeopardized. Trustworthiness includes confidence that classified or mission critical information contained in chip designs is not compromised, reliability is not degraded or untended design elements inserted in chips as a result of design or fabrication in conditions open to adversary agents. Trust cannot be added to integrated circuits after fabrication; electrical testing and reverse engineering cannot be relied upon to detect undesired alterations in military integrated circuits. The shift from United States to foreign IC manufacture endangers the security of classified information embedded in chip designs; additionally, it opens the possibility that Trojan horses and other unauthorized design inclusions may appear in unclassified integrated circuits used in military applications. More subtle shifts in process parameters or layout line spacing can drastically shorten the lives of components."If for conventional IT delivery platforms which are locally accessible only designspecifications and the testing stages can be controlled, the situation is exacerbated forremote cloud computing platforms: A very real possibility of hardware-based maliciouscode, hiding in underlying integrated circuits, exists. Potentially serious ramifications ofhardware-based subversion include unmitigatable cross-tenant data leakage to VMidentification and denial of service attacks [16].We stress that this type of surreptitious hardware subversion is not within the ability ofAV software of cloud vendors to deal with; it is an ongoing, open research problem todetect such malicious code in hardware at all. It is exacerbated, however, by the lack ofinfrastructure control inherent in using public cloud offerings, as well as the motivationof the cloud infrastructure owner to purchase and deploy COTS available.Client-side Vulnerabilities MitigationCloud clients (in particular web browsers) incorporate more functionality than the meredisplay of text and images, including rich dynamic content comprising media playback 15
    • and interactive page elements such as drop-down menus and image roll-overs. Thesefeatures includes extensions such as the Javascript programming language, as well asadditional features for the client such as application plugins (Acrobat Reader,QuickTime, Flash, Real, and Windows Media Player), and Microsoft-specificenhancements such as Browser Helper Objects and ActiveX (Microsoft Windows‟sinteractive execution framework). Invariably, these extensions have shownimplementation flaws that can be maliciously exploited as security vulnerabilities to gainunauthorized access to obtain sensitive information.In August 2009, Sensepost demonstrated a proof-of-concept password brute-forcing withpassword reset links—most if not all cloud apps use some password recovery (email- orsecret questions-based), several ways to steal cloud resources: Amazon cloud instance ofWindows license stealing, paid application theft (via DevPay), “cloud DoS” withexponential virus-like growth of EC2-hosted VM instances [4].When a user accesses a remote site with a client (say a browser), he types in a URL, andinitiates the connection. Once connected, a relationship of trust is established: the userand the website (the user initiated the connection, and now trusts the page and contentdisplay) and conversely, the site and the user (in executing actions from the user‟sbrowser). It is this trust, together with the various features incorporated into rich clientsthat attackers could subvert through what is called Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) attacks.Cross-site scripting attacks mostly use legitimate web sites as a conduit, where web sitesallow other (malicious) users to upload or post links on to the web site. It must beemphasized that from the point of view of the clients, neither HTTPS (the encryptedchannel with the little lock in the browser that denotes „safety‟) nor logins protect againstXSS or CSRF attacks. In addition, unlike XSS attacks which necessitate user action byclicking on a link, CSFR attacks can also be executed without the user‟s involvement,since they exploit explicit software vulnerabilities (i. e. predictable invocation structures)on the cloud platform. As such, the onus to prevent CSFR attacks falls squarely on thecloud vendor application developers. Some login and cryptographic token approaches, ifconscientiously designed to prevent CSFR attacks, can be of help [5].Availability, Scalability, and Resistance to Denial of Service AttacksPractically all service providers claim up time well above 99%, scaling on demand, androbust network security. By and large, such claims are based on a few architecturalfeatures (redundant power supply, multiple network connections, presence of networkfirewalls) and should be scrutinized with a healthy dose of skepticism. Few (if any) of theproviders have performed large-scale tests to quantify how their services really behaveunder stress. Customers should demand concrete proof that the provider can fulfill thepromises in the service level agreement (SLA).In our view, the only way to adequately assess the performance-related vulnerabilities isto perform a Red Team exercise. Such an exercise, unlike the SLA, would be able toanswer some very specific questions: How many simultaneous infrastructure failures are necessary to take the system down? 16
    • How quickly can it recover? How does the system perform under excessive load—does performance degrade gracefully, or does it collapse? How well does the system really scale? In other words, as the load grows, how fast do resource requirements grow? What happens if other tenants misbehave—does that affect performance? How well can the service resist a massive brute-force DoD attack? The latter is not just a function of the deployed hardware and software but also includes the adequacy and the training of administrative staff and their ability to communicate and respond.Short of a complete Red Team simulation, an experienced technical team should examinein detail the architecture of the system and try to answer as many of the above questionsas possible. This will not give the same level of assurance but should be minimumassessment performed on an offering selected for potential acquisition.5. Vendor Assessment OverviewThe presented assessments should be viewed as a small but typical sample of the types ofservices offered on today‟s market. The research effort was hampered by a relative dearthof specific information when it comes to the security of offered services. Although oursample of surveyed products was by no means exhaustive, we found it difficult to extractinformation from vendor representative beyond what is already publicly available. Somecompletely ignored our requests for information and were not included here.6. Vendor Assessment: Amazon Web Services (IaaS)6.1. DescriptionThere are still differing opinions on what exactly constitutes cloud computing, yet thereappears to be a consensus that, whatever the definition might be, Amazon‟s Web Services(AWS) are a prime example. Frequently, IaaS are also referred to as utility computing.Broadly, AWS consists of several basic services: Amazon Elastic Computing Cloud (EC2) is a service that permits a customer to create a virtual OS disk image (in a proprietary Amazon format) that will be executed by a web service running on Amazon‟s hosted web infrastructure. Amazon Elastic Block Store (EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are off-instance storage that persists independently from the life of an instance. Volumes can range from 1 GB to 1 TB that can be mounted as devices by EC2 instances. Multiple volumes can be mounted to the same instance. Amazon SimpleDB is a web service providing the core database functions of data indexing and querying. The database is not a traditional relational database although it provides an SQL-like query interface. Amazon Simple Storage Servic (S3) is a service that essentially provides an Internet-accessible remote network share. 17
    • Amazon Simple Queue Service (SQS) is a messaging service which offers a reliable, scalable, hosted queue for storing messages as they travel between computers. The main purpose is to automate workflow processes and provides means to integrate it with EC2 and other AWS. Amazon Elastic MapReduce is a web service that emulates the MapReduce computional model adopted by Google. It utilizes a hosted Hadoop2 framework running on the EC2 infrastructure. Hadoop is the open source implementation of the ideas behind Google‟s MapReduce and is supported by major companies, such as IBM and Yahoo!. Amazon CloudFront is a recent addition which automates the process of creating a content distribution network. Amazon Virtual Private Cloud (VPC) enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls, and intrusion detection systems to include their AWS resources.It is fair to say that Amazon‟s services interfaces are emerging as one of the early defacto technical standards of cloud computing. One recent development that can furtheraccelerate this trend is the development of Eucalyptus (Elastic Utility ComputingArchitecture Linking Your Programs To Useful Systems). It is an open-source softwareinfrastructure for implementing cloud computing on clusters. The current interface toEucalyptus is compatible with Amazons EC2, S3, and EBS interfaces, but theinfrastructure is designed to support multiple client-side interfaces. Eucalyptus isimplemented using commonly available Linux tools and basic Web-service technologiesand is becoming a standard component of the Ubuntu Linux distribution.6.2. Security AssessmentThe primary source of information for this assessment is the AWS security whitepaper[3], which focuses on three traditional security dimensions: confidentiality, integrity, andavailability. It does not cover all aspects of interest but does provide a good sampling ofthe overall state of affairs.Certifications and AccreditationsAmazon is cognizant of its customers‟ need to meet certification requirement, however,actual certification efforts appear to be at an early stage: “AWS is working with a public accounting firm to ensure continued Sarbanes Oxley (SOX) compliance and attain certifications such as recurring Statement on Auditing Standards No. 70: Service Organizations, Type II (SAS70 Type II) certification.”Separately, Amazon provides a short white paper on building HIPAA-compliantapplication [1]. From the content, it becomes clear that virtually all responsibility forcomply with the regulations falls on the customer.AWS provides key-based authentication to access their virtual servers. Amazon EC2creates a 2048 bit RSA key pair, with private and public keys and a unique identifier for2 http://hadoop.apache.org/ 18
    • each key pair to facilitate secure access. The default setup for Amazon‟s EC2‟s firewall isdeny-all mode which automatically denies all inbound traffic unless the customerexplicitly opens an EC2 port. Administrators can create multiple security groups in orderto enforce different ingress policies as needed and can control each security group with aPEM-encoded X.509 certificate and restrict traffic to each EC2 instance by protocol,service port, or source IP address.Physical SecurityPhysical security measures appear consistent with best industry practices but clearly donot provide the same level of physical protection as an in-house DoD facility: “AWS data centers are housed in nondescript facilities, and critical facilities have extensive setback and and military grade perimeter control berms as well as other natural boundary protection. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, state of the art intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication no fewer than three times to access All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff.”BackupsBackup policies are not described in any level of details and the provided briefdescription is contradictory. It is only clear that some of the data is replicated at multiplephysical locations, while the remaining part is customer responsibility.Amazon Elastic Compute Cloud (EC2) SecurityEC2 security is the most descriptive part of the white paper and provides the greatestamount of useful information although many details are obscured.Amazon‟s virtualization layer utilizes a “highly customized version” of the Xenhypervisor (http://xen.org). Xen is based on the para-virtualization model in which thehypervisor runs inside a host operating system (OS) and provides a virtual hardwareinterface to guest OS instances. Thus, all privileged access is mediated and controlled bythe hypervisor. The firewall is part of the hypervisor layer and mediates all trafficbetween the network interface and the guest OS. All guest instances are isolated fromeach other.Virtual (guest) OS instances are built and are completely controlled by the customer asphysical machines would be. Customers have root access and all administrative controlover additional accounts, services, and applications. AWS administrators do not haveaccess to customer instances, and cannot log into the guest OS.The firewall supports groups, thereby permitting different classes of instances to havedifferent rules. For example, in the case of a traditional three-tiered web application, webservers would have ports 80/443 (http/https); application servers would have anapplication-specific port open only to the web server group; database servers would haveport 3306 (MySQL) open only to the application server group. All three groups wouldpermit administrative access on port 22 (SSH), but only from the customer‟s corporate 19
    • network. The firewall is cannot be controlled not by the host/instance itself—it requiresthe customers X.509 certificate and key to authorize changes.AWS provides an API that allows automated management of virtual machine instances.Calls to launch and terminate instances, change firewall parameters, and perform otherfunctions must be signed by an X.509 certificate or the customer‟s Amazon SecretAccess Key and can be encrypted in transit with SSL to maintain confidentiality.Customers have no access to physical storage devices and the disk virtualization layerautomatically wipes no longer in use to prevent data leaks. It is recommended that tenantsuse encrypted file systems on top of the provided virtual block device to maintainconfidentiality.Network security mechanisms address/mitigate the most common attack vectors. DDoSare mitigated using known techniques such as syn cookies and connection limiting. Inaddition, Amazon maintains some additional spare network capacity. VM instancescannot spoof their own IP/MAC addresses as the virtualization layer enforces correctness.Port scanning is generally considered ineffective as almost all ports are closed bydefaults. Packet sniffing is not possible as the virtualization layer will effectively prohibitsetting the virtual NIC from being put into promiscuous mode—no traffic that is notaddressed to the instance will be delivered. Man-in-the-middle attacks are preventedthrough the use of SSL-encrypted communication.Amazon S3/SimpleDB SecurityStorage security, as represented by the S3 and SimpleDB services, has received relativelylight treatment. Data at rest is not automatically encrypted by the infrastructure and it isthe application‟s responsibility to do that. One drawback is that, if application data isstored encrypted on SimpleDB, the query interface is effectively disabled.The S3 APIs provide both bucket- and object-level access controls, with defaults thatonly permit authenticated access by the bucket and/or object creator. Write and Deletepermission is controlled by an Access Control List (ACL) associated with the bucket.Permission to modify the bucket ACLs is itself controlled by an ACL, and it defaults tocreator-only access. Therefore, the customer maintains full control over who has accessto their data. Amazon S3 access can be granted based on AWS Account ID, DevPayProduct ID, or open to everyone.When an object is deleted from Amazon S3, removal of the mapping from the publicname to the object starts immediately, and is generally processed across the distributedsystem within several seconds. Once the mapping is removed, there is no external accessto the deleted object. That storage area is then made available only for write operationsand the data is eventually overwritten by newly stored data.7. Vendor Assessment: Boomi Atmosphere (PaaS/SaaS)7.1. DescriptionBoomi AtomSphere offers an on-demand integration platform for any combination ofSoftware-as-a-Service, Platform-as-a-Service, Infrastructure as a Services, and on-premise applications. Their main selling point is leveraging existing applications by 20
7. Vendor Assessment: Boomi AtomSphere (PaaS/SaaS)

7.1. Description

Boomi AtomSphere offers an on-demand integration platform for any combination of Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service, and on-premise applications. Boomi's main selling point is leveraging existing applications by providing connectors for integrating SaaS offerings with on-premise, back-office applications. This is a scenario that may be attractive for Navy purposes.

The integration processes are implemented through Boomi's proprietary, patent-pending Atom, a dynamic runtime engine that can be deployed remotely or on premises. These Atoms capture the components of end-to-end integration processes, including transformation and business rules, processing logic, and connectors.

Atoms can be hosted by Boomi or other cloud vendors for SaaS-to-SaaS integration, or downloaded locally for SaaS-to-on-premise integrations. On-premise applications are typically firewalled, with no direct access via the Internet, and no access even via a DMZ. To handle this requirement, the Boomi Atom can be deployed on premises to directly connect the on-premise applications with one or more SaaS/cloud applications. Changes to firewalls (such as opening an inbound port) are not required, and the Atom supports fully bi-directional movement of data between the applications being integrated. Deployed locally, no data enters Boomi's data center at any point.

Should SaaS-to-SaaS integration be required, with the applications accessed via a secure Internet connection, Atoms can be hosted in Boomi's cloud, with Boomi managing the uptime of the Atom. Customer data is isolated from other tenants in Boomi's platform (though the standard multitenancy caveats apply, as mentioned above).

Finally, it is possible to deploy Atoms into any cloud infrastructure that supports Java, such as Amazon, Rackspace, and OpSource, offering direct connectivity between applications without involving Boomi. In this deployment style, too, no customer data enters Boomi's data center.

7.2. Security Assessment

Boomi has deployed multiple controls at the infrastructure, platform, application, and data levels, thus acknowledging the multidimensional security aspects of its product.

Infrastructure Security

The Boomi infrastructure meets the AICPA SAS 70 Type II (and Level 1 PCI DSS) audit requirements; SAS 70 is the most widely recognized regulatory compliance mandate issued by the American Institute of Certified Public Accountants. Its primary scope is the inquiry, examination, and testing of a service organization's control environment. Data centers, managed-service providers, and SaaS vendors represent such service organizations [15].

Boomi's controls include best-of-breed routers, firewalls, and intrusion detection systems, with DDoS protection bolstered by redundant IP connections to world-class carriers terminated on a carrier-grade network. Physical power continuity is provided by redundant UPS power and diesel generator backups, as well as HVAC facilities. In addition, Boomi has instituted multipoint monitoring of key metrics, with alerts for both mission-critical and ongoing maintenance issues.

Platform and Application Security

As noted, Atoms can reside locally or be hosted in Boomi's data center. An Atom can communicate with the Boomi data center in two modes: continuous automatic and user-initiated communication. During ongoing communications, Atoms merely send Boomi operational information such as online uptime status, tracking information cataloging process executions, configuration updates, and code update checks. User-initiated communication is undertaken only upon the request of an authorized user; the information sent includes logging information about specific integration processes; error, failure, and diagnostic messages; and retrieved schemata for the design of new integration processes.

No inbound firewall ports need to be opened in order for an Atom to communicate with Boomi's data center, and traffic is protected by standard 128-bit SSL encryption. Any credential needed for application integration (such as a database password) is encrypted with X.509 private/public key pairs and stored for the account. When an Atom is deployed, the encrypted password is passed along, and the supplied credentials unlock the password at runtime.
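Boomi does not publish the mechanics of this credential wrapping, so the following Python sketch shows only the general pattern (encrypt the credential under the account's public key at design time, decrypt with the private key at runtime), using the third-party cryptography package. The RSA/OAEP parameters and all values are our assumptions, not Boomi's:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the account's X.509 key pair; in practice the public key
# would come from the account certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Design time: wrap a hypothetical database password and store only
# the ciphertext with the account.
db_password = b"example-integration-password"
wrapped = public_key.encrypt(db_password, oaep)

# Runtime: the deployed engine unwraps the credential just before
# opening the connection; the plaintext never needs to be stored.
assert private_key.decrypt(wrapped, oaep) == db_password
```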
The AtomSphere platform (used to build, deploy, and manage the Atoms, regardless of deployment style) is accessed via a standard web browser. Boomi uses the OWASP Top Ten list to address the most critical client- and server-side web application security flaws. The U.S. Defense Information Systems Agency recommends the OWASP Top Ten as key best practices to be used as part of the DoD Information Technology Security Certification and Accreditation Process (DITSCAP, now DIACAP) [10].

Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), mentioned earlier, are also listed in the Top Ten. Boomi's control against XSS relies on proper XML encoding, applied through an authenticated AWS REST-based API, when data is delivered to the client. Timestamps, as well as the aforementioned AWS authentication, are used to mitigate (though not eliminate) CSRF attacks [6].
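Boomi's specific encoding controls are not public, but the underlying defense is standard contextual output encoding: untrusted values are escaped before delivery so the browser cannot interpret them as markup. A generic Python illustration, not Boomi's implementation:

```python
import html
from xml.sax.saxutils import escape

# An attacker-controlled value destined for an XML or HTML response.
user_supplied = '<script>alert("xss")</script>'

# Escaping renders the payload inert in each output context.
print(escape(user_supplied))                   # for XML documents
print(html.escape(user_supplied, quote=True))  # for HTML pages
```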
Client Data Security

Boomi stresses that its AtomSphere platform does not by default retrieve, access, or store client data. It merely applies the necessary data mapping rules to facilitate integration, without saving data at Boomi's location unless specifically configured to do so. Hence, data flowing through locally resident Atoms does not touch the Boomi data center: it is transported directly to either the SaaS or the local application through an Atom component (a connector) configured to user-specified security requirements. Should the client prefer a zero-footprint deployment of Atoms hosted in Boomi's data center, the data center infrastructure controls described above are used to safeguard the integrity, confidentiality, and availability of those Atoms.

8. Vendor Assessment: force.com (PaaS)

8.1. Description

force.com offers a Platform-as-a-Service cloud architecture to clients, designed to support turn-key, Internet-scale applications. Its primary selling points are a track record of high system uptime, with an Internet-facing mechanism for tracking reliability, and the ability to declaratively develop certain classes of web-based applications with little need to write code.

Client applications to be executed on the force.com cloud are stored as metadata, which is interpreted and transformed into objects executed by the force.com multitenant runtime engine. Applications for the force.com architecture can be developed declaratively using a native application framework, via a Java-like programming language called Apex, or via exposed APIs that allow applications to be developed in C#, Java, and C++. The APIs support integration with other environments, e.g., to allow data to be accessed from sources external to the force.com infrastructure. Applications that use the API do not execute within the force.com cloud—they must be hosted elsewhere.

force.com imposes a strict application testing regimen on new applications before they are deployed, to ensure that new applications do not seriously impact the performance of existing applications running in the force.com cloud. An extensive set of resource limits is also imposed, to prevent applications from monopolizing CPU resources and memory. Operations that violate these resource limits result in runtime exceptions in the application.

8.2. Security Assessment

force.com deploys a number of mechanisms for increasing the security of applications and associated data. These are described in the following sections.

Infrastructure Security

The force.com infrastructure's network is secured via external firewalls that block unused protocols and via internal intrusion detection sensors deployed on all network segments. All communication with force.com is encrypted via SSL/TLS. Third-party certification is regularly performed to assess network security. Power and HVAC for the datacenters are fully redundant, with multiple UPSs, power distribution units, diesel generators, and cooling systems. External network connectivity is provided via fiber enclosed in concrete vaults. A number of physical security measures are deployed at force.com datacenters, including 24-hour manned security, biometric scanning for access to computer systems, full video surveillance, and bullet-proof, concrete-walled rooms. Computers hosting cloud-based applications are enclosed in steel cages with authentication control for physical access. On the other hand, a successful phishing attack has been mounted against force.com employees, resulting in the leakage of a large amount of customer contact data [11].

Platform and Application Security

Native force.com applications are stored as metadata and executed by a runtime engine. The database-oriented nature of the force.com APIs and the lack of low-level APIs for applications executing within the cloud severely limit the possibility of traditional low-level attacks: force.com applications do not execute independently of the runtime engine (which has extensive auditing and resource monitoring checks), and applications developed using the force.com APIs do not execute within the force.com cloud—they merely have access to data in the cloud. On the other hand, if an attack vector against the multitenant runtime engine itself were developed, it appears that data and applications belonging to other organizations could be manipulated, since data is commingled. No attack vectors of this kind have been reported, and the feasibility of developing such attacks is unknown.
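force.com's governor limits are enforced inside its proprietary runtime and are not externally inspectable; the Python sketch below merely illustrates the metering pattern described above, in which exceeding a per-request quota raises a runtime exception in the offending application. The quota names and values are invented for illustration:

```python
class ResourceLimitExceeded(RuntimeError):
    """Raised when a metered operation exceeds its per-request quota."""

class RequestMeter:
    # Illustrative quotas only; the platform's real limits differ.
    LIMITS = {"db_queries": 100, "db_rows": 10_000, "cpu_ms": 10_000}

    def __init__(self):
        self.used = {name: 0 for name in self.LIMITS}

    def charge(self, resource, amount=1):
        """Record consumption and fail fast once the quota is exhausted."""
        self.used[resource] += amount
        if self.used[resource] > self.LIMITS[resource]:
            raise ResourceLimitExceeded(
                f"{resource}: {self.used[resource]}/{self.LIMITS[resource]}")

# The runtime charges the meter before doing work for a tenant request,
# so one tenant cannot monopolize shared CPU or memory.
meter = RequestMeter()
for _ in range(100):
    meter.charge("db_queries")  # a 101st call would raise
```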
Client Data Security

Measures to ensure client data security are described only vaguely—available force.com literature simply states that "salesforce.com protects customer data by ensuring that only authorized users can access it" and that "All data is encrypted in transfer." One mechanism that might have implications for DoD applications is the presence of the force.com platform "Recycle Bin", which stores deleted data for up to 30 days, during which the data is available for restoration. It is unclear whether the platform implements secure deletion for data stored in the force.com datacenter, and whether there is a mechanism for ensuring that deleted data is removed from backups.
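Where a platform cannot demonstrate secure deletion from live storage and from backups, a standard client-side mitigation is cryptographic erasure: the tenant stores only ciphertext, and destroying the key renders every copy, including backed-up copies, unreadable. The sketch below uses the Python cryptography package's Fernet construction; this is a general technique, not a force.com feature, and key management is deliberately elided:

```python
from cryptography.fernet import Fernet

# The key lives with the tenant, never with the provider.
key = Fernet.generate_key()

# Only ciphertext is ever uploaded, so backups also hold ciphertext.
record = Fernet(key).encrypt(b"sensitive customer record")
# ... store `record` on the cloud platform ...

# To read the data back while the key still exists:
plaintext = Fernet(key).decrypt(record)

# "Deletion": destroying all copies of the key makes the record, and
# any backup of it, permanently unreadable, with no reliance on the
# provider scrubbing disks or backup media.
del key
```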
9. Vendor Assessment: Pervasive Software (PaaS/SaaS)

9.1. Description

Similar to Boomi, but broader and more differentiated, Pervasive offers an on-demand integration suite for Software-as-a-Service and Platform-as-a-Service (confusingly renamed Integration-as-a-Service, or IaaS). Pervasive emphasizes development speed, application integration, and heterogeneous data connectivity (200+ connectors, among them connectors to legacy COBOL, QSAM, and MVS formats).

In addition, its product line fields a remarkable capability for processing non-relational, semi-structured, and unstructured content, which is typically not explicitly formulated and lies buried in most organizations. Specialized, pre-made, turn-key integration solutions for particular industry sectors are offered as well. As with Boomi, hosting options are available (data integration through Pervasive's DataCloud or any other cloud), as is deployment on local premises. Thus, a full range of SaaS-to-SaaS, on-premises-to-SaaS, and on-premises-to-on-premises integration, as well as traditional B2B and large-scale bulk data exchange, can be handled via Pervasive's platform.

Figure 4: Pervasive Data Integration Platform

Their flagship product, the Pervasive Data Integration Platform, consists of a unified set of tools for rapid development of seamless connectors that capture, as in Boomi's case, process logic, data mapping, and transformation rules. The connectors' secret sauce lies in the runtime Integration Engine, instantiated by the Integration Services (shown in blue and purple in Figure 4).

9.2. Security Assessment

In contrast to Boomi's strong emphasis on security (attested to by several position and technical papers), surprisingly little detail is available on Pervasive's security stance. No security controls are mentioned for the DataCloud. One sentence in the technical description of the Integration Engine is devoted to noting that each instantiated Integration Engine runs in its own address space, their isolation increasing reliability.

Several issues can be deduced from the type and scale of the technologies employed. Management of deployed components through the Integration Manager (yellow in Figure 4) is effected through standard browsers; this is subject to the standard XSS and CSRF issues delineated in the previous section. Lastly, the workhorse of the integration platform, the Integration Engine, is designed to handle an impressive gamut of applications (as evinced by Figure 5), from extract, transform, and load (ETL) projects to B2B integration (e.g., HIPAA, HL7, ACORD, X12, and SOA adapters).

This breadth is not in itself the problem; but coupled with the Engine being lightweight to deploy, as Pervasive emphasizes several times, thorough input validation seems unlikely. Before any adoption for mission-critical deployment, it is recommended that these and other
issues (e.g., the fact that interaction with applications that embed the Integration Engine is accomplished through relatively complex COM APIs) be addressed.

Figure 5: Integration Engine API

Despite several contacts via email and telephone conversations with sales people, account managers, and senior systems engineers, with entreaties for material addressing the security issues as formulated by the Navy's SOW, no further material was forthcoming as of the date of this writing.

We stress that the paucity of available information need not necessarily reflect on the quality of Pervasive's security controls: indirect evidence that security controls across its IaaS, PaaS, and SaaS offerings meet minimum standards is corroborated by 100+ case studies spanning over a dozen sectors. Pervasive's family of products has been deployed as an integration solution in industries subject to information classification, audit, and access control stipulations comparable to the Navy's, the health care sector being one of them.

9.3. Case Studies of Interest

The health care sector is of interest because of its statutory (HIPAA) data security requirements, process complexity, entity scale, and legacy system integration. Pervasive lists about a dozen and a half case studies, among them the State of New York, which decided to modernize its Medicaid system with the Pervasive Data Integrator for efficiency and HIPAA compliance reasons.

Originally created to streamline healthcare processes and reduce costs, HIPAA mandates minimum standards to protect and keep private an individual's health information. For organizations to be compliant, they must design their systems and applications to meet
HIPAA's privacy and security standards and related administrative, technical, and physical safeguards.

These standards are referred to as the Privacy and Security Rules. HIPAA's Privacy Rule requires that individuals' health information be properly protected by covered entities. Among other requirements, the Privacy Rule regulates encryption standards for data in transmission (in flight) and in storage (at rest). HIPAA's Security Rule mandates detailed administrative, physical, and technical safeguards to protect health information: inter alia, this means the implementation and deployment of access controls, encryption, backup, and audit controls for the data in question, subject to appropriate classification and risk levels. Other industry case studies of potential interest to the Navy may involve the Transportation/Manufacturing (logistics), Public Sector/Government (statutes), and Financial (speed) sectors.

Judicious study of the health care case studies may also yield insights into issues of scale and legacy system migration. In this context, we mention the State of California's tackling of HIPAA compliance with Pervasive software. Its unique requirements (non-negotiable integration of legacy systems and a traffic volume of 10 million transactions per month) have Navy correspondences. Lastly, Louisiana's East Jefferson General Hospital's transition from a proprietary database ETL tool to a Pervasive solution, in order to optimize the use of its data warehouse, may warrant a look as well.
Author Short Bio: Vassil Roussev

Vassil Roussev is an Associate Professor of Computer Science at the University of New Orleans (UNO). He received his Ph.D. in Computer Science from the University of North Carolina—Chapel Hill in 2003. After that, he joined the faculty at UNO and has focused his research on several related areas—computer security, digital forensics, distributed systems and cloud computing, high-performance computing, and human-computer interaction. The overall theme of his research is to bring to bear the massive computational power of scalable distributed systems, as well as visual analytics tools, to solve challenges in security and forensics with short turnaround times. He is also working on tighter integration of security and forensics tools as a means to enrich both areas of research and practice.

Dr. Roussev has over 20 peer-reviewed publications (book chapters, journal articles, and conference papers) in the area of computer security and forensics, including featured articles in IEEE Security and Privacy and Communications of the ACM. His research and educational projects have been funded by DARPA, ONR, DoD, SPAWAR New Orleans, the State of Louisiana, and private companies, including a Sun Microsystems Academic Excellence Grant.

Dr. Roussev is Director of the Networking, Security, and Systems Administration Laboratory (NSSAL) at UNO, coaches the UNO Collegiate Cyber Defense Team, and represents UNO on the Large Resource Allocations Committee of the Louisiana Optical Network Initiative (http://loni.org). He is also a Co-PI on a $15M project to create the LONI Institute (http://institute.loni.org/). The LONI Institute seeks to develop a state-wide collaborative R&D environment among Louisiana's research institutions, with a clear focus on advancing computational scientific research.
Author Short Bio: Golden G. Richard, III

Golden G. Richard III is a Professor of Computer Science in the Department of Computer Science at the University of New Orleans. He received a B.S. in Computer Science (with honors) from the University of New Orleans in 1988, and an M.S. and Ph.D. in Computer Science from The Ohio State University in 1991 and 1995, respectively. He joined UNO in 1994. Dr. Richard's research interests include computer security, operating systems internals, digital forensics, and reverse engineering. He is a GIAC-certified digital forensics investigator and a member of the ACM, the IEEE Computer Society, USENIX, the American Academy of Forensic Sciences (AAFS), and the United States Secret Service Task Force on Electronic Crime. At the University of New Orleans, he directs the Greater New Orleans Center for Information Assurance and co-directs the Networking, Security, and Systems Administration Laboratory (NSSAL).

Prof. Richard has over 30 years of experience in computing and is a recognized expert in digital forensics. He and his collaborators and students at the University of New Orleans have made important research contributions in high-performance digital forensics, file carving, evidence correlation mechanisms, on-the-spot digital forensics, and OS support for digital forensics. Furthermore, he and his collaborators pioneered the use of Graphics Processing Units (GPUs) to speed the processing of digital evidence. Recently, he developed and taught one of the first courses in academia on reverse engineering of malicious software. He is the author of numerous publications in security and networking, as well as two books for McGraw-Hill, the first on service discovery protocols (Service and Device Discovery: Protocols and Programming, 2002) and the second on mobile computing (Fundamentals of Mobile and Pervasive Computing, 2005).
Author Short Bio: Daniel Bilar

Education
  Dartmouth College (Thayer School of Engineering), Ph.D. Engineering Sciences, 2003
    Thesis: Quantitative Risk Analysis of Computer Networks
  Cornell University (School of Engineering), M.Eng. Operations Research and Information Engineering, 1997
  Brown University (Department of Computer Science), B.A. Computer Science, 1995

Current Affiliation
  Assistant Professor of Computer Science, University of New Orleans, August 2008–present
  Co-Chair, 6th Workshop on Digital Forensics and Incident Analysis (Port Elizabeth, South Africa), 2010
  Advisory Board, Journal in Computer Virology (Springer, Paris), 2008–
  Professional Advisory Board, SANS GIAC Systems and Network Auditor, 2002–2005

Past Affiliations
  Endowed Faculty Fellow, Wellesley College (Wellesley, MA), 2006–2008
  Visiting Professor of Computer Science, Colby College (Waterville, ME), 2004–2006

Research Interests
  Detection, classification, and containment of highly evolved malicious software; systems-of-systems critical infrastructure modeling and protection; risk analysis and management of computer networks

Dr. Bilar was a founding member of the Institute for Security and Technology Studies at Dartmouth College, conducting counter-terrorism technology research for the US Department of Justice and the Department of Homeland Security.
List of Abbreviations

CC      Common Criteria for Information Technology Security Evaluation
DBMS    Database management system
DoD     Department of Defense
EAL     Evaluation assurance level
EC2     Elastic Compute Cloud
IaaS    Infrastructure as a service
IT      Information technology
HIPAA   The Health Insurance Portability and Accountability Act of 1996
LAMP    Linux, Apache, MySQL, Perl/PHP/Python
NIAP    National Information Assurance Partnership
OS      Operating system
PaaS    Platform as a service
PR      Public relations
SaaS    Software as a service
SLA     Service level agreement
TCO     Total cost of ownership
TPM     Trusted platform module
References

[1] Amazon.com, "Creating HIPAA-Compliant Medical Data Applications with AWS", April 2009, http://awsmedia.s3.amazonaws.com/AWS_HIPAA_Whitepaper_Final.pdf
[2] Amazon.com, "Amazon Web Services Customer Agreement", http://aws.amazon.com/agreement/
[3] Amazon.com, "Amazon Web Services: Overview of Security Processes", Sep 2008, http://s3.amazonaws.com/aws_blog/AWS_Security_Whitepaper_2008_09.pdf
[4] Nicholas Arvanitis, Marco Slaviero, and Haroon Meer, "Clobbering the Cloud", Black Hat USA 2009, August 2009, http://www.sensepost.com/research/presentations/2009-08-SensePost-BH-USA-2009.pptx
[5] Adam Barth, Collin Jackson, and John C. Mitchell, "Robust Defenses for Cross-Site Request Forgery", Proceedings of ACM CCS 2008, October 2008, http://www.adambarth.com/papers/2008/barth-jackson-mitchell-b.pdf
[6] Boomi Inc., "Boomi OWASP Top Ten Response", August 2009, http://www.boomi.com/files/boomi_datasheet_owasp_response.pdf
[7] Rajkumar Buyya, Chee Shin Yeo, and Srikumar Venugopal, "Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering Computing as the 5th Utility", Proceedings of the 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, May 2009
[8] DARPA Microsystems Technology Office, BAA 07-24, "TRUST in Integrated Circuits", March 2007, http://www.darpa.mil/MTO/solicitations/baa07-24/index.html
[9] DoD Defense Science Board Task Force, "High Performance Microchip Supply", Feb 2005, http://www.acq.osd.mil/dsb/reports/2005-02-HPMS_Report_Final.pdf
[10] DoD DISA, "Security Checklists", http://iase.disa.mil/stigs/checklist/index.html
[11] eSecurity Planet, "Salesforce.com Scrambles To Halt Phishing Attacks", http://www.esecurityplanet.com/trends/article.php/3709871/Salesforcecom-Scrambles-To-Halt-Phishing-Attacks.htm
[12] Gartner, Inc., "SaaS CRM Reduces Costs and Use of Consultants", by Michael Maoz, 15 October 2008
[13] Panagiotis G. Ipeirotis, Luis Gravano, and Mehran Sahami, "Probe, Count, and Classify: Categorizing Hidden Web Databases", ACM SIGMOD 2001, Santa Barbara, California, USA
[14] McKinsey & Co., "Clearing the Air on Cloud Computing", Apr 2009, http://uptimeinstitute.org/content/view/353/319/
[15] NDP LLC, "Why is SAS 70 Relevant to SaaS in Today's Regulatory Compliance Landscape?", 2009, http://www.sas70.us.com/industries/saas-and-sas70.php
[16] Thomas Ristenpart, Eran Tromer, Hovav Shacham, and Stefan Savage, "Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds", Proceedings of ACM CCS 2009, Nov 2009, http://cseweb.ucsd.edu/~hovav/dist/cloudsec.pdf
[17] salesforce.com, "ISO 27001 Certified Security", http://www.salesforce.com/platform/cloud-infrastructure/security.jsp
[18] salesforce.com, "Three global centers and disaster recovery", http://www.salesforce.com/platform/cloud-infrastructure/recovery.jsp
[19] Nuno Santos, Krishna P. Gummadi, and Rodrigo Rodrigues, "Towards Trusted Cloud Computing", USENIX Workshop on Hot Topics in Cloud Computing, San Diego, CA, Jun 2009, http://www.usenix.org/events/hotcloud09/tech/full_papers/santos.pdf
[20] Alex Stamos, Andrew Becherer, and Nathan Wilcox, "Cloud Computing Models and Vulnerabilities: Raining on the Trendy New Parade", Black Hat USA Briefings, July 2009, https://media.blackhat.com/bh-usa-09/video/STAMOS/BHUSA09-Stamos-CloudCompSec-VIDEO.mov
[21] UC Berkeley Reliable Adaptive Distributed Systems Laboratory, "Above the Clouds: A Berkeley View of Cloud Computing", Feb 2009, http://radlab.cs.berkeley.edu/
[22] Luis M. Vaquero, Luis Rodero-Merino, Juan Caceres, and Maik Lindner, "A Break in the Clouds: Towards a Cloud Definition", ACM SIGCOMM Computer Communication Review, Volume 39, Number 1, January 2009
[23] United States Government Accountability Office, "Information Assurance: National Partnership Offers Benefits, but Faces Considerable Challenges", Mar 2006, http://www.gao.gov/new.items/d06392.pdf