Improve OpenStack Hybrid Cloud Security With Intel, Mirantis and SoftLayer – David Slovensky
Together Intel, Mirantis and SoftLayer demonstrate how Intel Trusted Execution Technology, attestation and automation can enhance hybrid cloud security
Mastering Application Integration Challenges in Hybrid Cloud Environments – Sam Garforth
These are the slides from the Nastel Red Hat webinar of April 7th 2021. The abstract is:
Many enterprises are adopting OpenShift in their journey to building and running containerized workloads in on-premise, cloud-based or hybrid environments. These initiatives leverage multiple application integration technologies, such as IBM MQ, Apache Kafka or Tibco EMS.
But managing application integration in hybrid cloud environments introduces multiple challenges:
- Need a single point of control for multiple middleware
- Need to grant self-service and delegated authority to development teams
- Need to enable developers to test application message flows
- Need to address middleware upgrades & migrations
In this webinar, we’ll show you how Nastel Navigator can be used in the OpenShift environment to address these challenges:
- Automated discovery of middleware estate
- Simplified configuration management
- Full audit trail of changes (who, what, where, when)
- Secure, granular delegation of specific authorities to development and operations teams
- Full web-based command & control
You can watch the recording of the webinar here http://bit.ly/nastelredhat
IBM MQ provides mission-critical enterprise messaging, offering a foundation on which to extend and build out your hybrid cloud solution. This session shows why IBM MQ is the key messaging technology that many companies trust their business to, on-premise and in the cloud, and how IBM MQ continues to evolve to meet the ever-growing needs of our users and their environments.
With IBM MQ's continuous delivery model, its capabilities are constantly growing. This session includes the updates added in MQ 9.1.2 CD, including the new Uniform Cluster pattern.
Path to Network Functions Virtualization (NFV) Nirvana 2013Andrew Hendry
Presentation outlining the perspective of F5 Networks on an evolutionary path to Network Functions Virtualization (NFV) for telecom operators. Presentation at Carrier Network Virtualization event in Palo Alto, CA in December 2013.
Juniper Announces Availability of Its Contrail SDN Solution; Showcases Custom... – Juniper Networks
This whitepaper dives into Juniper's recent Contrail SDN announcements, highlighting key elements of the technology, its partnerships, and its impact as a disruptor in the space.
JLove Conference 2020 - Reacting to an Event-Driven World – Grace Jansen
We now live in a world with data at its heart. The amount of data being produced every day is growing exponentially, and a large amount of this data is in the form of events. Whether it be updates from sensors, clicks on a website or even tweets, applications are bombarded with a never-ending stream of new events. So, how can we architect our applications to be more reactive and resilient to these fluctuating loads and better manage our thirst for data? In this session, we explore how Kafka and reactive application architecture can be combined in applications to better handle our modern data needs.
Deploying Applications in Today’s Network Infrastructure – Cisco Canada
This presentation prepares networking engineers for the fundamentals of deploying applications in today’s server virtualization infrastructure. The objective of this presentation is to share best practices, tips and tricks on how best to implement Cisco technology such as Cisco UCS and Cisco Nexus 1000v with any virtualization stack. During this presentation we will analyze and dissect two server virtualization use cases recently architected. These use cases consist of a multi-tenant private cloud and a virtual desktop infrastructure for thousands of users.
In this presentation we show how IBM MQ can be used to provide a secure, reliable messaging fabric across multiple clouds from on-premises private clouds to a range of public cloud providers including a managed service on IBM Cloud.
Enterprise messaging with IBM MQ is a critical part of any system, and this session shows you how MQ is rapidly evolving to meet your needs. Irrespective of your platform or environment, this session introduces many of the updates to MQ in 2019 and 2020, whether that's in administration, building fault-tolerant, scalable messaging solutions, or securing your systems.
White paper from Cohesive Networks - Cloud Security Best Practices - Part 2
Learn about Defense in Depth, layers of security for cloud networking, and how you as the application owner can take back control of networking security features with VNS3.
Presented at MQ Technical Conference 2018
More businesses are discovering the benefits of the cloud and moving part or all of their infrastructure onto cloud platforms. In this session we will look at how you can utilize IBM MQ in the cloud, including the considerations you must make before moving your MQ infrastructure there. We will also look at what resources are available as a starting point for moving IBM MQ to the cloud.
Cisco ACI & F5 Integrate to Transform the Data Center – F5NetworksAPJ
To meet business expectations without compromising on security, availability, or performance, today’s IT organizations are expected to deliver applications with a speed and efficiency that was unimaginable just a few years ago. To keep pace, you must transform your data center infrastructure to support the rapid provisioning and scaling of network and application services. With the joint solution of Cisco Application Centric Infrastructure (ACI) and F5 Synthesis™, you can operationalize the network and accelerate application deployment.
Thanks to the advent of public and private clouds, both IT and business have become more agile – more able to quickly respond to fluctuating needs and demands in information processing. However, to achieve a fully agile infrastructure, businesses need to integrate their traditional IT with clouds in all their variants. Hybrid clouds provide that path forward.
For companies considering a hybrid cloud infrastructure, there are significant concerns, with security being number one. Companies must protect corporate data and applications, even as that data moves in a geographically distributed IT infrastructure. Simultaneously, they must ensure the security of data from point of capture at the edge to consumption and storage in the back end. A second concern is ease of infrastructure management and maintenance. This concern becomes more relevant as the number of vendors and management interfaces increase. A related concern has to do with simplifying management and maintenance with automation. For automation to succeed, it requires a policy-driven infrastructure. Finally, because businesses are ultimately looking for greater agility from hybrid clouds, another key concern is the ease of application development and application deployment to production.
For this paper, we used publicly available information to compare two major hybrid cloud technology and service companies: Cisco, through its hybrid cloud portfolio, and HP, through its Helion portfolio. Although it is difficult to pinpoint exactly where each vendor falls in the hybrid cloud spectrum, we can draw a few broad conclusions. The Cisco approach is network-centric and application-centric. The HP approach, on the other hand, is more infrastructure-centric, with an emphasis on developer support, and includes some elements to support the software development lifecycle. The differences between the two companies’ approaches are clearest on the question of security. From our research, it is clear that HP and Cisco are both strong contenders. Their offerings span compute, storage, and network for hybrid clouds and offer different approaches to and levels of security, automation, SDLC support, network virtualization, cloud management, workload mobility technologies, and more. Each company has its own specific target niche in enterprise cloud deployments.
As the interconnectivity between private and public clouds grows, the world of the hybrid cloud is quickly changing. We expect significant changes in the near future, not only in offerings from Cisco and HP, but in the hybrid cloud ecosystem generally. We look forward to watching how Cisco, HP, and other cloud vendors adapt to the expansions and shifts in the future of the hybrid cloud.
DEVNET-1008 Private or Public or Hybrid? Which Cloud Should I Choose? – Cisco DevNet
With the advent of cloud computing, the choices for delivery and consumption of applications have drastically increased. With choices comes complexity. Enterprises often find themselves struggling to decide if public, private or hybrid cloud is the best choice for their needs. This session will talk about the pros and cons of public, private and hybrid cloud. It will also describe how Cisco Intercloud Fabric (ICF) can provide the best of both worlds.
Cisco Fog Computing Solutions: Unleash the Power of the Internet of Things – HarshitParkar6677
The Internet of Things (IoT) speeds up awareness and response to events. It’s transforming whole industries, including manufacturing, oil and gas, utilities, transportation, public safety, and local government.
But the IoT requires a new kind of infrastructure. The cloud by itself can’t connect and analyze data from thousands and millions of different kinds of things spread out over large areas. Capturing the power of the IoT requires a solution that can:
● Connect new kinds of things to your network. Some of them might be in harsh environments. Others might communicate using industrial protocols, not IP.
● Secure the things that produce data, and secure the data as it travels from the network edge to the cloud. This requires a combination of physical security and cybersecurity.
● Handle an unprecedented volume, variety, and velocity of data. Billions of previously unconnected devices are generating more than two exabytes of data each day. Sending all of it to the cloud for analysis and storage is not practical. Plus, in the time it takes to send data to the cloud for analysis, the opportunity to act on it might be gone.
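The volume and latency argument above is the core idea of fog (edge) analytics: filter and aggregate data close to the sensors, and forward only summaries and urgent alerts rather than every raw sample. A minimal sketch of that pattern in plain Python; the threshold, window size, and sample values are illustrative assumptions, not part of any Cisco product:

```python
from statistics import mean

THRESHOLD = 80.0  # illustrative alert threshold, not a real product setting


def edge_summarize(readings, window=5):
    """Aggregate raw sensor readings at the edge: emit one summary per
    window, plus any out-of-range values that need immediate action,
    instead of shipping every raw sample to the cloud."""
    summaries, alerts = [], []
    for i in range(0, len(readings), window):
        batch = readings[i:i + window]
        summaries.append({"start": i, "avg": mean(batch), "max": max(batch)})
        # Urgent values are still forwarded right away, not just summarized.
        alerts.extend(v for v in batch if v > THRESHOLD)
    return summaries, alerts


# Ten raw samples shrink to two summaries; the single out-of-range
# reading is still surfaced immediately as an alert.
raw = [70.1, 71.3, 69.8, 85.2, 70.0, 70.4, 70.9, 71.1, 70.2, 70.6]
summaries, alerts = edge_summarize(raw)
```

This keeps bandwidth proportional to the number of windows rather than the number of samples, which is the trade-off the passage describes.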
As service providers increasingly provide cloud-based services to enterprises and small businesses in virtual and multi-tenant environments, their security strategies must continually evolve to detect and mitigate emerging threats. In the VMDC reference architecture, physical and virtual infrastructure components such as networks (routers and switches), network-based services (firewalls and load balancers), and computing and storage resources are shared among multiple tenants, creating shared multi-tenant environments.
Security is especially important in these environments because sharing physical and virtual resources increases the risk of tenants negatively impacting other tenants. Cloud deployment models must include critical regulatory compliance such as Federal Information Security Management Act (FISMA), Health Insurance Portability and Accountability Act (HIPAA), and Payment Card Industry Data Security Standard (PCI DSS).
The VMDC Cloud Security 1.0 solution enables customers to:
• Detect, analyze, and stop advanced malware and advanced persistent threats across the attack continuum.
• Consistently enforce policies across networks and accelerate threat detection and response.
• Access global intelligence using the right context to make informed decisions and take fast, appropriate action.
• Comply with security requirements for regulatory requisites such as FISMA, HIPAA, and PCI.
• Support secure access controls to prevent business losses.
• Secure data center services using application and content security.
Specific benefits include:
1. Demonstrated solutions to critical technology-related problems in evolving IT infrastructure—Provides support for cloud computing, applications, desktop virtualization, consolidation and virtualization, and business continuity.
2. Reduced time to deployment—Provides best-practice recommendations based on a fully tested and validated architecture, facilitating technology adoption and rapid deployment.
3. Reduced Risk—Enables enterprises and service providers to deploy new architectures and technologies with confidence.
4. Increased Flexibility—Provides rapid, on-demand, workload deployment in a multi-tenant environment using a comprehensive automation framework with portal-based resource provisioning and management capabilities.
5. Improved Operating Efficiency—Integrates automation with a multi-tenant pool of computing, networking, and storage resources to improve asset use, reduce operation overhead, and mitigate operation configuration errors.
How Cisco Migrated from MapReduce Jobs to Spark Jobs - StampedeCon 2015 – StampedeCon
At the StampedeCon 2015 Big Data Conference: The starting point for this project was a MapReduce application that processed log files produced by the support portal. This application was running on Hadoop with Ruby Wukong. At the time of the project start it was underperforming and did not show good scalability. This made the case for redesigning it using Spark with Scala and Java.
Initial review of the Ruby code revealed that it was using disk IO excessively in order to communicate between MapReduce jobs. Each job was implemented as a separate script passing large data volumes through. Spark is more efficient at managing intermediate data passed between MapReduce jobs – not only does it keep the data in memory whenever possible, it often eliminates the need for intermediate data at all. However, that alone did not bring us much improvement, since there were additional bottlenecks at the data aggregation stages.
The application involved a global data ordering step, followed by several localized aggregation steps. This first global sort required a significant data shuffle, which was inefficient. Spark allowed us to partition the data and convert a single global sort into many local sorts, each running on a single node and not exchanging any data with other nodes. As a result, several data processing steps started to fit into node memory, which brought about a tenfold performance improvement.
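The partitioning idea above (the same shape as Spark's repartition followed by a per-partition sort) can be sketched generically. This is a minimal illustration in plain Python, not the project's actual Spark/Scala code; the log-record fields and keys are hypothetical:

```python
from collections import defaultdict


def partitioned_sort(records, part_key, sort_key, num_partitions=4):
    """Hash-partition records by a key, then sort each partition
    independently. This replaces one expensive global sort with many
    local sorts that never exchange data with each other, mirroring
    the optimization described in the talk."""
    partitions = defaultdict(list)
    for rec in records:
        # All records sharing a partition key land in the same partition,
        # so later per-key aggregation never needs to cross partitions.
        partitions[hash(part_key(rec)) % num_partitions].append(rec)
    return {pid: sorted(part, key=sort_key)
            for pid, part in partitions.items()}


# Hypothetical log records: order each user's events by timestamp.
logs = [{"user": u, "ts": t} for u, t in
        [("a", 3), ("b", 1), ("a", 1), ("c", 2), ("b", 2)]]
result = partitioned_sort(logs,
                          part_key=lambda r: r["user"],
                          sort_key=lambda r: (r["user"], r["ts"]))
```

Because each partition fits in memory and is sorted on its own, no global shuffle is needed; per-user aggregations can then run partition by partition.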
Key Trends Shaping the Future of Infrastructure – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Cisco Hybrid Cloud Solution for IT Capacity Augmentation
Contents

Preface
  Audience

Chapter 1: Introduction
  Intercloud Fabric Overview

Chapter 2: Hybrid Cloud Use Cases
  Workload Offloading
  Distributed Workload
  Planned Peak Capacity
  Applications Used within Use Cases

Chapter 3: Design Overview
  Cisco Intercloud Fabric for Business
    Cisco Intercloud Fabric Director
    Self-Service IT Portal and Service Catalog
    Cisco Intercloud Fabric Secure Extension
  Cisco Intercloud Fabric Core Services
    Cisco Intercloud Fabric Firewall Services
    Cisco Intercloud Fabric Routing Services
    Cisco Secure Intercloud Fabric Shell
    VM Portability and Mobility
  Cisco Intercloud Fabric for Providers
    Cisco Intercloud Fabric Provider Platform

Chapter 4: Implementation and Configuration
  Initial Intercloud Fabric Deployment within the Enterprise
  Deployment of the IcfCloud Link (IcfCloud)
  Cloud VMs (cVM), Virtual Data Centers (vDC), and Categories
  Intercloud Fabric Implementation for Cisco Powered Provider
  Intercloud Fabric Implementation for Amazon
    AWS ICF Router Implementation
      Deploying ICF Router
      Enabling Inter-VLAN Routing
      Extended Routing and NAT Configuration
    ICF Firewall Implementation into AWS
      Create ICF Firewall Data Interface Port-Profile
      Create ICF Firewall Data Interface IP Pool
      Add ICF Firewall Services to the IcfCloud
      Using PNSC to Configure and Deploy the ICF Firewall Service
      Add (Optional) vZone(s)
      Create Security Profile(s)
      Create Firewall Service Paths
      Associate Service Paths to Port Profiles
      ICF Firewall Rule Verification with a Syslog Server
      Configuring an ICF Firewall
  Intercloud Fabric Implementation for Azure
  Intercloud Fabric Implementation for Use Case 1, 3-Tier Offloading
  Intercloud Fabric Implementation for Use Case 2, Distributed Workload
  Intercloud Fabric Implementation for Use Case 3, Planned Peak Capacity
  Use Case Testing and Results
    3-Tier Offloading to Azure
    3-Tier Offloading to Cisco Powered Provider
    3-Tier Offloading to AWS
    Distributed Workload with Azure
    Distributed Workload with AWS
    Planned Peak Capacity with Cisco Powered Provider

Appendix A: Recommended Practices and Caveats
  Recommended Practices
    Application Deployment Validation for Hybrid Environments
    Network Planning for Cisco Intercloud Fabric
    Naming Convention
    High Level Security Recommendations
  Caveats

Appendix B: Technical References

Appendix C: Terms and Acronyms
Preface
This document provides guidance and best practices for deploying Cisco Hybrid Cloud Solution for IT
Capacity Augmentation use cases, allowing customers to seamlessly extend the enterprise network and
security, and manage workloads on different Public Clouds, such as AWS, Azure, and Cisco Powered
Provider.
The design has undergone an intensive test program. The goal of this validated solution is to minimize the total cost of ownership (TCO) for customers looking to deploy Intercloud Fabric for Business, by accelerating and simplifying its deployment. The focus is on Intercloud Fabric for Business and end-to-end solution validation, in the context of the Capacity Augmentation use case and three
specific sub-use cases:
1. Generic Workload Offloading (with and without network and security services)
2. Distributed Generic Workload (with and without network and security services)
3. Planned Peak Capacity
This guide supplements the general Cisco Intercloud Fabric document.
Audience
This document is intended for, but not limited to, IT managers and architects, sales engineers, field consultants, professional services, Cisco channel partner engineering staff, and any customers who want to understand how to seamlessly place and manage their virtualized workloads in a hybrid cloud environment.
Chapter 1: Introduction
The Cisco Validated Design (CVD) for the Hybrid Cloud Solution for IT Capacity Augmentation helps customers accelerate the implementation of the Intercloud Fabric solution and achieve a faster, more flexible response to business needs, addressing the following potential challenges of hybrid cloud implementation:
• Workloads placement across heterogeneous Private and Public Clouds
• Secure extension from Private Cloud to Public Cloud
• Unified management and networking to move workloads across clouds
Cisco Intercloud Fabric is a software solution that enables customers to manage and access their workloads across multiple Public Clouds in a heterogeneous environment, providing the choice and flexibility to place workloads where they benefit the most, according to technical needs (capacity, security, and so on) or business needs (compliance, and so on). Figure 1-1 shows the solution footprint for Enterprise customers: Cisco Intercloud Fabric for Business is deployed in a heterogeneous Private Cloud or virtualized environment, and Cisco Intercloud Fabric for Provider is a multi-tenant software appliance installed and managed by the cloud providers that are part of the Cisco Intercloud Fabric ecosystem. In addition, Cisco Intercloud Fabric can access Amazon (EC2) and Azure Public Clouds using native APIs, without the need for Cisco's Intercloud Fabric for Provider.
Figure 1-1 Cisco Intercloud Fabric Solution
Along with the benefits for Enterprise and business customers, the Cisco Intercloud Fabric solution also enables Cisco Powered Providers to generate an additional revenue stream on top of multiple Cisco reference architectures, such as the Virtual Multiservice Data Center (VMDC). Intercloud Fabric supports heterogeneous workloads, simplifying tenant needs and abstracting infrastructure requirements.
This design guide focuses on Cisco Intercloud Fabric for Business and its end-to-end aspects, including the environment configuration used to demonstrate the use cases discussed later, the tests and results achieved, and best practices.
The solution validation includes a discussion of Capacity Augmentation, helping customers understand how Cisco Intercloud Fabric can be leveraged to support such scenarios and help IT departments support their lines of business. Capacity Augmentation breaks down into three sub-use cases, as follows:
• Workload Offloading (with and without network and security services)—Workload Offloading use cases focus on offloading a complete 3-tier application (Web/App/DB services) from the Enterprise into the Service Provider Cloud. In some Service Provider environments, the Enterprise would deploy firewall, load balancing, and routing services for data traffic being extended into the cloud. Test cases were executed both with and without services.
• Distributed Workload (with and without network and security services)—The web front-end services of a 3-tier application are deployed and verified in the Service Provider Cloud, while the application and database services reside in the Enterprise Data Center. In some Service Provider environments, the Enterprise would deploy firewall, load balancing, and routing services for the web traffic that is extended into the cloud. Test cases were executed both with and without services.
• Planned Peak Capacity—In the Planned Peak Capacity use case, Enterprise customers can temporarily use Service Provider Cloud resources to burst their workloads into the Public Cloud to meet seasonal demands. The resources are released/decommissioned in the Public Cloud when high-demand processing finishes.
Intercloud Fabric Overview
The Hybrid Cloud solution objective is to unify all clouds and provide ubiquitous end user access to any
services in the cloud. For example, the end users in the Private Cloud or virtualized environments have
access to services in the Virtual Private Cloud (vPC) or Public Cloud as if accessing the resources in the
Private Cloud. From here, both vPC and Public Cloud are referred to as “Provider Cloud”, and both
Private Cloud or virtualized environment are referred to as “Private Cloud”.
The Intercloud Fabric Director (ICFD) Administrative Interface or the ICFD user interface is used for
the provisioning of applications and compute resources in the Provider Cloud.
These applications and compute resources can be instantiated in the Service Provider Cloud by the administrator or an end user, or, if permitted, existing resources within the Enterprise environment may be offloaded to the Service Provider Cloud.
Note When this document refers to an application or workload, it means VMs (Virtual Machines), which host Enterprise applications and workloads. At this time, the unit of operation of Cisco Intercloud Fabric is the VM.
ICF utilizes existing Enterprise resources such as DHCP, SMTP, and Active Directory (AD) to verify that resources are available for provisioning and that the person performing the provisioning has the credentials and authority required to provision those resources.
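The authority check described above amounts to a role-to-permission lookup. A minimal sketch follows; the role names and permission strings are illustrative assumptions, not Cisco ICF's actual authorization model.

```python
# Illustrative role-based provisioning authority check.
# Roles and permissions below are assumptions for the sketch,
# not Cisco ICF's actual model.
ROLE_PERMISSIONS = {
    "it-admin": {"create_vm", "offload_vm", "configure_network"},
    "end-user": {"create_vm"},
}

def may_provision(role: str, action: str) -> bool:
    """Return True if the given role is authorized to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

In a real deployment, the role itself would come from the Enterprise directory (AD/LDAP) rather than a static table.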
The ICF solution provides essential automated management and orchestration that allows organizations
to control and manage cloud-based services transparently throughout their life cycles. This covers a
diverse range of cloud deployments that flexibly scale from test and development to production
workloads, and from initial cloud pilots to large-scale Enterprise-wide initiatives, for delivering
maximum value to customers.
Chapter 2: Hybrid Cloud Use Cases
As Enterprises are adopting both the Private and Provider Clouds (Public Clouds), they want the
flexibility to place their workloads in either of these two clouds based on their needs, as well as company
policy and/or compliance requirements. As the Enterprise business grows rapidly and requires additional
compute resources, Enterprise IT wants to take advantage of resources in the Provider Cloud rather than
building out additional Data Centers or adding additional compute resources in their Private Cloud.
Also, in peak season, Enterprises may need to place some of their workloads in the Provider Cloud to meet demand while keeping their sensitive data in the Private Cloud. However, if the Enterprise connects to the Provider Cloud over a WAN, latency and bandwidth (BW) costs may be a concern, since most applications have strict latency requirements. It is common to find an Enterprise Data Center or Private Cloud co-located with the Provider Cloud, in which case latency between application servers and tiers is not a concern.
This design guide emphasizes Capacity Augmentation use cases and sub-use cases that include
Workload Offloading, Distributed Workload, and Planned Peak Capacity.
Workload Offloading
The Workload Offloading use case, with or without network and security services, focuses on the ability of Intercloud Fabric to help customers use the additional capacity of Provider Clouds to offload an existing application running in the Private Cloud, while extending network and security policies. The use case focuses on offloading a complete 3-tier application (Web/App/DB services) from the Enterprise into the Provider Cloud. In some Service Provider environments, the Enterprise deploys firewall, load balancing, and routing services for data traffic extended into the cloud. Test cases were executed both with and without services.
Note Intercloud Fabric is not positioned as a migration tool by itself. It includes an offload capability for moving VMs and seamlessly extending the network and security to the Provider Cloud, while keeping the control point at the Enterprise or business customer. For one-time migration purposes, where there is no need to extend the network and security or maintain control from a portal in the Enterprise, Cisco recommends other tools from partners.
Distributed Workload
In a hybrid cloud scenario, applications are eventually deployed in a distributed fashion, across dispersed locations. Intercloud Fabric enables customers to manage multiple Provider Clouds as a seamless extension of the Private Cloud, which makes it easier to deploy distributed applications. This powerful capability also makes it important to understand application requirements before distributing the application.
As part of the Distributed Workload use case, with or without network and security services, the web front-end services of a 3-tier application are deployed and verified in the Provider Cloud, while the application and database services reside in the Enterprise Data Center. In some Service Provider environments, the Enterprise deploys the firewall, load balancing, and routing services for the web traffic that extends into the cloud. Test cases were executed both with and without services.
Planned Peak Capacity
In the Planned Peak Capacity use case, Enterprise customers use Service Provider Cloud resources to
temporarily burst their workloads to meet any seasonal demands. The resources are
released/decommissioned in the Provider Cloud when high-demand processing finishes.
Cisco Intercloud Fabric manages the creation and access to the VMs in the Provider Clouds, extending
the network and Enterprise configured security policies, all while managing the life-cycle of the cloud
positioned VM.
Cisco Intercloud Fabric exposes APIs on the business side that can be used by monitoring systems and/or cloud platforms to trigger the instantiation of additional VMs for a given application, with the new servers and services configured as part of that application. This design guide does not demonstrate APIs or third-party tools.
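As a rough illustration of how a monitoring system might drive such an API, the sketch below assembles a scale-out request. The endpoint path, field names, and identifiers are hypothetical assumptions for illustration only, not the documented Intercloud Fabric API.

```python
import json

# Hypothetical base URL for an ICFD northbound API; illustrative only.
ICFD_BASE_URL = "https://icfd.example.enterprise.local/api"

def build_scale_out_request(application: str, catalog_item: str,
                            vm_count: int, vdc: str) -> dict:
    """Assemble the URL and JSON body of a request asking the portal to
    instantiate additional VMs for an application (field names assumed)."""
    return {
        "url": f"{ICFD_BASE_URL}/applications/{application}/vms",
        "body": json.dumps({
            "catalogItem": catalog_item,  # template to clone for the new VMs
            "count": vm_count,            # number of VMs to add
            "vdc": vdc,                   # target virtual data center
        }),
    }

request = build_scale_out_request("wamp-stack", "wfe-template", 2, "aws-vdc-1")
```

A monitoring system would POST such a body when a capacity threshold is crossed, and issue a corresponding teardown request when demand subsides.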
Applications Used within Use Cases
Two 3-tier applications were used throughout the testing: a deployment of Microsoft SharePoint and a WAMP (Windows, Apache, MySQL, PHP) stack. Each of these was deployed to the different provider environments, with some differentiation based on the availability of services (Table 2-1).
A further breakdown of these subcomponents is shown in Table 2-2 and Table 2-3, with the database resource varying due to differences in provider OS support.
Table 2-1 Service Providers, Services and Applications

Provider                         Services                               Application
Amazon EC2                       ICF Firewall, ICF Router, HAProxy(1)   3-Tier WAMP Stack / 3-Tier SharePoint
Microsoft Azure                  HAProxy                                3-Tier WAMP Stack / 3-Tier SharePoint
Cisco Powered Provider (ICFPP)   HAProxy                                3-Tier WAMP Stack / 3-Tier SharePoint

(1) HAProxy = open source load balancer
An open source load balancer application was deployed in the Enterprise and, depending on the use case, was offloaded to the Service Provider Cloud to load balance and monitor traffic destined for each of the web front-end servers. The HAProxy application was installed on both Red Hat Enterprise Linux 6.3 and CentOS 6.3 virtual machines and deployed into the Enterprise's VMware environment. For more information regarding HAProxy and its functionality, refer to the HAProxy web site.
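For orientation, a minimal HAProxy configuration for the role described above might look as follows; the addresses, ports, and server names are illustrative assumptions, not the configuration used in the validation.

```haproxy
# Minimal sketch: balance HTTP traffic across two web front-end VMs
# and expose a statistics page for monitoring. Addresses are assumptions.
global
    daemon
    maxconn 2000

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_in
    bind *:80
    default_backend web_front_ends

backend web_front_ends
    balance roundrobin
    option httpchk GET /          # health-check each WFE over HTTP
    server wfe1 10.0.1.11:80 check
    server wfe2 10.0.1.12:80 check

listen stats
    bind *:8404
    stats enable
    stats uri /stats              # traffic monitoring page
```

The health checks let HAProxy stop sending traffic to a web front end that has been offloaded, powered off, or is otherwise unreachable.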
Table 2-2 SharePoint 3-Tier Application

Quantity   Resource              OS                    Component
2-4        Web Front End (WFE)   Windows 2008 R2 SP1   MS IIS
1          Application (App)     Windows 2008 R2 SP1   MS SharePoint
2          Database (DB)         Windows 2008 R2 SP1   MS SQL Cluster
Table 2-3 WAMP 3-Tier Application

Quantity   Resource              OS                             Component
2          Web Front End (WFE)   Windows 2008 R2 SP1            MS IIS
1          Application (App)     Red Hat Enterprise Linux 6.3   Tomcat/PHP
1          Database (DB)         CentOS 6.3/RHEL 6.3            MySQL
Chapter 3: Design Overview
The Cisco Intercloud Fabric solution helps customers seamlessly extend their network and security policies from the Private Cloud to the Provider Cloud, while maintaining the point of control in the Enterprise (for example, in an IT department). This section discusses the design points of the solution.
Figure 3-1 shows the overall high-level design for the Cisco Intercloud Fabric solution; the following sections describe the key aspects of the solution architecture.
Figure 3-1 Cisco Intercloud Fabric Solution Overview
The Cisco Intercloud Fabric architecture provides two product configurations to address the following
two consumption models:
• Cisco Intercloud Fabric for Business (focus of this design guide)
• Cisco Intercloud Fabric for Providers
Cisco Intercloud Fabric for Business
Cisco Intercloud Fabric for Business is intended for Enterprise customers who want to be able to
transparently extend their Private Cloud into Public Cloud environments, while keeping the same level
of security and policy across environments. Cisco Intercloud Fabric for Business consists of the
following components:
• Cisco Intercloud Fabric Director
• Cisco Intercloud Fabric Secure Fabric
Cisco Intercloud Fabric Director
Workload management in a hybrid environment goes beyond the capability to create and manage virtual services in a Private, Public, or Provider Cloud and to extend the network. Both capabilities are part of the overall hybrid cloud solution, which also needs to provide other types of services, such as policy capabilities (placement, quotas, and so on), the ability to manage workloads in heterogeneous environments, and the other capabilities discussed here.
Cisco Intercloud Fabric Director (ICFD) provides end users and IT administrators a seamless experience for creating and managing workloads across multiple clouds. It is the single point of management and consumption for the hybrid cloud solution.
Heterogeneous cloud platforms are supported by Cisco ICFD in the Private Cloud, which operationally
unifies workload management in a cloud composed of different cloud infrastructure platforms, such as
VMware vSphere and vCloud, Microsoft Hyper-V and System Center Virtual Machine Manager
(SCVMM), OpenStack, and CloudStack. This unification provides a holistic workload management
experience and multiple options for cloud infrastructure platforms for the customers. Cisco ICFD
provides the required software development kit (SDK) and APIs to integrate with the various cloud
infrastructure platforms.
Cisco ICFD exposes northbound APIs that allow customers to programmatically manage their
workloads in the hybrid cloud environment or to integrate with their management system of choice,
which allows more detailed application management that includes policy and governance, application
design, and other features.
Future releases of Cisco ICFD plan to include enhanced services that differentiate the Cisco Intercloud
Fabric solution, such as bare-metal workload deployment in a hybrid cloud environment and an
enhanced IT administrative portal with options to configure disaster recovery and other services.
Self-Service IT Portal and Service Catalog
The Cisco ICFD self-service IT portal makes it easy for IT administrators to manage and consume hybrid
cloud offers, and for the end users to consume services. For end users, Cisco ICFD provides a service
catalog that combines offers from multiple clouds and a single self-service IT portal for hybrid
workloads.
For IT administrators, Cisco ICFD has an IT administrative portal from which administrators can
perform the following administrative tasks:
• Configure connection to Public and Enterprise Private Clouds.
• Configure roles and permissions and Enterprise Lightweight Directory Access Protocol (LDAP)
integration.
• Add and manage tenants.
• Configure basic business policies that govern workload placement between the Enterprise and
Public Clouds; advanced policies are available in the management layer.
• Customize portal branding.
• Monitor capacity and quota use.
• Browse and search the service catalog and initiate requests to provision and manage workloads in
the cloud.
• View the workload across multiple clouds and offloaded workloads as necessary.
• Manage user information and preferences.
• Configure catalog and image entitlement.
• Configure virtual machine template and image import, categorization, and entitlement.
• Perform Cisco Intercloud Fabric Secure Extension management.
• Additional capabilities will be added through the end-user and IT administrative portals in future releases.
Cisco Intercloud Fabric Secure Extension
All data in motion is cryptographically isolated and encrypted within the Cisco Intercloud Fabric Secure
Extender. This data includes traffic exchanged between the Private and Public Clouds (site-to-site) and
the virtual machines running in the cloud (VM-to-VM). A Datagram Transport Layer Security (DTLS)
tunnel is created between endpoints to more securely transmit this data. DTLS is a User Datagram
Protocol (UDP)-based, highly secure transmission protocol. The Cisco Intercloud Fabric Extender
always initiates the creation of a DTLS tunnel.
Cisco Intercloud Fabric Core Services
Cisco Intercloud Fabric includes a set of services that are crucial for customers to successfully manage
their workloads across the hybrid cloud environment. These services are identified as Intercloud Fabric
Core Services and are as follows:
• Cloud Security—security enforcement for site to site and VM to VM communications.
• Networking—switching, routing and other advanced network-based capabilities.
• VM Portability—VM format conversion and mobility.
• Management and Visibility—hybrid cloud monitoring capabilities.
• Automation—VM life-cycle management, automated operations and programmatic API.
Future releases of Cisco Intercloud Fabric plan to include an extended set of services, including support for third-party appliances.
Cisco Intercloud Fabric Firewall Services
In traditional Data Center deployments, virtualization presents a need to secure traffic between virtual
machines; this traffic is generally referred to as east-west traffic. Instead of redirecting this traffic to the
edge firewall for lookup, Data Centers can handle the traffic in the virtual environment by deploying a
zone-based firewall. Cisco Intercloud Fabric includes a zone-based firewall that is deployed to provide
policy enforcement for communication between virtual machines and to protect east-west traffic in the
provider Cloud. The virtual firewall is integrated with Cisco Virtual Path (vPath) technology, which
enables intelligent traffic steering and service chaining. The main features of the zone-based firewall
include:
• Policy definition based on network attributes or virtual machine attributes, such as the virtual machine name.
• Zone-based policy definition, which allows the policy administrator to partition the managed virtual
machine space into multiple logical zones and write firewall policies based on these logical zones.
• Enhanced performance due to caching of policy decisions on the local Cisco vPath module after the
initial flow lookup process.
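Conceptually, the zone-based evaluation described above maps VMs to logical zones by attribute, matches rules written against those zones, and caches the decision after the first lookup. The sketch below illustrates the idea only; the zone definitions, rule fields, and cache are assumptions, not the actual PNSC/vPath implementation.

```python
# Conceptual sketch of zone-based firewall evaluation (illustrative only).
from functools import lru_cache

# Zone membership derived from a VM attribute (here, the VM name).
ZONES = {
    "web": lambda vm: vm.startswith("wfe"),
    "db":  lambda vm: vm.startswith("db"),
}

# Rules are written against logical zones, not addresses.
RULES = [
    {"src": "web", "dst": "db", "port": 3306, "action": "permit"},
]

def zone_of(vm_name: str):
    """Return the first zone whose membership test the VM satisfies."""
    return next((z for z, match in ZONES.items() if match(vm_name)), None)

@lru_cache(maxsize=None)  # mimics caching the decision after the first lookup
def decide(src_vm: str, dst_vm: str, port: int) -> str:
    src, dst = zone_of(src_vm), zone_of(dst_vm)
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src, dst, port):
            return rule["action"]
    return "deny"  # default deny for east-west traffic
```

Because policy is keyed on zones rather than addresses, a VM that is offloaded to the Provider Cloud keeps the same enforcement as long as its attributes still place it in the same zone.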
Cisco Intercloud Fabric Routing Services
Cisco Intercloud Fabric Secure Extender provides a Layer 2 (L2) extension from the Enterprise Data
Center to the provider Cloud. To support Layer 3 (L3) functions without requiring traffic to be redirected
to the Enterprise Data Center, Cisco Intercloud Fabric also includes a virtual router. The virtual router
is based on proven Cisco IOS® XE Software and runs as a virtual machine in the provider Cloud. The
router deployed in the cloud by Intercloud Fabric serves as a virtual router and firewall for the workloads
running in the provider Cloud and works with Cisco routers in the Enterprise to deliver end-to-end Cisco
optimization and security. The main functions provided by the virtual router include:
• Routing between VLANs in the provider Cloud.
• Direct access to cloud virtual machines.
• Connectivity to Enterprise branch offices through a direct VPN tunnel to the Service Provider's Data
Center.
• Access to native services supported by a Service Provider: for example, use of Amazon Simple
Storage Service (S3) or Elastic Load Balancing services.
Cisco Secure Intercloud Fabric Shell
Cisco Secure Intercloud Fabric Shell (Secure ICF Shell) is a high-level construct that identifies a group
of VMs and the associated Cloud Profiles, and it is designed to be portable and secure across clouds. A
cloud profile includes the following configurations:
• Workload Policies—a set of policies created by the Enterprise IT Admin via Intercloud Fabric
Director portal to define what networks are to extend, security enforcements to be applied to the
workloads in the cloud, and other characteristics such as DNS configuration.
• Definition of the Site-to-Site and VM to VM Secure Communication—IT Admins manage,
enable, or disable secure tunnel configurations between the Private and Public Clouds and/or
between the VMs in the cloud.
• VM Identity—Intercloud Fabric creates an identity for all the VMs that it manages to ensure that only trusted VMs are allowed to participate in the networks extended to the cloud, communicate with other VMs in the same circle of trust in the Public Cloud, or communicate with other VMs in the Private Cloud.
• Cloud VM Access Control—Intercloud Fabric helps to control the access to the cloud VMs via the
secure tunnel established between Private and Public Clouds, or directly via the VM Public IP
defined and managed via Intercloud Fabric.
VM Portability and Mobility
Cisco Intercloud Fabric allows customers to offload VMs from Enterprise virtualized Data Centers to
the cloud, and back from the cloud to the Data Center. The abstraction of the underlying layers allows
offloading to happen seamlessly regardless of the source and target environments, as long as the
environments are supported by Cisco ICF.
At the time this document was completed, the supported mechanism allowed only cold offloading: to offload a VM from one point to another, the VM is shut down, imported by Cisco ICF for image transformation, and then copied to the destination, where it is powered on and accessed by the users.
The transformation process normalizes the required capabilities between different clouds. For example, a VM offloaded from a VMware environment to AWS requires image conversion from vmdk to AMI, and when a VM is offloaded from AWS to a VMware-based Private Cloud, Cisco ICF converts it from AMI to vmdk. All operations to transform and normalize the workload, whether it is offloaded to the cloud or from the cloud, are performed in the Private Cloud, within Cisco ICF for Business.
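The cold-offload and transformation flow described above can be sketched as an ordered sequence of steps. The step names and the conversion table are illustrative assumptions, not ICF commands or its supported format matrix.

```python
# Conceptual sketch of the cold-offload flow (illustrative only).
# Maps (source, target) environment pairs to (source, target) image formats.
CONVERSIONS = {
    ("vmware", "aws"): ("vmdk", "ami"),
    ("aws", "vmware"): ("ami", "vmdk"),
}

def cold_offload(vm: str, source: str, target: str) -> list[str]:
    """Describe the cold-offload steps: power off, transform the image in
    the Private Cloud, copy to the target cloud, and power on."""
    src_fmt, dst_fmt = CONVERSIONS[(source, target)]
    return [
        f"power_off {vm}",                       # cold offload: VM is shut down
        f"convert {vm}: {src_fmt} -> {dst_fmt}", # performed within the Private Cloud
        f"copy {vm} to {target}",
        f"power_on {vm}",
    ]
```

Running `cold_offload("wfe1", "vmware", "aws")` yields the four steps in order, with the vmdk-to-AMI conversion happening before the image leaves the Private Cloud.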
Cisco does not position ICF as an offloading tool by itself, but as part of the solution to support portability and mobility of the workload, which customers can use to choose where to place a VM as needed in a hybrid cloud environment. Other tools are better positioned for one-time offloading purposes.
Cisco Intercloud Fabric for Providers
Cisco Intercloud Fabric for Providers is intended for Provider Cloud environments, allowing their Enterprise customers to transparently extend their Private Cloud environments into the provider's Public Cloud, while keeping the same level of security and policy across cloud environments. There are two Cisco Intercloud Fabric offers for providers: one for providers that offer managed services, and one for providers that serve only as a target for Intercloud Fabric hybrid workloads. For Service Providers that want to offer managed services, Cisco Intercloud Fabric consists of the following components:
• Cisco Intercloud Fabric Director
• Cisco Intercloud Fabric Secure Fabric
• Cisco Intercloud Fabric Provider Platform
For Service Providers that want just to be a target for hybrid workloads, Cisco Intercloud Fabric consists
of the following components:
• Cisco Intercloud Fabric Provider Platform
Cisco Intercloud Fabric Provider Platform
Cisco Intercloud Fabric Provider Platform (ICFPP) simplifies and abstracts the complexity involved in working with a variety of Public Cloud APIs, and it enables cloud API support for Service Providers that currently do not have it. Cisco ICFPP provides an extensible adapter framework that allows integration with a variety of Provider Cloud infrastructure management platforms, such as OpenStack, CloudStack, VMware vCloud Director, and virtually any other API that can be integrated through an SDK provided by Cisco.
Currently, service providers have their own proprietary cloud APIs (Amazon Elastic Compute Cloud [EC2], Microsoft Windows Azure, VMware vCloud Director, OpenStack, and so on), giving customers limited choices and no easy option to move from one provider to another. Cisco ICFPP abstracts this complexity and translates Cisco Intercloud Fabric API calls to the different provider infrastructure platforms, giving customers the choice to move their workloads regardless of the cloud API exposed by the Service Provider.
Many Service Providers do not provide cloud APIs that Cisco Intercloud Fabric can use to deploy customers' workloads. One option for these providers is to give direct access to their virtual machine managers' SDKs and APIs (for example, through VMware vCenter or Microsoft System Center), but this exposes the provider environment and in many cases is not preferred by Service Providers, for example because of security concerns. Cisco ICFPP, as the first point of authentication for the customer cloud that consumes Provider Cloud resources, enforces highly secure access to the provider environment and provides the cloud APIs required for Service Providers to be part of the provider ecosystem for Cisco Intercloud Fabric.
Cisco Hybrid Cloud Solution for IT Capacity Augmentation
Chapter 3 Design Overview
Cisco Intercloud Fabric for Providers
As the interface between customers' Cisco Intercloud Fabric cloud environments and provider
clouds (Public and virtual Private Clouds), Cisco ICFPP provides a variety of benefits, as described
below:
• Brings standardization and uniformity to cloud APIs, making it easier for Cisco Intercloud Fabric
to consume cloud services from service providers that are part of the Cisco Intercloud Fabric
ecosystem.
• Helps secure access to service providers' underlying cloud platforms.
• Limits the utilization rate per customer and tenant environment.
• Provides northbound APIs for service providers to integrate with existing management platforms.
• Supports multi-tenancy.
• Provides tenant-level resource monitoring.
• In the future, it will help build Cisco infrastructure-specific differentiation.
• In the future, it will support enterprises deploying bare-metal workloads in the provider Cloud.
Chapter 4 Implementation and Configuration
Intercloud Fabric for Business works with a growing number of provider options. The providers
supported during this release are Amazon Web Services, Microsoft Azure, and Cisco Powered Provider
Public Cloud. For more information, refer to the Installation and Upgrade Guides.
Initial Intercloud Fabric Deployment within the Enterprise
Figure 4-1 shows the Intercloud Fabric Enterprise deployment topology.
Figure 4-1 Topology Overview
This section provides a high-level overview of the Intercloud Fabric implementation for all simulated
Enterprise environments used in testing. More detailed information is provided in later sections
discussing specifics about the connection deployments for each of the three service providers that were
used.
Within each local Enterprise environment, both a Microsoft Active Directory (AD) server and a
Domain Name System (DNS) server were already installed. The Microsoft DNS and AD servers were
registered and synchronized with ICFD to allow for authentication of users and the registration of
VM names for components provisioned by ICFD. To allow Administrative users to approve Service
Requests submitted by ICFD users, a Simple Mail Transfer Protocol (SMTP) server was also included
in each Enterprise environment.
[Figure 4-1 shows the Enterprise and Provider Cloud topology: the ICFD, PNSC, and cVSM management components, the ICX/ICS tunnel (ICLINK) endpoints, the cVSG and cCSR services, and the extended application VLANs. VLAN 2600 (10.10.10.X/24) is the DHCP-managed VLAN, VLAN 2603 (10.11.233.X/24) the Web/LB server VLAN, VLAN 2604 (10.11.234.X/24) the Application server VLAN, and VLAN 2605 (10.11.235.X/24) the DB server VLAN.]
Note Approver SMTP functionality was not tested as part of this CVD.
As part of each Enterprise compute environment, a Cisco Nexus 1000V virtual distributed switch
(vDS) was used to provide L2 network connectivity between the various LAN segments in the
Enterprise. Each compute environment consisted of one or more Cisco UCS chassis and two B200-M2
server blades running either ESXi version 5.5.0 or 5.1.0. The compute layer was then connected to a
network topology based on a Cisco Virtual Multi-Tenant Data Center (VMDC 2.2) design. Enterprise
networks were configured as separate tenant containers (Virtual Routing Domains) within the same
physical network. For more information related to the VMDC 2.2 network architecture, refer to the
VMDC 2.2 Design Guide.
Note Refer to Appendix A, “Recommended Practices and Caveats” for more detailed information about the
infrastructure.
For all test topologies, Intercloud Fabric Director was deployed using the OVA image downloaded from
the Cisco web site into a VMware vSphere environment.
After the ICFD OVA deploys, it must be licensed before any further configuration. To install the license,
log into the ICFD web interface as admin and select Administration > License (Figure 4-2).
Figure 4-2 Cisco Intercloud Fabric for Business Licensing
With the license submitted, begin configuring the Infrastructure components: the Prime Network
Services Controller (PNSC) and the Cloud Virtual Supervisor Module (cVSM). The Infrastructure
wizard is started within ICFD under the first pull-down option of the Intercloud tab (Figure 4-3).
Figure 4-3 Cisco Intercloud Fabric Infrastructure Setup
Within the Infrastructure setup, configure the ICFD and register it to the local vCenter server
representing that particular Enterprise environment.
The wizard then provisions either a single cVSM or redundant cVSMs for high availability (HA). For
testing purposes, each Enterprise had a pair of Cisco UCS B-Series servers installed with VMware ESXi
version 5.1 or 5.5. Using two physical hosts permits placing one cVSM on each host to provide high
availability.
The Infrastructure wizard then uploads the components from a tar image that was provided along with
the original ICFD download (Figure 4-4).
Figure 4-4 Infrastructure Bundle Upload
Note At the time of completion of this document, ICF version 2.2.1 was released with major improvements.
The infrastructure bundle is no longer a separate file and is included in the deployment OVA.
With the bundle uploaded, proceed to the summary screen of installation options before beginning the
infrastructure deployment shown in Figure 4-5.
Figure 4-5 Confirmation Summary
The deployment of PNSC and cVSM is completely automated by ICFD and can be monitored by
viewing the corresponding Service Request created within ICFD. PNSC and cVSM (HA) are fully
provisioned in less than 30 minutes.
Deployment of the IcfCloud Link (IcfCloud)
After the ICFD infrastructure deploys, deploy the IcfCloud link to one of the ICF-established Service
Providers (Azure, Cisco Powered Provider, and AWS for the initial release). Prior to linking securely to
each provider, the Enterprise Administrator needs the appropriate service and billing account
credentials. The Service Provider credentials are entered in the ICFD wizard at the time of deployment
and are validated during the initial setup process.
The Enterprise Administrator also needs to configure the IP addresses and VLAN ranges used both for
management of the secure link and for any services to be deployed in the Service Provider Cloud. It is
recommended that all networking and Enterprise resources be identified and configured before
deploying the ICF infrastructure and establishing the IcfCloud link to the Service Provider.
Separate VLANs and IP network segments were used in the validation for the management of the ICF
components and the optional IcfCloud Tunnel interface. During the IcfCloud deployment, the ICF
Administrator has the option to accept the default, which uses the same network and IP address space
for both the tunnel and the management of the ICF components.
ICFD’s IcfCloud wizard is used to deploy the secured network connection to the Service Provider. When
IcfCloud deploys, two primary components are established (or four VM components if HA is selected).
The components are the Intercloud Extender (ICX) VM which resides on the ESXi host within the
Enterprise and the Intercloud Switch (ICS) VM which resides in the Service Provider Cloud. The ICX
and ICS are the endpoints between the Enterprise and the Service Provider for the IcfCloud. The ICX
and ICS components appear as modules within the cVSM and are managed by the PNSC. If HA is
selected at deployment, an IcfCloud is created between each pair of ICX and ICS VMs.
Other options within the IcfCloud deployment wizard include: MAC pools for VMs that may be
instantiated in or offloaded to the Service Provider Cloud; Tunnel Profile options for specifying tunnel
encryption algorithms, protocols, and re-key behavior; IP Groups, used to protect Public-facing
interfaces of VMs deployed in the Service Provider Cloud; and additional services such as the firewall
(ICF Firewall) and routing services (ICF Router), used to secure and provide local routing and NAT
services for VMs deployed in the cloud.
Figure 4-6 Cisco Intercloud Fabric Configuration Details
ICFD version 2.1.2 was used for testing; this version supported services only within the Amazon
EC2 Cloud Provider. Testing and validation in the Amazon Cloud was performed with a cloud
services router (ICF Router) and cloud services firewall (ICF Firewall) deployed by the ICFD.
Note ICF version 2.2.1 has since been released with major improvements, including ICF Firewall and
ICF Router availability in all supported Provider Clouds.
Cloud VMs (cVM), Virtual Data Centers (vDC), and Categories
All client VMs were configured with two network interfaces. NIC0 of each VM was used for Enterprise
management on a non-routable address space, configured by a DHCP server located in the
Enterprise. NIC1's IP address is assigned by one of the following methods:
• For VMs created by the Enterprise administrator using the VMM, the IP address of NIC1 is
manually assigned.
• For VMs instantiated by ICFD in the provider Cloud, NIC1's IP address is assigned from a static IP
pool configured within ICFD.
In ICF, Virtual Data Centers (vDCs) are used to associate both compute resources and users, or user
groupings, to a particular IcfCloud (Figure 4-7).
Figure 4-7 vDC Overview
There are three policies defined in the vDC:
• Compute Policy—Used only for a Private Cloud vDC to identify hypervisor targets for placement
when workloads are offloaded back from the provider Cloud to the Enterprise.
• Network Policy—Used for both Private and Public Cloud vDCs to define the number of network
interfaces and port profile (port group/VLAN) assignments, as well as the IP assignment method
(DHCP / Static IP Pool).
• System Policy—Used only for a Public Cloud vDC to define the naming policy of instantiated VMs
in the provider Cloud and to insert the appropriate DNS information.
To give more flexibility within vDCs, these default policies can be overridden by Categories defined
within the vDC. Categories allow for differing hypervisor host placement or naming, as well as
differing network types that may be required for different applications. In testing, each type of service
(Web, Application, Database) comprising the 3-Tier application was assigned to a Category, providing
name prefixes appropriate to its application type and network interfaces on the appropriate overlay
extended network tiers. Each type of service was assigned a unique VLAN that had been extended to
the Service Provider Cloud. Figure 4-8 shows categories configured in the ICFD for a Private Cloud
vDC, allowing for differentiated compute and network policies depending on the application.
Figure 4-8 Private Cloud vDC Categories
Figure 4-9 shows categories configured in the ICFD for a Public Cloud vDC allowing for differentiated
System (Deployment) and Network policies depending upon the application.
Figure 4-9 Public Cloud vDC Categories
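The default-policy/Category relationship described above can be modeled as a simple override lookup: a vDC carries default policies, and a Category overrides whichever ones it defines. A minimal sketch; the field names and policy strings are illustrative, not ICFD's actual data model:

```python
# Illustrative model of vDC default policies with Category overrides.
# Field names and policy identifiers are hypothetical examples only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Category:
    name: str
    system_policy: Optional[str] = None   # e.g. a naming-prefix policy
    network_policy: Optional[str] = None  # e.g. a port profile / IP method

@dataclass
class VDC:
    name: str
    system_policy: str = "default-naming"
    network_policy: str = "default-net"
    categories: dict = field(default_factory=dict)

    def resolve(self, category_name: str):
        """Return (system, network) policies; a Category overrides defaults."""
        cat = self.categories.get(category_name)
        if cat is None:
            return self.system_policy, self.network_policy
        return (cat.system_policy or self.system_policy,
                cat.network_policy or self.network_policy)

vdc = VDC("public-vdc")
vdc.categories["web"] = Category("web", system_policy="web-naming",
                                 network_policy="vlan2603-net")
print(vdc.resolve("web"))    # ('web-naming', 'vlan2603-net')
print(vdc.resolve("other"))  # ('default-naming', 'default-net')
```

A "web" cVM thus picks up its own naming and network policies, while anything without a Category falls back to the vDC defaults, mirroring the behavior described for the 3-Tier application tiers.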
[Figure 4-7 depicts vDCs in the Enterprise (ICX) and Service Provider (ICS) environments joined by the ICLINK: Private Cloud vDCs carry Compute and Network Policies, while Public Cloud vDCs carry System and Network Policies.]
With vDC Categories applied, an instantiated cVM can receive an appropriate name using a prefix such
as "web-", enumerated with the ICFD Service Request number to ensure uniqueness. Network
interfaces are configured with static IPs from dedicated pools, or are specified to request a
DHCP-supplied IP, as the Network Policy dictates.
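The naming behavior described above (a Category prefix made unique by the Service Request number) can be sketched as:

```python
# Illustrative sketch of Category-based cVM naming. The exact format ICFD
# uses may differ; this only shows the prefix + SR-number idea.
def cvm_name(prefix: str, service_request_id: int) -> str:
    """Compose a cVM name from a Category prefix and the ICFD SR number."""
    return f"{prefix}{service_request_id}"

print(cvm_name("web-", 1042))  # prints "web-1042"
```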
Finally, these Categories can set appropriate Private Cloud target destinations for applications with
differing requirements, allowing some cVMs to return to clusters with faster processors or storage.
Intercloud Fabric Implementation for Cisco Powered Provider
Figure 4-10 shows the components of ICFB and ICFPP working together in a Cisco Powered Provider,
allowing the Enterprise application to span both Cloud environments.
Figure 4-10 Cisco Powered Provider Topology
All of the implementation steps outlined in the previous sections were followed up to the step of
deploying the IcfCloud link to the Cisco Powered Provider. Within the ICFD IcfCloud wizard, Amazon's
EC2 and Microsoft's Azure Cloud are specifically supported, with pull-down menu options specific to
each.
To provide ICF connectivity to other service providers, the cloud infrastructure requires the Service
Provider to deploy Cisco’s Intercloud Fabric Provider Platform (ICFPP) virtual appliance.
ICFPP is a virtual appliance that the Service Provider deploys on its network to provide a cloud
management API interface. ICFPP resides between the ICFB and the Service Provider Cloud platform
(for example, CloudStack, OpenStack, and so on) and provides the following functionality:
• Provides Cloud API standardization for Cisco Powered Service Providers.
• Enables Cloud API support for a Cisco Powered Service Provider that does not otherwise support
a Public Cloud API.
• Abstracts the complexity of different Public Cloud APIs.
[Figure 4-10 mirrors the Figure 4-1 topology, with the ICFPP appliance added in the Provider Cloud between the ICFD and the provider platform.]
Enterprise customers need credentials established by the Cisco Powered Provider to use the
"public facing" API Services presented by the ICFPP appliance. Enterprise Administrators then use
those credentials to authenticate to the ICFPP appliance, create the Intercloud Switch (ICS) component,
and establish the IcfCloud between the Enterprise and the Cisco Powered Provider. For more information
on the ICFPP virtual appliance, refer to the Cisco Intercloud Fabric Architectural Overview.
Intercloud Fabric Implementation for Amazon
Figure 4-11 shows the components of ICFB connecting to Amazon (EC2), allowing the Enterprise
application to span both cloud environments.
Figure 4-11 ICFB Deployment to Amazon (EC2) Topology
All implementation steps outlined in the previous sections led up to deploying the IcfCloud link to
Amazon Web Services (AWS).
The Amazon Hybrid Cloud topology was deployed with both a compute firewall (ICF Firewall) and
routing services (ICF Router) instantiated within the Amazon Cloud by ICFB where they are shown as:
• ICF Firewall = cVSG (Virtual Security Gateway)
• ICF Router = CSR (Cloud Services Router)
These services are managed separately from similar services deployed in the Enterprise environment.
An additional network, needed for firewall services, is provisioned at the time the IcfCloud is
established. In this validation, an additional network (VLAN 1908) was selected to be used by PNSC
to deploy security policies directly to the ICF Firewall. The security policies are then used to allow
or deny network traffic to and from the various cloud VMs deployed in the provider Cloud.
Deployment of the cloud services router (ICF Router) allows for routing of the overlay extended
networks within the Service Provider. The ICF Router acts as a "proxy" gateway for traffic between
cVMs deployed on different network segments within the cloud. For the purposes of this testing,
the ICF Router was configured with an interface on each of the network segments extended from the
Enterprise to the Service Provider. Traffic between the cVMs could then be routed locally without
being sent back to the Enterprise, eliminating any network tromboning. ICF Router functionality is
further explained in the section that follows.
[Figure 4-11 repeats the Figure 4-1 topology with AWS as the Provider Cloud, including the CSR (ICF Router) and cVSG (ICF Firewall) instantiated in EC2.]
The ICF Router was configured for network address translation (NAT) of the load balancer's VIP
address. Using the PNSC administrator's interface, a NAT configuration was applied to translate the
load balancer's VIP address to an Amazon (AWS) public IP address. The VIP's public IP address was
then used by external clients (that is, clients not connected to the Enterprise) to access the web
services of the 3-Tier application over the public Internet.
To allow HTTP, or any other protocol, to be forwarded to a VM within the AWS Cloud, the protocol has
to be permitted on the inbound public IP address assigned by AWS to the ICF Router. AWS recommends
that a specific source address or address range be assigned to the inbound AWS Security Group to
secure access.
Figure 4-12 shows the creation of an AWS Security Group rule within the EC2 Dashboard.
Figure 4-12 AWS Security Group Rule
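This validation created the rule in the EC2 Dashboard; the same rule can also be expressed with the AWS SDK. A minimal boto3 sketch following the recommendation above (the group name, VPC ID, and source CIDR are illustrative placeholders, not values from this validation):

```python
# Hypothetical boto3 sketch of the Security Group rule shown above:
# inbound HTTP restricted to one source range. Names, IDs, and CIDRs
# are illustrative only; AWS credentials are assumed to be configured.

def http_ingress_rule(source_cidr):
    """Build an IpPermissions entry allowing inbound TCP/80 from one CIDR."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": source_cidr}],
    }

def create_restricted_http_group(vpc_id, source_cidr):
    import boto3  # deferred so http_ingress_rule stays usable without boto3
    ec2 = boto3.client("ec2")
    group = ec2.create_security_group(
        GroupName="icf-router-web",
        Description="Inbound HTTP to the ICF Router public IP",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[http_ingress_rule(source_cidr)],
    )
    return group["GroupId"]
```

Restricting `source_cidr` to a known range, rather than 0.0.0.0/0, is the point of the AWS recommendation quoted above.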
AWS ICF Router Implementation
For complete steps and options of the ICF Router, refer to Chapter 6 of the Cisco Intercloud Fabric
Getting Started Guide, Release 2.1.2.
This section provides highlighted procedures for deploying the ICF Router with respect to validated use
cases.
Deploying ICF Router
Enable the IcfCloud for routing and/or firewall services. Before deploying the ICF Router or ICF
Firewall within ICF, assign the supported networks to an Org within their Port Profiles in ICFD.
Figure 4-13 shows the configuration of the Port Profile to support Services within ICFD.
Figure 4-13 Configure Services and Org in Port Profile
An Org is specified or created when editing a Port Profile, or during the creation of a Port Profile. With
an Org in place, it appears in PNSC under Resource Management > Managed Resources. An ICF Router
is added from there using the Actions pull-down menu and selecting the Add Edge Router option, as
shown in Figure 4-14.
Figure 4-14 Adding the ICF Router from the Org “ent4” Shown in PNSC
The following five types of interfaces are available when deploying an ICF Router:
1. Gigabit Ethernet—Data interfaces for inter-VLAN routing, with a minimum of two interfaces.
2. Tunnel—Used for creating an IPsec tunnel.
3. Loopback—Termination point for routing protocols established on the ICF Router.
4. Management—Required interface, using two IPs: one for management access and another
dedicated to PNSC communication.
5. Public Cloud—Optional interface to allow external access to cVMs as well as externally accessible
NATs.
The validation focused on using the interfaces shown in Figure 4-15.
Figure 4-15 ICF Router Interfaces Configured During Deployment
This allowed for management of the ICF Router, inter-VLAN routing, Internet access for cVMs, and the
eventual configuration of a static NAT. Static NAT was used to make the 3-Tier application externally
accessible from the Public interface.
The Management interface needs L2 or L3 reachability back to PNSC and ICS. If a Public interface is
added, configure a route on the Management interface to reach any Enterprise networks not configured
on an interface of the ICF Router. The route is inserted within the Device Service Profile of the ICF
Router.
The Device Service Profile is created within PNSC at Policy Management > Service Profiles > {Org the
ICF Router is deployed to} > Edge Router > Device Service Profiles. The Routing Policy shown in
Figure 4-16 is the first section listed under Policies, with the second option handling the NAT
configuration touched on later in this section.
Figure 4-16 ICF Router Device Service Profile Configuration
Enabling Inter-VLAN Routing
IcfCloud extended networks are optimized for use with Gigabit Ethernet interfaces set up to extend the
default gateway of the Enterprise. The extended gateway enables inter-VLAN routing without requiring
any change on the cVMs located in the provider Cloud. This extension of the gateway inserts an ARP
filter in the ICS to redirect any requests for the Enterprise gateway to the ICF Router.
Figure 4-17 Inter-VLAN Routing Enabled with ARP Filtering
With the ARP filtering in place (Figure 4-17), cVMs are directed to the ICF Router automatically,
without unnecessary packet tromboning.
The Public Interface of the ICF Router automatically creates a NAT Overload configuration to allow
external Internet access for cVMs without tunneling back to the Enterprise. This same Public Interface
was also used in the use cases to provide static NAT to the LB cVM to present the 3-Tier App for external
web consumption.
[Figure 4-17 shows the ICS ARP filter in action: the Enterprise gateway SVIs for VLANs 2303-2305 (10.11.213.254, 10.11.214.254, 10.11.215.254) all resolve to the CSR's tier interface MAC (000e.0800.0012) in the ARP table (vemcmd show arp all), redirecting cloud traffic to the ICF Router.]
Extended Routing and NAT Configuration
The Routing Policy (Figure 4-18) allows cVMs to reach Enterprise infrastructure resources on the
example 10.11.115.0/24 network. Any additional non-ICF-extended segments would need to be added
this way, or through one of the advanced routing options (BGP, OSPF, or EIGRP) within the Routing
Policy. This is not strictly necessary in the most basic deployment of the ICF Router, but with the
addition of a Public interface, the default route is switched from an Enterprise router to the
provider-side gateway.
Figure 4-18 ICF Router Interfaces with Device Service Profile Applied
Static NATs were configured for the web front-end servers to verify external reachability. This required
a NAT policy pointing to an inside NAT address of the LB resource and a corresponding outside NAT
address: the AWS provider-side private IP to which it was mapped. The 172.x.x.x addresses shown in
Figure 4-18 for the primary and secondary IPs of the Public interface are mapped to public-facing IPs
handled by AWS.
The static NAT address is assigned to the ICF Router within the AWS EC2 Dashboard.
Note An AWS login and password are required to access the AWS EC2 Dashboard.
From the AWS EC2 Dashboard, find the ICF Router within Instances, right-click it, and select
Networking > Manage Private IP Addresses from the pull-down. From the Manage Private IP Addresses
wizard, click Assign new IP, and click Yes, Update to add the IP.
In Figure 4-19 the secondary private IP assigned is 172.31.21.172, with the original private IP shown as
172.31.27.52. The primary private IP has a public IP associated with it, but this is not a persistent
assignment. To maintain the same public IP between reboots, this secondary IP is associated with an
Elastic IP within AWS.
[Figure 4-18 shows the CSR interfaces with the Device Service Profile applied: the inside-nat Interface Service Profile on Tier1 (VLAN 2303), the outside-nat profile on the Public interface (primary IP 172.31.25.206, secondary IP 172.31.16.38), a static route 10.11.115.0/24 -> 10.11.135.254 in the Routing Policy, and a NAT Policy mapping inside-nat 10.11.213.125 to outside-nat 172.31.16.38.]
Figure 4-19 AWS Manage Private IP address
To acquire an Elastic IP, select Elastic IPs within the Networking & Security section of the AWS
EC2 Dashboard and click the Allocate New Address button, which results in the addition of
52.5.176.220 in Figure 4-20.
Figure 4-20 Elastic IP Assignments
Select this new Elastic IP and click the Associate Address button shown in Figure 4-20. Type in the name
of the ICF Router to associate it with, which automatically translates to the instance ID once selected.
Leave the Private IP Address pull-down at the primary private IP (shown here as 172.31.27.52), and
click Associate in Figure 4-21 to finish:
Figure 4-21 Elastic IP Association
With the Elastic IP associated, the original public IP is gone, and the new Public DNS and Public IP both
map to the value of the Elastic IP:
Figure 4-22 Elastic IP is Now the Same as the Public IP
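The console workflow above (add a secondary private IP, allocate an Elastic IP, associate it with the chosen private IP) can also be scripted with the AWS SDK. A boto3 sketch; the instance ID, network interface ID, and addresses below are illustrative placeholders, not values from this validation:

```python
# Sketch of the Elastic IP workflow above using boto3 (the AWS SDK for
# Python). All IDs and addresses are illustrative; AWS credentials are
# assumed to be configured.

def associate_params(allocation_id, instance_id, private_ip):
    """Build the associate_address arguments for a chosen private IP."""
    return {
        "AllocationId": allocation_id,
        "InstanceId": instance_id,
        "PrivateIpAddress": private_ip,
    }

def assign_persistent_public_ip(instance_id, eni_id, secondary_ip):
    import boto3  # deferred import so the helper above works without boto3
    ec2 = boto3.client("ec2")
    # Step 1: add a secondary private IP to the CSR's network interface
    # (the console equivalent of Networking > Manage Private IP Addresses).
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=eni_id,
        PrivateIpAddresses=[secondary_ip],
    )
    # Step 2: allocate an Elastic IP and associate it, so the public
    # address persists across instance reboots.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        **associate_params(eip["AllocationId"], instance_id, secondary_ip))
    return eip["PublicIp"]
```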
With the AWS Elastic IP setup completed, as shown in Figure 4-22, an additional Network Security
Group needs to be added to the CSR instance before work in the AWS EC2 Console is finished. To add
a new Network Security Group, select the Create Security Group option within NETWORK & SECURITY
> Security Groups of the EC2 Dashboard, opening the dialog box shown in Figure 4-23.
Figure 4-23 Create Security Group from EC2 Dashboard
This allows predefined or custom options for traffic types, as well as sources and destinations.
With a Network Security Group created to allow the specific traffic of the application, select the CSR
instance within the EC2 Dashboard under INSTANCES > Instances, and right-click the instance or use
the Actions pull-down to select Networking > Change Security Groups. Within the Change Security
Groups dialog box, select the entry for the new Network Security Group and click Assign Security
Groups to apply the change.
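The Change Security Groups action has a programmatic equivalent as well. A hedged boto3 sketch; the instance and group IDs are placeholders:

```python
# Hypothetical boto3 equivalent of the console's Change Security Groups
# action. The instance and security-group IDs are placeholders.

def change_groups_params(instance_id, group_ids):
    """Build modify_instance_attribute arguments replacing the SG list."""
    return {"InstanceId": instance_id, "Groups": list(group_ids)}

def assign_security_groups(instance_id, group_ids):
    import boto3  # deferred so the helper above is importable without boto3
    ec2 = boto3.client("ec2")
    # Replaces the instance's entire security-group set with group_ids,
    # so include any existing groups that should remain attached.
    ec2.modify_instance_attribute(
        **change_groups_params(instance_id, group_ids))
```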
With the AWS configuration complete, configure the NAT Policy components within PNSC by creating
the appropriate Device Service Profile and Interface Service Profiles.
Figure 4-24 Device Service Profile and Interface Service Profiles
The Device Service Profile establishes the rules used for the NAT translation, which is applied to
interfaces through the Interface Service Profiles. The Device Service Profile is set in the first screen of
the ICF Router configuration wizard, under Resource Management > Managed Resources > {Org} >
Edit, selecting the deployed ICF Router instance as shown in Figure 4-25.
[Figure 4-24 outlines the profile hierarchy: the Device Service Profile holds the NAT Policy Set, NAT Policy, and NAT Rule (match conditions, protocol, static or dynamic NAT action, translated addresses, and options such as bidirectional NAT, DNS rewrite, and proxy ARP), while the inside-nat and outside-nat Interface Service Profiles enable NAT on the Tier1 and Public interfaces, respectively.]
Figure 4-25 Device Service Profile for the ICF Router
Configuration of the Device Service Profile and its subcomponent NAT policies and objects is found in
PNSC at:
• Device Service Profile—Policy Management > Service Profiles > {Org} > Edge Router > Device
Service Profiles
• NAT Policy Set—Policy Management > Service Policies > {Org} > Policies > NAT > NAT Policy
Sets
• NAT Policy—Policy Management > Service Policies > {Org} > Policies > NAT > NAT Policies
• Object Group—Policy Management > Service Policies > {Org} > Policy Helpers > Object Groups
The last component listed, the Object Group, is not shown in Figure 4-24, but it is used as the Source
object in the NAT Rule for the Match Condition of the translation.
With the NAT established through the Device Service Profile, it is enabled by applying Interface Service
Profiles representing the inside and outside of the translation. These are applied within the Network
Interfaces tab of Resource Management > Managed Resources > {Org} > Edit of the deployed ICF
Router instance, as shown in Figure 4-26.
Figure 4-26 Assign Interface Service Profiles to the Interfaces
These Service Profiles (Interface Service Profiles) are created in PNSC within: Policy Management >
Service Profiles > {Org} > Edge Router > Interface Service Profiles
Within the Interface Service Profile, specifying "Enable NAT" and whether the NAT interface type is Inside or Outside is the minimum requirement. Settings for DHCP Relay, VPN Interface, and ACLs for ingress or egress can additionally be applied.
ICF Firewall Implementation into AWS
A compute firewall (ICF Firewall) VM is deployed into the AWS Cloud to restrict access specifically to the Virtual IP address (VIP) of the load balancer. However, depending upon the application that is deployed (for example, Microsoft SharePoint), other protocol access is needed, specifically for DNS and Active Directory traffic, to allow SharePoint to function properly.
The following is the list of tasks that need to be completed to deploy the ICF Firewall into AWS:
• Create ICF Firewall Data Interface Port-Profile
• Create ICF Firewall Data Interface IP Pool
• Add ICF Firewall Service to the IcfCloud
• Configure PNSC for ICF Firewall Service
– Add ICF Firewall Resource
– Add (Optional) vZone(s) for Web Front End Servers
– Create Security Profile
– Add ICF Firewall to the Service Path
– Associate ICF Firewall Service Path to cVSM Port-Profile
Create ICF Firewall Data Interface Port-Profile
Create a dedicated Port Profile for the Firewall Data interface, as shown in Figure 4-27, on the cVSM using the ICFD GUI manager by selecting Intercloud > All Clouds > IcfVSM > Add Port Profile.
Figure 4-27 Create Port Profile for the Data Interface
Figure 4-28 shows that the port profile "ent6-icfvsg-vlan1908" was added using VLAN 1908:
Figure 4-28 “ent6-icfvsg-vlan1908” Port Profile Created for ICF Firewall
Create ICF Firewall Data Interface IP Pool
As shown in Figure 4-29, a separate IP pool needs to be created for the ICF Firewall Data VLAN that
was created above. The ICF Firewall data VLAN IP pool should consist of at least two valid IP
addresses. One IP address from the IP pool is assigned to the ICS’s service interface in the provider
Cloud and the other to the ICF Firewall’s data interface. From the ICFD GUI manager select Policies >
Static IP Pool Policy > Add.
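The two-address minimum can be sanity-checked with a short sketch using Python's standard ipaddress module; the subnet and starting address below are illustrative only, not values from this design.

```python
# Sketch: a static IP pool for the ICF Firewall data VLAN must supply at
# least two addresses: one for the ICS service interface in the provider
# cloud and one for the ICF Firewall's data interface.
import ipaddress

subnet = ipaddress.ip_network("10.11.190.0/24")   # hypothetical data VLAN subnet
pool = [ipaddress.ip_address("10.11.190.10") + i for i in range(2)]

assert len(pool) >= 2, "pool needs at least two valid addresses"
assert all(ip in subnet for ip in pool), "pool must fall inside the data VLAN subnet"

ics_service_ip, fw_data_ip = pool   # one address per consumer
```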
Figure 4-29 Static IP Pool Created for the ICF Firewall
Add ICF Firewall Services to the IcfCloud
To add ICF Firewall Services to the IcfCloud, from the ICFD Gui Manager, select Intercloud > highlight
the cloud you want to add services too > select Add Services. After selecting Add Services, a pop up
menu appears to allow you to select ICF Firewall and/or ICF Router.
As shown in Figure 4-30, after selecting the ICF Firewall (VSG) check box, enter the Service Interface
VLAN (for example, VLAN1908), as well as the Service Interface IP Policy, created above (for
example, ent6-icfvsg-vlan1908). The remaining portions of the ICF Firewall configuration are
performed through the PNSC web console in the next section.
Figure 4-30 Add ICF Firewall to the Provider Cloud
Using PNSC to Configure and Deploy the ICF Firewall Service
From the PNSC GUI Manager, create the ICF Firewall by selecting Resource Management > Managed Resources > {org} (ent6-provB-1), highlighting Network Services in the right pane, and selecting "+ Add Compute Firewall" from the Actions pull-down menu.
Figure 4-31 Add Compute Firewall using the PNSC GUI Manager
After selecting “+ Add Compute Firewall” a configuration wizard is invoked to deploy the ICF Firewall
into the provider Cloud.
Figure 4-32 ICF Firewall Properties
In Figure 4-32, specify the name and host name of the ICF Firewall. A specific device profile for the
ICF Firewall may be used to configure specific administrative policies or settings, such as NTP, DNS or
syslog server. The Device Profile is configured and applied to the ICF Firewall after it has been
deployed.
Figure 4-33 Instantiate ICF Firewall in the Cloud
In Figure 4-33, select "Instantiate in Cloud" to deploy the ICF Firewall in the provider Cloud. If previous versions of the ICF Firewall image are available, select the appropriate version.
Figure 4-34 Select the Appropriate IcfCloud for Placement of the ICF Firewall
If multiple IcfClouds were configured, the ICF Firewall would be placed into a specific IcfCloud, as shown in Figure 4-34. In this example, there is only a single IcfCloud currently configured.
Figure 4-35 Configure Management Interface
As shown in Figure 4-35 and Figure 4-36, two ICF Firewall interfaces need to be configured: one Management and one Data interface. The configurations are performed separately through the wizard. Make sure to select the correct Port Group for each type of interface.
Figure 4-36 Configure Data Interface
Lastly, review and finalize the ICF Firewall configuration, as shown in Figure 4-37.
Figure 4-37 ICF Firewall Deployment Summary
Add (Optional) vZone(s)
Source and destination objects are configured as one of four types of attributes: network, VM, user defined, and vZone.
As shown in Figure 4-38, both of the Microsoft SharePoint Web Front End Servers are added to a vZone named "SharePoint-Web-Server". Creating a vZone allows the administrator to group virtual machines together and apply specific firewall rules to all devices within that vZone.
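As a rough sketch of the grouping behavior (not PNSC's actual matching engine), a vZone condition on the VM name selects members so that one rule covers all of them. The web-server names match those used in this design; the app/db names are made up.

```python
# Sketch: vZone membership driven by a VM-name condition, so a single
# firewall rule applies to every member of the zone.
inventory = ["ent6-web-1", "ent6-web-2", "ent6-app-1", "ent6-db-1"]  # app/db names are invented

def vzone_members(vms, name_prefix):
    """Return the VMs whose registered name satisfies the vZone condition."""
    return [vm for vm in vms if vm.startswith(name_prefix)]

sharepoint_web_servers = vzone_members(inventory, "ent6-web")

# One rule targeting the vZone implicitly covers every member:
rule = {"dst": sharepoint_web_servers, "service": "TCP/80", "action": "permit"}
```

Adding a third WFE later would extend the rule's scope automatically, which is the operational benefit of vZones over per-VM rules.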
Figure 4-38 Add (Optional) vZone
In Figure 4-39, the vZone condition is based upon the VM name that is registered with ICFD.
Figure 4-39 vZone Configuration using VM Name
Create Security Profile(s)
For the use cases covered in this document, three types of Security Profiles were created. The use cases involve a 3-Tier application deployed into the provider Cloud, and a Service Profile was created for each of the three tiers. Within each of the three Service Profile tiers, an "Access Policy Set" is applied. Each "Access Policy Set" contains an "Access Policy", and within each "Access Policy" are rules to deny or permit traffic for a particular tier. Figure 4-40 shows the logical layers of the Security Profile and how it is applied to the Port Profile associated with the Web Tier Application VLAN.
Figure 4-40 Logical layers of the Security Profile
In Figure 4-41, the four ACL policies associated with the "tier1-aclPolicySet" are shown in the right pane.
Figure 4-41 Compute Security Profiles and Associated Policies
The tier1-aclPolicySet and the corresponding ACL policies are created by selecting the Policy
Management tab > Service Policies > {org} (ent6-provB-1) > Policies > ACL > ACL Policy Set. In
Figure 4-41, ACL Policies are created and then added to an ACL Policy Set. This allows for the ACL
Policies to be reused within any of the defined ACL Policy Sets.
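The reuse-through-grouping idea can be sketched as follows. Rule evaluation is shown as first-match with an implicit deny, which is an illustrative assumption rather than a statement of VSG internals, and the source/service values are simplified placeholders.

```python
# Sketch: ACL Policies are ordered rule lists; an ACL Policy Set is an
# ordered list of policies, so the same policy object (e.g. mgmt-traffic)
# can be reused by several policy sets.
mgmt_traffic = [
    {"src": "mgmt-subnets", "service": "TCP/22", "action": "permit"},
]
web_traffic = [
    {"src": "any", "service": "TCP/80", "action": "permit"},
    {"src": "any", "service": "any", "action": "deny"},
]

tier1_policy_set = mgmt_traffic + web_traffic   # policies flattened in order
tier2_policy_set = mgmt_traffic + [{"src": "any", "service": "any", "action": "deny"}]

def evaluate(policy_set, src, service):
    """Return the action of the first matching rule (implicit deny otherwise)."""
    for rule in policy_set:
        if rule["src"] in (src, "any") and rule["service"] in (service, "any"):
            return rule["action"]
    return "deny"
```

Because mgmt_traffic is referenced by both sets, fixing a management rule once updates every policy set that includes it.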
Figure 4-40 detail, from the Web Tier downward:
• Service Path: tier1-spath
• Port Profile: ent6-vlan2603 (Web Tier VLAN), associated to the cVSG through the cVSM
• Compute Security Profile: tier1-secProfile
• ACL Policy Set: tier1-acl-policySet, containing ACL Policy mgmt-traffic with:
– Rule tier1-lb-Traffic: [Object Group] Load-Balancer –> vZone-WebServer; [Object Group] App Servers –> vZone-WebServer; externalAny-IP <–> (TCP 80/443)
– Rule Mgmt-Traffic: [Object Group] Mgmt-Subnets –> any; Service: (TCP 22/80/443) and ICMP/DNS/AD
• vZone WebServer: Ent6-web-1/10.11.233.101, Ent6-web-2/10.11.233.102
Figure 4-42 Add, Remove, or Reorder ACL Policies per ACL Policy Sets
As shown in Figure 4-43, various ACL rules are organized into ACL policies and then grouped into an ACL Policy Set. Structuring each ACL policy to manage a particular traffic type allows the policy to be reused in other ACL Policy Sets.
Figure 4-43 Organize ACL Policies and Associated Rules in a Logical Manner
Create Firewall Service Paths
After the creation of a Compute Security Profile, it is specified in the Service Path as the Service Profile,
along with the service node of the ICF Firewall (Figure 4-44).
Figure 4-44 Associate Service Profile to a Service Path
Associate Service Paths to Port Profiles
As shown in Figure 4-45, apply the Service Path to the port profile under Resource Management > Managed Resources > {org}(ent6-provB-1) > Port Profiles.
Figure 4-45 Select the Service Path
Select the port profile (in this example, ent6-vlan2603, the Microsoft SharePoint Web service network) and right-click to edit it. In Figure 4-45, select the appropriate Service Path profile to be applied. In the same screen, to disassociate the port profile from the firewall, check the "Disassociate" box.
In Figure 4-46, verify that the appropriate Security Profiles are applied to the correct Port Profiles on
the cVSM.
Figure 4-46 Verify Security Profile is Applied to the Correct Port Profile
As shown in Figure 4-47, the port profiles are now associated to the Service Path.
Figure 4-47 Verify Port Profiles and Service Path
ICF Firewall Rule Verification with a Syslog Server
A syslog server was deployed into the Enterprise, and logging was enabled on specific firewall rule sets
to determine the network traffic to be allowed or denied. Monitoring of the syslog messages helped to
identify required traffic that the application needed to function properly.
A CentOS 6.3 Syslog Server was deployed into the ICF management network within the Enterprise environment to monitor the log messages generated by the firewall rule sets. Information for configuring a generic syslog server can be found on the Internet.
CentOS Syslog Server Configuration (rsyslog.conf)
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
#### RULES ####
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
local6.* /var/log/messages
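To see what the local6 selector above will catch, the following sketch emits the kind of datagram a sender would produce. The PRI arithmetic follows RFC 3164 (facility * 8 + severity, with local6 = 22 and info = 6); the firewall hostname and message text are invented, and 127.0.0.1 is a placeholder for the collector address.

```python
# Sketch: emit a syslog message at facility local6, severity info, over
# UDP/514 to match the $UDPServerRun 514 listener configured above.
import socket

def syslog_pri(facility, severity):
    """RFC 3164 priority value carried in the leading <PRI> field."""
    return facility * 8 + severity

LOCAL6, INFO = 22, 6
msg = "<%d>ent6-ICF-Firewall: permit tcp any -> vZone-WebServer:80" % syslog_pri(LOCAL6, INFO)

# Fire-and-forget datagram; 127.0.0.1 stands in for the CentOS collector.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode("ascii"), ("127.0.0.1", 514))
sock.close()
```

A message whose PRI encodes facility local6 is what the `local6.*` rule in the collector's rsyslog.conf selects.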
Configuring an ICF Firewall
Perform the following procedures to configure an ICF Firewall to send log messages to a syslog server.
Step 1 Create Syslog Policy.
Step 2 Create Device Profile.
Step 3 Add the Syslog Policy into the Device Profile.
Step 4 Apply the Device Profile to the ICF Firewall.
Step 5 Create the Syslog Policy from Administration > System Profile > Policies > Syslog > Add Syslog Policy. From the Servers tab within the created syslog policy, select Add Syslog Server.
Figure 4-48 Forwarding Facility Should Match Syslog Configuration
The forwarding facility shown in Figure 4-48 (for example, local6) should match what was configured in the rsyslog.conf file on the syslog server.
Step 6 Select Policy Management > Device Configurations > {org} (ent6-provB-1) > Device Profile > Add Device Profile, and add the syslog policy just created to the Syslog section, along with any appropriate DNS and NTP information (Figure 4-49).
Figure 4-49 Apply the specific Syslog Policy in the Device Profile
Step 7 Apply the device profile to the ICF Firewall: Resource Management > {org} (ent6-provB-1) > select the ICF Firewall (ent6-ICF Firewall) > General tab, and in the Device Profile field select the syslog device profile (Figure 4-50).
Figure 4-50 Apply Syslog Device Profile to the ICF Firewall
Intercloud Fabric Implementation
Intercloud Fabric implementation guidance is provided for the following:
• Intercloud Fabric Implementation for Azure, page 4-35
• Intercloud Fabric Implementation for Use Case 1, 3-Tier Offloading, page 4-36
• Intercloud Fabric Implementation for Use Case 2, Distributed Work Load, page 4-37
• Intercloud Fabric Implementation for Use Case 3, Planned Peak Capacity, page 4-38
Intercloud Fabric Implementation for Azure
All of the implementation procedures outlined in the previous sections lead to deploying the IcfCloud
link to Microsoft Azure.
As noted in Figure 4-51, services are not supported in this release of ICF. All routing and firewall
services were performed by the Enterprise Data Center. All network gateways for cVMs deployed in
Microsoft’s Azure Cloud were configured to use the Enterprise’s aggregation routers.
Figure 4-51 Microsoft Azure Topology
As shown in Figure 4-52, the Microsoft Azure connection is specifically supported and was selected from the IcfCloud wizard.
Figure 4-51 detail: the Enterprise and Provider Cloud sides are joined by ICLINK tunnels over VLAN 1903, with ICX in the Enterprise pairing with ICS in the provider Cloud; ICFD, PNSC, and cVSM reside on the management VLAN behind the DMZ (IT FW). VLAN assignments:
• VLAN 1901 - 10.11.117.0/24 (Management: ICFD, PNSC, cVSM, AD/VMM, SMTP)
• VLAN 1902 - 10.11.127.0/24
• VLAN 1903 - 10.11.137.0/24 (ICLINK)
• VLAN 2600 - 10.10.10.X/24 (DHCP Managed VLAN)
• VLAN 2603 - 10.11.233.X/24 (Web/LB Server VLAN)
• VLAN 2604 - 10.11.234.X/24 (Application Server VLAN)
• VLAN 2605 - 10.11.235.X/24 (DB Server VLAN)
Figure 4-52 Select Cloud Type
Intercloud Fabric Implementation for Use Case 1, 3-Tier Offloading
Use case 1 involved a 3-Tier application consisting of mixed Windows and Linux VM resources (Figure 4-53). The 3-Tier application was composed of the following VMs and operating systems (Table 4-1).
Table 4-1 Use Case 1, 3-Tier Application VMs and Operating Systems
Name Application OS Disk vCPU RAM Quantity
Load Balancer (LB) HAProxy RedHat 6.3 16GB 1 4GB 1
Web Front End (WFE) IIS Windows 2008 R2 16GB 1 4GB 2
Application (App) Apache/PHP RedHat 6.3 12GB 1 2GB 1
Backend (DB) (1) MySQL CentOS 6.3 12GB 1 2GB 1
Backend (DB) (2) MySQL RedHat 6.3 12GB 1 2GB 1
(1) Azure testing
(2) Cisco Powered Provider testing
Figure 4-53 3-Tier Offloading
Note OOB interfaces on VLAN 2600 are not shown, to simplify Figure 4-53.
As shown in Figure 4-54, offloaded VMs can remain in the enterprise in a powered-off state, or, once the VMs have been offloaded, the user or administrator has the option to remove them completely. In this example, all components of the 3-Tier application and the LB were offloaded to the Provider Cloud.
Intercloud Fabric Implementation for Use Case 2, Distributed Work Load
Use case 2 brought in Microsoft SharePoint as the application which, like the previous use case, was implemented as a 3-Tier application and was fronted by a load balancer instance. The SharePoint components were set up as shown in Table 4-2.
Note This setup is below the recommended resource requirements from Microsoft for a SharePoint
installation, but was sufficient to show basic functionality of a SharePoint placement.
In the Distributed Work Load use case, the SharePoint resources were first installed in the vSphere
Private Cloud and set up with HAProxy as a load balancer in front of the WFE components. After
functionality was confirmed, the WFEs and load balancer were offloaded to the IcfCloud extended
provider Cloud.
As shown in Figure 4-54, offloaded VMs can remain in the enterprise in a powered-off state, or, once the VMs have been offloaded, the user or administrator has the option to remove them completely. In this example, only the WFEs and LB were offloaded to the Provider Cloud.
Table 4-2 Use Case 2, 3-Tier Application Components
Name Application OS Disk vCPU RAM Quantity
Load Balancer (LB) HAProxy RedHat 6.3 16GB 1 4GB 1
Web Front End (WFE) SharePoint2013 w/IIS Windows 2008 R2 30GB 1 4GB 2
Application (App) SharePoint2013 Windows 2008 R2 60GB 1 4GB 1
Backend (DB) Clustered SQL Server 2008 Windows 2008 R2 80GB 1 4GB 2
Basic connectivity was confirmed for each WFE resource that had been offloaded. At this point it was noted that, in each provider environment, the ping response time was between 8-10 ms. This level of latency is out of bounds for what is supported between tiers in SharePoint, but basic functionality was still observed.
Figure 4-54 Distributed Workload
Note OOB interfaces on VLAN 2600 are not shown to simplify Figure 4-54.
Intercloud Fabric Implementation for Use Case 3, Planned Peak Capacity
Use case 3 had the same initial SharePoint 2013 resources as Use Case 2 (Table 4-3).
This use case is similar to the Distributed Work Load in Use Case 2, but in the Planned Peak Capacity situation, the initial WFEs stayed in the Enterprise and two additional WFE components were instantiated in the Provider Cloud, as shown in Figure 4-55.
Figure 4-55 Planned Peak Capacity
Table 4-3 Use Case 3, SharePoint 2013 Resources
Name Application OS Disk vCPU RAM Quantity
Load Balancer (LB) HAProxy RedHat 6.3 16GB 1 4GB 1
Web Front End (WFE) SharePoint2013 w/IIS Windows 2008 R2 30GB 1 4GB 2
Application (App) SharePoint2013 Windows 2008 R2 60GB 1 4GB 1
Backend (DB) Clustered SQL Server 2008 Windows 2008 R2 80GB 1 4GB 2
Note OOB interfaces on VLAN 2600 are not shown, to simplify Figure 4-55.
These additional WFE elements are added to the configuration of the LB resource that is still in the
Enterprise, and connectivity is tested to confirm that all WFEs are accessible.
Use Case Testing and Results
The use case testing had mixed results. All Workload Offloading and instantiations of application components under test worked and were able to communicate back to the Enterprise environment. Application performance in some cases met expectations, but in others it was below acceptable levels, as explained below.
The WAMP 3-Tier application worked well, and NAT access from external queries to the load balancer worked with the deployed services.
SharePoint performance was compromised when run as a distributed application between cloud environments. The latency between clouds was significantly beyond Microsoft's requirements and is assumed to be the source of the performance issues. Latency between tiers is expected to be < 1 ms, but during testing with displaced tiers the latency was roughly between 8-12 ms, as shown in Table 4-4.
Note For distributed applications where different layers are deployed in dispersed clouds (for example, Private and Public Clouds), a dedicated link might be required to meet latency requirements, as opposed to using the Internet to extend the network through Intercloud Fabric. Although not tested as part of this document, ICF abstracts the underlying network and its dependencies, which allows customers to connect to their Cloud provider using different mechanisms, such as AWS Direct Connect, Azure Express Route, or an MPLS network connected to a Cisco Powered Provider. These solutions might be an alternative to resolve the latency requirement.
The resource allocations used for the deployed SharePoint components were also short of the expected processor and memory allocations, but these resources were sufficient for basic functionality prior to the offloading testing, so latency is still seen as the primary problem.
After offloading VMs to the Service Provider, HAProxy was verified to still be load balancing properly for either the 3-Tier application or the SharePoint web front end servers.
Table 4-4 Data from 100 Ping Sequences over IcfCloud
Ping (bytes) Enterprise-AWS (ms) Enterprise-Azure (ms) Enterprise-DiData (ms) Intra-Enterprise (ms)
64k low 10.3 9.34 8.1 0.249
64k high 12.5 13.2 8.77 0.367
64k Avg 10.618 9.8783 8.2833 0.31085
2000k low 11 9.93 8.7 0.318
2000k high 18.4 12.6 9.47 0.52
2000k Avg 11.491 10.5983 8.8932 0.39177
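The low/high/Avg figures in Table 4-4 are simple statistics over each 100-ping run, as in the sketch below. The five sample RTTs are invented stand-ins, while the < 1 ms budget is the SharePoint inter-tier expectation discussed in the text.

```python
# Sketch: reduce a ping run to the low/high/average figures reported in
# Table 4-4, then compare against the SharePoint inter-tier latency budget.
samples_ms = [10.3, 10.6, 11.1, 12.5, 10.4]   # hypothetical Enterprise-AWS RTTs

low, high = min(samples_ms), max(samples_ms)
avg = sum(samples_ms) / len(samples_ms)

SHAREPOINT_BUDGET_MS = 1.0
within_budget = avg < SHAREPOINT_BUDGET_MS    # cross-cloud averages miss this by ~10x
```

The Intra-Enterprise column clears the budget easily (~0.3 ms), which is why only the displaced-tier configurations showed degraded SharePoint behavior.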
3-Tier Offloading to Azure
The 3-Tier application and Load Balancer were instantiated in the Enterprise Data Center using the VMware vSphere client. The 3-Tier application was composed of two Windows Servers for the Web Front End Services, one Red Hat Linux VM for the application server, and one Red Hat Linux VM for the database server. HAProxy was used as the Load Balancer running on a CentOS VM. All network connectivity and Load Balancer configurations were verified in the Enterprise Data Center before offloading to the Azure Cloud.
Once the 3-Tier application was verified in the Enterprise Data Center, all VMs were offloaded to the Azure Cloud. After offloading, all VMs were removed from the Enterprise Data Center. The ICF Administrator or the ICF user does have the option to offload the VMs to the cloud and leave the existing source VMs in a powered-off state after the offloading has completed.
Table 4-5 shows offloading times from the Enterprise Data Center to the Azure Cloud. The following VMs were offloaded sequentially.
Offloading times vary based on traffic at the provider, traffic on the Enterprise side, guest VM size, and
OS type. This information is provided to show a loose expectation of what transfer times might be.
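One way to set that loose expectation is a back-of-envelope effective-throughput figure, sketched below using WFE1's numbers from Table 4-5. It assumes the full disk size crosses the wire (no compression or sparse-block savings), which is only an approximation.

```python
# Sketch: effective transfer rate implied by an offloading time, e.g.
# WFE1's 19GB image moved to Azure in 00:57:44.
def effective_mb_per_s(disk_gb, hours, minutes, seconds):
    elapsed = hours * 3600 + minutes * 60 + seconds
    return disk_gb * 1024 / elapsed          # GB -> MB, per second

wfe1_rate = effective_mb_per_s(19, 0, 57, 44)
# Roughly 5-6 MB/s for this run; actual times vary with provider load,
# Enterprise-side traffic, guest VM size, and OS type.
```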
The final step is to offload these same VMs back to the Enterprise. In this case, all VM offloading was started at approximately the same time.
All network connectivity and load balancer configurations were verified in the Enterprise Data Center after offloading back to the Enterprise from the Azure Cloud (Table 4-6).
Table 4-5 3-Tier Offloading Times to Azure
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 11GB 00:25:03
WFE1 Win2K8R2 19GB 00:57:44
WFE2 Win2K8R2 18GB 00:58:15
App Red Hat 6.3 20GB 00:55:14
DB Red Hat 6.3 21GB 00:45:22
Table 4-6 3-Tier Offloading Times Back from Azure to the Enterprise
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 11GB 1:56:59
WFE1 Win2K8R2 19GB 2:52:24
WFE2 Win2K8R2 18GB 3:06:12
App Red Hat 6.3 20GB 3:17:10
DB Red Hat 6.3 21GB 2:42:28
3-Tier Offloading to Cisco Powered Provider
The 3-Tier application and Load Balancer were instantiated in the Enterprise Data Center using the VMware vSphere client. The 3-Tier application was composed of two Windows Servers for the Web Front End Services, one RedHat Linux VM for the application server, and one CentOS Linux VM for
the database server. HAProxy was used as the Load Balancer running on a RedHat VM. All network connectivity and Load Balancer configurations were verified in the Enterprise Data Center before offloading to the Cisco Powered Provider Cloud (Table 4-7).
The final step was to offload these same VMs back to the Enterprise. In this case, all VM offloading was started sequentially.
All network connectivity and Load Balancer configurations were verified in the Enterprise Data Center after offloading back to the Enterprise from the Cisco Powered Provider Cloud (Table 4-8).
Table 4-7 3-Tier Offloading Times to Cisco Powered Provider
Resource OS Disk Size Time (hr:min:sec)
LB Red Hat 6.3 17GB 00:52:44
WFE1 Win2K8R2 19GB 1:26:56
WFE2 Win2K8R2 18GB 1:37:32
App Red Hat 6.3 20GB 1:32:57
DB Red Hat 6.3 21GB 1:16:21
Table 4-8 3-Tier Offloading Times from the Cisco Powered Provider Back to the Enterprise
Resource OS Disk Size Time (hr:min:sec)
LB Red Hat 6.3 17GB 00:36:25
WFE1 Win2K8R2 19GB 00:52:01
WFE2 Win2K8R2 18GB 00:53:07
App Red Hat 6.3 20GB 00:39:27
DB Red Hat 6.3 21GB 00:40:55
3-Tier Offloading to AWS
The 3-Tier Offloading to AWS used nearly identical application components as the 3-Tier Offloading to Cisco Powered Service Provider use case. The LB resource was the one difference in the makeup of the 3-Tier application, to show the minor variant of running CentOS instead of Red Hat (Table 4-9).
The more important difference for this use case was the insertion of the ICF Firewall and ICF Router services. This allowed NAT for external web consumption of the 3-Tier application through the ICF Router, as explained in Extended Routing and NAT Configuration, page 4-13, and security with the ICF Firewall, as described in Using PNSC to Configure and Deploy the ICF Firewall Service, page 4-21.
Basic functionality of the 3-Tier application was confirmed in the Enterprise environment, and all components were then offloaded to AWS using the ICFD portal.
Table 4-9 3-Tier Offloading Times to AWS
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 10GB 1:14:19
WFE1 Win2K8R2 16GB 2:02:13
WFE2 Win2K8R2 16GB 2:32:56
Note During the request process for these AWS offloadings, the WFE1 VM was offloaded by itself to completion. After WFE1 was in place, the remaining four components (LB/WFE2/App/DB) were initiated in rapid succession through ICFD to offload simultaneously. This may have added some time to the offloading of the following components, but did show the viability of simultaneous offloading.
With the 3-Tier application positioned in AWS and services deployed with the ICF Firewall and the ICF Router, basic cloud functionality was tested. The external IP mapped with NAT to the LB resource was tested for access, and HAProxy was used to verify that each web resource was receiving some of the traffic over multiple successful access attempts.
ICF Firewall rules were tested to finish validation in AWS, with rules set up to restrict direct access to
the database resources from the Web tier, and permission established from internal Enterprise networks
for SSH and ping to all tiers.
With AWS testing complete, all cVMs were offloaded back from AWS using the ICFD portal
(Table 4-10).
Distributed Workload with Azure
SharePoint 2013 was used for the Distributed Workload offloaded to Azure. The deployment used
Clustered SQL Server 2008 as its backend and had a CentOS resource acting as its LB using HAProxy.
The SharePoint installation was deployed in the simulated Enterprise environment and tested for basic
functionality through queries to the WFE components. After functionality was confirmed, and an
IcfCloud was established to Azure, the LB and WFE components were offloaded to Azure (Table 4-11).
Table 4-9 3-Tier Offloading Times to AWS (continued)
Resource OS Disk Size Time (hr:min:sec)
App Red Hat 6.3 10GB 1:18:51
DB CentOS 6.3 10GB 1:29:55
Table 4-10 Offloading Times Back from AWS
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 10GB 0:40:03
WFE1 Win2K8R2 16GB 1:30:54
WFE2 Win2K8R2 16GB 1:35:24
App Red Hat 6.3 10GB 0:49:06
DB CentOS 6.3 10GB 0:43:34
Table 4-11 Distributed Workload Offloading Times with Azure
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 10GB 0:22:01
WFE1 Win2K8R2 30GB 1:04:46
WFE2 Win2K8R2 30GB 1:15:55
Offloading was successful, but latency between the now displaced SharePoint tiers was far beyond the
requirements stated by Microsoft. The degradation in results left exceedingly long page load times that
were not worth recording data on. At this point the Distributed Workload use case involving a distributed
SharePoint was not deemed viable and the cVMs were offloaded back from Azure (Table 4-12).
Distributed Workload with AWS
SharePoint 2013 was used for the Distributed Workload offloaded to AWS. The deployment used
Clustered SQL Server 2008 as its backend and had a CentOS VM resource acting as its LB using
HAProxy. The SharePoint installation was deployed in the simulated Enterprise environment and tested
for basic functionality through queries to the WFE components. After functionality was confirmed, an
IcfCloud was established to AWS, along with an ICF Router and ICF Firewall Services (Table 4-13).
The LB and WFE components were then offloaded to the Amazon EC2 Cloud.
Initially, all traffic was permitted through the ICF Firewall to verify that SharePoint 2013 was functioning
properly. However, the latency between the now displaced SharePoint tiers was far beyond the
requirements stated by Microsoft. The degradation in results left exceedingly long page load times that
were not worth recording data on. At this point the Distributed Workload use case involving a distributed
SharePoint was not deemed viable and the cVMs were offloaded back from AWS (Table 4-14).
Planned Peak Capacity with Cisco Powered Provider
SharePoint used the same components as in the Distributed Workload with Azure, with the exception that the initial WFE elements and LB stayed in the Enterprise for the test.
Table 4-12 Offloading Times Back from Azure
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 10GB 0:46:57
WFE1 Win2K8R2 30GB 2:20:35
WFE2 Win2K8R2 30GB 2:31:32
Table 4-13 Distributed Workload Offloading Times with AWS
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 10GB 0:48:39
WFE1 Win2K8R2 51GB 4:42:21
WFE2 Win2K8R2 51GB 4:40:26
Table 4-14 Distributed Workload Offloading Times Back from AWS
Resource OS Disk Size Time (hr:min:sec)
LB CentOS 6.3 10GB 0:47:32
WFE1 Win2K8R2 51GB 3:40:37
WFE2 Win2K8R2 51GB 02:45:43
One of the WFE components of the SharePoint 3-Tier application was cloned to a template within
vSphere. With a vSphere template ready, a template and catalog entry were created within Intercloud >
Compute > All Clouds > Enterprise Templates by selecting the WFE template previously cloned in
vSphere, and clicking the Create Template in Cloud and Create Catalog option as shown in
Figure 4-56.
Figure 4-56 Create Template in Cloud and Create Catalog
Following the dialog for the template and eventual catalog item created, the WFE components were
expanded into the Cisco Powered Provider by requesting instantiation of new WFE cVMs from the ICFD
catalog (Table 4-15).
Instantiated WFE cVMs were reconfigured as new registered SharePoint WFE components, and they
were added to the HAProxy configuration of the LB that remained in the Enterprise. The new WFE
components were seen to receive traffic within HAProxy and would return the SharePoint page if given
enough time, but the performance degradation was too much due to the displacement of tiers as seen in
previous use cases. The use case was not deemed viable.
Instantiated cVMs did not need to be offloaded back and were terminated through the ICFD portal, completing the testing of the use cases.
Table 4-15 Planned Peak Capacity Instantiation Times with Cisco Powered Provider
Resource OS Disk Size Time (hr:min:sec)
Template Creation Win2K8R2 30GB 2:00:39
WFE30 Win2K8R2 30GB 0:57:37
WFE31 Win2K8R2 30GB 0:57:37