This document provides an agenda and overview for a presentation on network virtualization and IT infrastructure automation. The agenda includes presentations from Rod Stuhlmuller of Nicira/VMware on network virtualization, Stathy Toulomis of Opscode on Chef for infrastructure automation, and a demo of Nicira's private cloud platform by Jacob Cherkas. The document also provides background on how Nicira/VMware built their own private cloud using OpenStack, Chef, and Nicira Network Virtualization to increase efficiency, speed, and agility while reducing costs and roadblocks. It highlights the importance of automation, network virtualization, and components like virtual switches and controllers.
At this year's FOSE 2011 conference, Government Computer News (GCN) awarded Phantom Virtual Tap the Best of FOSE / Best Networking Product for Government award. The Tap delivers unprecedented total visibility into formerly murky traffic passing between VMs on hypervisor stacks. With its ability to tap traffic between virtual servers (VMs) on a physical server, the Phantom Virtual Tap heralds a new era of network compliance, management, and security for virtualized data centers.
Presented by Net Optics' Senior Solutions Engineer, David Pham, this webinar will briefly introduce you to the Phantom Virtual Tap as well as provide insight into some of the security and compliance challenges created by data center virtualization. Additionally, it covers:
Advantages of gaining visibility into your virtualized network infrastructure
How to eliminate visibility challenges in the virtual network
An opportunity to learn more about this new technology
Our presenter, Ran Nahmias, Net Optics Director of Cloud and Virtualization Solutions, provides an overview of practical challenges to conducting Lawful Intercepts within converged (physical & virtual) or homogeneous virtual network environments.
Virtualization in the Data Center, More Than a Trend!
Virtualization has provided network architects with a new level of flexibility and cost-savings in their server deployments. At the same time, that new level of flexibility has created new opportunities for potentially unlawful activity to be concealed or easily moved across legal jurisdictions to avoid prosecution. View this informative webinar to learn about:
Unique enforcement challenges inherent to Virtualization
Compliance challenges created by Virtualized environments
Methods for thwarting virtual machine jurisdiction ‘hopping’
Cloud computing has been a growing research topic in recent years. Its key concept is to provide a resource-sharing model based on virtualization, distributed file systems, parallel algorithms, and web services. But how can we provide a testbed for cloud computing training courses? In this talk we will share our experience building a cloud computing testbed for virtualization, high-throughput computing, and bioinformatics applications. It covers many open source projects, such as DRBL, Xen, Hadoop, and related bioinformatics applications.
In short, Diskless Remote Boot in Linux (DRBL) provides a diskless or systemless environment for client machines. It works on Debian, Ubuntu, Mandriva, Red Hat, Fedora, CentOS and SuSE. DRBL uses distributed hardware resources and makes it possible for clients to fully access local hardware.
Xen is an open source hypervisor for the Linux kernel. It has been used in the Amazon EC2 production environment to provide cloud service model (1), "Infrastructure as a Service" (IaaS). In this talk, we will show you how DRBL can help with fast deployment of a Xen playground in the classroom.
Hadoop is a well-known open source cloud computing technology developed by the Apache community and a very powerful tool for data mining. It has been used in Yahoo and Facebook production environments to provide cloud service model (2), "Platform as a Service" (PaaS). It is easy to set up a single Hadoop node but difficult to manage a Hadoop cluster. In this talk, we will show you how DRBL can help with fast deployment and management.
Most bioinformatics applications are open source, such as R, Bioconductor, BLAST, Clustal, PipMaker, Phylip, etc., but they also require traditional cluster job submission. In this talk we will show you how DRBL can help build a testbed for bioinformatics research and provide cloud service model (3), "Software as a Service" (SaaS). We will cover how to:
1. Use DRBL to deploy a Xen virtual cluster (drbl-xen)
2. Use DRBL to deploy a Hadoop cluster (drbl-hadoop)
3. Use DRBL to deploy a bioinformatics cluster (drbl-biocluster)
A live demonstration of drbl-hadoop and drbl-biocluster will also be given during the talk.
Dyplast Products Customer Bulletin 0410: A Comparison of ISO-C1 and HT-300
This Customer Bulletin is part of a series of white papers aimed at providing our clients, engineers, contractors, fabricators, and friends with objective information on competitive products. Marketing literature on the internet and in printed media addresses the physical and performance characteristics of competing polyisocyanurate rigid foam insulations fabricated from bunstock. As is often the case, some literature can be misleading, and in some cases there may not be sufficient information to credibly compare products. This Customer Bulletin provides factual, clarifying information which should allow for an objective comparison of Dyplast’s ISO-C1® with HiTherm’s HT-300 (each at 2 lb/ft³ density).
When a global telecommunications firm's comprehensive virtualization strategy required visibility into thousands of virtual servers spread across 5 U.S. data centers, they turned to Net Optics and its Phantom solutions. The Telco faced the challenge of supporting numerous VoIP call centers for compliance, security, and call quality. This virtualized architecture encompasses more than 150 VMware ESX servers and 1600+ virtual servers. The Telco chose Net Optics Phantom™ HD, working in conjunction with the Net Optics Phantom Virtual Tap, to fulfill this complex demand. Only the Phantom solution delivers the robustness needed to process extremely high data bandwidths.
In this webinar from Net Optics you will learn:
Presented by Net Optics' Senior Solutions Engineer, David Pham, this webinar will walk through a specific deployment scenario of Net Optics' innovative Phantom Virtual Tap and the recently introduced Phantom HD High-Throughput Tunneling Appliance.
Advantages of gaining visibility into your virtualized network infrastructure
How to eliminate visibility challenges in the virtual network
Financial benefits of traffic monitoring and inspection
Readying the campus for the Internet of Things (IoT) - Networkshop44, Jisc
Avaya, alongside Leeds Beckett University, will look at a better way to build the smart campus and ready it for the agility and security demands placed upon it by the IoT. Can vastly reducing the number of protocols required to build a campus network reduce complexity while simultaneously increasing security, agility, resilience, and performance?
Cloud computing revolutionized application design, and changed the way people think about infrastructure. The rise of cloud computing coincided with a new generation of applications and services that required scale. New architecture and design had to take into account low latency network connectivity, geographic distribution, large real-time data stores, the ability to meet demand (while not knowing exactly how much demand to handle), and so much more. We refer to this as Internet Scale.
Yet most discussion of scale and cloud revolves around compute as virtualized instances, which have defined configurations and constrained options. Delivering on the promise of Internet Scale involves substantial upfront design, and a comprehensive understanding of the entire architecture - from the underlying hardware, to the operating system, the application stack, services, and deployment. And, it involves choice - choices you should make based on your requirements. Join us for a discussion on the many facets of Internet Scale, and how it can apply to your applications and services.
2. Speakers & Agenda
Rod Stuhlmuller
Director of Product Marketing, Nicira/VMware
Nicira’s journey to the cloud & network virtualization
Stathy Toulomis
Solutions Architect, Opscode
Opscode Chef overview and the benefits of IT infrastructure automation
Jacob Cherkas
Cloud Solutions Architect, Nicira/VMware
Live demo of the Nicira private cloud, then open discussion on the use of OpenStack, Chef, and Network Virtualization for building a cloud.
3. The Journey to Our Own OpenStack Cloud
Primary Drivers
– Cost
– Agility
– Speed
Primary Roadblocks
– The network operational model
– Not the network itself
What we learned
– Individual habits, experience, perceptions, and organizational changes can have significant impact
– Automation is critical
5. Our Cloud
[Diagram: a cloud environment running on virtual infrastructure, decoupled by an abstraction layer from the physical infrastructure (L3, ECMP, non-blocking, no oversubscription), spanning an on-site and an off-site data center connected to the Internet.]
10. Primary Drivers – Cost, Agility, Speed
[Chart: compute cost, operations efficiency, and business velocity, before.]
11. Operational Efficiency and Business Velocity
[Chart: compute cost, operations efficiency, and business velocity, before vs. after.]
12. What We Learned
“Our cloud will make everything faster and more efficient.”
14. Meet Duffie and Tim
Before Cloud
– Network and Systems Administrator
– Master of Complexity
– Majority of time spent responding to infrastructure adds & changes that impact the release schedule
– Viewed by R&D as a necessary evil
– Always requesting purchase of more compute and network capacity
After Cloud
– Elevate or Terminate
– Becomes Cloud Architect
– Hero to R&D
– “Best decision I’ve ever made!”
15. Meet Duffie and Tim
Before Cloud
– R&D Build Manager
– Physical servers under his desk
– Always requesting purchase of more disk, memory, CPU
After Cloud
– “Server Hugger”
– We are taking your servers; you have to use the cloud
– “You can’t take my servers: I need isolation, I need security, I need performance, I need reliability and availability”
– “I love the cloud”
16. Important Components
[Diagram: the cloud environment's key components (controller nodes, virtual switch nodes, and virtual-to-physical gateways) form the abstraction layer above the physical infrastructure (L3, ECMP, non-blocking, no oversubscription), spanning the on-site and off-site data centers and the Internet.]
17. Automation
18. Cloud Management
19. Network Virtualization
20. Network Virtualization = A complete network in software
[Diagram: a virtual network composed of L2 and L3 segments.]
All the properties attributed to SDN:
- Software flexibility
- Software innovation and extension
- Hardware choice
- Service insertion
With the benefits of virtualization:
- Non-disruptive deployment
- Decoupled from topology
- Hardware independence
- Backwards compatibility
21. “SDN” is Not Network Virtualization
[Diagram: in today's network hardware, manual configuration state (VLANs, distributed ACLs, QoS, port groups) lives in the control plane, while distributed forwarding state (L2 tables, L3 table) lives in the data plane; both remain bound to the physical network. Hardware dependent.]
22. Network Virtualization
A New Operational Model for Networking
Decouples from the physical network and moves operational state into software.
[Diagram: a controller cluster manages a distributed virtual network through the network virtualization abstraction layer (vSwitch), running over the physical network. Hardware independent.]
Leaves the physical network to do what it does best: forward packets.
23. What VMware did for servers…for the network.
Server virtualization:
- Application workloads run as virtual machines in an x86 environment
- On a server hypervisor; requirement: x86
- Over physical compute & memory (Dell, HP, IBM, Quanta, …)
Network virtualization:
- Workloads consume L2, L3, and L4-7 network services as virtual networks
- On a decoupled network virtualization platform; requirement: IP transport
- Over the physical network (Arista, Cisco, HP, Juniper, Cumulus, …)
24. AT&T
Fundamentally transform and accelerate the way AT&T delivers applications and services both internally and externally.
“NVP is a foundational element that supports a major transformation at AT&T. Network virtualization is the future of networking.”
IT Transformation
Common Computing Platform
TOBY FORD
AVP, CLOUD ARCHITECTURE & STRATEGY
25. eBay
Transform the time it takes to deploy complex test & development environments for developers and QA.
“NVP allows us to repurpose network infrastructure on-demand, and reduces the time it takes to deploy test/dev environments from days to minutes.”
7 days to 30 seconds
JC MARTIN
CLOUD ARCHITECT, EBAY
26. Rackspace
Deliver enterprise-class private networking in a public, multi-tenant cloud.
“NVP, combined with OpenStack, is a game changer. Together we are bringing enterprise private networking to the cloud.”
Rackspace Cloud Networks
LEW MOORMAN
PRESIDENT, RACKSPACE
27. Automation
28. Opscode Chef
Stathy Toulomis
Solutions Architect, Opscode
29. Managing Complexity Then
To Add a New Server…
• 2x Web Server Configurations
• 2 Web Server Restarts
• 4x Database Configurations
• 8x Firewall Configurations
• DNS Service
• Network Configuration
• Deployer
• 8x Monitoring Changes
The Bottom Line…
• 20+ Changes
• 12+ New Infrastructure Dependencies
• 4+ Hours
[Diagram: web servers, application servers, database cache, and databases; adding one server touches all of them.]
30. Managing Complexity Later
We added:
• Load Balancers
• MemCache
• Search Appliances
• Lots of VMs
• More Scale
Exponential Increase In:
• Configuration Changes
• Infrastructure Dependencies
• Skills Needed
• Greater Risk
31. What is Chef?
Chef is an automation platform for developers & systems engineers to continuously define, build, and manage infrastructure.
CHEF USES: Recipes and Cookbooks that describe Infrastructure as Code.
Chef enables people to easily build & manage complex & dynamic applications at massive scale.
• A new model for describing infrastructure that promotes flexibility, extensibility, and reuse.
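To make the idea of a recipe concrete, here is a minimal, hypothetical example in the Chef DSL (the nginx resources and paths are ours for illustration, not from the talk). A recipe declares desired state; chef-client converges the node to match it. This fragment runs inside a cookbook via chef-client, not standalone.

```ruby
# Hypothetical minimal recipe: install nginx, render its config,
# and keep the service running. chef-client applies it idempotently.
package 'nginx'                       # ensure the package is installed

template '/etc/nginx/nginx.conf' do   # render config from an ERB template
  source 'nginx.conf.erb'
  owner 'root'
  group 'root'
  mode '0644'
  notifies :reload, 'service[nginx]'  # reload only when the file changes
end

service 'nginx' do                    # ensure the service is enabled and started
  action [:enable, :start]
end
```

Because each resource describes an end state rather than a sequence of commands, re-running the recipe makes no changes on a node that already matches it.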
32. Chef is Infrastructure as Code
• Programmatically provision and configure
• Treat like any other code base
• Reconstruct the business from code repository, data backup, and bare metal resources.
33. “Infrastructure As Code”
• A configuration management system (DSL)
• A library for configuration management
• A community, contributing to library and expertise
• A systems integration platform (API)
http://www.flickr.com/photos/asten/2159525309/sizes/l/
35. Recipes and Cookbooks
• Recipes are collections of Resources
• Cookbooks contain recipes, templates, files, custom resources, etc.
• Code re-use and modularity
• Hundreds already on Community.opscode.com
http://www.flickr.com/photos/patrick_q/199986515/
36. Dynamic configuration management
pool_members = search('node', 'role:webserver')

template '/etc/haproxy/haproxy.cfg' do
  source 'haproxy-app_lb.cfg.erb'
  owner 'root'
  group 'root'
  mode '0644'
  variables :pool_members => pool_members.uniq
  notifies :restart, 'service[haproxy]'
end
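To show what the search-and-template step actually produces, here is a plain-Ruby sketch of the rendering that Chef performs, runnable outside Chef with the standard library's ERB. The template body and the node data are hypothetical stand-ins; the real haproxy-app_lb.cfg.erb in the cookbook is more involved.

```ruby
require 'erb'

# Hypothetical stand-ins for the nodes that search('node', 'role:webserver')
# would return in the recipe above.
pool_members = [
  { 'hostname' => 'web1', 'ipaddress' => '10.0.0.11' },
  { 'hostname' => 'web2', 'ipaddress' => '10.0.0.12' },
]

# A toy example of what the ERB template might contain: one backend
# entry per pool member.
template_body = <<~ERB
  backend app
  <% pool_members.each do |m| -%>
    server <%= m['hostname'] %> <%= m['ipaddress'] %>:80 check
  <% end -%>
ERB

# trim_mode '-' honors the -%> line-trimming markers, as Chef's ERB does.
config = ERB.new(template_body, trim_mode: '-').result(binding)
puts config
```

When a webserver node is added or removed, re-running the recipe re-renders this file with the new search results and restarts haproxy, which is the dynamic reconfiguration the slide describes.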
37. How Can Chef Help?
Blueprint Your Infrastructure
• Compute
• Application
• Infrastructure
• Storage
• Application Stacks
• Big Data
• Network
• HPC
• Security
• Configuration Standards
• Linux, Windows, OSX, Unixes
Build Anything…
• Provision compute resources in the Data Center and the Cloud
• Using 1,000’s of man-days of prior art!
And Manage It Simply
• Introduce continuous incremental change or total change
• Automatically reconfigure everything
• Re-provision for disaster recovery
• Fail-over to bare metal
• Monitor for compliance
• Cloud migrations become trivial
Discoverable and Searchable Infrastructure
38. How Opscode Can Help
Hosted Chef
• Delivered via SaaS Model, Hosted by Opscode
• Manage up to 50,000 Servers
• Industry-leading SLA’s
• 24x7x365 Support Options
• Get up and running quickly
• Pay/grow as you need
Private Chef
• All the power of Hosted Chef behind the firewall
• Delivered as enterprise software
• Implementation consulting customized to your needs
The Opscode Community
• Training
• 650+ Cookbooks
• Plug-Ins
• Source Code Documentation
• Global Partner Network
• FAQ
• 18,000+ Registered Users
• 950+ Individual and 170+ Corporate Contributors
There is a lot of hype in the media right now about SDN. Every single hardware vendor has established a “software defined network” strategy. Unfortunately, when you look under the covers they are either adding APIs to their CLI or adding OpenFlow to their switches. Neither of these fixes the customer problems we discussed earlier in the presentation. SDN from hardware vendors is a stub, not a new model, just a repositioning of their own proprietary OS and API; in most cases these approaches require hardware from a single vendor, a continuation of the old vertically integrated network architecture. Network virtualization could be considered the “next generation SDN”: you get the properties of SDN, flexibility and hardware independence, and you get the benefits of virtualization. It is non-disruptive, decoupled from the existing topology, and you can implement it tomorrow with no change to your core network infrastructure.
Are we in an SDN bubble? How can we tell?
- Everyone is doing it; all networking companies have an SDN strategy.
- Many vendors claim to have been doing SDN even before the term was coined (2007).
- No one can agree on what it means, because everyone is positioning their own offering as SDN.
- The definitions that exist are so broad and varied that they have become meaningless.
In network hardware, switches and routers, there are two types of state maintained: distributed forwarding state and manual configuration state. Distributed forwarding state is what network equipment is great at; this state is maintained automatically as network devices communicate with each other, so that if one path goes down, alternate paths are used and the network quickly and automatically converges. It is the manually configured operational state that causes the issue. This is where network engineers use the CLI to manually make changes to VLAN configurations, ACLs, QoS, port security groups, etc. This is what causes changes to the network topology to take days or weeks instead of seconds, and where human error causes downtime or security holes through simple configuration mistakes or typos. SDN, as it is positioned by network hardware vendors, is a software stub, like SNMP, that provides better device-by-device management. This is simply SDN-washing: another attempt to better manage the complexity of physical network device management, rather than taking advantage of the fundamental IP connectivity that all network hardware delivers and moving operational state management into software, independent of the underlying hardware.
SDN does not create a virtual network; it is just, arguably, “better” network device management. OpenFlow is a communication protocol that provides a “standard” way to communicate with switches: it separates the data plane from the control plane, using a centralized controller to manipulate forwarding tables (match on source address and destination address, forward the packet, drop the packet, and so on).
Network virtualization, on the other hand, creates an abstraction layer between the physical network and the virtual network, and extracts the complex operational state of the network into software where it can be programmatically controlled. This is the same model we have seen work for software development, moving from machine language to layered abstractions and object-oriented programming, and the same model that was the basis for server virtualization: decouple the software layer (virtual machines) from the underlying physical infrastructure (x86 hardware) and you enable an entirely new operational model.
Network virtualization extracts the complex, and currently manual, configuration state into the virtual layer (the virtual network) and leaves the robust forwarding state management in the physical network, thereby taking advantage of what the physical network does best: forwarding packets. The only requirement of the physical network is IP connectivity.
This results in two things. First, a far simpler underlying physical infrastructure that can be provided by any hardware vendor; you no longer need complex proprietary protocols and vertically integrated solutions that lock you into a single hardware provider. You can still use your favorite vendor, Cisco, Juniper, Arista, HP, whoever; you are just not locked in, and you can mix and match and make the best price/performance decision at the time. Second, a completely new operational model for networking, which gives you the agility you have with VMs for the network: create complex, multi-tier, Layer 2 and Layer 3 topologies with L4-7 services, all in software, in seconds, not days. Bring true multi-tenant cloud infrastructure that allows users to create their own isolated environments: create, delete, snapshot, rollback. These are capabilities simply not possible otherwise.
AT&T views network virtualization as a foundational element of the cloud they have built to support a fundamental transformation in the way AT&T deploys applications internally and delivers cloud services publicly. Toby Ford now owns all cloud at AT&T, but at the beginning of the project Toby was attacked by the “white blood cells” in the organization who did not want change. Toby has since been promoted, and it is recognized broadly within AT&T that the architectural approach he has taken is the future of networking. AT&T is now in production across three data centers, moving to five by the end of the year, for internal application development and production deployment.
Use Case – Reduce the Number of Overprovisioned Servers, Save $ Millions
Before NVP:
- Data center asset utilization under 60%, typically under 40%
- Stranded servers (overprovisioned servers in pods, not available because they sit in separate subnets or availability zones)
- Inefficient power, cooling, and rack space utilization
After NVP:
- 80-90% data center asset utilization
- Save $15-$30 million in servers alone, per large data center
- Place and move any workload, anywhere in the data center (“Data Center Defragmentation”)
Use Case – Onboarding Enterprise
Before NVP:
- Customers forced to accept cloud IP addresses
- No support for legacy applications
- Limited security on shared cloud infrastructure
- Expensive dedicated VPN hardware required
- Limited support for bursting
After NVP:
- Enterprise-class security and network services in the cloud
- Enterprise customer uses the IP addresses of their choice
- L2 adjacency between cloud workloads and on-premise workloads
- On-demand bursting
Use Case – Physical Hosting to Virtual Cloud Migration
Before Nicira:
- Physical workloads, often in a different area of the data center, cannot be on the same subnet as virtual workloads
- Difficult for customers to migrate hosted servers to cloud servers
After Nicira:
- Regardless of location in the data center, physical and virtual workloads can be connected to the same virtual network, enabling L2 adjacency across subnets and availability zones
- An easy cloud integration and migration solution for hosted server customers
Managing this complexity is non-trivial. Just adding a new server to this "simple" application requires more than 20 changes. How long does this take? Can you test this? What happens if something fails?
And that was then; this is now. Greater amounts of change, more dependencies, more skills needed, and yes, greater risk.
Chef is an open source configuration management and infrastructure automation tool. Chef enables operators and developers to define their infrastructure, applications and how these things are dynamically tied together at massive scale. So how does this work?
Recipes are collections of these Resource abstractions, programmatically configuring the service. Cookbooks are how we manage a particular application or service: the collection of recipes and the various config files, templates, and other support files. These cookbooks behave like libraries, exposing their customizations as Attributes. And you can share them on our Community site with thousands of other developers and system administrators.
This is an example of an HAProxy recipe. HAProxy is a software load balancer; in this example we're searching for other nodes that have the role of "webserver". We're going to pass these search results into the HAProxy config file, and restart the haproxy service on any change to this config file.
Chef gives you the ability to build infrastructure at real scale. Manage your applications' configuration and their operational considerations like networking, storage, logging, and security. You can build anything on Linux, Windows, OSX, or Unixes. And the real key is that you can manage it simply. Infrastructure as code means you can incrementally change and configure things, redeploying as necessary to bare metal or clouds.
We're stewards of this open source community and project; we offer a hosted version of the Chef Server, and we'll also install it and support it behind your firewall.