1) Data growth is increasing rapidly, putting pressure on disaster recovery capabilities. Ensuring business continuity through effective backup and disaster recovery is a top priority for many organizations.
2) Riverbed's solutions optimize WAN bandwidth utilization to accelerate backup and replication processes between data centers. Steelhead appliances deduplicate and compress data to maximize throughput.
3) Riverbed's solutions work with all major storage vendors' replication technologies. Riverbed also offers monitoring and visibility tools like Cascade to help manage disaster recovery operations more effectively.
If you are a consumer of project information, this presentation is directed at you. The Project Control Data Warehouse is an 'open' type project and an instance of the ODWM. Thanks for taking a look!
Syllabus of streaming courses in mainframe assembler and z/OS internals for anyone interested in becoming a real systems programmer or system-level software developer for the IBM mainframe platform, especially in the z/OS environment.
Lily Craps, responsible for the mainframe outsourcing project at SDWorx, explains how moving their mainframe to a shared environment at NRB enabled economies of scale on hardware and software infrastructure costs. She describes the process, from the initial outsourcing study through the RFI/RFP process, provider selection, contract negotiations and the migration project, as well as the criteria for choosing NRB and an Infrastructure-as-a-Service cloud model.
Software Defined Networks, Network Function Virtualization, Pivotal Technologies - Open Networking Summits
Margaret T. Chiosi
Distinguished Network Architect
AT&T Labs
Agenda
Overview of NFV, NFV and SDN synergy, standardization and the role of open source – Margaret Chiosi, AT&T
Spreading NFV through the Network: the NFV Use Cases - Andrea Pinnola, Telecom Italia
Building a Digital Telco: Network Virtualisation experiences – Francisco Javier Salguero, Telefonica
DOCOMO's Challenges for Network Virtualization in Mobile Networks - Tetsuya Nakamura, NTT Docomo
Deployment of SDN and NFV: Vendor perspectives and experiences - Karthikeyan Subramaniam, Adara
NFV-SDN Synergy
Technology Track Session
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
Watch the replay: http://event.on24.com/r.htm?e=830086&s=1&k=BF6DC01D4350A4D22655D80CBED9B3C5&partnerref=rti
Economic realities dictate that "new" distributed systems are almost never entirely new creations. Existing capabilities which cannot be readily duplicated at minimal cost are often necessary and even critical components of otherwise new systems. How we address achieving interoperability with these legacy systems – whose data and interfaces are often less than completely defined – can be a critical cost and schedule risk item.
Open standards such as the DoD's UAS Control Segment (UCS) Architecture and the Open Group's Future Airborne Capability Environment (FACE) provide architecture and data design standards which support new development and provide a means of rigorously capturing the data semantics of information in existing interfaces. At the protocol and implementation level, the OMG's Data Distribution Service (DDS) standard provides proven, cost-effective design patterns which support the bridging and/or migration of existing systems with new, open architectures.
Speaker: Mark Swick, Principal Applications Engineer, RTI
Maximize Application Performance and Bandwidth Efficiency with WAN Optimization - Cisco Enterprise Networks
Learn how a two-step strategy that reduces application bandwidth consumption and makes more efficient use of your remaining bandwidth can help you achieve seemingly conflicting business and IT goals.
Register to watch webcast: http://cs.co/9006CAY0.
Power utilities worldwide are looking for ways to extend the life of in-field assets while also improving service levels to power subscribers. Existing energy infrastructure deployed across our nation is largely composed of assets with extremely limited communication capabilities based on SCADA protocols, such as DNP3. By the end of 2024, over $7 trillion of infrastructure upgrades will be required. These challenges can be addressed with smarter distribution grids that control and monitor assets down to the level of neighborhoods and individual homes. These smart grids incorporate military-grade securable protocols and hardened architectures. They use hygiene services and bi-directional conversion to Data Distribution Service (DDS) software for all internal and intra-network signalling, and they source intelligence in nodes with continued secure communications to existing operational management systems. The result is distributed management, distributed intelligence and distributed security for high-resolution analysis, implementation of evolving policy controls and a reasonable price-to-performance ratio.
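The bi-directional conversion described above can be sketched as a small gateway that decodes legacy point readings and republishes them on a topic-based bus. This is an illustrative sketch only: the frame layout, topic naming, and the `publish` callback are invented for the example, not real DNP3 or DDS APIs.

```python
import struct

# Hypothetical gateway sketch: convert legacy SCADA-style point readings
# into topic-keyed samples for a pub/sub bus such as DDS. The frame layout
# (2-byte point index + 4-byte big-endian float) and publish() callback are
# illustrative assumptions, not the actual DNP3 or DDS wire formats.

def decode_point(frame: bytes):
    """Unpack a toy frame: 2-byte point index + 4-byte float value."""
    index, value = struct.unpack(">Hf", frame)
    return index, value

def to_topic_sample(index: int, value: float, substation: str) -> dict:
    """Map a point reading onto a keyed topic sample."""
    return {"topic": f"grid/{substation}/point/{index}", "value": value}

def bridge(frames, substation, publish):
    """Decode each legacy frame and republish it on the bus."""
    for frame in frames:
        index, value = decode_point(frame)
        publish(to_topic_sample(index, value, substation))

if __name__ == "__main__":
    samples = []
    frames = [struct.pack(">Hf", 7, 240.5), struct.pack(">Hf", 8, 59.97)]
    bridge(frames, "sub01", samples.append)
    print(samples[0]["topic"])  # grid/sub01/point/7
```

In a real deployment the `publish` callback would hand samples to a DDS DataWriter, so downstream analytics and policy engines subscribe to topics rather than poll individual SCADA endpoints.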
This webinar, co-hosted by RTI and LocalGrid, will discuss the evolution and benefits of smart grids, as well as advancements such as:
Localized control rather than centralized protocol hygiene (energy firewalls)
Analytics and policy control rather than just security
Over-the-air DDS-assisted updates to change duty cycle and capabilities, and reduce service costs
Mesh-/fabric-based network topologies for self-healing, fault-tolerant networks (multi-master topologies)
Verification of DR compliance using DDS
On Demand: http://ecast.opensystemsmedia.com/443
I hosted a webcast with Sr. VP and GM of HP Storage David Scott. David and I talked about flash-optimized storage and the software defined data center. You can find the audio for the webcast at http://hpstorage.me/ASTB-podcasts - they are number 146 and 147.
Radisys' CTO, Andrew Alleman, was one of the featured speakers at the OCP Telco Engineering Workshop during the 2017 Big Communications Event. Andrew discussed carrier-grade open rack architecture (CG-OpenRack-19), the future of open hardware standards and commercial products in the OCP pipeline during his presentation.
News on Development Environments and RDz for z/VSE - IBM
This presentation demonstrates how z/VSE (COBOL) applications can be developed using modern integrated development environments such as IBM Rational Developer for z Systems (RDz), Jazz, IBM Rational Team Concert (RTC) and surrounding tools. This toolset can be used to develop applications ranging from mobile, web or Java front ends to COBOL for CICS on z/VSE.
November 14, 2012—Lighthouse Point, FL. Future Strategies Inc., is pleased to announce the Gold and Silver Winners for the 2012 Global Awards for Excellence in Business Process Management and Workflow. Sponsored by WfMC and now in their 20th year, these prestigious awards recognize user organizations that have demonstrably excelled in implementing innovative business process solutions to meet strategic business objectives.
Workflow Management Coalition (WfMC) and BPM.com jointly sponsor the annual Global Awards for Excellence in BPM and Workflow. The Awards program is managed by Future Strategies Inc.
About the Workflow Management Coalition (www.wfmc.org)
The WfMC, founded in August 1993, is a non-profit, international organization of workflow vendors, users, analysts and university/research groups. The Coalition's mission is to promote and develop the use of workflow through the establishment of standards for software terminology, interoperability and connectivity between workflow products. Comprising over 300 members worldwide, the Coalition is the only standards body for this specific software market. In creating the WfMC Standards Reference Model, the Coalition followed an approach that has proved its importance in other areas of technology, most notably the ISO seven-layer reference model for computer communications.
Next-Level Hyper-Converged and Software-Defined Storage Solutions Combine State-of-the-Art Huawei FusionServers with Proven DataCore Software
The Huawei DataCore software-defined storage solution enables customers to maximize the value of their storage investments, current and future. To help you drive the most value from those investments, Huawei has partnered with DataCore to consolidate disparate storage systems with unified management and a comprehensive set of data services. Additionally, Huawei's FusionServer and OceanStor systems can be easily integrated with existing storage from a variety of vendors, including Dell, EMC, Hitachi, HP, IBM and NetApp, using DataCore's comprehensive software-defined storage (SDS) platform. These storage systems can be centrally managed and easily combined into a single pool of storage with different tiers of capacity to improve overall productivity and utilization.
Closed Loop Network Automation for Optimal Resource Allocation via Reinforcem... - Liz Warner
In this talk, we present a closed-loop automation approach to dynamically adjust LLC cache allocation (Intel RDT) between high priority VNFs and BE workloads using reinforcement learning. The results demonstrated improved server utilization while maintaining required service level agreement for high priority VNFs.
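The closed-loop idea described in the abstract can be illustrated with a toy epsilon-greedy agent that picks how many last-level-cache ways to lend to best-effort workloads, rewarded for BE throughput only while the high-priority VNF still meets its SLA. Everything below is an invented simulation for illustration; a real system would act through Intel RDT and read hardware telemetry rather than this made-up latency model.

```python
import random

# Toy closed-loop sketch: an epsilon-greedy bandit chooses an LLC allocation
# for best-effort (BE) workloads. Reward is BE throughput when the VNF's
# (simulated) latency SLA holds, and a large penalty when it is violated.
# The latency/throughput model is invented for illustration only.

ACTIONS = [2, 4, 6, 8]          # cache ways given to BE workloads (of 12)
SLA_LATENCY_MS = 1.0

def measure(be_ways):
    """Simulated telemetry: more BE ways raise BE throughput, but VNF
    latency climbs once the VNF is squeezed below 6 ways."""
    vnf_ways = 12 - be_ways
    vnf_latency = 0.4 + max(0, 6 - vnf_ways) * 0.5
    be_throughput = be_ways * 10.0
    return vnf_latency, be_throughput

def reward(be_ways):
    latency, throughput = measure(be_ways)
    return throughput if latency <= SLA_LATENCY_MS else -100.0

def train(steps=2000, epsilon=0.1, alpha=0.2, seed=1):
    """Epsilon-greedy action-value learning over the allocation choices."""
    random.seed(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(steps):
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(q, key=q.get)
        q[a] += alpha * (reward(a) - q[a])   # incremental value update
    return max(q, key=q.get)

if __name__ == "__main__":
    print("best BE allocation:", train(), "ways")
```

Under this model the agent settles on the largest BE allocation that does not break the VNF's SLA, which is the trade-off the talk's closed-loop automation targets.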
Riverbed - Maximizing Your Cloud Applications' Performance and Availability - RightScale
RightScale Conference Santa Clara 2011: Database and application performance matter little if the application delivery is slow. Companies looking for performance, reliability, and scalability put proven application delivery systems to work in their cloud deployments. Join Raja Srinivasan and Jim Young as they discuss the features and technologies that cutting-edge companies are taking advantage of in their traffic management solutions for rapidly scaling environments. Learn how digital agency, Tenthwave, launched the “Stop Bullying Speak Up” campaign on Facebook using Riverbed Stingray Traffic Manager on RightScale to handle SSL Decryption and optimize cloud performance, and how Riverbed and RightScale have enabled Tenthwave to build repeatable deployments for their online promotions and campaigns.
The Cisco IWAN Application simplifies WAN deployments by providing highly intuitive, policy-based automation. It enables you to realize the benefits of SD-WAN: lower costs, simplified IT, increased security, and optimized application performance.
View the Webcast: http://cs.co/9007BKlEc
Software-Defined WAN is transforming Hybrid WAN networks into simplified, bandwidth efficient, and enterprise-class quality-of-experience deployments. These along with other unique attributes of SD-WAN combine to create a lower total cost of ownership (TCO) than Hybrid WAN alone. Join this webinar to learn the details of how SD-WAN is transforming Hybrid WAN into a solution that delivers a real ROI for your business.
Since the dawn of time, nearly every being has striven for independence. IT professionals work tirelessly toward the same goal: creating solutions that result in greater independence from how technology was used in the past.
Review this presentation to learn how Cloud-Delivered SD-WAN delivers independence from underlying transport, freedom to host applications anywhere, liberty for how services are delivered and choices on how far you extend your wide area network. You'll leave with a better understanding of how to gain your independence from the boundaries of the legacy networks of the past decade.
This webinar, hosted by Scott Raynovich with VeloCloud CEO Sanjay Uppal, demonstrated the power of cloud-delivered SD-WAN, including specific technology from VeloCloud that can enable bandwidth expansion, provide direct optimal access to cloud-based applications, and enable virtual services integration in the cloud and on premises while dramatically improving operational automation.
Cisco Intelligent WAN: Or How to Improve the Branch-Office Experience - Cisco Canada
Session: Cisco Intelligent WAN: or how to improve the branch-office experience
Presenter: Martin Langlois, Technology Solutions Architect
Date: October 27, 2015
Fernando Nunez's ANDICOM 2016 presentation discusses NFV and SDN and outlines use cases of vE-CPE and SD-WAN. He focuses on how combining these two use cases creates a comprehensive and powerful solution and describes the concept of Ensemble SmartWAN (SD-WAN 2.0).
Enterprises continue to implement or evaluate shifting services that were typically hosted in the branch into the cloud. The reasons include creating a leaner branch, taking advantage of increases in broadband Internet bandwidth, and reducing complexity and cost.
This presentation takes a deep dive into the Cloud-Delivered SD-WAN architecture for service chaining. You'll gain an understanding of the architectural differentiation and benefits of this approach and why it offers a superior model for delivering secure, reliable, high-performance service chaining.
Implementing vCPE with OpenStack and Software Defined Networks - PLUMgrid
Service providers and the broader vendor community have made progress in virtualizing key vCPE network functions. Concurrently, there is a strong push to bring these functions to the cloud. This session will discuss how OpenStack is enabling this transformation and the role played by technologies like SDN and NFV. It will also discuss the latest advances in the networking stack of the Linux kernel, which further enable these network functions to run in a fully distributed architecture. Finally, it will tie all these concepts together, proposing a model for implementing virtual CPE services.
Learn how you can streamline your migration to Cisco Intelligent WAN (IWAN) with lab-tested deployment best practices from Verizon Managed Services. Profit from the real-world expertise and valuable insights of this leading WAN solutions provider.
Miss the webcast? Register to view replay here: http://cs.co/9008BPw6A
Real-time Big Data Analytics in the IBM SoftLayer Cloud with VoltDB - VoltDB
Real-time analytics on streaming data is a strategic activity. Enterprises that can tap streaming data to uncover insights and take action faster than their competition gain business advantage. Join John Hugg, Founding Engineer, VoltDB and Pethuru Raj Chelliah and Skylab Vanga, Infrastructure Architect and Specialists, IBM SoftLayer to learn how VoltDB enables high performance and real-time big data analytics in the IBM SoftLayer cloud.
CA Unified Infrastructure Management for z Systems: Get a Holistic View of Yo... - CA Technologies
Discover how CA Unified Infrastructure Management for z Systems helps you gain a holistic view of business services that span mobile to mainframe. Whether you’re part of your organization’s central IT operations team or a seasoned mainframe expert, you’ll want to join us for this in-depth session and see how mainframe storage, network, z/OS® and z/VM® metrics can now be fed into this powerful single pane of glass environment using these lightweight, easy to install probes. Learn how to build custom dashboards and set alerts with the useful alarms that can be used out of the box. Don’t miss this opportunity to discover how you can empower your IT operations staff to monitor your mainframe as part of your overall IT infrastructure, freeing up z Systems® specialists to resolve issues more quickly and lower your overall MTTR.
For more information, please visit http://cainc.to/Nv2VOe
Application Lifecycle Management for Multivalue Customers - Rocket Software
In this presentation, you will learn about Rocket ALM Solutions. We will cover the reasons that successful software developers implement comprehensive ALM solutions, the features that such a solution should provide, and the capabilities that Rocket ALM solutions offer to Rocket MV customers.
FlexPod delivers new integrated-infrastructure validated designs with NetApp All-Flash and Cisco ACI, providing new levels of performance and the ability to meet business objectives.
Jacob Rapp
HP
Application Driven SDN
Technology Track Session
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
In a world where business has become increasingly distributed, and with our relationship with data having changed, we need to expand the way we look at IT innovation, from focusing solely on the data center to considering the requirements, challenges and costs associated with the edge. This presentation focuses on extending data center investments to all remote sites, and on new opportunities to connect IT with today's business requirements through a software-defined edge.
Building Efficient Edge Nodes for Content Delivery Networks - Rebekah Rodriguez
Supermicro, Intel®, and Varnish are delivering an optimized CDN solution built with the Intel Xeon-D processor in a Supermicro Superserver running Varnish Enterprise. This solution delivers strong performance in a compact form factor with low idle power and excellent performance per watt.
Join Supermicro, Intel, and Varnish experts as they discuss their collaboration and how their respective technologies work together to improve the performance and lower the TCO of an edge caching server.
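The economics of an edge caching node come down to how many repeat requests it can serve without touching the origin. A minimal LRU cache shows the core mechanism; this is an illustrative sketch only and bears no relation to Varnish Enterprise's actual (far more sophisticated) caching engine.

```python
from collections import OrderedDict

# Minimal LRU edge cache sketch: serve repeat requests from memory and
# evict the least recently used object when capacity is exceeded.
# Illustrative only; not how Varnish is implemented.

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency
        self.hits = 0
        self.misses = 0

    def get(self, url, fetch_origin):
        if url in self.store:
            self.store.move_to_end(url)        # mark as recently used
            self.hits += 1
            return self.store[url]
        self.misses += 1
        body = fetch_origin(url)               # cache miss: go to origin
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        return body

if __name__ == "__main__":
    cache = LRUCache(capacity=2)
    origin = lambda u: f"<body of {u}>"
    for u in ["/a", "/b", "/a", "/c", "/a", "/b"]:
        cache.get(u, origin)
    print("hits:", cache.hits, "misses:", cache.misses)  # hits: 2 misses: 4
```

Every hit avoided an origin round trip, which is exactly the metric (hit ratio per watt and per rack unit) that hardware/software co-design of an edge node tries to maximize.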
OOW16 - Oracle E-Business Suite: What's New in Release 12.2 Beyond Online Pat... - vasuballa
Learn more about Oracle E-Business Suite's product roadmap of recent releases and future plans to deliver new capabilities for years to come. This session covers what's new in Oracle E-Business Suite 12.2 beyond online patching, including functional enhancements and user experience innovation. Gain an understanding of the functional and user experience enhancements that are available, as input for planning how to further leverage Oracle E-Business Suite to meet your company's needs.
User adoption. Geo-replication. Slow performance. Custom-code troubleshooting. Office 365 migration. These are some of the challenges you face with your SharePoint environment, but did you know that they all have something in common? Application performance infrastructure is essential to delivering the best SharePoint user experience, regardless of where and how SharePoint is deployed.
OPEN SOURCE TECHNOLOGY: Docker Containers on IBM Bluemix - DA SILVA, MBA
This is a recorded Webinar from Aug 04, 2015, covering the following topics:
- WHAT IS BLUEMIX
- WHAT IS DOCKER
- LIVE DEMO: Docker containers on Bluemix
Register today for an IBM Cloud Webinar: http://www.ibmcloudwebinars.com
Get updated and join our Linkedin Group:
https://www.linkedin.com/groups/IBM-Cloud-Webinars-8333586/about
Please, feel free to reach out if you have any queries:
raphaelda@ie.ibm.com
@raphaelsilvada
https://ie.linkedin.com/in/raphaelsilvada
Similar to INSPIRIT - Riverbed - Data Protection and Disaster Recovery
Presentation delivered by INSPIRIT's technical manager, Anderson Carvalho, during the 20th edition of CNASI SP, focusing on IT360, a product from ManageEngine.
For more information about the product: http://migre.me/5Zd6k
In Forrester's 60-criteria evaluation of community platform vendors, we found that Lithium Technologies and Jive Software led the pack because of their mature tool sets and depth of services offered.
This overview created by TrustWave gives you a closer look at the company's technology solutions.
To get access to TrustWave products, contact INSPIRIT by email at leads@inspirit.com.br
Let's start by level-setting the fundamental challenge around disaster recovery today. On one side we have continuous growth of data and the storage required to support it. I don't have to tell you this, you see it yourself every day, but to put it in perspective with some recent analyst numbers: according to IDC, disk storage revenue grew 20.7% in Q2 2010, meaning you bought 20% more, and now all of you have more storage to manage. Another survey showed that three-quarters of respondents already had at least 50 TB, which might not seem that bad until you start to think about regularly copying it and moving it offsite. And this problem isn't going to slow down, as IDC also predicts 45 times more data in the next 10 years. So there's clearly a growing amount of data out there.

On the flip side, the need to protect this data remains a top priority, as reflected by 73% of firms saying business continuity and disaster recovery was a top IT priority. If you think "oh, it couldn't happen to me," think again; this is a question of when, not if, you'll need a DR plan. Some recent examples: American Eagle Outfitters couldn't sell anything online, give store locations or track orders for 8 days this summer. Virgin Blue, the Australian arm of Virgin Airlines, had its entire reservation system offline for 21 hours, stranding and angering passengers. Most important, Facebook was recently offline for two and a half hours! All of these had real costs in lost transactions and lost reputation with their customers.

No offense to those organizations, but it shows how errors, failures, and man-made and natural disasters can cause serious downtime if your DR plan isn't complete, fully tested, and ready to go when you need it.
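The "45 times more data in the next 10 years" prediction quoted above implies a specific sustained growth rate, which a one-line compound-growth calculation makes concrete:

```python
# Sanity check on the IDC figure quoted above: 45x more data in 10 years
# implies a compound annual growth rate of 45**(1/10) - 1.

growth_factor = 45
years = 10
cagr = growth_factor ** (1 / years) - 1
print(f"implied annual data growth: {cagr:.1%}")  # about 46% per year
```

In other words, the prediction amounts to data volumes growing roughly 46% every year, which is why DR plans sized for today's data fall behind so quickly.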
There are four common challenges in managing storage today.

First is the growth problem we've already discussed. The need for replication throughput is increasing much faster than the cost of bandwidth is dropping, and there aren't any more hours in the day.

Second, the window each day that can be dedicated to backup and replication is shrinking, if you aren't already a 24-hour business. At the same time, service level agreements are getting more stringent, allowing even less downtime, planned or not.

Third, although local mirroring and backup help with local errors or contained hardware failures, they don't address a site disaster that might take an entire location offline. To be fully protected, you need a reliable, cost-efficient, secure mechanism to get your data copied offsite and ready to restore or go live in an alternate data center.

Last, getting more value out of your existing infrastructure and investments is as important as smoothly integrating new approaches to take advantage of new technologies.
[Try to evoke an emotional reaction to the cons listed in this slide]

Local tape backups: What if a tape from a local backup were lost (fell off the truck) or misplaced? What is the damage to your business (exposure of critical customer data, financial loss, ...)? How quickly can you recover data from tapes? What is the impact of the delay?

Limited data protection: What is the value of the data you are not protecting?

Dedicated WAN links: Ongoing increased network costs (to the SP, network infrastructure provider, ...). Can you continue to scale your DR solution in this manner as the need to protect more data grows?
If you’re not familiar with Gartner’s Hype Cycle, it’s a way of tracking the adoption and maturity of new technologies, compared with the marketing hype around them.

Some great news here is that WAN optimization is now entering the Plateau of Productivity for business continuity and disaster recovery use cases. This means it’s solid technology that does what it promises, and it delivers real value in the real world. WAN optimization can solve a number of the challenges we’ve discussed, transforming your WAN from a barrier into an enabler and giving you far more capability to realize the DR strategy and results you require.
Limited bandwidth can really constrain your ability to back up or replicate a lot of data over the WAN. Your DR data transfers trickle through the small pipes from branch offices to the data center, but it’s usually far too expensive to upgrade every connection out there. You’d be signing up for an ongoing monthly operating expense just for more bandwidth, when really you could be using what you already have more efficiently. What can you do about this?

Deploy Steelhead appliances so you can leverage the network instead of physically shuffling tapes to move data offsite. And if you’re already doing WAN-based backup or replication, you can protect more data, more often (for a better RPO) and recover faster (for a better RTO) across the WAN with Adaptive Data Streamlining on bandwidth-constrained connections. We can remove 60-95% of the traffic by essentially deduplicating the WAN: recognizing data that’s been sent before and serving it locally from the Steelhead in your data center. Only new data goes across the network, and it is also compressed for additional savings. Our unique adaptive SDR feature can self-tune each connection dynamically, depending on how much pressure it’s receiving, to increase throughput versus reduce bandwidth requirements. Data reduction can also be done on disk (more reduction) or in memory (faster) for best results.
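[Optional for technical audiences] The "only new data crosses the WAN" idea can be shown with a minimal sketch of content-addressed deduplication. This illustrates the general technique, not Riverbed’s actual SDR implementation; the fixed chunk size and function names are assumptions for the example.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity; real systems vary chunk boundaries


def replicate(data: bytes, seen: set) -> tuple[int, int]:
    """Return (bytes offered for replication, bytes actually sent over the WAN)."""
    sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).digest()
        if digest not in seen:
            # New data: the chunk itself must cross the WAN.
            seen.add(digest)
            sent += len(chunk)
        # Otherwise the remote appliance already holds this chunk;
        # only a small reference (the digest) needs to be sent.
    return len(data), sent


seen: set = set()
payload = b"A" * 8192 + b"B" * 4096          # 12 KiB, but the A-chunk repeats
offered, sent = replicate(payload, seen)     # first pass: only unique chunks sent
offered2, sent2 = replicate(payload, seen)   # second pass: everything is a duplicate
```

On the first pass only the two unique chunks travel (8 KiB of 12 KiB); on the second pass nothing does, which is why repeated backup jobs see the largest reductions.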
A different problem can occur in some environments, typically between data centers, where you may have enough rated bandwidth capacity to keep up with the changing data, but due to high latency and/or packet loss and retransmission, you aren’t able to use that capacity effectively. There’s a certain amount of management overhead and error correction in most storage protocols, and the farther you need to go, the more this slows things down. And remember, sufficient distance is an essential consideration in DR planning, to ensure no regional disaster affects both your primary and recovery sites.

Here again, Steelhead appliances can solve the issue, with Transport Streamlining delivering maximum performance by enabling data transfer to ramp up much faster and maintain peak throughput at the full rated capacity, even in the face of congestion and loss. This lets you fill the pipe and flood data from the SAN onto and across the WAN up to 50x faster. This is especially important for quickly draining data from the storage array or software replication cache, to avoid failing asynchronous replication jobs.
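[Optional for technical audiences] Why latency caps throughput even on a fat pipe follows from standard TCP arithmetic: a single flow can never exceed its window size divided by the round-trip time. The figures below are illustrative examples, not measurements of any particular product.

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for one TCP flow: window / RTT, expressed in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6


# A default 64 KiB window over an 80 ms coast-to-coast link:
slow = max_tcp_throughput_mbps(64 * 1024, 80)        # ~6.5 Mbps, no matter how big the pipe
# A scaled 4 MiB window (the kind of tuning WAN optimizers apply) on the same link:
fast = max_tcp_throughput_mbps(4 * 1024 * 1024, 80)  # ~419 Mbps
```

So an OC-3 or Gigabit link between data centers can sit mostly idle if the transport layer is left untuned, which is the gap Transport Streamlining is aimed at.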
Another common issue is that few companies can afford the luxury of a dedicated network for DR purposes, and usually need to run business applications in the same pipe with DR traffic. Shrinking or non-existent backup windows caused by 24x7 business mean you need to find a way for both types of traffic to coexist, or you’ll face unhappy end users and potentially fail to meet your data protection service level agreements (SLAs) at the same time.

Riverbed is fully compatible with, and also directly offers, powerful QoS tools to help you prioritize traffic on shared pipes. Depending on your requirements, you may wish to dedicate more bandwidth to either DR or applications in times of congestion. Our Hierarchical Fair Service Curve (HFSC) implementation uses an advanced algorithm to maximize results and minimize delays, guaranteeing the specified performance levels. It’s easily configured to recognize different categories of traffic and set up appropriate policies for groups of sites, giving you full control.
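[Optional for technical audiences] The effect of guaranteed minimums under congestion can be shown with a toy weighted-share calculation. This is only an illustration of the idea, not the HFSC algorithm itself; the class names and numbers are assumptions.

```python
def allocate(link_mbps: float, guarantees: dict) -> dict:
    """Give each traffic class its guaranteed minimum bandwidth, then split
    any leftover link capacity in proportion to those guarantees."""
    total_min = sum(guarantees.values())
    assert total_min <= link_mbps, "guarantees oversubscribe the link"
    leftover = link_mbps - total_min
    return {cls: g + leftover * g / total_min for cls, g in guarantees.items()}


# A congested 100 Mbps link shared by replication and interactive traffic:
shares = allocate(100, {"replication": 40, "voip": 10, "apps": 30})
```

Here replication keeps at least 40 Mbps even when applications surge, and unused headroom is redistributed rather than wasted, which is the behavior the DR/application coexistence story depends on.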
EMC is a very close partner of Riverbed. Their E-Lab has qualified a wide range of configurations of Steelhead appliances with EMC replication tools and storage platforms.

We have jointly developed some unique optimizations for SRDF and FCIP, giving even better results in these environments. In addition, SRDF optimization automatically configures the array so its native compression doesn’t interfere, saving you time and a service call in getting it up and running smoothly.

Through the EMC Select program, there are also a number of EMC-specific Steelhead models available for direct purchase from EMC and its channel partners.

It’s worth noting that even Data Domain replication benefits from Riverbed WAN optimization, for even better results.

This is a small subset of our joint customers, but you can read more in some of our case studies on optimizing EMC.
IBM partners with Riverbed on multiple levels, both as a storage vendor and as a systems integrator. Riverbed has been qualified with XIV Remote Mirroring and has shown great results for Global Mirror and many TSM customers.

These real-world results show 83% bandwidth reduction for XIV.
NetApp is a very close partner of Riverbed. Although NetApp does not perform qualification of WAN optimization appliances, we work very well with NetApp replication tools and storage platforms. Riverbed delivers optimization above and beyond native SnapMirror compression and deduplication for even better performance, without adding load to the filer head, whose cycles are better spent on I/O processing.

This is a small subset of our joint customers, but you can read more in some of our case studies on optimizing NetApp.
HDS is another great new partner of Riverbed. We’ve worked together to certify HUR and TrueCopy on their SAN storage, as well as Hitachi NAS replication.

Shown here are the results of a performance test in which replication completed 41x faster, with an amazing 99% reduction in bandwidth requirements.

HDS also uses Steelhead appliances in their own environment!
Dell’s EqualLogic line, as well as Dell’s OEM/resale of EMC storage, adds further strength to our storage partnerships. We are participating in their Remote Office/Branch Office (ROBO) programs in the field.

Shown here is a performance test in which replication completed 17x faster. We have a number of joint customers and case studies with more detail.
[Animation: show the DR scenario first – Steelheads appear, then a block appears ("Steelheads solve half the problem; difficult to grow," etc.). Then bring in the tape/truck, etc. Don’t make the Steelheads go away.]

We saw an interesting opportunity at the intersection of cloud storage and what was happening in the world of data protection. Enterprises were getting tired of supporting (or were never able to support in the first place) a dedicated off-site facility for data backup. And given that data is continually growing, it never gets cheaper.

[build] For those who went the route of a tape strategy, recovery from an event is just as painful as the backups were in the first place. In short, enterprises weren’t meeting their business requirements.

[build] For those using a disk-to-disk strategy, provisioning enough equipment and constantly buying more storage is painful and expensive.

We knew there must be a better way.
[For presenter: understanding the animation in this slide]
- You start with your existing infrastructure: the servers you want to back up and the tools you usually use to do that (NetBackup, TSM, etc.).
- [build] Add the Riverbed CSA as the TARGET for your existing infrastructure – no rip and replace.
- [build] When you back up to the CSA, Riverbed automatically dedupes the data, typically achieving 20x to 50x dedupe rates. With local disk, we store enough data for recovery of recent information. This provides LAN performance for the most likely restores.
- [build] We then write this data to the cloud of your choice. We write to the cloud using REST, the object-based language of cloud storage. You do not need to change your infrastructure to support this. Cloud storage becomes even cheaper now, since the low-cost, elastic storage used is 1/20th to 1/50th of what you’d normally use.
- [build] Restores from the cloud are much faster too, since only deduped data moves over the WAN, and Riverbed’s optimizations make it more efficient as well.

Backup performance: inline dedupe via a CIFS interface; can scale with CPU; NFS and OST to follow.
Restore performance: unparalleled fast LAN restore, or restore in the cloud. The local dedupe store holds 0% to ~10% of the total (enough for a full restore); the cloud holds 100% of the data (deduped).
Retention: unlimited elastic retention with dedupe and cloud storage.
Disaster recovery: built into the solution.
Dedupe everywhere: 20x–50x optimized storage, network, and cloud storage.
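[Optional for technical audiences] The target-side flow described above can be sketched as a toy model: backup data is chunked and deduplicated, a recent-data cache is kept on local "disk," and only unique chunks are written once to the object store. The dict stands in for a REST PUT to cloud storage; all class and method names are illustrative, not the CSA’s actual interface.

```python
import hashlib


class BackupTarget:
    """Toy dedupe target: local cache for recent data, object store for all chunks."""

    CHUNK = 4096

    def __init__(self):
        self.cloud = {}        # stands in for PUT /bucket/<digest> to object storage
        self.local_cache = {}  # recent chunks kept on local disk for fast restores

    def backup(self, data: bytes) -> int:
        """Ingest a backup stream; return bytes actually uploaded to the cloud."""
        uploaded = 0
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            key = hashlib.sha256(chunk).hexdigest()
            self.local_cache[key] = chunk
            if key not in self.cloud:
                # Only unique chunks cross the WAN and consume cloud storage.
                self.cloud[key] = chunk
                uploaded += len(chunk)
        return uploaded


t = BackupTarget()
first = t.backup(b"x" * 16384)   # four identical chunks: one unique upload
second = t.backup(b"x" * 16384)  # a repeat backup job: nothing new to upload
```

This is why the cloud bill scales with the deduped footprint (here a single 4 KiB chunk for 32 KiB of backups) rather than with raw backup volume.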
Let’s switch gears to another challenge. As you know, Riverbed has the unique position of sitting at the intersection of applications, networks, and storage. The storage industry has a large set of challenges that would benefit from new forms of optimization. In an ideal world, that optimization would happen without a rip-and-replace of what enterprises already have in place.

Look at these four challenges that enterprises face in relation to their storage:

Growth – storage growth shows no signs of slowing down, with a projected 44x increase in data over the next decade. That means a lot of extra space, cost, and bandwidth to support it.

Performance – how do you complete all the maintenance and protection operations you need to, given that the time in a day won’t increase by 44x even though storage will?

Protection – can you get that data offsite, protect it, and of course recover quickly when you need to?

And finally, Integration – can you accomplish these things by augmenting the current tools and processes you have in place?
Traffic on the road doesn’t build.

Key industry-unique cloud enablers:
- Deduplicated data in flight (WAN): optimize bandwidth costs for data both into and out of the cloud.
- Deduplicated data at rest (storage): optimize storage costs for the on-premise and in-cloud storage footprint.
- Unmatched LAN-like restore performance, even from the cloud.
- Most importantly, no rip and replace – it just makes things better (Riverbed’s traditional positioning). No fork-lift upgrade of NBU, BExec, TSM, Legato, etc. Transparently fits into the existing environment – other approaches require changing the client.

==============================
Below is the text from the original CIO pitch slide. Some of it might be reusable for building out the above.

Riverbed has core technology advantages that run across its product line. These are advantages that will make your business faster and more cost effective.

First, deduplication of data across the WAN. Our technology can eliminate 60-95% of the data typically moving across your WAN. This lets people share more data while you push out WAN upgrades for years.

Protocol acceleration: protocols, both network and application protocols, are chatty and aren’t well designed for today’s WAN environments. That’s true even for web-based applications! Latency combined with these chatty protocols results in applications that perform like an old car. We make those protocols perform like a race car by optimizing the way they communicate across the WAN. Moreover, we do this without changing the way servers or clients operate, meaning there is no integration effort for your IT team to get it operating immediately.

Measure & report: our real-time visibility system gives you the insight to know how well your applications are performing and whether there is a problem. Think of it as the Google Maps for your networked environment: it gives you the ability to see problems before your business runs into them head-first, and to address them proactively before users complain.
Our unique ability to quickly analyze where servers are in your environment, who is accessing them, and where the bottlenecks are means that you can improve performance, enable better consolidation, proactively invest in the right places, and do it all much more effectively.

Enable further consolidation: WAN performance improvements mean that you can consolidate more IT back to the data center or cloud. But for services that need to remain local, Riverbed can still enable consolidation. Our ability to virtualize services right on your branch office WAN optimization appliance means that you can enable consolidation even for branch services that need to stay local. Our partnerships with Microsoft, Check Point, Websense, and more make this a reality. This technology – we call it the Riverbed Services Platform – makes it easy to manage virtualized applications even at the most far-flung branch.
[Animation: start with data in the cloud, and at step 2 blow up the data and end. Add some smaller data in the cloud that gets combined with the data on the appliance for a full restore.]
Riverbed’s approach to WAN optimization is designed with resilience in mind. In DR operations, redundancy is a good thing (redundancy is a good thing, redundancy is a good thing), so there is no single point of failure that can block data protection or recovery.

Steelheads include a number of features and work in a variety of configurations for high availability. Within an appliance, multiple ports, process watchdogs, RAID, fault tolerance, and dual power supplies all keep the box running. You can also cluster in serial, parallel, quad (not shown), or N+1 Interceptor-based configurations for connection-aware failover of WAN optimization to other appliances.
Steelhead appliances also scale smoothly to cost-effectively fit any size requirement, from the smallest offices to the biggest data centers or clouds.

Each family can be upgraded by license key to larger configurations.
Talk about licensing here. Perpetual appliance model plus Cloud License add-ons. What happens when the license is exceeded?
Are we limited on backup products at the front end? Add some backup apps.
And not surprisingly, we’ve seen broad-based interest in this technology as well. Across industries and across company sizes, at companies with very large data sets and smaller ones alike, Whitewater is in use to help accelerate cloud storage.