Riverbed provides WAN optimization and data protection solutions using its Steelhead, Cascade, Whitewater, and other products. Whitewater provides a cloud storage gateway that can accelerate backup and recovery to public cloud storage by up to 30x while reducing backup costs by 50% or more. Riverbed has partnerships with EMC and other storage vendors to provide awareness of replication protocols like SRDF, enabling more efficient WAN usage. Customers have reported significant improvements in DR capability and WAN performance for replication workloads without increasing bandwidth using Riverbed solutions.
Software Defined Storage - Open Framework and Intel® Architecture Technologies - Odinot Stanislas
This presentation offers a fairly detailed introduction to the notion of an "SDS Controller", which is, in short, the software layer intended eventually to control all storage technologies (SAN, NAS, distributed storage on disk, flash, and so on) and to expose them to cloud orchestrators, and therefore to applications. Lots of good content.
MT42 The impact of high performance Oracle workloads on the evolution of the ... - Dell EMC World
Growing data volumes, along with innovations in application development, have led to increasing I/O demands that existing architectures cannot meet. Find out how high-performance applications, particularly analytics applications running on a variety of file systems, are being constrained by storage performance, and how Dell EMC's broad portfolio of storage infrastructure can meet their extreme performance demands.
Discover how Dell EMC's performance can help you streamline and improve your entire Oracle environment. Performance and cost comparisons will show that Dell EMC's performance is not just for extreme workloads: it can also help you achieve massive consolidation, simpler data architectures, increased data agility, and reduced management overhead.
Riverbed SteelFusion is the first and only branch converged infrastructure that delivers local performance while enabling data centralization and instant branch recovery. SteelFusion eliminates the headache of branch office IT, consolidating servers and storage into the datacenter without sacrificing the benefits of having servers at the edge. Learn more at: http://riverbed.com/steelfusion/
Enabling the Software Defined Data Center for Hybrid IT - NetApp
Recently, NetApp held a Cloud Breakfast for customers of our High Touch Customer Program. This was a combined presentation from OBS, VMware and NetApp.
Presenters:
Jim Sangster, Senior Director, Solutions Marketing, NetApp - "Cloud for the Hybrid Data Center"
John Gilmartin, Vice President, Cloud Infrastructure Products, VMware - "Next Generation of IT"
Axel Haentjens, Vice President, Marketing and International, Orange Cloud for Business - "NetApp Epic Story OBS"
Tim Waldron, Manager, Cloud Solutions, NetApp EMEA - "Cloud Services – An EMEA Perspective"
NVMe and all-flash systems can solve any performance, floor-space, and energy problem. At least, that is the marketing message many vendors and analysts spread today, but it sounds too good to be true, right?
As always in real life, there is no clear black or white, but there are some circumstances you should be aware of, especially if you intend to leverage these technologies.
You may ask yourself: Do I need to rip and replace my existing storage? What is the best way to integrate both? What benefits do I get?
Join our brief webinar, which also includes a live demo and audience Q&A, so you can get the most out of these technologies, make your storage great again, and discover:
• How to integrate Flash over NVMe in real life
• How your entire application landscape can benefit from Flash/NVMe
Move to Hadoop, Go Faster and Save Millions - Mainframe Legacy Modernization - DataWorks Summit
In spite of recent advances in computing, many core business processes are still batch-oriented and run on mainframes. Annual mainframe costs run to six figures or more per year and potentially grow with capacity needs. To tackle the cost challenge, many organizations have considered or attempted multi-year mainframe migration or re-hosting strategies. Traditional approaches to mainframe elimination call for large initial investments and carry significant risks: it is hard to match mainframe performance and reliability.
Using Hadoop, Sears/MetaScale developed an innovative alternative that enables batch processing to be migrated to Hadoop without the risks, time, and costs of other methods. This solution has been adopted in multiple businesses with excellent results and associated cost savings as mainframes are physically eliminated or downsized: millions of dollars in savings based on MIPS reductions have been seen. A reduction of 200 MIPS can yield $1 million in annual savings, and MetaScale eliminated over 900 MIPS and an entire mainframe system for one Fortune 500 client. This presentation illustrates the reference architecture and approach successfully used by MetaScale to move mainframe processing to the Hadoop platform without altering user-facing business applications.
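The savings figures quoted above imply a rule of thumb of roughly $5,000 per MIPS per year. A quick back-of-the-envelope check, using only the numbers stated in the abstract (the helper function is purely illustrative):

```python
# Rule of thumb from the abstract: a 200 MIPS reduction yields ~$1M/year,
# i.e. about $5,000 per MIPS per year.
SAVINGS_PER_MIPS = 1_000_000 / 200  # $5,000 per MIPS per year

def annual_savings(mips_reduced: int) -> float:
    """Estimated annual savings (USD) for a given MIPS reduction."""
    return mips_reduced * SAVINGS_PER_MIPS

# 200 MIPS reproduces the quoted $1M/year; the 900+ MIPS eliminated for the
# Fortune 500 client therefore corresponds to roughly $4.5M/year.
print(annual_savings(200))
print(annual_savings(900))
```

At this rate, the 900 MIPS elimination cited for the Fortune 500 client maps to about $4.5 million per year, consistent with the "millions of dollars in savings" claim.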
Oracle Database Consolidation with FlexPod on Cisco UCS - NetApp
Cisco and Oracle, as technology front-runners, provide you with the tools you need to optimize your Oracle environments. John McAbel, Senior Product Manager, Oracle Solutions on UCS at Cisco Systems, explains how NetApp and Cisco are providing a flexible infrastructure that helps prepare organizations for today and for future business growth and change.
VMworld 2013: Software-Defined Storage: The VCDX Way - VMworld
VMworld 2013
Wade Holmes VCDX, VMware
Rawlinson Rivera VCDX, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
CloudBridge and NetApp Storage Solutions - The Killer App - NetApp
Among the largest pain points for most businesses are data storage and backup. Learn about the value and best practices of deploying Citrix CloudBridge to help optimize NetApp SnapMirror storage replication.
We cover the IBM solution for HPC. In addition to the hardware and software stack, we show how a rational choice of compilation and runtime parameters can significantly improve the performance of technical computing applications.
Examine the Real Cost of Storing & Analyzing Your M... - RainStor
Are you storing larger-than-necessary quantities of data in your data warehouse, RDBMS, and line-of-business applications? Are you spending a large portion of your budget on Teradata or Netezza, with costs continually climbing as data volumes grow? Are you getting the right ROI for all the data you store in your data warehouses?
Read this deck to find out:
What is the cost of storing your critical Big Data assets?
What workloads are best suited for data warehouses, which for Hadoop, and why?
Advantages of running Hadoop on scale-out NAS.
Importance of Security and Data Governance for critical data assets.
How to maintain data warehouse performance even with high growth rates.
DataCore Software introduction from my "Meet DataCore" webinar. DataCore products include software-defined storage and hyperconverged infrastructure solutions. DataCore has more than 10,000 customers and 30,000+ implementations worldwide.
Cost of Ownership for Hadoop Implementation - Hadoop Summit 2014 - aziksa
This presentation compares the pros and cons of Hadoop implementation in the cloud (such as Hortonworks on AWS), Hadoop as a service from providers such as Amazon EMR and Altiscale, and on-premises installations. It discusses the total cost of ownership for each category of Hadoop implementation and shares a TCO calculator. Costs fall into several categories: 1. hardware/infrastructure, 2. network/communication, 3. license/software, 4. application development/training, 5. ongoing support. The focus is on bringing all hidden and non-hidden costs to visibility. Using the calculator, participants will be able to find the cost of ownership for their own Hadoop cluster and plan better for project implementation and support. It also covers managing risks around vendor viability, loss of intellectual property, and control over technical architecture.
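The five cost categories named above lend themselves to a very simple TCO model. A minimal sketch follows; the category names mirror the abstract, while the sample figures are entirely hypothetical:

```python
def hadoop_tco(costs: dict) -> float:
    """Sum the five cost categories from the abstract into a single TCO figure."""
    categories = [
        "hardware_infrastructure",   # 1. hardware/infrastructure
        "network_communication",     # 2. network/communication
        "license_software",          # 3. license/software
        "app_dev_training",          # 4. application development/training
        "ongoing_support",           # 5. ongoing support
    ]
    return sum(costs[c] for c in categories)

# Hypothetical sample figures, in USD per year (illustration only):
sample = {
    "hardware_infrastructure": 250_000,
    "network_communication": 40_000,
    "license_software": 60_000,
    "app_dev_training": 80_000,
    "ongoing_support": 120_000,
}
print(hadoop_tco(sample))
```

A real calculator would differentiate the categories per deployment model (cloud, hosted service, on-premises), since the same category can be capex in one model and opex in another.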
Klaus Gottschalk from IBM presented this deck at the 2016 HPC Advisory Council Switzerland Conference.
"Last year IBM, together with partners from the OpenPOWER Foundation, won two of the multi-year contracts of the US CORAL program. Within these contracts, IBM is developing an accelerated HPC infrastructure and software development ecosystem that will be a major step towards exascale computing. We believe that the CORAL roadmap will enable a massive pull for the transformation of HPC codes for accelerated systems. The talk will discuss the IBM HPC strategy, explain the OpenPOWER Foundation, and show the IBM OpenPOWER roadmap for CORAL and beyond."
Watch the video presentation: http://wp.me/p3RLHQ-f9x
Learn more: http://e.huawei.com/us/solutions/business-needs/data-center/high-performance-computing
See more talks from the Switzerland HPC Conference:
http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Is it possible accomplishing the national development independent - Fernando Alcoforado
The failure of almost all peripheral and semi-peripheral countries to promote economic and social development must be attributed to the fact that their governments outline strategies for national development that are dissociated from the evolution of the capitalist world-system. In his book Unthinking Social Science, the American sociologist Immanuel Wallerstein states that it is necessary to review the current paradigms of the social sciences and to think differently in the XXI century. Wallerstein argues for the adoption of a new theoretical and methodological framework in social science, based on analysis of the capitalist world-system, to understand how each national system is inserted into it in order to promote economic and social development. This new analysis of a nation's economic system in light of the capitalist world-system, as proposed by Wallerstein, stands in opposition to the current Cartesian approach, which treats the development of the national economic system in isolation, dissociated from any analysis of how the national economy is inserted into the world capitalist system.
THE BEST CASH MANAGEMENT AGENCY IN MEXICO
DOOR-TO-DOOR COLLECTIONS, CALL CENTER & ONLINE 24/7
*Credit and Collections Outsourcing
*In-plant Staff at Your Office
*Consulting Related to Your CASH FLOW
*Nationwide Network of Collection Agents
*Extrajudicial and Legal Collections
*Credit Investigation
*Physical Inventory Counts
*Censuses
*Customer Surveys
*Critical Deliveries and Pickups
*Value-Added and Last-Mile Services
Riverbed SteelHead Family Brochure 10.10.13.
SteelHead is Riverbed's flagship product for WAN acceleration. Riverbed is the world leader in WAN Optimization Controllers (WOC), according to Gartner's Magic Quadrant.
Companies in today's challenging economy need to do more with less. See how the combination of Cisco, NetApp and VMware can help you in your data center.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
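Under the hood, JMeter's Backend Listener ships results to InfluxDB as line-protocol points. The formatting itself is easy to sketch; note that the measurement and tag names below are illustrative only, not the exact keys JMeter emits, and real line protocol also escapes spaces and commas:

```python
def line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Build one InfluxDB line-protocol point: measurement,tags fields timestamp.
    Simplified sketch: assumes values need no escaping or quoting."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical sample of what a load-test datapoint might look like:
point = line_protocol(
    "jmeter",                                       # illustrative measurement name
    {"application": "demo", "transaction": "login"},
    {"avg": 42.5, "count": 10},
    1700000000000000000,                            # nanosecond timestamp
)
print(point)
```

Grafana then queries these points from InfluxDB to render the real-time dashboards shown in the demo.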
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. I have also often seen developers implement features on the front-end just by following the standard rules of a framework, thinking that this is enough to launch the project successfully, and then the project fails. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
4. Riverbed has customers in all industries: Tech/Software, Retail/Consumer, Transportation/Logistics, US and International Government, Healthcare/Bio/Pharma, Energy, A/E/C, Media/Communications, Manufacturing, Professional Services, Financial Services.
5. Riverbed Q2 2011: Market Share Leader. Source: Gartner 2006 - 2011 WOC Advanced Platform Market Share. Note: included within "other" for the given quarter. Note: Q4 2010 number adjusted for inconsistency of Cascade; Gartner will retroactively address.
7. WAN optimization is a mature technology. If you haven't looked into WAN optimization, you're missing out!
8. Comprehensive Steelhead product deployment (diagram): a primary data center (replicating to a secondary data center), branch offices (backing up / replicating to the primary data center), and mobile workers (laptop backup), all connected over the WAN. Components shown include clustered Steelhead Appliances, Steelhead Mobile, Cascade, the Central Management Console, the Steelhead Mobile Controller, the Interceptor, and the Virtual Steelhead, with the secondary data center acting as the recovery location.
11. Qualified for use with: SRDF/A on DMX & V-MAX, Celerra Replicator, MirrorView/A.
Riverbed's products work equally well (and have been accepted) across all industries. Some of the largest companies in the world, and some of the best-known brands, depend on Riverbed to optimize their application performance across the WAN.
I won't go into too much detail here, but just enough so that you can understand what we are offering, and then direct me to the right people in the organization to talk about each of these technologies in depth.

Cascade is a unique performance monitoring tool. It's the only tool on the market that gives you a view of application performance that C-level executives can understand, while at the same time allowing your engineers to troubleshoot very specific problems. IDC estimates that with Cascade, enterprises solve problems up to 83% faster, and at the same time IT can align better with the business. Your network operations team will be interested in this, as will teams engaged in consolidation projects.

Zeus is the leading virtual application delivery controller. It enables enterprises to deliver fast, secure and available applications across any combination of physical, virtual and cloud infrastructures, with a single point of application delivery control and monitoring across all locations. This will help your teams involved in virtualization, cloud computing, and new application development realize improved resiliency, speed, security, and ease of management.

Steelhead is Riverbed's WAN optimization product family. It comes in appliance, virtual, mobile, and cloud flavors and makes every location feel like it is at HQ. It can simultaneously accelerate applications and cut bandwidth costs. It fits across the board in virtualization, consolidation, and DR projects, as well as mobility projects, new application rollouts, and cloud engagements. Typically the networking team runs these tools, but any of these major project influencers might require it.

Whitewater is a cloud storage accelerator. It accelerates backup and restore from the cloud, and enables you to cut data protection costs by up to 50%. It allows your existing infrastructure to seamlessly integrate with new cloud storage offerings while providing deduplication, security, and acceleration. Your storage architects and backup administrators will be interested in this.

Aptimize web content optimization is a software package set up on the web server that dramatically accelerates web-based applications and websites. Aptimize dynamically groups activities for fewer long-distance round trips, compresses images to reduce bandwidth, increases caching for faster repeat visits, and prioritizes actions to give the best possible response time for loading a web page in any browser. Your web app developers and SharePoint administrators will be interested in this.
If you’re not familiar with Gartner’s Hype Cycle, it’s a way of tracking the adoption and maturity of new technologies, compared with the marketing hype around them. Some great news here is that WAN Optimization is now entering the Plateau of Productivity for business continuity and disaster recovery use cases. This means it’s solid technology that does what it promises, and it delivers real value in the real world. WAN optimization can solve a number of the challenges we’ve discussed, by transforming your WAN from a barrier into an enabler and giving you far more capability to realize the DR strategy and results you require.
Riverbed has the most comprehensive WAN optimization solution available. Our technology spans data centers, branch offices, and mobile workers. In an ideal environment, aside from local mirroring of server data for local restore, your disaster recovery plan will include backup of branch office desktops and servers across the WAN to the primary data center, backup of remote workers’ laptops across any connection, and replication between your primary and secondary data centers. These operations can be done with the backup and replication tools you use today, with WAN optimization enabling you to use the network most efficiently. The main product lines are the Steelhead appliance and Steelhead Mobile software, and there are additional options such as the Virtual Steelhead. All of these products accelerate DR traffic and applications up to 50x on the WAN, while reducing bandwidth requirements by 60% to 95%. Let’s look at how Steelhead WAN optimization can help in DR environments, and we’ll return to some of the optional enhancing add-on products later on.
I’m not going to spend a lot of time explaining how our technology works – our sales engineers or professional services teams would be happy to spend time with you on a deep dive into our technology – but I can give you a sense of the key components.

First, we use patented de-duplication techniques to ensure that data is only sent once between any two Steelhead appliances, or between a Steelhead Mobile client and an appliance. Typically we can remove 60% to 95% of the traffic on any WAN link. Removing traffic can improve application performance, but it’s not enough.

The second thing we do is optimize the TCP protocol – all Steelhead appliances are transparent TCP proxies; we set up highly optimized TCP sessions on the fly that accelerate the performance of any TCP-based application being used over a particular link. These two optimizations mean that many of your key applications like SharePoint, SAP NetWeaver, FTP and many others will see significantly better performance right away.

The third key optimization eliminates the inefficient behavior of many key applications. Usually this is caused by hundreds of round trips, or other inefficiencies generated by the applications you rely on most, including Windows and UNIX file sharing, Microsoft Exchange, Oracle Forms, Lotus Notes, MS-SQL and others. By constraining chatty protocols to the LANs, where there is very high bandwidth and very low latency, the number of round trips on your WAN is minimized.

It’s the combination of de-duplication, TCP optimization and application-specific protocol optimization that makes the difference. And of course, Riverbed does it all with the simplest-to-deploy solutions, which means our systems can scale to the world’s largest deployments.
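As a rough illustration of the de-duplication idea just described – each unique piece of data crosses the WAN only once, and repeats are replaced by short references – here is a minimal Python sketch. The fixed chunk size, SHA-256 hashing, and message tuples are assumptions for illustration, not Riverbed’s actual implementation:

```python
import hashlib
import zlib

# Hypothetical sketch of WAN de-duplication: split the stream into
# chunks, and only ship a chunk's bytes if the peer has never seen its
# hash before. Repeats travel as tiny references instead of payload.

CHUNK_SIZE = 4096  # illustrative; real systems use variable-size chunks

def optimize_stream(data: bytes, peer_seen: set) -> list:
    """Build the 'wire' messages for sending `data` to a peer that
    already holds the chunks whose hashes are in `peer_seen`."""
    messages = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in peer_seen:
            # Peer already has this chunk: send only its reference.
            messages.append(("ref", digest))
        else:
            # First sighting: send compressed payload and remember it.
            peer_seen.add(digest)
            messages.append(("data", digest, zlib.compress(chunk)))
    return messages

# Two identical 4 KB chunks: the first ships as data, the second as a ref.
seen = set()
wire = optimize_stream(b"A" * 8192, seen)
```

A later transfer of the same data against the same `seen` set would ship only references, which is where the 60% to 95% traffic reduction claim comes from on redundant workloads.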
EMC is a very close partner of Riverbed. Their E-Lab has qualified a wide range of configurations of Steelhead appliances with EMC replication tools and storage platforms. We have jointly developed some unique optimizations for SRDF and FCIP, giving even better results in these environments. In addition, the SRDF optimization automatically configures the array so that its native compression doesn’t interfere, saving you time and a service call in getting it up and running smoothly. Through the EMC Select program, there are also a number of EMC-specific Steelhead models available for direct purchase from EMC and its channel partners. It’s worth noting that even Data Domain replication benefits from Riverbed WAN optimization, for even better results. This is a small subset of our joint customers, but you can read more in some of our case studies on optimizing EMC environments.
With SAN replication, it’s common to have multiple types of data interleaved within a single replication connection. By using our app-level knowledge of how the transactions are “bucketed” on the wire, we can apply different optimization policies to each type of data. This allows for overall higher throughput by not wasting resources trying to reduce uncompressible data. We can also use our knowledge of these “buckets” to give granular visibility into how much traffic is being transferred for each storage group, which is something that even cheap bandwidth can never give you. And while EMC SRDF is the first protocol we’ve enhanced with this capability, we have the option to provide the same value for similar technologies like NetApp SnapMirror, EMC RecoverPoint, or IBM XIV replication.
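The per-bucket policy idea can be sketched in a few lines. This hypothetical Python example (not SRDF’s actual mechanism; group names and the compressibility threshold are invented for illustration) compresses only data that actually shrinks, and keeps per-group traffic counters for the granular visibility mentioned above:

```python
import os
import zlib
from collections import defaultdict

# Illustrative sketch: apply a different optimization policy to each
# storage-group "bucket" and count per-group traffic for reporting.

per_group_bytes = defaultdict(int)

def is_compressible(sample: bytes, threshold: float = 0.9) -> bool:
    # Skip compression when a sample does not shrink meaningfully,
    # e.g. for already-compressed or encrypted storage traffic.
    return len(zlib.compress(sample)) < threshold * len(sample)

def optimize(group: str, payload: bytes) -> bytes:
    per_group_bytes[group] += len(payload)      # per-bucket visibility
    if is_compressible(payload[:1024]):
        return zlib.compress(payload)           # worth spending CPU here
    return payload                              # pass through untouched

redo_log = b"INSERT INTO t VALUES (1);" * 200   # highly redundant log data
cipher_blob = os.urandom(4096)                  # incompressible ciphertext
```

Running `optimize("oracle_logs", redo_log)` shrinks the redundant log dramatically, while `optimize("encrypted_lun", cipher_blob)` passes the incompressible blob through unchanged instead of burning CPU on it.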
The SRDF/A blade provides:
- SRDF/A header optimization
- T10 DIF header integration
- Participation in SRDF/A flow control to enhance throughput
- Automatic disabling of compression for SRDF/A

Supported transports and platforms: FCIP, SRDF/A UDP, LDVM, EMC VMAX, HP EVA/CA

Additional capabilities:
- Scale: free additional throughput, ASIC-based pass-through traffic forwarding
- DR Wizard
- Granular QoS for storage traffic
- Storage-aware reporting
So both tape and disk backup offer a mixed bag of benefits and drawbacks. Today, the cloud is increasingly being seen as a medium that provides the strengths of tape and disk without all of their drawbacks.

With cloud storage, users only need to purchase capacity to meet their needs at any given time. If demand picks up, instantly purchase more capacity. If demand drops off for any length of time, reduce your capacity quickly and easily. This pay-as-you-go model means no more forward provisioning or inaccurate forecasting. Because of the large economies of scale available to cloud storage providers and the need to pay for only the capacity you need, cloud storage is a lower-cost platform for data protection. With cloud storage, you no longer get the calls in the middle of the night telling you that the tape handler jammed or the library went offline. Cloud service providers agree to SLAs that put the onus on them to resolve availability issues quickly and per your agreements. Finally, the large redundancy built into most public clouds means that HA and DR are simply side benefits of cloud storage.

Unfortunately, in most cases, IT administrators can’t just point their datacenters at the cloud and have everything act like it did using tape or local disk. One reason is WAN latency and limited bandwidth. How do you send large amounts of data to distant locations over the WAN without lag dragging down your processes? Also, once the data leaves a user’s firewall, how can they guarantee that it will be secure from prying eyes? Finally, processes developed for tape or local disk can’t simply be applied verbatim to cloud storage without degradation in efficiency and performance. And how can apps that weren’t designed to talk to the cloud manage data protection with minimal user intervention and accuracy? For the cloud to be compelling, these drawbacks must be overcome. Whitewater helps customers do just that.
Whitewater was designed from the ground up to enable customers to tap into the benefits of cloud storage while avoiding its drawbacks. Whitewater leverages industry-leading deduplication and WAN optimization techniques to dramatically accelerate data transfers between it and the cloud, overcoming both the latency and data size problems. Whitewater was also built to bring access to the cloud without changes to your existing infrastructure. It is fast and easy to deploy. To your backup tools, it looks just like any other local disk target, and it connects directly and quickly to any of the major cloud storage providers. Because of Whitewater’s deduplication and compression technology, data that is transmitted and stored in the cloud is shrunk considerably, greatly reducing both networking and capacity costs for data protection. Finally, by encrypting data both in transit, using SSL v3, and at rest, using AES-256, Whitewater provides dual-level security to ensure data is completely protected. Combined with the multiple layers of security cloud storage providers wrap around their infrastructure, data can be more secure in the cloud than even within a customer’s own datacenter.
Whitewater is specifically designed to interface with all of the most popular backup tools and cloud storage providers. Using a standard CIFS or NFS interface, Whitewater appears to backup tools as just another local disk target. Whitewater has been tested as compatible with tools covering 85% of the backup market, so it can be used in almost all environments. Whitewater also supports the interface APIs of most major cloud storage platforms. This model allows enterprises to quickly take advantage of cloud storage to replace tape, or use it to augment existing data protection techniques. Unlike others, we don’t require you to stop what you’re doing to try out our approach.
[for presenter: Understanding the animation in this slide]
- You start with your existing infrastructure – the servers you want to back up and the tools you usually use to do that (NetBackup, TSM, etc.)
- [build] Add the Riverbed CSA as the TARGET for your existing infrastructure – no rip and replace
- [build] When you back up to the CSA, Riverbed automatically dedupes data, typically achieving 20x to 50x dedupe rates
- With local disk, we store enough data for recovery of recent information. This provides LAN performance for the most likely restores needed
- [build] We then write this data to the cloud of your choice, using REST, the object-based language of cloud storage. You do not need to change your infrastructure to support this
- Cloud storage becomes even cheaper now, since the low-cost, elastic storage used is 1/20th to 1/50th of what you’d normally use
- [build] Restores from the cloud are much faster too, since only deduped data is moving over the WAN, and Riverbed’s optimizations help make it more efficient as well

Backup performance: inline dedupe via CIFS interface; can scale with CPU; NFS and OST to follow
Restore performance: unparalleled fast LAN restore, or restore in the cloud; local dedupe store holds 0% to ~10% of total (enough for full restore); cloud holds 100% of data (deduped)
Retention: unlimited elastic retention with dedupe and cloud storage
Disaster recovery: built into the solution
Dedupe everywhere: 20x–50x optimized storage, network, and cloud storage
Start with data in the cloud and at step 2, blow up data and end. Add some smaller data in the cloud that gets combined with the data on the appliance for a full restore.
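The backup-and-restore flow in these two slides can be sketched in miniature. In this hypothetical Python example (the data structures stand in for the real tiers and are illustrative only), the local dedupe store acts as a cache in front of cloud object storage: restores hit the local store at LAN speed when they can, and pull only deduplicated chunks over the WAN when they must:

```python
import hashlib

# Illustrative model: a backup is a manifest (ordered chunk hashes) plus
# a content-addressed chunk store. Restore prefers the local tier.

def ingest(data: bytes, chunk_size: int = 4096):
    """Split a backup stream into content-addressed chunks."""
    chunks, manifest = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        chunks[digest] = chunk          # duplicates stored only once
        manifest.append(digest)
    return manifest, chunks

def restore(manifest, local_store, cloud_store) -> bytes:
    """Rebuild a backup image from its ordered list of chunk hashes."""
    data = bytearray()
    for digest in manifest:
        if digest in local_store:
            data += local_store[digest]      # fast LAN-side hit
        else:
            chunk = cloud_store[digest]      # WAN fetch, deduped bytes only
            local_store[digest] = chunk      # warm the cache for next time
            data += chunk
    return bytes(data)

payload = b"backup" * 3000
manifest, cloud = ingest(payload)
local = {}                                   # cold local store
image = restore(manifest, local, cloud)      # reassembles the full backup
```

After one cold restore the local store holds every fetched chunk, so a subsequent restore of the same image never touches the WAN, which mirrors the “most likely restores at LAN performance” point in the notes above.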
Whitewater appliances come in two form factors and multiple density/performance combinations to meet a wide variety of data protection needs. The virtual appliance is an ESX-based VM that can be downloaded and deployed quickly to handle data protection in ROBOs or small company datacenters. The 510 and 710 are physical appliances targeting datasets of up to around 10 TB, with transfer speeds of 400 GB/hr to 600 GB/hr. The largest appliance today, the 2010, has throughput of up to 1 TB/hr and is used for datasets of around 20 TB. All feature industry-leading deduplication, advanced compression, and dual-level encryption as standard.