The document discusses how combining DataCore storage virtualization software with Riverbed WAN optimization appliances can help organizations meet stringent Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for disaster recovery without upgrading to more expensive, higher-bandwidth links between the primary and backup sites. The solution uses DataCore's asynchronous replication to transmit disk block changes from the primary to the backup site over an existing 2 Mbps IP WAN, while Riverbed appliances compress, cache, and optimize the traffic in transit, significantly improving replication speeds and keeping the backup site constantly updated. In testing at a customer site, the solution reduced virtual machine replication times from over 42 hours to just 3 hours.
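As a rough illustration of the mechanism described above, the sketch below compresses a batch of changed blocks before sending them over a 2 Mbps link; the block size and data are assumptions, and real Riverbed appliances also cache and apply protocol optimizations not shown here.

```python
# Minimal sketch (assumed block size and data; not DataCore's or Riverbed's
# actual internals): compressing changed blocks before they cross a thin WAN.
import zlib

def replicate(changed_blocks, wan_bps):
    raw = sum(len(b) for b in changed_blocks)
    compressed = sum(len(zlib.compress(b, 6)) for b in changed_blocks)
    print(f"raw: {raw} B, compressed: {compressed} B "
          f"({raw / compressed:.0f}x smaller), "
          f"transfer: {compressed * 8 / wan_bps:.1f} s on the link")

# 100 changed 64 KiB blocks (zero-filled, so highly compressible) over the
# 2 Mbps IP WAN mentioned in the summary
replicate([bytes(64 * 1024) for _ in range(100)], wan_bps=2_000_000)
```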
CtrlS, the leading data center solution provider, today announced the launch of an innovative, first-of-its-kind solution in India: Disaster Recovery on Demand. CtrlS's DR on Demand framework is built to align with the DR strategies of large and medium enterprises by offering a robust disaster recovery solution at a cost that suits their budgets. CtrlS is using a "scalable, ready to deploy private cloud architecture" for this framework. With this solution, CtrlS now supports the full LAMP and Windows stacks for on-demand disaster recovery services.
Router virtualization can provide benefits like reduced costs and environmental impact. There are two main approaches: hardware-isolated virtual routers (HVRs) with dedicated resources, and software-isolated virtual routers (SVRs) that share resources. HVRs are better suited for high-scale environments like POPs due to their ability to independently scale resources, while SVRs are preferable for low-scale environments like data centers. Cisco IOS XR Software supports HVRs through Secure Domain Routers that provide full isolation of routing instances.
Could the “C” in HPC stand for Cloud? This paper examines aspects of computing important in HPC (compute and network bandwidth, compute and network latency, memory size and bandwidth, I/O, and so on) and how they are affected by various virtualization technologies. For more information on IBM Systems, visit http://ibm.co/RKEeMO.
Visit the official Scribd Channel of IBM India Smarter Computing at http://bit.ly/VwO86R to get access to more documents.
CtrlS: Cloud Solutions for Retail & eCommerce (eTailing India)
The document discusses various topics related to data centers and IT infrastructure including:
1. Metrics for measuring data center efficiency such as PUE (Power Usage Effectiveness) and DCiE (Data Center Infrastructure Efficiency), and strategies for improving efficiency including water-cooled condensers, variable frequency drives, and more efficient server power supplies (a worked example of the two metrics follows this list).
2. CtrlS's data center architecture and disaster recovery solutions including solutions spanning multiple availability zones with synchronous and asynchronous replication for high availability.
3. Trends in IT infrastructure like virtualization, cloud computing, and OpenFlow networking.
4. Managed services solutions, including e-commerce platforms, logistics, CRM, and other applications.
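For reference, PUE is total facility power divided by IT equipment power, and DCiE is its reciprocal; a worked example with assumed facility figures:

```python
# PUE and DCiE from their standard definitions; the facility figures are
# assumed for illustration.
total_facility_kw = 1500.0  # everything entering the data center
it_equipment_kw = 1000.0    # servers, storage, network gear

pue = total_facility_kw / it_equipment_kw    # lower is better, ideal = 1.0
dcie = it_equipment_kw / total_facility_kw   # DCiE = 1 / PUE

print(f"PUE = {pue:.2f}, DCiE = {dcie:.0%}")  # PUE = 1.50, DCiE = 67%
```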
The document discusses methods for accelerating Perforce workspace syncs and reducing network storage usage as digital assets grow rapidly. It introduces IC Manage Views, which uses dynamic virtual workspaces and local caching to achieve near-instant workspace syncs and reduce network storage by 4x. Benchmark results show IC Manage Views delivers files 2x faster than traditional methods through intelligent file redirection that separates reads from writes. IC Manage Views is compatible with existing storage technologies and scales as users and data grow.
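The read/write separation the summary describes can be sketched in a few lines; this toy (with hypothetical names, not IC Manage's implementation) shows why a workspace can appear instantly and fetch content only on first read:

```python
# Toy illustration of read/write separation in a virtual workspace: reads
# fall through to a shared store and are cached locally; writes never touch
# the share. Class and names are hypothetical, not IC Manage's code.
class VirtualWorkspace:
    def __init__(self, shared_store):
        self.shared = shared_store  # stands in for versioned network storage
        self.cache = {}             # local read cache, filled on demand
        self.local_writes = {}      # redirected writes, kept off the share

    def read(self, path):
        if path in self.local_writes:    # locally modified file wins
            return self.local_writes[path]
        if path not in self.cache:       # first access: fetch, then cache
            self.cache[path] = self.shared[path]
        return self.cache[path]

    def write(self, path, data):
        self.local_writes[path] = data   # the shared store is never written

depot = {"//dv/top.v": b"module top; endmodule"}
ws = VirtualWorkspace(depot)             # "syncs" instantly: nothing copied
print(ws.read("//dv/top.v"))             # fetched from the share, cached
ws.write("//dv/top.v", b"// edited")
print(ws.read("//dv/top.v"))             # served from local writes
```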
Data Center Network Trends - Lin Nease (HPDutchWorld)
This document discusses trends in data center networks and strategies for addressing challenges. It describes how legacy data centers can be transformed into next generation data centers through standardization, virtualization, and automation. This allows for reduced costs, better asset utilization, lower power consumption, and higher system uptime. It also discusses various types of data center networks, the growth of 10Gb Ethernet, and topology approaches like using top-of-rack switches. The document outlines a strategy for network automation and virtualizing the data center network through representing it as an inventory of virtual connections.
This white paper from Oracle discusses how organizations can lower their IT costs using Oracle Database 11g Release 2. It claims that Oracle Database 11g Release 2 allows organizations to reduce hardware costs by 5x through server consolidation. It also says Oracle Database 11g Release 2 improves performance by 10x, reduces storage costs by 10x, maximizes availability and security, doubles DBA and developer productivity, simplifies software portfolios, and achieves business value in a quarter of the time. The white paper explains how Oracle Database 11g Release 2 and Oracle Real Application Clusters enable consolidation onto low-cost commodity server grids to reduce hardware costs.
SRDF (Symmetrix Remote Data Facility) is EMC's software solution that provides disaster recovery and business continuity capabilities. It works by duplicating production site data on a physically separate recovery site transparently. This allows fast switchover to the copied data if the primary site becomes unavailable, minimizing downtime. SRDF can operate over IP networks or Fibre Channel, and works with the Symmetrix enterprise storage system and various host platforms. Its benefits include eliminating expensive and inflexible manual backup/restore, improving cost-effectiveness, and providing high availability of critical data and applications.
The document compares the technical architectures, operational requirements, and costs of SRDF and Dataguard for database replication. Some key differences noted are that Dataguard replicates transactions through log shipping while SRDF replicates blocks and may replicate corrupt blocks. Dataguard standbys can be opened for read-only operations more easily while SRDF volumes need to be mounted at the OS level. Dataguard has additional maintenance and monitoring needs while SRDF has lower resource costs. The document also lists some stress testing, operational, and maintenance requirements for a Dataguard configuration.
The document discusses Juniper Networks' WX application acceleration platform. It provides an overview of how WX improves application performance over the WAN by recognizing redundant transmissions and accelerating protocols. It also discusses WX use cases, deployment benefits, product lineup, and how it has been recognized as a leader in the market by analysts like Gartner. Customer case studies demonstrate how WX helps businesses consolidate data centers and improve response times.
Data Center Interconnect Seamlessly through SDN (Felecia Fierro)
Data Center Interconnect, or DCI, has become a hot topic as IT infrastructures transform from islands of connectivity to pools of resources for efficiency purposes. Properly deployed, DCI enables all computing and storage resources to be pooled, regardless of where they physically reside, and it is the quality of this abstraction and the associated visibility that counts.
Enter Pluribus Networks running on Broadcom chipsets
Join us for this On-Demand Webinar where we will discuss why Data Center Interconnect is a key opportunity to simplify any network and how Broadcom chipsets enable this with their industry-leading VXLAN capabilities.
Pluribus Netvisor®-powered switches running on Broadcom include the industry’s most powerful and open DCI technology, VXLAN, which enables all resources across the entire planet to be shared. Along with VXLAN itself, we will explain the role of visibility in a widely distributed VXLAN-based environment (a minimal header-encoding sketch follows the list below).
In this On-Demand Webinar, we'll discuss how DCI running on Broadcom VXLAN can:
Share IT resources and increase utilization of those resources
Provide enterprise scale, simplicity and agility – reducing the cost and complexity of IT
Support modern applications including Converged Infrastructure and VDI
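To make the VNI scaling point concrete, here is a minimal encoding of the standard VXLAN header; this is an illustration from RFC 7348, not Pluribus or Broadcom code:

```python
# The VXLAN header per RFC 7348: 8 bytes carrying a 24-bit VNI, so roughly
# 16 million segments versus 4,096 VLAN IDs.
import struct

def vxlan_header(vni):
    assert 0 <= vni < 2**24, "VNI is a 24-bit segment identifier"
    flags = 0x08  # I-bit set: the VNI field is valid
    return struct.pack("!B3s3sB", flags, b"\x00" * 3,
                       vni.to_bytes(3, "big"), 0)

print(vxlan_header(5000).hex())  # 08 000000 001388 00
```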
Orange Business Services: A Telecom Business Reinvents Itself for the Cloud Era (NetApp)
When your industry’s revenue projections flatten, how can you break away and continue to grow? At Orange Business Services, we decided to reinvent the company by becoming a cloud services provider—entering a market still at the beginning of its growth trajectory.
This white paper evaluates the bandwidth requirements and performance of delivering Citrix XenDesktop virtual desktops to branch offices. It finds that using Citrix Branch Repeater can significantly reduce bandwidth consumption and improve performance. Key findings include:
- Branch Repeater reduces average bandwidth per session by up to 89% for various workflows like Office, browsing, printing, and video.
- It can reduce XenDesktop session launch times on congested WANs by up to 40% and double the number of users able to work simultaneously.
- Print times from virtual desktops to branch printers are reduced by up to 60%.
The paper provides guidance on bandwidth planning and demonstrates how Branch Repeater optimizes WAN usage and improves the end-user experience.
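A quick back-of-the-envelope with the paper's 89% figure shows the sizing impact; the link speed and raw session bandwidth here are assumptions for illustration:

```python
# Back-of-the-envelope sizing with the paper's 89% per-session reduction;
# the link speed and raw per-session bandwidth below are assumed.
link_kbps = 1024.0        # branch WAN link
session_kbps_raw = 150.0  # unoptimized XenDesktop session

session_kbps_opt = session_kbps_raw * (1 - 0.89)
print(f"sessions without Branch Repeater: {link_kbps / session_kbps_raw:.0f}")
print(f"sessions with Branch Repeater:    {link_kbps / session_kbps_opt:.0f}")
```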
This document discusses a presentation about the CA arcserve Unified Data Protection (UDP) solution. The presentation covers:
1) The market need for a unified data protection solution due to the complexity of managing multiple point solutions and the growth of virtual environments.
2) How the CA arcserve UDP solution provides a single, unified platform for backup, replication, disaster recovery, and other data protection tasks across physical and virtual environments.
3) Key capabilities of the CA arcserve UDP solution like global deduplication, virtual standby, assured recovery testing, and a centralized management console (a toy deduplication sketch follows this list).
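For context, global deduplication generally means identical data chunks are stored once across all protected sources; a toy content-addressed sketch, not arcserve's actual chunking or indexing:

```python
# Toy content-addressed deduplication: identical chunks are stored once no
# matter which backup they come from. Illustrative of the general idea only.
import hashlib

store = {}  # chunk digest -> chunk bytes (the "global" pool)

def backup(data, chunk_size=4096):
    refs = []
    for off in range(0, len(data), chunk_size):
        piece = data[off:off + chunk_size]
        key = hashlib.sha256(piece).hexdigest()
        store.setdefault(key, piece)  # stored once across all sources
        refs.append(key)
    return refs                       # a backup is just a list of references

a = backup(b"A" * 16384)              # four identical chunks
b = backup(b"A" * 8192 + b"B" * 8192) # two chunks shared with the first backup
print(len(store), "unique chunks backing", len(a) + len(b), "references")
```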
The document discusses Citrix Branch Repeater, which provides WAN optimization and acceleration for XenDesktop and XenApp traffic. It can improve application launch times by up to 40% and file transfers by up to 6 times. It reduces bandwidth usage while increasing throughput. The product line includes integrated Windows Server appliances as well as plug-ins. It enhances the user experience and allows more users to be supported over existing WAN links.
France Telecom is virtualizing servers using VMware on HP BladeSystems to consolidate 17 data centers into two more efficient centers. This is projected to save 22 million euros over three years through reduced power, maintenance, tax, and space costs. Virtualization has increased server utilization 10-fold from 5% to 50-70%, reduced servers by 50%, cut time to deploy new servers from 4 weeks to hours, and improved disaster recovery from weeks to a maximum of 4 hours.
Reprinted with permission of NCTA, from the 2014 Cable Connection Spring Technical Forum Conference Proceedings. For more information on Cisco cloud solutions, visit: http://www.cisco.com/c/en/us/products/cloud-systems-management/index.html
Data drives innovation in the life sciences. Collaborative teams in biomedical research, pharmacology, academia, government and national laboratories need to quickly and efficiently exchange and process vast amounts of data. New research technologies – in particular, next-generation genomic sequencing – create tens of gigabytes of data for each experimental run. Supporting the movement of these huge data sets, Aspera software provides breakthrough high-speed file transfer across the globe for projects which serve up vast public databases for the study of human genomic variation. Scientific users enjoy familiar Unix-style interfaces, embeddable APIs and user-friendly web and desktop GUIs.
IT Brand Pulse industry brief describing a new approach to configuring virtual networks for virtual machines: layering hypervisor-based virtual networking services on top of hardware-based virtual networking services. The result is more efficient management and lower costs.
The network layer and its requisite analytics can be gating factors in an initiative’s success. Critically important, and often overlooked, is the performance monitoring of business flows that are rooted in the network itself.
We’ll cover how analytics provides the essential link between:
* Resource usage
* Application performance
* Ultimate success of servicing the business.
If your IT organization is deploying these mission-critical modern applications, it is hard to imagine maximizing success without the performance analytics that come from the network itself.
To view the full recorded webinar:
http://www.pluribusnetworks.com/resources/webinar-forrester-research-modern-applications-demand-network-analytics/
Disaster Recovery, Local Operational Recovery, and High Availability (xmeteorite)
InMage provides a single solution for disaster recovery, local operational recovery, and high availability for SAP. It leverages continuous data protection and heterogeneous replication to meet stringent recovery requirements. InMage supports automated recovery of entire SAP environments, including data and applications, on Windows, Linux, and Unix platforms to enable remote or local recovery that is faster and more reliable than conventional backup methods.
This document discusses disaster recovery and workload mobility considerations for connecting data centers. It covers business requirements like high availability, regulatory compliance, and agility. Technical requirements include consistent data availability, sufficient compute resources, and network availability. It then discusses various data replication technologies and their bandwidth needs. Designs for high availability compute clusters within and across data centers are presented. Factors in workload mobility like live migration networks and latency requirements are also covered.
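The bandwidth discussion implies a simple feasibility rule: asynchronous replication can meet a recovery point objective only if the link can drain the site's data change rate. A rough check with assumed figures:

```python
# Quick feasibility check implied by the bandwidth discussion; both figures
# below are assumptions for illustration.
change_gb_per_hour = 20.0  # data churn at the primary site
link_mbps = 100.0          # replication link between data centers

required_mbps = change_gb_per_hour * 8 * 1024 / 3600  # GB/h -> Mb/s
print(f"required ~{required_mbps:.1f} Mb/s of {link_mbps:.0f} Mb/s available")
print("change rate fits" if required_mbps <= link_mbps else "link undersized")
```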
This document discusses distributed data center architectures and disaster recovery strategies. It begins by providing background on the evolution of data centers and then covers key aspects of distributed data center design like replication, high availability, and disaster recovery plans. The objectives of disaster recovery plans, such as recovery point and recovery time objectives, are explained. Different disaster recovery architectures like warm and hot standbys are also summarized.
This document discusses how Citrix CloudBridge can optimize video delivery in XenApp and XenDesktop environments through features like video caching, disk-based compression, and Quality of Service (QoS). Video caching improves performance by serving cached video over LAN speeds. Disk-based compression reduces bandwidth usage by eliminating duplicate video content. QoS allows administrators to classify and prioritize different types of video traffic to control bandwidth utilization. Together these features enhance the user experience and reduce WAN bandwidth consumption of video streams in virtual desktop and application environments.
The document discusses disk I/O performance in SQL Server 2005. It begins with some questions about which queries and RAID configurations would affect disk I/O the most. It then covers the basics of I/O and different RAID levels, their pros and cons. The document provides an overview of monitoring physical and logical disk performance, and offers tips on tuning disk I/O performance when bottlenecks occur. It concludes with resources for further information.
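The pros and cons of RAID levels largely reduce to the standard write-penalty arithmetic; a worked example with assumed disk counts and speeds:

```python
# Standard RAID write-penalty arithmetic: each logical write costs 1 physical
# I/O on RAID 0, 2 on RAID 1/10, and 4 on RAID 5 (read data + parity, write
# data + parity). Disk figures are assumed.
def effective_iops(disks, iops_per_disk, read_frac, write_penalty):
    raw = disks * iops_per_disk
    return raw / (read_frac + (1 - read_frac) * write_penalty)

# 8 disks at 180 IOPS each, 70% read workload
for level, penalty in [("RAID 0", 1), ("RAID 10", 2), ("RAID 5", 4)]:
    print(f"{level}: {effective_iops(8, 180, 0.70, penalty):.0f} IOPS")
```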
This document describes the key features of the CA Arcserve UDP solution. It discusses its unified data protection capabilities including agentless backup for virtual environments, global deduplication across sites with Recovery Point Server, replication between sites, virtual standby for disaster recovery, assured recovery testing, and unified management and reporting. The solution aims to provide simplified and scalable data protection for physical and virtual environments from small to large organizations.
- Marathon Oil Corporation reported third quarter 2003 net income of $281 million, up significantly from $87 million in third quarter 2002.
- Exploration success included discoveries offshore Angola, Equatorial Guinea, Gulf of Mexico and Norway. Production from new fields in the UK and Alaska began.
- Refining throughput reached record levels at Marathon Ashland Petroleum (MAP) facilities. The Cardinal Pipeline and Catlettsburg refinery project advanced.
- Asset sales totaled nearly $1 billion year-to-date, with proceeds reinvested in new acquisitions and operations. Organizational changes aimed to increase efficiency and savings.
Bradley is a year 10 student at Lara Secondary College who enjoys SEPEP. He lives with his parents and brothers. In his spare time, Bradley plays football for the Corio Football Club.
Third Quarter 2004 Earnings Conference Call Transcript (finance4)
The transcript summarizes an earnings call by Anthem Inc. discussing their third quarter 2004 results. Key points include:
- Anthem reported a 23% increase in net income to $1.70 per diluted share for Q3. Membership grew 8% to over 12.7 million.
- Medical cost trends remained stable at 9.5-10.5% for the year. Premium yields were sufficient to cover increases in benefits.
- The company expects overall medical cost trends to remain stable for the rest of 2004 and into 2005. Anthem is committed to disciplined pricing.
The document discusses upcoming changes and goals for a website and organization. It mentions that by the end of February, the website will get a new server and blog design. It also discusses ideas for services for regional chapters like a newsletter, replacing internal pages with an internal social network, and spreading knowledge through a wiki. Later sections discuss social media like Twitter and blogging, providing tips and examples for using these tools.
Second Quarter 2003 Earnings Conference Call Transcript (finance4)
Anthem reported record financial and operational results for Q2 2003. Adjusted net income was $1.30 per share, a 31% increase year-over-year. Operating revenue reached $4.1 billion, also an all-time high. Membership increased 10% year-over-year to 11.7 million members due to growth across customer segments. Anthem raised its full-year 2003 adjusted EPS guidance to $5.10-$5.15 per share, representing 24-25% growth.
This document discusses how interactions among students in virtual learning environments (VLEs) can constitute sociocognitive and autopoietic networks. It analyzes how convergent and divergent tasks contribute to different types of exchanges among students, concluding that convergent tasks favor cooperative, autopoietic exchanges, while divergent tasks favor the establishment of bonds and group identity among members.
This document provides an index of different types of animals categorized into domestic, wild, and marine animals. Domestic animals listed include dogs and cats, while wild animals mentioned are lions and elephants. Marine animals in the index are dolphins and whales.
Carlson Companies is one of the largest privately held compani.docx (wendolynhalbert)
Carlson Companies is one of the largest privately held companies in the United States, with more than 180,000 employees in more than 140 countries. Carlson enterprises include a presence in the marketing, business and leisure travel, and hospitality industries. Its Information Technology (IT) division, Carlson Shared Services, acts as a service provider to its internal clients and consequently must support a spectrum of user applications and services. The IT division uses a centralized data processing model to meet business operational requirements. The central computing environment includes an IBM mainframe and over 50 networked Hewlett-Packard and Sun servers [KRAN04, CLAR02, HIGG02]. The mainframe supports a wide range of applications, including an Oracle financial database, e-mail, Microsoft Exchange, Web, PeopleSoft, and a data warehouse application.
In 2002, the IT division established six goals for assuring that IT services continued to meet the needs of a growing company with heavy reliance on data and applications:
1. Implement an enterprise data warehouse.
2. Build a global network.
3. Move to an enterprise-wide architecture.
4. Establish six-sigma quality for Carlson clients.
5. Facilitate outsourcing and exchange.
6. Leverage existing technology and resources.
The key to meeting these goals was to implement a storage area network (SAN) with a consolidated, centralized database to support mainframe and server applications. Carlson needed a SAN and data center approach that provided a reliable, highly scalable facility to accommodate the increasing demand of its users.
Storage Requirements
Until recently, the central DP shop included separate disk storage for each server, plus that of the mainframe. This dispersed data storage scheme had the advantage of responsiveness; that is, the access time from a server to its data was minimal. However, the data management cost was high. There had to be backup procedures for the storage on each server, as well as management controls to reconcile data distributed throughout the system. The mainframe included an efficient disaster recovery plan to preserve data in the event of major system crashes or other incidents and to get data back online with little or no disruption to the users. No comparable plan existed for the many servers.

As Carlson's databases grow beyond 10 terabytes (TB) of business-critical data, the IT team determined that a comprehensive network storage strategy would be required to manage future growth.
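To see why 10 TB forces the issue, consider rough full-copy times over links of the era (illustrative speeds, protocol overhead ignored):

```python
# Rough full-copy times for 10 TB over assumed link speeds show why
# dispersed per-server storage stops scaling.
data_tb = 10
for name, gbps in [("1 GbE", 1), ("2 Gb/s Fibre Channel", 2), ("10 GbE", 10)]:
    hours = data_tb * 8 * 1000 / gbps / 3600  # TB -> Gb, then seconds -> h
    print(f"{name}: ~{hours:.1f} h to move {data_tb} TB")
```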
Solution
Concept
The existing Carlson server complex made use of Fibre Channel links to achieve communication and backup capabilities among servers. Carlson considered extending this capability to a full-blown Fibr ...
Removing Storage Related Barriers to Server and Desktop Virtualization (DataCore Software)
An IDC Viewpoint Paper: Virtualization is among the technologies that have become increasingly attractive in the current economic climate. Organizations are implementing virtualization solutions to obtain the following benefits: focus on efficiency and cost reduction, simplify management and maintenance, and improve availability and disaster recovery.
The document discusses utilizing cloud services for disaster recovery. Some key points:
- Cloud services can eliminate costs of maintaining secondary hardware offsite for disaster recovery by providing infrastructure, platform, and software as a service options.
- Important considerations for disaster recovery planning include determining an acceptable level of data loss, downtime and physical separation between primary and backup systems.
- Choosing a cloud provider requires assessing their security, compliance with data protection laws, pricing models, and compatibility with existing systems.
- Different data replication strategies like application-level, file-level, or full virtual machine replication may be suitable depending on the data and application.
- Planning is needed for how systems, users, and clients will fail over and access the recovered environment.
The document discusses several key infrastructure trends including virtualization, storage technologies like deduplication and object storage, networking technologies like MAN and wireless mesh, and process trends like SOA implementation and legacy skills. It also provides an analysis of the Israeli market for various infrastructure technologies and recommends several vendors and system integrators.
White paper: Whitewater - Data Storage in the Cloud (Accenture)
The document discusses the advantages of using cloud storage over traditional tape or disk storage methods. It outlines how Riverbed's Whitewater cloud storage gateway addresses key concerns with cloud storage like security, transmission speeds, and data availability. Customers that have implemented Whitewater report significant cost savings from eliminating tape infrastructure costs and storage array purchases and being able to back up more frequently due to increased speeds.
The document discusses how the DataCore SANsymphony-V storage hypervisor can help virtualize business-critical applications without performance issues by managing resources across storage systems. It provides adaptive caching, auto-tiering of storage pools from different disk assets, and synchronous mirroring between fault domains. This allows applications to perform predictably even when virtualized, improves throughput by up to 5 times, and provides 99.999% availability. The storage hypervisor is a better solution than expensive hardware modifications to deal with virtualization issues, and provides benefits like preventing downtime and simplifying management of distributed infrastructure.
This document provides a sample portfolio assessment report analyzing 20 applications for cloud readiness. Key outcomes include an overall assessment of cloud readiness, health, and open source risk for each application. It prioritizes applications for cloud migration and provides recommendations for how to migrate each application and address issues. The report includes an executive summary, portfolio snapshot, cloud readiness assessment, software health assessment, and software composition analysis with vulnerabilities identified. It recommends which applications to migrate in each wave and how to improve software health and reduce open source risks.
IRJET - A Survey on Remote Data Possession Verification Protocol in Cloud Storage (IRJET Journal)
This document summarizes a survey on remote data possession verification protocols for cloud storage. It begins with an abstract describing the problem of verifying integrity of outsourced data files on remote cloud servers. It then provides background on remote data possession verification (RDPV) protocols and discusses related work on ensuring data integrity and supporting dynamic operations. The document describes the system framework, RDPV protocol, use of homomorphic hash functions, and an optimized implementation using an operation record table to efficiently support dynamic operations like modifications. It concludes that the presented efficient and secure RDPV protocol is suitable for cloud storage applications.
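To ground the idea: an RDPV protocol lets a client spot-check that a cloud server still holds its data without downloading it. The toy below uses a finite stock of precomputed challenges; the homomorphic-hash construction the survey covers removes that limitation. Purely illustrative, not the surveyed protocol:

```python
# Toy spot-check in the spirit of RDPV: the owner precomputes a finite stock
# of challenges before outsourcing, then spends one per audit.
import hashlib, os, random

blocks = [os.urandom(256) for _ in range(16)]  # file blocks sent to the cloud

# Owner side, while the data is still local: bind each nonce to one block.
challenges = []
for _ in range(8):
    i, nonce = random.randrange(len(blocks)), os.urandom(16)
    challenges.append((i, nonce, hashlib.sha256(nonce + blocks[i]).digest()))

def cloud_respond(i, nonce):
    return hashlib.sha256(nonce + blocks[i]).digest()  # needs the real block

i, nonce, expected = challenges.pop()  # one audit consumes one challenge
print("block held intact:", cloud_respond(i, nonce) == expected)
```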
When you use IPQ to boost network quality, you boost the user Quality of Experience (QoE) associated with almost any networked application from Microsoft.
This document provides a summary of Riverbed's Steelhead product family of WAN optimization solutions. The Steelhead products use Riverbed's RiOS technology to accelerate application performance and data transfers over hybrid networks. The product family includes Steelhead appliances, Interceptor appliances, a Central Management Console, Virtual Steelhead, Cloud Steelhead, Steelhead Cloud Accelerator, Steelhead Mobile software and controller, and are suitable for deployments of any size from small businesses to large enterprises.
- The document provides information about HTTP-SS, a new technology for more efficiently transmitting web application data that can reduce bandwidth usage by up to 90% and mitigate latency issues. It works by having a clustered power server process requests and transmit data to lightweight clients on various platforms in an optimized way.
- It also describes the IBM LinuxONE Emperor system, a high-performance server designed for running Linux workloads that can support up to 8,000 virtual servers on a single system. It offers scalability, security, availability, and other enterprise-grade qualities of service for critical workloads.
The document discusses the Brocade Virtual ADX, a virtual application delivery controller (ADC) platform. It provides a full-featured ADC system with a virtual footprint that leverages Intel technology. It can run on standard hypervisors and be hosted on x86 servers to provide load balancing, application security, and other services. It uses a distributed multi-core architecture to scale performance elastically. The Brocade Virtual ADX supports various load balancing methods and application health checks. It also provides scripting, global load balancing, and other features to optimize application delivery in virtualized environments.
CtrlS DR on Demand framework provides disaster recovery capabilities at a fraction of the cost of maintaining a full secondary DR site. It ensures critical applications and data can be restored and made available within hours of an outage occurring at the primary site. The framework defines the full DR strategy including architecture, initial setup, data replication processes, and activation of the DR site hosted within CtrlS's private cloud using their network and storage infrastructure and DR specialists. This approach saves over 40% compared to maintaining a traditional cold, warm or hot site DR solution.
This document discusses disaster recovery and workload mobility considerations for connecting data centers. It covers business requirements like high availability, regulatory compliance, and agility. Technical requirements include consistent data availability, sufficient compute resources, and network availability. It then discusses data replication technologies, protocols, and factors affecting replication over networks. Lastly, it covers considerations for application availability across data centers using compute clusters, virtual machine mobility, and network availability.
The document summarizes new features in PSP1 for DataCore's SANsymphony-V10 software-defined storage platform and Virtual SAN product. Key enhancements include improved performance for random write workloads, doubled scalability of Virtual SAN, quality of service controls for multi-tenant environments, chargeback reporting, use of standby nodes to maintain redundancy during maintenance, and integration with Microsoft Azure hybrid cloud storage. The updates bring major benefits in the areas of performance, private clouds, and hybrid cloud storage infrastructures.
For the retail and hospitality industries, managing applications and infrastructure at remote sites is a tough job. Typically, these sites have applications that are run at the location and a few key applications that need to be always running. For a restaurant, it may be point-of-sale and order scheduling. For a hotel, it may be reservations and door lock systems. If applications like these have an outage, the site is out of business, leading to lost sales and poor customer satisfaction.

Companies have tried running key applications in the "cloud" or in corporate data centers, but if there is a problem at the data center or in the WAN connecting the remote sites to the application, then a lot of sites are affected instead of a few. So, companies in these industries are increasingly looking at keeping applications running on-site. DataCore Virtual SAN has an automated, turnkey deployment and provides centralized storage management for many remote sites. It is integrated with Microsoft System Center, which means existing Microsoft management tools can be used to oversee and manage remote sites from a central location. DataCore Virtual SAN includes a comprehensive PowerShell scripting interface with over 250 documented cmdlets and software deployment wizards to automate and deploy with little manual intervention. In addition, DataCore Virtual SAN has extensive instrumentation, making it simple to centrally monitor system storage behavior and performance at remote sites.

Possibly the most important aspect for managing many remote sites is the reliability of DataCore Virtual SAN. It is based on a 10th generation Software-defined Storage platform deployed at over 10,000 customer sites. It is a proven product, which eliminates storage as a failure point.
Whitepaper - Speed up your IT infrastructure (Abhishek Sood)
Network infrastructures are speeding up and your business needs to keep pace. You will not be able to test your network's high-speed performance if you keep hitting traffic jams. Access this white paper to learn how to reduce the limits on your infrastructure to increase overall performance and improve the quality of your storage infrastructure.
VMworld 2013: How to Replace Websphere Application Server (WAS) with TCserver (VMworld)
VMworld 2013
Kaushik Bhattacharya, Pivotal
Michel Bond, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
The document discusses several ways that Riverbed products can optimize VMware environments including:
1) Consolidating servers and desktops at data centers to reduce infrastructure costs while Riverbed accelerates access.
2) Optimizing backup and disaster recovery of virtual machines between sites by accelerating replication.
3) Enabling serverless offices by consolidating file/email servers to data centers and accelerating access with Riverbed.
Westmorland Limited, a successful motorway service station company, experienced rapid growth that strained their IT systems. They commissioned Waterstons to develop a new IT strategy. Waterstons conducted a review and identified needs for new point of sale, accounting, and data integration systems. They developed a four-year phased plan to implement the strategy. This transformed Westmorland's information management, giving them better control of costs and reducing accounting time.
The Stewart Milne Group needed a solution to manage documents and drawings for internal and external users. They implemented a Microsoft SharePoint portal customized by Waterstons. This allows suppliers and builders to access the latest versions of specifications and drawings electronically. Users can now redline documents digitally instead of using hard copies. The portal has streamlined administration and improved traceability of documents. As a result, Stewart Milne can provide a better service to customers.
Newcastle International Airport IT Strategy (michaelking)
Newcastle International Airport underwent major changes that required updating its outdated IT systems. An IT strategy review identified the need for (1) upgraded hardware and infrastructure, (2) new accounting, asset management, and flight information systems, and (3) revised management information systems. This project aims to improve collaboration, reduce costs, eliminate manual data work, and provide better management information and decision making support through a multi-year upgrade of the airport's IT systems and infrastructure.
Waterstons provided strategic IT management and services to Durham Business School over many years. They helped upgrade the school's infrastructure, introduced technologies to enhance virtual learning, and configured systems like Microsoft CRM to improve administration efficiency. These changes supported the school's goal of providing a high quality educational experience and becoming a top European business school. Benefits included an improved learning experience, higher productivity, better knowledge sharing, and a more resilient IT infrastructure. Future plans include expanding the use of CRM and introducing new systems for programme management and key performance indicators.
J. Barbour & Sons implemented a new VoIP telephone system provided by Waterstons across their 3 UK sites to replace aging PBX systems. The new system provided significant cost savings through reduced support costs and eliminated charges for calls between sites. It also improved productivity through faster moves, easier setup of new users, and a standardized single directory. The success of the rollout has led Barbour to plan a global expansion of the VoIP system.
The document discusses potential network architectures using Riverbed products to optimize wide area network (WAN) performance for a school district. It describes the district's current network with over 230 sites on various connection speeds. It then outlines several Riverbed deployment options including in-path, out-of-path, and virtual in-path configurations. Benefits mentioned include increased WAN performance, user productivity, data backup and replication speeds, and cost reductions. Implementation and support processes are also summarized.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
TrustArc Webinar - 2024 Global Privacy Survey (TrustArc)
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community, and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training initiatives. She previously worked on LibreOffice migrations and training courses for several public administrations and private organizations. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager; when not following her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
DataCore Riverbed DR 22 Sep 08
Disaster Recovery White Paper: Reducing the Bandwidth to
Keep Remote Sites Constantly Up-to-date
Storage Virtualization and WAN optimization help organizations achieve more stringent Recovery Time Objectives
(RTO) and Recovery Point Objectives (RPO) without incurring the expense of higher speed inter-site lines.
Author: Mike Beevor – DataCore Software
Co-Authors: Simon Birbeck & Richard O’Connor – Waterstons Ltd.
Although the consolidation from discrete physical servers to virtual servers has made it possible for many organizations
to configure an affordable remote Disaster Recovery (DR) site, the inter-site traffic to keep the DR location current soon
grows to be a problem. The volume of updates from newly created virtual machines at the primary site can rapidly
climb to the point where they exceed the line’s capacity to transmit them. The overload causes the DR site to fall
behind more and more, exposing companies to unacceptable data loss in the event of a disaster.
Some customers’ first thought was to upgrade to higher speed lines. However, after seeing how much more that would cost in infrastructure and recurring leased line rentals, many choose instead to investigate less expensive alternatives to attain a satisfactory Recovery Point Objective (RPO).
In this case study, the IT consultants at Waterstons pondered the scenario and proposed to address these requirements
through a clever combination of storage virtualization and WAN optimization. They employed the built-in remote
replication function in DataCore SANmelody™ software to relay changes in virtual disk blocks from the primary site to
the DR site over an existing 2Mbps IP WAN. These inter-site transmissions were compressed, cached and optimized
inline by inserting Riverbed Steelhead network appliances on either side of the link. The solution succeeded in meeting
the firm’s disaster recovery objectives and proved much more cost-effective than upgrading to higher bandwidth service.
Details of the configuration, observations and analysis are provided below.
Remote Replication via DataCore Storage Virtualization Software
DataCore offers two replication mechanisms to keep DR sites up to date, depending on distance and available
bandwidth. Both functions transparently replicate writes to selected virtual disks issued by any application, operating
system or hypervisor.
When sites are separated by less than 10km using high-speed connections like Fibre Channel, the synchronous
mirroring option keeps local and remote disks in lock-step. Essentially, every I/O to a designated disk is replicated in
real-time to the remote site before it is acknowledged as complete to the host.
As distances stretch further or budgets dictate the use of lower cost WANs, synchronously replicated I/Os experience higher latencies, causing applications to time out or responses to become inordinately long. In this scenario, the use of
DataCore asynchronous remote replication over an IP WAN is necessary.
With asynchronous replication, the primary site acknowledges updates to applications immediately and then makes a
best effort to deliver them to the remote site over the available bandwidth.
There is always the risk of some data loss when restoring operations from the DR location because the primary may not
have sent all of its changes prior to the disaster. But it is far more effective than relying on shipping frequent backups to
restore the DR site and allows for a greater degree of granularity and reduction in RTO.
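To make the distinction concrete, the toy sketch below (hypothetical Python, not DataCore's implementation; the class and method names are ours) models the two acknowledgment behaviours: synchronous mirroring holds the host's acknowledgment until the remote write completes, while asynchronous replication acknowledges at once and queues the write for best-effort delivery.

```python
# Toy model of the two replication modes described above. Hypothetical
# Python, not DataCore's implementation; class and method names are ours.
import queue
import threading

class ReplicatedVolume:
    def __init__(self, send_to_remote, synchronous=True):
        self.send_to_remote = send_to_remote    # callable that ships a write over the link
        self.synchronous = synchronous
        self.backlog = queue.Queue()            # stands in for the AIM buffer
        if not synchronous:
            threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block):
        if self.synchronous:
            # Synchronous mirroring: the remote write must complete
            # before the host sees its I/O acknowledged.
            self.send_to_remote(block)
            return "ack"
        # Asynchronous replication: acknowledge immediately and queue
        # the write for best-effort delivery as bandwidth permits.
        self.backlog.put(block)
        return "ack"

    def _drain(self):
        while True:
            self.send_to_remote(self.backlog.get())   # lags under sustained load
```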
Terminology
The “source volume” is the local production volume an application server writes to and reads from. With
Asynchronous Replication it is understood that the bandwidth between the two sites is limited, and so writes to the local
source volume are acknowledged immediately, and replicated when bandwidth permits, or at a specified instant in time
to the “destination” volume at the DR facility. The destination volume is usually in a “catch-up” mode,
anywhere from a few writes to a few gigabytes worth of writes behind the source volume. How far the destination falls
out of synch (or lags behind) the source volume is determined by the quantity of changes occurring on the source
volume, the bandwidth of the link used for the replication between sites, network QoS (Quality of Service) policies
and/or replication schedules or administrator imposed throttles.
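A back-of-envelope way to reason about this lag (our own approximation, not a DataCore formula): the destination falls further behind whenever the source change rate exceeds the usable replication bandwidth, and it can only catch up once that inequality reverses.

```latex
% Back-of-envelope model (ours, not DataCore's): C = source change rate,
% B = effective replication bandwidth after QoS policies and throttles.
\[
\frac{d(\mathrm{lag})}{dt} \approx C - B,
\qquad
t_{\mathrm{catchup}} \approx \frac{\mathrm{backlog}}{B - C}
\quad \text{(valid only while } B > C\text{)}
\]
```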
Clearly, provisions must be made to keep track of changes to the source volume in order to replicate those changes and
bring the source and destination as close to being synchronous as time and bandwidth permit. There are a number of
ways to do this, ranging from marking changed blocks that need to be replicated, to buffering the changes in a reserved
storage area. DataCore implements this feature in the AIM (“Asynchronous IP Mirroring”) option for their SANmelody
and SANsymphony™ products. For an in-depth review of asynchronous replication and DR, please refer to the DataCore
white paper on “DR and Asynchronous Replication”.
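As an illustration of the first approach, the sketch below (hypothetical, not DataCore's AIM internals) tracks dirty blocks so that only changed data needs to cross the WAN:

```python
# Hypothetical sketch of dirty-block tracking, not DataCore's AIM internals.
class ChangeTracker:
    def __init__(self):
        self.dirty = set()            # block numbers modified since the last cycle

    def on_write(self, block_no):
        self.dirty.add(block_no)      # repeated writes to one block coalesce

    def blocks_to_replicate(self):
        pending, self.dirty = self.dirty, set()
        return sorted(pending)        # only these need to cross the WAN
```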
WAN Optimization from Riverbed Steelhead Appliances
Riverbed’s Steelhead WAN optimization appliances are designed to provide the highest performance for IP-based
traffic between sites, while at the same time, making it easy to deploy, manage and monitor Wide-Area Data Services
(WDS). The underlying software known as RiOS provides an integrated framework for data reduction, TCP
optimization, application level optimization, remote-office file services, and management services to provide a
comprehensive solution for enterprise WDS. RiOS scales across a range of applications and network topologies. In this
case study, one appliance was located at the primary site and another at the DR site at the WAN termination points.
RiOS overcomes the chattiness of transport protocols through transport streamlining. Transport Streamlining is a set of
features that reduce the number of roundtrips necessary to transport information across the WAN while maintaining the
reliability and resiliency of the transport. This is accomplished through a combination of window scaling, intelligent
repacking of payloads, connection management and other protocol optimization techniques. RiOS accomplishes these
improvements while still maintaining TCP as a transport protocol. As a result RiOS adapts to network conditions on the
fly, responding appropriately to events such as congestion or packet loss.
RiOS also provides Application Streamlining allowing additional Layer 7 acceleration to important (but poorly
behaved) protocols through transaction prediction and pre-population features. The application streamlining modules
provide additional performance improvement for applications built on particular facilities such as Microsoft Windows
file systems better known as Common Internet File Systems or CIFS. DataCore uses CIFS to transfer I/Os buffered at
the source. Application streamlining modules eliminate roundtrips that would have been generated by the application
protocol. Reducing roundtrips may be necessary even with a very efficient implementation of TCP, because otherwise
the inefficiency of the application-level protocol can overwhelm any improvements made by the transport layer. Application streamlining eliminates up to 98% of the roundtrips taken by specific applications, delivering a significant improvement in throughput on top of what data streamlining and transport streamlining already provide.
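A rough model (with made-up numbers, purely illustrative) shows why removing roundtrips can matter more than raw bandwidth for a chatty protocol:

```python
# Rough illustration, with made-up numbers, of why application-level
# roundtrips can dominate WAN transfer time even when bandwidth is fixed.
def transfer_time_s(data_mb, roundtrips, rtt_ms, line_kb_per_s):
    serialization = data_mb * 1024 / line_kb_per_s   # time to push the bytes
    chatter = roundtrips * rtt_ms / 1000.0           # time lost waiting on turns
    return serialization + chatter

# A 100 MB transfer on a 240 KB/sec line with a 30 ms round-trip time:
chatty = transfer_time_s(100, 50_000, 30, 240)     # unoptimized CIFS-like chatter
streamlined = transfer_time_s(100, 1_000, 30, 240) # ~98% of roundtrips removed
print(f"chatty: {chatty:.0f}s, streamlined: {streamlined:.0f}s")
# chatty: ~1927s, streamlined: ~457s -- the serialization time is unchanged;
# the gain comes entirely from eliminating protocol turns.
```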
With particular relation to CIFS, RiOS contains approximately a dozen CIFS optimizations for various operations such
as file sharing, folder browsing, accessing files from within other applications, and more. RiOS’ out-of-the-box
acceleration of back-up and replication operations already generated significant performance gains for data transfer jobs
at this location. The traffic recognition capability identifies a large-scale data transfer flowing through a Steelhead
Appliance and applies some specific system optimizations to enhance the throughput and handling of high rate, high
volume back-up data sets. These additional optimizations improve disk utilization while also applying aggressive data reduction and compression algorithms. The resulting throughput enhancement further reduces the lag between the
production and DR sites.
Customer’s Environment
All tests were performed in the customer’s live environment, with production data operating across the two sites. The
customer is a syndicate of salvage companies that specialize in the recovery and safe disposal of vehicles that have been
written off in an accident or similar incident. The customer’s primary competitive advantage is their ability to cost
effectively manage the complete cycle of disposal on behalf of their insurance company clients and to furnish them with
comprehensive reports as to progress and overall performance.
Their elevated DR concern arose after undergoing a business continuity analysis, in which recommendations were made
to enhance the security and resilience of the technical infrastructure. After many of these changes were made, they
experienced performance issues with respect to network and application responsiveness.
For example, client/server response times were affected by scheduled backup jobs which took a long time to complete
and negatively impacted their prime business hours. Additionally, the existing Microsoft cluster, hosting SQL and Exchange 2003 servers, provided business continuity, but at high complexity and cost, particularly with regard to administrative ownership.
The existing BI (Business Intelligence) information is housed on the clustered SQL server and occupies approximately
60GB. Running the SQL Analysis Services cube takes a number of hours to complete and historical data requires a
large degree of management upon its retirement. The customer has a resilient communications environment between the
primary and DR site using a 2Mbps Megastream primary circuit and 1Mbps ADSL failover; both routed through a
Cisco 2611 router. This is routed from the local LAN via an MSA2020 ISA appliance to a DMZ, then through a Cisco
PIX506E to the external facing Cisco 2611.
Current Topology Overview
Methodology
Since the environment was live and production data was being transferred whilst our testing was taking place, it became necessary to ensure that conditions remained identical for all transfers. This was felt to be particularly important in this scenario, since one of the aims of the whitepaper was to demonstrate the improvement in performance in a live environment.
In order to ensure a measure of scientific control, the following methodology was introduced:
• Capture screenshots of current data being transferred across the WAN through DataCore remote replication to record transfer rates and latency times.
• Create 2 Virtual Servers running Windows Server 2003 Enterprise Edition with a size of approximately 12GB.
• Begin replicating these Virtual Machines (VMs) across the link, utilizing a pass-through rule within Riverbed to ensure there is no optimization of the data stream.
• Record a series of screenshots over this period to track transfer rates.
• Create a 3rd Virtual Machine running Windows Server 2003 Enterprise Edition with a size of approximately 12GB.
• Begin replicating the newly created Virtual Machine.
• Remove the pass-through rule on the Riverbed appliance to optimize the remote replication over the WAN.
• Take screenshots of the transfer behaviour over a similar period of time and perform a comparison of throughput rates and latency times.
• Document any change in performance.
It was important that the test Virtual Machines were created from the same clone template and on the same host, albeit as separate virtual machines, both for consistency and to prevent serious degradation of the end-user experience.
Results & Explanations
We chose to undertake the testing between 12pm and 2:30pm to reduce the impact on the end users. The
pair of Riverbed Steelhead appliances were put in place, but configured in “Pass Through” mode to disable any
optimization, as seen in the screenshot below.
The screenshot below shows the transfer rate being generated by the remote replication software before the templates were created. In theory, a 2Mbps (megabits per second) Megastream line should carry up to 240 KB/sec (kilobytes per second). You can see that the aggregate transfer rate across the three replicated volumes is 170 KB/sec (75 KB/sec [highlighted] + 46 KB/sec + 49 KB/sec). The remaining 70 KB/sec is being used by other inter-site traffic sharing the line, including Voice over IP (VoIP) circuits.
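A quick sanity check of these figures (our arithmetic, using the numbers quoted above):

```python
# Our arithmetic on the figures quoted above. The paper treats 240 KB/sec
# as the usable ceiling of the 2Mbps line after protocol overhead.
raw_kb_per_s = 2_000_000 / 8 / 1000      # 2Mbps = 250 KB/sec raw
replication = 75 + 46 + 49               # KB/sec across the three volumes = 170
other_traffic = 240 - replication        # KB/sec left for VoIP and the rest = 70
print(raw_kb_per_s, replication, other_traffic)   # 250.0 170 70
```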
In order to verify that the pass-through rule was working correctly, a screenshot was taken of the Riverbed data-reduction chart. A lack of data reduction shows that no traffic is being optimized by the Riverbed appliance. It is obvious from the flat line in the chart below that this is the case. The chart on the right-hand side is a magnified view of the full chart on the left.
At this point, the first Virtual Machine was created from a template and the DataCore software began to asynchronously
replicate it to the DR site. This is shown by the increase in Src Data (AIM Source Data) from 0 to 2949 MB for the
third Src Volume in the screen shot below.
As is evident from the increasing Time Latency below, competition for the line slows down the other volumes being remotely replicated. Their growing latency measures the disparity between the production and DR sites. Were an incident to occur at the production site at this point, RTO and RPO SLAs could be compromised. This slowing trend continues as the newly created Virtual Machine grows until it reaches the defined 12GB limit.
As you can see, by the time the second Virtual Machine has been created and stored, the latency has reached almost 40 minutes from its initial starting point of 20 minutes, and is expected to increase further. Replication traffic for the rest
of the production site is also beginning to be affected.
It is also possible to see that the transfer rate has remained relatively constant; with more data to transfer over the link at a constant rate, the time to catch up will increase. This has potentially serious consequences for the RTO and RPO objectives of the organization, especially in a regulated DR environment.
At 12:38 we enabled the rule for Riverbed to optimize the CIFS traffic flowing through the appliances. The screenshot
below illustrates the changes made.
The increase in throughput to 1479 KB/sec was evident immediately: an 8x improvement compared with the final screenshot of the previously non-optimized replication streams.
In order to further highlight the impressive performance that DataCore and Riverbed provide, we cloned a third VM from the template and set it to replicate over the link. The screenshot below shows the progress, approximately halfway
through.
Even with the addition of an extra 6GB to the AIM buffer, the latencies remain roughly similar.
It is also relevant to consider that the Riverbed appliance takes a short while to “come up to speed”; since this screenshot was taken a mere 6 minutes after CIFS optimization was enabled, full speed had not yet been achieved.
By 12:51 the clone had completed and the Riverbed appliance was beginning to accelerate the CIFS data well. This is shown by an aggregate throughput of 5100 KB/sec, well in excess of 20x the theoretical maximum achievable previously.
The Riverbed appliance continued to accelerate the AIM transfer rate, as the following screenshots display. The first shows the CIFS traffic passing through the Riverbed appliance, and the subsequent DataCore screenshots show
the transfer rate improvements as the Riverbed reaches full acceleration, as well as the reduction in source buffer size
and a decrease in time latency.
The corresponding DataCore screenshots below further serve to illustrate the excellent performance of this solution.
The previous screenshot was taken at 14:33 and the final VM clone completed its buffering at 12:51. If we subtract the total AIM Src Data figures to find the volume of data transferred across the link in the 102 minutes we left it processing, we arrive at a figure of 20,063MB, or roughly 20GB. The full theoretical and actual transfer calculations are shown below.
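Since the calculation itself appears only in the original screenshots, here is a reconstruction of the arithmetic using the figures quoted above (our own working, shown for convenience):

```python
# Reconstruction of the transfer arithmetic from the figures quoted above.
transferred_mb = 20_063            # AIM Src Data delta between 12:51 and 14:33
elapsed_s = 102 * 60               # the 102-minute observation window
avg_kb_per_s = transferred_mb * 1024 / elapsed_s
print(f"sustained average: {avg_kb_per_s:.0f} KB/sec")          # ~3357 KB/sec
print(f"vs. the 240 KB/sec ceiling: ~{avg_kb_per_s/240:.0f}x")  # ~14x sustained
# Peak throughput reached 5100 KB/sec (over 20x), per the earlier screenshot.
```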