CONTINUOUS APPLICATION AVAILABILITY WITH EMC VPLEX
A growing number of organizations are using EMC VPLEX and other off-the-shelf technologies to ‘stretch’ application processing and data access across distance—for continuous application availability that is practical, affordable, and automatic.
ABSTRACT
Enterprises are consolidating HA and DR practices and technologies to achieve
Continuous Availability (CA) that provides users with uninterrupted access to data
and applications during planned and unplanned outages.
Certified by VMware, and proven to work with Oracle, Microsoft, and other off-the-shelf cluster technologies, EMC Continuous Application Availability Solutions powered by EMC VPLEX provide data coherency across distance for simultaneous access to the same data in multiple locations, enabling active/active CA—without changes to application code or custom integration.
This paper describes how EMC’s deployment models for applications work; how the
assets in your existing application sets can be redeployed to implement pure CA
and near-line CA services; and how to accelerate putting CA to work to protect
mission-critical applications and data for business advantage.
EMC WHITE PAPER
IT TRUST
Increasingly, organizations need more than the ability to minimize or recover from
downtime. They need to be able to trust that the data and applications their
business and users depend on will continue to be available and operate, through
both planned and unplanned outages.
When business is conducted digitally and globally, business operations are by
definition 24x7x365. Beyond the traditional risks to revenue and reputation, in an
always-connected world, even a minor outage has the potential to ‘go public’ and do
damage very quickly.
A recent Forrester Consulting study [1] found ‘increased reliance on technology’ to be the number one factor driving increased risk in their business. And a 2013 global survey [2] of business and IT decision makers concludes that IT trust is an opportunity, as well as a risk.
THE END OF APPLICATION DOWNTIME
As anyone who works with business application owners to schedule planned outages
will attest, getting approval for downtime to perform tasks such as updates,
upgrades, and migrations is increasingly difficult. Users—employees, partners,
customers, and consumers—increasingly expect uninterrupted access to data and IT
services.
As stakes and expectations rise, high availability (HA) and disaster recovery (DR)
are not enough. Increasingly, organizations are recognizing the need for zero
downtime—uninterrupted operation through both unplanned and planned outages.
Figure 1. Ratio of planned to unplanned sources of downtime. Scheduled
events account for approximately 85 percent of application downtime.
[1] How Organizations Are Improving Business Resiliency With Continuous IT Availability, 2013. Forrester Research, Inc.
[2] IT Trust Curve—2013 Global Survey, 2013. EMC Corporation and Vanson Bourne.
LACK OF CONFIDENCE IN DR
Companies make a considerable investment in DR because the potential impact of an extended outage would be devastating. Yet in practice, organizations realize little or no return on this investment.
Experience shows that companies do not leverage their DR solutions for planned outages, which account for 85 percent of application unavailability. Less obvious, perhaps, is the fact that, in practice, the DR solution or site is typically not invoked for most unplanned outages. Short of a complete and protracted data center site loss—the cause of less than one percent of all downtime—the DR solution is rarely used. Instead, the decision is usually to forgo DR failover, even when the workaround restore will take hours or even days.
Why? Most organizations do not have confidence in their ability to recover using their DR solution. When asked about their current application and data availability strategies, 46 percent of respondents in the latest Independent Oracle Users Group (IOUG) survey [3] were less than satisfied.
For one thing, DR is typically architected as an “all or nothing” event. Failover, restart, or restore are manual, requiring a decision to act. Many organizations do not do the testing or updating of their DR plan required to keep pace with changes in their production environment. When an unforeseen outage occurs, they lack confidence in a smooth failover/failback. The fear is that invoking DR will “make it worse” and extend the outage.
In addition to the investment made in a capability that is not used, the vast majority of DR today relies on idle, redundant infrastructure in a recovery site, incurring considerable capital and operational (space, power, maintenance) expense.
CONTINUOUS AVAILABILITY: A NEW APPROACH
For all these reasons, organizations are moving beyond traditional HA and DR to achieve “Continuous Availability.” Continuous Availability (CA) provides users with uninterrupted access to applications and data, enabling operations to continue without interruption or degradation during planned and unplanned outages.
While long the ideal, until recently CA has been prohibitively expensive and complex for all but the most demanding applications. For example, although Tandem and Stratus provide fault-tolerant systems, achieving multi-site CA, able to survive the loss of one site without interruption, has traditionally required custom applications, multi-phased commit logic, and systems integration, making such solutions costly to build, operate, and maintain.
STRETCHING CLUSTERS OVER DISTANCE WITH EMC VPLEX
Now, however, organizations can achieve continuous application availability across heterogeneous environments using commercially available off-the-shelf technology available from multiple vendors. That’s because EMC VPLEX virtual storage delivers data coherency and performance across distance, enabling organizations to “stretch” their existing clusters across dual sites for active/active CA that is much simpler and more affordable. In many instances, CA with VPLEX costs less than the traditional HA and DR solutions most organizations rely on today. What’s more, VPLEX Metro provides CA for all of the block data in an organization, regardless of platform.
SIDEBAR: VPLEX, CERTIFIED AND PROVEN
DELIVERING CA TODAY FOR ORACLE, VMWARE, MICROSOFT, SAP, & MORE
CA solutions, designed and implemented by EMC Global Services leveraging EMC VPLEX integrated with common, off-the-shelf technology from leading solution providers, are enabling CA across data centers today. Integrated CA solutions leveraging VPLEX have been proven in configurations including, but not limited to, the following:
• Oracle Real Application Clusters (RAC) customers have found Extended Oracle RAC with VPLEX Metro an easy-to-deploy way to move from single- to dual-site CA
• VMware has certified EMC VPLEX Metro to work with VMware vSphere Metro Storage Cluster (vMSC) across geographic locations. VPLEX stretches vSphere clusters across active/active datacenters to achieve CA and transparent, non-disruptive data mobility in a cloud computing environment
• Microsoft Failover Clustering and VPLEX have enabled dual-site CA for both Microsoft Hyper-V Server and Microsoft SQL Server deployments
• SAP ERP applications are running today in dual-site CA configurations enabled by VPLEX Metro and Oracle RAC clusters
For more information go to http://www.emc.com/microsites/it-trust/protecting-information-systems-processes.htm
[3] Managing the Always-On Data Center: 2013 IOUG Application and Data Availability Survey, October 7, 2013. Oracle Corporation and Unisphere Research, a division of Information Today, Inc.
STRETCHING CLUSTERS OVER DISTANCE
CLUSTER BUILDING BLOCKS
Today most modern applications are deployed in either two or three tiers. Typically, there is a presentation layer, an application layer, and a data or database layer. To provide high and continuous availability, various forms of clustering are deployed.
SIDEBAR: MOVING FROM HA/DR N+1 ACTIVE/PASSIVE TO CA 2N ACTIVE/ACTIVE
From a technical perspective, EMC’s CA deployment architecture is a 2N solution. That is, everything the application or service needs (N) to be available is actively provided, in parallel, by two providers.
In contrast, traditional HA and DR are N+1 solutions, in which N represents the need—plus (+) a set of redundant, typically passive, backup resources required to be able to restore service. HA N+1 solutions are typically designed to fail over automatically; and DR N+1 solutions are typically set up to be manually invoked. In some modern HA N+1 solutions, failover can happen so fast that users and applications can continue to operate without discernible disruption. Traditionally, with HA and DR, organizations have two N+1 solutions.
The primary difference between CA 2N and HA/DR N+1 architectures is that with CA 2N, both sets of resources are actively serving the application (active/active); if one goes down, the other continues to provide services at the need level.
CA 2N makes active use of all assets and requires no recovery or restore. In the event of the loss of one set of resources, operations continue at 1N on the other site.
DATA LAYER CLUSTERS
In general, data layer clusters are used to provide high and continuous availability for database servers or servers that manage dynamic data. There are three basic types of widespread commercially used clusters, and each has its own use case.
• Wave 1: OS Clusters—Operating system (OS) based clusters were among the first to gain widespread commercial use. They are typically an active/passive (N+1) technology, with two nodes—one active node representing the need, and a passive node representing the +1, or the backup. The backup node usually waits idle in standby mode.
Both nodes are connected to the cluster’s resources, and both are configured to write to the disk resources. A locking mechanism based on the ownership of a quorum is used to control which node owns the cluster’s resources. By agreement, the node that owns the quorum owns the cluster resources and can write to the cluster’s disks. (A minimal sketch of this quorum model appears after this list.)
In the event of the failure of the active node, the standby automatically takes over and brings up the necessary services to take over the role of the active node.
OS-based clusters are typically used for physical servers, although they can be used in a virtual environment, such as IBM AIX LPAR technology or inside a hypervisor cluster.
• Wave 2: Hypervisor Clusters—Hypervisor clusters are more sophisticated than OS clusters because the hypervisor owns and manages locking for many virtual machines running on many physical servers, and some hypervisors can orchestrate failover and restart.
The hypervisor controls locking of the logical disk resources owned by the virtual machines. The VMware vSphere Hypervisor (based on ESXi), for example, uses a number of locking mechanisms. One of them is a disk-locking mechanism embedded in the VMware Virtual Machine File System (VMFS).
VMFS is a clustered file system that leverages shared storage to allow multiple physical hosts to read and write to the same storage simultaneously. VMFS provides on-disk locking to ensure that the same virtual machine is not powered on by multiple servers at the same time. If a physical host fails, the on-disk lock for each virtual machine is released so that virtual machines can be restarted on other physical hosts.
Compared to an OS cluster, VMFS controls write operations to logical volumes from many virtual servers intermixed on one or several LUNs, whereas OS clusters typically manage whole LUNs or volumes.
VMware vSphere High Availability (HA) provides an N+1 (active/passive) architecture for virtual machines, and VMware vSphere Fault Tolerance (FT) provides a 2N (active/active capable) CA architecture for virtual machines.
• Wave 3: Database Clusters—Database clusters are a scale-out architecture where many servers are working on the same database. Locking mechanisms managed by the database allow multiple servers to write to the same database simultaneously. Database clusters are an xN active/active architecture, where x is greater than one. Typically, they are deployed in a range of 1.2N (120% of need) to 2N (twice the need). Some examples of database clusters are Oracle RAC, IBM DB2 PureScale, and Sybase.
It is worth noting that a database cluster can also reside inside a hypervisor cluster, allowing deployment options for physical and virtual servers.
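The quorum-ownership model behind Wave 1 OS clusters can be illustrated with a short sketch. The following is a minimal, hypothetical Python model (the Quorum and ClusterNode names are illustrative, not any vendor's API) of the agreement described above: only the node that owns the quorum may write to the cluster's disks, and when the active node fails, the standby claims the quorum and takes over.

```python
# Toy model of an active/passive (N+1) OS cluster with quorum-based locking.
# Illustrative only: real clusters use SCSI reservations, heartbeats, and
# fencing; the names here (Quorum, ClusterNode) are hypothetical.

class Quorum:
    """Shared quorum resource; by agreement, only one node may own it."""
    def __init__(self):
        self.owner = None

    def claim(self, node_name: str) -> bool:
        # The claim succeeds only if no live node currently owns the quorum.
        if self.owner is None:
            self.owner = node_name
            return True
        return self.owner == node_name

    def release(self) -> None:
        self.owner = None


class ClusterNode:
    def __init__(self, name: str, quorum: Quorum):
        self.name = name
        self.quorum = quorum
        self.alive = True

    def write(self, block: str) -> str:
        # Only the quorum owner may write to the cluster's disks.
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        if self.quorum.owner != self.name:
            raise PermissionError(f"{self.name} does not own the quorum")
        return f"{self.name} wrote: {block}"

    def fail(self) -> None:
        # Node failure releases its quorum ownership (fencing is assumed).
        self.alive = False
        if self.quorum.owner == self.name:
            self.quorum.release()


if __name__ == "__main__":
    quorum = Quorum()
    active = ClusterNode("node-a", quorum)
    standby = ClusterNode("node-b", quorum)

    quorum.claim(active.name)          # node-a starts as the active node
    print(active.write("txn-1"))       # node-a wrote: txn-1

    active.fail()                      # active node fails...
    if quorum.claim(standby.name):     # ...standby claims quorum and takes over
        print(standby.write("txn-2"))  # node-b wrote: txn-2
```

In the stretched deployments described later in this paper, this logic is unchanged; VPLEX simply presents the quorum and data volumes coherently in both sites.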
STATELESS SERVER CLUSTERS
• Stateless Server Clusters—Stateless servers are different from data servers in that they only hold onto transaction data for the duration of a transaction. They are often deployed in the presentation and application tiers.
Typically, multiple servers are deployed in a farm and local load balancing technologies are used to provide HA by distributing transactions based on policies or load. If a server is unresponsive, transactions are directed to another server in the farm.
In a network load balanced cluster, several servers will usually be banded together to provide the need—and a spare server or two is added to the farm for availability. As a result, this deployment architecture provides availability at a relatively lower cost.
Network load balanced clusters are by nature a continuous availability active/active xN architecture, where x is greater than one. Again, x is generally in the range of 1.2N to 2N, or 120-200% of need.
Network load balanced clusters can be used for physical servers or in conjunction with a hypervisor-based cluster for virtual environments.
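To make the N+1, 1.2N, and 2N sizing arithmetic concrete, consider an illustrative, hypothetical example (the numbers are not from the paper): an application that needs eight servers (N = 8) to carry its normal load. A traditional design with an HA N+1 cluster in production (nine servers) plus an N+1 DR copy at the recovery site (another nine servers) deploys 18 servers, roughly half of them idle. A CA 2N deployment places eight active servers in each of two sites (16 in total, all carrying load) and continues at 1N if a site is lost. A network load balanced farm sized at 1.2N would run roughly ten servers split across the two sites. The exact counts depend on the application and availability objectives; the point is that 2N does not have to mean more idle hardware than today's paired N+1 solutions.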
PUTTING IT ALL TOGETHER
The EMC Global Services design and deployment model for CA over VPLEX uses off-the-shelf technology to enable CA applications stretched over two sites. Determining the right deployment architecture depends on existing compute assets, availability objectives, application, and other factors. EMC service professionals have helped companies design hundreds of solutions using the building blocks below. They can help identify crucial decision factors to select the right deployment architectures for each application and a middleware service that is economical and resilient.
At the bottom of the stack is virtually any of the major block disk arrays, one in each site. The next layer up is the VPLEX storage virtualization layer. VPLEX is the component that keeps the arrays in each of the sites in sync.
VPLEX presents the compute layer with a virtualized disk volume that spans the two sites. Think of the virtual disk volume as a RAID-1 mirror where Mirror-A is in one site and Mirror-B is in the other site. A disk-write operation to Mirror-A is mirrored to Mirror-B and a write operation to Mirror-B is mirrored to Mirror-A. Since both sides of the mirror are equal, a read operation can be performed in either site.
To traditional ways of thinking, this can sound both wonderful and dangerous—wonderful, because application stacks can be deployed with a complete set of resources in two sites, and scary, because the question arises: what prevents a simultaneous update to the same disk sector from each of the stacks? The answer—and the appeal of this deployment model—is that it leverages the write locking built into the clustering mechanisms to prevent this from happening.
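The write path just described can be sketched in a few lines. The following is a minimal, hypothetical Python model (not VPLEX internals) in which every write accepted at either site is applied to both mirror legs before it is acknowledged; a single lock stands in for the cluster-level write locking (database, VMFS, or quorum) that prevents simultaneous updates to the same block from the two stacks.

```python
# Toy model of a RAID-1-style virtual volume stretched across two sites.
# Illustrative only: real systems use distributed cache coherency and
# cluster-managed locking, not a single in-process mutex.
import threading

class StretchedVolume:
    def __init__(self):
        self.site_a = {}                  # blocks stored at site A (Mirror-A)
        self.site_b = {}                  # blocks stored at site B (Mirror-B)
        self._lock = threading.Lock()     # stands in for cluster write locking

    def write(self, block_id: int, data: bytes) -> None:
        # The cluster's locking mechanism serializes writers so the two sites
        # cannot update the same block simultaneously; a lock plays that role.
        with self._lock:
            self.site_a[block_id] = data  # write applied to Mirror-A...
            self.site_b[block_id] = data  # ...and synchronously to Mirror-B
        # Only after both legs are updated is the write acknowledged.

    def read(self, block_id: int, local_site: str = "A") -> bytes:
        # Both mirror legs are equal, so reads are served from either site.
        store = self.site_a if local_site == "A" else self.site_b
        return store[block_id]


if __name__ == "__main__":
    vol = StretchedVolume()
    vol.write(42, b"payroll-record")
    assert vol.read(42, "A") == vol.read(42, "B")  # both sites see the same data
```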
At the data layer, using one of the above clustering models, the database cluster is stretched between the sites. Continuing up the processing stack, the model stretches the network load balanced presentation and application farms between sites. Above the compute layer at the distribution layer, which routes incoming traffic to one of the processing sites, the model deploys a global load distributor. Global load distributor solutions are available from Cisco, Citrix, F5, and others.
Other requirements include a data center interconnect (DCI), a common name space, and virtual LAN (VLAN) spanning both sites.
• The DCI is a high-speed dual-path link that connects the sites—typically, it uses two 10Gb Ethernet (or faster) links. The DCI is used for VPLEX disk mirroring, inter-application and inter-cluster communication, and command and control. Usually, user-based transactions do not traverse the DCI.
• The common name space requires DNS services in both sites. A replication mechanism that keeps all of the DNS servers in sync is built into most DNS solutions. Active Directory (AD) is also required in both sites, along with your single-sign-on (SSO) solutions.
• VLAN spanning is required by most of the stretched clustering deployments. Traditionally, VLAN spanning has had a bad reputation and most network architects were taught to 'never do that' because historically network chatter would flood the DCI and little real work could be done. Today, however, modern stretched VLAN technologies such as Cisco's OTV and VPLS from Brocade and other vendors have solved the issues associated with spanned VLANs.
Using the above technologies, an abstracted application stack can be deployed over
two sites. If a component fails, redundancy in the compute clusters and other
technologies will take over and provide the necessary service. And if a whole site
fails, the other site automatically takes over, with transactions automatically routed
by the global load balancing technology to the surviving site.
Figure 2. With VPLEX, organizations can extend the data locking provided
by database clusters across distance for dual-site active/active CA
capability.
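The failover behavior at the distribution layer can also be sketched. Below is a minimal, hypothetical Python model of a global load distributor's routing decision (it is not the configuration syntax of any Cisco, Citrix, or F5 product), showing how transactions are normally spread across both sites and shift entirely to the surviving site when health checks to one site fail.

```python
# Toy global load distributor: route each transaction to a healthy site.
# Illustrative only; commercial GSLB products use DNS, health probes,
# and policy engines rather than this simplified round-robin.
from itertools import cycle

class GlobalLoadDistributor:
    def __init__(self, sites):
        self.health = {site: True for site in sites}
        self._ring = cycle(sites)

    def mark_down(self, site: str) -> None:
        self.health[site] = False         # health checks to this site are failing

    def route(self) -> str:
        # Skip unhealthy sites; with one site down, all traffic goes to the other.
        for _ in range(len(self.health)):
            site = next(self._ring)
            if self.health[site]:
                return site
        raise RuntimeError("no healthy site available")


if __name__ == "__main__":
    gld = GlobalLoadDistributor(["site-a", "site-b"])
    print([gld.route() for _ in range(4)])  # alternates between the two sites
    gld.mark_down("site-a")                 # site A suffers an outage
    print([gld.route() for _ in range(4)])  # all transactions now go to site-b
```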
CONTINUOUS APPLICATION AVAILABILITY
Using the building blocks described above, organizations can achieve true CA for their applications. But what are the options if the existing database clustering technology supports only an active/passive model?
First, CA architectures are really deployment architectures; the underlying application architecture is unchanged. In a CA deployment, the compute and infrastructure assets are deployed in two different sites and the application is abstracted from an individual site. As a result, the number of components for the two active sites is greater than for one active site, but it can be less than for an active site with HA technologies plus a backup DR site.
Second, the VPLEX technology presents virtual data disks in both sites in a 2N or CA
mode. Network load balanced clusters are CA, whether deployed in one site or two.
Finally, when it comes to clustering technologies for the database servers, database clusters and VMware FT are 2N active/active CA technologies. While stretched OS-based clusters and hypervisor clusters are N+1 HA active/passive technologies, it is possible to create deployment models with CA up and down the stack, or a near-line CA model, with a minimum of CA for the VPLEX-provided data, using various combinations of the cluster models.
To be certain, the current partial CA models do not have all of the benefits of a pure CA deployment, but they are a foundation to build on as more applications become active/active enabled. Another consideration is that improvements in some OS-based clustering models can mask the failover time, maintaining the appearance to the user that the application is “always-on.” Microsoft Always-On SQL and the EMC deployment architecture for SAP are examples of always-on solutions.
COMMON QUESTIONS AND CONCERNS
VPLEX enables a whole new approach to continuous application and database
availability that makes continued operation and access to data and applications
during planned and unplanned outages feasible for a large number of organizations.
Some of the most frequently asked questions and common concerns about this new
approach follow.
HOW DOES VPLEX AFFECT APPLICATION AND DATABASE PERFORMANCE?
The impact depends on application, distance, and data center configuration. A maximum of five milliseconds (5ms) round trip time (RTT) is recommended for most VPLEX implementations. As a rule of thumb, a 5ms RTT translates to a round trip distance of about 60 miles/100km connected by fiber optic cables or 10Gb Ethernet. The ping command can provide a quick snapshot of RTT. [4]
VMware has certified VPLEX with vSphere HA auto-failover at up to 5ms RTT and allows manual failover with storage vMotion at up to 10ms RTT. Microsoft has published a 34-page paper on implementing EMC VPLEX and Microsoft Hyper-V and SQL Server with enhanced failover clustering support over synchronous distances. [5]
[4] EMC offers a service with tools that can precisely measure RTT and check the cleanliness of data center interconnects.
[5] http://download.microsoft.com/download/2/6/6/26660F7D-301E-48DE-8200-8CFF64C7BCEA/h7116_vplex_hyper_v_sql_wp.pdf
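As a first approximation of the RTT figures discussed above, a simple probe can be run between the sites before engaging more precise measurement tools (see footnote [4]). The sketch below is a minimal Python TCP round-trip timer; the host name is a placeholder, and a TCP connect slightly overstates the raw network RTT, so treat the result as indicative only.

```python
# Minimal TCP round-trip-time probe between data centers (illustrative only).
# A sustained RTT well above ~5 ms suggests the sites are beyond the distance
# recommended for most VPLEX implementations.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 10) -> float:
    """Average time to complete a TCP connection to the remote host."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established: roughly one round trip plus handshake overhead
        total += (time.perf_counter() - start) * 1000.0
    return total / samples

if __name__ == "__main__":
    # "remote-site.example.com" is a placeholder for a host in the second site.
    print(f"average RTT: {tcp_rtt_ms('remote-site.example.com'):.2f} ms")
```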
Other factors to consider:
• Only write data needs to be mirrored between sites. Latency and its impact on application response time is a concern only for writes. Most applications have something like an 80/20 read/write ratio (generally falling between 70/30 and 90/10). In our experience, organizations deploying VPLEX will see read response times improve, because read requests are serviced directly out of the VPLEX cache. Considering also that most applications read several records first and then do a write, overall throughput tends to average out or be only very marginally impacted. (A brief worked example follows this list.)
• Improvements in disk write response time. Just a few years ago, local SCSI disk response time averaged between 15-18ms. Today, FC response time is in the range of 3-5ms and Flash is in the range of 1-2ms.
• SAN degradation. We have also observed SANs, initially rolled out with a 3-5ms average response time, become degraded through changes and additions over time. In some cases, the response time has increased to as much as 15ms for an average write. [6] With a well-tuned FC (Fibre Channel) or FCoE (Fibre Channel over Ethernet) SAN, VPLEX typically adds 1-5ms to each disk write, depending on distance.
• Tradeoffs. For many applications, the gain of CA and the elimination of RTO and of HA/DR complexity and uncertainty will justify the increase in write time.
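To put the read/write ratio in perspective with illustrative numbers (hypothetical, not a benchmark): if an application issues 80 percent reads and 20 percent writes, and VPLEX adds roughly 2ms to each write while reads are served from cache, the blended increase in average I/O response time is about 0.2 × 2ms = 0.4ms. Even at a 70/30 mix and the 5ms upper end of added write latency, the blended increase is about 0.3 × 5ms = 1.5ms. Actual results vary by workload, which is why, as noted above, overall throughput tends to average out or be only marginally impacted.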
If the performance is still unacceptable, write response times can be improved in other ways to offset any latency added to achieve CA with VPLEX. These include adding FAST Caching or Flash storage and/or, as noted above, tuning SANs to spec. In some Oracle RAC environments, adding additional servers can increase parallelism and overall throughput. EMC application and storage architects can work with organizations to explore options and determine how best to integrate VPLEX across two sites with little or no latency impact.
WHAT ABOUT MIDDLEWARE?
Applications need to get data from other sources and need to pass data to other applications. If an upstream app is in one location and the downstream app is in another, the middleware data queues need to be mirrored and protected over VPLEX. Middleware such as TIBCO, Oracle Enterprise Service Bus, and IBM MQ Series can be deployed over VPLEX with CA using an active/active database cluster stretched across sites.
WHAT ABOUT NAS DATA?
VPLEX currently supports block data only. For low-to-medium use, NAS mounts and
shares can be served by a Linux or Windows server that stores data on block SAN
virtualized over VPLEX. The server can then be protected by either a Wave-1 or
Wave-2 cluster.
For large-scale NAS deployments on platforms such as EMC Isilon or EMC VNX with
EMC Celerra, an active node in one site and a passive node in the second site can be
mirrored using the native array replication software. The combination of a global file
virtualization solution, such as EMC Rainfinity, and a manual failover trigger can be
used to move the NAS service between locations, in the event of site loss.
[6] EMC offers a health check service that measures SAN response time and bottlenecks.
In either case, for applications that hit NAS shares/mounts heavily in one site only—and where bandwidth is not an issue—NAS services can be hosted in one site for all users and the lighter NAS traffic can traverse the DCI.
ORACLE HAS AN EXTENDED ORACLE RAC DEPLOYMENT OPTION—CAN I JUST USE THAT?
A number of years ago, Oracle introduced an active/active dual-site solution called Extended Oracle RAC, using Real Application Clusters (RAC) with Automatic Storage Management (ASM) mirroring. However, the complexity, reliability concerns, and overhead of host-based ASM mirroring made the two-site extension unworkable for most organizations and applications. The solution was also limited to only Oracle-based applications and data.
Because ASM Mirroring mirrors Oracle data only, in a dual-site configuration,
another technology is needed to sync other types of data. There is no way to
provide replication consistency across upper-level applications or multiple dependent
databases without a third-party solution.
EMC’s CA solution with Extended Oracle RAC over VPLEX stretches the RAC servers
between the two sites and the ASM mirroring role is provided by VPLEX.
WHAT IF VPLEX GOES DOWN?
VPLEX is itself based on clustering technology with shared memory cache. One
VPLEX cluster is deployed in each site. If storage or a component of the VPLEX
cluster fails, the other engines in the cluster automatically take over the load.
For more information, please refer to:
TechBook: EMC VPLEX Metro Witness Technology and High Availability
SUMMARY
EMC VPLEX virtual storage, combined with off-the-shelf clustering and global
workload balancing technologies, enables dual active/active production data centers,
separated by distance, to deliver continuous availability—typically at a lower cost
than today’s HA and DR solutions. CA with VPLEX is different from traditional HA/DR
approaches.
Traditional HA/DR requires:
• Separate HA and DR efforts and solutions
• Redundant, passive, standby infrastructure
• Recovery using replicated data—whether from a backup, data replication, and/or snapshot—so there is always a delay and risk in failback and recovery from the last known uncorrupted point for data and applications
In contrast, CA with VPLEX:
• Combines HA and DR into one solution, enabling continuous simultaneous operation in two active/active production data centers, separated by distance
• In the event of a planned or unplanned outage in one data center, automatically shifts workloads to the other site so operations continue—providing users with uninterrupted access to applications and data
The advantages of active/active CA enabled by VPLEX include:
• Zero downtime—Applications and data continue to be available and continue processing during outages caused by technical failure, human error, or site loss, as well as during planned outages for application testing, migrations, non-data upgrades, and the like.
• Off-the-shelf choices—CA is implemented with off-the-shelf technologies available from multiple vendors—with no application coding, customization, or integration required.
• Increased asset utilization—Passive redundant recovery infrastructure can be actively used—and resources pooled across sites to share workload. Not only do idle resources become usable, but the ability to distribute load also means organizations can typically reduce total server count by 30-40 percent.
• Simpler management—Organizations no longer need to manage separate HA and DR solutions, maintain and update DR runbooks, or test recovery plans; dynamic load balancing within and across data centers automatically optimizes performance; workloads can be easily shifted from site to site so migrations, maintenance, and technology refreshes can occur without disrupting user access or application and data processing.
• Greater confidence—Active/active operation and automated failover eliminate the risks associated with orchestrating manual and error-prone recovery tasks.
• Reduced CAPEX and OPEX—Better utilization, less infrastructure (licensing, maintenance, etc.), and simplified management free budget and resources that can be shifted to strategic application innovation and IT transformation initiatives.
In a business case developed by Forrester Research for a composite organization based on interviews with EMC customers already using EMC VPLEX-enabled CA, the analysts found an ROI of more than 250 percent over three years, gained through minimized downtime, improved asset utilization and load balancing, and increased efficiency in data migration and hardware maintenance. [7]
REVISIT AND RETHINK AVAILABILITY NOW
VPLEX-enabled active/active CA implementations are at work today in company data centers, supporting organizations’ most critical applications, including SAP on Extended Oracle RAC and SAP on VMware HA and vMSC. In a recent Forrester Research survey, 44 percent of respondents reported running some production loads in an active/active mode—and 12 percent of this group reported using stretch clusters. [8]
Organizations should act now to address availability risks and achieve the continuous application and data availability and mobility required in today’s business and digital and cloud-enabled marketplaces.
[7] The Total Economic Impact of EMC Continuous Availability: A Case Study Focused On Continuous Availability With EMC VPLEX, November 2013. A Forrester Total Economic Impact™ Study commissioned by EMC. Project Director: Reggie Lau, Forrester Research, Inc.
[8] How Organizations Are Improving Business Resiliency With Continuous IT Availability, 2013. Forrester Research, Inc.