HP 3PAR Utility Storage is a next-generation storage system designed for virtual and cloud data centers. It was built from the ground up to maximize efficiency and agility through features like thin provisioning and thin copying that reduce storage capacity needs and increase utilization. Experts say 3PAR brings enterprise-class features to the midrange market and provides the flexibility required for cloud environments through secure multi-tenancy and scalability.
Storage Analytics: Transform Storage Infrastructure Into a Business Enabler (Hitachi Vantara)
View this webinar session to learn how you can transform your storage infrastructure into a business enabler. You will learn:
• Tips and tricks to streamline storage performance monitoring across your Hitachi environment.
• How to define and enforce performance and capacity objectives for key business applications by establishing storage service level management.
• How to create storage service level management reports that satisfy the needs of multiple IT stakeholders (CIO, architect, administrator).
For more information, see the white paper on controlling the costs of sprawling storage with storage analytics: http://www.hds.com/assets/pdf/hitachi-white-paper-control-costs-and-sprawling-storage-with-storage-analytics.pdf
ESG White Paper: Hitachi Data Systems VSP G1000 - Pushing the Functionality ... (Hitachi Vantara)
This white paper discusses the Hitachi Virtual Storage Platform G1000 storage system. It outlines the business demands driving the need for more software-defined, agile storage capabilities, then describes the key capabilities of the Virtual Storage Platform G1000, presented as enterprise-class storage software and functionality that address those business needs. The paper closes by evaluating the platform's applicability across market segments.
A-B-C Strategies for File and Content Brochure (Hitachi Vantara)
Explains each strategy, including archive first, back up less, consolidate more, distributed IT efficiency, enable e-discovery and compliance, and facilitate cloud. For more information on Unstructured Data Management Solutions by HDS please visit: http://www.hds.com/solutions/it-strategies/unstructured-data-management.html?WT.ac=us_mg_sol_udm
Five Best Practices for Improving the Cloud Experience (Hitachi Vantara)
This document summarizes a report on best practices for improving the cloud experience based on lessons learned from 232 global IT executives. The five best practices are: 1) Ensure cloud providers meet business and IT requirements through service level agreements. 2) Choose the right cloud service model based on needed control over security and data protection. 3) Use architectures that integrate cloud services with existing infrastructure. 4) Consider benefits beyond cost like improved operations and innovation. 5) Define business requirements for IT and have IT act as a cloud broker. The Hitachi Content Platform portfolio aligns with these practices by providing a secure, scalable cloud that meets business needs and accelerates cloud adoption.
Simplify Data Center Monitoring With a Single-Pane View (Hitachi Vantara)
Keeping IT systems up and well tuned requires constant attention, but the task is too often complicated by the separate monitoring tools required to watch applications, servers, networks and storage. This white paper discusses how system administrators can consolidate oversight of these components, particularly where the DataCore SANsymphony-V storage hypervisor virtualizes the storage resources. Such visibility is made possible through the integration of SANsymphony-V with Hitachi IT Operations Analyzer.
Hu Yoshida's Point of View: Competing in an Always-On World (Hitachi Vantara)
The document discusses how businesses need to adapt to constant and rapid changes in technology by embracing a "continuous cloud infrastructure" and "business-defined IT" approach. This involves having an automated, scalable IT infrastructure that is software-defined, virtualized and optimized to meet changing business needs. A continuous cloud infrastructure provides increased agility, automation, security and reliability to help businesses innovate faster, improve productivity and gain a competitive advantage in an "always-on" world of data growth, new technologies and changing customer demands.
A More Efficient Way to Automate Cloud Infrastructure Solution Profile (Hitachi Vantara)
Hitachi Unified Compute Platform Director provides a single point of management for Hitachi converged infrastructure elements. It allows administrators to inventory, provision, operate, and monitor all virtual and physical components from a centralized interface. This simplifies tasks, improves efficiency and helps ensure predictable performance, reliability and protection of resources and data.
Microsoft SQL Server 2012 Data Warehouse on Hitachi Converged Platform (Hitachi Vantara)
Accelerate breakthrough insights across your organization with Microsoft SQL Server 2012 Data Warehouse running on Hitachi Unified Compute Platform, the mission-critical, ready-to-deploy Hitachi server-storage-networking platform. Amplify infrastructure performance with Hitachi and Microsoft SQL Server 2012 Fast Track Data Warehouse xVelocity in-memory technologies. Learn how your organization can extract more than 100 million records in 2 to 3 seconds, versus the 30 minutes required previously. With SQL Server 2012 Fast Track Data Warehouse and Hitachi software, your organization will be able to leverage a data platform that processes any data anywhere. View this webcast and learn:
• How to reduce deployment time with ready-to-deploy solutions that have been engineered and preconfigured by Hitachi and validated by the Microsoft Fast Track Data Warehouse program.
• How Hitachi and Microsoft have optimized performance for your data warehouse requirements.
• How your organization can realize immediate ROI from your data warehouse investment.
For more information on Hitachi Unified Compute Platform please visit: http://www.hds.com/products/hitachi-unified-compute-platform/?WT.ac=us_mg_pro_ucp
This document discusses three customer case studies of telecom companies using Cloudera's Enterprise Data Hub:
1) SFR used the data hub to create a centralized data store and 360-degree view of customers, combining structured and unstructured data from multiple sources for real-time search, reporting and analysis. This improved the customer experience and increased data warehouse performance.
2) British Telecom used the data hub to accelerate data processing from 24+ hours to near real-time, addressing issues with disparate customer databases and long ETL windows that limited access to up-to-date customer information.
3) Telkomsel deployed the data hub to gain insights from customer, network and transactional data to
Solve the Top 6 Enterprise Storage Issues White Paper (Hitachi Vantara)
Storage virtualization can help organizations solve common enterprise storage issues by consolidating multiple physical storage systems into a single virtual pool. This allows for increased utilization of existing assets, simplified management across heterogeneous systems, and reduced costs through measures like thin provisioning and automation. Virtualization helps organizations address issues like exponential data growth, low storage utilization, increasing management complexity, and rising capital and operating expenditures on storage infrastructure.
Hitachi Data Systems offers private cloud solutions that provide flexible, scalable cloud storage infrastructures. These solutions allow organizations to lower costs by paying only for consumed storage resources and improving efficiency by reducing management overhead. Key offerings include file tiering services that move inactive files to cloud storage, freeing up resources on primary storage, and fully managed private cloud services where Hitachi remotely manages the on-premises cloud infrastructure.
High-Performance Storage for the Evolving Computational Requirements of Energ... (Hitachi Vantara)
Richer data from oil and gas exploration is placing new demands on storage infrastructure as more advanced analysis techniques generate larger datasets. High-performance storage is needed to accelerate seismic analysis and avoid bottlenecks. Hitachi's intelligent storage solutions provide massive scalability, simplified data management, high performance, and other features to meet the evolving computational needs of energy exploration.
Achieve Higher Quality Decisions Faster for a Competitive Edge in the Oil and... (Hitachi Vantara)
Hitachi next-generation unified storage solutions meet the challenges of today’s data-intensive oil and gas exploration and production activities. For more information on Hitachi Unified Storage and Hitachi NAS Platform 4000 series please visit: http://www.hds.com/products/file-and-content/network-attached-storage/?WT.ac=us_mg_pro_hnasp
Storage virtualization can help organizations address key challenges like managing storage growth demands, leveraging existing assets, and simplifying data movement issues. It allows pooling of storage resources and thin provisioning to improve capacity utilization and reduce costs. Controller-based storage virtualization in particular separates logical views from physical assets, allowing heterogeneous storage systems to be managed as a single pool. This provides benefits like reduced complexity, improved flexibility, and leveraged cost savings.
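The pooling and thin-provisioning ideas above can be illustrated with a toy sketch. This is a hypothetical Python model for illustration only, not any vendor's actual implementation: logical capacity is promised to volumes up front, while physical capacity is consumed only as data is written, which is why a pool can be over-provisioned without waste.

```python
# Toy model of a thin-provisioned storage pool: logical capacity is
# promised up front, but physical blocks are consumed only on write.
class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb   # real capacity backing the pool
        self.allocated_gb = 0            # physically consumed so far
        self.volumes = {}                # name -> {"logical": ..., "written": ...}

    def create_volume(self, name, logical_gb):
        # Provisioning is "thin": no physical space is reserved yet.
        self.volumes[name] = {"logical": logical_gb, "written": 0}

    def write(self, name, gb):
        vol = self.volumes[name]
        if vol["written"] + gb > vol["logical"]:
            raise ValueError("write exceeds the volume's logical size")
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical capacity")
        vol["written"] += gb
        self.allocated_gb += gb

    def utilization(self):
        return self.allocated_gb / self.physical_gb

pool = ThinPool(physical_gb=100)
pool.create_volume("app1", logical_gb=80)  # 80 GB promised
pool.create_volume("app2", logical_gb=80)  # 160 GB promised vs. 100 GB real
pool.write("app1", 30)
pool.write("app2", 20)
print(pool.utilization())  # 0.5 -> only written data consumes the pool
```

The point of the sketch is the over-provisioning: 160 GB of logical capacity is promised against 100 GB of physical capacity, yet only the 50 GB actually written is consumed, which is how thin provisioning raises utilization.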
Meet the Data Processing Workflow Challenges of Oil and Gas Exploration with ... (Hitachi Vantara)
The document describes a test conducted by Hitachi Data Systems and Halliburton Landmark to evaluate the performance of Hitachi's networked storage solution with Halliburton Landmark's SeisSpace seismic processing software. The initial test configuration outperformed other vendors but still took over 4 hours to complete certain tasks. A series of configuration changes optimized the solution, reducing completion times by more than 60%. Only Hitachi demonstrated the ability to meet the high performance requirements for both primary and secondary storage simultaneously with a single solution.
Dynamic Hyper-Converged: Future-Proof Your Data Center (DataCore Software)
IT organizations are continuously striving to reduce the time and effort needed to deploy new resources for the business. Data center and remote office infrastructures are often complex and rigid, causing deployment and operational delays. As a result, many IT organizations are looking at hyper-converged infrastructure.
Read this whitepaper to discover that a hyper-converged approach is flexible and easy to deploy and offers:
• Lower CAPEX because of lower up-front prices for infrastructure
• Lower OPEX through reductions in operational expenses and personnel
• Faster time-to-value for new business needs
The document summarizes in-memory systems and how they enable faster and more informed decision making. It discusses how leading companies in various industries are exploring in-memory to improve decisions around staffing, dispatching, pricing and more. In-memory allows real-time processing of vast data volumes to gain insights where traditional systems took days or weeks. SAP has seen strong growth with its in-memory HANA platform. Innovation centers help users identify the right in-memory applications for their unique needs.
Hitachi Unified Storage 100 family systems consolidate and manage block, file and object data on a central platform. For more information on our unified storage please visit: http://www.hds.com/products/storage-systems/hitachi-unified-storage-100-family.html?WT.ac=us_mg_pro_hus100
Over the last decade, cloud computing has transformed the market for IT services. But the journey to cloud adoption has not been without its share of twists and turns. This report looks at lessons that can be derived from companies' experiences implementing cloud computing technology.
Learn more about Hitachi Content Platform Anywhere by visiting http://www.hds.com/products/file-and-content/hitachi-content-platform-anywhere.html
More information on the Hitachi Content Platform is available at http://www.hds.com/products/file-and-content/content-platform
The Economics of Storage Virtualization Webinar (Hitachi Vantara)
Virtualization in the data center is a stable and proven approach to make IT more efficient, from desktops to servers and from networks to storage. Whether storage virtualization is host-based, controller-based or delivered through an appliance, it is a core ingredient in economical IT architectures. As with most new technology investments, you need a clear understanding of benefits versus costs.
This document provides an overview and analysis of the cloud-enabled managed hosting market, including definitions, use cases, and a Magic Quadrant evaluating major vendors. It discusses the strengths and weaknesses of six vendors - AT&T, CenturyLink, CSC, Datapipe, Dimension Data, and FireHost. Key findings include that no single vendor excels in all areas, customization is limited, and shorter contract terms are now common compared to traditional managed hosting.
Access to large amounts of seismic data is essential for oil and gas companies to make timely decisions about new prospects and reduce the time to discovery. As energy demands increase, more sophisticated analysis of greater volumes of data is needed. Speed and access to rapidly expanding datasets is key to accelerating analysis workflows and high-quality decision making within project deadlines.
This document welcomes users to Stock Exchange CIO, a website providing news, articles, slideshows, whitepapers, books, and software toolkits about information technology, business management, and stock exchange operations. The content is intended for information technology professionals, chief information officers, and executives who work in stock exchanges worldwide and aims to describe the future of IT, keep users up to date on market operations, offer reliable and applicable information, and add value to their work.
The document outlines the rules of the London Stock Exchange, including definitions, core rules, order book trading rules, off order book trading rules, market making rules, settlement and clearing rules, compliance rules, and default rules. It introduces the rulebook structure and notes that chapter and section headings are for guidance only. Rules are identified by type (e.g. core, direction) for reference. The document also references supporting documentation like the Guide to the Trading System and Parameters spreadsheet.
The document provides an update on Borsa Italiana's migration from TradElect to Millennium IT, including completed activities like publishing technical specifications, ongoing activities like enabling the new CDS environment for testing, and next steps like migrating additional markets by Q2 2012. It notes that TAH will migrate in November and other markets will migrate in two releases, with the first being TAH and the second being larger equity markets.
This document outlines 10 things one needs to know to be a successful researcher. It discusses believing strongly in something, being curious and always learning, knowing yourself and where you currently are in your research, thinking differently than others, keeping things simple, accepting that failure is normal, making friends in the research community, and having fun. The document is presented by Putchong Uthayopas and provides an overview of their research focusing on parallel data mining, efficient cloud resource modeling, middleware design, and GPU computing applications.
This document provides information about Techgate PLC, including their contact details and address in London. It then discusses how disaster recovery strategies have changed with the emergence of cloud computing. Specifically, it notes that disaster recovery as a service (DRaaS) solutions can now be more flexible, cost-effective and on-demand compared to traditional on-premise disaster recovery systems. The document outlines some key considerations for selecting a DRaaS solution, such as recovery time objectives, regulatory requirements, and making "like-for-like" comparisons between provider offerings and services.
Disaster Recovery in the Cloud Whitepaper (Karolina Dryja)
This whitepaper discusses disaster recovery strategies in the cloud. It notes that cloud technologies have transformed disaster recovery by making solutions more flexible, scalable, and cost-effective. The document examines factors to consider when selecting a disaster recovery provider such as recovery time objectives, regulatory requirements, security, service level agreements, and testing capabilities. It emphasizes the importance of performing due diligence and testing solutions to ensure they meet organizations' recovery needs.
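The selection criteria listed above lend themselves to a simple shortlisting step during due diligence. The sketch below is a hypothetical Python example; the provider names, recovery figures, and field names are invented for illustration and do not describe real offerings.

```python
# Toy due-diligence filter: shortlist DRaaS providers whose stated
# recovery objectives meet the organization's requirements.
# All provider names and figures below are illustrative, not real.
providers = [
    {"name": "ProviderA", "rto_min": 240, "rpo_min": 60, "tested": True},
    {"name": "ProviderB", "rto_min": 30,  "rpo_min": 5,  "tested": True},
    {"name": "ProviderC", "rto_min": 15,  "rpo_min": 5,  "tested": False},
]

def shortlist(providers, max_rto_min, max_rpo_min, require_tested=True):
    """Keep providers meeting the recovery time objective (RTO),
    recovery point objective (RPO), and optionally a proven-testing bar."""
    return [
        p["name"] for p in providers
        if p["rto_min"] <= max_rto_min
        and p["rpo_min"] <= max_rpo_min
        and (p["tested"] or not require_tested)
    ]

# Require recovery within 60 minutes, data loss under 15 minutes,
# and a provider that has demonstrated tested failovers.
print(shortlist(providers, max_rto_min=60, max_rpo_min=15))  # ['ProviderB']
```

In practice this filter is only the first pass; security posture, regulatory fit, and service level agreement terms from the list above still require qualitative review.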
1) SAS-based storage provides advantages over SATA in enterprise environments by offering higher performance, reliability, and suitability for multi-drive systems through features like dual-port connectivity and enhanced data integrity.
2) As data center workloads increase in complexity due to trends like cloud computing and multi-core processing, the demands for storage performance will also grow, benefiting SAS which is designed for enterprise settings.
3) Choosing the right drive interface involves considering factors like workload requirements, capacity needs, robustness, and suitability for large scale deployments, where SAS excels over SATA particularly for performance-oriented applications.
As a leading managed service provider with data centers in India, Netmagic Solutions fulfills your entire IT infrastructure requirements, from colocation services to backup solutions.
Modular blade server architectures address many challenges facing modern data centers by consolidating computing components into smaller, modular form factors that share resources to lower costs and complexity. Blades can satisfy computing needs for servers, desktops, networking and storage. They provide world-class solutions by delivering high performance, reliability, efficiency and scalability without disruption. Proper planning is required, but blade servers are highly efficient platforms for consolidating distributed servers into a common data center through their small size and ability to maximize resource utilization through virtualization.
Despite years of industry advocacy, cloud adoption in larger firms remains slow. There are many logos for many vendors dotting the cloud technology landscape and many competing architectures. But there are also few standards that guarantee the interoperability of different approaches.
The latest buzz in enterprise cloud technology is around “hybrid cloud data centers” in which large enterprises “build their base” – that is, their core infrastructure, possibly as a “private cloud” – and “buy their burst” – that is, obtain additional public cloud- based resources and services to augment their on-premises capabilities during periods of peak workload handling, for application development, or for business continuity.
Ultimately, the adoption of cloud architecture will be gated by how successfully organizations are able to leverage emerging technologies in a secure and reliable manner and whether the resulting infrastructure actually delivers in the key areas of cost-containment, risk reduction and improved productivity.
This document discusses the five pillars of federal data center networking: 1) application effectiveness, 2) programmatic control and orchestration, 3) security and data integrity, 4) elasticity and scalability, and 5) automation and simplified management. It describes how Brocade technology delivers on these pillars through high-performance networking, software-defined networking, security, scalability, and automation capabilities.
The document discusses the five pillars of federal data center excellence: (1) application effectiveness, (2) programmatic control and orchestration, (3) security and data integrity, (4) elasticity and scalability, and (5) automation and simplified management. It describes how Brocade delivers on these pillars through technologies that increase application performance, provide programmatic control of resources, ensure security and data integrity, allow for flexible scaling of resources, and enable automated management of data centers. The document also categorizes different types of federal data centers and discusses how Brocade technologies address the specific needs of customer-facing, analytical, and tactical/transportable data centers.
Software-defined storage (SDS) provides storage software that runs on standard server hardware to deliver data services. The document discusses the top five use cases and benefits of SDS, including reducing storage costs through scalable commodity hardware, improving performance by optimizing storage I/O, better provisioning and automation of storage resources, robust management of heterogeneous storage arrays, and tightly aligning storage with broader infrastructure management. SDS can lower costs while improving performance, efficiency, and flexibility compared to proprietary storage systems. However, SDS also presents challenges around integration, support skills, and interoperability that must be addressed.
Why would you should trust Stack Harbor with your Data
The Most performance and security oriented Canadian cloud company.
Learn more about our all SSD instances comparable and outperforming AWS, Azure, soft layer, iWeb etc..
The document discusses a new hyperscale data center solution from Ericsson called Ericsson Hyperscale Datacenter System 8000. It claims this solution can deliver significant cost savings for enterprise data centers by enabling higher utilization rates compared to traditional architectures. Specifically, it argues the solution can achieve CPU utilization up to 4 times higher, network utilization 26% higher, and allow one administrator to manage thousands of servers rather than hundreds. These efficiencies are achieved through a pooled infrastructure that allows dynamic allocation of resources and improved matching of capacity to workload demands.
This document summarizes a Forrester Consulting study on private clouds. The study found that while enterprises want the cost benefits of private clouds, many are not realizing these benefits due to a lack of understanding of true cloud models and an underestimation of critical components like storage. Specifically, the study found that decision-makers were confused about private clouds versus virtualization; storage was being treated as "business as usual" without considering cloud implications; and storage was not a key consideration in decision-making processes. The document provides best practices for private cloud storage architectures to maximize efficiencies.
Data warehouse-optimization-with-hadoop-informatica-clouderaJyrki Määttä
This white paper proposes a reference architecture for optimizing data warehouses using Hadoop. It combines Informatica and Cloudera technologies to offload processing and infrequently used data from data warehouses to Hadoop. This alleviates strain on warehouses and frees up storage space. The architecture provides universal data access, flexible data ingestion methods, streamlined data pipelines, scalable processing and storage using Hadoop, end-to-end data management, and real-time queries of Hadoop data. The goal is to optimize warehouse performance and costs by leveraging Hadoop for large-scale data storage and preprocessing.
The document discusses the need for converged backup solutions that can simplify and consolidate data protection across mixed server environments. It notes that individual vendor solutions often only address specific proprietary platforms. An optimal solution is a cross-platform approach using intelligent converged backup that applies appropriate data protection services based on each data set's criticality. The document then introduces Storage Director by Tributary Systems as a policy-based data management solution that connects any host to any storage technology and applies services to data based on business importance. Storage Director allows for data backup consolidation and virtualization across heterogeneous environments.
This document discusses cloud infrastructure scenarios in data centers and hybrid cloud models. It notes that while many large firms have been slow to adopt public clouds due to security and reliability concerns, the hybrid cloud model that uses a combination of on-premises private clouds and public cloud resources is gaining popularity. The key is achieving performance, agility, availability and lowest total cost of ownership through standards that allow interoperability between different cloud approaches and storage technologies.
Whitepaper the data management revolutionPetr Nemec
Get the complete whitepaper for free at http://sieag.at/scc48
The power of cell division in core networks - the whitepaper describes a new approach to the management of both static and dynamic subscriber and session data in the context of a world of subscribers on the move.
Insider's Guide- Building a Virtualized Storage ServiceDataCore Software
This document discusses how storage virtualization can enable storage to be delivered as a dependable service through a software layer called a storage hypervisor. A storage hypervisor translates complex storage hardware into a centrally managed resource that can be dynamically allocated. It addresses issues like inefficient storage management, high product costs, and lack of flexibility. It allows organizations to manage more storage capacity with fewer administrators, keep hardware in service longer, and purchase less expensive gear. It also contributes to data protection and provides predictability in the face of changing technologies like server virtualization, desktop virtualization, and cloud computing.
The document discusses data center infrastructure and operations. It explains that data centers must transform from traditional environments to ones that are efficient, automated, and service-oriented to reduce costs and complexity while enabling growth. A typical data center securely houses an organization's IT systems and provides power, cooling, and redundancy to ensure maximum availability. It also discusses business benefits of data centers like availability, continuity, lower total cost of ownership, and agility. The document provides considerations for data center design like power usage efficiency and virtualization strategy. It includes a glossary of terms.
The document discusses data center infrastructure and operations. It explains that data centers must transform from traditional environments to ones that are efficient, automated, and service-oriented to reduce costs and complexity while enabling growth. A typical data center securely houses an organization's IT systems and provides power, cooling, and redundancy to ensure maximum availability and resilience. It also discusses considerations for data center design like power usage efficiency and virtualization strategy.
The document discusses the benefits of bare metal clouds compared to virtualized clouds. Bare metal clouds provide dedicated physical servers to individual tenants, avoiding many of the performance limitations of virtualized clouds like inconsistent performance due to resource oversubscription. Bare metal clouds also allow for complete hardware customization and isolation of workloads, which can help meet regulatory compliance requirements. While virtualized clouds are convenient, bare metal is presented as a better option for applications that require high and consistent performance, like large databases, as well as for matching an on-premises environment without performance compromises.
Similar to Ast 0043791 hp-3_par_utility_storage_next-generation_storage_for_virtual_and_cloud_data_centers (20)
London is a leading global financial center and home to the London Stock Exchange, one of the world's largest stock exchanges. The Exchange offers companies access to deep pools of capital through various cost-effective markets. Its central location allows it to span global time zones. London is also a hub for specialist financial advisors and strict standards of corporate governance provide confidence to investors. The document provides an overview of why London and its capital markets are an ideal place for companies to raise funds.
This document provides an overview of listing on the London Stock Exchange's Main Market. The Main Market has over 1,400 companies from over 60 countries and a combined market capitalization of £3.7 trillion. Listing on the Main Market demonstrates a commitment to high standards and provides access to capital from international investors. Key benefits of joining the Main Market include raising capital for growth, creating a market for a company's shares to broaden its shareholder base, and placing an objective market value on the business. The listing process involves approval from the UK Listing Authority and admission to trading on the London Stock Exchange Main Market.
This document summarizes an IDC white paper on measuring the business impact and ROI of using JBoss Operations Network (JBoss ON) for managing JBoss Enterprise Middleware environments. The study found that organizations using JBoss ON achieved an average 634% ROI over 3 years, with an average payback period of 5.3 months. Key benefits included more than doubling the number of Linux servers managed per administrator, from 38 to 84 on average, reducing downtime, and improving IT staff efficiency.
This document outlines the coursework assignment for an Information Architecture module. Students must create an information portal about a chosen professional role, applying information architecture principles. They must produce both the portal using WordPress, and a 5-10 page reflective report connecting the concepts learned to their design choices. The report must include the portal's URL and be submitted by January 9th, 2012. Support is available from the module leader and teaching assistants during the process.
This document outlines the marking scheme and criteria for assessing the Information Architecture coursework assignment in 2011-12. It will be assessed based on:
1. Describing the role, context, and objectives (15%)
2. Organizing information through structuring, hierarchies, and metadata (40%)
3. Matching the information organization to the navigation structure (20%)
4. Organizing information on each page (15%)
5. The report's coherence and reflectiveness (10%)
The rationale provides further details on what will be considered for each criteria, including thoughtful information organization based on IA principles, correlating information structure with navigation, using page layout effectively, and having a clear
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
AI-Powered Food Delivery Transforming App Development in Saudi Arabia.pdfTechgropse Pvt.Ltd.
In this blog post, we'll delve into the intersection of AI and app development in Saudi Arabia, focusing on the food delivery sector. We'll explore how AI is revolutionizing the way Saudi consumers order food, how restaurants manage their operations, and how delivery partners navigate the bustling streets of cities like Riyadh, Jeddah, and Dammam. Through real-world case studies, we'll showcase how leading Saudi food delivery apps are leveraging AI to redefine convenience, personalization, and efficiency.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
CAKE: Sharing Slices of Confidential Data on BlockchainClaudio Di Ciccio
Presented at the CAiSE 2024 Forum, Intelligent Information Systems, June 6th, Limassol, Cyprus.
Synopsis: Cooperative information systems typically involve various entities in a collaborative process within a distributed environment. Blockchain technology offers a mechanism for automating such processes, even when only partial trust exists among participants. The data stored on the blockchain is replicated across all nodes in the network, ensuring accessibility to all participants. While this aspect facilitates traceability, integrity, and persistence, it poses challenges for adopting public blockchains in enterprise settings due to confidentiality issues. In this paper, we present a software tool named Control Access via Key Encryption (CAKE), designed to ensure data confidentiality in scenarios involving public blockchains. After outlining its core components and functionalities, we showcase the application of CAKE in the context of a real-world cyber-security project within the logistics domain.
Paper: https://doi.org/10.1007/978-3-031-61000-4_16
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
HP 3PAR Utility Storage
Benefits Summary
Next-Generation Storage for Virtual and Cloud Data Centers
WHITE PAPER
Table of contents
What experts are saying
Introduction
What is utility storage?
Reducing CAPEX
OPEX savings
Greater ROI for applications and services
Conclusions
What experts are saying
ESG Lab’s hands-on testing has confirmed that 3PAR has developed a solution that not only
reduces provisioning time by up to 90% compared to traditional storage systems that use wizards,
but also works with any high-availability cluster, database cluster, or virtual server environment.
–Tony Palmer, Enterprise Strategy Group Lab
3PAR has delivered a confluence of high-end features, advanced virtualization, and attractive
cost-of-ownership to the high-end storage market. What’s more, 3PAR has brought this exciting
combination to the midrange market as well.
– Laura DuBois, IDC Program Vice President, Storage Software and Solutions
As competition heats up, cloud service providers must rein in capital and operating costs while
retaining the ability to scale to meet unpredictable but potentially massive data storage requirements.
Thin provisioning in HP 3PAR Utility Storage does this most efficiently and provides the needed agility
to operate at large scale.
–Mike Kahn, Managing Director, The Clipper Group
The design of all 3PAR systems provides optimal performance for the random I/O workloads driving
today’s growing application mix…. Features such as thin provisioning and wide striping were the
primary design points for 3PAR, not added as an afterthought to a legacy architecture.
– Russ Fellows, Evaluator Group
The incumbency of 3PAR Utility Storage in the global service provider market—with many of the
world’s top service providers relying on 3PAR Utility Storage for their cloud-based service
offerings—makes it a valuable addition to the CSA.
–Jim Reavis, Cloud Security Alliance
Basically 3PAR took thin provisioning and the other storage management features that won high
grades for their enterprise systems and brought those features down to the midrange with the F-Class
series. The timing was good, as 3PAR had just recently upgraded its high-end T-Class platform with
new features that came down to the F-Class, such as mesh-active quad-controller technology, thin
provisioning, ASIC and new management software for 3PAR's thin technologies.
–Dave Raffo, Storage magazine/SearchStorage.com
Introduction
The growth and use of information have outpaced traditional storage technologies whose fundamental
designs are in some cases now decades old. The limited scalability and functionality of
these outdated storage platforms have forced many organizations to cobble together storage infrastructures
with multiple layers of interlocking hardware and software. This not only perpetuates IT sprawl, but
also prevents businesses from becoming what HP calls “instant-on.” The Instant-On Enterprise serves
customers, employees, partners, and citizens with precisely what they want and need, precisely when
they need it—instantly, at any point in time, and through any channel. It uses technology to integrate
and automate the value chain. It adapts easily and innovates rapidly. It manages risk and
environmental responsibilities. Behind the scenes, the Instant-On Enterprise streamlines everything that
is required to deliver a service.
Escalating investment in storage management and SAN purchases ultimately frustrates efforts to
consolidate, simplify, virtualize, and achieve cost-efficiency through realizing the converged
infrastructure that is at the heart of the Instant-On Enterprise. The continuing trends toward server and
client virtualization only serve to amplify the rigidity of technology silos, the complexity of legacy
application architectures, and the inefficiency of traditional storage arrays.
Virtualization is transforming data centers and the businesses they fuel. This shift is placing new
demands on infrastructure that legacy storage platforms were never designed to handle. A converged
infrastructure holds the keys to enabling organizations to overcome the inflexibility and high costs
created by IT sprawl in order to have the freedom to shift resources away from operations in favor of
fostering innovation and driving strategic initiatives that will grow the business. A fundamental
element of this strategy is the deployment of a storage infrastructure that addresses the specific needs
of virtual and cloud data centers with the fundamental flexibility not only to handle today’s demands,
but also to serve as the foundation for a data center transformation that poises the data
center for what comes next. While there is indeed no way to “future-proof” the data center, there are
certainly actions that can be taken today to maximize infrastructure efficiency and to build in the
agility necessary to meet even rapidly changing business demands.
HP 3PAR Utility Storage was built from the ground up to exceed the economic and operational
requirements of even the most demanding and dynamic IT environments, and to support a converged
infrastructure by providing the SAN performance, scalability, and availability that clients need to
transform their data centers. The next generation of Tier 1 storage, HP 3PAR Utility Storage delivers
100% of the agility and efficiency demanded by virtual data centers and cloud computing
environments. It does this through an innovative system architecture that offers secure multi-tenancy,
built-in thin processing capabilities, and autonomic management and storage tiering features that are
unique in the industry.
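The autonomic storage tiering mentioned above can be pictured with a generic sketch (this is illustrative only, not HP 3PAR’s actual algorithm; the `retier` helper and the notion of fixed-size “regions” are assumptions for the example): rank regions by access frequency, promote the hottest into the limited fast tier, and leave the rest on slower media.

```python
# Illustrative sketch of autonomic tiering (not HP 3PAR code):
# periodically rank storage regions by access count and fill the
# limited fast tier with the most frequently accessed ones.

from collections import Counter

def retier(access_counts, fast_tier_slots):
    """Return (fast_tier, slow_tier) sets of region IDs.

    access_counts: dict mapping region ID -> number of recent accesses.
    fast_tier_slots: how many regions the fast tier can hold.
    """
    ranked = [region for region, _ in Counter(access_counts).most_common()]
    fast = set(ranked[:fast_tier_slots])        # hottest regions promoted
    slow = set(access_counts) - fast            # everything else stays slow
    return fast, slow

# Regions 7 and 2 are hot; with room for two fast-tier regions, they are promoted.
counts = {2: 90, 7: 120, 1: 3, 5: 1}
fast, slow = retier(counts, fast_tier_slots=2)
print(sorted(fast))  # [2, 7]
print(sorted(slow))  # [1, 5]
```

A real array would run this kind of ranking continuously and move data in the background, without administrator intervention, which is what makes the behavior “autonomic.”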
What is utility storage?
Utility storage is a category of storage systems developed to support the “on-demand” utility
computing model that first introduced the concept of delivering IT resources as a service, much like a
public utility (such as water, electricity, and natural gas). Utility storage systems were developed to not
only deliver storage as if it were a utility—by enabling users to buy only what they need and pay for
only what they use—but to support the broader goals of delivering IT as a Service (ITaaS) and helping
businesses become “instant-on.” By far, the form of utility computing that receives the most attention
today is cloud computing, a service delivery model that leverages the Internet rather than a dedicated
network, a shift that adds new requirements for massive infrastructure scalability and security.
3PAR, the leading global provider of utility storage, was acquired by HP in 2010 along with its
unique technologies. Prior to the acquisition, 3PAR enjoyed particularly strong adoption within the
hosting and enterprise cloud computing markets and was ranked as the preferred utility storage
vendor by 7 of the world’s top 10 revenue-generating managed service providers [1]. The platform’s
delivery of secure multi-tenancy was a particularly important element of this success. In addition, the
company’s utility storage arrays continually achieved high marks from both enterprise and service
provider clients for
providing green storage advantages, with 95% of users recommending 3PAR arrays as a strategic
component of any green IT initiative [2].

1. Tier 1 Research, “Winter 2009 Managed Hosting Report,” 16 December 2008.
The green advantages of the HP 3PAR Utility Storage platform were a direct result of improved
efficiency, including the elimination of unnecessary storage capacity purchases, increased utilization
of the storage capacity that is purchased, and thin copy technologies to eliminate the needless
replication of empty space. According to Arun Taneja, Founder of the Taneja Group, the platform’s
pioneering introduction of thin provisioning and thin copy technologies within open-systems storage
arrays, combined with its administrative simplicity, was responsible for bringing to market a business
model that changed the cost structure of storage and paved the way for new Hardware- and
Software-as-a-Service offerings, Web 2.0 innovation, and cloud computing [3].
As part of the HP Converged Infrastructure, HP 3PAR Utility Storage helps promote infrastructure
efficiency and agility, enabling clients to overcome the inflexibility and high costs of IT sprawl so that
resources can be shifted toward innovation and strategic initiatives instead of operations. The
integration of HP 3PAR Utility Storage with HP CloudSystem and HP BladeSystem Matrix provides
clients with a simplified way to provision and scale storage for public and private cloud environments.
The combination of the HP X9300 Network Storage Gateway (based on IBRIX technology) and HP
3PAR Storage Systems helps clients gain control over massive amounts of unstructured file data by
enabling file and block data access on a consolidated storage infrastructure.
Whether deployed on its own or in combination with other HP storage products and solutions, the
efficiency and agility benefits of HP 3PAR Utility Storage can be grouped into three broad categories:
• Reduced capital expense (CAPEX)
• Reduced operating expense (OPEX)
• Greater return on investment (ROI) for applications and services
This paper discusses these benefits in detail by exploring the unique advantages and benefits of the
HP 3PAR Utility Storage platform.
2. Based on a 2009 survey of 3PAR customers performed by TechValidate (www.techvalidate.com).
3. 3PAR Inc., “3PAR Celebrates a Decade of Innovation,” press release, 27 May 2009,
http://www.3par.com/about_us_overview/news_events/press_releases/20090527.html (accessed 1 March 2011).

Reducing CAPEX
With the efficiency and agility of HP 3PAR Utility Storage, clients do not need to overprovision
capacity to get the flexibility they need either today or in the future. Designed to deliver massive
consolidation and scalability, HP 3PAR Utility Storage allows clients to reduce storage infrastructure
while gaining the ability to respond rapidly to changing business needs. With HP 3PAR Utility
Storage, clients can:
• Start small and scale smartly: With HP 3PAR Storage Systems, users can start with a small,
cost-efficient footprint, avoiding the cost premiums associated with monolithic arrays. The unique HP
3PAR Architecture also allows users to scale in a granular and independent way to deliver higher
performance, greater capacity, or more connectivity when and where it is needed. And since
elements of the HP 3PAR Architecture are common across the entire platform, HP 3PAR Utility
Storage delivers maximum investment protection as business needs grow.
All HP 3PAR Storage Systems feature a unique Mesh-Active controller technology as part of a
next-generation storage architecture designed for virtual and cloud data centers. This Mesh-Active
design combines the benefits of monolithic and modular architectures while eliminating price
premiums and scaling complexities. Unlike legacy “active-active” controller architectures—where a
storage volume is active on only a single controller—this design allows each volume to be active on
every mesh controller in the system. The result is an architecture that delivers robust, load-balanced
performance and greater headroom for cost-effective scalability, overcoming the tradeoffs typically
associated with modular and monolithic storage.
• Purchase less storage infrastructure: HP 3PAR Utility Storage enables massive consolidation
through secure multi-tenancy that enables clients to support multiple external or internal customers
(user groups, departments, business units, lines of business, etc.) from a single, consolidated storage
array. The key to this capability is the platform’s highly flexible and secure Virtual Private Array
(VPA) technology and high-performance, massively load-balanced architecture. The result is a
simple, efficient, and scalable approach to delivering secure segregation within a consolidated
platform and without performance tradeoffs.
With HP 3PAR Virtual Domains Software, clients can deliver secure administrative segregation of
users and hosts within a consolidated, massively parallel HP 3PAR Storage System, allowing
individual user groups and applications to leverage the high-performance architecture of HP 3PAR
Storage Systems to achieve greater storage service levels (performance, availability, and
functionality). Virtual Domains enables IT organizations to deliver customized, secure, and even
“self-service” storage to multiple administrators, applications, departments, or user groups while
retaining the efficiency benefits of storage consolidation. It enables service providers to be more
competitive by offering shared storage options without compromising security or service levels.
With HP 3PAR Utility Storage, application-tailored volumes with assured and measurable service
levels enable the centralization of volume management activities, which reduces the requirement
to purchase multiple host-based volume management software licenses. Clients can also simplify,
delay, or eliminate complex SAN infrastructures with the enormous connectivity potential and
built-in LUN security of HP 3PAR Storage Systems, which allow users to connect directly to more
than a hundred physical host servers and to eliminate switching layers associated with “fanning-out”
to storage devices [4].
• Purchase storage capacity only for written data: Today’s storage arrays often force users
to provision storage in odd or over-sized amounts, or hold large amounts of storage in reserve for
future uses. This can result in large pockets of unutilized capacity. HP 3PAR Thin Provisioning
Software allows users to safely allocate more storage capacity to host applications than has
actually been purchased. This means that clients can reduce physical disk purchases immediately
and postpone further purchases indefinitely, until capacity is physically required. HP 3PAR Utility
Storage takes
a reservationless, dedicate-on-write approach to thin provisioning that enables the platform’s thin
software applications to draw and configure capacity in fine-grained increments from a single free
space reservoir without prior dedication of any kind. Other vendors claim to offer thin provisioning
but actually require separate thin provisioning pools for each data service level. Such pools are
silos of allocated-but-unused capacity that can require manual setup, provisioning, and
management that decreases thin provisioning return on investment (ROI) and flexibility.
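The reservationless, dedicate-on-write behavior described above can be sketched in a few lines of Python. All names here (FreePool, ThinVolume, the 16 MB page size) are illustrative assumptions, not the InForm OS implementation:

```python
# Sketch of reservationless, dedicate-on-write thin provisioning.
# All names and sizes are illustrative; this is not the InForm OS API.

PAGE_MB = 16  # fine-grained allocation unit drawn from the shared pool

class FreePool:
    """Single free-space reservoir shared by every thin volume."""
    def __init__(self, physical_mb):
        self.free_pages = physical_mb // PAGE_MB

    def take_page(self):
        if self.free_pages == 0:
            raise RuntimeError("physical capacity exhausted: time to buy disks")
        self.free_pages -= 1

class ThinVolume:
    """Exported size may exceed purchased capacity; pages are dedicated
    from the shared pool only when a host first writes to them."""
    def __init__(self, pool, exported_mb):
        self.pool = pool
        self.exported_mb = exported_mb
        self.pages = {}  # page index -> data; only written pages exist

    def write(self, offset_mb, data):
        page = offset_mb // PAGE_MB
        if page not in self.pages:   # first touch: dedicate on write
            self.pool.take_page()
        self.pages[page] = data

    def physical_mb(self):
        return len(self.pages) * PAGE_MB

pool = FreePool(physical_mb=1024)            # 1 GiB actually purchased
vol = ThinVolume(pool, exported_mb=10_240)   # 10 GiB presented to the host
vol.write(0, "db header")
vol.write(5_000, "log record")
print(vol.physical_mb())   # only 32 MB dedicated so far
```

The key point is that no pool is pre-carved per service level: every volume draws from the same reservoir, only on write.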
In addition, HP 3PAR InForm Operating System Software (InForm OS) improves capacity efficiency
by allowing a given application to use fine-grained portions of hundreds of drives—a feature which
also has performance benefits discussed later. The system uses sub-disk virtualization to divide each
physical disk into granular allocation units, each of which can be independently assigned and
dynamically reassigned to virtual volumes of different QoS levels. These chunklets are selected and
grouped to meet user-defined levels of performance, cost, and availability—varying such
parameters as RAID level, drive type, radial placement, and stripe width. This fine-grained
virtualization means that each disk drive can support many QoS levels, enabling HP 3PAR Storage
Systems to use physical assets in the most efficient manner possible. Fine-grained virtualization also
delivers the flexibility to respond to changing application workloads quickly and non-disruptively so
organizations have the agility to meet dynamic business needs.
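A minimal sketch of this sub-disk virtualization, assuming an invented 256 MB chunklet size and a simple round-robin selection policy (the actual InForm OS also varies RAID level, radial placement, and stripe width as described above):

```python
# Sketch of sub-disk ("chunklet") virtualization: each physical drive is
# carved into fixed-size chunklets, and a virtual volume is built from
# chunklets chosen across many drives. Sizes and policy are assumptions.
from collections import defaultdict
from itertools import cycle

CHUNKLET_MB = 256  # assumed allocation-unit size, for illustration

def carve(n_drives, drive_mb, drive_type):
    """Divide each physical drive into fixed-size chunklets."""
    per_drive = drive_mb // CHUNKLET_MB
    return [{"drive": d, "type": drive_type}
            for d in range(n_drives) for _ in range(per_drive)]

def build_volume(chunklets, size_mb, drive_type):
    """Select chunklets one drive at a time (round-robin) so even a
    small volume is striped across as many spindles as possible."""
    needed = -(-size_mb // CHUNKLET_MB)  # ceiling division
    by_drive = defaultdict(list)
    for c in chunklets:
        if c["type"] == drive_type:
            by_drive[c["drive"]].append(c)
    if sum(len(v) for v in by_drive.values()) < needed:
        raise RuntimeError("not enough free chunklets of this drive type")
    chosen = []
    for drive in cycle(sorted(by_drive)):
        if len(chosen) == needed:
            break
        if by_drive[drive]:
            chosen.append(by_drive[drive].pop())
    return chosen

free = carve(n_drives=40, drive_mb=4096, drive_type="FC")
vol = build_volume(free, size_mb=2048, drive_type="FC")  # 8 chunklets
print(len({c["drive"] for c in vol}))  # 8 distinct drives carry the volume
```

Because chunklets are picked one drive at a time, even a small eight-chunklet volume ends up spread across eight spindles, which is the mechanism behind the wide-striping performance claims below.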
4. The HP 3PAR T800 Storage System supports a maximum of 128 direct Fibre Channel host connections.

• Only copy what has changed: On legacy storage arrays, when “fat” volumes are copied, the
extra space within those volumes is also copied. As if this weren’t enough, volume copies created
for data recovery purposes can compound this waste by replicating the empty space yet again. The
resulting inefficiency is a serious challenge, particularly in virtual server and virtual desktop
deployments where large amounts of capacity are typically required at the outset. HP 3PAR Virtual
Copy Software allows clients to take capacity-minimizing, non-duplicative, copy-on-write snapshots
of data. This represents a significant savings in capacity purchases versus the use of traditional full
physical copies. Virtual Copy’s fine-grained auto-growth capability also makes it more efficient than
other copy-on-write technologies, which, in addition to duplicating unused space, require
conservative reservations of copy space.
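The copy-on-write mechanism can be illustrated with a toy model; the structures and names are invented, and details such as pages created after the snapshot are omitted:

```python
# Toy copy-on-write snapshot: the snapshot consumes nothing at creation
# time and grows only as source pages are overwritten. Illustrative only;
# a real implementation also tracks pages created after the snapshot.

class Volume:
    def __init__(self):
        self.pages = {}      # page -> data (thin: only written pages exist)
        self.snapshots = []

    def snapshot(self):
        snap = {}            # empty: no copy space reserved up front
        self.snapshots.append(snap)
        return snap

    def write(self, page, data):
        old = self.pages.get(page)
        for snap in self.snapshots:
            # preserve the pre-write data once, only for changed pages
            if old is not None and page not in snap:
                snap[page] = old
        self.pages[page] = data

    def read_snapshot(self, snap, page):
        return snap.get(page, self.pages.get(page))

vol = Volume()
vol.write(0, "v1")
snap = vol.snapshot()
vol.write(0, "v2")                 # first overwrite copies the old page
print(len(snap))                   # 1: only the changed page was copied
print(vol.read_snapshot(snap, 0))  # v1
```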
HP 3PAR Remote Copy Software leverages Virtual Copy for remote replication and disaster
recovery and is “thin-aware,” so target volumes provide the same cost and ease-of-use benefits as
thin source volumes. With the combination of HP 3PAR Thin Provisioning Software and HP 3PAR
Remote Copy Software, both primary and remote sites can share in the benefits of allocating
volumes just once while consuming only necessary physical capacity. The result is unprecedented
efficiency in data replication.
As an example of this efficiency, consider a scenario where an application user requests 30 TB of
capacity with a traditional storage array. If we assume a 33% capacity utilization rate, actual
written data only amounts to approximately 10 TB. With traditional RAID 1 mirroring, the 30 TB
request translates into 60 TB of raw capacity required. Include remote data replication, and now
the required capacity expands to 120 TB raw. All of this capacity is required for only 10 TB of
actual written application data. With HP 3PAR Thin Provisioning Software and HP 3PAR Remote
Copy Software, capacity is required only when an application writes data, so by comparison, only
40 TB of raw capacity is needed to support the user’s request, RAID 1 protection, and remote data
replication. This is a savings of 80 TB or roughly 67% of the capacity otherwise required with
legacy solutions.
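The arithmetic in this example is easy to verify:

```python
# The capacity figures from the example above, computed directly.

requested_tb = 30   # capacity the application user asks for
written_tb = 10     # actual written data at ~33% utilization
raid1 = 2           # RAID 1 mirroring doubles raw capacity
sites = 2           # primary plus remote replica

fat_raw = requested_tb * raid1 * sites   # legacy: provision the full request
thin_raw = written_tb * raid1 * sites    # thin: consume only written data
savings = fat_raw - thin_raw

print(fat_raw, thin_raw, savings)      # 120 40 80
print(round(savings / fat_raw * 100))  # 67
```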
• Save 50% on a technology refresh, guaranteed: The HP 3PAR Gen3 ASIC inside each HP
3PAR Storage System features a silicon-based, zero-detection mechanism for converting “fat”
volumes to “thin” volumes without impacting storage performance. This technology—known as Thin
Built In—leverages a unique, software-based virtualization mapping engine for space reclamation
that works with HP 3PAR Thin Conversion Software to remove allocated but unused space in
existing storage volumes. HP 3PAR Utility Storage is the only storage platform with this fat-to-thin
processing capability built into its system hardware [5]. With the HP 3PAR Get Thin Guarantee, new
clients deploying HP 3PAR Storage Systems and HP 3PAR Thin Provisioning and Thin Conversion
Software as part of a storage technology refresh are guaranteed to halve the amount of capacity
required to store their data, or HP will make up the difference with free disk capacity and related
software and support [6].
• Shrink RAID protection overhead: HP 3PAR Utility Storage offers hardware-accelerated,
hyper-efficient Fast RAID 5 and RAID 6 (also known as RAID Multi-Parity, or RAID MP) on all
storage system models. Fast RAID 5 boosts RAID 5 performance to within 10% of RAID 1 [7] but with
significantly less capacity overhead [8]. HP 3PAR RAID MP introduces Fast RAID 6 technology backed
by the accelerated performance and rapid RAID rebuild capabilities of the HP 3PAR Gen3 ASIC.
RAID MP delivers this enhanced protection while maintaining performance levels within 15% of
RAID 10 and with capacity overheads comparable to popular RAID 5 modes [9].

5. See the “HP 3PAR Thin Technologies Solution Brief” for details:
http://www.3par.com/SiteObjects/E79E9D14AE7ACA31541DDEA841E2BC91/4AA3-2545ENW.pdf
6. Eligibility for the Get Thin Guarantee program is subject to acceptance of a Get Thin Offer containing the Terms and
satisfaction of those Terms. Contact your sales representative for full details.
7. RAID 1 mirroring requires 100% capacity overhead; RAID 5 (3+1) consumes only 33% more capacity, for a 67% savings.
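The overhead figures for these RAID layouts follow directly from the ratio of redundancy drives to data drives:

```python
# Capacity overhead per RAID layout, matching the figures in the text:
# RAID 1 keeps a full mirror; RAID 5 (3+1) and RAID 6 (6+2) spend only
# their parity drives.

def overhead_pct(data_drives, redundancy_drives):
    """Extra raw capacity as a percentage of usable capacity."""
    return round(redundancy_drives / data_drives * 100)

print(overhead_pct(1, 1))   # RAID 1 mirroring: 100
print(overhead_pct(3, 1))   # Fast RAID 5 (3+1): 33
print(overhead_pct(6, 2))   # RAID MP / RAID 6 (6+2): 33
```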
• Purchase fewer arrays to get the job done: Traditionally, organizations have been forced
to purchase additional arrays to meet application service level requirements and for cost
optimization of data. With the InForm OS, the massively parallel and fine-grained striping of data
across internal resources assures high and predictable levels of service for all workload types to
enable clients to consolidate with confidence. At the same time, autonomic tiering capabilities and
support for multiple drive types enable clients to achieve an optimal balance of price and
performance for multiple types of data within a single, cost-efficient array.
– Performance advantages
With HP 3PAR Utility Storage, data is striped widely across all system resources (controllers,
cache, disks, and loops) to leverage fine-grained virtualization capabilities and provide superior
performance. This is particularly important in virtual server and virtual desktop environments,
where storage is often the performance bottleneck. With the InForm OS, wide striping enables the
system to collectively leverage resources across all disk and controller resources to eliminate
tradeoffs between utilization and performance. Even the smallest volumes can leverage the
performance of 50 or 100 disk drives and all of the system’s clustered Controller Nodes for
optimal performance without compromising utilization—something that can’t be said about
manual performance boosting techniques such as “short stroking.”
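A back-of-the-envelope comparison shows why wide striping matters; the per-drive IOPS figure is an assumption for illustration, not a 3PAR specification:

```python
# Rough IOPS comparison for a small volume: wide-striped across 100
# spindles versus confined to a dedicated 4-drive RAID group. The
# 150-IOPS-per-15K-spindle figure is an illustrative assumption.

DRIVE_IOPS = 150

wide_striped = 100 * DRIVE_IOPS  # chunklets of the volume touch 100 drives
dedicated = 4 * DRIVE_IOPS       # same volume confined to a 4-drive group

print(wide_striped, dedicated)   # 15000 600
```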
The HP 3PAR Architecture also features mixed workload support that enables transaction- and
throughput-intensive workloads to run without contention on a single storage system without
manual segregation of workloads to different physical resources. This capability is a key enabler
of multi-tenancy, and eliminates the need to purchase and maintain separate arrays to support
individual applications. The resulting alleviation of data center sprawl can reduce storage
footprint by 50% or more.
8. 3PAR Inc. and Oracle Corporation, “Simplified Database Storage Management That Lowers Management Costs and Yields
High Storage Utilization,” April 2008,
http://www.3par.com/SiteObjects/FAD0993865AC1636B4E2A3CEF6B64131/oracle_3par_wp_final_0.pdf (accessed 1 March 2011).
9. Based on internal 3PAR testing using RAID 6 (6+2).

– Autonomic storage tiering
HP 3PAR Utility Storage features unique autonomic storage tiering capabilities that reduce costs
while delivering the agility and efficiency to meet changing and unpredictable workloads in even
the most demanding virtual and cloud data centers. The platform’s autonomic approach to service
level optimization is designed to pair data with the most cost-effective resource capable of
meeting service level requirements at any given time, giving organizations the agility to react
quickly to changing application and infrastructure requirements with total confidence. Clients can
achieve service level targets at the lowest possible cost while increasing infrastructure agility and
minimizing the risks typically associated with moving data between storage tiers.
Traditional approaches to service level optimization rely on static, application-level tiering. Key
limitations of this approach include: complexity; inability to move data without downtime or
service level impacts; and time-consuming planning, configuration, and migration. HP 3PAR
Dynamic Optimization Software offers non-disruptive, autonomic workload rebalancing across HP
3PAR Storage Systems. With Dynamic Optimization, application volumes are non-disruptively
distributed and redistributed across tiers to align application requirements with data QoS levels
on demand.
HP 3PAR Adaptive Optimization Software leverages the same fine-grained data movement
engine as Dynamic Optimization but applies it to independent regions within a volume. The result
is highly reliable, non-disruptive autonomic tiered storage that delivers the right QoS to the right
data at the right time to meet service level targets for the lowest possible cost.
With Adaptive Optimization, HP 3PAR Storage Systems are also able to deliver high
performance levels in even the most challenging environments using an extremely lean Solid State
Drive (SSD) tier in combination with highly affordable Nearline (Enterprise SATA) drives. At any
given time, only the most performance-intensive data is placed onto SSDs, meaning that service
level targets can be met with a minimal number of these premium drives. Meanwhile, the
ability to stripe writes widely across all system resources, combined with abundantly scalable
levels of performance, enables the use of highly cost-efficient Nearline drives to meet broader
capacity requirements. This combination of HP 3PAR Adaptive Optimization Software, SSDs, and
widely striped SATA drives delivers a savings of up to 30% over the cost of using Fibre Channel
drives alone [10].
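The placement policy can be sketched as a simple "keep the hottest regions on SSD" pass; the region granularity and heat metric are invented for illustration:

```python
# Sketch of region-level autonomic tiering: given a small SSD budget,
# keep only the hottest regions of each volume on SSD and leave the
# rest on nearline drives. Region IDs and access counts are invented.

def place_regions(region_heat, ssd_capacity_regions):
    """region_heat: {region_id: recent I/O count}. Returns a tier map."""
    by_heat = sorted(region_heat, key=region_heat.get, reverse=True)
    hot = set(by_heat[:ssd_capacity_regions])
    return {r: ("SSD" if r in hot else "NL") for r in region_heat}

heat = {"r0": 9000, "r1": 12, "r2": 4800, "r3": 3, "r4": 7}
tiers = place_regions(heat, ssd_capacity_regions=2)
print(tiers["r0"], tiers["r2"], tiers["r1"])  # SSD SSD NL
```

Re-running the placement as heat changes is what moves regions between tiers over time; only two SSD "slots" are needed here to serve the overwhelming share of the I/O.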
10. Savings based on the comparison of an HP 3PAR T400 Storage System configured with 320 x 300-GB 15K Fibre Channel
drives and an HP 3PAR T400 Storage System configured with 24 x 50-GB Solid State Drives and 96 x 1-TB Serial ATA drives
using HP 3PAR Adaptive Optimization Software.

• Spend less on remote replication and disaster recovery: HP 3PAR Remote Copy
Software allows clients to protect and share data from any application more simply, efficiently, and
affordably. Remote Copy dramatically reduces the cost of remote data replication and disaster recovery on
several fronts: by leveraging virtual and thin copy technologies unique to HP 3PAR Utility
Storage; by enabling the use of a combination of mid-range and high-end arrays; by eliminating
costly professional services engagements; and by providing both Fibre Channel and native
IP-over-Ethernet support so clients are not forced to convert or extend Fibre Channel connections
with expensive devices. Unique to HP 3PAR Remote Copy, the Synchronous Long Distance
replication mode gives clients an affordable, multi-site alternative for achieving low Recovery Time
Objectives (RTOs) and zero-data-loss Recovery Point Objectives (RPOs) with complete distance
flexibility. Synchronous Long Distance replication combines the best of both worlds by offering the
data integrity of synchronous mode disaster recovery and the extended distances (including
cross-continental reach) possible with asynchronous replication.
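The Synchronous Long Distance idea can be modeled in miniature, with invented names: the host acknowledgment waits only for the nearby synchronous site, while the distant site catches up asynchronously:

```python
# Toy model of Synchronous Long Distance replication (invented names):
# the host ack waits for the nearby synchronous site, giving a
# zero-data-loss RPO, while a distant third site is updated later.

class SyncLongDistance:
    def __init__(self):
        self.primary = []      # local array
        self.sync_site = []    # metro-distance synchronous target
        self.async_site = []   # cross-continental asynchronous target
        self.async_queue = []  # writes awaiting shipment to the far site

    def write(self, record):
        self.primary.append(record)
        self.sync_site.append(record)    # completes before the host ack
        self.async_queue.append(record)  # shipped later, off the write path
        return "ack"

    def drain_async(self):
        """Periodic background shipment to the long-distance site."""
        while self.async_queue:
            self.async_site.append(self.async_queue.pop(0))

r = SyncLongDistance()
r.write("txn-1")
r.write("txn-2")
print(len(r.sync_site), len(r.async_site))  # 2 0: far site lags, near site does not
r.drain_async()
print(len(r.async_site))                    # 2
```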
OPEX savings
According to Gartner, not only does operational spending consistently account for a higher
percentage of overall IT spending on a global scale than do capital expenses, but in 2009-2010,
operational expenses rose while capital expenditures fell [11]. HP 3PAR Utility Storage lowers operating
expenses year after year by optimizing storage efficiency and maintaining it autonomically.
Simplified management reduces storage administration while thin replication technologies enable thin
capacity to stay thin over time to maximize the ongoing cost benefits of deploying thin storage. These
savings allow clients to maintain a sustained focus on innovation rather than operations. With HP
3PAR Utility Storage, clients can:
• Reduce storage footprint: The ultra-dense Controller Nodes and Drive Chassis of HP 3PAR
Storage Systems, combined with mixed workload support and autonomic tiering, enable clients to
consolidate onto fewer arrays that consume less real estate and thus cost less to house. This
footprint reduction is particularly important for cost control when leasing data center space, and for
data centers located in highly dense urban areas such as London and New York City, where real
estate and power resources are limited and controlling data center sprawl is crucial to keeping
operating costs down.
Capacity reduction using HP 3PAR Thin Provisioning Software and HP 3PAR Thin Conversion
Software further reduces storage requirements to conserve precious data center floor space and save
on operating costs related to housing storage equipment. In addition, for multiple-cabinet systems,
no adjacency requirements and the ability to place individual cabinets up to 100 meters apart
eliminate the need for costly data center space reservations and planning when using HP 3PAR
Storage Systems. This allows customers to reduce operating expenses by squeezing every inch out
of their data centers.
• Maximize administrative efficiency: Most arrays available today require as many as 30
steps just to provision a single volume, and involve burdensome restrictions that users must
remember to ensure proper capacity planning. The InForm OS provides dramatically simplified,
autonomic provisioning and management that relieves users of tedious planning chores and reduces
the potential for error. With HP 3PAR Storage Systems, there is no pre-planning, and provisioning is
cut to just two steps that can be completed in less than 15 seconds. With HP 3PAR Thin
Provisioning Software, users can provision just once for the lifetime of an application. This
represents a dramatic savings, year after year.
The multi-dimensional scalability of HP 3PAR Storage Systems, coupled with the ease-of-use and
autonomic management capabilities built into the InForm OS, allow clients to achieve more with
less. Competing systems require customers to manage complex and layered storage infrastructures
composed of multiple storage devices, switches, and host elements—all with their related
management software. When multiplied by the typical activities of any storage environment and the
need to maintain compatibility between the many levels of hardware and software, management
and training for such an environment can be daunting, time consuming, and expensive.
In comparison, HP 3PAR Utility Storage has been shown to improve administrative efficiency tenfold
by reducing administration time by up to 90% [12]. For example, HP 3PAR Rapid Provisioning
eliminates array planning by delivering instant, application-tailored, autonomic provisioning
through the fine-grained virtualization of lower-level components. Provisioning is managed
intelligently and automatically while striping of data across internal resources assures high and
predictable service levels for all workload types. Now three clicks and 60 seconds are all that is
needed to fully create and provision multiple volumes to multiple hosts [13].
11. Gartner, Inc., “IT Metrics: IT Spending and Staffing Report, 2011,” 25 January 2011.
12. Based on documented client results that are subject to unique business conditions, client IT environment, HP products
deployed, and other factors. These results may not be typical; your results may vary.
13. Based on the use of HP 3PAR Autonomic Groups Software, which is provided as part of the HP 3PAR InForm Operating
System Software.

The latest version of the HP 3PAR Management Console is fully integrated with signature HP 3PAR
applications such as HP 3PAR Thin Provisioning, Virtual Copy, Dynamic Optimization, Virtual
Domains, and Remote Copy Software so that it consolidates all the tools that clients need to
provision, manage, optimize, and protect their entire utility storage deployment from a single
console. This one console enables administrators to do it all, including: unified management of all
arrays (local and remote; all storage system models, including mid-range and high-end arrays);
multi-site replication set up and tested in just minutes (even using multiple replication modes and
system models); and autonomic disaster recovery configuration.
• Maintain capacity efficiency over time: By dramatically reducing overall capacity
requirements and keeping utilization rates high over time, HP 3PAR thin technologies not only
minimize ongoing storage administration and real estate requirements, but also the cost of
powering, cooling, and managing storage—which are three major contributors to OPEX. Only HP
3PAR Utility Storage features a multifaceted thin storage approach that gives clients the ability to
start thin, get thin, and stay thin.
To optimize the cost savings achieved with Thin Provisioning and Thin Conversion, thin
environments must stay thin, which is where HP 3PAR Thin Persistence and Thin Reclamation
Software come into play. These unique solutions keep thin storage lean and efficient by
autonomically reclaiming free but unused space on an ongoing basis. In addition, for environments
that use VMware vSphere™, Microsoft® Windows® (with SDelete), and Oracle® Database (with the
ASM Storage Reclamation Utility), Thin Persistence can help free significant amounts of stranded
storage. HP 3PAR Utility Storage can also drive additional capacity benefits for environments using
Veritas Storage Foundation™ by Symantec™. The HP 3PAR Thin Persistence Software package
includes HP 3PAR Thin Reclamation Software for Veritas Storage Foundation, which enables the use
of granular file system-level information to autonomically reclaim unused space within thin volumes
so they remain thin over time.
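Reclamation of zeroed space can be sketched as a scan that returns all-zero pages to the free pool; the 4 KB page size and structures are assumptions, not the Thin Persistence implementation:

```python
# Sketch of stay-thin space reclamation: pages that a host utility (for
# example SDelete) has overwritten with zeros are detected and returned
# to the shared free pool. Page size and structures are illustrative.

ZERO_PAGE = b"\x00" * 4096

def reclaim(volume_pages, free_pool):
    """Drop all-zero pages from the volume's page map, freeing them."""
    freed = [p for p, data in volume_pages.items() if data == ZERO_PAGE]
    for p in freed:
        del volume_pages[p]
        free_pool.append(p)
    return len(freed)

pages = {0: b"data" + b"\x00" * 4092, 1: ZERO_PAGE, 2: ZERO_PAGE}
pool = []
print(reclaim(pages, pool), sorted(pages))  # 2 [0]
```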
• Reduce server change management costs: Managing patches, new releases, and
parameters for operating systems and applications across multiple servers is a laborious,
error-prone, and capacity-intensive ongoing process. HP 3PAR Utility Storage solves this problem by
allowing users to maintain a few “golden” boot images, and then to distribute these tested images
to countless servers using space-efficient read-write snapshots. Using Virtual Copy, an administrator
can create a golden image—a read-only snapshot of a given operating system or application. This
enables the administrator to then create multiple read-write instances of this image, one for each
server. The writable nature of the image allows simple customization (like unique parameter
settings) to be specified for each server as required. Administrators gain a high degree of
centralized control and a highly scalable solution applicable to multiple servers. The results: rapid,
“bare-metal” provisioning and simplified server patch management.
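The golden-image pattern amounts to a read-only base plus a per-server writable delta, sketched here with invented structures:

```python
# Sketch of golden-image provisioning with writable snapshots: each
# server boots from a shared read-only golden image, and only its own
# changes (parameters, patches) consume new space. Illustrative only.

def make_clone(golden_image):
    """A clone starts empty and falls back to the shared golden image."""
    return {"delta": {}, "base": golden_image}

def clone_write(clone, path, content):
    clone["delta"][path] = content   # per-server change stored once

def clone_read(clone, path):
    return clone["delta"].get(path, clone["base"].get(path))

golden = {"/boot/kernel": "v5.4", "/etc/app.conf": "defaults"}
web1, web2 = make_clone(golden), make_clone(golden)
clone_write(web1, "/etc/app.conf", "role=web threads=64")  # unique settings

print(clone_read(web1, "/etc/app.conf"))  # role=web threads=64
print(clone_read(web2, "/etc/app.conf"))  # defaults
print(clone_read(web2, "/boot/kernel"))   # v5.4 (shared, stored once)
```

Patching the golden image and re-deriving clones is what makes fleet-wide change management a single centralized operation.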
Greater ROI for applications and services
For companies in the information business, the potential to generate returns from applications and
services is directly related to the ability to access and serve up data to customers. For service
providers, it’s about getting new clients up and running more quickly or speeding time to market for
new services. For IT organizations, it’s all about new project ROI. HP 3PAR Utility Storage improves
both efficiency and agility to enable enterprise and service provider clients alike to improve return on
investment as compared to competing storage technologies. With HP 3PAR Utility Storage, clients
can:
• Initiate revenue-generating projects sooner: With HP 3PAR Thin Provisioning Software,
clients are not required to wait until next quarter’s or next year’s budget allocation in order to
deploy additional applications or provision new clients to grow the business. Nor are they required
to wait until needed storage is planned, sized, negotiated, procured, and installed before storage
can be allocated. By maintaining a small buffer of physical capacity, clients can quickly and easily
deploy new applications and services or provision new clients, whether for scheduled growth or
for unplanned surges in demand. In either case, delays are eliminated and the focus is shifted
away from resource procurement to adding value to the business. Capacity is always available to
start new projects, and administrator productivity does not depend on storage purchase, planning,
or installation.
• Accelerate time-to-deployment: Deploying, maintaining, and upgrading mission-critical
applications and services affects the ability to generate revenue and decrease costs. The rapid,
autonomic provisioning capabilities of the InForm OS enable clients to reduce new project
deployment windows and speed time-to-market for new applications and services.
• Reduce planned downtime: In brittle IT environments, growth or change often means
downtime. With HP 3PAR Utility Storage, change management is non-disruptive—making it easy to
upgrade, reconfigure, or reconnect HP 3PAR Storage Systems. Costly and time-consuming data
migrations are also a thing of the past.
• Increase availability: The hardware and software fault tolerance of HP 3PAR Storage Systems
represents a new paradigm in availability measurement. With a clustered architecture that
combines the best of modular and monolithic arrays, high levels of performance can be sustained
even under major component failure conditions.
• Protect against demand volatility: Sustained growth and demand spikes can strain the
attainable service levels of any IT department or service provider. As indicated in the table below,
HP 3PAR Utility Storage delivers 2 to 6 times greater performance than competing monolithic or
modular arrays. In addition, increasing
performance is non-disruptive and easy to implement. As a result, clients can react quickly to new
opportunities or unplanned demands.
• Maintain business continuity: HP 3PAR Utility Storage delivers fast and economical
application and disaster recovery based on HP 3PAR Virtual Copy and Remote Copy Software.
With Virtual Copy, clients can maintain an archive of frequent online copies of production data
sets. After a database corruption event, administrators can quickly and automatically recover data
from a “clean” copy while retaining a complete set of protected copies. This minimizes the costs of
downtime and improves service levels, efficiently and with reduced chance for error. By backing up
from snapshots, Virtual Copy allows clients to eliminate impact on production servers and, in the
case of SAN-based backups, network resources as well.
HP 3PAR Utility Storage also offers an automated backup solution with database-awareness and
backup server integration that delivers fast, efficient database backup with minimal impact on the
production server. The intelligence and automation of the platform’s data protection solutions
minimize complexity, human resource requirements, and potential errors at critical moments. HP
3PAR Recovery Manager Software for Oracle, HP 3PAR Recovery Manager Software for Microsoft
SQL Server, and HP 3PAR Recovery Manager Software for Exchange are application-aware
snapshot management solutions that ensure complete and consistent snapshot sets. A point-and-click
management console minimizes operator error and speeds time to recovery. For Oracle
environments, additional backup integration is included with Symantec™ NetBackup™, providing
"one-click," non-disruptive, immediate off-host backup.
HP 3PAR Recovery Manager Software for VMware vSphere gives VMware administrators simple,
cost-effective control over storage resources and superior granularity when it comes to data
protection and recovery. This plug-in powers the creation of hundreds of VM-aware, point-in-time
snapshots via a simple, automated process for protecting and recovering Virtual Machine Disks
(VMDKs), VMware vStorage Virtual Machine File Systems (VMFS), individual VMs, and even
individual files within VMware vSphere environments.
HP 3PAR GeoCluster Software for Microsoft Windows works with Windows Server Failover
Clustering (WSFC) and HP 3PAR Remote Copy Software to automate application failover and
recovery. As a result, the entire disaster recovery process becomes simpler, quicker, and more
efficient. These benefits transfer to disaster recovery testing as well, which supports compliance
efforts and eases the administrative burden.
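The failover decision that cluster software and array replication automate together can be sketched at a high level; this is a hypothetical illustration of the decision logic, not GeoCluster's or WSFC's actual interface.

```python
# Hypothetical sketch of automated site-failover logic of the kind a
# cluster-plus-replication pairing coordinates. The function name and
# return strings are illustrative, not 3PAR or Windows APIs.
def failover(primary_healthy, replica_in_sync):
    """Decide the recovery action when the cluster detects a fault."""
    if primary_healthy:
        return "no action"
    if replica_in_sync:
        return "promote replica, restart application at DR site"
    return "alert operator: replica stale, manual recovery needed"


print(failover(False, True))   # prints "promote replica, restart application at DR site"
```

Automating this decision, rather than walking through it manually during an outage, is what makes both real recoveries and disaster recovery tests faster and more repeatable.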
• Accelerate time-to-decision: The scalability of HP 3PAR Utility Storage allows users to maintain
data centrally, avoiding the delays and complexity associated with aggregating data from
dispersed locations. For example, HP 3PAR Virtual Copy Software can be used to enable real-time
data analysis. Users can make an instant copy of production data available to a data warehouse
application for extraction and analysis, all with minimal performance impact to production
resources. By allowing administrators to co-locate production and decision support datasets, Virtual
Copy enables IT departments to create instant snapshots of production datasets for use in extract,
transform, and load (ETL) operations, thereby accelerating data warehouse creation. Afterwards,
snapshots of the data warehouse can become multiple sources for parallel datamart creation,
speeding data mining and reporting application delivery.
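The snapshot-driven ETL flow described above can be sketched in miniature. Here a Python deep copy stands in for the array-level point-in-time snapshot (an actual Virtual Copy shares blocks rather than duplicating data); the dataset and variable names are purely illustrative.

```python
import copy

# Production keeps taking writes while the warehouse load reads a frozen
# point-in-time view. All names and data here are illustrative.
production = {"orders": [100, 250, 75]}

# Step 1: snapshot stand-in - freeze a point-in-time view of production.
pit_snapshot = copy.deepcopy(production)

# Step 2: production continues to change after the snapshot is taken.
production["orders"].append(9999)

# Step 3: ETL extracts from the snapshot, so in-flight writes are not seen
# and the load is consistent without quiescing production.
warehouse_total = sum(pit_snapshot["orders"])

# Step 4: snapshots of the warehouse result can feed several datamarts in
# parallel, each working from the same consistent figure.
datamarts = [{"region_total": warehouse_total} for _ in range(3)]

print(warehouse_total)   # prints 425 - the pre-snapshot state only
```

The key property is in step 3: the extract runs against a consistent frozen image, so production can keep serving writes with minimal impact while the warehouse and downstream datamarts are built.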