Metron provides capacity management tools and services for storage area networks (SANs). The document discusses two aspects of storage capacity - disk space and performance. It emphasizes the importance of tracking storage usage and costs, implementing tiered billing models, and using tools like Athene to forecast needs, track utilization across virtual and clustered environments, and establish performance baselines and thresholds. Effective capacity management requires collaboration between business and IT stakeholders to understand usage and ensure storage supports business goals.
Synaptic Storage as a Service is a scalable and pay-per-use cloud storage solution that allows customers to store data without upfront costs. Customers pay monthly based on usage and storage scales on demand as needs change. The service offers reliable online access to stored data from any location through a simple API. It aims to help businesses avoid storage planning challenges and save costs compared to owning physical storage infrastructure.
Are your storage requirements growing too fast? Are the costs of managing this growth taking more and more of your IT budget? Would you like to make better use of existing storage without adding more complexity to the infrastructure? IBM System Storage SAN Volume Controller can help solve these problems and get you on the road...
Universal Replicator is a replication engine from Hitachi that asynchronously replicates data across storage platforms. It significantly reduces resource consumption compared to traditional replication methods through disk-based journaling. Universal Replicator also enhances disaster recovery capabilities by supporting advanced three data center configurations over long distances while maintaining data integrity. It provides benefits such as network optimization, continuous data protection during outages, and compliance with business continuity requirements.
1) Grid computing virtualizes and pools IT resources to improve utilization rates and efficiency while lowering capital and operational expenses.
2) By consolidating workloads and enabling on-demand scale-out, grid computing provides a more cost effective approach to information management compared to traditional dedicated silos.
3) Automated grid management through tools like Oracle Enterprise Manager can further reduce operational costs and improve staff productivity by automating tasks like provisioning, patching, monitoring and problem resolution.
Arul Murugan Subramanian is an experienced IT Operations Manager and Storage/Backup specialist with over 15 years of experience managing infrastructure, projects, and teams. He has extensive expertise in storage platforms like IBM SVC, IBM XIV, HP EVA, and NetApp as well as backup software such as HP Data Protector, IBM TSM, and CommVault. Currently he works as a Technical Manager at Mindtree, where he handles storage and backup operations and provides improvement suggestions to optimize processes, tools, and personnel management.
STN Event 12.8.09 - Chris Vain PowerPoint Presentation (mcini)
Traditional disaster recovery approaches have limitations around cost, complexity, and long recovery times. Virtualization provides opportunities to simplify and automate disaster recovery management while reducing costs. Solutions like VMware Site Recovery Manager leverage storage replication between sites to automate the failover and recovery of virtual workloads, enabling non-disruptive testing and faster recovery. Specialized vendors also offer workload-focused solutions for virtual machine protection and recovery.
HP 3PAR Utility Storage is designed for virtual or cloud datacenters. It supports unpredictable and mixed workloads in a multi-tenant and scalable environment. It provides efficient storage that reduces costs through features like thin provisioning, adaptive optimization, and dynamic optimization. These features allow for over-provisioning capacity and reducing space and power requirements. HP 3PAR also increases storage management efficiency by halving administrative time and eliminating performance-related support calls. It is suited for organizations transitioning to private, public, or hybrid cloud models.
During its beta test of TPC 4.2, Insurer reported improved productivity and time-to-value. Enhanced storage resource agents reduced scan run times. New APIs and enhanced topology maps provided an end-to-end view of the environment for better decision making. Real-time monitoring of replication models and role-based access eliminated previously time-consuming manual processes...
Using multi-tiered storage systems for storing both structured & unstructured... (ORACLE USER GROUP ESTONIA)
The document discusses the challenges of managing growing data in a tiered storage environment and introduces Oracle's Storage Archive Manager (SAM) as a solution. SAM provides automated policy-based movement of data between tiers of flash, disk, and tape storage. It integrates the Quick File System (QFS) to provide a POSIX compliant interface and simplify management across the different tiers. SAM aims to provide the right data on the right tier at the right time for optimal cost, performance and protection throughout the data lifecycle.
Learn the facts about replication in mainframe storage webinar (Hitachi Vantara)
Business continuity is essential for today's enterprise computing environments, and protecting your data and information is key. However, the many myths associated with data replication can be confusing. How do you sort truth from fiction? Join Hitachi solution architect Joe Amato to learn about in-system replication as well as replication to remote locations, both synchronously and asynchronously. You'll come away equipped with valuable insight into the business continuity solutions available for mainframe storage.
Periyakaruppan Neelamegam has over 11 years of experience in information technology, including 9 years of experience in storage administration. He has extensive expertise in EMC storage solutions such as VMAX, DMX, VNX, CLARiiON, Celerra, and RecoverPoint. He has worked as a storage administrator for various companies in India and Qatar, where he performed tasks such as storage allocation, replication, migration, and disaster recovery. He also has experience with VMware, Cisco, and Brocade networking solutions.
Ajith has over 9 years of experience in IT storage and systems administration. He has expertise in EMC, IBM, HP, and Brocade storage arrays and networking equipment. He is currently a Senior Storage Engineer at Cerner Health Services, where he manages 16PB of data across their data centers. Previously, he held storage engineering roles at Fidelity Investments and Mindtree. Ajith has technical certifications from IBM, Brocade, and other vendors. He aims to pursue a challenging career in storage technology where he can apply his skills and work with experienced professionals.
This document summarizes an individual's experience working in IT with over 9 years of experience in storage and backup technologies. Their experience includes working with IBM, HP, EMC, and NetApp storage solutions as well as Commvault, IBM TSM, and HPDP backup software. They have experience in capacity planning, performance management, infrastructure implementation, and project transitions. Their current role is as a Senior Technical Specialist at Mindtree focusing on storage solutions like IBM XIV and SVC as well as backup tools like Commvault.
VIRBAK ABIO v3.2 is an enterprise backup application designed for flexibility, scalability, and simplicity. It provides the fastest deployment through a simple installation process and intuitive GUI. ABIO supports heterogeneous environments, all major operating systems and databases, and can scale from small to large enterprises. It aims to make backup configuration and management easy through features like centralized monitoring and automated job configuration.
This document analyzes the impact of virtualizing workloads onto servers using different generations of Intel Xeon processors, including the 7500 series. It finds consolidation ratios onto the 7500 series are 2.2-2.8 times higher than the previous generation. For a sample 554 server environment, consolidation onto the 7500 series reduced power consumption by 51% compared to the previous generation. The 7500 series also better balances CPU and memory utilization.
This document discusses techniques for reducing data storage footprints. It outlines a four-step process: 1) discover and categorize data, 2) automate data lifecycle management, 3) avoid data duplication through techniques like progressive incremental backup, and 4) compress and deduplicate data. The document promotes IBM solutions for reducing data footprints such as data discovery and categorization tools, hierarchical storage management, data deduplication in Tivoli Storage Manager, and storage optimization solutions.
The document discusses trends in data warehousing and analytics, including the rise of data warehouse appliances, column-oriented databases, and in-memory databases. It then introduces Informix Warehouse Accelerator, which combines row and columnar storage, compression, and in-memory technologies to provide extreme performance for data warehousing workloads. Key technologies of the accelerator include 3:1 data compression, frequency partitioning for efficient parallel scanning, and predicate evaluation directly on compressed data.
Designing Highly-Available Architectures for OTM (MavenWire)
The document discusses designing highly available architectures for OTM applications. It begins by emphasizing the importance of understanding business requirements and budget constraints when designing redundancy. It then outlines some real-world risks like hardware and application failures. The presentation provides an overview of traditional HA solutions and emerging virtualization technologies. It also includes a cheat sheet on options for scaling and clustering the web, application, and database tiers based on service level agreements.
Phanindra S V has over 10 years of experience in IT backup and storage. He is proficient with Commvault, Tivoli Storage Manager, NetBackup, and HP Data Protector. He has expertise managing tape libraries, performing backups, troubleshooting issues, and ensuring backup success rates of over 99%. Phanindra seeks to leverage his skills and experience in a position offering growth at an organization matching his abilities.
This document discusses Business Process Insight (BPI), an approach and platform for discovering and analyzing end-to-end business processes. It presents the BPI lifecycle, architecture, and addresses key research challenges. The architecture uses cloud-based data storage and includes modules for data integration, correlation, process mining, comparison, and predictive analytics. It aims to provide process intelligence through analytics on both historical and real-time data to improve business operations and manage risks. Future work areas include balancing data scale and query capabilities and parallelizing algorithms.
Get a Pretested, Validated Infrastructure
Cisco and NetApp have collaborated to create FlexPod, a prevalidated data center solution built on a flexible, shared infrastructure. This predesigned base configuration can:
Scale easily
Be optimized for a variety of mixed application workloads
Be configured for virtual desktop or server infrastructure, secure multi-tenancy, or cloud environments
Implementing a Disaster Recovery Solution using VMware Site Recovery Manager ... (Paula Koziol)
IBM Spectrum Virtualize delivers business continuity capabilities using a stretched cluster configuration together with VMware Site Recovery Manager (SRM). The result is an end-to-end disaster recovery solution for organizations of all sizes. Join this session to understand how IBM Spectrum Virtualize, including offerings like IBM SAN Volume Controller (SVC) and IBM Storwize Family, integrates with VMware SRM to automate and optimize disaster recovery operations. Everyone who works in mission critical environments understands the need for high availability and effective solutions for planned and unplanned outages. Organizations demand disaster recovery operations that are fully automated and can be executed in a repeatable manner, so that they are always prepared for disaster situations. This IBM-VMware solution offers SMB and enterprise customers the ability to survive a wide range of failures and enables seamless migration of applications across company sites for various planned activities, enabling zero-downtime application mobility.
PolyServe DB Consolidation Platform - Clemens Esser (HPDutchWorld)
HP's PolyServe platform allows for consolidating multiple SQL Server instances onto a single physical server or across multiple servers for higher utilization and fault tolerance compared to virtualization. Key benefits include: (1) Increasing SQL Server utilization from 5% to over 75% (2) Guaranteeing high availability for all instances (3) Reducing ongoing administration costs through features like one-click updates. PolyServe offers more efficient consolidation and management of SQL Server workloads than VMware by utilizing shared storage and enabling rapid instance failover between physical servers.
The customer had workloads across 3 datacenters with manual HA and DR approaches. They procured triple the resources for Platinum workloads to support production, HA, and DR. With IBM Enterprise Pools and Mobility, they reduced physical cores by 96, lowering software licensing and acquisition costs. This introduced cloud characteristics like load balancing and provisioning tools, providing a true cloud platform with reduced costs and resources.
2013 OTM EU SIG evolv applications Data Management (MavenWire)
This document discusses the history of Oracle Transportation Management (OTM) implementation processes in Europe and outlines best practices for data management and user access management. It describes how early OTM implementations relied on individual efforts which led to inconsistencies. As the user base grew, common tools and processes were developed but still varied between projects. The document advocates defining standardized practices to improve consistency, supportability and efficiency across implementations. It provides recommendations for best practices in loading reference data, managing data changes over time, and provisioning user access roles and privileges in a centralized manner.
The document outlines steps for establishing formal capacity management in an organization. It argues that real-time monitors are a waste of time and advocates for a proactive approach using tools that can predict potential problems in advance. The key things needed are senior management commitment, process definition, and the right people and tools. Things that help are tracking performance and resource consumption over time and obtaining business information to translate workload forecasts into resource needs.
Open Canarias is an information technology company founded in 1996 and headquartered in the Canary Islands. It has more than 80 employees and offers a variety of ICT services and solutions, including software development, systems, consulting, and training. Its clients include banks, governments, hospitals, and tourism companies.
Bass Chorng is a principal capacity engineer at eBay who specializes in database performance, availability, and scalability. He established eBay's database capacity team in 2003. eBay uses both NoSQL and RDBMS databases including Cassandra, MongoDB, CouchBase, and Oracle. eBay sees over 400 billion database calls per day across 2000 NoSQL nodes and 450 Oracle nodes while hosting 800 million active items and 120 million active users. Capacity planning involves analyzing traffic, utilization, forecasting growth, and converting resource needs into costs. It requires knowledge of the platform, bottlenecks, and new technologies.
Database tuning is the process of optimizing a database to maximize performance. It involves activities like configuring disks, tuning SQL statements, and sizing memory properly. Database performance issues commonly stem from slow physical I/O, excessive CPU usage, or latch contention. Tuning opportunities exist at the level of database design, application code, memory settings, disk I/O, and eliminating contention. Performance monitoring tools like the Automatic Workload Repository and wait events help identify problem areas.
Capacity planning is the process of determining a company's production capacity needed to meet changing demands. It involves determining the type, amount, and timing of capacity required. Key decisions include selecting the appropriate level and flexibility of facilities while maintaining balance. The process includes estimating future needs, evaluating existing capacity, identifying alternatives, analyzing costs, assessing qualitative factors, selecting an alternative, and monitoring results. Efficiency and utilization are measured by comparing actual output to effective and design capacities. Economies and diseconomies of scale affect costs based on output levels. Cost-volume analysis examines the relationships between costs, revenues, and profits at different volumes.
Presentation in Mexico of the IBM Storwize V7000, aimed at the midrange market but with capabilities previously found only in more advanced systems.
The document discusses capacity planning for products and services. It explains key concepts like capacity, effective capacity, and utilization. It also outlines factors to consider when developing capacity alternatives and approaches for evaluating alternatives, including cost-volume analysis, break-even analysis, financial analysis, and waiting-line analysis. The goal of capacity planning is to determine the appropriate level and timing of capacity to meet future demand in a cost-effective manner.
A scalable server environment for your applications (GigaSpaces)
This document discusses building applications for the cloud and provides best practices. It notes that deploying applications on the cloud introduces challenges related to scalability, reliability, security, and management. It recommends that applications be designed to be elastic, memory-based, and easy to operate in order to fully take advantage of the cloud. Specific steps are outlined, such as using in-memory data grids for messaging and as the system of record, and auto-scaling the web tier.
5 Keys to Delivering Storage-as-a-Service Without Losing Control (Jeannette Grand)
Learn how to deliver storage-as-a-service within your organization to improve end user satisfaction, all while maintaining control of your storage environment. Presentation given at NetApp Insight event, November 2012.
Beyond EBS: Storage Alternatives in the Cloud (NetApp)
This document discusses the evolution of Amazon EC2 and EBS cloud storage offerings over time from 2006 to 2012. It notes limitations of early offerings like local instance storage and EBS in terms of performance, durability, and suitability only for basic applications. More recent additions like Provisioned IOPS EBS improved performance but gaps remain compared to traditional enterprise storage. The document argues for a new generation of block storage for the cloud that provides independent scaling of performance and capacity, guaranteed quality of service, higher durability and availability, efficiency through data reduction, automation, and true cloud scale. It suggests all-flash storage designed for cloud providers could help by restoring balance between performance and capacity while reducing costs of storage infrastructure.
This document provides an overview of caching and distributed caching principles. It discusses the goals of caching to improve performance by storing frequently accessed data closer to where it is needed. It explains concepts like memory hierarchy and why distributed caching is needed to manage huge amounts of data across multiple servers. Some key use cases of caching are also mentioned. The document discusses caching topologies like partitioned and replicated caching. It provides examples of caching patterns and load techniques. Finally, it discusses some prominent distributed caching solutions and shows sample code for using Hazelcast.
Storage for Microsoft® Windows Environments (Michael Hudak)
This document explores some of the common challenges Windows® architects and administrators face in managing storage for Microsoft® environments with workloads such as:
• Microsoft Exchange Server,
• Microsoft Office SharePoint® Server,
• Microsoft SQL Server
This document discusses HP's converged infrastructure (CI) storage solutions. It summarizes how CI helps address storage challenges through optimized performance, reduced complexity, maximized utilization, and reduced power footprint. It then discusses how HP's virtual resource pools require a virtualized storage infrastructure to transform information economics. Finally, it provides details on HP StorageWorks virtualization solutions and the benefits of the HP LeftHand P4000 storage solution.
Elastic storage in the cloud session 5224 final v2 (BradDesAulniers2)
IBM Spectrum Scale (formerly Elastic Storage) provides software defined storage capabilities using standard commodity hardware. It delivers automated, policy-driven storage services through orchestration of the underlying storage infrastructure. Key features include massive scalability up to a yottabyte in size, built-in high availability, data integrity, and the ability to non-disruptively add or remove storage resources. The software provides a single global namespace, inline and offline data tiering, and integration with applications like HDFS to enable analytics on existing storage infrastructure.
The document discusses MapR's distribution for Apache Hadoop. It provides an enterprise-grade and open source distribution that leverages open source components and makes targeted enhancements to make Hadoop more open and enterprise-ready. Key features include integration with other big data technologies like Accumulo, high availability, easy management at scale, and a storage architecture based on volumes to logically organize and manage data placement and policies across a Hadoop cluster.
This document discusses leveraging solid state drives (SSDs) in tiered storage environments to improve application performance and reduce costs compared to using only hard disk drives (HDDs). It describes how SSDs can deliver substantially better I/O performance than HDDs. Experiments showed significant performance improvements and substantial reduction in the number of drives needed when using SSDs, resulting in reduced costs from smaller footprint, lower energy usage, and less hardware to maintain. The document provides guidance on implementing tiered storage with SSDs and HDDs to optimize performance and costs.
The document discusses and compares several data storage solutions used by SLC, including Dell EqualLogic storage arrays, Dell Compellent storage arrays, and SyncSort NSB backup software with NetApp storage. The Dell EqualLogic arrays were used to establish a VMware vSphere 5 test and development environment, allowing migration of virtual machines and freeing up production storage space. The Dell Compellent arrays were chosen over Hitachi and NetApp options due to lower total cost of ownership and benefits like automated tiered storage, data replication, and expandability. SyncSort NSB backup software improved on the previous backup solution by reducing full backup time from 80 hours to under 8 hours.
This document discusses Oracle's storage and Linux portfolio. It provides an overview of Oracle's storage offerings including Exadata, Sun ZFS Storage Appliance, and tape storage. It then discusses how Oracle Storage is engineered for Oracle software. The document also summarizes Oracle Linux and how it provides a reliable, high-performing Linux environment along with tools for management and clustering. It compares support and pricing of Oracle Linux to Red Hat Enterprise Linux. Finally, it outlines Oracle's x86 server strategy and differentiation.
Vendor Landscape: Small to Midrange Storage Arrays (NetApp)
Review this InfoTech report that evaluates the latest storage array vendor landscape to help IT staff find the best match for their business and IT needs.
Based on my article published in the Microsoft Architecture Journal: Issue 17. Available online at http://www.msarchitecturejournal.com/pdf/Journal17.pdf (AbhijitGadkari1)
VMworld 2014: Virtualize Active Directory, the Right Way! (VMworld)
Virtualizing Active Directory domain controllers can provide benefits like increased availability and scalability. However, there are some safety considerations to take into account, such as preventing "USN rollback" which occurs when a domain controller's state is reverted, like after restoring from a snapshot. New features in Windows Server 2012 and VMware vSphere help address this, such as the VM Generation ID which changes when the domain controller state is modified, triggering safety mechanisms to isolate changes. Proper configuration following best practices is important for successfully virtualizing Active Directory.
How Can Hypervisors Leverage Advanced Storage Features? VMFS(x) on the storage attached to the ESX/ESXi hosts works perfectly fine, but the network usage (IP/FC/etc.) goes up significantly when the storage is coming from NAS or SAN. The goal is to offload the file operations to the NAS/SAN-based arrays and leverage maximum benefits to increase I/O performance and storage utilization and reduce network usage.
The document discusses designing a scalable and high performing portal infrastructure using a portal farm topology rather than a conventional clustered topology. A portal farm consists of standalone portal instances with load balancing rather than a managed cell. This provides simplicity but lacks some clustering features. The document recommends a common caching tier using DataPower XC10 appliances to provide session caching and improve performance and scalability.
Covers the problems of achieving scalability in server farm environments and how distributed data grids provide in-memory storage and boost performance. Includes summary of ScaleOut Software product offerings including ScaleOut State Server and Grid Computing Edition.
EvoApp's Bermuda platform addresses the challenges of big data by enabling real-time analysis of massive datasets through iterative, ad-hoc querying of both structured and unstructured data in a cloud-native environment. Bermuda leverages virtual machines, cloud storage, and in-memory processing to handle terabytes of data across servers with sub-second response times. This allows for flexible, low-cost analysis of large datasets for applications such as dynamic pricing, predictive maintenance, and fraud detection.
AWS Summit 2013 | Auckland - Building Web Scale Applications with AWS (Amazon Web Services)
This document discusses scaling web applications on AWS. It provides the following key points:
1. Loose coupling of application components allows them to scale independently and improves fault tolerance. Data and services should reside outside components that need to scale.
2. Architecting for horizontal scaling across multiple servers or instances allows applications to scale more easily than vertical scaling on single, larger instances.
3. Session and application state should be stored in a separate, scalable service like DynamoDB to avoid bottlenecks.
4. AWS services like ELB, Auto Scaling, RDS, DynamoDB and S3 help applications scale dynamically based on load and eliminate the need to manage infrastructure.
Capacity Management for SAN
1. Metron
Capacity Management
for SAN Attached Storage
Warning: Low Disk Space
2. Metron-Athene
• Established 1986
• Stable ownership
• Consistent Focus on CM
• Industry Leadership
www.metron-athene.com
3. Athene
Architecture diagram labels: data sources (z/OS, HP-UX, AIX, Solaris, Linux; DB/application, virtual server, custom), Acquire framework, Control Center, Capacity Database.
4. Objectives
• Trends in storage technology.
• Define two distinct aspects of storage capacity.
• Examine key areas related to capacity management of SAN attached storage.
• Equate with business value.
• Show how tools like Athene can help you achieve your goals.
• Provide ideas about how to proceed with improving storage capacity management processes in your environment.
5. Trends
• Solid state devices
• Cloud storage
• Embedded storage (e.g. Exadata, vBlock)
• Big data (e.g. Hadoop)
• Tiered storage
• Primary de-duplication
• FCoE, 16 Gbps Fibre Channel, and 10 Gbps Ethernet
6. Two Distinct Aspects of Storage Capacity
Disk Performance Capacity – response time, IOPS
Disk Space Capacity – bytes
7. Space Capacity – Growth (measurable)
Changing demands for storage – Slope of line
8. Space Capacity - History
Growth can result in increasing cost and complexity
9. Space Capacity – Growth and Cost Factors
Growth
• Business as usual (Trend)
• Acquisitions
• New applications and projects
Costs
• Equipment, including power
• Resource management, including people
• Storage use by application (Billable Customers)
10. Space Capacity – Storage as a Service
How much are customers consuming?
Don't forget about the IT department and other insiders!
11. Space Capacity – Tiered Service Model
Define what tiers are (platinum, gold, silver, etc…)
Rates should be adjusted on a frequent basis.
Estimate growth versus storage cost declines.
Billing is an effective way to create accountability.
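To make the tiered model concrete, here is a minimal chargeback sketch in Python; the tier names, rates, and consumption figures are illustrative assumptions, not Metron or Athene functionality.

```python
# Illustrative tiered chargeback calculation (assumed rates and usage figures).
# Rates are $/GB-month and should be revisited frequently as storage costs decline.
TIER_RATES = {"platinum": 0.90, "gold": 0.45, "silver": 0.20}  # assumptions

# Consumed capacity per business unit, in GB, broken down by tier (assumed data).
usage = {
    "online-sales": {"platinum": 4_000, "gold": 12_000},
    "data-warehouse": {"gold": 30_000, "silver": 80_000},
    "it-internal": {"silver": 25_000},   # don't forget IT and other insiders
}

def monthly_bill(consumption_by_tier):
    """Return (total_cost, per_tier_cost) for one customer."""
    per_tier = {tier: gb * TIER_RATES[tier] for tier, gb in consumption_by_tier.items()}
    return sum(per_tier.values()), per_tier

for customer, consumption in usage.items():
    total, detail = monthly_bill(consumption)
    print(f"{customer:15s} ${total:>10,.2f}  {detail}")
```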
12. Space Capacity – Management Support
Effective storage management happens with a bridge to business results, and building that bridge begins with a solid foundation. Show the business value so that it is self-evident.
13. Space Capacity – Business View
With management backing, important processes can be implemented
Business and IT
• Capacity budgeting and inventory management
• Mandatory storage request process
• Storage mapping to determine ownership
• Chargeback of some form
• Define executive reporting requirements
Once the bridge is built, reporting information can flow freely
14. Space Capacity – Who is Responsible
Managing storage capacity requires work.
Storage administrators typically have limited time and higher priorities in their complex environments.
15. Space Capacity – Over and Under Provisioning
Administrators may have no choice but to over-allocate, which results in low utilization.
It is important to define exactly what 'Utilization' is for your storage.
Many factors determine what 'Right Sized' means for each system.
But running out of space means only one thing to all.
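'Utilization' gives very different answers depending on whether it is measured against raw, usable, or allocated capacity. A small sketch with assumed capacities and an assumed RAID overhead factor:

```python
# Assumed figures for one array: different definitions of "utilization" give very different answers.
raw_tb = 200.0            # raw disk capacity installed
raid_overhead = 0.25      # e.g. parity plus spares (assumption)
usable_tb = raw_tb * (1 - raid_overhead)

allocated_tb = 120.0      # capacity presented to hosts (LUNs/volumes)
written_tb = 70.0         # capacity actually written by hosts

print(f"Allocated vs usable : {allocated_tb / usable_tb:6.1%}")   # array-side view
print(f"Written vs allocated: {written_tb / allocated_tb:6.1%}")  # host/file-system view
print(f"Written vs raw      : {written_tb / raw_tb:6.1%}")        # worst-looking view
```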
16. Space Capacity – Doing the Technical Work
After roles and responsibilities are assigned and business requirements are complete, technical solutions can be implemented to optimize storage space management, including databases.
Trending, forecasting, and exceptions.
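One way to keep the exception reporting manageable (see also the editor's note on 10% of 10 GB versus 10% of 1 TB) is to require both a percentage threshold and an absolute free-space floor. A sketch with assumed thresholds and sample file systems:

```python
# Flag file systems only when BOTH conditions hold, to filter out noise:
#   free space below PCT_FREE_MIN of capacity AND below ABS_FREE_MIN_GB absolute.
PCT_FREE_MIN = 0.10       # assumed policy thresholds
ABS_FREE_MIN_GB = 50.0

filesystems = [                      # (name, size_gb, used_gb) - sample data
    ("/var/log",   10,      9.2),
    ("/oradata01", 1024,  950.0),
    ("/backup",    4096, 3100.0),
]

for name, size_gb, used_gb in filesystems:
    free_gb = size_gb - used_gb
    if free_gb / size_gb < PCT_FREE_MIN and free_gb < ABS_FREE_MIN_GB:
        print(f"EXCEPTION: {name} has {free_gb:.1f} GB free ({free_gb / size_gb:.0%})")
```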
17. Space Capacity – Different Viewpoints
Business, Application, Host, Storage Array, Billing Tier
If billing for storage, ensure transparency with detailed reports
18. Space Capacity – Virtual Environments and Clusters
Managing storage in a clustered and/or virtual environment can be challenging because it is shared among all hosts and virtual machines running on it.
• Manage capacity at a high level
• Account for storage use at a low level, e.g. VM or DB
• If billing, be cautious of different tiers being allocated to the same cluster.
• Don't forget about overhead
Overcommit with thin provisioning
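A minimal sketch of tracking overcommitment on a thin-provisioned, shared datastore; the datastore size and per-VM figures are assumptions:

```python
# Assumed shared datastore and thin-provisioned virtual disks.
datastore_capacity_gb = 10_240.0

vms = [  # (vm_name, provisioned_gb, actually_written_gb) - sample data
    ("app-vm-01", 2_048, 610),
    ("db-vm-02",  4_096, 2_300),
    ("web-vm-03", 1_024, 180),
    ("dev-vm-04", 8_192, 1_450),
]

provisioned = sum(p for _, p, _ in vms)
written = sum(w for _, _, w in vms)

print(f"Overcommit ratio : {provisioned / datastore_capacity_gb:.2f}x")
print(f"Real usage       : {written / datastore_capacity_gb:.1%}")
print(f"Real headroom    : {datastore_capacity_gb - written:,.0f} GB")
```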
19. Space Capacity – Storage Virtualization
Pooling physical storage from multiple sources into logical groupings
• Simplifies Administration
• Can be a centralized source for collecting data
• If using as a data source beware of double counting with backend
• Don't forget about overhead for replication
Wide variety of techniques for virtualizing storage; be aware of the implications for data collection and reporting
20. Space Capacity – Best Practices
Find dark and hidden storage, where it has been allocated and never used, or plugged into a different box.
Use thin provisioning and de-duplication where possible.
Include data retention policies for storage space management.
Account for overhead from RAID, replication, file systems, etc…
Understand the value of data in deciding where to put it, how to protect it, and how long to keep it.
21. Space Capacity – Best Practices
Understand the limitations of linear regression when trending and forecasting data. Use statistics like R^2 to confirm (see the sketch below).
Be sure to account for all variables when 'Right Sizing'!
Include directory and file level reporting for file servers if possible.
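The trending arithmetic can be illustrated with ordinary least squares plus an R^2 check before trusting a days-to-full estimate. Athene provides its own trending; this sketch, with an assumed usage series and capacity, only shows the underlying calculation:

```python
# Fit used_gb = a + b*day by least squares, report R^2, and estimate days until full.
days = list(range(10))                                        # sample observation days
used_gb = [500, 512, 519, 533, 541, 552, 560, 574, 581, 595]  # assumed daily usage
capacity_gb = 800.0

n = len(days)
mean_x = sum(days) / n
mean_y = sum(used_gb) / n
sxx = sum((x - mean_x) ** 2 for x in days)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, used_gb))
b = sxy / sxx                      # growth rate, GB/day
a = mean_y - b * mean_x

ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(days, used_gb))
ss_tot = sum((y - mean_y) ** 2 for y in used_gb)
r_squared = 1 - ss_res / ss_tot    # near 1.0 => a linear trend is a reasonable model

days_to_full = (capacity_gb - a) / b - days[-1] if b > 0 else float("inf")
print(f"growth {b:.1f} GB/day, R^2 = {r_squared:.3f}, ~{days_to_full:.0f} days to full")
```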
22. Performance Capacity – Response Impacts
SAN or storage array performance problems can have serious impacts over a long duration, and be difficult to identify.
23. Performance Capacity – Metrics
Understand the limitations of certain metrics
• Measured response is the best metric for identifying trouble.
• Host utilization only shows busy time; it doesn't give capacity for the SAN.
• Physical IOPS is an important measure of throughput; all disks have their limitation.
• Queue Length is a good indicator that a limitation has been reached somewhere.
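These metrics tie together through Little's law: average queue length equals throughput times average response time. A sketch deriving IOPS, utilization, response time, and queue length from hypothetical per-interval device counters (the counter names and values are assumptions, not any particular operating system's instrumentation):

```python
# Hypothetical counters collected over one interval for a host disk device.
interval_s = 60.0
io_count = 48_000          # I/Os completed during the interval (assumed)
busy_time_s = 42.0         # time the device had at least one I/O outstanding
total_wait_s = 960.0       # sum of per-I/O response times (queue wait + service)

iops = io_count / interval_s                 # throughput
utilization = busy_time_s / interval_s       # "busy" does not mean "at capacity" for a SAN LUN
avg_response_ms = total_wait_s / io_count * 1000
avg_queue_len = total_wait_s / interval_s    # Little's law: L = X * R

print(f"IOPS={iops:.0f}  util={utilization:.0%}  "
      f"response={avg_response_ms:.1f} ms  queue={avg_queue_len:.1f}")
```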
24. Performance Capacity – Metric Thresholds
Many times, critical host disk metrics are not breached during impactful events.
Consider using Statistical Process Control (see the sketch below).
Are these potential problems having a real impact?
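A minimal Statistical Process Control sketch: compute control limits from a baseline of response-time samples and flag anything outside mean plus or minus three sigma. The samples are made up:

```python
import statistics

# Baseline of "normal" response-time samples (ms) and a new day's samples (assumed data).
baseline_ms = [4.1, 3.8, 4.4, 4.0, 4.6, 3.9, 4.2, 4.3, 4.1, 4.0]
today_ms = [4.2, 4.5, 9.8, 4.1, 11.2, 4.3]

mean = statistics.mean(baseline_ms)
sigma = statistics.stdev(baseline_ms)
ucl = mean + 3 * sigma            # upper control limit
lcl = max(mean - 3 * sigma, 0.0)  # lower control limit (response time can't be negative)

for i, sample in enumerate(today_ms):
    if sample > ucl or sample < lcl:
        print(f"sample {i}: {sample:.1f} ms is outside control limits ({lcl:.1f}-{ucl:.1f} ms)")
```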
25. Performance Capacity – Metric Thresholds (Host)
Other times certain metrics like utilization are indicating impactful events, but ample capacity is still available.
26. Performance Capacity – Metric Thresholds (Host)
Queue lengths from the previous utilization indicate that it may not currently be impacting response, but headroom is unknown.
27. Performance Capacity – Metric Thresholds (Host)
The high utilization can be seen generating large amounts of I/O in this chart.
28. Performance Capacity – Architecture (Array)
• Front End Processors
• Shared Cache
• Back End Processors
• Disk Storage
29. Performance Capacity – Metric Thresholds (Array)
Front end processors are typically the first to bottleneck
30. Performance Capacity – Metric Thresholds (Array)
Impact of utilization on response for a single processor
Curves based on simple queuing with normal distribution
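The shape of such a curve can be reproduced with the simplest single-server queuing approximation, response time = service time / (1 - utilization). The presentation's curves may use a different model, so treat this only as an illustration of why response degrades sharply near saturation:

```python
# Response time vs. utilization for a single front-end processor, simple M/M/1-style approximation:
#   response = service / (1 - utilization);  queue wait = response - service.
service_ms = 2.0   # assumed average service time per I/O

for util_pct in (10, 30, 50, 70, 80, 90, 95, 98):
    u = util_pct / 100.0
    response_ms = service_ms / (1.0 - u)
    wait_ms = response_ms - service_ms
    print(f"util {util_pct:2d}%  service {service_ms:.1f} ms  "
          f"wait {wait_ms:5.1f} ms  response {response_ms:5.1f} ms")
```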
31. Performance Capacity – Component Breakdown
Service time versus response time – different metrics
32. Performance Capacity – Workload Profiles
I/O profile has a big impact on performance. Be sure to include it when comparing applications.
Test with tools like Iometer, IOzone, Bonnie, etc…
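Characterizing an I/O profile means looking at the read/write mix, transfer sizes, and randomness, not just IOPS. A sketch that summarizes a tiny, made-up I/O trace; real data would come from a benchmark tool or OS instrumentation:

```python
from collections import Counter

# Hypothetical I/O trace: (operation, transfer_size_kb, offset_kb) - sample data only.
trace = [
    ("read", 8, 1_000), ("read", 8, 1_008), ("write", 64, 50_000),
    ("read", 8, 73_200), ("write", 64, 50_064), ("read", 128, 9_000),
]

ops = Counter(op for op, _, _ in trace)
read_pct = ops["read"] / len(trace)
avg_kb = sum(size for _, size, _ in trace) / len(trace)

# Call an I/O "sequential" if it starts where the previous request ended.
sequential = sum(
    1 for (_, prev_size, prev_off), (_, _, off) in zip(trace, trace[1:])
    if off == prev_off + prev_size
)
print(f"{read_pct:.0%} reads, avg transfer {avg_kb:.0f} KB, "
      f"{sequential}/{len(trace) - 1} sequential transitions")
```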
35. Performance Capacity – Best Practices
• Choose service levels and establish baselines.
• Use available data sources, vendor utilities, etc…
• Consolidate reporting tools and data. (Athene)
36. Storage Capacity – Final Thoughts
• Talk with storage team about current state of reporting and fill in the gaps.
• Fabric and network utilization might be in scope.
• Set priorities for where to spend time and effort.
• Simplify where possible.
• Work to establish formal naming conventions where needed.
• Tools without knowledge, experience, and commitment won't help.
37. Storage Capacity – Thank you for attending
Capacity Management
for SAN Attached Storage
Dale Feiste
Metron-Athene Inc.
dale@metron-athene.com
Editor's Notes
- A good first step to implementing effective capacity management for SAN attached storage is to ensure that you are managing the non-SAN specific aspects of storage first. A second important step is recognizing what limitations and gaps exist from the host perspective.
Keep in mind the level at which disk space runs out (e.g. file systems, drives, volumes, etc…). Typically this is where monitoring is configured, but it can be proactive. Also remember that multiple I/O requests can be in flight at the same time, just like other networking protocols, controlled by queue depth settings.
- Aggregate data to the appropriate level for reporting to a given audience.
Highlight storage for IT, unknown, and other unbillable storage. If customers have a blank check, they will consume a lot more storage. Having many tools that all consume data can add up; Athene consolidates your data for capacity management. Make sure all allocated storage has an owner.
All storage is not created equal. There are opposing forces of growth and decreasing cost of storage; if costs stop decreasing, like CPU speeds stopped increasing, look out. Physical limits can be reached for storage density. The primary focus of billing is giving accountability first, rather than ensuring exact financial accounting of real costs. Yeah, it may not be all real, but it's better than an open checkbook.
Ideally you could do a business study, then create a business plan based on those results (i.e. cost/benefit analysis). You need a compelling story to generate interest.
- How much storage can administrators manage? It depends on many factors.
Are we talking utilization on the host or SAN side? Does it include overheads for file systems, RAID, DR, etc…? Right size for backups, growth, variability, etc… Start with the most important low-hanging fruit.
Proactive management with automated trending. Be aware that fighting fires is more glamorous and visible. It's easy to get buried with data; filter out the noise with exceptions and filters (10% of 10 GB vs. 10% of 1 TB). All trend lines are not created equal.
Storage vMotion in vSphere 5 will load balance based on datastore performance. Thin provisioning may not be appropriate in situations where delays for expanding storage are not acceptable.
Compare the advantages of using virtual storage to distribute over more spindles versus specific placement, administration, and performance. Mention types of in-band versus out-of-band virtualization. Host, SAN, and array components are required.
- How do you find dark and hidden storage? Compare allocated versus what shows up on hosts and asset management.
- Also, proportion of samples over a threshold and variability.
It can also be the reverse, where the host looks okay but there is an impact. Measured I/O response is the best way to determine what the OS is experiencing. Also, significant changes from normal can indicate problems.
If the line waiting for service increases, either your throughput or service time has increased. Queues don't typically increase in a linear fashion; things can fall apart quickly when this spikes up. Can be good for monitoring and diagnosis but not planning.
- Individual disks may go to completely different areas of backend storage. An impact in one area can be traced back through to the root problem.