With the football season in full swing, the baseball season heading into the playoffs, and the hockey season just starting, it is time to raid the refrigerator for snacks, head for the most comfortable chair in the family room, and settle in for a full day of viewing sports. Unfortunately, it is not always easy to turn on the myriad devices required to watch a game broadcast over cable, on that wide-screen hi-def TV, with the wrap-around sound from the latest audio system available. There is a remote for the cable system; there is a remote for the TV; there is one for the satellite dish; there is another for the sound system. There are so many remote controls on the coffee table that there is hardly room for the snacks! What you need is a universal remote: a single, simplified command center that can control all of the hi-tech equipment in the family room. Unfortunately, even that universal remote will not do the job for any device released after the remote was manufactured. What is required is a universal remote with a learning capability to take the complexity out of turning on the TV, one that can reprogram itself from the remote that comes with every new device.
Striving for excellence is a human trait shared by many, as we all try to be the best that we can in at least one area under our control. Achieving excellence is a little harder to accomplish; it requires an amount of hard work and dedication that only a select few are willing to deliver. Improving on excellence, on the other hand, requires that rare individual who sets his sights on being the best in the world at whatever he attempts and continues to work harder than everyone else, even after he has arrived at the pinnacle of his quest. Olympic athletes Michael Phelps and Usain Bolt have each set world records (in swimming and track, respectively), yet each continues to train even harder to break his own records and reap the rewards of these continuing efforts.
This same quality of continuing to improve on success is an essential requirement for every enterprise data center looking to improve upon the performance of its IT infrastructure, ensure the security and reliability of its environment, and continue to lower the total cost of ownership (TCO) of that infrastructure in the face of increasing demands. The deployment of new applications on new servers and the continuing explosion of data, which tends to double every 12 to 18 months, are putting a strain on the budgets of every enterprise data center around the globe. Programs are being implemented to consolidate and virtualize both servers and storage to reduce the TCO and preserve valuable resources, both human and natural. By reducing the number of physical servers populating the data center, the CIO can reduce the number of systems administrators required to drive the IT infrastructure, as well as the amount of energy necessary to power the data center and the amount of floor space required to house it. These last two points are especially critical as enterprise data centers approach maximum capacity in both of these categories. In fact, if either is exceeded, the enterprise may be forced to build out a brand new data center at a cost of millions of dollars.
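The budget pressure created by that growth rate is easy to quantify. A minimal sketch, assuming the 12-to-18-month doubling periods stated above and a hypothetical 100 TB starting point:

```python
# Project storage capacity needs when data doubles every 12-18 months.
# The 100 TB starting capacity is a hypothetical example.

def projected_capacity(start_tb, years, doubling_months):
    """Capacity after `years` if data doubles every `doubling_months` months."""
    doublings = (years * 12) / doubling_months
    return start_tb * 2 ** doublings

for months in (12, 18):
    after_3y = projected_capacity(100, 3, months)
    print(f"doubling every {months} months: 100 TB -> {after_3y:.0f} TB in 3 years")
```

Even at the slower 18-month pace, capacity needs quadruple in three years, which is why consolidation and virtualization programs are so urgent.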
This document discusses storage class memory (SCM), a new type of non-volatile storage medium that aims to bridge the gap between memory and storage. SCM has the capacity and cost of hard disk drives/solid state drives while providing performance similar to RAM. Key candidate technologies for SCM include improved flash memory, MRAM, FeRAM, RRAM, and PCM. While SCM promises high speed and low latency access, concerns remain around existing interface limitations and ensuring file system compatibility.
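The gap SCM targets is easiest to see in access latencies. The figures below are typical order-of-magnitude values chosen for illustration, not numbers taken from the document:

```python
# Order-of-magnitude access latencies illustrating the memory/storage gap
# that SCM aims to bridge. Figures are typical ballpark values, not
# measurements from the document.
latency_ns = {
    "DRAM":       100,          # ~100 ns
    "SCM (goal)": 1_000,        # ~1 microsecond class
    "NAND flash": 100_000,      # ~100 microseconds
    "HDD":        10_000_000,   # ~10 milliseconds
}

for tier, ns in latency_ns.items():
    print(f"{tier:12s} ~{ns:>12,} ns ({ns / latency_ns['DRAM']:.0f}x DRAM)")
```

The five-orders-of-magnitude spread between DRAM and disk is the gap that makes a new tier between them attractive.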
The document is a chapter from a textbook about evaluating computer hardware systems. It discusses how to assess key components like the CPU, memory, storage, video, audio, and overall reliability. It provides guidance on determining whether an existing system should be upgraded or replaced by looking at factors such as processing power, storage needs, and application requirements. The chapter also includes review questions to test the reader's understanding.
This document provides an in-depth look at solid state drive (SSD) performance in the IBM DS8000 storage system. It discusses SSD performance best practices, such as placing hot data on SSDs for applications requiring low response times. It also covers selecting which data is best suited for SSDs versus HDDs on the DS8000, which now supports SSDs as a high performance tier along with 15K RPM and 7K RPM HDDs. Tools for analyzing I/O patterns on AIX and System z servers are also described to help identify hot data candidates for migration to SSDs.
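The I/O analysis tools mentioned reduce, in essence, to ranking extents by activity and promoting the hottest ones that fit into the SSD tier. A hypothetical sketch of that idea (extent names, I/O counts, and capacities are invented):

```python
# Rank storage extents by I/O activity and flag the hottest ones as
# SSD-migration candidates. Extent names and I/O counts are invented.

def hot_extents(io_counts, ssd_capacity_gb, extent_size_gb=1):
    """Return the busiest extents that fit into the available SSD tier."""
    ranked = sorted(io_counts.items(), key=lambda kv: kv[1], reverse=True)
    budget = int(ssd_capacity_gb / extent_size_gb)
    return [name for name, _ in ranked[:budget]]

counts = {"ext_a": 91000, "ext_b": 1200, "ext_c": 455000, "ext_d": 78}
print(hot_extents(counts, ssd_capacity_gb=2))  # the two hottest extents
```

Real tiering tools weigh read/write mix and access recency as well, but the core idea is this ranking step.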
The document discusses Avamar, backup and recovery software from EMC. It provides client-side data deduplication to reduce backup sizes and speed up backups. Avamar protects virtual and physical environments with scalable management. It describes how Avamar improves backup performance for environments such as virtual machines, applications, remote offices, and desktops. Avamar provides reliable protection with features like daily integrity checks and fault tolerance.
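Client-side deduplication of the kind described can be sketched as hashing chunks locally and sending only chunks the server has not seen. This illustrates the general technique, not Avamar's actual protocol:

```python
import hashlib

# Sketch of client-side deduplication: hash each chunk locally and
# transmit only chunks the server index has not seen before. This
# illustrates the general technique, not Avamar's actual implementation.

def dedup_backup(chunks, server_index):
    """Return only the chunks that still need to be sent to the server."""
    to_send = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_index:
            server_index.add(digest)   # server now knows this chunk
            to_send.append(chunk)
    return to_send

index = set()
first = dedup_backup([b"block1", b"block2", b"block1"], index)
second = dedup_backup([b"block1", b"block3"], index)
print(len(first), len(second))  # repeated blocks are never re-sent
```

Because hashing happens on the client, duplicate data never crosses the network, which is where the backup-window savings come from.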
IBM's 5th generation eX5 server architecture delivers innovations that address key customer pain points around memory limitations and licensing costs. The eX5 introduces MAX5 memory expansion technology that allows servers to double their memory capacity without performance impacts. This enables customers to consolidate more workloads on fewer servers to reduce licensing fees. The eX5 also uses SSD technology through the FlashPack and NEXT IO offerings to provide thousands of IOPS of performance previously requiring hundreds of spinning disks.
Tony Pearson presented on the future of storage. Key points included:
1) Advances in technologies like flash storage, storage virtualization, data deduplication, and cloud storage are changing how organizations manage their storage assets and treat storage as a pooled resource.
2) The roles of different storage types are shifting, with solid state drives becoming more optimal for primary data and disk and tape more suited to backup data and long term retention.
3) Improvements in bandwidth are driving convergence of networks and protocols like FCoE, iSCSI, and CIFS.
4) Cloud computing is driving standardization, automation, and management approaches that also impact internal IT departments.
Flash Ahead: IBM Flash System Selling Point (CTI Group)
IBM's FlashSystem storage is designed to radically accelerate critical applications by providing consistent low latency flash performance. It can integrate with existing disk arrays to offload I/O-intensive workloads while improving overall performance. FlashSystem utilizes IBM's flash technology and software to deliver microsecond response times for applications such as databases, virtual infrastructures, and cloud computing. The FlashSystem family includes the all-flash 710, 720, 810, and 820 models that are optimized for performance, capacity, and mixed workloads.
The document discusses Oracle's In-Memory Option for Exadata databases. It provides an overview of the key features of Exadata, how the In-Memory Option works, and how it is configured. The In-Memory Option stores data in an in-memory columnar format for improved performance. It can be enabled for tables and tablespaces and configured for compression and population priority levels. Tests showed the In-Memory Option can provide significant performance benefits when used with Exadata's smart scan and storage capabilities.
IBM Tivoli Storage Productivity Center overview and update (Tony Pearson)
The document provides an overview and update on IBM Tivoli Storage Productivity Center (TPC) version 4.2.2. TPC is IBM's premier storage infrastructure management tool that provides a centralized view and management capabilities for storage infrastructure, including disk arrays, tape libraries, and SAN fabrics from IBM and other vendors. The update highlights new features in TPC 4.2.2 such as enhanced replication management, disk performance monitoring, and file and database reporting.
EMC's VNX unified storage system:
1) Is optimized for flash, with a powerful, flexible modular architecture designed for high performance.
2) Features efficient packaging with dense disk options and built-in energy efficiency.
3) Provides a mix of ultra-performance, performance, and capacity drives for optimal economics.
During its beta test of TPC 4.2, an insurer reported improved productivity and time-to-value. Enhanced storage resource agents reduced scan run times. New APIs and enhanced topology maps provided an end-to-end view of the environment for better decision making. Real-time monitoring of replication models and role-based access eliminated previously time-consuming manual processes...
Traditional tape backup is pulling apart at the seams. Tape requires manual intervention to manage, which leaves backup data wide open to human error. Transporting tape also exposes data to the risk of loss, and off-site storage facilities are costly. Testing disaster recovery (DR) plans in this manual environment is an exercise in futility, and even when the process works, restoring data from tape is painfully slow.
The document summarizes a test of the EMC XtremIO storage array's ability to scale mixed database workloads in a VMware vSphere environment. Key findings:
- The array supported 3 production databases (Oracle, SQL Server, DB2) using only 664GB of physical storage while delivering over 116,000 IOPS with sub-millisecond latency.
- As additional copies of the databases were added, physical storage usage increased only slightly, while addressable capacity grew significantly due to data reduction technologies. This resulted in over 5,600GB of storage savings with 9 VMs.
- IOPS demands increased linearly as VMs were added, and latency remained under 1ms, demonstrating the array's ability to scale mixed database workloads predictably.
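The figures above also let us estimate the effective data-reduction ratio. Treating addressable capacity as physical usage plus the reported savings is an assumption for this back-of-the-envelope calculation:

```python
# Estimate the effective data-reduction ratio from the reported figures:
# ~664 GB of physical storage and ~5,600 GB of savings with 9 VMs.
# Treating addressable capacity as physical + savings is an assumption.

physical_gb = 664
savings_gb = 5600
addressable_gb = physical_gb + savings_gb

ratio = addressable_gb / physical_gb
print(f"~{ratio:.1f}:1 effective data reduction")
```

That works out to roughly a 9:1 reduction, consistent with storing many near-identical database copies on a deduplicating array.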
Introduction to the EMC XtremIO All-Flash Array (EMC)
This white paper introduces the EMC XtremIO storage array and provides detailed descriptions of the system architecture, theory of operation, and its features.
The document discusses using Oracle Recovery Manager (RMAN) to back up Oracle databases to a Sun ZFS Storage Appliance for efficient, high performance backups that can meet recovery time and point objectives. It outlines challenges with traditional backups and how the ZFS Storage Appliance addresses these through features like compression, integrity checking, and replication for disaster recovery. Best practices are proposed around architectures that leverage disk and tape for backups with different retention requirements.
Semper Continuity Suite is a business continuity software solution that allows for the rapid recovery of PC and server software. It uses technologies like preboot execution, single instance storage, and local image caching to restore systems to a working state within minutes without deleting user data. The software provides benefits like improving software deployment times by up to 80% and maintaining system availability in excess of 99.9%. It enables automatic repair of systems as well as disaster recovery and multiple restore points for protection.
This white paper provides an overview of EMC VFCache. It describes the implementation details of the product and covers performance, usage considerations, and the major customer benefits of using VFCache.
This document describes Symmetric Computing's Distributed Symmetric Multiprocessing (DSMP) technology, which transforms an InfiniBand-connected cluster of commodity servers into a distributed shared memory supercomputer. DSMP addresses limitations of Message Passing Interface (MPI) by enabling a global address space across cluster nodes. It features a transactional distributed shared memory system, optimized InfiniBand drivers, and an application-driven memory page coherency scheme. DSMP aims to make shared memory supercomputing affordable and accessible for researchers through leveraging commodity hardware.
The document describes the IBM System x3850 X5 and x3950 X5 servers. The x3850 X5 offers flexible configurations to meet changing workload demands and can be expanded from 2 to 4 processors and from 32GB to 2TB of memory. The x3950 X5 comes in preconfigured systems optimized for specific workloads like databases, virtualization, and SAP HANA. Both systems provide high performance, reliability, and energy efficiency, enabling server consolidation and cost reduction.
This document discusses leveraging solid state drives (SSDs) in tiered storage environments to improve application performance and reduce costs compared to using only hard disk drives (HDDs). It describes how SSDs can deliver substantially better I/O performance than HDDs. Experiments showed significant performance improvements and substantial reduction in the number of drives needed when using SSDs, resulting in reduced costs from smaller footprint, lower energy usage, and less hardware to maintain. The document provides guidance on implementing tiered storage with SSDs and HDDs to optimize performance and costs.
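The footprint reduction described can be illustrated by comparing how many drives an IOPS target requires on each tier. The per-drive IOPS figures below are typical ballpark assumptions, not measurements from the document:

```python
import math

# Compare how many drives are needed to hit an IOPS target with HDDs
# versus SSDs. The per-drive IOPS figures are typical ballpark
# assumptions, not measurements from the document.

def drives_needed(target_iops, iops_per_drive):
    return math.ceil(target_iops / iops_per_drive)

target = 20_000
hdd_count = drives_needed(target, 180)     # ~180 IOPS for a 15K RPM HDD
ssd_count = drives_needed(target, 20_000)  # ~20,000 IOPS for one SSD
print(f"{target} IOPS: {hdd_count} HDDs vs {ssd_count} SSD(s)")
```

When a workload is IOPS-bound rather than capacity-bound, this drive-count gap is what drives the footprint, energy, and maintenance savings.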
This document lists various academic, research, technology, and outdoor recreational assets located along or near the Route 81 corridor in Virginia. It includes universities like Radford, Virginia Tech, and Roanoke College as well as research centers and technology parks. It also mentions the Smart Road test facility located at mile 121 on Route 81.
This property at 81 TESLIN ROAD sold for $317,000. It has 4 bedrooms and 3 bathrooms including an en-suite in the master bedroom. The home is located in a family-friendly neighborhood in Riverdale and has an open concept living and dining room with many large windows. Kitchen appliances include a compactor, dishwasher, fridge, stove, and washer/dryer. The lower level has potential for an in-law suite and a newly serviced hot tub. The large 72x100 foot corner lot offers ample parking and outdoor space.
Backing up the server parameter file (SPFILE) is recommended for recovery purposes. This can be done using the CREATE PFILE statement to create a text PFILE backup from the SPFILE. RMAN can also be used to create backups of the SPFILE. If the SPFILE becomes unavailable, the database instance can be started using a client-side initialization parameter file and a new SPFILE created with CREATE SPFILE. Ensuring the parameter file contains the appropriate RAC instance-specific settings is important.
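The statements involved can be illustrated as follows. The CREATE PFILE ... FROM SPFILE and CREATE SPFILE ... FROM PFILE forms are standard Oracle syntax; the file paths are hypothetical examples:

```sql
-- Export the binary SPFILE to a text PFILE as a backup
-- (the path is a hypothetical example).
CREATE PFILE = '/backup/initORCL.ora' FROM SPFILE;

-- After a loss, recreate the SPFILE from the backed-up PFILE.
CREATE SPFILE FROM PFILE = '/backup/initORCL.ora';
```

RMAN can also capture the SPFILE as part of a backup (for example, `BACKUP SPFILE;` at the RMAN prompt), which keeps it recoverable alongside the database files.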
IBM has announced a new mid-range disk storage system called the Storwize V7000 to address the needs of mid-sized datacenters. The Storwize V7000 provides enterprise-level functionality including support for high-performance SSDs, SAS drives, and high-capacity SATA drives to satisfy the requirements of mission-critical and business-critical applications on a single, scalable platform. It also includes features like thin provisioning, replication, and virtualization to improve utilization and reduce costs. The Storwize V7000 aims to provide mid-sized datacenters with an affordable solution that has the simplified management and multi-tiered capabilities they require.
With the TS7610 ProtecTIER Appliance Express, IBM has brought enterprise-quality data deduplication to the mid-market. With better reliability and a faster recovery time than tape, the TS7610 provides affordable data backup and recovery for both the mid-market and remote offices. It is significantly better than non-deduplicated disk, storing more data on less disk while consuming less energy, cooling, and floor space. With its simplified GUI, the TS7610 improves backup management without requiring radical changes. Learn More: http://ibm.co/PNJ7kr
Learn how upcoming changes in the persistent memory market will affect deployments of in-memory computing and traditional applications. Using software innovations from SanDisk and the broad portfolio of flash storage hardware options, customers and developers can optimize applications for “flash extended memory”, the intersection of in-memory computing and persistent memory technologies.
IBM is the first major storage vendor to deliver eMLC Flash Storage Systems and has been incorporating flash into its servers and storage products for many years. This presentation explains the benefits of using IBM FlashSystem with I/O-intensive workloads where lower latency can make the difference; use cases include online transaction processing (OLTP), business intelligence (BI), online analytical processing (OLAP), virtual desktop infrastructure (VDI), high-performance computing (HPC), and content delivery solutions (such as cloud storage and video on demand).
Removing Storage Related Barriers to Server and Desktop Virtualization (DataCore Software)
An IDC Viewpoint Paper: Virtualization is among the technologies that have become increasingly attractive in the current economic climate. Organizations are implementing virtualization solutions to obtain the following benefits: a focus on efficiency and cost reduction, simplified management and maintenance, and improved availability and disaster recovery.
This document discusses using IBM TotalStorage Productivity Center for Disk to monitor performance of an IBM SAN Volume Controller (SVC). It describes a test environment consisting of SVC nodes, Windows and Linux hosts, Brocade switches, a DS4300 storage array, and IBM TotalStorage Productivity Center. The document outlines a scenario where workloads are run on the hosts to stress the backend storage, and IBM TotalStorage Productivity Center for Disk is used to measure performance and identify bottlenecks. When a bottleneck is detected, virtual disks are migrated to resolve the issue.
Application acceleration from the data storage perspective (Interop)
The document discusses new advances in caching and solid state storage for accelerating application performance. It describes how solid state drives (SSDs) offer significantly higher input/output performance than spinning hard disks. SSDs can be used to cache frequently accessed data and improve performance for databases, file systems, virtualized applications, and other workloads limited by random disk access. The document provides examples of inserting SSDs at different points in storage systems, such as directly on application servers or in storage area networks, to optimize performance.
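The caching pattern described, keeping the hot working set on flash in front of slower disks, can be sketched with a small LRU cache. This is an illustration of the pattern, not any vendor's implementation:

```python
from collections import OrderedDict

# Sketch of an SSD read cache in front of a slower disk tier: recently
# used blocks stay on flash, and a miss falls through to the disk.
# An illustration of the pattern, not any vendor's implementation.

class FlashCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id, disk_read):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark as recently used
            self.hits += 1
            return self.blocks[block_id]
        self.misses += 1
        data = disk_read(block_id)             # slow path: go to disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data

cache = FlashCache(capacity=2)
for bid in ["a", "b", "a", "c", "a"]:
    cache.read(bid, lambda b: f"data-{b}")
print(cache.hits, cache.misses)
```

Workloads dominated by random reads to a small hot set, such as databases and VDI boot storms, see the largest gains because most reads are served from the flash layer.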
1. The document discusses various software-defined storage solutions from vendors like IBM, DataCore, and Nimble that can maximize availability, increase performance, and reduce costs for organizations.
2. It provides an overview of different storage platforms like IBM Storwize, IBM Spectrum Virtualize, DataCore VDSA appliances, and Nimble hybrid storage arrays that offer features like virtualization, high availability, flexibility, efficiency, and automation.
3. Recommendations are provided on which solutions are best suited for different use cases and storage requirements.
Fulcrum Group Storage And Storage Virtualization PresentationSteve Meek
The document discusses storage solutions and SANs. Exponential data growth is expected to continue challenging data protection efforts. Different storage types fit different business needs. By understanding storage design and an organization's needs, storage virtualization may be a good fit. SANs can help with general server needs, virtualization, and disaster recovery/backup needs. Planning is key to deploying storage in a centralized way.
This document summarizes strategies for managing storage performance, including using disk arrays, caching, and solid state drives (SSDs). It discusses how storage performance is an important metric for evaluating IT efficiency and is leveraged by vendors. While SSDs provide much faster performance than disks, they have endurance limitations like wear from repeated writes. Storage virtualization has the potential to optimize performance without significant added costs.
This is a paper was written by David Reine, an IT analyst for The Clipper Group, and highlights IBM’s SAN Volume Controller new features, capabilities and benefits. These new capabilities were announced on October 20, 2009Virtualization is at the center of all 21st Century IT systems, yet many CIOs fail to fully understand all of the benefits it can deliver to the data center operation. When we think of virtualization, we think compute, network, and storage—and we mostly think about driving up utilization on each. Storage controllers have always offered the ability to carve out pieces of real storage from a large pool and deliver them efficiently to a number of hosts, but it is storage virtualization itself that offers improvements that drive operational efficiency. IBM has been quietly addressing storage virtualization with SAN Volume Controller (SVC) for the last six years, building up a significant technical lead in this space.
Strange but true: most infrastructure architectures are deliberately designed from the outset to need little or no change over their lifetimes. There are two main reasons for this:
1. Change often means outages and customer impact and must be avoided
2. Budgets are set at the beginning of a project and getting more cash later is tough
Typically, then, applications are configured with all of the storage capacity they need to support the wildest dreams of their business sponsors (and then some extra is added for contingency by IT). Equally, storage is always configured with the performance level (storage tier) set to cope with the wildest transactional dreams of the business sponsor (and guess what? IT generally adds a bit more for good measure.).
No wonder storage is now one of the largest cost components involved in delivering and running a business application.
The Lenovo Storage S3200 array delivers best-in-class price/performance with a modular architecture facilitating simple performance upgrades and maintenance.
SANsymphony-V software, running between the hosts and the arrays, further accelerates applications by leveraging powerful processors and large memories of commodity x86-64 servers for read and write caching. Its auto-tiering software optimally utilizes the array’s SSDs to speed up active workloads, while migrating less-frequently accessed data to lower cost, higher capacity SAS disks. DataCore also converts host random write patterns known to suffer high disk latencies, into sequential IOs far more favorable for disks.
The document discusses EMC's new VNX family of midrange storage systems. It highlights how the VNX addresses the challenges of Moore's Law and exponential data growth through its flash-optimized hybrid architecture and ability to automatically tier data across flash, disk, and nearline disk drives based on activity levels. The document also outlines the key features and performance advantages of the new VNX platforms compared to previous and competing solutions.
This short paper discusses the work happening in the Fibre Channel Industry Association's T-11 committee to develop a new low latency protocol for a flash drive world. This paper is an excellent introduction to it.
This document discusses storage considerations for VMware View environments. It begins with an introduction to storage systems and their history, then discusses planning storage needs for VMware View. Some key challenges with storage in virtual desktop environments are large amounts of centralized user data and "storms" of access that can impact performance. The document recommends addressing these through good sizing and performance assessment, optimizing desktop images, leveraging technologies appropriately, and using resources on optimizing for View.
Dell whitepaper busting solid state storage mythsNatalie Cerullo
This document provides an overview of solid-state storage technology, including its uses, applications, and innovations. Some key points:
- Solid-state storage is becoming more widely adopted as NAND flash chip density and capacity increases while prices drop significantly. Over the next few years, most actively accessed data is expected to move to solid-state storage.
- Solid-state storage comes in different form factors like solid-state drives (SSDs), solid-state cards (SSCs), and can be deployed in servers, shared storage, and dedicated arrays. It provides much higher I/O performance than hard disk drives.
- While early adoption focused on read caches, new solutions now provide write caching and data protection.
The document discusses testing done by IBM to evaluate the performance improvements provided by the IBM MAX5 memory expansion technology. The testing showed that by adding 512GB of memory via a MAX5 unit, increasing total memory to 1TB, the following benefits were achieved:
- Response time for business intelligence reports was 1.5-2.8 times faster.
- The cost of producing business intelligence reports could be decreased by 31%-64% over 3 years.
- The throughput of web-facing applications was 2.4-4.9 times greater.
- Read/write response time was decreased by 60%-80%.
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01Lenovo Data Center
This document evaluates the Lenovo S3200 storage array's ability to support multiple workloads simultaneously. Testing showed that while an all-HDD configuration met performance requirements, one application suffered high latency. Enabling SSD caching or tiering significantly improved performance for that application specifically, reducing latency by 70% and increasing bandwidth by up to 7x, without impacting other applications. The Lenovo S3200 is suitable for consolidating diverse workloads due to its flexibility to configure HDDs with SSDs for optimized performance tailored to each use case.
IBM Upgrades SVC with Solid State Drives — Achieves Better Storage Utilization

THE CLIPPER GROUP Navigator™
Navigating Information Technology Horizons. Published Since 1993.
Report #TCG2009046LI, October 19, 2009
Analyst: David Reine

Management Summary
Similar problems confront the data center storage administrator. How can he manage a dozen different disk arrays and virtual tape libraries (VTLs) from any number of vendors while improving storage utilization? How can he manage multiple SANs and multiple NAS arrays without having to learn a dozen different command sets? How can the administrator deploy multiple tiers of storage across these arrays efficiently and economically, to ensure that the storage is being properly utilized? How can the administrator ensure that the HPC application is assigned to the highest-performing (and most expensive) solid-state disk (SSD), while mission-critical applications write to the fastest Fibre Channel (FC) devices and the backup application to the highest-capacity (and lowest-cost) SATA disks? When confronted with similar issues, the server administrator resorted to the consolidation and virtualization of under-utilized platforms. The solution is the same for the storage administrator.
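The tiered-placement decision described above can be sketched in a few lines of code. This is an illustrative sketch only; the workload attributes, thresholds, and tier names are assumptions for the example, not an interface of any actual storage product.

```python
# Illustrative sketch of a simple storage-tiering policy: place each
# workload on the cheapest tier that still meets its requirements.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    iops_demand: int        # sustained I/O operations per second required
    mission_critical: bool  # needs the most reliable devices
    capacity_tb: float      # raw capacity footprint

def assign_tier(w: Workload) -> str:
    """Return the lowest-cost tier that satisfies the workload's needs."""
    if w.iops_demand > 50_000:      # HPC-style random I/O: only SSD keeps up
        return "SSD"
    if w.mission_critical:          # fastest, most reliable rotating disk
        return "FC"
    return "SATA"                   # highest capacity, lowest cost per TB

workloads = [
    Workload("hpc-analytics", 80_000, False, 2.0),
    Workload("oltp-orders", 12_000, True, 5.0),
    Workload("nightly-backup", 500, False, 40.0),
]
for w in workloads:
    print(f"{w.name}: {assign_tier(w)}")
```

The point of the sketch is that the policy is trivial once a single control point exists; without virtualization, the administrator must re-implement it by hand against a dozen vendor-specific command sets.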
Virtualization of the storage architecture can ensure that the data center gets the maximum value out of its storage resources, maintaining the lowest possible total cost of ownership (TCO) of the enterprise storage infrastructure. By deploying a storage virtualization appliance, the IT staff can achieve the highest possible utilization of its storage capacity, while simplifying management control. It can manage FC and iSCSI from a single GUI. It can deploy SSDs for those applications that demand the highest possible IOPS, while assigning the most reliable HDDs to those mission-critical applications that require them, and the least expensive devices to archive applications. The question that remains: which virtualization engine to deploy? To date, the most successful storage virtualization system has been IBM’s SAN Volume Controller (SVC). Is it the right engine for your enterprise? To find out, please read on.

IN THIS ISSUE
• Storage Utilization in the Data Center
• IBM’s SAN Volume Controller 5
• Conclusion
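The core idea behind a storage virtualization appliance can be sketched briefly: hosts see only virtual volumes, while the appliance decides which physical array actually holds the data. The class, pool, and array names below are illustrative assumptions, not SVC's actual interface.

```python
# Hypothetical sketch of a storage virtualization layer: one management
# point maps host-visible virtual volumes onto heterogeneous backend
# arrays, hiding the vendor-specific details. Not actual SVC code.

class VirtualizationLayer:
    def __init__(self):
        self.pools = {}    # tier name -> list of backend array names
        self.volumes = {}  # virtual volume name -> (tier, backend array)

    def add_backend(self, tier: str, array: str) -> None:
        """Register a physical array (from any vendor) under a tier."""
        self.pools.setdefault(tier, []).append(array)

    def create_volume(self, name: str, tier: str) -> str:
        """Carve a virtual volume from a tier, round-robin across arrays.

        The host sees only `name`; the physical placement can later be
        migrated between arrays without the host noticing.
        """
        arrays = self.pools[tier]
        backend = arrays[len(self.volumes) % len(arrays)]
        self.volumes[name] = (tier, backend)
        return backend

layer = VirtualizationLayer()
layer.add_backend("SSD", "vendor-a-flash")
layer.add_backend("SATA", "vendor-b-nearline")
layer.create_volume("db-vol", "SSD")
print(layer.volumes["db-vol"])
```

Because the mapping lives in one place, capacity from a dozen different arrays is pooled and utilized as one resource, which is exactly the utilization gain the appliance promises.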
The Clipper Group, Inc. - Technology Acquisition Consultants - Strategic Advisors
888 Worcester Street, Suite 140, Wellesley, Massachusetts 02482 U.S.A. • Tel: 781-235-0085 • Fax: 781-235-5454
Visit Clipper at www.clipper.com • Send comments to editor@clipper.com